
Some time ago, while browsing Twitter, I saw several influential developers mention Apollo and predict that it would take off in 2018. An opportunity to use GraphQL came up, and after reading through the Apollo documentation I decided to build the data layer of my new front-end project entirely with Apollo instead of Redux. Now, a month later, I have to come out and sing its praises.

GraphQL

It’s 2018, and GraphQL is no longer a new term. After a brief wave of discussion back in 2015, it seemed to fade from the conversation. GraphQL itself has kept maturing over the years, however, and GitHub has implemented its new API entirely in GraphQL. I won’t go into GraphQL itself here; in short, it makes exchanging data between the front end and the back end much easier.

Redux

When it comes to front-end data management, the first thing that comes to mind is Redux. Many of you have probably gone through the stages of getting to know Redux, from unfamiliar to familiar. It looks something like this:

  • Getting started: a Flux architecture designed by Facebook. It’s awesome, everyone is using it, so I’ll use it too
  • Six months in: data management is much clearer, and I no longer have to shuffle setState calls around my components
  • One year in: I’m a CRUD engineer writing cookie-cutter lists, and Redux means a hell of a lot of boilerplate for form pages
  • A year and a half in: Redux-Action, Redux-Promise, DVA, Mirror… I’ve customized the middleware and plugins that best fit the team’s business scenarios, and the code is clean again
  • Two years in: I’ve been through every round of tinkering there is. I’m a little tired, but I can’t walk away.

Why tired? Because Flux’s one-way data flow is no longer new to you. Most of the time, what the store holds is data requested from the back end, and for that kind of data, how to dispatch and reduce is not the crux of the matter; how to design the store is what really deserves thought.

When Redux meets a business need

Let’s go straight to a real world scenario:

This is a very common list of comments. Once we get the requirement, we start writing our component. Under the Redux paradigm, we inevitably follow this logic:

  1. In Comments’ componentDidMount, dispatch an action that fetches the data, and send the request inside that fetch action. To show a loading state, we will most likely dispatch another action to notify Redux that we have initiated a request (sketched below).
  2. If the request succeeds, we dispatch an action indicating that the data was fetched successfully, and then process and store the data in the reducer.
  3. In Comments, we receive the data from props and start rendering.
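
To make the amount of ceremony concrete, here is a minimal sketch of that flow, assuming redux-thunk; the action types, endpoint, and store shape are made up for illustration:

// actions.js – a hypothetical async action creator for fetching comments
export const fetchComments = () => async dispatch => {
    dispatch({ type: 'COMMENTS_REQUEST' });            // tell Redux we started loading
    try {
        const res = await fetch('/api/comments');      // assumed REST endpoint
        const comments = await res.json();
        dispatch({ type: 'COMMENTS_SUCCESS', payload: comments });
    } catch (error) {
        dispatch({ type: 'COMMENTS_FAILURE', error });
    }
};

// reducer.js – tracks the loading flag and the comment list
const initialState = { loading: false, comments: [] };
export const commentsReducer = (state = initialState, action) => {
    switch (action.type) {
        case 'COMMENTS_REQUEST':
            return { ...state, loading: true };
        case 'COMMENTS_SUCCESS':
            return { loading: false, comments: action.payload };
        case 'COMMENTS_FAILURE':
            return { ...state, loading: false };
        default:
            return state;
    }
};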

A lot of our work goes into how to get the data. So where do the challenges come from? Look at a few requirements a product manager might raise:

  • Users create or modify comments and see updates in the list immediately;

Simply request the whole list from the API again! In general this is enough, but a demanding product manager may ask for “optimistic” updates to make the experience smoother. Still no problem: add another reducer.

  • When the mouse hovers over a user’s avatar, pop up the user’s detailed information (personal profile, contact information…)

You could ask for these fields to be added to the comments API response, but considering the size of the user data and how many duplicate users appear in the comments, it makes more sense not to include them in the list. So you normalize the data structures on the front end and store user data in a hash table keyed by user ID. In one afternoon, you have the perfect solution.
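
For illustration, the normalized store might look roughly like this; the shape and field names are hypothetical:

// A hypothetical normalized store: comments reference users by ID,
// and user details live in a hash table keyed by user ID.
const state = {
    comments: {
        allIds: [101, 102],
        byId: {
            101: { id: 101, content: 'Nice post!', creatorId: 1 },
            102: { id: 102, content: 'Thanks for sharing.', creatorId: 2 }
        }
    },
    users: {
        byId: {
            1: { id: 1, name: 'Alice' },   // detailed profile fetched lazily on hover
            2: { id: 2, name: 'Bob' }
        }
    }
};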

Faced with this scenario, we end up writing far too much imperative code: we describe step by step how to fetch the comment data, extract all the user IDs from it, deduplicate them, then request the user data, and so on. We also have to handle the details of normalization, caching, optimistic updates, and more ourselves. And that is exactly where Redux cannot help us. As a result, we wrap ever more powerful libraries and frameworks around Redux, yet none of them really puts the focus on data fetching itself.
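
That imperative chain might look something like this, building on the earlier sketch; the endpoints and action types are again made up to illustrate the point:

// A hypothetical thunk describing, step by step, how to get the data.
export const loadCommentsWithUsers = () => async dispatch => {
    const comments = await fetch('/api/comments').then(res => res.json());
    dispatch({ type: 'COMMENTS_SUCCESS', payload: comments });

    // Extract user IDs, deduplicate them, then fetch each user's details.
    const userIds = [...new Set(comments.map(c => c.creatorId))];
    const users = await Promise.all(
        userIds.map(id => fetch(`/api/users/${id}`).then(res => res.json()))
    );
    dispatch({ type: 'USERS_SUCCESS', payload: users });
};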

Declarative vs. Imperative

So what was it like in Apollo’s world?

import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

const CommentsQuery = gql`
    query Comments {
        comments {
            id
            content
            creator {
                id
                name
            }
        }
    }
`;

// Comments is the list component, defined below or imported from elsewhere
export default graphql(CommentsQuery)(Comments);

We use the graphql higher-order component (analogous to connect in Redux) to bind a GraphQL query to the Comments component, and that’s it. Is it really that simple? Yes. We no longer need to describe in componentDidMount how to send the request or how to process the data when it arrives. Instead, we delegate all of that to Apollo, and it does a good job: it sends the request to fetch the data when it is needed, and then maps the data onto the props of Comments for us.
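
Inside the component, the query result arrives on the data prop. A minimal sketch of what that looks like; the markup is made up:

class Comments extends React.Component {
    render() {
        // react-apollo injects the query result as this.props.data
        const { loading, comments } = this.props.data;
        if (loading) return <div>Loading...</div>;
        return (
            <ul>
                {comments.map(comment => (
                    <li key={comment.id}>
                        {comment.creator.name}: {comment.content}
                    </li>
                ))}
            </ul>
        );
    }
}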

Not only that, it also becomes a lot easier when we make updates, such as modifying a comment. We define a GraphQL mutation for it:

// ...

const updateComment = gql`
    mutation UpdateComment($id: Int!, $content: String!) {
      UpdateComment(id: $id, content: $content) {
        id
        content
        gmtModified
      }
    }
`;

class Comments extends React.Component {
    // ...
    onUpdateComment(id, content) {
        // the `name` option below exposes the mutation as this.props.updateComment
        this.props.updateComment({ variables: { id, content } });
    }

    // ...
}

export default graphql(updateComment, { name: 'updateComment' })(
    graphql(CommentsQuery)(Comments)
);

When we call updateComment, you will see, almost magically, that the comment in the list updates automatically. Any data returned by the GraphQL endpoint is automatically used to update the cache. In the UpdateComment mutation we defined its return value: the modified Comment, of type Comment, along with the fields we want back, content and gmtModified. With the id and the type, apollo-client updates the cache automatically and our list re-renders.
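
Under the hood the cache is normalized by object identity. A sketch of how that identity can be configured, assuming apollo-cache-inmemory (the default behaves similarly):

import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache({
    // Cache entries are keyed by type plus id, e.g. "Comment:42", which is
    // why a mutation returning { id, content, gmtModified } can update the
    // corresponding Comment object in place.
    dataIdFromObject: object => `${object.__typename}:${object.id}`
});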

Looking at the remaining requirement, we need to show the user’s details when the mouse hovers over their avatar. Here we need to define not only what data we need, but also care about “how” to get it (sending the request when the avatar is hovered). Apollo provides “imperative” support for this as well.

import { withApollo } from 'react-apollo';

class UserItem extends React.Component {
    // ...
    onHover() {
        const { client, id } = this.props;
        // UserQuery is a gql query for a single user's details, defined elsewhere
        client
            .query({ query: UserQuery, variables: { id } })
            .then(({ data }) => {
                this.setState({ fullUserInfo: data });
            });
    }
}

export default withApollo(UserItem);

Fortunately, we still don’t have to worry about caching ourselves. Thanks to Apollo’s global data cache, once we have queried user A, a later query for the same ID hits the cache directly: apollo-client resolves the data from the cache without sending a request. So the question is, what if I want to go back to the network every time?

client.query({
   query: UserQuery,
   variables: { id },
   fetchPolicy: 'cache-and-network'
});

Apollo provides a number of fetch policies for customizing the cache logic, such as the default cache-first, as well as cache-and-network, cache-only, and network-only.

These are the things that really attracted me to GraphQL and Apollo. Once you start thinking in GraphQL, you care more about what data your business component needs than about how to fetch it step by step. Most of the remaining business scenarios can be handled automatically on the front end through type-based derivation and caching. Of course, there is no space here to cover many of the other elegant parts, such as pagination, writing to the cache directly for optimistic updates, polling queries, and data subscriptions. We can go into those another time.
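
As one small taste of the optimistic-update story, a mutation call can carry an optimisticResponse that Apollo writes into the cache immediately and later replaces with the server result; a sketch based on the hypothetical updateComment prop from earlier:

this.props.updateComment({
    variables: { id, content },
    // Written into the cache right away, so the list updates before the
    // server responds; the real response replaces it afterwards.
    optimisticResponse: {
        UpdateComment: {
            __typename: 'Comment',
            id,
            content,
            gmtModified: new Date().toISOString()
        }
    }
});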

What about REST and other local state?

Reading this far, you might be thinking, “GraphQL is cool, Apollo is cool, but my back end is REST, so they’re out of the picture for now.” Not quite: Apollo Link was introduced in version 2.0 of Apollo Client, and with it you can, in theory, fetch data from any type of data source through GraphQL.

“Through GraphQL” means that we can write GraphQL queries to retrieve data from a REST API or from client-side state alike, so that Apollo Client manages all of the data in our application for us, including caching and stitching the data together.

const MIXED_QUERY = gql`
    query UserInfo {
        # graphql endpoint
        currentUser {
            id
            name
        }
        # client state
        browserInfo @client {
            platform
        }
        # rest api
        messages @rest(route: "/user/messages") @type(type: "[Message]") {
            title
        }
    }
`;

In a single query like this, we use GraphQL directives to stitch together data from the GraphQL endpoint, REST APIs, and client state, maintaining them under one abstraction. In the same way, we can encapsulate the corresponding mutations.
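
For context, wiring up such a client might look roughly like the sketch below, assuming apollo-link-rest and apollo-link-state are composed with the normal HTTP link; package names and options reflect the 2.x era and the details may differ:

import ApolloClient from 'apollo-client';
import { ApolloLink } from 'apollo-link';
import { HttpLink } from 'apollo-link-http';
import { RestLink } from 'apollo-link-rest';
import { withClientState } from 'apollo-link-state';
import { InMemoryCache } from 'apollo-cache-inmemory';

const cache = new InMemoryCache();

// Resolves fields marked with @client from local state.
const stateLink = withClientState({
    cache,
    defaults: {
        browserInfo: { platform: navigator.platform, __typename: 'BrowserInfo' }
    },
    resolvers: {}
});

// Resolves fields marked with @rest against the REST back end.
const restLink = new RestLink({ uri: '/api' });

const client = new ApolloClient({
    // Links are tried in order: client state, then REST, then the GraphQL endpoint.
    link: ApolloLink.from([stateLink, restLink, new HttpLink({ uri: '/graphql' })]),
    cache
});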

Closing thoughts

These are some of the things I’ve learned while working with Apollo and GraphQL. Although my understanding is still shallow, I can already feel the more elegant approach that thinking in GraphQL brings to the front end, and the efficiency of a complete front-end data layer solution like Apollo Client. I believe they will grow even more in 2018, and may even replace Redux as the go-to data management solution.

The Apollo community is also fairly active; dev-blog.apollodata.com regularly publishes very valuable articles, which are well worth a read ~