Abstract

From a front-end developer’s point of view, GraphQL is a data-layer paradigm that gives you optimistic updates, declarative data fetching alongside React components, and a what-you-see-is-what-you-get view of your data. Promoted by Facebook, it has many advantages over RESTful APIs. Using an example from a production environment, this article shows how to wrap an existing RESTful API into a GraphQL API that is easy to use on the front end, without affecting back-end developers.

Background

On a front-end project I took over from an outsourcing team, I found many interesting relics: useless imports copied and pasted between pages, old pages with a `.bak` suffix that barely differed from their siblings, childish component names, and so on. The way APIs were used was particularly touching, and went something like this:

```javascript
import * as Mapi from '../../lib/Mapi';

// ...
Mapi.districtPie(this.props.id).then(json => {
  if (json.data.code === 0 && json.data.data !== undefined) {
    this.setState({ districtPieData: json.data.data });
  }
});
Mapi.pie().then(json => {
  if (json.data.code === 0) {
    this.setState({ pieData: json.data.data });
  }
});
Mapi.whoami().then(json => {
  this.setState({ user: json.data.data });
});
```

It looks bloated because there is a lot of duplicated code, and it is confusing because we cannot see what is inside `pieData`. If the data is deeply nested, it is easy to hit `Cannot read property 'machine' of undefined` or `undefined is not an object`, and if ES6 computed property names are involved, the debugging takes even longer.
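To make the failure mode concrete, here is a minimal sketch (with a hypothetical `pieData` shape) of why deep access on not-yet-loaded state throws, and what the manual guard looks like:

```javascript
// State before the API responds: pieData has not arrived yet.
const stateBeforeLoad = { pieData: undefined };

// Naive deep access: throws "Cannot read property 'machine' of undefined".
function readNaively(state) {
  return state.pieData.machine.power;
}

// Defensive access: guard every level by hand, the boilerplate we want to avoid.
function readDefensively(state) {
  return state.pieData && state.pieData.machine
    ? state.pieData.machine.power
    : 0;
}
```

The defensive version survives loading, but you pay for it at every call site.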

There are many other problems; for example, I have to null-check the data in the UI.


  


The problem here is that we put the data in `state`, which, unlike `props`, gives us no type checking and no default values when something is null.
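The difference can be sketched in plain JavaScript: a defaultProps-style merge backfills missing values, while raw state gives you whatever arrived, holes included (the user shape here is hypothetical):

```javascript
// defaultProps-style fallback: missing fields get safe defaults.
const defaultUser = { id: '', name: '', role: 'guest' };

function withDefaults(user) {
  return { ...defaultUser, ...user };
}

// A half-loaded user straight from state: no name, no role yet.
const rawUser = { id: 42 };
const safeUser = withDefaults(rawUser);
```

React does this merge for you on props via `defaultProps`; nothing does it for state.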

While a data-flow framework such as Redux can solve most of these problems, what matters in a refactor is keeping iteration alive as long as possible, that is, first solving the big problems that block developers from understanding the code. What I really want to solve is the visibility of the data: "I want to see what the data looks like while I write the UI, and I want the data shaped exactly like the structure of my UI, so that binding data to the UI is easy."

As it is, every time I bind data I have to open Postman to see what the response looks like, then call the API and pull out the parts I need to bind to the UI. Because of business changes, I had to use several APIs in one UI component. And I like to capitalize letters in common abbreviations, like deviceID, while the back-end guy likes lower case, like dviceId. Oh my, he dropped an e! Ideally, the back end would move as fast as the business, break up the API, flip through the baby-name book, and give each endpoint a new meaningful name. But obviously the back end is not going to drop everything it is doing to help you reshape the API. After all, why should data follow the UI? Do you think your UI is that cool? And what if the requirements change again in three days? Ask the back end to rewrite the REST endpoints, and he'll chase you around the office.

Both Redux and GraphQL are used to solve these problems.

Declaring the data format

In REST, we separated data into different paths by business concern, which amounts to giving each pile of data a name that was easy to understand at the time of the first release, and then hoping each business case would keep to its own endpoint:

```javascript
export function whoami() { return apiget('/api/account/whoami'); }
export function pie() { return apiget('/api/data/index/pie'); }
export function entry() { return apiget('/api/info/entry'); }
export function districtPie(id) { return apiget(`/api/data/district/${id}/pie`); }
export function siteOverview(id) { return apiget(`/api/data/site/${id}/overview`); }
export function sitePie(id) { return apiget(`/api/data/site/${id}/pie`); }
export function cabinetsSwitches(id) { return apiget(`/api/data/site/${id}/cabinets/switches`); }
export function siteWarningQuantity(id) { return apiget(`/api/alarm/unread_quantity/site/${id}`); }
```

In GraphQL, we represent the data as a tree and can request any leaf directly. As in REST, we can use names that are easy to understand, except that instead of guessing what data hides behind a path, we can observe the data at a much finer granularity, something like this:

```graphql
# /api/account/whoami
type UserType {
  logined: Boolean
  username: String
  password: String
  token: String
  id: Int!
  name: String
  companyId: Int
  companyName: String
  departmentId: Int
  departmentName: String
  role: String
}

# Pieced together from /api/info/entry, /api/data/site/{id}/... and other endpoints
type PowerEntityType {
  id: Int!
  name: String!
  areaType: AreaType!
  address: String
  coordinate: String
  companyId: Int
  districtId: Int
  siteId: Int
  gatewayID: Int
  alarmInfos(pagesize: Int, pageIndex: Int, orderBy: OrderByType, fromTime: String, toTime: String, filterAlarmCode: String): [AlarmInfoType]
  unreadAlarm: Int
  pie: PieGraphType
  infos(siteID: Int!): [InfoType]
  # wires(siteID: Int!): [WireType]
  cabinets(siteID: Int!): [CabinetType]
  children(areaType: AreaType, id: Int): [PowerEntityType]
}
```
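As a toy illustration of "request any leaf directly" (not the real GraphQL engine, just the idea): given a data tree and a list of requested fields, only those fields come back:

```javascript
// Toy field selection: return only the requested leaves of a flat tree.
function select(tree, fields) {
  const result = {};
  for (const field of fields) {
    result[field] = tree[field];
  }
  return result;
}

const user = { id: 1, name: 'alice', password: 'secret', role: 'admin' };
const picked = select(user, ['id', 'name']);
// picked carries id and name, but no password
```

The real engine does this recursively and per-resolver, but the contract is the same: the shape of the response mirrors the shape of the request.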

You can see in the comments how each data type relates to the original REST endpoints. A data type can be pieced together from multiple RESTful data sources (for example, when integrating several microservices), or one REST data source can be recomposed into several data types along common-sense lines, so that when you reread the code three months later, you can still understand it.

This transformation has two immediate benefits. First, you can see what the data source looks like! No more incessant Postman requests because you cannot remember which fields each data endpoint has. Second, it gives you static type checking, so you can figure out what each type should look like as you write, then forget about checking data availability and focus on the business.

There are two indirect benefits. First, it writes like Flow or TypeScript: you are annotating data with type annotations, so in a few months, when you start learning Rust, you will feel right at home, beginner to master in 21 minutes. Second, you can comment data endpoints with the "???" operator to declaratively express your confusion about the business, and pair it with the "!" operator to declare that a data item is indispensable.

GraphQL

The language we used above to declare types is called GraphQL. GitHub is moving their API to GraphQL, so you may want to check that out first. The declarations live in a schema.js file that looks something like this:

```javascript
// schema.js
export const typeDefinitions = `
# /api/account/whoami
type UserType {
  logined: Boolean
  username: String
  password: String
  token: String
  id: Int!
  name: String
  companyID: Int
  companyName: String
  departmentID: Int
  departmentName: String
  role: String
}
# ... other type declarations omitted
`;
```

It is written as one multi-line template string, with the first declaration on the very first line so that line numbers are easy to match while debugging. We export it as the constant typeDefinitions. After declaring the data types, schema.js goes on to explain how each of these data types is fetched. We do this with "resolver functions":

```javascript
// schema.js
export const resolvers = {
  // ... other resolvers omitted; the actual code is the source of truth
  UserType: {
    logined({ token }, args, context) {
      // Where does the User in context come from? For now, just know it is a data source.
      return context.User.getLoginStatus(token);
    },
    username({ token }, args, context) {
      return context.User.getUsername(token);
    },
    password({ token }, args, context) {
      return context.User.getPassWord(token);
    },
    token({ token }, args, context) {
      return token;
    },
    id({ token }, args, context) {
      return context.User.getMetaData('id', token);
    },
    name({ token }, args, context) {
      return context.User.getMetaData('name', token);
    },
    companyID({ token }, args, context) {
      // Notice: the front-end field is companyID, the back-end field is companyId
      return context.User.getMetaData('companyId', token);
    },
    companyName({ token }, args, context) {
      return context.User.getMetaData('companyName', token);
    },
    departmentID({ token }, args, context) {
      return context.User.getMetaData('departmentId', token);
    },
    departmentName({ token }, args, context) {
      // dpartmentName is the backend's typo; we absorb it here, once
      return context.User.getMetaData('dpartmentName', token);
    },
    role({ token }, args, context) {
      return context.User.getMetaData('role', token);
    },
  },
};
```

You can see that we provide a function as a data source for every field that might be needed. When the UI requests certain fields, the request triggers only those functions, so only the data for the requested fields is returned; after a type check, it flows back to the UI that made the request.
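The "only the requested resolvers run" behavior can be sketched like this (a toy executor, not the graphql-js implementation; the names are made up):

```javascript
// Record which resolvers actually ran.
const calls = [];

const userResolvers = {
  id: root => { calls.push('id'); return root.id; },
  name: root => { calls.push('name'); return root.name; },
  role: root => { calls.push('role'); return root.role; },
};

// Toy executor: call only the resolvers the query names.
function execute(root, resolvers, requestedFields) {
  const out = {};
  for (const field of requestedFields) {
    out[field] = resolvers[field](root);
  }
  return out;
}

const result = execute({ id: 7, name: 'bob', role: 'admin' }, userResolvers, ['id', 'name']);
// role's resolver never ran, so no fetch for role data was triggered
```

This is why a field backed by an expensive REST call costs nothing until some query actually asks for it.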

In the UI, it might look something like this:

```javascript
import React, { Component, PropTypes } from 'react';
import { connect } from 'react-redux';
import { graphql } from 'react-apollo';
import gql from 'graphql-tag';

function mapStateToProps(state) {
  return {
    token: state.auth.getIn(['form', 'token']), // we get the token from Redux
  };
}

const MainPageData = gql`
  query MainPageData($token: String!) {  # declare that we will pass in a token
    User(token: $token) {
      id
      logined
      name
      companyName
      departmentName  # declare the fields we are going to use
      role
    }
  }
`;

const queryConfig = {
  options: ({ token }) => ({
    variables: { token }, // configure the graphql HOC to use the token from its props
    pollInterval: 10000,
  }),
  props: ({ ownProps, data: { refetch, loading, User } }) => ({
    refetch,
    loading,
    User,
  }),
};

@connect(mapStateToProps)           // Redux
@graphql(MainPageData, queryConfig) // GraphQL HOC
export default class Main extends Component {
  static propTypes = {
    loading: PropTypes.bool.isRequired, // provided by Apollo's @graphql
    refetch: PropTypes.func.isRequired,
    token: PropTypes.string.isRequired, // from Redux's @connect
    User: PropTypes.object.isRequired,  // the User data we declared
  };

  static defaultProps = {
    User: {
      id: '',
      logined: false,
      name: '',
      companyId: '',
      companyName: '',
      departmentName: '',
      role: '',
    },
  };

  render() {
    // ... write your UI here
  }
}
```

Client GraphQL data source

In fact, the GraphQL data endpoint, the data types in schema.js, and the resolver functions were supposed to be written by the back-end guy. But the back-end guy has a family and a life of his own; you should not ask him to give up all the good things in his life just to make writing UI more pleasant for you. We cannot be that cruel.

Better to insert a simple GraphQL data endpoint as a mediator between our UI layer and the data layer. In this mediator we can split the APIs that originally returned one big chunk of data into fine-grained data types, correct the typos in the data returned by the back end, and check whether the data is empty. Let's look at an example:

```javascript
// rest2graphqlInterface.js
import ApolloClient, { printAST, addTypename, addQueryMerging } from 'apollo-client';
import { graphql } from 'graphql';
import { merge, mapValues } from 'lodash';
import { makeExecutableSchema } from './schemaGenerator';
import { typeDefinitions as rootSchema, resolvers as rootResolvers } from './schema';
import { User } from './models';
import HouTaiDaShuConnector from './HouTaiDaShuConnector';

const typeDefs = [rootSchema];
const resolvers = merge(rootResolvers);
const executableSchema = makeExecutableSchema({ typeDefs, resolvers });
const serverConnector = new HouTaiDaShuConnector();

const REST2GraphQLInterface = {
  query(graphQLRequest) {
    return graphql(
      executableSchema,
      printAST(graphQLRequest.query),
      undefined,
      {
        User: new User({ connector: serverConnector }), // plug your own connector into your model here
      },
      graphQLRequest.variables
    );
  },
};

const client = new ApolloClient({
  networkInterface: addQueryMerging(REST2GraphQLInterface),
  queryTransformer: addTypename,
  shouldBatch: true,
});

export default client;
```
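The field-renaming and typo-correcting work this mediator takes on can be sketched in isolation (the backend field names below, typos included, follow the examples earlier in this article):

```javascript
// What a backend response looks like, misspellings and all.
const backendJson = { dviceId: 42, dpartmentName: 'Ops' };

// The mediator maps backend names to the names the UI schema declares.
function normalizeUser(json) {
  return {
    deviceID: json.dviceId,             // fix the casing the front end prefers
    departmentName: json.dpartmentName, // absorb the backend typo here, once
  };
}

const clean = normalizeUser(backendJson);
```

Every component downstream sees only the clean names; the typo lives in exactly one place.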

Then replace the Redux Provider with the GraphQL members-only ApolloProvider. From:

```jsx
<Provider store={store}>
  {/* your root component */}
</Provider>
```

to:

```jsx
<ApolloProvider store={store} client={client}>
  {/* your root component */}
</ApolloProvider>
```

With that, the data-fetching chain is roughly connected, and you can see that the data flows like this: resolver functions -> executableSchema -> REST2GraphQLInterface -> ApolloClient -> Redux -> your UI's props -> your UI.

What are the connector and the Model?

We just saw that when we created the REST2GraphQLInterface, we wrote:

```javascript
User: new User({ connector: serverConnector }), // plug your own connector into your model here
```

Here User is a Model. Why write it this way? We could put all the data fetching, that is, the requests to the back end's RESTful API, directly into the resolver functions, but then we would write a lot of duplicated code, and every little piece of data would trigger its own fetch, a waste of resources: of three thousand rivers, take just one ladle. So we abstract out a Model layer:

```javascript
// models.js
import { has } from 'lodash';

const MODEL_DONT_HAVE_THIS_FIELD = 'Model does not have this field';

export class User {
  constructor({ connector }) {
    this.connector = connector;
    this.metaData = {};
  }

  async getLoginStatus(token) {
    try {
      await this.getAllMetaData(token);
      return true;
    } catch (error) {
      return false;
    }
  }

  async getAllMetaData(token) {
    // This is a naive caching solution. I have a cleverer one,
    // but there is too little space here to write it down.
    if (this.metaData[token] === undefined) {
      this.metaData[token] = await this.connector.get('/api/account/whoami', token);
    }
    return this.metaData[token];
  }

  getMetaData(field, token) {
    return this.getAllMetaData(token).then(meta =>
      has(meta, field)
        ? meta[field]
        : Promise.reject(new Error(`${MODEL_DONT_HAVE_THIS_FIELD}: ${field}`))
    );
  }
}
```
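The caching win is easiest to see in a synchronous toy version (the real Model is async; fetchWhoami here is a hypothetical stand-in for the connector):

```javascript
// A per-token cache: the first field access fetches, later ones reuse the result.
function makeUserModel(fetchWhoami) {
  const cache = {};
  return {
    getAllMetaData(token) {
      if (!(token in cache)) {
        cache[token] = fetchWhoami(token);
      }
      return cache[token];
    },
    getMetaData(field, token) {
      return this.getAllMetaData(token)[field];
    },
  };
}

let fetches = 0;
const model = makeUserModel(() => {
  fetches += 1;
  return { id: 1, name: 'alice' };
});

const id = model.getMetaData('id', 'some-token');
const name = model.getMetaData('name', 'some-token');
// two fields resolved, only one fetch happened
```

Without the Model layer, each field resolver would have called /api/account/whoami separately.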

And since we want the Model to focus on data caching and data cleaning, we abstract the network requests out into a connector:

```javascript
// HouTaiDaShuConnector.js
export default class HouTaiDaShuConnector {
  get(route, token) {
    const tokenWithPrefix = `${/\?/.test(route) ? '&' : '?'}token=${token}`;
    // Promise.try (Bluebird) catches synchronous throws from the fetch setup
    return Promise.try(() =>
      fetch(`${HouTaiDaShuPATH}/${route}${tokenWithPrefix}`, {
        method: 'GET',
        headers: {},
      })
    )
      .then(checkStatus)
      .then(response => response.json())
      .then(json => {
        if (json.code !== 0) {
          return Promise.reject(json.message);
        }
        return json.data;
      });
  }
}
```
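The connector's query-string logic is worth pulling out on its own: append the token with '?' or '&' depending on whether the route already carries a query string:

```javascript
// The same regex test the connector uses, isolated as a pure function.
function withToken(route, token) {
  const separator = /\?/.test(route) ? '&' : '?';
  return `${route}${separator}token=${token}`;
}
```

A route without a query string gets `?token=...`; one that already has parameters gets `&token=...`.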

This way we can easily point the connector at the Intranet test server or the external production server without touching the logic in the Model or the Schema (for example, connecting to the test server while developing against the latest business changes).
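Swapping connectors works because anything with the same get(route, token) signature can stand in for the real one, for example a fake connector for tests (a sketch; the names are hypothetical):

```javascript
// A fake connector that serves canned fixtures instead of hitting the network.
class FakeConnector {
  constructor(fixtures) {
    this.fixtures = fixtures;
  }
  get(route, token) {
    // Same contract as the real connector: resolve with the payload for the route.
    return Promise.resolve(this.fixtures[route]);
  }
}

const fake = new FakeConnector({ '/api/account/whoami': { id: 1, name: 'alice' } });
const pending = fake.get('/api/account/whoami', 'any-token');
```

Pass `new User({ connector: fake })` into the context and the whole schema runs against fixtures.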

Conclusion

We set up a lightweight GraphQL server on the client side to act as an intermediary between us and the back end. Based on testing in production, this has no visible impact on performance, and good caching logic can even speed up page loading. With this technique, data fetching is no longer written in componentWillMount(), no longer buried deep in a Redux reducer, no longer hidden in a redux-saga generator; it sits right next to our UI, declared beside the components that use it: what you see is what you get.

The client-side-GraphQL-server approach is also well suited to controlling IoT devices. After all, it is rarely practical to run a server-side GraphQL server on an IoT device, yet writing IoT control against a raw RESTful API gets bloated; converting the data to GraphQL on the client is a feasible middle ground.

Questions are welcome in the China GraphQL User Group (302490951). For example, after I wrote a Relay tutorial, some comrades asked me whether Relay was any good. I said: it is difficult to use, so now I use ApolloStack.

References and discussion

Wrapping a REST API in GraphQL (apollo-client issue #379)