1. The documentation

As with any type of software project, good documentation can go a long way toward accelerating API adoption.

Two of the most popular options from the “Specifications” section of the first article in this series can also double as documentation solutions. If you have adopted the OAS, you already have a great way of documenting your API and integrating manual testing of it, in the form of the Swagger UI.

The Postman application provides similar functionality in this regard, and as a last resort, you always have the option of writing your own documentation from scratch (although it would be harder to maintain and would lack an integrated test environment).
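As a minimal sketch of the first option, here is how a Swagger UI could be mounted next to an Express-based API. The swagger-ui-express package and the openapi.json path are my illustrative choices, not something the tooling mandates:

```typescript
import express from "express";
import swaggerUi from "swagger-ui-express";
// Hypothetical path to your OAS document; importing JSON assumes
// "resolveJsonModule" is enabled in tsconfig.json.
import spec from "./openapi.json";

const app = express();

// Interactive, testable documentation generated straight from the spec.
app.use("/docs", swaggerUi.serve, swaggerUi.setup(spec));

app.listen(3000);
```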

Don’t forget your errors

While your consumers can often work out the resources and architecture of an API from minimal documentation and a small number of test requests, the errors the API returns are much harder to discover through experimentation. You need to document all of these errors in detail, along with the endpoints that can return them and the conditions that cause them to occur.
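One thing that makes this documentation effort tractable is a single, consistent error shape across the whole API. The envelope below is a sketch of one possible convention; the field names and example values are mine, not a standard:

```typescript
// A hypothetical error envelope: one documented shape for every error
// the API can return, so consumers never have to guess.
interface ApiError {
  code: string;     // stable, documented identifier, e.g. "ORDER_NOT_FOUND"
  message: string;  // human-readable summary
  details?: string; // optional hints for resolving the problem
  docsUrl?: string; // link to this error's entry in the documentation
}

// Example: the body of a 404 response from GET /orders/{id}
const notFound: ApiError = {
  code: "ORDER_NOT_FOUND",
  message: "No order exists with the requested id.",
  docsUrl: "https://api.example.com/docs/errors#ORDER_NOT_FOUND",
};
```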

Experience has shown that errors are the first part of the documentation that teams fail to keep in sync with changes in their code (at least when a specification-driven approach, such as the OAS, is not being used).

2. The entity reference

If your application is large enough, your API may return multiple types of entities, sometimes with complex relationships between them. Some of these entities may grow to contain so much data that you won’t want to return all of it on every call.

Field filtering can help here, with a default filtered view of the object listing only a few of the available properties, but it then becomes impossible for a consumer to get a complete overview of each entity from a simple request.
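To make the idea concrete, here is a sketch of how such filtering might look in an Express handler; the fields query parameter, the default field list, and the sample entity are all assumptions of mine:

```typescript
import express from "express";

const app = express();

// A sample entity carrying more data than most calls need.
const user = {
  id: 42,
  name: "Ada",
  email: "ada@example.com",
  biography: "A very long text field...",
  avatar: "...large base64 image data...",
};

// GET /users/42                     -> default, trimmed view
// GET /users/42?fields=id,biography -> the caller picks the fields
app.get("/users/:id", (req, res) => {
  const defaults = ["id", "name", "email"];
  const fields =
    typeof req.query.fields === "string"
      ? req.query.fields.split(",")
      : defaults;
  const view = Object.fromEntries(
    Object.entries(user).filter(([key]) => fields.includes(key)),
  );
  res.json(view);
});

app.listen(3000);
```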

In such cases, it is best to have a single, complete reference of the available entities that developers can browse to disambiguate the use of each field and the relationships between entities. In addition, if you do not provide an SDK or a set of business objects, such a reference is an excellent resource for developing a data model on the client side.

3. Version history

This is important to ensure that your clients are always up to date. You can ship continuous incremental updates to the API without having to deprecate your major version (by adding minor versions), but you should make these changes clearly visible to your consumers.
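One simple way to make the running version visible is to advertise it on every response. The header name below is a made-up convention of mine, not a standard:

```typescript
import express from "express";

const app = express();

// Hypothetical convention: echo the exact API version on every response
// so clients can notice minor updates without a breaking change.
app.use((_req, res, next) => {
  res.set("X-API-Version", "1.4.0"); // major.minor.patch of the running API
  next();
});
```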

You should maintain a separate version of the documentation for each major (or even minor) version of the API that is still available in production. That way, even if they skip a few minor releases, your users can catch up on your changes and take advantage of them in the next version of their client application.

A changelog of the differences between versions is also useful as a quick overview of all updates.

Beyond the conventional

As I mentioned in the introduction, this article does its best not to repeat the common advice you can read in every other REST API development resource online. With that in mind, I’m assuming you’ve read at least two such resources before arriving here, and that you have a clear understanding of the basic functionality an API needs to implement.

In this section, I’ll mention a few features that I’ve found quite useful over the years but that aren’t talked about very often.

1. HATEOAS

Despite Roy Fielding’s dissertation making it clear that there can be no REST without HATEOAS, and his even harsher words for so-called REST APIs that lack it, real-world adoption of HATEOAS remains… questionable (to put it mildly).

Until a few years ago, I took a strictly utilitarian approach to REST API design. Having to work on large projects with a long history and serious versioning issues forced me to re-evaluate that approach. While I’m still skeptical about whether the bloat of HAL or other hypermedia formats makes sense, I do appreciate practical solutions such as GitHub’s use of hypermedia (although GitHub has since moved to GraphQL for the latest version of its API).
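For illustration, here is a sketch of what a hypermedia-flavored payload in that practical spirit might look like; the entity, link relations, and URLs are all invented for the example:

```typescript
// A hypermedia-style response body (illustrative URLs): instead of
// hard-coding URL templates, the client follows links the server provides.
const order = {
  id: 1017,
  status: "shipped",
  _links: {
    self:     { href: "https://api.example.com/orders/1017" },
    customer: { href: "https://api.example.com/customers/42" },
    cancel:   { href: "https://api.example.com/orders/1017/cancellation" },
  },
};

// The client needs only the API's entry point; further URLs are discovered.
const cancelUrl = order._links.cancel.href;
```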

I don’t claim to have seen first-hand a HATEOAS API delivering every benefit it touts, but I have already come to appreciate some of them. I haven’t yet seen the ideal client, one that works knowing nothing but the base URL of the API, but it’s also possible that I simply haven’t searched hard enough.

2. Caching

The HTTP protocol has supported caching since the early days of the web, with varying degrees of success. While the initial protocol standards left room for improvement, failures were usually caused by server misconfiguration or by inadequate server or client implementations (including early web browsers).

As we approach the end of 2018, HTTP/1.1 is almost universally available and adoption of HTTP/2 is rising rapidly, yet many web APIs (outside those of the major companies) still completely ignore the protocol’s standard caching mechanisms. At best, clients are expected to implement their own caching scheme (driven by ad hoc parameters provided in the API response), and at worst, there is no caching at all.

That situation sounds bad enough on its own, but it’s even worse when you consider that the protocol’s entire caching infrastructure works automatically for most clients (browsers, native network libraries, and so on), as long as the API returns the correct caching headers! If that doesn’t sound too onerous, do the web a favor and support caching in your API.

There are plenty of online resources covering the details of the HTTP caching mechanisms, so I won’t repeat them here; but if you understand what caching means in general, implementing it is a lot easier than you might think. All you need to do is let the client know whether it is allowed to cache a response and when you expect the response’s contents to change (thus invalidating the cache).
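A single standard header answers both questions. The Express handlers, routes, and lifetimes below are only a sketch of the idea:

```typescript
import express from "express";

const app = express();

app.get("/products", (_req, res) => {
  // Any cache (shared or private) may store this and reuse it for 5 minutes.
  res.set("Cache-Control", "public, max-age=300");
  res.json([{ id: 1, name: "Widget" }]);
});

app.get("/me", (_req, res) => {
  // User-specific data: only the client's own private cache may store it.
  res.set("Cache-Control", "private, max-age=60");
  res.json({ id: 42, name: "Ada" });
});

app.listen(3000);
```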

If you want the client to cache a specific response but cannot predict when its contents will change, you don’t have to make the client download the entire response body every time. With ETags, you can send a short reply telling the client whether the content has changed since it was last downloaded. Finally, unless you have to support truly legacy clients, you can skip the HTTP/1.0 caching mechanisms entirely (although they are easy to implement if you’ve done everything else right).
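Here is a sketch of that handshake in Express; the route is invented, and a hash of the body stands in for whatever version identifier your data really has (Express can also generate ETags automatically; this just makes the mechanism explicit):

```typescript
import { createHash } from "crypto";
import express from "express";

const app = express();

app.get("/reports/latest", (req, res) => {
  const body = JSON.stringify({ generatedAt: "2018-12-01", rows: [] });
  // Any stable fingerprint of the content works as an ETag; a hash is simple.
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

  if (req.get("If-None-Match") === etag) {
    // Content unchanged since the client's last download: short 304, no body.
    res.status(304).end();
    return;
  }
  res.set("ETag", etag);
  res.type("application/json").send(body);
});

app.listen(3000);
```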

3. Rate Limits

Typically, rate limiting is the last thing an API developer implements, usually only after some kind of service abuse has been reported. I believe it should be part of any API meant to serve more than a small number of consumers, and that it should be implemented carefully.

A common mistake developers make is to rely on naive implementations that have only ever been exercised by a few people during development. Don’t be fooled! A simple algorithm that counts requests per IP address over time will likely end up denying service to your B2B clients, whose requests often originate from a limited IP space (behind corporate NATs, for example)! Think in terms of authentication tokens instead, and be very deliberate about how you enforce rate limits; otherwise, don’t enforce them at all until you’ve educated yourself and considered all the possible consequences.
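As a minimal sketch of limiting per token rather than per IP, here is a fixed-window counter in Express middleware; the window size, the limit, and the use of the raw Authorization header as a key are all assumptions for the example:

```typescript
import express from "express";

const app = express();

const WINDOW_MS = 60_000; // 1-minute window (illustrative)
const LIMIT = 100;        // requests per window, per token (illustrative)

const counters = new Map<string, { count: number; resetAt: number }>();

app.use((req, res, next) => {
  // Key on the auth token, not the IP address, so clients behind shared
  // addresses (corporate NATs, etc.) are not throttled collectively.
  const token = req.get("Authorization") ?? "anonymous";
  const now = Date.now();
  let entry = counters.get(token);
  if (!entry || now >= entry.resetAt) {
    entry = { count: 0, resetAt: now + WINDOW_MS };
    counters.set(token, entry);
  }
  entry.count += 1;
  if (entry.count > LIMIT) {
    res.status(429).json({ code: "RATE_LIMITED", message: "Slow down." });
    return;
  }
  next();
});
```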

If your API does enforce rate limits, be sure to keep your users informed by reporting their current status in the response headers. If you need inspiration, check out the examples set by Twitter’s and GitHub’s APIs.
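Continuing the limiter sketch above, the status could be exposed with GitHub-style header names (a widely copied convention rather than a standard):

```typescript
// Continuing the sketch above: report the limiter's state on every response.
app.use((req, res, next) => {
  const token = req.get("Authorization") ?? "anonymous";
  const entry = counters.get(token);
  if (entry) {
    res.set("X-RateLimit-Limit", String(LIMIT));
    res.set("X-RateLimit-Remaining", String(Math.max(0, LIMIT - entry.count)));
    res.set("X-RateLimit-Reset", String(Math.ceil(entry.resetAt / 1000)));
  }
  next();
});
```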

4. The Accept-Language header

Although this request header is nearly ubiquitous and very useful, it is often overlooked. Its basic purpose is to provide the server with a list (in order of preference) of the languages the client understands.

I’ve seen many APIs ignore this header and rely on custom parameters for language selection, even though most clients send it by default (respecting the end user’s localization preferences). Consider honoring it whenever your API needs to serve localized content of any kind.
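Express exposes this negotiation directly. In the sketch below, the supported languages, the route, and the English fallback are my assumptions:

```typescript
import express from "express";

const app = express();

const greetings: Record<string, string> = {
  en: "Hello",
  fr: "Bonjour",
  de: "Hallo",
};

app.get("/greeting", (req, res) => {
  // Pick the best match from the client's Accept-Language preferences,
  // falling back to English when nothing matches.
  const lang = req.acceptsLanguages("en", "fr", "de") || "en";
  res.set("Content-Language", lang);
  res.json({ message: greetings[lang] });
});

app.listen(3000);
```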

The end

Thanks for reading this far. I hope you didn’t find this series of posts too laborious to read, and that it has given you at least a small amount of useful information to help you build great APIs.