This analysis is based on OkHttp 3.10.0. The latest OkHttp 4.x (e.g. 4.0.1) keeps largely the same logic as version 3; the main difference is that it is implemented in Kotlin.

OkHttp introduction

OkHttp is the most widely used network request framework on Android, open sourced by Square. Starting with Android 4.4, Google replaced the underlying implementation of HttpURLConnection with OkHttp, and the popular Retrofit framework also uses OkHttp underneath.

Advantages:

  • Supports SPDY, HTTP/1.x, HTTP/2, QUIC, and WebSocket
  • Connection pools reuse the underlying TCP(Socket) to reduce request latency
  • Seamless support for GZIP reduces data traffic
  • Caching response data reduces repeated network requests
  • Automatically retries other IPs of a host when a request fails, and handles redirects automatically
  • … .

Usage flow

When making a request with OkHttp, the user deals with at least three roles: OkHttpClient, Request, and Call. OkHttpClient and Request can be created with the Builder pattern they provide. A Call is what is returned after a Request is handed to OkHttpClient, ready to be executed.

Builder pattern: separate the construction of a complex object from its representation, so that the same construction process can create different representations. When instantiating OkHttpClient and Request there are many properties to set, and different developers need different subsets of them; using the Builder pattern means the caller does not have to care about the internal details of the class. Once configured, the builder initializes the final object step by step.
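As a quick illustration, a typical 3.x usage might look roughly like this (not from the OkHttp source; the URL and timeout values are placeholders):

// A minimal usage sketch; placeholder URL and timeout.
import java.io.IOException;
import java.util.concurrent.TimeUnit;
import okhttp3.*;

public class QuickStart {
    public static void main(String[] args) throws IOException {
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(10, TimeUnit.SECONDS)   // placeholder timeout
                .build();

        Request request = new Request.Builder()
                .url("https://example.com/api")         // placeholder URL
                .get()
                .build();

        // Synchronous: blocks the calling thread (never call this on the Android main thread)
        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code());
        }

        // Asynchronous: the dispatcher's thread pool runs the request
        client.newCall(request).enqueue(new Callback() {
            @Override public void onFailure(Call call, IOException e) {
                e.printStackTrace();
            }
            @Override public void onResponse(Call call, Response response) throws IOException {
                response.close();
            }
        });
    }
}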

At the same time, OkHttp applies the facade pattern in its design: the complexity of the whole system is hidden, and the interfaces of the subsystems are exposed uniformly through a single client, OkHttpClient.

OkHttpClient holds all the configuration, such as proxy settings, SSL certificate settings, and so on. Call itself is an interface; the implementation we actually get is RealCall:

static RealCall newRealCall(OkHttpClient client, Request originalRequest, boolean forWebSocket) {
    // Safely publish the Call instance to the EventListener.
    RealCall call = new RealCall(client, originalRequest, forWebSocket);
    call.eventListener = client.eventListenerFactory().create(call);
    return call;
}

Call's execute() represents a synchronous request, while enqueue() represents an asynchronous request. The only difference is that one initiates the network request directly, while the other hands it to OkHttp's built-in thread pool. This is where OkHttp's task dispatcher comes in.

The dispatcher

The Dispatcher is responsible for dispatching request tasks, and it contains a thread pool. We can also create a Dispatcher with our own thread pool and pass it in when creating OkHttpClient, as sketched below.
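A sketch of supplying a custom thread pool, assuming the public Dispatcher(ExecutorService) constructor and the Builder's dispatcher(...) setter (both exist in 3.x); the pool size and limits below are arbitrary examples:

// Hypothetical pool size and limits, for illustration only
ExecutorService myPool = Executors.newFixedThreadPool(8);
Dispatcher dispatcher = new Dispatcher(myPool);
dispatcher.setMaxRequests(32);        // optional: lower the global async limit
dispatcher.setMaxRequestsPerHost(4);  // optional: lower the per-host async limit

OkHttpClient client = new OkHttpClient.Builder()
        .dispatcher(dispatcher)
        .build();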

The members of this Dispatcher are:

// Maximum number of asynchronous requests executing at the same time
private int maxRequests = 64;
// Maximum number of asynchronous requests to the same host at the same time
private int maxRequestsPerHost = 5;
// Idle callback (a task, set by the user, to run when there are no requests)
private @Nullable Runnable idleCallback;

// The thread pool used by asynchronous requests
private @Nullable ExecutorService executorService;

// Asynchronous requests waiting to be executed
private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

// Asynchronous requests currently executing
private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

// Synchronous requests currently executing
private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();

A synchronous request

synchronized void executed(RealCall call) {
	runningSyncCalls.add(call);
}

Synchronous requests do not need the thread pool and have no limits, so the dispatcher simply keeps track of them.

An asynchronous request

synchronized void enqueue(AsyncCall call) {
	if (runningAsyncCalls.size() < maxRequests && runningCallsForHost(call) < maxRequestsPerHost) {
		runningAsyncCalls.add(call);
		executorService().execute(call);
	} else {
		readyAsyncCalls.add(call);
	}
}

If the number of running asynchronous requests has not reached the limit of 64, and runningCallsForHost(call) < maxRequestsPerHost shows that fewer than five requests to the same host are running, the task is added to the running queue and submitted to the thread pool. Otherwise it joins the waiting queue first.

A task handed to the thread pool simply executes, but a task in the waiting queue has to wait for a free slot before it can start. Therefore the dispatcher's finished method is called each time a request finishes executing:

// Called for asynchronous requests
void finished(AsyncCall call) {
	finished(runningAsyncCalls, call, true);
}
// Called for synchronous requests
void finished(RealCall call) {
	finished(runningSyncCalls, call, false);
}

private <T> void finished(Deque<T> calls, T call, boolean promoteCalls) {
	int runningCallsCount;
	Runnable idleCallback;
	synchronized (this) {
		// Whether asynchronous or synchronous, remove the call from its running queue
		// (runningAsyncCalls/runningSyncCalls) once it has finished
		if (!calls.remove(call)) throw new AssertionError("Call wasn't in-flight!");
		if (promoteCalls) promoteCalls();
		// The total number of asynchronous and synchronous tasks still executing
		runningCallsCount = runningCallsCount();
		idleCallback = this.idleCallback;
	}
	// Run the idle callback when no tasks are executing
	if (runningCallsCount == 0 && idleCallback != null) {
		idleCallback.run();
	}
}

Note that only asynchronous tasks have limits and a waiting queue, so promoteCalls() is executed after an asynchronous task is removed from the running queue. As the name suggests, this method promotes waiting requests into the running queue.

private void promoteCalls() {
	// Return if the running queue is already full
	if (runningAsyncCalls.size() >= maxRequests) return;
	// Return if there are no tasks waiting to be executed
	if (readyAsyncCalls.isEmpty()) return;
	// Traverse the waiting queue
	for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
		AsyncCall call = i.next();
		// A waiting task may only be promoted if fewer than 5 requests to its host are running
		if (runningCallsForHost(call) < maxRequestsPerHost) {
			i.remove();
			runningAsyncCalls.add(call);
			executorService().execute(call);
		}

		if (runningAsyncCalls.size() >= maxRequests) return; // Reached max capacity.
	}
}

If the conditions are met, the task in the waiting queue is moved into runningAsyncCalls and handed to the thread pool for execution. That is the whole of the dispatcher; the logic is fairly simple.

Request process

The user never operates the task dispatcher directly. RealCall provides execute and enqueue to start a synchronous or asynchronous request respectively.

@Override public Response execute() throws IOException {
    synchronized (this) {
      if (executed) throw new IllegalStateException("Already Executed");
      executed = true;
    }
    captureCallStackTrace();
    eventListener.callStart(this);
    try {
      // Notify the dispatcher
      client.dispatcher().executed(this);
      // Execute the request
      Response result = getResponseWithInterceptorChain();
      if (result == null) throw new IOException("Canceled");
      return result;
    } catch (IOException e) {
      eventListener.callFailed(this, e);
      throw e;
    } finally {
      // Request completed
      client.dispatcher().finished(this);
    }
}

An asynchronous request likewise ends up calling getResponseWithInterceptorChain() to perform the actual request:

@Override
public void enqueue(Callback responseCallback) {
	synchronized (this) {
		if (executed) throw new IllegalStateException("Already Executed");
		executed = true;
	}
	captureCallStackTrace();
	eventListener.callStart(this);
	// Hand the call to the dispatcher
	client.dispatcher().enqueue(new AsyncCall(responseCallback));
}

If the RealCall has already been executed, it must not be executed again. An asynchronous request submits an AsyncCall to the dispatcher.

AsyncCall is a subclass of Runnable (via NamedRunnable). When it is run on a thread its run method executes, which delegates to the execute method of AsyncCall:

final class AsyncCall extends NamedRunnable {
	private final Callback responseCallback;

	AsyncCall(Callback responseCallback) {
		super("OkHttp %s", redactedUrl());
		this.responseCallback = responseCallback;
	}

	// Executed by the thread pool
	@Override
	protected void execute() {
		boolean signalledCallback = false;
		try {
			Response response = getResponseWithInterceptorChain();
			// ...
		} catch (IOException e) {
			// ...
		} finally {
			// Request completed
			client.dispatcher().finished(this);
		}
	}
}

public abstract class NamedRunnable implements Runnable {
    protected final String name;

    public NamedRunnable(String format, Object... args) {
        this.name = Util.format(format, args);
    }

    @Override
    public final void run() {
        String oldName = Thread.currentThread().getName();
        Thread.currentThread().setName(name);
        try {
            execute();
        } finally {
            Thread.currentThread().setName(oldName);
        }
    }

    protected abstract void execute();
}

AsyncCall is also a non-static inner class of RealCall, meaning that it holds a reference to the outer RealCall instance and can call the outer class's methods directly.

As you can see, both synchronous and asynchronous requests actually do their work in getResponseWithInterceptorChain(). This method is at the heart of OkHttp: the interceptor chain of responsibility. But before we get to chains of responsibility, let's review the basics of thread pools.

Dispatcher thread pool

As mentioned earlier, the dispatcher allocates request tasks and contains a thread pool. When an asynchronous request is made, the task is handed to the thread pool for execution. How is the default thread pool in the dispatcher defined, and why is it defined that way?

public synchronized ExecutorService executorService() {
    if (executorService == null) {
      executorService = new ThreadPoolExecutor(
                            0,                                              // core pool size
                            Integer.MAX_VALUE,                              // maximum pool size
                            60,                                             // keep-alive time for idle threads
                            TimeUnit.SECONDS,                               // keep-alive time unit
                            new SynchronousQueue<Runnable>(),               // work queue
                            Util.threadFactory("OkHttp Dispatcher", false)  // thread factory
      );
    }
    return executorService;
}

The thread pool in OkHttp's dispatcher is defined as above; it is effectively the same pool that Executors.newCachedThreadPool() creates. First, the core pool size is 0, which means the pool does not keep threads alive when idle: any thread that has done no work for 60 seconds is reclaimed. Combining a maximum pool size of Integer.MAX_VALUE with a SynchronousQueue gives maximum throughput: whenever the pool is asked to run a task and no idle thread is available, it does not wait but immediately creates a new thread to run it. Different work queues give the thread pool different queuing behavior; the common ones are ArrayBlockingQueue, LinkedBlockingQueue, and SynchronousQueue.
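For comparison, Executors.newCachedThreadPool() is built with essentially the same parameters: 0 core threads, an unbounded maximum, a 60-second keep-alive and a SynchronousQueue; only the thread factory differs.

// Equivalent of the JDK's cached thread pool, shown for comparison
ExecutorService cached = new ThreadPoolExecutor(
        0, Integer.MAX_VALUE,
        60L, TimeUnit.SECONDS,
        new SynchronousQueue<Runnable>());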

Assume that the core threads are occupied when the task is submitted to the thread pool:

ArrayBlockingQueue: Array-based blocking queue, initialized to specify a fixed size.

With this queue, a task submitted to the pool is first offered to the waiting queue. Once the waiting queue is full, the attempt to enqueue fails and the pool creates a non-core thread to run the new task instead. As a result, a task submitted later may execute before tasks that were submitted earlier and are still waiting.

LinkedBlockingQueue: A blocking queue based on a linked list implementation, initialized with or without a specified size.

When a size is specified, it behaves the same as ArrayBlockingQueue. When no size is specified, the default Integer.MAX_VALUE is used as the queue capacity. In that case the pool's maximum-threads parameter is useless, because offering a task to the waiting queue always succeeds. The result is that all tasks are executed by the core threads, and if the core threads are always busy, tasks just keep waiting.

SynchronousQueue: No capacity queue.

Using this queue means you want maximum concurrency, because handing a task to this queue fails whenever no thread is waiting to take it. If there is no free thread, the pool checks whether the current thread count is below the maximum and creates a new thread to run the newly submitted task. There is no waiting at all; the only constraint is the maximum number of threads. So it is usually paired with Integer.MAX_VALUE to achieve truly wait-free execution.

But be careful: as we all know, process memory is limited, and every thread needs some memory, so you cannot have an unlimited number of threads. Even though the maximum pool size is Integer.MAX_VALUE, OkHttp's dispatcher caps concurrent asynchronous requests at 64, which resolves the problem while maximizing throughput.

Interceptor chain of responsibility

The core work of OkHttp happens in getResponseWithInterceptorChain(). Before diving into this method, let's first get to know the chain of responsibility pattern, because this method uses it to complete the request step by step.

A chain of responsibility is a chain made up of a series of handlers, somewhat like a factory assembly line; as the joke goes, many students' boyfriends and girlfriends came about the same way.

Chain of Responsibility pattern

A chain of receiver objects is created for the request. This pattern decouples the sender and receiver of a request. Each receiver typically holds a reference to the next receiver; if an object cannot handle the request, it passes the same request on to the next receiver, and so on. For example:

The Tanabata Festival has just passed. Zhou Zhou (I don't know why he is the first person who comes to mind) was a single guy back in his student days. Having noticed plenty of pretty girls in the study room, he went there every night and did the same thing every time.

Zhou Zhou sat in the last row of the study room every night and passed a note forward that said: "Hi, can you be my girlfriend? My specialty is being especially long; if you are not interested, please pass this forward." The note was passed on from one person to the next and finally reached the cleaning lady. In the end he and the cleaning lady lived a happy life together; truly a... happy story.

So what does this process look like in code?

public abstract class Transmit {
    // The next transmitter in the chain of responsibility
    protected Transmit nextTransmit;

    public abstract boolean request(String msg);

    public void setNextTransmit(Transmit transmit) {
        nextTransmit = transmit;
    }
}

public class Zero extends Transmit {
    public boolean request(String msg) {
        System.out.println("Zero received the note");
        return nextTransmit.request(msg);
    }
}

public class Alvin extends Transmit {
    public boolean request(String msg) {
        System.out.println("Alvin received the note");
        return nextTransmit.request(msg);
    }
}

public class Lucy extends Transmit {
    public boolean request(String msg) {
        System.out.println("Lucy received the note and handles it");
        return true;
    }
}

private static Transmit getTransmits() {
    Transmit zero = new Zero();
    Transmit alvin = new Alvin();
    Lucy lucy = new Lucy();
    zero.setNextTransmit(alvin);
    alvin.setNextTransmit(lucy);
    return zero;
}

public static void main(String[] args) {
    Transmit transmit = getTransmits();
    transmit.request("Hi, can you be my girlfriend?");
}

In the chain of responsibility pattern, each object holds a reference to its successor, and together they form a chain. A request travels along the chain until some object on the chain decides to handle it. The client does not know which object on the chain ultimately handles the request, and the chain can be reorganized and responsibilities reassigned dynamically without affecting the client. A handler has two choices: handle the request itself or pass it on to the next handler. A request may end up not being handled by any receiver at all.

Interceptor flow

The getResponseWithInterceptorChain() method in OkHttp goes through exactly this kind of process.

Requests are handed to the interceptors in the chain of responsibility. There are five interceptors by default (a sketch of how they are assembled follows the list):

  1. RetryAndFollowUpInterceptor

    The first to touch the request and the last to touch the response; responsible for deciding whether the whole request needs to be re-initiated

  2. BridgeInterceptor

    Completes the request (fills in header fields) and performs additional processing on the response

  3. CacheInterceptor

    Queries the cache before the request to see whether a cached response can be used, and decides whether the response needs to be cached

  4. ConnectInterceptor

    Establishes the TCP connection with the server

  5. CallServerInterceptor

    Communicates with the server; encapsulates the request data and parses the response data (e.g. the HTTP message)
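For reference, a simplified sketch of how the 3.x source assembles this list (the RealInterceptorChain constructor arguments are abbreviated here):

Response getResponseWithInterceptorChain() throws IOException {
    List<Interceptor> interceptors = new ArrayList<>();
    interceptors.addAll(client.interceptors());                      // user "application" interceptors
    interceptors.add(retryAndFollowUpInterceptor);                   // 1. retry and redirect
    interceptors.add(new BridgeInterceptor(client.cookieJar()));     // 2. bridge
    interceptors.add(new CacheInterceptor(client.internalCache()));  // 3. cache
    interceptors.add(new ConnectInterceptor(client));                // 4. connect
    if (!forWebSocket) {
        interceptors.addAll(client.networkInterceptors());           // user "network" interceptors
    }
    interceptors.add(new CallServerInterceptor(forWebSocket));       // 5. call server

    Interceptor.Chain chain = new RealInterceptorChain(
            interceptors, null, null, null, 0, originalRequest, /* ... */);
    return chain.proceed(originalRequest);
}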

Interceptor details

Retry and redirect interceptor

The first interceptor, RetryAndFollowUpInterceptor, mainly does two things: retry and redirection.

retry

If a RouteException or IOException occurs during the request phase, the request will be re-initiated.

RouteException

catch (RouteException e) {
	// todo: route exception; the connection failed and the request has not been sent yet
	if (!recover(e.getLastConnectException(), streamAllocation, false, request)) {
		throw e.getLastConnectException();
	}
	releaseConnection = false;
	continue;
}

IOException

catch (IOException e) {
	// todo: the request has been sent, but communication with the server failed
	// (the socket stream was disconnected while reading or writing data)
	// ConnectionShutdownException only exists for HTTP/2; assume it is false here
	boolean requestSendStarted = !(e instanceof ConnectionShutdownException);
	if (!recover(e, streamAllocation, requestSendStarted, request)) throw e;
	releaseConnection = false;
	continue;
}

Both exceptions decide whether to retry via the recover method; if it returns true, a retry is allowed.

private boolean recover(IOException e, StreamAllocation streamAllocation,
                            boolean requestSendStarted, Request userRequest) {
	streamAllocation.streamFailed(e);
	// todo 1: if the OkHttpClient was configured to forbid retries (allowed by default),
	// the request is not retried once it fails
	if (!client.retryOnConnectionFailure()) return false;
	// todo 2: since requestSendStarted is only true for HTTP/2 I/O exceptions, ignore HTTP/2
	if (requestSendStarted && userRequest.body() instanceof UnrepeatableRequestBody)
		return false;

	// todo 3: check whether the exception is one that allows a retry
	if (!isRecoverable(e, requestSendStarted)) return false;

	// todo 4: check whether there are more routes that can be used to connect
	if (!streamAllocation.hasMoreRoutes()) return false;

	// For failure recovery, use the same route selector with a new connection.
	return true;
}

So, as long as retries are not forbidden, when certain exceptions occur and more routes are available, OkHttp tries the request again on another route. Which exceptions qualify is decided in isRecoverable:

private boolean isRecoverable(IOException e, boolean requestSendStarted) {
	// A protocol exception occurred; cannot retry
	if (e instanceof ProtocolException) {
		return false;
	}

	// requestSendStarted is treated as always false here (ignoring HTTP/2)
	if (e instanceof InterruptedIOException) {
		return e instanceof SocketTimeoutException && !requestSendStarted;
	}

	// The SSL handshake failed because of a certificate problem; cannot retry
	if (e instanceof SSLHandshakeException) {
		if (e.getCause() instanceof CertificateException) {
			return false;
		}
	}

	// The SSL handshake could not be verified; cannot retry
	if (e instanceof SSLPeerUnverifiedException) {
		return false;
	}
	return true;
}

1. Protocol exception: if so, we decide directly that no retry is possible. (There is a problem with the request or with the server's response; the data does not follow the HTTP protocol, so retrying is pointless.)

2. Timeout exception: the socket may have timed out because of network fluctuation, so in that case there is no reason not to retry. (Routes are covered later.)

3. SSL certificate exception / SSL verification failure: the former means certificate verification failed, the latter may mean there is no certificate at all or the certificate data is wrong, so retrying cannot help.

If retries are still allowed after the exception check, one more check is made: whether there are still routes available to connect. Simply put, DNS may return several IP addresses when resolving a domain name; if one IP fails, another one can be tried, as illustrated below.
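Illustrative only (not part of the interceptor): OkHttp's Dns interface lets you observe or change how a host resolves to IP addresses; each address becomes a candidate route the retry logic can fall back to.

// Hypothetical logging Dns wrapper around the system resolver
OkHttpClient client = new OkHttpClient.Builder()
        .dns(new Dns() {
            @Override public List<InetAddress> lookup(String hostname) throws UnknownHostException {
                List<InetAddress> addresses = Dns.SYSTEM.lookup(hostname);
                System.out.println(hostname + " -> " + addresses.size() + " candidate addresses");
                return addresses;
            }
        })
        .build();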

redirect

If no exception occurs by the end of the request, that does not mean the current response is the one to hand back to the user; we still need to decide whether a redirect is required. The redirect decision is made in the followUpRequest method:

private Request followUpRequest(Response userResponse) throws IOException {
	if (userResponse == null) throw new IllegalStateException();
	Connection connection = streamAllocation.connection();
	Route route = connection != null
        ? connection.route()
        : null;
    int responseCode = userResponse.code();

    final String method = userResponse.request().method();
    switch (responseCode) {
      // 407: the client is using an HTTP proxy; add Proxy-Authorization to the request header
      // for proxy authorization
      case HTTP_PROXY_AUTH:
        Proxy selectedProxy = route != null
            ? route.proxy()
            : client.proxy();
        if (selectedProxy.type() != Proxy.Type.HTTP) {
          throw new ProtocolException("Received HTTP_PROXY_AUTH (407) code while not using proxy");
        }
        return client.proxyAuthenticator().authenticate(route, userResponse);
      // 401: add Authorization to the request header
      case HTTP_UNAUTHORIZED:
        return client.authenticator().authenticate(route, userResponse);
      // 308 permanent redirect
      // 307 temporary redirect
      case HTTP_PERM_REDIRECT:
      case HTTP_TEMP_REDIRECT:
        // The framework will not automatically redirect the request if it is not a GET or HEAD request
        if (!method.equals("GET") && !method.equals("HEAD")) {
          return null;
        }
      // 300 301 302 303
      case HTTP_MULT_CHOICE:
      case HTTP_MOVED_PERM:
      case HTTP_MOVED_TEMP:
      case HTTP_SEE_OTHER:
        // If the user does not allow redirects, return null
        if (!client.followRedirects()) return null;
        // Retrieve Location from the response header
        String location = userResponse.header("Location");
        if (location == null) return null;
        // Build the new request URL from Location
        HttpUrl url = userResponse.request().url().resolve(location);
        // If HttpUrl is null the protocol is unsupported and the redirect cannot be followed
        if (url == null) return null;
        // If the redirect switches between HTTP and HTTPS, check whether the user allows it (allowed by default)
        boolean sameScheme = url.scheme().equals(userResponse.request().url().scheme());
        if (!sameScheme && !client.followSslRedirects()) return null;

        Request.Builder requestBuilder = userResponse.request().newBuilder();
        /**
         * When redirecting, any request that is not a PROPFIND request, whether POST or another
         * method, is changed to a GET request; in other words only PROPFIND redirects keep a
         * request body
         */
        // Requests other than GET and HEAD
        if (HttpMethod.permitsRequestBody(method)) {
          final boolean maintainBody = HttpMethod.redirectsWithBody(method);
          // All requests except PROPFIND are changed to GET requests
          if (HttpMethod.redirectsToGet(method)) {
            requestBuilder.method("GET", null);
          } else {
            RequestBody requestBody = maintainBody ? userResponse.request().body() : null;
            requestBuilder.method(method, requestBody);
          }
          // For requests other than PROPFIND, remove the headers that describe the request body
          if (!maintainBody) {
            requestBuilder.removeHeader("Transfer-Encoding");
            requestBuilder.removeHeader("Content-Length");
            requestBuilder.removeHeader("Content-Type");
          }
        }
        // When redirecting across hosts, remove the authentication request header
        if (!sameConnection(userResponse, url)) {
          requestBuilder.removeHeader("Authorization");
        }

        return requestBuilder.url(url).build();

      // 408: the client request timed out
      case HTTP_CLIENT_TIMEOUT:
        // 408 means the connection failed, so check whether the user allows retries
        if (!client.retryOnConnectionFailure()) {
          return null;
        }
        // UnrepeatableRequestBody is not actually used anywhere else
        if (userResponse.request().body() instanceof UnrepeatableRequestBody) {
          return null;
        }
        // If this response is already the result of a retry and the previous request was also
        // retried because of a 408, do not retry again
        if (userResponse.priorResponse() != null
                        && userResponse.priorResponse().code() == HTTP_CLIENT_TIMEOUT) {
          return null;
        }
        // If the server tells us how long to wait before retrying (Retry-After), the framework does not retry
        if (retryAfter(userResponse, 0) > 0) {
          return null;
        }
        return userResponse.request();

      // 503 Service Unavailable is similar to 408, but the request is only retried if the server says Retry-After: 0
      case HTTP_UNAVAILABLE:
        if (userResponse.priorResponse() != null
                        && userResponse.priorResponse().code() == HTTP_UNAVAILABLE) {
          return null;
        }

        if (retryAfter(userResponse, Integer.MAX_VALUE) == 0) {
          return userResponse.request();
        }

        return null;

      default:
        return null;
    }
}

The whole redirect decision involves a lot of cases, which is normal; the key is to understand what they mean. If this method returns null, no redirect is needed and the current response is handed back. If the follow-up request is not null, that returned Request is executed again, but note that the interceptor allows at most 20 follow-ups.
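In the 3.x source this limit is a constant in RetryAndFollowUpInterceptor, checked each time a follow-up request is produced (paraphrased sketch):

private static final int MAX_FOLLOW_UPS = 20;
// ...
if (++followUpCount > MAX_FOLLOW_UPS) {
    streamAllocation.release();
    throw new ProtocolException("Too many follow-up requests: " + followUpCount);
}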

conclusion

This interceptor is the first in the chain of responsibility, meaning it is the first role to touch the Request and the last to receive the Response. Its main job is to decide whether a retry or a redirect is needed.

A retry only happens when a RouteException or IOException occurs. Whenever one of these two exceptions is thrown while the later interceptors execute, the recover method decides whether the connection may be retried.

The redirect decision happens after the retry decision. If the retry conditions are not met, followUpRequest is called to inspect the response code of the Response (of course, if the request failed outright there is no Response and an exception is thrown instead). A follow-up can happen at most 20 times.

Bridge interceptor

BridgeInterceptor is the bridge between the application and the server. Requests we build are processed by it before being sent to the server, for example by setting the request body length, the encoding, gzip compression and cookies, and after receiving the response it saves cookies and so on. This interceptor is relatively simple.

It completes the request headers:

Request header                       Description
Content-Type                         Type of the request body, e.g. application/x-www-form-urlencoded
Content-Length / Transfer-Encoding   How the length of the request body is conveyed
Host                                 The host being requested
Connection: Keep-Alive               Keep the connection alive
Accept-Encoding: gzip                The client accepts gzip-compressed responses
Cookie                               The cookies for this host
User-Agent                           Information about the requesting client, such as the operating system and browser

After completing the request headers and handing the request to the next interceptor, it gets the response back and does two things:

1. Save cookies; on the next request the corresponding values are read and set into the request header (the default CookieJar provides no implementation)

2. If the returned data is gzip-compressed, wrap it in a GzipSource for transparent decompression (sketched below)
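A condensed, paraphrased sketch of the gzip handling; exact variable names and the response-body constructor differ between 3.x point releases:

if (transparentGzip
        && "gzip".equalsIgnoreCase(networkResponse.header("Content-Encoding"))
        && HttpHeaders.hasBody(networkResponse)) {
    // Wrap the raw body in Okio's GzipSource so callers read decompressed bytes
    GzipSource gzipSource = new GzipSource(networkResponse.body().source());
    // The encoding/length headers no longer describe what the caller will see
    Headers strippedHeaders = networkResponse.headers().newBuilder()
            .removeAll("Content-Encoding")
            .removeAll("Content-Length")
            .build();
    responseBuilder.headers(strippedHeaders);
    // ... a buffered gzipSource is then used as the body of the Response handed back up the chain
}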

conclusion

The execution logic of the bridge interceptor focuses on the following points

It adds to or removes from the headers of the Request the user built, turning it into a Request that can actually be sent over the network; it passes that spec-compliant Request to the next interceptor and obtains the Response; if the Response body is gzip-compressed it decompresses it; finally it builds a Response usable by the caller and returns it.

Cache interceptor

The CacheInterceptor decides whether the cache is hit before the request is sent. If it is, the cached response can be used without making the request. (Only GET requests are cached.)
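Note that OkHttp has no response cache unless the user configures one; a minimal setup might look like this (directory and size are placeholder values):

// Hypothetical cache directory and a 10 MiB size
File cacheDir = new File(System.getProperty("java.io.tmpdir"), "okhttp_cache");
Cache cache = new Cache(cacheDir, 10L * 1024 * 1024);
OkHttpClient client = new OkHttpClient.Builder()
        .cache(cache)
        .build();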

Steps as follows:

1. Get the cached response corresponding to this request from the cache

2. Create a CacheStrategy, which decides whether the cache can be used. CacheStrategy has two members, networkRequest and cacheResponse, and their combinations mean the following:

networkRequest    cacheResponse    Description
Null              Not Null         Use the cache directly
Not Null          Null             Make a request to the server
Null              Null             Neither is usable; OkHttp directly returns 504
Not Null          Not Null         Make the request; if the response is 304 (not modified), update the cached response and return it

3. Hand off to the next interceptor in the chain to continue processing

4. Afterwards: if the server returned 304, the cached response is returned; otherwise the network response is used and written to the cache (only responses to GET requests are cached)

The cache interceptor's own work sounds simple, but the implementation has a lot to handle. Whether the cache can be used, or whether the server must be asked instead, is decided by CacheStrategy.

Caching strategies

CacheStrategy. First, you need to know a few request and response headers:

Response header    Description                                                             Example
Date               The time the message was sent                                           Date: Sat, 18 Nov 2028 06:17:41 GMT
Expires            The time the resource expires                                           Expires: Sat, 18 Nov 2028 06:17:41 GMT
Last-Modified      The time the resource was last modified                                 Last-Modified: Fri, 22 Jul 2016 02:57:17 GMT
ETag               The unique identifier of the resource on the server                     ETag: "16df0-5383097a03d40"
Age                How long (in seconds) the cached response has existed on the server     Age: 3825683
Cache-Control      Cache directives (see below)

Request header     Description                                                             Example
If-Modified-Since  If the server has not modified the resource after the given time, it returns 304 (not modified)     If-Modified-Since: Fri, 22 Jul 2016 02:57:17 GMT
If-None-Match      Compared with the ETag the server assigned to the requested resource; 304 is returned on a match    If-None-Match: "16df0-5383097a03d40"
Cache-Control      Cache directives (see below)

Cache-Control can appear in both the request header and the response header, and its value can be a combination of several directives:

  1. max-age=[s]: the maximum time the resource stays valid;
  2. public: the resource may be cached by any party, e.g. the client or a proxy server;
  3. private: the resource may only be cached by a single user; this is the default;
  4. no-store: the resource must not be cached;
  5. no-cache: (request) do not use the cache;
  6. immutable: (response) the resource never changes;
  7. min-fresh=[s]: the minimum remaining freshness (how long the user expects the cache to stay valid);
  8. must-revalidate: (response) an expired cache must not be used;
  9. max-stale=[s]: how long the cache may still be used after it expires

Suppose max-age=100 and min-fresh=20. That means the user regards the cached response as usable for 100-20 = 80s from the time the server created it. But if max-stale=100 is also present, the cache is still allowed to be used for 100s after those 80s, which can be viewed as an effective cache lifetime of 180s.
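A sketch of setting these directives from the request side with the real CacheControl.Builder; the URL is a placeholder and the numbers mirror the example above.

Request request = new Request.Builder()
        .url("https://example.com/api")               // placeholder URL
        .cacheControl(new CacheControl.Builder()
                .maxAge(100, TimeUnit.SECONDS)    // accept a cache up to 100s old
                .minFresh(20, TimeUnit.SECONDS)   // ...that will stay fresh for another 20s
                .maxStale(100, TimeUnit.SECONDS)  // tolerate up to 100s past expiry
                .build())
        .build();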

Detailed process

If a cached Response for the request URL is found in the cache, the data above is first extracted from it and kept:

public Factory(long nowMillis, Request request, Response cacheResponse) {
            this.nowMillis = nowMillis;
            this.request = request;
            this.cacheResponse = cacheResponse;

            if (cacheResponse != null) {
                // The local time when the corresponding request was sent and the local time when the response was received
                this.sentRequestMillis = cacheResponse.sentRequestAtMillis();
                this.receivedResponseMillis = cacheResponse.receivedResponseAtMillis();
                Headers headers = cacheResponse.headers();
                for (int i = 0, size = headers.size(); i < size; i++) {
                    String fieldName = headers.name(i);
                    String value = headers.value(i);
                    if ("Date".equalsIgnoreCase(fieldName)) {
                        servedDate = HttpDate.parse(value);
                        servedDateString = value;
                    } else if ("Expires".equalsIgnoreCase(fieldName)) {
                        expires = HttpDate.parse(value);
                    } else if ("Last-Modified".equalsIgnoreCase(fieldName)) {
                        lastModified = HttpDate.parse(value);
                        lastModifiedString = value;
                    } else if ("ETag".equalsIgnoreCase(fieldName)) {
                        etag = value;
                    } else if ("Age".equalsIgnoreCase(fieldName)) {
                        ageSeconds = HttpHeaders.parseSeconds(value, -1);
                    }
                }
            }
}

The get() method is then used to determine whether the cache is hit:

public CacheStrategy get() {
	CacheStrategy candidate = getCandidate();
	// todo: if the cache can be used, networkRequest must be null; a non-null networkRequest
	// combined with only-if-cached means neither is usable (the interceptor returns 504)
	if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
		// We're forbidden from using the network and the cache is insufficient.
		return new CacheStrategy(null, null);
	}
	return candidate;
}

The getCandidate() method is called to do the actual cache determination.

1. Whether the cache exists

The first judgment in the whole method is whether the cache exists:

if (cacheResponse == null) {
	return new CacheStrategy(request, null);
}

cacheResponse is the response found in the cache. If it is null, no cache was found, and the CacheStrategy created contains only a networkRequest, meaning a network request must be made.

2. Caching HTTPS requests

Getting past that point means cacheResponse must exist, but it is not necessarily usable; a series of validity checks follows.

if (request.isHttps() && cacheResponse.handshake() == null) {
	return new CacheStrategy(request, null);
}

If the request is HTTPS and there is no handshake in the cache, the cache is invalid.

3. Response code and response header
if (!isCacheable(cacheResponse, request)) {
	return new CacheStrategy(request, null);
}

The whole logic is in isCacheable, which says:

public static boolean isCacheable(Response response, Request request) {
        // Always go to network for uncacheable response codes (RFC 7231 Section 6.1),
        // This implementation doesn't support caching partial content.
        switch (response.code()) {
            case HTTP_OK:
            case HTTP_NOT_AUTHORITATIVE:
            case HTTP_NO_CONTENT:
            case HTTP_MULT_CHOICE:
            case HTTP_MOVED_PERM:
            case HTTP_NOT_FOUND:
            case HTTP_BAD_METHOD:
            case HTTP_GONE:
            case HTTP_REQ_TOO_LONG:
            case HTTP_NOT_IMPLEMENTED:
            case StatusLine.HTTP_PERM_REDIRECT:
                // These codes can be cached unless headers forbid it.
                break;

            case HTTP_MOVED_TEMP:
            case StatusLine.HTTP_TEMP_REDIRECT:
                // These codes can only be cached with the right response headers.
                // http://tools.ietf.org/html/rfc7234#section-3
                // s-maxage is not checked because OkHttp is a private cache that should ignore
                // s-maxage.
                if (response.header("Expires") != null
                        || response.cacheControl().maxAgeSeconds() != -1
                        || response.cacheControl().isPublic()
                        || response.cacheControl().isPrivate()) {
                    break;
                }
                // Fall-through.
            default:
                // All other codes cannot be cached.
                return false;
        }

        // A 'no-store' directive on request or response prevents the response from being cached.
        return !response.cacheControl().noStore() && !request.cacheControl().noStore();
}

Response codes 200, 203, 204, 300, 301, 404, 405, 410, 414, 501 and 308 may be cached unless headers forbid it. Cache-Control: no-store means the resource must not be cached, so if the server sent this response header the conclusion is the same as in the previous two checks: the cache cannot be used. Otherwise, we continue checking whether the cache is usable.

If the response code is 302/307 (a redirect), we need to check further whether a cache-permitting response header is present. According to the document referenced in the comment, http://tools.ietf.org/html/rfc7234#section-3, the response may be cached if it carries an Expires header or a Cache-Control value of:

  1. max-age=[s]: the maximum valid time of the resource;

  2. public: the resource may be cached by any party, such as the client or a proxy server;

  3. private: the resource may only be cached by a single user; this is the default.

Provided there is no Cache-Control: no-store, we can then continue to check whether the cache is usable.

Therefore, in general, the priority of the checks is:

1. If the response code is not one of 200, 203, 204, 300, 301, 404, 405, 410, 414, 501, 308, 302 or 307, the cache is unusable;

2. If the response code is 302 or 307 and the cache-permitting response headers are missing, the cache is unusable;

3. If the Cache-Control: no-store response header is present, the cache is unusable.

If the cached response survives these checks, its validity is examined next.

4. The user's request configuration
CacheControl requestCaching = request.cacheControl();
if (requestCaching.noCache() || hasConditions(request)) {
	return new CacheStrategy(request, null);
}
private static boolean hasConditions(Request request) {
	return request.header("If-Modified-Since") != null || request.header("If-None-Match") != null;
}

If the user explicitly set Cache-Control on the request, OkHttp checks whether the conditional request was initiated by the user. If the request carries a no-cache header, or contains If-Modified-Since or If-None-Match, the cache is not allowed to be used:

Request header               Description
Cache-Control: no-cache      Ignore the cache
If-Modified-Since: <time>    Usually the Date or Last-Modified of the cached response; the server returns 304 (not modified) if it has not changed the resource since that time
If-None-Match: <tag>         Usually the ETag of the cached response, compared with the ETag of the requested resource; 304 is returned if they match

This means that if the user's request headers contain these fields, a request must be made to the server. Note, however, that OkHttp does not cache 304 responses. Because the user initiated this conditional request themselves, the server's 304 response is returned to the user as-is: "since you asked, I will just tell you the result of this request". If these headers are not present, the validity checks continue.

5. Whether the resource never changes
CacheControl responseCaching = cacheResponse.cacheControl();
if (responseCaching.immutable()) {
	return new CacheStrategy(null, cacheResponse);
}

If the cached response contains Cache-Control: immutable, the resource for this request never changes and the cache can be used directly. Otherwise, continue checking whether the cache is usable.

6. Cache validity of the response

This step further determines whether the cache is valid based on some information in the cached response. If:

cache lifetime < cache freshness - minimum cache freshness + allowed staleness after expiry

then the cache is usable. Freshness can be understood as the valid lifetime, and "cache freshness - minimum cache freshness" represents the actual usable lifetime of the cache.

// 6.1 How long the cached response has existed since it was created
long ageMillis = cacheResponseAge();

// 6.2 The valid lifetime (freshness) of this response
long freshMillis = computeFreshnessLifetime();
if (requestCaching.maxAgeSeconds() != -1) {
	// todo: if the request specifies max-age, the smaller of the two valid lifetimes is used
	freshMillis = Math.min(freshMillis, SECONDS.toMillis(requestCaching.maxAgeSeconds()));
}

// 6.3 Cache-Control: min-fresh=[s] in the request: the cache may only be used while it will
// still be fresh for the given time (how long the request considers the cache valid)
long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
	minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}

// 6.4
// 6.4.1 Cache-Control: must-revalidate in the response: an expired cache must not be used
// 6.4.2 Cache-Control: max-stale=[s] in the request: the cache may still be used for the given
//       time after it expires
// The former overrides the latter, so max-stale from the request header is only taken when
// the response does not require revalidation with the server
long maxStaleMillis = 0;
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
	maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}

// 6.5 No need to revalidate with the server && the time the response has existed plus the time
// the request considers the cache valid is less than the cache validity period plus the time it
// may still be used after expiry => the cache is allowed to be used
if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
	Response.Builder builder = cacheResponse.newBuilder();
	// todo: if it has expired but is within the allowed staleness, it can still be used,
	// with the corresponding warning header added
	if (ageMillis + minFreshMillis >= freshMillis) {
		builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
	}
	// todo: a warning is also added if the cache is older than one day and the response sets no expiry time
	long oneDayMillis = 24 * 60 * 60 * 1000L;
	if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
		builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
	}
	return new CacheStrategy(null, builder.build());
}

6.1 Cache survival time: ageMillis

First, the cacheResponseAge() method computes how long the response has existed:

long ageMillis = cacheResponseAge();

private long cacheResponseAge() {
	long apparentReceivedAge = servedDate != null
                    ? Math.max(0, receivedResponseMillis - servedDate.getTime())
                    : 0;
	long receivedAge = ageSeconds != -1
                    ? Math.max(apparentReceivedAge, SECONDS.toMillis(ageSeconds))
                    : apparentReceivedAge;
	long responseDuration = receivedResponseMillis - sentRequestMillis;
	long residentDuration = nowMillis - receivedResponseMillis;
	return receivedAge + responseDuration + residentDuration;
}

1. apparentReceivedAge represents how old the response apparently already was when the client received it: the gap between the time the client received the response and the time the server sent it.

servedDate is the time parsed from the Date response header of the cached response (when the server sent it); receivedResponseMillis is the local time at which the client received the response.

2. receivedAge represents how old the cached response was at the moment the client received it.

ageSeconds is the number of seconds from the Age response header of the cached response (the locally cached response may itself have been served from the server's cache; Age says how long it had existed there).

receivedAge is the larger of SECONDS.toMillis(ageSeconds) and apparentReceivedAge: by the time the response was received, the response data had already existed for this long.

Suppose the server holds a cached response with Date: 0 o'clock when we make the request. The client actually sends the request one hour later, and the server inserts Age: 1 hour into the cached response before returning it. The receivedAge computed by the client is then 1 hour, which represents how old the cached response already was when it was received (not how old it is at the time of the current request).

3. responseDuration is the time between sending the cached request and receiving its response.

4. residentDuration is the time between when this cached response was received and now.

receivedAge + responseDuration + residentDuration

That is: how old the cache already was when the client received it, plus the time the request itself took, plus the time that has passed since the response was received, equals how long the cache has existed.

6.2. Cache Freshness (valid time) : freshMillis

long freshMillis = computeFreshnessLifetime();

private long computeFreshnessLifetime() {
	CacheControl responseCaching = cacheResponse.cacheControl();

	if (responseCaching.maxAgeSeconds() != -1) {
		return SECONDS.toMillis(responseCaching.maxAgeSeconds());
	} else if (expires != null) {
		long servedMillis = servedDate != null ? servedDate.getTime() : receivedResponseMillis;
		long delta = expires.getTime() - servedMillis;
		return delta > 0 ? delta : 0;
	} else if (lastModified != null && cacheResponse.request().url().query() == null) {
		// As recommended by the HTTP RFC and implemented in Firefox, the
		// max age of a document should be defaulted to 10% of the
		// document's age at the time it was served. Default expiration
		// dates aren't used for URIs containing a query.
		long servedMillis = servedDate != null ? servedDate.getTime() : sentRequestMillis;
		long delta = servedMillis - lastModified.getTime();
		return delta > 0 ? (delta / 10) : 0;
	}
	return 0;
}

Cache freshness (validity period) can be determined in several ways, in order of priority as follows:

1. The cached response contains Cache-Control: max-age=[s], the maximum valid time of the resource.

2. The cached response contains Expires: <time>; the validity period is computed from the Date header (or, if absent, the time the response was received).

3. The cached response contains Last-Modified: <time>; the validity period is the Date header (or, if absent, the time the request was sent) minus Last-Modified, and, following the recommendation in the HTTP RFC as implemented in Firefox, 10% of that result is used as the resource's validity period.

6.3. Minimum cache freshness: minFreshMillis

long minFreshMillis = 0;
if (requestCaching.minFreshSeconds() != -1) {
	minFreshMillis = SECONDS.toMillis(requestCaching.minFreshSeconds());
}

If the user's request header contains Cache-Control: min-fresh=[s], it states how long the user expects the cache to remain fresh. If the cache freshness is 100 milliseconds and the minimum required freshness is 10 milliseconds, the cache is actually usable for only 90 milliseconds.

6.4. Validity period of cache after expiration: maxStaleMillis

long maxStaleMillis = 0;
if (!responseCaching.mustRevalidate() && requestCaching.maxStaleSeconds() != -1) {
	maxStaleMillis = SECONDS.toMillis(requestCaching.maxStaleSeconds());
}

The precondition of this check is that the cached response does not contain Cache-Control: must-revalidate (an expired cache must not be used without revalidation); only then does the Cache-Control: max-stale=[s] directive in the user's request take effect, stating how long the cache may still be used after it expires.

6.5 Determine whether the cache is valid

if (!responseCaching.noCache() && ageMillis + minFreshMillis < freshMillis + maxStaleMillis) {
	Response.Builder builder = cacheResponse.newBuilder();
	// todo: if it has expired but is within the allowed staleness, it can still be used,
	// with the corresponding warning header added
	if (ageMillis + minFreshMillis >= freshMillis) {
		builder.addHeader("Warning", "110 HttpURLConnection \"Response is stale\"");
	}
	// todo: a warning is also added if the cache is older than one day and the response sets no expiry time
	long oneDayMillis = 24 * 60 * 60 * 1000L;
	if (ageMillis > oneDayMillis && isFreshnessLifetimeHeuristic()) {
		builder.addHeader("Warning", "113 HttpURLConnection \"Heuristic expiration\"");
	}
	return new CacheStrategy(null, builder.build());
}

Finally, using the values computed in the previous four steps: as long as the cached response does not specify no-cache, the cache may be used when

cache lifetime + minimum cache freshness < cache freshness + allowed staleness after expiry.

Suppose the cache has existed for 100 milliseconds so far, the user requires a minimum freshness of 10 milliseconds, the cache freshness is 100 milliseconds, and the cache may be used for 0 milliseconds after expiry. Under these conditions the cache is actually usable for 100 - 10 = 90 milliseconds, and it has already existed longer than that, so it cannot be used.

The inequality can be rearranged as: cache lifetime < cache freshness - minimum cache freshness + allowed staleness after expiry, i.e. cache lifetime < actual usable lifetime + allowed staleness after expiry.

In general, caches are used as long as they are not ignored and have not expired.

7. Cache expiration processing

String conditionName;
String conditionValue;
if (etag != null) {
	conditionName = "If-None-Match";
	conditionValue = etag;
} else if (lastModified != null) {
	conditionName = "If-Modified-Since";
	conditionValue = lastModifiedString;
} else if (servedDate != null) {
	conditionName = "If-Modified-Since";
	conditionValue = servedDateString;
} else {
	// Nothing to compare against on the server, so just make a regular request
	return new CacheStrategy(request, null); // No condition! Make a regular request.
}

// Add the conditional request header
Headers.Builder conditionalRequestHeaders = request.headers().newBuilder();
Internal.instance.addLenient(conditionalRequestHeaders, conditionName, conditionValue);
Request conditionalRequest = request.newBuilder()
	.headers(conditionalRequestHeaders.build())
	.build();
return new CacheStrategy(conditionalRequest, cacheResponse);

Reaching this point means the cache has expired and cannot be used directly. If the cached response contains an ETag, it is sent to the server via If-None-Match for validation; if it contains Last-Modified or Date, that time is sent via If-Modified-Since. The server checks these headers and returns 304 if the resource has not been modified.

Since this request is being made because the cache expired (unlike the fourth check, where the user set the conditional headers themselves), if the server returns 304 the framework automatically updates the cache. That is why the CacheStrategy here contains both a networkRequest and a cacheResponse.

8. Wrapping up

At this point the cache decision is complete, and the interceptor only needs to look at the combination of networkRequest and cacheResponse in the CacheStrategy to decide whether the cache may be used.

Note, however, that if the user configured onlyIfCached when creating the request, they expect the response to come only from the cache and no request to be made to the server. If the generated CacheStrategy contains a networkRequest, a request would definitely be made, which is a conflict. In that case the interceptor is simply handed an object with neither networkRequest nor cacheResponse, and it returns 504 to the user!

// In CacheStrategy's get method
if (candidate.networkRequest != null && request.cacheControl().onlyIfCached()) {
	// We're forbidden from using the network and the cache is insufficient.
	return new CacheStrategy(null, null);
}

// In the cache interceptor
if (networkRequest == null && cacheResponse == null) {
	return new Response.Builder()
                    .request(chain.request())
                    .protocol(Protocol.HTTP_1_1)
                    .code(504)
                    .message("Unsatisfiable Request (only-if-cached)")
                    .body(Util.EMPTY_RESPONSE)
                    .sentRequestAtMillis(-1L)
                    .receivedResponseAtMillis(System.currentTimeMillis())
                    .build();
}
9. Summary

1. If the Response obtained from the cache is null, a network request must be made to obtain the Response.
2. If it is an HTTPS request but the handshake information is missing, the cache cannot be used and a network request must be made.
3. If the response code says the response cannot be cached, or the response headers contain the no-store flag, a network request must be made.
4. If the request headers contain the no-cache flag or If-Modified-Since / If-None-Match, a network request must be made.
5. If the response headers contain no no-cache flag and the cache age has not exceeded the allowed time, the cache can be used and no network request is needed.
6. If the cache has expired, check whether the response set ETag / Last-Modified / Date; if none of them is present, a network request is made directly.

And whenever a network request has to be made, the request headers must not contain only-if-cached, otherwise the framework returns 504!
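A sketch of the cache-only case from the caller's side, using the built-in CacheControl.FORCE_CACHE constant (it sets only-if-cached plus an unlimited max-stale); if nothing usable is cached, the cache interceptor answers with the 504 built above. The URL is a placeholder.

Request cacheOnly = new Request.Builder()
        .url("https://example.com/api")        // placeholder URL
        .cacheControl(CacheControl.FORCE_CACHE)
        .build();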

The main logic of the cache interceptor lives in the cache strategy; the interceptor itself is very simple. If it decides that a network request is needed, execution moves on to the next interceptor: the ConnectInterceptor.

Connect interceptor

The ConnectInterceptor opens a connection to the target server and then invokes the next interceptor. It is short enough to be posted here in full:

public final class ConnectInterceptor implements Interceptor {
  public final OkHttpClient client;

  public ConnectInterceptor(OkHttpClient client) {
    this.client = client;
  }

  @Override public Response intercept(Chain chain) throws IOException {
    RealInterceptorChain realChain = (RealInterceptorChain) chain;
    Request request = realChain.request();
    StreamAllocation streamAllocation = realChain.streamAllocation();

    // We need the network to satisfy this request. Possibly for validating a conditional GET.
    boolean doExtensiveHealthChecks = !request.method().equals("GET");
    HttpCodec httpCodec = streamAllocation.newStream(client, chain, doExtensiveHealthChecks);
    RealConnection connection = streamAllocation.connection();

    return realChain.proceed(request, streamAllocation, httpCodec, connection);
  }
}

Although the amount of code is small, most of the functionality is wrapped up in other classes; this interceptor just calls them.

The StreamAllocation object we see first was actually created in the first interceptor, the retry-and-redirect interceptor, but it is only really used here.

"When a request is made a connection must be established, and once the connection is established streams are needed to read and write data." StreamAllocation coordinates the relationship between requests, connections and data streams; it is responsible for finding a connection for a request and obtaining a stream for network communication.

The newStream method finds or establishes a valid connection to the requested host. The returned HttpCodec wraps the input and output streams and encapsulates the encoding and decoding of HTTP messages.

StreamAllocation maintains the connections: RealConnection, which wraps the Socket, together with a pool of connections. For a RealConnection to be reusable, isEligible must hold:

public boolean isEligible(Address address, @Nullable Route route) {
    // If this connection is not accepting new streams, we're done.
    if (allocations.size() >= allocationLimit || noNewStreams) return false;

    // If the non-host fields of the address don't overlap, we're done.
    if (!Internal.instance.equalsNonHost(this.route.address(), address)) return false;

    // If the host exactly matches, we're done: this connection can carry the address.
    if (address.url().host().equals(this.route().address().url().host())) {
      return true; // This connection is a perfect match.
    }

    // At this point we don't have a hostname match. But we still be able to carry the request if
    // our connection coalescing requirements are met. See also:
    // https://hpbn.co/optimizing-application-delivery/#eliminate-domain-sharding
    // https://daniel.haxx.se/blog/2016/08/18/http2-connection-coalescing/

    // 1. This connection must be HTTP/2.
    if (http2Connection == null) return false;

    // 2. The routes must share an IP address. This requires us to have a DNS address for both
    // hosts, which only happens after route planning. We can't coalesce connections that use a
    // proxy, since proxies don't tell us the origin server's IP address.
    if (route == null) return false;
    if (route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (this.route.proxy().type() != Proxy.Type.DIRECT) return false;
    if (!this.route.socketAddress().equals(route.socketAddress())) return false;

    // 3. This connection's server certificate's must cover the new host.
    if (route.address().hostnameVerifier() != OkHostnameVerifier.INSTANCE) return false;
    if (!supportsUrl(address.url())) return false;

    // 4. Certificate pinning must match the host.
    try {
      address.certificatePinner().check(address.url().host(), handshake().peerCertificates());
    } catch (SSLPeerUnverifiedException e) {
      return false;
    }

    return true; // The caller's address can be carried by this connection.
  }
Copy the code

1. if (allocations.size() >= allocationLimit || noNewStreams) return false;

The connection has reached its maximum number of concurrent streams, or it no longer allows new streams to be created. For example, an HTTP/1.x connection that is already in use cannot be shared by others (maximum concurrent streams: 1), or the connection has been closed; in either case reuse is not allowed.

2.

if (!Internal.instance.equalsNonHost(this.route.address(), address)) return false;
if (address.url().host().equals(this.route().address().url().host())) {
      return true; // This connection is a perfect match.
}
Copy the code

These two checks require that the DNS, proxy, SSL configuration (certificates/hostname verifier), server domain name and port are all the same; if the host also matches, the connection is a perfect match and can be reused directly.

If the host does not match exactly, reuse may still be possible in certain HTTP/2 connection-coalescing scenarios, which are not covered here.

In summary, if a connection is found in the pool with the same connection parameters and is not closed and not occupied, it can be reused.
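Connection reuse is also bounded by how the pool itself is configured. A minimal sketch of supplying the pool on the client (the values mirror OkHttp's defaults: 5 idle connections, 5-minute keep-alive):

import java.util.concurrent.TimeUnit;
import okhttp3.ConnectionPool;
import okhttp3.OkHttpClient;

public class PoolConfigDemo {
  public static void main(String[] args) {
    // Up to 5 idle connections are kept alive for 5 minutes and can be
    // handed out again whenever isEligible() accepts them for a new request.
    ConnectionPool pool = new ConnectionPool(5, 5, TimeUnit.MINUTES);

    OkHttpClient client = new OkHttpClient.Builder()
        .connectionPool(pool)
        .build();

    System.out.println("client configured with a custom connection pool: " + client);
  }
}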

Conclusion

All implementations in this interceptor are aimed at obtaining a connection to the target server on which HTTP data is sent and received.

Request server interceptor

The CallServerInterceptor uses HttpCodec to send a request to the server and parse a Response.

It first calls httpCodec.writeRequestHeaders(request), which writes the request headers into the buffer (they are not actually sent to the server until flushRequest() is called), and then immediately makes the first logical judgment:

Response.Builder responseBuilder = null;
if (HttpMethod.permitsRequestBody(request.method()) && request.body() != null) {
// If there's a "Expect: 100-continue" header on the request, wait for a "HTTP/1.1 100
// Continue" response before transmitting the request body. If we don't get that, return
// what we did get (such as a 4xx response) without ever transmitting the request body.
	if ("100-continue".equalsIgnoreCase(request.header("Expect"))) {
		httpCodec.flushRequest();
		realChain.eventListener().responseHeadersStart(realChain.call());
		responseBuilder = httpCodec.readResponseHeaders(true);
	}
	if (responseBuilder == null) {
		// Write the request body if the "Expect: 100-continue" expectation was met.
		realChain.eventListener().requestBodyStart(realChain.call());
		long contentLength = request.body().contentLength();
		CountingSink requestBodyOut =
                        new CountingSink(httpCodec.createRequestBody(request, contentLength));
		BufferedSink bufferedRequestBody = Okio.buffer(requestBodyOut);

		request.body().writeTo(bufferedRequestBody);
		bufferedRequestBody.close();
		realChain.eventListener().requestBodyEnd(realChain.call(),requestBodyOut.successfulCount);
	} else if (!connection.isMultiplexed()) { // HTTP/2 multiplexing: no need to close the socket, skip this!
		// If the "Expect: 100-continue" expectation wasn't met, prevent the HTTP/1 connection
		// from being reused. Otherwise we're still obligated to transmit the request body to
		// leave the connection in a consistent state.
		streamAllocation.noNewStreams();
	}
}
httpCodec.finishRequest();
Copy the code

The entire if block revolves around a single request header: Expect: 100-continue. This header means the client wants the server to confirm that it is willing to accept the request body before the body is actually sent. permitsRequestBody checks whether the request method carries a body (e.g. POST). If the Expect header is present, the headers are flushed and the response headers are read; only if the server answers 100, accepting the request body, is the remaining request data (the body) sent.

But if the server does not agree to accept the request body, then we need to flag that the connection cannot be reused and call noNewStreams() to close the relevant Socket.
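For reference, this is how a caller would opt into this handshake; a hedged sketch in which the URL and body are placeholders:

import java.io.IOException;
import okhttp3.MediaType;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class ExpectContinueDemo {
  public static void main(String[] args) throws IOException {
    OkHttpClient client = new OkHttpClient();

    RequestBody body = RequestBody.create(
        MediaType.parse("application/json; charset=utf-8"),
        "{\"payload\":\"...\"}"); // placeholder body

    // With this header, CallServerInterceptor flushes the headers first and only
    // sends the body after the server answers with "100 Continue".
    Request request = new Request.Builder()
        .url("https://example.com/upload") // placeholder URL
        .header("Expect", "100-continue")
        .post(body)
        .build();

    try (Response response = client.newCall(request).execute()) {
      System.out.println(response.code());
    }
  }
}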

The following code is:

if (responseBuilder == null) {
	realChain.eventListener().responseHeadersStart(realChain.call());
	responseBuilder = httpCodec.readResponseHeaders(false);
}

Response response = responseBuilder
                .request(request)
                .handshake(streamAllocation.connection().handshake())
                .sentRequestAtMillis(sentRequestMillis)
                .receivedResponseAtMillis(System.currentTimeMillis())
                .build();
Copy the code

The responseBuilder state is as follows:

1. POST request, the request header contains Expect, the server accepts the request body, and the body has been sent: responseBuilder is null;

2. POST request, the request header contains Expect, but the server refuses the request body: responseBuilder is not null;

3. POST request without Expect, the request body is sent directly: responseBuilder is null;

4. POST request without a request body: responseBuilder is null;

5. GET request: responseBuilder is null.

In all five cases the response headers are then read and a Response is assembled; note that this Response has no response body yet. Also note: if the server accepted Expect: 100-continue, does that mean we made two requests? No. In that case the headers just read are only the answer to the Expect query (does the server accept the request body?), not the result of the actual request. Hence the code that immediately follows:

int code = response.code();
if (code == 100) {
	// server sent a 100-continue even though we did not request one.
	// try again to read the actual response
	responseBuilder = httpCodec.readResponseHeaders(false);

	response = responseBuilder
                    .request(request)
                    .handshake(streamAllocation.connection().handshake())
                    .sentRequestAtMillis(sentRequestMillis)
                    .receivedResponseAtMillis(System.currentTimeMillis())
                    .build();

	code = response.code();
}
Copy the code

If the response code is 100, it is the successful answer to Expect: 100-continue (or an unsolicited 100 from the server), so the response headers are read a second time; that second read is the actual result of the request.

Then, to finish:

if (forWebSocket && code == 101) {
// Connection is upgrading, but we need to ensure interceptors see a non-null
// response body.
	response = response.newBuilder()
                    .body(Util.EMPTY_RESPONSE)
                    .build();
} else {
	response = response.newBuilder()
                    .body(httpCodec.openResponseBody(response))
                    .build();
}

if ("close".equalsIgnoreCase(response.request().header("Connection"))
                || "close".equalsIgnoreCase(response.header("Connection"))) {
	streamAllocation.noNewStreams();
}

if ((code == 204 || code == 205) && response.body().contentLength() > 0) {
	throw new ProtocolException(
		"HTTP " + code + " had non-zero Content-Length: " +  response.body().contentLength());
}
return response;
Copy the code

forWebSocket indicates a WebSocket request, so for ordinary HTTP we go straight to the else branch, which reads the response body data. Then both sides are checked to see whether they want the connection to stay alive: if either the request or the response carries Connection: close, the socket must not be reused. Finally, if the server returns 204/205 (codes that rarely appear in practice), there should be no response body; if the parsed response headers nevertheless report a non-zero Content-Length (the body length in bytes), the two contradict each other and a ProtocolException is thrown!

Conclusion

In this interceptor, HTTP packets are encapsulated and parsed.

OkHttp summary

The entire OkHttp functionality is implemented in these five default interceptors, so understanding how the interceptor pattern works is a prerequisite. The five interceptors are: retry interceptor, bridge interceptor, cache interceptor, connect interceptor, and request server interceptor. Each interceptor does a different job, like a factory assembly line, and the final product comes out of these five stages.

But unlike an assembly line, each interceptor in OkHttp does some work before passing the request on to the next interceptor, and more work after it gets the result back. The whole process runs forward in the request direction and backward in the response direction.
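A minimal sketch of a custom application interceptor (not one of the five built-in ones) illustrates this "work before proceed, work after proceed" shape; the timing log is purely illustrative:

import java.io.IOException;
import okhttp3.Interceptor;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class TimingInterceptor implements Interceptor {
  @Override public Response intercept(Interceptor.Chain chain) throws IOException {
    Request request = chain.request();              // work on the way in
    long startNs = System.nanoTime();

    Response response = chain.proceed(request);     // hand over to the next interceptor

    long tookMs = (System.nanoTime() - startNs) / 1_000_000; // work on the way back
    System.out.println(request.url() + " -> " + response.code() + " in " + tookMs + "ms");
    return response;
  }

  public static OkHttpClient newClient() {
    // Application interceptors run before OkHttp's five built-in interceptors.
    return new OkHttpClient.Builder()
        .addInterceptor(new TimingInterceptor())
        .build();
  }
}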

When a user makes a request, the task Dispatcher wraps the request and passes it to the retry interceptor.

1. The retry and redirect interceptor is responsible for determining whether the user has cancelled the request before handing it over to the next interceptor. After the result is obtained, it checks the response code to decide whether a redirect is needed; if so, all the interceptors are run again.

2. The bridge interceptor is responsible for adding the required HTTP headers (e.g. Host) and default behaviors (e.g. GZIP support) before handing over. After obtaining the result, it calls the cookie-saving interface and decompresses GZIP data.

3. Cache interceptor, as the name implies, reads and determines whether to use cache before handing over; Determine whether to cache after obtaining the results.

4. The connection interceptor is responsible for finding or creating a connection and obtaining the corresponding socket flow before handing it over. No additional processing is performed after the results are obtained.

5. Request server interceptor to actually communicate with the server, send data to the server, and parse the read response data.

After this series of processes, an HTTP request is completed!
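Putting it together from the caller's side, a hedged end-to-end sketch (the URL is a placeholder):

import java.io.IOException;
import okhttp3.Call;
import okhttp3.Callback;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.Response;

public class EndToEndDemo {
  public static void main(String[] args) {
    OkHttpClient client = new OkHttpClient();

    Request request = new Request.Builder()
        .url("https://example.com/api") // placeholder URL
        .build();

    // The five default interceptors described above run for this call on a worker thread.
    client.newCall(request).enqueue(new Callback() {
      @Override public void onFailure(Call call, IOException e) {
        e.printStackTrace();
      }

      @Override public void onResponse(Call call, Response response) throws IOException {
        try (Response r = response) {
          System.out.println(r.code() + " " + r.body().string());
        }
      }
    });
  }
}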

Supplement: Proxies

When using OkHttp, if a proxy or proxySelector is configured when creating the OkHttpClient, the configured one is used, with proxy taking priority over proxySelector. If neither is set, the proxy configured on the machine (via the JDK's default ProxySelector) is obtained and used.

//JDK : ProxySelector
try {
	URI uri = new URI("http://restapi.amap.com");
	List<Proxy> proxyList = ProxySelector.getDefault().select(uri);
	System.out.println(proxyList.get(0).address());
	System.out.println(proxyList.get(0).type());
} catch (URISyntaxException e) {
	e.printStackTrace();
}
Copy the code

Therefore, if our App does not need its requests to go through a proxy, we can configure proxy(Proxy.NO_PROXY) so that the system proxy is ignored, which also makes packet capture via a proxy harder. NO_PROXY is defined as follows:

public static final Proxy NO_PROXY = new Proxy();
private Proxy() {
	this.type = Proxy.Type.DIRECT;
	this.sa = null;
}
Copy the code
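A minimal usage sketch of disabling the proxy on the client:

import java.net.Proxy;
import okhttp3.OkHttpClient;

public class NoProxyDemo {
  public static void main(String[] args) {
    // With NO_PROXY (type DIRECT), OkHttp ignores the system proxy settings entirely.
    OkHttpClient client = new OkHttpClient.Builder()
        .proxy(Proxy.NO_PROXY)
        .build();

    System.out.println("proxy = " + client.proxy()); // DIRECT
  }
}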

Java's Proxy class defines three proxy types:

public static enum Type {
        DIRECT,
        HTTP,
        SOCKS;
	private Type() {}
}
Copy the code

DIRECT: no proxy, HTTP: HTTP proxy, SOCKS: SOCKS proxy. What is the difference between an Http proxy and a Socks proxy?

With a SOCKS proxy, in an HTTP scenario the proxy server merely forwards TCP packets. An HTTP proxy server, in addition to forwarding data, parses the HTTP requests and responses and may process them based on their content.
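A hedged sketch of constructing the two proxy types for OkHttp (hostnames and ports are placeholders):

import java.net.InetSocketAddress;
import java.net.Proxy;
import okhttp3.OkHttpClient;

public class ProxyTypesDemo {
  public static void main(String[] args) {
    // HTTP proxy: OkHttp connects its socket to the proxy server itself.
    Proxy httpProxy = new Proxy(Proxy.Type.HTTP,
        new InetSocketAddress("http-proxy.example.com", 8080));

    // SOCKS proxy: the Socket is created with the proxy, and the target
    // host name is left for the proxy to resolve.
    Proxy socksProxy = new Proxy(Proxy.Type.SOCKS,
        new InetSocketAddress("socks-proxy.example.com", 1080));

    OkHttpClient viaHttp = new OkHttpClient.Builder().proxy(httpProxy).build();
    OkHttpClient viaSocks = new OkHttpClient.Builder().proxy(socksProxy).build();

    System.out.println(viaHttp.proxy() + " / " + viaSocks.proxy());
  }
}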

RealConnection connectSocket method:

// SOCKS proxy: new Socket(proxy); otherwise it is equivalent to a plain new Socket()
rawSocket = proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.HTTP
                ? address.socketFactory().createSocket()
                : new Socket(proxy);
// the connect call
socket.connect(address);
Copy the code

So if a SOCKS proxy is configured, the proxy is passed in when the Socket is created, and the connection target remains the HTTP server's address. If an HTTP proxy is configured, a plain Socket is created and the connection is made to the HTTP proxy server itself.

The address passed to connect comes from the inetSocketAddresses collection, which is filled in RouteSelector's resetNextInetSocketAddress method:

private void resetNextInetSocketAddress(Proxy proxy) throws IOException {
    // ...
    if (proxy.type() == Proxy.Type.DIRECT || proxy.type() == Proxy.Type.SOCKS) {
        // No proxy or socks proxy, using HTTP server domain name and port
      socketHost = address.url().host();
      socketPort = address.url().port();
    } else {
      SocketAddress proxyAddress = proxy.address();
      if (!(proxyAddress instanceof InetSocketAddress)) {
        throw new IllegalArgumentException(
            "Proxy.address() is not an " + "InetSocketAddress: " + proxyAddress.getClass());
      }
      InetSocketAddress proxySocketAddress = (InetSocketAddress) proxyAddress;
      socketHost = getHostString(proxySocketAddress);
      socketPort = proxySocketAddress.getPort();
    }

    // ...

    if (proxy.type() == Proxy.Type.SOCKS) {
        //socks proxy connect HTTP server
      inetSocketAddresses.add(InetSocketAddress.createUnresolved(socketHost, socketPort));
    } else {
        // If there is no proxy, DNS resolves the HTTP server
        // HTTP proxy. DNS parses HTTP proxy servers
      List<InetAddress> addresses = address.dns().lookup(socketHost);
      // ...
      for (int i = 0, size = addresses.size(); i < size; i++) {
        InetAddress inetAddress = addresses.get(i);
        inetSocketAddresses.add(new InetSocketAddress(inetAddress, socketPort));
      }
    }
  }
Copy the code

So with a SOCKS proxy, resolving the HTTP server's domain name is left to the proxy server (the address is created unresolved). With an HTTP proxy, the DNS configured on the OkHttpClient is used to resolve the HTTP proxy server's domain name, while resolving the target HTTP server's domain name is again left to the proxy server.
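One way to observe this is to plug in a logging Dns implementation; a hedged sketch (what it prints simply reflects the RouteSelector behavior above):

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;
import okhttp3.Dns;
import okhttp3.OkHttpClient;

public class LoggingDnsDemo {
  public static void main(String[] args) {
    Dns loggingDns = new Dns() {
      @Override public List<InetAddress> lookup(String hostname) throws UnknownHostException {
        // With an HTTP proxy configured, only the proxy's hostname reaches this point;
        // the target server's name is resolved by the proxy itself.
        System.out.println("resolving: " + hostname);
        return Dns.SYSTEM.lookup(hostname);
      }
    };

    OkHttpClient client = new OkHttpClient.Builder()
        .dns(loggingDns)
        .build();

    System.out.println("client with logging DNS: " + client);
  }
}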

The RouteSelector code above is how proxying and DNS interact in OkHttp. It is also important to note that HTTP proxies come in two flavors: plain proxies and tunnel proxies.

A plain proxy needs no extra handshake; it acts as a middleman relaying packets between the two ends. When it receives the client's request it handles the request and connection state itself and sends a new request to the server; after receiving the response it wraps the result into a response and returns it to the client. With a plain proxy, both ends may be entirely unaware that the proxy exists.

A tunnel proxy, by contrast, cannot rewrite the client's request; once the tunnel is established it simply forwards the client's bytes to the target server, blindly. The tunnel is set up with an HTTP CONNECT request, which has no request body and is consumed only by the proxy server, never passed on to the target server. After the CONNECT headers, all subsequent data is treated as data to be forwarded to the target server, and the proxy keeps forwarding it until the client side of the TCP connection is closed. Once the proxy server has connected to the target server, it answers the client with 200 Connection Established to signal that the tunnel is ready.

The Connect method of RealConnection

if (route.requiresTunnel()) {         
	connectTunnel(connectTimeout, readTimeout, writeTimeout, call, eventListener);
	if (rawSocket == null) {
		// We were unable to connect the tunnel but properly closed down our resources.
		break;
	}
} else {
	connectSocket(connectTimeout, readTimeout, call, eventListener);
}

Copy the code

The requiresTunnel method returns true when the current request is HTTPS and an HTTP proxy is configured; in that case connectTunnel sends:

CONNECT xxxx HTTP/1.1
Host: xxxx
Proxy-Connection: Keep-Alive
User-Agent: okhttp/${version}
Copy the code

The proxy server returns 200 if the tunnel is established successfully. If it returns 407, the proxy server requires authentication (for example, a paid proxy), and a Proxy-Authorization header must be added to the request:

Authenticator authenticator = new Authenticator() {
  @Nullable
  @Override
  public Request authenticate(Route route, Response response) throws IOException {
    if (response.code() == 407) { // proxy authentication required
      String credential = Credentials.basic("Proxy Service Username", "Proxy Service Password");
      return response.request().newBuilder()
          .header("Proxy-Authorization", credential)
          .build();
    }
    return null;
  }
};

new OkHttpClient.Builder().proxyAuthenticator(authenticator);
Copy the code