OkHttp architecture analysis

The Dispatcher

Internally, the Dispatcher maintains queues and a thread pool to dispatch and execute requests.

Question 1: Where does the request go?

The Dispatcher holds both synchronous and asynchronous calls and is responsible for executing asynchronous AsyncCalls.

  • For synchronous requests, the Dispatcher uses a single Deque to hold the running synchronous calls;
  • For asynchronous requests, the Dispatcher uses two Deques: readyAsyncCalls and runningAsyncCalls.
  • Why two? By default the Dispatcher allows at most 64 concurrent requests in total and at most 5 concurrent requests per host. When either limit is reached, the new Call is placed into readyAsyncCalls first and is later promoted into runningAsyncCalls to execute the request (see the sketch after this list).
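
For orientation, here is a rough sketch of how these queues are declared inside the Dispatcher (simplified from OkHttp 3.x; AsyncCall and RealCall are OkHttp's own internal types, and only the fields relevant here are shown):

import java.util.ArrayDeque;
import java.util.Deque;

public final class Dispatcher {
  private int maxRequests = 64;        // overall concurrency limit
  private int maxRequestsPerHost = 5;  // per-host concurrency limit

  /** Asynchronous calls waiting for a free slot. */
  private final Deque<AsyncCall> readyAsyncCalls = new ArrayDeque<>();

  /** Asynchronous calls currently executing on the thread pool. */
  private final Deque<AsyncCall> runningAsyncCalls = new ArrayDeque<>();

  /** Synchronous calls currently executing on their caller's thread. */
  private final Deque<RealCall> runningSyncCalls = new ArrayDeque<>();
}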

Asynchronous request flow:

  1. Call (RealCall) -> enqueue()
  2. RealCall.enqueue()
  3. Dispatcher.enqueue(): adds the request to the asynchronous queues, where it waits to be executed
  4. Calls are repeatedly promoted from readyAsyncCalls into runningAsyncCalls (and removed from readyAsyncCalls): as long as runningAsyncCalls is below the overall limit and the number of running calls to the same host is below the per-host limit, the Call is moved into runningAsyncCalls and submitted for execution (a sketch of this promotion follows the snippet below), where it ultimately calls
getResponseWithInterceptorChain();
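
A trimmed-down sketch of that enqueue-and-promote logic, loosely following the enqueue()/promoteAndExecute() pair in recent OkHttp versions (the real methods carry extra bookkeeping such as per-host counters and an idle callback; this sketch reuses the fields and limits shown earlier and is not the exact source):

// Inside the Dispatcher (simplified sketch, not the exact OkHttp source).
void enqueue(AsyncCall call) {
  synchronized (this) {
    readyAsyncCalls.add(call);   // every async call starts in the ready queue
  }
  promoteAndExecute();
}

private void promoteAndExecute() {
  List<AsyncCall> executableCalls = new ArrayList<>();
  synchronized (this) {
    for (Iterator<AsyncCall> i = readyAsyncCalls.iterator(); i.hasNext(); ) {
      AsyncCall call = i.next();
      if (runningAsyncCalls.size() >= maxRequests) break;            // overall limit hit, stop
      if (runningCallsForHost(call) >= maxRequestsPerHost) continue; // this host is saturated, try the next call
      i.remove();                    // leave the ready queue...
      runningAsyncCalls.add(call);   // ...and join the running queue
      executableCalls.add(call);
    }
  }
  // Outside the lock: hand each promoted call to the thread pool, where its run()
  // eventually calls getResponseWithInterceptorChain().
  for (AsyncCall call : executableCalls) {
    executorService().execute(call);
  }
}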

Question 2: How does a call move from ready to running?

Each time a task finishes, another task is taken from the ready queue and put into the running queue.

When a request completes it is removed from runningAsyncCalls, and the same capacity check used when it was first enqueued is run again to decide which ready calls to promote; this is triggered by

client.dispatcher().finished(this);
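
A minimal sketch of what finished() then does inside the Dispatcher (simplified; the real method also handles synchronous calls and an idle callback):

// Simplified sketch of Dispatcher.finished(): drop the completed call from the
// running queue, then try to promote waiting calls again.
void finished(AsyncCall call) {
  synchronized (this) {
    if (!runningAsyncCalls.remove(call)) {
      throw new AssertionError("Call wasn't in-flight!");
    }
  }
  promoteAndExecute();   // same capacity check as when the call was first enqueued
}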

Question 3: How is the Dispatcher's thread pool defined?

Zero core threads, an effectively unbounded maximum thread count with a 60-second keep-alive, and a SynchronousQueue, which is a queue with no capacity: submitted tasks never sit waiting in the pool, so the pool runs them with maximum concurrency.
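
Concretely, the executor is created lazily, essentially as in the OkHttp 3.x Dispatcher (comments added):

public synchronized ExecutorService executorService() {
  if (executorService == null) {
    executorService = new ThreadPoolExecutor(
        0,                                 // core pool size 0: no threads are kept alive when idle
        Integer.MAX_VALUE,                 // the pool itself never caps concurrency (the Dispatcher's limits do)
        60, TimeUnit.SECONDS,              // idle worker threads die after 60 seconds
        new SynchronousQueue<Runnable>(),  // no-capacity queue: a task is handed straight to a thread, never parked
        Util.threadFactory("OkHttp Dispatcher", false));
  }
  return executorService;
}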

  • Why use thread pools?

Reason: a large number of requests with very short lifecycles would mean constantly creating and destroying the threads that serve them. That churn is wasteful and can cause problems such as memory jitter, so a thread pool is used to reuse threads instead.

  • Why is the thread pool designed this way?

Because the Dispatcher already maintains its own ready and running queues when a request is sent, the framework does the queuing itself. If the thread pool also buffered tasks in its own work queue, there would effectively be a third queue, and making calls wait again inside the pool would be inefficient; hence the no-capacity SynchronousQueue, which hands every submitted task directly to a thread.

The Interceptor

The interceptors complete the actual request process.

The process of an HTTP request: DNS resolution -> establish the TCP/IP connection -> write the HTTP message out through the socket's output stream (a hand-rolled sketch of these steps follows).
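
To make those three steps concrete, here is a minimal hand-rolled version in plain Java (example.com is just a placeholder host; plain HTTP on port 80, no TLS, no real error handling), doing only what the line above describes: resolve the name, open a TCP connection, and write an HTTP/1.1 request over the socket's output stream:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpRequest {
  public static void main(String[] args) throws Exception {
    // 1. DNS resolution: turn the host name into an IP address.
    InetAddress address = InetAddress.getByName("example.com");

    // 2. Establish the TCP connection (port 80 for plain HTTP).
    try (Socket socket = new Socket(address, 80)) {
      // 3. Write the HTTP request bytes to the socket's output stream.
      OutputStream out = socket.getOutputStream();
      String request = "GET / HTTP/1.1\r\n"
          + "Host: example.com\r\n"
          + "Connection: close\r\n"
          + "\r\n";
      out.write(request.getBytes(StandardCharsets.UTF_8));
      out.flush();

      // Read the response back so the example is observable.
      BufferedReader in = new BufferedReader(
          new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}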