First, a little gossip

This Volley analysis mainly walks through how the framework encapsulates networking, along with some of the technical details involved. Without further ado, let's "borrow" a diagram.




Volley.jpeg


Check the cache first, and hit the network only on a cache miss — it sounds like a cliché. In actual coding, though, there are many small details to get right: how the cache is managed, how to find the cached entry for a given request, and many other cache-related problems. There are plenty of network issues too, such as how the network thread pool is managed, when the cache layer decides whether to fall through to a network request, and how the returned data is handled. Solving these problems elegantly, packaging them up, and still staying flexible and extensible is exactly what Volley does. Partly out of respect for the craft, and partly to make my own code a bit classier, I set out to seriously analyze the Volley source and study how its architecture and design patterns are applied. In line with other Volley write-ups, the analysis follows a single thread: a Request going in and a Response coming out. Everything has to start somewhere, and a good start makes everything afterwards easier. For Volley, that start is the initialization of the queue.

Second, queue initialization

The picture below is my own drawing — ugly, sure, but it makes do, and the point it wants to express is clear enough. Creating the queue mainly initializes the cache, the cache dispatcher, the network dispatchers, the Network object that actually performs requests, and the result delivery. That, in a nutshell, is the architecture.




RequestQueue.jpg


And what exactly is being initialized?

1. Create a RequestQueue.

A RequestQueue can be created in several ways. Each request queue has exactly one cache dispatcher and, by default, four network dispatchers.

```java
/**
 * Creates a default instance of the worker pool and calls
 * {@link RequestQueue#start()} on it.
 *
 * @param context A {@link Context} to use for creating the cache dir.
 * @return A started {@link RequestQueue} instance.
 */
public static RequestQueue newRequestQueue(Context context) {
    return newRequestQueue(context, (BaseHttpStack) null);
}
```

Internally this calls the private newRequestQueue(). A RequestQueue is just an object created with new, so queues can be created more than once — though you shouldn't create too many in a single process.

```java
private static RequestQueue newRequestQueue(Context context, Network network) {
    File cacheDir = new File(context.getCacheDir(), DEFAULT_CACHE_DIR);
    RequestQueue queue = new RequestQueue(new DiskBasedCache(cacheDir), network);
    queue.start();
    return queue;
}
```

The last line of the code above, start(), starts the cache and network dispatchers.

```java
/** Starts the dispatchers in this queue. */
public void start() {
    stop(); // Make sure any currently running dispatchers are stopped.
    // Create the cache dispatcher and start it.
    mCacheDispatcher = new CacheDispatcher(mCacheQueue, mNetworkQueue, mCache, mDelivery);
    mCacheDispatcher.start();
    // Create network dispatchers (and corresponding threads) up to the pool size.
    for (int i = 0; i < mDispatchers.length; i++) {
        NetworkDispatcher networkDispatcher =
                new NetworkDispatcher(mNetworkQueue, mNetwork, mCache, mDelivery);
        mDispatchers[i] = networkDispatcher;
        networkDispatcher.start();
    }
}
```

2. Create a cache.

The cache eviction strategy is LRU, and the underlying implementation relies on LinkedHashMap: the accessOrder constructor parameter is set to true. In that mode, every access moves the corresponding node to the tail of the internal linked list, so iteration proceeds from least-recently-used to most-recently-used, which is exactly what LRU eviction needs.

```java
/** Map of the Key, CacheHeader pairs */
private final Map<String, CacheHeader> mEntries = new LinkedHashMap<>(16, .75f, true);
```
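To see that mechanism in isolation, here is a minimal LRU sketch (illustrative code, not part of Volley) built on the same LinkedHashMap trick: with accessOrder=true every access moves the entry to the tail, so the head is always the least-recently-used entry and removeEldestEntry() can evict it.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch using the LinkedHashMap behavior Volley relies on.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    LruSketch(int maxEntries) {
        super(16, 0.75f, true); // accessOrder = true: get() moves entries to the tail
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put(); evict the head (least recently used) when over capacity.
        return size() > maxEntries;
    }
}
```

With capacity 3, putting a, b, c, then reading a, then putting d evicts b — the least recently used entry — rather than a.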

3. Creating the Network

A Network is an interface with a single method, performRequest(). It is user-pluggable, meaning the user can decide how the network connection is actually performed to retrieve data. Take a look at the default implementation.

```java
public static RequestQueue newRequestQueue(Context context, BaseHttpStack stack) {
    BasicNetwork network;
    if (stack == null) {
        if (Build.VERSION.SDK_INT >= 9) {
            network = new BasicNetwork(new HurlStack());
        } else {
            // Prior to Gingerbread, HttpUrlConnection was unreliable.
            // See: http://android-developers.blogspot.com/2011/09/androids-http-clients.html
            // At some point in the future we'll move our minSdkVersion past Froyo and can
            // delete this fallback (along with all Apache HTTP code).
            String userAgent = "volley/0";
            try {
                String packageName = context.getPackageName();
                PackageInfo info =
                        context.getPackageManager().getPackageInfo(packageName, /* flags= */ 0);
                userAgent = packageName + "/" + info.versionCode;
            } catch (NameNotFoundException e) {
            }

            network =
                    new BasicNetwork(
                            new HttpClientStack(AndroidHttpClient.newInstance(userAgent)));
        }
    } else {
        network = new BasicNetwork(stack);
    }

    return newRequestQueue(context, network);
}
```

When creating the RequestQueue, the user's HttpStack, if supplied, is wrapped in a BasicNetwork. If not, HurlStack (based on HttpUrlConnection) is used on SDK >= 9 (Android 2.3); below that — and such devices did still exist — an HttpClientStack wrapping the Apache-based AndroidHttpClient is used instead, because before Gingerbread HttpUrlConnection was unreliable (notably its keep-alive connection handling was buggy). Apache HttpClient has since been deprecated from the Android network stack, and HttpUrlConnection (or a library such as Volley) is recommended. Since Android 4.4, HttpUrlConnection's underlying implementation is itself based on OkHttp.

4. Starting the cache and network dispatchers

Starting the queue starts one cache dispatcher and several network dispatchers, all of which are Java threads. The cache dispatcher pulls requests from the cache queue and enters a waiting state when there is nothing to take. This behavior comes from PriorityBlockingQueue, a blocking queue that implements the producer-consumer model internally. The network dispatchers work the same way, also on top of PriorityBlockingQueue. In terms of flow, a request enters the cache queue first; after the cache queue has processed it, it enters the network queue if a network request or refresh is needed.

```java
private void processRequest() throws InterruptedException {
    // Get a request from the cache triage queue, blocking until
    // at least one is available.
    final Request<?> request = mCacheQueue.take();
    processRequest(request);
}
```

The network dispatcher's version of this code is nearly identical. The key call is PriorityBlockingQueue#take(); let's look at its implementation.

```java
public E take() throws InterruptedException {
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();
    E result;
    try {
        while ((result = dequeue()) == null)
            notEmpty.await();
    } finally {
        lock.unlock();
    }
    return result;
}
```

As you can see above, if the queue is empty the thread enters a waiting state via notEmpty.await(). Now look at the PriorityBlockingQueue#put() method.

```java
public void put(E e) {
    offer(e); // never need to block
}

public boolean offer(E e) {
    if (e == null)
        throw new NullPointerException();
    final ReentrantLock lock = this.lock;
    lock.lock();
    int n, cap;
    Object[] array;
    while ((n = size) >= (cap = (array = queue).length))
        tryGrow(array, cap);
    try {
        Comparator<? super E> cmp = comparator;
        if (cmp == null)
            siftUpComparable(n, e, array);
        else
            siftUpUsingComparator(n, e, array, cmp);
        size = n + 1;
        notEmpty.signal();
    } finally {
        lock.unlock();
    }
    return true;
}
```

The put() method is very simple: it just delegates to offer(). Setting aside the bookkeeping in that method, the key line is notEmpty.signal(). Emitting this signal wakes up a take() that was blocked because the queue was empty. This is a textbook producer-consumer model.
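The two behaviors the dispatchers rely on — waiting when the queue is empty, and dequeuing by priority — can be seen in a tiny standalone sketch using PriorityBlockingQueue directly (plain Java, nothing Volley-specific):

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.TimeUnit;

public class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        PriorityBlockingQueue<Integer> queue = new PriorityBlockingQueue<>();

        // Empty queue: a timed poll() waits on the notEmpty condition, then gives up.
        Integer none = queue.poll(50, TimeUnit.MILLISECONDS);
        System.out.println(none); // null

        // put() delegates to offer(), which inserts the element and signals notEmpty.
        queue.put(3);
        queue.put(1);

        // take() removes the head of the heap, i.e. the smallest element first.
        System.out.println(queue.take()); // 1
        System.out.println(queue.take()); // 3
    }
}
```

A blocked take() in a dispatcher thread is woken by exactly that notEmpty signal when a producer adds a request.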

5. A little digression on priorities

Now that you know the dispatch queues are PriorityBlockingQueue instances, you can also see how Volley controls the priority of network requests: the blocking queue is a priority queue. Ordering is decided by the Request's compareTo() method, which compares the requests' Priority first and, for equal priorities, falls back to the mSequence value — a simple integer maintained by the queue and incremented by one for each Request added, so equal-priority requests stay FIFO. The Priority itself can be set by the user.

```java
/** Gets a sequence number. */
public int getSequenceNumber() {
    return mSequenceGenerator.incrementAndGet();
}
```
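To make the ordering concrete, here is an illustrative sketch of that compareTo() contract — higher priority first, FIFO by sequence number within the same priority. FakeRequest is a hypothetical stand-in, not Volley's actual class, and it simplifies Priority to an int where larger means more urgent.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

class FakeRequest implements Comparable<FakeRequest> {
    static final AtomicInteger SEQ = new AtomicInteger();

    final int priority;                           // larger = more urgent (simplified)
    final int sequence = SEQ.incrementAndGet();   // assigned at creation, like mSequence
    final String tag;

    FakeRequest(int priority, String tag) {
        this.priority = priority;
        this.tag = tag;
    }

    @Override
    public int compareTo(FakeRequest other) {
        // Equal priority: FIFO by sequence; otherwise the higher priority sorts first.
        return priority == other.priority
                ? Integer.compare(sequence, other.sequence)
                : Integer.compare(other.priority, priority);
    }
}
```

Dropped into a PriorityBlockingQueue, a high-priority request jumps ahead of earlier low-priority ones, while equal-priority requests come out in insertion order.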

Once the environment is initialized, the natural next step is to initiate a Request and get back a Response.

Third, making a request

Another ugly picture — compared with the diagrams of the masters it looks messy. Partly that's because my drawing skills are about as good as my writing; partly it's because the producer-consumer wait/wake model and its conditional branches don't map neatly onto a flowchart. Still, read it carefully and there's a lot to be gained.




SendRequest.jpg

1. Initialize the cache

Only the cache object is created when the queue is initialized; the actual initialization happens when the cache dispatcher starts running. Let's look at the code.

```java
@Override
public synchronized void initialize() {
    if (!mRootDirectory.exists()) {
        if (!mRootDirectory.mkdirs()) {
            VolleyLog.e("Unable to create cache dir %s", mRootDirectory.getAbsolutePath());
        }
        return;
    }
    File[] files = mRootDirectory.listFiles();
    if (files == null) {
        return;
    }
    for (File file : files) {
        try {
            long entrySize = file.length();
            CountingInputStream cis =
                    new CountingInputStream(
                            new BufferedInputStream(createInputStream(file)), entrySize);
            try {
                CacheHeader entry = CacheHeader.readHeader(cis);
                // NOTE: When this entry was put, its size was recorded as data.length, but
                // when the entry is initialized below, its size is recorded as file.length()
                entry.size = entrySize;
                putEntry(entry.key, entry);
            } finally {
                // Any IOException thrown here is handled by the below catch block by design.
                //noinspection ThrowFromFinallyBlock
                cis.close();
            }
        } catch (IOException e) {
            //noinspection ResultOfMethodCallIgnored
            file.delete();
        }
    }
}
```

The code looks long, but there are only two key points. If the cache directory is missing or empty, return immediately. Otherwise, iterate over all the files, parse each one's contents into a cache Entry (think of it as one cache unit), and put it into the in-memory index.

2. Manage cache keys

Another aspect of cache management is how the key is generated. Ensuring it is unique matters a great deal, because it determines whether a request has a corresponding cache entry. Look at the code.

```java
/** Returns the cache key for this request. By default, this is the URL. */
public String getCacheKey() {
    String url = getUrl();
    // If this is a GET request, just use the URL as the key.
    // For callers using DEPRECATED_GET_OR_POST, we assume the method is GET, which matches
    // legacy behavior where all methods had the same cache key. We can't determine which method
    // will be used because doing so requires calling getPostBody() which is expensive and may
    // throw AuthFailureError.
    // TODO(#190): Remove support for non-GET methods.
    int method = getMethod();
    if (method == Method.GET || method == Method.DEPRECATED_GET_OR_POST) {
        return url;
    }
    return Integer.toString(method) + '-' + url;
}
```

Not complicated: for GET the key is simply the URL, and for other methods it is the int value of the HTTP method, a dash, and the URL. The URL is unique, so no argument there. Still, hashing the key with something like MD5 would give fixed-length keys, obscure the URL, and save some space.
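As a sketch of that suggestion — and to be clear, this helper is hypothetical, not part of Volley — the "method-url" key could be run through MD5 via java.security.MessageDigest:

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical helper: hash a "method-url" cache key into a fixed-length,
// filesystem-safe hex string.
final class CacheKeys {
    static String md5Key(String methodAndUrl) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] digest = md.digest(methodAndUrl.getBytes(StandardCharsets.UTF_8));
            // Left-pad to 32 hex chars so every key has the same length.
            return String.format("%032x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 is guaranteed on the JVM", e);
        }
    }
}
```

The same input always yields the same 32-character key, so lookups still work; the trade-off is that the original URL can no longer be read off the cache file name.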

3. Cache processing flow

The sequence diagram above actually makes it obvious: if the cache misses or has expired, the request is thrown to the network queue, and the network queue wakes a network dispatcher to handle it. There is more than one network dispatcher, of course — whichever one is free grabs the request and executes it. The subtle part is the caching logic, which combines the HTTP protocol's caching rules with Volley's own. Let's start by looking at the cache-related fields in Entry.

```java
/** ETag for cache coherency. */
public String etag;

/** Date of this response as reported by the server. */
public long serverDate;

/** The last modified date for the requested object. */
public long lastModified;

/** TTL for this record. */
public long ttl;

/** Soft TTL for this record. */
public long softTtl;
```

The fields are etag, serverDate, lastModified, ttl, and softTtl. As the following code shows, ttl marks when the entry expires outright and softTtl marks when it needs a refresh.

```java
/** True if the entry is expired. */
public boolean isExpired() {
    return this.ttl < System.currentTimeMillis();
}

/** True if a refresh is needed from the original data source. */
public boolean refreshNeeded() {
    return this.softTtl < System.currentTimeMillis();
}
```

This ties into HTTP protocol caching; skip ahead if you're not interested. The following code is also a bit long — if you'd rather not read it, jump straight to the conclusions after it.

```java
public static Cache.Entry parseCacheHeaders(NetworkResponse response) {
    long now = System.currentTimeMillis();

    Map<String, String> headers = response.headers;

    long serverDate = 0;
    long lastModified = 0;
    long serverExpires = 0;
    long softExpire = 0;
    long finalExpire = 0;
    long maxAge = 0;
    long staleWhileRevalidate = 0;
    boolean hasCacheControl = false;
    boolean mustRevalidate = false;

    String serverEtag = null;
    String headerValue;

    headerValue = headers.get("Date");
    if (headerValue != null) {
        serverDate = parseDateAsEpoch(headerValue);
    }

    headerValue = headers.get("Cache-Control");
    if (headerValue != null) {
        hasCacheControl = true;
        String[] tokens = headerValue.split(",", 0);
        for (int i = 0; i < tokens.length; i++) {
            String token = tokens[i].trim();
            if (token.equals("no-cache") || token.equals("no-store")) {
                return null;
            } else if (token.startsWith("max-age=")) {
                try {
                    maxAge = Long.parseLong(token.substring(8));
                } catch (Exception e) {
                }
            } else if (token.startsWith("stale-while-revalidate=")) {
                try {
                    staleWhileRevalidate = Long.parseLong(token.substring(23));
                } catch (Exception e) {
                }
            } else if (token.equals("must-revalidate") || token.equals("proxy-revalidate")) {
                mustRevalidate = true;
            }
        }
    }

    headerValue = headers.get("Expires");
    if (headerValue != null) {
        serverExpires = parseDateAsEpoch(headerValue);
    }

    headerValue = headers.get("Last-Modified");
    if (headerValue != null) {
        lastModified = parseDateAsEpoch(headerValue);
    }

    serverEtag = headers.get("ETag");

    // Cache-Control takes precedence over an Expires header, even if both exist and Expires
    // is more restrictive.
    if (hasCacheControl) {
        softExpire = now + maxAge * 1000;
        finalExpire = mustRevalidate
                ? softExpire
                : softExpire + staleWhileRevalidate * 1000;
    } else if (serverDate > 0 && serverExpires >= serverDate) {
        // Default semantic for Expire header in HTTP specification is softExpire.
        softExpire = now + (serverExpires - serverDate);
        finalExpire = softExpire;
    }

    Cache.Entry entry = new Cache.Entry();
    entry.data = response.data;
    entry.etag = serverEtag;
    entry.softTtl = softExpire;
    entry.ttl = finalExpire;
    entry.serverDate = serverDate;
    entry.lastModified = lastModified;
    entry.responseHeaders = headers;
    entry.allResponseHeaders = response.allHeaders;
    return entry;
}
```

The first part reads and parses the relevant headers:
"Date": the server time.
"Cache-Control": tokens such as "no-cache", "no-store", "max-age=", "stale-while-revalidate=", "must-revalidate" and "proxy-revalidate".
"Expires": the expected expiry time of the cached response.
"Last-Modified": when the resource was last modified.
"ETag": a validator marking whether the resource has changed.
Fully understanding these fields and their priorities takes a little extra work; here we focus on how softTtl and ttl are computed. From the code, softTtl = softExpire and ttl = finalExpire, calculated as follows: (1) A Cache-Control header exists: softExpire = now + max-age, and finalExpire = softExpire if must-revalidate is set, otherwise softExpire + stale-while-revalidate. (2) No Cache-Control header, but the server date exists and Expires >= Date: softExpire = now + (Expires - Date), and finalExpire = softExpire. It might also help to look at this "borrowed" flow diagram of request caching.




Cache.jpg


From the above analysis, softTtl and ttl let the framework decide up front whether the cache is still valid or needs updating. The other fields — ETag, Last-Modified and so on — determine the caching policy when the actual network request is executed. HTTP caching is genuinely hard to understand and use well, which is why this section ran a bit long and verbose. Let's move on to network dispatch.
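The two branches of that expiry math can be distilled into a small standalone sketch. The class and method names here are illustrative (not Volley's), parsing is stripped out, and header values are passed in directly:

```java
// Illustrative distillation of the softExpire/finalExpire math in
// HttpHeaderParser.parseCacheHeaders(); times are epoch millis, ages are seconds.
final class ExpiryMath {
    /** Cache-Control branch. Returns {softTtl, ttl}. */
    static long[] fromCacheControl(long nowMs, long maxAgeSec,
                                   long staleWhileRevalidateSec, boolean mustRevalidate) {
        long softExpire = nowMs + maxAgeSec * 1000;
        // must-revalidate forbids serving stale data, so ttl == softTtl.
        long finalExpire = mustRevalidate
                ? softExpire
                : softExpire + staleWhileRevalidateSec * 1000;
        return new long[] {softExpire, finalExpire};
    }

    /** Expires branch (no Cache-Control). Returns {softTtl, ttl}. */
    static long[] fromExpires(long nowMs, long serverDateMs, long serverExpiresMs) {
        long softExpire = nowMs + (serverExpiresMs - serverDateMs);
        return new long[] {softExpire, softExpire};
    }
}
```

For example, with now = 1000 ms, max-age = 60 s and stale-while-revalidate = 30 s, softTtl lands at 61000 and ttl at 91000; adding must-revalidate pulls ttl back to 61000.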

4. Network dispatcher processing

As mentioned earlier, a cache miss or an expired entry sends the request to the network queue, which then relies on PriorityBlockingQueue's underlying mechanism to wake a network dispatcher to continue processing.

```java
private void processRequest() throws InterruptedException {
    // Take a request from the queue.
    Request<?> request = mQueue.take();
    processRequest(request);
}
```

Above, the network dispatcher retrieves a request, waiting if none is available. Once one is retrieved, processing continues.

```java
void processRequest(Request<?> request) {
    long startTimeMs = SystemClock.elapsedRealtime();
    try {
        request.addMarker("network-queue-take");

        // If the request was cancelled already, do not perform the
        // network request.
        if (request.isCanceled()) {
            request.finish("network-discard-cancelled");
            request.notifyListenerResponseNotUsable();
            return;
        }

        addTrafficStatsTag(request);

        // Perform the network request.
        NetworkResponse networkResponse = mNetwork.performRequest(request);
        request.addMarker("network-http-complete");

        // If the server returned 304 AND we delivered a response already,
        // we're done -- don't deliver a second identical response.
        if (networkResponse.notModified && request.hasHadResponseDelivered()) {
            request.finish("not-modified");
            request.notifyListenerResponseNotUsable();
            return;
        }

        // Parse the response here on the worker thread.
        Response<?> response = request.parseNetworkResponse(networkResponse);
        request.addMarker("network-parse-complete");

        // Write to cache if applicable.
        // TODO: Only update cache metadata instead of entire record for 304s.
        if (request.shouldCache() && response.cacheEntry != null) {
            mCache.put(request.getCacheKey(), response.cacheEntry);
            request.addMarker("network-cache-written");
        }

        // Post the response back.
        request.markDelivered();
        mDelivery.postResponse(request, response);
        request.notifyListenerResponseReceived(response);
    } catch (VolleyError volleyError) {
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        parseAndDeliverNetworkError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    } catch (Exception e) {
        VolleyLog.e(e, "Unhandled exception %s", e.toString());
        VolleyError volleyError = new VolleyError(e);
        volleyError.setNetworkTimeMs(SystemClock.elapsedRealtime() - startTimeMs);
        mDelivery.postError(request, volleyError);
        request.notifyListenerResponseNotUsable();
    }
}
```

That looks like a lot of code, but the key points are: (1) check whether the request has already been cancelled, and abort if so; (2) perform the real network request via mNetwork.performRequest(request) — the Network can be set by the user, or the default HttpUrlConnection-based implementation is used (whose underlying engine on modern Android is OkHttp); (3) once the response comes back, the Request parses it via parseNetworkResponse() to obtain the result, which is then handed back through the ResponseDelivery; (4) network errors and other exceptions are also returned through the ResponseDelivery.

Fourth, delivering the result

Besides ResponseDelivery, which delivers the result, there is also Request#notifyListenerResponseReceived(). To whom does each of them deliver? Let's start with another ugly picture.




DeliveryResponse.jpg

(1) ResponseDelivery is only an interface; its implementor is ExecutorDelivery.

```java
public ExecutorDelivery(final Handler handler) {
    // Make an Executor that just wraps the handler.
    mResponsePoster =
            new Executor() {
                @Override
                public void execute(Runnable command) {
                    handler.post(command);
                }
            };
}
```

As its constructor shows, it holds an mResponsePoster, an Executor that simply posts each Runnable to the given Handler. That Handler parameter is the key: the result is ultimately delivered on whatever Looper the Handler wraps. By default it wraps MainLooper, which takes us back to RequestQueue initialization — the earlier RequestQueue constructors indirectly call this version:

```java
public RequestQueue(Cache cache, Network network, int threadPoolSize) {
    this(
            cache,
            network,
            threadPoolSize,
            new ExecutorDelivery(new Handler(Looper.getMainLooper())));
}
```

What this Handler actually handles is a ResponseDeliveryRunnable, whose run() method calls the Request's deliverResponse(). Strictly speaking, deliverResponse() is implemented by subclasses such as StringRequest — the base Request leaves it to them, which is why I'm borrowing StringRequest here. (2) Now look at the Request#notifyListenerResponseReceived() method. It ultimately calls CacheDispatcher.WaitingRequestManager#onResponseReceived().

```java
public void onResponseReceived(Request<?> request, Response<?> response) {
    if (response.cacheEntry == null || response.cacheEntry.isExpired()) {
        onNoUsableResponseReceived(request);
        return;
    }
    String cacheKey = request.getCacheKey();
    List<Request<?>> waitingRequests;
    synchronized (this) {
        waitingRequests = mWaitingRequests.remove(cacheKey);
    }
    if (waitingRequests != null) {
        if (VolleyLog.DEBUG) {
            VolleyLog.v(
                    "Releasing %d waiting requests for cacheKey=%s.",
                    waitingRequests.size(), cacheKey);
        }
        // Process all queued up requests.
        for (Request<?> waiting : waitingRequests) {
            mCacheDispatcher.mDelivery.postResponse(waiting, response);
        }
    }
}
```

This notification lets the cache dispatcher deliver the response to all waiting requests that share the same cache key.
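The bookkeeping behind this can be sketched as a simple map from cache key to parked requests. This is a standalone illustration of the idea, not Volley's actual WaitingRequestManager, and it uses plain strings as stand-ins for requests and responses:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of request de-duplication: the first request for a cache key proceeds;
// identical in-flight requests are parked, and the first response releases them all.
final class WaitingRequests {
    private final Map<String, List<String>> waiting = new HashMap<>();

    /** Returns true if an identical request is already in flight (so this one waits). */
    boolean park(String cacheKey, String requestId) {
        List<String> list = waiting.get(cacheKey);
        if (list == null) {
            waiting.put(cacheKey, new ArrayList<>()); // first request proceeds
            return false;
        }
        list.add(requestId); // duplicates wait for the first one's response
        return true;
    }

    /** Called when the response for cacheKey arrives; returns the released waiters. */
    List<String> release(String cacheKey) {
        List<String> list = waiting.remove(cacheKey);
        return list == null ? Collections.emptyList() : list;
    }
}
```

The payoff is that N simultaneous requests for the same URL trigger one network round trip, with the single response posted to every waiter.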

Fifth, the summary

Following the conventions of essay writing, there should be a summary at the end. As for pros and cons: frankly, I have not used Volley in a project — I only wanted to learn from the masters' elegance and pick up a bit of class along the way, so my analysis can't be all that sharp and may well contain mistakes. Let me summarize the key points anyway. (1) Volley is a fairly complete package — simple, clear, and highly extensible, without losing powerful functionality. (2) RequestQueue is the core of the library, covering the whole pipeline from initiating requests, caching, executing network requests, and parsing results to delivering them; a process may hold multiple RequestQueue instances. (3) The cache is one of the highlights: it implements HTTP-protocol caching, the on-disk side follows a disk-cache design in the spirit of DiskLRUCache, and the in-memory index relies on LinkedHashMap's access ordering to implement the LRU algorithm. (4) There are really only two key queues: a cache queue and a network queue. The cache queue is handled by the cache dispatcher and the network queue by the network dispatchers; after the cache queue finishes with a request, it hands it to the network queue as appropriate.