Preface

Network request tracking ("burying points") collects network request data on the client and uploads it to the cloud, providing data to support network performance optimization. (The network stack in this article is based on OkHttp + Retrofit.)

Generally, the collected data includes the following fields:

  1. IP address
  2. Network type (cellular or Wi-Fi)
  3. User ID
  4. DNS lookup time
  5. Connection establishment time
  6. Total request time
  7. Request URL
  8. Request method (GET, POST)
  9. Response code
  10. Response protocol (HTTP/2, QUIC)

Fields 4 to 10 are strongly correlated with network requests.
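
For reference, these fields could be modeled on the client as a simple entity; the class below is only an illustrative sketch (the names are not from any library):

// Hypothetical entity for one network-request tracking record; field names are illustrative
data class NetworkTrackRecord(
    val ip: String,               // 1. IP address
    val networkType: String,      // 2. Network type ("cellular" or "wifi")
    val userId: String,           // 3. User ID
    val dnsDurationMs: Long,      // 4. DNS lookup time
    val connectDurationMs: Long,  // 5. Connection establishment time
    val callDurationMs: Long,     // 6. Total request time
    val url: String,              // 7. Request URL
    val method: String,           // 8. Request method (GET, POST)
    val code: Int,                // 9. Response code
    val protocol: String          // 10. Response protocol (HTTP/2, QUIC)
)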

Collect data

The data strongly associated with the network cannot all be collected in one place.

OkHttp provides okhttp3.EventListener for monitoring network events:

abstract class EventListener {
    // Request started
    open fun callStart(call: Call) {}
    // DNS lookup started
    open fun dnsStart(call: Call, domainName: String) {}
    // DNS lookup finished
    open fun dnsEnd(call: Call, domainName: String, inetAddressList: List<@JvmSuppressWildcards InetAddress>) {}
    // Connection started
    open fun connectStart(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy) {}
    // TLS handshake started
    open fun secureConnectStart(call: Call) {}
    // TLS handshake finished
    open fun secureConnectEnd(call: Call, handshake: Handshake?) {}
    // Connection failed
    open fun connectFailed(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy, protocol: Protocol?, ioe: IOException) {}
    // Request finished
    open fun callEnd(call: Call) {}
    // Request failed
    open fun callFailed(call: Call, ioe: IOException) {}
}

The event listener provides many callbacks; the ones shown above are relevant to the data we want to collect.

Customize an event listener:

class TrackEventListener : EventListener() {
    private var callStartMillis: Long? = null       // Millisecond timestamp when the request started
    private var dnsStartMillis: Long? = null        // Millisecond timestamp when the DNS lookup started
    private var tcpConnectStartMillis: Long? = null // Millisecond timestamp when the TCP connection started
    private var tlsConnectStartMillis: Long? = null // Millisecond timestamp when the TLS handshake started
    private var callDuration = 0L // Total request duration
    private var dnsDuration = 0L  // DNS lookup duration
    private var tcpDuration = 0L  // TCP connect duration
    private var tlsDuration = 0L  // TLS handshake duration

    override fun callStart(call: Call) {
        callStartMillis = System.currentTimeMillis()
    }
    override fun callEnd(call: Call) {
        callStartMillis = callStartMillis ?: System.currentTimeMillis()
        callDuration = System.currentTimeMillis() - callStartMillis!!
    }
    override fun callFailed(call: Call, ioe: IOException) {
        callStartMillis = callStartMillis ?: System.currentTimeMillis()
        callDuration = System.currentTimeMillis() - callStartMillis!!
    }
    override fun dnsStart(call: Call, domainName: String) {
        dnsStartMillis = System.currentTimeMillis()
    }
    override fun dnsEnd(call: Call, domainName: String, inetAddressList: List<InetAddress>) {
        dnsStartMillis = dnsStartMillis ?: System.currentTimeMillis()
        dnsDuration = System.currentTimeMillis() - dnsStartMillis!!
    }
    override fun connectStart(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy) {
        tcpConnectStartMillis = System.currentTimeMillis()
    }
    override fun secureConnectStart(call: Call) {
        tlsConnectStartMillis = tlsConnectStartMillis ?: System.currentTimeMillis()
        tcpDuration = System.currentTimeMillis() - tcpConnectStartMillis!!
    }
    override fun secureConnectEnd(call: Call, handshake: Handshake?) {
        tlsDuration = System.currentTimeMillis() - tlsConnectStartMillis!!
    }
    override fun connectFailed(call: Call, inetSocketAddress: InetSocketAddress, proxy: Proxy, protocol: Protocol?, ioe: IOException) {
        tcpDuration = System.currentTimeMillis() - tcpConnectStartMillis!!
    }
}

The approach to collecting timing data is simple: record the start time in the "start" callback and compute the elapsed time in the corresponding "end" callback.

In this way, time-related data is stored in custom event listener instances.

The remaining fields, request URL, request method, response code, and response protocol, are obtained from an interceptor:

class TrackInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        // Continue along the chain of responsibility
        val response = chain.proceed(chain.request())
        response.code           // Response code
        response.protocol       // Response protocol
        response.request.url    // Request URL
        response.request.method // Request method
        return response
    }
}

OkHttp’s interceptor chain is shaped like a “paper clip”: each request passes through an interceptor twice, once on the way out and once when the response comes back:

  • The pass to the right: each interceptor hands the request to the next one (the recursive descent), which corresponds to the chain.proceed() call in the code.
  • The pass to the left: the response travels back to the client in the reverse order of the request (the recursive return), which corresponds to the value returned by chain.proceed().
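
A minimal sketch of these two passes (not part of the article's final implementation): the code before chain.proceed() runs while the request travels out, and the code after it runs when the response travels back:

class TimingInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        // Outbound pass: the request is about to be handed to the next interceptor
        val start = System.currentTimeMillis()
        val response = chain.proceed(chain.request())
        // Return pass: the response has come back through the interceptors after this one
        val elapsed = System.currentTimeMillis() - start
        Log.v("TimingInterceptor", "${chain.request().url} took ${elapsed}ms")
        return response
    }
}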

(For a practical application of OkHttp’s interceptor pattern, see “Interview question | How to write a nice and fast logging library?” on Juejin (juejin.cn).)

Add the tracking interceptor to OkHttpClient so that every request “leaves a trace” as it passes through:

OkHttpClient.Builder()
    .addInterceptor(TrackInterceptor())
    .build()

Identify network requests

Now the question is: how do you connect the data collected in the EventListener with the data collected in the interceptor? In other words, how do you determine that data produced in these two places belongs to the same request?

Generate an ID for each request.

Retrofit lets you supply the factory used to build requests; the factory interface is OkHttp’s Call.Factory:

// okhttp3.Call.Factory
fun interface Factory {
  fun newCall(request: Request): Call
}

The abstract method newCall() is used to define how to build a request.

// A custom Call.Factory that wraps another factory (decorator pattern)
class TrackCallFactory(private val factory: Call.Factory) : Call.Factory {
    private val callId = AtomicLong(1L) // Uniquely identifies a request
    override fun newCall(request: Request): Call {
        val id = callId.getAndIncrement() // Generate a new request ID
        // Rebuild the Request and attach the request ID as its tag
        val newRequest = request.newBuilder().tag(id).build()
        // Delegate to the decorated factory with the new request
        return factory.newCall(newRequest)
    }
}

Customizing a Call.Factory and passing another factory instance into its constructor “reuses the existing call-building behavior”, and on top of that extends it with new behavior: attaching an ID to every request. (This is the decorator pattern; for an explanation, see the author’s decorator-pattern article on Juejin (juejin.cn).)

The request ID is a Long that increments from 1 and is bound to the request via its tag.

Then specify Factory when building an instance of Retrofit:

// Build an OkHttpClient instance, which is itself a Call.Factory
val okHttpClient = OkHttpClient.Builder()
    .addInterceptor(TrackInterceptor())
    .build()
// Build the Retrofit instance
val retrofit: Retrofit = Retrofit.Builder()
    .baseUrl("https://www.example.com/") // Required by Retrofit; placeholder value
    .callFactory(TrackCallFactory(okHttpClient))
    .addConverterFactory(GsonConverterFactory.create())
    .build()

Once the request ID is bound to the Request object, it can be retrieved in the tracking interceptor:

class TrackInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        // Get the request ID
        val callId = chain.request().tag() as? Long
        callId?.let {
            response.code           // Response code
            response.protocol       // Response protocol
            response.request.url    // Request URL
            response.request.method // Request method
        }
        return response
    }
}

Each callback to EventListener also provides a Call object from which to obtain the Request object:

class TrackEventListener : EventListener() {
    override fun callStart(call: Call) {
        // Get the request ID inside the callback
        val callId = call.request().tag() as? Long
    }
    ...
}

Aggregating the data

Data from the event listener and the interceptor must be aggregated to produce the complete network data for a single request.

The idea is to write all this data into a list:

typealias CallInfo = Triple<Long, String, Any>

val datas = mutableListOf<CallInfo>()

Each piece of tracking data is designed as a Triple: the first element is the request ID, the second is the key, and the third is the value.

To make the triple more semantic and less verbose, it is given the name CallInfo via the typealias syntax.

The data container is designed as a top-level CallInfo list for easy access everywhere.

Write data to the list in the interceptor:

class TrackInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        val callId = chain.request().tag() as? Long
        callId?.let {
            datas.add(CallInfo(it, "code", response.code))
            datas.add(CallInfo(it, "protocol", response.protocol))
            datas.add(CallInfo(it, "url", response.request.url))
            datas.add(CallInfo(it, "method", response.request.method))
        }
        return response
    }
}

Write data to the list in the event listener:

class TrackEventListener : EventListener() {
    override fun callEnd(call: Call) {
        val callId = call.request().tag() as? Long
        callId?.let { datas.add(CallInfo(it, "duration", callDuration)) }
    }
    ...
}

Data corruption under concurrent requests

OkHttp requests are concurrent, and if all requests share an event listener, the data can be corrupted.

Each request therefore needs its own event listener. OkHttp provides a factory interface for creating event listeners:

// okhttp3.EventListener.Factory
fun interface Factory {
  fun create(call: Call): EventListener
}

The abstract method create() defines how to build the corresponding event listener for a request. The custom factory is as follows:

object TrackEventListenerFactory : EventListener.Factory {
    override fun create(call: Call): EventListener {
        val callId = call.request().tag() as? Long // Get the request ID
        return TrackEventListener(callId)          // Pass the request ID to the event listener
    }
}

// The custom event listener changes accordingly: it now takes callId as a constructor parameter
class TrackEventListener(private val callId: Long?) : EventListener() { ... }

Then set the factory when building the OkHttpClient instance:

OkHttpClient.Builder()
    .addInterceptor(TrackInterceptor())
    .eventListenerFactory(TrackEventListenerFactory) // Specify the event listener factory
    .build()

Data container selection

CopyOnWriteArrayList

Even so, the data can still get corrupted.

Because OkHttp executes asynchronous requests on different threads, the event listener callbacks also run on different threads. Multiple threads therefore write to the data container concurrently, which raises thread-safety issues.
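
To see why a thread-safe container is needed at all, here is a small standalone sketch (not from the original article) that writes to a plain mutableListOf from many threads; it typically ends up losing elements or throwing ArrayIndexOutOfBoundsException:

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

fun main() {
    val plainList = mutableListOf<Int>() // Backed by a non-thread-safe ArrayList
    val pool = Executors.newFixedThreadPool(8)
    repeat(10_000) { i ->
        pool.execute { plainList.add(i) } // Concurrent, unsynchronized writes
    }
    pool.shutdown()
    pool.awaitTermination(10, TimeUnit.SECONDS)
    // Typically prints a size below 10000, or some adds crash with ArrayIndexOutOfBoundsException
    println("size = ${plainList.size}")
}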

The first thread-safe container that came to mind was CopyOnWriteArrayList, which led to the first version of the network burying point:

class TrackEventListener(private val callId: Long?) : EventListener() {
    private var callStartMillis: Long? = null
    private var callDuration = 0L

    companion object {
        // Data container
        private val trackers = CopyOnWriteArrayList<Triple<Long, String, Any>>()

        // Write data
        fun put(callId: Long, key: String, value: Any) {
            trackers.add(Triple(callId, key, value))
        }

        // Consume data: read all records of one request and organize them into a map
        fun get(callId: Long): Map<String, Any> =
            trackers.filter { it.first == callId }
                .map { it.second to it.third }
                .let { mapOf(*it.toTypedArray()) }

        // Remove all data of one request
        fun removeAll(callId: Long) {
            if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.N) {
                trackers.removeIf { it.first == callId }
            } else {
                synchronized(trackers) {
                    trackers.removeAll { it.first == callId }
                }
            }
        }

        // Callback that delivers the data to the upper layer
        var networkTrackCallback: NetworkTrackCallback? = null
    }

    override fun callStart(call: Call) {
        callId?.let { callStartMillis = System.currentTimeMillis() }
    }

    override fun callEnd(call: Call) {
        callId?.let { id ->
            callStartMillis = callStartMillis ?: System.currentTimeMillis()
            callDuration = System.currentTimeMillis() - callStartMillis!!
            // Write data
            put(id, "duration", callDuration)
            // Deliver the data to the upper layer
            networkTrackCallback?.onCallEnd(get(id))
            // Remove all data of the current request
            removeAll(id)
        }
    }
    ...
}

// Network data callback
interface NetworkTrackCallback {
    fun onCallEnd(map: Map<String, Any>)
}

The tracking interceptor is also changed so that it writes data into the CopyOnWriteArrayList:

class TrackInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val response = chain.proceed(chain.request())
        val callId = chain.request().tag() as? Long
        callId?.let {
            TrackEventListener.put(it, "code", response.code)
            TrackEventListener.put(it, "protocol", response.protocol)
            TrackEventListener.put(it, "url", response.request.url)
            TrackEventListener.put(it, "method", response.request.method)
        }
        return response
    }
}

For a single request, data collection starts at callStart() and ends at callEnd(), so the data-consuming logic is written in callEnd().

Consuming the data means traversing the container, filtering out all records with the given request ID, organizing their keys and values into a Map, and passing that map to NetworkTrackCallback.onCallEnd(map: Map<String, Any>). By implementing this interface, the business layer receives the tracking data strongly related to the network; it only needs to append the remaining fields to the map and report everything to the cloud.
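
For example, the business layer might hook up the callback roughly like this; getLocalIp(), getNetworkType(), getUserId() and report() are hypothetical app-side helpers, not part of any library:

// Sketch: the business layer consumes the per-request map and appends the non-network fields
TrackEventListener.networkTrackCallback = object : NetworkTrackCallback {
    override fun onCallEnd(map: Map<String, Any>) {
        val fullData = map + mapOf(
            "ip" to getLocalIp(),               // 1. IP address (hypothetical helper)
            "networkType" to getNetworkType(),  // 2. Network type (hypothetical helper)
            "userId" to getUserId()             // 3. User ID (hypothetical helper)
        )
        report(fullData) // Upload to the cloud (hypothetical uploader)
    }
}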

CopyOnWriteArrayList isn’t really a bad choice, except for slightly worse performance.

CopyOnWriteArrayList uses an array as a container. Each time it writes data, it copies the original array to a new block of memory, appends data to the end of the new array, and finally points the array reference to the new array:

public class CopyOnWriteArrayList<E> {
    // Backing array
    private transient volatile Object[] array;

    // Insert an element
    public boolean add(E e) {
        synchronized (lock) { // Prevent concurrent writes from creating multiple copies
            Object[] elements = getArray();
            int len = elements.length;
            // Copy the original array
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            // Append the element to the end of the new array
            newElements[len] = e;
            // Point the container reference at the new array
            setArray(newElements);
            return true;
        }
    }
}

CopyOnWriteArrayList’s write operation synchronizes on a lock object, so writes cannot truly run concurrently: while one thread holds the lock, the other competing threads are blocked.

In contrast, CopyOnWriteArrayList performs very well in reading data:

// java.util.concurrent.CopyOnWriteArrayList

public E get(int index) {
    return get(getArray(), index);
}

private E get(Object[] a, int index) {
    return (E) a[index];
}

final Object[] getArray() {
    return array;
}

A read is just a plain array access, which is very fast.

But because of the object lock and the array copy on every write, CopyOnWriteArrayList cannot write concurrently, and frequent writes increase memory pressure.

ConcurrentLinkedQueue

It is common for an app to request multiple endpoints concurrently, so for network request tracking there will be scenarios where multiple threads write to the data container at the same time.

Using CopyOnWriteArrayList as the data container not only increases memory pressure, it can even delay the handling of responses: when multiple responses arrive at the same time, only one thread can hold the lock while writing its tracking data, and the other threads have to wait.

Which linear container can actually write concurrently? ConcurrentLinkedQueue!

ConcurrentLinkedQueue is a queue that enqueues at the tail and dequeues at the head. Its storage structure is a singly linked list with head and tail pointers. It achieves thread safety in a non-blocking way: instead of locks, it uses CAS plus volatile to safely update the head and tail pointers.

(For a more detailed analysis of ConcurrentLinkedQueue, see “Interview question | Can you hand-write a ConcurrentLinkedQueue?” on Juejin.)
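
As a rough illustration of that idea, here is a simplified lock-free queue sketch (not the JDK’s actual implementation): the head and tail are AtomicReferences (volatile + CAS) instead of lock-protected fields.

import java.util.concurrent.atomic.AtomicReference

class SimpleLockFreeQueue<E : Any> {
    private class Node<T>(val value: T?) {
        val next = AtomicReference<Node<T>?>(null)
    }

    private val dummy = Node<E>(null)          // Sentinel node
    private val head = AtomicReference(dummy)  // Dequeue side
    private val tail = AtomicReference(dummy)  // Enqueue side

    fun offer(value: E) {
        val newNode = Node<E>(value)
        while (true) {
            val curTail = tail.get()
            // Try to link the new node after the current tail; fails if another thread won the race
            if (curTail.next.compareAndSet(null, newNode)) {
                // Swing the tail pointer forward (another thread may help do this)
                tail.compareAndSet(curTail, newNode)
                return
            } else {
                // Another thread appended first: help advance the tail, then retry
                tail.compareAndSet(curTail, curTail.next.get()!!)
            }
        }
    }

    fun poll(): E? {
        while (true) {
            val curHead = head.get()
            val first = curHead.next.get() ?: return null // Queue is empty
            // Advance the head past the consumed node with CAS
            if (head.compareAndSet(curHead, first)) {
                return first.value
            }
        }
    }
}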

In theory, ConcurrentLinkedQueue should perform better than CopyOnWriteArrayList as the data container in this scenario, because it allows truly concurrent writes and its linked structure avoids array copying.
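
Swapping the container is a small change. Below is a sketch, under the assumption that the rest of the first version stays the same, of what the companion object might look like with ConcurrentLinkedQueue:

companion object {
    // ConcurrentLinkedQueue in place of CopyOnWriteArrayList
    private val trackers = ConcurrentLinkedQueue<Triple<Long, String, Any>>()

    fun put(callId: Long, key: String, value: Any) {
        trackers.add(Triple(callId, key, value)) // Lock-free enqueue at the tail
    }

    fun get(callId: Long): Map<String, Any> =
        trackers.filter { it.first == callId }
            .map { it.second to it.third }
            .let { mapOf(*it.toTypedArray()) }

    fun removeAll(callId: Long) {
        // Kotlin's removeAll(predicate) iterates the queue and removes matching records
        trackers.removeAll { it.first == callId }
    }

    var networkTrackCallback: NetworkTrackCallback? = null
}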

Write a demo to verify the performance difference:

class ConcurrentActivity : AppCompatActivity() {
    // Simulates a multi-threaded environment with up to 64 parallel tasks
    private val executor = Executors.newFixedThreadPool(64)
    // CopyOnWriteArrayList container
    private val cowArrayList = CopyOnWriteArrayList<CallInfo>()
    // ConcurrentLinkedQueue container
    private val concurrentQueue = ConcurrentLinkedQueue<CallInfo>()
    private val mainScope = MainScope()
    private var start = 0L

    // Returns the container under test; switch between cowArrayList and concurrentQueue
    private fun getList() = cowArrayList

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Start timing for the performance test
        start = System.currentTimeMillis()
        // Simulate 1000 requests being issued (concurrent write scenario)
        repeat(1000) {
            executor.execute {
                getList().add(CallInfo(it.toLong(), "code", 200))
                getList().add(CallInfo(it.toLong(), "url", "https://www.ddd.com"))
                getList().add(CallInfo(it.toLong(), "protocol", "QUIC"))
                getList().add(CallInfo(it.toLong(), "method", "GET"))
            }
        }
        // Simulate the responses of 1000 requests
        mainScope.launch {
            // Simulate consuming the network data
            repeat(1000) { callId ->
                executor.execute {
                    getList().add(CallInfo(callId.toLong(), "duration", 10000))
                    // Consume the network data
                    get(callId.toLong())
                    getList().removeIf { it.first == callId.toLong() }
                }
            }
            // Wait for all asynchronous tasks to complete
            executor.shutdown()
            executor.awaitTermination(Long.MAX_VALUE, TimeUnit.SECONDS)
            val size = getList().size
            // Log the elapsed time
            Log.v("ttaylor", "onCreate() size=${size} consume=${System.currentTimeMillis() - start}")
        }
    }

    // Consume the network data of one request and organize it into a map
    private fun get(callId: Long): Map<String, Any> {
        return getList().filter { it.first == callId }
            .map { it.second to it.third }
            .let { mapOf(*it.toTypedArray()) }
    }
}

// Network data entity
typealias CallInfo = Triple<Long, String, Any>

The demo simulates 1000 network requests with a maximum concurrency of 64, and measures the total time and memory consumed while processing all network data with each container. The logs are as follows:

// CopyOnWriteArrayList
ttaylor: onCreate() size=0 consume=455
ttaylor: onCreate() size=0 consume=337
ttaylor: onCreate() size=0 consume=262
ttaylor: onCreate() size=0 consume=247

// ConcurrentLinkedQueue
ttaylor: onCreate() size=0 consume=155
ttaylor: onCreate() size=0 consume=102
ttaylor: onCreate() size=0 consume=102
ttaylor: onCreate() size=0 consume=103

The time performance gap is significant, with ConcurrentLinkedQueue consuming all network data in less than half the time of CopyOnWriteArrayList.

Observing memory with Android Studio’s Profiler, the demo’s baseline memory was a stable 111 MB; with CopyOnWriteArrayList the peak climbed to 155 MB, while with ConcurrentLinkedQueue the peak was 132 MB.

Conclusion

  • OkHttp + Retrofit network tracking collects data through an event listener (EventListener) and an interceptor.
  • For concurrent requests, each request is assigned its own EventListener to keep the data from getting mixed up.
  • The container holding the tracking data must be thread-safe.
  • CopyOnWriteArrayList is a thread-safe linear container backed by an array. Its write operation takes a lock, i.e. concurrent writes are not allowed: a write copies the original array, appends the new element at the end of the new array, and finally points the array reference at the new array. This design enables “read-write concurrency”: one thread can read while another writes (reads include get and iteration).
  • ConcurrentLinkedQueue is a thread-safe linear queue backed by a singly linked list. Its read and write operations take no locks, it supports truly concurrent writes, and thread safety is achieved with CAS plus volatile.

Recommended reading

More from the interview series:

Interview question | How to write a nice and fast logging library? – Juejin (juejin.cn)

Interview question | Can you hand-write a ConcurrentLinkedQueue? – Juejin (juejin.cn)

What kind of questions should you ask in an Android interview? – Juejin (juejin.cn)

RecyclerView interview question | Which items in the list below are recycled into the cache pool? – Juejin (juejin.cn)