I gave a presentation on Android performance optimization at the Droidcon NYC conference a few weeks ago.

I spent a lot of time preparing that talk because I wanted to show performance problems from real-life examples, and how to use the appropriate tools to identify them. Not having enough time to show everything, I had to cut the slides in half. In this article, I’ll summarize everything I talked about and show the examples I didn’t have time to discuss.

  • You can watch a video of the report here.
  • The slides can be seen here.

Now, let’s take a closer look at some of the important things I talked about earlier, and hopefully I can explain everything in great depth. Let’s start with the basic principles I follow when optimizing:

My principles

Whenever I deal with or troubleshoot performance issues, I follow these principles:

  • Keep measuring: Optimizing with your eyes is never a good idea. After watching the same animation a few times, you start imagining it running faster and faster. Numbers never lie. Use the tools we’ll discuss to measure your application’s performance several times before and after you make changes.
  • Use slow devices: If you really want to expose all the weak links, slow devices will help you more. Some performance issues may not be present on newer and more powerful devices, but not all users will have the latest and greatest.
  • Trade-offs: Performance optimization is all about trade-offs. You optimize one thing — often at the expense of another. In many cases, another thing that hurts could be the time it takes to find and fix the problem, or it could be the quality of the bitmap, or the amount of data that should be stored in a particular data structure. You should always be prepared to make trade-offs.

Systrace

Systrace is a great tool that you may never have used, perhaps because developers often don’t know what to do with the information it provides.

Systrace gives us an overview of what is currently running on the phone. This tool reminds us that the phone in your hand is actually a powerful computer that can do many things at once. In a recent SDK tools update, the tool gained the ability to analyze the recorded data and surface alerts that help us find problems. Let’s take a look at what a trace file looks like:

You can generate a trace file using the Android Device Monitor tool or from the command line. More information can be found here.

I explained the different parts in the video. The most interesting of these are Alerts and Frames, which show the analysis of the collected data. Let’s look at a trace and select an alert at the top:

This alert reports a View#draw() call that took a long time. We get a description of the alert, including links to documentation and even videos on the subject. Examining the Frames row, we see that each drawn frame has a marker colored green, yellow, or red. A red marker indicates the frame had a performance problem while being drawn. Let’s pick a red frame:

At the bottom we see all the alerts associated with this frame. There are three, one of which is the alert we saw earlier. Let’s zoom in on this frame and expand the “Inflation during ListView recycling” alert at the bottom:

We see that this part takes 32 milliseconds, which breaks the 60-frames-per-second requirement: at 60 fps, each frame must be drawn in no more than 16 milliseconds. Each ListView item inflated in this frame has its own timing information — each item took 6 milliseconds, and there were 5 items in total. The description helps us understand the problem and even offers a solution. Everything in the image above is visual, and we can even zoom in on the “inflate” slice to see which View in the layout took longer to inflate.
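The arithmetic behind those numbers can be sketched in a few lines of plain Java (a minimal illustration, not part of any Android API):

```java
// Frame budget arithmetic: at 60 frames per second, each frame must be
// drawn within 1000 / 60 ≈ 16.7 milliseconds. Five list items inflated
// at 6 ms each already blow past that budget.
public class FrameBudget {
    public static double frameBudgetMs(int fps) {
        return 1000.0 / fps;
    }

    public static void main(String[] args) {
        double budget = frameBudgetMs(60);        // ~16.67 ms per frame
        int inflateCostMs = 5 * 6;                // 5 list items x 6 ms each
        System.out.println(inflateCostMs > budget); // true: this frame is janky
    }
}
```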

Another example of a slow frame drawing:

After selecting a frame, we can press the “M” key to highlight it and see how long it took. In the figure above, drawing this frame took 19 milliseconds. Expanding this frame’s only alert tells us there was a “Scheduling delay.”

A scheduling delay means that the thread processing a particular slice was not scheduled on the CPU for a long time, so the slice took a long time to complete. Selecting the longest slice in the frame gives more details:

Wall Duration is the time from the beginning to the end of the slice. It is called “wall time” because it is as if we timed the thread with a wall clock.

CPU time is the actual amount of time the CPU takes to process this time slice.
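To make the distinction concrete, here is a plain-Java sketch (outside of Android and Systrace) that measures both clocks for a thread that mostly sleeps, using the standard ThreadMXBean API:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Wall time vs CPU time: a sleeping thread accumulates wall time but
// almost no CPU time, just like a slice that isn't being scheduled.
public class WallVsCpu {
    // Returns {wallMs, cpuMs} measured around a sleep.
    static long[] measureSleep(long sleepMs) throws InterruptedException {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long wallStart = System.nanoTime();
        long cpuStart = bean.getCurrentThreadCpuTime();

        Thread.sleep(sleepMs); // off-CPU: the wall clock keeps ticking, the CPU clock doesn't

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (bean.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        return new long[] { wallMs, cpuMs };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] t = measureSleep(100);
        System.out.println("wall=" + t[0] + "ms cpu=" + t[1] + "ms");
        // wall is ~100 ms while cpu stays close to 0
    }
}
```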

It’s worth noting that there’s a big difference between the two times. The slice took 18 milliseconds to complete, but the CPU spent only 4 milliseconds on it. This is a bit strange, and now is a good time to look up and see what the CPU was doing all this time:

All four cores of the CPU are quite busy.

Selecting a thread belonging to the com.udinic.keepbusyApp application shows that, in this example, a different application was keeping the CPU busy instead of yielding resources to our application.

This particular scenario is usually temporary, because other applications don’t usually hog the CPU in the background (right?). The busy threads may also come from other processes of your own application, or even from its main process. Because Systrace is an overview tool, there are limits to how deep it can go. To find out what is keeping the CPU busy, we need another tool called Traceview.

Traceview

Traceview is a performance analysis tool that tells us how long each method took to execute. Let’s look at a trace file:

The tool can be launched from the Android Device Monitor or from code. See here for more information.

Let’s take a closer look at these different columns:

  • Name: The name of the method, shown in the color used to identify it in the figure above.
  • Inclusive CPU time: The CPU time used by this method and all its children (i.e., every method it calls).
  • Exclusive CPU time: The CPU time spent in this method alone.
  • Inclusive/Exclusive real time: The time this method takes from the moment it starts until it completes — the same as “wall time” in Systrace.
  • Calls + Recursion: The number of times this method was called, plus the number of recursive calls.
  • CPU time and real time per call: The average CPU time and real time of a single call to this method. The other time fields show the total across all calls.

I opened an app that didn’t scroll smoothly, started tracing, scrolled for a while and then stopped. Finding the getView() method and expanding it, I saw the following:

This method was called 12 times and each call took the CPU 3 milliseconds, but each call actually took 162 milliseconds! There must be something wrong…

Looking at this method’s children, we can see where the total time went. Thread.join() takes up about 98% of the inclusive real time. This method is used to wait for another thread to finish. The other child is Thread.start(), so my guess was that getView() starts a thread and waits for it to finish.

But where is this thread?

Since getView() doesn’t run the thread’s code directly, that work doesn’t appear among its children. To find it, I looked for the Thread.run() method, which is what gets called on a newly spawned thread. Tracking it down led me to the culprit:

I found that the bgService.doWork() method was called 40 times, each call taking about 14 milliseconds. Since it may be called more than once per getView() call, this explains why each getView() takes so long. This method keeps the CPU busy for a long time. Looking at the exclusive CPU time, we see that it takes up 80% of the CPU time in the entire trace! Sorting the trace by exclusive CPU time is also a great way to find time-wasting methods, as they are likely responsible for the performance problems you are experiencing.
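The anti-pattern found above can be reduced to a minimal plain-Java sketch. BgService-style names and the 50 ms of “work” below are stand-ins, not the app’s real code:

```java
// A "getView-style" method that starts a worker thread and then blocks
// on Thread.join(): its wall time is dominated by waiting for the
// worker, not by its own CPU work -- exactly what Traceview showed.
public class BlockingGetView {
    static void doWork() {
        try {
            Thread.sleep(50); // simulate ~50 ms of background work
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Returns the wall time of one "getView-like" call, in milliseconds.
    static long getViewLikeCall() throws InterruptedException {
        long start = System.nanoTime();
        Thread worker = new Thread(BlockingGetView::doWork);
        worker.start();
        worker.join(); // the calling (UI) thread now just waits
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("took " + getViewLikeCall() + " ms"); // ~50 ms, almost all of it waiting
    }
}
```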

Tracing time-sensitive methods, such as getView(), View#onDraw(), and others, helps us find out why the application is slow. But sometimes other things keep the CPU busy and eat up valuable cycles that could be used to draw the UI and keep the application smooth. The garbage collector occasionally runs to clean up objects that are no longer used, and it usually doesn’t have much impact on an application running in the foreground. But if the GC runs too often, it slows the application down — and our app takes the blame for it…

Memory Performance Analysis

Android Studio has improved a lot recently, with more and more tools available to help identify and analyze performance issues. The Memory tab of the Android window shows how much data is allocated on the heap over time. It looks something like this:

We see a small drop in the figure where a GC event occurs, removing unwanted objects from the heap and freeing up space.

There are two tools available on the left side of the figure: heap dump and allocation tracker.

Heap dumps

To investigate what is allocated on the heap, we can use the heap dump button on the left. This will take a snapshot of what is currently allocated on the heap and present it on the screen as a separate report in Android Studio:

On the left we see a list of the instances on the heap, grouped by class name. For each class we get the number of allocated instances, their total shallow size, and their retained size — how much memory could be freed if these instances were released. This view lets us see our application’s memory footprint and helps us identify large data structures and object relationships. This information helps us build more efficient data structures, break object references to reduce retained memory, and ultimately shrink the footprint as much as possible.

Looking at the list, we see that MemoryActivity has 39 instances, which is strange for an Activity. Selecting one of the instances on the right shows all the references to that instance in the reference tree at the bottom.

One of them is part of an array inside a ListenersManager object. Looking at the other instances of the Activity shows that they are all retained by this object, which explains why objects of this one class take up so much memory.

This situation is known as a “memory leak.” Even though these Activities have been destroyed and are no longer used, they cannot be garbage collected because something still references them. One way to avoid this is to make sure objects are not referenced by other objects that outlive them. In this case, the ListenersManager should not keep the reference after the Activity is destroyed. The solution is to remove the reference in the Activity’s onDestroy() callback.
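Here is a minimal plain-Java sketch of that leak and its fix. The ListenersManager and the Activity lifecycle methods below are simplified stand-ins, not Android framework classes:

```java
import java.util.ArrayList;
import java.util.List;

// A long-lived manager that outlives the "Activities" registered with it:
// anything it still references cannot be garbage collected. The fix is
// symmetric registration -- unregister in onDestroy().
public class LeakDemo {
    static class ListenersManager {
        private final List<Object> listeners = new ArrayList<>();
        void register(Object l) { listeners.add(l); }
        void unregister(Object l) { listeners.remove(l); }
        int count() { return listeners.size(); }
    }

    // App-lifetime object, like the ListenersManager found in the heap dump.
    static final ListenersManager MANAGER = new ListenersManager();

    static class LeakyActivity {
        void onCreate() { MANAGER.register(this); }
        void onDestroy() { MANAGER.unregister(this); } // forget this line and the Activity leaks
    }

    public static void main(String[] args) {
        LeakyActivity activity = new LeakyActivity();
        activity.onCreate();
        activity.onDestroy(); // reference released: the Activity is now collectable
        System.out.println(MANAGER.count()); // 0
    }
}
```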

Memory leaks and other large objects that take up a lot of heap space reduce the available memory and trigger frequent GC events that try to free more space. These GC events keep the CPU busy and drag down the application’s performance. If not enough memory is available for the application and the heap cannot grow any further, the more serious consequence is an OutOfMemoryError, which crashes the application.

The Eclipse Memory Analyzer Tool (Eclipse MAT) is a more advanced option:

This tool does everything Android Studio does, can help identify potential memory leaks, and provides more advanced instance queries, such as finding all Bitmap instances larger than 2 MB or all empty Rect objects.

The LeakCanary library is also a good way to track objects and make sure they don’t leak. If something does leak, you’ll receive a notification telling you where and what happened.

Allocation tracker

On the memory graph, the allocation tracker can be started and stopped with another button on the left. It generates a report of all the instances allocated during that time, grouped by class:

Or grouped by method:

It also has a nice visualization showing which allocations are the largest.

With this information, we can find methods that allocate a lot of memory on time-critical paths, where they are likely to trigger frequent GC events. We can also spot large numbers of short-lived instances of the same type, where an object pool could reduce the number of allocations.
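The object-pool idea mentioned above can be sketched in a few lines of plain Java. The buffer type and sizes are illustrative choices, not a prescribed API:

```java
import java.util.ArrayDeque;

// A minimal object pool: instead of allocating (and soon garbage
// collecting) a new buffer on every use, callers borrow one from the
// pool and return it when done, keeping the GC quiet on hot paths.
public class BufferPool {
    private final ArrayDeque<byte[]> pool = new ArrayDeque<>();
    private final int bufferSize;

    public BufferPool(int bufferSize) { this.bufferSize = bufferSize; }

    public byte[] acquire() {
        byte[] buf = pool.poll();
        return (buf != null) ? buf : new byte[bufferSize]; // allocate only on a pool miss
    }

    public void release(byte[] buf) {
        pool.push(buf); // recycled instead of becoming garbage
    }

    public static void main(String[] args) {
        BufferPool pool = new BufferPool(1024);
        byte[] a = pool.acquire();
        pool.release(a);
        byte[] b = pool.acquire();
        System.out.println(a == b); // true: the same instance was reused
    }
}
```

Android itself ships simple pool helpers in the support library, but even a hand-rolled pool like this one cuts down allocations in tight loops.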

Common memory tricks

Here are some tips and guidelines I use to write code:

  • Enums are a hot topic in performance discussions. Here’s a video about the size of enums, and a discussion of that video and some of its misdirection. Do enums take up more space than plain constants? Definitely. Is that bad? Not necessarily. If you’re writing a library and need strong type safety, an enum can be a better fit than, say, @IntDef. If you just need to group a bunch of constants, an enum may not be worth the cost. As always, weigh the pros and cons when making the decision.
  • Autoboxing — Autoboxing automatically converts primitive types to their object representations (e.g. int -> Integer). Every time a primitive is “boxed” into an object, a new object is created (shocking, I know). If we have a lot of these, the GC runs frequently. The amount of autoboxing is easy to miss, because it happens implicitly whenever a primitive is assigned where an object is expected. One solution is to keep your types consistent: if you work with primitives, avoid having them boxed for no reason. You can use memory profiling tools to spot many objects representing a primitive type, and you can use Traceview to look for calls to Integer.valueOf(), Long.valueOf(), and so on.
  • HashMap vs. ArrayMap / Sparse*Array — HashMap requires objects as keys, which ties into the autoboxing issue. If you use primitive int keys, they get boxed into Integer objects whenever they interact with a HashMap; in that case SparseIntArray is a good fit. If you still want object keys, ArrayMap is an alternative: its API is very similar to HashMap’s, but its underlying mechanism is completely different, trading speed for memory efficiency. Both alternatives have a smaller memory footprint than HashMap, at the cost of slower item retrieval and allocation. Unless you have more than about 1,000 items, the difference in execution time is insignificant, making them viable options for your maps.
  • Be Context-aware — As you saw earlier, memory leaks involving Activities are especially likely. In fact, Activities are the most commonly leaked objects on Android (!), and that probably won’t surprise you. They are also very expensive leaks, because an Activity holds the entire view hierarchy of its UI, which takes up a lot of space. Many operations on the platform require a Context object, and usually an Activity is what gets passed. Make sure you know what happens to that Activity: if a reference to it is cached and the caching object outlives the Activity, leaving the reference uncleared will cause a memory leak.
  • Avoid non-static inner classes — When you instantiate a non-static inner class, you create an implicit reference to the outer instance. If the inner instance lives longer than the outer one, the outer instance is kept in memory even when it is no longer needed. For example: a non-static class extending AsyncTask is created inside an Activity, the asynchronous task is started, and the Activity is destroyed while it runs. The task keeps the Activity alive for as long as it is running. The solution — don’t do that; declare the class as a static inner class if necessary.
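The autoboxing point above is easy to demonstrate in plain Java. The values below are illustrative; on Android, SparseIntArray and ArrayMap exist precisely to avoid this boxing:

```java
import java.util.HashMap;
import java.util.Map;

// Every int key put into a HashMap is boxed via Integer.valueOf().
// Values outside the small cached range (-128..127) get a fresh object
// on each boxing -- garbage created silently on every insertion.
public class AutoboxingDemo {
    public static void main(String[] args) {
        Map<Integer, String> map = new HashMap<>();
        int key = 1000;        // outside the Integer cache range
        map.put(key, "a");     // key is boxed here, invisibly

        Integer boxed1 = key;  // each boxing creates a distinct object
        Integer boxed2 = key;
        System.out.println(boxed1 == boxed2);      // false: different objects
        System.out.println(boxed1.equals(boxed2)); // true: same value

        Integer small1 = 100;  // inside the cached range
        Integer small2 = 100;
        System.out.println(small1 == small2);      // true: cached instance reused
    }
}
```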
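The implicit reference behind the non-static inner class point above is visible with plain reflection: the compiler adds a hidden field (named `this$0`) to every non-static inner class, while a static nested class gets none. This is a small standalone sketch:

```java
import java.lang.reflect.Field;

// A non-static inner class captures a hidden reference to its enclosing
// instance; a static nested class does not. That hidden reference is
// what keeps a destroyed Activity alive when a long-running inner
// AsyncTask (or similar) outlives it.
public class InnerClassDemo {
    class Inner {}            // non-static: captures InnerClassDemo.this
    static class Nested {}    // static: no hidden outer reference

    static boolean hasOuterRef(Class<?> cls) {
        for (Field f : cls.getDeclaredFields()) {
            if (f.getName().startsWith("this$")) return true; // compiler-generated outer ref
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasOuterRef(Inner.class));  // true
        System.out.println(hasOuterRef(Nested.class)); // false
    }
}
```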

GPU Performance Analysis

Android Studio 1.4 adds a new tool for performance analysis of GPU rendering.

Go to the GPU page under the Android window and you’ll see a chart showing how long it took to draw each frame on the screen:

Each line in the figure represents a frame being drawn, and different colors indicate different stages in the process:

  • Draw (blue) — represents the View#onDraw() methods. This stage creates and updates the DisplayList objects, which are later converted into OpenGL commands the GPU can understand. A high value can come from complex views that need more time to create their display lists, or from many views being invalidated in a short period of time.
  • Prepare (purple) — In Lollipop, another thread was added to help the UI thread render the UI faster. It is called the RenderThread, and it is responsible for converting display lists into OpenGL commands and sending them to the GPU; while it works, the UI thread can start processing the next frame. In this stage the UI thread passes all the relevant resources to the RenderThread, which can be time-consuming if there are many resources to pass, such as many or large display lists.
  • Process (red) — Executes the display lists to create the OpenGL commands. This step can take a long time if there are many or complex display lists to execute, because many views need to be redrawn. Views are redrawn when they are invalidated, or when they are exposed by a moving overlapping view.
  • Execute (yellow) — sends the OpenGL commands to the GPU. This part is a blocking call, because the CPU sends a buffer containing the commands to the GPU and expects the GPU to return a clean buffer for the next frame. The number of these buffers is limited, and if the GPU is busy the CPU finds itself waiting for a buffer to be freed. So a high value at this step likely means the GPU is busy drawing a UI too complex to finish in time.

In Marshmallow, more colors were added to represent more steps, such as measurement and layout, input processing and other functions:

Edited on 09/29/2015: John Reck, a framework engineer at Google, added information about the new colors:

The exact definition of “animation” is everything registered with the Choreographer as a CALLBACK_ANIMATION — that is, Choreographer#postFrameCallback and View#postOnAnimation, which are used by view.animate(), ObjectAnimator, Transitions, etc. It’s the same thing as the “animator” section in Systrace.

“Misc” is the delay between the vsync timestamp and the current time. If you have ever seen messages in Choreographer’s logs about missing the vsync by some milliseconds and skipping frames, that delay is what now shows up as “misc”. It is the difference between INTENDED_VSYNC and VSYNC in the frame-stats dump (https://developer.android.com/preview/testing/performance.html#timing-info).

To use this feature, you need to turn on the “Profile GPU rendering” option in the developer options:

The tool itself gets all its information using an ADB command, which is useful for us too! Use the following command:

adb shell dumpsys gfxinfo <PACKAGE_NAME>

We can grab this data ourselves and build the chart above. The command also prints other useful information, such as how many views are in the hierarchy, the size of all the display lists, and more. Marshmallow exposes even more statistics.

If your application has automated UI tests, you can run this command on your server after certain interactions (list scrolls, heavy animations, etc.) and watch values such as “Janky frames” over time. This helps catch a performance degradation after some commits are pushed, giving you time to fix the issue before the application ships. Using the “framestats” keyword, you can get even more detailed drawing information; see here.
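A minimal sketch of such an automated check could look like the following. The per-frame rows (Draw, Prepare, Process, Execute times in milliseconds) mirror the shape of gfxinfo profile data, but the sample numbers below are made up for illustration:

```java
// Counts "janky" frames in profile data: each row holds the per-stage
// times of one frame; a frame whose total exceeds the 16 ms budget is
// counted as janky. A CI job could fail the build when this count grows.
public class JankCounter {
    static final double FRAME_BUDGET_MS = 16.0;

    static int countJankyFrames(String profileData) {
        int janky = 0;
        for (String line : profileData.trim().split("\n")) {
            double total = 0;
            for (String col : line.trim().split("\\s+")) {
                total += Double.parseDouble(col); // sum Draw/Prepare/Process/Execute
            }
            if (total > FRAME_BUDGET_MS) janky++;
        }
        return janky;
    }

    public static void main(String[] args) {
        String sample =
                "2.1 0.8 1.5 3.2\n" +   // 7.6 ms  -> fine
                "9.4 2.1 6.3 4.0\n" +   // 21.8 ms -> janky
                "3.0 1.0 2.0 2.5\n";    // 8.5 ms  -> fine
        System.out.println(countJankyFrames(sample)); // 1
    }
}
```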

But it’s not just looking at charts!

There is also an “On screen as bars” choice under the “Profile GPU rendering” developer option. With it turned on, each window on the screen shows its chart live, with a green line marking the 16-millisecond threshold.

In the example on the right, we see some frames crossing the green line, meaning they took more than 16 milliseconds to draw. Since these bars are mostly blue, we conclude there were many, or complex, views to draw. In this scenario I was scrolling a news-feed list containing different types of views; some views were invalidated, and some were more complex to draw. A frame may cross the threshold because a particularly complex view had to be drawn at that moment.

Hierarchy Viewer

I love this tool, but I’m disappointed that most people don’t use it!

Using the Hierarchy Viewer, we can get performance statistics, see the full view hierarchy on the screen, and inspect the properties of every view. The Hierarchy Viewer can also dump the theme data and show the attribute value of each style, something you can’t do from Android Monitor. I use this tool when designing and optimizing layouts.

In the middle, we see a tree representing the view hierarchy. A view hierarchy can be wide, but if it gets too deep (around 10 levels), the layout and measure phases become expensive. Each time a view is measured in View#onMeasure(), or positions all its children in View#onLayout(), it passes these commands down to its children, which do the same. Some layouts perform each step twice, such as RelativeLayout and some LinearLayout configurations, and the number of passes grows exponentially if they are nested.
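That exponential growth can be simulated with a tiny recursion (a plain-Java model of the pass counting, not framework code):

```java
// Models why double layout passes explode in deep hierarchies: if each
// level measures its subtree twice (as RelativeLayout and some
// LinearLayout configurations do), the number of measurements reaching
// the leaf grows as passes^depth.
public class MeasurePasses {
    static long measureCalls(int depth, int passesPerLevel) {
        if (depth == 0) return 1;  // leaf view: measured once per request
        long calls = 0;
        for (int pass = 0; pass < passesPerLevel; pass++) {
            calls += measureCalls(depth - 1, passesPerLevel); // each pass re-measures the subtree
        }
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(measureCalls(10, 1)); // single-pass: leaf measured once
        System.out.println(measureCalls(10, 2)); // double-pass, 10 levels deep: 2^10 = 1024
    }
}
```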

At the bottom right, we see a “blueprint” of the layout, showing the location of each view. Selecting a view here, or in the tree above, shows its properties on the left. When designing a layout, I’m sometimes not sure why a particular view ends up where it does; with this tool, I can track it down in the tree, select it, and see where it sits in the preview. By seeing the final size of views on the screen, I can design interesting animations and use this information to move things precisely. I can also find views that are unintentionally covered by other views, and much more.

For each view and its children, we have the time it takes to measure, lay out, and draw them. The colors indicate how well this view performs compared to other views, making it easy to find the weakest links. Because we are also seeing a preview of the view, we can go through the tree and follow the steps that created it, finding and removing the redundant steps. One thing that can have a big impact on performance is Overdraw.

Overdraw

As we saw in the GPU profiling section, if the GPU has a lot to draw to the screen, the Execute stage (yellow in the chart) takes longer to complete, increasing the time needed to draw each frame. Overdraw occurs when we draw something on top of something else, such as a yellow button on a red background: the GPU must draw the red background first and the yellow button on top of it, so some overdraw is inevitable. But many overdrawn layers keep the GPU very busy and make the 16 ms budget hard to meet.

Using the “Debug GPU overdraw” setting in the developer options, every region is colored to indicate how badly it is overdrawn. A 2x or 3x overdraw is OK, and even small areas of light red aren’t too bad, but if we see a lot of red on the screen, we may have a problem. Let’s look at some examples:

In the example on the left, the list is painted green, which is usually fine, but an overlay at the top turns it red, and that starts to be a problem. In the example on the right, the entire list is light red. In both examples the list is opaque with 2x or 3x overdraw: the window, the Activity or Fragment layout, and the list or its items may each have a full-screen background color. We can fix this by setting a background color on only one of them.

Note: the default theme declares a full-screen background color for the window. If an Activity uses an opaque layout covering the entire screen, removing the window background eliminates one layer of overdraw. This can be done in the theme, or in code by calling getWindow().setBackgroundDrawable(null) in onCreate().

Using the Hierarchy Viewer, you can export all the layers of the hierarchy into a PSD file that opens in Photoshop. Examining the different layers there reveals all the overdraw in the layout. Use this information to reduce overdraw — and don’t stop at green, go for blue!

Alpha

Using transparency can have implicit performance issues, and to understand why — let’s look at what happens when you give a view an alpha value. Consider the following layout:

This layout has three ImageViews, one on top of the other. The direct and simple implementation of setAlpha() propagates the call to the child views; each ImageView is then drawn with that alpha value to the frame buffer. The result:

That’s not what we want to see.

Because each ImageView was drawn with the alpha value, the overlapping images blend into each other. Fortunately, the framework has a way around this problem: the layout is copied into an off-screen buffer, the alpha is applied to the buffer as a whole, and the buffer is then copied to the frame buffer. The result:

But… We still paid the price.

Drawing the view into an off-screen buffer before drawing it to the frame buffer effectively adds another, undetected layer of overdraw. The framework doesn’t know in advance which of the two approaches is safe, so it defaults to the complex one. But there are ways to set an alpha value and still avoid the extra off-screen buffer:

  • TextView — Use setTextColor() instead of setAlpha(). Using a text color with an alpha channel makes the text be drawn directly with that alpha.
  • ImageView — Use setImageAlpha() instead of setAlpha(). The reasoning is the same as for TextView.
  • Custom views — If your custom view doesn’t support overlapping views, this complex behavior is irrelevant: there is no way its children could blend into each other as in the example above. By overriding the hasOverlappingRendering() method and returning false, we tell the system to take the direct, simple path. By overriding the onSetAlpha() method and returning true, we can handle manually whatever needs to happen when an alpha value is set.
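The blending difference described above can be reduced to single-channel arithmetic (a standalone sketch, not Android rendering code):

```java
// Two opaque views (back=100, front=200) overlap over a black background,
// and we want the pair at 50% alpha. Applying alpha per view lets the
// back view bleed through the front; compositing into an off-screen
// buffer first and fading the whole buffer gives the intended result.
public class AlphaBlend {
    // Standard "source over" blend for one color channel.
    static double over(double src, double srcAlpha, double dst) {
        return src * srcAlpha + dst * (1 - srcAlpha);
    }

    // Per-view alpha: each view blends with what's already there.
    static double perViewAlpha(double bg, double back, double front, double alpha) {
        double r = over(back, alpha, bg);
        return over(front, alpha, r);         // back shows through the front
    }

    // Layer alpha: composite fully first, then fade the layer once.
    static double layerAlpha(double bg, double back, double front, double alpha) {
        double layer = over(front, 1.0, back); // opaque front fully covers back
        return over(layer, alpha, bg);
    }

    public static void main(String[] args) {
        System.out.println(perViewAlpha(0, 100, 200, 0.5)); // 125.0 (blended, wrong)
        System.out.println(layerAlpha(0, 100, 200, 0.5));   // 100.0 (only the front is seen)
    }
}
```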

Hardware acceleration

Hardware acceleration was introduced in Honeycomb, bringing a new drawing model for rendering applications on the screen. It introduced the DisplayList structure, which records the view’s drawing commands for faster rendering. But there’s another great feature developers tend to overlook or misuse — view layers.

With view layers, we can render a view into an off-screen buffer (as we saw earlier with alpha) and manipulate it as we wish. The feature is great mainly for animations, because it lets us animate complex views faster. Without layers, changing an animated property (X coordinate, scale, alpha, etc.) invalidates the view; for complex views, the invalidation propagates to all the child views, which redraw themselves — an expensive operation. By using a hardware-backed view layer, the GPU creates a texture for the view. Several operations can be applied to that texture without invalidating the view, such as X/Y position, rotation, alpha and more. This means we can animate a complex view during an animation without invalidating it at all, making the animation much smoother! Here is sample code:

// Using an ObjectAnimator
view.setLayerType(View.LAYER_TYPE_HARDWARE, null);
ObjectAnimator objectAnimator = ObjectAnimator.ofFloat(view, View.TRANSLATION_X, 20f);
objectAnimator.addListener(new AnimatorListenerAdapter() {
    @Override
    public void onAnimationEnd(Animator animation) {
        view.setLayerType(View.LAYER_TYPE_NONE, null);
    }
});
objectAnimator.start();

// Using the view property animator
view.animate().translationX(20f).withLayer().start();

Simple, right?

Sure, but there are a few things to keep in mind when using the hardware layer:

  • Clean up after yourself — a hardware layer takes up space on a limited component (the GPU’s memory). Use layers only for as long as you need them, such as during an animation, and release them afterwards. In the ObjectAnimator example above, I added a listener that removes the layer when the animation ends. In the property-animator example, I used the withLayer() call, which creates the layer at the start and removes it automatically when the animation finishes.
  • If you change a view that has a hardware layer, the layer is invalidated and the view is completely redrawn into the off-screen buffer. This happens whenever you change a property the hardware layer can’t handle on its own (currently only these are optimized: rotation, scaling, X/Y translation, pivot point and alpha). For example, if you animate a view with a hardware layer while also updating its background color as it moves across the screen, the hardware layer is constantly updated. Updating the layer is expensive, and using one in this case is pointless.

A good way to spot the second problem is to visualize these hardware-layer updates: in the developer options, turn on the “Show hardware layers updates” option.

With this option enabled, a view flashes green when it updates its hardware layer. I once worked on a ViewPager that didn’t swipe as smoothly as I wanted. After turning this option on, I kept swiping the ViewPager and saw the following:

Both pages turn green throughout the slide!

This means hardware layers were created for both pages, and both were being invalidated while swiping the ViewPager. I was updating the pages as I swiped them, applying a parallax effect on the background and incrementally animating the items on the page. I hadn’t created hardware layers for the pages myself; reading the ViewPager source code, I found that layers are created for both pages when the user starts swiping and removed when the swiping stops.

Creating hardware layers while swiping makes sense in general: pages usually don’t change while we swipe a ViewPager, and since they are quite complex, a hardware layer can draw them quickly. But that wasn’t the case in the app I was working on, where the pages do change during the swipe, so I had to use a few tricks to remove these hardware layers.

Hardware layers aren’t a silver bullet. It’s important to understand how they work and to use them properly, otherwise you could run into bigger problems.

DIY

While preparing all the demos for the talk, I wrote a lot of code to simulate these situations. You can find it all in a Github repository and on Google Play. I split the different scenarios into separate Activities and documented them to help you understand what kind of problems each one demonstrates. Read the Javadoc of the Activities, open the tools, and play with the app.

For more information

As the Android operating system evolves, you get more ways to optimize your applications. The Android SDK keeps introducing new tools, and new features (such as hardware layers) arrive in the system. You should keep up with the times, and weigh the pros and cons before making changes.

There’s a great YouTube playlist called Android Performance Patterns, with lots of short videos from Google explaining different performance topics. You can find comparisons of different data structures (HashMap vs. ArrayMap), bitmap optimization, and even how to optimize network requests. I highly recommend going through them all.

Join the Google+ Android Performance Patterns community to discuss performance issues with others, including Google engineers, and share ideas, articles, and questions.

More interesting links:

  • Learn how the Android graphics architecture works. This document contains everything you need to know about how Android draws its UI, explaining the different system components, such as SurfaceFlinger, and how they communicate with each other. It’s a long one, but worth reading.
  • A talk from Google I/O 2012 shows how the drawing model works, and how and why UI drawing can stutter.
  • An Android performance keynote at Devoxx 2013 shows some optimizations to draw models on Android 4.4 and demonstrates different tools for optimizing performance (Systrace, overdraw, etc.).
  • This is a good article on preventive optimization, and how it differs from premature optimization. Many developers don’t optimize their code because they think the impact won’t be noticeable. Remember: all the small problems add up to one big problem. If you have a chance to optimize a small piece, don’t rule it out just because it seems minor.
  • Memory management on Android – an old video from Google IO 2011, but still relevant. It shows you how to manage your application’s memory on Android and how to use tools like Eclipse MAT to find problems.
  • Google engineer Romain Guy did a case study on how to optimize a common Twitter client. In this case, Romain tells us how he found the application’s performance problems and suggested how to fix them. Subsequent articles describe other problems with this optimized program.

I hope you now have enough information to feel more confident about optimizing your application starting today!

Start by recording some traces and turning on the relevant developer options. Feel free to share what you find in the comments or in the Android Performance Patterns community on Google+.
