10 ways to make Python run super fast

Most people use Python because it is convenient, not because it is fast. Despite its plethora of third-party libraries, Python lags behind Java and C in raw performance. That's understandable, because in most cases speed of development takes precedence over speed of execution.

But don't worry too much about Python's speed; it's not necessarily an either-or proposition. With proper optimization, Python applications can run surprisingly fast: perhaps not as fast as Java or C, but fast enough for web applications, data analysis, management and automation tools, and most other uses. Fast enough that you may forget you're trading application performance for developer productivity.

Optimizing Python performance cannot be approached from a single angle. Instead, survey all the available optimization methods and apply the combination that best fits the scenario at hand. (The Dropbox folks have one of the most eye-popping examples of the power of Python optimization; check out the link.)

In this article, I'll briefly cover a number of common Python optimizations. Some are simple drop-in measures that merely swap one component for another (such as changing the Python interpreter), but the ones that bring the most benefit require more detailed work.

1. Measure, measure, measure!

If you can't measure what's causing the slowness, you can't know why your Python application isn't running as fast as it should.

There are many ways to time Python code. You can try Python's built-in cProfile module for simple profiling, or, if you need more precision (per-line timings), the third-party line_profiler tool. In general, profiling at the level of individual functions is enough to point you toward an improvement, so the third-party profilehooks library, which reports the runtime of individual functions, is also worth a look.

You may need to do a little digging to find out why something in your program is so slow and how to fix it. The point is to narrow your search down to a single statement.
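As a minimal sketch of the profiling step above, here is how you might use the built-in cProfile and pstats modules to find hot spots (the function name and workload are illustrative, not from the article):

```python
import cProfile
import io
import pstats

def build_report():
    # A deliberately wasteful function to give the profiler something to see
    return sum(sum(range(i)) for i in range(2000))

profiler = cProfile.Profile()
profiler.enable()
total = build_report()
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
print(buffer.getvalue())
```

The report lists each function with its call count and cumulative time, which is usually enough to narrow the search to a single statement.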

2. Cache data that needs to be reused

Don't repeat a calculation thousands of times when you can compute it once and save the result. If you have a frequently used function that returns predictable results, Python gives you the option of caching those results in memory. Subsequent calls with the same arguments then return the result immediately.

There are several ways to do this. For example, the standard-library functools module provides a decorator, @functools.lru_cache, that caches the most recent N calls to a function. This is useful when the cached value stays constant over a certain period, such as listing the items used over the last day.
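A minimal example of the decorator described above (the Fibonacci function is just an illustrative stand-in for an expensive, predictable computation):

```python
from functools import lru_cache

@lru_cache(maxsize=128)  # remember up to 128 distinct argument combinations
def fib(n):
    # Without the cache this recursion is exponential; with it, each
    # value is computed only once
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(60))          # 1548008755920
print(fib.cache_info()) # hits/misses counters for the cache
```

`fib.cache_info()` lets you verify that repeated calls are actually being served from the cache rather than recomputed.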

3. Refactor the mathematical calculation into NumPy

If your Python program contains matrix- or array-based math and you want to evaluate it more efficiently, use NumPy: it does the heavy lifting via C libraries, handles arrays faster than the native Python interpreter, and stores numeric data more efficiently than Python's built-in data structures.

NumPy can also greatly speed up relatively ordinary math. The package provides replacements for many common Python mathematical operations, such as min and max, that are many times faster than the Python originals.

Another advantage of NumPy is more efficient memory use for large objects, such as lists with millions of items. Roughly speaking, large numeric objects stored in NumPy take about a quarter of the memory they would need if represented in conventional Python.

Rewriting a Python algorithm to use NumPy requires some work, because array objects need to be redeclared using NumPy's syntax. But NumPy uses Python's existing idioms (+, -, and so on) for the actual mathematical operations, so switching to NumPy isn't too disorienting.
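As a small sketch of the switch described above (assuming NumPy is installed), here is the same sum-of-squares computed with a vectorized NumPy expression and with a plain Python loop:

```python
import numpy as np

# The integers 0..999 as a NumPy array
values = np.arange(1000, dtype=np.int64)

# Vectorized: one expression, with the loop executed in compiled C code
numpy_total = int(np.sum(values * values))

# Plain Python equivalent, for comparison
python_total = sum(i * i for i in range(1000))

print(numpy_total)  # 332833500
```

Note that the arithmetic uses the ordinary `*` operator; only the container changed.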

4. Use C libraries

NumPy's use of libraries written in C points to a broader strategy: if an existing C library meets your needs, Python and its ecosystem offer several ways to connect to that library and take advantage of its speed.

The most common method is Python's ctypes library. Because ctypes is broadly compatible across Python applications, it's the best place to start, but it's not the only option. The CFFI project provides a more elegant interface to C, and Cython (see point 5 below) can also be used to wrap external libraries, at the cost of having to learn Cython's markup.
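As a minimal sketch of the ctypes approach, here is a call into the C math library's sqrt (the library lookup and fallback are platform assumptions; on glibc systems either path works):

```python
import ctypes
import ctypes.util

# Locate the C math library; fall back to the symbols already loaded
# into the current process if the lookup fails
libm_name = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_name) if libm_name else ctypes.CDLL(None)

# Declare the C signature: double sqrt(double)
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

print(libm.sqrt(2.0))  # 1.4142135623730951
```

Declaring `argtypes` and `restype` is the important part: without them, ctypes defaults to int conversions and silently returns garbage for floating-point functions.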

5. Switch to Cython

If you really care about speed, you should use C instead of Python; but for people like me who are addicted to Python, C carries a built-in fear factor. Fortunately, there is a good middle ground.

Cython lets Python users conveniently reach C speeds. Existing Python code can be converted to C step by step: first compile the code to C via Cython, then add type annotations for more speed.

Cython can't work magic, though. Code converted to Cython as-is usually doesn't run more than 15 to 50 percent faster, because most of the optimization at that level consists of reducing the overhead of the Python interpreter. The speed increase is greatest when you provide type annotations for a Cython module, allowing the code in question to be converted to pure C.
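As a rough, illustrative sketch of what such type annotations look like (this would live in a `.pyx` file and be compiled with Cython's build tooling; the function is a stand-in, not from the article):

```cython
# fib.pyx -- the cdef declarations tell Cython to use plain C integers,
# letting the loop compile to pure C instead of Python object operations
def fib(int n):
    cdef int i
    cdef long a = 0, b = 1
    for i in range(n):
        a, b = b, a + b
    return a
```

The same function without the `int`/`long` annotations would still compile, but each addition would go through Python's object machinery and gain far less.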

6. Use multiple threads

Because of the global interpreter lock (GIL), Python executes only one thread at a time, which avoids state problems when multiple threads are used. The GIL exists for good reasons, but it's still annoying.

The GIL has become significantly more efficient over time (one of the reasons to use Python 3), but the core problem remains. To work around it, Python provides the multiprocessing module, which runs multiple instances of the Python interpreter on separate cores. State can be shared through shared memory or a server process, and data can be passed between process instances via queues or pipes.

You still have to manage state between processes manually, and there is some overhead in starting multiple Python instances and passing objects between them. Still, the multiprocessing module is useful. In addition, Python modules and packages that use C libraries (such as NumPy) sidestep the GIL entirely, which is another reason to recommend them for speed.

7. Know what your libraries are doing

It's convenient to simply type import xyz, but third-party libraries can change application performance, and not always for the better.

Sometimes adding a module slows the application down, because a component of that particular library becomes a bottleneck. Again, careful timing helps, but sometimes the cause is less obvious. For example, Pyglet, a library for creating windowed graphical applications, automatically enables a debug mode that significantly hurts performance until it is explicitly disabled. You may never realize this unless you read the documentation, so read as much of it as you can.

8. Be aware of speed differences between platforms

Python runs cross-platform, but that doesn't mean every quirk of each operating system (Windows, Linux, macOS) is abstracted away. In most cases you need to know platform details, such as path-naming conventions.

It's also important to understand platform differences when it comes to performance. For example, Python scripts that use Windows APIs to access specific applications may run more slowly there.

9. Run the program under PyPy

CPython, the most commonly used implementation of Python, prioritizes compatibility over raw speed. For programmers who put speed first, PyPy is a better option: it is equipped with a JIT compiler that speeds up execution by compiling hot code paths to machine code at runtime.

Because PyPy is designed as a drop-in replacement for CPython, it is one of the easiest ways to get a quick performance boost. Most Python applications run on PyPy exactly as-is. Getting the most out of PyPy, however, may require testing. You'll find that long-running applications gain the biggest performance benefit, because the compiler analyzes execution over time. For short scripts that run and exit, CPython is the better choice, since the performance gain isn't enough to overcome the JIT overhead.

10. Upgrade to Python 3

If you're using Python 2 and have no overriding reason (such as an incompatible module) to stick with it, you should make the jump to Python 3.

There are many constructs and optimizations in Python 3 that are unavailable in Python 2.x. For example, Python 3.5 makes asynchrony less tricky by making the async and await keywords part of the language syntax, and Python 3.2 brought a major update to the global interpreter lock that significantly improved Python's handling of multiple threads.
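As a small sketch of the async and await syntax mentioned above (the sleep stands in for real I/O; the function names are illustrative):

```python
import asyncio

async def fetch(n):
    # Stand-in for an I/O-bound call (network request, disk read, ...)
    await asyncio.sleep(0.01)
    return n * 2

async def main():
    # Run the five "requests" concurrently instead of one after another
    return await asyncio.gather(*(fetch(i) for i in range(5)))

results = asyncio.run(main())
print(results)  # [0, 2, 4, 6, 8]
```

Because the coroutines yield control while they wait, the five calls overlap and the whole batch finishes in roughly the time of one.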

Those are the ten improvements. While these methods may not let you outrun C or Java, remember that the speed of your code depends less on the language than on the person writing it; Python itself doesn't have to be the fastest, just fast enough.
