Preface

  • This article targets intermediate to advanced Python development; very basic topics are not covered.
  • This article covers only Python questions. Other common interview topics such as networking, MySQL, and algorithms will be organized separately.
  • Rather than simply providing rote answers, I hope to explain each topic in depth through code demonstrations and an exploration of the underlying principles, so you can truly master it.
  • Some of the demo code is also in my GitHub repository.

Language Basics

The basic data types of Python

Python3 has six standard data types:

  • Number (integer, float, complex, Boolean)
  • String (str)
  • List
  • Tuple
  • Set
  • Dictionary (dict)

Of the six standard data types in Python3:

  • Immutable (three): Number, String, Tuple;
  • Mutable (three): List, Dictionary, Set.

Is Python statically or dynamically typed? Strongly typed or weakly typed?

  • Python is a dynamically, strongly typed language (many people mistakenly think it is weakly typed)
  • Dynamic vs. static refers to whether types are determined at compile time or at runtime
  • Strong typing means no implicit type conversion occurs

JavaScript is a typical weakly typed language. For example, if you add a number and a string in the browser console, you’ll see that an implicit cast occurs.

Python, by contrast, raises a TypeError.
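A quick sketch of the contrast: where JavaScript would coerce `"1" + 1` into `"11"`, Python refuses to mix the types:

```python
# Python refuses to implicitly convert between str and int
try:
    "1" + 1
    caught = False
except TypeError:
    caught = True

print(caught)  # True
```

The explicit alternatives are `"1" + str(1)` or `int("1") + 1`, both of which require a deliberate conversion.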

What is duck typing?

“When a bird walks like a duck, swims like a duck and quacks like a duck, that bird is called a duck.”

Duck typing focuses on an object’s behavior, not its type. For example, file, StringIO, and socket objects all support read/write methods; likewise, any object with an __iter__ magic method can be iterated with a for loop.

Here’s an example to simulate the duck type:

class Duck:
    def say(self):
        print("Lady gaga")


class Dog:
    def say(self):
        print("Wang wang")


def speak(duck):
    duck.say()


duck = Duck()
dog = Dog()
speak(duck)  # Lady gaga
speak(dog)   # Wang wang

What is introspection

Introspection is the ability at runtime to determine the type of an object.

We can use type(), id(), and isinstance() to get an object’s type information.

Introspection, or reflection, in computer programming is usually the ability to examine something to determine what it is, what it knows, and what it can do.

The main methods related to this are:

  • hasattr(object, name) checks whether the object has an attribute called name. Returns a bool.
  • getattr(object, name, default) gets the name attribute of an object, returning default if it is missing.
  • setattr(object, name, value) sets the name attribute on an object.
  • delattr(object, name) deletes the name attribute from the object.
  • dir([object]) lists most of the attributes of an object.
  • isinstance(object, classinfo) checks whether the object is an instance of the given class.
  • type(object) returns the type of an object.
  • callable(object) determines whether an object is callable.
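A small, hypothetical class can demonstrate most of these introspection functions at once (Widget is an invented name for illustration):

```python
class Widget:
    color = "red"

w = Widget()
print(hasattr(w, "color"))      # True
print(getattr(w, "color"))      # red
setattr(w, "size", 10)          # adds an instance attribute
print(w.size)                   # 10
delattr(w, "size")
print(hasattr(w, "size"))       # False
print(isinstance(w, Widget))    # True
print(type(w) is Widget)        # True
print(callable(Widget))         # True -- classes are callable
```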

Comparison of Python 3 and Python 2

  • print becomes a function
  • Encoding. Python 3 no longer has a separate unicode type; str is Unicode by default
  • Division changes. In Python 3, / returns a float; use // for integer division
  • Type annotations, which help IDEs perform type checking
  • super() is simplified, making it easier to call superclass methods directly: Python 3 can use super().xxx instead of super(Class, self).xxx
  • Advanced unpacking: a, b, *rest = range(10)
  • Keyword-only arguments, which force certain arguments to be passed by keyword
  • Chained exceptions. Python 3 does not lose stack information when it re-raises an exception
  • Everything returns an iterator: range, zip, map, dict.values, etc.
  • Performance optimizations…
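A short sketch of a few of these Python 3 features, advanced unpacking, keyword-only arguments, and the new division (connect is a made-up function for the demo):

```python
# Advanced unpacking: * collects the remaining items into a list
a, b, *rest = range(10)
print(a, b, rest)           # 0 1 [2, 3, 4, 5, 6, 7, 8, 9]

# Keyword-only arguments: parameters after * must be passed by name
def connect(host, *, timeout=10):
    return "{}:{}".format(host, timeout)

print(connect("localhost", timeout=5))   # localhost:5
# connect("localhost", 5) would raise TypeError

# Division: / is true division, // is floor division
print(7 / 2, 7 // 2)                     # 3.5 3
```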

How does Python pass arguments

From the official Python documentation:

“Remember that arguments are passed by assignment in Python. Since assignment just creates references to objects, there’s no alias between an argument name in the caller and callee, and so no call-by-reference per se.”

To be precise, Python’s argument passing is pass-by-assignment, or pass-by-object-reference. All data in Python is an object, so passing an argument just makes the new name point to the same object the original variable points to. There is no pass-by-value or pass-by-reference in the traditional sense.

Passing object references gives two different results depending on whether the object is mutable or immutable. A mutable object can be modified in place through the parameter; an operation that “modifies” an immutable object produces a new object, and the parameter is rebound to point at it.

This can be simulated with the following code example:

def flist(l):
    l.append(0)
    print(id(l))    # prints the same id every time
    print(l)


ll = []
print(id(ll))
flist(ll)   # [0]
flist(ll)   # [0, 0]

print("=" * 10)


def fstr(s):
    print(id(s))  # same id as the argument ss
    s += "a"
    print(id(s))  # a different id now; s points to a new string object
    print(s)


ss = "sun"
print(id(ss))
fstr(ss)    # suna
fstr(ss)    # suna

Python’s mutable/immutable objects

Immutable objects: bool/int/float/tuple/str/frozenset. Mutable objects: list/set/dict.

Now let’s look at two code examples. What do they output?

def clear_list(l):
    l = []

ll = [1, 2, 3]
clear_list(ll)
print(ll)

def fl(l=[1]):
    l.append(1)
    print(l)

fl()
fl()

The answer is:

[1, 2, 3]
[1]
[1, 1]

For the first problem, the statement l = [] creates a new object and rebinds the local name l to it (note that the l inside the function is the parameter, distinct from the outer argument ll), so the original list is not changed.

For the second problem, the default parameter is evaluated only once, at function definition time, so both calls append to the same list.

If you are interested, try this example again:

a = 1
def fun(a):
    print("func_in", id(a))
    a = 2
    print("re-point", id(a), id(2))
print("func_out", id(a), id(1))

fun(a)

The answer is:

func_out 2602672810288 2602672810288
func_in 2602672810288
re-point 2602672810320 2602672810320

For more on argument passing and mutable/immutable objects in Python, here’s a good Stack Overflow answer.

Arguments are passed by assignment. The rationale behind this is twofold:

  • the parameter passed in is actually a reference to an object (but the reference is passed by value)
  • some data types are mutable, but others aren’t

So: If you pass a mutable object into a method, the method gets a reference to that same object and you can mutate it to your heart’s delight, but if you rebind the reference in the method, the outer scope will know nothing about it, and after you’re done, the outer reference will still point at the original object.

If you pass an immutable object to a method, you still can’t rebind the outer reference, and you can’t even mutate the object.

*args and **kwargs in Python

To handle a variable number of arguments, *args packs extra positional arguments into a tuple, and **kwargs packs extra keyword arguments into a dict.

Let’s look at some code examples:

def print_multiple_args(*args):
    print(type(args), args)
    for idx, val in enumerate(args):
        print(idx, val)

print_multiple_args('a', 'b', 'c')
# Prefixing a list with * unpacks it into positional arguments
print_multiple_args(*['a', 'b', 'c'])

def print_kwargs(**kwargs):
    print(type(kwargs), kwargs)
    for k, v in kwargs.items():
        print('{}: {}'.format(k, v))


print_kwargs(a=1, b=2)
# Prefixing a dict with ** unpacks it into keyword arguments
print_kwargs(**dict(a=1, b=2))

def print_all(a, *args, **kwargs):
    print(a)
    if args:
        print(args)
    if kwargs:
        print(kwargs)

print_all('hello', 'world', name='monki')

The output is:

<class 'tuple'> ('a', 'b', 'c')
0 a
1 b
2 c
<class 'tuple'> ('a', 'b', 'c')
0 a
1 b
2 c
<class 'dict'> {'a': 1, 'b': 2}
a: 1
b: 2
<class 'dict'> {'a': 1, 'b': 2}
a: 1
b: 2
hello
('world',)
{'name': 'monki'}

Python Exception Mechanism

Refer to the official Python documentation for the exception hierarchy

Docs.python.org/zh-cn/3/lib…

Examples of python exception blocks:

try:
    pass  # code that might throw an exception
except (Exception1, Exception2) as e:
    pass  # exception-handling code; multiple exception types can be caught
else:
    pass  # runs only when no exception occurred
finally:
    pass  # runs whether or not an exception occurred
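A runnable sketch of the full try/except/else/finally flow (safe_div is an invented helper for the demo):

```python
def safe_div(a, b):
    try:
        result = a / b           # code that might raise
    except (ZeroDivisionError, TypeError) as e:
        print("caught:", type(e).__name__)
        result = None
    else:
        print("no exception")    # runs only if the try block succeeded
    finally:
        print("always runs")     # runs whether or not an exception occurred
    return result

print(safe_div(6, 3))   # 2.0
print(safe_div(6, 0))   # None
```

Note that `finally` runs in every case, which makes it the right place for cleanup such as closing files or releasing locks.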

What is the Python GIL?

The GIL (Global Interpreter Lock) is a mechanism for synchronizing threads within the interpreter.

There is a GIL for each interpreter process, which has the direct effect of limiting the parallel execution of multiple threads in a single interpreter process, so that only one thread can be running at a time for a single interpreter process, even on a multi-core processor. For Python, the GIL is not a feature of the language itself, but an implementation feature of the CPython interpreter.

The compiled bytecode of Python code is executed in the interpreter. During execution, the GIL in the CPython interpreter causes only one thread to execute bytecode at a time. The immediate problem with the existence of the GIL is that it is impossible to achieve true parallelism in an interpreter process using multiple threads using multi-core processors.

Therefore, Python’s multithreading is pseudo-multithreading, unable to utilize multi-core resources, and only one thread is actually running at a time.

GIL limits the multi-core execution of a program

  • Only one thread can execute bytecode at a time
  • It is difficult for CPU-intensive programs to take advantage of multiple cores
  • The GIL is released during IO, which has little effect on IO intensive programs

There are several ways to improve performance despite the GIL:

  • For IO intensive tasks, we can use multithreading or coroutines.
  • Switching to an interpreter without a GIL, such as Jython, is an option, but it is not recommended because you lose many useful C extension modules.
  • CPU intensive can use multi-process + process pool.
  • Move computationally intensive tasks to Python’s C/C++ extension modules.
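As a minimal sketch of the multi-process approach for CPU-bound work (cpu_task is a toy CPU-bound function, and the pool size of 4 is arbitrary):

```python
from multiprocessing import Pool

def cpu_task(n):
    # a toy CPU-bound computation: sum of squares below n
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Each worker runs in its own process with its own interpreter and GIL,
    # so the tasks can truly run in parallel on multiple cores.
    with Pool(processes=4) as pool:
        results = pool.map(cpu_task, [10_000] * 4)
    print(len(results))  # 4
```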

Why worry about thread safety when you have the GIL?

The GIL guarantees that each individual bytecode instruction executes exclusively, i.e., each bytecode instruction is atomic. But the GIL has a release mechanism, so it does not guarantee that a thread won’t be switched out while a sequence of bytecodes is executing; thread switches can happen between bytecode instructions.

We can use Python’s dis module to look at the bytecode for a += 1 and find that it takes multiple bytecode instructions to complete. A thread switch can occur between them, so the operation is not thread safe.
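You can check this yourself with the dis module; the exact opcode names vary across Python versions, but `+= 1` on a global always compiles to several instructions (load, add, store):

```python
import dis

counter = 0

def incr():
    global counter
    counter += 1

# `counter += 1` is several bytecode ops, so a thread switch can
# happen mid-update -- the increment is not atomic.
dis.dis(incr)
print(any(i.opname == "STORE_GLOBAL" for i in dis.get_instructions(incr)))  # True
```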

An operation is atomic if it can be done by a bytecode instruction. Non-atomic operations are not thread-safe; atomic operations are thread-safe.

The granularity of the GIL is different from that of the thread mutex. GIL is a Python interpreter level mutex, which ensures consistency of shared resources at the interpreter level. Thread mutex is a code (or user) level mutex, which ensures consistency of shared data at the Python program level. So we still need thread mutex and other thread synchronization methods to keep the data consistent.
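A sketch of guarding shared state with a user-level threading.Lock (add_many and the counts here are invented for the demo):

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:            # user-level mutex makes the read-modify-write atomic
            counter += 1

threads = [threading.Thread(target=add_many, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000, deterministic thanks to the lock
```

Without the lock, the four threads could interleave between the bytecodes of `counter += 1` and lose updates.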

For more information on the Python GIL, please refer to my article “The Python GIL in Detail.”

What are iterators and generators?


Container

Container can be understood as a data structure that organizes multiple elements together. Elements in a container can be obtained one by one iteratively. The in and not in keywords can be used to determine whether an element is contained in the container. For example, common Container objects in Python include list, deque, and set

Iterables

Most containers are iterable; a list or a set, for example, is an iterable. More generally, any object that can return an iterator is an iterable.

Iterator

There are many containers in Python, such as lists, tuples, dictionaries, collections, etc. Containers can be intuitively thought of as units of multiple elements together. All containers are iterable.

We usually use the for ... in statement to iterate over iterable objects. The underlying mechanism is:

The iter() function asks the iterable for an iterator, which provides a __next__ method. Each call to this method either returns the next object in the container or raises StopIteration when the container is exhausted.

Here’s an example:

>>> x = [1, 2, 3]
>>> # Get the iterator
>>> y = iter(x)  # Invokes x.__iter__()
>>> # Run the iterator
>>> next(y)  # Invokes y.__next__()
1
>>> next(y)
2
>>> next(y)
3
>>> type(x)
<class 'list'>
>>> type(y)
<class 'list_iterator'>
>>> next(y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration
>>>

In the example above, x = [1, 2, 3] is an iterable, also called a container. y = iter(x) is an iterator, which implements the __iter__ and __next__ methods.


As you can see, an iterator is obtained via iter(). It is a stateful object: each call to next() returns the next value in the container. Any object that implements both __iter__ and __next__ is an iterator, and its __iter__ returns the iterator itself.

Iterators are like lazily loaded factories, generating values for them only when they are needed, and hibernating until they are called again.

Generator

A generator is simply a lazy version of an iterator.

Compared with building the whole sequence up front, generators save memory. For example, the list comprehension [i for i in range(100000000)] builds a list of 100 million elements, all of which are kept in memory once generated. But we often don’t need to keep all of them; we just want the next value to be produced when next() is called. That is exactly what a generator does; in Python the generator-expression form is (i for i in range(100000000)).
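A rough way to see the memory difference with sys.getsizeof (the million-element size here is arbitrary; note getsizeof measures only the container object itself):

```python
import sys

squares_list = [i * i for i in range(1_000_000)]   # materializes every element
squares_gen = (i * i for i in range(1_000_000))    # lazy: produces values on demand

print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))  # True
print(next(squares_gen), next(squares_gen))                      # 0 1
```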

In addition, generators can take other forms, such as a generator function that returns results to the next() method using the yield keyword. Here’s an example:

def frange(start, stop, increment):
    x = start
    while x < stop:
        yield x
        x += increment

for n in frange(0, 2, 0.5):
    print(n)

0
0.5
1.0
1.5

Generators have the following advantages over iterators:

  1. Reduce memory
  2. Delay calculation
  3. Effectively improve code readability

I have a summary of generators and iterators: Iterators and Generators in Python

Stackoverflow has a good answer on yield:

Stackoverflow.com/questions/2…

What is a coroutine?

More content, specific can see my article. Python Coroutines in Detail

What is a closure?

When you define a function inside another function and the inner function uses variables from the enclosing function, the inner function together with those captured variables is called a closure.

In simple terms, an inner function is considered a closure if it references a variable in an outer (but not global) scope. Let’s look at a few simple examples:

The simplest example is to implement addition

def addx(x):
    def adder(y):
        return x + y
    return adder

c = addx(8)
print(type(c))
print(c.__name__)
print(c(10))

The output is:

<class 'function'>
adder
18

Memoized Fibonacci implemented with a closure (used as a decorator):

from functools import wraps

def cache(func):
    store = {}
    @wraps(func)
    def _(n):
        if n in store:
            return store[n]
        else:
            res = func(n)
            store[n] = res
            return res
    return _

@cache
def f(n):
    if n <= 1:
        return 1
    return f(n-1) + f(n-2)

print(f(10))

Recommend an article: blog.csdn.net/Yeoman92/ar…

What are deep and shallow Python copies?

Note the difference between plain assignment (which passes a reference), copy.copy(), and copy.deepcopy().

Here’s an example:

import copy

a = [1, 2, 3, 4, ['a', 'b']]  # the original object

b = a  # assignment: passes a reference to the object
c = copy.copy(a)  # shallow copy
d = copy.deepcopy(a)  # deep copy

a.append(5)  # modify object a
a[4].append('c')  # modify the nested list ['a', 'b'] inside a

print('a = ', a)
print('b = ', b)
print('c = ', c)
print('d = ', d)

The output is:

a =  [1, 2, 3, 4, ['a', 'b', 'c'], 5]
b =  [1, 2, 3, 4, ['a', 'b', 'c'], 5]
c =  [1, 2, 3, 4, ['a', 'b', 'c']]
d =  [1, 2, 3, 4, ['a', 'b']]

Memory management in Python

Python has a memory pool mechanism, Pymalloc mechanism, for the application and release of memory management. Let’s see why there is a memory pool:

Frequent calls to new/malloc in C can result in a large amount of memory fragmentation when creating a large number of objects that consume small amounts of memory.

The concept of a memory pool is that a certain number of memory blocks of equal size are allocated in memory in advance and reserved for standby. When a new memory demand arises, memory is first allocated from the memory pool to meet the demand, and then new memory is applied when the demand is insufficient. The most significant advantage of this is that it reduces memory fragmentation and improves efficiency.

Looking at the source code you can find pymalloc: for small objects (512 bytes or less), pymalloc allocates space from the memory pool; for larger objects, malloc is called directly to request new memory.

If you have memory creation, you need to collect it. Garbage collection is one of the most important things to ask in any Python interview. Let’s see what garbage collection is.

Python’s garbage collection mechanism

GC has two things to do. One is to find the garbage object resources in memory that are useless, and the other is to clear the garbage objects found, freeing up memory for other objects to use.

Python GC mainly uses reference counting to track and collect garbage. On the basis of reference counting, the circular reference problem of container objects can be solved by “mark and sweep”, and the garbage collection efficiency can be improved by “generation collection” by exchanging space for time.

Reference counting

The structure of each object in the source code is represented as follows:

typedef struct _object {
    Py_ssize_t ob_refcnt;
    struct _typeobject *ob_type;
} PyObject;

PyObject is a mandatory part of every object; its ob_refcnt field is the reference count. When an object gains a new reference, ob_refcnt increases; when a reference is removed, ob_refcnt decreases. When the reference count reaches 0, the object is reclaimed immediately and its memory is freed.

Advantages:

  • simple
  • Real-time: as soon as there are no references, memory is freed immediately, without waiting for a specific moment as other mechanisms do.

Disadvantages:

  • Additional space is required to maintain reference counts.
  • Circular references to objects cannot be resolved. (Major disadvantages)

Here’s what a circular reference is:

A and B refer to each other, and no external reference points to either A or B. That is, the objects reference each other, so the chain of references forms a loop.

>>> a = {}        # refcount of object a is 1
>>> b = {}        # refcount of object b is 1
>>> a['b'] = b    # refcount of b increases to 2
>>> b['a'] = a    # refcount of a increases to 2
>>> del a         # refcount of object a decreases to 1
>>> del b         # refcount of object b decreases to 1

After del executes, no variable refers to object A or B anymore, but each object still holds a reference to the other. Even though neither object is reachable through any variable, to the program they are both dead: two inactive, garbage objects that in theory should be reclaimed.

Under reference counting they would only be collected when the count drops to 0, but their counts never reach zero. So if these two objects were managed with reference counting alone, they would never be reclaimed; they would reside in memory forever, causing a memory leak (memory that is never released after use).

To solve the problem of circular references, Python introduces two additional GC mechanisms: mark-and-sweep and generational collection.
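A small sketch showing the cycle detector at work (Node is an invented class; gc.collect() returns the number of unreachable objects it found):

```python
import gc

class Node:
    pass

a = Node()
b = Node()
a.other = b
b.other = a        # circular reference between a and b
del a, b           # refcounts stay above zero, so refcounting alone can't free them

unreachable = gc.collect()   # the cycle detector finds and frees the pair
print(unreachable >= 2)      # True: at least the two Node objects were unreachable
```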

Mark-and-sweep

Mark-and-sweep mainly solves the circular reference problem.

The mark-and-sweep algorithm is a garbage collection algorithm based on tracing GC.

It has two phases: in the marking phase, the GC marks all live objects; in the sweep phase, the unmarked (inactive) objects are reclaimed. How does the GC decide which objects are live and which are not?

Objects are connected by references (pointers) into a directed graph: objects form the nodes, and reference relationships form the edges. Starting from the root objects, the graph is traversed along the directed edges; reachable objects are marked as live, and unreachable objects are inactive and will be swept. The root objects are global variables, the call stack, and registers.

Generational collection

Generational collection trades space for time.

Python groups objects into sets by lifetime; each set is called a generation. Memory is divided into three generations: the young generation (generation 0), the middle generation (generation 1), and the old generation (generation 2), corresponding to three linked lists. Garbage collection frequency decreases as object lifetime increases. Newly created objects are allocated to the young generation; when the number of objects in the young generation reaches its threshold, garbage collection is triggered. Collectable objects are reclaimed, and survivors are promoted to the middle generation, and so on; long-lived objects eventually reach the old generation and may live for the life of the program. Generational collection is itself based on the mark-and-sweep technique.
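You can inspect the three generations’ collection thresholds and current object counts via the gc module (the exact numbers vary by interpreter configuration):

```python
import gc

print(gc.get_threshold())       # per-generation collection thresholds, e.g. (700, 10, 10)
print(gc.get_count())           # current allocation counts per generation
print(len(gc.get_threshold()))  # 3 -- one threshold per generation
```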

Object Oriented Chapter

What are composition and inheritance?

  • Composition means using an instance of another class as one of your own attributes (a has-a relationship)
  • Inheritance means a child class inherits attributes and methods from its parent class (an is-a relationship)
  • Prefer composition to keep the code simple

What’s the difference between class variables and instance variables?

  • Class variables are shared by all instances
  • Instance variables are owned by each instance individually; changing one instance’s does not affect the others
  • Use a class variable when you need to share state between different instances of a class
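A minimal illustration of the difference (Counter is an invented class for the demo):

```python
class Counter:
    total = 0                 # class variable, shared by every instance

    def __init__(self, n):
        self.n = n            # instance variable, unique to each instance
        Counter.total += 1

a = Counter(1)
b = Counter(2)
print(a.n, b.n)               # 1 2  -- each instance keeps its own n
print(a.total, b.total)       # 2 2  -- both see the same shared value
```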

What is the difference between classMethod and StaticMethod?

  • Both can be called via Class.method()
  • The first parameter of a classmethod is cls, through which it can access class variables
  • A staticmethod is used just like a normal function; it is simply organized inside the class
  • Use classmethod when you need class variables; use staticmethod for code organization (it could equally live outside the class)

The following example shows class variables, instance variables, class methods, normal (instance) methods, and static methods:

class Person:
    Country = 'china'

    def __init__(self, name, age):
        self.name = name
        self.age = age

    def print_name(self):
        print(self.name)

    @classmethod
    def print_country(cls):
        print(cls.Country)

    @staticmethod
    def join_name(first_name, last_name):
        return print(last_name + first_name)

a = Person("Bruce", "Lee")
a.print_country()
a.print_name()
a.join_name("Bruce", "Lee")
Person.print_country()
Person.print_name(a)
Person.join_name("Bruce", "Lee")

More references:

  • Stackoverflow.com/questions/1…
  • Realpython.com/instance-cl…

What is the difference between __new__ and __init__?

  • __new__ is a static method, while __init__ is an instance method.
  • __new__ returns the created instance, while __init__ returns nothing.
  • __init__ is called only if __new__ returns an instance of cls.
  • __new__ is called to create a new instance; __init__ initializes it.

We can do a couple of interesting experiments.

class Person:
    def __new__(cls, *args, **kwargs):
        print("in __new__")
        instance = super().__new__(cls)
        return instance

    def __init__(self, name, age):
        print("in __init__")
        self._name = name
        self._age = age

p = Person("zhiyu", 26)
print("p:", p)

The output of this program is:

in __new__
in __init__
p: <__main__.Person object at 0x00000261FE562E50>

You can see that __new__ creates the object, and then __init__ initializes it. What happens if __new__ does not return the object?

class Person:
    def __new__(cls, *args, **kwargs):
        print("in __new__")
        instance = super().__new__(cls)
        # return instance

    def __init__(self, name, age):
        print("in __init__")
        self._name = name
        self._age = age

p = Person("zhiyu", 26)
print("p:", p)

As you can see, if __new__ does not return an instance, __init__ is never called.

The output is:

in __new__
p: None

What is a metaclass?

Metaclasses are the classes that create classes.

  • Metaclasses let us control class creation, for example modifying class attributes
  • A metaclass is defined by subclassing type
  • One of the most common use cases for metaclasses is ORM frameworks
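A toy metaclass sketch (UpperAttrMeta is invented for illustration) showing how a metaclass can rewrite class attributes at creation time:

```python
class UpperAttrMeta(type):
    """Upper-cases every non-dunder attribute name when the class is created."""
    def __new__(mcs, name, bases, namespace):
        upper = {
            (k if k.startswith("__") else k.upper()): v
            for k, v in namespace.items()
        }
        return super().__new__(mcs, name, bases, upper)

class Config(metaclass=UpperAttrMeta):
    debug = True

print(hasattr(Config, "DEBUG"))  # True  -- the attribute was renamed
print(hasattr(Config, "debug"))  # False
```

ORMs use the same hook to turn declarative field definitions into database column mappings when the model class is created.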

What are decorators in Python?

  • Everything in Python is an object, so functions can be passed as arguments
  • A decorator is a function (or class) that takes a function as an argument, adds functionality, and returns a new function
  • The @ symbol is syntactic sugar for applying a decorator

Example: Write a decorator that records the time a function takes:

import time

def log_time(func):  # takes a function as an argument
    def _log(*args, **kwargs):
        beg = time.time()
        res = func(*args, **kwargs)
        print('use time: {}'.format(time.time() - beg))
        return res

    return _log

@log_time  # decorator syntactic sugar
def mysleep():
    time.sleep(1)

mysleep()

# another way to write it, equivalent to the call above
def mysleep2():
    time.sleep(1)

newsleep = log_time(mysleep2)
newsleep()

Of course, decorators can take arguments

def log_time_with_param(use_int):
    def decorator(func):  # takes a function as an argument
        def _log(*args, **kwargs):
            beg = time.time()
            res = func(*args, **kwargs)
            if use_int:
                print('use time: {}'.format(int(time.time()-beg)))
            else:
                print('use time: {}'.format(time.time()-beg))
            return res
        return _log
    return decorator

@log_time_with_param(True)
def my_sleep6():
    time.sleep(1)

You can also use classes as decorators

class LogTime:
    def __call__(self, func):  # takes a function as an argument
        def _log(*args, **kwargs):
            beg = time.time()
            res = func(*args, **kwargs)
            print('use time: {}'.format(time.time()-beg))
            return res
        return _log

@LogTime()
def mysleep3():
    time.sleep(1)

mysleep3()

You can also add arguments to class decorators

class LogTime2:
    def __init__(self, use_int=False):
        self.use_int = use_int

    def __call__(self, func):  # takes a function as an argument
        def _log(*args, **kwargs):
            beg = time.time()
            res = func(*args, **kwargs)
            if self.use_int:
                print('use time: {}'.format(int(time.time()-beg)))
            else:
                print('use time: {}'.format(time.time()-beg))
            return res
        return _log

@LogTime2(True)
def mysleep4():
    time.sleep(1)

mysleep4()

@LogTime2(False)
def mysleep5():
    time.sleep(1)

mysleep5()

Let’s also talk about the order in which stacked decorators apply.

@a
@b
@c
def f():
    pass

This is equivalent to f = a(b(c(f))): decorators apply bottom-up.
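A quick way to confirm that order (these toy decorators just wrap the return value in their own name):

```python
def a(fn):
    def wrapper():
        return "a(" + fn() + ")"
    return wrapper

def b(fn):
    def wrapper():
        return "b(" + fn() + ")"
    return wrapper

def c(fn):
    def wrapper():
        return "c(" + fn() + ")"
    return wrapper

@a
@b
@c
def f():
    return "f"

print(f())   # a(b(c(f)))  -- c is applied first, a last
```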

Magic methods in Python

  • __new__ is used to create instances
  • __init__ is used to initialize instances

These two have been mentioned above, in addition to magic methods are:

  • __call__

First, we need to understand what a callable object is. Ordinary functions, built-in functions, and classes are all callable; in general, any object that can be invoked with a pair of parentheses () is a callable object. You can use the built-in callable() function to check whether an object is callable.

To understand this, see the following code example:

class A:
    def __init__(self):
        print("__init__ ")
        super(A, self).__init__()

    def __new__(cls):
        print("__new__ ")
        return super(A, cls).__new__(cls)

    def __call__(self):  # can take any parameters
        print('__call__ ')

a = A()
a()
print(callable(a))  # True

The output is:

__new__
__init__
__call__
True

Executing a() prints __call__; a is both an instance and a callable object.

  • __del__ is executed when an object is deleted; it is called automatically when the object is destroyed in memory.

import time

class People:
    def __init__(self, name, age):
        self.name = name
        self.age = age

    def __del__(self):
        print('__del__')

obj = People("zhiyu", 26)
# del obj
time.sleep(5)

When the program finishes (after the 5-second sleep), you can see the console output:

__del__

What are some common design patterns in Python?

Because of the large content, I sorted out a separate article. Common Design Patterns in Python

Note the various ways the singleton pattern is written.

The Django framework article

todo

The resources

The Chinese version of the Stack Overflow answers: taizilongxu.gitbooks.io/stackoverfl…

Fluent Python

Python Cookbook (Chinese edition): python3-cookbook.readthedocs.io/zh_CN/lates…

Interview resources on Github

  • github.com/taizilongxu…
  • github.com/kenwoodjw/p…

Interview resources on the Web:

  • blog.csdn.net/qq_27695659…
  • gitbook.cn/gitchat/act…
  • gitbook.cn/books/5c7e6…

Official Python documentation:

  • docs.python.org/zh-cn/3/