Table of contents

    • Preface
    • Process concept Q&A
      • What is a process
      • Why do processes exist
      • The distinction and relation between program and process
      • Three basic states of a process
      • Switching between process states
      • Suspension of the process
      • Process control block (PCB)
    • Process scheduling algorithm
        • Non-preemptive mode
        • Preemptive mode
      • First in, first out (FIFO)
      • Shortest processor run-time priority scheduling algorithm
      • Maximum response ratio priority scheduling algorithm
      • Priority scheduling algorithm
      • Dynamic priority
      • Time slice rotation scheduling algorithm
      • Foreground and background scheduling algorithm
      • Multi-level feedback queue rotation algorithm
      • Three things that can happen when a process executes
      • Timing and procedure of process scheduling
        • The timing of process scheduling
        • The procedure of process scheduling
    • Process primitives
      • fork
        • How processes are generated
      • The exec family
      • wait/waitpid
    • Relationships between processes
      • Guidelines for synchronization mechanisms to follow
      • The reader-writer problem
      • The dining philosophers problem
        • Solutions
      • Deadlock
        • Causes of deadlock
        • Four necessary conditions for deadlock
        • Basic approaches to deadlock
          • Deadlock prevention
          • Deadlock avoidance
        • Banker’s algorithm
          • Example
        • Detection of deadlock
          • Resource allocation graph
        • Deadlock release
    • Interprocess communication
      • The pipe
      • The message queue
      • Shared memory (SHM)
      • File memory mapping (mmap)
      • Network communication (Socket)
        • Advantages of Socket in interprocess communication
        • Using TCP long connections for communication
        • The final surprise

Preface

Before I knew it, I was a junior; before I knew it, I was hunting for a summer internship. Reviewing the old teaches you the new. (After two days of reviewing data structures, I still prefer this topic.) So here we are.


Process concept Q&A

In Linux, processes can be created and terminated, and they can communicate with one another. Here is a brief introduction.

What is a process

The execution of a program.

A process is a running activity of a program over a data set in a computer. It is the basic unit of resource allocation and scheduling in the system and the foundation of operating-system structure. In early process-oriented computer architectures, the process was the basic execution entity of the program; in modern thread-oriented architectures, processes are containers for threads. A program is a description of instructions, data, and their organization; a process is that program brought to life.

Stripped of the official language: a process is a program in execution.

Why do processes exist

To enable concurrent execution of programs in a multiprogramming environment, and to describe and control that concurrency, the concept of a process is introduced. A program segment, a data segment, and a process control block together constitute a process.

The distinction and relation between program and process

(1) A process is a single execution of a program and is a dynamic concept; a program is an ordered sequence of instructions that implements a particular function and is a static concept.
(2) A process can execute one or more programs, and the same program may be executed by multiple processes at the same time.
(3) A process is an independent unit of resource allocation and scheduling in the system; a program is not.
(4) A program can be stored for a long time as a software resource, while a process is one execution of a program: it is temporary, has a lifetime, and is structured.

To sum up: a process is the execution of a program.

Three basic states of a process

  1. Ready state: the process has been allocated all necessary resources except the CPU, and can execute as soon as it acquires a processor.
  2. Execution (running) state: the process has acquired the processor and its program is executing.
  3. Blocked state: the execution of the process is suspended because of an event (such as an I/O request or a request for buffer space); this state is also called the "waiting" or "sleeping" state.

Switching between process states

A process moves from ready to running when the scheduler dispatches it, from running back to ready when its time slice expires or it is preempted, from running to blocked when it must wait for an event such as I/O completion, and from blocked back to ready when that event occurs.

Suspension of the process

In a shell, Ctrl+Z suspends the foreground process (note: Ctrl+C terminates it).

- End-user needs: when an end user notices something suspicious while a program is running, they often want to freeze the process temporarily.
- Parent-process needs: a parent process often wants to suspend a child process in order to inspect and modify it, or to coordinate the activities of its children.
- Operating-system needs: the operating system sometimes suspends processes to check resource utilization and to do accounting, in order to improve system performance.
- Relieving memory pressure: to ease a memory shortage, processes in the blocked state can be moved to secondary storage, putting them into a new state distinct from the blocked state.
- Load regulation.

Process control block (PCB)

- The process control block records information about the process.
- The operating system controls and manages concurrent processes according to their PCBs.
- The PCB is the only sign of a process's existence.

Process identifier information: a process identifier uniquely identifies a process. It usually consists of an external identifier and an internal identifier. ① External identifier: provided by the creator, usually made up of letters and digits, and used by users (and processes) when referring to the process. ② Internal identifier: set up for the system's convenience. Each process is assigned a unique integer as its internal identifier, usually the sequence number of the process.


Process scheduling algorithm

Process scheduling is the dynamic allocation of the CPU to a ready process according to some algorithm, and it is carried out by the process scheduler. The scheduler's main functions are to select the process that will occupy the CPU and to switch the process context.

Non-preemptive mode

- Once the scheduler assigns a processor to a process, the process is allowed to run until it completes or is blocked by an event (such as an I/O request).
- Advantages: simple, low overhead.
- Disadvantages: seemingly fair, but it may degrade system performance.

Preemptive mode

This method stipulates that while a process is running, the system may, based on certain principles, take the processor away from it and allocate it to another process. Typical preemption principles: priority, short-process-first, and time slice.

First in, first out (FIFO)

- Algorithm: allocate the processor to the first process to enter the ready queue.
- Advantages: easy to implement.
- Disadvantages: fair on the surface, but the quality of service is poor and short processes are disadvantaged.

Shortest processor run-time priority scheduling algorithm

- Algorithm: select from the ready queue the process with the shortest estimated next CPU burst, and allocate the processor to it.
- Advantages: yields good scheduling performance.
- Disadvantages: a process's CPU burst length is hard to predict accurately, and long processes are disadvantaged.

Maximum response ratio priority scheduling algorithm

- Algorithm: response ratio = (waiting time + required service time) / required service time; each time, schedule the process with the highest response ratio.
- Advantages: favors short processes while also taking waiting time into account.
- Disadvantages: computing the response ratio incurs some system overhead.

Priority scheduling algorithm

- Algorithm: allocate the CPU to the process with the highest priority in the ready queue.
- A static priority is established when the process is created and stays unchanged during the run; it is determined by the process type, resource requirements, and user priority.
- Advantages: simple. Disadvantages: priorities cannot reflect dynamic changes, so scheduling performance is poor.

Dynamic priority

When a process is created, its priority is determined according to some principle and is then adjusted dynamically as execution proceeds. For example, the priority can decrease with the amount of CPU time the process has consumed, or increase with the time the process has been waiting for the CPU: the longer the wait, the higher the priority.

Time slice rotation scheduling algorithm

Algorithm: usually used in time-sharing systems. All ready processes are scheduled in turn, each receiving one time slice of running time.

  • The time-slice length should be chosen so that every user process gets a timely response, while not being so short that scheduling overhead reduces system efficiency.

Foreground and background scheduling algorithm

- Algorithm: used in combined batch and time-sharing systems. Time-sharing user jobs are placed in the foreground and batch jobs in the background. The system schedules foreground jobs by time-slice rotation, and only when no foreground job is ready is the processor given to background jobs, which usually run first-come, first-served.
- Advantages: timely responses for time-sharing user processes and improved system resource utilization.

Multi-level feedback queue rotation algorithm

- The system maintains multiple ready queues with different priorities. Higher-priority queues are scheduled first; a lower-priority queue is scheduled only when all higher-priority queues are empty.
- Normally, newly created processes and processes reawakened after I/O requests are placed in the highest-priority queue. A process that fails to finish within its time slice two or three times in that queue is moved down to the next lower-priority queue.
- Whenever a process enters a higher-priority queue, the system switches promptly and schedules the processes in that queue first.
- Advantages: satisfies all kinds of jobs. Time-sharing users get responsive service, while batch jobs get a reasonable turnaround time.

Three things that can happen when a process executes

- The process completes and exits; the system reclaims it and makes a new scheduling decision.
- The process is blocked by an I/O request during execution; the system puts it into the corresponding blocking queue and triggers scheduling.
- The process has not finished when its time slice runs out; the system puts it back at the end of the ready queue to wait for its next turn.

Timing and procedure of process scheduling

The timing of process scheduling

- The running process finishes.
- The running process blocks itself with a blocking primitive and enters the waiting state.
- Under preemptive priority scheduling, a process with higher priority than the running process enters the ready queue.
- In a time-sharing system, the time slice runs out.
- In preemptive mode, the priority of a process in the ready queue becomes higher than that of the currently running process.

The procedure of process scheduling

- The data structure process scheduling relies on is the scheduling queue. Because processes wait for different reasons, a uniprocessor system maintains several waiting queues, but only processes in the ready queue can obtain the processor and eventually run; a process must enter the ready queue before a processor can be allocated to it.
- How the queue structures are organized is closely tied to the scheduling algorithm.
- The scheduling algorithm only decides which process gets the processor; actually assigning the processor to that process is done by the dispatcher.


Process primitives

fork

```c
#include <unistd.h>

pid_t fork(void);
```

Function: the child process gets a copy of the parent's user address space (the 0~3 GB range on 32-bit Linux) and of its PCB, but with a different process ID.

fork returns twice: in the parent it returns the child's PID, and in the child it returns 0.

Related functions:

```c
#include <sys/types.h>
#include <unistd.h>

pid_t getpid(void);  // get the process ID
pid_t getppid(void); // get the parent process ID
```

How processes are generated:

Processes can occur in many ways, but they can be traced back to the same source.

(1) Copy the parent process's system environment (rest assured: as long as a process is started, it has a parent).
(2) Build the process structure in the kernel.
(3) Insert the structure into the process list for easy maintenance.
(4) Allocate resources to the process.
(5) Copy the parent's memory mappings.
(6) Manage file descriptors and link points.
(7) Notify the parent process.

You can view the process tree with the pstree command.

As you can see, init is the parent of all processes, and all other processes are forked directly or indirectly by the init process.

The exec family

fork by itself cannot run a new program; it only duplicates the calling process. (After fork, the child and the parent are both scheduled by the OS, so the child can go on to execute a program of its own, running concurrently with the parent.)

Run new executable programs using exec family functions. Exec family functions can directly load and run a compiled executable program.

With the exec family of functions, a typical parent-child program looks like this: the program the child needs to run is written, compiled, and linked separately into an executable (say, hello). The main program acts as the parent; fork creates a child, and exec runs hello in the child, so that (macroscopically) different programs execute simultaneously.

```c
#include <unistd.h>

// The real system call; the functions below all end up calling it
int execve(const char *path, char *const argv[], char *const envp[]);

int execl(const char *path, const char *arg, ... /* (char *) NULL */);
int execlp(const char *file, const char *arg, ... /* (char *) NULL */);
int execle(const char *path, const char *arg, ... /* (char *) NULL, char *const envp[] */);
int execv(const char *path, char *const argv[]);
int execvp(const char *file, char *const argv[]);
```

The exec family loads and runs the program path/file, passing it the arguments (arg0, arg1, ..., or argv[]) and the environment envp[]; on error these functions return -1.

Look at the suffixes:

- l: expects a comma-separated list of arguments, terminated by a NULL pointer.
- v: expects a pointer to a NULL-terminated array of argument strings.
- p: the subprogram file is searched for in the directories of the PATH environment variable.
- e: an envp array is passed explicitly, allowing the child process to receive a different environment; without the e suffix, the child uses the current program's environment.

Here are some easy-to-understand examples I found:

```c
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <errno.h>

int main(int argc, char *argv[])
{
    // A NULL-terminated array of argument strings, for the exec variants with v
    char *arg[] = {"ls", "-a", NULL};

    /* Create a child process and call execl.
     * l: takes a comma-separated argument list terminated by a NULL pointer. */
    if (fork() == 0)
    {
        // in child
        printf("1------------execl------------\n");
        if (execl("/bin/ls", "ls", "-a", NULL) == -1)
        {
            perror("execl error ");
            exit(1);
        }
    }

    /* Create a child process and call execv.
     * v: takes a pointer to a NULL-terminated array of argument strings. */
    if (fork() == 0)
    {
        // in child
        printf("2------------execv------------\n");
        if (execv("/bin/ls", arg) < 0)
        {
            perror("execv error ");
            exit(1);
        }
    }

    /* execlp: l as above; p means the file is looked up in the PATH
     * environment variable. */
    if (fork() == 0)
    {
        // in child
        printf("3------------execlp------------\n");
        if (execlp("ls", "ls", "-a", NULL) < 0)
        {
            perror("execlp error ");
            exit(1);
        }
    }

    /* execvp: v as above; p means the file is looked up in PATH. */
    if (fork() == 0)
    {
        printf("4------------execvp------------\n");
        if (execvp("ls", arg) < 0)
        {
            perror("execvp error ");
            exit(1);
        }
    }

    /* execle: e passes an explicit envp so the child can receive a different
     * environment; without e the child inherits the current environment. */
    if (fork() == 0)
    {
        printf("5------------execle------------\n");
        if (execle("/bin/ls", "ls", "-a", NULL, NULL) == -1)
        {
            perror("execle error ");
            exit(1);
        }
    }

    /* execve: the underlying system call; v and e as above.
     * Note it returns only on error, so the check is against -1. */
    if (fork() == 0)
    {
        printf("6------------execve-----------\n");
        if (execve("/bin/ls", arg, NULL) == -1)
        {
            perror("execve error ");
            exit(1);
        }
    }

    return EXIT_SUCCESS;
}
```

wait/waitpid

Here are a few concepts:

Zombie process: the child exits but the parent has not yet reclaimed it, so the child remains as a zombie. (An orphan process, by contrast, is a child whose parent exits first; it is adopted by init.)

There are several ways a process can terminate: (1) return from main; (2) call exit; (3) call _exit; (4) call abort, which sends an abort signal to itself; (5) be terminated by a signal.
```c
#include <sys/types.h>
#include <sys/wait.h>

pid_t wait(int *status);
// status is an integer pointer holding the child's exit status. If it is
// not NULL, it can be used to find out how the child exited.

pid_t waitpid(pid_t pid, int *status, int options);
// pid selects which children to wait for:
/*  < -1  any child in the process group |pid|
    == -1 any child
    == 0  any child in the caller's own process group
    >  0  the child with exactly that PID */
// options:
/* WNOHANG: do not block if no child has exited yet.
   WUNTRACED: also report stopped children. 0 is the common default. */
```

Relationships between processes

- Resource sharing
- Cooperation

Guidelines for synchronization mechanisms to follow

- Free entry: when the critical section is free, a requesting process is allowed in.
- Busy waiting: when the critical section is occupied, other requesting processes must wait.
- Bounded waiting: a waiting process must be able to enter the critical section within a finite time.
- Yield while waiting: a process that cannot enter the critical section should give up the processor while it waits.

The reader-writer problem

Processes that only read are called reader processes, and the other processes are called writer processes. Multiple reader processes may read a shared object at the same time, but a writer process must never access the shared object at the same time as any other reader or writer. The reader-writer problem is the synchronization problem of guaranteeing that a writer accesses a shared object mutually exclusively with all other processes.

The dining philosophers problem

Five philosophers live by alternating between thinking and eating. They share a round table surrounded by five chairs, and on the table are five bowls and five chopsticks. When a philosopher gets hungry, he tries to pick up the chopsticks nearest to him on his left and right; only when he holds both chopsticks can he eat. When he has finished eating, he puts the chopsticks down and goes back to thinking.

Solutions:

① Allow at most four philosophers to eat at the same time. This guarantees that at least one philosopher can eat, and the two chopsticks he releases afterwards let more philosophers eat. ② Allow a philosopher to pick up chopsticks only when both his left and right chopsticks are available. ③ Odd-numbered philosophers take the chopstick on their left first and then the one on their right; even-numbered philosophers do the opposite.

Deadlock

There is another danger: in multiprogrammed systems, improper management, allocation, and use of resources can, under certain conditions, lead to deadlock, a kind of random error in the system (compare the two problems above).

Causes of deadlock

Competition for resources: when the resources shared by multiple processes are insufficient, the processes compete for them and deadlock can result (competition for non-preemptable resources, and for temporary resources).

Illegal process advancement order: when running processes request or release resources in an improper order, deadlock can result (the advancement order may be legal or illegal).

In plain English, races.

Four necessary conditions for deadlock

  • Mutual exclusion: processes require exclusive control over the resources allocated to them, that is, a resource is held by only one process during a given period.
  • Request and hold: when a process is blocked while requesting a resource, it holds on to the resources it has already acquired.
  • No preemption: resources a process has acquired cannot be taken away before it finishes with them; they can only be released when it is done.
  • Circular wait: when deadlock occurs, there must be a circular chain of processes and resources.

Basic solution to deadlocks

  • Deadlock prevention: set restrictions that break one or more of the four necessary conditions for deadlock.
  • Deadlock avoidance: during dynamic resource allocation, use some method to prevent the system from entering an unsafe state, thereby avoiding deadlock.
  • Deadlock detection: allow deadlocks to occur, but detect them promptly through a detection mechanism in the system, determine exactly which processes and resources are involved, and then take appropriate measures to remove the deadlock.
  • Deadlock release: a companion facility to detection, used to free processes from the deadlocked state.

Deadlock prevention

The system requires every process to request all the resources it needs in one go. If the system has enough resources, it allocates them all to the process at once, which breaks the "request" condition. If any single requirement cannot be met, none of the resources are allocated, which breaks the "hold" condition (the static resource allocation method).

Deadlock avoidance

A safe state means the system can find some ordering of the processes, such as (P1, P2, ..., Pn) (the sequence <P1, P2, ..., Pn> is then called a safe sequence), in which it can allocate each process the resources it needs, up to its maximum demand, so that every process can run to completion.

If no such safe sequence exists, the system is said to be in an unsafe state.

If resources are not allocated along a safe sequence, the system may pass from a safe state into an unsafe state, and deadlock may occur.

Banker’s algorithm

Available: an array of m elements, each giving the number of available resources of one class. Available[j] = k means the system currently has k available resources of class Rj.

Max: an n x m matrix defining the maximum demand of each of the n processes for each of the m resource classes. Max[i, j] = k means process i requires at most k resources of class Rj.

Allocation: an n x m matrix describing the current allocation. Allocation[i, j] = k means process i currently holds k resources of class Rj.

Need: an n x m matrix of remaining needs. Need[i, j] = k means process i still needs k resources of class Rj to complete its task, where Need[i, j] = Max[i, j] - Allocation[i, j].

Example

Detection of deadlock

To detect deadlocks, the system must (1) keep information about resource requests and allocations, and (2) provide an algorithm that uses this information to check whether the system has entered a deadlock state.

Resource allocation graph

A resource allocation graph G = (N, E) consists of a set of nodes N and a set of edges E. There are two kinds of nodes: process nodes and resource nodes. The edges represent the request and allocation relationships between processes and resources.

Deadlock release

- Preempt resources from some processes
- Abort processes


Interprocess communication

I have written a lot to get to this point.

The pipe

In the shell, a pipe is written "|". Pipes have a long history.

A pipe can be regarded as a buffer in memory, used to carry a data stream out of one process and into another, thereby achieving communication.

This kind of pipe has no name, so the pipe denoted by "|" is called an anonymous pipe; once you are done with it, it is destroyed.

Note that this anonymous pipe is a special file that exists only in memory, not in the file system.

Anonymous pipes are used for interprocess communication with related processes.

For a command like A | B in a shell, processes A and B are both child processes created by the shell; there is no parent-child relationship between A and B themselves, and both have the shell as their parent.



To communicate between unrelated processes, use a FIFO, that is, a named pipe.

At this point the picture is clear: an anonymous pipe is quick to create, quick to tear down, and takes up no place in the file system, which suits the shell's one-shot style of use.

For a long-term arrangement, though, pipes will not do; we have to look elsewhere.

The message queue

Compared with the limitations of pipes, message queues are much more capable: they hold on to what is given to them, and therefore need to persist in the system.

1. Message queues are linked lists inside the kernel's address space; messages are passed between processes through the Linux kernel.
2. Messages are sent to a queue in order and can be retrieved from it in several different ways.
3. Message queues in the kernel are distinguished by IPC identifiers; different message queues are independent of one another.
4. The messages in each message queue form an independent linked list.

I think of it as a parcel locker: messages wait there until they are collected.

However, message queues have their limits: they are not suitable for transferring large amounts of data, because the kernel caps the size of each message body as well as the total length of all messages in a queue. In the Linux kernel, two macros define these limits: MSGMAX and MSGMNB give, in bytes, the maximum length of one message and the maximum length of a whole queue.

Judged only by the properties above, message queues look unremarkable: they take more space than pipes, they are slower than shared memory, and they cannot move data as large as mmap can. But they are asynchronous, and that turned out to be their trump card; otherwise they might have been buried at the bottom of some drawer.

Yes, asynchronous.

With the rise of asynchronous services (Double Eleven traffic peak shaving, flash-sale systems, and so on), message queues suddenly became hot property. They can hold more messages than pipes, they do not chase raw speed, and they use less memory than mmap, which makes them a natural fit for peak shaving, decoupling, and asynchronous processing.

RabbitMQ, RocketMQ, Kafka. Message queues: decoupling, asynchrony, traffic peak shaving. Which MQs are the most popular, and how should a beginner choose one?

Message queues are hot, and fate is amazing.

Shared memory (SHM)

1. Shared memory is an interprocess communication method in which an area of memory is shared among multiple processes.

2. Memory sharing is implemented by mapping the same memory segment into multiple processes.

3. It is the fastest form of IPC, because no middleman (no extra copy through the kernel) is involved.

4. Multiple processes share the same physical memory, possibly mapped at different addresses, so they can use the region directly without copying.

The advantages are obvious: it is fast, and it carries plenty of data; it is generally used for inter-process packet communication (many packets, none too large).

For operating on large files, though, it is a poor fit. It is not that it cannot be done; it is a matter of playing to strengths and avoiding weaknesses. Don't hitch a swift horse to a millstone.

File memory mapping (mmap)

1. The mmap() function maps files or devices into memory.
2. mmap pages on demand: at first only the VMA is created and no real pages are brought in; when a page is referenced, a page fault loads it into memory, which avoids wasting memory.

The advantage of mmap: you can manipulate a file as if it were memory, which is good for reading and writing large files.

The disadvantage of mmap: if the file is very small, say 60 bytes, then because memory is organized page by page, the file is still loaded into a 4 KB page, and the remaining 4096 - 60 = 4036 bytes of that page are wasted.

Network communication (Socket)

Advantages of Socket in interprocess communication

First, distributed systems: that is what comes to mind when sockets appear in a list of interprocess communication methods. Socket communication can cross hosts and scales well: processes can be distributed across different servers, and changing the port is all it takes. By contrast, no other IPC mechanism can cross machines.

Secondly, my brother tells me every day that the server framework he is learning will one day interact across platforms, Windows with Linux. When I asked him how, he said he had not learned that part yet, so let me spoil the plot first.

At the programming level, TCP sockets and pipes are both file descriptors used to send and receive bytes, and both support read/write/fcntl/select/poll and so on. The difference is that TCP is full duplex while a pipe is half duplex, which is less convenient.

Take the fastest IPC, SHM shared memory: does it always work? For anyone whose technique is not yet polished, it is genuinely full of potholes; I came out of it black and blue. Some will say, analyze the specific case: use SHM on a single machine and TCP when distributed. But do you really enjoy writing two code paths for the same feature? Compare SHM with TCP: TCP is a byte-stream protocol with a write buffer that can only be read sequentially; SHM is message-style, where one process writes to a mapped address and another reads the data away, which is essentially blocking.

In fact I am very fond of SHM communication, and I even packaged it into dynamic libraries (see my "dynamic libraries" column). But yes, I switched sides quickly, hahaha.

One more point: what happens when the communication breaks? With local IPC you get some kind of error; with TCP, the network connection drops. Which is easier to diagnose? The TCP failure, at a glance. And once TCP breaks, you reconnect where you left off; with local IPC you may have to start all over again.

Using TCP long connections for communication

There are two advantages to using TCP long connections for communication:

  1. It is easy to locate dependencies between services in a distributed system.

     Running `netstat -tpna | grep :port` on a machine immediately lists the addresses of the clients using a service, and on a client the `netstat` or `lsof` commands show which process initiated the connection. This effectively prevents outages when migrating services.

     TCP short connections and UDP do not have this property.
  2. The lengths of the send and receive queues also make it easier to locate network or program faults. During normal operation, both the Recv-Q and Send-Q columns printed by netstat stay at or near zero. If Recv-Q stays nonzero or keeps growing, the service process has usually slowed down, perhaps deadlocked or blocked. If Send-Q stays nonzero or keeps growing, the peer server may be too busy to respond, a router or switch along the path may have failed, or the host may have been disconnected.


The final surprise

It also works across languages!


That's it. If you found this useful, a like and a comment would be much appreciated.