Installation method

  1. Double-click aero-iot-light-install.bat to install the lightweight solution as a Windows service
  2. Double-click aero-iot-light-remove.bat to uninstall the iot-light service
  3. Run net start iot-light to start the lightweight IoT service
  4. Run net stop iot-light to stop the lightweight IoT service
  5. Dynamic loading address: http://localhost:47731/swagger-ui.html; you need to place the SDK JAR packages in the D:\SDK\jars folder ahead of time (see the sketch after this list)
  6. Web address: http://localhost:47731
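
To illustrate step 5, here is a minimal sketch of how JARs dropped into D:\SDK\jars could be loaded at runtime with a URLClassLoader. The class and the driver name in the comment are hypothetical, not the project's actual API:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch: collect every JAR in a folder and expose it on a class loader.
    public class SdkJarLoader {
        public static ClassLoader loadJars(File dir) throws Exception {
            List<URL> urls = new ArrayList<>();
            File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
            if (jars != null) {
                for (File jar : jars) {
                    urls.add(jar.toURI().toURL());
                }
            }
            return new URLClassLoader(urls.toArray(new URL[0]),
                    SdkJarLoader.class.getClassLoader());
        }

        public static void main(String[] args) throws Exception {
            ClassLoader cl = loadJars(new File("D:\\SDK\\jars"));
            // cl.loadClass("com.example.SensorDriver"); // driver class name is hypothetical
        }
    }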

Lightweight solution architecture diagram

Components

  1. Receiving component: interacts with the sensors, converts the data they report into the data we need, and passes it through the message component to the processing component
  2. Processing component: receives the data handed over by the message component and writes it to the database
  3. Web component: relies on Tomcat to provide a call interface for business systems
  4. Message component: relies on the high-performance queue Disruptor to move data between the other components (a wiring sketch follows this list)
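
To make the data flow concrete, here is a minimal sketch of wiring the receiving and processing components together through a Disruptor ring buffer. It assumes Disruptor 3.x; the event and handler are illustrative, not the project's actual classes:

    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    public class SensorPipeline {
        // Event carried through the ring buffer from receiver to processor.
        static class SensorEvent {
            String payload;
        }

        public static void main(String[] args) throws InterruptedException {
            // The ring buffer size must be a power of two.
            Disruptor<SensorEvent> disruptor = new Disruptor<>(
                    SensorEvent::new, 1024, DaemonThreadFactory.INSTANCE);

            // Processing component: consumes events; stands in for writing to the database.
            disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                    System.out.println("persist: " + event.payload));

            RingBuffer<SensorEvent> ringBuffer = disruptor.start();

            // Receiving component: publishes decoded sensor data onto the ring buffer.
            ringBuffer.publishEvent((event, sequence) -> event.payload = "temp=21.5");

            Thread.sleep(100); // let the daemon consumer thread drain the buffer
            disruptor.shutdown();
        }
    }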

Functions:

  • Interaction with sensors
  • SDK dynamic loading
  • Data warehousing
  • Provides TCP, UDP, and MQTT interactions
  • One-click deployment as a Windows service
  • Integrated web container
  • Unit tests (planned)
  • Dynamic configuration (planned)
  • Docker integration (planned)

Disruptor message component

(Due to time constraints, most of the following content is excerpted from tech.meituan.com/2016/11/18/…)

Sharing

The basic cache structure of a computer is as follows: L1, L2, and L3 denote the level 1, level 2, and level 3 caches respectively. The closer a cache is to the CPU, the faster it is and the smaller its capacity. So the L1 cache is small but fast, and sits right next to the CPU core that uses it; L2 is larger and slower, and is still used by only a single CPU core; L3 is larger and slower again, and is shared by all CPU cores on a single socket; finally, there is main memory, which is shared by all CPU cores on all sockets.

When the CPU performs an operation, it looks in L1, then L2, then L3; if the required data is in none of these caches, it must be fetched from main memory. The farther out it has to go, the longer the operation takes. So if you are doing something very frequently, you want to make sure the data it needs stays in the L1 cache.

In addition, when threads share a piece of data, one thread writes the data back to main memory and the other thread reads it from main memory. Roughly, access from the CPU takes about 60-80 ns for main memory, about 15 ns for the L3 cache, about 3 ns for L2, and about 1 ns for L1. As you can see, the CPU reads data from main memory nearly two orders of magnitude slower than it reads data from L1.

Cache line

A cache consists of a number of cache lines. Each cache line is typically 64 bytes and maps to a block of addresses in main memory. A Java variable of type long is 8 bytes, so up to 8 long variables fit in one cache line.

Each time the CPU pulls data from main memory, it loads the adjacent data into the same cache line as well.

When you access a long array, loading one of the values into the cache automatically loads the seven values next to it, so you can traverse the array very quickly. In fact, you can iterate very quickly over any data structure allocated in a contiguous chunk of memory. But this free loading has a downside. Imagine we have a standalone variable a of type long, not part of any array, and another variable b of type long sits next to it in memory; then b is loaded for free whenever a is loaded.
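
To see the cache-line effect, here is a small, unscientific micro-benchmark sketch (JIT warm-up and other caveats apply): a sequential pass reuses each loaded cache line for 8 longs, while a stride-of-8 pass touches only one long per line yet still loads every line, so it ends up nowhere near 8 times faster:

    public class CacheLineDemo {
        static final long[] DATA = new long[8 * 1024 * 1024]; // 64 MB of longs

        public static void main(String[] args) {
            long sum = 0;

            // Sequential pass: each loaded 64-byte cache line serves 8 consecutive longs.
            long t0 = System.nanoTime();
            for (int i = 0; i < DATA.length; i++) sum += DATA[i];
            long sequential = System.nanoTime() - t0;

            // Strided pass: reads an eighth of the elements but still loads every cache line.
            t0 = System.nanoTime();
            for (int i = 0; i < DATA.length; i += 8) sum += DATA[i];
            long strided = System.nanoTime() - t0;

            System.out.println("sequential=" + sequential / 1_000_000 + " ms, "
                    + "strided=" + strided / 1_000_000 + " ms (sum=" + sum + ")");
        }
    }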

False sharing

Suppose a thread on one CPU core is modifying a while a thread on another core is reading b. When the former modifies a, both a and b are loaded into that core's cache line. Once a is updated, every other cache line containing a is invalidated, because the copy of a in those caches is no longer the latest value. When the latter thread then reads b, it finds its cache line invalid and has to reload it from main memory.

When multiple threads modify variables that are independent of each other, if those variables share the same cache line, they inadvertently hurt each other's performance. This is called false sharing.
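
The effect is easy to reproduce. In this rough sketch (class names invented), two threads write to adjacent long fields that almost certainly share one cache line; replacing Pair with a padded layout such as the one in the next section typically makes the same loops measurably faster:

    public class FalseSharingDemo {
        static class Pair {
            volatile long a; // written by thread 1
            volatile long b; // adjacent field, written by thread 2
        }

        public static void main(String[] args) throws InterruptedException {
            final Pair pair = new Pair();
            final long iterations = 100_000_000L;
            Thread t1 = new Thread(() -> {
                for (long i = 0; i < iterations; i++) pair.a = i;
            });
            Thread t2 = new Thread(() -> {
                for (long i = 0; i < iterations; i++) pair.b = i;
            });
            long start = System.nanoTime();
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Every write invalidates the other core's copy of the shared cache line.
            System.out.println((System.nanoTime() - start) / 1_000_000 + " ms");
        }
    }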

Avoiding false sharing

A common fix for false sharing is to add padding so that variables accessed by different threads land on different cache lines, trading space for time. For example (wrapped in a class for completeness):

    public final class PaddedPair {
        volatile long x;
        // Seven long fields of padding keep x and y on separate 64-byte cache lines.
        long p1, p2, p3, p4, p5, p6, p7;
        volatile long y;
    }
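
Note that the JVM is free to reorder fields, so hand-written padding like this is not guaranteed to survive and should be verified with a benchmark. Since Java 8 the same effect can be requested with the @sun.misc.Contended annotation (jdk.internal.vm.annotation.Contended from Java 9 on), enabled for application classes with the -XX:-RestrictContended JVM flag. Disruptor itself applies this padding technique to its sequence counters.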