The author of this technology column (Qin Kaixin) focuses on demystifying core big data and container cloud technologies, has 5 years of experience building industrial IoT big data cloud platforms, and can provide full-stack big data + cloud native platform consulting. Please continue to follow this blog series. QQ email: [email protected]; academic exchange is welcome.

1. Index management

1.1 Creating An Index

  • A settings block can be used to configure the index at creation time, and a mappings block can initialize its type mappings

    curl -XPUT 'http://elasticsearch02:9200/twitter?pretty' -d '
    {
      "settings" : {
        "index" : {
          "number_of_shards" : 3,
          "number_of_replicas" : 2
        }
      },
      "mappings" : {
        "type1" : {
          "properties" : {
            "field1" : { "type" : "text" }
          }
        }
      }
    }'

1.2 Explanation of the Index Creation Response

  • By default, the index creation command returns a response once the copies of each primary shard have started, or once the request times out. The response is similar to the following.

  • Here acknowledged indicates whether the index was created successfully, and shards_acknowledged indicates whether enough shard copies were started for each primary shard. Both of these may be false and the index can still end up created successfully, because they only indicate whether the two actions completed before the request timed out: the request may time out first, with both actions completing on the ES server afterwards.

  • If acknowledged is false, the request probably timed out: at the moment the response was received, the cluster state had not yet been updated with the newly created index, but the index may still be created later. If shards_acknowledged is false, the request may have timed out before the primary shards' copies started, but by then the index had been created successfully and the cluster state already included it.

      {
          "acknowledged": true,
          "shards_acknowledged": true
      }
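  • As a quick illustration, a deployment script can check the acknowledged flag in the response before proceeding. The snippet below is a minimal sketch that greps a sample response; a real script would parse the JSON properly (e.g. with jq):

```shell
# Sample create-index response, taken from the example above
response='{ "acknowledged": true, "shards_acknowledged": true }'

# Check the acknowledged flag before proceeding; grep is a crude
# stand-in for a real JSON parser here
if echo "$response" | grep -q '"acknowledged": true'; then
  echo "index creation acknowledged"
fi
# prints "index creation acknowledged"
```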

1.3 Deleting an Index

Delete an index:

    curl -XDELETE 'http://elasticsearch02:9200/twitter?pretty'

1.4 Querying Index Settings

curl -XGET 'http://elasticsearch02:9200/twitter?pretty'

1.5 Opening/Closing Indexes

  • If an index is closed, it carries no performance cost beyond keeping its metadata, and no read or write operation against it will succeed. A closed index can later be opened again, at which point the shard recovery process takes place.

  • For example, during operations and maintenance you may need to change some settings on an index: close the index to disallow writes, make the changes, and then open the index again once they succeed.

    curl -XPOST 'http://elasticsearch02:9200/twitter/_close?pretty'
    curl -XPOST 'http://elasticsearch02:9200/twitter/_open?pretty'
    curl -XPUT 'http://elasticsearch02:9200/twitter/type1/1?pretty' -d '
    {
      "field1": "1"
    }'

1.6 Shrinking Indexes

  • The shrink command compresses an existing index into a new index with fewer primary shards.

  • The number of primary shards cannot be modified after index creation, because document routing hashes on it. However, if you want to reduce an index's primary shard count, you can use the shrink command to compress it into a new index. The target shard count must divide the source shard count evenly: an index with eight primary shards can be shrunk to four, two, or one primary shard.
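  • The divisibility rule can be checked before issuing a shrink. The helper below is an illustrative sketch, not an ES API: it lists the primary shard counts a given source index could legally shrink to.

```shell
# List the legal shrink targets for a source primary shard count:
# every count t < src with src % t == 0 is a valid target.
valid_shrink_targets() {
  src=$1
  targets=""
  t=1
  while [ "$t" -lt "$src" ]; do
    if [ $((src % t)) -eq 0 ]; then
      targets="$targets $t"
    fi
    t=$((t + 1))
  done
  echo "$targets" | sed 's/^ //'
}

valid_shrink_targets 8   # prints "1 2 4"
```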

  • For example, you provisioned 10 shards for an index expected to keep 7 days of data, but the requirement has changed to keeping only 3 days, so the data volume is smaller and 10 shards are no longer needed; you can shrink the index down to 5 shards.

  • The shrink command works as follows:

    (1) First, it creates a target index with the same definition as the source index, but with the smaller number of primary shards.
    (2) Then it hard-links the segment files of the source index directly into the target index. If the operating system does not support hard links, the segment files are copied into the target index's data directory instead, which is far more time-consuming.
    (3) Finally, shard recovery is performed on the target index.
  • Before shrinking, the index must first be marked read-only, and one copy of every shard of the index, whether primary or replica, must be relocated onto a single node.

  • By default, the shards of an index may sit on different machines. For example, an index has five shards: shard0 and shard1 on machine 1, shard2 and shard3 on machine 2, and shard4 on machine 3. To shrink, a copy of each of shard0 through shard4 (primary or replica) must be placed together on one node. This can be done with the following command, where index.routing.allocation.require._name must be the name of a node and can be chosen freely.

    curl -XPUT 'http://elasticsearch02:9200/twitter/_settings?pretty' -d '
    {
      "settings": {
        "index.routing.allocation.require._name": "node-elasticsearch-02",
        "index.blocks.write": true
      }
    }'
  • This command takes some time to relocate a copy of each source index shard to the specified node; the progress can be tracked with GET _cat/recovery?v.

  • Once the shard copy relocation process completes, shrink the index with the following command:

      POST my_source_index/_shrink/my_target_index
  • This command returns as soon as the target index has been added to the cluster state; it does not wait for the shrink process to complete. You can also pass settings to override the target index's configuration, such as its number of primary shards:

    curl -XPOST 'http://elasticsearch02:9200/twitter/_shrink/twitter_shrinked?pretty' -d '
    {
      "settings": {
        "index.number_of_replicas": 1,
        "index.number_of_shards": 1,
        "index.codec": "best_compression"
      }
    }'
  • To monitor the progress of the shrink process, use GET _cat/recovery?v.

1.7 Creating a New Index with rollover

  • The rollover command resets an alias to point at a new index when the existing index is considered too large or its data too old. The command accepts an alias name and a list of conditions.

  • If the index satisfies a condition, a new index is created and the alias is pointed at it. For example, suppose the index logs-000001 is given the alias logs_write. When a rollover command is issued, if the index currently behind logs_write, i.e. logs-000001, was created more than 7 days ago or holds more than 1000 documents, a new index logs-000002 is created and the logs_write alias is switched to point at it.

  • You can write a shell script that runs the rollover command at 00:00 every day: if the previous index has existed for more than one day, a new index is created and the alias is assigned to it. This automatically rolls indexes so that each one covers a fixed window: an hour, a day, three days, a week, a month, and so on.

  • Similarly, if ES is used as a logging platform for a distributed e-commerce site, the order system's logs might need their own index retaining the last 3 days, while the transaction system's logs get a separate index retaining the last 30 days.

    curl -XPUT 'http://elasticsearch02:9200/logs-000001?pretty' -d '
    {
      "aliases": {
        "logs_write": {}
      }
    }'

    # Add > 1000 documents to logs-000001
    curl -XPUT 'http://elasticsearch02:9200/logs-000001/data/1?pretty' -d '{ "userid": 1, "page": 1 }'
    curl -XPUT 'http://elasticsearch02:9200/logs-000001/data/2?pretty' -d '{ "userid": 2, "page": 2 }'
    curl -XPUT 'http://elasticsearch02:9200/logs-000001/data/3?pretty' -d '{ "userid": 3, "page": 3 }'

    curl -XPOST 'http://elasticsearch02:9200/logs_write/_rollover?pretty' -d '
    {
      "conditions": {
        "max_age": "1d",
        "max_docs": 3
      }
    }'

    {
      "acknowledged": true,
      "shards_acknowledged": true,
      "old_index": "logs-000001",
      "new_index": "logs-000002",
      "rolled_over": true,
      "dry_run": false,
      "conditions": {
        "[max_age: 7d]": false,
        "[max_docs: 1000]": true
      }
    }
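  • The age check such a nightly script performs can be sketched locally. The helper below is hypothetical (the real decision is made server-side by the max_age condition): it compares an index's creation timestamp against a 1-day threshold.

```shell
# Decide whether an index older than 1 day (86400 seconds) should roll
# over. Both arguments are epoch timestamps in seconds.
needs_rollover() {
  created_epoch=$1
  now_epoch=$2
  age=$((now_epoch - created_epoch))
  if [ "$age" -ge 86400 ]; then
    echo yes
  else
    echo no
  fi
}

needs_rollover 1000000 1090000   # age 90000s -> prints "yes"
needs_rollover 1000000 1080000   # age 80000s -> prints "no"
```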
  • This pattern is common for site user-behavior log data: indexes are segmented by day, a script performs the rollover periodically, and new indexes are created automatically while the alias stays the same, so external users always query the index holding the latest data.

  • As a simple example of how to use this: suppose ES powers real-time user-behavior analysis for a website and each index only needs to keep the current day's data. The rollover strategy ensures the active index always contains the latest day's data; older data rolls into previous indexes, which a shell script can delete, so only current data is retained in ES. You can equally keep the last 7 days of data as required, while analysis and queries hit the index holding the latest day.

  • By default, if the existing index name ends with a dash and a number, such as logs-000001, the new index name increments that number by one, as in logs-000002; the number is always six digits, padded with zeros. You can also specify the desired new index name explicitly, as in the following:

      POST /my_alias/_rollover/my_new_index_name
      {
        "conditions": {
          "max_age":   "7d",
          "max_docs":  1000
        }
      }
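  • The default naming rule can be reproduced locally. The helper below is an illustrative sketch of the increment-and-zero-pad behaviour, not ES code:

```shell
# Derive the next rollover index name: strip the trailing number,
# increment it, and zero-pad back to six digits.
next_rollover_name() {
  base=${1%-*}          # "logs" from "logs-000001"
  num=${1##*-}          # "000001"
  printf '%s-%06d\n' "$base" $((10#$num + 1))
}

next_rollover_name logs-000001   # prints "logs-000002"
next_rollover_name logs-000099   # prints "logs-000100"
```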
  • The rollover command can also embed the current date in index names, as in the following example, which first creates an index named in the format logs-2016.10.31-1. If rollover happens multiple times within the same day, the date stays that day's date and the trailing number keeps incrementing; if rollover happens on a later day, the date changes automatically and the trailing serial number is carried on.

      PUT /%3Clogs-%7Bnow%2Fd%7D-1%3E 
      {
        "aliases": {
          "logs_write": {}
        }
      }
      
      PUT logs_write/log/1
      {
        "message": "a dummy log"
      }
      
      POST logs_write/_refresh
      
      # Wait for a day to pass
      
      POST /logs_write/_rollover 
      {
        "conditions": {
          "max_docs":   "1"
        }
      }
  • Of course, you can also apply new settings to the new index during the rollover:

      POST /logs_write/_rollover
      {
        "conditions" : {
          "max_age": "7d",
          "max_docs": 1000
        },
        "settings": {
          "index.number_of_shards": 2
        }
      }

1.8 Mapping Management

  • The put mapping command adds a new type to an existing index or modifies an existing type, for example by adding fields to it.

  • The following command creates a type directly when the index is created:

    curl -XPUT 'http://elasticsearch02:9200/twitter?pretty' -d '
    {
      "mappings": {
        "tweet": {
          "properties": {
            "message": { "type": "text" }
          }
        }
      }
    }'
  • Add a type to an existing index:

    curl -XPUT 'http://elasticsearch02:9200/twitter/_mapping/user?pretty' -d '
    {
      "properties": {
        "name": { "type": "text" }
      }
    }'
  • Add a field to an existing type:

    curl -XPUT 'http://elasticsearch02:9200/twitter/_mapping/tweet?pretty' -d '
    {
      "properties": {
        "user_name": { "type": "text" }
      }
    }'

    # View the mapping of a type:
    curl -XGET 'http://elasticsearch02:9200/twitter/_mapping/tweet?pretty'

    # View the mapping of a single field of a type:
    curl -XGET 'http://elasticsearch02:9200/twitter/_mapping/tweet/field/message?pretty'

1.9 Index Alias Management

curl -XPOST 'http://elasticsearch02:9200/_aliases?pretty' -d '
{
  "actions" : [
    { "add" : { "index" : "twitter", "alias" : "twitter_prod" } }
  ]
}'

curl -XPOST 'http://elasticsearch02:9200/_aliases?pretty' -d '
{
  "actions" : [
    { "remove" : { "index" : "twitter", "alias" : "twitter_prod" } }
  ]
}'

POST /_aliases
{
  "actions" : [
    { "remove" : { "index" : "test1", "alias" : "alias1" } },
    { "add" : { "index" : "test2", "alias" : "alias1" } }
  ]
}

POST /_aliases
{
  "actions" : [
    { "add" : { "indices" : ["test1", "test2"], "alias" : "alias1" } }
  ]
}
  • An index alias can be mounted on multiple indexes, for example to cover 7 days of data

  • Index aliases are often combined with the rollover pattern described earlier. For performance and ease of management we roll over to a new index for each day's data, but for analysis there may be an alias access-log pointing at the single latest index for real-time computation, and an alias access-log-7days pointing at the last seven indexes for weekly statistics and analysis.
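  • Assuming the six-digit access-log-NNNNNN naming from the rollover examples, the set of indexes a 7-day alias should cover can be computed as below (an illustrative sketch; the alias itself would then be updated via the _aliases API shown above):

```shell
# Print the seven index names an access-log-7days alias should point
# at, given the sequence number of today's index.
seven_day_window() {
  today=$1
  i=$((today - 6))
  while [ "$i" -le "$today" ]; do
    printf 'access-log-%06d\n' "$i"
    i=$((i + 1))
  done
}

seven_day_window 8   # prints access-log-000002 through access-log-000008
```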

1.10 Index Settings Management

  • You will often need to change some settings on an index, typically in combination with the open/close operations described earlier:

    curl -XPUT 'http://elasticsearch02:9200/twitter/_settings?pretty' -d '
    {
      "index" : {
        "number_of_replicas" : 1
      }
    }'
    curl -XGET 'http://elasticsearch02:9200/twitter/_settings?pretty'

1.11 Index Template Management

  • You can define index templates that are applied automatically to newly created indexes. A template can contain settings and mappings, plus a pattern that determines which indexes the template applies to. A template is only consulted when an index is created; modifying a template does not affect existing indexes.

    curl -XPUT 'http://elasticsearch02:9200/_template/template_access_log?pretty' -d '
    {
      "template": "access-log-*",
      "settings": {
        "number_of_shards": 2
      },
      "mappings": {
        "log": {
          "_source": { "enabled": false },
          "properties": {
            "host_name": { "type": "keyword" },
            "created_at": { "type": "date", "format": "EEE MMM dd HH:mm:ss Z YYYY" }
          }
        }
      },
      "aliases" : {
        "access-log" : {}
      }
    }'
    curl -XDELETE 'http://elasticsearch02:9200/_template/template_access_log?pretty'
    curl -XGET 'http://elasticsearch02:9200/_template/template_access_log?pretty'
    curl -XPUT 'http://elasticsearch02:9200/access-log-01?pretty'
    curl -XGET 'http://elasticsearch02:9200/access-log-01?pretty'
  • An index template is useful when you frequently create similar indexes. For example, product data may be split into several indexes, one per product category, each holding a lot of data; since all of these indexes use similar settings, you can define a product index template once and have every newly created product index bind to it and pick up the shared settings.

2 Index Statistics

  • The indices stats API provides statistics for the different types of operations that occur on an index. It offers index-level statistics, though most of them are also available at the node level. Statistics include doc count, index size, segment memory usage, and underlying mechanisms such as merge, flush, refresh, and translog.

    curl -XGET 'http://elasticsearch02:9200/twitter/_stats?pretty'

2.1 Segment Statistics

  • Lucene's segment information can be used to inspect more details of shards and indexes, including optimization hints and the disk space wasted by deleted documents.

      curl -XGET 'http://elasticsearch02:9200/twitter/_segments?pretty'
      
      {
          ...
              "_3": {
                  "generation": 3,
                  "num_docs": 1121,
                  "deleted_docs": 53,
                  "size_in_bytes": 228288,
                  "memory_in_bytes": 3211,
                  "committed": true,
                  "search": true,
                  "version": "4.6",
                  "compound": true
              }
          ...
      }
  • _3: the name of the segment. The name is tied to the segment's file names: all files belonging to this segment start with this name

  • Generation: incremented each time a new segment is generated; the segment name is derived from this value

  • Num_docs: the number of non-deleted documents stored in this segment

  • Deleted_docs: the number of deleted documents stored in this segment; this number hardly matters, because deleted documents are purged whenever a segment merge happens

  • Size_in_bytes: Disk space occupied by this segment

  • Memory_in_bytes: the amount of this segment's data cached in memory to speed up search

  • Committed: whether the segment has been committed/synced to disk so the data is not lost. If false it does not matter much, because the data is also held in the translog, and ES can replay the translog to recover the data

  • Search: whether the segment is searchable. If false, the segment has probably been synced to disk but not yet refreshed, so it cannot be searched yet

  • Version: the Lucene version number

  • Compound: true means Lucene merged all files of this segment into a single file, saving file descriptors
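  • Using the sample segment above, the space wasted by deletes can be estimated with simple arithmetic (illustrative only; merges reclaim this space):

```shell
# Fraction of this segment's stored documents that are deleted but
# still occupying disk space (values from the sample output above).
num_docs=1121
deleted_docs=53
total=$((num_docs + deleted_docs))
pct=$((deleted_docs * 100 / total))
echo "deleted docs: ${pct}% of ${total} stored docs"
# prints "deleted docs: 4% of 1174 stored docs"
```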

2.2 Shard Storage Information Statistics

  • Query the store information of index shard copies: which nodes hold copies of which shards, the allocation ID of each copy, a unique identifier for each shard copy, and any errors encountered while opening the index. By default, only shards with at least one unassigned copy are shown: when cluster health is yellow, shards with at least one unassigned replica are shown; when cluster health is red, shards with an unassigned primary are shown. With status=green you can see information for every shard.

    curl -XGET 'http://elasticsearch02:9200/twitter/_shard_stores?pretty'
    curl -XGET 'http://elasticsearch02:9200/twitter/_shard_stores?status=green&pretty'

    {
        ...
        "0": {
            "stores": [
                {
                    "sPa3OgxLSYGvQ4oPs-Tajw": {
                        "name": "node_t0",
                        "transport_address": "local[1]",
                        "attributes": {
                            "mode": "local"
                        }
                    },
                    "allocation_id": "2iNySv_OQVePRX-yaRH_lQ",
                    "legacy_version": 42,
                    "allocation" : "primary" | "replica" | "unused",
                    "store_exception": ...
                },
                ...
            ]
        },
        ...
    }
  • 0: the shard ID

  • Stores: Store information for each copy of a shard

  • sPa3OgxLSYGvQ4oPs-Tajw: the ID of the node holding this copy, along with the node's information

  • Allocation_id: the allocation ID of the copy

  • Allocation: the role of the shard copy (primary, replica, or unused)

2.4 Clearing the Cache

curl -XPOST 'http://elasticsearch02:9200/twitter/_cache/clear?pretty'    # clears all caches

2.5 flush

  • The flush API lets us force-flush one or more indexes. Flushing an index frees memory by forcing an fsync of the OS cache data to disk, and it also truncates the translog. By default, ES triggers flush automatically from time to time to free memory in a timely way. POST twitter/_flush

  • The flush command accepts two parameters: wait_if_ongoing, which when set to true makes the flush API wait for any in-progress flush to finish before executing and returning (the default is false, in which case an error is returned if another flush operation is already running); and force, which controls whether to force a flush even when one is not necessary

    curl -XPOST 'http://elasticsearch02:9200/twitter/_flush?pretty'

2.6 Refresh

  • Refresh explicitly refreshes an index, making all operations performed before the refresh visible to search. POST twitter/_refresh

    curl -XPOST 'http://elasticsearch02:9200/twitter/_refresh?pretty'

2.7 Force Merge

  • The force merge API merges the segment files of a Lucene index, reducing the number of segments. POST /twitter/_forcemerge

    curl -XPOST 'http://elasticsearch02:9200/twitter/_forcemerge?pretty'

3 Circuit Breakers

ES has a number of circuit breakers that are used to prevent operations from causing an OutOfMemory error. Each breaker has a limit on how much memory it may use. In addition, there is a parent circuit breaker that caps the total memory all circuit breakers together may use.

  • Parent circuit breaker

    indices.breaker.total.limit configures the parent circuit breaker's maximum memory limit; the default is 70% of the JVM heap.

  • Fielddata circuit breaker

    The fielddata circuit breaker estimates how much memory would be needed to load all of a field's data into memory, and prevents OOM problems when fielddata is loaded into JVM memory. The default limit is 60% of the JVM heap, configurable with indices.breaker.fielddata.limit. indices.breaker.fielddata.overhead configures the estimation factor: the estimate is multiplied by this factor to leave some buffer; the default is 1.03.

  • Request circuit breaker

    The request circuit breaker prevents the data structures of a single request from causing an OOM, for example an aggregation request that uses JVM memory for summary calculations. indices.breaker.request.limit defaults to 60% of the JVM heap. indices.breaker.request.overhead is the estimation factor; the default is 1.

  • In-flight requests circuit breaker

    The in-flight requests circuit breaker limits the total memory on a node taken up by all currently incoming transport-layer or HTTP-layer requests, measured by the size of the requests themselves. network.breaker.inflight_requests.limit defaults to 100% of the JVM heap. network.breaker.inflight_requests.overhead is the estimation factor; the default is 1.

  • Script compilation circuit breaker

    This breaker limits the number of inline script compilations within a period of time. script.max_compilations_per_minute defaults to 15 compilations per minute.
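  • To get a feel for the default limits, the arithmetic below computes them for a hypothetical 8gb heap (illustrative only; ES derives these internally from the configured percentages):

```shell
# Default breaker limits for an example 8192mb JVM heap:
# parent = 70% of heap, fielddata and request = 60% of heap.
heap_mb=8192
parent_limit_mb=$((heap_mb * 70 / 100))
fielddata_limit_mb=$((heap_mb * 60 / 100))
echo "parent=${parent_limit_mb}mb fielddata/request=${fielddata_limit_mb}mb"
# prints "parent=5734mb fielddata/request=4915mb"
```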

4 Cache Optimization

  • fielddata cache

  • This cache is used when sorting or aggregating on a field: it loads all of the field's values into memory to speed up sorting and aggregation. Building the fielddata cache for a field is expensive, so it is recommended to give the machine enough memory to hold the fielddata cache.

    indices.fielddata.cache.size controls the cache size; it can be a relative value such as 30% or an absolute value such as 12gb. The default is unbounded.
  • Fielddata is a JVM in-memory data structure used when sorting or aggregating on analyzed (tokenized) fields. Sorting or aggregating on ordinary non-analyzed fields uses the doc values data structure by default, which is cached in the OS cache.

  • query cache

  • The query cache caches query results. Each node has a query cache that uses an LRU policy to evict entries automatically. The query cache only caches filter results, not full search results. If you only perform equality-style queries or filters on a few fields, the filter form performs better, because it can use the query cache.

    indices.queries.cache.size controls the query cache size; the default is 10% of the JVM heap.
  • index buffer

  • The index buffer stores the most recently indexed documents. When it fills up, the documents are written into a segment file, but only into the OS cache, not fsynced to disk; a subsequent flush fsyncs them to disk.

    indices.memory.index_buffer_size controls the index buffer size; the default is 10%. indices.memory.min_index_buffer_size sets the minimum buffer size; the default is 48mb.
  • When a document is indexed, the data is first written into the index buffer; at this point it is not in any disk file and is invisible to search. A refresh writes it into a segment file held in the OS cache, and a copy of the data also goes into the translog.

  • For a distributed search request, the relevant shards execute the search locally and return their result sets to a coordinating node, which performs the final merging and computation. The shard request cache caches each shard's local result, so frequently requested data can be served straight from the cache. Unlike the query cache, which only covers filters, the shard request cache covers all searches and aggregations.

  • shard request cache

  • By default, the shard request cache only caches things such as hits.total and aggregation results; it does not cache the hits themselves. The cache is smart: if the doc data behind a cached entry is refreshed, i.e. changed, the entry is automatically invalidated; when the cache is full, entries are evicted with the LRU algorithm.

  • The cache can be enabled and disabled per index:

    PUT /my_index
    {
      "settings": {
        "index.requests.cache.enable": false
      }
    }

    The cache can also be enabled or disabled per request:

    GET /my_index/_search?request_cache=true
    {
      "size": 0,
      "aggs": {
        "popular_colors": {
          "terms": { "field": "colors" }
        }
      }
    }
  • By default, requests with size > 0 are not cached even if the request cache is enabled in the index settings; only by manually adding the request_cache parameter to the request can results be cached for a size > 0 request. The cache key is the complete request JSON, so each request cannot reuse the cached result of a previous request if the JSON differs even slightly.

    indices.requests.cache.size sets the request cache size; the default is 1%. GET /_stats/request_cache monitors the cache. Plain searches are not cached by default unless request_cache=true is set manually on the request; aggregation requests (size=0) are cached by default, including the aggregated results.
  • Index recovery

    indices.recovery.max_bytes_per_sec limits the amount of data recovered per second; the default is 40mb.

5 Advanced index management features

5.1 Merge

  • A shard in ES is a Lucene index, and each Lucene index consists of multiple segment files. Segment files store the document data and are immutable. ES merges smaller segment files into larger ones; this keeps the number of segment files from ballooning and physically removes deleted data. The merge process automatically throttles itself to balance hardware resources between merge operations and the node's other work.

  • The merge scheduler controls when merge operations execute. Merges run on separate background threads; if the maximum thread count is reached, a merge operation waits for a free thread.

    index.merge.scheduler.max_thread_count controls the maximum number of threads for merge operations on a single shard. The default is Math.max(1, Math.min(4, Runtime.getRuntime().availableProcessors() / 2)), which works fine for SSDs; with mechanical hard disks, it is recommended to reduce this to 1.
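  • The default thread-count formula can be evaluated locally. The helper below is a sketch of that formula in shell arithmetic (assumed equivalent, not ES code):

```shell
# Math.max(1, Math.min(4, availableProcessors / 2)) in shell arithmetic.
merge_threads() {
  procs=$1
  half=$((procs / 2))
  capped=$(( half < 4 ? half : 4 ))
  echo $(( capped > 1 ? capped : 1 ))
}

merge_threads 16   # prints 4
merge_threads 2    # prints 1
```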

5.2 Translog

  • A Lucene commit, which persists data to disk, is a heavyweight operation and therefore cannot be executed after every insert or delete. So the data changed between one commit and the next sits in the OS cache and could be lost if the machine fails. To avoid losing that data, each shard has a transaction log, the translog, which acts as a write-ahead log: any written data is also written to the translog, and if the machine goes down, the translog can be replayed to recover the shard's data.

  • Each ES flush operation performs a Lucene commit, fsyncing the data to disk and truncating the translog file. This happens automatically in the background to ensure the translog does not grow too large, so that replaying it during recovery does not take too long.

    index.translog.flush_threshold_size: the translog size threshold that triggers a flush of the data in the OS cache; the default is 512mb.
  • In addition, by default ES fsyncs the translog every 5 seconds, and also after every add, delete, or update operation,

    but the latter requires index.translog.durability to be set to request (the default) rather than async. With durability=request, ES reports success for an add/delete/update operation only after the translog has been fsynced on the primary shard and on every replica shard.
    index.translog.sync_interval: with index.translog.durability=async, how often the translog is fsynced to disk; the default is 5 seconds.
    index.translog.durability: whether to fsync the translog after every add/delete/update operation. The default, request, fsyncs the translog after every write request, so even if the machine goes down, any operation that returned success is guaranteed to have its translog on disk, and no acknowledged data is lost. With async, the fsync happens only every sync_interval (default 5 seconds), so a crash can lose up to 5 seconds of operations: the data itself still sits in the OS cache, the corresponding translog entries also still sit in the OS cache, and both need those 5 seconds to reach disk.
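  • The practical cost of durability=async can be estimated from the indexing rate. The numbers below are illustrative assumptions, not measurements:

```shell
# Worst-case acknowledged operations at risk with durability=async:
# everything written since the last periodic fsync (default 5s interval).
writes_per_sec=2000
sync_interval_sec=5
echo "at risk: up to $((writes_per_sec * sync_interval_sec)) operations"
# prints "at risk: up to 10000 operations"
```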

5.3 Handling Translog Corruption

  • If the hard disk is damaged, the translog may be corrupted; ES detects this automatically via checksums. ES then treats the shard as failed, stops allocating the shard to that node, and tries to recover the data from a replica shard on another node. If no replica data is available, the data can be restored manually with the elasticsearch-translog tool. Note that elasticsearch-translog must not be used while Elasticsearch is running; otherwise data may be lost.

      bin/elasticsearch-translog truncate -d /var/lib/elasticsearchdata/nodes/0/indices/P45vf_YQRhqjfwLMUvSqDw/0/translog/
      
      Checking existing translog files

6 Summary

There is still a lot of work to do for a production deployment. This article starts from the fundamentals and consolidates the common problems.
