1. Introduction

Apollo is a distributed configuration center developed by the Framework department of Ctrip. It can centrally manage configurations in different environments and clusters, and push configurations to applications in real time after modification. It has standardized permissions and process governance features, and is suitable for micro-service configuration management scenarios.

As stated on the official website, Apollo is an open-source configuration center created by Ctrip, and its GitHub stars will soon reach 22K, which speaks to its maturity and community activity. So after some evaluation during a recent configuration-center selection, we finally settled on Apollo.

Apollo is an essential basic service in the microservices system, so we have to understand its architecture design and basic use.

The rest of this article therefore covers how to rapidly deploy an Apollo cluster to K8S with Helm, how to integrate it with a .NET Core application, and how to smoothly migrate existing configuration to Apollo.

This article walks through the deployment step by step, and hands-on practice is recommended. The deployment chart package and demo have been uploaded to GitHub: K8S.NET.Apollo; bookmark it for future use.

2. Apollo Architecture Overview

Before deploying, you need a basic understanding of Apollo's architecture.

I will not expand on the design in detail here, but the following points are worth knowing; interested readers can find the full details in the official document: Apollo configuration center design.

  1. Config Service provides configuration reading and push to the Apollo client.
  2. Admin Service provides configuration modification, publishing, and related functions; it serves Apollo Portal.
  3. Config Service and Admin Service are both deployed as multiple stateless instances, so they register with and are discovered through a registry.
  4. The default registry is Eureka; in K8S this role is played by native Services instead.
  5. The Apollo client obtains the Config Service service list from the registry to read configuration.
  6. Apollo Portal obtains the Admin Service service list from the registry to manage configuration.

Based on the above introduction to Apollo, its physical architecture can be summarized as follows:

  1. Each environment must have its own independent Config Service and Admin Service, as well as its own ConfigDB.
  2. Multiple environments can be managed by sharing a single Apollo Portal, which has an independent PortalDB.

3. Deploy to K8S based on Helm

Apollo 1.7.0 adds a deployment mode based on Kubernetes native service discovery, replacing the built-in Eureka and simplifying the overall deployment. Official Helm charts are also available, making Apollo easier to use out of the box. The following walks through deploying Apollo for a test environment (deployed to the local K8S cluster shipped with Docker Desktop).

Environment requirements: Kubernetes 1.10+, Helm 3

3.1 Apollo Config & Portal DB setup

From the physical architecture above, ConfigDB and PortalDB must be deployed first. For DB setup, it is recommended to use the bitnami/mysql chart directly. The steps are as follows:

```shell
> helm repo add bitnami https://charts.bitnami.com/bitnami
> helm repo list
> helm repo update
> helm search repo bitnami/mysql
NAME            CHART VERSION   APP VERSION     DESCRIPTION
bitnami/mysql   6.14.8          8.0.21          Chart to create a Highly available MySQL cluster
```

Installing the Helm package requires a customized configuration file, values.yaml, so first download the mysql chart package.

Downloading the chart package locally ensures that subsequent maintenance is based on a pinned chart version, and avoids a helm repo update silently upgrading the chart version without you realizing it.

```shell
> helm pull bitnami/mysql --untar   # download and unpack the mysql chart
mysql
├── Chart.yaml
├── ci
│   └── values-production.yaml
├── files
│   └── docker-entrypoint-initdb.d
│       └── README.md
├── README.md
├── templates
│   ├── initialization-configmap.yaml
│   ├── master-configmap.yaml
│   ├── master-statefulset.yaml
│   ├── master-svc.yaml
│   ├── NOTES.txt
│   ├── secrets.yaml
│   ├── slave-configmap.yaml
│   ├── slave-statefulset.yaml
│   ├── slave-svc.yaml
│   └── _helpers.tpl
├── values-production.yaml
└── values.yaml
```

As described in the distributed deployment guide on the official website, DB initialization scripts are provided to create ApolloConfigDB and ApolloPortalDB. Download those SQL scripts into the files/docker-entrypoint-initdb.d directory of the mysql chart, so that they are executed automatically to create the databases when the mysql instance is deployed:

```shell
> cd mysql/files/docker-entrypoint-initdb.d
> curl https://raw.githubusercontent.com/ctripcorp/apollo/master/scripts/sql/apolloportaldb.sql > apolloportaldb.sql
> curl https://raw.githubusercontent.com/ctripcorp/apollo/master/scripts/sql/apolloconfigdb.sql > apolloconfigdb.sql
> ls

    Directory: C:\Users\Shengjie\k8s\helm\charts\apollo\mysql\files\docker-entrypoint-initdb.d

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a---          8/12/2020 11:01 PM          21291 apolloconfigdb.sql
-a---          8/12/2020 10:56 PM          16278 apolloportaldb.sql
-a---           8/9/2020  6:26 PM            242 README.md
```

Then copy values.yaml to dev-mysql-values.yaml and modify the core configuration:

  1. global.storageClass=hostpath — Check the storage classes your cluster supports with kubectl get sc; I chose the default hostpath. Note that the PV created for mysql has a reclaim policy of Delete, meaning uninstalling mysql deletes the data with it. Update the storageClass if you want test data to survive.
  2. root.password=root — Change the password of the default root user.
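Put together, the modified fragment of dev-mysql-values.yaml looks roughly like this (a sketch; the key names follow the bitnami/mysql 6.x values layout shown by the chart's own values.yaml):

```yaml
# dev-mysql-values.yaml (fragment)
global:
  # must be a storage class reported by `kubectl get sc`
  storageClass: hostpath
root:
  # override the default root user's password
  password: root
```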

After the modification is complete, run the following script to install:

```shell
> helm install mysql-apollo . -f dev-mysql-values.yaml -n db
NAME: mysql-apollo
LAST DEPLOYED: Sun Aug 16 11:01:18 2020
NAMESPACE: db
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please be patient while the chart is being deployed

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace db

Services:

  echo Master: mysql-apollo.db.svc.cluster.local:3306
  echo Slave:  mysql-apollo-slave.db.svc.cluster.local:3306

Administrator credentials:

  echo Username: root
  echo Password : $(kubectl get secret --namespace db mysql-apollo -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

To connect to your database:

  1. Run a pod that you can use as a client:

      kubectl run mysql-apollo-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.21-debian-10-r17 --namespace db --command -- bash

  2. To connect to master service (read/write):

      mysql -h mysql-apollo.db.svc.cluster.local -uroot -p my_database

  3. To connect to slave service (read-only):

      mysql -h mysql-apollo-slave.db.svc.cluster.local -uroot -p my_database

To upgrade this helm chart:

  1. Obtain the password as described on the 'Administrator credentials' section and set the 'root.password' parameter as shown below:

      ROOT_PASSWORD=$(kubectl get secret --namespace db mysql-apollo -o jsonpath="{.data.mysql-root-password}" | base64 --decode)

      helm upgrade mysql-apollo bitnami/mysql --set root.password=$ROOT_PASSWORD
```

Verify that the databases were created successfully by following the connection instructions above:

```shell
> kubectl run mysql-apollo-client --rm --tty -i --restart='Never' --image docker.io/bitnami/mysql:8.0.21-debian-10-r17 --namespace db --command -- bash
I have no name!@mysql-apollo-client:/$ mysql -h mysql-apollo.db.svc.cluster.local -uroot -proot   # connect to the master node
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 61
Server version: 8.0.21 Source distribution

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| ApolloConfigDB     |
| ApolloPortalDB     |
| information_schema |
| my_database        |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
7 rows in set (0.00 sec)

mysql> use ApolloConfigDB;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------------+
| Tables_in_ApolloConfigDB |
+--------------------------+
| AccessKey                |
| App                      |
| AppNamespace             |
| Audit                    |
| Cluster                  |
| Commit                   |
| GrayReleaseRule          |
| Instance                 |
| InstanceConfig           |
| Item                     |
| Namespace                |
| NamespaceLock            |
| Release                  |
| ReleaseHistory           |
| ReleaseMessage           |
| ServerConfig             |
+--------------------------+
16 rows in set (0.01 sec)
```

Apollo ConfigDB and PortalDB are successfully set up.

3.2 Setting up Apollo Config Service

To deploy the Apollo services, first add Ctrip's official chart repository:

```shell
> helm repo add apollo http://ctripcorp.github.io/apollo/charts
> helm search repo apollo
NAME                    CHART VERSION   APP VERSION     DESCRIPTION
apollo/apollo-portal    0.1.0           1.7.0           A Helm chart for Apollo Portal
apollo/apollo-service   0.1.0           1.7.0           A Helm chart for Apollo Config Service and Apol...
```

As shown above, there are two charts, used to deploy the services and the portal respectively. Let's start with apollo/apollo-service; as usual, download the chart package first:

```shell
> helm pull apollo/apollo-service --untar
apollo-service
├── Chart.yaml
├── templates
│   ├── deployment-adminservice.yaml
│   ├── deployment-configservice.yaml
│   ├── NOTES.txt
│   ├── service-adminservice.yaml
│   ├── service-configdb.yaml
│   ├── service-configservice.yaml
│   └── _helpers.tpl
└── values.yaml
```

As the tree shows, this chart deploys the config service and the admin service. Next, copy values.yaml to dev-apollo-svc-values.yaml and modify the following configurations:

  1. configdb.host=mysql-apollo.db — point configdb at the mysql service deployed above
  2. configdb.password=root — the configdb password

The modified configuration is as follows:

```yaml
configdb:
  name: apollo-configdb
  # apolloconfigdb host
  host: "mysql-apollo.db"
  port: 3306
  dbName: ApolloConfigDB
  # apolloconfigdb user name
  userName: "root"
  # apolloconfigdb password
  password: "root"
# ...
```

Leave the other configurations unchanged for now, then run the following command to install:

```shell
> helm install --dry-run --debug apollo-dev-svc . -f dev-apollo-svc-values.yaml -n apollo   # dry run to validate first
> helm install apollo-dev-svc . -f dev-apollo-svc-values.yaml -n apollo
NAME: apollo-dev-svc
LAST DEPLOYED: Sun Aug 16 11:17:38 2020
NAMESPACE: apollo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get meta service url for current release by running these commands:
  echo http://apollo-dev-svc-apollo-configservice.apollo:8080
For local test use:
  export POD_NAME=$(kubectl get pods --namespace apollo -l "app=apollo-dev-svc-apollo-configservice" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080
  kubectl --namespace apollo port-forward $POD_NAME 8080:8080
```

Make a note of the meta service URL above, http://apollo-dev-svc-apollo-configservice.apollo:8080; it will be needed when configuring the portal.

To confirm the deployment succeeded:

```shell
> kubectl get all -n apollo
NAME                                                       READY   STATUS    RESTARTS   AGE
pod/apollo-dev-svc-apollo-adminservice-7d4468ff46-gw6h4    1/1     Running   0          3m26s
pod/apollo-dev-svc-apollo-configservice-58d6c44cd4-n4qk9   1/1     Running   0          3m26s

NAME                                          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/apollo-dev-svc-apollo-adminservice    ClusterIP   10.99.251.14     <none>        8090/TCP   3m26s
service/apollo-dev-svc-apollo-configservice   ClusterIP   10.108.121.201   <none>        8080/TCP   3m26s

NAME                                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/apollo-dev-svc-apollo-adminservice    1/1     1            1           3m26s
deployment.apps/apollo-dev-svc-apollo-configservice   1/1     1            1           3m26s

NAME                                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/apollo-dev-svc-apollo-adminservice-7d4468ff46    1         1         1       3m26s
replicaset.apps/apollo-dev-svc-apollo-configservice-58d6c44cd4   1         1         1       3m26s
```

The configservice and adminservice are both up. Try forwarding the configservice port to a local port:

```shell
> kubectl port-forward service/apollo-dev-svc-apollo-configservice 8080:8080 -n apollo
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
```

Open localhost:8080 in a browser, and you should see the registered service instances:

```json
[{"appName":"apollo-configservice","instanceId":"apollo-configservice:http://apollo.shisheng.wang/config-svc","homepageUrl":"http://apollo.shisheng.wang/config-svc"},{"appName":"apollo-adminservice","instanceId":"apollo-adminservice:http://apollo.shisheng.wang/admin-svc","homepageUrl":"http://apollo.shisheng.wang/admin-svc"}]
```

This indicates that the Apollo Service is successfully set up.

3.3 Setting up the Apollo Portal Service

Again, download the Portal Chart package first and explore the directory structure:

```shell
> helm pull apollo/apollo-portal --untar
apollo-portal
├── Chart.yaml
├── templates
│   ├── deployment-portal.yaml
│   ├── ingress-portal.yaml
│   ├── NOTES.txt
│   ├── service-portal.yaml
│   ├── service-portaldb.yaml
│   └── _helpers.tpl
└── values.yaml
```

This chart deploys the portal service and exposes it through an ingress. Copy values.yaml to dev-apollo-portal-values.yaml and modify the following configurations:

  1. ingress.enabled=true

Enable the ingress and select the ingress controller via annotations. Since the portal is a stateful service, pay attention to session affinity; the annotations below target the nginx-ingress-controller, so adjust them if you use a different ingress controller. The nginx-ingress-controller itself can be installed with helm install nginx bitnami/nginx-ingress-controller.

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
  hosts:
    - host: "apollo.demo.com"
      paths: ["/"]
  tls: []
```
  2. Specify the configuration sources, mainly envs and metaServers:

Set config.envs=dev and config.metaServers.dev=http://apollo-dev-svc-apollo-configservice.apollo:8080 (the meta service URL printed when apollo-service was deployed). If development, test, and production environments are all enabled, set envs: dev,uat,prd and specify a metaServers entry for each environment. Here is the configuration with only the development environment enabled:

```yaml
config:
  # spring profiles to activate
  profiles: "github,auth"
  # specify the env names, e.g. dev,pro
  envs: "dev"
  # specify the meta servers, e.g.
  # dev: http://apollo-configservice-dev:8080
  # pro: http://apollo-configservice-pro:8080
  metaServers:
    dev: http://apollo-svc-dev-apollo-configservice.apollo:8080
    # dev: http://apollo.shisheng.wang
  # specify the context path, e.g. /apollo
  contextPath: ""
  # extra config files for apollo-portal, e.g. application-ldap.yml
  files: {}
```
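For comparison, a multi-environment variant would list every environment under metaServers. The following is only a sketch: the uat/prd release names below are placeholders, not services deployed in this article:

```yaml
config:
  envs: "dev,uat,prd"
  metaServers:
    dev: http://apollo-svc-dev-apollo-configservice.apollo:8080
    uat: http://apollo-svc-uat-apollo-configservice.apollo:8080   # placeholder
    prd: http://apollo-svc-prd-apollo-configservice.apollo:8080   # placeholder
```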
  3. portaldb.host=mysql-apollo.db & portaldb.password=root

Specify the host and password for portaldb:

```yaml
portaldb:
  name: apollo-portaldb
  # apolloportaldb host
  host: mysql-apollo.db
  port: 3306
  dbName: ApolloPortalDB
  # apolloportaldb user name
  userName: root
  # apolloportaldb password
  password: root
```

Leave the other configurations unchanged for now, then run the following command to install:

```shell
> helm install --dry-run --debug apollo-dev-portal . -f dev-apollo-portal-values.yaml -n apollo   # dry run to validate first
> helm install apollo-dev-portal . -f dev-apollo-portal-values.yaml -n apollo
NAME: apollo-dev-portal
LAST DEPLOYED: Sun Aug 16 11:53:18 2020
NAMESPACE: apollo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get apollo portal url by running these commands:
  echo http://apollo.demo.com/
```

At this stage, to access the portal locally, add 127.0.0.1 apollo.demo.com to your local hosts file, then open apollo.demo.com in a browser. The default username/password is apollo/admin.
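For reference, the hosts entry looks like this (the path shown is for Windows; on Linux/macOS edit /etc/hosts instead):

```
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1 apollo.demo.com
```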

3.4 Exposing the Config Service

The development environment is deployed, but a few more steps are needed before the local development environment can reach the Config Service. Add an ingress-configservice.yaml to the templates directory of the apollo-service chart, as follows:

```yaml
# ingress-configservice.yaml
{{- if .Values.configService.ingress.enabled -}}
{{- $fullName := include "apollo.configService.fullName" . -}}
{{- $svcPort := .Values.configService.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "apollo.service.labels" . | nindent 4 }}
  {{- with .Values.configService.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.configService.ingress.tls }}
  tls:
    {{- range .Values.configService.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.configService.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
```

Then, in the chart's values.yaml, add ingress configuration options under the configService node:

```yaml
configService:
  name: apollo-configservice
  fullNameOverride: ""
  replicaCount: 2
  containerPort: 8080
  image:
    repository: apolloconfig/apollo-configservice
    pullPolicy: IfNotPresent
  imagePullSecrets: []
  service:
    fullNameOverride: ""
    port: 8080
    targetPort: 8080
    type: ClusterIP
  ingress:
    enabled: false
    annotations: {}
    hosts:
      - host: ""
        paths: []
    tls: []
```

Then, in the dev-apollo-svc-values.yaml created above, add the corresponding ingress and config.configServiceUrlOverride settings under the configService node:

```yaml
configService:
  name: apollo-configservice
  fullNameOverride: ""
  replicaCount: 1
  containerPort: 8080
  image:
    repository: apolloconfig/apollo-configservice
    pullPolicy: IfNotPresent
  imagePullSecrets: []
  service:
    fullNameOverride: ""
    port: 8080
    targetPort: 8080
    type: ClusterIP
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
      nginx.ingress.kubernetes.io/rewrite-target: /$2
    hosts:
      - host: "apollo.demo.com"
        paths: ["/config-svc(/|$)(.*)"]
    tls: []
  liveness:
    initialDelaySeconds: 100
    periodSeconds: 10
  readiness:
    initialDelaySeconds: 30
    periodSeconds: 5
  config:
    # spring profiles to activate
    profiles: "github,kubernetes"
    # override apollo.config-service.url: config service url to be accessed by apollo-client
    configServiceUrlOverride: "http://apollo.demo.com/config-svc"
    # override apollo.admin-service.url: admin service url to be accessed by apollo-portal
    adminServiceUrlOverride: ""
```

After the modification is complete, run the following command to upgrade the Apollo service:

```shell
> helm upgrade apollo-service-dev . -f dev-apollo-svc-values.yaml -n apollo
NAME: apollo-service-dev
LAST DEPLOYED: Tue Aug 18 14:20:41 2020
NAMESPACE: apollo
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get meta service url for current release by running these commands:
  echo http://apollo-service-dev-apollo-configservice.apollo:8080
For local test use:
  export POD_NAME=$(kubectl get pods --namespace apollo -l "app=apollo-service-dev-apollo-configservice" -o jsonpath="{.items[0].metadata.name}")
  echo http://127.0.0.1:8080
  kubectl --namespace apollo port-forward $POD_NAME 8080:8080

> curl http://apollo.demo.com/config-svc
[{"appName":"apollo-configservice","instanceId":"apollo-configservice:http://apollo.demo.com/config-svc","homepageUrl":"http://apollo.demo.com/config-svc"},{"appName":"apollo-adminservice","instanceId":"apollo-adminservice:http://apollo-service-dev-apollo-adminservice.apollo:8090","homepageUrl":"http://apollo-service-dev-apollo-adminservice.apollo:8090"}]
```

As the output shows, the metaServer configuration can now be read from http://apollo.demo.com/config-svc, so a local development environment can read Apollo configuration through this URL.

4. Integrating .NET Core with Apollo

Create a project and add the Apollo and Swagger packages with the following commands:

```shell
> dotnet new webapi -n K8S.NET.Apollo
> cd K8S.NET.Apollo
> dotnet add package Com.Ctrip.Framework.Apollo.Configuration
> dotnet add package Swashbuckle.AspNetCore
```

Modify appsettings.json to add the apollo configuration section:

```json
{
    "AllowedHosts": "*",
    "apollo": {
        "AppId": "test",
        "MetaServer": "http://apollo.demo.com/config-svc",
        "Env": "Dev"
    }
}
```

Modify Program.cs to add the Apollo configuration source as follows:

```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration(configBuilder =>
        {
            configBuilder.AddApollo(configBuilder.Build().GetSection("apollo"))
                .AddDefault()
                .AddNamespace("TEST1.connectionstrings", "ConnectionStrings")
                .AddNamespace("logging", ConfigFileFormat.Json);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
```

Add the Swagger integration to Startup.cs:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = this.GetType().Namespace, Version = "v1" });
    });
}

// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseSwagger();
    app.UseSwaggerUI(c =>
    {
        c.SwaggerEndpoint("/swagger/v1/swagger.json", $"{this.GetType().Namespace} V1");
        c.RoutePrefix = string.Empty;
    });
    // ...
}
```

Add an ApolloController with the following test code:

```csharp
using System.Collections.Generic;
using System.Text.Json;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

namespace K8S.NET.Apollo.Controllers
{
    [ApiController]
    [Route("[controller]/[action]")]
    public class ApolloController : Controller
    {
        private readonly IConfiguration _configuration;
        public ApolloController(IConfiguration configuration)
        {
            _configuration = configuration;
        }

        // Routed as Apollo/GetLogLevelSection (no extra "key" segment)
        [HttpGet]
        public IActionResult GetLogLevelSection()
        {
            var key = "Logging:LogLevel";
            var val = _configuration.GetSection(key).Get<LoggingOptions>();
            return Ok($"{key}:{JsonSerializer.Serialize(val)}");
        }

        [HttpGet("key")]
        public IActionResult GetString(string key)
        {
            var val = _configuration.GetValue<string>(key);
            return Ok($"{key}:{val}");
        }

        [HttpGet("key")]
        public IActionResult GetConnectionStrings(string key)
        {
            var val = _configuration.GetConnectionString(key);
            return Ok($"{key}:{val}");
        }
    }

    public class LoggingOptions : Dictionary<string, string>
    {
    }
}
```

Log in to Apollo Portal, create the test project, add the configuration items used below (e.g. name, the Default connection string, and the Logging namespace), and publish them.

In addition, Apollo synchronizes a copy of the configuration to a local directory, C:/opt/data/test/config-cache, which ensures that the cached configuration can still be loaded even when the config service is unreachable. Run the following commands to read and verify the configuration:
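To see what has been cached, you can simply list that directory; here is a minimal Python sketch (the helper is illustrative and not part of the Apollo client; the path is the cache directory mentioned above):

```python
from pathlib import Path

def list_config_cache(cache_dir):
    """Return the cached config file names, or an empty list when the
    directory does not exist yet (e.g. before the first successful fetch)."""
    p = Path(cache_dir)
    return sorted(f.name for f in p.iterdir() if f.is_file()) if p.is_dir() else []

# Cache directory used by the demo app in this article (AppId = "test")
print(list_config_cache("C:/opt/data/test/config-cache"))
```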

```shell
> curl https://localhost:5001/Apollo/GetLogLevelSection
Logging:LogLevel:{"Default":"Information","Microsoft":"Warning","Microsoft.Hosting.Lifetime":"Information"}
> curl https://localhost:5001/Apollo/GetString/key?key=name
name:Shengjie
> curl https://localhost:5001/Apollo/GetConnectionStrings/key?key=Default
Default:Server=mu3ne-mysql;port=3306;database=mu3ne0001;user id=root;password=abc123;AllowLoadLocalInfile=true
```

5. Configuration Migration Guide

I suspect the vast majority of Apollo users did not start out with it, but migrated once their configuration grew complex. I previously used K8S ConfigMap for configuration management. Here is the migration guide, which I break down into two modes:

  1. Lazy mode

If you want to make minimal changes, simply continue to maintain the project configuration in JSON format in a private namespace in Apollo:

```csharp
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, builder) =>
        {
            builder.AddApollo(builder.Build().GetSection("apollo"))
                .AddDefault()
                .AddNamespace("appsettings", ConfigFileFormat.Json);
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
```
  2. Obsessive-compulsive mode

There is also the concern that, having moved to Apollo, you should use its full feature set, which means categorizing existing configuration into public and private parts. Public configuration is defined in a public namespace. Public namespaces only support the Properties format, so the JSON must be converted to Properties; for example, a Logging configuration can be converted online with a json2properties converter.

But if you do exactly that, you will find the resulting logging configuration does not take effect. The reason is that such converters join nested keys with a **.** separator, while .NET Core uses **:** to address nested configuration nodes, so the properties keys must be joined with **:** instead. The two forms carry the same content; only the separator differs.
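To make the separator difference concrete, here is a small Python sketch (not from the original article) that flattens nested JSON into properties-style lines; passing sep=":" yields keys that .NET Core's IConfiguration can address, while sep="." mimics a generic json2properties converter:

```python
import json

def json_to_properties(obj, sep=":", prefix=""):
    """Flatten nested JSON into properties-style key=value lines.
    A generic converter would use sep="."; .NET Core needs sep=":"."""
    lines = []
    for key, value in obj.items():
        full_key = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            lines.extend(json_to_properties(value, sep, full_key))
        else:
            lines.append(f"{full_key} = {value}")
    return lines

logging_cfg = json.loads("""
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft": "Warning",
      "Microsoft.Hosting.Lifetime": "Information"
    }
  }
}
""")

for line in json_to_properties(logging_cfg):
    print(line)
# First line printed: Logging:LogLevel:Default = Information
```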

6. Conclusion

That's all. I believe that if you work through this hands-on, you will gain a great deal.

The complete demo and chart packages for this article have been uploaded to GitHub: K8S.NET.Apollo; use them as needed.