An overview

Understanding the concepts of Kubernetes is not enough; we also need to understand how the whole thing is implemented, and ideally be able to read the source code. In this series I walk through the implementation of Kubernetes by following how a Deployment resource is created, reading the relevant source code along the way.

Previous articles

  • Kubernetes apply command execution, full-process source code analysis: overview
  • Source code analysis of how kubectl executes the apply command
  • How kube-apiserver accepts a request to create a Deployment resource
  • How kube-controller-manager sets up the informer listeners for creating Deployment resources

As covered in the previous articles, a Deployment resource can be created by kubectl sending a request to the apiserver, which stores the requested resource in etcd. What actually reconciles this resource is the controller-manager. We also saw that at startup it launches the various controllers and creates the various informers that listen for data. So what happens once data is received? This article shows how the controller-manager handles a newly created Deployment resource.

Preface

Overview of controller-manager

In the controller pattern, every change to a watched object fires an event, and the controller then runs its syncLoop. The controller listens for these events through an informer, which performs the List/Watch against the apiserver. I analyzed the informer setup of kube-controller-manager for creating Deployment resources in the previous article. Let's review the structure of a controller; a minimal sketch of the pattern follows.
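
The pattern itself boils down to a few lines. The following is a minimal sketch, not actual Kubernetes source; the type and helper names (miniController, register, worker) are made up for illustration:

package sketch

import (
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// miniController is an illustrative stand-in for a real controller such as
// DeploymentController: event handlers only enqueue a namespace/name key,
// and worker goroutines drain the queue and call the sync handler.
type miniController struct {
	queue       workqueue.RateLimitingInterface
	syncHandler func(key string) error // e.g. syncDeployment
}

// register wires the informer events into the workqueue.
func (c *miniController) register(informer cache.SharedIndexInformer) {
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
				c.queue.Add(key)
			}
		},
		UpdateFunc: func(old, cur interface{}) {
			if key, err := cache.MetaNamespaceKeyFunc(cur); err == nil {
				c.queue.Add(key)
			}
		},
	})
}

// worker is the syncLoop: take a key, reconcile, requeue on error.
func (c *miniController) worker() {
	for {
		key, quit := c.queue.Get()
		if quit {
			return
		}
		if err := c.syncHandler(key.(string)); err != nil {
			c.queue.AddRateLimited(key) // retry with backoff
		} else {
			c.queue.Forget(key)
		}
		c.queue.Done(key)
	}
}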

The essence of a Deployment is to control a ReplicaSet, and the ReplicaSet in turn controls Pods; each controller drives the objects it owns toward the desired state. (The client-go sketch after the list below shows how this ownership chain can be observed through ownerReferences.)

The entire creation process involves two controllers:

  • deployment-controller-manager
  • replicaset-controller-manager
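
To see the ownership chain on a live cluster, you can walk the ownerReferences with client-go. A rough sketch, assuming a kubeconfig path and the default namespace (both are illustrative, not required by anything in this article):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from a local kubeconfig (the path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Every ReplicaSet created for a Deployment carries an ownerReference
	// pointing back at that Deployment; every Pod points at its ReplicaSet.
	rsList, err := client.AppsV1().ReplicaSets("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for i := range rsList.Items {
		rs := rsList.Items[i]
		if owner := metav1.GetControllerOf(&rs); owner != nil && owner.Kind == "Deployment" {
			fmt.Printf("ReplicaSet %s is controlled by Deployment %s\n", rs.Name, owner.Name)
		}
	}
}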

Process the data

We are creating a Deployment resource, so the question becomes:

  1. How does the Deployment resource ultimately end up generating Pod resources?

Let’s look at deployment-controller-manager first.

deployment-controller-manager

Let’s look directly at the addDeployment method in the source.

// Add the deployment method to the controller code
func (dc *DeploymentController) addDeployment(obj interface{}) {
	d := obj.(*apps.Deployment)
	klog.V(4).InfoS("Adding deployment"."deployment", klog.KObj(d))
	dc.enqueueDeployment(d)
}

// Add the event to the workqueue
func (dc *DeploymentController) enqueue(deployment *apps.Deployment) {
	key, err := controller.KeyFunc(deployment)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", deployment, err))
		return
	}
  // Looking at the deploymentController data structure here, you can see the workqueue
	dc.queue.Add(key)
}

Compare this with the structure of DeploymentController:

// DeploymentController is responsible for synchronizing Deployment objects stored
// in the system with actual running replica sets and pods.
type DeploymentController struct {
	// rsControl is used for adopting/releasing replica sets.
	rsControl     controller.RSControlInterface
	client        clientset.Interface
	eventRecorder record.EventRecorder

	// To allow injection of syncDeployment for testing.
	syncHandler func(dKey string) error
	// used for unit testing
	enqueueDeployment func(deployment *apps.Deployment)

	// dLister can list/get deployments from the shared informer's store
	dLister appslisters.DeploymentLister
	// rsLister can list/get replica sets from the shared informer's store
	rsLister appslisters.ReplicaSetLister
	// podLister can list/get pods from the shared informer's store
	podLister corelisters.PodLister

	// dListerSynced returns true if the Deployment store has been synced at least once.
	// Added as a member to the struct to allow injection for testing.
	dListerSynced cache.InformerSynced
	// rsListerSynced returns true if the ReplicaSet store has been synced at least once.
	// Added as a member to the struct to allow injection for testing.
	rsListerSynced cache.InformerSynced
	// podListerSynced returns true if the pod store has been synced at least once.
	// Added as a member to the struct to allow injection for testing.
	podListerSynced cache.InformerSynced

	// Deployments that need to be synced
  // This is the same as the previous analysis
	queue workqueue.RateLimitingInterface
}

So the controller uses a workqueue, which echoes the controller pattern sketched earlier: the controller-manager pushes the event onto the workqueue. What happens once the item is in the workqueue?

// Run begins watching and syncing.
// Run the key method of deploymentController
func (dc *DeploymentController) Run(workers int, stopCh <-chan struct{}) {
	defer utilruntime.HandleCrash()
	defer dc.queue.ShutDown()

	klog.InfoS("Starting controller"."controller"."deployment")
	defer klog.InfoS("Shutting down controller"."controller"."deployment")
  
  // 1. Wait for informer to sync cache
	if !cache.WaitForNamedCacheSync("deployment", stopCh, dc.dListerSynced, dc.rsListerSynced, dc.podListerSynced) {
		return
	}
  // 2. Start `workers` goroutines (5 by default for the deployment controller) to run the worker method
	for i := 0; i < workers; i++ {
		go wait.Until(dc.worker, time.Second, stopCh)
	}

	<-stopCh
}

// 3. The worker method uses processNextWorkItem
// worker runs a worker thread that just dequeues items, processes them, and marks them done.
// It enforces that the syncHandler is never invoked concurrently with the same key.
func (dc *DeploymentController) worker() {
	for dc.processNextWorkItem() {
	}
}

func (dc *DeploymentController) processNextWorkItem() bool {
  // 4. Fetching objects from the queue
	key, quit := dc.queue.Get()
	if quit {
		return false
	}
	defer dc.queue.Done(key)
  // 5. Perform sync
	err := dc.syncHandler(key.(string))
	dc.handleErr(err, key)

	return true
}

You can see that when deployment-controller-manager runs, it consumes the workqueue. What actually processes each item is the syncHandler, which was set to syncDeployment when the controller was constructed.
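
Where does that wiring happen? In NewDeploymentController the informer event handlers and the sync handler are registered roughly as follows (a trimmed, paraphrased excerpt; details vary slightly between Kubernetes versions):

// Trimmed from NewDeploymentController in
// pkg/controller/deployment/deployment_controller.go (paraphrased)
dInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    dc.addDeployment,
	UpdateFunc: dc.updateDeployment,
	// This will enter the sync loop and no-op, because the deployment has been deleted from the store.
	DeleteFunc: dc.deleteDeployment,
})
rsInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    dc.addReplicaSet,
	UpdateFunc: dc.updateReplicaSet,
	DeleteFunc: dc.deleteReplicaSet,
})
podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	DeleteFunc: dc.deletePod,
})

// The real handlers are assigned here; tests can swap them out.
dc.syncHandler = dc.syncDeployment
dc.enqueueDeployment = dc.enqueue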

func (dc *DeploymentController) syncDeployment(key string) error {
  // Parse the key to get the namespace and name
	namespace, name, err := cache.SplitMetaNamespaceKey(key)
  ...
	// Retrieve data from indexer
	deployment, err := dc.dLister.Deployments(namespace).Get(name)
  ...
  // Construct a Deployment instance
	d := deployment.DeepCopy()
  // Warn if the Deployment selects every pod (empty selector)
	everything := metav1.LabelSelector{}
	if reflect.DeepEqual(d.Spec.Selector, &everything) {
		dc.eventRecorder.Eventf(d, v1.EventTypeWarning, "SelectingAll", "This deployment is selecting all pods. A non-empty selector is required.")
		if d.Status.ObservedGeneration < d.Generation {
			d.Status.ObservedGeneration = d.Generation
			dc.client.AppsV1().Deployments(d.Namespace).UpdateStatus(context.TODO(), d, metav1.UpdateOptions{})
		}
		return nil
	}

	// List ReplicaSets owned by this Deployment, while reconciling ControllerRef
	// through adoption/orphaning.
  // Get the ReplicaSets owned by this Deployment
	rsList, err := dc.getReplicaSetsForDeployment(d)
	if err != nil {
		return err
	}
	// List all Pods owned by this Deployment, grouped by their ReplicaSet.
	// Current uses of the podMap are:
	//
	// * check if a Pod is labeled correctly with the pod-template-hash label.
	// * check that no old Pods are running in the middle of Recreate Deployments.
  // Determine the corresponding pod condition
	podMap, err := dc.getPodMapForDeployment(d, rsList)
	if err != nil {
		return err
	}
  // Determine whether this is a scaling event
  scalingEvent, err := dc.isScalingEvent(d, rsList)
  if err != nil {
		return err
	}
  // If it is, only scale the ReplicaSets
	if scalingEvent {
		return dc.sync(d, rsList)
	}
  // This decides how the Deployment is rolled out; how the pods finally get created is determined here
	switch d.Spec.Strategy.Type {
	case apps.RecreateDeploymentStrategyType:
		return dc.rolloutRecreate(d, rsList, podMap)
  // The rolling update strategy is used by default
	case apps.RollingUpdateDeploymentStrategyType:
		return dc.rolloutRolling(d, rsList)
	}
	return fmt.Errorf("unexpected deployment strategy type: %s", d.Spec.Strategy.Type)
	...
}

Following the dc.sync(d, rsList) call, you will eventually reach the code that updates the ReplicaSet.
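
For context, dc.sync itself is short: it re-lists the ReplicaSets, scales them toward the desired replica counts, and then syncs the Deployment status. A condensed, paraphrased version (the cleanup branch for paused deployments is elided):

func (dc *DeploymentController) sync(d *apps.Deployment, rsList []*apps.ReplicaSet) error {
	newRS, oldRSs, err := dc.getAllReplicaSetsAndSyncRevision(d, rsList, false)
	if err != nil {
		return err
	}
	// Scale the new and old ReplicaSets toward the desired replica counts.
	if err := dc.scale(d, newRS, oldRSs); err != nil {
		return err
	}
	...
	allRSs := append(oldRSs, newRS)
	return dc.syncDeploymentStatus(allRSs, newRS, d)
}

dc.scale eventually calls scaleReplicaSet, shown below.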

Kubernetes/pkg/controller/deployment/sync.go

func (dc *DeploymentController) scaleReplicaSet(rs *apps.ReplicaSet, newScale int32, deployment *apps.Deployment, scalingOperation string) (bool, *apps.ReplicaSet, error) {
	sizeNeedsUpdate := *(rs.Spec.Replicas) != newScale
	annotationsNeedUpdate := deploymentutil.ReplicasAnnotationsNeedUpdate(rs, *(deployment.Spec.Replicas), *(deployment.Spec.Replicas)+deploymentutil.MaxSurge(*deployment))
	scaled := false
	var err error
	if sizeNeedsUpdate || annotationsNeedUpdate {
		rsCopy := rs.DeepCopy()
		*(rsCopy.Spec.Replicas) = newScale
		deploymentutil.SetReplicasAnnotations(rsCopy, *(deployment.Spec.Replicas), *(deployment.Spec.Replicas)+deploymentutil.MaxSurge(*deployment))
    // This is where the ReplicaSet gets updated
		rs, err = dc.client.AppsV1().ReplicaSets(rsCopy.Namespace).Update(context.TODO(), rsCopy, metav1.UpdateOptions{})
		if err == nil && sizeNeedsUpdate {
			scaled = true
			dc.eventRecorder.Eventf(deployment, v1.EventTypeNormal, "ScalingReplicaSet", "Scaled %s replica set %s to %d", scalingOperation, rs.Name, newScale)
		}
	}
	return scaled, rs, err
}

As the source shows, the Deployment controller determines whether the ReplicaSet exists and scales it, which takes us one step closer to the Pods finally being created. It is getReplicaSetsForDeployment that establishes the relationship between the two.

func (dc *DeploymentController) getReplicaSetsForDeployment(d *apps.Deployment) ([]*apps.ReplicaSet, error) {
	...
	// Associate the Deployment with its ReplicaSets (adopt/orphan as needed)
	cm := controller.NewReplicaSetControllerRefManager(dc.rsControl, d, deploymentSelector, controllerKind, canAdoptFunc)
	return cm.ClaimReplicaSets(rsList)
}

Finally, the ReplicaSets are updated through a rolling update. During the rollout the controller keeps reconciling: it scales the new and old ReplicaSets, checks the Pod status, and updates the Deployment status.

// rolloutRolling implements the logic for rolling a new replica set.
func (dc *DeploymentController) rolloutRolling(d *apps.Deployment, rsList []*apps.ReplicaSet) error {
	newRS, oldRSs, err := dc.getAllReplicaSetsAndSyncRevision(d, rsList, true)
	if err != nil {
		return err
	}
	allRSs := append(oldRSs, newRS)

	// Scale up, if we can.
	scaledUp, err := dc.reconcileNewReplicaSet(allRSs, newRS, d)
	if err != nil {
		return err
	}
	if scaledUp {
		// Update DeploymentStatus
		return dc.syncRolloutStatus(allRSs, newRS, d)
	}

	// Scale down, if we can.
	scaledDown, err := dc.reconcileOldReplicaSets(allRSs, controller.FilterActiveReplicaSets(oldRSs), newRS, d)
	if err != nil {
		return err
	}
	if scaledDown {
		// Update DeploymentStatus
		return dc.syncRolloutStatus(allRSs, newRS, d)
	}

	if deploymentutil.DeploymentComplete(d, &d.Status) {
		if err := dc.cleanupDeployment(oldRSs, d); err != nil {
			return err
		}
	}
	// Sync deployment status
	return dc.syncRolloutStatus(allRSs, newRS, d)
}

Here is a brief summary of the processing logic of syncDeployment in deployment-controller-manager:

  1. Call getReplicaSetsForDeployment to obtain the ReplicaSets in the cluster related to this Deployment. If a matching ReplicaSet is found that is not yet associated with the Deployment, its ownerReferences field is set to establish the association; if it is associated but no longer matches, the ownerReference is removed.
  2. Call getPodMapForDeployment to get the pods associated with the current Deployment object, grouped by ReplicaSet UID.
  3. Check the Deployment's DeletionTimestamp field to determine whether this is a delete operation.
  4. Call checkPausedConditions to check whether the Deployment is in the paused state and add the appropriate condition.
  5. Call getRollbackTo to check whether the Deployment carries the deprecated "deprecated.deployment.rollback.to" annotation; if it does, call dc.rollback to perform the rollback (steps 3-5 are sketched in the fragment after this list).
  6. Call dc.isScalingEvent to check whether the Deployment is in the middle of a scaling operation.
  7. Finally, check whether this is an update and, depending on the update strategy (Recreate or RollingUpdate), perform the corresponding operation.
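
Steps 3-5 correspond to code that is omitted from the syncDeployment listing above; paraphrased, it looks roughly like this:

// Paraphrased fragment of syncDeployment covering steps 3-5
if d.DeletionTimestamp != nil {
	// The deployment is being deleted: only sync its status.
	return dc.syncStatusOnly(d, rsList)
}

// Add or clear the "paused" condition as needed.
if err = dc.checkPausedConditions(d); err != nil {
	return err
}
if d.Spec.Paused {
	return dc.sync(d, rsList)
}

// A pending rollback (requested via the deprecated rollback annotation) is handled first.
if getRollbackTo(d) != nil {
	return dc.rollback(d, rsList)
}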

RealRSControl

In deployment-controller-manager we saw rsControl (a RealRSControl) being used, rather than replicaset-controller-manager itself. So what does it do, and how does the deployment controller react to ReplicaSet events? Let's look at RealRSControl first, and then at the addReplicaSet handler.
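
RealRSControl is a thin wrapper around the clientset that the deployment controller uses when adopting or releasing ReplicaSets, i.e. patching their ownerReferences. Paraphrased from pkg/controller/controller_utils.go:

// RSControlInterface knows how to patch ReplicaSets, e.g. to set or remove
// an ownerReference during adoption/orphaning.
type RSControlInterface interface {
	PatchReplicaSet(namespace, name string, data []byte) error
}

// RealRSControl is the default implementation, backed by the real clientset.
type RealRSControl struct {
	KubeClient clientset.Interface
	Recorder   record.EventRecorder
}

func (r RealRSControl) PatchReplicaSet(namespace, name string, data []byte) error {
	_, err := r.KubeClient.AppsV1().ReplicaSets(namespace).Patch(context.TODO(), name, types.StrategicMergePatchType, data, metav1.PatchOptions{})
	return err
}

Besides patching through RealRSControl, the deployment controller also watches ReplicaSet events directly. The addReplicaSet handler below simply re-enqueues the owning Deployment.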

// addReplicaSet enqueues the deployment that manages a ReplicaSet when the ReplicaSet is created.
func (dc *DeploymentController) addReplicaSet(obj interface{}) {
	rs := obj.(*apps.ReplicaSet)

	if rs.DeletionTimestamp != nil {
		// On a restart of the controller manager, it's possible for an object to
		// show up in a state that is already pending deletion.
		dc.deleteReplicaSet(rs)
		return
	}

	// If it has a ControllerRef, that's all that matters.
	if controllerRef := metav1.GetControllerOf(rs); controllerRef != nil {
		d := dc.resolveControllerRef(rs.Namespace, controllerRef)
		if d == nil {
			return
		}
		klog.V(4).InfoS("ReplicaSet added"."replicaSet", klog.KObj(rs))
		dc.enqueueDeployment(d)
		return
	}

	// Otherwise, it's an orphan. Get a list of all matching Deployments and sync
	// them to see if anyone wants to adopt it.
	ds := dc.getDeploymentsForReplicaSet(rs)
	if len(ds) == 0 {
		return
	}
	klog.V(4).InfoS("Orphan ReplicaSet added"."replicaSet", klog.KObj(rs))
	for _, d := range ds {
		dc.enqueueDeployment(d)
	}
}

These events also end up in the workqueue, which means every event related to the Deployment is listened to and acted on, mostly resulting in status updates. I won't go into further detail here.

replicaset-controller-manager

ReplicaSet create and update events are produced by the Deployment controller, and the ReplicaSet controller's own logic is very similar to the Deployment controller's. Let's look at what happens once an added ReplicaSet has been picked up and dequeued.
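
The add handler on the ReplicaSet side is trivial: it just enqueues the ReplicaSet key, and the real work happens in syncReplicaSet below. A condensed, paraphrased version (helper names vary slightly between versions):

// Paraphrased: on an Add event the ReplicaSet is simply enqueued.
func (rsc *ReplicaSetController) addRS(obj interface{}) {
	rs := obj.(*apps.ReplicaSet)
	klog.V(4).Infof("Adding %s %s/%s", rsc.Kind, rs.Namespace, rs.Name)
	rsc.enqueueReplicaSet(rs)
}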

Kubernetes/pkg/controller/replicaset/replica_set.go

func (rsc *ReplicaSetController) syncReplicaSet(key string) error {
	startTime := time.Now()
	defer func() {
		klog.V(4).Infof("Finished syncing %v %q (%v)", rsc.Kind, key, time.Since(startTime))
	}()

	namespace, name, err := cache.SplitMetaNamespaceKey(key)
	if err != nil {
		return err
	}
  // 1. Obtain the rs object from the Informer cache based on ns/name
  // If rs has been deleted, delete the object in EXPECTATIONS directly
	rs, err := rsc.rsLister.ReplicaSets(namespace).Get(name)
	if apierrors.IsNotFound(err) {
		klog.V(4).Infof("%v %v has been deleted", rsc.Kind, key)
		rsc.expectations.DeleteExpectations(key)
		return nil
	}
	if err != nil {
		return err
	}
  // 2. Check whether the RS needs to be synchronized
	rsNeedsSync := rsc.expectations.SatisfiedExpectations(key)
	selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
	if err != nil {
		utilruntime.HandleError(fmt.Errorf("error converting pod selector to selector: %v", err))
		return nil
	}
	// list all pods to include the pods that don't match the rs`s selector
	// anymore but has the stale controller ref.
	// TODO: Do the List and Filter in a single pass, or use an index.
  // 3. Get all the pod lists
	allPods, err := rsc.podLister.Pods(rs.Namespace).List(labels.Everything())
	if err != nil {
		return err
	}
	// Ignore inactive pods.
  // 4. Filter out abnormal PODS. The pods in the deleted or failed state are not active
	filteredPods := controller.FilterActivePods(allPods)

	// NOTE: filteredPods are pointing to objects from cache - if you need to
	// modify them, you need to copy it first.
  // 5. Check all pods, adopt and release according to pod, and finally get the POD list associated with this RS
	filteredPods, err = rsc.claimPods(rs, selector, filteredPods)
	if err != nil {
		return err
	}

  // 6. If sync is required, run manageReplicas to create/delete pod
	var manageReplicasErr error
	if rsNeedsSync && rs.DeletionTimestamp == nil {
		manageReplicasErr = rsc.manageReplicas(filteredPods, rs)
	}
	rs = rs.DeepCopy()
  // 7. Calculate the current status of rs
	newStatus := calculateStatus(rs, filteredPods, manageReplicasErr)

	// Always updates status as pods come up or die.
  // 8. Update rs status
	updatedRS, err := updateReplicaSetStatus(rsc.kubeClient.AppsV1().ReplicaSets(rs.Namespace), rs, newStatus)
	if err != nil {
		// Multiple things could lead to this update failing. Requeuing the replica set ensures
		// Returning an error causes a requeue without forcing a hotloop
		return err
	}
	// Resync the ReplicaSet after MinReadySeconds as a last line of defense to guard against clock-skew.
  // 9. Determine whether rs needs to be added to the delay queue
	if manageReplicasErr == nil && updatedRS.Spec.MinReadySeconds > 0 &&
		updatedRS.Status.ReadyReplicas == *(updatedRS.Spec.Replicas) &&
		updatedRS.Status.AvailableReplicas != *(updatedRS.Spec.Replicas) {
		rsc.queue.AddAfter(key, time.Duration(updatedRS.Spec.MinReadySeconds)*time.Second)
	}
	return manageReplicasErr
}

// manageReplicas is the key method that creates/deletes pods
func (rsc *ReplicaSetController) manageReplicas(filteredPods []*v1.Pod, rs *apps.ReplicaSet) error {
	...
	// This is where the pods get created
		successfulCreations, err := slowStartBatch(diff, controller.SlowStartInitialBatchSize, func() error {
			err := rsc.podControl.CreatePods(rs.Namespace, &rs.Spec.Template, rs, metav1.NewControllerRef(rs, rsc.GroupVersionKind))
			if err != nil {
				if apierrors.HasStatusCause(err, v1.NamespaceTerminatingCause) {
					// if the namespace is being terminated, we don't have to do
					// anything because any creation will fail
					return nil
				}
			}
			return err
		})
  ...
}

syncReplicaSet is the controller's core method; it drives the objects the controller owns toward the desired state. The main logic is as follows:

  1. Obtain the ReplicaSet object based on ns/name.
  2. Call expectations.SatisfiedExpectations to determine whether a real sync operation needs to be performed.
  3. Get the list of all pods.
  4. Filter by pod label to get the pods associated with this ReplicaSet: orphan pods that match the label are adopted, and associated pods that no longer match are released.
  5. Call manageReplicas to synchronize the pods, i.e. add/del pods.
  6. Calculate the current status of the ReplicaSet and update it.
  7. If the ReplicaSet has MinReadySeconds set, add it to the delay queue.

manageReplicas is the key call in syncReplicaSet: it computes how many pods the ReplicaSet needs to create or delete and calls the apiserver accordingly. At this stage it only issues the create calls and does not guarantee the pods run successfully; if not all pod objects are created successfully in a given round, no more pods are created in that round. At most 500 pods can be created or deleted per sync, and any remainder is handled in the next syncLoop.
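
The "slow start" batching is what keeps a failing ReplicaSet from hammering the apiserver: batches start at SlowStartInitialBatchSize and double only while every call in the previous batch succeeds. A simplified, paraphrased version follows (the real implementation lives in pkg/controller/replicaset/replica_set.go and uses its own integer min helper; imports such as sync are elided here):

// slowStartBatch runs fn `count` times in exponentially growing batches,
// stopping at the first batch in which any call fails.
func slowStartBatch(count int, initialBatchSize int, fn func() error) (int, error) {
	remaining := count
	successes := 0
	for batchSize := intMin(remaining, initialBatchSize); batchSize > 0; batchSize = intMin(2*batchSize, remaining) {
		errCh := make(chan error, batchSize)
		var wg sync.WaitGroup
		wg.Add(batchSize)
		for i := 0; i < batchSize; i++ {
			go func() {
				defer wg.Done()
				if err := fn(); err != nil { // fn is one CreatePods call
					errCh <- err
				}
			}()
		}
		wg.Wait()
		successes += batchSize - len(errCh)
		if len(errCh) > 0 {
			// Some creations failed: give up on this round and let the next sync retry.
			return successes, <-errCh
		}
		remaining -= batchSize
	}
	return successes, nil
}

func intMin(a, b int) int {
	if a < b {
		return a
	}
	return b
}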

Because one ReplicaSet maps to many pods, failures are easy to hit, so pod creation needs extra care. This is why the ReplicaSet controller introduces the expectations mechanism: before running manageReplicas, rsc.expectations.SatisfiedExpectations(key) decides whether the sync actually needs to run.

// SatisfiedExpectations returns true if the required adds/dels for the given controller have been observed.
// Add/del counts are established by the controller at sync time, and updated as controllees are observed by the controller
// manager.
// The function called in the syncReplicaSet method
func (r *ControllerExpectations) SatisfiedExpectations(controllerKey string) bool {
  // 1. If the key exists, check whether the condition is met or whether the synchronization period is exceeded
	if exp, exists, err := r.GetExpectations(controllerKey); exists {
    // 3. If add <= 0 and del <= 0, the local observation state is the expected state
		if exp.Fulfilled() {
			klog.V(4).Infof("Controller expectations fulfilled %#v", exp)
			return true
    // 4. Check whether the key has expired; ExpectationsTimeout defaults to 5 * time.Minute
		} else if exp.isExpired() {
			klog.V(4).Infof("Controller expectations expired %#v", exp)
			return true
		} else {
			klog.V(4).Infof("Controller still waiting on expectations %#v", exp)
			return false
		}
	} else if err != nil {
		klog.V(2).Infof("Error encountered while checking expectations %#v, forcing sync", err)
	} else {
    // 2. The RS may be newly created and needs to be synced
		// When a new controller is created, it doesn't have expectations.
		// When it doesn't see expected watch events for > TTL, the expectations expire.
		//	- In this case it wakes up, creates/deletes controllees, and sets expectations again.
		// When it has satisfied expectations and no controllees need to be created/destroyed > TTL, the expectations expire.
		//	- In this case it continues without setting expectations till it needs to create/delete controllees.
		klog.V(4).Infof("Controller %v either never recorded expectations, or the ttl expired.", controllerKey)
	}
	// Trigger a sync if we either encountered and error (which shouldn't happen since we're
	// getting from local store) or this controller hasn't established expectations.
	return true
}

The whole call flow can be summarized briefly as follows.

create rs
  -> syncReplicaSet
  -> SatisfiedExpectations (no rsKey yet, so rsNeedsSync is true)
  -> manageReplicas: work out how many pods to add/del, create the expectations object and set its add/del counts
  -> slowStartBatch creates the pods in batches / the redundant pods are deleted, then the expectations object is updated
  -> updateReplicaSetStatus (update the rs status subresource)

Expectations mechanism

I mentioned expectations in syncReplicaSet above. What exactly does the expectations mechanism do?

Besides the informer cache, the ReplicaSet controller keeps a local cache called expectations, which records, for every rs object, how many pods still need to be added or deleted. If both counts are 0, the number of pods the rs expected to create or delete has been satisfied. If not, a create or delete in a previous syncLoop failed, and the rs needs to be synced again once the expectation expires.
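
The per-controller record behind this is tiny; paraphrased from pkg/controller/controller_utils.go:

// Paraphrased: one entry per controller key in the expectations cache.
// add and del count down toward zero as the expected create/delete events are observed.
type ControlleeExpectations struct {
	add       int64
	del       int64
	key       string
	timestamp time.Time
}

// Fulfilled returns true once no more creates/deletes are expected.
func (e *ControlleeExpectations) Fulfilled() bool {
	return atomic.LoadInt64(&e.add) <= 0 && atomic.LoadInt64(&e.del) <= 0
}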

But does every update event require a sync? Updates that don't touch rs.spec.replicas don't need one, because changes to other spec fields or to status don't require creating or deleting pods.

The following conditions trigger a ReplicaSet sync:

  1. ReplicaSet events: AddRS, UpdateRS, DeleteRS;
  2. Pod events: AddPod, UpdatePod, DeletePod;
  3. Informer resync (the second-level cache synchronization);

Let’s look at the source code

type UIDTrackingControllerExpectations struct {
	ControllerExpectationsInterface
	// TODO: There is a much nicer way to do this that involves a single store,
	// a lock per entry, and a ControlleeExpectationsInterface type.
	uidStoreLock sync.Mutex
	// Store used for the UIDs associated with any expectation tracked via the
	// ControllerExpectationsInterface.
	uidStore cache.Store
}

We can see that expectations is essentially a cache as well. Let's take addPod as an example and see what the expectations mechanism does.

// When a pod is created, enqueue the replica set that manages it and update its expectations.
func (rsc *ReplicaSetController) addPod(obj interface{}) {
	...
	// If it has a ControllerRef, that's all that matters.
	if controllerRef := metav1.GetControllerOf(pod); controllerRef != nil {
		...
		// Decrement rsKey's `add` expectation by 1
		rsc.expectations.CreationObserved(rsKey)
		rsc.enqueueReplicaSet(rs)
		return
	}
	...
}

// CreationObserved atomically decrements the `add` expectation count of the given controller.
func (r *ControllerExpectations) CreationObserved(controllerKey string) {
	r.LowerExpectations(controllerKey, 1, 0)
}

// LowerExpectations decrements the expectation counts of the given controller.
func (r *ControllerExpectations) LowerExpectations(controllerKey string, add, del int) {
	if exp, exists, err := r.GetExpectations(controllerKey); err == nil && exists {
		exp.Add(int64(-add), int64(-del))
		// The expectations might've been modified since the update on the previous line.
		klog.V(4).Infof("Lowered expectations %#v", exp)
	}
}

As the code above shows, before the sync operation really begins, the expectations mechanism decides whether the sync should run at all, because the event handlers also update expectations. In the event handlers, addPod calls rsc.expectations.CreationObserved(rsKey) to decrement the rsKey's add count by 1, and deletePod calls rsc.expectations.DeletionObserved(rsKey) to decrement its del count by 1. At sync time, if the controllerKey (name or ns/name) satisfies its expectations, the sync proceeds; updatePod does not change expectations at all. In other words, expectations exists so that sync is only triggered when pods actually need to be created or deleted, which reduces unnecessary sync operations.
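
Putting it together: manageReplicas records how many creations (or deletions) it is about to request, and the pod event handlers count them back down. A paraphrased fragment of the creation path:

// Paraphrased from manageReplicas: record the expectation before the batch of creates.
rsc.expectations.ExpectCreations(rsKey, diff) // "I am about to create `diff` pods"
successfulCreations, err := slowStartBatch(diff, controller.SlowStartInitialBatchSize, func() error {
	return rsc.podControl.CreatePods(rs.Namespace, &rs.Spec.Template, rs, metav1.NewControllerRef(rs, rsc.GroupVersionKind))
})
// Creations that were never attempted will never produce an Add event,
// so subtract them right away to keep the expectation satisfiable.
if skippedPods := diff - successfulCreations; skippedPods > 0 {
	for i := 0; i < skippedPods; i++ {
		rsc.expectations.CreationObserved(rsKey)
	}
}

// Meanwhile, in the pod informer handlers:
//   addPod    -> rsc.expectations.CreationObserved(rsKey) // add--
//   deletePod -> rsc.expectations.DeletionObserved(rsKey) // del--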

Summary

  1. Creating a Deployment first creates a ReplicaSet through deployment-controller-manager's core method syncDeployment; replicaset-controller-manager's core method syncReplicaSet then creates the corresponding pods.
  2. controller-manager adds events to the workqueue by listening to informer events; workers then take items off the queue and run the syncHandler.
  3. The rsControl object used inside deployment-controller-manager is a RealRSControl.
  4. replicaset-controller-manager introduces the expectations mechanism to reduce unnecessary sync operations.

That leaves two questions to figure out later:

  1. Is there a POD-controller-manager in kube-controller-manager?
  2. How is the pod state controlled?

Conclusion

In the Kubernetes "Pao Ding Jie Niu" series, kube-controller-manager is analyzed in two parts. This part mainly covers how controller-manager processes the data it watches, and it is worth studying carefully: understanding how this works matters even more if you want to develop an operator. The article is bound to have places that are not fully rigorous, so please bear with me; take the essence (if any) and discard the rest. If you are interested, you can follow my public account: Gungunxi. My WeChat id is lCOMedy2021

Reference documentation

  • deployment-controller-manager source code analysis: www.bookstack.cn/read/source…

  • replicaset-controller-manager source code analysis: www.bookstack.cn/read/source…