With the growth of cloud native, the industry needs a unified specification for defining and describing events, so that services can interact across platforms. The CloudEvents specification emerged to meet this need and has attracted broad attention from the industry, including major cloud providers and SaaS companies. The introduction, specification, and practice of CloudEvents are covered in a three-part series; this article is "CloudEvents Trilogy: Practice".

I. Product Access

Scenario

Serverless function computing is an event-driven compute service. With a function computing product, functions run elastically, reliably, and without operations overhead, so users can focus on writing function code instead of purchasing and maintaining servers and other infrastructure. CloudEvents is widely used in function computing: for a third-party service to integrate with a function computing platform, it must pass its data as messages conforming to the CloudEvents specification, which lets the platform distribute and filter the third-party service's messages. Because the specification is universal, a third-party service adapted once can work seamlessly with any CloudEvents-compliant platform. In addition, messaging products (message queues, message services, event buses, and so on) can also support the CloudEvents specification, unifying the cloud's event standard and accelerating the integration of the cloud native ecosystem.

Development

Building a CloudEvent typically involves one of the CloudEvents software development kits (SDKs), which greatly simplify integration. As of the release of the CloudEvents v1.0 specification, the CloudEvents team supports and maintains the following six SDKs:

  • C# SDK
  • Go SDK
  • Java SDK
  • JavaScript SDK
  • Python SDK
  • Ruby SDK

The following examples use the Go and Python SDKs to construct CloudEvents 1.0-compliant messages: sending and receiving events, HTTP/JSON request conversion, and so on.

Golang

  1. Install the dependency

go get github.com/cloudevents/sdk-go/v2

  2. Import the package

import cloudevents "github.com/cloudevents/sdk-go/v2"

  3. Send an event
package main

import (
	"context"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func main() {
	// The default client is HTTP.
	c, err := cloudevents.NewDefaultClient()
	if err != nil {
		log.Fatalf("failed to create client, %v", err)
	}

	// Create an Event.
	event := cloudevents.NewEvent()
	event.SetSource("example/uri")
	event.SetType("example.type")
	event.SetData(cloudevents.ApplicationJSON, map[string]string{"hello": "world"})

	// Set a target.
	ctx := cloudevents.ContextWithTarget(context.Background(), "http://localhost:8080/")

	// Send that Event.
	if result := c.Send(ctx, event); !cloudevents.IsACK(result) {
		log.Fatalf("failed to send, %v", result)
	}
}
  4. Receive an event
package main

import (
	"context"
	"fmt"
	"log"

	cloudevents "github.com/cloudevents/sdk-go/v2"
)

func receive(event cloudevents.Event) {
	// do something with event.
	fmt.Printf("%s", event)
}

func main() {
	// The default client is HTTP.
	c, err := cloudevents.NewDefaultClient()
	if err != nil {
		log.Fatalf("failed to create client, %v", err)
	}
	log.Fatal(c.StartReceiver(context.Background(), receive))
}
  5. Serialization

Serialize to JSON

event := cloudevents.NewEvent()
event.SetSource("example/uri")
event.SetType("example.type")
event.SetData(cloudevents.ApplicationJSON, map[string]string{"hello": "world"})

bytes, err := json.Marshal(event)

Deserialize from JSON

event := cloudevents.NewEvent()
err := json.Unmarshal(bytes, &event)

Python

  1. Install the dependency

pip install cloudevents

  2. Send an event

Construct a CloudEvents event using the CloudEvent type in the Python SDK, convert it to an HTTP binary-mode message (headers plus body) with the to_binary function, and send the request with the requests library.

from cloudevents.http import CloudEvent, to_binary
import requests

# Build CloudEvent
# - The CloudEvent "id" is generated if omitted. "specversion" defaults to "1.0"
attributes = {
    "type": "com.example.sampletype1",
    "source": "https://example.com/event-producer",
}
data = {"message": "Hello World!"}
event = CloudEvent(attributes, data)

# Convert the event into HTTP headers and body using the to_binary function
headers, body = to_binary(event)

# POST
requests.post("<some-url>", data=body, headers=headers)
  3. Receive and handle an event

Parse a CloudEvents event with the from_http function in the Python SDK and print its contents:

from flask import Flask, request

from cloudevents.http import from_http

app = Flask(__name__)


# create an endpoint at http://localhost:3000/
@app.route("/", methods=["POST"])
def home():
    # create a CloudEvent
    event = from_http(request.headers, request.get_data())

    # you can access cloudevent fields as seen below
    print(
        f"Found {event['id']} from {event['source']} with type "
        f"{event['type']} and specversion {event['specversion']}"
    )

    return "", 204


if __name__ == "__main__":
    app.run(port=3000)

II. Access Mode

Architecture

Event-driven services are one of the core capabilities of function computing. The platform uses Knative Eventing's Broker/Trigger event processing model to filter and distribute events. In addition, to ensure cross-platform interoperability, events are transferred in CloudEvents, the standard data format defined by the CNCF.

As shown in the figure above, the architecture is divided into three parts, from left to right, the event source, the event receiver/forwarder, and the event consumer.

  1. Event source

An event source is a Kubernetes custom resource that provides a mechanism for registering a class of events from a particular service system. For example, object storage event sources, Github event sources, and so on. Therefore, different event sources require different custom resources to be described.

Event sources are responsible for obtaining events from a particular service system, converting them into CloudEvents-format events, and sending them to the Knative Eventing platform (the Broker/Trigger event processing model).
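As an illustration, the conversion an event source performs might be sketched as follows (the raw event shape and its field names here are hypothetical; only the CloudEvents 1.0 attribute names come from the specification):

```python
import json
import uuid
from datetime import datetime, timezone

def to_cloudevent(raw_event):
    """Wrap a hypothetical service-system event in a CloudEvents 1.0 envelope.

    `raw_event` is an assumed shape: {"bucket": ..., "object": ..., "action": ...}.
    """
    return {
        # Required CloudEvents 1.0 context attributes
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": f"/storage/{raw_event['bucket']}",
        "type": f"storage.object.{raw_event['action']}",
        # Optional attributes
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        # The original event travels as the payload
        "data": raw_event,
    }

raw = {"bucket": "photos", "object": "cat.png", "action": "created"}
print(json.dumps(to_cloudevent(raw), indent=2))
```

The envelope, once serialized, is what the event source sends onward to the Broker.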

  2. Event receiver/forwarder

The purpose of introducing the Broker/Trigger event processing model is to create a black box that hides the underlying implementation details from the user.

  • The Broker acts like an event bucket: it receives all kinds of events, which can then be filtered by their attributes.
  • A Trigger describes a filter; only the events selected by the filter are sent to event consumers.

As shown in Figure 1, the user uses a filter to select the events of interest (the red balls); only those events are delivered to the event consumer (here, a Knative Service, or KSvc, function).
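The filtering behavior can be sketched as a toy in-process model (an illustration of the matching semantics only, not the Knative implementation; Knative Trigger filters do exact matching on CloudEvents context attributes in the same way):

```python
def matches(filter_attrs, event):
    """A Trigger-style filter: every key/value pair in the filter must
    equal the corresponding CloudEvents context attribute exactly."""
    return all(event.get(k) == v for k, v in filter_attrs.items())

def dispatch(broker_events, triggers):
    """Deliver each event in the Broker's 'bucket' to every subscriber
    whose Trigger filter selects it."""
    deliveries = []
    for event in broker_events:
        for subscriber, filter_attrs in triggers:
            if matches(filter_attrs, event):
                deliveries.append((subscriber, event["id"]))
    return deliveries

# Two events land in the Broker; one Trigger selects only the "red" type.
events = [
    {"id": "1", "type": "example.red", "source": "/demo"},
    {"id": "2", "type": "example.blue", "source": "/demo"},
]
triggers = [("ksvc-red", {"type": "example.red"})]
print(dispatch(events, triggers))  # [('ksvc-red', '1')]
```

Only the event whose attributes match the filter reaches the subscriber; the other stays in the Broker unseen by this consumer.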

  3. Event consumer

An event consumer can be a service or a system, in this case a user-written KSvc function (that is, the logical code that handles the event).

Implementation
  1. Third-party access

A third-party service accessing the Knative-based Serverless platform needs to provide a specific event source. The Knative community maintains a number of event sources; for details, see github.com/knative/eve…

If the third-party service is not on the support list provided by the community, you need to customize the event source. There are several common ways to customize the event source:

ContainerSource is simple to implement and is currently the most common way to build a custom event source; it is also the approach recommended by the Knative platform.

ContainerSource is a Custom Resource Definition (CRD) resource type defined in Kubernetes.

Its key fields are:

  1. sink: the target object for event forwarding, i.e., the Broker introduced in Figure 1
  2. image: the image to be developed, which listens for events from the specific data source and forwards them to the sink
  3. args and env: the developer's custom data, passed into the image through args and env

The image part of a ContainerSource requires a custom implementation. Depending on how third-party service events are obtained, there are two implementation approaches:

  1. Message queue mode

As shown in Figure 2 below, if a third-party service is adapted to a message queue, it can send the events it produces to the queue, and ContainerSource can then consume the third-party service's events directly from the queue.

  2. Direct connection mode

As shown in Figure 3 below, if the service is not adapted to a message queue but does provide an event subscription capability (such as Redis's Keyspace Notifications feature), ContainerSource can subscribe to the third-party service's events directly and listen for service changes.

Note: Either way, ContainerSource must generate CloudEvents events that carry the unique identity of the target KSvc function, so that the platform can distribute them. For example: 1. In message queue mode, because all events are obtained from the same message queue, the events produced by the third-party service must carry the identity of the target function (this mode is used when object storage products are connected). 2. In direct connection mode, because each ContainerSource has a one-to-one relationship with a third-party service, the target function identifier can be added when ContainerSource generates the CloudEvents event.
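As a sketch of the message queue mode, the identity-carrying step might look like this (the queue-message fields and the `targetfunction` extension attribute name are illustrative assumptions, not a fixed platform contract; `K_SINK` is the environment variable Knative injects into a ContainerSource container with the sink address):

```python
import json
import os
import urllib.request
import uuid

def queue_message_to_cloudevent(msg):
    """Turn a hypothetical queue message into a CloudEvents 1.0 event,
    carrying the target function's identity as an extension attribute."""
    return {
        "specversion": "1.0",
        "id": str(uuid.uuid4()),
        "source": msg["producer"],        # assumed field on the queue message
        "type": msg["event_type"],        # assumed field on the queue message
        "targetfunction": msg["target"],  # extension attribute used for dispatch
        "datacontenttype": "application/json",
        "data": msg["payload"],
    }

def forward_to_sink(event):
    """POST the event (structured mode) to the Broker address from K_SINK."""
    req = urllib.request.Request(
        os.environ["K_SINK"],
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/cloudevents+json"},
        method="POST",
    )
    return urllib.request.urlopen(req)

msg = {
    "producer": "/oss/bucket-a",
    "event_type": "oss.object.created",
    "target": "ksvc-thumbnailer",
    "payload": {"object": "cat.png"},
}
event = queue_message_to_cloudevent(msg)
print(event["type"], "->", event["targetfunction"])
```

Because every queue message names its target function, the Trigger filter on the platform side can route each event to the right KSvc.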

The Broker/Trigger event processing model greatly simplifies connecting third-party services to function computing. Whether message queue or direct connection is used, the product side only needs to provide a ContainerSource image for the third-party service to the platform side.

  2. Platform-side management

The platform side's work is mainly to manage the ContainerSource provided by the product side and to provide event filtering capability via Triggers.

The platform side's work differs depending on the ContainerSource implementation:

Message queue mode

The following content is created on the platform side:

  1. A set of identical ContainerSources (for high availability)
  2. A Broker type resource that distributes events
  3. Multiple Trigger type resources for event filtering

The platform side creates the ContainerSource and Broker resources in advance, and provides a CRUD interface for managing the Triggers used for event filtering. The mapping between ContainerSource, Broker, and Trigger is shown in the figure below:

Direct connection mode

The following content is created on the platform side:

  1. Multiple ContainerSources, each subscribing to and listening on a different service instance
  2. A Broker type resource that distributes events
  3. Multiple Trigger type resources for event filtering

The platform side provides the Broker resource in advance, and provides a CRUD interface for ContainerSource and Trigger. In this case, ContainerSource, Broker, and Trigger correspond as shown in the figure below:

For more cloud native content, follow the official account: DCOS
