While a microservices environment provides portability, enables faster and more frequent deployment cycles, and even allows organizations to create domain-focused teams, it brings with it a growing need for traffic management, security, and observability. Across the ecosystem there are countless ways to implement the service mesh pattern to address these requirements. Microsoft has been active in the Service Mesh Interface (https://smi-spec.io/) (SMI) community, helping to define a standard set of portable API specifications that enable common service mesh functionality across different service meshes. Vendors can apply SMI to ensure that ecosystem tools work across different meshes, while also giving customers the freedom to choose their mesh provider.
Today we are pleased to launch a new open source project: Open Service Mesh (https://openservicemesh.io/) (OSM), a lightweight, extensible service mesh that runs on Kubernetes. OSM lets users consistently manage, secure, and observe service-to-service communication in highly dynamic microservice environments. We want OSM to be a community-led project that fosters collaboration on new and existing SMI APIs. We intend to place OSM under open governance so that collaborating with the community is easy, and we have submitted a proposal to begin the process of donating OSM to the Cloud Native Computing Foundation (https://cncf.io/) (CNCF).
We want Kubernetes operators to be able to install, maintain, and run OSM effortlessly. At the same time, we want OSM to be simple enough for the entire community to understand and contribute to.
These goals are rooted in customer needs and lead us to three basic design principles. First, OSM preserves user choice by providing a control plane that is compatible with the SMI specification. Second, we use Envoy as the data plane, because Envoy has very strong community momentum. Finally, the most important idea behind OSM is its “no cliffs” design, which makes OSM flexible enough to handle simple scenarios through SMI and more complex ones by writing directly to the Envoy xDS APIs.
“We are pleased that OSM will join the Envoy family in building a vendor-neutral service mesh solution for Kubernetes with an emphasis on simplicity.” — Matt Klein, creator of Envoy
OSM simplifies many tasks: configuring traffic shifting for common deployment scenarios, securing service-to-service communication with mTLS, enforcing fine-grained access control policies for services, observing metrics for debugging and monitoring services, integrating with local or external certificate management solutions, and onboarding applications onto the mesh with automatic sidecar injection.
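As a concrete illustration of the access-control piece, here is a sketch of the kind of SMI policy a mesh like OSM consumes. The resource kinds and fields follow the SMI access and specs APIs, but the names (`bookbuyer`, `bookstore`) are placeholders and the API versions may differ between releases:

```yaml
# Allow the "bookbuyer" service account to call GET /buy on "bookstore".
# Illustrative only; check the SMI API versions your mesh supports.
apiVersion: specs.smi-spec.io/v1alpha3
kind: HTTPRouteGroup
metadata:
  name: bookstore-routes
  namespace: bookstore
spec:
  matches:
  - name: buy-a-book
    pathRegex: "/buy"
    methods: ["GET"]
---
apiVersion: access.smi-spec.io/v1alpha2
kind: TrafficTarget
metadata:
  name: bookbuyer-to-bookstore
  namespace: bookstore
spec:
  destination:              # who receives the traffic
    kind: ServiceAccount
    name: bookstore
    namespace: bookstore
  sources:                  # who is allowed to send it
  - kind: ServiceAccount
    name: bookbuyer
    namespace: bookbuyer
  rules:                    # which routes the policy covers
  - kind: HTTPRouteGroup
    name: bookstore-routes
    matches:
    - buy-a-book
```

Because the policy names service accounts rather than pods or IP addresses, it keeps working as workloads scale up, scale down, or move between nodes.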
We are excited to work with the community to improve and evolve OSM, to learn from it, and to share progress on the SMI specification with the wider SMI community. We will be at KubeCon EU Virtual 2020 (https://events.linuxfoundation…), and OSM will be demonstrated in a CNCF webinar (https://www.cncf.io/webinars/c…) on August 14.
1. Understand Microsoft Open Service Mesh
Microsoft has released its open source service mesh implementation. What does this mean for Kubernetes on Azure?
Only a few years ago, when we talked about infrastructure, we meant physical infrastructure: servers, memory, disks, network switches, and all the cables needed to connect them. I used to keep spreadsheets for this; when building a web application that had to support tens of thousands of users, I would type numbers into a spreadsheet to arrive at the hardware specification I needed before anything could be deployed.
Now everything has changed. First came virtual infrastructure, running on the servers in those physical racks. With a set of hypervisors plus software-defined networking and storage, I can specify an application's compute requirements and provision the application and its virtual network on physical hardware that someone else manages for me. Today we build distributed applications on top of orchestration frameworks that automatically manage scaling, both vertical and horizontal, on very-large-scale public clouds.
Using a service mesh to manage distributed application architectures
This new application infrastructure needs an infrastructure layer of its own, one smart enough to respond to automatic scaling, handle load balancing and service discovery, and still support policy-driven security. Alongside your microservice containers, this application infrastructure is implemented as a mesh of services, each linked to a proxy that runs as a sidecar. These proxies manage communication between containers, letting development teams focus on their services and managed APIs while the service mesh connecting those services is managed by the application operations team.
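The sidecar pattern described above can be pictured as a pod spec. The manifest below is a hand-written illustration only; in a real mesh the control plane injects and configures the proxy container automatically, and every name and image tag here is a placeholder:

```yaml
# A pod running an application container plus an Envoy proxy sidecar.
# Illustrative: real meshes inject and configure this container for you.
apiVersion: v1
kind: Pod
metadata:
  name: bookstore
spec:
  containers:
  - name: app                  # the application itself
    image: example/bookstore:v1
    ports:
    - containerPort: 8080
  - name: envoy-sidecar        # proxy that mediates service traffic
    image: envoyproxy/envoy:v1.15.0
    ports:
    - containerPort: 15001     # proxy listener port (placeholder)
```

Because both containers share the pod's network namespace, the proxy can intercept the application's inbound and outbound traffic without any change to the application code.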
Perhaps the biggest problem you face when adopting a service mesh is choosing between Google-backed Istio, the open source Linkerd, HashiCorp's Consul, or more experimental tools such as F5's Aspen Mesh. Choosing one is hard enough; standardizing on one implementation across an organization is harder still.
Today, if you want to use a service mesh with Azure Kubernetes Service (https://docs.microsoft.com/en-…), you are advised to use Istio, Linkerd, or Consul, all of which are described in the AKS documentation. None of these is especially easy to set up, since you need a separate virtual machine to manage the service mesh alongside the Kubernetes cluster running on AKS. Another approach, still under development, is the Service Mesh Interface (https://smi-spec.io/) (SMI), which provides a standard set of interfaces for connecting Kubernetes to service meshes. Azure has supported SMI for some time, and its Kubernetes team has been leading SMI's development.
SMI: A set of generic service mesh APIs
Like Kubernetes, SMI is a CNCF project, though currently only a sandbox project. Sandbox status means it is not yet considered stable, and it may change significantly as it moves through the stages of the CNCF development process. Still, it has plenty of support from cloud and Kubernetes vendors, as well as from other service mesh projects that sponsor its development. SMI is designed to give Kubernetes a basic set of APIs for connecting to SMI-compliant service meshes, so your scripts and your operations team can work with any service mesh; there is no need to be locked into a single provider.
Delivered as a set of custom resource definitions and extension API servers, SMI can be installed on any certified Kubernetes distribution, such as AKS. Once it is in place, you can use familiar tools and techniques to define connections between your application and the service mesh. SMI also makes applications portable: you can develop against a local Kubernetes instance with SMI, then move the application to any Kubernetes host with an SMI-compliant service mesh without worrying about compatibility.
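This portability is easiest to see with a concrete policy: a canary rollout looks the same on any SMI-compliant mesh. The `TrafficSplit` resource below follows the SMI split API; the service names are placeholders, and the API version may differ between releases:

```yaml
# Send 90% of traffic addressed to "bookstore" to v1 and 10% to v2.
# Illustrative only; verify the split API version your mesh implements.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: bookstore-rollout
  namespace: bookstore
spec:
  service: bookstore        # root service that clients address
  backends:
  - service: bookstore-v1   # current version
    weight: 90
  - service: bookstore-v2   # canary
    weight: 10
```

Shifting more traffic to the canary is just an edit to the weights, applied with the same kubectl workflow regardless of which mesh enforces it.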
It is important to note that SMI is not itself a service mesh; it is a specification that service meshes implement to deliver a common set of features. Nothing prevents a service mesh from adding its own extensions and interfaces, though these need to be compelling enough for applications and operations teams to adopt. The developers behind SMI also say they are open to migrating new functionality into the SMI specification as the definition of a service mesh continues to evolve and expectations change.
4. Open Service Mesh: SMI implementation developed by Microsoft
Microsoft has released its first Kubernetes service mesh (https://openservicemesh.io/), building on its work in the SMI community. Open Service Mesh is a lightweight, SMI-compliant service mesh hosted as an open source project on GitHub (https://github.com/openservicemesh/osm). Microsoft wants OSM to be a community-led project and intends to donate it to the CNCF as soon as possible. You can think of OSM as a reference SMI implementation built on existing service mesh components and concepts.
Although Microsoft doesn't say so explicitly, OSM's announcements and design documents (https://github.com/openservice…) suggest it draws on Microsoft's experience running service meshes on Azure, with a clear focus on operations. In the first blog post (https://openservicemesh.io/blo…), Michelle Noorali describes OSM as effortless for Kubernetes operators to install, maintain, and run. That was a wise decision. OSM is vendor-neutral, but it is likely to be one of several service mesh options for AKS, so ease of installation and management will be an important factor in its adoption.
OSM builds on the work of other service mesh projects. Although it has its own control plane, its data plane is based on Envoy. Again, this is a pragmatic, sensible approach. SMI is about controlling and managing service mesh instances, so using the familiar Envoy to enforce policy lets OSM build on existing capabilities, shortening the learning curve and letting application operators start with the limited set of SMI capabilities, then reach for Envoy's more complex features when necessary.
OSM currently implements a common set of service mesh features, including traffic shifting, securing service-to-service connections, applying access control policies, and providing observability into services. OSM brings new applications and services into the mesh by automatically injecting the Envoy sidecar proxy.
Deploy and use OSM
To try the OSM alpha (https://github.com/openservice…), download the osm command-line tool from the releases page of the project's GitHub repository (https://github.com/openservicemesh/osm/releases). Running osm install adds the OSM control plane to a Kubernetes cluster with a default namespace and mesh name, both of which you can change at install time. Once OSM is installed and running, you can add services to the mesh (https://github.com/openservice…): policy definitions bring Kubernetes namespaces under OSM management, and the sidecar proxy is injected automatically into all pods in managed namespaces.
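The basic workflow sketches out as a few CLI commands. Subcommands and flags may differ between alpha releases, so treat this as an outline and check osm --help for your version; the namespace name is a placeholder:

```shell
# Install the OSM control plane into the cluster selected by the
# current kubectl context; the mesh name flag is optional.
osm install --mesh-name osm

# Bring a namespace under OSM management so that new pods in it get
# an Envoy sidecar injected automatically ("bookstore" is a placeholder).
osm namespace add bookstore

# Deploy or restart workloads in the namespace, then confirm their
# pods now carry an extra envoy container.
kubectl -n bookstore get pods
```

From there, applying SMI policy resources with kubectl is all that is needed to control traffic between the onboarded services.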
These steps apply whatever policies you choose, so it is a good idea to design a set of SMI policies before you start deploying; the sample policies in the OSM GitHub repo will help. OSM ships with the Prometheus monitoring toolkit and the Grafana visualization tool (https://github.com/openservice…), so you can quickly observe the service mesh and your Kubernetes applications in action.
Kubernetes is an important infrastructure element in modern cloud native applications, so we need to start treating it as infrastructure, managing it independently of the applications that run on top of it. The combination of AKS, OSM, Git, and Azure Arc becomes a basic configuration for managing a Kubernetes application environment: the application infrastructure team manages AKS and OSM and sets policies for applications and services, while Git and Arc control application development and deployment, informed by the real-time application metrics that OSM's visualization tools deliver.
It will take some time for these elements to converge fully, but it is clear that Microsoft is making a serious commitment to distributed application management and the tools it requires. Start with the foundation of the kit, AKS, then add OSM and Arc. You can build and deploy Kubernetes applications on Azure with OSM, and its Envoy data plane, as your service mesh, prototyping in your lab with OSM and Arc so you are ready for production. It won't take long.