Author’s brief introduction

James Falkoff is an investor at Converge, a Boston-based venture capital firm.

Edge computing has secured a place in the technology zeitgeist as something innovative and cutting-edge. For several years now, the assumption has been that edge computing must be the way of the future. Until recently, however, the discussion remained largely hypothetical, because the infrastructure needed to support edge computing was still under development.

That is now changing as a variety of edge computing resources, from micro data centers to specialized processors to the necessary software abstractions, make their way into the hands of application developers, entrepreneurs, and large enterprises. We no longer need to speak hypothetically when answering questions about the utility of edge computing and its implications. So what do real-world developments tell us? In particular, does edge computing live up to its hype?

In this article, I’ll give you an overview of the current edge computing market. Overall, the trend toward edge computing is real, with the need to decentralize applications growing for cost and performance reasons. Some aspects of edge computing have been hyped, while others have not received the attention they deserve. Here are four key points to help decision-makers gain a practical understanding of the current and future capabilities of edge computing.

1. Edge computing is more than just low latency

Edge computing is a paradigm that brings computation and data storage closer to where they are needed. It stands in contrast to the traditional cloud computing model, in which computation is concentrated in a small number of hyperscale data centers. The edge can be anywhere closer to the end user or device than a traditional cloud data center: 100 miles away, one mile away, on-premises, or on the device itself. Whatever the location, the traditional edge computing narrative emphasizes the edge's ability to minimize latency, either to improve the user experience or to enable new latency-sensitive applications. That narrative gives an incomplete picture of edge computing. While latency reduction is an important use case, it is not necessarily the most valuable one. Another use case for the edge is minimizing network traffic to and from the cloud, sometimes called "cloud offloading," which will likely deliver at least as much economic value as latency reduction.

The fundamental driver of cloud offloading is the immense growth in the amount of data generated by users, devices, and sensors. "The edge is fundamentally a data problem," says Chetan Venkatesh, CEO of Macrometa, a startup tackling the data challenges of edge computing. Cloud offloading arises because it costs money to move all that data, and many enterprises would rather not move it anywhere at all. Edge computing offers a way to extract value from data where it is generated, never moving it beyond the edge. If necessary, the data can be reduced to a more economical subset that is sent to the cloud for storage or further analysis.
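The idea of reducing data to an economical subset at the edge can be illustrated with a small sketch. The code below assumes a hypothetical edge gateway receiving raw sensor readings; all names and the windowing scheme are illustrative, not drawn from any particular product. Instead of shipping every reading to the cloud, the gateway uploads only a compact summary.

```python
from statistics import mean

def summarize_window(readings):
    """Reduce a window of raw sensor readings to a compact summary.

    This is the essence of cloud offloading: extract value locally,
    then transmit only a cheap subset to the cloud for storage or
    further analysis.
    """
    values = [r["value"] for r in readings]
    return {
        "sensor": readings[0]["sensor"],
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": mean(values),
    }

# Example: 1,000 raw readings collapse into a single 5-field record.
raw = [{"sensor": "temp-01", "value": 20 + (i % 7) * 0.1} for i in range(1000)]
summary = summarize_window(raw)
print(summary["count"])  # 1000 readings behind one uploaded record
```

The bandwidth saving is roughly the ratio of raw readings to summaries, which is why video, audio, and industrial telemetry are the use cases mentioned above: their raw volumes are enormous relative to the insights extracted from them.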

A classic use case for cloud offloading is processing video or audio data, two of the most bandwidth-hungry data types. A retailer in Asia with more than 10,000 locations is using edge computing to process both in-store video surveillance and in-store language translation services, according to a contact involved in the deployment whom I recently spoke with. But other sources of data can be just as expensive to move to the cloud. Another contact told me that a large IT software vendor is analyzing real-time data from its customers' on-premises IT infrastructure to anticipate problems and optimize performance; it uses edge computing to avoid shipping all of that data back to AWS. Industrial equipment also generates enormous amounts of data, making it a prime candidate for cloud offloading.

2. Edge computing is an extension of the cloud

Although the early pitch was that the edge would replace the cloud, it is more accurate to say that the edge extends the reach of the cloud. It will not slow the ongoing trend of enterprises moving workloads to the cloud. But a set of initiatives is under way to extend the cloud formula of on-demand resource availability and abstraction of physical infrastructure to locations increasingly distant from traditional cloud data centers. These edge locations will be managed with tools and approaches that evolved from the cloud, and over time the line between cloud and edge will blur.

In fact, edge and cloud are part of the same continuum, as a glance at the edge computing initiatives of public cloud providers such as AWS and Microsoft Azure makes clear. If your enterprise wants on-premises edge computing, Amazon will ship you an AWS Outpost, a fully assembled rack of compute and storage that mimics the hardware design of Amazon's own data centers. It is installed in the customer's own data center and monitored, maintained, and upgraded by Amazon. Importantly, Outposts run many of the services AWS users rely on, such as the EC2 compute service, making the edge operationally similar to the cloud. Other large vendors have similar goals. These offerings send a clear signal that cloud providers want to unify cloud and edge infrastructure under one umbrella.

3. Edge infrastructure is being implemented in phases

While some applications are best run on-premises, in many cases application owners would like to benefit from edge computing without supporting any on-premises footprint. This requires access to a new kind of infrastructure: something that looks a lot like the cloud but is far more geographically distributed than the few dozen hyperscale data centers that comprise the cloud today. This kind of infrastructure is just now becoming available, and it is likely to evolve in three phases, with each phase extending the edge's reach across a wider geographic footprint.

Phase 1: Multi-region and multicloud

The first step toward edge computing, for a broad class of applications, is one that many would not consider edge computing at all: taking advantage of the multiple regions offered by public cloud providers. For example, AWS has data centers in 22 geographic regions, so an AWS customer serving users in North America and Europe can run its application in both the Northern California and Frankfurt regions. Moving from one region to multiple regions can greatly reduce latency, and for a large set of applications, that alone is enough to deliver a good user experience.
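The mechanics of that first step are simple enough to sketch: measure round-trip time from the user to each candidate region and route the user to the closest one. The region names and RTT figures below are illustrative placeholders, not real AWS measurements.

```python
def pick_region(rtt_ms: dict) -> str:
    """Return the region with the lowest measured round-trip time."""
    return min(rtt_ms, key=rtt_ms.get)

# Hypothetical RTTs (ms) from a user in Paris to three regions.
paris_rtt = {"us-west-1": 150, "eu-central-1": 25, "ap-southeast-1": 260}
best = pick_region(paris_rtt)
print(best)  # eu-central-1
```

Going from one region to several turns a transatlantic round trip into a regional one, which is why this modest step already captures much of the latency benefit for ordinary web applications.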

At the same time, there is a movement toward multicloud, driven by considerations including cost efficiency, risk mitigation, avoidance of vendor lock-in, and the desire to access best-of-breed services from different providers. "Executing a multicloud strategy is a very important strategy and architecture today," Mark Weiner, CMO of distributed cloud computing company Volterra, told me. Like the multi-region approach, multicloud marks a first step toward distributed workloads on the path to increasingly decentralized edge computing.

Phase 2: Regional edge computing

The second phase of the edge's evolution extends it a layer deeper, leveraging infrastructure in hundreds or thousands of locations rather than hyperscale data centers in a few dozen cities. It turns out one set of players already has such infrastructure: content delivery networks (CDNs). CDNs have engaged in a precursor to edge computing for two decades, caching static content closer to end users to improve performance. While AWS has 22 regions, a typical CDN such as Cloudflare has 194 locations.

What is different now is that these CDNs have begun opening up their infrastructure to general-purpose workloads, not just static content caching. Today, CDNs such as Cloudflare, Fastly, Limelight, StackPath, and Zenlayer offer some combination of container-as-a-service, VM-as-a-service, bare-metal-as-a-service, and serverless functions. In other words, they are starting to look more like cloud providers. Forward-thinking cloud providers are building out similar infrastructure: AWS has taken a first step beyond multi-region, introducing its first so-called Local Zone in Los Angeles, with more promised to come.
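To make "CDNs starting to look like cloud providers" concrete, here is a minimal sketch of the serverless pattern they now offer, written as plain Python rather than any particular CDN's API (all names are illustrative): a per-location handler that answers from a local cache when it can and falls back to the origin only when it cannot.

```python
def make_edge_handler(origin_fetch):
    """Build an edge request handler with a local (per-location) cache.

    origin_fetch: callable simulating a round trip to the origin server.
    """
    cache = {}

    def handle(path):
        if path in cache:             # served at the edge: no origin trip
            return cache[path], "edge-hit"
        body = origin_fetch(path)     # cold path: one trip to the origin
        cache[path] = body
        return body, "origin-miss"

    return handle

origin_calls = []
def origin(path):
    origin_calls.append(path)         # count trips back to the origin
    return f"content for {path}"

handle = make_edge_handler(origin)
handle("/index.html")                 # first request goes to the origin
body, status = handle("/index.html")  # repeat is served entirely at the edge
print(status, len(origin_calls))  # edge-hit 1
```

The step these CDNs have taken is to let customers replace the body of `handle` with arbitrary code, which is what turns a cache network into a general-purpose edge compute platform.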

Phase 3: Access edge computing

The third phase of the edge's evolution pushes the edge farther outward, to the point where it is only one or two network hops from the end user or device. In traditional telecom terminology this is the access portion of the network, so this type of architecture has been labeled the access edge. The typical form factor of the access edge is the micro data center, which can range in size from a single rack to roughly half a trailer and can be deployed at the side of a road or at the base of a cellular tower. Behind the scenes, innovations in power and cooling are allowing higher and higher densities of infrastructure to be deployed in these small footprints.

Newcomers such as Vapor IO, EdgeMicro, and EdgePresence have begun building these micro data centers in a handful of U.S. cities. 2019 was the first major year of buildout, and substantial investment in these deployments will continue through 2020 and 2021. By 2022, the returns from edge data centers will be a key focus for investors, and ultimately those returns will answer the question: are there enough killer applications that need the edge this close to the end user or device?

The answer to this question is only beginning to emerge. I recently spoke with a number of practitioners who are skeptical that the micro data centers of the access edge offer enough marginal benefit over the regional data centers of the regional edge. Early adopters already use the regional edge in a variety of ways, including the cloud offloading use cases discussed above and latency reduction to optimize user experiences such as online gaming, ad serving, and e-commerce. By contrast, the applications that need the access edge's ultra-low latency and very short network routes can sound far-fetched: autonomous driving, drones, AR/VR, smart cities, remote surgery, and so on. More importantly, these applications must weigh the benefits of the access edge against simply doing the computation locally with an on-premises or on-device approach. But a killer app for the access edge may well emerge. It may not be on anyone's radar today; we will know more in a few years.

4. New software is needed to manage edges

In the previous sections, I briefly described the several architectures of edge computing and the many places an "edge" can be located. The ultimate direction for the industry, however, is unification and standardization: the same tools and processes should manage cloud and edge workloads regardless of where the edge sits. This will require evolution of the software used to deploy, scale, and manage applications in the cloud, which historically was architected with a single data center in mind.

Startups such as Ori, Rancher, and Volterra, along with big-company initiatives such as Google's Anthos and Microsoft's Azure Arc, are evolving cloud infrastructure software in this direction. Virtually all of these products have one thing in common: they are based on Kubernetes, which has emerged as the dominant approach to managing containerized applications. But these products go beyond Kubernetes' original design to support distributed fleets of multiple Kubernetes clusters. Those clusters may sit atop a heterogeneous pool of infrastructure spanning the edge, on-premises environments, and public clouds, but thanks to these products they can all be managed uniformly.
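The multi-cluster idea these products share can be sketched in a few lines. The code below is a toy simulation rather than a real Kubernetes client; the cluster names, locations, and spec fields are all hypothetical. The point it illustrates is the uniform-management model: one desired-state spec is pushed identically to every cluster, wherever that cluster lives.

```python
class Cluster:
    """Stand-in for one Kubernetes cluster (edge, on-prem, or cloud)."""
    def __init__(self, name, location):
        self.name = name
        self.location = location
        self.workloads = {}

    def apply(self, spec):
        # In a real system this would go through the cluster's API server.
        self.workloads[spec["app"]] = spec

def apply_everywhere(clusters, spec):
    """Push one desired-state spec to every cluster uniformly."""
    for c in clusters:
        c.apply(spec)
    return [c.name for c in clusters]

fleet = [
    Cluster("cdn-pop-lax", "regional edge"),
    Cluster("factory-floor", "on-premises"),
    Cluster("aws-us-east", "public cloud"),
]
spec = {"app": "telemetry", "image": "telemetry:1.4", "replicas": 2}
applied = apply_everywhere(fleet, spec)
print(applied)
```

What the real products add on top of this loop is the hard part: reconciling drift, handling clusters that are offline, and abstracting away the differences among the underlying infrastructure pools.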

Initially, the greatest opportunity for these products is in supporting the first phase of the edge's evolution: moderately distributed deployments that use a small number of regions across one or more clouds. But that puts them in a good position to support the more distributed edge computing architectures now emerging. "Having solved today's multi-cluster management and operations problems, you'll be in a good position to tackle a broader range of edge computing use cases," says Haseeb Budhani, CEO of Rafay Systems.

The edge's moment is not far off

Now that the resources to support edge computing are emerging, edge-oriented thinking will become more prevalent among those who design applications. Following an era in which the defining trend was centralization in a small number of cloud data centers, there is now a countervailing push toward increased decentralization. Edge computing is still in its early days, but it has moved from the theoretical to the practical, and the industry is moving fast. Cloud computing, after all, is only 14 years old, so it is reasonable to believe that edge computing will leave its mark on the computing landscape before long.