
This article looks at the current problems with containers, and why they are nonetheless the future, from the perspective of how industrial technology develops. Docker is still technically immature, yet over the past few years it has genuinely solved many of the software industry's problems. The article encourages everyone to try the new container technology, draw experience and lessons from practice, and iterate forward to improve and advance our industry.

This really is the future

Last week I wrote "It's the Future" (https://circleci.com/blog/its-the-future/), a piece satirizing the container ecosystem that poked fun at Docker, Google, CoreOS, and assorted other container technologies. Many Docker fans took the joke in good humor, but plenty of others loved it for a different reason, sharing it with comments like "I told you it was all crap."

It's easy to see why people might conclude the container ecosystem is garbage; as the satire pointed out, it isn't even obvious what Docker is at first glance. Containerization is a bit like virtualization, but not quite. A Dockerfile is a bit like Chef, but married to a strange layered file system. Docker solves problems similar to AWS, Heroku, VMware, and Vagrant, yet differs subtly from every one of them. There are 27 competing tools whose names tell you nothing about what they do: Machine, Swarm, Flannel, Weave, etcd, rkt, Kubernetes, Compose, Flocker. They are all somehow tied to shiny microservices, even though running a simple microservice this way looks absurdly complicated. And on top of that, a crowd of startups and big companies are competing for "developer mindshare," a contest that in the old days seemed to be about money, with everyone chasing everything.

It would not be unreasonable to look closely at Docker and containers and conclude that it's all garbage. But it isn't: this is how we are going to build applications in the future.

Why the hate?

Many people's reaction to "It's the Future" was that it is 100% accurate and not satire at all, while others remained deeply skeptical of the hype surrounding containers. Why is that?

Docker and the container ecosystem (which I'll lump together as "Docker" from here on) borrow concepts from the application-developer world, such as virtualization, SOA, and operating systems, and repackage them for different purposes and benefits. In doing so, however, they also inherit most of the developer community's baggage: grumpy people who hate everything.

The software industry, contrary to what you might expect, is full of people who hate progress. These are the people who would walk into the Sistine Chapel after Michelangelo finished and complain that they already had a perfectly good picture of God, that they had expected the ceiling to stay white, and that frescoes were never going to catch on anyway.

At the same time, much of the software industry makes decisions the way high school students do: it overvalues whatever looks cool in its own community, perhaps based on what it gleans from Instagram and Facebook. People form cliques around these technologies, stick the logos on their laptops, and hate anything that looks strange or different.

Back to Docker: it is a new way to do almost everything. The old rules about operating systems, deployment, operations, packaging, firewalls, platform-as-a-service, and everything else can be thrown out. Some developers fall in love with it right away, sometimes for valid reasons (it genuinely solves their problems) and sometimes just because it's a shiny toy that makes them look cool before anyone else has caught on. Other developers hate it: it's overhyped, it's no different from what came before, and somehow everyone is paying attention to it for all the wrong reasons.

But the reaction to Docker isn't really about the technology itself. Most of the naysayers aren't objecting to how Docker approaches important, complex problems; most of the time, it's that they simply don't have those problems, because they have never run systems at scale. If you don't have an intuitive, hard-won understanding of what "cattle, not pets" means and why it matters, the choices Docker and its surrounding tools make will look strange and scary to you.



Merging worlds

Docker sits at the intersection of two disciplines: web applications and distributed systems. For the past decade, the web community has led us to believe that you can build a web application just by knowing how to code: write some HTML, JavaScript, and Rails, add a few forms and handlers or an API, and you're done. That's enough to launch a product, win attention and customers, make money, and change the world!

Meanwhile, for the past 20 years, the distributed-systems folks have been doing rather more boring things. They experimented with complex protocols such as CORBA and SOAP, and wrestled with problems such as the law of large numbers and clock synchronization that were far too theoretical for most people. Those problems and their solutions are boring to anyone who just wants to use their coding skills to build web applications.

But then something interesting happened. Web applications grew so large that they had to scale horizontally; the sheer volume of traffic from the Internet made it impossible to run on a single machine, no matter how far it was scaled vertically. As we began to scale horizontally, we began to hit the classic distributed-systems problems: race conditions, network partitions, deadlocks, the Byzantine generals problem. The distributed-systems community had been working on these for a long time, and its answer is that the solutions are complex, and some of the problems are theoretically impossible to solve.

Heroku was the first answer to this horizontal-scaling crisis. Heroku made horizontal scaling of the infrastructure so simple that we could keep pretending we were just running a simple web application, and the industry kept up the pretense for at least five years.
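To see how thoroughly Heroku hid the problem, this is roughly its entire scaling story. A two-line sketch, assuming an already-created Heroku app wired up as the project's git remote:

```shell
# Deploy the app, then scale out with a single command.
git push heroku master
heroku ps:scale web=10   # run ten identical web dynos behind Heroku's router
```

Nothing about partitions, consensus, or service discovery ever surfaces, which is exactly the pretending described above.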

Now that the pretending is over, we find we must genuinely scale horizontally: re-engineering software that was never built for it, and coming to understand the problems of monolithic architectures and why a single database can't save us. Out of this came new concepts such as "immutable infrastructure" and "pets vs. cattle" (two different roles infrastructure can play: a pet is a server you carefully tend and cannot afford to lose, while cattle are servers designed to be interchangeable and to fail), along with microservices and a whole set of best and worst practices that try to make these problems simpler.

Into this churn steps Docker, trying to fix everything at once. It does not tell us, as Heroku did, to keep pretending that scaling out doesn't exist and that we can carry on as before. Docker tells us that what we have been dealing with all along is fundamentally a distributed system, so we should embrace that and start using distributed models. Instead of the familiar, simple world of web frameworks, databases, and operating systems, we now have tools like Swarm, Weave, Kubernetes, and etcd: tools that don't pretend everything is easy, and that demand not just that we solve our problems, but that we deeply understand the problems we are solving.

The payoff is that we gain the ability to build architectures that truly scale out, rather than pretending we can abstract the problem away. We need to know what a network partition is and how to handle one, how to choose between AP and CP systems, and how to build systems that survive real networks and real servers. Sometimes there are electrical storms in Virginia, sometimes there are fires, sometimes a shark bites through an undersea cable, sometimes there is latency, and sometimes machines just die. We have to handle all of it.

Everything needs to be more resilient and more reliable, and we need to accept that these are things to think about while we develop applications. Not because it's cool, and not because it's some imagined best practice, but because companies like Amazon, Netflix, and Google have spent the past 15 years showing us how to build systems that really scale out.

Solving real problems

So what does Docker actually solve for us? Everything we do to build and run web apps today is fragile, and Docker forces sanity on us:

  • So far, we have kept deploying machines (the ops half of DevOps) separate from deploying applications (the dev half), often maintained by entirely different teams. That's absurd, because an application depends on the machine and the operating system as much as on its own code, and it makes no sense to think about them separately. Containers unify the OS and the application inside the developer's toolkit (see the Dockerfile sketch after this list).

  • So far, when running service-oriented architectures on AWS, Heroku, and other IaaS and PaaS platforms, we have lacked tools to manage the services themselves. Kubernetes and Swarm manage and orchestrate these services.

  • So far, we have run our applications on full operating systems, exposing the entire security surface that comes with them, rather than using the minimum resources and exposing the minimum attack surface possible. Containers let you expose only the ports you need, and an application can shrink to a single static binary.

  • Until now, we have used configuration-management tools on machines after installation, deploying again and again onto the same hosts. Because containers are scaled up and down by orchestration tools, only immutable images are ever started and running machines are never reused, eliminating potential points of failure.

  • So far, we have built monolithic applications designed to run on a single machine. There was no SOA equivalent of Rails before; now Kubernetes and Compose let you define topologies that span multiple services (see the Compose sketch after this list).

  • So far, we have deployed virtual machines only in the sizes AWS happens to offer. We couldn't say "I need 0.1 CPU and 200 MB of memory," so we paid for virtualization overhead and over-provisioned resources alike. Containers demand fewer resources and share them far better.

  • So far, we have deployed applications onto multi-user operating systems. Unix was designed for many users sharing binaries, databases, file systems, and services on one machine, which is a complete mismatch with what we do when building web services. Again, a container can hold just a single binary rather than an entire OS, leaving an application or service far less to worry about.
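To make the image-related points above concrete, here is a minimal, hypothetical Dockerfile in the spirit the list describes: the "operating system" collapses to one statically linked binary, and only a single port is exposed. The binary name and port are illustrative, not from any real project.

```dockerfile
# Start from an empty image: no shell, no package manager, no extra attack surface.
FROM scratch

# Copy in a statically linked service binary built beforehand (illustrative name).
COPY ./myservice /myservice

# Expose only the one port the service actually listens on.
EXPOSE 8080

ENTRYPOINT ["/myservice"]
```

The resulting image is immutable: to change the service, you build and ship a new image instead of mutating a running machine.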
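And for the topology and resource-granularity points, a hypothetical docker-compose file sketching a two-service topology with sub-machine resource limits, exactly the "0.1 CPU and 200 MB" kind of request that whole VMs can't express. The service and image names are made up, and the deploy: section only takes effect when the file is run as a swarm-mode stack.

```yaml
version: "3"
services:
  web:
    image: myorg/web:1.0      # illustrative image name
    ports:
      - "8080:8080"
    depends_on:
      - db                    # topology: web talks to db
    deploy:
      replicas: 3             # three interchangeable "cattle" instances
      resources:
        limits:
          cpus: "0.1"         # a tenth of a CPU...
          memory: 200M        # ...and 200 MB, not a whole VM
  db:
    image: postgres:9.6
```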

The only constant is change

Our industry evolves so quickly that people rush to adopt new technologies before they are anywhere near mature. Docker is growing at a phenomenal rate, which means it is neither stable nor mature yet. Container runtimes, image formats, orchestration tools, and host operating systems all come in multiple competing variants, each with a different degree of usefulness, scope, momentum, and community support.

In other industries, things get old and boring before they truly stabilize. How many protocols did we abandon before we settled on REST? We built REST, AJAX, and JSON on the corpses of SOAP and CORBA, carrying forward what we had learned. Those were among the important technology transitions of the past decade, yet we still don't have tooling for REST APIs as rich as what SOAP had ten years ago, and SOAP still isn't entirely dead.

The same is true of front-end development; plenty of people compared my piece on the Docker ecosystem to the shit-show currently playing out in front-end development (https://medium.com/@boopathi/it-s-the-future-7a4207e028c2). The same is true of programming languages, where developers have been inventing new solutions to new problems ever since Java arrived. The container ecosystem has just as much to work through.

So we should expect Docker to be immature right now. You will hit plenty of edge cases and oddities when you try it, and some of today's design decisions will look flatly wrong a few years from now. Best practice is a process of trial and error until we get it right.

It will take a couple of years to figure all of this out. But that doesn't mean containers are garbage, or that we can ignore them. We face a choice: stick with the technologies we have, or try the new ones, learn the lessons, and iteratively improve and upgrade our industry.

If you're looking for me, I'll be waiting for you in the future.
