Aug 3, 2021
Engineering

Compute

Post by
Wajdi Fathallah

A Brief History

Have you ever wondered why containers have become so widespread? Why have they become the de facto standard for shipping and deploying applications?

My approach to answering these questions is to tell the history of computing environments. Much like human evolution, but on a different time scale, computing environments evolved through distinct eras, with each new era attempting to tackle the limitations of the previous one.

In this post, I will explain the reasons behind the growing adoption of containers by walking through four key eras in computing history.

The On-Premise Era

A few decades ago, an organization’s IT department was in charge of maintaining a set of physical servers, usually installed on the premises of the organization. During this time, software applications mainly served the unique needs of the organization and often consisted of a few modules combined with a database.

Deploying an application at that time was a very complex procedure. First, application development teams needed to create the deployment package and hand it over to the infrastructure team. The latter then allocated the required resources (memory, storage, and CPU) and configured the computing environment before finally deploying the package.

The need for more flexibility and elasticity motivated the rise of virtualization technologies. Thanks to virtualization, multiple computing environments could share the same physical resources, which led to better resource management.

Unfortunately, virtualization alone was not enough to prevent future problems. As more and more applications were deployed, more servers were needed. Organizations were suffering from growing maintenance costs and upfront investments in computing resources and facilities.

First Generation Cloud-Computing Era

The Cloud, to put it simply, is infrastructure you connect to over the internet, letting you use computing resources without installing them on your premises.

With the promise of greater flexibility and elasticity at sustainable costs, organizations started adopting cloud-based computing environments. Cloud adoption had several positive effects: organizations started delivering more, and faster, and the software development lifecycle became smoother. More applications were deployed, more data was stored and processed, and this led to the emergence of horizontally scaled systems.

Let’s talk about scalability for a second; there are two kinds, vertical and horizontal. Vertical scalability (also described as scaling up) means adding more hardware resources to a single computer, usually installed in a rack. The more you scale, the taller the rack becomes; push it far enough and you hit the ceiling. Horizontal scalability (also described as scaling out) means adding more computers to a pool of resources, which increases the overall power of the pool; this type of scalability has no hard limit.

That said, this new kind of scalability demanded more flexibility and more thought about how systems react to failures. This is where containers brought many solutions.

The Containerization Era

To start, let’s define a container. In its simplest definition, a container is a lightweight, standalone, executable package of software that includes everything needed to run an application.
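
To make this concrete, here is a minimal sketch using the Docker CLI; the image name "demo-app" and the one-line program are hypothetical, chosen only for illustration:

    # Build a tiny image from a Dockerfile passed on stdin.
    # "demo-app" is a made-up name for this example.
    docker build -t demo-app - <<'EOF'
    FROM python:3.9-slim
    CMD ["python", "-c", "print('hello from a container')"]
    EOF

    # Run it anywhere Docker is installed; the image carries the
    # runtime, the code, and their dependencies with it.
    docker run --rm demo-app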

The containerization era focused on making deployments portable, faster, and easier to scale. With horizontal scalability, a new term started trending: the cluster. You no longer heard about machines, but about clusters of machines. A cluster usually refers to a collection of individual environments connected to the same network that collaborate to execute a distributed workload.

Indisputably, horizontal scalability gave birth to large-scale applications, but complexity increased as well. Not only was connecting, synchronizing, and securing a cluster of machines a big challenge, but the risk of failure also grew as more failure points were introduced. To overcome this, we needed a more flexible and lightweight environment that adapts faster and better to failures or changing loads.

Starting a container replaces the heavy process of creating a virtual machine, installing the required dependencies, and joining it to a distributed workload.
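
As a rough illustration of that difference, a single command pulls an image and starts an isolated web server in seconds, because the container shares the host kernel and skips the OS boot step entirely (the port and names below are arbitrary choices):

    # Start an isolated Nginx web server; this typically takes seconds,
    # versus minutes to provision and boot a full virtual machine.
    docker run --rm -d -p 8080:80 --name web nginx:1.21

    # The server is immediately reachable on the mapped port.
    curl http://localhost:8080

    # Tearing it down is just as fast.
    docker stop web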

Here too, the wide adoption of containers led to a new challenge: orchestrating hundreds or thousands of containers. Kubernetes, among other technologies, was instrumental in solving this pain by abstracting away as much operational complexity as possible.

One way to explain Kubernetes is the Paris metro network: think of containers as individual trains and Kubernetes as the infrastructure (stations, traffic lights, the control room, etc.) that orchestrates the flow.

Kubernetes runs on top of a cluster of machines, usually machines provisioned in your cloud environment (AWS, GCP, Azure, or others). Cloud vendors also saw the opportunity to offer managed clusters, which increased adoption and helped teams focus on innovating rather than managing the complexity of a Kubernetes cluster.
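
To give a feel for that abstraction, here is a minimal sketch of how a workload is described to Kubernetes: you declare a desired state, and the cluster continuously works to maintain it, rescheduling containers when a machine fails. The names and replica counts are illustrative:

    # Declare three replicas of a containerized web server; Kubernetes
    # spreads them across the cluster and replaces any replica that dies.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.21
    EOF

    # Scaling out is a one-line change to the desired state.
    kubectl scale deployment web --replicas=10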

Now you might ask, what is coming next? The following is one possibility.

The Poly Cloud Era

The poly cloud refers to using multiple cloud providers rather than being limited to a single one. Imagine being able to run your workloads on a cloud with the services, cost optimizations, and scale of AWS, Azure, and GCP combined. Even better, imagine a new abstraction layer where you no longer manage cloud providers but simply containers.

Technically speaking, this is pretty much possible if we only use containers and stick to them. We can imagine a Kubernetes cluster that spans machines from multiple cloud providers; the only requirement is that all the machines be connected to the same network. This is not a revolution, it is simply networking.
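
As a purely hypothetical sketch of what that could look like with today's tooling: once machines from different providers are joined to one cluster, you can tag each node with its provider and steer workloads with ordinary scheduling constraints (the node names and the "provider" label key below are invented for illustration):

    # Label nodes by the cloud they came from; "provider" is an
    # illustrative convention, not a standard Kubernetes label.
    kubectl label node node-aws-1 provider=aws
    kubectl label node node-gcp-1 provider=gcp

    # Inspect the cluster; to the scheduler these are just nodes on a
    # network, regardless of which cloud hosts them.
    kubectl get nodes -L provider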

GCP had this vision years ago and designed Anthos exactly for that. Here is a 5-minute introduction: https://www.youtube.com/watch?v=Qtwt7QcW4J8


As mentioned earlier, the poly cloud is one path down the road of compute evolution, and I am sure there are many others. What does the future look like from your perspective? Let me know at wajdi@siffletdata.com or comment below.
