The history of Kubernetes
When it comes to modern IT infrastructure, the role of Kubernetes—the open-source container orchestration platform that automates the deployment, management and scaling of containerized software applications (apps) and services—can’t be overstated.
According to a Cloud Native Computing Foundation (CNCF) report (link resides outside ibm.com), Kubernetes is the second largest open-source project in the world after Linux and the primary container orchestration tool for 71% of Fortune 100 companies. To understand how Kubernetes came to dominate the cloud computing and microservices marketplaces, we have to examine its history.
The evolution of Kubernetes
The history of Kubernetes, whose name comes from the Ancient Greek for “pilot” or “helmsman” (the person at the helm who steers the ship), is often traced to 2013, when a trio of engineers at Google—Craig McLuckie, Joe Beda and Brendan Burns—pitched an idea to build an open-source container management system. These tech pioneers were looking for ways to bring Google’s internal infrastructure expertise into the realm of large-scale cloud computing and also enable Google to compete with Amazon Web Services (AWS)—the unrivaled leader among cloud providers at the time.
Traditional IT infrastructure versus virtual IT infrastructure
But to truly understand the history of Kubernetes—also often referred to as “Kube” or “K8s,” a “numeronym” (link resides outside ibm.com)—we have to look at containers in the context of traditional IT infrastructure versus virtual IT infrastructure.
In the past, organizations ran their apps solely on physical servers (also known as bare metal servers). However, there was no way to maintain system resource boundaries for those apps. For instance, whenever a physical server ran multiple applications, one application might eat up all of the processing power, memory, storage space or other resources on that server. To prevent this from happening, businesses would run each application on a different physical server. But running apps on multiple servers leaves resources underutilized and makes scaling difficult. What’s more, maintaining a large number of physical machines takes up space and is a costly endeavor.
Virtualization
Then came virtualization—the process that forms the foundation for cloud computing. While virtualization technology can be traced back to the late 1960s, it wasn’t widely adopted until the early 2000s.
Virtualization relies on software known as a hypervisor. A hypervisor is a lightweight software layer that enables multiple virtual machines (VMs) to run on a single physical server’s central processing unit (CPU). Each virtual machine has a guest operating system (OS), a virtual copy of the hardware that the OS requires to run, and an application with its associated libraries and dependencies.
While VMs make more efficient use of hardware resources than physical servers, they still take up a large amount of system resources. This is especially the case when numerous VMs run on the same physical server, each with its own guest operating system.
Containers
Enter container technology. A historical milestone in container development occurred in 1979 with the development of chroot (link resides outside ibm.com), part of the Unix version 7 operating system. Chroot introduced the concept of process isolation by changing the apparent root directory for a running process and its children, restricting their file access to a specific directory tree.
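To make the idea concrete, here is a minimal sketch in Python of chroot-style isolation. It assumes a prepared directory (the hypothetical /tmp/jail) and root privileges, which the chroot system call requires:

```python
import os

# Hypothetical jail directory, prepared in advance with any files the
# isolated process needs. Running this sketch requires root privileges.
JAIL = "/tmp/jail"

pid = os.fork()
if pid == 0:
    os.chroot(JAIL)   # JAIL becomes the child process's apparent root ("/")
    os.chdir("/")     # step inside the new root
    # From here on, the child can only see files under JAIL; the rest
    # of the host filesystem is invisible to it.
    print(os.listdir("/"))
    os._exit(0)
os.waitpid(pid, 0)    # parent waits for the isolated child to finish
```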
Modern-day containers are defined as units of software where application code is packaged with all its libraries and dependencies. This allows applications to run quickly in any environment—whether on- or off-premises—from a desktop, private data center or public cloud.
Rather than virtualizing the underlying hardware like VMs, containers virtualize the operating system (typically Linux or Windows). The absence of a guest OS is what makes containers lightweight, as well as faster and more portable than VMs.
Borg: The predecessor to Kubernetes
Back in the early 2000s, Google needed a way to get the best performance out of its virtual servers to support its growing infrastructure and deliver its public cloud platform. This led to the creation of Borg, the first unified container management system. Developed between 2003 and 2004, the Borg system is named after a group of Star Trek aliens—the Borg—cybernetic organisms who function by sharing a hive mind (collective consciousness) called “The Collective.”
The Borg name fit the Google project well. Borg’s large-scale cluster management system essentially acts as a central brain for running containerized workloads across Google’s data centers. Designed to run alongside Google’s search engine, Borg was used to build Google’s internet services, including Gmail, Google Docs, Google Search, Google Maps and YouTube.
Borg allowed Google to run hundreds of thousands of jobs, from many different applications, across many machines. This enabled Google to accomplish high resource utilization, fault tolerance and scalability for its large-scale workloads. Borg is still used at Google today as the company’s primary internal container management system.
In 2013, Google introduced Omega, its second-generation container management system. Omega took the Borg ecosystem further, providing a flexible, scalable scheduling solution for large-scale computer clusters. It was also in 2013 that Docker, a key player in Kubernetes history, came into the picture.
Docker ushers in open-source containerization
Developed by dotCloud, a Platform-as-a-Service (PaaS) technology company, Docker was released in 2013 as an open-source software tool that allowed software developers to build, deploy and manage containerized applications.
Docker container technology uses the Linux kernel (the base component of the operating system) and kernel features, such as namespaces and control groups (cgroups), to separate processes so they can run independently. To clear up any confusion: the name Docker also refers to Docker, Inc. (formerly dotCloud, link resides outside ibm.com), which develops productivity tools built around its open-source containerization platform, as well as to the Docker open-source ecosystem and community (link resides outside ibm.com).
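As a rough illustration of that workflow, the following sketch uses the Docker SDK for Python (installable with pip install docker) and assumes a local Docker daemon is running; it pulls a small image and runs a throwaway container:

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Run a short-lived container from the small "alpine" image and capture
# its output; remove=True cleans up the container afterward.
output = client.containers.run("alpine", "echo hello from a container", remove=True)
print(output.decode())
```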
By popularizing a lightweight container runtime and providing a simple way to package, distribute and deploy applications onto a machine, Docker provided the seeds of inspiration for the founders of Kubernetes. When Docker came on the scene, Googlers Craig McLuckie, Joe Beda and Brendan Burns were excited by its ability to build individual containers and run them on individual machines.
While Docker had changed the game for cloud-native infrastructure, it had limitations because it was built to run on a single node, which made automation across a fleet of machines impossible. For instance, as apps grew to span thousands of separate containers, managing them across various environments became a difficult task in which each deployment had to be packaged and placed manually. The Google team saw a need—and an opportunity—for a container orchestrator that could deploy and manage multiple containers across multiple machines. Thus, Google’s third-generation container management system, Kubernetes, was born.
Learn more about the differences and similarities between Kubernetes and Docker
The birth of Kubernetes
Many of the developers of Kubernetes had worked on Borg. They wanted to build a container orchestrator that incorporated everything they had learned from designing and developing the Borg and Omega systems, resulting in a less complex open-source tool with a user-friendly interface (UI). As an ode to Borg, they named it Project Seven of Nine, after a Star Trek: Voyager character who is a former Borg drone. While the original project name didn’t stick, it was memorialized by the seven points on the Kubernetes logo (link resides outside ibm.com).
Inside a Kubernetes cluster
Kubernetes architecture is based on running clusters that allow containers to run across multiple machines and environments. Each cluster typically consists of two classes of nodes:
- Worker nodes, which run the containerized applications.
- Control plane nodes, which control the cluster.
The control plane essentially acts as the orchestrator of the Kubernetes cluster and includes several components: the API server (which manages all interactions with Kubernetes), the controller manager (which handles all control processes), the cloud controller manager (the interface with the cloud provider’s API) and so forth. Worker nodes run containers using container runtimes such as Docker. Pods, the smallest deployable units in a cluster, hold one or more app containers and share resources, such as storage and networking information.
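As a minimal sketch of how these pieces fit together, the following Python snippet uses the official Kubernetes client library (installable with pip install kubernetes) and assumes a working kubeconfig; it asks the control plane’s API server to list the cluster’s nodes and pods:

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()  # read cluster credentials from ~/.kube/config
v1 = client.CoreV1Api()    # client for the core API exposed by the API server

# The control plane tracks every node in the cluster.
for node in v1.list_node().items:
    print("node:", node.metadata.name)

# Pods, the smallest deployable units, run on the worker nodes.
for pod in v1.list_pod_for_all_namespaces().items:
    print("pod:", pod.metadata.namespace + "/" + pod.metadata.name)
```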
Read more about how Kubernetes clusters work
Kubernetes goes public
In 2014, Kubernetes made its debut as an open-source version of Borg, with Microsoft, Red Hat, IBM and Docker signing on as early members of the Kubernetes community. The software tool offered basic features for container orchestration, including the following:
- Replication to deploy multiple instances of an application (see the sketch after this list)
- Load balancing and service discovery
- Basic health checking and repair
- Scheduling to group many machines together and distribute work to them
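As a rough sketch of the replication feature, the following Python snippet uses the official Kubernetes client library and today’s Deployment API (Kubernetes 1.0 handled this with ReplicationControllers) to keep three replicas of a hypothetical nginx-based app running:

```python
from kubernetes import client, config  # official Kubernetes Python client

config.load_kube_config()

# Hypothetical app: three replicas of an nginx container labeled "hello".
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # Kubernetes restores this count if any pod fails
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="hello", image="nginx:1.25")]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```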
In 2015, at the O’Reilly Open Source Convention (OSCON) (link resides outside ibm.com), the Kubernetes founders unveiled an expanded and refined version of Kubernetes—Kubernetes 1.0. Soon after, developers from the Red Hat® OpenShift® team joined the Google team, lending their engineering and enterprise experience to the project.
The history of Kubernetes and the Cloud Native Computing Foundation
Coinciding with the release of Kubernetes 1.0 in 2015, Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF) (link resides outside ibm.com), part of the nonprofit Linux Foundation. The CNCF was jointly created by numerous members of the world’s leading computing companies, including Docker, Google, Microsoft, IBM and Red Hat. The mission (link resides outside ibm.com) of the CNCF is “to make cloud-native computing ubiquitous.”
In 2016, Kubernetes became the CNCF’s first hosted project, and by 2018, it became the first CNCF project to graduate. The number of actively contributing companies quickly rose to over 700, and Kubernetes became one of the fastest-growing open-source projects in history. By 2017, it was outpacing competitors like Docker Swarm and Apache Mesos to become the industry standard for container orchestration.
Kubernetes and cloud-native applications
Before cloud computing, software applications were tied to the hardware servers they ran on. But in 2018, as Kubernetes and containers became the management standard for cloud vendors, the concept of cloud-native applications began to take hold. This opened the door to the research and development of cloud-based software.
Kubernetes aids in developing cloud-native microservices-based programs and allows for the containerization of existing apps, enabling faster app development. Kubernetes also provides the automation and observability needed to efficiently manage multiple applications at the same time. The declarative, API-driven infrastructure of Kubernetes allows cloud-native development teams to operate independently and increase their productivity.
The continued impact of Kubernetes
The history of Kubernetes, and its role as a portable, extensible, open-source platform for managing containerized workloads and microservices, continues to unfold.
Since Kubernetes joined the CNCF in 2016, the number of contributors has grown to 8,012—a 996% increase (link resides outside ibm.com). The CNCF’s flagship global conference, KubeCon + CloudNativeCon (link resides outside ibm.com), attracts thousands of attendees and provides an annual forum where developers and users share information and insights on Kubernetes and other DevOps trends.
On the cloud transformation and application modernization fronts, the adoption of Kubernetes shows no signs of slowing down. According to a report from Gartner, The CTO’s Guide to Containers and Kubernetes (link resides outside ibm.com), more than 90% of the world’s organizations will be running containerized applications in production by 2027.
IBM and Kubernetes
Back in 2014, IBM was one of the first major companies to join forces with the Kubernetes open-source community and bring container orchestration to the enterprise. Today, IBM is helping businesses navigate their ongoing cloud journeys with the implementation of Kubernetes container orchestration and other cloud-based management solutions.
Whether your goal is cloud-native application development, large-scale app deployment or managing microservices, we can help you leverage Kubernetes and its many use cases.
Get started with IBM Cloud® Kubernetes Service
Red Hat® OpenShift® on IBM Cloud® offers OpenShift developers a fast and secure way to containerize and deploy enterprise workloads in Kubernetes clusters.
Explore Red Hat OpenShift on IBM Cloud
IBM Cloud® Code Engine, a fully managed serverless platform, lets you run containers, application code or batch jobs on a fully managed container runtime.
Learn more about IBM Cloud Code Engine