Container Orchestration - Chapter 2

Diving into the second chapter of the Introduction to Kubernetes training provided by the Linux Foundation.


Learning Objectives

  • Define the concept of container orchestration.
  • Explain the benefits of using container orchestration.
  • Discuss different container orchestration options.
  • Discuss different container orchestration deployment options.

Before explaining what container orchestration is, the training first reviews what containers are.

Containers provide portable, isolated virtual environments for applications to run without interference from other running applications. After working with Docker for a while, I only recently learned about container runtimes in general and that we can choose between different ones like runc, containerd, or CRI-O, which completely blew my mind. I talked about this in a previous blog post of mine.

Each of these building blocks is interchangeable, and we are even fortunate that Windows is going in the same direction. That way applications, and therefore containers, can run in any environment and be written in any language or framework.

Nowadays these applications are often called microservices. As I mentioned, this enables developers to write lightweight applications in various modern programming languages, each with its specific dependencies, libraries, and environmental requirements.

Containers encapsulate microservices and their dependencies but do not run them directly. Containers run container images.

A container image contains the application along with its runtime and all the libraries and dependencies it needs, packaged as an isolated executable environment.

Altogether, we are able to deploy containers almost everywhere, e.g. on workstations, virtual machines, the public cloud, a Raspberry Pi, or even bare metal!
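To make the idea of an image concrete, here is a minimal, hypothetical Dockerfile for a small Python microservice. The file names (app.py, requirements.txt) are made up, but the structure shows how an image bundles the runtime, the libraries, and the application itself:

```dockerfile
# Base layer provides the language runtime
FROM python:3.12-slim
WORKDIR /app
# Declare the specific dependencies of this microservice...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# ...and add the application code itself
COPY app.py .
# What runs when a container is started from this image
CMD ["python", "app.py"]
```

Building this produces one self-contained image that should run the same on any of the environments listed above.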

What is Container Orchestration?

As soon as we migrate from the dev environment, where containers run on a single host, to QA and production environments, the services will have to meet specific requirements. The training gives us the following requirements:

  • Fault-tolerance
  • On-demand scalability
  • Optimal resource usage
  • Auto-discovery to automatically discover and communicate with each other
  • Accessibility from the outside world
  • Seamless updates/rollbacks without any downtime

I have to agree that we need orchestration for big projects, where large and diverse teams have to work together on several systems. I am not sure yet when the control we get, and therefore the configuration and maintenance it requires, is really, well... needed. Hopefully I will find the answer after completing the courses and getting some real-world projects into my hands.

Container orchestrators are tools which group systems together to form clusters where the deployment and management of containers is automated at scale while meeting the requirements mentioned above.

"Automated at scale while meeting the requirements mentioned above." could be the answer to my questions.

There are many solutions available: some are mere re-distributions of well-established container orchestration tools, while others are enriched with features, sometimes at the cost of flexibility.

Big cloud services like AWS with Amazon Elastic Container Service, or Microsoft's Azure Container Instances / Azure Service Fabric, are the most feature-rich and customer-oriented solutions, whereas Kubernetes and Docker Swarm are the more native ones, with Kubernetes being part of the Cloud Native Computing Foundation (a very interesting topic in itself). There is another good training on this called Introduction to Cloud Infrastructure Technologies.

Benefits of using Container Orchestration

While we can manually maintain a couple of containers, orchestrators make things much easier for operators as soon as we hit hundreds or thousands of containers running on a global infrastructure. The training lists the following features of container orchestrators:

  • Group hosts together while creating a cluster
  • Schedule containers to run on hosts in the cluster based on resources availability
  • Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster
  • Bind containers and storage resources
  • Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications by creating a level of abstraction between the containers and the user
  • Manage and optimize resource usage
  • Allow for implementation of policies to secure access to applications running inside containers.
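As a sketch of how several of these features look in practice, here is a hypothetical Kubernetes manifest (all names and sizes are made up): the Deployment groups a set of similar containers and declares their resource needs for the scheduler, and the Service binds them to a single load-balancing construct:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # a group of similar containers, scheduled
  selector:                # across the hosts in the cluster
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.27
        resources:
          requests:        # lets the scheduler place pods based on
            cpu: 100m      # resource availability per host
            memory: 64Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web               # load-balances across all pods labelled
  ports:                   # app=web, abstracting them behind one address
  - port: 80
```

Users talk to the Service, not to any individual container, which is exactly the level of abstraction the list above describes.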

With all these configurable yet flexible features, container orchestrators are an obvious choice when it comes to managing containerized applications at scale.

Again, this is all good on paper. I haven't yet worked in an environment with "containerized applications at scale", so at what scale does the benefit redeem the effort of configuring and maintaining a highly available cluster system where applications have to be grouped together? I see a big overhead.

Deployment Options

We can choose any infrastructure to deploy container orchestrators on: bare metal, virtual machines, on-premises, or public and hybrid clouds. There are turnkey solutions which allow Kubernetes clusters to be installed with only a few commands on top of cloud IaaS. Providers such as Amazon Elastic Kubernetes Service (Amazon EKS), Azure Kubernetes Service (AKS), DigitalOcean Kubernetes, Google Kubernetes Engine (GKE), IBM Cloud Kubernetes Service, Oracle Container Engine for Kubernetes, or VMware Tanzu Kubernetes Grid offer managed container orchestration as a Service, more specifically managed Kubernetes as a Service.
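As a sketch of what "only a few commands" can mean, here is a hypothetical eksctl cluster config for EKS (the name, region, and sizes are made up). Assuming eksctl is installed and AWS credentials are configured, running `eksctl create cluster -f cluster.yaml` against this file would stand up a managed cluster:

```yaml
# cluster.yaml -- a minimal, hypothetical eksctl ClusterConfig
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster       # made-up cluster name
  region: eu-central-1
nodeGroups:
  - name: workers
    instanceType: t3.medium
    desiredCapacity: 2     # two worker nodes to start with
```

The other managed providers have comparable one-command workflows with their own CLIs.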

Well, today we learned about the concept of container orchestration, and we can explain the benefits of using it (maybe even some drawbacks). We can discuss the different container orchestration options (EKS, AKS, DigitalOcean, Docker Swarm, etc.), and at last we are able to discuss the different container orchestration deployment options.

In the next chapter we will go through Kubernetes specifically.

That's it my friends, thanks for reading.