Evolution from Virtual Machine to Container

Virtual Machines (VMs) changed the way services are deployed, optimizing the use of hardware in high-availability environments. Traditional VMs were great for dividing the hardware of a single bare-metal machine into many smaller compute instances, each running on top of a hypervisor that arbitrates access by multiple VMs to the same physical hardware. There are, however, costs associated with this approach. The answer to some of these costs is containers.

VMs are expensive from an overhead and management perspective. The hypervisor alone can consume roughly 9-12% of processing capacity, and the hardware resources a VM needs, such as CPU and memory, must be reserved whether the VM is using them or not. If a VM needs more resources, they must be reallocated manually, sometimes requiring a reboot of the VM. Containers and container runtimes, such as Docker, containerd, or CRI-O, remove much of this overhead by using the host OS's kernel directly, just as any other application running on the host would. Dependencies such as networking, environment configuration, and filesystems, along with many of the other features an OS offers, can be virtualized instead, producing a much smaller "virtual machine" in which to run the application. Because containers are much smaller and application-specific, their biggest benefits are scalability, resiliency, and deployment speed.
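A quick way to see this kernel sharing and lightweight resource control in practice is a minimal sketch with Docker on a Linux host (Docker Desktop on Windows or Mac runs a thin Linux VM underneath, so the output there reflects that VM's kernel):

```sh
# The container shares the host's kernel directly; no guest OS boots.
docker run --rm alpine:3 uname -r    # prints the host's kernel version

# Resources are capped per container rather than reserved up front,
# and these limits can be changed without reinstalling anything.
docker run --rm --cpus="0.5" --memory="256m" alpine:3 echo "constrained container"
```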

Containers are easily scalable as long as there is performance headroom on the host. Think of it as running multiple instances of a program on that host without having to reconfigure the ports, files, environment variables, and so on for each instance. With that manual configuration removed, launching more containers quickly becomes a simple exercise. Resiliency and high availability are supported by running multiple instances of the application behind a load balancer. If an application instance has a problem, it can be cycled out of load-balancing circulation and replaced with a new instance. Deployment speed is what enables all of this.
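As a rough sketch of that "no manual reconfiguration" point, using nginx as a stand-in service: Docker's -P flag maps each container's exposed port to a free host port automatically, so identical instances never collide.

```sh
# Launch three instances of the same image side by side.
docker run -d -P --name web1 nginx:alpine
docker run -d -P --name web2 nginx:alpine
docker run -d -P --name web3 nginx:alpine

# Each instance receives its own host port automatically.
docker port web1    # e.g. 80/tcp -> 0.0.0.0:49153
docker port web2
docker port web3
```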

A container can be provisioned and launched in little more time than the application itself takes to start, which gives scalability and resiliency a tremendous advantage. Containers do not scale all by themselves, however; software is required to manage that scaling. That is where container orchestration comes into play.
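You can get a feel for that startup speed yourself; assuming the image is already cached locally, a cold container launch like this typically completes in well under a second:

```sh
# Launch, run, and tear down a container, timing the whole round trip.
time docker run --rm alpine:3 echo "ready"
```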

Container orchestration is the crucial piece that makes containers the application deployment platform of the future. If the container is the individual instrument player, the container orchestrator is the conductor at the front of the orchestra. In a highly available, business-critical deployment, redundancy is implemented at every level, from the hardware to the software. Container orchestration software, such as native Kubernetes, Red Hat OpenShift, Azure Service Fabric, or AWS's Elastic Kubernetes Service, sits on top of a distributed hardware environment and further extends its capabilities by managing the running containers and their access to services across multiple hosts, also known as nodes. These services can include networking, persistent disks, configuration storage, secret storage, and more. With this feature set, scalability through containers and container orchestration is limited only by your access to the hardware or cloud resources needed to support the deployments.
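A minimal sketch of the conductor at work, using Kubernetes' kubectl with nginx as a placeholder workload: you declare the desired state, and the orchestrator decides which nodes the replicas land on, load-balances across them, and replaces any that fail.

```sh
# Declare a deployment with three replicas; the scheduler places them.
kubectl create deployment web --image=nginx:alpine --replicas=3

# Put a load-balancing Service in front of the replicas.
kubectl expose deployment web --port=80

# Scaling is a one-line declaration; Kubernetes reconciles reality to match.
kubectl scale deployment web --replicas=10

# If a container or node dies, lost replicas are recreated automatically.
kubectl get pods -o wide
```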

The promise of container orchestration is significant. In practice, though, converting applications to use containers and orchestration presents many challenges. Reliance on DevOps increases, as teams can suddenly be responsible for maintaining application services that would previously have been managed by System Operations (SysOps). For example, if a cache instance like Redis needs to be deployed, the deployment is generally described in config files and scripts created by DevOps rather than installed permanently on a VM's OS by SysOps. The role of SysOps then shifts to monitoring and maintaining the orchestration software and the underlying resources available to it.
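A sketch of that DevOps workflow for the Redis example, assuming a Kubernetes cluster: the cache lives in a version-controlled manifest that is applied to the cluster, not installed by hand on a VM (the name redis-cache is hypothetical, chosen for illustration).

```sh
# Declare the Redis cache as a manifest and apply it to the cluster.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-cache          # hypothetical name for illustration
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-cache
  template:
    metadata:
      labels:
        app: redis-cache
    spec:
      containers:
      - name: redis
        image: redis:7
        ports:
        - containerPort: 6379
EOF
```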

Containers have the potential to improve the performance, resiliency, and ease of management of your deployed application services in many ways. Nearly every cloud provider has its own flavor that integrates seamlessly with its ecosystem, allowing for automatic scaling of cloud resources.

Some recommended places to start your development journey are Docker and minikube. Docker is a container build tool that offers a container runtime for almost every platform (Windows, Linux, macOS, x86, ARM, etc.). minikube is a single binary that runs a miniature Kubernetes environment, and it is likewise available on almost every platform. With the support, growth, free availability, and potential benefits of containers, there are great opportunities to integrate this technology into your stack.
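A typical first session, assuming Docker and minikube are already installed (the myapp:dev image tag is hypothetical):

```sh
minikube start                  # boots a single-node Kubernetes cluster locally
kubectl get nodes               # verify the cluster is up

# Build a local image and load it into the minikube cluster.
docker build -t myapp:dev .
minikube image load myapp:dev
kubectl create deployment myapp --image=myapp:dev
```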
