Today, IT organizations big and small are embracing the benefits of virtualization technology. It's a cost-effective, resource-efficient way to run applications in a controlled environment. Instead of leasing a brand-new server to run applications, for instance, an organization can create a virtual server on existing hardware, offering many of the same functions but with more efficient use of resources.
Among the most popular and widely used virtualization technologies are containers and virtual machines (VMs). It's a common assumption that these terms refer to the same technology, but that isn't the case. While containers and VMs share some similarities in purpose and function, they each have their own distinct characteristics.
VM technology has roots dating back to the 1960s, although it wasn't until the late 1990s that a company called VMware launched a product that revolutionized the industry. VMware Workstation virtualized the x86 architecture, essentially allowing computer programs to run inside a virtual environment. Since then, major IT players like Amazon, IBM and Microsoft have launched their own VM services.
As the name suggests, a virtual machine offers a complete virtualized computing environment, complete with a BIOS, operating system, disk storage, memory and CPU. If you want to run a program in a VM, you must start up the virtual environment just as you would a traditional computer. VMs typically boot faster than physical machines, but there's still an initial wait time.
On the other side of the virtualization fence are containers. Much like VMs, containers allow programs to be executed in a virtual computing environment. The key difference between a container and a VM, however, lies in the level of abstraction. The folks at NextPlatform explain that container abstraction “happens at the operating system level,” whereas VM abstraction happens at the hardware level. The virtualization technology driving containers can also have a specific purpose, such as LXC, which is used strictly for Linux.
In a container, all users share the same operating system, network connection and kernel instance, which promotes more efficient use of resources. Going back to the basic characteristics of a VM, each virtual machine runs its own copy of an operating system, as well as a virtual copy of the hardware needed to run that OS. This can quickly bog down the CPU and memory, resulting in slower performance and less efficient use of resources. Containers, by contrast, only require a portion of an operating system to function.
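The shared-kernel point can be observed directly: a process inside a container reports the host machine's kernel version, while a process inside a VM reports the guest OS's own kernel. A minimal sketch using Python's standard `platform` module (the function name `kernel_info` is my own, for illustration):

```python
import platform

def kernel_info():
    """Return the kernel name and release this process sees.

    Run inside a container, this reports the host's kernel, because
    containers share the host's kernel instance. Run inside a VM, it
    reports the guest operating system's own kernel.
    """
    return platform.system(), platform.release()

system, release = kernel_info()
print(f"Kernel: {system} {release}")
```

Running this on the host and again inside a container started on that host should print the same kernel release, which is exactly the sharing described above.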
Which is Best?
There's really no single best type of virtualization technology, as both VMs and containers have their own advantages and disadvantages. Containers allow a greater number of applications to run on a single server than VMs because abstraction occurs at the operating system level.
But containers also pose a greater risk of cyber threats than VMs. As Red Hat security engineer Daniel J. Walsh explains, “containers do not contain.” The good news is that there are ways to strengthen the security of a container environment, including dropping user privileges when they are no longer necessary, running services as non-root, and treating the root of a container as if it were not in a container.
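One of the hardening steps above, dropping privileges once they are no longer needed, can be sketched in a few lines of Python. This is an illustrative sketch, not Walsh's exact recipe; the target uid/gid of 65534 (the conventional `nobody` account) is an assumption for the example:

```python
import os

NOBODY_UID = 65534  # conventional "nobody" uid; an assumption for illustration
NOBODY_GID = 65534

def drop_privileges(uid=NOBODY_UID, gid=NOBODY_GID):
    """Permanently drop root privileges after setup work is done.

    Returns True if privileges were dropped, or False if the process
    was not running as root in the first place.
    """
    if os.geteuid() != 0:
        return False  # nothing to drop; already unprivileged
    os.setgid(gid)  # drop the group first, while we still have permission to
    os.setuid(uid)  # then drop the user; this cannot be undone
    return True
```

A service would call this right after binding privileged ports or reading root-owned secrets, so that the rest of its lifetime runs unprivileged even if it is later compromised.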
Thanks for reading and feel free to let us know your thoughts in the comments below regarding containers and virtual machines.
Photo credit: Torkild Retvedt