Of limited-functionality legacy code and virtualization

One thing I am noticing with virtualization is that, starting from a fully virtualized physical environment, the boundary between the hardware and the virtualized object (the VM/host boundary) moves to higher levels of software abstraction as the technology advances. For instance, a preliminary step in this direction is an announced version of Windows Server 2016 that is skinnied way down: it comes with only the drivers for virtual hosted devices and no GUI, greatly reducing the disk footprint and, to some degree, the memory footprint. Microsoft claims that this sort of server would require perhaps 1/10 of the patching and rebooting of the full server version.

This sort of thing brings to mind an evolution whereby all Windows Server instances share this common set of drivers on disk and in memory, thereby moving the VM/host boundary a bit further away from the hardware. Doubtless many similar opportunities exist to perform other optimizations, further increasing the abstraction of the host machine until it is something more like a container than a physical machine.

(I noted that VMware, even today, recognizes which parts of Windows Server memory are common among instances and keeps only one copy of that memory to share among virtual machines. This sort of optimization clearly points the way toward a more formal optimization in the Windows Server architecture.)
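As a rough illustration of the idea (this is my own sketch, not VMware's actual implementation, which hashes pages as a hint, byte-compares on a match, and breaks sharing with copy-on-write): scan each VM's memory in page-sized chunks, and when two pages have identical contents, back both with a single stored copy.

```python
import hashlib

PAGE_SIZE = 4096  # a typical x86 page size, in bytes


def deduplicate(vm_memories):
    """Content-based page sharing: map identical pages from
    different VMs onto a single stored copy.

    vm_memories: dict of vm_name -> bytes (a memory image whose
    length is a multiple of PAGE_SIZE).
    Returns (page_table, store), where page_table[vm][i] is the
    key of the shared copy backing page i of that VM, and store
    holds exactly one copy of each distinct page.
    """
    store = {}       # digest -> page contents (one copy each)
    page_table = {}  # vm name -> list of digests, one per page
    for vm, mem in vm_memories.items():
        entries = []
        for off in range(0, len(mem), PAGE_SIZE):
            page = mem[off:off + PAGE_SIZE]
            digest = hashlib.sha256(page).digest()
            # A real hypervisor would byte-compare on a hash match
            # and un-share the page on any subsequent write.
            store.setdefault(digest, page)
            entries.append(digest)
        page_table[vm] = entries
    return page_table, store


# Two VMs whose images share one common "driver" page:
common = b"\x90" * PAGE_SIZE
vm_a = common + b"A" * PAGE_SIZE
vm_b = common + b"B" * PAGE_SIZE
tables, store = deduplicate({"vm_a": vm_a, "vm_b": vm_b})
# Four logical pages end up backed by only three stored copies.
print(len(store))  # 3
```

The payoff grows with the number of instances: ten identical Windows Server VMs would pay for their common driver and code pages roughly once rather than ten times.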

I thought about this when reading the article below, which discusses Microsoft's response to the Docker concept: a higher level of virtualization abstraction. To my thinking, the whole server-multiplication process, whereby each application or function required a new Windows server, was due to limitations of the Windows implementation of many versions ago, such as a single registry per computer and inadequate isolation between processes. Absent these limitations, a single Windows instance could have run multiple applications without the need for the boundaries of separate computers, performing the resource sharing that virtualization now provides.

Given the universality of the installed base of Windows applications, it is mostly too late to re-architect that installed code base with improved versions of Windows Server; hence the need for a virtual machine to support the installed base and perform the optimizations under the covers. But based on this article and others, it looks to me as though the number and success of virtualized servers is leading Microsoft to evolve the application API a bit, leading to more efficient virtualization.

In any case, I commend this article to you as you ponder the future of Windows virtualization.


