The Changing Face of Infrastructure Part 2
In my last post I spent some time discussing the potential benefits of both employee- and employer-owned Desktop Virtualisation solutions, whether hosted in the Datacentre or distributed and run on the end user's hardware. In this post I'll cover Server Virtualisation and the advantages it can bring to the Enterprise.
Like many IT Pros I started using virtualisation solutions back in the late '90s / early 2000s, running virtual servers on my desktop for training and testing. It wasn't until about 5 years ago that Server Virtualisation began to take off, and initially it was limited to running legacy servers or development, test and lab environments. VMware brought server virtualisation into the mainstream with ESX, a bare-metal hypervisor (its management console was Linux based, although the VMkernel itself is not Linux). This significantly changed the Datacentre, improving hardware utilisation, management, provisioning, availability and disaster recovery.
For many years I have been speaking with colleagues about Mick's Laws of Server Management (I will post the list soon). The first law is "Single Server, Single App, Single Purpose". This law simply states that a single server should never host multiple applications. Whilst there are some exceptions, such as an Active Directory Domain Controller also hosting DNS, DHCP and WINS, it stands true most of the time. The law was designed to improve the availability of services: a file server that also ran the backup software might need a reboot to fix an issue with a tape device, and that reboot would also interrupt file sharing.
The downside of the first law is utilisation and cost. Buying server hardware to host applications that only require limited resources is an expensive exercise. These servers often spent the majority of their time idle whilst consuming valuable power, rack space and cooling in the Datacentre. This is the first advantage of a virtualised server infrastructure. By running these small application servers as Virtual Machines (VMs) I can co-locate the server instances on one physical server and maintain an average of 60-70% utilisation rather than 5-10%. Whilst this may appear to increase risk, as a hardware failure would now result in a multiple server outage, there are technologies which allow for automatic failover in the event of a failure.
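To put some rough numbers on the consolidation argument, here's a minimal sketch. The load figures are illustrative assumptions drawn from the percentages above; this is a back-of-the-envelope calculation, not a sizing tool.

```python
# Rough consolidation sketch (illustrative numbers, not a sizing tool).
# Assumption: each small application server idles at roughly 5-10% of one
# physical host's capacity, and we want to pack VMs onto a host while
# keeping average utilisation around 60-70%.

def vms_per_host(avg_vm_load=0.075, target_utilisation=0.65):
    """How many lightly loaded VMs fit on one host at the target utilisation."""
    return int(target_utilisation // avg_vm_load)

if __name__ == "__main__":
    n = vms_per_host()
    print(f"~{n} small servers consolidated onto one physical host")
```

With those assumed figures, somewhere around eight lightly loaded servers collapse onto a single physical host, which is where the power, rack space and cooling savings come from.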
Years ago a number of servers in the Datacentre would never have been made highly available. Clustering and replication technologies were expensive and limited to a handful of Tier 1 applications, perhaps email and/or ERP solutions. Server Virtualisation gives every server high availability. VMware ESX and Microsoft Hyper-V 2008 R2 have the ability to move VMs seamlessly between hosts. Let me explain this a little more. When correctly configured I can move VMs between physical servers. This process is called vMotion on VMware or Live Migration in Hyper-V. This allows for protection of the underlying server hardware. If an outage occurs or maintenance needs to be performed, the VMs can be moved online to another host. This move requires no downtime and users are unaware of the change. Unfortunately this type of move won't be quite as seamless if a physical server crashes. In that case the VMs would be moved to another node and restarted, so they would reboot. VMware also have a newer technology called Fault Tolerance which protects against this type of failure.
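The distinction between the two behaviours above can be sketched as a toy model. Everything here is invented for illustration (the host and VM names, the function names); it is not any vendor's API, just the logic of planned migration versus crash recovery.

```python
# Toy model of the two behaviours described above (invented names, not a
# vendor API). A planned migration moves a running VM with no downtime;
# an HA recovery after a host crash restarts the orphaned VMs on the
# surviving hosts, which costs each VM a reboot.

def planned_migration(cluster, vm, src, dst):
    """Live-migrate a VM between hosts: it keeps running throughout."""
    cluster[src].remove(vm)
    cluster[dst].append(vm)
    return "no downtime"

def ha_restart(cluster, failed_host):
    """Host crash: spread the orphaned VMs across the surviving hosts."""
    orphans = cluster.pop(failed_host)
    survivors = list(cluster)
    for i, vm in enumerate(orphans):
        cluster[survivors[i % len(survivors)]].append(vm)
    # Each recovered VM comes back via a reboot, unlike a planned move.
    return [(vm, "rebooted") for vm in orphans]
```

The point of the sketch is simply that the planned path never takes the VM offline, while the crash path always implies a restart, which is the gap Fault Tolerance aims to close.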
The next advantage of the virtualised infrastructure is flexibility. Using technologies that build on top of Server Virtualisation, the ability to move VMs not only between hosts but also between Datacentres becomes a reality, providing not only a high availability option but also a Disaster Recovery one.
Provisioning also becomes a much simpler process. If the capacity is available, spinning up a new VM can be done in minutes, without the need to raise a CAPEX request, obtain quotes, place an order and await delivery of hardware.
Development and test scenarios are also greatly improved from a cost and benefit perspective. VMs can be cloned and added to an isolated network, providing an up to date copy of a production system for testing and development. Environments can also be updated faster and more accurately represent their production companions. Again, both Microsoft and VMware have products for replicating test environments, with System Center Virtual Machine Manager and Lab Manager respectively.
The efficiency and high availability of VMs, the ability to move them between hosts or Datacentres, the speed of provisioning and the duplication possibilities, when used together, deliver a dynamic infrastructure. This dynamic infrastructure is referred to, by some, as the "Private Cloud". More on that topic to come in Part 3.