It isn’t easy to think this way, but if you start planning your Virtual Desktop initiative at the endpoint, you are fighting a losing battle. It isn’t the Thin Client that is important, but rather the delivery and support infrastructure behind that Thin Client that will make or break your VDI project in the long run.
When Virtualization came to the mainstream a few years ago, the prime mover was datacenter consolidation. This provided SysAdmins with savings in rack space, power, and cooling, as well as increased resilience and responsiveness in the IT infrastructure as a whole. By compartmentalizing your servers into VMs, putting them on shared storage, placing them into clusters, and leveraging HA and DRS, you added protection for your servers that was extremely costly and rare in a physical environment. But most importantly, SysAdmins got their lives back. Once virtualized, there was a significant drop in hardware-related downtime and outages, quicker response to new project requests, and fewer calls in the middle of the night. By virtualizing, we as SysAdmins became proactive rather than reactive in our day-to-day affairs. When it comes to VDI, however, there is little to be gained by the SysAdmin; the huge benefit goes to the organization as a whole, and to the HelpDesk in particular, when desktops are virtualized.
So why not just spin up a bunch of VMs as desktops and roll them out to all of our users? Because server VMs and desktop VMs have different use cases and workload profiles. Let’s look at an example using some basic generalities and industry standards.
You have a user on a desktop, working at XYZ company. According to average usage figures, he generates around 10 I/O operations per second to the hard drive, or 10 IOPS. The standard SATA hard drive in a desktop or laptop is capable of 80 IOPS, so there is little chance that he will saturate the hard drive’s performance envelope at any given time. Now take 100 average users, and you find that they will require, on average, 1000 IOPS to sustain their performance. In order to virtualize those 100 desktops, you will need to provide 1000 IOPS from your storage array. Additionally, you may find that the read/write mix from these users averages around 60/40.
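The aggregate math above can be sketched in a few lines. This is a minimal illustration using the example’s figures (100 users, ~10 IOPS each, 60/40 read/write mix); the function name and defaults are my own, not from any sizing tool:

```python
# Aggregate front-end IOPS for a pool of desktop users.
# Defaults reflect the worked example: ~10 IOPS per user, 60% reads.
def frontend_iops(users, iops_per_user=10, read_pct=0.60):
    total = users * iops_per_user
    reads = total * read_pct
    writes = total - reads
    return total, reads, writes

total, reads, writes = frontend_iops(100)
print(total, reads, writes)  # 1000 600.0 400.0
```

For 100 average users this yields the 1000 total IOPS (600 reads, 400 writes) used in the RAID calculations that follow.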
As I mentioned earlier, a SATA disk (7.2k) can support 80 IOPS. A 10k-SAS disk supports around 140 IOPS, and a 15k-SAS disk supports 180 IOPS. As you can see, the 15k-SAS disk is a much higher performer than the SATA disk. In a standard 14-disk storage enclosure, you will see significantly more IOPS from SAS drives than you will from SATA drives. We also need to take into consideration the “RAID Penalty”: the extra back-end IOPS cost incurred by the RAID policy on an array. While reads are free (no penalty), writes to a RAID set incur a penalty due to mirroring or parity, because each single write request translates into multiple writes to the array. For example, RAID 1(0) requires two writes to the array for the mirror, RAID 5(0) requires 4 writes for parity, and RAID 6(0) requires 6 writes due to double parity.
In our example above, 1000 IOPS at a 60/40 read/write mix will have 600 reads and 400 writes required per second. In the case of a RAID 10 array, you will need
600 (reads) + [400 (writes) * 2 (RAID Penalty)] = 1400 IOPS.
In practical terms, this means that to support 1000 IOPS at a 60/40 mix on RAID 10, you need a disk array capable of 1400 IOPS.
Take that further and calculate a RAID 5 array:
600 (reads) + [400 (writes) * 4 (RAID Penalty)] = 2200 IOPS on the back end.
RAID 6 will look like this:
600 (reads) + [400 (writes) * 6 (RAID Penalty)] = 3000 IOPS,
which is over twice as much I/O as required by the RAID 10 array.
With all of this IOPS information at hand, we can now calculate what hardware will be needed to support the virtual desktops. How many disks, and of what type, will be needed to provide the required 1000 IOPS? Will 8 * 15k-SAS drives in RAID 10 suffice, or will we have to go with 28 * 7.2k-SATA drives in RAID 5? Possibly it will come down to a mix of the two types. Yet another card to play is Solid State Drives: enterprise SSDs can support 4000 IOPS per disk, which can be a perfect match for some VDI deployments.
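As a rough sanity check on those spindle counts, you can divide the required back-end IOPS by the per-disk figures quoted earlier. This sketch ignores capacity and RAID-group layout and only counts raw spindle IOPS, which is a simplification of my own:

```python
import math

# Rule-of-thumb per-spindle IOPS from the figures above.
DISK_IOPS = {"SATA_7.2k": 80, "SAS_10k": 140, "SAS_15k": 180, "SSD": 4000}

def disks_needed(required_backend_iops, disk_type):
    # Minimum spindle count to deliver the back-end IOPS; capacity,
    # hot spares, and RAID-group geometry are out of scope here.
    return math.ceil(required_backend_iops / DISK_IOPS[disk_type])

# RAID 10 needs 1400 back-end IOPS: 8 x 15k-SAS (8 * 180 = 1440) just covers it.
print(disks_needed(1400, "SAS_15k"))    # 8
# RAID 5 needs 2200 back-end IOPS: 28 x 7.2k-SATA (28 * 80 = 2240) covers it.
print(disks_needed(2200, "SATA_7.2k"))  # 28
```

The two results line up with the 8-disk SAS and 28-disk SATA options posed above, and show why a single enterprise SSD's 4000 IOPS changes the calculus entirely.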
Now that you understand the relationship between IOPS on the desktop and back-end storage requirements, what is the best way to figure out exactly what you will need for YOUR virtual desktop project? The best tool that I have found is Stratusphere from Liquidware Labs. The LWL server is an OVF, downloadable and installable into your existing VMware infrastructure. From that point, we generate an installable agent that can be deployed on your existing desktops. Once installed, the agent collects performance information from each desktop relating to CPU, memory, network, and disk, at the user and application level, and sends it back to the LWL server for analysis. After several weeks of collection, we can start to see a usage profile for your users based on time of day, applications used, and the overall workload of each desktop. With this data in hand, we can profile your user base and size your desktop needs. While the average user consumes 10 IOPS, your users may only require 6 or 7 IOPS on average, with a subset of users that require 12-15 IOPS. We can also track application usage and determine how best to deploy applications to virtual desktops, either installed in the base image or ‘ThinApped’ and streamed to the clients as needed.
So what have we learned today? First, that it is very important to plan ahead when considering a VDI deployment, and that benchmarking and analysis with tools such as LWL’s Stratusphere can help you profile your infrastructure. Second, that we need to correctly size the back-end storage for any VDI deployment, and that this sizing is critical to its success. And third, that when it comes to hard drives, size really DOESN’T matter, at least not where IOPS are concerned! It is all about controller type and spindle speed.