On Tuesday, 6/4/13, Dell unveiled its new PowerEdge VRTX converged system during the opening keynote at the 2013 Dell Enterprise Forum in San Jose, CA. The VRTX comes in a tower or rack-mount form factor occupying 5RU. Inside, you will find slots for 4 of Dell's M520 or M620 blades, the same servers found in their M1000e blade chassis. An onboard CMC provides iDRAC and remote management for the chassis. Each blade maps up to 4 GigE ports to an integrated switch, and there are 8 additional PCIe slots that can be mapped directly to the blades for more IO and accessory options, such as graphics adapters. In addition to the shared PCIe slots, there is a shared PERC controller (SPERC) that controls access to up to 25 integrated 2.5-inch or 12 3.5-inch drives. Both Howard Marks and Kevin Houston have done great posts on the details and specifications of the VRTX.
Being a vSphere-centric kind of guy, my first thought about the VRTX was that it could replace the commonly deployed "3-2-1" vSphere install (3 hosts + 2 iSCSI switches + 1 EqualLogic array) for SMB and ROBO scenarios. If you could have up to 4 blades, integrated networking and storage, in a whisper-quiet (and it is VERY quiet while running) box with integrated remote management and monitoring, why wouldn't you deploy a VRTX instead of 6 pieces of hardware?
After several inquiries about the shared PCIe slots and storage, I gathered the following answers from the Dell engineers onsite:
Networking: While there are only 4 onboard GigE ports per blade, you can easily add a 10GbE, FC, or CNA card into the PCIe slots and map the card directly to a blade. Add 4 x dual-port CNAs, map them to the blades, attach them to an external switch, and carve up your network as needed.
Storage: This became a challenge. Apparently all 25 disks on the SPERC can either be mapped directly to individual blades or joined into RAID groups, with virtual disks carved out of those RAID groups and mapped to one or multiple blades. In practice, it is similar to carving up storage in an MD1200 shelf. Where it became challenging was when I asked if there was a way to provide a shared disk for a vSphere installation. I asked several (more than 10) Dell engineers onsite at the #DellEF, and never got a definitive answer. Apparently nobody had thought to qualify the VRTX as a vSphere platform. The answer provided was that the SPERC could create a shared disk and present it to multiple blades via a common SCSI bus. Not FC, not iSCSI, not NFS. The simple question of "Can you install a vSphere cluster on the VRTX?" was answered with "I would imagine it would be possible."
In conclusion, I was very impressed with Dell's new converged infrastructure foray, the VRTX. The ability to put 4 servers, networking, and storage into a single 5U box will shake up the industry, especially if the price point is as competitive as Dell's traditionally aggressive pricing. However, the storage subsystem is not much more than a DAS shelf for the blades. I think it was a great first move, but I would have loved to see an integrated MD3200 or EqualLogic controller to give the storage a little bit of intelligence. There are small-form-factor EqualLogic controllers currently in use on the PS-M4110 array. As far as installing vSphere on the VRTX, I'm not sure if it will work right out of the box. However, with the use of a VSA such as Nexenta to control the storage and make it available to all blades, this could be a huge success in the SMB/ROBO virtualization market. Well played, Dell. Well played indeed.
For more information on the Dell VRTX, head to the Dell TechCenter Blog, where Peter Tsai has aggregated the latest information.