This is part 2 of an ongoing series documenting my journey implementing Citrix VDI with XenApp. View part 1.
After selecting Citrix as our VDI solution of choice, we drafted an architecture with assistance from a Citrix partner to deliver VDI with application virtualization, providing the 'any time, any device' access that was key to our initiative's success. The pilot project targeted two different school sites – an elementary school and a junior high school – with a target of 100 concurrent desktops. XenApp would augment remote access capabilities and provide application virtualization where possible. We also wanted the pilot to be able to grow into a more robust environment of 2,000 seats or more.
Throughout the proof-of-concept phase we ran a variety of tests on both VMware vSphere 4.1 and Hyper-V Server 2008 R2. Both operated with relative ease, and management of each functioned as designed. Since they were functionally equivalent for our needs, Hyper-V was the clear choice due to the licensing costs of the VMware hypervisor (vCenter with vSphere or View licensing can get complicated and adds considerable cost for third-party VDI).
Our hardware standard is HP servers, so we ordered DL380 G7s equipped with 144GB of RAM, a local mirrored disk pair, and dual 81Q single-port Fibre Channel HBAs. Each server connects to dual Cisco MDS 9000 series switches and ultimately to a VNX 5500 block array with 22 300GB SAS disks, equipped with FAST Cache.
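As a rough sanity check on the storage design, the sketch below estimates usable capacity and front-end IOPS for the 22-disk pool. The RAID layout, hot-spare count, per-disk IOPS figure, and read/write mix are my assumptions for illustration, not the actual VNX configuration, and FAST Cache would improve effective IOPS well beyond this baseline.

```python
# Rough storage sizing sketch for the VNX 5500 pool.
# Assumptions (not from the actual design): RAID 5 (4+1) groups,
# 15K SAS disks at ~180 IOPS each, one hot spare, and a
# 70/30 read/write mix typical of steady-state VDI.

DISKS = 22
DISK_GB = 300
HOT_SPARES = 1            # assumed
RAID_GROUP = 5            # RAID 5 (4+1): one parity disk's worth per group
IOPS_PER_DISK = 180       # common 15K SAS rule of thumb
READ_RATIO, WRITE_RATIO = 0.7, 0.3
RAID5_WRITE_PENALTY = 4   # each front-end write costs 4 back-end I/Os

data_disks = DISKS - HOT_SPARES
usable_gb = data_disks * DISK_GB * (RAID_GROUP - 1) / RAID_GROUP
raw_iops = data_disks * IOPS_PER_DISK
# Front-end IOPS once the RAID 5 write penalty is applied to the write share
frontend_iops = raw_iops / (READ_RATIO + WRITE_RATIO * RAID5_WRITE_PENALTY)

print(f"Usable capacity: ~{usable_gb:.0f} GB")   # ~5040 GB
print(f"Front-end IOPS:  ~{frontend_iops:.0f}")  # ~1989
```

Under those assumptions, 100 concurrent desktops would see roughly 20 IOPS each before FAST Cache, which is comfortable for typical Windows 7 steady-state workloads, though boot and logon storms are where the cache earns its keep.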
Two clusters were created from the five servers. The two-node cluster hosts 2 PVS servers, 2 back-end SQL servers, 3 XenApp servers (one a dedicated data collector), and 2 Web Interface servers, while the three-node cluster hosts the Windows 7 desktops. Nearly every back-end component is redundant, with the exception of the top-of-rack switching.
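To sanity-check desktop density on the three-node cluster, here is a quick memory-headroom sketch. The per-host reserve for the Hyper-V parent partition and the decision to plan against an N-1 host failure are my assumptions, not figures from the design.

```python
# Memory headroom for the 3-node Windows 7 desktop cluster.
# Assumptions: ~8 GB reserved per host for the Hyper-V parent
# partition, and capacity planned to survive one host failure (N-1)
# so desktops can restart on the surviving nodes.

HOSTS = 3
RAM_PER_HOST_GB = 144
PARENT_OVERHEAD_GB = 8     # assumed parent-partition reserve
TARGET_DESKTOPS = 100

# Plan against N-1 hosts to tolerate a single node failure
usable_gb = (HOSTS - 1) * (RAM_PER_HOST_GB - PARENT_OVERHEAD_GB)
gb_per_desktop = usable_gb / TARGET_DESKTOPS

print(f"Usable RAM (N-1): {usable_gb} GB")        # 272 GB
print(f"RAM per desktop:  {gb_per_desktop} GB")   # 2.72 GB
```

Roughly 2.7 GB per desktop leaves comfortable headroom for 2 GB Windows 7 VMs even with a node down, and with all three nodes healthy the cluster has room to grow past the 100-desktop pilot target.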