Arizona State University Case Study @ Citrix Synergy – Jack Hsu
Details
- Largest public research university in the United States under a single administration:
- 22 colleges, >70k students,
- 3 data centers on the Tempe campus and one on each other campus…6 in total.
- IT Services:
- Infrastructure and academic services (e.g. eLearning),
- Back office applications (e.g. SharePoint),
- Servers on demand for colleges and departments.
- Standard three-tier environment…development, QA, and production.
University Technology Office
- In 2007 developed five year plan:
- Most important piece was segmenting core (research and teaching) vs context.
- IT is critical to the university but does not itself advance the university:
- Understand that IT is not the core of the university; its purpose is research and teaching,
- Concept of One:
- Goal is to bring everything together under one platform to make everything simple and cost effective;
- Next step is Concept of Zero…driving further that IT is not the core business of the university.
Data Center Goals
- Standardization…Concept of One:
- Switches – Cisco,
- Servers – Dell,
- Storage – NetApp (NFS/iSCSI), no Fibre Channel, too costly,
- Switch dedicated to storage,
- Avoid need for separate storage expertise,
- Operating system – Red Hat Linux,
- Hypervisor - XenServer.
- Virtualization:
- Pilot started Summer 2007 with both VMware and XenServer,
- Finished pilot and kicked off the project in early 2009, with target completion by end of 2011,
- Timed project for three years to align with server refresh cycle of hardware,
- Goal is 75% virtualized by end of project, probably trending toward 85% at this point:
- This includes all servers and appliances (e.g. Check Point firewall, NetScaler, NetApp),
- Scope – virtualize all physical servers; if a server can't be virtualized, consolidate it,
- In 2009 decided to use XenServer over VMware because:
- “this is our context, not our core. Need to be concerned about dollars and function.”
- Don’t want to pay for lack of use, but when needed the product needs to be there.
- Open source…prefer this in university environment.
- Cost…study indicated XenServer 1/3rd the cost of VMware.
- Already a XenApp shop.
- Keeping an eye on the marketplace:
- Spend the money based on what is needed today, don’t spend money to buy the future.
- Will verify periodically.
- “I am ready to go, if the cloud is ready.”
- What is not virtualized?
- Applications that don’t work in XenServer…Adobe Connect.
- Mission-critical applications and those that vendor won’t support in XenServer:
- Oracle supported on Oracle VM and Amazon, but not XenServer.
- Applications that already have redundancy and DR built in:
- E.g. Citrix Provisioning Server.
- Exchange 2010 Mailbox…will move this to the virtual environment.
- Consolidation:
- Reduce cost and shift savings to emphasize investments in the core (research and teaching).
Lessons Learned
- People, People, People!
- People needed to build it out, people you need to convince to move to the virtual environment, and the people needed to fix any problems.
- P2V and V2V don’t always work.
- Solid storage and network management required.
- Ethernet is still challenging at the infrastructure layer, though this is expected to change with newer technology.
- NFS tuning.
- Disk alignment for releases prior to Windows Server 2008 R2…these issues are gone with the new release.
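The disk-alignment issue comes down to simple arithmetic: older Windows releases start the first partition at sector 63, a byte offset that does not fall on the storage array's block boundary, so every guest I/O can straddle two back-end blocks. A minimal sketch of the check, assuming a 4 KiB storage block size (typical of NetApp WAFL, not a figure from the talk):

```python
# Sketch: check whether a guest partition's starting offset is aligned
# to the storage array's block size. Pre-Windows Server 2008 R2 guests
# default to a 63-sector offset, which is misaligned; 2008 R2 and later
# default to a 1 MiB offset. The 4 KiB block size is an assumption.

SECTOR = 512            # bytes per logical sector
WAFL_BLOCK = 4096       # assumed storage block size in bytes

def is_aligned(start_sector: int, block_size: int = WAFL_BLOCK) -> bool:
    """True if the partition's byte offset falls on a block boundary."""
    return (start_sector * SECTOR) % block_size == 0

print(is_aligned(63))    # legacy Windows default -> False (misaligned)
print(is_aligned(2048))  # 1 MiB offset of newer releases -> True
```

Misaligned guests can be fixed at P2V/V2V time by recreating the partition at an aligned offset, which is one reason those conversions "don't always work" cleanly.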
Results
- Power consumption dropped 40%.
- 69% of total servers virtualized:
- Still have some VMware in virtual environment.
- 103 XenServer hosts, 850 VMs across these servers.
- Total physical servers reduced from 692 to 375.
- More responsive with fewer people:
- Remove friction between IT and users.
- Taking all IT away from the desk and into the data center:
- Improved security and power consumption.
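The figures above are internally consistent, which a quick back-of-the-envelope check confirms (all inputs are numbers quoted in the session; only the arithmetic is added here):

```python
# Sketch: sanity-check the consolidation figures quoted in the talk.
total_servers   = 1200   # physical + virtual
physical_now    = 375    # down from 692 physical servers
physical_before = 692
vms             = 850
xen_hosts       = 103

virtual = total_servers - physical_now
print(f"virtualized: {virtual / total_servers:.0%}")                 # 69%
print(f"physical reduction: {1 - physical_now / physical_before:.0%}")  # 46%
print(f"density: {vms / xen_hosts:.1f} VMs per host")                # 8.3
```

The ~8:1 VM density matches the ratio cited later in the Q&A.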
What's Next?
- Have 1,200 servers total physical and virtual:
- 69% virtualized today, would like to get to 85%.
- Building DR site at West Campus, primary site at Tempe campus today:
- Want to build up an entire DR site for the Student Health environment to meet HIPAA requirements:
- 20 miles away from Tempe,
- Have metro cluster across all the campuses, 10Gb pipe,
- Will use heterogeneous pools,
- Going to use leftover hardware on the west campus,
- Will use SnapVault from NetApp to move data from Tempe to West Campus:
- Lower cost for server and storage.
- Evaluate private cloud:
- Looking at how to automate the whole virtual environment; also building up a chargeback system.
- Self service – already have this in place for faculty/staff across servers and storage.
Q&A
- What is motivating move to cloud?
- The Concept of Zero: IT is not core business, faculty and students don’t care about IT.
- Trying to become more focused on the research and not tied to day in/day out operations.
- Why not a private cloud today?
- Not yet doing automation and chargeback.
- Need to be able to measure and charge by use, working to get to this.
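The "measure and charge by use" goal reduces to metering each resource and multiplying by a per-unit rate. A minimal sketch, where the rates and the usage record are hypothetical examples and not figures from the talk:

```python
# Sketch: a minimal usage-based chargeback calculation of the kind
# described above. All rates and usage numbers are made-up examples.

RATES = {
    "vcpu_hours": 0.02,        # $ per vCPU-hour (hypothetical)
    "ram_gb_hours": 0.01,      # $ per GB-hour of RAM (hypothetical)
    "storage_gb_months": 0.10, # $ per GB-month on storage (hypothetical)
}

def monthly_charge(usage: dict) -> float:
    """Sum metered usage times the per-unit rate for each resource."""
    return sum(RATES[resource] * amount for resource, amount in usage.items())

dept_usage = {
    "vcpu_hours": 2 * 720,      # 2 vCPUs for a 720-hour month
    "ram_gb_hours": 8 * 720,    # 8 GB of RAM
    "storage_gb_months": 100,   # 100 GB of storage
}
print(f"${monthly_charge(dept_usage):.2f}")  # $96.40
```

The hard part in practice is the metering itself (collecting per-VM usage from the hypervisor), not the billing arithmetic.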
- Storage:
- Single vendor NetApp shop.
- Server density aspirations:
- Roughly 8:1 today; don't believe in fixed ratios, memory is driving the ratio.
- Also need to consider physical hardware as a single point of failure (SPF): is there extra capacity to handle a host going down?
- Have standby hardware to support failover.
- Want to make sure the environment is HA and totally transparent to the customer.
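The memory-driven ratio with failover headroom can be sketched as a simple capacity calculation: reserve one host's worth of RAM (N+1) so the pool survives a host failure, then divide what remains by the average VM footprint. Host and VM sizes below are hypothetical examples, not figures from the talk:

```python
# Sketch: memory-driven consolidation ratio with N+1 failover headroom,
# as described in the Q&A. Host count, host RAM, and VM RAM are
# hypothetical examples.

def max_vms(hosts: int, host_ram_gb: int, vm_ram_gb: float,
            spare_hosts: int = 1) -> int:
    """VMs the pool can hold while still surviving `spare_hosts` failures."""
    usable_ram = (hosts - spare_hosts) * host_ram_gb
    return int(usable_ram // vm_ram_gb)

# e.g. a 10-host pool of 96 GB hosts, 8 GB average per VM, one host spare:
pool_capacity = max_vms(hosts=10, host_ram_gb=96, vm_ram_gb=8)
print(pool_capacity, pool_capacity / 10)  # 108 VMs, ~10.8:1 over 10 hosts
```

This is why the ratio is "driven by memory": CPU rarely binds first, and the spare-host term is what keeps a host failure transparent to the customer.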
- People management:
- First thing you have to do is get buy-in; the biggest challenge in deployment is people buying in and believing in the product.
- How to get buy-in? Bring them to a conference, set up a reward.