I’ve mentioned this in a few of my blog posts over the years and here it is again: “Storage has been a major oversight by most companies deploying VMware Virtual Infrastructure.” News flash! This was confirmed at the recent VMworld 2009 …
What I’ve heard and read about VMworld 2009 is that it was really a storage conference. Every storage vendor and his brother was there displaying and promoting storage products to help build better VI environments. And, judging from many of the emails I get, most visitors were really just interested in finding out what they could do to fix their main storage issue: poor performance.
A year ago I had a talk with my VMware support engineer and explained that I thought a huge market was coming for solutions that improve storage performance. I also explained how most VMware deployments are like boiling a frog. If a frog is thrown into hot water it will jump out; if the water temperature is turned up slowly, the frog won’t realize it until it is too late. Likewise, if VMware sales representatives told every new customer they would need a new storage array within a year, nobody except those already planning to buy new storage would virtualize. But if nothing is said and VMs are built one at a time over a 6–12 month period on existing shared storage, nobody notices the performance degradation until the day the main business application database crashes because it shares the same SAN as VMware. And then storage, not VMware, gets the blame. I suspect many companies using VMware are at this point with their environments today.
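If you want to see how that slow boil sneaks up on you, here’s a back-of-the-envelope sketch in Python. Every number in it (VMs added per month, IOPS per VM, the array’s remaining headroom) is made up for illustration; plug in your own:

```python
# A rough "boiling frog" model: hypothetical numbers, not measurements.
# Assumes ~25 IOPS of average demand per VM and ~5,000 IOPS of headroom
# left on the shared array -- both purely illustrative.

VMS_ADDED_PER_MONTH = 15
IOPS_PER_VM = 25
ARRAY_IOPS_HEADROOM = 5000     # what's left after existing workloads

for month in range(1, 13):
    vms = VMS_ADDED_PER_MONTH * month
    demand = vms * IOPS_PER_VM
    pct = 100 * demand / ARRAY_IOPS_HEADROOM
    flag = "  <-- latency climbing, frog is boiling" if pct > 80 else ""
    print(f"month {month:2d}: {vms:3d} VMs, ~{demand:5d} IOPS "
          f"({pct:5.1f}% of headroom){flag}")
```

No single month looks alarming, which is exactly the point: by the time the flag shows up, the database on that SAN is already hurting.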
You have two options for your problem: 1) buy more storage, or 2) re-carve your existing storage. My gut tells me most SAN admins reading this post will push back on option 2, because it would take a lot of work and they most likely don’t believe it will help. For those who disagree simply because they know better, I suggest getting re-educated on carving a SAN for VMware.
No disrespect intended, but it’s a hard one to digest. Think about it: you’re used to carving one 100GB LUN for one server with many users and dedicated HBA ports, right? Now consider that for VMware you are carving 8–16 LUNs of 300GB, 400GB, or 500GB for 8–16 ESX hosts running 160–240 virtual servers, all accessing each LUN through the same HBA port, or path, and all at the same time. If I was lucky enough to get your attention, I won’t insult you by trying to explain how each SAN is different, but I will recommend calling your VMware and SAN support and speaking only with someone who works on storage for VMware.
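To put that contention in plain numbers, here’s a quick sketch using the figures above. The per-LUN queue depth of 32 is my assumption (a common ESX-era default), not something read off your array:

```python
# Back-of-the-envelope contention math for the scenario above.
# Host/LUN/VM counts come straight from the paragraph; the per-LUN
# queue depth of 32 is an assumed, typical ESX default of the time.

esx_hosts = 16
shared_luns = 16
virtual_servers = 240
lun_queue_depth = 32

vms_per_lun = virtual_servers / shared_luns
print(f"{virtual_servers} VMs across {shared_luns} LUNs on "
      f"{esx_hosts} hosts -> {vms_per_lun:.0f} VMs per LUN")

# Old model: one server owned the whole queue. VMware model: if every
# VM on a LUN issues I/O at once, each gets a sliver of that queue.
print("old model: 32 outstanding I/Os for the lone server")
print(f"VMware model: ~{lun_queue_depth / vms_per_lun:.1f} outstanding "
      f"I/Os per VM before requests queue in the kernel")
```

Fifteen VMs fighting over one LUN’s queue is a very different world from the one-server-per-LUN carving most SAN admins grew up with.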
Furthermore, for both options mentioned, many storage vendors can do a form of what is known as wide-striping (an HDS term), and it usually requires a special license that will cost you. HDS, 3PAR, EMC, and HP can all put hundreds of drives in a single disk pool (RAID/parity group), that is, with the properly licensed features. NetApp will have a similar feature in ONTAP 8.4, from what I’ve been told.
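Here’s a rough sketch of what wide-striping buys you. The 180 IOPS per spindle and the drive counts are hypothetical round numbers, not vendor specs:

```python
# Why wide-striping helps: the same LUN backed by 8 spindles versus
# a pool of 192. IOPS_PER_DRIVE of 180 is an assumed figure for a
# 15K FC drive; drive counts are made up for illustration.

IOPS_PER_DRIVE = 180

layouts = [
    ("8-drive RAID group", 8),        # traditional dedicated group
    ("192-drive wide-striped pool", 192),
]

for name, drives in layouts:
    print(f"{name}: ~{drives * IOPS_PER_DRIVE:,} "
          f"aggregate IOPS behind the LUN")
```

Same LUN, same capacity, but an order of magnitude more spindles servicing it, which is why vendors can charge for the license.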
I hope this has been helpful for someone trying to understand why the frog keeps dying. So, the next time you have a VM that starts croaking, I’d have a look at storage option 2.