vSphere 5 – the licensing bug . . .

So I have been rather quiet this week – which is strange, given that vSphere 5 has been announced (of course, the worst-kept secret ever).
Much like everyone else, though, I have been very surprised by the licensing announcement and not in the least bit surprised by the backlash.

Absolutely, I 100% agree that VMware may well have opened the door here for other hypervisors – though not in all cases.
There are some major restrictions in this licensing design.
Firstly, 8GB of RAM on the free version? Hmm, not great really – that's not enough capacity to test drive anything in a lab environment. We all know that pretty much any server you purchase anywhere (even for small businesses) has at least 8GB of RAM in it. So if I am considering consolidating my 4/5 server environment, I cannot even use a free license to build a replica of it (yes, I am sure there will still be free trials and so on, but those are time-limited).
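
To put rough numbers on that (the per-server RAM below is an assumption for illustration, not a real inventory):

    # Rough sketch (Python) - assumed figures, not a real inventory.
    servers = 5                 # the small estate being consolidated
    ram_per_server_gb = 8       # assume each box has the bare minimum
    needed_gb = servers * ram_per_server_gb   # 40GB of RAM to replicate the estate
    free_limit_gb = 8                         # vRAM allowance on the free hypervisor
    print(needed_gb, free_limit_gb)           # 40 vs 8 - the free tier covers one server's worth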

Next, we are deep into a desktop virtualization project – we have been looking at some pretty clever tools (FusionIO / Atlantis ILIO and so on) – but this new licensing change makes ESXi VERY unappealing as the hypervisor. Our desktops typically run low on CPU but, due to the number of VMs we create, consume a fair amount of memory. The Dell 910s we have bought have been very appealing as hardware for this, as we can cram a bunch of RAM into them and the CPUs are sufficient, but as the new license only allocates 48GB of vRAM for each of our CPU licenses, it appears that this no longer works.
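
As a back-of-envelope illustration (the host spec below is an assumption, not our actual build sheet):

    # Sketch of the vRAM maths for a memory-heavy VDI host - assumed spec.
    VRAM_PER_LICENSE_GB = 48    # per-CPU-license entitlement quoted above
    sockets = 4                 # e.g. a four-socket Dell 910
    host_ram_gb = 512           # hypothetical build, crammed with RAM

    entitled_gb = sockets * VRAM_PER_LICENSE_GB                 # 192GB covered
    shortfall_gb = max(0, host_ram_gb - entitled_gb)            # 320GB uncovered
    extra_licenses = -(-shortfall_gb // VRAM_PER_LICENSE_GB)    # ceiling division = 7
    print(entitled_gb, extra_licenses)   # 192GB entitled; 7 extra licenses on top of 4

So a box you used to license with four CPU licenses would need eleven of them just to use all of its memory.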

To save myself panicking too much, I decided to look into the current licensing in my environment and what happens when (or if) we move to vSphere 5. I was horrified to learn that we currently have room for about 100% capacity expansion without putting any clusters at risk (a nice position to be in) – but once the license format changes, we can only use up 50% of that capacity. Yes, I know this means I have less chance of over-committing clusters, and that I will finally be able to justify N+2 (or even N+3) clusters as our licensing won't allow us to go any further, but I must admit I am really disappointed with the way in which this has come forward.
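
Roughly how that shakes out (the cluster figures here are made up to show the shape of the problem, not our real capacity):

    # Illustrative cluster figures only - assumptions, not our real capacity.
    physical_ram_gb = 1024          # RAM installed across the cluster
    consumed_vram_gb = 512          # vRAM allocated to powered-on VMs today
    licenses = 16                   # e.g. eight dual-socket hosts
    vram_pool_gb = licenses * 48    # pooled entitlement under the new model: 768GB

    old_headroom = physical_ram_gb - consumed_vram_gb   # 512GB free -> 100% growth
    new_headroom = vram_pool_gb - consumed_vram_gb      # 256GB free -> only 50% growth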

I can appreciate that VMware have spent a lot of time and money developing the product, and that when this all started a 12-to-1 consolidation ratio was considered acceptable. I realise we now consolidate at 40 or 50 to 1 without any issues, and that this is largely down to progress in CPUs. So I know that from VMware's point of view, licensing per socket becomes less and less appealing as CPUs gain more cores, more threads and so on, but much like every other blog post I have read, I really think they are doing this too early and opening the door to the competition.

It takes huge balls to do this – they are taking a huge risk (and I am sure they know it). VMware have the best product (FACT) – but they are exposing themselves to losing business from small companies that will see reduced consolidation ratios, from SMEs that simply can't afford to upgrade and re-license environments that are probably already running at their memory limits, and from large companies that have the time to review and revise licensing (especially with products like Hyper-V being licensed under enterprise agreements).

I guess that, going forward, those HP blades that can take 1TB of RAM in a single blade won't be very appealing as ESX hosts, and scaling out to more, lower-spec ESX hosts will be the way forward – so expect bigger clusters and lower consolidation ratios (not all bad, really, as it means ESX maintenance impacts fewer hosts) – but of course this also means increased costs: more network points, more power points, more rack space and so on and so forth.
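
The scale-up case collapses pretty quickly under the new maths (blade spec assumed for illustration):

    # Why a 1TB blade stops making sense - assumed spec, same 48GB entitlement.
    blade_ram_gb = 1024
    sockets = 2
    licenses_by_socket = sockets                  # old model: 2 licenses
    licenses_by_vram = -(-blade_ram_gb // 48)     # new model: ceil(1024/48) = 22
    print(licenses_by_socket, licenses_by_vram)   # 2 vs 22 licenses for the same blade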

In short – I don’t like it :-(

It is going to be interesting to see which way this goes – I’ll grab some popcorn so long.
