
VMWorld 2012 – performance, new features and best practices

Just rushed into hall B2 – going to go through performance enhancements in vSphere 5.1. Ignore typos please – typing on an iPad, and autocorrect is not the most, uhhh, tech-friendly tool…

vSphere 5.1 targets:
Big data
Low latency
Monster apps
Large scale deployments
View and vCloud Director environments.

Big Data:
Monster VMs mean:
64 vCPUs (who does this with a VM?)
1TB RAM (again – surely this would justify a physical box?)
VMware has managed to get more than 1 million IOPS out of a single VM (cool and ridiculous)

Big new addition – exposure of new CPU counters in new architectures like Ivy Bridge, Sandy Bridge and Piledriver

Low latency:
New dropdown available to label VMs as latency sensitive (the VM behaves accordingly) – use with caution… and do NOT let the business know about this. It prioritises access to resources, but if overused, loses its effectiveness.

Platform recommendations:
Size VMs correctly
Use resource settings only if needed
Avoid affinity where possible
Over-provisioning is fine – great, even!
Hyper-threading is GREAT – use it.
Double-check BIOS and power management settings

Reduced memory overhead:
vSphere 5.1 allows a system swap file to be created, reducing the memory reserved for host processes – saving about 1GB per host.
Can be configured from the web client under system volumes – edit system swap settings.
Overcommit to about 20% as a guideline. Make sure to use ballooning, transparent page sharing, memory compression, host cache swapping and ESX- or guest-level swapping.
When you start seeing swap usage go up – reduce the overcommit.
Sizing VMs – use reservations as needed and try to keep memory within the NUMA domain.
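To make the ~20% guideline concrete, here's a tiny Python sketch of my own (not a VMware tool – the function names and the swap-rate rule are made up for illustration) that flags when an overcommitted host starts swapping:

```python
# Illustrative sketch: flag when guest swap activity suggests the ~20%
# overcommit guideline has been pushed too far on a host.

def overcommit_ratio(vm_memory_mb, host_physical_mb):
    """Total configured VM memory divided by host physical memory."""
    return sum(vm_memory_mb) / host_physical_mb

def should_reduce_overcommit(ratio, swap_in_rate_kbps):
    """Overcommit beyond ~20% is only comfortable while swapping stays at zero."""
    return ratio > 1.20 and swap_in_rate_kbps > 0

# Example: 10 VMs of 8 GB on a 64 GB host = 1.25x overcommit
ratio = overcommit_ratio([8192] * 10, 65536)
print(round(ratio, 2))                                          # 1.25
print(should_reduce_overcommit(ratio, swap_in_rate_kbps=512))   # True
```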

Memory – consumed vs active:
Consumed – physical memory used by the VM (a good measure of actual usage at a point in time)
Active – the amount of memory recently touched
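A quick illustrative sketch of why you'd right-size off active rather than consumed – the headroom multiplier below is my own made-up rule, not VMware guidance:

```python
# Illustrative only: why sizing off "consumed" can oversize a VM.
# Consumed = physical memory mapped to the VM right now; active = the slice
# of it that was recently touched.

def rightsize_hint_mb(active_mb, headroom=1.25):
    """Suggested allocation based on active memory plus some headroom."""
    return active_mb * headroom

consumed_mb, active_mb = 16384, 4096
print(f"consumed={consumed_mb} MB, active={active_mb} MB")
print(f"size hint: {rightsize_hint_mb(active_mb):.0f} MB")  # 5120 MB
```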

Storage IO control enhanced in vSphere 5.1:
vSphere 5.1 can use percentage-based thresholds instead of absolute latency values – this means better throughput on slow storage, as well as lower latency on low-latency storage.
SIOC monitors and controls the full storage stack latency
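The percentage-based idea can be sketched like this (the numbers are hypothetical and SIOC's real mechanics are more involved – this is just the arithmetic of deriving a threshold from the datastore itself rather than using one fixed value everywhere):

```python
# Sketch: instead of one fixed latency number (e.g. 30 ms) for every
# datastore, derive the threshold from what each datastore can actually do.

def sioc_threshold_ms(peak_latency_ms, percentage=90):
    """Latency threshold as a percentage of the latency observed at peak
    throughput for this particular datastore (illustrative numbers)."""
    return peak_latency_ms * percentage / 100

print(sioc_threshold_ms(10))   # fast SSD-backed datastore -> 9.0 ms
print(sioc_threshold_ms(40))   # slower SATA datastore     -> 36.0 ms
```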

Storage DRS enhancements:
Interoperability with vCloud Director – including linked clones (with vCloud only)
Storage DRS correlation detector (so we won’t automatically move storage between data stores that are actually hosted on the same spindles – which would have no benefit)
Can be used with Auto-Tiering – but you would need to follow the storage vendor’s best practice.

Storage performance:
Now supports 16Gb FC – which has a lower CPU cost / better efficiency.

Adapters:
Jumbo frames best-case throughput improved by:
hw iSCSI: read 88%, write 20%
sw iSCSI: read 11%, write 40%
NFS: read 9%, write 32%
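Back-of-envelope on what those percentages mean in practice – the baseline throughput below is invented, only the improvement percentages come from the session:

```python
# Quick sanity calc on the quoted best-case gains. Baselines are
# hypothetical; only the (read %, write %) improvements are from the talk.
gains = {
    "hw iSCSI": (88, 20),
    "sw iSCSI": (11, 40),
    "NFS":      (9, 32),
}

def improved(baseline_mbps, pct):
    """Throughput after applying a percentage improvement."""
    return baseline_mbps * (1 + pct / 100)

# e.g. a hypothetical 400 MB/s hw iSCSI read baseline:
print(improved(400, gains["hw iSCSI"][0]))   # ~752 MB/s
```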

Storage best practices:
Size accordingly and keep latency below 30ms
Snapshots are not free!
Use sioc and sdrs
Update storage firmware
Remember the old tricks (multipathing, block size, alignment, paravirtualised SCSI, etc.)

Networking virtualisation:
New features – VDS snapshots (snapshot your switch's config), auto-rollback of configs, port mirroring and NetFlow enhancements

Use VDS – Network IO Control (e.g. don't let a vMotion kill the NIC for everyone else)

New feature: SR-IOV – allows one NIC to be presented as multiple separate logical adapters. This lets multiple VMs use the physical NIC directly – reducing latency.

VXLAN – new feature
Deploy VMs where resources are available, then create a gigantic layer-2 network, making access 'local' – possibly a great tool for getting maximum use out of geographically dispersed vSphere environments that run business hours only – e.g. NY / London offices

Networking best practice:
Be mindful of converged networks
Use distributed virtual switches

VMotion enhancements:
Shared-nothing migration – no shared storage required, and still able to migrate host and storage at the same time (cool)
Parallel storage vMotion – say we do a storage vMotion of a VM with 4 disks, possibly separated by affinity etc. This allows up to 4 VMDKs to be copied at the SAME time (previously, copies were sequential) – there is only a benefit when the VMDKs are moving from different datastores, to different datastores.
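Purely as a mental model – this is not ESXi code, and the datastore names are invented – the parallel-copy idea looks something like this:

```python
# Toy sketch of the parallel-copy idea using threads. Disks moving between
# distinct source/destination datastore pairs can be copied concurrently.
from concurrent.futures import ThreadPoolExecutor

def copy_vmdk(disk):
    """Stand-in for a single VMDK copy; returns a description of the move."""
    return f"{disk['name']}: {disk['src_ds']} -> {disk['dst_ds']}"

disks = [
    {"name": "disk1.vmdk", "src_ds": "ds-a", "dst_ds": "ds-e"},
    {"name": "disk2.vmdk", "src_ds": "ds-b", "dst_ds": "ds-f"},
    {"name": "disk3.vmdk", "src_ds": "ds-c", "dst_ds": "ds-g"},
    {"name": "disk4.vmdk", "src_ds": "ds-d", "dst_ds": "ds-h"},
]

# Up to 4 copies in flight at once instead of one after another.
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(copy_vmdk, disks):
        print(result)
```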

vMotion best practices:
Use the latest version of VMFS (5.x)
Keep vmknics on the same subnet
Separate vmknics across multiple vmnics – vMotion will load-balance the traffic

vCenter enhancements:
Web client WITH SSO
Web client supports 300 concurrent connections
Can collect up to 80 million stats per hour – so max logging (level 4) for an environment of 1000 hosts, with 2000 datastores and 15000 VMs!
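Quick maths on the quoted figure – the inventory numbers are from the session, the arithmetic is mine:

```python
# Back-of-envelope on 80 million stats/hour across the stated inventory.
stats_per_hour = 80_000_000
objects = 1000 + 2000 + 15_000          # hosts + datastores + VMs

print(stats_per_hour // 3600)           # 22222 stats per second
print(stats_per_hour // objects)        # 4444 stats per object per hour
```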

vCenter best practices:
Size correctly
Size the db correctly
Keep an eye on logging levels, DB performance and network connectivity between VC, the DB hosts, etc.
VM or physical is OK
32 hosts per cluster
Use resource pools and affinity rules in clusters as needed.
