We have 4 vSphere clusters, each with their own storage.
1) 4-host 6.0 cluster, with 11TB of all-flash Tintri storage over NFS via 10Gb SFP+ copper.
2) 3-host 4.1 cluster, with 27TB of storage over 2 EqualLogic crates, via 1Gb copper and 10Gb SFP+ copper respectively.
3) 4-host 4.1 cluster, with 8TB of storage over 2 Infortrend crates via Fibre Channel.
4) 4-host 4.1 cluster, with 14TB of storage over 2 HP P2000s via Fibre Channel.
The plan is to add a 5th host to cluster 1, along with 50TB of storage (probably NFS over 10Gb SFP+ copper), then migrate the VMs from the other 3 clusters onto those hosts and that storage.
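For what it's worth, here's the back-of-the-envelope capacity math (figures are the raw datastore sizes listed above, not actual usage):

```python
# Rough capacity check for the consolidation plan.
# These are raw array sizes from the cluster list above, not measured
# usage -- real headroom depends on thin provisioning, snapshots, etc.
source_tb = {
    "cluster 2 (EqualLogic)": 27,
    "cluster 3 (Infortrend)": 8,
    "cluster 4 (HP P2000)":   14,
}
target_tb = 50  # planned new NFS storage on cluster 1

total = sum(source_tb.values())
print(f"Total to migrate: {total}TB into {target_tb}TB "
      f"({target_tb - total}TB headroom)")
```

So if those crates are anywhere close to full, 50TB only leaves about 1TB of slack.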
As for migrating the VMs, I think the following is our easiest option, but I'm hoping for some opinions/suggestions:
- Ensure all VM Port Groups from clusters 2,3 and 4 are trunked and replicated on cluster 1's hosts.
- Open firewall rules to allow Cluster 1's hosts to see and mount Cluster 2's storage.
- Power off all VMs on cluster 2 and remove them from Cluster 2's inventory.
- Add them to Cluster 1's inventory.
- Storage vMotion them to new datastores provisioned on the additional 50TB of storage.
- Add a Fibre Channel card to the new host 5 in cluster 1.
- Zone that card into cluster 3's storage fabric and present the disks, so it can see the Infortrend storage.
- Power off all VMs on cluster 3 and remove them from Cluster 3's inventory.
- Add them to Cluster 1's inventory.
- Storage vMotion them to the new Datastores
Repeat for cluster 4.
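To make sure I have the ordering straight, the per-cluster sequence above boils down to this (just an illustrative sketch; the cluster names and `needs_fc_zoning` flag are stand-ins for the steps described above, not anything VMware-specific):

```python
def migration_runbook(cluster, needs_fc_zoning=False):
    """Ordered steps for moving one source cluster's VMs to cluster 1.

    Mirrors the plan above: cold migration (power off, re-register on
    cluster 1's hosts, then Storage vMotion off the old array).
    """
    steps = [
        f"replicate {cluster}'s VM port groups on cluster 1's hosts",
    ]
    if needs_fc_zoning:
        # Clusters 3 and 4 are Fibre Channel: cluster 1's new host 5
        # has to be zoned into their fabric before it can see the LUNs.
        steps += [
            "add an FC HBA to cluster 1's host 5",
            f"zone host 5 into {cluster}'s fabric and present the disks",
        ]
    else:
        # Cluster 2 is IP storage: reachability is a firewall change.
        steps += [
            f"open firewall rules so cluster 1's hosts can mount {cluster}'s storage",
        ]
    steps += [
        f"power off all VMs on {cluster}",
        f"remove the VMs from {cluster}'s inventory",
        "register the VMs in cluster 1's inventory",
        "Storage vMotion the VMs to the new 50TB datastores",
    ]
    return steps

for cluster in ("cluster 2", "cluster 3", "cluster 4"):
    print(f"\n== {cluster} ==")
    for i, step in enumerate(migration_runbook(cluster, cluster != "cluster 2"), 1):
        print(f"{i}. {step}")
```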
We can afford the downtime, but is this the easiest way?
Thanks in advance!
Ben