Windows Server 2016 is getting to its final RTM state within the next several months. After that we can start using Windows Server 2016 Storage Spaces Direct (S2D) in production environments and begin building hyper-converged stacks.
I’m not going to explain how Storage Spaces Direct works; this is just a blog about the setup. If you want some more info about S2D, check out this link for an overview, and for the more technical guys or girls, look here.
I have spent some lab time setting up and using Storage Spaces Direct (S2D) with hyper-converged hosts, and started this blog to share some info.
Beware that the info below is lab stuff. I’ve taken some shortcuts to be able to set up S2D on VMs with virtual disks. So do not use these commands for your own setup, unless you are also running tests in VMs.
General Info
I have 2 VMs running on a Windows Server 2016 TP5 physical box. The VMs are enabled for nested virtualization to make sure I can start VMs inside the VMs. Both VMs have 10 disks attached to them. So no shared VHDX; just 10 .vhdx files attached to each VM, which makes a total of 20 .vhdx files. The VMs are also running Windows Server 2016 TP5.
Create and attach disks
To create and attach the disks I used some PowerShell commands. I have 2 “hosts”: HV-01 and HV-02.
```powershell
$hostname = "HV-01"
Stop-VM -Name $hostname

# Create 10 dynamic 40 GB VHDX files for this host
$disknumbers = @(0..9 | ForEach-Object { "SofsDisk0$_" })
foreach ($number in $disknumbers) {
    $vhdpath = "D:\VMs\$hostname\$hostname-$number.vhdx"
    $vhdsize = 40GB
    New-VHD -Path $vhdpath -Dynamic -SizeBytes $vhdsize
}

# Attach all new disks to a second SCSI controller
$vhds  = Get-VHD -Path "D:\VMs\$hostname\$hostname-S*.vhdx"
$paths = $vhds.Path
Add-VMScsiController -VMName $hostname
foreach ($path in $paths) {
    Add-VMHardDiskDrive -VMName $hostname -ControllerType SCSI `
        -ControllerNumber 1 -Path $path
}

Start-VM -Name $hostname
```
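After running the script, it’s worth a quick sanity check that all 10 disks really landed on the new controller. A minimal sketch, run on the physical box (the VM name comes from the script above):

```powershell
# Count the disks attached to the second SCSI controller of HV-01
$disks = Get-VMHardDiskDrive -VMName "HV-01" -ControllerType SCSI -ControllerNumber 1
$disks.Count   # should report 10 if all VHDX files were attached
$disks.Path    # lists the D:\VMs\HV-01\HV-01-SofsDisk0*.vhdx files
```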
I did the same for the second host, HV-02. I now have 2 hosts, each with 10 disks on a new SCSI controller. Next, install the required roles and features on both hosts.
```powershell
Add-WindowsFeature -Name Hyper-V, File-Services, Failover-Clustering -IncludeManagementTools
```
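Since the features have to go on both hosts, you can also push the install remotely instead of logging in to each node. A sketch, assuming WinRM/PowerShell remoting is enabled on HV-01 and HV-02:

```powershell
# Install the roles on both lab nodes in one go; each node needs a
# reboot afterwards before the Hyper-V role is fully active
Invoke-Command -ComputerName HV-01, HV-02 -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V, File-Services, Failover-Clustering -IncludeManagementTools
}
Restart-Computer -ComputerName HV-01, HV-02 -Wait -For PowerShell -Force
```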
The next thing to do is set up a cluster.
```powershell
New-Cluster -Name HV-S2D -Node HV-01,HV-02 -StaticAddress 10.10.10.100 -NoStorage
```
Make sure you add the -NoStorage parameter; otherwise the cluster setup will try to add the local disks, which will fail because the disks are not available on both hosts. Now we have 2 nodes in a cluster.
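Before or right after creating the cluster, it’s a good idea to run cluster validation; on real hardware this is also what tells you whether the nodes are eligible for S2D. A sketch:

```powershell
# Validate the two lab nodes, including the Storage Spaces Direct tests;
# the report is written to an .htm file in the current user's profile
Test-Cluster -Node HV-01, HV-02 `
    -Include "Storage Spaces Direct", Inventory, Network, "System Configuration"
```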
Enable Storage Spaces Direct
Now we need to enable Storage Spaces Direct to pool all the disks together, create a virtual storage system called the Software Storage Bus, and some other magic. Normally you would run Enable-ClusterS2D to enable Storage Spaces Direct. But since we’re cheating a bit with VMs and virtual disks, and the VHDs are missing cache features and such, this will fail.
So we need to run
```powershell
Enable-ClusterS2D -SkipEligibilityChecks
```
to bypass the checks. After the command has finished, we have a storage pool stretched over 2 nodes with 20 disks.
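You can verify the result from either node. A quick check, assuming the defaults (the friendly name S2D gives the pool may differ per build):

```powershell
# The non-primordial pool is the one Enable-ClusterS2D created
$pool = Get-StoragePool | Where-Object IsPrimordial -eq $false
$pool | Select-Object FriendlyName, Size, AllocatedSize, HealthStatus

# All 20 lab disks should show up as members of the pool
($pool | Get-PhysicalDisk).Count
```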
Now let’s recap here. We now have a 2-node cluster where the 20 physical disks are added to 1 pool of disks. Next stop is setting up a volume on the disk pool.
```powershell
New-Volume -StoragePoolFriendlyName "HV-S2D" -FriendlyName VDisk01 `
    -FileSystem CSVFS_ReFS -Size 200GB -PhysicalDiskRedundancy 1
```
That gives me a 200GB volume that is added as a Cluster Shared Volume where I can place VMs. We use the -PhysicalDiskRedundancy 1 parameter to make sure all data is written in a mirror; Storage Spaces makes sure it is spread across both nodes. When we look at the physical Hyper-V host and check the .vhdx files of HV-01 and HV-02, they still do not contain any data (except some initial storage pool and volume data).
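To confirm the volume came up as a CSV, a quick check on one of the nodes:

```powershell
# The virtual disk backed by the pool, and its CSV mount point
Get-VirtualDisk -FriendlyName VDisk01 |
    Select-Object FriendlyName, ResiliencySettingName, Size, HealthStatus
Get-ClusterSharedVolume |
    Select-Object Name, State, @{ n = 'Path'; e = { $_.SharedVolumeInfo.FriendlyVolumeName } }
```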
Create VM to generate data
We create a VM and place it on the Cluster Shared Volume.
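For reference, a minimal sketch of creating such a VM directly on the CSV path and making it highly available; the VM name and the Volume1 folder are assumptions for this lab:

```powershell
# Create a clustered VM on the CSV so it can fail over between the nodes
$csv = "C:\ClusterStorage\Volume1"
New-VM -Name TestVM -Generation 2 -MemoryStartupBytes 2GB -Path $csv `
    -NewVHDPath "$csv\TestVM\Virtual Hard Disks\TestVM.vhdx" -NewVHDSizeBytes 60GB
Add-ClusterVirtualMachineRole -VMName TestVM
```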
I boot the VM and start the installation. If we let the VM install and check the nodes, we see that node HV-01 is sending data and HV-02 is receiving it. So the data is written to the disks of both nodes.
After a while, the disks on both servers have increased in size too, if we check the physical Hyper-V host again.
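A quick way to watch this from the physical Hyper-V host is to check the file sizes of the lab VHDX files (the paths come from the create script above):

```powershell
# The dynamic VHDX files on both lab "hosts" grow as the mirror writes data
Get-ChildItem "D:\VMs\HV-0*\*-SofsDisk*.vhdx" |
    Select-Object Name, @{ n = 'SizeMB'; e = { [math]::Round($_.Length / 1MB) } }
```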
Finalize
Storage Spaces Direct is a new Windows Server 2016 feature which enables you to use physical boxes as building blocks for your Hyper-V stack. Start with 3 boxes (because that will probably be the minimum number of nodes for a hyper-converged cluster) and add compute, memory and storage by adding physical nodes, up to a maximum of 16 (for now), which could be higher when we hit RTM.
The difficult part in the hyper-converged stack will be finding the sweet spot between CPUs, memory and storage, because if you add too much of one of them you could end up not utilizing a large amount of storage, memory or CPU power.
If you take the disaggregated approach, you only use some CPU and memory from the cluster nodes and a lot of disks to set up a large storage pool, creating a big SAN. Then add a Scale-Out File Server on top of that and use SMB3 to provide shares for a dedicated Hyper-V cluster… but that might be stuff for another blog 🙂
Good luck deploying!
Pascal Slijkerman
Hello there,
how long the s2d command take to complete ?
hi Munir,
It depends a bit on your config, but it shouldn’t take more than 5 to 10 minutes. What hardware are you using?
Regards
Pascal
Hello Pascal,
I am using dell server and has created 4 nested vm to do the s2d, but when i enable the s2d, it only stuck at 23%. any idea on this ?
Hi Munir,
Could be lots of reasons, but I know of some cases with specific hardware, but since you are in a virtual scenario that doesn’t apply. Is your s2d cluster validation running successful?
Hi
yes it run successfully, btw thank for the feedback, i manage to make it run