Ignite 2017 @ Orlando Day 4

Day 4:

Azure High Performance Networking

This was a very interesting session with lots of good info. It started off with VNet integration for Azure Container Service and the ability to give an IP to a single container instead of sharing the IP across several containers.

VNet Service Endpoints are also new. They give you the ability to deny internet access to VMs but still allow specific Azure services as endpoints. So your VMs can talk to Azure services or PaaS services without you having to figure out which IPs those endpoints sit behind, and without being able to talk to the rest of the internet.
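A rough sketch of what enabling such an endpoint looks like in PowerShell, assuming the AzureRM module and made-up resource names:

```powershell
# Hedged sketch (AzureRM module, invented names): enable a service endpoint
# for Azure SQL on an existing subnet so VMs reach it over the Azure backbone.
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "Demo-RG" -Name "Demo-VNet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Backend" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Sql"
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```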

Then NSGs got a bit less dumb than they were: they added service tags to NSGs. What that means is that you can, for example, set a tag for your SQL servers or IIS servers and have all IIS or SQL servers picked up by the policy. So you set up one rule with an SQL tag and all your SQL servers are bound to that NSG rule, instead of creating several rules based on the source IPs of each SQL server.
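A minimal sketch of the idea, assuming the AzureRM module and invented names: one rule bound to a group of SQL servers instead of per-IP rules (not a literal copy of the session demo).

```powershell
# Hedged sketch (AzureRM module, invented names): group the SQL VMs with an
# application security group and write one NSG rule against the group
# instead of listing each server's IP address.
$asg = New-AzureRmApplicationSecurityGroup -ResourceGroupName "Demo-RG" `
    -Name "asg-sql" -Location "westeurope"

$rule = New-AzureRmNetworkSecurityRuleConfig -Name "Allow-SQL-1433" `
    -Access Allow -Protocol Tcp -Direction Inbound -Priority 200 `
    -SourceAddressPrefix "VirtualNetwork" -SourcePortRange "*" `
    -DestinationApplicationSecurityGroup $asg -DestinationPortRange 1433

New-AzureRmNetworkSecurityGroup -ResourceGroupName "Demo-RG" -Name "nsg-sql" `
    -Location "westeurope" -SecurityRules $rule
```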

Read more

Ignite 2017 @ Orlando Day 2

The second day of Ignite I started off with a session on:

Azure Stack servicing and updating.

Updates for Azure Stack consist of 2 packages, or actually 3, but the third is different and it is not really clear how that will take place, because it will be the OEM vendor package and every vendor can take care of that in their own way. The first package contains the OS updates for all the VMs and hosts in the Azure Stack. The second package updates the resource providers in Azure Stack. Azure Stack can also be updated in a disconnected scenario, as long as the bits are downloaded and uploaded to blob storage through the Admin Portal.

Both are pretty big and not yet cumulative, meaning that you have to run all the updates to get to the latest and you can’t skip one. Updates will ship every month and you are not supposed to fall more than 3 months behind, otherwise you will lose support and will have to get current first.

Since the entire stack is locked down, you cannot log in with RDP, go to Windows Update and click install updates. To take care of that, Azure Stack has an Update Resource Provider. The resource provider gives you a wizard in a set of blades where you point it at the update packages and install the update or schedule it.

Read more

Third party storage replication software and Server 2016 issues with S2D

Hi All, recently I encountered some issues with third party software used to replicate VMs from an old Windows Server 2012 cluster to a brand new Windows Server 2016 S2D cluster, where it turned out that the third party software was not fully supported on Windows Server 2016 although they claimed it was. I know you could use Shared Nothing Live Migration, but in this case that was not possible, so we had to look at third party software.

In this example I encountered two issues when using Storage Spaces Direct and the ReFS file system (which is a hard requirement with S2D) together with the replication software.

Issue 1

So my first issue was with the agent that you install on a Windows Server 2016 Hyper-V host to be able to migrate servers from a source VM to a Hyper-V Storage Spaces Direct cluster as the destination. After a push from the console or a manual installation, the agent service would not start. On starting it crashed with a .NET error. Well, that seems pretty simple and straightforward… do you really need a blog for that? That’s true, but the next issue is not directly noticeable. In the end it turned out that the service could not start because it could not work with ReFS.
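If you want to rule ReFS in or out quickly yourself, checking which file system the destination volumes use is a one-liner with the in-box storage cmdlets (nothing vendor-specific here):

```powershell
# Quick check of the file system on the destination host's volumes.
# On an S2D cluster the CSV volumes will typically report ReFS / CSVFS_ReFS.
Get-Volume | Select-Object DriveLetter, FileSystemLabel, FileSystemType, Size
```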

Issue 2

After the service failed to start, the vendor’s tech support provided a workaround, and since there was a deadline pushing we took the workaround, left the agent running and migrated the VMs that way, but that’s not what this blog is about.

Read more

Storage Spaces Direct Mirroring vs MRV (Parity) performance

HP Lefthand

Back several years ago (about 6 or 7 if I remember correctly), when Storage Spaces and Storage Spaces Direct (S2D) did not exist yet, there was another vendor called “Lefthand” which did kind of the same trick. Lefthand was bought by HP and the product was renamed to P4000 and later on HP StoreVirtual 4000. The principle of this type of storage is different from other vendors. These storage nodes use a local RAID level across several disks in a system, and additional nodes with the exact same hardware are pooled into a cluster. On top of the cluster, volumes with network RAID levels are created. So disks in a system (1 disk in the case of a single local RAID 5 pool of disks) could die without losing the node. In this setup you have several layers of redundancy on a storage node, but also on the entire cluster.

You could start with 2 systems and create a mirrored volume. The raw capacity of one node minus the RAID overhead was the total usable capacity. So take a two node setup with 12x 1TB disks: per node, 12 disks in a RAID 5 gives 12TB minus 1TB and minus some lost bits and bytes, so give or take 10,5TB usable, and roughly 21TB across both nodes. In a two node setup you will have 10,5TB of usable space with mirrored volumes because the data is mirrored across both nodes. Mirroring data like that brings you high availability of storage at the node level. So you could lose a storage node without the volume going offline, but it will cost you half of the raw storage capacity. If you add an extra node to make a total of 3 nodes, you would have 10,5 * 3 = 31,5TB of raw capacity. Taking the mirroring into consideration you will have about 15,7TB of usable capacity. And this keeps going: in a 10 node setup you have 105TB of raw capacity and about 50+TB of usable capacity. So you always lose half of the capacity in a Network Mirror volume. If you choose a 3-way or even a 4-way mirror (don’t know why, but it is possible) you have massive redundancy and performance but terrible efficiency, because in a 4-way mirror on 4 nodes you only have 25% of the raw capacity available for data.
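To make the mirror math explicit, here is a tiny illustrative calculation using the numbers from the example above (nothing vendor-specific):

```powershell
# Rough usable-capacity math for a network-mirrored volume (illustration only).
$rawPerNodeTB = 10.5   # ~12x 1TB in local RAID 5, minus overhead
$nodes        = 3
$mirrorCopies = 2      # a 2-way network mirror keeps every block on 2 nodes
$usableTB = ($rawPerNodeTB * $nodes) / $mirrorCopies
$usableTB  # 15.75 -> the ~15,7TB from the 3-node example
```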

Yes… it seems like a waste of space, so Lefthand (and later on HP) came up with Network RAID 5. When you have three or more nodes you could set up a volume with Network RAID 5. The data is then placed on two nodes and the third node holds the parity. You could still lose a disk in a node or an entire node… BUT it was dreadfully slow and HP recommended AGAINST setting up Network RAID Level 5…
So I hear you thinking: what is with all the “old stuff” on Lefthand… Well, Storage Spaces Direct works on kind of the same principle and the same applies to Parity volumes… But bear with me on this 🙂

Mirror vs MRV Volumes

With Storage Spaces the resiliency level is set at the volume level. That means you can create Mirrored and Parity volumes, and also a new flavor named Mixed Resiliency Volumes (MRV) with both Parity and Mirrored space. With traditional hardware RAID, mirrored disks are always faster than parity disks because of the parity process. The same applies to S2D. With mirrored volumes all data is mirrored across a number of nodes and disks in the cluster. By default Storage Spaces Direct uses a 3-way mirror layout: all blocks written to disk are copied to 2 other nodes (in case of a 3 node or larger cluster). Because of this you lose 2/3 of your raw capacity by default.
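For reference, creating such a mirrored volume on an S2D pool is a one-liner with the in-box storage cmdlets; a sketch, where the pool and volume names are only examples:

```powershell
# Hedged sketch: a 1 TB three-way mirrored CSV volume on the S2D pool.
# PhysicalDiskRedundancy 2 = data plus two extra copies (3-way mirror).
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mirror-01" `
    -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror `
    -PhysicalDiskRedundancy 2 -Size 1TB
```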

Microsoft S2D Program Manager Cosmos Darwin created a nice website to make some calculations on how much usable space you get with different combinations of disks, capacity and resiliency settings, check it out on http://aka.ms/s2dcalc

When you create a 1 TB 3-way mirrored volume, that volume has a 3 TB footprint. That’s a simple calculation, because the 1 TB of data is copied 2 additional times in a 3-way mirror, which makes 3 TB. When you create an MRV of 1 TB with for example 30% mirrored capacity and 70% parity capacity, we have to do a bit more math. 300 GB * 3 is 900 GB. Then we have 700 GB of parity space, which requires double the space, for a total of 1400 GB. The total footprint of a 1 TB MRV disk is 900 GB + 1400 GB = 2300 GB. So with an MRV disk you save 700 GB of space on a 1 TB volume.
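A mixed resiliency volume like the 300/700 GB example is carved out of the mirror and parity tiers; a sketch, assuming the default S2D tier names “Performance” (mirror) and “Capacity” (parity):

```powershell
# Hedged sketch: a 1 TB volume with 300 GB on the mirror tier and 700 GB on
# the parity tier (tier names assume the default S2D tier definitions).
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MRV-01" `
    -FileSystem CSVFS_ReFS `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 300GB, 700GB
```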

Because of the massive loss of capacity with a 3-way mirror, people (most of them the people responsible for the budget) are forcing or highly recommending to use, or at least consider, Parity or a form of Mixed Resiliency to get more GBs/TBs out of their hardware… but at what cost?

Read more

Go Hyper-converged with S2D

Windows Server 2016 is getting to its final RTM state within a few months now. After that we can start using Windows Server 2016 Storage Spaces Direct (S2D) for production environments and start building hyper-converged stacks.

I’m not going to explain how Storage Spaces Direct works, this is just a blog about the setup. If you want some more info about S2D, check out this link for an overview, and the more technical guys or girls can look here.

I have spent some lab time setting up and using Storage Spaces Direct (S2D) with hyper-converged hosts, and started this blog to share some info.

Beware that the info below is lab stuff. I’ve taken some shortcuts to be able to set up S2D on VMs with virtual disks. So do not use these commands for your own setup, unless you are also running tests in VMs.

General Info

I have 2 VMs running on a Windows Server 2016 TP5 physical box. The VMs are enabled for nested virtualization to make sure I can start VMs on the VMs. Both VMs have 10 disks attached to them. So no shared VHD, just 10 .vhdx files attached to each VM, which makes a total of 20 .vhdx files. The VMs are also running Windows Server 2016 TP5.
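Enabling nested virtualization on the two lab VMs takes a couple of Hyper-V cmdlets on the physical box; a sketch using the VM names from this post (the VMs must be off when changing the processor setting):

```powershell
# Enable nested virtualization and MAC address spoofing for the two lab VMs,
# so the nested VMs get CPU virtualization extensions and network access.
Stop-VM -Name "HV-01", "HV-02"
Set-VMProcessor -VMName "HV-01", "HV-02" -ExposeVirtualizationExtensions $true
Get-VMNetworkAdapter -VMName "HV-01", "HV-02" | Set-VMNetworkAdapter -MacAddressSpoofing On
Start-VM -Name "HV-01", "HV-02"
```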

Create and attach disks

To create and attach the disks I used some PowerShell commands. I have 2 “hosts”: HV-01 and HV-02.
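The exact commands are behind the “Read more” link below; as a rough sketch of the idea, with an assumed path and a 100 GB dynamic disk size:

```powershell
# Illustrative only: create 10 dynamic .vhdx files per lab VM and attach them
# to the SCSI controller (path and size are assumptions, adjust for your lab).
foreach ($vmName in "HV-01", "HV-02") {
    1..10 | ForEach-Object {
        $path = "D:\VMs\$vmName\Disk$_.vhdx"
        New-VHD -Path $path -SizeBytes 100GB -Dynamic | Out-Null
        Add-VMHardDiskDrive -VMName $vmName -ControllerType SCSI -Path $path
    }
}
```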

Read more