Ignite 2017 @ Orlando Day 4

Day 4:

Azure High Performance Networking

This was a very interesting session with lots of good info. It started off with VNet integration of Azure Container Service and the ability to give an IP to a single container instead of sharing one IP across several containers.

VNet Service Endpoints are also new. They give you the ability to deny internet access to VMs while still allowing specific Azure services as an endpoint. So your VMs can talk to Azure services or PaaS services without you having to figure out which IPs those endpoints sit behind, and without the VMs being able to talk to the rest of the internet.
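To make that a bit more concrete, here is a rough sketch of the moving parts written as Python dicts in ARM-template style. The subnet name, address prefix, rule names and priorities are made-up placeholders: a service endpoint for Microsoft.Storage on a subnet, plus an outbound rule that allows the Storage service tag while another rule denies the rest of the internet.

```python
# Sketch only: ARM-style properties expressed as Python dicts.
# Subnet name, rule names, priorities and the address prefix are placeholders.

subnet_fragment = {
    "name": "app-subnet",                      # hypothetical subnet name
    "properties": {
        "addressPrefix": "10.0.1.0/24",
        "serviceEndpoints": [
            {"service": "Microsoft.Storage"}   # turn on the service endpoint
        ],
    },
}

outbound_rules = [
    {
        "name": "Allow-Storage-Outbound",
        "properties": {
            "priority": 100,
            "direction": "Outbound",
            "access": "Allow",
            "protocol": "*",
            "sourceAddressPrefix": "*",
            "sourcePortRange": "*",
            "destinationAddressPrefix": "Storage",   # Azure Storage service tag
            "destinationPortRange": "443",
        },
    },
    {
        "name": "Deny-Internet-Outbound",
        "properties": {
            "priority": 200,
            "direction": "Outbound",
            "access": "Deny",
            "protocol": "*",
            "sourceAddressPrefix": "*",
            "sourcePortRange": "*",
            "destinationAddressPrefix": "Internet",  # Internet service tag
            "destinationPortRange": "*",
        },
    },
]
```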

Then NSGs got a bit smarter than they were: service tags were added to NSGs. What that means is that you can, for example, use a tag like SQL Servers or IIS Servers and have all of those servers covered by the tag in the policy. So you set up one rule with a SQL tag and all your SQL servers are bound to that NSG rule, instead of creating several rules based on the source IPs of each SQL server.
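Here is a quick sketch of why this matters for the size of your rule set: the same ARM-style rule shape, first with one rule per SQL endpoint IP and then with a single rule bound to the Sql service tag. The IPs, rule names and priorities are made up for the example.

```python
# Sketch only: collapsing several IP-based rules into one service-tag rule.
# IP addresses, names and priorities are made-up placeholders.

sql_endpoint_ips = ["10.1.0.4", "10.1.0.5", "10.1.0.6"]

def sql_rule(destination: str, priority: int) -> dict:
    """Build one ARM-style NSG rule allowing SQL traffic to a destination."""
    return {
        "name": f"Allow-Sql-{destination.replace('.', '-')}",
        "properties": {
            "priority": priority,
            "direction": "Outbound",
            "access": "Allow",
            "protocol": "Tcp",
            "sourceAddressPrefix": "*",
            "sourcePortRange": "*",
            "destinationAddressPrefix": destination,
            "destinationPortRange": "1433",
        },
    }

# Before: one rule per IP.
before = [sql_rule(ip, 100 + i) for i, ip in enumerate(sql_endpoint_ips)]

# After: a single rule where the "Sql" service tag replaces the IP list.
after = {**sql_rule("Sql", 100), "name": "Allow-Sql"}
```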

Read more

Ignite 2017 @ Orlando Day 3

The third day at Ignite was kind of hard to start up. These are long days and fun, long nights, but two double espressos kind of pushed me out of my morning dip. Ready to start the day!

Azure High Performance Networking:

This session was initially not about new stuff; it was more about making Azure networking clearer. Near the end there was a lot of new stuff about ExpressRoute though!

Public and Microsoft Peering

Earlier I heard some noise from several people that the Office 365 peering, or Public Peering, was to be cancelled. Now we know that it is not cancelled, but that the two peerings have been merged. That makes things simpler, but also more complex, because one of the biggest issues I hear customers talk about is that they don’t want to peer with all Azure or Office 365 services, and now there is no choice in that either: it’s either none or all in! But Microsoft must have heard this complaint, because they came up with a new feature for ExpressRoute called Route Filters. With route filters you can choose which routes you want advertised, so you only consume the services you want over the ExpressRoute connection. Nicely done! 🙂
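For what it’s worth, a route filter rule roughly looks like the sketch below, written as a Python dict in ARM style. The BGP community value is a placeholder: every service or region advertised over Microsoft peering has its own community value that you would fill in here.

```python
# Sketch only: an ExpressRoute route filter rule in ARM style.
# "12076:XXXX" is a placeholder; substitute the BGP community value
# of the service or region you actually want advertised over the circuit.
route_filter_rule = {
    "name": "allow-selected-services",
    "properties": {
        "access": "Allow",
        "routeFilterRuleType": "Community",
        "communities": ["12076:XXXX"],
    },
}
```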

Finally monitoring on ExpressRoute!

Read more

Ignite 2017 @ Orlando Day 2

The second day of Ignite I started off with a session on:

Azure Stack servicing and updating.

Updates for Azure Stack consist of two packages, or actually three, but the third is different and it is not really clear how that will take place, because it will be the OEM vendor package and every vendor can handle that in their own way. The first package contains the OS updates for all the VMs and hosts in the Azure Stack. The second package updates the resource providers in Azure Stack. Azure Stack can also be updated in a disconnected scenario, as long as the bits are downloaded and uploaded into blob storage through the Admin Portal.

Both are pretty big and not yet cumulative, meaning that you have to run all the updates to get to the latest version and you can’t skip one. Updates will ship every month and you are not supposed to fall behind more than three months, otherwise you will lose support and have to get current first.

Since the entire stack is locked down, you cannot log in with RDP, go to Windows Update and click install updates. To take care of that, Azure Stack has an Update Resource Provider. The resource provider gives you a wizard in a set of blades where you point to the location of the update packages and install the update, or schedule it.

Read more

Third-party storage replication software and Server 2016 issues with S2D

Hi all, recently I encountered some issues with third-party software used to replicate VMs from an old Windows Server 2012 cluster to a brand new Windows Server 2016 S2D cluster, where it turned out that the third-party software was not fully supported on Windows Server 2016, although the vendor claimed it was. I know you could use Shared Nothing Live Migration, but in this case that was not possible, so we had to look for third-party software.

In this case I encountered two issues when using Storage Spaces Direct and the ReFS file system (which is a hard requirement with S2D) together with the replication software.

Issue 1

My first issue was with the agent that you install on a Windows Server 2016 Hyper-V host to be able to migrate servers from a source VM to a Hyper-V Storage Spaces Direct cluster as the destination. After a push from the console, or a manual installation, the agent service would not start; on starting it crashed with a .NET error. Well, that seems pretty simple and straightforward… do you really need a blog for that? That’s true, but the next issue is not directly noticeable. In the end it turned out that the service could not start because it could not work with ReFS.
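If you want to check up front whether the destination volume is ReFS before pushing an agent like this, a quick and dirty check like the sketch below will tell you (Windows only; the drive letter is just an example).

```python
# Sketch: report the file system of a Windows volume (e.g. to spot ReFS)
# before installing an agent that may not support it. "C:\\" is an example path.
import ctypes

def volume_filesystem(root_path: str = "C:\\") -> str:
    fs_name = ctypes.create_unicode_buffer(261)
    ok = ctypes.windll.kernel32.GetVolumeInformationW(
        ctypes.c_wchar_p(root_path),
        None, 0,            # volume name buffer (not needed)
        None, None, None,   # serial number, max component length, flags
        fs_name, len(fs_name),
    )
    if not ok:
        raise ctypes.WinError()
    return fs_name.value    # e.g. "NTFS" or "ReFS"

if __name__ == "__main__":
    print(volume_filesystem("C:\\"))
```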

Issue 2

After the service failed to start, the vendor’s tech support provided a workaround, and since a deadline was pushing, we took the workaround, left the agent running and migrated the VMs that way. But that is not what this blog is about.

Read more

Storage Spaces Direct Mirroring vs MRV (Parity) performance

HP LeftHand

Back several years ago (about 6 or 7 if I remember correctly), when Storage Spaces and Storage Spaces Direct (S2D) did not exist yet, there was another vendor called “LeftHand” which did kind of the same trick. LeftHand was bought by HP and the product was renamed to P4000 and later on HP StoreVirtual 4000. The principle of this type of storage is different from other vendors. These storage nodes use a local RAID level across several disks in a system, and additional nodes with the exact same hardware are pooled into a cluster. On top of the cluster, volumes with network RAID levels are created. So disks in a system (1 disk in case of a single local RAID 5 pool of disks) could die without losing the node. In this setup you have several layers of redundancy on a storage node, but also across the entire cluster.

You could start with 2 systems and create a mirrored volume. The raw capacity of one node, minus the RAID overhead, was the total usable capacity. So take a two-node setup with 12x 1TB disks per node: you have give or take 21TB of capacity after local RAID (12 disks in a RAID 5 = 12TB minus 1TB for parity and minus some lost bits and bytes, so give or take 10.5TB per node). In a two-node setup you will have 10.5TB of usable space with mirrored volumes, because the data is mirrored across both nodes. Mirroring data like that gives you high availability at the storage node level: you can lose a storage node without the volume going offline, but it costs you half of the raw storage capacity. If you add an extra node, for a total of 3 nodes, you would have 10.5 * 3 = 31.5TB of raw capacity. Taking the mirroring into consideration, you will have about 15.7TB of usable capacity. And this keeps going: in a 10-node setup you have 105TB of raw capacity and about 50+TB of usable capacity. So you always lose half of the capacity with a Network Mirror volume. If you choose a 3-way or even a 4-way mirror (I don’t know why, but it is possible) you have massive redundancy and performance, but terrible efficiency, because in a 4-way mirror on 4 nodes you only have 25% of the raw capacity available for data.
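Just to make the arithmetic explicit, here is the same capacity math as a small Python sketch: local RAID 5 overhead per node first, then the network mirror factor. It ignores the “lost bits and bytes”, so the numbers come out slightly higher than the rounded figures in the text.

```python
# Worked example of the capacity math above (rounding/overhead ignored).
def usable_capacity_tb(nodes: int, disks_per_node: int = 12,
                       disk_tb: float = 1.0, mirror_copies: int = 2) -> float:
    per_node_after_raid5 = (disks_per_node - 1) * disk_tb   # RAID 5 loses one disk
    raw_cluster = nodes * per_node_after_raid5              # capacity after local RAID
    return raw_cluster / mirror_copies                      # network mirror factor

print(usable_capacity_tb(2))                   # ~11 TB   (text rounds to ~10.5 TB)
print(usable_capacity_tb(3))                   # ~16.5 TB (text: ~15.7 TB)
print(usable_capacity_tb(10))                  # ~55 TB   (text: 50+ TB)
print(usable_capacity_tb(4, mirror_copies=4))  # 4-way mirror on 4 nodes: 25% of raw
```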

Yes, it seems like a waste of space, so LeftHand (and later on HP) came up with Network RAID 5. When you have three or more nodes you can set up a volume with Network RAID 5. The data is then placed on two nodes and the third node handles the parity. You could still lose a disk in a node or an entire node… BUT it was dreadfully slow, and HP recommended AGAINST setting up Network RAID 5…

So I hear you thinking: what is with all the “old stuff” about LeftHand… Well, Storage Spaces Direct works on kind of the same principle and the same applies to its Parity volumes… But bear with me on this 🙂

Mirror vs MRV Volumes.

With Storage Spaces the resiliency level is set at the volume level. That means you can create Mirrored and Parity volumes, and also a new flavor named Mixed Resiliency Volumes (MRV) with both Parity and Mirrored space. With traditional hardware RAID, mirrored disks are always faster than parity disks because of the parity process, and the same applies to S2D. With Mirrored volumes, all data is mirrored across a number of nodes and disks in the cluster. By default Storage Spaces Direct uses a 3-way mirror layout: all blocks written to disk are copied to two other nodes (in the case of a cluster with three or more nodes). Because of this you lose 2/3 of your raw capacity by default.

Microsoft S2D Program Manager Cosmos Darwin created a nice website to calculate how much usable space you get with different combinations of disks, capacity and resiliency settings; check it out at http://aka.ms/s2dcalc

When you create a 1 TB 3-way mirrored volume, that volume has a 3 TB footprint. That is a simple calculation: 1 TB of data is copied two additional times in a 3-way mirror, which makes 3 TB. When you create a 1 TB MRV with, for example, 30% mirrored capacity and 70% parity capacity, we have to do a bit more math. The 300 GB mirrored part * 3 is 900 GB. Then we have 700 GB of parity space, which requires double the space, for a total of 1400 GB. The total footprint of a 1 TB MRV disk is 900 GB + 1400 GB = 2300 GB. So with an MRV disk you save 700 GB of footprint on a 1 TB volume compared to a 3-way mirror (2300 GB versus 3000 GB).
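The same footprint math as a tiny Python sketch, following the numbers above (three copies for the mirrored space, and a 2x footprint assumed for the parity space as in the example):

```python
# Footprint of a volume on S2D, following the example above.
# Assumes 3 copies for mirror space and a 2x footprint for parity space.
def footprint_gb(size_gb: float, mirror_fraction: float = 1.0,
                 mirror_copies: int = 3, parity_multiplier: float = 2.0) -> float:
    mirror_gb = size_gb * mirror_fraction
    parity_gb = size_gb - mirror_gb
    return mirror_gb * mirror_copies + parity_gb * parity_multiplier

print(footprint_gb(1000))                        # plain 3-way mirror: 3000 GB
print(footprint_gb(1000, mirror_fraction=0.3))   # 30/70 MRV: 900 + 1400 = 2300 GB
```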

Because of the massive loss of capacity with a 3-way mirror, people (most of them the people responsible for the budget) are forcing, or highly recommending, the use of Parity or a form of Mixed Resiliency to get more GBs/TBs out of their hardware. But at what cost?

Read more

VMM 2016 and Network Controller certificate issues

Since near the end of last year I have been blessed with some hardware to test a lot of new features of Windows Server 2016, System Center 2016 and Azure Stack. Last week I experienced an issue with my Network Controller VMs. In the end it turned out to be more of a VMM issue, I think. But I wanted to share this with the world in case somebody else runs into it and googles in vain, because there is nothing to find about this issue.

Problem

I did the Network Controller and SLB MUX setup several weeks ago and everything was running fine, until all of a sudden I couldn’t change anything in VMM anymore. Almost every action I took triggered this error:

Error (21426)
Execution of :: on the configuration provider  failed. Detailed exception: Unable to connect to the network service. Check connection string and network connectivity. Execution of Microsoft.SystemCenter.NetworkService::OpenDeviceConnectionEx on the configuration provider 3e2875a7-5831-4fb2-b388-1672e1c20fee failed. Detailed exception: System.Net.Http.HttpRequestException: An error occurred while sending the request. ---> System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
Check the documentation for the configuration provider or contact the publisher support.
Unable to connect to the network service. Check connection string and network connectivity.

Recommended Action
Check the documentation for the configuration provider or contact the publisher support.

Troubleshooting

So I did a bunch of tests and troubleshooting.
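One of the quickest checks with this kind of error is to pull the certificate the Network Controller REST endpoint actually presents and look at it outside of VMM. A rough Python sketch of that check follows; the host name and port are placeholders for your own NC REST endpoint.

```python
# Sketch: fetch the certificate presented by an HTTPS endpoint so you can
# inspect its subject and validity yourself. Host and port are placeholders.
import socket
import ssl

host, port = "nc.contoso.local", 443   # hypothetical Network Controller REST endpoint

# Grab the PEM without validating it, so we can see what is actually served.
pem = ssl.get_server_certificate((host, port))
print(pem)

# Then try a verified handshake against the machine's trusted roots; if the
# certificate really is untrusted, this should fail much like VMM reports.
ctx = ssl.create_default_context()
try:
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("Validated OK:", tls.getpeercert()["subject"])
except ssl.SSLError as err:
    print("Validation failed:", err)
```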

Read more