Ignite 2017 @ Orlando Day 4

Day 4:

Azure High Performance Networking

This was a very interesting session with lots of good info. It started off with VNet integration of Azure Container Service and the ability to give an IP to a single container instead of sharing the IP with several containers.

VNet Service Endpoints are also new. They give you the ability to deny internet access to VMs but still allow specific Azure services as endpoints, so your VMs can talk to Azure services or PaaS services without you having to figure out behind which IPs those endpoints are located, and without talking to the rest of the internet.
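In AzureRM PowerShell terms this looks roughly like the sketch below; the resource names are made up and Microsoft.Storage is just one example of a supported service endpoint:

```powershell
# Minimal sketch: tag a subnet with the Microsoft.Storage service endpoint so
# VMs in it can reach Azure Storage even when internet access is denied.
# Resource group, VNet, subnet name and address prefix are all placeholders.
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyRG" -Name "MyVNet"

Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "MySubnet" `
    -AddressPrefix "10.0.1.0/24" -ServiceEndpoint "Microsoft.Storage"

# Push the updated subnet configuration back to Azure.
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```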

Then NSG’s got a bit less dumber then they were. The applied service tags to NSG’s. So what it means is that you can for example set a tag SQL Servers, or IIS Servers and make all IIS or SQL Servers being tagged by the policy. So you setup one rule with a tag SQL and all your SQL servers wil be bound to that NSG rule instead of creating several rules based on source IP’s of that SQL server.

Read more

Ignite 2017 @ Orlando Day 2

I started off the second day of Ignite with a session on:

Azure Stack servicing and updating.

Updates for Azure Stack consist of two packages, or actually three, but the third is different and it's not really clear how that will take place, because it will be the OEM vendor package and every vendor can handle that in their own way. The first package contains the OS updates for all the VMs and hosts in the Azure Stack. The second package is about updating the resource providers in Azure Stack. Azure Stack can also be updated in a disconnected scenario, as long as the bits are downloaded and then uploaded to blob storage through the Admin Portal.

Both are pretty big and not yet cumulative, meaning you have to run all the updates to get to the latest and you can't skip one. Updates will come every month, and you are not supposed to fall more than 3 months behind, otherwise you will lose support and will have to get current first.

Since the entire stack is locked down, you cannot log in with RDP, go to Windows Update and click install. To take care of that, Azure Stack has an Update Resource Provider. The resource provider gives you a wizard in a set of blades to point to the update packages and to install the update, or schedule it.
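The upload itself for the disconnected scenario is a plain blob upload. A rough sketch with the Azure Storage cmdlets, where the storage account, container and file names are all assumptions on my side:

```powershell
# Stage a downloaded update package in blob storage; the account, key,
# container and package names below are placeholders, not documented values.
$ctx = New-AzureStorageContext -StorageAccountName "updateadminaccount" `
    -StorageAccountKey $storageKey

Set-AzureStorageBlobContent -Context $ctx -Container "updates" `
    -File "C:\Downloads\AzSUpdate.zip" -Blob "AzSUpdate.zip"
```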

Read more

Ignite 2017 @ Orlando Day 1

Today was the first day of Ignite 2017, which kicked off with a keynote from Satya Nadella. Unfortunately it was a lot of the same slides and info as Inspire 2017, so it was a bit of a waste of time, and since we had a lot of drinks at some very nice places in Orlando and a sprinkler fight with some InSpark colleagues the night before, it would have been nice to get a couple more hours of sleep 😉.

Empower IT and Developer Productivity with Azure
After the keynote I started with the session from Scott Guthrie. It was packed with info, but besides Corey's session on massive VM sizes with 128 cores and multiple terabytes of memory, a couple of things were interesting to me:

  • Update Management:
    Update Management is in preview now and, as I noticed in my own subscription, not available for all machines; I don't know the prerequisites for that yet. But you can enable Update Management to scan VMs for the updates they need, on Windows and Linux. You can also include on-prem machines. It's all displayed in a nice dashboard.
  • Change Tracking:
    With Azure Change Tracking in the OMS suite you can track changes in a VM through Log Analytics for a big number of resources, for example at file, registry, process and service level. Here too a slick dashboard gives a good overview of what happened. Both features report into a Log Analytics workspace; see the sketch after this list.
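A minimal, hedged sketch of connecting an Azure VM to the Log Analytics workspace that backs both features, using the monitoring agent extension; the resource names, workspace ID and key are placeholders:

```powershell
# Install the Microsoft Monitoring Agent extension on a VM and point it at
# a Log Analytics (OMS) workspace; all names and IDs below are placeholders.
Set-AzureRmVMExtension -ResourceGroupName "MyRG" -VMName "MyVM" -Location "westeurope" `
    -Name "MicrosoftMonitoringAgent" -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" -TypeHandlerVersion "1.0" `
    -SettingString '{"workspaceId": "<workspace-id>"}' `
    -ProtectedSettingString '{"workspaceKey": "<workspace-key>"}'
```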

After a horrible lunch experience, the real sessions started. Here is a quick overview with some valuable takeaways for myself within my focus area.

Virtual Machine Diagnostics on Microsoft Azure
This was a short 20-minute session in the OCCC South hall Expo Theater #10. A new PowerShell script has been released to get the health of a VM and output it as a JSON-formatted overview. With Get-AzureRMVmHealth.ps1 you can get a quick overview of several details, like: is my NIC up, what's the IP, what port is used for RDP, is the admin account disabled, what's the username, are all vital services for remote access running, and lots more! Give it a try with the following command.
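A hypothetical invocation; the parameter names here are my assumption based on the usual AzureRM pattern, not the script's documented interface, so check the script's own help:

```powershell
# Hypothetical usage; -ResourceGroupName and -VmName are assumed parameter
# names, not taken from the script's documentation.
.\Get-AzureRMVmHealth.ps1 -ResourceGroupName "MyRG" -VmName "MyVM"
```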

Read more

Third party storage replication software and Server 2016 issues with S2D

Hi All, recently I encountered some issues with third party software used to replicate VMs from an old Windows Server 2012 cluster to a brand new Windows Server 2016 S2D cluster, where it turned out that the third party software was not fully supported on Windows Server 2016, although the vendor claimed it was. I know you could use Shared Nothing Live Migration, but in this case that was not possible, so we had to look for third party software.

In this example I encountered two issues when using Storage Spaces Direct and the ReFS file system (which is a hard requirement with S2D) together with the replication software.
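A quick way to confirm what file system a volume is actually running before installing an agent that might choke on ReFS:

```powershell
# List volumes with their file system and health; on an S2D cluster the
# backing volumes will report ReFS.
Get-Volume | Select-Object DriveLetter, FileSystemLabel, FileSystem, HealthStatus
```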

Issue 1

My first issue was with the agent that you install on a Windows Server 2016 Hyper-V host to be able to migrate VMs from a source to a Hyper-V Storage Spaces Direct cluster as destination. After a push from the console, or a manual installation, the agent service would not start: on starting, it crashes with a .NET error. Well, that seems pretty simple and straightforward... do you really need a blog for that? That's true, but the next issue is not directly noticeable. In the end it turned out that the service could not start because it could not work with ReFS.

Issue 2

After the service failed to start, the vendor's tech support provided a workaround, and since a deadline was pushing, we took the workaround, left the agent running and migrated the VMs through it. But that's not what this blog is about.

Read more

DPM 2016 Modern Backup Storage ReFS Volume offline and in RAW state

A couple of weeks ago a customer reported problems with a DPM 2016 server with Modern Backup Storage (MBS). Modern Backup Storage is a new approach in DPM to get a more efficient and flexible storage pool without the known limitations of the LDM database. With MBS you use Storage Spaces on the DPM server to add the disks to a large pool. Then, on top of the storage space, you create one or more volumes with shares that the DPM server can use as storage, instead of adding unallocated disks that DPM manages.
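As a hedged sketch (pool and disk names are assumptions), the Storage Spaces side of that looks roughly like this:

```powershell
# Pool the eligible physical disks and carve out a ReFS volume for DPM;
# friendly names, resiliency setting and drive letter are placeholders.
$disks = Get-PhysicalDisk -CanPool $true

New-StoragePool -FriendlyName "DPMPool" `
    -StorageSubSystemFriendlyName "Windows Storage*" -PhysicalDisks $disks

New-VirtualDisk -StoragePoolFriendlyName "DPMPool" -FriendlyName "DPMDisk" `
    -ResiliencySettingName Simple -UseMaximumSize

# Initialize the new virtual disk and format it with ReFS for MBS.
Get-VirtualDisk -FriendlyName "DPMDisk" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "DPMStorage"
```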

An explanation of MBS is beyond the scope of this blog, but the problem we had with it is related to Storage Spaces, and more specifically to ReFS volumes. ReFS is the new file system Microsoft has been cooking on for the last 6+ years. It has many improvements, several of which are there to make sure data corruption does not occur, and if it does occur, to repair it automatically or take action to prevent it.
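You can peek at that integrity machinery from PowerShell; a small example (the path is made up) of checking the integrity streams on a file stored on an ReFS volume:

```powershell
# Show whether ReFS integrity streams are enabled/enforced for a file;
# the path below is an example, not from the customer's setup.
Get-FileIntegrity -FileName "E:\DPMStorage\example.vhdx"
```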

So why does the volume on this DPM server appear offline, inaccessible and corrupt, in a RAW file format?

In this setup

Read more

DPM 2016 Modern Backup Storage and Deduplication performance issue

Modern Backup Storage and Deduplication

One of the great features of Hyper-V in combination with a virtual DPM server was the ability to use deduplication. On the Hyper-V host you have a dedup-enabled volume where you place the .vhdx files that serve as backup storage for the virtual DPM server. This way you can save a lot of disk space.

Since DPM 2016 you can use Modern Backup Storage (MBS). With MBS you can use Storage Spaces to create a volume with shares that you present to DPM. Now I hear you thinking: hey, we can enable dedup on that volume, let DPM back up data to it and have it deduped, so we can use a physical machine to… Unfortunately not: because MBS requires ReFS as the file system on the disk, dedup is ruled out, as it is not yet supported or available on ReFS volumes.

So we still need a physical Hyper-V host with an NTFS volume for the .vhdx files, with dedup enabled. In the VM we create the storage space with a virtual disk and a ReFS volume, and you are ready to cruise with your DPM server.
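A hedged sketch of the host side (the drive letter is an example; the Backup usage type was added in Server 2016 for exactly this kind of virtualized backup workload):

```powershell
# On the physical host, enable dedup on the NTFS volume holding the
# DPM .vhdx files; "E:" is a placeholder drive letter.
Enable-DedupVolume -Volume "E:" -UsageType Backup

# Check how much space dedup is reclaiming on that volume.
Get-DedupVolume -Volume "E:" | Select-Object Volume, SavedSpace, SavingsRate
```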

The setup

The situation above already describes the setup a bit. In short, we have a physical host that holds the storage for the virtual DPM server. In this case it was a disk enclosure attached to the host, with several SATA disks that we added to a storage space. On top of the storage space a mirrored volume was created to place the backup .vhdx files on. The host takes care of the deduplication of the backup data inside the .vhdx files; the VM takes care of the Storage Space and DPM.

The physical host has to take care of some dedup jobs, like optimization, garbage collection and scrubbing. These jobs cost a big amount of IO and a large amount of
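For completeness, a hedged sketch of kicking off and watching those maintenance jobs manually on the host (the drive letter is an example):

```powershell
# Trigger the three dedup maintenance job types on the backup volume;
# "E:" is a placeholder drive letter.
Start-DedupJob -Volume "E:" -Type Optimization
Start-DedupJob -Volume "E:" -Type GarbageCollection
Start-DedupJob -Volume "E:" -Type Scrubbing

# Watch running jobs and the volume's overall dedup state.
Get-DedupJob
Get-DedupStatus -Volume "E:"
```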

Read more