Ignite 2017 @ Orlando Day 4

Day 4:

Azure High Performance Networking

This was a very interesting session with lots of good info. It started off with VNet integration for Azure Container Service and the ability to give an IP to a single container instead of sharing one IP across several containers.

VNet Service Endpoints are also new. They give you the ability to deny internet access to VMs while still allowing specific Azure services as endpoints. So your VMs can talk to Azure services or PaaS services without you having to figure out which IPs those endpoints sit behind, and without the VMs being able to talk to the rest of the internet.
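
Just to make the idea concrete, here's a rough sketch of enabling a Microsoft.Sql service endpoint on a subnet with the azure-mgmt-network Python SDK. This is not from the session; the subscription, resource group, VNet and subnet names are made up, and it assumes the newer "track 2" SDK:

```python
# Minimal sketch (hypothetical names): add a Microsoft.Sql service endpoint to an
# existing subnet so VMs in it can reach Azure SQL over the Azure backbone, even
# when an NSG rule denies general outbound internet traffic.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ServiceEndpointPropertiesFormat

SUBSCRIPTION_ID = "<subscription-id>"   # assumption: fill in your own
RESOURCE_GROUP = "rg-demo"              # hypothetical resource group
VNET_NAME = "vnet-demo"                 # hypothetical VNet
SUBNET_NAME = "subnet-app"              # hypothetical subnet

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Fetch the current subnet so we only add the endpoint and keep everything else.
subnet = client.subnets.get(RESOURCE_GROUP, VNET_NAME, SUBNET_NAME)
subnet.service_endpoints = [ServiceEndpointPropertiesFormat(service="Microsoft.Sql")]

updated = client.subnets.begin_create_or_update(
    RESOURCE_GROUP, VNET_NAME, SUBNET_NAME, subnet
).result()
print([ep.service for ep in updated.service_endpoints])
```

Combine that with a deny-internet rule on the subnet's NSG and you get exactly the behaviour described above: no internet, but the PaaS endpoint still works.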

Then NSGs got a bit smarter than they were: they added service tags to NSGs. What this means is that you can, for example, set a tag like SQL Servers or IIS Servers and have all your IIS or SQL servers covered by that policy. So you set up one rule with an SQL tag and all your SQL servers will be bound to that NSG rule, instead of creating several rules based on the source IPs of each SQL server.
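
As a rough illustration of the "one rule per tag instead of many IP rules" idea, here's a hedged sketch that creates a single outbound NSG rule using the built-in "Sql" service tag as the destination. Again, the resource group, NSG and rule names are hypothetical and the track 2 Python SDK is assumed:

```python
# Minimal sketch (hypothetical names): one outbound NSG rule targeting the "Sql"
# service tag, so the platform keeps the underlying IP ranges up to date for you.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")

rule = client.security_rules.begin_create_or_update(
    "rg-demo",             # hypothetical resource group
    "nsg-app",             # hypothetical network security group
    "allow-sql-outbound",  # hypothetical rule name
    {
        "protocol": "Tcp",
        "direction": "Outbound",
        "access": "Allow",
        "priority": 200,
        "source_address_prefix": "*",
        "source_port_range": "*",
        "destination_address_prefix": "Sql",  # service tag instead of a list of IPs
        "destination_port_range": "1433",
    },
).result()
print(rule.provisioning_state)
```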

Then cross-region VNet Peering came along, with the announcement that it's in public preview. You can now peer VNets from different regions, which was not possible before. I found out that it was in private preview and was announced as public preview on the same day 🙂
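
For reference, setting up a peering looks the same whether the VNets are in one region or two; only the remote VNet's resource ID points somewhere else. A hedged sketch with made-up names, again assuming the track 2 Python SDK:

```python
# Minimal sketch (hypothetical names): peer a VNet in West Europe with a VNet in
# East US. A mirrored peering also has to be created from the remote side before
# traffic flows in both directions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUB = "<subscription-id>"
client = NetworkManagementClient(DefaultAzureCredential(), SUB)

remote_vnet_id = (
    f"/subscriptions/{SUB}/resourceGroups/rg-eastus"
    "/providers/Microsoft.Network/virtualNetworks/vnet-eastus"  # hypothetical remote VNet
)

peering = client.virtual_network_peerings.begin_create_or_update(
    "rg-westeurope",    # hypothetical resource group of the local VNet
    "vnet-westeurope",  # hypothetical local VNet
    "peer-to-eastus",   # hypothetical peering name
    {
        "remote_virtual_network": {"id": remote_vnet_id},
        "allow_virtual_network_access": True,
    },
).result()
print(peering.peering_state)
```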

Azure Site Recovery

Earlier this week I got the chance to talk with some Azure Site Recovery product managers about feedback and use cases. Near the end I was told that my frustration about some features being available for VMware to Azure, physical host to Azure, and Azure to Azure DR/migration, but not for Hyper-V to Azure DR/migration, will come to an end. Turns out it will end near the end of next month: features will finally be the same for all platforms! Another big complaint from my side was that there is no compression for the Hyper-V MARS agent. That one is also coming, thank you! 🙂

As for this session, it covered application-level DR, which was not new to me. But it was also about the monitoring capabilities, which are going to improve massively. They got a very cool dashboard! (sorry for the bad pictures, I was not sitting in the front 🙂 )


Now you can get tons of info about replication: RPO, latency, bandwidth, churn rate (how much change there is per replication interval), error logging, and also a topology view! I like it!
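
The churn and bandwidth numbers are the interesting pair, because together they tell you whether your link can actually keep up and what RPO is realistic. A back-of-the-envelope sketch with completely made-up numbers:

```python
# Rough sketch (made-up numbers): how long it takes to ship one replication
# interval's worth of churn over a given uplink. If the upload time is longer
# than the interval, replication falls behind and the RPO drifts.
def upload_minutes(churn_gb: float, bandwidth_mbps: float) -> float:
    """Minutes needed to upload the churn of one interval at the given bandwidth."""
    churn_megabits = churn_gb * 1024 * 8
    return churn_megabits / bandwidth_mbps / 60

interval_min = 15   # hypothetical replication interval
churn_gb = 5        # hypothetical change per interval
link_mbps = 100     # hypothetical uplink

needed = upload_minutes(churn_gb, link_mbps)
print(f"Upload time per interval: {needed:.1f} min "
      f"({'fits' if needed <= interval_min else 'does NOT fit'} in {interval_min} min)")
```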

Software Defined in version 1709: what's new

This session with Jeff Woolsey and Claus Joergensen was not just about what's already there in 2016, which we all know; it was about what's new in 1709, going a bit deeper!

As I blogged before on Day 2, with 1709 we can use HDD, SSD, NVMe and NVDIMM for Storage Spaces Direct. They also mentioned that NVMe uses fewer CPU cycles, which is really nice for hyper-converged setups because you need those cycles for the workloads. With NVDIMM or SCM, access times and latency are not measured in milliseconds but in nanoseconds.

You can use SCM as cache or capacity, although the devices are small for now, as the largest NVDIMM is 16 GB. And since you are filling up your regular memory slots, you have to find a sweet spot between RAM and SCM. But if you go for a disaggregated approach with SOFS, I would stuff the sucker full of NVDIMMs and it will be as fast as you can imagine! As for S2D, there are also improvements in drive error handling and MVR disks. As I blogged here, MVR disks are really slow; they claim they're 4 times faster now. Currently I don't have the systems available to test it out.
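
That RAM-versus-SCM sweet spot is really just slot arithmetic. A tiny sketch with made-up server specs (only the 16 GB NVDIMM size comes from the session) to show the trade-off:

```python
# Back-of-the-envelope sketch (hypothetical server): every DIMM slot given to a
# 16 GB NVDIMM for SCM cache is a slot not available for workload RAM, which is
# what you are balancing in a hyper-converged node.
TOTAL_SLOTS = 24   # hypothetical number of DIMM slots
RAM_DIMM_GB = 32   # hypothetical RAM module size
NVDIMM_GB = 16     # max NVDIMM size mentioned in the session

for nvdimm_slots in (0, 4, 8, 12):
    ram_slots = TOTAL_SLOTS - nvdimm_slots
    print(f"{nvdimm_slots:2d} NVDIMMs -> "
          f"{nvdimm_slots * NVDIMM_GB:4d} GB SCM cache, "
          f"{ram_slots * RAM_DIMM_GB:4d} GB RAM left for VMs")
```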

That’s it for Day 4!

Greetings,
Pascal Slijkerman
