Storage, Failover Clustering, & Application Platform

This is a guide to storage, failover clustering, and application platforms.


Storage Migration Service

In this presentation, we'll take a look at the Storage Migration Service, which makes it easier to migrate servers to a newer version of Windows Server, or to migrate workloads to newer hardware or virtual machines. Now, the idea is to be able to do all of this without apps or users having to change anything in terms of their configuration.

[Video description begins] Screen title: Storage Migration Service [Video description ends]

So the process begins with creating an inventory of all of the servers that are going to be involved and the associated data that will be migrated. Then of course, it actually transfers the data from one to the other. And then optionally, you can take over the identity of the source server, which is also referred to as cutting over. So that again, users and apps don't have to change anything to access the existing data. And Windows Server 2019 makes this easier by centralizing the migrations all within the new Windows Admin Center interface.

[Video description begins] Screen title: Requirements [Video description ends]

Now, there are certainly requirements to implement this feature. You do need a Source server from which you will migrate the data, you need a Destination server to which you will migrate the data, and you also need an Orchestrator server which manages all of the processes. Now, if you are only migrating a small number of servers overall, then you can use either the source or the destination server as the orchestrator, provided it's running Windows Server 2019. If you're migrating a lot of servers, then it is strongly recommended to use a separate orchestrator server. And again, all of this can be managed through the Windows Admin Center, but it should be noted that you can also use PowerShell.

[Video description begins] Screen title: Other Considerations [Video description ends]

And in terms of some other considerations, you do need a migration account that has administrative rights on both the source and the destination servers. You need an inbound firewall rule for File and Printer Sharing (SMB-In), where SMB is the Server Message Block protocol, enabled on the orchestrator computer. And you need additional inbound firewall rules on both the source and destination computers: File and Printer Sharing (SMB-In), Netlogon Service (NP-In), Windows Management Instrumentation (DCOM-In), and Windows Management Instrumentation (WMI-In). And in terms of the Active Directory Domain Services configuration, all systems should be in the same forest.
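
As a rough sketch of how those firewall requirements might be satisfied with PowerShell; the rule and group display names below are the standard built-in ones, but it's worth confirming them on your own systems with Get-NetFirewallRule:

```powershell
# On the orchestrator: allow inbound File and Printer Sharing (SMB-In).
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (SMB-In)"

# On the source and destination servers: SMB, Netlogon, and WMI inbound rules.
Enable-NetFirewallRule -DisplayName "File and Printer Sharing (SMB-In)"
Enable-NetFirewallRule -DisplayName "Netlogon Service (NP-In)"
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"
```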

[Video description begins] Screen title: Source Server Requirements [Video description ends]

In terms of the version requirements on the source server, you can have just about anything; it goes as far back as Windows Server 2003, and it's quite a comprehensive list, including variations on versions for 2019, 2016, 2012, 2008, and even some of the Essentials and Small Business Server editions.

[Video description begins] Screen title: Destination Server Requirements [Video description ends]

But the destination server is certainly a little more limited in terms of the version: it must be either Windows Server, Semi-Annual Channel, Windows Server 2019, Windows Server 2016, or Windows Server 2012 R2. But again, the idea is to just make it a little easier to move a workload from an older system, for example, to a newer system without having to reconfigure anything from the perspective of your users and/or applications.


Storage Spaces Direct

In this presentation, we'll examine the new features of Storage Spaces Direct in Windows Server 2019.

[Video description begins] Screen title: Deduplication and Native Support [Video description ends]

Beginning with support for deduplication and compression on volumes formatted with the Resilient File System (ReFS), to allow for possibly up to ten times more data on a single volume. And inherent support for persistent memory modules, including Intel Optane DC PM and NVDIMM-N, as cache memory to boost the performance of your active working sets, lowering latency to mere microseconds.
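
As a quick hedged example, deduplication can be turned on for a ReFS volume with the standard Data Deduplication cmdlets; the volume path here is illustrative:

```powershell
# Enable deduplication on a cluster shared volume formatted with ReFS;
# the HyperV usage type tunes the schedules for running VM workloads.
Enable-DedupVolume -Volume "C:\ClusterStorage\Volume1" -UsageType HyperV

# Review space savings once the optimization job has run.
Get-DedupStatus -Volume "C:\ClusterStorage\Volume1"
```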

[Video description begins] Screen title: Redundancy Features [Video description ends]

And there is also enhanced redundancy through what's referred to as nested resiliency, whereby you can configure a RAID 5+1-style arrangement in a two-node cluster to provide continuously available storage even through the loss of one complete server and a drive failure in the other. And a standard low-cost USB flash drive plugged into a router can now act as a witness in a two-node cluster, to determine which node has the most up-to-date data if a node goes down but then comes back up.
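
To illustrate the witness scenario, the file share witness can point at an SMB share hosted on that USB drive. This is a hedged sketch: the share path and the use of local (non-domain) credentials are assumptions about how your router exposes the drive.

```powershell
# Configure a file share witness on a share exposed by the router's USB drive.
# Windows Server 2019 allows a file share witness that uses local credentials,
# so no Active Directory access is required from the router.
Set-ClusterQuorum -FileShareWitness "\\192.168.1.1\Witness" -Credential (Get-Credential)
```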

[Video description begins] Screen title: Administration Features [Video description ends]

With respect to administration, Storage Spaces Direct can be fully managed through the new Windows Admin Center, using a purpose-built dashboard to create, open, expand, or delete volumes quickly and easily. And you can view performance history from over 50 essential counters spanning all aspects of the core resources, with data retained for up to a year.
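
That performance history is also queryable from PowerShell. A minimal sketch, assuming you run it on one of the cluster nodes and that a virtual machine named VM01 exists:

```powershell
# Returns the built-in performance history series for the local server.
Get-ClusterPerformanceHistory

# History can also be scoped to a particular object, such as a virtual machine.
Get-VM -Name "VM01" | Get-ClusterPerformanceHistory
```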

And you can scale up to 4 petabytes, or 4,000 terabytes, per cluster, with the total number of volumes increased to 64 (up from 32) and individual volume sizes increased to 64 terabytes (also up from 32). Plus, your clusters can be aggregated into a single namespace using cluster sets. There have also been improvements to parity and latency with support for

[Video description begins] Screen title: Parity and Latency [Video description ends]

mirror-accelerated parity, whereby you can configure Storage Spaces Direct volumes that are part mirror, part parity, to get the best of both worlds. And you can easily identify drives with out-of-spec or abnormal performance with built-in detection and proactive monitoring features, which will label the drive as abnormal in both the Windows Admin Center and in PowerShell.
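
A hedged sketch of creating such a volume, assuming the Performance (mirror) and Capacity (parity) tiers that Enable-ClusterStorageSpacesDirect typically defines; the volume name and sizes are illustrative:

```powershell
# Create a volume that is part mirror (fast writes) and part parity (efficient capacity).
New-Volume -FriendlyName "MAP-Volume" -FileSystem CSVFS_ReFS `
    -StoragePoolFriendlyName "S2D*" `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 200GB, 800GB
```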

[Video description begins] Screen title: Delimit the Allocation [Video description ends]

And finally, volume allocation can now be manually delimited in more advanced fault-tolerant configurations such as three-way mirroring, which improves reliability by allowing an administrator to specifically designate which nodes are used when allocating the volume, for when you might feel that certain nodes are perhaps a little more reliable, or put another way, a little less susceptible to failure than others. So with these enhancements, you should be able to configure your storage in a variety of ways, with greater performance and resiliency and with easier administration.


Storage Replica

In this video, we'll take a look at some improvements and enhancements to the storage replica feature for Windows Server 2019.

[Video description begins] Screen title: Storage Replica [Video description ends]

Which begins with support now for implementing it on systems running the Standard Edition along with the Datacenter Edition. And that was not available in Windows Server 2016.

[Video description begins] Screen title: Standard Edition Limitations [Video description ends]

But that said, if it is going to be implemented on the Standard edition, then there are some limitations, including that only a single volume replication can be configured, as opposed to unlimited in the Datacenter edition. And any replicated volume that is participating must be a maximum of 2 terabytes in Standard, whereas in Datacenter it's limited only by what the operating system supports.
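
For context, a server-to-server replication partnership is typically created with the Storage Replica cmdlets. This is a minimal sketch with illustrative server, replication group, and volume names; on Standard edition, keep the replicated data volume at 2 terabytes or less:

```powershell
# Validate the proposed topology first (optional but recommended).
Test-SRTopology -SourceComputerName "SRV01" -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV02" -DestinationVolumeName "D:" -DestinationLogVolumeName "L:" `
    -DurationInMinutes 5 -ResultPath "C:\Temp"

# Create the partnership; replication begins from source to destination.
New-SRPartnership -SourceComputerName "SRV01" -SourceRGName "RG01" `
    -SourceVolumeName "D:" -SourceLogVolumeName "L:" `
    -DestinationComputerName "SRV02" -DestinationRGName "RG02" `
    -DestinationVolumeName "D:" -DestinationLogVolumeName "L:"
```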

[Video description begins] Screen title: Logging [Video description ends]

And storage replica has also been enhanced in terms of its logging and tracking performance, which directly translates into overall improved throughput and lower latency. But to take advantage of this, all members of the replication group must be running Windows Server 2019.

[Video description begins] Screen title: Windows Admin Center Support [Video description ends]

And in terms of management, storage replica can be fully managed graphically using the new Windows Admin Center via the Server Manager tool, and includes management of server to server, cluster to cluster and stretch cluster replication.

[Video description begins] Screen title: Test Failover [Video description ends]

And finally, disaster recovery protection has been expanded by allowing test failover scenarios, where you can temporarily mount a snapshot of replicated storage on your destination nodes to verify the content and ensure proper configuration. So several enhancements to Storage Replica have been introduced with Server 2019 that should hopefully allow you to configure more robust and reliable storage solutions.
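
As a hedged sketch of that test failover, the destination replication group can be temporarily mounted as a writable snapshot on a spare volume; the names and temporary path are illustrative:

```powershell
# Mount a writable snapshot of the replicated data on the destination server
# so the contents can be verified without interrupting ongoing replication.
Mount-SRDestination -ComputerName "SRV02" -Name "RG02" -TemporaryPath "T:\"

# Discard the snapshot when verification is complete.
Dismount-SRDestination -ComputerName "SRV02" -Name "RG02"
```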


Azure Stack HCI

In this presentation, we'll examine the Azure Stack HCI, or Hyper Converged Infrastructure.

[Video description begins] Screen title: Azure Stack HCI [Video description ends]

Now, despite the name Azure being in there, this doesn't really refer to something that is running in Azure. But rather, it's a local implementation of the same technology that's used in Azure data centers to run virtualized workloads in your own on premises data centers. Now this solution involves Microsoft tested and verified hardware from an OEM provider running Windows Server 2019 datacenter edition, and it can be managed through the new Windows Admin Center. Now, even though it's a local implementation, it's fully compatible with Azure services if desired. So you can configure features such as cloud-based backup, site recovery and monitoring along with others that we'll actually see in a few moments.

[Video description begins] Screen title: Use Cases [Video description ends]

But in terms of the use cases, you might choose to implement the Azure Stack HCI if you're looking to refresh aging hardware with a storage infrastructure that you know will be compatible with Azure services. And offer features such as Windows and Linux virtual machines, that can all be managed with familiar tools and applications. Or maybe you're looking to consolidate virtualized workloads onto a single efficient infrastructure or use it to connect to those optional Azure services in a hybrid configuration. Again, such as backup and site recovery for your existing workloads, but all with the knowledge that your implementation is running on validated hardware with cloud-based compatibility built in.

[Video description begins] Screen title: Hyperconverged Solution [Video description ends]

So in terms of some solutions using Azure HCI, you can combine resources into a highly virtualized, highly centralized cluster to simply make it easier to deploy, manage and scale your workloads. Also, with built-in support for features such as non-volatile memory, persistent memory, and remote direct memory access networking along with the enhanced security of shielded virtual machines, network segmentation and encryption, if desired.

[Video description begins] Screen title: Hybrid Capabilities [Video description ends]

So as mentioned, if you're implementing the Azure HCI in a hybrid cloud configuration, then you have access to integration with Azure services, including Azure Site Recovery for added high availability, Azure Monitor to track your applications and infrastructure with advanced analytics, a Cloud Witness to break quorum ties, Azure Backup for off-site protection (particularly against ransomware), Azure Update Management for assessment and deployment of updates to on-premises virtual machines, Azure Network Adapter to connect to Azure using a point-to-site VPN, and Azure File Sync to synchronize files to the cloud.

[Video description begins] Screen title: Features [Video description ends]

And as for features and configuration for the Azure HCI, it supports options such as Hyper-V, Storage Spaces Direct, Software Defined Networking and Failover Clustering.

[Video description begins] Screen title: Management Tools [Video description ends]

All of which can be managed and configured using standard tools such as the new Windows Admin Center, System Center Virtual Machine Manager and/or Operations Manager if you have them, PowerShell, of course, and any instance of Server Manager. So again, the idea is just to implement this very robust type of storage infrastructure that you can run on-premises but that you know is compatible with all of these cloud-based services, to achieve a much more robust and resilient configuration.


Cluster Sets

Cluster sets are a new scale-out technology in Windows Server 2019 that allows you to dramatically increase the number of nodes in a software-defined data center. So with respect to what they're all about, in the same way that a single cluster is a collection of individual nodes, a cluster set is a collection of failover clusters. That enables flexibility and fluidity across your member clusters, with a single unified storage namespace providing that same flexibility in terms of the portability of member virtual machines. So as mentioned, cluster sets allow you to significantly increase the scale of your solution in a software-defined data center by combining several smaller clusters into a single larger infrastructure.

They also let you manage the entire failover cluster lifecycle, including onboarding new members and retiring old ones, all without impacting the availability of the member virtual machines, since those can migrate seamlessly across the entire fabric. You can also easily adjust the compute-to-storage ratio and benefit from Azure-like fault domains and availability sets, both in terms of initial virtual machine placement and movement after the fact. And within the same single fabric, you can mix and match different generations of CPU hardware if necessary, although it is still recommended to keep everything as consistent as possible throughout the fabric.
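
A heavily hedged sketch of creating a cluster set and onboarding a member, using the cluster set cmdlets that ship with Windows Server 2019; all cluster and SOFS names here are illustrative, and the exact parameter set may differ slightly on your build:

```powershell
# Create the cluster set; the management cluster hosts the unified namespace root.
New-ClusterSet -ClusterName "CS-MASTER" -NamespaceRoot "CS-SOFS" -CimSession "MGMT-CLUSTER"

# Onboard an existing failover cluster as a member of the set.
Add-ClusterSetMember -ClusterName "MEMBER-CLUS1" -CimSession "CS-MASTER" -InfraSOFSName "MEMBER1-SOFS"

# List the member clusters in the set.
Get-ClusterSetMember -CimSession "CS-MASTER"
```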

[Video description begins] Screen title: Management Cluster [Video description ends]

Looking at the architecture, it begins with a management cluster. Which, as you might imagine, is responsible for the management plane of the entire cluster set. And is logically separated from other set members to ensure availability of the management plane in the event that an entire member cluster goes down, such as in a localized power failure. And additionally, the management cluster is responsible for maintaining the unified storage namespace for all other members.

[Video description begins] Screen title: Member Cluster [Video description ends]

Now, a member cluster, then, is of course any cluster other than the management cluster, and can loosely be thought of as a worker cluster, if you will. It typically takes the form of a more traditional hyper-converged cluster running virtual machines or perhaps Storage Spaces Direct workloads. The key distinctions of member clusters include that they participate in fault domains and availability sets. And as such, any virtual machines that move across boundaries in a cluster set must not reside in the management cluster; they must be in a member cluster.

[Video description begins] Screen title: Scale-Out File Server (SOFS) [Video description ends]

The cluster set namespace is provided by a scale-out file server that spans the entire set and implements cluster set namespace referrals for all SMB shares that reside within the set. Windows Server 2019 implements this as a SimpleReferral process that allows SMB clients to access the target of the share regardless of which host receives the request. The referral mechanism itself is generally described as lightweight, and as such doesn't participate in the I/O path; rather, the referrals are perpetually cached on the client nodes and are dynamically updated as needed to ensure clients are directed to the target appropriately.

[Video description begins] Screen title: Cluster Set Master [Video description ends]

The cluster set master, or CS-Master, as you might imagine from its name, coordinates all communication among set members. The CS-Master itself is implemented as a cluster resource like any other, and hence is itself highly available and resilient to member cluster failures or management node failures. Ultimately, it provides the endpoint for all cluster set management interactions and configuration.

[Video description begins] Screen title: Cluster Set Worker [Video description ends]

And along with the CS-Master, there is also a CS-Worker which interacts only with the CS master effectively as a liaison, if you will. To coordinate any local member cluster interactions as per the directions of the CS-Master. Such as individual virtual machine placement configurations or maybe resource inventory. Now, regardless of the number of member clusters, there is only ever one CS-Worker instance per cluster in the set. Which, of course, can be offloaded to another node in the event that the existing CS-Worker fails.

[Video description begins] Screen title: Fault Domain [Video description ends]

Now, a fault domain in a cluster represents a collection of hardware and software that has the possibility of failing together as a group, for example, all of the servers in a single rack that are fed by a single power supply. Now, the boundaries of any given fault domain are really just something that you have to consider in your design, as it's always possible for anything to fail. So really, it's up to you as an administrator to determine where those boundaries might exist, either by default or by design.

[Video description begins] Screen title: Availability Set [Video description ends]

So then, to help overcome any obstacles presented by a fault domain, you can implement availability sets to configure redundancy for your workloads, essentially ensuring that any given workload is spanned across more than one fault domain as opposed to being hosted entirely within one. This ensures that if one fault domain does fail, you still have other fault domains running redundant instances of that workload for increased availability.


Cluster-Aware Updating for Storage Spaces Direct

Cluster-Aware Updating has been enhanced in Windows Server 2019, one enhancement being the fact that it is now supported on Storage Spaces Direct, or S2D. Now, it's abbreviated S2D because there are two S's (Storage Spaces) and then, of course, the D for Direct; the reason they don't use SSD is that that abbreviation already stands for solid-state drive. So S2D is just a common abbreviation for Storage Spaces Direct. But the idea is just to be able to manage your overall update process for patching the nodes in a cluster in a much smoother and faster manner by reducing the number of restarts that are required. In previous implementations of updating cluster nodes, you essentially had to take a node down to apply the updates and then, of course, bring it back up. And this often involved moving workloads around, so it was a fairly tedious process.

[Video description begins] Screen title: CAU Process [Video description ends]

So with Cluster-Aware Updating, the process involves first assessing which hosts require the updates in the first place, as certain hosts may not need them; that in and of itself can reduce the restarts. And then serially applying patches while automatically moving workloads to live nodes, so that you don't have to continually restart all of the other nodes.
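
In practice, that process can be driven with the built-in CAU cmdlets. A minimal sketch, assuming a cluster named S2D-CLUSTER and the default Windows Update plug-in:

```powershell
# Add the CAU clustered role so the cluster can update itself on a schedule.
Add-CauClusterRole -ClusterName "S2D-CLUSTER" -Force

# Or trigger an on-demand updating run: CAU drains each node, patches it,
# restarts it if needed, and moves on to the next node serially.
Invoke-CauRun -ClusterName "S2D-CLUSTER" -CauPluginName "Microsoft.WindowsUpdatePlugin" -Force
```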

So again, it's just designed to maximize uptime and to ensure that all nodes are ultimately patched, but without having to restart them all, all the time. And the fact that Server 2019 supports this using Storage Spaces Direct can make it even easier still, because you can configure your cluster nodes using Storage Spaces Direct, which in and of itself is just a built-in feature of Windows Server 2019. So you don't need any additional hardware or anything specific that in itself may introduce the need for more updates.


Windows Admin Center Integration

Now, earlier in this course we did introduce the Windows Admin Center, which is the new browser-based management tool that is locally deployed and effectively serves as the replacement for Server Manager.

[Video description begins] Screen title: Windows Admin Center [Video description ends]

Now in this presentation we'll see how you can also integrate the Windows Admin Center into your cluster management.

[Video description begins] Screen title: Hyper-converged Clusters [Video description ends]

So when it comes to hyper-converged clusters, you can manage all of the virtualized components for your clusters including the compute, storage and networking components.

[Video description begins] Screen title: Key Features [Video description ends]

And essentially it allows you to create and manage storage spaces and virtual machines. It provides powerful cluster-wide monitoring with a dashboard that graphs memory and CPU usage, storage capacity, IOPS, throughput, and latency in real time, and all of this across every server in the cluster. And it also has full software-defined networking support, so you can manage and monitor your virtual networks and their subnets, and connect to all of the virtual machines in those virtual networks.

[Video description begins] Screen title: Integration [Video description ends]

And in terms of integration, again, you certainly can centralize the deployment, monitoring, and operation of your Hyper-V hosts and clusters using the Windows Admin Center, but it is a new feature. So for the time being, in terms of cluster support, it is somewhat limited to core operations only.

So there may be certain features that still require you to use Failover Cluster Manager for the time being, but the tool is still in development. So depending on when you are viewing this, there may be full functionality in the Windows Admin Center. And even though it was designed as a replacement for Server Manager, it is ultimately going to be implemented as a very centralized tool for almost all aspects of administration and management of your organization, including your hyper-converged clusters.


Self-Healing Failover Clusters

Now in this presentation, we'll take a look at self-healing failover clusters, which is a new resiliency feature that has been implemented into Windows Server 2019.

[Video description begins] Screen title: Self-healing Failover Clusters [Video description ends]

This, of course, just adds high availability to your clusters so that they remain even more available. And in many cases, this is quite transparent to an administrator, because it will automatically attempt to discover and repair any infrastructure issues.

[Video description begins] Screen title: Repair Process [Video description ends]

So quite simply, the repair process is invoked whenever any node is unable to communicate. Now clearly, this will depend on what the problem actually is. But when that node is unable to communicate, a repair attempt is initiated automatically. And if that can resolve the issue, it will also rejoin the node to the cluster. So again, it's possible that you might not even notice that anything ever happened with that node apart from maybe examining the logs.

[Video description begins] Screen title: Failover Clustering [Video description ends]

So ultimately, this is designed to be a key aspect of failover clustering for hybrid cloud configurations, application platform configurations and/or hyper converged infrastructure configurations that will hopefully just make a much more resilient and much more robust solution for your clusters.


Application Platform Container Improvements

Windows Server 2019 introduces several improvements to containers, beginning with the fact that you can now run both Windows-based and Linux-based containers on the same host using the same Docker daemon, which provides greater flexibility to your application developers due to this heterogeneous platform.
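
As a hedged illustration, with Docker configured for Linux containers on Windows (an experimental feature at the time of Windows Server 2019), the --platform flag selects which kind of container to run on the same host; the image names are public examples:

```powershell
# Run a Linux container and a Windows container side by side on the same Docker host.
docker run --rm --platform linux ubuntu:18.04 uname -a
docker run --rm --platform windows mcr.microsoft.com/windows/servercore:ltsc2019 cmd /c ver
```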

[Video description begins] Screen title: Kubernetes [Video description ends]

And in addition, Windows Server 2019 provides improvements to compute, networking, and storage that in turn offer built-in support for Kubernetes. Enhancements to networking resiliency and container networking plugins greatly improve the usability of Kubernetes on Windows. And since both Windows and Linux services are supported, Kubernetes workloads can take advantage of network security to protect both platforms.

[Video description begins] Screen title: Container Improvements [Video description ends]

And lastly, several improvements to the containers themselves have been introduced, including enhanced integrated Windows authentication for easier identity management compared to earlier versions of Windows Server. Improved application compatibility for containerizing Windows-based applications is provided through increased compatibility in the existing Windows Server Core image, plus a new base image, simply called Windows, is available for applications with additional API dependencies. And there is higher performance through smaller image download sizes, smaller disk space requirements, and decreased startup time.

And all of that within a better management experience by using the new Windows Admin Center where you can more easily see which containers are running and manage individual containers with the implementation of the new containers extension, which is available from the Windows Admin Center public feed. So hopefully these improvements will allow you to make a much more dynamic and compatible application environment, particularly in a hybrid or a private cloud configuration.


Virtual Workloads

In this presentation, we'll take a look at some improvements to network performance in Windows Server 2019, which can help to maximize the overall throughput to the virtual workloads running on your virtual machines, while at the same time lowering your operational and maintenance costs by effectively allowing you to consolidate more virtual machine workloads onto fewer physical hosts.

[Video description begins] Screen title: New Features [Video description ends]

Now, this is achieved through two new features. First, Receive Segment Coalescing, or RSC, in the virtual switch, which is an enhancement that coalesces multiple smaller TCP segments into a larger one before data traverses the virtual switch. Now, in earlier versions this was implemented, but it was done by the NIC itself, and it was disabled as soon as the adapter was connected to the virtual switch. So RSC in Windows Server 2019 addresses this by enabling RSC on external virtual switches by default.
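
A small hedged check of that behavior, assuming an external virtual switch named ExternalSwitch; the property and parameter names below reflect the Windows Server 2019 Hyper-V cmdlets, but verify them on your build:

```powershell
# Confirm whether RSC is enabled in the virtual switch.
Get-VMSwitch -Name "ExternalSwitch" | Select-Object Name, SoftwareRscEnabled

# Enable it explicitly if it has been turned off.
Set-VMSwitch -Name "ExternalSwitch" -EnableSoftwareRsc $true
```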

And secondly, Dynamic Virtual Machine Multi-Queue. Virtual Machine Multi-Queue itself isn't new; it allowed for higher overall throughput to a virtual machine as network speeds themselves grew very high. But oftentimes, the planning, the tuning, and the monitoring needed to take advantage of that increased speed were fairly time consuming. So Windows Server 2019 automatically implements these optimizations by dynamically spreading and tuning the processing of network workloads as necessary to ensure the highest levels of efficiency, while at the same time relieving that workload from your administrators.


Windows Time Service

In this presentation, we'll take a look at the Windows Time Service and some new features and protocols that have been introduced with Windows Server 2019 to improve overall time management and the accuracy of time related functions.

Now, the Windows Time Service itself is not new; it has been around for quite some time. But it is responsible for synchronizing date and time for all computers in Active Directory. Quite simply, you want your clients, your servers, and perhaps most notably, your domain controllers, to all agree as to what time it is. And this is critical for many Windows services and applications, perhaps most notably just logging in: your client and the domain controller need to agree as to what time it was when your logon occurred, and your Kerberos ticket validity depends, in fact, on that agreement. Now, there is a tolerance that can be configured, but generally, you just want everyone to agree.
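
For reference, the time service can be inspected from an elevated PowerShell prompt with the long-standing w32tm utility:

```powershell
# Show the local time service status, including the current source and offset.
w32tm /query /status

# Show the configured time sources (peers) and their reachability.
w32tm /query /peers
```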

[Video description begins] Screen title: Leap Second Support [Video description ends]

Now, many of the applications being developed these days are very time sensitive. So in terms of new features, Windows Server 2019 introduces leap second support. Now, this is conceptually similar to a leap year, whereby we add a day to compensate for the fact that what we think of as a year is not actually exactly 365 days.

So a leap second involves an occasional one-second adjustment to Coordinated Universal Time (UTC), which compensates for the fact that the rotation of the Earth is actually slowing down a little bit. So when the difference reaches a certain point, I believe it's 0.9 seconds, in fact, this one-second adjustment is added. So again, your applications that depend on accuracy and traceability will all still agree as to what the time is in relation to mean solar time.

[Video description begins] Screen title: Precision Time Protocol [Video description ends]

And finally, Windows Server 2019 introduces a new time provider known as the Precision Time Protocol, or PTP. And this takes into account the fact that as time is distributed across a network, it will inevitably encounter delay; there is always latency with respect to all of the networking devices that it has to cross and all the processing that has to happen.

So, again, if that's not accounted for and/or if it's not symmetric across all systems, it can simply become increasingly difficult for timestamps to have any meaning in terms of the time when it was issued, and the time when the client believes it is right now. So, again, this just allows you to compensate for latency and processing time to ensure that there is as much accuracy as possible between the time where the timestamp is issued, and what time the client thinks it is right now.


High Performance SDN Gateways

Now, in this video we'll take a look at some improvements to high performance gateways in Windows Server 2019. And this addresses an aspect of software-defined networking in Windows Server 2016 whereby the throughput requirements of modern networks really just weren't met by the software-defined gateway. Specifically, it had to do with connections for GRE, or the Generic Routing Encapsulation protocol, and IPsec, or Internet Protocol Security. So when it comes to GRE connections, the throughput has been increased to 15 gigabits per second, up from 2.5 gigabits per second in Windows Server 2016.

And there's no manual configuration involved to enable this high performance gateway: as soon as you deploy or upgrade to Windows Server 2019 on the gateway virtual machine, you should automatically see the enhanced performance, so you don't have to do anything to configure it. And in terms of IPsec connections, the throughput was increased to 1.8 gigabits per second, from only 300 megabits per second in Windows Server 2016. And this is enabled through your Services console: just go in there and locate the Azure Gateway Service, set its startup type to Automatic, and then restart the gateway.
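
A heavily hedged sketch of that IPsec step from PowerShell; the exact service name on the gateway VM is an assumption here, so locate it first and then substitute the name you find:

```powershell
# Find the gateway service; its display name includes "Gateway" on SDN gateway VMs.
Get-Service | Where-Object { $_.DisplayName -like "*Gateway*" }

# 'GatewayService' is an assumed name; substitute the actual name returned above,
# then restart the gateway as described above.
Set-Service -Name "GatewayService" -StartupType Automatic
```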

And of course make sure that you do that for all gateway systems if you do have more than one. But that should enable the high performance gateway for that system as well once it is running Windows Server 2019. So again, it really was just a little bit of a bottleneck in Windows Server 2016. And the improvements that Windows Server 2019 has introduced should meet the demands of most modern networks these days.


Encrypted Networks

In this presentation we'll examine virtual network encryption which does exactly as its name indicates. It will encrypt the virtual network traffic between the virtual machines on that network. Now, it is something with which you can be selective. In other words, you do have to make sure that any given subnet is marked as Encryption Enabled for the feature to be on.

[Video description begins] Screen title: Datagram Transport Layer security (DTLS) [Video description ends]

Now, it uses Datagram Transport Layer Security, or DTLS, as the encryption means, because this provides protection against eavesdropping, tampering, and forgery by anyone who might have access to the underlying physical network.

[Video description begins] Screen title: Virtual Network Encryption Requirements [Video description ends]

Now, in terms of the requirements, you do need an encryption certificate installed on each of the SDN-enabled Hyper-V hosts. You need a credential object configured in the Network Controller that references the thumbprint of that certificate. And you need to specifically configure each virtual network that contains subnets requiring encryption to ensure that it is implemented as desired.
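
A heavily hedged sketch of the credential portion of that setup using the NetworkController module; the connection URI, resource ID, and thumbprint are all illustrative placeholders, and the subnet itself is then marked as Encryption Enabled in its virtual network configuration:

```powershell
# Create a Network Controller credential object that references the encryption certificate
# by its thumbprint; the SDN-enabled Hyper-V hosts must already hold this certificate.
$props = New-Object Microsoft.Windows.NetworkController.CredentialProperties
$props.Type  = "X509Certificate"
$props.Value = "<certificate-thumbprint>"   # illustrative placeholder

New-NetworkControllerCredential -ConnectionUri "https://nc.contoso.local" `
    -ResourceId "EncryptionCertCredential" -Properties $props
```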

[Video description begins] Screen title: Encryption [Video description ends]

Now, in terms of the encryption itself, again, once it is enabled on a subnet, all traffic is encrypted automatically; in other words, you don't have to do anything in addition to enabling this feature. Once you have met all the previous requirements, you just enable it and the encryption is in place automatically, but it applies only within the subnet. So traffic between subnets is decrypted automatically and sent along unencrypted, and the same goes for any traffic that crosses the virtual network boundary in any way: it will also be decrypted automatically and sent along its way. So again, you don't have to do anything, but just be mindful of the fact that the encryption applies only within the subnet itself.