Moving from Traditional to HCI.

The previous stage was to move workloads from physical servers to virtual servers. If you're still in the process of doing that, look at the Microsoft Sysinternals Suite program called "Disk2vhd" (a quick command-line example follows below). If you don't have a SAN, check out the Windows Server iSCSI Target Server role or FreeNAS (renamed TrueNAS CORE). TrueNAS CORE requires block-level disk access to properly use ZFS, if you're inclined to go that route.
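On the Disk2vhd note: besides the GUI, it can be driven from a command line. A minimal sketch, where the output path is my own placeholder:

# Capture all volumes of the running machine into a single VHDX (run elevated)
disk2vhd.exe * D:\P2V\server01.vhdx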

All of these options exist as-a-service as well, with cloud offerings at varying degrees of management. HCI solutions include Azure Stack HCI, Nutanix AOS, and VMware VxRail, among others.

I feel that Rackspace seriously pushed the hyper-converged infrastructure space with their massive deployment of OpenStack, which is essentially a fully elastic compute resource platform, which is what these products aim to be as well. However, the commercial products are meant to integrate more easily with existing business environments, and they charge for that convenience, whereas OpenStack is free. Choose what suits you best.

The purpose of these types of software is to make computing resources more malleable, fungible, redundant, resilient, compressed, and trimmed; in general, to perform extremely well while easily adapting to environmental changes.

You can try out Azure Stack HCI yourself, but if you don't have multiple physical servers to test it on, you'll need nested virtualization. On an AMD CPU that means updating your VMs to configuration version 10 in Hyper-V, which requires a beta build of Windows 10. On most Intel chips, nested virtualization should work on the latest current release of Windows 10.

https://clouddamcdnprodep.azureedge.net/gdc/gdcrknhJW/original

Setting it up, even in a test environment, surfaces a lot of "gotchas" that I would like to outline at some point.

Azure Stack HCI installs from a flash drive very much like Windows does.

To give it a go for yourself click this link:

https://azure.microsoft.com/en-us/products/azure-stack/hci/

I'm going to go through a demo of my home lab environment to test this software out. Note that MSFT probably doesn't recommend my home lab hardware as "production ready", so check the hardware compatibility lists to see whether your hardware qualifies before using this in a production environment. I expect that very few companies would have hardware where deploying Azure Stack HCI manually makes sense versus buying hardware with Azure Stack HCI pre-installed. Regardless, I would like to familiarize myself with the application and how to use it.

Firstly, you need a CPU and Windows OS that support nested virtualization with Hyper-V. On my Windows 10 PC I had to enable the Insider "Beta channel" to get to Windows 10 build 19636 or higher, because I have an AMD Ryzen chip. – https://rcpmag.com/articles/2020/06/11/windows-10-amd-machines-now-support-nested-virtualization.aspx#:~:text=The%20new%20nested%20virtualization%20preview,10%20build%2019636%20or%20higher

To check your current Windows build, click Start and enter "winver".
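If you'd rather check from PowerShell, something like this works (the build threshold comes from the article above):

# Nested virtualization on AMD needs Windows 10 build 19636 or higher
(Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').CurrentBuild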

Next, I recommend making a Windows Server 2019 Gold Image such that you can use it to spin up new VMs quickly in your home lab environment.

Before creating the gold image, though, you will need a copy of Hyper-V. Since I'm just virtualizing this on my desktop, click Start and type "turn windows features on or off" (or run "optionalfeatures.exe"), then select the checkbox to install Hyper-V. If it asks you to reboot, do so. Another option is a physical box running Windows Server with Desktop Experience and the Hyper-V role installed, or you could install Hyper-V Server 2019 Core and manage it through a command-line program called "sconfig".
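The same feature can also be installed from an elevated PowerShell prompt, if you prefer:

# Installs the Hyper-V platform and management tools; reboot when prompted
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All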

Before even that, you need to ensure that virtualization is enabled in your BIOS/UEFI setup. You can check the system requirements for Hyper-V here: https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/system-requirements-for-hyper-v-on-windows

The CPU will need to support features such as SLAT (Second Level Address Translation), also known as nested paging or extended page tables (EPT). This could be its own deep dive; just know that it improves virtualization performance.

Also, hardware DEP must be enabled. For more info: https://docs.microsoft.com/en-us/troubleshoot/windows-client/performance/determine-hardware-dep-available#:~:text=Data%20Execution%20Prevention%20(DEP)%20is,location%20explicitly%20contains%20executable%20code.
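A quick way to verify these prerequisites at once is the systeminfo report, which has a "Hyper-V Requirements" section covering SLAT, DEP, and firmware virtualization:

# The lines after the header show SLAT, DEP, and firmware virtualization status
systeminfo.exe | Select-String 'Hyper-V Requirements' -Context 0,3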

Once you have:

  • BIOS/UEFI – system setting enabled for virtualization
  • Operating system – Windows Server 20xx with the Hyper-V role, a Windows 10 machine updated to support nested virtualization with the Hyper-V feature enabled, or Hyper-V Server Core

For this lab I'm using Windows 10 dev build 21322 with the Hyper-V feature enabled.

Getting the feature installed on Windows 10

Next, you'll want to create a GOLD image, or MASTER image, whichever you prefer to call it. Create the gold image in a different folder from your production clones; this just ensures you don't have duplicate files in the same folder.

Creating a Gold/Master Image in Hyper-V
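If you prefer PowerShell over the New Virtual Machine wizard, here's a sketch; the VM name, paths, and sizes are my own assumptions:

# Create a Gen 2 VM with a fresh VHDX to install Windows Server 2019 onto
New-VM -Name 'WS2019-Gold' -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath 'D:\HyperV\Gold\WS2019-Gold.vhdx' -NewVHDSizeBytes 60GB `
    -SwitchName 'Default Switch'
# Attach the Server 2019 ISO and make it the first boot device
Add-VMDvdDrive -VMName 'WS2019-Gold' -Path 'D:\ISO\WS2019.iso'
Set-VMFirmware -VMName 'WS2019-Gold' -FirstBootDevice (Get-VMDvdDrive -VMName 'WS2019-Gold')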

Next, you'll need to upgrade your Hyper-V VM configuration from version 9 to version 10. As mentioned earlier, on AMD chips this can only be done on the dev build of Windows 10.

Upgrading Hyper-V VM Configuration Version
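The upgrade itself is one cmdlet; 'WS2019-Gold' is the example name from above, and the VM must be powered off:

# Show current configuration versions, then upgrade the gold image
Get-VM | Format-Table Name, Version
Update-VMVersion -Name 'WS2019-Gold'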

Now, you'll want to clone the gold image several times. In Hyper-V there is no "clone" button, a feature you may know from virtualization software like VirtualBox. Instead, the way to clone a VM is to export it, which saves it to disk, and then import it as a new VM. When importing, be CERTAIN to select the option to create the VM with a new UNIQUE ID; otherwise, you'll have an ID conflict between the gold image and the clone. Since you're copying the same VHDX file each time, each copy will need to be renamed and the VM pointed at the newly named VHDX file.

Cloning in Hyper-V
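In PowerShell the export/import dance looks roughly like this; the paths are assumptions, and the .vmcx file name is a GUID you'll find in the export folder:

# Export the gold image to disk
Export-VM -Name 'WS2019-Gold' -Path 'D:\HyperV\Export'
# Import as a copy with a new unique ID so it can't conflict with the gold image
Import-VM -Path 'D:\HyperV\Export\WS2019-Gold\Virtual Machines\<guid>.vmcx' `
    -Copy -GenerateNewId `
    -VirtualMachinePath 'D:\HyperV\Clones\PDC' `
    -VhdDestinationPath 'D:\HyperV\Clones\PDC' | Rename-VM -NewName 'PDC'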

Note that the standard Hyper-V default path is:

C:\Users\Public\Documents\Hyper-V\Virtual hard disks

For Azure Stack HCI you will need a primary domain controller to serve DNS and other Active Directory related tasks, as well as your hyper-converged hosts. For the lab we'll just use PDC for the primary domain controller and HC01 and HC02 for the hyper-converged hosts (you need a minimum of two). You will also need a VM to run the Azure Stack HCI administration and orchestration app, Windows Admin Center. I'm just going to call it the Azure Stack HCI "head" for short and name it HCHead.

  • Primary Domain Controller running Active Directory Services
  • Azure Stack HCI Head – Management Device
  • Hyperconverged Host 1
  • Hyperconverged Host 2

The next step is to configure Active Directory Domain Services.

Start up the PDC and set up AD DS.

This VOD is for convenience and not specifically one I’ve curated as “best”.

The first thing I do once RDP'd into the PDC VM is change the hostname to match the VM name, PDC, and then reboot. I believe the wizard may have that option, but it's habit at this point.
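Sketched out in PowerShell, using the contoso.com domain the lab settles on below (run each step elevated; Install-ADDSForest will prompt for a DSRM password):

# Rename the guest to match the VM name, then reboot
Rename-Computer -NewName PDC -Restart

# After the reboot: install AD DS and promote this server to a new forest
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSForest -DomainName contoso.com -InstallDns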

If we're going for best practice, we should also go ahead and create a secondary domain controller, which I'll call DC2. Any additional read-only domain controllers I'll name with RODC in the name (e.g. RODC1), or with a location prefix using the nearest three-letter airport code, such as CHIRODC3, or CHIDC and CHIDC2 for writable DCs in "Chicago", as an example.

Starting with two writable DCs at your primary location, such as a corporate office, is ideal so that you have failover. Most MSFT MVPs and others will say that it has historically been best practice to keep the primary DC physical, because Active Directory was designed to run right on bare metal. Some settings have to be tweaked for DCs in a virtualized environment, most notably those that affect time drift. My personal recommendation: if your PDC is physical, your other writable DC should be physical, and RODCs can then be virtualized; if your PDC is virtual, the others should be virtual too. This just happens to be what has worked best for me when doing labs. Feel free to make your own judgement on how best to do that.

For the lab environment I went with “contoso.com” the common fictitious company MSFT uses in their documentation.

While creating the VMs I had used the Default Switch. This is great for communication from one VM to another within the VM environment. However, I would like my LAN devices to be able to communicate with the virtualized devices, so I have to make an external virtual switch in Hyper-V Manager and then attach the VMs to it. This means that my physical router is handling DHCP and LAN settings.
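Creating the switch and re-attaching a VM can also be done from PowerShell; the switch name is my choice, and the physical adapter name varies per machine:

# Bind an external switch to the physical NIC, keeping host connectivity
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true
# Move an existing VM onto the new switch
Connect-VMNetworkAdapter -VMName PDC -SwitchName 'External'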

PDC Static IP and DNS

Since the DC will be responsible for DNS, it will point to itself as the primary DNS server and use root hints to reach the public DNS servers. The alternate DNS address will be the IP address of the secondary writable domain controller.
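A sketch of that configuration from inside the PDC guest, with example addresses from my 192.168.1.x LAN (DC2's address assumed):

# Static IP for the PDC (gateway is the physical router)
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.1.10 `
    -PrefixLength 24 -DefaultGateway 192.168.1.1
# DNS: itself first, the secondary writable DC as alternate
Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' `
    -ServerAddresses 192.168.1.10, 192.168.1.11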

I'm ignoring these warnings: the first flags something I don't really need to block, since I don't authenticate with older systems here, and the second just doesn't apply to my situation. I would block it for extra security in a production environment.

Note that since DNS points to itself and the DNS role hasn't been installed yet, the server can't resolve names in the outside world until DNS is in place with its root hints, which reference the public root DNS servers.

Once the server has rebooted with AD DS and DNS configured, you'll notice that you're able to ping websites on the public internet. This is because the Windows DNS server comes preloaded with the root hints.

Between the DNS records the server holds itself and the root hints, you can configure a "forwarder". I usually add services like Cloudflare, Google, or OpenDNS as forwarders after I have performed a DNS performance test; you can use desktop apps or a web app like dnsperf.com.
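Adding a forwarder is a one-liner once the DNS role is up; the resolver addresses below (Cloudflare and Google) are just the ones I'd test first:

# Forward unresolved queries to public resolvers before falling back to root hints
Add-DnsServerForwarder -IPAddress 1.1.1.1, 8.8.8.8
Get-DnsServerForwarder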

Secondary DC

The next step is to boot the HEAD and install Windows Admin Center. You could also name the server WAC, for "Windows Admin Center", since this is the utility needed to manage the hyper-converged hosts.

Boot up the WAC/HEAD VM, then click on the "check out Windows Admin Center" link. Leave that up, change the server name to match WAC, and join the server to the domain. This is a critical step.
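If you'd rather do the rename and domain join from PowerShell on the VM, a hedged example (assuming the contoso.com lab domain and a domain admin account):

# Rename to WAC and join the domain in one pass, then reboot
Add-Computer -DomainName contoso.com -NewName WAC -Credential contoso\Administrator -Restart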

Now, I can just pop that URL into the address bar of my PC's browser and remotely connect to it. My PC isn't part of the domain, so it can't resolve the WAC.contoso.com name; I have to connect by IP address or join my PC to the same domain. I'll just do it by IP here.

https://192.168.1.x:443

The HC01 and HC02 servers should have Azure Stack HCI installed on them, and they should now be joined to the domain.

Now, go to the Windows Admin Center web GUI.

Next, we need to ensure nested virtualization is enabled on these VMs. There is no GUI method I have found for this; it's PowerShell only.

https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/nested-virtualization

Power off the HC hosts.

Open an elevated PowerShell window and enable the virtualization extensions for the HC hosts.

# Both hosts need the extensions exposed; -VMName accepts a list
Set-VMProcessor -VMName HC01, HC02 -ExposeVirtualizationExtensions $true

Enable MAC address spoofing.
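Following the nested virtualization doc linked above, that's one more line against both lab hosts:

# Allow the nested VMs' packets to traverse the hosts' virtual switch
Get-VMNetworkAdapter -VMName HC01, HC02 | Set-VMNetworkAdapter -MacAddressSpoofing On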

Power the VMs back on.

This keeps failing at the installation of the nested Hyper-V role.

I have found an article from last year saying that this can be done:

https://techcommunity.microsoft.com/t5/virtualization/amd-nested-virtualization-support/ba-p/1434841

It specifically mentions configuration version 9.3, while I'm running 10.0 in my lab. I'll create new HC hosts at the 9.3 version.
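A sketch of pinning the new hosts to 9.3 at creation time; the paths and sizes are my assumptions, and whether 9.3 is offered depends on the host build:

# List configuration versions this host supports, then create a VM pinned to 9.3
Get-VMHostSupportedVersion
New-VM -Name HC01 -Generation 2 -MemoryStartupBytes 8GB -Version '9.3' `
    -NewVHDPath 'D:\HyperV\HC01\HC01.vhdx' -NewVHDSizeBytes 100GB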

Huh, you MUST USE 9.3? What!? Looks like MSFT scrapped this to keep Intel domination.
