My top 5 features of VMware vSphere 6.5

5. VMware vCenter High Availability

Since many functions rely on vCenter being present and available, making the vCenter Server itself highly available is crucial for large-scale deployments. In the past there have been a few solutions that tried to handle this, but now it’s built in from the moment you fire up your server. If you’re using the vCenter Server Appliance, you have the ability to configure it to run in a highly available fashion using “vCenter High Availability”. Now, I need to emphasize that this is exclusively for the vCenter Server Appliance, not for the Windows version of vCenter. You get an active node, a passive node and a witness node that make the vCenter Server available at the click of a button. Simple.

4. vSphere Update Manager

Not really a new thing, but as of vCenter 6.5, vSphere Update Manager is available for the vCenter Server Appliance as well – making a separate server unnecessary. Finally!

3. VMware PowerCLI

I’m a huge fan of automation, and quite frankly PowerCLI has to be on the list for that reason alone, but there are also some significant improvements to the different modules. A big leap forward in terms of automation.

2. Virtual Machine Encryption

For the security-conscious people out there, this should be a huge deal. The ability to encrypt virtual machines from the outside means encryption is independent of the guest OS and also storage agnostic.

1. Enhancements for Nested ESXi

I do a lot of demos and testing in lab environments, so I really appreciate the improvements made for nested environments. There has been a Fling available for some time that reduced the traffic and load on the individual hosts through a MAC learning algorithm. It has now been integrated into the full release of vSphere 6.5, which results in a significant performance improvement for nested ESXi environments.

An honorable mention goes to vSAN 6.5, with improvements such as iSCSI support for physical workloads, 2-node direct connect for small deployments and, of course, PowerCLI cmdlet support for vSAN.

VMware vSAN iSCSI use case?

In a previous post I went through a list of my personal favorite features in vSphere 6.5. VMware vSAN is one of them: it’s a storage solution that handles your virtual machines – amongst other things, as we shall see today.

One of the nicest features introduced for vSAN in vSphere 6.5 is the ability to attach physical servers to the vSAN cluster. Since an iSCSI volume is just another object in vSAN, you can apply the same policies to that volume as you do to the vSAN datastore used for your virtual machines. That means you can manage its performance and/or availability not just when initially setting up the volume, but any time the requirements for it change.

So is it hard to set up vSAN with iSCSI targets and attach physical servers to it? Not at all, it’s actually quite easy. Let me show you how easy it is. But first we have to create a vSphere cluster with vSAN enabled and configured. Now, I’m a big fan of PowerCLI, and thanks to William Lam, automating a lab environment set-up is a breeze. I’ve adjusted his script slightly to let me choose between three sizes for the ESXi hosts: small, medium or large. This allows me to have a lab consisting of 3 ESXi hosts and a vCenter appliance up and running within 30 minutes, configured with a cluster and vSAN enabled. The medium and large configurations would normally be used for VMware NSX labs and demos.

It might not be the intended use case, but wouldn’t it be fun to host virtual machines from a Hyper-V environment on vSAN? “Why?” I hear you say. No reason, just because we can!

PowerCLI

So the first thing to do is execute the PowerCLI script to bring my lab environment online:
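For reference, the call boils down to something like this. The wrapper script name and its parameters are my own additions around William Lam’s deployment script (treat them as placeholders), while the verification lines are standard PowerCLI:

    # Kick off the nested lab deployment (hypothetical wrapper around William Lam's script)
    .\Deploy-NestedLab.ps1 -Size Medium -NumHosts 3

    # Once it finishes, connect to the new vCenter appliance and check the hosts in the cluster
    Connect-VIServer -Server vcsa.lab.local -Credential (Get-Credential)
    Get-Cluster | Get-VMHost | Select-Object Name, ConnectionState, NumCpu, MemoryTotalGB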

Logging on to vCenter will now show the three hosts in a cluster with a new datastore created, a vSAN datastore.

Next, we need to enable the iSCSI functionality, which is a simple process:

Enabling the Virtual SAN iSCSI target service requires some input, specifically (there’s also a PowerCLI sketch for the same settings after this list):

  • Default iSCSI network to use – in a production environment you’d probably want to set up a dedicated VMkernel iSCSI interface for this.
  • Default TCP port – usually there’s no need to change this.
  • Default authentication – if you’d like, you can set the connection to use None, CHAP or Mutual CHAP.
  • Storage Policy for the home object – using the predefined storage policies, you can have the home object, which stores metadata for the iSCSI target, protected against host failures.
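If you prefer PowerCLI over the Web Client for this step, recent PowerCLI releases expose the vSAN cluster configuration. A minimal sketch, assuming the cluster is called “vSAN-Cluster”; the iSCSI-related parameter names may differ between PowerCLI versions, and the network, port, authentication and home-object defaults from the list above can be set in the same call or afterwards in the Web Client:

    # Enable the vSAN iSCSI target service on the cluster.
    # Parameter names may differ between releases - verify with Get-Help Set-VsanClusterConfiguration.
    $cluster = Get-Cluster -Name "vSAN-Cluster"
    Set-VsanClusterConfiguration -Configuration $cluster -IscsiTargetServiceEnabled $true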

If you want, you can create a new policy with specific settings for iSCSI Targets and Home object:

The default vSAN storage policy is used.

It protects you against one host failure; to do that, it places the object on two different hosts. This means it will consume twice the space assigned to an object/VMDK file from the vSAN datastore.
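If you’d rather script the policy than click through the wizard, the SPBM cmdlets in PowerCLI can create an equivalent one-host-failure policy. A sketch, where the policy name is just an example:

    # Create a vSAN storage policy that tolerates one host failure (FTT=1)
    $ftt     = New-SpbmRule -Capability (Get-SpbmCapability -Name "VSAN.hostFailuresToTolerate") -Value 1
    $ruleSet = New-SpbmRuleSet -AllOfRules $ftt
    New-SpbmStoragePolicy -Name "iSCSI-FTT1" -AnyOfRuleSets $ruleSet -Description "FTT=1 policy for vSAN iSCSI objects"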

Next up, creating a target and LUN:

You create both the Target IQN (with the desired settings and policies) and a LUN. Depending on the LUN Storage Policy you use, you will consume space from the vSAN datastore accordingly:
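Later PowerCLI releases also expose dedicated vSAN iSCSI cmdlets for this step. Roughly like this – but I’m writing the cmdlet and parameter names from memory, so double-check them against your PowerCLI version; the alias, LUN name and size are examples:

    # Create an iSCSI target on the vSAN cluster and carve out a 100 GB LUN from it,
    # both using the default vSAN storage policy
    $cluster = Get-Cluster -Name "vSAN-Cluster"
    $policy  = Get-SpbmStoragePolicy -Name "Virtual SAN Default Storage Policy"
    $target  = New-VsanIscsiTarget -Cluster $cluster -Alias "hyperv-target" -StoragePolicy $policy
    New-VsanIscsiLun -Target $target -Name "hyperv-lun0" -CapacityGB 100 -StoragePolicy $policy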

Now the set-up of the Target and the LUN is done. It’s that simple! After this point you can attach any physical server using iSCSI to the vSAN datastore and use the resources. In my lab though, I’m going to use a virtual machine from another host to emulate a physical machine.

You probably don’t want just any old server being able to attach to the LUN, so you can configure which initiators are allowed to connect. Either use individual IQNs or create a group of IQNs.

Let’s take a look at what has happened on the vSAN datastore:

We can now clearly see that there’s a new VMDK file, which we can handle just like any other VMDK file – inflate it, move it or delete it:

Next up: creating a Windows Server 2016 VM (emulating a physical machine) that will connect to and use the new vSAN LUN we created.

Setting up the VM is pretty straightforward.

Select the desired OS:

Here’s where we need to make changes: first, assign at least two CPUs to the VM, and secondly, tick the box for hardware virtualization (otherwise the Hyper-V role won’t install or start).
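For the PowerCLI-minded, the same VM can be created and prepared without the wizard. A sketch, where the VM name, host, sizing and portgroup are examples from my lab, and where the hardware virtualization flag is set through the vSphere API since, as far as I know, New-VM doesn’t expose it directly:

    # Create the "physical" Windows Server 2016 VM with two vCPUs
    $vm = New-VM -Name "hyperv01" -VMHost "esxi-mgmt.lab.local" -NumCpu 2 -MemoryGB 8 `
                 -DiskGB 60 -NetworkName "VM Network" -GuestId "windows9Server64Guest"

    # Expose hardware-assisted virtualization to the guest so the Hyper-V role can run
    $spec = New-Object VMware.Vim.VirtualMachineConfigSpec
    $spec.NestedHVEnabled = $true
    $vm.ExtensionData.ReconfigVM($spec)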

The summary page shows us what settings we’ve selected for the VM

We run through the installation of the operating system and configure everything we need: a static IP address, Windows updates and so on:
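If you’d rather script that part of the guest configuration, the equivalent PowerShell inside the VM looks roughly like this (the interface alias, addresses and computer name are just examples from my lab):

    # Set a static IP address and DNS server on the first NIC
    New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 192.168.1.50 -PrefixLength 24 -DefaultGateway 192.168.1.1
    Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 192.168.1.10

    # Rename the machine and reboot to apply the change
    Rename-Computer -NewName "HYPERV01" -Restart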

OK, Windows Server 2016 is ready to be used, let’s configure the iSCSI connection:

Yes, we’d like to start the iSCSI service:

Again, in a production environment you’d probably want to have some fancy stuff set up, such as MPIO, but for this test we’ll just do a quick connect to the iSCSI target:

Oh, but what IP address should I use? The host that is responsible for the I/O to the LUN can be found on the configuration page of the iSCSI Target:

We connect Windows to the I/O Owner host:
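The same connection can be made from PowerShell instead of the iSCSI Initiator GUI. A small sketch, where the portal address is the I/O owner host from my lab (use your own, of course):

    # Make sure the iSCSI initiator service is running and starts automatically
    Set-Service -Name MSiSCSI -StartupType Automatic
    Start-Service -Name MSiSCSI

    # Point the initiator at the vSAN I/O owner host and connect to the discovered target
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.21
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true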

We have a connection, all is well:

Open up the disk management tool and a new disk should be available (if not, try a rescan). Bring the disk online, initialize it and format it:

Give the volume a name:
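If you’d rather skip the Disk Management GUI, the same steps – online, initialize, partition, format and label – can be done in one go with PowerShell. This assumes the new iSCSI LUN is the only RAW disk on the machine:

    # Bring the new iSCSI disk online and clear the read-only flag
    $disk = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }
    $disk | Set-Disk -IsOffline $false
    $disk | Set-Disk -IsReadOnly $false

    # Initialize, partition and format the disk, and give the volume a label
    $disk | Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "vSAN-LUN"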

Now the volume is ready to be used:

Hyper-V will now be installed on the Windows Server 2016 machine:

Add the required features and tools

Configure a virtual switch for Hyper-V
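The PowerShell equivalent of these two steps is short enough to include here (the switch and NIC names are examples):

    # Install the Hyper-V role plus the management tools, then reboot
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # After the reboot, create an external virtual switch bound to the first NIC
    New-VMSwitch -Name "External" -NetAdapterName "Ethernet0" -AllowManagementOS $true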

Now we can actually leverage the newly created vSAN volume and use that as the target for our Hyper-V virtual machines:

A simple test to confirm that it actually works: creating a virtual machine in Hyper-V running off the vSAN volume:

Go through the wizard, give the VM a name and so on.

Choose what generation the VM should be created as:

Select a network connection if needed

Set the size of the virtual hard disk.

Select an ISO file to be used when installing the OS in the Hyper-V virtual machine

Done, just hit finish
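For reference, the whole wizard boils down to a few lines of Hyper-V PowerShell. The VM name, memory size, VHDX path on the vSAN-backed volume (E: in my lab) and ISO path are examples:

    # Create a generation 2 VM with its virtual disk on the vSAN-backed volume
    New-VM -Name "NestedVM01" -Generation 2 -MemoryStartupBytes 2GB `
           -NewVHDPath "E:\Hyper-V\NestedVM01\NestedVM01.vhdx" -NewVHDSizeBytes 40GB `
           -SwitchName "External"

    # Attach the installation ISO and make it the first boot device
    Add-VMDvdDrive -VMName "NestedVM01" -Path "E:\ISO\WindowsServer2016.iso"
    Set-VMFirmware -VMName "NestedVM01" -FirstBootDevice (Get-VMDvdDrive -VMName "NestedVM01")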

The VM is created, start it and connect to it to get the console view

Install the operating system and update it, configure it the way you want it

Now we’re running a virtual machine in Hyper-V with its disk being handled by VMware vSAN! Pretty cool.

Now we can go back to see what impact, if any, installing the operating system had on consumed space on the vSAN volume:

As you can see, the VMDK file containing the Hyper-V volume has grown, just as we expected.

There you have it! It’s very, very easy to set up vSAN to use iSCSI – why not give it a try yourself?

Windows 2016 ReFS as a Veeam backup repository?

Do we really need dedupe storage for our long-term backups? Well, dedupe storage certainly has its merits in the toolbox when you design your backup environment. But we do need to be aware of some of the drawbacks of dedupe, for instance performance for random I/O and rehydration of data when doing synthetic full backups. However, there’s been a great disturbance in the Force recently.

Microsoft made a huge release earlier this year with the “2016” suite, so I thought it’d be a good idea to go through how to set up a server to be used as a repository for Veeam Backup & Replication 9.5 and show some of the results. More specifically, let’s use the amazing Resilient File System (ReFS) v3 found in Windows Server 2016.

Windows Server 2016 can be installed in two modes, with or without a GUI. Now, if no GUI is installed you can use PowerShell to do the configuration, but for simplicity we’re going to use the GUI, which is called “Desktop Experience” in the installer.

So boot your server from the DVD/ISO and select Windows Server 2016 (Desktop Experience) in the flavor of your choice.

Desktop Experience

In the next step, select “Custom: Install Windows only (advanced)”

Install

Select the drive to install Windows on:

C drive

Now the installation will start copying files to the drive and commence the installation of Windows.

Start installation

Once the installation is finished, click on Tools and select Computer Management in the Server Manager window:

Computer management

Open up the Disk Management tool; this is where you find the attached drives on the server. In my case I have one drive where I installed Windows and two additional drives. I’ll run some benchmarks later, so I’m going to format one drive with ReFS and the other one with NTFS for comparison, but first let’s bring them online:

Disk online

Then initialize the disks:

Initialize disk

Next up, we create a new simple volume on one of the disks:

Simple volume

Select file system and give the volume a name:

ReFS volume

Next disk we format with NTFS instead:


The result looks like this:

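If you installed without the Desktop Experience, or simply prefer scripting, the same disk preparation can be done in PowerShell. This sketch assumes the two data disks show up as disks 1 and 2; the 64 KB allocation unit size is what’s commonly recommended for ReFS backup repositories:

    # Bring both data disks online and initialize them
    1, 2 | ForEach-Object { Set-Disk -Number $_ -IsOffline $false; Initialize-Disk -Number $_ -PartitionStyle GPT }

    # Format disk 1 with ReFS (64 KB clusters) and disk 2 with NTFS for the comparison
    New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem ReFS -AllocationUnitSize 65536 -NewFileSystemLabel "Repo-ReFS"
    New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Repo-NTFS"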

So let’s test it out and see if there’s any noticeable difference when using these as target for our backups.

I’ve created a folder on each of the disks and made two Veeam backup repositories. I’ve then created two jobs backing up the same VM running Windows Server 2016 (with different targets) and set up synthetic full backups – this is where Windows ReFS should really shine, if you believe the hype.
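If you like to script Veeam as well, the two repositories can be registered with Veeam’s PowerShell snap-in. A sketch, where the repository names and folder paths are mine and where you should verify the Add-VBRBackupRepository parameters against your Veeam version:

    # Load the Veeam snap-in and register the two folders as Windows backup repositories
    Add-PSSnapin VeeamPSSnapin
    Add-VBRBackupRepository -Name "Repo-ReFS" -Server (Get-VBRLocalhost) -Folder "R:\Backups" -Type WinLocal
    Add-VBRBackupRepository -Name "Repo-NTFS" -Server (Get-VBRLocalhost) -Folder "N:\Backups" -Type WinLocal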

Backup jobs

Once the backup had run a few iterations, it was time for the synthetic full backup, using the NTFS backup repository as the target:

NTFS Usage

So with NTFS, a full backup, a few incremental backups and a synthetic full backup at the end used roughly 12 GB of disk space. Good deduplication and compression – and, of course, not a lot of changes in the VM. But what about the ReFS repository?

Usage ReFS

The same backup interval, with a synthetic full backup at the end, generated roughly 7 GB of disk usage – a pretty significant reduction if you ask me. Now, disk usage is one of the key reasons to use ReFS, with its “spaceless full backup technology”, but there’s another benefit that has the same groundbreaking impact: time.

The time it takes to make the full backup, that is. Let’s take a look at the time it took to create the synthetic full backup on the NTFS repo.

NTFS synthetic full

4 minutes and 15 seconds is not too bad for my VM, but imagine it being dozens of VMs… How does the ReFS repo compare?

ReFS synthetic full

11 seconds! Must be a typo? No, it’s really not. Quite impressive. Now you might rethink the dedupe storage approach: you get (almost) all the space savings of dedupe storage, but with the performance of a traditional disk and no need to rehydrate data.

If you want to make it highly resilient, you can leverage another Microsoft technology called Storage Spaces Direct, where you’d get fault tolerance and “auto-healing” as well.

A fantastic job from Microsoft with ReFS/Storage Spaces Direct and Veeam with the integration of the technology into Backup & Replication!