Migrating ESXi Hosts and VMs to Distributed Virtual Switch
The distributed virtual switch provides a centralized interface where you can manage and configure network access across multiple hosts in one step. If you’ve managed multiple hosts using only standard virtual switches, then you know that managing the different port groups can be quite the hassle. Port group names on a standard virtual switch have to match exactly, including capitalization, in order for VMs to successfully migrate between hosts. Try adding a few new port groups to a cluster of 8+ hosts using the GUI and you’ll know the pain I speak of. It’s time to take the leap and migrate to the distributed virtual switch.
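To see why that exact-name requirement bites, here’s a minimal Python sketch (hypothetical function and data, not the vSphere API) that checks whether a VM could migrate between two hosts by comparing their port group names, flagging the classic case-only mismatch:

```python
# Hypothetical helper: model the case-sensitive port group name matching
# that vMotion relies on between two hosts' standard switches.

def migration_blockers(source_groups, dest_groups):
    """Return port groups on the source host that have no exact-name
    match on the destination, flagging case-only mismatches."""
    dest_lower = {name.lower(): name for name in dest_groups}
    blockers = {}
    for name in source_groups:
        if name in dest_groups:
            continue  # exact match, the VM's network label resolves fine
        if name.lower() in dest_lower:
            blockers[name] = f"case mismatch with '{dest_lower[name.lower()]}'"
        else:
            blockers[name] = "missing on destination"
    return blockers

# 'VM Network' matches exactly; 'vmotion' vs 'vMotion' differs only by case.
print(migration_blockers({"VM Network", "vmotion"}, {"VM Network", "vMotion"}))
# → {'vmotion': "case mismatch with 'vMotion'"}
```

A dvSwitch makes this whole class of problem disappear, since every participating host sees the same port groups by definition.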
Thankfully, the distributed virtual switch resolves those types of problems and ensures that all switches across all participating hosts are identical. New port groups are configured in one interface and are immediately ready to be used by all hosts. Additionally, the dvSwitch provides features that the standard switch does not, including shaping for inbound traffic, private VLANs, support for LLDP, and port mirroring, to name a few.
One of the coolest features is the set of built-in wizards that allow you to migrate infrastructure to the dvSwitch in one shot. The Add and Manage Hosts wizard allows you to add hosts to the switch, migrate their VMkernel network adapters, reassign physical NICs, and migrate all VMs to the switch… all in one shot. You can also use the Migrate VM to Another Network wizard to easily migrate all your VMs to a new port group in bulk, rather than one by one. In this post, we’ll walk through migrating a host, its VMkernel adapters, and some VMs from a standard virtual switch to a distributed switch – all in one shot.
To start, let’s look at the standard virtual switch that is set up on one of the lab hosts. As you can see, it currently has two VMkernel network adapters and three running VMs. Additionally, we can see that it has two pNICs attached as uplinks.
Now let’s head over to the Networking tab within the vSphere Web Client. You can see that I have a distributed virtual switch configured already and a few port groups ready to go.
To migrate this host completely to the distributed virtual switch, start by right-clicking the switch and choosing Add and Manage Hosts to open the wizard.
Select the Add hosts task and click Next. Then click the + New hosts… button, choose the host you wish to add, and click OK.
Next we’ll select the network adapter tasks that we wish to perform. In our example, we’re going to select the first three, as we want to move everything to the new dvSwitch, including physical adapters, virtual adapters (VMkernel), and our VMs.
The next step will allow us to start making some changes. Here we’ll get to manage the physical adapters, or uplinks, for the distributed switch. For reference, each dvSwitch gets a SINGLE uplink group that, unless modified, all port groups and traffic will use.
In this step we’ll choose both physical NICs that are being used by the standard vSwitch and instruct the wizard to migrate them both to the dvSwitch. Click vmnic0 (or whatever NICs you are using) and click the Assign Uplink button. Choose Uplink 1 and click OK. Do the same to assign the second vmnic to the dvSwitch. Once you’re done, you’ll see that they are both assigned to the dvSwitch.
After you’ve added both Uplinks to the dvSwitch, you should have this:
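Conceptually, this step is just pairing each pNIC with a free slot in the dvSwitch’s single uplink group. Here’s a small Python sketch of that mapping (the function and slot names are illustrative, not the vSphere API):

```python
# Illustrative model of the Assign Uplink step: each of the host's
# physical NICs is bound to a named slot in the dvSwitch uplink group.

def assign_uplinks(pnics, uplink_slots):
    """Pair each physical NIC with the next free uplink slot,
    mirroring what the Assign Uplink dialog does in bulk."""
    if len(pnics) > len(uplink_slots):
        raise ValueError("more pNICs than uplink slots on the dvSwitch")
    return dict(zip(pnics, uplink_slots))

print(assign_uplinks(["vmnic0", "vmnic1"], ["Uplink 1", "Uplink 2"]))
# → {'vmnic0': 'Uplink 1', 'vmnic1': 'Uplink 2'}
```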
The next option allows you to modify the VMkernel network adapters. Remember that these are your Management and vMotion interfaces, but they could also be used for iSCSI and Fault Tolerance. Pay close attention to what you’re doing here, as migrating these correctly is critical to maintaining connectivity to your host. Click the first VMkernel adapter (vmk0) and choose Assign Port Group. Select the proper port group on the dvSwitch and click OK. Again you’ll see the VMkernel adapter move under the “On this switch” group. Continue with the other VMkernel adapters and click OK.
For reference, vSphere 5.1 and up will perform a connectivity test after making changes to the network. If it detects that your host is offline for longer than 30 seconds, it’ll revert the change for you.
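The rollback safeguard follows a simple apply/verify/revert pattern. The Python sketch below models it under stated assumptions (the names, polling approach, and timing are illustrative, not VMware’s implementation — only the 30-second window comes from the docs):

```python
import time

# Sketch of a network-change rollback safeguard: apply the change,
# poll for host connectivity, and revert if the host stays unreachable
# past the timeout. Hypothetical callables, not the vSphere API.

ROLLBACK_TIMEOUT = 30  # seconds vSphere waits before reverting

def apply_with_rollback(apply_change, revert_change, host_reachable,
                        timeout=ROLLBACK_TIMEOUT, poll_interval=1.0):
    """Return True if the change was kept, False if it was rolled back."""
    apply_change()
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if host_reachable():
            return True        # host answered: keep the change
        time.sleep(poll_interval)
    revert_change()            # host never came back: undo the change
    return False
```

This is why the wizard can safely move your management VMkernel adapter: a mistake that cuts off the host gets undone automatically instead of requiring a trip to the console.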
The next step will analyze your changes and determine if there are any impacts to network-dependent services, namely iSCSI. If there are any issues, clear them up before moving on; otherwise, click Next.
The last change we need to make is to migrate our VMs to the proper port groups on the dvSwitch. Here you can individually choose which port group each VM should be assigned to, or do them in bulk by choosing multiple VMs and clicking the Assign Port Group button. As you can see below, I have these VMware appliances on the management network and don’t have any other running VMs, so I’ll move them all to the Management port group.
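The bulk assignment is essentially a one-pass rewrite of each VM’s network label. Here’s a hedged Python sketch of that idea (plain data structures and a made-up port group name, not pyVmomi or the real wizard):

```python
# Illustrative model of the bulk Assign Port Group step: every VM on a
# given source network is remapped to a dvSwitch port group in one pass.

def bulk_reassign(vm_networks, source_group, target_group):
    """Return a new VM-to-port-group mapping with every VM on
    source_group moved to target_group; other VMs are untouched."""
    return {vm: (target_group if group == source_group else group)
            for vm, group in vm_networks.items()}

vms = {"vcsa": "VM Network", "vro": "VM Network", "dc01": "Backup"}
print(bulk_reassign(vms, "VM Network", "dvPG-Management"))
# → {'vcsa': 'dvPG-Management', 'vro': 'dvPG-Management', 'dc01': 'Backup'}
```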
The last step gives you a nice little summary of the changes that will be made. Click Finish to apply the changes. You can watch the progress in the Recent Tasks pane and ensure everything goes smoothly.
Last but not least, let’s go look at the dvSwitch itself and confirm that everything was migrated to it as expected. If yours looks like mine does below, everything went as planned and you are now running on the dvSwitch. Don’t forget to go back to your host and delete the unneeded standard switch to keep things clean and tidy.