
Change IP ESXi for vSAN host with VDS

I am doing a bit of network redesign in the home lab (a new post to come on that soon) for better manageability and security practices. As part of that, I needed to move my ESXi management IPs to a new subnet. I have a few VMware ESXi hosts that are outfitted with vSphere Distributed Switches and, of course, running in vSphere vSAN clusters. I wanted to put together a quick post on the steps used to change the IP of an ESXi host that is a member of a vSAN cluster running with a VDS.

Steps to Change IP ESXi for vSAN Host

The steps to change the IP of an ESXi host are fairly straightforward, but there are a few things to be aware of. The process is a bit more tedious if the host is a member of a vSAN cluster and if it is running a vSphere Distributed Switch. These are the steps I used to change the IP of a VMware ESXi host that was running as part of a vSAN cluster.

  1. Place the host in maintenance mode
  2. Move the host out of the vSAN-enabled cluster
  3. Remove VDS (maybe)
  4. Change the management IP from the DCUI
  5. Remove the ESXi host from your vSphere inventory
  6. Add the host back to your vSphere inventory
  7. Reconcile VDS
  8. Move the host back into the vSAN cluster
  9. Check your vSAN partitions and networking
  10. Take the host out of maintenance mode

1. Place the host in maintenance mode

The first obvious step is to place the host in maintenance mode and make sure all workloads have been evacuated. With vSAN, choose your data evacuation option that suits your needs.
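If you prefer the CLI over the vSphere Client, the same operation can be done from the ESXi shell or SSH with esxcli. This is a sketch; the `-m` (vSAN data evacuation mode) values are standard, but verify the options against your ESXi version:

```shell
# Enter maintenance mode; -m selects the vSAN data evacuation mode:
#   ensureObjectAccessibility | evacuateAllData | noAction
esxcli system maintenanceMode set --enable true -m ensureObjectAccessibility

# Confirm the host is now in maintenance mode
esxcli system maintenanceMode get
```

`ensureObjectAccessibility` is usually the right balance for a short-lived change like a re-IP; `evacuateAllData` is the safest but takes much longer.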

Place the VMware vSAN host in maintenance mode

The VMware ESXi vSAN host is placed in maintenance mode with all the workloads migrated off.

VMware vSAN host is successfully placed in maintenance mode

2. Move the host out of the vSAN-enabled cluster

The next step is to move the ESXi host out of the vSAN-enabled cluster. This effectively disables vSAN for the host. You need to perform this step, as you will run into issues if you simply remove the vSAN-enabled host from vSphere altogether, change the IP, and then bring it back in.
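Moving the host out of the cluster in the vSphere Client is the supported path, but from the host side you can verify (and, if needed, force) the vSAN membership state with esxcli — a sketch, to be run from the ESXi shell:

```shell
# Show current vSAN cluster membership for this host
esxcli vsan cluster get

# Explicitly leave the vSAN cluster from the host side if it is still
# a member (normally moving the host out of the cluster handles this)
esxcli vsan cluster leave
```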

Move the host out of the vSAN enabled cluster

3. Remove VDS (maybe)

One sticky part that can be a bear is removing the vSphere Distributed Switch from the host altogether. You may run into “resources in use” errors when trying to remove the switch from the host, as shown below.

Errors with VDS resources in use

With my lab environment, I found it effective to leave the VDS intact on the ESXi host, remove the host from vSphere inventory, change the IP, bring the host back into inventory, and then reconcile the VDS, as I will show in just a bit.

4. Change the management IP from the DCUI

Open the DCUI and select Configure Management Network.

Login to the DCUI

Choose IPv4 Configuration.

Navigate to IPv4 configuration

Change the IP to what you want it to be.

Change the IP to the target IP address you want to configure

Apply the changes and restart the management network on your ESXi host.

Apply and restart the management network

Verify you have the proper IP address reflected.

View the newly configured IP address
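The same change can be made with esxcli if you have shell access. A sketch, assuming `vmk0` is your management VMkernel interface and using placeholder addresses (substitute your own); note that changing the management IP over SSH will drop your session, which is why the DCUI is the safer route:

```shell
# Set a static IPv4 address on the management VMkernel interface
esxcli network ip interface ipv4 set \
  --interface-name=vmk0 \
  --type=static \
  --ipv4=192.168.20.11 \
  --netmask=255.255.255.0

# Update the default gateway for the new subnet
esxcli network ip route ipv4 add --network default --gateway 192.168.20.1

# Verify the new address took effect
esxcli network ip interface ipv4 get -i vmk0
```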

5. Remove the ESXi host from your vSphere inventory

The only way to remove a VMware ESXi host with connected VDS resources from vSphere inventory is to have the host in a disconnected state. After you change the IP address of the host, it should shortly go to disconnected, which allows you to remove it from the vSphere inventory. As a note, the screenshot below shows a different host I removed from inventory as I rolled through the cluster; however, I used the same process throughout.

Remove your not responding host from vSphere inventory

6. Add the host back to your vSphere inventory

Now that we have the VMware ESXi host’s IP address changed, we can add it back to the vSphere inventory.

Add the host back to the vSphere datacenter

7. Reconcile VDS

After you add the ESXi host back to your vSphere inventory, you will need to reconcile the VDS. In my environment, I found that simply adding the re-IP’ed host back to the switch correctly resolved the VDS host proxy switch issues. You may see the warning below when the host is added back to vSphere inventory.

VDS proxy switch out of sync warning

To reconcile the VDS back to the host properly, you just need to add the VDS back to the host. This will reassociate the VDS host proxy switch and related resources.

Add the host back to the VDS switch

You will need to buzz through and make sure you have your uplinks and port groups assigned.

Map uplinks and VMkernel ports
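From the host side, you can sanity-check that the distributed switch proxy came back correctly before rejoining the cluster — a quick sketch from the ESXi shell:

```shell
# Confirm the distributed switch proxy is present on the host and see
# which uplinks are attached
esxcli network vswitch dvs vmware list

# List VMkernel interfaces and confirm the management and vSAN vmk
# ports landed on the expected port groups
esxcli network ip interface list
```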

8. Move the host back into the vSAN cluster

Once you have your VDS switch synchronized back to the ESXi host, you can move it back into the vSAN-enabled cluster.

Move the host back into the vSAN cluster

9. Check your vSAN partitions and networking

I like to double-check the vSAN partitions and information using the command:

esxcli vsan cluster get
Check your vSAN sub-cluster UUID
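Healthy output looks something like the sample below. The key fields to eyeball are Enabled, Local Node State (MASTER, BACKUP, or AGENT), and Sub-Cluster Member Count, which should match the number of hosts in the cluster. A quick grep makes this easy to spot-check; the values in the here-doc are made up for illustration — pipe the real `esxcli vsan cluster get` output instead:

```shell
# Illustrative sample of 'esxcli vsan cluster get' output (values are
# placeholders); replace the here-doc with the real command's output
vsan_output=$(cat <<'EOF'
Cluster Information
   Enabled: true
   Local Node State: AGENT
   Sub-Cluster Member Count: 4
EOF
)

# Pull out the fields worth checking after a re-IP
echo "$vsan_output" | grep -E 'Enabled|Local Node State|Member Count'
```

If Sub-Cluster Member Count comes back as 1 on a multi-host cluster, the host is partitioned and you should revisit the vSAN VMkernel networking before taking it out of maintenance mode.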

10. Take the host out of maintenance mode

Now, we can take the host out of maintenance mode.

Host still in maintenance mode

You should see any red bangs go away as the data resynchronizes, and your networking and everything else should be back to a healthy state.

Maintenance mode exited and cluster data resynchronized

Troubleshooting

I did run into a couple of self-inflicted issues. One was that I didn’t have the proper firewall rules in place between my firewall zones to allow the traffic. If vCenter Server is on a separate subnet (for me it was for a while, since I changed the IPs of the ESXi hosts first and then re-IP’ed the vCenter Server), make sure you have the proper rules in place to allow the traffic.

Firewall blocking traffic between vCenter Server and the ESXi host
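A quick way to rule out this class of problem is to test the core vCenter-to-host ports across the zone boundary. A sketch, run from a machine in the vCenter subnet (the IP is a placeholder for your re-IP’ed host):

```shell
# HTTPS / host management API
nc -zv 192.168.20.11 443

# vCenter host agent connection (heartbeats also use UDP 902)
nc -zv 192.168.20.11 902
```

If either check fails, fix the inter-zone firewall rules before chasing vSphere-side symptoms.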

Another issue: on one of the hosts, I wasn’t paying attention while performing other lab tasks and removed the host from vSphere inventory without first taking it out of the vSAN cluster. This resulted in a cluster partition on the host, even though I had network connectivity between the host and vCenter. I had to have the host leave the cluster once again and then bring it back into the vSAN-enabled cluster to rejoin it properly.

Wrapping Up

Make sure to lab your environment out first before making the changes to your production ESXi hosts. This helps to catch things like the firewall rules mentioned earlier. It also helps to get a feel for the process in your environment.
