Saturday, August 10, 2013

Simulating Management Network Redundancy

Hello Readers,

First of all, thank you for the time you will be investing in reading this post. It is a long one, so I appreciate your patience; the information documented here is very important and will help you Design/Configure/Troubleshoot/Maintain your VMware environment accordingly.


Many times during discussions I have come across situations where a power outage hit the environment and everything went down, completely or partially. How badly it hurts really depends on the configuration and design you have opted for on the network side of things when you lose connectivity to your Management Network (either the Service Console on ESX 3.x/4.x or the vmkernel interface on ESXi 5.x and later).

The purpose of this article is to simulate conditions under which you can test the redundancy of the Service Console/vmkernel interface used for carrying the Management Network.

Assumptions made before we proceed:

1) The ESX/ESXi host is configured with at least two uplinks for redundancy, either teamed on a single vSwitch/VDS, or with a second vSwitch (or a second dvportgroup on the VDS) for the Management Network configured with one or two uplinks of its own.

2) Both NICs are connected to one physical switch, or, if they are connected to two separate physical switches, those switches are connected at Layer 2 using a workable technology such as vPC, MLAG, SMLT, etc.

3) The vSwitch port group or VDS dvportgroup is configured with the correct load balancing policy, corresponding to the configuration on the uplink physical switches.

4) The environment is not running any Production workload (it could be Dev/Test/Lab), so there is no impact on running Production VMs.

5) The default gateway is reachable from the ESX/ESXi host.

6) If more than one isolation address is required on the Cluster/Host, the necessary configuration has been done on ESX/ESXi.

7) Physical connectivity is configured properly for each ESX/ESXi host, irrespective of the hardware and other components involved.

8) The ESX/ESXi host can be accessed using any of the available tools such as SSH (PuTTY), DCUI, iLO, DRAC, RSA, KVM, etc.

9) All the network cables used to provide uplink(s) to the ESX/ESXi host are identified and marked/documented properly on both the ESX/ESXi side and the physical switch side.


10) You are familiar with vSphere networking and with the necessary commands where they are not listed.

11) Make changes at the vSwitch level only when you have a separate vSwitch configured for the Management Network with a different uplink or, if using a VDS, a separate dvportgroup configured for the Management Network.


1st Method


Go into the VSS port group settings and open the NIC Teaming tab, where you can see the two uplinks as Active/Active. Check the Override failover order box, select one of the NICs from the team, then click the Move Down arrow twice so the NIC is moved from the Active to the Unused state.

At this point the vSwitch/VDS itself will still show both NICs as Active/Active. Once you move the NIC into the Unused state for the port group, the Management Network will have only one uplink and you will receive the warning under the Summary tab: "Host currently has no management network redundancy".
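
If you prefer to make the same port-group-level change from the command line on ESXi 5.x, here is a minimal sketch (the port group name "Management Network" and vmnic0 are examples only; adjust them to your setup):

#esxcli network vswitch standard portgroup policy failover set -p "Management Network" -a vmnic0

Listing only vmnic0 as active should leave the other uplink out of the failover order for that port group; you can confirm the override with:

#esxcli network vswitch standard portgroup policy failover get -p "Management Network"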

HA may also give you an error, but that is outside the scope of this article; you should be aware of the impact and be able to recover if needed.

Note: If the warning continues to appear, disable and re-enable VMware High Availability/FDM (depending on the version of ESX/ESXi in the cluster), or suppress it with the das.ignoreRedundantNetWarning advanced option as described below.

To set das.ignoreRedundantNetWarning to true (for ESX 3.x and 4.x):

   1. From the VMware Infrastructure Client, right-click on the cluster and click Edit Settings.
   2. Select vSphere HA and click Advanced Options.
   3. In the Options column, enter das.ignoreRedundantNetWarning
   4. In the Value column, type true.

      Note: Steps 3 and 4 create a new option.
   5. Click OK.
   6. Right-click the host and click Reconfigure for vSphere HA. This reconfigures HA.

To set das.ignoreRedundantNetWarning to true in the vSphere 5.1 Web Client:

   1. From the vSphere Web Client, right click on the cluster.
   2. Click on the Manage tab for the cluster, then under Settings click vSphere HA.
   3. Click on the Edit button in the top right corner.
   4. Expand the Advanced Options section, and click Add.
   5. In the Options column, type das.ignoreRedundantNetWarning.
   6. In the Value column, type true.
   7. Click OK.
   8. Right-click the host and click Reconfigure for vSphere HA.

2nd Method

If you do not have access through the vSphere Client or Web Client, you can set the NIC to the Unused state from the command line through an SSH session to the ESX/ESXi host. Note that this command moves the NIC into the unused state for the whole vSwitch or VDS and will NOT make the change at the individual port group (VSS) or dvportgroup (VDS) level.

If using vSwitch (VSS)

#esxcfg-vswitch -U vmnicX vSwitchX

(Note: Replace X with the actual NIC number and vSwitch Number)

For ESXi 5.x

Remove a network card (known as a vmnic) from a Standard vSwitch using this command:

#esxcli network vswitch standard uplink remove --uplink-name=vmnicX --vswitch-name=vSwitchX

Alternatively, change the link status of one of the uplink vmnics with this command:

#esxcli network nic down -n vmnicX

If using VDS

To get the DVPort ID of the uplink vmnic on the VDS

#esxcfg-vswitch -l | more

To unlink the uplink from VDS

#esxcfg-vswitch -Q vmnicX -V dvPort_ID_of_vmnic dvSwitch
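
Whichever of the above you used, remember to reverse the change once the test is done. A rough sketch of restoring things (replace X and the DVPort ID with your own values):

To re-add the uplink to a standard vSwitch on ESXi 5.x

#esxcli network vswitch standard uplink add --uplink-name=vmnicX --vswitch-name=vSwitchX

To bring the NIC link back up

#esxcli network nic up -n vmnicX

To re-link the uplink to the VDS

#esxcfg-vswitch -P vmnicX -V dvPort_ID_of_vmnic dvSwitch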

3rd Method

For ESX/ESXi 3.x/4.x and ESXi 5.0 you can change the VLAN property on the VSS/VDS port group which carries the Management Network through SSH/DCUI:

#esxcfg-vswitch -v X -p "Service Console" vSwitch0

(Note: Replace X with a VLAN ID that is not valid for the Management Network on the physical switch port.)

(Note: for ESXi 5.1 the rollback feature won't allow you to make such a change, since it could disconnect the ESXi host from vCenter Server; you can still try the command as we have one uplink still available.)
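
On ESXi hosts where the management vmkernel port group carries the default name "Management Network" (adjust if yours differs), the equivalent sketch would be:

#esxcfg-vswitch -v X -p "Management Network" vSwitch0

To put the original VLAN back after the test, run the same command with the original VLAN ID (use 0 if the port group had no VLAN tag to begin with).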

4th Method

You can also change the load balancing policy from the command line for the vSwitch where the Management Network (Service Console or vmkernel) is configured. This assumes we are using only one vSwitch with one uplink and have a second vSwitch configured separately with a second Management Network (Service Console/VMkernel) and one uplink.

Here you have to use a teaming policy different from the one on the physical switch, which results in a loss of network connectivity on the uplink(s) used for the Management Network VSS port group/dvportgroup.

Run the command on ESX/ESXi 3.x/4.x:

#vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_ip' vSwitch0

To change the load balancing policy on an ESXi 5.x host, run this command:

#esxcli network vswitch standard policy failover set -l iphash -v vSwitch0

* To change the load balancing policy to a route based on the originating virtual port ID, run this command:

#vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_srcid' vSwitch0

* To change the load balancing policy to a route based on the MAC hash, run this command:

#vim-cmd /hostsvc/net/vswitch_setpolicy --nicteaming-policy='loadbalance_srcmac' vSwitch0

Or

Set the load balancing policy. The options are:

    *      Port ID = loadbalance_srcid
    *      IP Hash = loadbalance_ip
    *      MAC = loadbalance_srcmac
    *      Failover Only = failover_explicit

Here is an example command:

#vimsh -n -e "hostsvc/net/vswitch_setpolicy --nicteaming-policy loadbalance_ip vSwitchX"

Refresh network settings with the command:
#vimsh -n -e "/internalsvc/refresh_network"

Restart the management service with the command:

#service mgmt-vmware restart

On ESXi 5.x,

* To change the load balancing policy to a route based on the originating virtual port ID, run this command:

#esxcli network vswitch standard policy failover set -l portid -v vSwitch0

* To change the load balancing policy to a route based on the MAC hash, run this command:

#esxcli network vswitch standard policy failover set -l mac -v vSwitchX

* To change the load balancing policy to a route based on the IP hash, run this command:

#esxcli network vswitch standard policy failover set -l iphash -v vSwitchX

Restart the management service with the command:

#/etc/init.d/hostd restart
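
It is worth confirming the teaming policy before and after the change; a quick check on ESXi 5.x (vSwitch0 is just an example name):

#esxcli network vswitch standard policy failover get -v vSwitch0

The Load Balancing line in the output should reflect the policy you just set.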

5th Method

Using the Cisco Discovery Protocol (CDP), find out the uplink switch port (assuming the uplink switch(es) are Cisco branded), access the uplink switch using SSH, go to the port configuration for one of the NICs used as an uplink for the Management Network, and just shut the port down administratively.

You can click on the small blue icon right beside the vmnic to find the CDP information, as shown in the attachment.
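
As a rough sketch of what the switch-side step looks like on a Cisco IOS device (the interface name below is purely an example; use the port you identified through CDP):

switch# configure terminal
switch(config)# interface GigabitEthernet1/0/10
switch(config-if)# shutdown

Once you have finished testing, bring the port back with "no shutdown" on the same interface.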

You will see a red X against the disconnected NIC in the vSphere Client, which means the link is down.

Running the command "esxcfg-nics -l" will also show that the link is down for that particular NIC.
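
On ESXi 5.x you can do the same check with esxcli; the Link column should show Down for the NIC whose switch port you shut:

#esxcli network nic list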

6th Method

By changing the MTU of the port group/dvportgroup used for the Service Console/vmkernel Management Network.

For ESX/ESXi 3.x/4.x, if the MTU is currently at the default of 1500, use the commands below to change it to 9000 (or change it back to 1500 if the configuration is the other way around).

First, find the vmkernel or Service Console interface using these commands:

#esxcfg-vswif -l

#esxcfg-vmknic -l

Find the port group/dvportgroup name from the output of the above command(s), then change the MTU:

# esxcfg-vmknic -m 9000 portgroup_name

Run this command for ESXi 5.x:

# esxcli network ip interface set -m 9000 -i vmk_interface
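
To confirm the interface picked up the new MTU and that the mismatch actually affects connectivity on ESXi 5.x, a quick check (x.x.x.x stands for your gateway or isolation address):

#esxcli network ip interface list

#vmkping -d -s 8972 x.x.x.x

Here -d sets the do-not-fragment bit and -s 8972 sends a jumbo-sized payload, so the ping should fail if the physical path is still limited to 1500.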

7th Method (Needs more testing)

Connect to the ESX/ESXi host using SSH (PuTTY) or any other tool and use the following command, which will blink the NIC for 5-6 seconds.

#ethtool -p vmnicX
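
On builds where ethtool accepts a duration argument, you can also pass the number of seconds to blink, for example:

#ethtool -p vmnicX 5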

8th Method

You can actually pull the cable carrying the Management Traffic from the back of the physical ESX/ESXi host, so you need to have the port mapping information and know which cable(s) connect to the port(s) used for the Management Network on the VSS/VDS.

9th Method

Warning: the vsish interface is currently NOT SUPPORTED by VMware, so be careful when using it.


To block the link for a particular physical NIC from the vmkernel, run this command:

vsish -e set /net/pNics/vmnicX/block 1
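
Assuming the same vsish node accepts 0 to unblock (again, this is an unsupported interface, so treat it as a sketch), you can restore and verify the link with:

vsish -e set /net/pNics/vmnicX/block 0

vsish -e get /net/pNics/vmnicX/block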

All the above methods basically isolate the uplink connectivity for one NIC, which in turn should break the redundancy status of the Management Network while the host stays connected in VirtualCenter Server/vCenter Server.

Enjoy the information, test your Management Network properly, and DO NOT use Production host(s) to try out the above methods.

Share and Care !!

Cheers !!

Tuesday, August 6, 2013

FREE Raspberry Pi - vExpert

YES,

You read it correctly: @SimplivityCorp is offering it FREE to all the vExperts.

@bacon_is_king tweeted earlier today about this promotion on G+, so I thought I would write a few words and post it for others; it is also a bit of motivation for those who are not yet vExperts to try to become one!!

One just needs to go to this link and register him/herself:

Simplivity Promotion for vExperts

Now there are three options for how you can get your Raspberry Pi:

1. Pick up at VMworld US
2. Pick up at VMworld Europe
3. Get it mailed to your address

So choose one and there you go !!

Enjoy creating some new innovation with the Raspberry Pi, and thanks to @bacon_is_king (for the heads up) and @SimpliVityCorp for such a wonderful gift to the VMware vExpert Community !!

VMware Community rocks indeed !!

Share and Care !!

Thursday, August 1, 2013

Controlling Upgrade Chaos of VMware Environment

A lot of times, depending on the VMware components you are using, it is always better to check which component to upgrade first whenever a newer version is available.

Now this is a big question for any organization, as it affects the whole chain of maintenance, downtime, change management requirements, etc.

If the downtime involves cost and other losses, then such an upgrade process must be thought out carefully before proceeding further.

Depending on the products in use, people sometimes have the ability/luxury of a standby/Dev/Test/Lab environment where they can perform such an upgrade first and verify whether it affects anything there; if everything upgrades successfully, they can roll out the same steps in the Production environment.


Now look at the following table, which will clear up most of your doubts:



Now, as you can see, VCD (vCloud Director) is the first one in the upgrade process. As the upgrade starts with the lowest number first, follow the same recommendation for all the products in the upgrade path (whether you run only a few, some, or all of them) and you should not have any issues.

I would also suggest searching http://kb.vmware.com/ and the Release Notes of each product involved for any known issues/gotchas during the upgrade, so that you can take care of them before you proceed with the upgrade.

If you are using the Cisco Nexus 1000V, then before you upgrade your ESXi/vCenter Server please make sure that the VSM/VEM versions are supported/compatible; otherwise, after upgrading your vCenter Server you may end up dead in the water with NO connectivity on the VMs/ESXi hosts if everything is managed through the Nexus 1000V. With your existing version of the VSM you can check Cisco's website for ESXi/VC compatibility.

As the Nexus 1000V is a Cisco product, the above matrix does not reflect it, but it would definitely grab a spot after the vCenter/ESXi host upgrade steps.

There is also a published KB, 2037630, with more information, which you can refer to if you are running the SRM/VR and VDP solutions in your environment; it will guide you through the steps of the overall upgrade.

And if you are running the vCenter Server Appliance, then the above KB will not help; instead refer to KB 2031331.

Enjoy upgrading VMware Environment !!

Share and care !

Cheers.