Monday, August 31, 2015

A Typical UCS-N1K deployment with an upstream N5K vPC pair.

Differentiating the Physical and Virtual Interfaces

[Figure: UCS-N1K deployment with an upstream N5K vPC pair, illustrating the physical and virtual interfaces; see the source link below.]

Source :- http://www.cisco.com/c/en/us/support/docs/servers-unified-computing/unified-computing-system/116277-maintain-mc-ucs-00.html

Sunday, August 30, 2015

Diff b/w Fault Tolerance, Load Balancing, and Link Aggregation

The list that follows details the three options—a, b, and c—shown in Figure 2-1:

  • Fault tolerance— One port (or NIC) is active (receives and transmits traffic) while the other ports are in standby (these ports don’t forward or receive traffic). When the active port fails, one of the ports that was previously in standby takes over with the same MAC address. This option is depicted as option a in Figure 2-1.

  • Load balancing— Only one port receives traffic; all the ports transmit traffic. If the receive port fails, a new port is elected for the receive function. In Figure 2-1, this is option b.

  • Link aggregation— A number of ports (NICs) form a bundle that logically looks like a single link with a bandwidth equivalent to the sum of the bandwidth of each single link. Cisco calls this configuration an EtherChannel. In Figure 2-1, this is option c.
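
As a rough illustration of option c, here is a minimal NX-OS-style sketch of an EtherChannel (port-channel); the port-channel ID, interface numbers, and VLAN range are assumptions made for the example, not taken from any particular topology:

  ! Logical bundle interface (ID and VLAN range are assumed for illustration)
  interface port-channel 10
    switchport mode trunk
    switchport trunk allowed vlan 100-110

  ! Bundle two physical ports into the port-channel using LACP
  interface ethernet 1/1-2
    switchport mode trunk
    switchport trunk allowed vlan 100-110
    channel-group 10 mode active

The upstream switch needs a matching port-channel (or vPC) configuration on its side for the bundle to come up.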


VSM (Virtual Supervisor Module)

The VSM is used to configure the following attributes (a sample port-profile is sketched after the list) :-

  1. VLANs
  2. ACL, PVLANs etc
  3. Netflow, ERSPAN
  4. QoS
  5. Physical NIC port configuration
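
Most of these attributes are applied through port-profiles. Below is a minimal sketch of a VM-facing (vEthernet) port-profile; the profile name and VLAN ID are assumptions for illustration, and ACL, NetFlow, and QoS policies can be attached to the same profile:

  ! Hypothetical vEthernet port-profile on the VSM; name and VLAN are assumed
  port-profile type vethernet WEB-VMS
    vmware port-group
    switchport mode access
    switchport access vlan 100
    no shutdown
    state enabled

Once "vmware port-group" and "state enabled" are set, the profile shows up in vCenter as a port-group that VM vNICs can be attached to.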

Saturday, August 29, 2015

What is Nexus 1000v


  1. Nexus 1000V (N1Kv) is a distributed virtual edge switch running NX-OS; it extends the switch into the ESX host and essentially sits b/w the host & the upstream switch.
  2. It can be installed as a VM (OVA) on any host or run on a physical appliance (Nexus 1010).
  3. It has a supervisor (the VSM) which, in a sense, controls multiple line cards (the VEMs), each installed on a single host.
  4. Each host can support only one VEM, and a VSM can manage a max of 64 VEMs.

Below we have 2 VSM modules connected to 2 upstream switches, which in turn connect to a bunch of VEMs installed in individual ESX hosts.
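
As a rough sketch of how a VSM reaches its VEMs (assuming the older Layer 2 control mode; the domain ID and control/packet VLAN numbers are made up for the example):

  ! Hypothetical svs-domain config on the VSM (L2 control mode assumed)
  svs-domain
    domain id 100
    control vlan 260
    packet vlan 261
    svs mode L2

The same control and packet VLANs have to be carried on the uplinks end to end so the VEMs can register with the VSM and show up as modules.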


The VSM allows configuration of :-

  1. VLANs
  2. ACL, PVLANs etc
  3. Netflow, ERSPAN
  4. QoS
  5. Physical NIC port configuration

A vEthernet (veth) interface is defined for each virtual NIC on a VM, and an Ethernet interface is defined for each physical NIC port on the ESX host.
  1. Each Ethernet interface is assigned an uplink port-profile to provide uplink (external) connectivity.
  2. The n/w admin can create port-profiles on the VSM (with some or all of the above attributes) and assign a vEthernet port to any one of these port-profiles; an uplink port-profile is sketched below.
The design of N1Kv prevents loops by tagging and recognizing traffic so that vEth traffic is switched locally on the VEMs while Ethernet traffic goes out only via the physical NIC uplink ports.
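
As a rough sketch, an uplink (type ethernet) port-profile on the VSM might look something like this; the profile name, VLAN ranges, and the mac-pinning channel mode are assumptions for illustration:

  ! Hypothetical uplink (Ethernet) port-profile; name, VLANs, and channel mode are assumed
  port-profile type ethernet SYSTEM-UPLINK
    vmware port-group
    switchport mode trunk
    ! carry the VM VLANs plus the assumed control/packet VLANs (260,261)
    switchport trunk allowed vlan 100-110,260-261
    ! mac-pinning avoids needing a port-channel on the upstream switches
    channel-group auto mode on mac-pinning
    no shutdown
    ! control/packet VLANs marked as system VLANs so they forward before the VEM is fully programmed
    system vlan 260,261
    state enabled

In vCenter, the Ethernet interfaces (physical NICs) are attached to this uplink port-group, while VM vNICs are attached to vEthernet port-profiles such as the WEB-VMS sketch earlier.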


Why Nexus 1000v

Why N1Kv :- With an ESX host and its DVS allowing the NICs on individual VMs to communicate with internal and external networks, we had the following issues :-

A. Using a single port or a port-channel from the DVS to the upstream switch :-


  1. VMs creating bridges and unintended loops.
  2. Unintended VM to VM communication.
  3. Even though the VM NICs could be on separate internal networks, the uplink is shared through the DVS, and hence all VM traffic would traverse the upstream switch regardless of the requirement.
  4. VMs placed on the wrong VLANs, leading to no external/upstream connectivity.

B. Using multiple ports, with separate VLANs allowed on the upstream switch per port on the ESX host :-

  1. This required a high number of physical ports on the switches as well as on each host (multiple NICs, multi-port NICs).
  2. The n/w admin still had very little control over the DVS config and the VLANs on each of the VM NICs.