Posts Tagged ‘Nexus 7000’


Troubleshooting MAC-Flushes on NX-OS

January 21, 2013

An interesting client problem in one of our multi-tenant data centers came to my attention the other day. A delay-sensitive client noticed a slight increase in latency (20 ms) at very intermittent intervals from his servers in our data center to specific off-net destinations. The increase in latency was localized to the pair of Nexus 7000s functioning as the core switch layer (CSW) and the Layer 3 edge for this particular data center. Beyond that, all appeared normal on the N7K CSWs.

A TCP dump from a normal trunk interface attached to the N7Ks showed unicast traffic on the N7K-2 device, even though the N7K-1 device was set up to receive internet traffic inbound and forward it into the data center client VLANs. The N7Ks are set up using Cisco vPC (Virtual Port Channel).
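
Since unexpected unicast flooding on a vPC pair usually points to MAC addresses being flushed and relearned, a few generic NX-OS checks make a reasonable starting point. A minimal sketch, with VLAN 100 as a placeholder rather than the client's actual VLAN:

N7K-1# show mac address-table dynamic vlan 100
N7K-2# show mac address-table dynamic vlan 100
N7K-1# show vpc
N7K-1# show spanning-tree vlan 100 detail

Comparing the MAC tables on both peers shows whether the destination addresses are missing or ageing out unexpectedly, while the vPC and spanning-tree detail outputs reveal topology changes that would trigger a flush.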

Read the rest of this entry »


Load-Sharing across ASICs

April 26, 2012

Port-channels have become an accepted solution in data centers, both to reduce the STP footprint and to get past the limits of a single physical interface.

One of the biggest drawbacks of a port-channel is that it can still be a single point of failure.

Scenario 1 - Failure of an ASIC on one switch, which could bring the whole port-channel down if all member interfaces were connected to the same ASIC.

Scenario 2 - Failure of one switch on either side. The obvious solution available today is multi-chassis port-channels, which largely addresses the problem.

Consider the following topology:

Even with a multi-chassis port-channel there is still the possibility of an ASIC failure. Although not as detrimental as Scenario 1, there will still be some impact (depending on the traffic load) if both interfaces on one switch happen to connect to the same ASIC.

Thus it only makes sense that the ports used on the same switch should use different ASICs. How would you confirm this on the Nexus 5000 and Nexus 7000?
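
As a rough sketch of the sort of commands involved (the exact syntax differs per platform, line card and ASIC generation, so treat these as assumptions to verify rather than a recipe), the port-to-ASIC mapping can usually be pulled from the hardware-internal show commands, e.g. on a Nexus 5500 (Carmel ASIC) and on a Nexus 7000 line card:

N5K# show hardware internal carmel all-ports

N7K# attach module 1
module-1# show hardware internal dev-port-map

Member interfaces that map to the same ASIC in these outputs should not be the only two local members of a port-channel.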

Read the rest of this entry »


Cisco Nexus 7000 upgrade to 8GB

February 27, 2012

When upgrading a Nexus 7000 to NX-OS version 5.2 (when running more than one VDC) or to NX-OS 6.x and later, Cisco requires the supervisor system memory to be upgraded to 8 GB.

Note that I have run v5.2 with only 4 GB per SUP and two VDCs and it worked just fine, but I should mention that the box was not under heavy load.

See how much memory your N7K has on a SUP by using the following command:

N7K# show system resources
Load average:   1 minute: 0.47   5 minutes: 0.24   15 minutes: 0.15
Processes   :   959 total, 1 running
CPU states  :   3.0% user,   3.5% kernel,   93.5% idle
Memory usage:   4115776K total,   2793428K used,   1322348K free

The upgrade per SUP requires the Cisco bundle upgrade package (product code: N7K-SUP1-8GBUPG=). One package contains one 4 GB module (see the picture below); if you have two SUPs you will need two bundles. Notice the 8GB sticker on the module in the red block.
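
Once the modules are installed, the same command confirms that the supervisor sees the extra memory. The output below is illustrative only (not captured from an actual upgrade); the point is simply that the total should report roughly 8 GB instead of 4 GB:

N7K# show system resources | include Memory
Memory usage:   8254820K total,   3421560K used,   4833260K free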

Read the rest of this entry »


BGP between Cisco Nexus and Fortigate

October 12, 2011

It is not uncommon to find that different vendors have slightly different implementations when it comes to standards-based technologies that should work seamlessly.

I recently came across a BGP capability negotiation problem between a Nexus 7000 and a client Fortigate. Today's post does not teach any new technology; instead it shows the troubleshooting methodology I used to find the problem.

The setup is simple: a Nexus 7000 and a Fortigate, connected via a Nexus Layer 2 hosting infrastructure, peering with BGP.
At face value the eBGP session between the Nexus 7000 and the Fortigate never came up:

N7K# sh ip bgp summary | i 10.5.0.20
Neighbor        V    AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
10.5.0.20   4 65123     190     190        0    0    0 0:12:30  Idle

The first steps should verify the obvious.

  •  Configuration! This check should include verifying the ASNs, the peering IP addresses, the source interfaces, and that any passwords match (see the sketch below).
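
A minimal sketch of the Nexus side of that check (the neighbor address 10.5.0.20 is taken from the summary output above; nothing else here is specific to this case):

N7K# show running-config bgp
N7K# show ip bgp neighbors 10.5.0.20

The neighbor detail output also lists the capabilities each side advertised, which is where a capability negotiation mismatch will eventually show itself.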

Read the rest of this entry »


Low Memory Handling

August 21, 2011

Memory problems on routers are nothing new. They are generally less of a problem these days, but are still seen from time to time.

BGP is capable of handling a large number of routes and, in comparison to other routing protocols, can be a big memory hog. BGP peering devices, especially those carrying a full internet table, require larger amounts of memory to store all the BGP routes. Thus it is not uncommon to see a BGP router run out of memory once a certain route count is exceeded.

A router running out of memory, commonly called low memory, is always a bad thing. The result can vary from the router crashing, to routing processes being shut down, to (if you are "lucky") erratic behavior causing route flaps and instability in your network. None of which is desired.

Low memory can be caused by any of the following:

  •     Partial physical memory failure.
  •     Software memory bugs.
  •     Applications not releasing used memory chunks.
  •     Incorrect configuration.
  •     Insufficient memory allocation to a Nexus VDC.
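
For the BGP scenario described above, a common safeguard is a per-neighbor prefix limit, so the session is torn down (or a warning is logged) before a runaway route count exhausts memory. A minimal NX-OS sketch, with the ASNs, neighbor address and limits as illustrative values only:

router bgp 65000
  neighbor 192.0.2.1 remote-as 65001
    address-family ipv4 unicast
      ! warn at 90% of the limit, drop the session above 500,000 prefixes
      maximum-prefix 500000 90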

Read the rest of this entry »


Cisco OTV (Part III)

July 6, 2011

This is the final follow-on post from OTV (Part I) and OTV (Part II).

In this final post I will go through the configuration steps, some outputs and FHRP isolation.

OTV Lab Setup

I set up a mini lab using two Nexus 7000 switches (each with four VDCs), two Nexus 5000 switches and a Catalyst 3750 switch.
I emulated two data center sites, each with two core switches for the typical Layer 3 breakout, two switches dedicated to OTV and one access switch to test connectivity. Site1 includes switches 11-14 (the four VDCs on N7K-1) and switch 15 (N5K), whereas Site2 includes switches 21-24 (the four VDCs on N7K-2) and switch 32 (3750).

To keep the focus on OTV, I removed the complexity from the transport network by running OTV on dedicated VDCs (four of them for redundancy), connected as inline OTV appliances, and by connecting the OTV join interfaces to a single multi-access network.

This is the topology:

Before configuring OTV, a decision must be made on how OTV will be integrated into the data center design.

Recall the OTV/SVI co-existence limitation. If core switches are already in place which are not Nexus 7000 switches, OTV may be implemented natively on the new Nexus 7000 switch/es or by using VDCs. If the Nexus 7000 switches provide the core switch functionality, then separate VDCs are required for OTV.
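
As a preview of what those configuration steps boil down to, here is a minimal sketch of an OTV edge device configuration. The interface, VLAN ranges and multicast groups are placeholders, and options introduced in later NX-OS releases (such as the site identifier) are left out:

feature otv
otv site-vlan 99

interface Overlay1
  otv join-interface Ethernet1/1
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110
  no shutdown

The site VLAN is used for AED election between the local edge devices, the join interface faces the transport network, and the control and data groups carry the OTV control plane and multicast data traffic across the transport.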

Read the rest of this entry »


Cisco OTV (Part II)

June 28, 2011

This is a follow-on post from OTV (Part I).

STP Separation

Edge Devices do take part in STP by sending and receiving BPDUs on their internal interfaces, as any other Layer 2 switch would.

But an OTV Edge Device will not originate or forward BPDUs on the overlay network. OTV thus limits the STP domain to the boundaries of each site, which means an STP problem in the control plane of a given site will not have any effect on the remote data centers. This is one of the biggest benefits of OTV in comparison to other DCI technologies. It is made possible because MAC reachability information is advertised and learned via the control plane protocol instead of through the typical MAC flooding behavior.

With the STP separation between sites, OTV also makes it possible for different sites to use different STP technologies, e.g. one site can run MSTP while another runs RSTP. In the real world this is a nifty enhancement.


Multi-Homing

OTV allows multiple Edge Devices to co-exist in the same site for load-sharing purposes. (With NX-OS 5.1 that is limited to 2 OTV Edge Devices per site.)

With multiple OTV Edge Devices per site and no STP across the overlay to shut down redundant links, the possibility of end-to-end loops between sites is created. The absence of STP between sites holds valuable benefits, but a loop prevention mechanism is still required, so an alternative method was used. The boys who wrote OTV decided on electing a master device responsible for traffic forwarding (similar to some non-STP protocols).

With OTV this master elected device is called an AED (Authoritative Edge Device).

An AED is an Edge Device that is responsible for forwarding the extended VLAN frames in and out of a site, from and to the overlay network. It is very important to understand this before carrying on. Only the AED will forward traffic out of the site onto the overlay. With optimal traffic replication in the transport network, a site's broadcast and multicast traffic will reach every Edge Device in the remote site, but only the AED in the remote site will forward that traffic from the overlay into the site. The AED thus ensures that traffic crossing the site-overlay boundary does not get duplicated or create loops when a site is multi-homed.
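
A quick way to check which device has taken the AED role for which VLANs is the OTV show commands (a sketch; the exact output format differs between NX-OS releases):

OTV-EDGE# show otv overlay
OTV-EDGE# show otv adjacency
OTV-EDGE# show otv vlan

The last command lists, per extended VLAN, whether the local Edge Device is currently authoritative.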

Read the rest of this entry »