We are all too familiar with the devastating impact a talented layer 2 loop can have on a data center lacking sufficient controls and processes. If you are using Cisco Nexus switches in your data center, you will be happy to know that NX-OS offers an interesting new tool to add to your loop detection list. This somewhat undocumented feature is known (for lack of a better name) as FWM Loop Detection, where FWM refers to the NX-OS Forwarding Manager. In syslog it is seen as:
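As an illustration, on a Nexus 5000 the Forwarding Manager reports a detected loop with a message along these lines (the exact wording, ports and timers vary by platform and NX-OS release, so treat this as a representative example only):

```
%FWM-2-STM_LOOP_DETECT: Loops detected in the network among ports Eth1/1 and Eth1/2 vlan 10 - Disabling dynamic learn notifications for 180 seconds
```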
Continue reading “Detecting Layer2 Loops”
“Fabric” is a loosely used term, which today creates more confusion than it offers direction.
What exactly is a Fabric? What is a Switch Fabric?
Greg Ferro did a post here explaining how Ethernet helped the layer 2 switch fabric evolve. Sadly, the use of the term switch fabric did not stop there, and this is the part where the confusion trickles in.
The term fabric has been butchered (mostly by marketing people) to incorporate just about any function these days. The term ‘switch fabric’ today (in the networking industry) is broadly used to describe among others the following:
- The structure of an ASIC, e.g., the crossbar silicon fabric.
- The hardware forwarding architecture used within layer 2 bridges or switches.
- The hardware forwarding architecture used within routers, e.g., the Cisco CRS and its 3-stage Benes switch fabric.
- Storage topologies like the fabric-A and fabric-B SAN architecture.
- Holistic Ethernet technologies like TRILL, Fabric-Path, Short-Path Bridging, Q-Fabric, etc.
- A port extender device that is marketed as a fabric extender (a.k.a. FEX), namely the Cisco Nexus 2000 series.
In short, a switch fabric is the interconnection of points for the purpose of transporting data from one point to another. These points, as they have evolved over time, could represent anything from an ASIC, to a port, to a device, to an entire architecture.
Cisco added a whole new dimension to this by marketing a Port Extender device as a Fabric Extender, and doing so with different FEX architectures, namely VM-FEX and Adapter-FEX. More on that in the next post. :)
In this post I would like to cover the basics of what you need to know about the Cisco Fabric Extender, which ships today as the Nexus 2000 series hardware.
The Modular Switch
The concept is easy to understand by referencing existing knowledge. Everybody is familiar with the distributed switch architecture commonly called a modular switch:
Consider the typical components:
- Supervisor module(s), which are responsible for the control and management plane functions.
- Linecards or I/O modules, which offer physical port termination and take care of the forwarding plane.
- Connections between the supervisors and linecards to transport frames, e.g., fabric cards or a backplane.
- An encapsulation mechanism to identify frames that travel between the different components.
- A control protocol used to manage the linecards, e.g., MTS on the Catalyst 6500.
Most linecards nowadays have dedicated ASICs to make local hardware forwarding decisions, e.g., the Catalyst 6500 DFCs (Distributed Forwarding Cards). Cisco took this concept further by removing the linecards from the modular switch and placing them in standalone enclosures. These linecards can then be installed in different locations and connected back to the supervisor modules using standard Ethernet. These remote linecards are called Fabric Extenders (a.k.a. FEXs). Three really big benefits are gained by doing this:
- The number of managed devices in a given network segment is reduced, since these remote linecards are still managed by the supervisor modules.
- The STP footprint is reduced, since STP is unaware that the linecards are co-located in different cabinets.
- Cabling to the distribution switches is reduced. I'll cover this in a later post. Really awesome for migrations.
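As a sketch of how a remote linecard attaches to its parent, the Nexus 5000 side of the configuration looks roughly like this (the FEX chassis ID and interface numbers below are made up for illustration):

```
feature fex

! Chassis ID 100 identifies this remote linecard
fex 100
  description Rack-7-ToR

! The parent-facing uplink carries the FEX fabric encapsulation
interface ethernet 1/1
  switchport mode fex-fabric
  fex associate 100
```

Once the FEX comes online, its host ports appear on the parent switch as interfaces such as ethernet 100/1/1, and its state can be checked with `show fex`.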
Let's take a deeper look at how this is done.
Continue reading “What is a Fabric Extender”
Another quick post. The upcoming posts will take a more in-depth look at the Nexus technologies.
So you do a non-ISSU NX-OS upgrade on a Nexus 5000 switch and something goes wrong. After the reload you get the following prompt:
...Loader Version pr-1.3
The switch did not successfully boot from the images it was supposed to. How do you go about restoring it?
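In broad strokes, the recovery is to boot a kickstart image manually from the loader prompt and then load the matching system image from the limited boot prompt. The image filenames below are placeholders; substitute whatever images are actually present on your bootflash:

```
loader> boot n5000-uk9-kickstart.5.0.3.N2.1.bin

! After the kickstart boots, you land in a limited boot prompt
switch(boot)# load bootflash:n5000-uk9.5.0.3.N2.1.bin
```

Once the switch is up, remember to correct the boot variables and `copy running-config startup-config` so the next reload does not drop you back into the loader.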
Continue reading “N5K Stuck in Boot Mode”
Port-channels have become an acceptable solution in data centers to both mitigate STP footprints and extend physical interface limits.
One of the biggest drawbacks of port-channels is the single point of failure.
Scenario 1 - Failure of an ASIC on one switch, which could potentially bring the port-channel down if all member interfaces were connected to one ASIC.
Scenario 2 - Failure of one switch on either side. The obvious solution available today is multi-chassis port-channels, which address 95% of the problem.
Consider the following topology:
Even with a multi-chassis port-channel there is still the possibility of an ASIC failure. Although not as detrimental as Scenario 1, there will still be some impact (depending on the traffic load) if both interfaces on one switch happen to connect to the same ASIC.
Thus it only makes sense that the ports used on the same switch connect to different ASICs. How would you confirm this on the Nexus 5000 and Nexus 7000?
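As a starting point, the port-to-ASIC mapping can be pulled from platform-internal show commands. These are undocumented and differ per platform generation, so take the examples below as an assumption to verify on your own hardware rather than a definitive reference:

```
! Nexus 5500 (Carmel ASIC): lists the ASIC instance behind each front port
n5k# show hardware internal carmel all-ports

! Nexus 7000: the device-to-port map lives in the module context
n7k# attach module 1
module-1# show hardware internal dev-port-map
```

With the mapping in hand, simply pick port-channel members that land on different ASIC instances.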
Continue reading “Load-Sharing across ASICs”
Consider the following output.
How is this possible, when no AAA or Privilege Profiles are configured? Have a look at the interface configuration:
Is this a bug, a feature, or an annoyance? Depending on the platform, it is a feature. This test interface is part of a port-channel, and changing a member interface directly is a common operational mistake. How many times has it happened in one of your data centers that an engineer accidentally made a change to an interface that was a member of a port-channel, only to bring down the port-channel, and possibly any customer data that traversed the link?
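The workflow this behaviour encourages is sketched below: changes are made on the port-channel interface itself and inherited by every member, rather than touching a member directly (interface and VLAN numbers are illustrative):

```
! Changing the bundle, not the member, keeps all links consistent
interface port-channel 10
  switchport mode trunk
  switchport trunk allowed vlan 10,20

! Attempting the same change directly on a member interface such as
! ethernet 1/10 is rejected (or restricted) on platforms with this
! protection, depending on the NX-OS release
```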
Continue reading “Smart Port-Channels”
This is the final follow-on post from OTV (Part I) and OTV (Part II).
In this final post I will go through the configuration steps, some outputs and FHRP isolation.
OTV Lab Setup
I set up a mini lab using two Nexus 7000 switches, each with four VDCs, two Nexus 5000 switches and a Catalyst 3750 switch.
I emulated two data center sites, each with two core switches for typical layer 3 breakout, two switches dedicated to OTV, and one access switch to test connectivity. Site 1 includes switches 11-14 (four VDCs on N7K-1) and switch 15 (N5K), whereas Site 2 includes switches 21-24 (four VDCs on N7K-2) and switch 32 (3750).
To keep the focus on OTV, I removed the complexity from the transport network by running OTV on dedicated VDCs (four of them for redundancy) connected as inline OTV appliances, and by connecting the OTV join interfaces to a single multi-access network.
This is the topology:
Before configuring OTV, a decision must be made on how OTV will be integrated into the data center design.
Recall the OTV/SVI co-existence limitation. If the core switches in place are not Nexus 7000 switches, OTV may be implemented natively on the new Nexus 7000 switch(es) or in a VDC. If the Nexus 7000 switches provide the core switch functionality, then separate VDCs are required for OTV.
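To preview the configuration steps, a minimal multicast-enabled OTV edge device configuration looks something like the sketch below. The site VLAN, group addresses, extended VLAN range and join interface are examples only and must match your own transport and site design:

```
feature otv

! VLAN used by OTV edge devices at the same site to discover each other
otv site-vlan 99

interface Overlay1
  ! Uplink into the transport network
  otv join-interface ethernet 1/1
  ! Multicast groups for the OTV control and data planes
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! VLANs to be extended between the sites
  otv extend-vlan 100-110
  no shutdown
```

Neighbor and MAC reachability can then be verified with `show otv adjacency` and `show otv route`.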
Continue reading “Cisco OTV (Part III)”