Assume you have either of the following setups: a single router (R3) with multiple links, either to the same upstream router (R2) or to two different upstream routers (R2 and R4), and you want to load-share outbound traffic (direction from left to right) across both links. Obviously the routing table needs multiple outgoing links as next hops to perform the desired balancing. The maximum-paths command specifies how many paths (next hops) per prefix a specific routing protocol is allowed to install in the routing table; otherwise the default behavior dictates that only the best route from each routing protocol is a candidate for insertion into the routing table.
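As a quick illustration (the routing process and values below are arbitrary, not taken from the scenario), maximum-paths is configured under the routing protocol process:

```
router ospf 1
 maximum-paths 2
```

With this in place, up to two equal-cost paths for a prefix can be installed in the routing table, which is the prerequisite for any of the load-sharing methods below.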
Since the links terminate on the same router (R3) you have the following options:
- Per-Destination Load-Sharing using Fast Switching
- Per-Source-Destination Load-Sharing using CEF
- Per-Packet Load-Balancing using Process Switching
- Per-Packet Load-Balancing using CEF
You need to be aware that IOS makes the switching decision based on the configuration of the inbound interface first. If CEF is enabled on the inbound interface, packets are CEF switched regardless of the configuration on the outbound interface; CEF is used ONLY if it is enabled on the inbound interface. If CEF is not enabled on the inbound interface, the configuration of the exit interface determines the switching method. The following table illustrates the different behaviors:
|Inbound Configuration|Outbound Configuration|Switching Method Used|
|---|---|---|
|CEF|Any|CEF|
|Not CEF (fast/process switching)|Fast switching|Fast switching|
|Not CEF (fast/process switching)|Process switching|Process switching|
Refer to the following article for more info about the switching types and how to enable each.
Per-Destination Load-Sharing using Fast Switching
IOS performs per-destination load sharing if the exit interfaces are configured with fast switching and CEF is not enabled on the inbound interface. Fast switching is enabled outbound by default, even if CEF is enabled on the interface. With fast switching, all packets to a specific destination are routed out of the same interface. In the above scenarios, all traffic to 172.16.1.1 will always leave via the same interface/link. This might not be the desired behavior, especially when 172.16.1.1 is, for example, a SQL database server and the majority of traffic to the 172.16.1.0/24 subnet is directed to 172.16.1.1. This would cause the top link to carry far more traffic than the bottom link.
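The per-destination idea can be modeled outside IOS. This is a toy Python sketch (the interface names are illustrative, and the round-robin stand-in replaces the real routing decision): the first packet to a destination binds it to one exit interface, and every later packet to that destination reuses the cached entry.

```python
from itertools import cycle

class FastSwitchCache:
    """Toy model of per-destination fast switching: the first lookup
    for a destination binds it to one exit interface; every later
    packet to that destination reuses the cached interface."""
    def __init__(self, interfaces):
        self._next = cycle(interfaces)   # stand-in for the initial routing decision
        self._cache = {}                 # destination -> exit interface

    def forward(self, dst):
        if dst not in self._cache:       # cache miss: route once, then cache
            self._cache[dst] = next(self._next)
        return self._cache[dst]

cache = FastSwitchCache(["Fa0/0", "Fa0/1"])
# All traffic to 172.16.1.1 leaves the same interface, however often it is sent:
print([cache.forward("172.16.1.1") for _ in range(4)])
# A different destination may be bound to the other link:
print(cache.forward("172.16.1.2"))
```

This is exactly why a single heavy talker such as the SQL server above can overload one link: the cache entry, not the traffic volume, decides the path.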
Per-Source-Destination Load-Sharing using CEF
Most commonly referred to as Per-Destination Load-Balancing, this is the default switching scheme CEF uses when enabled; unfortunately, some documentation misrepresents this. CEF Per-Destination Load-Balancing hashes the source and destination IP addresses, resulting in a hash ID that randomizes the assignment across the end-to-end paths. Internally, the active paths are assigned to some of 16 hash buckets; the path-to-bucket assignment varies with the type of load balancing and the number of active paths. All traffic with a particular source address destined to a specific destination address will exit the same interface. This provides more granularity and better load sharing, but the traffic distribution is still not perfectly equal.
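The hashing step can be sketched as follows. Note this is only an illustration: the real CEF hash algorithm is internal to IOS, and CRC32 here is a hypothetical stand-in, not what CEF actually uses.

```python
import zlib

HASH_BUCKETS = 16

def cef_bucket(src, dst):
    """Toy stand-in for the CEF hash: mix the source and destination
    addresses into one of the 16 hash buckets. CRC32 is only an
    illustration; the real algorithm is internal to IOS."""
    return zlib.crc32(f"{src}>{dst}".encode()) % HASH_BUCKETS

def pick_path(src, dst, paths):
    # Buckets are interlaced over the active paths (even-path case).
    return paths[cef_bucket(src, dst) % len(paths)]

paths = ["Fa0/0", "Fa0/1"]
# The same src-dst pair always hashes to the same bucket, hence the same link:
print(pick_path("10.0.0.1", "172.16.1.1", paths))
# A different source talking to the same server may hash to the other link:
print(pick_path("10.0.0.2", "172.16.1.1", paths))
```

Because the hash is deterministic, a given src-dst pair is pinned to one link, while different pairs spread across the links; that is the whole mechanism behind "per-source-destination" sharing.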
The simple case is an even number of paths: the 16 buckets are evenly filled with the active paths. With 2 paths, 16 buckets / 2 paths = 8 hash assignments per path, interlaced. Traffic will be sent accordingly: top, bottom, top, bottom, top, bottom, and so on.
This becomes clearer when looking at the output of the hidden command “show ip cef <prefix> internal”, specifically the highlighted type and load distribution.
To see why I say it is Per-Source-Destination: with CEF you can actually see where a specific src-dst pair would be forwarded, based on the hash, with the command “show ip cef exact-route”. The output shows that EACH src-dst pair gets its own exit interface:
But what happens if there are 3 paths between R3 and R2? If 16 (the number of hash buckets) is not divisible by the number of active paths, the last few buckets representing the remainder are disabled. With 3 paths, the closest divisible number is 15, i.e. 15 / 3 = 5 hash assignments per path, interlaced. Traffic in this instance will be sent using the 1st interface, 2nd interface, 3rd interface, 1st interface, 2nd interface, 3rd interface, and so on. The output from the same command above will show the load distribution as 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2, with the 16th hash bucket unused and removed.
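The bucket-to-path assignment for both cases can be reproduced with a few lines of Python (a sketch of the interlacing rule described above, not IOS code):

```python
HASH_BUCKETS = 16

def load_distribution(num_paths):
    """Interlace the active paths over the 16 hash buckets; the
    remainder buckets (16 mod num_paths) are disabled, mirroring the
    load distribution line of 'show ip cef <prefix> internal'."""
    usable = HASH_BUCKETS - (HASH_BUCKETS % num_paths)
    return [i % num_paths for i in range(usable)]

print(load_distribution(2))  # 0 1 repeated eight times: all 16 buckets used
print(load_distribution(3))  # 0 1 2 repeated five times: the 16th bucket is disabled
```

With 2 paths every bucket is used (8 per path); with 3 paths only 15 buckets are used (5 per path), which matches the 0 1 2 0 1 2 … distribution shown above.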
Per-Packet Load-Balancing using Process Switching
This is the oldest form of load balancing. Due to the nature of process switching, Per-Packet Load-Balancing is inherent when multiple next hops to the same prefix exist in the routing table. Because every packet is punted to the process level for a destination lookup, each packet is sent out of the next available interface when multiple exist.
Per-Packet Load-Balancing using process switching has a couple of drawbacks, more noticeable in large networks: a large increase in main processor utilization, an increase in data transfer time to and from I/O memory, and out-of-order packets. CEF therefore provides a more efficient alternative.
Per-Packet Load-Balancing using CEF
Per-Packet Load-Balancing is another method available to CEF. This has to be enabled on the outgoing interfaces with:
ip load-sharing per-packet
With Per-Packet Load-Balancing, one packet is sent over one available link and the next packet is sent over the next available link, even if that next packet is to the same destination as the first, and so on, given equal-cost paths.
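The per-packet behavior reduces to a round-robin over the equal-cost links, which this minimal Python sketch shows (interface names are illustrative):

```python
from itertools import cycle

def per_packet(packets, links):
    """Toy per-packet load balancing: every packet, even one to the
    same destination as the previous packet, goes out the next
    available equal-cost link in round-robin order."""
    rr = cycle(links)
    return [(dst, next(rr)) for dst in packets]

# Four packets of the same flow alternate between the two links:
flow = ["172.16.1.1"] * 4
print(per_packet(flow, ["Fa0/0", "Fa0/1"]))
```

Since consecutive packets of one flow take different paths, they may experience different latencies, which is the root cause of the reordering discussed next.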
Per-Packet Load-Balancing will distribute the load more evenly than any other switching mechanism, but that comes at a price: because the packets to any one given destination take different paths, it is possible for packets to arrive out of order, which could be unacceptable for some applications.
This is one of two ways to do Load-Balancing, and the preferred way, as discussed in this article.
The output from “show ip cef <prefix> internal”:
To see the per-packet behavior, use the command “show ip cef exact-route”. Every time the command is issued for the SAME src-dst pair, the exit interface changes.