Switching Methods and Spanning Tree Protocol
We have looked at the basics of bridging and
switching, as well as how these devices impact Layer 2 communications. In
this article we'll build on the topic, with a look at the different
switching methods used by Cisco equipment, as well as a look at how
implementing redundant connections can lead to broadcast storms on bridged
or switched networks.
Cisco Switching Methods
We already know that when a switch receives a frame from a host, it will
look up the destination MAC address in its forwarding table to determine
where the frame should be passed next. While this is true, how the switch
handles the forwarding process can vary. For example, one method used by
Cisco switches will buffer the entire frame, and then recalculate its CRC
to be sure it hasn't been corrupted. Another will begin forwarding a frame
almost immediately as it begins entering the switch, not bothering to look
at the CRC at all. The tradeoff here should be clear - certain methods
focus on reliability, while others focus on speed.
In all, Cisco supports three main switching methods on its equipment.
Not every method is supported on all models, but you should be familiar with
the operation of each. The three main switching methods include:
- Store and forward
- Cut-through
- Fragment Free
The Store and Forward switching method is supported on the Catalyst 1900
series of Cisco switches. When this method is used, a switch will wait for
the entire frame to enter the switch, copy it into its buffers, and will
then calculate the frame's CRC value. If the CRC value calculated by the
switch is the same as the value stored in the frame, the frame is not corrupt,
and the switch will forward it to the destination port(s). If the
calculated CRC value is different, it means that the frame is corrupt, and
would be subsequently dropped by the switch. The Store and Forward method
is most concerned with frames being transmitted reliably, but obviously
adds latency to the communication process.
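The store-and-forward check can be sketched in a few lines of Python. The frame layout below is a simplifying assumption (a trailing 4-byte FCS computed as a little-endian CRC-32); real switches perform this verification in hardware:

```python
import zlib

def store_and_forward_ok(frame: bytes) -> bool:
    # Buffer the whole frame, then verify its trailing 4-byte FCS.
    # Simplified: we assume the FCS is a little-endian CRC-32 of the rest.
    body, received_fcs = frame[:-4], frame[-4:]
    computed = zlib.crc32(body).to_bytes(4, "little")
    return computed == received_fcs

# Build a well-formed frame, then flip one byte to corrupt a copy.
body = bytes(60)  # minimal header + payload, all zeros for illustration
good = body + zlib.crc32(body).to_bytes(4, "little")
bad = b"\xff" + good[1:]
print(store_and_forward_ok(good))  # True
print(store_and_forward_ok(bad))   # False
```

The good frame passes the CRC comparison and would be forwarded; the corrupted copy fails it and would be dropped.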
Cut-through switching follows a different method. Instead of copying the entire
incoming frame to its buffers and calculating its CRC value, a switch
using cut-through will instead begin forwarding a frame immediately once
the first 6 bytes of the frame have been received. As a reminder, the
first 6 bytes of a frame contain the destination MAC address, which is
enough information for the switch to begin making a forwarding decision.
While this method sacrifices reliability to some degree, it also speeds up
the frame forwarding process considerably. Given the reliability of most
networks (and networking equipment) in use today, the corruption of frames
is not nearly as much of an issue as it once was, making cut-through
switching a reasonable choice. Cut-through switching is the default
switching method in many higher-end Cisco switches, such as those in the
Catalyst 5000 series.
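As a sketch (the table contents and port names here are hypothetical), the lookup a cut-through switch can perform after receiving only six bytes might look like this:

```python
def cut_through_port(first_bytes: bytes, mac_table: dict):
    # After only the first 6 bytes (the destination MAC) have arrived,
    # the switch can already pick an egress port from its table.
    if len(first_bytes) < 6:
        return None  # not enough of the frame has arrived yet
    dst_mac = first_bytes[:6].hex(".", 2)  # e.g. "0000.0000.000a"
    return mac_table.get(dst_mac)  # an unknown MAC would be flooded

mac_table = {"0000.0000.000a": "port 2"}  # hypothetical forwarding table
frame_start = bytes.fromhex("00000000000a")  # destination MAC only
print(cut_through_port(frame_start, mac_table))  # port 2
```

Note that nothing after byte 6 is inspected, which is exactly why a corrupted frame would still be forwarded.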
The third switching method that you'll need to be familiar with for the
CCNA exam is known as Fragment Free. Instead of starting to forward a
frame after the first 6 bytes have been received, a switch configured to
use this method will wait to forward the frame until after the first 64
bytes are received. These 64 bytes are known as the "collision window,"
because a corrupted frame is usually recognized within the first 64 bytes.
This method assumes that if the first 64 bytes look good, then the frame
is probably ok. This method is both faster than Store and Forward
switching and more reliable than Cut-through switching, providing a
reasonably balanced middle ground. The Cisco 1900 series of switches
supports both the Fragment Free and Store and Forward switching methods;
Fragment Free is the default method on the 1900 series.
In the figure below you see a frame and the points at which the forwarding decision is made:
- Point 1 is the beginning of the frame.
- Point 2 is where the Cut-through decision is made.
- Point 3 is where the Fragment Free decision is made.
- Point 4 is where the Store and Forward decision is made.
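The decision points above map directly to byte counts. A small sketch (the 1518-byte frame size is just a maximum-size Ethernet frame chosen for illustration) makes the comparison concrete:

```python
def bytes_buffered_before_forwarding(method: str, frame_len: int) -> int:
    # How much of the frame each method waits for before forwarding:
    waits = {
        "cut-through": 6,                # Point 2: destination MAC only
        "fragment-free": 64,             # Point 3: the collision window
        "store-and-forward": frame_len,  # Point 4: the entire frame
    }
    return min(waits[method], frame_len)

for method in ("cut-through", "fragment-free", "store-and-forward"):
    print(method, bytes_buffered_before_forwarding(method, 1518))
```

The further right the decision point sits, the more latency is added and the more corruption is caught.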
Any good network design will always consider the need for redundant
links, in order to help ensure that an alternate path through the network exists in
the case that a link (or piece of equipment) fails. For example, consider the diagram
below. In it, Switches A, B, and C are interconnected in a loop. If any one link
should fail, another path would exist, allowing hosts to still be able to communicate.
While redundancy on a switched network might immediately seem like a great idea, it
introduces one not-so-little problem. On a switched (or bridged) network, a loop will
lead to broadcast storms.
A broadcast storm is something that many people have heard of, but in
my experience is something that few people understand. Let's take a
look at an example to help make the problems that a loop introduces a
little easier to understand.
Consider the simple network below. It consists of two collision domains,
with two bridges connecting the segments, providing redundant paths.
We'll begin by looking at how the bridges build their forwarding tables.
Let's say that Computer A sends a frame out onto the network. Because
all devices on Network A are part of the same collision domain, it
will be seen by all hosts, including interface A on Bridges 1 and 2.
The bridges will use the source MAC address information to add entries
to their forwarding tables, which will identify Computer A as being
accessible via their network A interfaces. So far, so good.
By the same token, let's assume that Computer B also sends out a frame.
Bridges 1 and 2 will also see this frame, and will add Computer B to
their respective forwarding tables, as being accessible via interface
B. At this point, the forwarding (or MAC) tables on both Bridges 1 and 2 will appear as follows:
MAC Address    | Bridge Interface
0000.0000.000A | A
0000.0000.000B | B
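The learning process just described can be sketched as follows (the class and method names are hypothetical, and real bridges also age entries out of the table):

```python
class Bridge:
    # Minimal transparent-bridge learning: the forwarding table is built
    # purely from the source MAC of each frame and its arrival interface.
    def __init__(self, name):
        self.name = name
        self.mac_table = {}

    def see_frame(self, src_mac, interface):
        self.mac_table[src_mac] = interface  # learn (or re-learn) the sender

bridge1, bridge2 = Bridge("Bridge 1"), Bridge("Bridge 2")
for bridge in (bridge1, bridge2):
    bridge.see_frame("0000.0000.000A", "A")  # Computer A's frame on interface A
    bridge.see_frame("0000.0000.000B", "B")  # Computer B's frame on interface B
print(bridge1.mac_table)  # {'0000.0000.000A': 'A', '0000.0000.000B': 'B'}
```

Both bridges end up with identical, correct tables, which is exactly the state shown above.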
At this point, life is still pretty good on our network, since both
bridges are aware of the correct location of both hosts. The problem
arises when Computer B attempts to communicate with Computer A, or
vice versa.
For example, let's say that Computer B wants to communicate with
Computer A. It sends out a frame, with a source MAC address of
0000.0000.000B, and a destination MAC address of 0000.0000.000A (note
that these addresses are hypothetical, for illustration purposes).
Even though only one frame is sent by Computer B, it will be "seen" by
both bridges, since they are all part of the same collision domain.
This is illustrated below.
Upon receiving the frame, Bridge 1 will inspect the destination MAC
address and notice that the host with address 0000.0000.000A is
accessible via its network A interface. As such, it will forward the
frame. Unfortunately, Bridge 2 will do the exact same thing - and
forward a second copy. Now, even though Computer B only sent out a
single frame, TWO frames have been forwarded onto network A.
All things considered, nothing seems too bad so far. You might think
that the worst of this is that Computer A would ultimately receive two
copies of the same frame, and just be a little confused.
Unfortunately, the problem is larger still. This is because those
frames that were forwarded onto segment A will not only be encountered
by Computer A, but also the "A" interface of each of the bridges. This
is illustrated below.
This is where the fun begins. In encountering the frame forwarded on to
segment A by Bridge 1, Bridge 2 will take a look at the source MAC
address of the frame and will notice that it came from Computer B (the
source MAC address). Since the frame was received on its "A"
interface, Bridge 2 will automatically assume that Computer B has
moved, and is now part of network A (recall that bridges and switches
build their forwarding tables by looking at the source MAC address of
a frame). As such, it will change its MAC address table to reflect the
change. The exact same process would occur on Bridge 1, according to
the frame forwarded by Bridge 2.
Now, both Bridge 1 and Bridge 2 assume that A and B are on the same
segment. When Computer A attempts to respond to Computer B, neither
bridge will forward the frame - as far as they're concerned, both
hosts are on the same segment. If that wasn't bad enough, the problem
gets worse. Since it doesn't receive a response, Computer B will
likely try again, sending out another frame. Upon seeing this frame on
their "B" interfaces, both bridges will again think that Computer B
has moved, back to segment B. They will thus update their forwarding
tables, and again each forward a copy of the frame on to segment A.
Starting to see the problem? If it doesn't seem like a big issue with
only 2 clients, imagine a network with 50 clients on each segment, and
this situation occurring over and over again. This, ladies and
gentlemen, is a broadcast storm. In even the smallest bridged or
switched environments, a loop can easily and quickly bring a network
to its knees.
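The table flapping just described can be simulated in a few lines (the topology, bridge names, and MAC address are all hypothetical, matching the example above):

```python
# Two bridges in a loop: every frame one bridge forwards onto segment A
# re-appears at the other bridge's "A" interface, flipping its table
# entry for Computer B back and forth.
tables = {"Bridge 1": {}, "Bridge 2": {}}
moves = []

def learn(bridge, src_mac, interface):
    old = tables[bridge].get(src_mac)
    tables[bridge][src_mac] = interface
    if old is not None and old != interface:
        moves.append(f"{bridge} thinks {src_mac} moved {old} -> {interface}")

# Computer B sends one frame on segment B; both bridges learn it there...
for bridge in tables:
    learn(bridge, "0000.0000.000B", "B")
# ...then each forwards a copy onto segment A, where the other sees it.
learn("Bridge 2", "0000.0000.000B", "A")  # copy forwarded by Bridge 1
learn("Bridge 1", "0000.0000.000B", "A")  # copy forwarded by Bridge 2
print(moves)
```

After one round trip, both bridges wrongly believe Computer B lives on segment A; every retransmission from Computer B flips the entries again, and the cycle repeats indefinitely.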
Does this mean that implementing redundancy in a bridged or switched
environment is impossible? Of course not. However, without the proper
precautions, creating a "loop," even with the best intentions, will
lead to broadcast storms. In order to circumvent this problem, a
protocol solution was developed -
Spanning Tree Protocol (STP).
STP allows you to implement redundancy in a switched
or bridged network without needing to worry about the broadcast storm
just described. STP does this by selectively blocking switch or
bridge interfaces so that no active loop exists, even though the
physical wiring contains one. Consider the example below. Our network
is exactly the same, except that this time, STP is enabled. Notice
that interface B on Bridge 2 has been put into a blocking mode. All
traffic between segments A and B will now be forwarded through Bridge
1, which provides the only active path.
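Blocking a single port is enough to break the loop; a minimal sketch (port names and states are hypothetical) shows the effect:

```python
# A port in the STP "blocking" state neither forwards frames nor
# learns MAC addresses, so only one active path remains.
port_state = {("Bridge 1", "B"): "forwarding", ("Bridge 2", "B"): "blocking"}

def forwards(bridge, port):
    return port_state.get((bridge, port)) == "forwarding"

# Computer B's frame reaches the "B" port of both bridges, but only
# Bridge 1 passes it on to segment A.
forwarders = [b for b in ("Bridge 1", "Bridge 2") if forwards(b, "B")]
print(forwarders)  # ['Bridge 1']
```

With only one bridge forwarding, exactly one copy of each frame crosses between the segments, and the relearning problem described earlier cannot occur.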
The great thing about Spanning Tree is that it constantly monitors the
network - if Bridge 1 were to fail, for example, it would
automatically change the state of port B on Bridge 2 to begin
forwarding traffic, thus providing automatic redundancy. If that
sounds a little simplified, it is. The actual operation of Spanning
Tree is a little more complex, and also has some downsides.
Next we'll take a closer look at the operation of Spanning Tree,
including the processes that it goes through in ensuring that a
switched or bridged network topology is loop free, while still
providing redundancy.