
Ethernet - Page 1

Welcome to the Study Guide covering an overview of Ethernet.
In the Essential Network Concepts Study Guides, we took a look at two network technologies, FDDI and Token Ring. While both enjoy a small piece of the LAN marketplace, Ethernet more or less rules the LAN world for a variety of reasons. These pages will touch on some of those reasons.

Ethernet was originally developed by Xerox, and later jointly developed by Digital, Intel, and Xerox - a consortium commonly referred to as DIX. In 1983, the IEEE standardized Ethernet in what is known as the 802.3 standard.

Ethernet Media Access
Ethernet is a contention-based network technology. From the previous article you may recall that in a contention-based system, all systems share, or contend for, the right to transmit data. Unlike Token Ring and FDDI, which are deterministic, in an Ethernet environment all systems listen to the media, waiting for the opportunity to gain access. Only one system can transmit at any given point in time; if two transmit at once, a collision occurs.

Specifically, Ethernet uses a contention system referred to as Carrier Sense Multiple Access with Collision Detection, or CSMA/CD for short. Let's break the name down, because it actually explains how the technology works:

Carrier Sense         - devices listen to the media for carrier signals
Multiple Access       - devices share the media
Collision Detection   - devices are capable of detecting a collision

When a device wishes to transmit on an Ethernet network, it first listens to be sure that no other station is transmitting at that point in time. If the media is clear of signals, it will transmit. The problem should be apparent - it is possible that two stations will sense the media as idle at the same moment, and both will attempt to transmit. If this happens, a collision occurs, corrupting the data that was sent.

When a collision does occur, systems will need to retransmit their data. To reduce the chance of another collision occurring immediately, systems involved in a collision will "back off" for a random period of time before attempting retransmission. If another collision occurs, the range of possible backoff times increases, and keeps increasing with each successive collision. This is part of the reason why collisions are such a hassle, especially on large networks.
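The backoff behaviour described above can be sketched in a few lines. This is an illustrative model of truncated binary exponential backoff, not real driver code; the slot time and retry limit below are the classic 10 Mbps Ethernet values.

```python
import random

SLOT_TIME_US = 51.2   # slot time for 10 Mbps Ethernet, in microseconds
MAX_ATTEMPTS = 16     # after 16 collisions in a row, the frame is dropped

def backoff_slots(collision_count):
    """Pick a random backoff delay after the nth successive collision.

    The station waits between 0 and 2^n - 1 slot times, with the
    exponent capped at 10 (truncated binary exponential backoff).
    """
    exponent = min(collision_count, 10)
    return random.randint(0, 2 ** exponent - 1)

def backoff_delay_us(collision_count):
    """Convert the chosen slot count into a delay in microseconds."""
    return backoff_slots(collision_count) * SLOT_TIME_US

# After the first collision a station waits 0 or 1 slots;
# after the third, anywhere from 0 to 7 slots, and so on.
for n in (1, 3, 10):
    print(n, backoff_slots(n))
```

Notice how the waiting window doubles with each collision - on a busy segment this is exactly why performance degrades so noticeably.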

In order to avoid or reduce collisions, a network should be broken down into small collision domains. Recall that when plugged into a hub, all systems are part of the same collision domain, meaning that their data is capable of colliding with one another. Collision domains can be created through the use of both bridges and switches. When a bridge is added to a network, it can separate the network into a few collision domains. When a switch is used, many more collision domains exist - each port becomes a collision domain. In fact, if each system is plugged into its own switch port, there will be no collisions. In short, a good reason to replace hubs with switches. We'll look at switches and bridges in more detail both later in this article and later in the series.

Part of the reason why many companies originally avoided Ethernet in favor of Token Ring was because of these collisions, and the impact they had on performance. With the advent of bridging (and especially switching) Ethernet has become much more robust, and as such, much more popular. It is by and large the most popular LAN technology used today, and its share is only moving one way - up.

Ethernet Topologies
When originally defined, Ethernet wasn't a very friendly solution. Implementing it involved connecting systems to a single long coaxial cable referred to as Thicknet or 10Base5. In order to connect to the Ethernet, systems were required to use a network interface card with an external transceiver that literally "tapped" into the core of the cable. This was referred to as a vampire tap. The transceiver then connected to the network card via an Attachment Unit Interface (AUI) cable. The connector type used was referred to as a DIX connector, as per the companies mentioned previously.

Later, Ethernet moved on to use a thinner variation of cable called 10Base2 or Thinnet. This network was more flexible, in that systems were directly connected to the coaxial cable using BNC connectors, not unlike those that connect to the back of your television. We'll look at the details of 10Base2 and 10Base5 later. For now, it's important to recognize the topology both use - what is referred to as a bus.

A bus topology is one where the signals are passed across the entire length of the cable. At either end, a device referred to as a terminator was used to absorb the signal, ensuring that it wouldn't bounce back down the wire and cause a collision.

While pure bus networks may not be terribly popular today, they still form the basis on which Ethernet networks are developed. Patch cables and hubs have replaced the long single wire, but the signals still travel to all connected systems on a segment (except where switches or bridges are used). When systems are connected to hubs, the topology is usually considered to be a star. Because of this, Ethernet is often referred to as a star-wired bus. In such a setup, the computers connecting to the hubs create two stars, while a length of Thinnet connects the actual hubs. Not as popular as it used to be, but this setup was a common sight up until only a few years ago.

Ethernet Addressing
An Ethernet address is also commonly known as a Media Access Control (MAC) or hardware address. These addresses are associated with a network card by burning them into a ROM chip at the time of manufacture. An Ethernet address should not only be unique to each card (it also acts as a serial number), but also identifies the vendor who manufactured the card, as we'll see shortly.

Ethernet addresses are 48 bits in length, and are represented in hexadecimal. The hex numbering system is referred to as being Base16, since 16 possible values exist - these range from 0-F. The table below outlines the value associated with each hexadecimal digit in both decimal and binary. Note that any value beyond F will never be valid, and that every address in hex will consist of 12 digits - each hex digit represents 4 bits.

Hexadecimal 0 1 2 3 4 5 6 7 8 9 A B C D E F
Decimal 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
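The 4-bits-per-hex-digit relationship in the table above is easy to check directly. This short sketch converts each hex digit to its decimal and binary forms:

```python
# Each hexadecimal digit maps to exactly 4 bits (a "nibble"),
# which is why a 48-bit MAC address is always 12 hex digits long.
for digit in "0123456789ABCDEF":
    value = int(digit, 16)        # decimal value of the hex digit
    bits = format(value, "04b")   # the same value as a 4-bit binary string
    print(digit, value, bits)

# 48 bits divided by 4 bits per digit gives 12 hex digits.
print(48 // 4)   # 12
```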

The formats in which you'll see hardware addresses listed vary, but in general two tend to rule. A 0x is sometimes placed in front of a hexadecimal number to distinguish it from the more common decimal format. In Windows, MAC addresses are represented with a dash between each byte.
For example: 0x 01-22-E4-F5-44-20

On Cisco devices, addresses are generally represented in 3 groups of 4 hex digits.
For example: 0x 0122.E4F5.4420
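Converting between the two notations is just a matter of regrouping the same 12 hex digits. A quick sketch (the function names are my own):

```python
def dash_to_cisco(mac):
    """Convert a Windows-style '01-22-E4-F5-44-20' to Cisco-style '0122.E4F5.4420'."""
    digits = mac.replace("-", "")                        # strip the byte separators
    return ".".join(digits[i:i + 4] for i in range(0, 12, 4))

def cisco_to_dash(mac):
    """Convert Cisco-style '0122.E4F5.4420' back to '01-22-E4-F5-44-20'."""
    digits = mac.replace(".", "")
    return "-".join(digits[i:i + 2] for i in range(0, 12, 2))

print(dash_to_cisco("01-22-E4-F5-44-20"))   # 0122.E4F5.4420
print(cisco_to_dash("0122.E4F5.4420"))      # 01-22-E4-F5-44-20
```

Either way, the underlying 48-bit address is identical - only the grouping changes.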

A MAC broadcast address is all ones:    11111111 11111111 11111111 11111111 11111111 11111111
In hexadecimal format we see it as:     0x FFFF.FFFF.FFFF

Notice the leading "0x". That tells us that the number which follows is in hexadecimal format. The grouping doesn't change the address itself, since it's just a representation. However the digits are grouped, a MAC address always consists of 12 hex digits, with each hex digit representing 4 bits. Seeing as this is a Cisco series, we'll generally follow the Cisco dotted format.

The key thing with a MAC address is what it identifies. The first 24 bits represent a vendor Organizationally Unique Identifier, or OUI. The last 24 bits uniquely identify the network card itself. This breakdown is shown below. The IEEE allocates the OUIs to manufacturers. Note that mistakes do happen - while each network card should have a unique address, there have been many cases of mistakes being made during a production run. Duplicate MAC addresses can cause big problems on a network, but in general, they shouldn't be a big worry.

OUI (first 24 bits) | Serial Number (last 24 bits)
A list of the OUIs assigned to vendors is published by the IEEE, if you're interested.
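Splitting an address into its OUI and serial-number halves follows directly from the breakdown above. A small sketch (the function name is my own):

```python
def split_mac(mac):
    """Split a dashed MAC address into its OUI and serial-number halves.

    The first 3 bytes (24 bits) are the vendor's OUI; the remaining
    3 bytes (24 bits) are the per-card serial number.
    """
    octets = mac.split("-")
    oui, serial = octets[:3], octets[3:]
    return "-".join(oui), "-".join(serial)

oui, serial = split_mac("01-22-E4-F5-44-20")
print(oui)     # 01-22-E4
print(serial)  # F5-44-20
```

Two cards from the same manufacturer will typically share the first half, while the second half differs card to card.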

Full Versus Half Duplex Communication
Ethernet was originally designed to work as a half duplex system, meaning that a system could either send or receive data, but not both simultaneously. In reality, this isn't so much a design element - it's actually a function of contention-based media access. Since all systems share the media, and only one system can send data at any given point in time, half duplex was a requirement.

When systems are plugged into a hub, or connected to an Ethernet bus, they will always communicate in half duplex, even when a network card supports full duplex. In order to enable full duplex communication, where a system can both send and receive data simultaneously, a switch must be involved. More specifically, a system will require its own dedicated switch port to be capable of full duplex communication.

For example, imagine a scenario where you plug one system into a switch port, and then plug a hub into a different switch port. Connecting a hub to a switch allows a number of computers to be part of the same collision domain. However, the systems connected to this hub can only communicate in half duplex - they are still on a traditional shared network, after all. The only case where two systems can communicate in full duplex using the full available bandwidth is when both are plugged into their own dedicated switch ports. This makes the systems capable of sending and receiving data at the maximum NIC speed simultaneously.

When plugged into a switch or hub, most Ethernet systems are now capable of what is referred to as autonegotiation. When a system is connected, the port and the network interface card exchange what are known as Fast Link Pulses (FLPs). These carry information about the capabilities of a card. For example, if you plug a computer with a 10/100 Ethernet card into a 10Mbps switch port, they will automatically negotiate a speed of 10 Mbps - the fastest speed both sides support. Note that not all network cards, switches, and hubs are capable of autonegotiation, especially older models.
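The negotiation itself amounts to each side advertising its capabilities, then both picking the highest-priority mode they have in common. The sketch below models that resolution; the ordering follows the general idea (faster beats slower, full duplex beats half duplex at the same speed) rather than the exact 802.3 priority table.

```python
# Modes ordered from most to least preferred: faster speeds first,
# and full duplex ahead of half duplex at the same speed.
PRIORITY = [
    ("100", "full"),
    ("100", "half"),
    ("10", "full"),
    ("10", "half"),
]

def negotiate(nic_modes, port_modes):
    """Return the best mode advertised by both sides, or None if none match."""
    for mode in PRIORITY:
        if mode in nic_modes and mode in port_modes:
            return mode
    return None

# A 10/100 card plugged into a port that only supports 10 Mbps:
nic = {("100", "full"), ("100", "half"), ("10", "full"), ("10", "half")}
port = {("10", "full"), ("10", "half")}
print(negotiate(nic, port))   # ('10', 'full')
```

If one side doesn't autonegotiate at all, this matching step can't happen - which is why mismatched duplex settings were such a common problem with older equipment.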