Ethernet has been the world's most pervasive networking technology since the 1970s. It is estimated that in 1996, 82% of all networking equipment shipped was Ethernet. In 1995, the Fast Ethernet standard was approved by the IEEE. Fast Ethernet provided 10 times the bandwidth, along with new features such as full-duplex operation and auto-negotiation, and established Ethernet as a scalable technology. Now, with the emerging Gigabit Ethernet standard, Ethernet is expected to scale even further.
The Fast Ethernet standard was pushed by an industry consortium called the Fast Ethernet Alliance. A similar alliance, called the Gigabit Ethernet Alliance, was formed by 11 companies in May 1996, soon after the IEEE announced the formation of the 802.3z Gigabit Ethernet standards project. At last count, there were over 95 companies in the alliance, drawn from the networking, computer, and integrated circuit industries.
A draft 802.3z standard was issued by IEEE in July 1997. The last technical changes are expected to be resolved by September. The standard is expected to be adopted by March 1998.
The new Gigabit Ethernet standard will be fully compatible with existing Ethernet installations. It will retain Carrier Sense Multiple Access/Collision Detection (CSMA/CD) as the access method. It will support full-duplex as well as half-duplex modes of operation. Initially, single-mode and multimode fiber and short-haul coaxial cable will be supported. Standards for twisted pair cables are expected by 1999. The standard uses the physical signalling technology of Fibre Channel to support Gigabit rates over optical fibers.
Initially, Gigabit Ethernet is expected to be deployed as a backbone in existing networks. It can be used to aggregate traffic between clients and "server farms", and for connecting Fast Ethernet switches. It can also be used for connecting workstations and servers for high-bandwidth applications such as medical imaging or CAD.
CSMA/CD is the protocol used by stations sharing a medium to arbitrate access to it. A sender has to "listen" to the medium; if no one else is transmitting, the sender may transmit. If two senders start transmitting at the same time, a collision is said to have occurred. Transmitting stations therefore have to listen to the medium for collisions while transmitting, and retransmit a packet after some time if a collision occurs.
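The listen-transmit-back-off cycle described above can be sketched as a small simulation. This is a toy model, not the standard's state machine: `medium_busy` and `collision_occurs` are hypothetical callbacks standing in for the real carrier-sense and collision-detect signals from the hardware, and the back-off rule is the truncated binary exponential backoff that 802.3 defines.

```python
import random

SLOT_TIME_BITS = 512          # Ethernet slot time: 512 bit times (64 bytes)
MAX_ATTEMPTS = 16             # 802.3 gives up after 16 transmission attempts
BACKOFF_LIMIT = 10            # backoff exponent is capped at 10

def backoff_slots(attempt):
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    k = min(attempt, BACKOFF_LIMIT)
    return random.randint(0, 2 ** k - 1)

def transmit(medium_busy, collision_occurs):
    """Toy CSMA/CD send loop.  Returns the attempt number on success."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        while medium_busy():          # carrier sense: defer while busy
            pass
        if not collision_occurs():    # transmit; listen for collisions
            return attempt
        # collision: back off a random number of slot times, then retry
        _ = backoff_slots(attempt)
    raise RuntimeError("excessive collisions: frame dropped")
```

On an idle, collision-free medium the frame goes out on the first attempt; under repeated collisions the random waiting window doubles each time, which is what spreads contending stations apart.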
The original 802.3 standard was published in 1985. Originally, two types of coaxial cables were used, called Thick Ethernet and Thin Ethernet. Later, unshielded copper twisted pair (UTP), as used for telephones, was added.
In 1980, when Xerox, DEC and Intel published the DIX Ethernet standard, 10 Mbps was a lot of bandwidth. Since then, as computing technology improved, network bandwidth requirements also increased. In 1995, the IEEE adopted the 802.3u Fast Ethernet standard, a 100 Mbps Ethernet standard that established Ethernet's scalability. With Fast Ethernet came full-duplex Ethernet. Until then, all Ethernets worked in half-duplex mode; that is, even if there were only two stations on a segment, both could not transmit simultaneously. With full-duplex operation, this became possible.
The next step in the evolution of Ethernet is Gigabit Ethernet. The standard is being developed by the IEEE 802.3z committee.
The Alliance represents a multi-vendor effort to provide open and interoperable Gigabit Ethernet products. The objectives of the alliance are:
Currently, the alliance has over 95 member companies, which indicates that the emerging standard will be backed by the industry. The alliance is pushing for speedy approval of the standard. So far, standardization is proceeding without delays, and the standard is expected to be approved by March 1998.
The Physical Layer of Gigabit Ethernet uses a mixture of proven technologies from the original Ethernet and the ANSI X3T11 Fibre Channel specification. Gigabit Ethernet is expected to support four physical media types, defined in 802.3z (1000Base-X) and 802.3ab (1000Base-T).
Three types of media are included in the 1000Base-X standard:
Cable Type | Distance
Single-mode fiber (9 micron) | 3000 m using 1300 nm laser (LX)
Multimode fiber (62.5 micron) | 300 m using 850 nm laser (SX); 550 m using 1300 nm laser (LX)
Multimode fiber (50 micron) | 550 m using 850 nm laser (SX); 550 m using 1300 nm laser (LX)
Short-haul copper | 25 m
Ethernet has a minimum frame size of 64 bytes. The reason for having a minimum frame size is to prevent a station from completing the transmission of a frame before the first bit has reached the far end of the cable, where it may collide with another frame. Therefore, the minimum time to detect a collision is the time it takes for the signal to propagate from one end of the cable to the other and back. This minimum time is called the Slot Time. (A more useful metric is Slot Size, the number of bytes that can be transmitted in one Slot Time. In Ethernet, the slot size is 64 bytes, the minimum frame length.)
The maximum cable length permitted in Ethernet is 2.5 km (with a maximum of four repeaters on any path). As the bit rate increases, the sender transmits the frame faster. As a result, if the same frame sizes and cable lengths are maintained, a station may finish transmitting a frame too quickly to detect a collision at the other end of the cable. So one of two things must be done: (i) keep the maximum cable length and increase the slot time (and therefore the minimum frame size), or (ii) keep the slot time the same and decrease the maximum cable length, or both. In Fast Ethernet, the maximum cable length was reduced to only 100 meters, leaving the minimum frame size and slot time intact.
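The trade-off above is simple arithmetic, and working the numbers shows why the cable length shrinks with each speed increase. As a rough sketch (the assumed propagation speed of 2×10⁸ m/s and the ideal round-trip bound are simplifications; real limits are lower because repeaters and interface electronics add delay):

```python
SLOT_BITS = 512               # 64-byte minimum frame = 512 bit times
PROPAGATION_SPEED = 2.0e8     # assumed signal speed in cable, metres/second

def slot_time(bit_rate):
    """Time to transmit one slot (512 bits) at the given bit rate."""
    return SLOT_BITS / bit_rate

def max_one_way_length(bit_rate):
    """Idealized cable-length bound: the round trip must fit within one
    slot time, so 2 * L / v <= slot_time.  Ignores repeater delays."""
    return slot_time(bit_rate) * PROPAGATION_SPEED / 2

# 10 Mbps   -> 51.2 us slot, ~5120 m bound (2.5 km allowed in practice)
# 100 Mbps  -> 5.12 us slot, ~512 m bound  (100 m used by Fast Ethernet)
# 1000 Mbps -> 0.512 us slot, ~51 m bound  -- hence the larger Gigabit slot
```

Even this generous ideal bound collapses to tens of meters at gigabit speed, which is the motivation for the 512-byte slot discussed next in the text.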
Gigabit Ethernet maintains the minimum and maximum frame sizes of Ethernet. Since Gigabit Ethernet is 10 times faster than Fast Ethernet, maintaining the same slot size would require reducing the maximum cable length to about 10 meters, which is not very useful. Instead, Gigabit Ethernet uses a bigger slot size of 512 bytes. To maintain compatibility with Ethernet, the minimum frame size is not increased; instead, the "carrier event" is extended. If the frame is shorter than 512 bytes, it is padded with extension symbols. These are special symbols that cannot occur in the payload. This process is called Carrier Extension.
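A minimal sketch of the sender and receiver sides of Carrier Extension, under the simplification that a frame is a sequence of byte values and a Python sentinel object stands in for the special non-data extension symbols (which, on the real wire, are code-group symbols that cannot appear in the payload):

```python
SLOT_BYTES = 512          # Gigabit Ethernet slot size
MIN_FRAME = 64            # Ethernet minimum frame size, unchanged

# Sentinel standing in for an extension symbol; real extension symbols
# are distinct code groups that cannot occur in frame data.
EXTENSION = object()

def carrier_extend(frame):
    """Sender: pad a frame shorter than the 512-byte slot with
    extension symbols.  The frame bytes (and FCS) are unchanged;
    only the carrier event on the wire is lengthened."""
    symbols = list(frame)
    if len(symbols) < SLOT_BYTES:
        symbols += [EXTENSION] * (SLOT_BYTES - len(symbols))
    return symbols

def strip_extension(wire_event):
    """Receiver: remove extension symbols before the FCS is checked,
    so the LLC layer never sees them."""
    return [sym for sym in wire_event if sym is not EXTENSION]
```

A 64-byte frame thus occupies a full 512-byte slot on the wire, while the receiver recovers exactly the original 64 bytes.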
For carrier extended frames, the non-data extension symbols are included in the "collision window", that is, the entire extended frame is considered for collision and dropped. However, the Frame Check Sequence (FCS) is calculated only on the original (without extension symbols) frame. The extension symbols are removed before the FCS is checked by the receiver. So the LLC (Logical Link Control) layer is not even aware of the carrier extension. Fig. 1 shows the ethernet frame format when Carrier Extension is used.
Packet Bursting is an extension of Carrier Extension: "Carrier Extension plus a burst of packets". When a station has a number of packets to transmit, the first packet is padded to the slot time, if necessary, using carrier extension. Subsequent packets are transmitted back to back, with the minimum inter-packet gap (IPG), until a burst timer (of 1500 bytes) expires. Packet Bursting substantially increases throughput. Fig. 2 shows how Packet Bursting works.
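The bursting rule can be sketched as follows. This is an illustrative model, not the draft standard's exact timer mechanics: line time is counted in byte times, the 1500-byte burst limit is the figure quoted in the text above, and the 12-byte minimum IPG is the classic Ethernet value.

```python
SLOT_BYTES = 512          # first frame must occupy a full slot
IPG_BYTES = 12            # minimum inter-packet gap, in byte times
BURST_LIMIT = 1500        # burst timer from the text, in byte times

def burst(frames):
    """Sketch of Packet Bursting: the first frame is carrier-extended
    to the slot size, then further frames follow back to back with only
    the minimum IPG until the burst limit is reached.  Returns the
    frames sent in this burst and the line time consumed (in bytes)."""
    sent, line_time = [], 0
    for i, frame in enumerate(frames):
        if i == 0:
            line_time += max(len(frame), SLOT_BYTES)  # carrier extension
        else:
            if line_time >= BURST_LIMIT:
                break                                 # burst timer expired
            line_time += IPG_BYTES + len(frame)
        sent.append(frame)
    return sent, line_time
```

For a queue of 64-byte frames, only the first frame pays the 512-byte extension cost; each later frame costs just 12 + 64 bytes of line time, which is where the throughput gain comes from.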
The GMII provides two media status signals: one indicates presence of the carrier, and the other indicates absence of collision. The Reconciliation Sublayer (RS) maps these signals to Physical Signalling (PLS) primitives understood by the existing MAC sublayer. With the GMII, it is possible to connect various media types, such as shielded and unshielded twisted pair and single-mode and multimode optical fibre, while using the same MAC controller.
The physical layer beneath the GMII is divided into three sublayers: PCS, PMA and PMD.
Carrier Sense and Collision Detect indications are generated by this sublayer. It also manages the auto-negotiation process, by which the NIC (Network Interface Card) communicates with the network to determine the network speed (10, 100 or 1000 Mbps) and mode of operation (half-duplex or full-duplex).
Ethernet today supports full-duplex operation at both the physical layer and the MAC layer. However, it still supports half-duplex operation to maintain compatibility. A new device has been proposed that provides hub functionality with a full-duplex mode of operation. It goes by various names, such as Buffered Distributor, Full-Duplex Repeater and Buffered Repeater. The term "Buffered Distributor" is used for all these devices in the following discussion.
The basic principle is that CSMA/CD is used as the access method to the network and not to the link. A Buffered Distributor is a multi-port repeater with full-duplex links.
Each port has an input FIFO queue and an output FIFO queue. A frame arriving at an input queue is forwarded to all output queues except the one on the incoming port. Within the distributor, CSMA/CD arbitration is done to forward the frames to output queues.
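The per-port queueing and forwarding just described can be sketched as a toy model. The class name and method names here are illustrative, and the internal CSMA/CD arbitration is simplified to a straightforward drain of the input queues:

```python
from collections import deque

class BufferedDistributor:
    """Toy model of a Buffered Distributor: a multi-port repeater with
    an input and an output FIFO per full-duplex port."""

    def __init__(self, n_ports):
        self.inputs = [deque() for _ in range(n_ports)]
        self.outputs = [deque() for _ in range(n_ports)]

    def receive(self, port, frame):
        """A frame arrives on a port's full-duplex link."""
        self.inputs[port].append(frame)

    def forward(self):
        """Internal forwarding pass: copy each queued frame to the
        output queue of every port except the one it arrived on.
        (The real device arbitrates internally with CSMA/CD.)"""
        for src, q in enumerate(self.inputs):
            while q:
                frame = q.popleft()
                for dst, out in enumerate(self.outputs):
                    if dst != src:
                        out.append(frame)
```

Note that, unlike a switch, the distributor does no address learning: every frame is repeated to all other ports, which is what keeps the device cheap.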
Since collisions can no longer occur on the links, the CSMA/CD distance restrictions no longer apply. The only restriction on cabling distance comes from the characteristics of the physical medium, not from the CSMA/CD protocol.
Since the sender could otherwise flood the FIFO, frame-based flow control is used between the port and the sending station. This is defined in the 802.3x standard and is already used in Ethernet switches.
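The 802.3x mechanism works by sending a MAC Control PAUSE frame to the reserved multicast address, asking the peer to stop transmitting for a given number of pause quanta (one quantum is 512 bit times). A sketch of building such a frame, omitting the trailing FCS for simplicity:

```python
PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved multicast address
MAC_CONTROL_ETHERTYPE = 0x8808              # MAC Control frame type
PAUSE_OPCODE = 0x0001

def pause_frame(src_mac, quanta):
    """Build an 802.3x PAUSE frame (without the trailing FCS).
    `quanta` asks the peer to stop sending for that many pause quanta;
    one quantum is 512 bit times."""
    frame = (PAUSE_DST
             + src_mac                              # 6-byte source MAC
             + MAC_CONTROL_ETHERTYPE.to_bytes(2, "big")
             + PAUSE_OPCODE.to_bytes(2, "big")
             + quanta.to_bytes(2, "big"))
    return frame + bytes(60 - len(frame))           # pad to minimum size
```

A port whose FIFO is filling can send this with a large `quanta` value and later send another with `quanta = 0` to resume the sender immediately.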
The motivation behind the development of the Buffered Distributor is its cost compared to a Gigabit switch, not a need to accommodate half-duplex media. The Buffered Distributor provides full-duplex connectivity, just like a switch, yet it is not as expensive, because it is just an extension of a repeater.
Essentially, four types of hardware are needed to upgrade an existing Ethernet/Fast Ethernet network to Gigabit Ethernet:
When ATM (Asynchronous Transfer Mode) was introduced, it offered 155 Mbps of bandwidth, about 1.5 times that of Fast Ethernet. ATM was ideal for new applications demanding a lot of bandwidth, especially multimedia. Demand for ATM continues to grow for LANs as well as WANs.
On the one hand, proponents of ATM try to emulate Ethernet networks via LANE (LAN Emulation) and IPOA (IP over ATM). On the other, proponents of Ethernet/IP try to provide ATM functionality with RSVP (Resource Reservation Protocol) and RTSP (Real Time Streaming Protocol). Evidently, both technologies have desirable features and advantages over the other. It appears that these seemingly divergent technologies are actually converging.
ATM was touted as the seamless and scalable networking solution, to be used in LANs, backbones and WANs alike. However, that did not happen. And Ethernet, which was for a long time restricted to LANs alone, evolved into a scalable technology.
As Gigabit Ethernet products enter the market, both sides are gearing up for the battle. Currently, most installed workstations and personal computers do not have the capacity to use these high bandwidth networks. So, the imminent battle is for the backbones, the network connections between switches and servers in a large network.
Gigabit Ethernet seems poised to succeed. It is backed by the industry in the form of the Gigabit Ethernet Alliance, and standardization is currently on schedule. Pre-standard products claiming interoperability with standardized products have already hit the market. Many Fast Ethernet pre-standard products were interoperable with the standard, so it is expected that most pre-standard Gigabit Ethernet products will also be compatible. This is plausible because many of the companies that have come out with products are also actively participating in the standardization process.
ATM still has some advantages over Gigabit Ethernet:
Gigabit Ethernet has its own strengths:
Vijay Moorthy, Aug 14, 1997