Computers and their attachments (like networks and disks) are getting faster every day. Current CPU speeds of processors like the DEC Alpha and the Pentium are well over 100 MHz, allowing them to perform a billion instructions per second (BIPS), a speed comparable to that of supercomputers five years ago. With this growing speed, the applications that run on computers now range from interactive graphics and voice recognition to video conferencing and real-time animation. All these new applications will use networks to carry more data.
Network bandwidth is increasing along with CPU speeds. Where in the 1980s 10 Mb/s Ethernet was considered fast, we now have 100 Mb/s Ethernet. Bandwidth is approaching 1 billion bits per second (1 Gb/s), largely due to research in fiber-optic signalling.
The three fields of data communications, computing and telecommunications are all undergoing a period of transition. Computing is advancing rapidly, with processor speeds doubling every year. RAID (Redundant Arrays of Inexpensive Disks) technology has given rise to file systems with gigabit bandwidth.
The field of data communications, which facilitates the exchange of data between computing systems, has to keep pace with these advances in computing. In the past data communications provided services like e-mail; now applications such as virtual reality, video conferencing and video on demand are emerging.
For a century the telecommunications industry has carried voice traffic. This is changing, with telephone networks carrying more data each year: data traffic on telephone networks is growing at 20% per year, compared with only 3% per year for voice. Soon data traffic will overtake voice traffic. All this has made the telecommunications industry much more interested in carrying data on its networks.
The three communities are thus converging on a common interest in carrying more data at higher speeds, and this has led to several joint activities. The most notable is the setting up of gigabit testbeds in the United States. Another is the standardization of ATM (Asynchronous Transfer Mode), a suite of communication protocols to support integrated voice, video and data networks. Organizations doing research in gigabit networking include the National Coordination Office for HPCC (High Performance Computing and Communications), the Corporation for National Research Initiatives, and the IEEE Communications Society Technical Committee on Gigabit Networking.
When gigabit networking was on the horizon, many researchers felt that current knowledge about networking would not apply to networks so much faster than existing ones. Now, after several years of research, it has been found that many existing strategies and techniques (such as protocol layering) still work in gigabit networks.
There are many working gigabit testbeds (see Appendix A for a full list), and in five to ten years gigabit networks will become a reality. It is still unclear whether there will be a single gigabit technology with one standard protocol. More likely there will be many competing gigabit networking technologies (as there are many LAN technologies) and many protocols, with one of them eventually becoming the most popular (as IP has).
The next section deals with the key concepts and technologies in gigabit networking. The third section deals with more specific issues, the fourth discusses various potential gigabit applications, and the last section surveys the current state of gigabit networking. Appendix A gives a list of gigabit testbeds. Appendix B is an annotated bibliography of the web sites, articles, papers and books referred to in this paper.
Another important trend in gigabit networking is the increasing interest in a technology now widely known as cell networking, cell switching or cell relay. The following subsections give a brief overview of these concepts and technologies.
When light passes from one medium to another, part of it is reflected and the rest is refracted. Light has the interesting property that if the angle of incidence is greater than a critical angle, all of the light is reflected. Fiber optics uses this property of light to carry signals.
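As a rough illustration, the critical angle follows from Snell's law: total internal reflection occurs once the sine of the angle of incidence exceeds the ratio of the cladding's refractive index to the core's. The sketch below uses typical textbook values for a silica fiber (about 1.48 for the core, 1.46 for the cladding); these numbers are assumptions, not figures from this paper.

```python
import math

def critical_angle(n_core, n_cladding):
    """Smallest angle of incidence (measured from the normal) at which
    light is totally reflected at the core/cladding boundary."""
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative refractive indices for a silica fiber (assumed values).
print(critical_angle(n_core=1.48, n_cladding=1.46))  # roughly 80.6 degrees
```

Light launched at angles shallower than this (closer to the fiber axis) stays trapped in the core.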
[Figure: Center of a Piece of Fiber (courtesy "Gigabit Networking", Craig Partridge)]
The fiber consists of a thin strand of glass called the core, surrounded by a thicker outer layer called the cladding. Light sent into the core at the appropriate angle travels along the core; any light trying to escape is reflected back into the core. Pulses of light carry the bits. One important point is that the bits do not travel faster than in copper wire (the propagation delay is similar); the higher bandwidth comes from the fact that bits can be packed about 1000 times more densely than in copper. Theoretically a fiber has a bandwidth of about 25 THz around each of the wavelengths 0.85, 1.3 and 1.5 microns (these give rise to bands similar to those of radio waves). So the total capacity of a single fiber is on the order of 75 terabits per second!
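The 75 Tb/s figure is just the following arithmetic, under the optimistic assumption of roughly one bit per second per hertz of usable bandwidth:

```python
bands = 3                    # windows around 0.85, 1.3 and 1.5 microns
bandwidth_per_band = 25e12   # about 25 THz of usable bandwidth per window
bits_per_hz = 1              # assumed spectral efficiency (1 bit/s per Hz)

capacity = bands * bandwidth_per_band * bits_per_hz
print(capacity / 1e12, "Tb/s")   # 75.0
```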
Fiber optics, just like copper wire, has problems in signalling. There are three major types of dispersion: modal, chromatic and material. Repeaters and amplifiers, which strengthen the signal, are used to overcome these dispersion effects. Single-mode (also called monomode) fiber is used to avoid dispersion; multimode fiber still suffers from modal dispersion.
Transmitters and receivers are generic terms for devices attached to a fiber to transmit and receive signals, respectively. They come in two important varieties: fixed and tunable. Fixed devices are set to a particular wavelength, while tunable devices can dynamically set the lightwave frequency at which they transmit or receive. Fixed devices are simple; tunable ones are more complicated. One example of a tunable device is the Mach-Zehnder interferometer, in which the light is split along two paths, one made slightly longer than the other, so that the two halves recombine out of phase; this phase difference is used to tune to a particular wavelength.
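A minimal sketch of the interference arithmetic behind this tuning, assuming illustrative values (a 40-micron path difference, refractive index 1.5, wavelengths around 1550 nm); none of these numbers describe a real device. With path-length difference ΔL the two arms recombine with phase difference 2πnΔL/λ, and the fraction of power at the output port is the cosine-squared of half that phase, so some wavelengths pass while neighbouring ones are suppressed.

```python
import math

def transmission(wavelength_nm, delta_l_nm, n=1.5):
    """Fraction of input power at the constructive output port of a
    Mach-Zehnder interferometer with path-length difference delta_l_nm."""
    phase = 2 * math.pi * n * delta_l_nm / wavelength_nm
    return math.cos(phase / 2) ** 2

# A fixed path difference passes some wavelengths and blocks others.
for wl in (1540, 1545, 1550, 1555, 1560):        # wavelengths in nanometres
    print(wl, round(transmission(wl, delta_l_nm=40_000), 3))
```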
In contrast to SONET, WDM networks exploit the special properties of optical fiber. In a WDM network the bandwidth of the fiber is divided into multiple channels, and hosts communicate with each other on particular channels. There are two major types of WDM networks: single-hop and multihop. As the name suggests, in a single-hop WDM network the hosts are directly connected to each other via a star coupler. Single-hop networks can be further classified by whether they use fixed or tunable transmitters and receivers. Two examples of single-hop WDM networks are LAMBDANET, a project of Bell Communications Research, and RAINBOW, a project of IBM. Multihop WDM networks can be designed in many ways; the main goal is to build a graph of high connectivity so that the number of hops between any two nodes is minimized (see the sketch below). An example of a multihop WDM network is TeraNet, built as part of the ACRON project at Columbia University.
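The design goal for a multihop topology (keeping hop counts small between all pairs of nodes) can be checked with a simple breadth-first search. The eight-node topology below follows a ShuffleNet-like connection pattern and is purely illustrative; it is not TeraNet's actual layout.

```python
from collections import deque

# Illustrative 8-node multihop WDM topology: each node has two fixed
# outgoing channels (a ShuffleNet-style perfect-shuffle pattern).
links = {
    0: [4, 5], 1: [6, 7], 2: [4, 5], 3: [6, 7],
    4: [0, 1], 5: [2, 3], 6: [0, 1], 7: [2, 3],
}

def hops(src, dst):
    """Minimum number of hops from src to dst (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in links[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

pairs = [(s, d) for s in links for d in links if s != d]
print(sum(hops(s, d) for s, d in pairs) / len(pairs))  # average hop count
```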
The central idea of cell networking is that all data should be transmitted in fixed-size packets called cells. Networks usually send data in packets that vary in size, and when packets vary in size it is difficult to guarantee the bounded delays required by isochronous traffic. See the figure below.
(Figure courtesy of "Gigabit Networking", Craig Partridge.)
The waiting time is large when a small packet of isochronous traffic (e.g. voice) is queued behind a large packet. In cell networking, by contrast, the large packet is divided into small cells, so the waiting time stays small and it becomes possible to give delay guarantees. A rough comparison is sketched below.
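A back-of-the-envelope comparison of the two worst cases (the 155 Mb/s link rate and the 64 KB packet size are assumptions chosen for illustration; 53 bytes is the standard ATM cell size):

```python
LINK_RATE = 155e6      # assumed link speed, bits per second

def send_time(size_bytes, rate=LINK_RATE):
    """Time to serialize one packet or cell onto the link, in seconds."""
    return size_bytes * 8 / rate

# Worst case for a small voice packet that arrives just after
# transmission of the packet ahead of it has started:
print(send_time(64 * 1024) * 1e3, "ms waiting behind a 64 KB packet")   # ~3.4
print(send_time(53) * 1e6, "us waiting behind a single 53-byte cell")   # ~2.7
```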
There are two ways of switching cells: store-and-forward and cut-through. In store-and-forward switching the whole cell is received before being forwarded on the appropriate link. In cut-through switching the switch decides which link to forward the cell on after examining only a few bytes of the cell header; a rough latency comparison is sketched below. Asynchronous Transfer Mode (ATM) is the most prominent cell networking technology, and the telecommunications community has put a great deal of effort into standardizing it. There is also the ATM Forum, a group that settles the main ATM issues without waiting for the official standards to come out. Currently ATM provides bandwidths from 45 Mb/s to 622 Mb/s, but in the future ATM will also provide gigabits of bandwidth.
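A rough sketch of the per-hop difference between the two switching styles, using the 53-byte ATM cell with its 5-byte header and an illustrative 622 Mb/s link and 10-switch path (the link speed and hop count are assumptions). A store-and-forward switch must wait for the whole cell before forwarding it, while a cut-through switch can start forwarding after reading the header.

```python
CELL_BYTES, HEADER_BYTES = 53, 5   # ATM cell and header sizes
RATE = 622e6                       # assumed link speed, bits per second
HOPS = 10                          # assumed number of switches on the path

store_and_forward = HOPS * CELL_BYTES * 8 / RATE    # whole cell held per hop
cut_through = HOPS * HEADER_BYTES * 8 / RATE        # only the header per hop

print(round(store_and_forward * 1e6, 2), "us added by store-and-forward")
print(round(cut_through * 1e6, 2), "us added by cut-through")
```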
Most local area cell networks share the medium. Since the medium is shared, the question of who gets access to it must be settled; local area cell networks solve this problem by arbitrating the right to send a cell. Most of them are ring networks. We now look at some examples of local area cell networks.
Switch design is a key issue in wide area cell networking. Because they operate at gigabit speeds, the switches have to switch very large bandwidths reliably; usually they are built from parallel interconnection devices. A fundamental problem in switching is blocking, which happens when two cells contend for the same link. Another important issue is buffer management: various strategies such as input buffering, output buffering and internal buffering are used. We shall look briefly at some of the wide area cell switches.
The main problem in wide area packet networks is the routers, which have to route packets at gigabit rates. Forwarding can be done with a very small number of instructions (100-200), because to forward a packet the router only examines the header, makes some consistency checks and sends the packet to the appropriate port. Using multi-function instructions, the CRC can be computed while the packet is being copied. Various improved hashing and lookup-table techniques have also been developed and are used in these routers (a toy lookup is sketched below). We shall see two high speed routers.
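A toy sketch of the lookup-table part of forwarding, using a hash table keyed by destination network (Python dictionaries are hash tables, so the lookup is constant time on average). The table entries and port numbers are made up for illustration; a real router would also verify the checksum, decrement the hop count and handle routing-protocol updates.

```python
# Hypothetical forwarding table: destination network -> output port number.
FORWARDING_TABLE = {
    "128.97.0.0":  1,   # illustrative entries only
    "192.12.33.0": 2,
    "18.0.0.0":    3,
}

def forward(destination_network, default_port=0):
    """Hashed lookup of the output port for a packet header."""
    return FORWARDING_TABLE.get(destination_network, default_port)

print(forward("192.12.33.0"))   # -> 2
print(forward("10.0.0.0"))      # -> 0 (default route)
```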
The following are the lessons learnt from these and other Gigabit Testbed initiatives.
Most of today's data network applications are not very sensitive to delay or to variations in bandwidth; it does not matter very much if your files take a little longer to travel across the net. In the telecommunications (telephone) industry, however, applications are delay sensitive. When humans speak they normally pause between sentences, and if the pause grows long the other speaker starts talking. If network delay makes the pause seem large, both speakers may speak at the same time, leading to confusion, so delay must be bounded in telephony. Other applications like X Windows and remote login will simply be faster if gigabit networks are used.
Any application that needs low response time or high bandwidth is a candidate gigabit application. The recent advent of gigabit networks has given rise to many new applications. One of them is IVOD (Interactive Video on Demand), in which consumers order whichever program they want to see and the programs are sent to them from a central server. Since video applications use a large bandwidth and different viewers may want to see different programs at the same time, this application will benefit from gigabit networking. Though compression methods like MPEG may be used, every once in a while the data for a full screen has to be sent; this can be done using gigabit networking technology.
Highly computation-intensive problems can be broken into smaller problems and given to separate computers, with high-bandwidth networks connecting them to interchange data. For example, researchers at UCLA are experimenting with simulation studies of atmosphere and ocean interactions: one supercomputer (a CM-2) simulates the ocean and another simulates the atmosphere, and the two exchange huge amounts of data so that the interactions can be studied. Typically 5 to 10 Mb of data is exchanged per cycle; this takes about a second on a 10 Mb/s Ethernet but only about 100 ms in a gigabit networking environment.
Another class of applications is those with real-time human interaction; a typical example is video conferencing. Humans are capable of absorbing large amounts of visual data and are very sensitive to its quality. Yet another class is virtual reality applications, which give the user the illusion of being somewhere else. Interesting experiments have been done at NASA, where a system was developed with which geologists can interact with the surface of Mars: they study the surface by touching it (virtually), viewing the 3D scene from different angles, and so on. All of this requires a large amount of bandwidth, and gigabit networking helps provide it.
One of the main differences between traditional data-communication applications and interactive ones is that the latter have timing requirements on the spacing between samples. Recently there have been some innovative experiments with adaptive applications, which change their behaviour dynamically and require only loose performance guarantees from the network. An example of an adaptive application is the vat voice-conferencing system developed by Van Jacobson. vat is like a telephone, but uses computers and the Internet to connect two people. vat avoids the need for isochronous delivery by keeping a large buffer and timestamping all the data it receives; data that arrives early is buffered appropriately and then played out. The play-out delay is called the playback point (a rough sketch of the idea follows). Adaptive applications could thus have a major impact on network design in the future. vat has also demonstrated that slow networks like the Internet can support real-time applications given enough buffering.
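A minimal sketch of the playback-point idea described above, not of vat's actual code: each sample carries a timestamp, and play-out is delayed by a fixed playback point so that samples delayed by the network still arrive before they are needed. The 200 ms playback point and the timestamps are illustrative assumptions.

```python
import heapq

class PlayoutBuffer:
    """Toy playback buffer: hold timestamped samples and release each one
    only after (timestamp + playback_point), absorbing network delay jitter."""

    def __init__(self, playback_point):
        self.playback_point = playback_point
        self.pending = []                       # heap of (timestamp, sample)

    def receive(self, timestamp, sample):
        heapq.heappush(self.pending, (timestamp, sample))

    def ready(self, now):
        """Return samples whose play-out time has arrived, in timestamp order."""
        out = []
        while self.pending and self.pending[0][0] + self.playback_point <= now:
            out.append(heapq.heappop(self.pending)[1])
        return out

buf = PlayoutBuffer(playback_point=0.20)   # assumed 200 ms playback point
buf.receive(1.00, "sample-1")              # generated at t = 1.00 s
buf.receive(1.02, "sample-2")              # generated at t = 1.02 s
print(buf.ready(now=1.25))                 # -> ['sample-1', 'sample-2']
```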
The first steps towards gigabit networking have been taken by exploring the capabilities of various media that can support gigabits of bandwidth. A considerable amount of research is being done in the various gigabit testbeds, but the field still has a long way to go before it matures. One of the main issues is which protocols gigabit networks should run. It is interesting to note that traditional protocols like IP and TCP, with various modifications and extensions, have been shown to support gigabit bandwidths; whether we should stick with them or not is a question yet to be answered. Most of these protocols have not been verified because they have a huge number of states, and protocol verification is an important issue for gigabit networking, since at such high speeds something can go wrong very fast. Network management is another largely unexplored issue: it is unclear whether existing network management techniques will still work at gigabit speeds. Encryption of data takes time, so fast encryption algorithms (like DES, the Data Encryption Standard) should be developed if authentication is to be supported by gigabit networks.
In most operating systems the network interface is not well integrated, so there is a lot of redundant copying of data. Operating systems therefore face the challenge of changing to support better network interfaces, which will become critical in supporting real-time applications. Some work has already been done in this area, helped by improvements in processor speed and better scheduling and caching techniques.
Traffic modeling is another area where work has to be done. It has recently been shown that Ethernet traffic is self-similar (fractal) in nature. It remains to be verified whether this holds for gigabit networks, or whether new models for traffic analysis are needed; a small sketch of one common self-similarity check is given below.
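One common check for self-similarity is the aggregated-variance method: average the traffic counts over blocks of increasing size m and observe how the variance decays. For self-similar traffic it falls off like m^(2H-2) with Hurst parameter H > 0.5, while for Poisson-like traffic it falls like 1/m (H = 0.5). The sketch below exercises the method on synthetic Poisson counts purely for illustration; a real study would use measured traces.

```python
import numpy as np

def aggregated_variances(counts, block_sizes):
    """Variance of the block-averaged series at each aggregation level m."""
    out = {}
    for m in block_sizes:
        n = len(counts) // m
        blocks = counts[:n * m].reshape(n, m).mean(axis=1)
        out[m] = blocks.var()
    return out

# Synthetic Poisson arrival counts (not self-similar), just to show the method.
counts = np.random.poisson(lam=100, size=100_000).astype(float)
sizes = [1, 10, 100, 1000]
variances = aggregated_variances(counts, sizes)

slope = np.polyfit(np.log(sizes), np.log([variances[m] for m in sizes]), 1)[0]
print("estimated Hurst parameter:", 1 + slope / 2)   # close to 0.5 for Poisson
```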
Will the networking world stop progressing after gigabit networks, or will terabit networks be developed? It is already known that the potential of a single fiber is on the order of terabits. We can envisage terabit-speed networks in which parallel arrays of high-speed computers are connected. But will there be terabit applications to run on them? As usually happens, applications tend to be found once the technology has been invented, so we can expect to be working with terabit networks in the future.
1. Tokizawa, Ikuo; Kikuchi, Katsuaki, "ATM link system technologies - super high-speed information highway for future multimedia communication services", NTT Review, vol 6, no 1, Jan 1994, pp 49-55.
2. Cheng, Wood-Hi; Bechtel, James H., "Gigabit fiber optic link using compact disc lasers", The International Society for Optical Engineering, vol 2148, 1994, pp 210-215.
3. Dumortier, Philip; Van Hauweermerien, Luc; Boerjan, Joost, "Transport of gigabit ATM cell streams over lower order SDH backbone", Proc. IEEE INFOCOM, vol 3, 1994, IEEE, Piscataway, NJ, pp 1160-1169.
4. Watson, Greg; Banks, David; Calamvokis, Costas; Dalton, Chris, "AAL5 at a Gigabit for a Kilobuck", Journal of High Speed Networks, vol 3, no 2, 1994, pp 127-145.
5. Bohm, Christer; Lindgren, Per, "DTM Gigabit Network", Journal of High Speed Networks, vol 3, no 2, 1994, pp 109-126.
6. Morrison, John, "Gigabit Network Applications at Los Alamos National Lab.", Conference on Optical Fiber Communication, Technical Digest Series, vol 4, 1994, pp 64-65.
7. Antonio, John K., "Concurrent communication in high-speed wide area networks", IEEE Transactions on Parallel and Distributed Systems, vol 5, no 3, Mar 1994, pp 264-273.
This is a theoretical paper. A parameter called receptivity is formally defined as a measure of the amount of concurrent communication supported by the source-destination pairs in a high-speed network. The simulation results agree well with the theoretical predictions, and easily computable approximations for receptivity are given.
8. Zurfluh, E. A.; Cideciyan, R. D., "IBM Zurich Research Laboratory's 1.13 Gb/s LAN/MAN prototype", Computer Networks and ISDN Systems, vol 26, no 2, Oct 1993, pp 163-183.
9. Joseph D. Touch, "Defining High-Speed Protocols: Five Challenges and an Example That Survives the Challenges", USC/Information Sciences Institute, August 1994.
In this paper the author demonstrates how the WWW service meets the five challenges of high-speed networks and also fits the defining characteristics of an application requiring high-speed bandwidth.
10. Ahmed Tantawy, Odysseas Koufopavlou, Martina Zitterbart and Joseph Abler, "On the Design of a MultiGigabit IP Router", Journal of High Speed Networks, vol 3, 1994, pp 209-232.
Routers will be the main bottleneck in high-speed computing environments. This paper shows that fast IP routers can be built by identifying the time-critical points in the data flow and developing special modules to process them.
11. I. Chlamtac, A. Fumagalli, L. G. Kazovsky and P. T. Poggiolini, "A Contention/Collision Free WDM Ring Network for Multi Gigabit Packet Switched Communication", Journal of High Speed Networks, vol 4, 1994, pp 201-219.
12. Abhaya Asthana, Catherine Delph, H. V. Jagadish and Paul Krzyzanowski, "Towards A Gigabit IP Router", Journal of High Speed Networks, vol 1, 1992, pp 281-288.
13. K. Y. Eng, et al., "A Prototype Growable 2.5 Gb/s ATM Switch for Broadband Applications", Journal of High Speed Networks, vol 1, 1992, pp 237-253.
14. James P. G. Sterbenz and Gurudatta M. Parulkar, "Design of a Gigabit Host-Network Interface", Journal of High Speed Networks, vol 2, 1993, pp 27-62.
15. Jean-Paul Nussbaumer, et al., "Networking Requirements for Interactive Video on Demand", Gigabit Networks Workshop, 1994.
In this paper the authors propose different cost-measuring parameters for estimating the cost of providing an interactive video on demand service. Various caching and tree-network configurations are considered for providing this service.
16. Ahmed E. Kamal, Bandula W. Abeysundara, "A Survey of MAC Protocols for High-Speed LANs", High Performance Networks, 1994.
The authors present an extensive survey of MAC protocols for high-speed LANs. The different LAN technologies are classified according to timing, topology and access mode, and studied with a view to supporting gigabit speeds.
Bay Area Gigabit Testbed homepage. Gives a lot of details about the testbed; there is also a map showing the testbed connectivity.
2. Gigabit Networking, http://www.yahoo.com/Computers_and_Internet/Communications_and_Networking/Gigabit_Networking/
A very useful starting point for other references on gigabit networking.
3. High-Speed Networking, http://www.yahoo.com/Computers_and_Internet/Communications_and_Networking/High_Speed_Networking/
Another useful starting point for pointers on gigabit networking.
4. High Performance Networks and Distributed Systems Archive, http://hill.lut.ac.uk/DS-Archive/index.html
A useful starting point on gigabit networking, which leads to pointers to various gigabit testbeds.
Books
Craig Partridge, "Gigabit Networking", Addison-Wesley, 1993.
This is the first book on gigabit networking. The author has been at the forefront of research in the area. The book gives a comprehensive overview of gigabit networking research; it is very readable and not very mathematical.
Organisations