Abstract
The paper begins with a discussion of current trends in networking and a historical review of past networking technologies, some of which failed. This leads to a discussion of what it takes for a new technology to succeed and what challenges we face in making the current dream of a seamless, world-wide, high-speed ATM network a reality.
Issues in using ATM cells for very high-speed applications are presented. Ensuring that users benefit from ATM networks involves several other related disciplines, which are also reviewed.
There are several other reasons why communications and networking are becoming critical. First, users have been moving away from the computer. In the sixties, computer users went to computer rooms to use them. In the seventies, they moved to terminal rooms away from the computer rooms. In the eighties, users moved to their desktops. In the nineties, they are mobile and can be anywhere. This distance between users and computers has led to a natural need for communication between the user and the computer, or between the user interface device (which may be a portable computer) and the servers.
Second, the system extent has been growing continuously. Up until the eighties, a computer system consisted of a single node confined within about 10 meters. In the nineties, systems consist of hundreds of nodes spread over a campus. This increasing extent leads to an increasing need for communication.
In the last ten years, we have seen increasing personalization of computing resources. We moved from timesharing to personal computing. Now we need ways to work together with other users, so in the next ten years the emphasis will be on cooperative computing. This will further increase the need for communication.
In the last decade, we were busy developing corporate networks and campus networks. In the next decade, we need to develop intercorporate networks, national information infrastructures, and international information infrastructures. All these developments will lead to more growth in the field of networking and more demand for personnel with networking skills.
The increasing role of communications in computing has led to the merger of the telecommunications and computing industries. The line between voice and data communications is fading away. Data communication is expected to overtake voice communication in volume, as discussed in Section 6.4.
After some of the key problems have been solved, many other problems can be solved with relatively little money. At this stage, the curve takes an upturn: the revenue to be made from the technology far exceeds the investment. It is at this stage that industry takes over technology development. Numerous small companies are formed and quickly grow into large corporations.
Finally, when all the easy problems have been solved, the remaining problems are hard and would require a lot of resources. At this stage, the researchers usually move on to some other technology and a new S-shaped curve is born.
The computing industry in general, and the networking sector in particular, is currently going through the fast-growing middle region of the technology life-cycle curve. The number of problems solved is indicated by the deployment of the technology. In the case of networks, one can plot the number of hosts on the network, bytes per host, the number of networks on the Internet, or the total capacity (in MIPS), total memory, or total disk space of the hosts on the network. In each case, one would see a sudden exponential upturn in the last few years.
Figure 2 shows the famous Internet growth curve. The figure shows the number of hosts on the Internet over the last 20 years. The data before July 1988, although plotted, are hardly visible. Since 1988, growth has defied all predictions.
The standardization requires a change in the way business is done. Before standardization, a majority of the market is vertical. The only way for users to maintain compatibility is to buy the complete system from one manufacturer. System vendors make more money than component vendors. IBM, DEC, and Sun Microsystems are examples of such system vendors. After standardization, the business situation changes. Users can and do buy components from different vendors. The market becomes horizontal. Companies specializing in specific components and fields take prominence. Intel for processors, Microsoft for operating systems, Novell for networking are examples of this trend.
To survive in this post-standardization era, invention alone is not sufficient. Only those new ideas that are backed by a number of vendors become standardized and are adopted. It thus becomes necessary to form technology partnerships.
In the early 80s, when Ethernet was being introduced, some argued that broadband Ethernet, which allows voice, video, and data to share a single cable, would be more popular than baseband Ethernet. As we all know, today there are only a few broadband installations; most Ethernet installations are baseband. The cost of combining the three services was simply too high: the analog circuits required for frequency multiplexing were not as reliable or economical as digital circuits with separate wiring.
Around the same time, when computer companies were trying to sell Ethernet, PBX manufacturers were presenting the PBX as the better alternative, again because it was already there and could handle voice as well as data. However, the PBX was not accepted by customers simply because it did not provide enough bandwidth.
The Integrated Services Digital Network (ISDN) was standardized in 1984 and looked very promising then. However, its deployment has been much too slow. Even after ten years, it is not possible to get an ISDN connection in most places, and even where it is available, the 64 kbps bandwidth it provides is not sufficient for most data applications. For low-bandwidth applications, modems on analog lines provide a better alternative. Modem technology has advanced far beyond expectations: today, one can get 28.8 kbps and 56 kbps modems that work over the all-pervasive analog lines and do not incur the monthly charges associated with an extra ISDN line.
In 1986, IEEE 802.4 (token bus) was touted as a better alternative to IEEE 802.3/Ethernet for real-time environments. It was said that Ethernet could not provide the delay guarantees required in manufacturing and industrial environments, and the Manufacturing Automation Protocol (MAP/TOP) was seen as the right solution. Today, IEEE 802.3/Ethernet is used in all such environments, and token buses are practically nonexistent.
Up until 1988, the ISO/OSI protocol stack was seen as the leading contender for networking everywhere. Networking researchers in most countries were implementing ISO/OSI protocols, and the United States Government Open Systems Interconnection Profile (GOSIP) even made ISO/OSI a mandatory requirement for government purchases. Today, the TCP/IP protocol stack dominates instead. The OSI protocols suffered from a common problem of standards: too many features. Any feature required by any application in the world had to be supported by the standard. The protocols took too long to standardize and were quite complex. The ``build before you standardize'' philosophy of the TCP/IP protocol stack helped in its success.
Up until 1991, IEEE 802.6 Distributed Queue Dual Bus (DQDB) was seen as a promising candidate for metropolitan area networks. It is no longer considered viable. Its unfairness problem and the general problems of bus architectures have made it undesirable.
The Xpress Transfer Protocol (XTP) was designed as the high-performance alternative to TCP/IP. Protocol Engines, the company leading the design of XTP, declared bankruptcy in 1992.
Year | Failure            | Success
-----|--------------------|-------------------
1980 | Broadband Ethernet | Baseband Ethernet
1981 | PBX                | Ethernet
1984 | ISDN               | Modems
1988 | OSI                | TCP/IP
1991 | DQDB               |
1992 | XTP                | TCP
In summary, ATM technology has better scalability and lower delay variance than current packet switching technology.
Today, there is a diseconomy of scale: higher speed networks cost more per bit than lower speed networks. Ten Mbps Ethernet cards can be had for $50, but 100 Mbps FDDI cards cost closer to $1000. This diseconomy of scale has a significant impact on user adoption. We have seen this happen in other areas of computing: today, ten 100-MIPS computers cost much less than one 1000-MIPS computer, and therefore we see more distributed computing than supercomputing. Applying the same logic, it appears that unless there is economy of scale, people may divide their networking applications among multiple low-speed links rather than one high-speed link. Of course, there are a few applications that will not work at speeds in the range of 10 Mbps. For these, users have no choice but to use higher speed links.
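As a minimal back-of-the-envelope sketch of this diseconomy, the following snippet uses only the adapter prices quoted above; different prices would of course shift the conclusion.

```python
# Per-Mbps adapter cost, using the prices quoted in the text.
adapters = {
    "10 Mbps Ethernet": (10, 50),      # (bandwidth in Mbps, card price in $)
    "100 Mbps FDDI":    (100, 1000),
}

for name, (mbps, price) in adapters.items():
    print(f"{name}: ${price / mbps:.0f} per Mbps")

# 10 Mbps Ethernet: $5 per Mbps
# 100 Mbps FDDI: $10 per Mbps
# The faster link costs twice as much per bit -- a diseconomy of scale.
```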
This diseconomy of scale affects all high-speed technologies, including ATM. However, ATM faces a bigger uphill battle because of its newness. In a recent ATM Forum user survey conducted by Dr. John McQuillan, users were asked which 100 Mbps network they would buy, given the same price: ATM or 100 Mbps Ethernet. The answer was Ethernet, because it is something users feel very comfortable with. We ATM designers will have to work hard to get ATM equipment prices below those of 100 Mbps Ethernet to gain acceptance.
The next (fourth) layer consists of the processor designers. Unless designed carefully, a processor may not be able to keep up with high-speed LAN adapters. The fifth layer, the operating system, reduces the performance further. The sixth layer is that of the network protocols, some of which are not able to cope with high-speed links. Finally, the top (seventh) layer is the application, which sees a usable bandwidth of only a few megabits per second.
Unless the higher layers are improved, changing the lower layers will not result in any higher performance for the user. Today's problem is not so much in the networking protocols as it is in proper I/O designs for processors and operating systems. We can provide the user with 155 Mbps or 622 Mbps links, but they will not be able to use them unless operating system and processor designs are improved accordingly. The only exception is the backbone, where specialized hardware and software are used. The backbone components (switches, routers, or bridges) are designed specifically for high communication speeds. Thus, the first place where high speed will be used is the backbone. The desktop market will have to wait for better operating systems and processors.
Another lesson to learn from this layered model is that for high-speed networks to become a reality, all seven layers have to be improved. Bad performance in even one layer can delay the introduction of high-speed networking.
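A minimal sketch of this bottleneck argument follows. The per-layer throughput figures are illustrative assumptions, not measurements; the point is only that the application sees the minimum across all layers.

```python
# Layered-bottleneck sketch: usable bandwidth is the minimum over all layers.
# All per-layer numbers below are hypothetical, for illustration only.
layer_throughput_mbps = {
    "fiber/link":         622,
    "LAN adapter":        400,
    "adapter-host bus":   250,
    "processor/memory":   150,
    "operating system":    80,
    "transport protocol":  60,
    "application":         40,
}

usable = min(layer_throughput_mbps.values())
print(f"Usable end-to-end bandwidth: {usable} Mbps")   # 40 Mbps
# Upgrading the link alone (say, to 2.4 Gbps) leaves 'usable' unchanged;
# every layer above it must improve before the user sees any difference.
```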
Next, let us consider the characteristics of video traffic. One hour of uncompressed HDTV requires 540 GB of storage. At today's storage price of $1/MB, this works out to approximately $150 per second of video. This is rather expensive, and only researchers funded by government grants can afford to store video at this price. If compressed, the storage requirement drops by a factor of 60 to 200, and the price becomes $2.50 to $0.75 per second, which is more reasonable. The conclusion is that most video will be kept in compressed form simply for storage reasons. Compression means that the bandwidth requirements vary, and therefore variable bit rate (VBR) service, rather than constant bit rate (CBR), is likely to be used more often.
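A quick check of this arithmetic, using only the figures quoted above ($1/MB storage, 540 GB per hour, compression ratios of 60 to 200):

```python
# Storage cost per second of HDTV video, from the figures in the text.
hdtv_hour_gb = 540          # uncompressed HDTV, one hour
price_per_mb = 1.0          # dollars per megabyte of storage
seconds      = 3600

cost_per_sec = hdtv_hour_gb * 1000 * price_per_mb / seconds
print(f"Uncompressed: ${cost_per_sec:.0f} per second of video")   # $150

for ratio in (60, 200):
    print(f"Compressed {ratio}:1 -> ${cost_per_sec / ratio:.2f} per second")
# Compressed 60:1 -> $2.50 per second
# Compressed 200:1 -> $0.75 per second
```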
Also, at high speeds, connection holding times become shorter. At 1 Gbps, it takes only about 10 seconds to transmit one hour of compressed VHS-quality video; it takes even less time at higher speeds. Thus, unless bandwidth is free, most users of high-speed links will start a connection and shut it down after 10-20 seconds. In other words, the traffic will be short-lived and bursty. This is closer to today's data traffic than to voice traffic. For ATM networks to succeed, they should be able to handle this bursty traffic efficiently.
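The holding-time estimate can be reproduced as follows. The 1.5 Mbps compressed rate is an assumption on our part (roughly VHS/MPEG-1 quality), so the result should be read as an order-of-magnitude figure consistent with the ``about 10 seconds'' above.

```python
# Time to ship one hour of compressed VHS-quality video over a fast link.
# The 1.5 Mbps compressed rate is an assumed figure, not from the text.
compressed_rate_mbps = 1.5
movie_seconds        = 3600
link_gbps            = 1.0

movie_megabits = compressed_rate_mbps * movie_seconds        # 5400 Mb
transfer_s     = movie_megabits / (link_gbps * 1000)
print(f"Transfer time at {link_gbps} Gbps: {transfer_s:.1f} s")   # ~5.4 s
# At 2.4 Gbps the same movie takes about 2.2 s: connections become
# short-lived and bursty rather than long-held like voice calls.
```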
In 1984, when the ATM cell size was being decided in CCITT, the focus was on 64 kbps voice. At that rate, a 32-byte cell takes 4 ms to fill. If larger cells were used, the time needed to collect the voice samples would become too large and would require echo cancellation. The Europeans therefore wanted 32-byte cells, while the US position was that the cells should be at least 64 bytes long. The payload of 48 bytes was chosen as the average of 32 and 64. In other words, the cell size was chosen not for high-speed applications but for 64-kbps voice applications. Several other design and implementation decisions for ATM networks were similarly made as if the network were being designed for voice. One example of this design philosophy is simply dropping cells on congestion. Requiring users to indicate which cells are less important via the cell loss priority (CLP) bit is another example. For voice, some cells can be dropped without significant impact. However, this is not true for data: every single bit is important, and all dropped bits have to be retransmitted.
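The cell-fill times behind this debate are easy to reproduce; only the payload sizes and the 64 kbps voice rate below come from the discussion above.

```python
# Time to fill one cell payload with 64 kbps voice samples.
voice_rate_bps = 64_000

for payload_bytes in (32, 48, 64):
    fill_ms = payload_bytes * 8 / voice_rate_bps * 1000
    print(f"{payload_bytes}-byte payload: {fill_ms:.0f} ms to fill")
# 32-byte payload: 4 ms to fill
# 48-byte payload: 6 ms to fill
# 64-byte payload: 8 ms to fill -- long enough that echo becomes a concern
```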
This cell size is not suitable for high-speed applications in general, and for video traffic in particular. A single HDTV frame requires 50,000 cells, and switching 50,000 times for each frame is not the optimal way to handle it.
Prior to the formation of the ATM Forum in October 1991, most ATM network design decisions were made as if the network were being designed for voice. It is only in 1994 that the importance of data traffic was realized and the available bit rate (ABR) service was introduced. It is now well understood that the key to the success of ATM technology is its support of data traffic. If ATM fails to support data, it will not be able to stay around for video traffic.
It is important to note that we used time in the above equation, not size. The time requirements of an application do not change as the bandwidth of the network changes. For example, 30 frames per second video needs one frame every 33 ms regardless of the speed of the link. Even at gigabit or terabit per second speeds, the video will need a response time variation in milliseconds. A cell time of 6 ms would satisfy most delay-sensitive applications. Although 6 ms corresponds to 48 bytes at 64 kbps, it corresponds to 900 kB at 1.2 Gbps. By keeping the cells small at higher speeds, we get micro- and nanosecond delay variation. Unfortunately, our eyes cannot even perceive the difference between millisecond and microsecond delays, and therefore we are wasting switching resources.
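The following sketch works out the cell time, and the amount of data a 6 ms interval holds, at a few of the link rates discussed in this paper (48-byte payload, as above):

```python
# Cell time for a 48-byte payload, and data carried in 6 ms, at several rates.
rates_bps = {"64 kbps": 64e3, "155 Mbps": 155e6, "1.2 Gbps": 1.2e9}

for name, rate in rates_bps.items():
    cell_time_us = 48 * 8 / rate * 1e6       # time to transmit one payload
    bytes_in_6ms = int(rate * 6e-3 / 8)      # data sent in a 6 ms interval
    print(f"{name}: cell time {cell_time_us:.2f} us, "
          f"6 ms carries {bytes_in_6ms} bytes")
# 64 kbps: cell time 6000.00 us, 6 ms carries 48 bytes
# 155 Mbps: cell time 2.48 us, 6 ms carries 116250 bytes
# 1.2 Gbps: cell time 0.32 us, 6 ms carries 900000 bytes (900 kB)
```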
The cost of a switch depends partly upon its switching speed in cells per second. Given a switch design, it is possible to make a higher speed switch by simply increasing the cell size without any significant increase in cost.
What we need are ``constant-time'' cells, not ``constant-size'' cells, for scalability to high speed. With constant-size cells, the cell time decreases as the speed increases, and it becomes necessary to switch more and more cells per second, thereby increasing the cost.
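A small sketch contrasting the two approaches; the 6 ms constant-time value is carried over from the voice discussion above, and the link rates are merely illustrative.

```python
# Constant-size vs constant-time cells as the link rate grows.
links_bps  = {"155 Mbps": 155e6, "622 Mbps": 622e6, "2.4 Gbps": 2.4e9}
CELL_BYTES = 53        # standard ATM cell (48-byte payload + 5-byte header)
CELL_TIME  = 6e-3      # a hypothetical constant-time cell of 6 ms

for name, rate in links_bps.items():
    fixed_size_cps = rate / (CELL_BYTES * 8)      # cells/s with 53-byte cells
    fixed_time_kb  = rate * CELL_TIME / 8 / 1e3   # kB per 6 ms cell
    print(f"{name}: {fixed_size_cps/1e6:.2f} M cells/s fixed-size, "
          f"or {1/CELL_TIME:.0f} cells/s of {fixed_time_kb:.0f} kB each")
# Fixed-size: 0.37, 1.47, and 5.66 M cells/s -- the switch must run faster
# (and cost more) at every speed step.  Fixed-time: always ~167 cells/s;
# only the cell size grows with the link rate.
```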
The telecommunications industry claims that SONET is scalable in bandwidth. SONET uses constant-time frames: all SONET frames are 125 microseconds long, and as the speed increases, the number of bytes in the frame increases proportionately.
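For example, the standard STS-1, STS-3c, and STS-12c line rates give the following frame sizes:

```python
# SONET keeps the frame *time* fixed at 125 us, so the frame *size*
# scales with the line rate: 8000 frames per second at every speed.
sonet_rates_bps = {
    "STS-1   (51.84 Mbps)":  51.84e6,
    "STS-3c  (155.52 Mbps)": 155.52e6,
    "STS-12c (622.08 Mbps)": 622.08e6,
}
FRAME_TIME = 125e-6    # seconds, the same at every rate

for name, rate in sonet_rates_bps.items():
    frame_bytes = rate * FRAME_TIME / 8
    print(f"{name}: {frame_bytes:.0f} bytes per frame")
# STS-1: 810 bytes, STS-3c: 2430 bytes, STS-12c: 9720 bytes
```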
Many of today's ATM networks use SONET links. In these networks, a large video image is broken down into small cells, which are then packed into a large SONET frame and transmitted. At the receiver, the SONET frame is unloaded and the information is switched cell by cell, even though all the cells are probably going to the same destination. After switching, the cells are loaded into another SONET frame and forwarded to the next switch. This process of unloading SONET frames and switching cell by cell is clearly unnecessary, given that at high speed the amount of information to be transmitted is also generally larger. We could simply switch SONET frames, or use a technology that combines the best features of SONET and ATM.
One of the advantages of switches over routers was that switches were supposed to be simple. They no longer are. For ATM switches, switching is only a negligible part of their responsibility. A large part of the switch resources is consumed by connection setup, route determination, address translation, multicasting, anycasting, flow control, congestion control, and so on.
Another element adding to the complexity of ATM is the fact that it is being developed by multiple standards bodies: the ITU and the ATM Forum. Strictly speaking, the ATM Forum is not a standards body; the ITU is supposed to develop the standards, and the ATM Forum is supposed to select a subset of the options provided by the ITU. But in reality, the ITU is too slow. The ATM Forum cannot wait for the ITU to finalize its standards, so it has taken a leading role in developing them in parallel. A considerable amount of time at both bodies is spent reconciling the agreements made at the other body. Vendors will end up implementing both the ITU and ATM Forum versions of the standards, and users will have to bear the cost, even though just one set would have been fine.
High-speed networking will succeed if and only if there is economy of scale, so that using higher speed links results in cost savings. Unfortunately, this is not the case right now. We face the danger of users dividing their applications among several low-speed links.
A considerable amount of resources is being invested in ATM networks. However, ATM's success will depend upon our being able to transfer data at a lower cost and with higher performance than legacy LANs. Also, we will have to control the desire to incorporate all options at once; otherwise, the technology will become too complex.