What is the Internet?
To help answer the question more completely, the rest of this paper
contains an updated second chapter from "The Whole Internet User's
Guide and Catalog" by Ed Krol (1992) that gives a more thorough
explanation.
The Internet was born about 20 years ago, out of an effort to connect
together a U.S. Defense Department network called the ARPAnet and various
other radio and satellite networks. The ARPAnet was an experimental
network designed to support military research--in particular,
research about how to build networks that could withstand partial
outages (like bomb attacks) and still function.
(Think about this when I describe
how the network works; it may give you some insight
into the design of the Internet.) In the ARPAnet model, communication
always occurs between a source and a destination computer. The
network itself is assumed to be unreliable; any portion of the
network could disappear at any moment (pick your favorite
catastrophe--these days backhoes cutting cables are more of a threat
than bombs). It was designed to require the minimum of information
from the computer clients. To send a message on the network, a
computer only had to put its data in an envelope, called an Internet
Protocol (IP) packet, and "address" the packet correctly. The
communicating computers--not the network itself--were also given the
responsibility to ensure that the communication was accomplished. The
philosophy was that every computer on the network could talk, as a
peer, with any other computer.
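To make the "addressed envelope" model concrete, here is a minimal sketch
in Python using UDP datagram sockets, which ride directly on IP and share
its no-promises delivery. The address, port, and "ACK" reply convention are
illustrative assumptions rather than anything from the ARPAnet itself; the
point is only that the sender addresses its packet and that the two end
computers, not the network, take on the job of confirming delivery.

    # A sketch of the "envelope" model described above, using a UDP datagram
    # socket.  UDP, like IP beneath it, makes no delivery promise: the sender
    # simply addresses a packet and hands it to the network.
    import socket

    # Assumed placeholder for wherever the receiving computer happens to be.
    DESTINATION = ("127.0.0.1", 9999)

    def send_message(payload: bytes) -> None:
        # Address the envelope and drop it in the network's mailbox;
        # nothing here guarantees it arrives.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(payload, DESTINATION)

    def send_reliably(payload: bytes, retries: int = 5, timeout: float = 1.0) -> bool:
        # Because the network is assumed unreliable, the communicating
        # computers themselves make sure the data arrived: resend until the
        # far end answers with a short acknowledgment, or give up.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            for _ in range(retries):
                sock.sendto(payload, DESTINATION)
                try:
                    reply, _addr = sock.recvfrom(64)
                    if reply == b"ACK":
                        return True
                except socket.timeout:
                    continue  # no answer yet; try again
        return False

This retransmit-until-acknowledged loop is, in miniature, the division of
labor the ARPAnet designers chose: the network only carries addressed
packets, while the endpoints decide whether the conversation succeeded.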
These decisions may sound odd, like the assumption of an "unreliable"
network, but history has proven that most of them were reasonably
correct. Although the International Organization for Standardization
(ISO) was spending years designing the ultimate standard for computer
networking, people could not wait. Internet developers in the US, UK
and Scandinavia, responding to market pressures, began to put their
IP software on every conceivable type of computer. It became the only
practical method for computers from different manufacturers to
communicate. This was attractive to the government and universities,
which didn't have policies saying that all computers must be bought
from the same vendor. Everyone bought whichever computer they liked,
and expected the computers to work together over the network.
At about the same time as the Internet was coming into being,
Ethernet local area networks ("LANs") were developed. This
technology matured quietly, until desktop
workstations became available around 1983.
Most of these workstations came with Berkeley UNIX, which
included IP networking software. This created a new demand: rather
than connecting to a single large timesharing computer per site,
organizations wanted to connect the ARPAnet to their entire local
network. This would allow all the computers on that LAN to access
ARPAnet facilities. About the same time, other organizations started
building their own networks using the same communications protocols
as the ARPAnet: namely, IP and its relatives. It became obvious that
if these networks could talk together, users on one network could
communicate with those on another; everyone would benefit.
One of the most important of these newer networks was the NSFNET,
commissioned by the National Science Foundation (NSF), an agency of
the U.S. government. In the mid 80's the NSF created five
supercomputer centers. Up to this point, the world's fastest
computers had only been available to weapons developers and a few
researchers from very large corporations. By creating supercomputer
centers, the NSF was making these resources available for any
scholarly research. Because the centers were so expensive, only five
were created--and they had to be shared. This created a communications
problem: they needed a way to connect their centers together and to
allow the clients of these centers to access them.
At first, the NSF tried to use the
ARPAnet for communications, but this strategy failed
because of bureaucracy and staffing problems.
In response, NSF decided to build its own network, based on the
ARPAnet's IP technology. It connected the centers with 56,000 bit per
second (56k bps) telephone lines. (This
is roughly the ability to transfer two
full typewritten pages per second. That's
slow by modern standards, but was
reasonably fast in the mid 80's.) It
was obvious, however, that if they tried
to connect every university directly to a
supercomputing center, they would go broke. You pay for
these telephone lines by the mile. One line per campus with a
supercomputing center at the hub, like spokes on a bike wheel, adds
up to lots of miles of phone lines. Therefore, they decided to create
regional networks. In each area of the country, schools would be
connected to their nearest neighbor. Each chain was connected to a
supercomputer center at one point and the centers were connected
together. With this configuration, any computer could eventually
communicate with any other by forwarding the conversation through its
neighbors.
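The hub-and-spoke-and-chain arrangement can be pictured with a toy model.
The site names and links below are invented for illustration, and a
breadth-first search stands in for whatever routing the real network used;
the sketch only shows how a message reaches any destination by being
handed from neighbor to neighbor.

    # Toy model of the regional topology: campuses chain to a regional
    # network, regional networks attach to a supercomputer center, and the
    # centers are connected to each other.  All names are invented.
    from collections import deque

    LINKS = {
        "campus-A": ["campus-B"],
        "campus-B": ["campus-A", "regional-net-1"],
        "regional-net-1": ["campus-B", "center-1"],
        "center-1": ["regional-net-1", "center-2"],
        "center-2": ["center-1", "regional-net-2"],
        "regional-net-2": ["center-2", "campus-C"],
        "campus-C": ["regional-net-2"],
    }

    def forwarding_path(source, destination):
        # Breadth-first search: the chain of neighbors a conversation
        # would be forwarded through to get from source to destination.
        previous = {source: None}
        queue = deque([source])
        while queue:
            site = queue.popleft()
            if site == destination:
                path = []
                while site is not None:
                    path.append(site)
                    site = previous[site]
                return list(reversed(path))
            for neighbor in LINKS[site]:
                if neighbor not in previous:
                    previous[neighbor] = site
                    queue.append(neighbor)
        return []  # no route exists

    print(forwarding_path("campus-A", "campus-C"))
    # ['campus-A', 'campus-B', 'regional-net-1', 'center-1',
    #  'center-2', 'regional-net-2', 'campus-C']

Each site only needs to know its own neighbors; nobody needs a direct,
paid-by-the-mile line to every other campus.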
This solution was successful--and, like any successful solution, a
time came when it no longer worked. Sharing supercomputers also
allowed the connected sites to share a lot of other things not
related to the centers. Suddenly these schools had a world of data
and collaborators at their fingertips. The network's traffic
increased until, eventually, the computers controlling the network
and the telephone lines connecting them were overloaded. In 1987, a
contract to manage and upgrade the network was awarded to Merit Network
Inc., which ran Michigan's educational network, in
partnership with IBM and MCI. The old network was replaced with
telephone lines twenty times faster and with faster computers to
control it.
The process of running out of horsepower and getting bigger engines
and better roads continues to this day. Unlike changes to the highway
system, however, most of these changes aren't noticed by the people
trying to use the Internet to do real work. You won't go to your
office, log in to your computer, and find a message saying that the
Internet will be inaccessible for the next six months because of
improvements. Perhaps even more important: the process of running out
of capacity and improving the network has created a technology that's
extremely mature and practical. The ideas have been tested; problems
have appeared, and problems have been solved.
For our purposes, the most important aspect of the NSF's networking
effort is that it allowed everyone to access the network. Up to that
point, Internet access had been available only to researchers in
computer science, government employees, and government contractors.
The NSF promoted universal educational access by funding campus
connections only if the campus had a plan to spread the access
around. So everyone attending a four-year college could become an
Internet user.
The demand keeps growing. Now that most four-year colleges are
connected, people are trying to get secondary and primary schools
connected. People who have graduated from college know what the
Internet is good for, and talk their employers into connecting
corporations. All this activity points to continued growth,
networking problems to solve, evolving technologies, and job security
for networkers.