
Internet

COMPUTER NETWORK
WRITTEN BY:
• Robert Kahn
• Michael Aaron Dennis
Internet, a system architecture that has revolutionized communications and methods of
commerce by allowing various computer networks around the world to interconnect.
Sometimes referred to as a “network of networks,” the Internet emerged in the United
States in the 1970s but did not become visible to the general public until the early
1990s. By 2015, approximately 3.2 billion people, or nearly half of the world’s
population, were estimated to have access to the Internet.
The Internet provides a capability so powerful and general that it can be used for almost
any purpose that depends on information, and it is accessible by every individual who
connects to one of its constituent networks. It supports human communication
via electronic mail (e-mail), “chat rooms,” newsgroups, and audio and video
transmission and allows people to work collaboratively at many different locations. It
supports access to digital information by many applications, including the World Wide
Web. The Internet has proved to be a spawning ground for a large and growing number
of “e-businesses” (including subsidiaries of traditional “brick-and-mortar” companies)
that carry out most of their sales and services over the Internet. (See electronic
commerce.) Many experts believe that the Internet will dramatically transform business
as well as society.

Origin And Development

Early networks

The first computer networks were dedicated special-purpose systems such as SABRE
(an airline reservation system) and AUTODIN I (a defense command-and-control
system), both designed and implemented in the late 1950s and early 1960s. By the
early 1960s computer manufacturers had begun to use semiconductor technology in
commercial products, and both conventional batch-processing and time-
sharing systems were in place in many large, technologically advanced companies.
Time-sharing systems allowed a computer’s resources to be shared in rapid succession
with multiple users, cycling through the queue of users so quickly that the computer
appeared dedicated to each user’s tasks despite the existence of many others
accessing the system “simultaneously.” This led to the notion of sharing computer
resources (called host computers or simply hosts) over an entire network. Host-to-host
interactions were envisioned, along with access to specialized resources (such
as supercomputers and mass storage systems) and interactive access by remote users
to the computational powers of time-sharing systems located elsewhere. These ideas
were first realized in ARPANET, which established the first host-to-host network
connection on Oct. 29, 1969. It was created by the Advanced Research Projects
Agency (ARPA) of the U.S. Department of Defense. ARPANET was one of the first
general-purpose computer networks. It connected time-sharing computers at
government-supported research sites, principally universities in the United States, and it
soon became a critical piece of infrastructure for the computer
science research community in the United States. Tools and applications—such as the
simple mail transfer protocol (SMTP, commonly referred to as e-mail), for sending short
messages, and the file transfer protocol (FTP), for longer transmissions—quickly
emerged. In order to achieve cost-effective interactive communications between
computers, which typically communicate in short bursts of data, ARPANET employed
the new technology of packet switching. Packet switching takes large messages (or
chunks of computer data) and breaks them into smaller, manageable pieces (known as
packets) that can travel independently over any available circuit to the target
destination, where the pieces are reassembled. Thus, unlike traditional voice
communications, packet switching does not require a single dedicated circuit between
each pair of users.
Commercial packet networks were introduced in the 1970s, but these were designed
principally to provide efficient access to remote computers by dedicated terminals.
Briefly, they replaced long-distance modem connections by less-expensive “virtual”
circuits over packet networks. In the United States, Telenet and Tymnet were two such
packet networks. Neither supported host-to-host communications; in the 1970s this was
still the province of the research networks, and it would remain so for many years.
DARPA (Defense Advanced Research Projects Agency; formerly ARPA)
supported initiatives for ground-based and satellite-based packet networks. The ground-
based packet radio system provided mobile access to computing resources, while the
packet satellite network connected the United States with several European countries
and enabled connections with widely dispersed and remote regions. With the
introduction of packet radio, connecting a mobile terminal to a computer
network became feasible. However, time-sharing systems were then still too large,
unwieldy, and costly to be mobile or even to exist outside a climate-controlled
computing environment. A strong motivation thus existed to connect the packet radio
network to ARPANET in order to allow mobile users with simple terminals to access the
time-sharing systems for which they had authorization. Similarly, the packet satellite
network was used by DARPA to link the United States with satellite terminals serving
the United Kingdom, Norway, Germany, and Italy. These terminals, however, had to be
connected to other networks in European countries in order to reach the end users.
Thus arose the need to connect the packet satellite net, as well as the packet radio net,
with other networks.
Foundation of the Internet

The Internet resulted from the effort to connect various research networks in the United
States and Europe. First, DARPA established a program to investigate the
interconnection of “heterogeneous networks.” This program, called Internetting, was
based on the newly introduced concept of open architecture networking, in which
networks with defined standard interfaces would be interconnected by “gateways.” A
working demonstration of the concept was planned. In order for the concept to work, a
new protocol had to be designed and developed; indeed, a system architecture was
also required.
In 1974 Vinton Cerf, then at Stanford University in California, and this author, then at
DARPA, collaborated on a paper that first described such a protocol and system
architecture—namely, the transmission control protocol (TCP), which enabled different
types of machines on networks all over the world to route and assemble data packets.
TCP, which originally included the Internet protocol (IP), a global addressing
mechanism that allowed routers to get data packets to their ultimate destination, formed
the TCP/IP standard, which was adopted by the U.S. Department of Defense in 1980.
By the early 1980s the “open architecture” of the TCP/IP approach was adopted
and endorsed by many other researchers and eventually by technologists and
businessmen around the world.
By the 1980s other U.S. governmental bodies were heavily involved with networking,
including the National Science Foundation (NSF), the Department of Energy, and
the National Aeronautics and Space Administration (NASA). While DARPA had played
a seminal role in creating a small-scale version of the Internet among its researchers,
NSF worked with DARPA to expand access to the entire scientific and
academic community and to make TCP/IP the standard in all federally supported
research networks. In 1985–86 NSF funded the first five supercomputing centres—
at Princeton University, the University of Pittsburgh, the University of California, San
Diego, the University of Illinois, and Cornell University. In the 1980s NSF also funded
the development and operation of the NSFNET, a national “backbone” network to
connect these centres. By the late 1980s the network was operating at millions of bits
per second. NSF also funded various nonprofit local and regional networks to connect
other users to the NSFNET. A few commercial networks also began in the late 1980s;
these were soon joined by others, and the Commercial Internet Exchange (CIX) was
formed to allow transit traffic between commercial networks that otherwise would not
have been allowed on the NSFNET backbone. In 1995, after extensive review of the
situation, NSF decided that support of the NSFNET infrastructure was no longer
required, since many commercial providers were now willing and able to meet the
needs of the research community, and its support was withdrawn. Meanwhile, NSF had
fostered a competitive collection of commercial Internet backbones connected to one
another through so-called network access points (NAPs).
From the Internet’s origin in the early 1970s, control of it steadily devolved from
government stewardship to private-sector participation and finally to private custody with
government oversight and forbearance. Today a loosely structured group of several
thousand interested individuals known as the Internet Engineering Task Force
participates in a grassroots development process for Internet standards. Internet
standards are maintained by the nonprofit Internet Society, an international body with
headquarters in Reston, Virginia. The Internet Corporation for Assigned Names and
Numbers (ICANN), another nonprofit, private organization, oversees various aspects of
policy regarding Internet domain names and numbers.

Commercial expansion

The rise of commercial Internet services and applications helped to fuel a rapid
commercialization of the Internet. This phenomenon was the result of several other
factors as well. One important factor was the introduction of the personal computer and
the workstation in the early 1980s—a development that in turn was fueled by
unprecedented progress in integrated circuit technology and an attendant rapid decline
in computer prices. Another factor, which took on increasing importance, was the
emergence of Ethernet and other “local area networks” to link personal computers. But
other forces were at work too. Following the restructuring of AT&T in 1984, NSF took
advantage of various new options for national-level digital backbone services for the
NSFNET. In 1988 the Corporation for National Research Initiatives received approval to
conduct an experiment linking a commercial e-mail service (MCI Mail) to the Internet.
This application was the first Internet connection to a commercial provider that was not
also part of the research community. Approval quickly followed to allow other e-mail
providers access, and the Internet began its first explosion in traffic.
In 1993 federal legislation allowed NSF to open the NSFNET backbone to commercial
users. Prior to that time, use of the backbone was subject to an “acceptable use” policy,
established and administered by NSF, under which commercial use was limited to those
applications that served the research community. NSF recognized that commercially
supplied network services, now that they were available, would ultimately be far less
expensive than continued funding of special-purpose network services.

Also in 1993 the University of Illinois made widely available Mosaic, a new type
of computer program, known as a browser, that ran on most types of computers and,
through its “point-and-click” interface, simplified access, retrieval, and display of files
through the Internet. Mosaic incorporated a set of access protocols and display
standards originally developed at the European Organization for Nuclear Research
(CERN) by Tim Berners-Lee for a new Internet application called the World Wide
Web (WWW). In 1994 Netscape Communications Corporation (originally called Mosaic
Communications Corporation) was formed to further develop the Mosaic browser
and server software for commercial use. Shortly thereafter, the software giant Microsoft
Corporation became interested in supporting Internet applications on personal
computers (PCs) and developed its Internet Explorer Web browser (based initially on
Mosaic) and other programs. These new commercial capabilities accelerated the growth
of the Internet, which as early as 1988 had already been growing at the rate of 100
percent per year.
By the late 1990s there were approximately 10,000 Internet service providers (ISPs)
around the world, more than half located in the United States. However, most of these
ISPs provided only local service and relied on access to regional and national ISPs for
wider connectivity. Consolidation began at the end of the decade, with many small to
medium-size providers merging or being acquired by larger ISPs. Among these larger
providers were groups such as America Online, Inc. (AOL), which started as a dial-up
information service with no Internet connectivity but made a transition in the late 1990s
to become the leading provider of Internet services in the world—with more than 25
million subscribers by 2000 and with branches in Australia, Europe, South America,
and Asia. Widely used Internet “portals” such as AOL, Yahoo!, Excite, and others were
able to command advertising fees owing to the number of “eyeballs” that visited their
sites. Indeed, during the late 1990s advertising revenue became the main quest of
many Internet sites, some of which began to speculate by offering free or low-cost
services of various kinds that were visually augmented with advertisements. By 2001
this speculative bubble had burst.

Future directions

While the precise structure of the future Internet is not yet clear, many directions of
growth seem apparent. One is the increased availability of wireless access. Wireless
services enable applications not previously possible in any economical fashion. For
example, global positioning systems (GPS) combined with wireless Internet access
would help mobile users to locate alternate routes, generate precise accident reports
and initiate recovery services, and improve traffic management and congestion control.
In addition to wireless laptop computers and personal digital assistants (PDAs),
wearable devices with voice input and special display glasses are under development.

Another future direction is toward higher backbone and network access speeds.
Backbone data rates of 10 billion bits (10 gigabits) per second are readily available
today, but data rates of 1 trillion bits (1 terabit) per second or higher will eventually
become commercially feasible. If the development of computer hardware, software,
applications, and local access keeps pace, it may be possible for users to access
networks at speeds of 100 gigabits per second. At such data rates, high-resolution
video—indeed, multiple video streams—would occupy only a small fraction of available
bandwidth. Remaining bandwidth could be used to transmit auxiliary information about
the data being sent, which in turn would enable rapid customization of displays and
prompt resolution of certain local queries. Much research, both public and private, has
gone into integrated broadband systems that can simultaneously carry multiple signals
—data, voice, and video. In particular, the U.S. government has funded research to
create new high-speed network capabilities dedicated to the scientific-research
community.
It is clear that communications connectivity will be an important function of a future
Internet as more machines and devices are interconnected. In 1998, after four years of
study, the Internet Engineering Task Force published a new 128-bit IP address standard
intended to replace the conventional 32-bit standard. By allowing a vast increase in the
number of available addresses (2^128, as opposed to 2^32), this standard will make it
possible to assign unique addresses to almost every electronic device imaginable.
Thus, the expressions “wired” office, home, and car may all take on new meanings,
even if the access is really wireless.
The dissemination of digitized text, pictures, and audio and video recordings over the
Internet, primarily available today through the World Wide Web, has resulted in an
information explosion. Clearly, powerful tools are needed to manage network-based
information. Information available on the Internet today may not be available tomorrow
without careful attention to preservation and archiving techniques. The key
to making information persistently available is infrastructure and the management of
that infrastructure. Repositories of information, stored as digital objects, will soon
populate the Internet. At first these repositories may be dominated by digital objects
specifically created and formatted for the World Wide Web, but in time they will contain
objects of all kinds in formats that will be dynamically resolvable by users’ computers in
real time. Movement of digital objects from one repository to another will still leave them
available to users who are authorized to access them, while replicated instances of
objects in multiple repositories will provide alternatives to users who are better able to
interact with certain parts of the Internet than with others. Information will have its own
identity and, indeed, become a “first-class citizen” on the Internet.
Robert Kahn
