Computer Project
Computer once meant a person who did computations, but now the term
almost universally refers to automated electronic machinery. The first
section of this article focuses on modern digital electronic computers and
their design, constituent parts, and applications. The second section covers
the history of computing. For details on computer architecture, software,
and theory, see computer science.
Computing basics
The first computers were used primarily for numerical calculations.
However, as any information can be numerically encoded, people soon
realized that computers are capable of general-purpose information
processing. Their capacity to handle large amounts of data has extended the
range and accuracy of weather forecasting. Their speed has allowed them
to make decisions about routing telephone connections through a network
and to control mechanical systems such as automobiles, nuclear reactors,
and robotic surgical tools. They are also cheap enough to be embedded in
everyday appliances and to make clothes dryers and rice cookers “smart.”
Computers have allowed us to pose and answer questions that were difficult
to pursue in the past. These questions might be about DNA sequences in
genes, patterns of activity in a consumer market, or all the uses of a word in
texts that have been stored in a database. Increasingly, computers can also
learn and adapt as they operate by using processes such as machine
learning.
Computers also have limitations, some of which are theoretical.
For example, there are undecidable propositions whose truth cannot be
determined within a given set of rules, such as the logical structure of a
computer. Because no universal algorithmic method can exist to identify
such propositions, a computer asked to obtain the truth of such a
proposition will (unless forcibly interrupted) continue indefinitely—a
condition known as the “halting problem.” (See Turing machine.) Other
limitations reflect current technology. For example, although computers
have progressed greatly in terms of processing data and using artificial
intelligence algorithms, they are limited by their incapacity to think in a
more holistic fashion. Computers may imitate humans—quite effectively,
even—but imitation may not replace the human element in social
interaction. Ethical concerns also limit computers, because computers rely
on data, rather than a moral compass or human conscience, to make
decisions.
Analog computers
Analog computers use continuous physical magnitudes to represent
quantitative information. At first they represented quantities with
mechanical components (see differential analyzer and integrator), but
after World War II voltages were used; by the 1960s digital computers had
largely replaced them. Nonetheless, analog computers, and some hybrid
digital-analog systems, continued in use through the 1960s in tasks such as
aircraft and spaceflight simulation.
One advantage of analog computation is
that it may be relatively simple to design and build an analog computer to
solve a single problem. Another advantage is that analog computers can
frequently represent and solve a problem in “real time”; that is, the
computation proceeds at the same rate as the system being modeled by it.
Their main disadvantages are that analog representations are limited in
precision—typically a few decimal places but fewer in complex mechanisms
—and general-purpose devices are expensive and not easily programmed.
Digital computers
In contrast to analog computers, digital computers represent information
in discrete form, generally as sequences of 0s and 1s (binary digits, or bits).
The modern era of digital computers began in the late 1930s and early
1940s in the United States, Britain, and Germany. The first devices used
switches operated by electromagnets (relays). Their programs were stored
on punched paper tape or cards, and they had limited internal data storage.
For historical developments, see the section Invention of the modern
computer.
Mainframe computer
During the 1950s and ’60s, Unisys (maker of
the UNIVAC computer), International Business Machines
Corporation (IBM), and other companies made large, expensive computers
of increasing power. They were used by major corporations and government
research laboratories, typically as the sole computer in the organization. In
1959 the IBM 1401 computer rented for $8,000 per month (early IBM
machines were almost always leased rather than sold), and in 1964 the
largest IBM S/360 computer cost several million dollars.
These computers came to be called mainframes, though the term did not
become common until smaller computers were built. Mainframe computers
were characterized by having (for their time) large storage capabilities, fast
components, and powerful computational abilities. They were highly
reliable, and, because they frequently served vital needs in an organization,
they were sometimes designed with redundant components that let them
survive partial failures. Because they were complex systems, they were
operated by a staff of systems programmers, who alone had access to the
computer. Other users submitted “batch jobs” to be run one at a time on the
mainframe.
Such systems remain important today, though they are no longer the sole,
or even primary, central computing resource of an organization, which will
typically have hundreds or thousands of personal computers (PCs).
Mainframes now provide high-capacity data storage for Internet servers, or,
through time-sharing techniques, they allow hundreds or thousands of users
to run programs simultaneously. Because of their current roles, these
computers are now called servers rather than mainframes.
Supercomputer
The most powerful computers of the day have typically been
called supercomputers. They have historically been very expensive and their
use limited to high-priority computations for government-sponsored
research, such as nuclear simulations and weather modeling. Today many of
the computational techniques of early supercomputers are in common use
in PCs. On the other hand, the design of costly, special-purpose processors
for supercomputers has been replaced by the use of large arrays of
commodity processors (from several dozen to over 8,000) operating in
parallel over a high-speed communications network.
Minicomputer
Although minicomputers date to the early 1950s, the term was introduced
in the mid-1960s. Relatively small and inexpensive, minicomputers were
typically used in a single department of an organization and often dedicated
to one task or shared by a small group. Minicomputers generally had limited
computational power, but they had excellent compatibility with various
laboratory and industrial devices for collecting and inputting data.
Microcomputer
A microcomputer is a small computer built around
a microprocessor integrated circuit, or chip. Whereas the early
minicomputers replaced vacuum tubes with discrete transistors,
microcomputers (and later minicomputers as well) used microprocessors
that integrated thousands or millions of transistors on a single chip. In 1971
the Intel Corporation produced the first microprocessor, the Intel 4004,
which was powerful enough to function as a computer although it was
produced for use in a Japanese-made calculator. In 1975 the first personal
computer, the Altair, used a successor chip, the Intel 8080 microprocessor.
Like minicomputers, early microcomputers had relatively limited storage
and data-handling capabilities, but these have grown as
storage technology has improved alongside processing power.
Laptop computer
The first true laptop computer marketed to consumers was the Osborne 1,
which became available in April 1981. A laptop usually features a
“clamshell” design, with a screen located on the upper lid and a keyboard
on the lower lid. Such computers are powered by a battery, which can be
recharged with alternating current (AC) power chargers. The 1991
PowerBook, created by Apple, was a design milestone, featuring a trackball
for navigation and palm rests; a 1994 model was the first laptop to feature a
touchpad and an Ethernet networking port. The popularity of the laptop
continued to increase in the 1990s, and by the early 2000s laptops were
earning more revenue than desktop models. They remain the most popular
computers on the market and have outsold desktop computers and tablets
since 2018.
Computer hardware
The physical elements of a computer, its hardware, are generally divided
into the central processing unit (CPU), main memory (or random-access
memory, RAM), and peripherals. The last class encompasses all sorts of
input and output (I/O) devices: keyboard, display monitor, printer, disk
drives, network connections, scanners, and more.
The CPU and RAM
are integrated circuits (ICs)—small silicon wafers, or chips, that contain
thousands or millions of transistors that function as electrical switches. In
1965 Gordon Moore, one of the founders of Intel, stated what has become
known as Moore’s law: the number of transistors on a chip doubles about
every 18 months. Moore suggested that financial constraints would soon
cause his law to break down, but it has been remarkably accurate for far
longer than he first envisioned. Advances in the design of chips and
transistors, such as the creation of three-dimensional chips (rather than
their previously flat design), have helped to bolster their capabilities,
though there are limits to this process as well. While NVIDIA cofounder and CEO Jensen Huang claims that the law has largely run its course, Intel's CEO, Pat Gelsinger, has argued otherwise. Companies such as IBM continue to experiment with materials other than silicon for chip design. The continued viability of Moore's law will depend on further advances in chip technology as well as on breakthroughs in moving beyond silicon as the industry standard.
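As a rough illustration of the arithmetic behind Moore's law, the following Python sketch projects a transistor count that doubles every 18 months. The starting figure of 2,300 transistors corresponds to the Intel 4004 mentioned later in this article; the projection itself is purely illustrative.

# Illustrative projection of Moore's law: a doubling every 18 months.
def projected_transistors(initial_count, years, doubling_period_years=1.5):
    doublings = years / doubling_period_years
    return initial_count * 2 ** doublings

# Ten doubling periods (15 years) give roughly a thousandfold increase:
# starting from the 2,300 transistors of the 1971 Intel 4004, about 2.4 million.
print(round(projected_transistors(2300, 15)))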
Central processing unit
The CPU's arithmetic-logic unit (ALU) has circuits that add, subtract, multiply, and divide two
arithmetic values, as well as circuits for logic operations such as AND and
OR (where a 1 is interpreted as true and a 0 as false, so that, for instance, 1
AND 0 = 0; see Boolean algebra). The ALU has several to more than a
hundred registers that temporarily hold results of its computations for
further arithmetic operations or for transfer to main memory.
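The logic operations mentioned above behave exactly like ordinary bitwise operators. The short Python sketch below simply reproduces the truth-table behavior described in the text; the values chosen are arbitrary.

# Logic on single bits, where 1 is interpreted as true and 0 as false.
a, b = 1, 0
print(a & b)   # AND: 1 AND 0 = 0
print(a | b)   # OR:  1 OR 0 = 1

# The same operators act on whole binary words, one bit position at a time.
x, y = 0b1100, 0b1010
print(bin(x & y))   # 0b1000
print(bin(x | y))   # 0b1110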
The circuits in the CPU control section provide branch instructions, which
make elementary decisions about what instruction to execute next. For
example, a branch instruction might be “If the result of the last ALU
operation is negative, jump to location A in the program; otherwise,
continue with the following instruction.” Such instructions allow “if-then-
else” decisions in a program and execution of a sequence of instructions,
such as a “while-loop” that repeatedly does some set of instructions while
some condition is met. A related instruction is the subroutine call, which
transfers execution to a subprogram and then, after the subprogram
finishes, returns to the main program where it left off.
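In a high-level language, the control structures that branch and subroutine-call instructions make possible look like the following Python sketch; the function and variable names are invented for illustration.

# "If-then-else": the branch taken depends on the sign of a previous result.
def clamp_negative(result):
    if result < 0:      # corresponds to "jump to location A"
        return 0
    return result       # otherwise continue with the following instruction

# "While-loop": repeat a set of instructions while some condition is met.
total, n = 0, 5
while n > 0:
    total += n
    n -= 1

# Subroutine call: execution transfers to clamp_negative() and then returns here.
print(clamp_negative(total - 20))   # 15 - 20 = -5, so the result is clamped to 0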
In a stored-program computer, programs and data in memory are
indistinguishable. Both are bit patterns—strings of 0s and 1s—that may be
interpreted either as data or as program instructions, and both are fetched
from memory by the CPU. The CPU has a program counter that holds the
memory address (location) of the next instruction to be executed. The basic
operation of the CPU is the “fetch-decode-execute” cycle:
1. Fetch the instruction from the address held in the program counter, and store it in a register.
2. Decode the instruction. Parts of it specify the operation to be done, and parts specify the data on which it is to operate. These may be in CPU registers or in memory locations. If it is a branch instruction, part of it will contain the memory address of the next instruction to execute once the branch condition is satisfied.
3. Fetch the operands, if any.
4. Execute the operation if it is an ALU operation.
5. Store the result (in a register or in memory), if there is one.
6. Update the program counter to hold the next instruction location, which is either the next memory location or the address specified by a branch instruction.
At the end of these steps the cycle is ready to repeat, and it continues until
a special halt instruction stops execution. Steps of this cycle and all internal
CPU operations are regulated by a clock that oscillates at a high frequency
(now typically measured in gigahertz, or billions of cycles per second).
Another factor that affects performance is the “word” size—the number of
bits that are fetched at once from memory and on which CPU instructions
operate. Digital words now consist of 32 or 64 bits, though sizes from 8 to
128 bits are seen.
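The fetch-decode-execute cycle described above can be sketched as a tiny simulator. The instruction names and the single-accumulator design below are invented for illustration and do not correspond to any real CPU.

# A minimal fetch-decode-execute loop for a made-up stored-program machine.
# Memory holds both instructions (tuples) and data (plain integers).
memory = {
    0: ("LOAD", 100),    # copy the value at address 100 into the accumulator
    1: ("ADD", 101),     # add the value at address 101
    2: ("STORE", 102),   # store the accumulator at address 102
    3: ("HALT", None),   # a special halt instruction stops execution
    100: 7, 101: 35, 102: 0,
}

pc, acc = 0, 0                   # program counter and accumulator register
while True:
    op, operand = memory[pc]     # fetch and decode the next instruction
    pc += 1                      # update the program counter
    if op == "LOAD":
        acc = memory[operand]    # fetch the operand and execute
    elif op == "ADD":
        acc += memory[operand]
    elif op == "STORE":
        memory[operand] = acc    # store the result in memory
    elif op == "HALT":
        break

print(memory[102])               # 42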
There are two major kinds of instruction-level parallelism (ILP) in the CPU,
both first used in early supercomputers. One is the pipeline, which allows
the fetch-decode-execute cycle to have several instructions under way at
once. While one instruction is being executed, another can obtain its
operands, a third can be decoded, and a fourth can be fetched from
memory. If each of these operations requires the same time, a new
instruction can enter the pipeline at each phase and (for example) five
instructions can be completed in the time that it would take to complete one
without a pipeline. The other sort of ILP is to have multiple execution units
in the CPU—duplicate arithmetic circuits, in particular, as well as
specialized circuits for graphics instructions or for floating-point
calculations (arithmetic operations involving noninteger numbers, such as
3.27). With this “superscalar” design, several instructions can execute at
once.
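A back-of-the-envelope calculation shows why pipelining pays off; the figures below are invented and assume that every stage takes exactly one clock cycle.

# Rough pipeline throughput estimate with made-up numbers.
stages = 5                  # e.g., fetch, decode, operand fetch, execute, store
instructions = 100

cycles_without_pipeline = instructions * stages        # 500 cycles
cycles_with_pipeline = stages + (instructions - 1)     # fill the pipeline, then one per cycle
print(cycles_without_pipeline / cycles_with_pipeline)  # roughly 4.8 times faster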
Main memory
The earliest forms of computer main memory were mercury delay lines,
which were tubes of mercury that stored data as ultrasonic waves,
and cathode-ray tubes, which stored data as charges on the tubes’ screens.
The magnetic drum, invented about 1948, used an iron oxide coating on a
rotating drum to store data and programs as magnetic patterns.
The first integrated circuit (IC) memory chip appeared in 1971. IC memory
stores a bit in a transistor-capacitor combination. The capacitor holds a
charge to represent a 1 and no charge for a 0; the transistor switches it
between these two states. Because a capacitor charge gradually decays, IC
memory is dynamic RAM (DRAM), which must have its stored values
refreshed periodically (every 20 milliseconds or so). There is also static
RAM (SRAM), which does not have to be refreshed. Although faster than
DRAM, SRAM uses more transistors and is thus more costly; it is used
primarily for CPU internal registers and cache memory.
Secondary memory
Secondary memory on a computer is storage for data and programs not in
use at the moment. In addition to punched cards and paper tape, early
computers also used magnetic tape for secondary storage. Tape is cheap,
either on large reels or in small cassettes, but has the disadvantage that it
must be read or written sequentially from one end to the other.
IBM introduced the first magnetic disk, the RAMAC, in 1955; it held 5
megabytes and rented for $3,200 per month. Magnetic disks are platters
coated with iron oxide, like tape and drums. An arm with a tiny wire coil,
the read/write (R/W) head, moves radially over the disk, which is divided
into concentric tracks composed of small arcs, or sectors, of data.
Magnetized regions of the disk generate small currents in the coil as it
passes, thereby allowing it to “read” a sector; similarly, a small current in
the coil will induce a local magnetic change in the disk, thereby “writing” to
a sector. The disk rotates rapidly (up to 15,000 rotations per minute), and
so the R/W head can rapidly reach any sector on the disk.
Early disks had
large removable platters. In the 1970s IBM introduced sealed disks with
fixed platters known as Winchester disks—perhaps because the first ones
had two 30-megabyte platters, suggesting the Winchester 30-30 rifle. Not
only was the sealed disk protected against dirt, the R/W head could also
“fly” on a thin air film, very close to the platter. By putting the head closer
to the platter, the region of oxide film that represented a single bit could be
much smaller, thus increasing storage capacity. This basic technology is
still used.
Refinements have included putting multiple platters—10 or more
—in a single disk drive, with a pair of R/W heads for the two surfaces of
each platter in order to increase storage and data transfer rates. Even
greater gains have resulted from improving control of the radial motion of
the disk arm from track to track, resulting in denser distribution of data on
the disk. By 2002 such densities had reached over 8,000 tracks per cm
(20,000 tracks per inch), and a platter the diameter of a coin could hold
over a gigabyte of data. In 2002 an 80-gigabyte disk cost about $200—only
one ten-millionth of the 1955 cost and representing an annual decline of
nearly 30 percent, similar to the decline in the price of main memory.
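The "nearly 30 percent" annual decline follows from simple compounding of the figures quoted above, as this small Python calculation shows.

# If disk storage fell to one ten-millionth of its 1955 price by 2002,
# the implied annual rate of decline is:
years = 2002 - 1955                        # 47 years
price_ratio = 1e-7                         # one ten-millionth
annual_decline = 1 - price_ratio ** (1 / years)
print(round(annual_decline * 100))         # about 29 percent per year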
Examples of magnetic disks include hard disks and floppy disks.
Optical
storage devices—CD-ROM (compact disc, read-only memory) and DVD-
ROM (digital videodisc, or versatile disc)—appeared in the mid-1980s and
’90s. They both represent bits as tiny pits in plastic, organized in a long
spiral like a phonograph record, written and read with lasers. A CD-ROM
can hold 2 gigabytes of data, but the inclusion of error-correcting codes (to
correct for dust, small defects, and scratches) reduces the usable data to
650 megabytes. DVDs are denser, have smaller pits, and can hold 17
gigabytes with error correction.
Optical storage devices are slower than
magnetic disks, but they are well suited for making master copies
of software or for multimedia (audio and video) files that are read
sequentially. There are also writable and rewritable CD-ROMs (CD-R and
CD-RW) and DVD-ROMs (DVD-R and DVD-RW) that can be used like
magnetic tapes for inexpensive archiving and sharing of data.
With the
introduction of affordable solid-state drives (SSDs) in the early 21st century,
consumers received even more memory in a smaller package. SSDs are
advantageous over hard disk drives in that they have no moving parts,
making them both quieter and more durable. However, they are not as
widely available as hard drives. The first consumer version of the modern
flash SSD was created in 1995 (a commercial version had been introduced
in 1991), but this and similar versions, ranging to tens of thousands of
dollars, were still far more expensive than was reasonable for the average
consumer. In 2003, cheaper SSDs, with capacities up to 512 megabytes,
were introduced. Capacity increased in the following years, with consumer
models usually ranging from 250 to 500 gigabytes of available memory.
However, models may contain as many as 100 terabytes of storage, though
such models often sell for exorbitant prices.
Input devices
A plethora of devices falls into the category of input peripheral. Typical
examples include keyboards, touchpads, mice, trackballs, joysticks, digital
tablets, and scanners.
Touchpads, or trackpads, are pointing devices usually built into laptops and
netbooks in front of the keyboard, though there are versions that connect to
a desktop computer. A touchpad usually features a flat rectangular surface
that a user can slide a finger across in order to move a cursor, with both
“left-click” and “right-click” options. Such options either appear as physical
buttons beneath the touchpad or can be activated on the lower part of the
touchpad. Touchpads are useful on portable computers and on nonflat surfaces, where a mouse's movement may be hindered.
Mechanical mice and trackballs
operate alike, using a rubber or rubber-coated ball that turns two shafts
connected to a pair of encoders that measure the horizontal and vertical
components of a user’s movement, which are then translated into cursor
movement on a computer monitor. Optical mice employ a light beam and
camera lens to translate motion of the mouse into cursor
movement.
Pointing sticks, which were popular on many laptop systems
prior to the invention of the trackpad, employ a technique that uses a
pressure-sensitive resistor. As a user applies pressure to the stick, the
resistor increases the flow of electricity, thereby signaling that movement
has taken place. Most joysticks operate in a similar manner. Though they
are not as popular in the 21st century, companies such as Lenovo still have
pointing sticks built into some of their laptop models.
Digital tablets and
touchpads are similar in purpose and functionality. In both cases, input is
taken from a flat pad that contains electrical sensors that detect the
presence of either a special tablet pen or a user’s finger,
respectively.
A scanner is akin to a photocopier. A light
source illuminates the object to be scanned, and the varying amounts of
reflected light are captured and measured by an analog-to-digital converter
attached to light-sensitive diodes. The diodes generate a pattern of binary
digits that are stored in the computer as a graphical image.
In the 20th century such peripherals typically communicated with computers and transferred data over physical wires. However, in
the early 21st century, Bluetooth technology, which uses radio frequencies
to enable device communication, gained prominence. The technology first
appeared in mobile phones and desktop computers in 2000 and spread to
printers and laptops the following year. By the middle of the decade,
Bluetooth headsets for mobile phones had become nearly ubiquitous.
Output devices
Printers are a common example of output devices. New
multifunction peripherals that integrate printing, scanning, and copying into
a single device are also popular. Computer monitors are sometimes treated
as peripherals. High-fidelity sound systems are another example of output
devices often classified as computer peripherals. Manufacturers have
announced devices that provide tactile feedback to the user—“force
feedback” joysticks, for example. This highlights the complexity of
classifying peripherals—a joystick with force feedback is truly both an input
and an output peripheral. Early printers often used a process known
as impact printing, in which a small number of pins were driven into a
desired pattern by an electromagnetic printhead. As each pin was driven
forward, it struck an inked ribbon and transferred a single dot the size of
the pinhead to the paper. Multiple dots combined into a matrix to form
characters and graphics, hence the name dot matrix. Another early
print technology, daisy-wheel printers, made impressions of whole
characters with a single blow of an electromagnetic printhead, similar to an
electric typewriter.
Laser printers have replaced such printers in most
commercial settings. Laser printers employ a focused beam of light to etch
patterns of positively charged particles on the surface of a cylindrical drum
made of negatively charged organic, photosensitive material. As the drum
rotates, negatively charged toner particles adhere to the patterns etched by
the laser and are transferred to the paper. Another, less expensive printing
technology developed for the home and small businesses is inkjet printing.
The majority of inkjet printers operate by ejecting extremely tiny droplets of
ink to form characters in a matrix of dots—much like dot matrix
printers.
Computer display devices have been in use almost as long as
computers themselves. Early computer displays employed the
same cathode-ray tubes (CRTs) used in television and radar systems. The
fundamental principle behind CRT displays is the emission of a controlled
stream of electrons that strike light-emitting phosphors coating the inside of
the screen. The screen itself is divided into multiple scan lines, each of
which contains a number of pixels—the rough equivalent of dots in a
dot matrix printer. The resolution of a monitor is determined by its pixel
size. More recent liquid crystal displays (LCDs) rely on liquid crystal cells
that realign incoming polarized light. The realigned beams pass through a
filter that permits only those beams with a particular alignment to pass. By
controlling the liquid crystal cells with electrical charges, various colors or
shades are made to appear on the screen.
Networking
Computer communication may occur through wires, optical fibers, or radio
transmissions. Wired networks may use shielded coaxial cable, similar to
the wire connecting a television to a videocassette recorder or an antenna.
They can also use simpler unshielded wiring with modular connectors
similar to telephone wires. Optical fibers can carry more signals than wires;
they are often used for linking buildings on a college campus or corporate
site and increasingly for longer distances as telephone companies update
their networks. Microwave radio also carries computer network signals,
generally as part of long-distance telephone systems. Low-power microwave
radio is becoming common for wireless networks within a building.
Computer software
Software denotes programs that run on computers. John Tukey, a
statistician at Princeton University and Bell Laboratories, is generally
credited with introducing the term in 1958 (as well as coining the
word bit for binary digit). Initially software referred primarily to what is
now called system software—an operating system and the utility programs
that come with it, such as those to compile (translate) programs
into machine code and load them for execution. This software came with a
computer when it was bought or leased. In 1969 IBM decided to “unbundle”
its software and sell it separately, and software soon became a major
income source for manufacturers as well as for dedicated software firms.
Local area networks
Local area networks (LANs) connect computers within a building or small
group of buildings. A LAN may be configured as (1) a bus, a main channel to
which nodes or secondary channels are connected in a branching structure,
(2) a ring, in which each computer is connected to two neighboring
computers to form a closed circuit, or (3) a star, in which each computer is
linked directly to a central computer and only indirectly to one another.
Each of these has advantages, though the bus configuration has become the
most common.
Even if only two computers are connected, they must follow rules,
or protocols, to communicate. For example, one might signal “ready to
send” and wait for the other to signal “ready to receive.” When many
computers share a network, the protocol might include a rule “talk only
when it is your turn” or “do not talk when anyone else is talking.” Protocols
must also be designed to handle network errors.
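A "ready to send" / "ready to receive" exchange of the kind just described can be sketched in a few lines of Python; the message names and the shared list standing in for the wire are purely illustrative.

# A toy handshake between two endpoints over a shared channel.
def sender(channel):
    if "READY_TO_RECEIVE" in channel:    # the other side has signaled readiness
        channel.append("DATA: hello")
    else:
        channel.append("READY_TO_SEND")  # announce readiness and wait

def receiver(channel):
    if "READY_TO_SEND" in channel:
        channel.append("READY_TO_RECEIVE")

channel = []          # stands in for the wire between the two computers
sender(channel)       # announces readiness; no receiver has answered yet
receiver(channel)     # the receiver answers
sender(channel)       # now the data can be sent
print(channel[-1])    # DATA: hello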
The most common LAN design since the mid-1970s has been the bus-
connected Ethernet, originally developed at Xerox PARC. Every computer or
other device on an Ethernet has a unique 48-bit address. Any computer that
wants to transmit listens for a carrier signal that indicates that
a transmission is under way. If it detects none, it starts transmitting,
sending the address of the recipient at the start of its transmission. Every
system on the network receives each message but ignores those not
addressed to it. While a system is transmitting, it also listens, and if it
detects a simultaneous transmission, it stops, waits for a random time, and
retries. The random time delay before retrying reduces the probability that
they will collide again. This scheme is known as carrier sense multiple
access with collision detection (CSMA/CD). It works very well until a
network is moderately heavily loaded, and then it degrades as collisions
become more frequent.
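The CSMA/CD procedure can be outlined in runnable form as follows. The Medium class, the retry limit, and the backoff formula are all invented for illustration; real Ethernet hardware performs these steps in the transceiver.

import random

# A toy model of one station's carrier-sense / collision-detect logic.
class Medium:
    def __init__(self):
        self.transmitters = set()

    def carrier_sensed(self, station):
        # A carrier is sensed if any other station is already transmitting.
        return any(s != station for s in self.transmitters)

    def begin(self, station):
        self.transmitters.add(station)

    def collision(self):
        return len(self.transmitters) > 1

    def end(self, station):
        self.transmitters.discard(station)

def send_frame(station, medium, attempts=10):
    for attempt in range(attempts):
        if medium.carrier_sensed(station):
            continue                          # a transmission is under way: keep listening
        medium.begin(station)                 # medium looks idle: start transmitting
        if medium.collision():                # still listening while transmitting
            medium.end(station)
            backoff = random.uniform(0, 2 ** attempt)  # the delay a real station would wait before retrying
            continue
        medium.end(station)
        return True                           # frame sent without a collision
    return False

medium = Medium()
print(send_frame("A", medium))                # True: the medium was idle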
The first Ethernet had a capacity of about 2 megabits per second, and today
10- and 100-megabit-per-second Ethernet is common, with gigabit-per-
second Ethernet also in use. Ethernet transceivers (transmitter-receivers)
for PCs are inexpensive and easily installed.
History of computing
A computer might be described with deceptive simplicity as “an apparatus
that performs routine calculations automatically.” Such a definition would
owe its deceptiveness to a naive and narrow view of calculation as a strictly
mathematical process. In fact, calculation underlies many activities that are
not normally thought of as mathematical. Walking across a room, for
instance, requires many complex, albeit subconscious, calculations.
Computers, too, have proved capable of solving a vast array of problems,
from balancing a checkbook to even—in the form of guidance systems for
robots—walking across a room.
Before the true power of computing could be realized, therefore, the naive
view of calculation had to be overcome. The inventors who labored to bring
the computer into the world had to learn that the thing they were inventing
was not just a number cruncher, not merely a calculator. For example, they
had to learn that it was not necessary to invent a new computer for every
new calculation and that a computer could be designed to solve numerous
problems, even problems not yet imagined when the computer was built.
They also had to learn how to tell such a general problem-solving computer
what problem to solve. In other words, they had to invent programming.
They had to solve all the heady problems of developing such a device,
of implementing the design, of actually building the thing. The history of the
solving of these problems is the history of the computer. That history is
covered in this section, and links are provided to entries on many of the
individuals and companies mentioned. In addition, see the articles computer
science and supercomputer.
The full Difference Engine, designed to be room-size, was never built, at least not by
Babbage. Although he sporadically received several government grants—
governments changed, funding often ran out, and he had to personally bear
some of the financial costs—he was working at or near the tolerances of the
construction methods of the day, and he ran into numerous construction
difficulties. All design and construction ceased in 1833, when Joseph
Clement, the machinist responsible for actually building the machine,
refused to continue unless he was prepaid. (The completed portion of the
Difference Engine is on permanent exhibition at the Science Museum in
London.)
As with the Difference Engine, the Analytical Engine project was far more complex than
anything theretofore built. The store was to be large enough to hold 1,000
50-digit numbers; this was larger than the storage capacity of any computer
built before 1960. The machine was to be steam-driven and run by one
attendant. The printing capability was also ambitious, as it had been for the
Difference Engine: Babbage wanted to automate the process as much as
possible, right up to producing printed tables of numbers.
The reader was another new feature of the Analytical Engine. Data
(numbers) were to be entered on punched cards, using the card-
reading technology of the Jacquard loom. Instructions were also to be
entered on cards, another idea taken directly from Jacquard. The use of
instruction cards would make it a programmable device and far more
flexible than any machine then in existence. Another element of
programmability was to be its ability to execute instructions in other
than sequential order. It was to have a kind of decision-making ability in its
conditional control transfer, also known as conditional branching, whereby
it would be able to jump to a different instruction depending on the value of
some data. This extremely powerful feature was missing in many of the
early computers of the 20th century.
All the same, Babbage’s Analytical Engine was something new under the
sun. Its most revolutionary feature was the ability to change its operation by
changing the instructions on punched cards. Until this breakthrough, all the
mechanical aids to calculation were merely calculators or, like the
Difference Engine, glorified calculators. The Analytical Engine, although not
actually completed, was the first machine that deserved to be called a
computer.
All these various networks were able to communicate with one another
because of two shared protocols: the Transmission-Control Protocol (TCP),
which split large files into numerous small files, or packets, assigned
sequencing and address information to each packet, and reassembled the
packets into the original file after arrival at their final destination; and the
Internet Protocol (IP), a hierarchical addressing system that controlled the
routing of packets (which might take widely divergent paths before being
reassembled).
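The splitting, sequencing, and reassembly role described for TCP can be illustrated with a toy Python sketch; the dictionary-based packet format and the eight-character payload size are invented and bear no relation to the real protocol headers.

import random

# Toy illustration of splitting a message into sequenced packets and
# reassembling it after the packets arrive out of order.
def split_into_packets(data, payload_size=8):
    return [
        {"seq": i, "payload": data[i:i + payload_size]}
        for i in range(0, len(data), payload_size)
    ]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)

packets = split_into_packets("Hello from the early Internet!")
random.shuffle(packets)        # packets may take widely divergent paths
print(reassemble(packets))     # Hello from the early Internet!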
What it took to turn a network of computers into something more was the
idea of the hyperlink: computer code inside a document that would cause
related documents to be fetched and displayed. The concept of hyperlinking
was anticipated from the early to the middle decades of the 20th century—
in Belgium by Paul Otlet and in the United States by Ted Nelson, Vannevar
Bush, and, to some extent, Douglas Engelbart. Their yearning for some kind
of system to link knowledge together, though, did not materialize until
1990, when Tim Berners-Lee of England and others at CERN (European
Organization for Nuclear Research) developed a protocol based
on hypertext to make information distribution easier. In 1991 this
culminated in the creation of the World Wide Web and its system of links
among user-created pages. A team of programmers at the U.S. National
Center for Supercomputing Applications, Urbana, Illinois, developed
a program called a browser that made it easier to use the World Wide Web,
and a spin-off company named Netscape Communications Corp. was
founded to commercialize that technology.
E-commerce
Early enthusiasm over the potential profits from e-commerce led to massive
cash investments and a “dot-com” boom-and-bust cycle in the 1990s. By the
end of the decade, half of these businesses had failed, though certain
successful categories of online business had been demonstrated, and most
conventional businesses had established an online presence. Search and
online advertising proved to be the most successful new business areas.
Some online businesses created niches that did not exist before. eBay,
founded in 1995 as an online auction and shopping website, gave members
the ability to set up their own stores online. Although sometimes criticized
for not creating any new wealth or products, eBay made it possible for
members to run small businesses from their homes without a large initial
investment. In 2003 Linden Research, Inc., launched Second Life, an
Internet-based virtual reality world in which participants (called
“residents”) have cartoonlike avatars that move through a
graphical environment. Residents socialize, participate in group activities,
and create and trade virtual products and virtual or real services. Second
Life has its own currency, the Linden Dollar, which can be converted to U.S.
dollars at several Internet currency exchange markets.
Social networking
Social networking services emerged as a significant online phenomenon in
the 2000s. These services used software to facilitate online communities,
where members with shared interests swapped files, photographs, videos,
and music, sent messages and chatted, set up blogs (Web diaries) and
discussion groups, and shared opinions. Early social networking services
included Classmates.com, which connected former schoolmates, and Yahoo!
360°, Myspace, and SixDegrees, which built networks of connections via
friends of friends. By 2018 the leading social networking services
included Facebook, Twitter, Instagram, LinkedIn, and Snapchat. LinkedIn
became an effective tool for business staff recruiting. Businesses began
exploring how to exploit these networks, drawing on social networking
research and theory which suggested that finding key “influential” members
of existing networks of individuals could give access to and credibility with
the whole network.
Blogs became a category unto themselves, and some blogs had thousands of
participants. Trust became a commodity, as sharing opinions or ratings
proved to be a key to effective blog discussions, as well as an important
component of many e-commerce websites. Daily Kos, one of the largest of
the political blogs, made good use of ratings, with high-rated members
gaining more power to rate other members’ comments; under such systems,
the idea is that the best entries will survive and the worst will quickly
disappear. The vendor rating system in eBay similarly allowed for a kind of
self-policing that was intended to weed out unethical or otherwise
undesirable vendors.