Computer

Device for processing, storing, and displaying information.

Computer once meant a person who did computations, but now the term
almost universally refers to automated electronic machinery. The first
section of this article focuses on modern digital electronic computers and
their design, constituent parts, and applications. The second section covers
the history of computing. For details on computer architecture, software,
and theory, see computer science.
Computing basics
The first computers were used primarily for numerical calculations.
However, as any information can be numerically encoded, people soon
realized that computers are capable of general-purpose information
processing. Their capacity to handle large amounts of data has extended the
range and accuracy of weather forecasting. Their speed has allowed them
to make decisions about routing telephone connections through a network
and to control mechanical systems such as automobiles, nuclear reactors,
and robotic surgical tools. They are also cheap enough to be embedded in
everyday appliances and to make clothes dryers and rice cookers “smart.”
Computers have allowed us to pose and answer questions that were difficult
to pursue in the past. These questions might be about DNA sequences in
genes, patterns of activity in a consumer market, or all the uses of a word in
texts that have been stored in a database. Increasingly, computers can also
learn and adapt as they operate by using processes such as machine
learning.

Computers also have limitations, some of which are theoretical.
For example, there are undecidable propositions whose truth cannot be
determined within a given set of rules, such as the logical structure of a
computer. Because no universal algorithmic method can exist to identify
such propositions, a computer asked to determine the truth of such a
proposition will (unless forcibly interrupted) continue indefinitely—a
condition known as the “halting problem.” (See Turing machine.) Other
limitations reflect current technology. For example, although computers
have progressed greatly in terms of processing data and using artificial
intelligence algorithms, they are limited by their incapacity to think in a
more holistic fashion. Computers may imitate humans—quite effectively,
even—but imitation may not replace the human element in social
interaction. Ethical concerns also limit computers, because computers rely
on data, rather than a moral compass or human conscience, to make
decisions.

Analog computers
Analog computers use continuous physical magnitudes to represent
quantitative information. At first they represented quantities with
mechanical components (see differential analyzer and integrator), but
after World War II voltages were used; by the 1960s digital computers had
largely replaced them. Nonetheless, analog computers, and some hybrid
digital-analog systems, continued in use through the 1960s in tasks such as
aircraft and spaceflight simulation.

One advantage of analog computation is
that it may be relatively simple to design and build an analog computer to
solve a single problem. Another advantage is that analog computers can
frequently represent and solve a problem in “real time”; that is, the
computation proceeds at the same rate as the system being modeled by it.
Their main disadvantages are that analog representations are limited in
precision—typically a few decimal places but fewer in complex mechanisms
—and general-purpose devices are expensive and not easily programmed.

Digital computers
In contrast to analog computers, digital computers represent information
in discrete form, generally as sequences of 0s and 1s (binary digits, or bits).
The modern era of digital computers began in the late 1930s and early
1940s in the United States, Britain, and Germany. The first devices used
switches operated by electromagnets (relays). Their programs were stored
on punched paper tape or cards, and they had limited internal data storage.
For historical developments, see the section Invention of the modern
computer.
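
As a brief illustration of discrete representation, the short Python sketch below shows the same idea in modern terms: a number or a character is stored as a pattern of bits. The specific values are arbitrary examples.

```python
# Discrete (binary) representation: the same kind of bit pattern can
# stand for a number or, under a different convention, a character.

number = 1940
print(format(number, "b"))        # '11110010100' - the year as a string of bits

text = "IBM"
print([format(ord(ch), "08b") for ch in text])
# ['01001001', '01000010', '01001101'] - each character as an 8-bit pattern
```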

Mainframe computer
During the 1950s and ’60s, Unisys (maker of
the UNIVAC computer), International Business Machines
Corporation (IBM), and other companies made large, expensive computers
of increasing power. They were used by major corporations and government
research laboratories, typically as the sole computer in the organization. In
1959 the IBM 1401 computer rented for $8,000 per month (early IBM
machines were almost always leased rather than sold), and in 1964 the
largest IBM S/360 computer cost several million dollars.

These computers came to be called mainframes, though the term did not
become common until smaller computers were built. Mainframe computers
were characterized by having (for their time) large storage capabilities, fast
components, and powerful computational abilities. They were highly
reliable, and, because they frequently served vital needs in an organization,
they were sometimes designed with redundant components that let them
survive partial failures. Because they were complex systems, they were
operated by a staff of systems programmers, who alone had access to the
computer. Other users submitted “batch jobs” to be run one at a time on the
mainframe.

Such systems remain important today, though they are no longer the sole,
or even primary, central computing resource of an organization, which will
typically have hundreds or thousands of personal computers (PCs).
Mainframes now provide high-capacity data storage for Internet servers, or,
through time-sharing techniques, they allow hundreds or thousands of users
to run programs simultaneously. Because of their current roles, these
computers are now called servers rather than mainframes.

Supercomputer
The most powerful computers of the day have typically been
called supercomputers. They have historically been very expensive and their
use limited to high-priority computations for government-sponsored
research, such as nuclear simulations and weather modeling. Today many of
the computational techniques of early supercomputers are in common use
in PCs. On the other hand, the design of costly, special-purpose processors
for supercomputers has been replaced by the use of large arrays of
commodity processors (from several dozen to over 8,000) operating in
parallel over a high-speed communications network.

Minicomputer
Although minicomputers date to the early 1950s, the term was introduced
in the mid-1960s. Relatively small and inexpensive, minicomputers were
typically used in a single department of an organization and often dedicated
to one task or shared by a small group. Minicomputers generally had limited
computational power, but they had excellent compatibility with various
laboratory and industrial devices for collecting and inputting data.

One of the most important manufacturers of minicomputers was Digital
Equipment Corporation (DEC) with its Programmed Data Processor (PDP).
In 1960 DEC’s PDP-1 sold for $120,000. Five years later its PDP-8 cost
$18,000 and became the first widely used minicomputer, with more than
50,000 sold. The DEC PDP-11, introduced in 1970, came in a variety of
models, small and cheap enough to control a single manufacturing process
and large enough for shared use in university computer centers; more than
650,000 were sold. However, the microcomputer overtook this market in
the 1980s.

Microcomputer
A microcomputer is a small computer built around
a microprocessor integrated circuit, or chip. Whereas the early
minicomputers replaced vacuum tubes with discrete transistors,
microcomputers (and later minicomputers as well) used microprocessors
that integrated thousands or millions of transistors on a single chip. In 1971
the Intel Corporation produced the first microprocessor, the Intel 4004,
which was powerful enough to function as a computer although it was
produced for use in a Japanese-made calculator. In 1975 the first personal
computer, the Altair, used a successor chip, the Intel 8080 microprocessor.
Like minicomputers, early microcomputers had relatively limited storage
and data-handling capabilities, but these have grown as
storage technology has improved alongside processing power.

In the 1980s it was common to distinguish between microprocessor-based
scientific workstations and personal computers. The former used the most
powerful microprocessors available and had high-performance color
graphics capabilities costing thousands of dollars. They were used by
scientists for computation and data visualization and by engineers
for computer-aided engineering. Today the distinction
between workstation and PC has virtually vanished, with PCs having the
power and display capability of workstations.

Laptop computer
The first true laptop computer marketed to consumers was the Osborne 1,
which became available in April 1981. A laptop usually features a
“clamshell” design, with a screen located on the upper lid and a keyboard
on the lower lid. Such computers are powered by a battery, which can be
recharged with alternating current (AC) power chargers. The 1991
PowerBook, created by Apple, was a design milestone, featuring a trackball
for navigation and palm rests; a 1994 model was the first laptop to feature a
touchpad and an Ethernet networking port. The popularity of the laptop
continued to increase in the 1990s, and by the early 2000s laptops were
earning more revenue than desktop models. They remain the most popular
computers on the market and have outsold desktop computers and tablets
since 2018.

Computer hardware
The physical elements of a computer, its hardware, are generally divided
into the central processing unit (CPU), main memory (or random-access
memory, RAM), and peripherals. The last class encompasses all sorts of
input and output (I/O) devices: keyboard, display monitor, printer, disk
drives, network connections, scanners, and more.

The CPU and RAM
are integrated circuits (ICs)—small silicon wafers, or chips, that contain
thousands or millions of transistors that function as electrical switches. In
1965 Gordon Moore, one of the founders of Intel, stated what has become
known as Moore’s law: the number of transistors on a chip doubles about
every 18 months. Moore suggested that financial constraints would soon
cause his law to break down, but it has been remarkably accurate for far
longer than he first envisioned. Advances in the design of chips and
transistors, such as the creation of three-dimensional chips (rather than
their previously flat design), have helped to bolster their capabilities,
though there are limits to this process as well. While Jensen Huang,
cofounder and CEO of NVIDIA, claims that the law has largely run its
course, Intel’s CEO, Pat Gelsinger, has argued otherwise. Companies such
as IBM continue to experiment with materials other than silicon for chip
design. The continued viability of Moore’s law depends on further major
advances in chip technology and on breakthroughs that carry the industry
beyond silicon as its standard material.
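
As a rough illustration of the doubling rule stated above, the Python sketch below projects transistor counts from the Intel 4004's approximately 2,300 transistors in 1971. The starting figure and the smooth 18-month doubling are simplifying assumptions; real chips deviate from this idealized curve.

```python
# Project Moore's law as stated above: a doubling roughly every 18 months.

def projected_transistors(year, base_year=1971, base_count=2_300, doubling_months=18):
    months = (year - base_year) * 12
    return base_count * 2 ** (months / doubling_months)

for year in (1971, 1980, 1990, 2000, 2010):
    print(year, f"{projected_transistors(year):,.0f}")
```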

Central processing unit
The CPU provides the circuits that implement the computer’s instruction set
—its machine language. It is composed of an arithmetic-logic unit (ALU) and
control circuits. The ALU carries out basic arithmetic and logic operations,
and the control section determines the sequence of operations, including
branch instructions that transfer control from one part of a program to
another. Although the main memory was once considered part of the CPU,
today it is regarded as separate. The boundaries shift, however, and CPU
chips now also contain some high-speed cache memory where data and
instructions are temporarily stored for fast access.

The ALU has circuits that add, subtract, multiply, and divide two
arithmetic values, as well as circuits for logic operations such as AND and
OR (where a 1 is interpreted as true and a 0 as false, so that, for instance, 1
AND 0 = 0; see Boolean algebra). The ALU has from several to more than a
hundred registers that temporarily hold results of its computations for
further arithmetic operations or for transfer to main memory.
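
The effect of these ALU operations can be illustrated with ordinary bitwise arithmetic. In the Python sketch below, the language's operators stand in for the ALU circuits and a dictionary stands in for its registers.

```python
# The ALU operations named above, shown on two small binary values.

a, b = 0b1100, 0b1010          # two 4-bit values, 12 and 10

registers = {
    "sum":        a + b,       # arithmetic: 12 + 10 = 22
    "difference": a - b,       # 12 - 10 = 2
    "and":        a & b,       # 1100 AND 1010 = 1000 (a 1 only where both bits are 1)
    "or":         a | b,       # 1100 OR  1010 = 1110 (a 1 where either bit is 1)
}

for name, value in registers.items():
    print(f"{name:<10} {value:>5}  {value:04b}")
```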

The circuits in the CPU control section provide branch instructions, which
make elementary decisions about what instruction to execute next. For
example, a branch instruction might be “If the result of the last ALU
operation is negative, jump to location A in the program; otherwise,
continue with the following instruction.” Such instructions allow “if-then-
else” decisions in a program and execution of a sequence of instructions,
such as a “while-loop” that repeatedly does some set of instructions while
some condition is met. A related instruction is the subroutine call, which
transfers execution to a subprogram and then, after the subprogram
finishes, returns to the main program where it left off.
In a stored-program computer, programs and data in memory are
indistinguishable. Both are bit patterns—strings of 0s and 1s—that may be
interpreted either as data or as program instructions, and both are fetched
from memory by the CPU. The CPU has a program counter that holds the
memory address (location) of the next instruction to be executed. The basic
operation of the CPU is the “fetch-decode-execute” cycle:

•	Fetch the instruction from the address held in the program counter, and
store it in a register.
•	Decode the instruction. Parts of it specify the operation to be done, and
parts specify the data on which it is to operate. These may be in CPU
registers or in memory locations. If it is a branch instruction, part of it
will contain the memory address of the next instruction to execute once the
branch condition is satisfied.
•	Fetch the operands, if any.
•	Execute the operation if it is an ALU operation.
•	Store the result (in a register or in memory), if there is one.
•	Update the program counter to hold the next instruction location, which is
either the next memory location or the address specified by a branch
instruction.

At the end of these steps the cycle is ready to repeat, and it continues until
a special halt instruction stops execution. Steps of this cycle and all internal
CPU operations are regulated by a clock that oscillates at a high frequency
(now typically measured in gigahertz, or billions of cycles per second).
Another factor that affects performance is the “word” size—the number of
bits that are fetched at once from memory and on which CPU instructions
operate. Digital words now consist of 32 or 64 bits, though sizes from 8 to
128 bits are seen.
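
A minimal software sketch can make the cycle concrete. The Python interpreter below models a program counter, a handful of registers, and a tiny invented instruction set (LOADI, ADD, ADDI, JUMPPOS, HALT); it illustrates the fetch-decode-execute idea and conditional branching, not any real machine language. (Real machines provide several branch conditions; the text above uses a negative result as its example, while this sketch branches on a positive value.)

```python
# A toy fetch-decode-execute loop over an invented instruction set.

def run(program):
    regs = {"R0": 0, "R1": 0}
    pc = 0                              # program counter: address of the next instruction
    while True:
        op, *args = program[pc]         # fetch the instruction the counter points at
        pc += 1                         # by default, continue with the next location
        if op == "LOADI":               # decode + execute: load an immediate value
            regs[args[0]] = args[1]
        elif op == "ADD":               # dest = dest + src  (an ALU operation)
            regs[args[0]] += regs[args[1]]
        elif op == "ADDI":              # dest = dest + immediate
            regs[args[0]] += args[1]
        elif op == "JUMPPOS":           # branch: overwrite the program counter
            if regs[args[0]] > 0:
                pc = args[1]
        elif op == "HALT":              # the special halt instruction stops execution
            return regs

program = [
    ("LOADI", "R0", 0),     # 0: sum = 0
    ("LOADI", "R1", 5),     # 1: counter = 5
    ("ADD", "R0", "R1"),    # 2: sum = sum + counter   <- loop body
    ("ADDI", "R1", -1),     # 3: counter = counter - 1
    ("JUMPPOS", "R1", 2),   # 4: while counter > 0, branch back to instruction 2
    ("HALT",),              # 5: stop; R0 now holds 1 + 2 + 3 + 4 + 5
]

print(run(program)["R0"])   # -> 15
```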

Processing instructions one at a time, or serially, often creates a bottleneck
because many program instructions may be ready and waiting for
execution. Since the early 1980s, CPU design has followed a style originally
called reduced-instruction-set computing (RISC). This design minimizes the
transfer of data between memory and CPU (all ALU operations are done
only on data in CPU registers) and calls for simple instructions that can
execute very quickly. As the number of transistors on a chip has grown, the
RISC design requires a relatively small portion of the CPU chip to be
devoted to the basic instruction set. The remainder of the chip can then be
used to speed CPU operations by providing circuits that let several
instructions execute simultaneously, or in parallel.

There are two major kinds of instruction-level parallelism (ILP) in the CPU,
both first used in early supercomputers. One is the pipeline, which allows
the fetch-decode-execute cycle to have several instructions under way at
once. While one instruction is being executed, another can obtain its
operands, a third can be decoded, and a fourth can be fetched from
memory. If each of these operations requires the same time, a new
instruction can enter the pipeline at each phase and (for example) five
instructions can be completed in the time that it would take to complete one
without a pipeline. The other sort of ILP is to have multiple execution units
in the CPU—duplicate arithmetic circuits, in particular, as well as
specialized circuits for graphics instructions or for floating-point
calculations (arithmetic operations involving noninteger numbers, such as
3.27). With this “superscalar” design, several instructions can execute at
once.
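
The speedup from pipelining can be estimated with simple arithmetic. The sketch below assumes a five-stage pipeline in which every stage takes one clock cycle and no branch or data dependency ever stalls the flow, which is an idealization.

```python
# Idealized timing for a five-stage pipeline (fetch, decode, operand fetch,
# execute, store), assuming one cycle per stage and no stalls.

def cycles_without_pipeline(n_instructions, stages=5):
    return n_instructions * stages          # each instruction runs start to finish alone

def cycles_with_pipeline(n_instructions, stages=5):
    # The first instruction takes `stages` cycles; after that, one
    # instruction completes every cycle.
    return stages + (n_instructions - 1)

n = 1000
serial = cycles_without_pipeline(n)
pipelined = cycles_with_pipeline(n)
print(serial, pipelined, round(serial / pipelined, 2))   # 5000 1004 ~5x speedup
```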

Both forms of ILP face complications. A branch instruction might render
preloaded instructions in the pipeline useless if they entered it before the
branch jumped to a new part of the program. Also, superscalar execution
must determine whether an arithmetic operation depends on the result of
another operation, since they cannot be executed simultaneously. CPUs now
have additional circuits to predict whether a branch will be taken and to
analyze instructional dependencies. These have become highly
sophisticated and can frequently rearrange instructions to execute more of
them in parallel.

Main memory
The earliest forms of computer main memory were mercury delay lines,
which were tubes of mercury that stored data as ultrasonic waves,
and cathode-ray tubes, which stored data as charges on the tubes’ screens.
The magnetic drum, invented about 1948, used an iron oxide coating on a
rotating drum to store data and programs as magnetic patterns.

In a binary computer any bistable device (something that can be placed in
either of two states) can represent the two possible bit values of 0 and 1
and can thus serve as computer memory. Magnetic-core memory, the first
relatively cheap RAM device, appeared in 1952. It was composed of tiny,
doughnut-shaped ferrite magnets threaded on the intersection points of a
two-dimensional wire grid. These wires carried currents to change the
direction of each core’s magnetization, while a third wire threaded through
the doughnut detected its magnetic orientation.

The first integrated circuit (IC) memory chip appeared in 1971. IC memory
stores a bit in a transistor-capacitor combination. The capacitor holds a
charge to represent a 1 and no charge for a 0; the transistor switches it
between these two states. Because a capacitor charge gradually decays, IC
memory is dynamic RAM (DRAM), which must have its stored values
refreshed periodically (every 20 milliseconds or so). There is also static
RAM (SRAM), which does not have to be refreshed. Although faster than
DRAM, SRAM uses more transistors and is thus more costly; it is used
primarily for CPU internal registers and cache memory.

In addition to main memory, computers generally have special video
memory (VRAM) to hold graphical images, called bitmaps, for the computer
display. This memory is often dual-ported—a new image can be stored in it
at the same time that its current data is being read and displayed.

It takes time to specify an address in a memory chip, and, since memory is
slower than a CPU, there is an advantage to memory that can transfer a
series of words rapidly once the first address is specified. One such design
is known as synchronous DRAM (SDRAM), which became widely used by 2001.

Nonetheless, data transfer through the “bus”—the set of wires that
connect the CPU to memory and peripheral devices—is a bottleneck. For
that reason, CPU chips now contain cache memory—a small amount of fast
SRAM. The cache holds copies of data from blocks of main memory. A well-
designed cache allows up to 85–90 percent of memory references to be
done from it in typical programs, giving a severalfold speedup in data
access.

The time between two memory reads or writes (cycle time) was
about 17 microseconds (millionths of a second) for early core memory and
about 1 microsecond for core in the early 1970s. The first DRAM had a cycle
time of about half a microsecond, or 500 nanoseconds (billionths of a
second), and today it is 20 nanoseconds or less. An equally important
measure is the cost per bit of memory. The first DRAM stored 128 bytes
(1 byte = 8 bits) and cost about $10, or $80,000 per megabyte (millions of
bytes). In 2001 DRAM could be purchased for less than $0.25 per megabyte.
This vast decline in cost made possible graphical user interfaces (GUIs), the
display fonts that word processors use, and the manipulation and
visualization of large masses of data by scientific computers.
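
The benefit of a cache can be estimated from the hit rates quoted above. In the sketch below, the 2-nanosecond cache latency and 20-nanosecond DRAM latency are illustrative assumptions rather than measured values; the final line simply checks the cost-per-megabyte arithmetic given for the first DRAM.

```python
# Average memory access time for the hit rates quoted in this section,
# using assumed latencies for the cache and for main memory.

def average_access_ns(hit_rate, cache_ns=2, dram_ns=20):
    return hit_rate * cache_ns + (1 - hit_rate) * dram_ns

for hit_rate in (0.85, 0.90):
    print(hit_rate, average_access_ns(hit_rate))   # 4.7 ns and 3.8 ns vs 20 ns uncached

# The cost figure quoted above: 128 bytes for about $10 works out to
# roughly $80,000 per million bytes.
print(round(10 / 128 * 1_000_000))                 # ~78,125 dollars per megabyte
```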

Secondary memory
Secondary memory on a computer is storage for data and programs not in
use at the moment. In addition to punched cards and paper tape, early
computers also used magnetic tape for secondary storage. Tape is cheap,
either on large reels or in small cassettes, but has the disadvantage that it
must be read or written sequentially from one end to the other.

IBM introduced the first magnetic disk, the RAMAC, in 1955; it held 5
megabytes and rented for $3,200 per month. Magnetic disks are platters
coated with iron oxide, like tape and drums. An arm with a tiny wire coil,
the read/write (R/W) head, moves radially over the disk, which is divided
into concentric tracks composed of small arcs, or sectors, of data.
Magnetized regions of the disk generate small currents in the coil as it
passes, thereby allowing it to “read” a sector; similarly, a small current in
the coil will induce a local magnetic change in the disk, thereby “writing” to
a sector. The disk rotates rapidly (up to 15,000 rotations per minute), and
so the R/W head can rapidly reach any sector on the disk.

Early disks had
large removable platters. In the 1970s IBM introduced sealed disks with
fixed platters known as Winchester disks—perhaps because the first ones
had two 30-megabyte platters, suggesting the Winchester 30-30 rifle. Not
only was the sealed disk protected against dirt, the R/W head could also
“fly” on a thin air film, very close to the platter. By putting the head closer
to the platter, the region of oxide film that represented a single bit could be
much smaller, thus increasing storage capacity. This basic technology is
still used.

Refinements have included putting multiple platters—10 or more
—in a single disk drive, with a pair of R/W heads for the two surfaces of
each platter in order to increase storage and data transfer rates. Even
greater gains have resulted from improving control of the radial motion of
the disk arm from track to track, resulting in denser distribution of data on
the disk. By 2002 such densities had reached over 8,000 tracks per cm
(20,000 tracks per inch), and a platter the diameter of a coin could hold
over a gigabyte of data. In 2002 an 80-gigabyte disk cost about $200—only
one ten-millionth of the 1955 cost and representing an annual decline of
nearly 30 percent, similar to the decline in the price of main memory.
Examples of magnetic disks include hard disks and floppy disks.

Optical
storage devices—CD-ROM (compact disc, read-only memory) and DVD-
ROM (digital videodisc, or versatile disc)—appeared in the mid-1980s and
’90s. They both represent bits as tiny pits in plastic, organized in a long
spiral like a phonograph record, written and read with lasers. A CD-ROM
can hold 2 gigabytes of data, but the inclusion of error-correcting codes (to
correct for dust, small defects, and scratches) reduces the usable data to
650 megabytes. DVDs are denser, have smaller pits, and can hold 17
gigabytes with error correction.

Optical storage devices are slower than
magnetic disks, but they are well suited for making master copies
of software or for multimedia (audio and video) files that are read
sequentially. There are also writable and rewritable CD-ROMs (CD-R and
CD-RW) and DVD-ROMs (DVD-R and DVD-RW) that can be used like
magnetic tapes for inexpensive archiving and sharing of data.

With the
introduction of affordable solid-state drives (SSDs) in the early 21st century,
consumers received even more memory in a smaller package. SSDs are
advantageous over hard disk drives in that they have no moving parts,
making them both quieter and more durable. However, they are not as
widely available as hard drives. The first consumer version of the modern
flash SSD was created in 1995 (a commercial version had been introduced
in 1991), but this and similar versions, ranging to tens of thousands of
dollars, were still far more expensive than was reasonable for the average
consumer. In 2003, cheaper SSDs, with capacities up to 512 megabytes,
were introduced. Capacity increased in the following years, with consumer
models usually ranging from 250 to 500 gigabytes of available memory.
However, models may contain as many as 100 terabytes of storage, though
such models often sell for exorbitant prices.
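
The rotation speed quoted above translates directly into access delay, and disk capacity follows from simple multiplication. In the sketch below, the platter, track, and sector counts are invented for illustration rather than taken from any real drive.

```python
# Rotational latency at the speed quoted above, plus an illustrative
# capacity calculation with assumed drive geometry.

rpm = 15_000
seconds_per_rotation = 60 / rpm                        # 0.004 s = 4 ms per rotation
avg_rotational_latency_ms = seconds_per_rotation / 2 * 1000
print(avg_rotational_latency_ms)                       # ~2 ms on average to reach a sector

# Capacity = platters x surfaces per platter x tracks x sectors x bytes per sector.
platters, tracks_per_surface, sectors_per_track, bytes_per_sector = 4, 20_000, 500, 512
capacity_bytes = platters * 2 * tracks_per_surface * sectors_per_track * bytes_per_sector
print(capacity_bytes / 1e9)                            # ~41 GB for these assumed parameters
```
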
Input devices
A plethora of devices falls into the category of input peripheral. Typical
examples include keyboards, touchpads, mice, trackballs, joysticks, digital
tablets, and scanners.

Keyboards contain mechanical or electromechanical switches that change
the flow of current through the keyboard when depressed.
A microprocessor embedded in the keyboard interprets these changes and
sends a signal to the computer. In addition to letter and number keys, most
keyboards also include “function” and “control” keys that modify input or
send special commands to the computer.

Touchpads, or trackpads, are pointing devices usually built into laptops and
netbooks in front of the keyboard, though there are versions that connect to
a desktop computer. A touchpad usually features a flat rectangular surface
that a user can slide a finger across in order to move a cursor, with both
“left-click” and “right-click” options. Such options either appear as physical
buttons beneath the touchpad or can be activated on the lower part of the
touchpad. Touchpads can be useful for portability or nonflat surfaces, where
a mouse’s movement may be hindered.

Mechanical mice and trackballs
operate alike, using a rubber or rubber-coated ball that turns two shafts
connected to a pair of encoders that measure the horizontal and vertical
components of a user’s movement, which are then translated into cursor
movement on a computer monitor. Optical mice employ a light beam and
camera lens to translate motion of the mouse into cursor
movement.

Pointing sticks, which were popular on many laptop systems
prior to the invention of the trackpad, employ a technique that uses a
pressure-sensitive resistor. As a user applies pressure to the stick, the
resistor increases the flow of electricity, thereby signaling that movement
has taken place. Most joysticks operate in a similar manner. Though they
are not as popular in the 21st century, companies such as Lenovo still have
pointing sticks built into some of their laptop models.

Digital tablets and
touchpads are similar in purpose and functionality. In both cases, input is
taken from a flat pad that contains electrical sensors that detect the
presence of either a special tablet pen or a user’s finger,
respectively.

A scanner is akin to a photocopier. A light
source illuminates the object to be scanned, and the varying amounts of
reflected light are captured and measured by an analog-to-digital converter
attached to light-sensitive diodes. The diodes generate a pattern of binary
digits that are stored in the computer as a graphical image.
In the 20th century such peripherals typically communicated with computers
and transferred data over physical wires. However, in
the early 21st century, Bluetooth technology, which uses radio frequencies
to enable device communication, gained prominence. The technology first
appeared in mobile phones and desktop computers in 2000 and spread to
printers and laptops the following year. By the middle of the decade,
Bluetooth headsets for mobile phones had become nearly ubiquitous.

Output devices
Printers are a common example of output devices. New
multifunction peripherals that integrate printing, scanning, and copying into
a single device are also popular. Computer monitors are sometimes treated
as peripherals. High-fidelity sound systems are another example of output
devices often classified as computer peripherals. Manufacturers have
announced devices that provide tactile feedback to the user—“force
feedback” joysticks, for example. This highlights the complexity of
classifying peripherals—a joystick with force feedback is truly both an input
and an output peripheral. Early printers often used a process known
as impact printing, in which a small number of pins were driven into a
desired pattern by an electromagnetic printhead. As each pin was driven
forward, it struck an inked ribbon and transferred a single dot the size of
the pinhead to the paper. Multiple dots combined into a matrix to form
characters and graphics, hence the name dot matrix. Another early
print technology, daisy-wheel printers, made impressions of whole
characters with a single blow of an electromagnetic printhead, similar to an
electric typewriter.

Laser printers have replaced such printers in most
commercial settings. Laser printers employ a focused beam of light to etch
patterns of positively charged particles on the surface of a cylindrical drum
made of negatively charged organic, photosensitive material. As the drum
rotates, negatively charged toner particles adhere to the patterns etched by
the laser and are transferred to the paper. Another, less expensive printing
technology developed for the home and small businesses is inkjet printing.
The majority of inkjet printers operate by ejecting extremely tiny droplets of
ink to form characters in a matrix of dots—much like dot matrix
printers.

Computer display devices have been in use almost as long as
computers themselves. Early computer displays employed the
same cathode-ray tubes (CRTs) used in television and radar systems. The
fundamental principle behind CRT displays is the emission of a controlled
stream of electrons that strike light-emitting phosphors coating the inside of
the screen. The screen itself is divided into multiple scan lines, each of
which contains a number of pixels—the rough equivalent of dots in a
dot matrix printer. The resolution of a monitor is determined by its pixel
size. More recent liquid crystal displays (LCDs) rely on liquid crystal cells
that realign incoming polarized light. The realigned beams pass through a
filter that permits only those beams with a particular alignment to pass. By
controlling the liquid crystal cells with electrical charges, various colors or
shades are made to appear on the screen.

Networking
Computer communication may occur through wires, optical fibers, or radio
transmissions. Wired networks may use shielded coaxial cable, similar to
the wire connecting a television to a videocassette recorder or an antenna.
They can also use simpler unshielded wiring with modular connectors
similar to telephone wires. Optical fibers can carry more signals than wires;
they are often used for linking buildings on a college campus or corporate
site and increasingly for longer distances as telephone companies update
their networks. Microwave radio also carries computer network signals,
generally as part of long-distance telephone systems. Low-power microwave
radio is becoming common for wireless networks within a building.

Wide area networks
Wide area networks (WANs) span cities, countries, and the globe, generally
using telephone lines and satellite links. The Internet connects multiple
WANs; as its name suggests, it is a network of networks. Its success stems
from early support by the U.S. Department of Defense, which developed
its precursor, ARPANET, to let researchers communicate readily and share
computer resources. Its success is also due to its flexible communication
technique. The emergence of the Internet in the 1990s as not only a
communication medium but also one of the principal focuses of computer
use was one of the most significant developments in computing in that era.
For more on the history and technical details of Internet communication
protocols, see Internet.

Computer software
Software denotes programs that run on computers. John Tukey, a
statistician at Princeton University and Bell Laboratories, is generally
credited with introducing the term in 1958 (as well as coining the
word bit for binary digit). Initially software referred primarily to what is
now called system software—an operating system and the utility programs
that come with it, such as those to compile (translate) programs
into machine code and load them for execution. This software came with a
computer when it was bought or leased. In 1969 IBM decided to “unbundle”
its software and sell it separately, and software soon became a major
income source for manufacturers as well as for dedicated software firms.
Local area networks
Local area networks (LANs) connect computers within a building or small
group of buildings. A LAN may be configured as (1) a bus, a main channel to
which nodes or secondary channels are connected in a branching structure,
(2) a ring, in which each computer is connected to two neighboring
computers to form a closed circuit, or (3) a star, in which each computer is
linked directly to a central computer and only indirectly to one another.
Each of these has advantages, though the bus configuration has become the
most common.

Even if only two computers are connected, they must follow rules,
or protocols, to communicate. For example, one might signal “ready to
send” and wait for the other to signal “ready to receive.” When many
computers share a network, the protocol might include a rule “talk only
when it is your turn” or “do not talk when anyone else is talking.” Protocols
must also be designed to handle network errors.
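
A protocol of this kind can be sketched in a few lines. The example below simulates an unreliable link that occasionally loses messages and a sender that repeats the "ready to send"/"ready to receive" handshake until the data is delivered and acknowledged; the message names, loss rate, and retry limit are all invented for illustration.

```python
# A toy handshake-and-acknowledge protocol over a lossy, simulated link.

import random

def deliver(message, loss_rate=0.1):
    """Simulate an unreliable link: return the message, or None if it is lost."""
    return message if random.random() > loss_rate else None

def transfer(data, max_attempts=10):
    for attempt in range(1, max_attempts + 1):
        if deliver("READY_TO_SEND") is None:          # our request was lost
            continue
        if deliver("READY_TO_RECEIVE") is None:       # the peer's reply was lost
            continue
        if deliver(data) is not None and deliver("ACK") is not None:
            return attempt                            # data arrived and was acknowledged
    raise RuntimeError("gave up: the link dropped too many messages")

print("succeeded on attempt", transfer("hello"))
```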

The most common LAN design since the mid-1970s has been the bus-
connected Ethernet, originally developed at Xerox PARC. Every computer or
other device on an Ethernet has a unique 48-bit address. Any computer that
wants to transmit listens for a carrier signal that indicates that
a transmission is under way. If it detects none, it starts transmitting,
sending the address of the recipient at the start of its transmission. Every
system on the network receives each message but ignores those not
addressed to it. While a system is transmitting, it also listens, and if it
detects a simultaneous transmission, it stops, waits for a random time, and
retries. The random time delay before retrying reduces the probability that
they will collide again. This scheme is known as carrier sense multiple
access with collision detection (CSMA/CD). It works very well until a
network is moderately heavily loaded, and then it degrades as collisions
become more frequent.
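
The behavior of CSMA/CD can be imitated with a small slotted simulation. In the sketch below, time is divided into slots and each transmission takes one slot: a lone transmitter succeeds, simultaneous transmitters detect the collision and back off for a random number of slots, and the backoff range is an arbitrary choice rather than the exponential scheme real Ethernet uses.

```python
# A toy slotted simulation of carrier sense multiple access with
# collision detection and random backoff.

import random

def simulate(num_stations=5, max_backoff=8):
    backoff = {station: 0 for station in range(num_stations)}   # slots each station still waits
    finished = set()
    slot = 0
    while len(finished) < num_stations:
        slot += 1
        # Stations whose backoff has expired sense the medium and transmit this slot.
        transmitting = [s for s in backoff if s not in finished and backoff[s] == 0]
        if len(transmitting) == 1:
            finished.add(transmitting[0])                        # a clean, collision-free transmission
        elif len(transmitting) > 1:
            for s in transmitting:                               # every transmitter detects the collision
                backoff[s] = random.randint(1, max_backoff)      # and waits a random number of slots
        for s in backoff:                                        # the waiting stations count down
            if s not in finished and s not in transmitting and backoff[s] > 0:
                backoff[s] -= 1
    return slot

print("all stations transmitted successfully after", simulate(), "slots")
```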

The first Ethernet had a capacity of about 2 megabits per second, and today
10- and 100-megabit-per-second Ethernet is common, with gigabit-per-
second Ethernet also in use. Ethernet transceivers (transmitter-receivers)
for PCs are inexpensive and easily installed.

Wireless Ethernet, known as Wi-Fi, is the most common method of
connecting to the Internet. Such networks commonly use frequencies from
2.4 to 5 gigahertz (GHz) and offer consumers speeds of up to 9.6 gigabits
per second. Early in 2002 another Ethernet-like standard
was released. Known as HomePlug, the first version could transmit data at
about 8 megabits per second through a building’s existing
electrical power infrastructure. A later version could achieve rates of 1
gigabit per second.

History of computing
A computer might be described with deceptive simplicity as “an apparatus
that performs routine calculations automatically.” Such a definition would
owe its deceptiveness to a naive and narrow view of calculation as a strictly
mathematical process. In fact, calculation underlies many activities that are
not normally thought of as mathematical. Walking across a room, for
instance, requires many complex, albeit subconscious, calculations.
Computers, too, have proved capable of solving a vast array of problems,
from balancing a checkbook to even—in the form of guidance systems for
robots—walking across a room.

Before the true power of computing could be realized, therefore, the naive
view of calculation had to be overcome. The inventors who labored to bring
the computer into the world had to learn that the thing they were inventing
was not just a number cruncher, not merely a calculator. For example, they
had to learn that it was not necessary to invent a new computer for every
new calculation and that a computer could be designed to solve numerous
problems, even problems not yet imagined when the computer was built.
They also had to learn how to tell such a general problem-solving computer
what problem to solve. In other words, they had to invent programming.

They had to solve all the heady problems of developing such a device,
of implementing the design, of actually building the thing. The history of the
solving of these problems is the history of the computer. That history is
covered in this section, and links are provided to entries on many of the
individuals and companies mentioned. In addition, see the articles computer
science and supercomputer.

The first computer
By the second decade of the 19th century, a number of ideas necessary for
the invention of the computer were in the air. First, the potential benefits
to science and industry of being able to automate routine calculations were
appreciated, as they had not been a century earlier. Specific methods to
make automated calculation more practical, such as doing multiplication by
adding logarithms or by repeating addition, had been invented, and
experience with both analog and digital devices had shown some of the
benefits of each approach. The Jacquard loom (as described in the previous
section, Computer precursors) had shown the benefits of directing a
multipurpose device through coded instructions, and it had demonstrated
how punched cards could be used to modify those instructions quickly and
flexibly. It was a mathematical genius in England who began to put all these
pieces together.

The Difference Engine
Charles Babbage was an English mathematician and inventor: he invented
the cowcatcher, reformed the British postal system, and was a pioneer in
the fields of operations research and actuarial science. It was Babbage who
first suggested that the weather of years past could be read from tree rings.
He also had a lifelong fascination with keys, ciphers, and mechanical dolls.

As a founding member of the Royal Astronomical Society, Babbage had seen
a clear need to design and build a mechanical device that could automate
long, tedious astronomical calculations. He began by writing a letter in
1822 to Sir Humphry Davy, president of the Royal Society, about the
possibility of automating the construction of mathematical tables—
specifically, logarithm tables for use in navigation. He then wrote a paper,
“On the Theoretical Principles of the Machinery for Calculating Tables,”
which he read to the society later that year. (It won the Royal Society’s first
Gold Medal in 1823.) Tables then in use often contained errors, which could
be a life-and-death matter for sailors at sea, and Babbage argued that, by
automating the production of the tables, he could assure their accuracy.
Having gained support in the society for his Difference Engine, as he called
it, Babbage next turned to the British government to fund development,
obtaining one of the world’s first government grants for research and
technological development.

Babbage approached the project very seriously: he hired a master
machinist, set up a fireproof workshop, and built a
dustproof environment for testing the device. Up until then calculations
were rarely carried out to more than 6 digits; Babbage planned to produce
20- or 30-digit results routinely. The Difference Engine was a digital device:
it operated on discrete digits rather than smooth quantities, and the digits
were decimal (0–9), represented by positions on toothed wheels, rather than
the binary digits that Leibniz favored (but did not use). When one of the
toothed wheels turned from 9 to 0, it caused the next wheel to advance one
position, carrying the digit just as Leibniz’s Step Reckoner calculator had
operated.

The Difference Engine was more than a simple calculator, however. It
mechanized not just a single calculation but a whole series of calculations
on a number of variables to solve a complex problem. It went far beyond
calculators in other ways as well. Like modern computers, the Difference
Engine had storage—that is, a place where data could be held temporarily
for later processing—and it was designed to stamp its output into soft
metal, which could later be used to produce a printing plate.

Nevertheless, the Difference Engine performed only one operation. The
operator would set up all of its data registers with the original data, and
then the single operation would be repeatedly applied to all of the registers,
ultimately producing a solution. Still, in complexity and audacity of design,
it dwarfed any calculating device then in existence.
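
The principle behind that single repeated operation is the method of finite differences: because the highest-order difference of a polynomial is constant, every new table entry can be produced by additions alone. The Python sketch below illustrates the idea on an arbitrary polynomial; Babbage's machine, of course, worked in decimal on toothed wheels and to many more digits.

```python
# Tabulating a polynomial by repeated addition of differences, the
# principle mechanized by the Difference Engine.

def difference_engine_table(initial_values, steps):
    """initial_values: [f(0), first difference, second difference, ...].
    Each new table entry needs only one addition per register."""
    registers = list(initial_values)
    table = [registers[0]]
    for _ in range(steps):
        # Add each difference into the register above it, top to bottom.
        for i in range(len(registers) - 1):
            registers[i] += registers[i + 1]
        table.append(registers[0])
    return table

# For f(x) = x^2 + x + 41: f(0) = 41, the first difference f(1) - f(0) = 2,
# and the second difference is constant at 2.
print(difference_engine_table([41, 2, 2], steps=5))
# [41, 43, 47, 53, 61, 71] - matches x^2 + x + 41 for x = 0..5
```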

The full engine, designed to be room-size, was never built, at least not by
Babbage. Although he sporadically received several government grants—
governments changed, funding often ran out, and he had to personally bear
some of the financial costs—he was working at or near the tolerances of the
construction methods of the day, and he ran into numerous construction
difficulties. All design and construction ceased in 1833, when Joseph
Clement, the machinist responsible for actually building the machine,
refused to continue unless he was prepaid. (The completed portion of the
Difference Engine is on permanent exhibition at the Science Museum in
London.)

The Analytical Engine
While working on the Difference Engine, Babbage began to imagine ways to
improve it. Chiefly he thought about generalizing its operation so that it
could perform other kinds of calculations. By the time the funding had run
out in 1833, he had conceived of something far more revolutionary: a
general-purpose computing machine called the Analytical Engine.

The
Analytical Engine was to be a general-purpose, fully program-controlled,
automatic mechanical digital computer. It would be able to perform any
calculation set before it. Before Babbage there is no evidence that anyone
had ever conceived of such a device, let alone attempted to build one. The
machine was designed to consist of four components: the mill, the store, the
reader, and the printer. These components are the essential components of
every computer today. The mill was the calculating unit, analogous to
the central processing unit (CPU) in a modern computer; the store was
where data were held prior to processing, exactly analogous to memory and
storage in today’s computers; and the reader and printer were the input and
output devices.

As with the Difference Engine, the project was far more complex than
anything theretofore built. The store was to be large enough to hold 1,000
50-digit numbers; this was larger than the storage capacity of any computer
built before 1960. The machine was to be steam-driven and run by one
attendant. The printing capability was also ambitious, as it had been for the
Difference Engine: Babbage wanted to automate the process as much as
possible, right up to producing printed tables of numbers.

The reader was another new feature of the Analytical Engine. Data
(numbers) were to be entered on punched cards, using the card-
reading technology of the Jacquard loom. Instructions were also to be
entered on cards, another idea taken directly from Jacquard. The use of
instruction cards would make it a programmable device and far more
flexible than any machine then in existence. Another element of
programmability was to be its ability to execute instructions in other
than sequential order. It was to have a kind of decision-making ability in its
conditional control transfer, also known as conditional branching, whereby
it would be able to jump to a different instruction depending on the value of
some data. This extremely powerful feature was missing in many of the
early computers of the 20th century.

By most definitions, the Analytical Engine was a real computer as
understood today—or would have been, had not Babbage run into
implementation problems again. Actually building his ambitious design was
judged infeasible with the technology of the day, and Babbage’s failure to
generate the promised mathematical tables with his Difference Engine had
dampened enthusiasm for further government funding. Indeed, it was
apparent to the British government that Babbage was more interested
in innovation than in constructing tables.

All the same, Babbage’s Analytical Engine was something new under the
sun. Its most revolutionary feature was the ability to change its operation by
changing the instructions on punched cards. Until this breakthrough, all the
mechanical aids to calculation were merely calculators or, like the
Difference Engine, glorified calculators. The Analytical Engine, although not
actually completed, was the first machine that deserved to be called a
computer.

Microsoft’s Windows operating system
In 1985 Microsoft came out with its Windows operating system, which gave
PC compatibles some of the same capabilities as the Macintosh. Year after
year, Microsoft refined and improved Windows so that Apple, which failed
to come up with a significant new advantage, lost its edge. IBM tried to
establish yet another operating system, OS/2, but lost the battle to Gates’s
company. In fact, Microsoft also had established itself as the leading
provider of application software for the Macintosh. Thus Microsoft
dominated not only the operating system and application software business
for PC-compatibles but also the application software business for the only
nonstandard system with any sizable share of the desktop computer market.
In 1998, amid a growing chorus of complaints about Microsoft’s business
tactics, the U.S. Department of Justice filed a lawsuit charging Microsoft
with using its monopoly position to stifle competition.

One interconnected world

The Internet
The Internet grew out of funding by the U.S. Advanced Research Projects
Agency (ARPA), later renamed the Defense Advanced Research Projects
Agency (DARPA), to develop a communication system among government
and academic computer-research laboratories. The first network
component, ARPANET, became operational in October 1969. With only 15
nongovernment (university) sites included in ARPANET, the U.S. National
Science Foundation decided to fund the construction and initial
maintenance cost of a supplementary network, the Computer Science
Network (CSNET). Built in 1980, CSNET was made available, on a
subscription basis, to a wide array of academic, government,
and industry research labs. As the 1980s wore on, further networks were
added. In North America there were (among others): BITNET (Because It’s
Time Network) from IBM, UUCP (UNIX-to-UNIX Copy Protocol) from Bell
Telephone, USENET (initially a connection between Duke University,
Durham, North Carolina, and the University of North Carolina and still the
home system for the Internet’s many newsgroups), NSFNET (a high-
speed National Science Foundation network connecting supercomputers),
and CDNet (in Canada). In Europe several small academic networks were
linked to the growing North American network.

All these various networks were able to communicate with one another
because of two shared protocols: the Transmission-Control Protocol (TCP),
which split large files into numerous small files, or packets, assigned
sequencing and address information to each packet, and reassembled the
packets into the original file after arrival at their final destination; and the
Internet Protocol (IP), a hierarchical addressing system that controlled the
routing of packets (which might take widely divergent paths before being
reassembled).
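
The division of labor between the two protocols can be sketched in outline: split a message into numbered packets, allow them to arrive in any order, and reassemble them by sequence number. The code below is only an illustration of that idea; real TCP and IP also handle acknowledgment, retransmission, and routing, and the address used here is a placeholder.

```python
# Splitting a message into sequenced packets and reassembling it,
# regardless of the order in which the packets arrive.

import random

def split_into_packets(data: bytes, payload_size: int, destination: str):
    return [
        {"destination": destination, "sequence": i, "payload": data[start:start + payload_size]}
        for i, start in enumerate(range(0, len(data), payload_size))
    ]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["sequence"])
    return b"".join(p["payload"] for p in ordered)

message = b"All these various networks were able to communicate with one another."
packets = split_into_packets(message, payload_size=16, destination="192.0.2.7")
random.shuffle(packets)                    # packets may take widely divergent paths
assert reassemble(packets) == message
print(len(packets), "packets reassembled correctly")
```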

What it took to turn a network of computers into something more was the
idea of the hyperlink: computer code inside a document that would cause
related documents to be fetched and displayed. The concept of hyperlinking
was anticipated from the early to the middle decades of the 20th century—
in Belgium by Paul Otlet and in the United States by Ted Nelson, Vannevar
Bush, and, to some extent, Douglas Engelbart. Their yearning for some kind
of system to link knowledge together, though, did not materialize until
1990, when Tim Berners-Lee of England and others at CERN (European
Organization for Nuclear Research) developed a protocol based
on hypertext to make information distribution easier. In 1991 this
culminated in the creation of the World Wide Web and its system of links
among user-created pages. A team of programmers at the U.S. National
Center for Supercomputing Applications, Urbana, Illinois, developed
a program called a browser that made it easier to use the World Wide Web,
and a spin-off company named Netscape Communications Corp. was
founded to commercialize that technology.

Netscape was an enormous success. The Web grew exponentially, doubling
the number of users and the number of sites every few months. Uniform
resource locators (URLs) became part of daily life, and the use of electronic
mail (email) became commonplace. Increasingly business took advantage of
the Internet and adopted new forms of buying and selling in “cyberspace.”
(Science fiction author William Gibson popularized this term in the early
1980s.) With Netscape so successful, Microsoft and other firms
developed alternative Web browsers.

Originally created as a closed network for researchers, the Internet was
suddenly a new public medium for information. It became the home of
virtual shopping malls, bookstores, stockbrokers, newspapers, and
entertainment. Schools were “getting connected” to the Internet, and
children were learning to do research in novel ways. The combination of the
Internet, email, and small and affordable computing and communication
devices began to change many aspects of society.

It soon became apparent that new software was necessary to take
advantage of the opportunities created by the Internet. Sun Microsystems,
maker of powerful desktop computers known as workstations, invented a
new object-oriented programming language called Java. Meeting the design
needs of embedded and networked devices, this new language was aimed at
making it possible to build applications that could be stored on one system
but run on another after passing over a network. Alternatively, various parts
of applications could be stored in different locations and moved to run in a
single device. Java was one of the more effective ways to develop software
for “smart cards,” plastic debit cards with embedded computer chips that
could store and transfer electronic funds in place of cash.

E-commerce
Early enthusiasm over the potential profits from e-commerce led to massive
cash investments and a “dot-com” boom-and-bust cycle in the 1990s. By the
end of the decade, half of these businesses had failed, though certain
successful categories of online business had been demonstrated, and most
conventional businesses had established an online presence. Search and
online advertising proved to be the most successful new business areas.

Some online businesses created niches that did not exist before. eBay,
founded in 1995 as an online auction and shopping website, gave members
the ability to set up their own stores online. Although sometimes criticized
for not creating any new wealth or products, eBay made it possible for
members to run small businesses from their homes without a large initial
investment. In 2003 Linden Research, Inc., launched Second Life, an
Internet-based virtual reality world in which participants (called
“residents”) have cartoonlike avatars that move through a
graphical environment. Residents socialize, participate in group activities,
and create and trade virtual products and virtual or real services. Second
Life has its own currency, the Linden Dollar, which can be converted to U.S.
dollars at several Internet currency exchange markets.

Maintaining an Internet presence became common for conventional
businesses during the 1990s and 2000s as they sought to reach out to a
public that was increasingly active in online social communities. In addition
to seeking some way of responding to the growing numbers of their
customers who were sharing their experiences with company products and
services online, companies discovered that many potential customers
searched online for the best deals and the locations of nearby businesses.
With an Internet-enabled smartphone, a customer might, for example, check
for nearby restaurants using its built-in access to the Global Positioning
System (GPS), check a map on the Web for directions to the restaurant, and
then call for a reservation, all while en route.

The growth of online business was accompanied, though, by a rise
in cybercrime, particularly identity theft, in which a criminal might gain
access to someone’s credit card or other identification and use it to make
purchases.

Social networking
Social networking services emerged as a significant online phenomenon in
the 2000s. These services used software to facilitate online communities,
where members with shared interests swapped files, photographs, videos,
and music, sent messages and chatted, set up blogs (Web diaries) and
discussion groups, and shared opinions. Early social networking services
included Classmates.com, which connected former schoolmates, and Yahoo!
360°, Myspace, and SixDegrees, which built networks of connections via
friends of friends. By 2018 the leading social networking services
included Facebook, Twitter, Instagram, LinkedIn, and Snapchat. LinkedIn
became an effective tool for business staff recruiting. Businesses began
exploring how to exploit these networks, drawing on social networking
research and theory which suggested that finding key “influential” members
of existing networks of individuals could give access to and credibility with
the whole network.

Blogs became a category unto themselves, and some blogs had thousands of
participants. Trust became a commodity, as sharing opinions or ratings
proved to be a key to effective blog discussions, as well as an important
component of many e-commerce websites. Daily Kos, one of the largest of
the political blogs, made good use of ratings, with high-rated members
gaining more power to rate other members’ comments; under such systems,
the idea is that the best entries will survive and the worst will quickly
disappear. The vendor rating system in eBay similarly allowed for a kind of
self-policing that was intended to weed out unethical or otherwise
undesirable vendors.
