Computer
A computer is a general-purpose device that can be programmed to carry out a set of arithmetic
or logical operations automatically. Since a sequence of operations can be readily changed, the
computer can solve more than one kind of problem. Early electronic computers were the
size of a large room, consuming as much power as several hundred modern personal computers
(PCs).[1]
Modern computers based on integrated circuits are millions to billions of times more capable
than the early machines, and occupy a fraction of the space.[2] Simple computers are small
enough to fit into mobile devices, and mobile computers can be powered by small batteries.
Personal computers in their various forms are icons of the Information Age and are what most
people think of as computers. However, the embedded computers found in many devices from
MP3 players to fighter aircraft and from toys to industrial robots are the most numerous.
Etymology
The first use of the word "computer" was recorded in 1613 in a book called The yong mans
gleanings by English writer Richard Braithwait: "I haue read the truest computer of Times, and
the best Arithmetician that euer breathed, and he reduceth thy dayes into a short number." It
referred to a person who carried out calculations, or computations, and the word continued with
the same meaning until the middle of the 20th century. From the end of the 19th century the
word began to take on its more familiar meaning, a machine that carries out computations.[3]
History
Main article: History of computing hardware
Rudimentary calculating devices first appeared in antiquity and mechanical calculating aids were
invented in the 17th century. The first recorded use of the word "computer" is also from the 17th
century, applied to human computers, people who performed calculations, often as employment.
The first computer devices were conceived of in the 19th century, and only emerged in their
modern form in the 1940s.
Charles Babbage designed the first mechanical general-purpose computer, the Analytical
Engine, in the 19th century; his son Henry Babbage completed a
simplified version of the analytical engine's computing unit (the mill) in 1888. He gave a
successful demonstration of its use in computing tables in 1906.
In 1936, Alan Turing reformulated Kurt Gödel's 1931 results on the limits of proof and
computation, replacing Gödel's universal
arithmetic-based formal language with the formal and simple hypothetical devices that became
known as Turing machines. He proved that some such machine would be capable of performing
any conceivable mathematical computation if it were representable as an algorithm. He went on
to prove that there was no solution to the Entscheidungsproblem by first showing that the halting
problem for Turing machines is undecidable: in general, it is not possible to decide
algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a 'Universal Machine' (now known as a Universal Turing
machine), with the idea that such a machine could perform the tasks of any other machine, or in
other words, it is provably capable of computing anything that is computable by executing a
program stored on tape, allowing the machine to be programmable. Von Neumann
acknowledged that the central concept of the modern computer was due to this paper.[10] Turing
machines are to this day a central object of study in theory of computation. Except for the
limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal
Turing machine.
Replica of Zuse's Z3, the first fully automatic, digital (electromechanical) computer.
Early digital computers were electromechanical; electric switches drove mechanical relays to
perform the calculation. These devices had a low operating speed and were eventually
superseded by much faster all-electric computers, originally using vacuum tubes. The Z2, created
by German engineer Konrad Zuse in 1939, was one of the earliest examples of an
electromechanical relay computer.[11]
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working
electromechanical programmable, fully automatic digital computer.[12][13] The Z3 was built with
2,000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5
to 10 Hz.[14] Program code and data were stored on punched film. It was quite similar to modern
machines in some respects, pioneering numerous advances such as floating point numbers.
Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier
design) by the simpler binary system meant that Zuse's machines were easier to build and
potentially more reliable, given the technologies available at that time.[15] The Z3 was, in
principle, a Turing-complete machine.
Colossus was the first electronic digital programmable computing device, and was used to break
German ciphers during World War II.
During World War II, the British at Bletchley Park achieved a number of successes at breaking
encrypted German military communications. The German encryption machine, Enigma, was first
attacked with the help of the electro-mechanical bombes. To crack the more sophisticated
German Lorenz SZ 40/42 machine, used for high-level Army communications, Max Newman
and his colleagues commissioned the engineer Tommy Flowers to build the Colossus.[18] Flowers spent eleven months from
early February 1943 designing and building the first Colossus.[19] After a functional test in
December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January
1944[20] and attacked its first message on 5 February.[18]
Colossus was the world's first electronic digital programmable computer.[7] It used a large
number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to
perform a variety of boolean logical operations on its data, but it was not Turing-complete. Nine
Mk II Colossi were built (the Mk I was converted to a Mk II, making ten machines in total).
The Colossus Mark I contained 1,500 thermionic valves (tubes), but the Mark II, with 2,400 valves, was
both five times faster and simpler to operate than the Mark I, greatly speeding the decoding
process.[21][22]
ENIAC was the first Turing-complete device, and performed ballistics trajectory calculations for
the United States Army.
The US-built ENIAC[23] (Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the US. Although the ENIAC was similar to the Colossus it was
much faster and more flexible. It was unambiguously a Turing-complete device and could
compute any problem that would fit into its memory. Like the Colossus, a "program" on the
ENIAC was defined by the states of its patch cables and switches, a far cry from the stored
program electronic machines that came later. Once a program was written, it had to be
mechanically set into the machine with manual resetting of plugs and switches.
It combined the high speed of electronics with the ability to be programmed for many complex
problems. It could add or subtract 5000 times a second, a thousand times faster than any other
machine. It also had modules to multiply, divide, and take square roots. High-speed memory was
limited to 20 words (about 80 bytes). Built under the direction of John Mauchly and J. Presper
Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from
1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200
kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds
of thousands of resistors, capacitors, and inductors.[24]
Stored-program computers eliminate the need for re-wiring the
device. John von Neumann, at the University of Pennsylvania, circulated his First Draft of a
Report on the EDVAC in 1945.[7]
The University of Manchester's 1953 transistorized computer still used valves to generate its
125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory,
so it was not the first completely transistorized computer. That distinction goes to the Harwell
CADET of 1955,[32] built by the electronics division of the Atomic Energy Research
Establishment at Harwell.[33][34]
The first practical ICs were invented by Jack Kilby at Texas Instruments and Robert Noyce at
Fairchild Semiconductor.[36] Kilby recorded his initial ideas concerning the integrated circuit in
July 1958, successfully demonstrating the first working integrated example on 12 September
1958.[37] In his patent application of 6 February 1959, Kilby described his new device as "a body
of semiconductor material ... wherein all the components of the electronic circuit are completely
integrated".[38][39] Noyce came up with his own idea of an integrated circuit half a year later
than Kilby.[40] His chip solved many practical problems that Kilby's had not. Produced at
Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium.
This new development heralded an explosion in the commercial and personal use of computers
and led to the invention of the microprocessor. While the subject of exactly which device was the
first microprocessor is contentious, partly due to lack of agreement on the exact definition of the
term "microprocessor", it is largely undisputed that the first single-chip microprocessor was the
Intel 4004,[41] designed and realized by Ted Hoff, Federico Faggin, and Stanley Mazor at
Intel.[42]
Programs
The defining feature of modern computers which distinguishes them from all other machines is
that they can be programmed. That is to say that some type of instructions (the program) can be
given to the computer, and it will process them. Modern computers based on the von Neumann
architecture often have machine code in the form of an imperative programming language.
In practical terms, a computer program may be just a few instructions or extend to many millions
of instructions, as do the programs for word processors and web browsers for example. A typical
modern computer can execute billions of instructions per second and rarely makes a
mistake over many years of operation. Large computer programs consisting of several million
instructions may take teams of programmers years to write, and due to the complexity of the task
almost certainly contain errors.
Replica of the Small-Scale Experimental Machine (SSEM), the world's first stored-program
computer, at the Museum of Science and Industry in Manchester, England
For example, a program that adds up a run of consecutive numbers might look like this in a
simple assembly-style notation:
      mov #0, sum      ; set sum to 0
      mov #1, num      ; set num to 1
loop: add num, sum     ; add num to sum
      add #1, num      ; add 1 to num
      cmp num, #1000   ; compare num to 1000
      ble loop         ; if num <= 1000, go back to 'loop'
      halt             ; end of program; stop running
Once told to run this program, the computer will perform the repetitive addition task without
further human intervention. It will almost never make a mistake and a modern PC can complete
the task in about a millionth of a second.[44]
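The repetitive addition task above can be sketched in a high-level language for comparison. This is an illustrative equivalent, assuming the loop sums the integers 1 through 1000; it is not part of the original example:

```python
# Equivalent of the assembly-style sketch: sum the integers 1 to 1000.
total = 0
num = 1
while num <= 1000:   # corresponds to the conditional branch back to 'loop'
    total += num     # add num to the running sum
    num += 1         # add 1 to num
print(total)         # 500500
```

The high-level version says what to do; the assembly version additionally says how the machine should do it, one instruction at a time.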
Machine code
In most computers, individual instructions are stored as machine code with each instruction
being given a unique number (its operation code or opcode for short). The command to add two
numbers together would have one opcode; the command to multiply them would have a different
opcode, and so on. The simplest computers are able to perform any of a handful of different
instructions; the more complex computers have several hundred to choose from, each with a
unique numerical code. Since the computer's memory is able to store numbers, it can also store
the instruction codes. This leads to the important fact that entire programs (which are just lists of
these instructions) can be represented as lists of numbers and can themselves be manipulated
inside the computer in the same way as numeric data. The fundamental concept of storing
programs in the computer's memory alongside the data they operate on is the crux of the von
Neumann, or stored program[citation needed], architecture. In some cases, a computer might store
some or all of its program in memory that is kept separate from the data it operates on. This is
called the Harvard architecture after the Harvard Mark I computer. Modern von Neumann
computers display some traits of the Harvard architecture in their designs, such as in CPU
caches.
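The idea that instructions are themselves numbers stored alongside data can be made concrete with a toy machine. The opcodes and memory layout below are invented for illustration and do not correspond to any real instruction set:

```python
# Toy stored-program machine: opcodes are just numbers, and the program is a
# list of numbers living in the same memory as its data.
LOAD, ADD, STORE, HALT = 1, 2, 3, 0   # hypothetical opcodes

def run(memory):
    """Execute two-cell instructions of the form [opcode, operand address]."""
    acc, pc = 0, 0                    # accumulator and program counter
    while True:
        op, addr = memory[pc], memory[pc + 1]
        pc += 2
        if op == LOAD:
            acc = memory[addr]        # copy a memory cell into the accumulator
        elif op == ADD:
            acc += memory[addr]       # add a memory cell to the accumulator
        elif op == STORE:
            memory[addr] = acc        # write the accumulator back to memory
        elif op == HALT:
            return memory

# Cells 0-7 hold the program, cells 8-10 hold data: load cell 8,
# add cell 9, store the result in cell 10, then halt.
mem = [LOAD, 8, ADD, 9, STORE, 10, HALT, 0, 5, 7, 0]
print(run(mem)[10])   # 12
```

Because the program is just the numbers in cells 0 to 7, it could itself be loaded, added to, and stored like any other data, which is exactly the von Neumann insight.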
While it is possible to write computer programs as long lists of numbers (machine language) and
while this technique was used with many early computers,[45] it is extremely tedious and
potentially error-prone to do so in practice, especially for complicated programs. Instead, each
basic instruction can be given a short name that is indicative of its function and easy to
remember: a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are
collectively known as a computer's assembly language. Converting programs written in assembly
language into something the computer can actually understand (machine language) is usually
done by a computer program called an assembler.
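What an assembler does can be sketched in a few lines. The mnemonics and numeric opcodes here are hypothetical, chosen only to show the translation step:

```python
# A minimal sketch of an assembler: translate mnemonics into numeric opcodes.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "HALT": 0}

def assemble(source):
    """Turn lines like 'ADD 9' into flat machine code: [opcode, operand, ...]."""
    code = []
    for line in source.strip().splitlines():
        parts = line.split()
        code.append(OPCODES[parts[0]])                      # mnemonic -> opcode
        code.append(int(parts[1]) if len(parts) > 1 else 0) # operand (or 0)
    return code

program = """
LOAD 8
ADD 9
STORE 10
HALT
"""
print(assemble(program))   # [1, 8, 2, 9, 3, 10, 0, 0]
```

A real assembler also resolves symbolic labels into addresses, but the core job is this mechanical name-to-number translation.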
A 1970s punched card containing one line from a FORTRAN program. The card reads: Z(1) =
Y + W(1) and is labeled PROJ039 for identification purposes.
Programming language
Main article: Programming language
Programming languages provide various ways of specifying programs for computers to run.
Unlike natural languages, programming languages are designed to permit no ambiguity and to be
concise. They are purely written languages and are often difficult to read aloud. They are
generally either translated into machine code by a compiler or an assembler before being run, or
translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid
method of the two techniques.
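The interpreter route can be illustrated with a toy: each operation of a made-up mini-language is carried out directly at run time rather than being translated into machine code beforehand:

```python
# Toy interpreter: each operation of a (hypothetical) mini-language is
# executed directly as it is read, with no ahead-of-time compilation.
def interpret(program, x):
    for op, operand in program:
        if op == "add":
            x += operand
        elif op == "mul":
            x *= operand
    return x

# (x + 2) * 3 with x = 4, evaluated step by step:
print(interpret([("add", 2), ("mul", 3)], 4))   # 18
```

A compiler for the same mini-language would instead emit machine code once, ahead of time, trading translation effort for faster repeated execution.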
Low-level languages
Main article: Low-level programming language
Machine languages and the assembly languages that represent them (collectively termed low-level programming languages) tend to be unique to a particular type of computer. For instance,
an ARM architecture computer (such as may be found in a PDA or a hand-held videogame)
cannot understand the machine language of an Intel Pentium or an AMD Athlon 64 computer
that might be in a PC.[46]
Higher-level languages
Main article: High-level programming language
Though considerably easier than in machine language, writing long programs in assembly
language is often difficult and is also error prone. Therefore, most practical programs are written
in more abstract high-level programming languages that are able to express the needs of the
programmer more conveniently (and thereby help reduce programmer error). High level
languages are usually compiled into machine language (or sometimes into assembly language
and then into machine language) using another computer program called a compiler.[47] High
level languages are less related to the workings of the target computer than assembly language,
and more related to the language and structure of the problem(s) to be solved by the final
program. It is therefore often possible to use different compilers to translate the same high level
language program into the machine language of many different types of computer. This is part of
the means by which software like video games may be made available for different computer
architectures, such as personal computers and various video game consoles.
Control unit
Main articles: CPU design and Control unit
Diagram showing how a particular MIPS architecture instruction would be decoded by the
control system
The control unit (often called a control system or central controller) manages the computer's
various components; it reads and interprets (decodes) the program instructions, transforming
them into a series of control signals which activate other parts of the computer.[50] Control
systems in advanced computers may change the order of some instructions so as to improve
performance.
A key component common to all CPUs is the program counter, a special memory cell (a register)
that keeps track of which location in memory the next instruction is to be read from.[51]
The control system's function is as follows. Note that this is a simplified description, and some
of these steps may be performed concurrently or in a different order depending on the type of
CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each
of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an
input device). The location of this required data is typically stored within the instruction
code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the
hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an
output device.
8. Jump back to step (1).
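The numbered steps above can be sketched as a loop. The instruction set and memory layout below are invented for illustration, and a real control unit implements this cycle in hardware rather than software; the program and data sit in separate lists here for readability, though a von Neumann machine keeps both in one memory:

```python
# A software sketch of the fetch-decode-execute cycle (steps 1-8 above).
def execute(program, data):
    pc, acc = 0, 0                 # program counter and accumulator
    while True:
        op, arg = program[pc]      # 1. read the instruction at the PC
        pc += 1                    # 3. point the PC at the next instruction
        if op == "LOAD":           # 2. decode, then carry out the instruction:
            acc = data[arg]        # 4./5. read the operand from memory
        elif op == "ADD":
            acc += data[arg]       # 6. have the ALU perform the addition
        elif op == "STORE":
            data[arg] = acc        # 7. write the result back to memory
        elif op == "JNZ":          # jump if the accumulator is non-zero:
            if acc != 0:
                pc = arg           # a jump simply overwrites the PC
        elif op == "HALT":
            return data
        # 8. loop back and fetch the next instruction

# Count cell 0 down to zero by looping; cell 1 holds the constant -1.
prog = [("LOAD", 0), ("JNZ", 3), ("HALT", 0),
        ("ADD", 1), ("STORE", 0), ("JNZ", 3), ("HALT", 0)]
print(execute(prog, [3, -1]))   # [0, -1]
```

The JNZ instruction is the sketch's only jump, yet it is enough to build the loop; this is the sense in which instructions that modify the program counter give rise to control flow.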
Since the program counter is (conceptually) just another set of memory cells, it can be changed
by calculations done in the ALU. Adding 100 to the program counter would cause the next
instruction to be read from a place 100 locations further down the program. Instructions that
modify the program counter are often known as jumps and allow for loops (instructions that
are repeated by the computer) and often conditional instruction execution (both examples of
control flow).
The sequence of operations that the control unit goes through to process an instruction is in itself
like a short computer program, and indeed, in some more complex CPU designs, there is another
yet smaller computer called a microsequencer, which runs a microcode program that causes all
of these events to happen.
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful for
creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs, allowing them to process several
instructions simultaneously.[53] Graphics processors and computers with SIMD and MIMD
features often contain ALUs that can perform arithmetic on vectors and matrices.
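The four logic operations mentioned above can be demonstrated bitwise on 8-bit values; the operand values are arbitrary examples:

```python
# AND, OR, XOR and NOT applied bitwise to two 8-bit values.
a, b = 0b11001100, 0b10101010

print(format(a & b, "08b"))       # AND -> 10001000
print(format(a | b, "08b"))       # OR  -> 11101110
print(format(a ^ b, "08b"))       # XOR -> 01100110
print(format(~a & 0xFF, "08b"))   # NOT (masked to one byte) -> 00110011
```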
Memory
Main article: Computer data storage
Magnetic core memory was the computer memory of choice throughout the 1960s, until it was
replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read.
Each cell has a numbered address and can store a single number. The computer can be
instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in
cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information
stored in memory may represent practically anything. Letters, numbers, even computer
instructions can be placed into memory with equal ease. Since the CPU does not differentiate
between different types of information, it is the software's responsibility to give significance to
what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of
eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either
from 0 to 255 or 128 to +127. To store larger numbers, several consecutive bytes may be used
(typically, two, four or eight). When negative numbers are required, they are usually stored in
two's complement notation. Other arrangements are possible, but are usually not seen outside of
specialized applications or historical contexts. A computer can store any kind of information in
memory if it can be represented numerically. Modern computers have billions or even trillions of
bytes of memory.
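The byte ranges and the two's complement convention described above can be checked directly; the specific values below are illustrative:

```python
# One byte stores 256 distinct patterns (2**8); the same bits can be read
# as 0..255 (unsigned) or -128..127 (two's complement).
raw = 0b11111111                     # one byte with all bits set
print(raw)                           # 255 when read as unsigned
signed = raw - 256 if raw >= 128 else raw
print(signed)                        # -1 when read as two's complement

# Larger numbers span several consecutive bytes:
n = -1000
b = n.to_bytes(2, "big", signed=True)
print(b.hex())                                  # fc18
print(int.from_bytes(b, "big", signed=True))    # -1000
```

The same bit pattern thus has no inherent meaning; whether it denotes 255 or -1 is a choice made by the software reading it, exactly as the text above says.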
The CPU contains a special set of memory cells called registers that can be read and written to
much more rapidly than the main memory area. There are typically between two and one
hundred registers depending on the type of CPU. Registers are used for the most frequently
needed data items to avoid having to access main memory every time data is needed. As data is
constantly being worked on, reducing the need to access main memory (which is often slow
compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random-access memory or RAM and
read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but
ROM is preloaded with data and software that never changes, therefore the CPU can only read
from it. ROM is typically used to store the computer's initial start-up instructions. In general, the
contents of RAM are erased when the power to the computer is turned off, but ROM retains its
data indefinitely. In a PC, the ROM contains a specialized program called the BIOS that
orchestrates loading the computer's operating system from the hard disk drive into RAM
whenever the computer is turned on or reset. In embedded computers, which frequently do not
have disk drives, all of the required software may be stored in ROM. Software stored in ROM is
often called firmware, because it is notionally more like hardware than software. Flash memory
blurs the distinction between ROM and RAM, as it retains its data when turned off but is also
rewritable. It is typically much slower than conventional ROM and RAM however, so its use is
restricted to applications where high speed is unnecessary.[54]
In more sophisticated computers there may be one or more RAM cache memories, which are
slower than registers but faster than main memory. Generally computers with this sort of cache
are designed to move frequently needed data into the cache automatically, often without the need
for any intervention on the programmer's part.
Input/output (I/O)
Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[55] Devices
that provide input or output to the computer are called peripherals.[56] On a typical personal
computer, peripherals include input devices like the keyboard and mouse, and output devices
such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve
as both input and output devices. Computer networking is another form of I/O.
I/O devices are often complex computers in their own right, with their own CPU and memory. A
graphics processing unit might contain fifty or more tiny computers that perform the calculations
necessary to display 3D graphics.[citation needed] Modern desktop computers contain many smaller
computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in
some systems it is necessary to give the appearance of running several programs simultaneously.
This is achieved by multitasking, i.e., having the computer switch rapidly between running each
program in turn.[57]
One means by which this is done is with a special signal called an interrupt, which can
periodically cause the computer to stop executing instructions where it was and do something
else instead. By remembering where it was executing prior to the interrupt, the computer can
return to that task later. If several programs are running at the same time, then the interrupt
generator might be causing several hundred interrupts per second, causing a program switch each
time. Since modern computers typically execute instructions several orders of magnitude faster
than human perception, it may appear that many programs are running at the same time even
though only one is ever executing in any given instant. This method of multitasking is sometimes
termed time-sharing since each program is allocated a slice of time in turn.[58]
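Time-sharing can be sketched with cooperative tasks standing in for interrupt-driven switching. This is a simplified model, since a real interrupt preempts a program rather than waiting for it to yield:

```python
# Toy time-sharing: the "interrupt" here is the scheduler taking back control
# after each small slice of work (one generator step per turn).
def task(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"     # do one slice of work, then give up the CPU

def round_robin(tasks):
    """Switch rapidly between tasks, giving each one slice in turn."""
    log = []
    while tasks:
        t = tasks.pop(0)
        try:
            log.append(next(t))
            tasks.append(t)     # not finished: back of the queue
        except StopIteration:
            pass                # finished tasks are dropped
    return log

print(round_robin([task("A", 2), task("B", 3)]))
# ['A:0', 'B:0', 'A:1', 'B:1', 'B:2']
```

Only one task ever runs at a given instant, yet the interleaved log shows why, at human timescales, the tasks appear simultaneous.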
Before the era of cheap computers, the principal use for multitasking was to allow many people
to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to
run more slowly, in direct proportion to the number of programs it is running, but most programs
spend much of their time waiting for slow input/output devices to complete their tasks. If a
program is waiting for the user to click on the mouse or press a key on the keyboard, then it will
not take a time slice until the event it is waiting for has occurred. This frees up time for other
programs to execute so that many programs may be run simultaneously without unacceptable
speed loss.
Multiprocessing
Main article: Multiprocessing
Multiprocessor and multi-core (multiple CPUs on a single integrated circuit) personal and laptop computers are now widely available,
and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly unique architectures that differ significantly from
the basic stored-program architecture and from general purpose computers.[59] They often feature
thousands of CPUs, customized high-speed interconnects, and specialized computing hardware.
Such designs tend to be useful only for specialized tasks due to the large scale of program
organization required to successfully utilize most of the available resources at once.
Supercomputers usually see usage in large-scale simulation, graphics rendering, and
cryptography applications, as well as with other so-called embarrassingly parallel tasks.
Misconceptions
Main articles: Human computer and Harvard Computers
Required technology
Notes
1. ^ In 1946, ENIAC required an estimated 174 kW. By comparison, a modern laptop
computer may use around 30 W; nearly six thousand times less. "Approximate Desktop
& Notebook Power Usage". University of Pennsylvania. Retrieved 20 June 2009.
2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100
operations per second. A modern commodity microprocessor (as of 2007) can process
billions of operations per second, and many of these operations are more complicated and
useful than early computer operations. "Intel Core2 Duo Mobile Processor: Features".
Intel Corporation. Retrieved 20 June 2009.
3. ^ computer, n.. Oxford English Dictionary (2 ed.). Oxford University Press. 1989.
Retrieved 10 April 2009.
4. ^ Halacy, Daniel Stephen (1970). Charles Babbage, Father of the Computer. Crowell-Collier Press. ISBN 0-02-741370-5.
5. ^ "Babbage". Online stuff. Science Museum. 2007-01-19. Retrieved 2012-08-01.
6. ^ "Let's build Babbage's ultimate mechanical computer". opinion. New Scientist. 23
December 2010. Retrieved 2012-08-01.
References
Fuegi, J. and Francis, J. "Lovelace & Babbage and the creation of the 1843 'notes'". IEEE
Annals of the History of Computing 25, No. 4 (October-December 2003).
Shannon, Claude Elwood (1940). A symbolic analysis of relay and switching circuits.
Verma, G.; Mielke, N. (1988). Reliability performance of ETOX based flash memories.
IEEE International Reliability Physics Symposium.
Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (13 November 2006).
"Architectures Share Over Time". TOP500. Retrieved 27 November 2006.
Zuse, Konrad (1993). The Computer - My Life. Berlin: Springer-Verlag. ISBN 0-387-56453-5.
Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine.
Chicago: Washington Institute.