
The first use of the word "computer" was recorded in 1613, referring to a person who carried out

calculations, or computations, and the word continued with the same meaning until the middle of
the 20th century. From the end of the 19th century onwards, the word began to take on its more
familiar meaning, describing a machine that carries out computations.[3]
Limited-function early computers

The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England,
was one of the first programmable devices.
The history of the modern computer begins with two separate technologies—automated
calculation and programmability—but no single device can be identified as the earliest computer,
partly because of the inconsistent application of that term. A few devices are nevertheless worth
mentioning. Some mechanical aids to computing were very successful and survived for centuries
until the advent of the electronic calculator: the Sumerian abacus, designed around 2500 BC,[4]
a descendant of which won a speed competition against a modern desk calculating machine in
Japan in 1946;[5] the slide rule, invented in the 1620s, which was carried on five Apollo space
missions, including to the Moon;[6] and arguably the astrolabe and the Antikythera mechanism,
an ancient astronomical computer built by the Greeks around 80 BC.[7] The Greek mathematician
Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting
10 minutes, operated by a complex system of ropes and drums that might be considered a means
of deciding which parts of the mechanism performed which actions and when.[8] This is the
essence of programmability.
Around the end of the tenth century, the French monk Gerbert d'Aurillac brought back from
Spain the drawings of a machine invented by the Moors that answered Yes or No to the
questions it was asked (binary arithmetic).[9] In the thirteenth century, the monks Albertus
Magnus and Roger Bacon built talking androids without any further development (Albertus
Magnus complained that he had wasted forty years of his life when Thomas Aquinas, terrified by
his machine, destroyed it).[10] In the same century analog computers like the castle clock of Al-
Jazari were invented.
In 1642, the Renaissance saw the invention of the mechanical calculator,[11] a device that could
perform all four arithmetic operations without relying on human intelligence.[12] The mechanical
calculator was at the root of the development of computers in two separate ways. First, it was in
trying to develop more powerful and more flexible calculators[13] that the computer was first
theorized by Charles Babbage[14][15] and then developed,[16] leading to the mainframe
computers of the 1960s. Second, the microprocessor, which started the personal computer
revolution and is now at the heart of all computer systems regardless of size or purpose,[17] was
invented serendipitously by Intel[18] during the development of an electronic calculator, a direct
descendant of the mechanical calculator.[19]
First general-purpose computers
In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series
of punched paper cards as a template which allowed his loom to weave intricate patterns
automatically. The resulting Jacquard loom was an important step in the development of
computers because the use of punched cards to define woven patterns can be viewed as an early,
albeit limited, form of programmability.

The Most Famous Image in the Early History of Computing[20]

This portrait of Jacquard was woven in silk on a Jacquard loom and required 24,000 punched
cards to create (1839). It was only produced to order. Charles Babbage owned one of these
portraits; it inspired him to use perforated cards in his analytical engine.[21]
It was the fusion of automatic calculation with programmability that produced the first
recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a
fully programmable mechanical computer, his analytical engine.[22] Limited finances and
Babbage's inability to resist tinkering with the design meant that the device was never
completed; nevertheless his son, Henry Babbage, completed a simplified version of the
analytical engine's computing unit (the mill) in 1888. He gave a successful demonstration of its
use in computing tables in 1906. This machine was given to the Science Museum in South
Kensington in 1910.
In the late 1880s, Herman Hollerith invented the recording of data on a machine readable
medium. Prior uses of machine readable media, above, had been for control, not data. "After
some initial trials with paper tape, he settled on punched cards ..."[23] To process these punched
cards he invented the tabulator, and the keypunch machines. These three inventions were the
foundation of the modern information processing industry. Large-scale automated data
processing of punched cards was performed for the 1890 United States Census by Hollerith's
company, which later became the core of IBM. By the end of the 19th century a number of
technologies that would later prove useful in the realization of practical computers had begun to
appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the
teleprinter.
During the first half of the 20th century, many scientific computing needs were met by
increasingly sophisticated analog computers, which used a direct mechanical or electrical model
of the problem as a basis for computation. However, these were not programmable and generally
lacked the versatility and accuracy of modern digital computers.
Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing
provided an influential formalisation of the concept of the algorithm and computation with the
Turing machine, providing a blueprint for the electronic digital computer.[24] Of his role in the
creation of the modern computer, Time magazine in naming Turing one of the 100 most
influential people of the 20th century, states: "The fact remains that everyone who taps at a
keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of
a Turing machine".[24]

The Zuse Z3, 1941, considered the world's first working programmable, fully automatic
computing machine.
The ENIAC, which became operational in 1946, is considered to be the first general-purpose
electronic computer.

EDSAC was one of the first computers to implement the stored program (von Neumann)
architecture.

Die of an Intel 80486DX2 microprocessor (actual size: 12×6.75 mm) in its packaging.


The Atanasoff–Berry Computer (ABC) was among the first electronic digital binary computing
devices. Conceived in 1937 by Iowa State College physics professor John Atanasoff, and built
with the assistance of graduate student Clifford Berry,[25] the machine was not programmable,
being designed only to solve systems of linear equations. The computer did employ parallel
computation. A 1973 court ruling in a patent dispute found that the patent for the 1946 ENIAC
computer derived from the Atanasoff–Berry Computer.
The inventor of the program-controlled computer was Konrad Zuse, who built the first working
computer in 1941 and later in 1955 the first computer based on magnetic storage.[26]
George Stibitz is internationally recognized as a father of the modern digital computer. While
working at Bell Labs in November 1937, Stibitz invented and built a relay-based calculator he
dubbed the "Model K" (for "kitchen table", on which he had assembled it), which was the first to
use binary circuits to perform an arithmetic operation. Later models added greater sophistication
including complex arithmetic and programmability.[27]
A succession of steadily more powerful and flexible computing devices were constructed in the
1930s and 1940s, gradually adding the key features that are seen in modern computers. The use
of digital electronics (largely invented by Claude Shannon in 1937) and more flexible
programmability were vitally important steps, but defining one point along this road as "the first
digital electronic computer" is difficult (Shannon 1940). Notable achievements include:
• Konrad Zuse's electromechanical "Z machines". The Z3 (1941) was the first working
machine featuring binary arithmetic, including floating point arithmetic and a measure of
programmability. In 1998 the Z3 was proved to be Turing complete, making it the
world's first operational computer.[28]
• The non-programmable Atanasoff–Berry Computer (commenced in 1937, completed in
1941) which used vacuum tube based computation, binary numbers, and regenerative
capacitor memory. The use of regenerative memory allowed it to be much more compact
than its peers (being approximately the size of a large desk or workbench), since
intermediate results could be stored and then fed back into the same set of computation
elements.
• The secret British Colossus computers (1943),[29] which had limited programmability but
demonstrated that a device using thousands of tubes could be reasonably reliable and
electronically reprogrammable. It was used for breaking German wartime codes.
• The Harvard Mark I (1944), a large-scale electromechanical computer with limited
programmability.[30]
• The U.S. Army's Ballistic Research Laboratory's ENIAC (1946), which used decimal
arithmetic and is sometimes called the first general purpose electronic computer (since
Konrad Zuse's Z3 of 1941 used electromagnets instead of electronics). Initially, however,
ENIAC had an inflexible architecture which essentially required rewiring to change its
programming.
Stored-program architecture
Several developers of ENIAC, recognizing its flaws, came up with a far more flexible and
elegant design, which came to be known as the "stored program architecture" or von Neumann
architecture. This design was first formally described by John von Neumann in the paper First
Draft of a Report on the EDVAC, distributed in 1945. A number of projects to develop
computers based on the stored-program architecture commenced around this time, the first of
these being completed in Great Britain. The first working prototype to be demonstrated was the
Manchester Small-Scale Experimental Machine (SSEM or "Baby") in 1948. The Electronic
Delay Storage Automatic Calculator (EDSAC), completed a year after the SSEM at Cambridge
University, was the first practical, non-experimental implementation of the stored program
design and was put to use immediately for research work at the university. Shortly thereafter, the
machine originally described by von Neumann's paper—EDVAC—was completed but did not
see full-time use for an additional two years.
Nearly all modern computers implement some form of the stored-program architecture, making
it the single trait by which the word "computer" is now defined. While the technologies used in
computers have changed dramatically since the first electronic, general-purpose computers of the
1940s, most still use the von Neumann architecture.
Beginning in the 1950s, Soviet scientists Sergei Sobolev and Nikolay Brusentsov conducted
research on ternary computers, devices that operated on a base three numbering system of −1, 0,
and 1 rather than the conventional binary numbering system upon which most computers are
based. They designed the Setun, a functional ternary computer, at Moscow State University. The
device was put into limited production in the Soviet Union, but supplanted by the more common
binary architecture.
Semiconductors and microprocessors
Computers using vacuum tubes as their electronic elements were in use throughout the 1950s,
but by the 1960s had been largely replaced by transistor-based machines, which were smaller,
faster, cheaper to produce, required less power, and were more reliable. The first transistorised
computer was demonstrated at the University of Manchester in 1953.[31] In the 1970s, integrated
circuit technology and the subsequent creation of microprocessors, such as the Intel 4004, further
decreased size and cost and further increased speed and reliability of computers. By the late
1970s, many products such as video recorders contained dedicated computers called
microcontrollers, and they started to appear as a replacement to mechanical controls in domestic
appliances such as washing machines. The 1980s witnessed home computers and the now
ubiquitous personal computer. With the evolution of the Internet, personal computers are
becoming as common as the television and the telephone in the household[citation needed].
Modern smartphones are fully programmable computers in their own right, and as of 2009 may
well be the most common form of such computers in existence[citation needed].
Programs
The defining feature of modern computers, which distinguishes them from all other machines, is
that they can be programmed. That is to say, some type of instructions (the program) can be
given to the computer, and it will process them. While some computers may have unusual
notions of "instructions" and "output" (see quantum computing), modern computers based on the
von Neumann architecture usually have machine code in the form of an imperative programming
language.
In practical terms, a computer program may be just a few instructions or extend to many millions
of instructions, as do the programs for word processors and web browsers for example. A typical
modern computer can execute billions of instructions per second (gigaflops) and rarely makes a
mistake over many years of operation. Large computer programs consisting of several million
instructions may take teams of programmers years to write, and due to the complexity of the task
almost certainly contain errors.
Stored program architecture
Main articles: Computer program and Computer programming
A 1970s punched card containing one line from a FORTRAN program. The card reads: "Z(1) =
Y + W(1)" and is labelled "PROJ039" for identification purposes.
This section applies to most common RAM machine-based computers.
In most cases, computer instructions are simple: add one number to another, move some data
from one location to another, send a message to some external device, etc. These instructions are
read from the computer's memory and are generally carried out (executed) in the order they were
given. However, there are usually specialized instructions to tell the computer to jump ahead or
backwards to some other place in the program and to carry on executing from there. These are
called "jump" instructions (or branches). Furthermore, jump instructions may be made to happen
conditionally so that different sequences of instructions may be used depending on the result of
some previous calculation or some external event. Many computers directly support subroutines
by providing a type of jump that "remembers" the location it jumped from and another
instruction to return to the instruction following that jump instruction.
Program execution might be likened to reading a book. While a person will normally read each
word and line in sequence, they may at times jump back to an earlier place in the text or skip
sections that are not of interest. Similarly, a computer may sometimes go back and repeat the
instructions in some section of the program over and over again until some internal condition is
met. This is called the flow of control within the program and it is what allows the computer to
perform tasks repeatedly without human intervention.
Comparatively, a person using a pocket calculator can perform a basic arithmetic operation such
as adding two numbers with just a few button presses. But to add together all of the numbers
from 1 to 1,000 would take thousands of button presses and a lot of time—with a near certainty
of making a mistake. On the other hand, a computer may be programmed to do this with just a
few simple instructions. For example:

      mov #0, sum     ; set sum to 0
      mov #1, num     ; set num to 1
loop: add num, sum    ; add num to sum
      add #1, num     ; add 1 to num
      cmp num, #1000  ; compare num to 1000
      ble loop        ; if num <= 1000, go back to 'loop'
      halt            ; end of program. stop running
Once told to run this program, the computer will perform the repetitive addition task without
further human intervention. It will almost never make a mistake and a modern PC can complete
the task in about a millionth of a second.[32]
Bugs
Main article: software bug
The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II computer
Errors in computer programs are called "bugs". Bugs may be benign and not affect the usefulness
of the program, or have only subtle effects. But in some cases they may cause the program to
"hang"—become unresponsive to input such as mouse clicks or keystrokes, or to completely fail
or "crash". Otherwise benign bugs may sometimes be harnessed for malicious intent by an
unscrupulous user writing an "exploit"—code designed to take advantage of a bug and disrupt a
computer's proper execution. Bugs are usually not the fault of the computer. Since computers
merely execute the instructions they are given, bugs are nearly always the result of programmer
error or an oversight made in the program's design.[33]
Rear Admiral Grace Hopper is credited for having first used the term 'bugs' in computing after a
dead moth was found shorting a relay of the Harvard Mark II computer in September 1947.[34]
Machine code
In most computers, individual instructions are stored as machine code with each instruction
being given a unique number (its operation code or opcode for short). The command to add two
numbers together would have one opcode, the command to multiply them would have a different
opcode and so on. The simplest computers are able to perform any of a handful of different
instructions; the more complex computers have several hundred to choose from—each with a
unique numerical code. Since the computer's memory is able to store numbers, it can also store
the instruction codes. This leads to the important fact that entire programs (which are just lists of
these instructions) can be represented as lists of numbers and can themselves be manipulated
inside the computer in the same way as numeric data. The fundamental concept of storing
programs in the computer's memory alongside the data they operate on is the crux of the von
Neumann, or stored program, architecture. In some cases, a computer might store some or all of
its program in memory that is kept separate from the data it operates on. This is called the
Harvard architecture after the Harvard Mark I computer. Modern von Neumann computers
display some traits of the Harvard architecture in their designs, such as in CPU caches.
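As a rough illustration of the stored-program idea, the following C sketch uses an invented
three-instruction code (the opcode numbers and memory layout are made up for this example and
do not belong to any real processor) to show a tiny program held in the same memory array as
its data and inspected just like ordinary numbers:

    #include <stdio.h>

    /* Invented three-instruction code: purely illustrative opcode numbers. */
    enum { OP_LOAD = 1, OP_ADD = 2, OP_HALT = 3 };

    int main(void) {
        /* One flat memory holds both the "program" and its data,
           as in the von Neumann (stored-program) model. */
        int memory[16] = {
            OP_LOAD, 10,   /* instruction: load from cell 10 */
            OP_ADD,  11,   /* instruction: add cell 11       */
            OP_HALT, 0,    /* instruction: stop              */
            0, 0, 0, 0,    /* unused cells 6 to 9            */
            25, 17         /* cells 10 and 11: plain data    */
        };

        /* Because instructions are just numbers, the program can be read,
           copied, or modified exactly like data. */
        for (int i = 0; i < 6; i++)
            printf("memory[%d] = %d\n", i, memory[i]);
        return 0;
    }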
While it is possible to write computer programs as long lists of numbers (machine language) and
while this technique was used with many early computers,[35] it is extremely tedious and
potentially error-prone to do so in practice, especially for complicated programs. Instead, each
basic instruction can be given a short name that is indicative of its function and easy to
remember—a mnemonic such as ADD, SUB, MULT or JUMP. These mnemonics are
collectively known as a computer's assembly language. Converting programs written in assembly
language into something the computer can actually understand (machine language) is usually
done by a computer program called an assembler. Machine languages and the assembly
languages that represent them (collectively termed low-level programming languages) tend to be
unique to a particular type of computer. For instance, an ARM architecture computer (such as
may be found in a PDA or a hand-held videogame) cannot understand the machine language of
an Intel Pentium or the AMD Athlon 64 computer that might be in a PC.[36]
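What an assembler does can be sketched as a simple table lookup. The mnemonics and opcode
numbers below are invented for illustration and do not correspond to any real instruction set:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative mnemonic-to-opcode table for this sketch only. */
    static const struct { const char *mnemonic; int opcode; } table[] = {
        { "ADD",  1 },
        { "SUB",  2 },
        { "MULT", 3 },
        { "JUMP", 4 },
    };

    /* Translate one mnemonic into its numeric opcode, as an assembler does. */
    int assemble(const char *mnemonic) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (strcmp(table[i].mnemonic, mnemonic) == 0)
                return table[i].opcode;
        return -1;  /* unknown mnemonic */
    }

    int main(void) {
        const char *source[] = { "ADD", "MULT", "JUMP" };
        for (int i = 0; i < 3; i++)
            printf("%-4s -> opcode %d\n", source[i], assemble(source[i]));
        return 0;
    }

A real assembler also resolves labels and operand addresses, but the core job is this translation
from human-readable names to machine numbers.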
Higher-level languages and program design
Though considerably easier than in machine language, writing long programs in assembly
language is often difficult and is also error prone. Therefore, most practical programs are written
in more abstract high-level programming languages that are able to express the needs of the
programmer more conveniently (and thereby help reduce programmer error). High level
languages are usually "compiled" into machine language (or sometimes into assembly language
and then into machine language) using another computer program called a compiler.[37] High
level languages are less related to the workings of the target computer than assembly language,
and more related to the language and structure of the problem(s) to be solved by the final
program. It is therefore often possible to use different compilers to translate the same high level
language program into the machine language of many different types of computer. This is part of
the means by which software like video games may be made available for different computer
architectures such as personal computers and various video game consoles.
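For comparison with the earlier assembly-style listing, the same task of adding the numbers from
1 to 1,000 might look like this in a high-level language such as C; a compiler translates the loop
into machine instructions much like those shown before:

    #include <stdio.h>

    int main(void) {
        int sum = 0;
        /* Add every number from 1 to 1,000. */
        for (int num = 1; num <= 1000; num++)
            sum += num;
        printf("%d\n", sum);   /* prints 500500 */
        return 0;
    }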
The task of developing large software systems presents a significant intellectual challenge.
Producing software with an acceptably high reliability within a predictable schedule and budget
has historically been difficult; the academic and professional discipline of software engineering
concentrates specifically on this challenge.
Function
Main articles: Central processing unit and Microprocessor
A general purpose computer has four main components: the arithmetic logic unit (ALU), the
control unit, the memory, and the input and output devices (collectively termed I/O). These parts
are interconnected by busses, often made of groups of wires.
Inside each of these parts are thousands to trillions of small electrical circuits which can be
turned off or on by means of an electronic switch. Each circuit represents a bit (binary digit) of
information so that when the circuit is on it represents a "1", and when off it represents a "0" (in
positive logic representation). The circuits are arranged in logic gates so that one or more of the
circuits may control the state of one or more of the other circuits.
The control unit, ALU, registers, and basic I/O (and often other hardware closely linked with
these) are collectively known as a central processing unit (CPU). Early CPUs were composed of
many separate components but since the mid-1970s CPUs have typically been constructed on a
single integrated circuit called a microprocessor.
Control unit
Main articles: CPU design and Control unit
Diagram showing how a particular MIPS architecture instruction would be decoded by the
control system.
The control unit (often called a control system or central controller) manages the computer's
various components; it reads and interprets (decodes) the program instructions, transforming
them into a series of control signals which activate other parts of the computer.[38] Control
systems in advanced computers may change the order of some instructions so as to improve
performance.
A key component common to all CPUs is the program counter, a special memory cell (a register)
that keeps track of which location in memory the next instruction is to be read from.[39]
The control system's function is as follows (a minimal sketch in code appears after the list); note
that this is a simplified description, and some of these steps may be performed concurrently or in
a different order depending on the type of CPU:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for each
of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from an
input device). The location of this required data is typically stored within the instruction
code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the
hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps an
output device.
8. Jump back to step (1).
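The cycle above can be sketched in C for a deliberately tiny, made-up machine; the instruction
set, encoding, and single accumulator register here are invented for illustration and are not those
of any real CPU:

    #include <stdio.h>

    /* Invented two-word instructions: an opcode followed by an operand. */
    enum { LOADI = 1, ADDM = 2, STORE = 3, HALT = 4 };

    int main(void) {
        int memory[32] = {
            LOADI, 5,     /* put the constant 5 in the accumulator  */
            ADDM,  20,    /* add the contents of memory cell 20     */
            STORE, 21,    /* write the result to memory cell 21     */
            HALT,  0,
        };
        memory[20] = 7;   /* data the program operates on */

        int pc = 0;       /* program counter */
        int acc = 0;      /* a single register (accumulator) */
        int running = 1;

        while (running) {
            int opcode  = memory[pc];        /* fetch the instruction      */
            int operand = memory[pc + 1];
            pc += 2;                         /* advance the program counter */
            switch (opcode) {                /* decode, then execute        */
            case LOADI: acc = operand;            break;
            case ADDM:  acc += memory[operand];   break;
            case STORE: memory[operand] = acc;    break;
            case HALT:  running = 0;              break;
            }
        }
        printf("result in cell 21: %d\n", memory[21]);  /* prints 12 */
        return 0;
    }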
Since the program counter is (conceptually) just another set of memory cells, it can be changed
by calculations done in the ALU. Adding 100 to the program counter would cause the next
instruction to be read from a place 100 locations further down the program. Instructions that
modify the program counter are often known as "jumps" and allow for loops (instructions that
are repeated by the computer) and often conditional instruction execution (both examples of
control flow).
It is noticeable that the sequence of operations that the control unit goes through to process an
instruction is in itself like a short computer program—and indeed, in some more complex CPU
designs, there is another yet smaller computer called a microsequencer that runs a microcode
program that causes all of these events to happen.
Arithmetic/logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.[40]
The set of arithmetic operations that a particular ALU supports may be limited to adding and
subtracting or might include multiplying or dividing, trigonometry functions (sine, cosine, etc.)
and square roots. Some can only operate on whole numbers (integers) whilst others use floating
point to represent real numbers—albeit with limited precision. However, any computer that is
capable of performing just the simplest operations can be programmed to break down the more
complex operations into simple steps that it can perform. Therefore, any computer can be
programmed to perform any arithmetic operation—although it will take more time to do so if its
ALU does not directly support the operation. An ALU may also compare numbers and return
boolean truth values (true or false) depending on whether one is equal to, greater than or less
than the other ("is 64 greater than 65?").
Logic operations involve Boolean logic: AND, OR, XOR and NOT. These can be useful both for
creating complicated conditional statements and processing boolean logic.
Superscalar computers may contain multiple ALUs so that they can process several instructions
at the same time.[41] Graphics processors and computers with SIMD and MIMD features often
provide ALUs that can perform arithmetic on vectors and matrices.
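What "arithmetic on vectors" means can be illustrated with an ordinary element-wise loop in C;
this is only a scalar sketch of the idea, since SIMD hardware would apply one instruction to all
the elements at once rather than one at a time:

    #include <stdio.h>

    int main(void) {
        double a[4] = { 1.0, 2.0, 3.0, 4.0 };
        double b[4] = { 10.0, 20.0, 30.0, 40.0 };
        double c[4];

        /* Element-wise vector addition. */
        for (int i = 0; i < 4; i++)
            c[i] = a[i] + b[i];

        for (int i = 0; i < 4; i++)
            printf("%g ", c[i]);   /* prints 11 22 33 44 */
        printf("\n");
        return 0;
    }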
Memory
Main article: Computer data storage

Magnetic core memory was the computer memory of choice throughout the 1960s, until it was
replaced by semiconductor memory.
A computer's memory can be viewed as a list of cells into which numbers can be placed or read.
Each cell has a numbered "address" and can store a single number. The computer can be
instructed to "put the number 123 into the cell numbered 1357" or to "add the number that is in
cell 1357 to the number that is in cell 2468 and put the answer into cell 1595". The information
stored in memory may represent practically anything. Letters, numbers, even computer
instructions can be placed into memory with equal ease. Since the CPU does not differentiate
between different types of information, it is the software's responsibility to give significance to
what the memory sees as nothing but a series of numbers.
In almost all modern computers, each memory cell is set up to store binary numbers in groups of
eight bits (called a byte). Each byte is able to represent 256 different numbers (2^8 = 256); either
from 0 to 255 or −128 to +127. To store larger numbers, several consecutive bytes may be used
(typically, two, four or eight). When negative numbers are required, they are usually stored in
two's complement notation. Other arrangements are possible, but are usually not seen outside of
specialized applications or historical contexts. A computer can store any kind of information in
memory if it can be represented numerically. Modern computers have billions or even trillions of
bytes of memory.
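The byte ranges and two's complement behaviour described above can be checked directly in C
using the standard fixed-width integer types:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t u   = 255;       /* one unsigned byte: values 0 to 255         */
        int8_t  s   = -128;      /* one signed byte: values -128 to +127       */
        int32_t big = 1000000;   /* four consecutive bytes hold larger numbers */

        /* In two's complement, the bit pattern 0xFF means 255 when read as
           unsigned but -1 when read as signed. */
        int8_t minus_one = (int8_t)0xFF;

        printf("%d %d %d %d\n", u, s, big, minus_one);  /* 255 -128 1000000 -1 */
        return 0;
    }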
The CPU contains a special set of memory cells called registers that can be read and written to
much more rapidly than the main memory area. There are typically between two and one
hundred registers depending on the type of CPU. Registers are used for the most frequently
needed data items to avoid having to access main memory every time data is needed. As data is
constantly being worked on, reducing the need to access main memory (which is often slow
compared to the ALU and control units) greatly increases the computer's speed.
Computer main memory comes in two principal varieties: random-access memory or RAM and
read-only memory or ROM. RAM can be read and written to anytime the CPU commands it, but
ROM is pre-loaded with data and software that never changes, so the CPU can only read from it.
ROM is typically used to store the computer's initial start-up instructions. In general, the contents
of RAM are erased when the power to the computer is turned off, but ROM retains its data
indefinitely. In a PC, the ROM contains a specialized program called the BIOS that orchestrates
loading the computer's operating system from the hard disk drive into RAM whenever the
computer is turned on or reset. In embedded computers, which frequently do not have disk
drives, all of the required software may be stored in ROM. Software stored in ROM is often
called firmware, because it is notionally more like hardware than software. Flash memory blurs
the distinction between ROM and RAM, as it retains its data when turned off but is also
rewritable. It is typically much slower than conventional ROM and RAM however, so its use is
restricted to applications where high speed is unnecessary.[42]
In more sophisticated computers there may be one or more RAM cache memories which are
slower than registers but faster than main memory. Generally computers with this sort of cache
are designed to move frequently needed data into the cache automatically, often without the need
for any intervention on the programmer's part.
Input/output (I/O)
Main article: Input/output

Hard disk drives are common storage devices used with computers.
I/O is the means by which a computer exchanges information with the outside world.[43] Devices
that provide input or output to the computer are called peripherals.[44] On a typical personal
computer, peripherals include input devices like the keyboard and mouse, and output devices
such as the display and printer. Hard disk drives, floppy disk drives and optical disc drives serve
as both input and output devices. Computer networking is another form of I/O.
Often, I/O devices are complex computers in their own right with their own CPU and memory. A
graphics processing unit might contain fifty or more tiny computers that perform the calculations
necessary to display 3D graphics[citation needed]. Modern desktop computers contain many smaller
computers that assist the main CPU in performing I/O.
Multitasking
Main article: Computer multitasking
While a computer may be viewed as running one gigantic program stored in its main memory, in
some systems it is necessary to give the appearance of running several programs simultaneously.
This is achieved by multitasking i.e. having the computer switch rapidly between running each
program in turn.[45]
One means by which this is done is with a special signal called an interrupt which can
periodically cause the computer to stop executing instructions where it was and do something
else instead. By remembering where it was executing prior to the interrupt, the computer can
return to that task later. If several programs are running "at the same time", then the interrupt
generator might be causing several hundred interrupts per second, causing a program switch each
time. Since modern computers typically execute instructions several orders of magnitude faster
than human perception, it may appear that many programs are running at the same time even
though only one is ever executing in any given instant. This method of multitasking is sometimes
termed "time-sharing" since each program is allocated a "slice" of time in turn.[46]
Before the era of cheap computers, the principal use for multitasking was to allow many people
to share the same computer.
Seemingly, multitasking would cause a computer that is switching between several programs to
run more slowly — in direct proportion to the number of programs it is running. However, most
programs spend much of their time waiting for slow input/output devices to complete their tasks.
If a program is waiting for the user to click on the mouse or press a key on the keyboard, then it
will not take a "time slice" until the event it is waiting for has occurred. This frees up time for
other programs to execute so that many programs may be run at the same time without
unacceptable speed loss.
Multiprocessing
Main article: Multiprocessing

Cray designed many supercomputers that used multiprocessing heavily.


Some computers are designed to distribute their work across several CPUs in a multiprocessing
configuration, a technique once employed only in large and powerful machines such as
supercomputers, mainframe computers and servers. Multiprocessor and multi-core (multiple
CPUs on a single integrated circuit) personal and laptop computers are now widely available,
and are being increasingly used in lower-end markets as a result.
Supercomputers in particular often have highly distinctive architectures that differ significantly
from the basic stored-program architecture and from general purpose computers.[47] They often feature
thousands of CPUs, customized high-speed interconnects, and specialized computing hardware.
Such designs tend to be useful only for specialized tasks due to the large scale of program
organization required to successfully utilize most of the available resources at once.
Supercomputers usually see usage in large-scale simulation, graphics rendering, and
cryptography applications, as well as with other so-called "embarrassingly parallel" tasks.
Networking and the Internet
Main articles: Computer networking and Internet

Visualization of a portion of the routes on the Internet.


Computers have been used to coordinate information between multiple locations since the 1950s.
The U.S. military's SAGE system was the first large-scale example of such a system, which led
to a number of special-purpose commercial systems like Sabre.[48]
In the 1970s, computer engineers at research institutions throughout the United States began to
link their computers together using telecommunications technology. This effort was funded by
ARPA (now DARPA), and the computer network that it produced was called the ARPANET.[49]
The technologies that made the Arpanet possible spread and evolved.
In time, the network spread beyond academic and military institutions and became known as the
Internet. The emergence of networking involved a redefinition of the nature and boundaries of
the computer. Computer operating systems and applications were modified to include the ability
to define and access the resources of other computers on the network, such as peripheral devices,
stored information, and the like, as extensions of the resources of an individual computer.
Initially these facilities were available primarily to people working in high-tech environments,
but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with
the development of cheap, fast networking technologies like Ethernet and ADSL saw computer
networking become almost ubiquitous. In fact, the number of computers that are networked is
growing phenomenally. A very large proportion of personal computers regularly connect to the
Internet to communicate and receive information. "Wireless" networking, often utilizing mobile
phone networks, has meant networking is becoming increasingly ubiquitous even in mobile
computing environments.
Misconceptions
A computer does not need to be electric, nor does it need a processor, RAM, or even a hard
disk. The minimal definition of a computer is anything that transforms information in a
purposeful way.[citation needed] However, the traditional definition of a computer is a device having
memory, mass storage, a processor (CPU), and input and output devices.[50] Anything less would be
a simple processor.
Required technology
Main article: Unconventional computing
Computational systems as flexible as a personal computer can be built out of almost anything.
For example, a computer can be made out of billiard balls (billiard ball computer); this is an
unintuitive but pedagogically useful example showing that a computer can be made out of almost
anything. More realistically, modern computers are made out of transistors made of
photolithographed semiconductors.
Historically, computers evolved from mechanical computers and eventually from vacuum tubes
to transistors.
There is active research to make computers out of many promising new types of technology,
such as optical computing, DNA computers, neural computers, and quantum computers. Some of
these can easily tackle problems that modern computers cannot (such as how quantum computers
can break some modern encryption algorithms by quantum factoring).
Computer architecture paradigms
Some different paradigms of how to build a computer from the ground-up:
RAM machines
These are the types of computers with a CPU, computer memory, etc., which understand
basic instructions in a machine language. The concept evolved from the Turing machine.
Brains
Brains are massively parallel processors made of neurons, wired in intricate patterns, that
communicate via electricity and neurotransmitter chemicals.
Programming languages
Such as the lambda calculus, or modern programming languages, are virtual computers
built on top of other computers.
Cellular automata
For example, the game of Life can create "gliders" and "loops" and other constructs that
transmit information; this paradigm can be applied to DNA computing, chemical
computing, etc.
Groups and committees
The linking of multiple computers (brains) is itself a computer
Logic gates are a common abstraction which can apply to most of the above digital or analog
paradigms.
The ability to store and execute lists of instructions called programs makes computers extremely
versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical
statement of this versatility: any computer with a minimum capability (being Turing-complete)
is, in principle, capable of performing the same tasks that any other computer can perform.
Therefore any type of computer (netbook, supercomputer, cellular automaton, etc.) is able to
perform the same computational tasks, given enough time and storage capacity.
Limited-function computers
Conversely, a computer which is limited in function (one that is not "Turing-complete") cannot
simulate arbitrary things. For example, simple four-function calculators cannot simulate a real
computer without human intervention. As a more complicated example, a gaming console that
cannot be programmed can never accomplish what a programmable calculator from the
1990s could (given enough time); the system as a whole is not Turing-complete, even though it
contains a Turing-complete component (the microprocessor). Living organisms (the body, not
the brain) are also limited-function computers designed to make copies of themselves; they
cannot be reprogrammed without genetic engineering.
Virtual computers
A "computer" is commonly considered to be a physical device. However, one can create a
computer program which describes how to run a different computer, i.e. "simulating a computer
in a computer". Not only is this a constructive proof of the Church-Turing thesis, but is also
extremely common in all modern computers. For example, some programming languages use
something called an interpreter, which is a simulated computer built on top of the basic
computer; this allows programmers to write code (computer input) in a different language than
the one understood by the base computer (the alternative is to use a compiler). Additionally,
virtual machines are simulated computers which virtually replicate a physical computer in
software, and are very commonly used in IT. Virtual machines are also a common technique
used to create emulators, such as game console emulators.
Further topics
• Glossary of computers
Artificial intelligence
Computers solve problems in exactly the way they are programmed to, without regard to
efficiency, alternative solutions, possible shortcuts, or possible errors in the code.
Computer programs which learn and adapt are part of the emerging field of artificial intelligence
and machine learning.
Hardware
The term hardware covers all of those parts of a computer that are tangible objects. Circuits,
displays, power supplies, cables, keyboards, printers and mice are all hardware.

History of computing hardware

First generation (mechanical/electromechanical)
    Programmable devices: Jacquard loom, Analytical engine, Harvard Mark I, Z3
Second generation (vacuum tubes)
    Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
    Programmable devices: Colossus, ENIAC, Manchester Small-Scale Experimental Machine,
        EDSAC, Manchester Mark 1, Ferranti Pegasus, Ferranti Mercury, CSIRAC, EDVAC,
        UNIVAC I, IBM 701, IBM 702, IBM 650, Z22
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
    Mainframes: IBM 7090, IBM 7080, IBM System/360, BUNCH
    Minicomputers: PDP-8, PDP-11, IBM System/32, IBM System/36
Fourth generation (VLSI integrated circuits)
    Minicomputers: VAX, IBM System i
    4-bit microcomputers: Intel 4004, Intel 4040
    8-bit microcomputers: Intel 8008, Intel 8080, Motorola 6800, Motorola 6809,
        MOS Technology 6502, Zilog Z80
    16-bit microcomputers: Intel 8088, Zilog Z8000, WDC 65816/65802
    32-bit microcomputers: Intel 80386, Pentium, Motorola 68000, ARM architecture
    64-bit microcomputers[51]: Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64
    Embedded computers: Intel 8048, Intel 8051
    Personal computers: Desktop computer, Home computer, Laptop computer,
        Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
Theoretical/experimental: Quantum computer, Chemical computer, DNA computing,
    Optical computer, Spintronics-based computer

Other hardware topics

Peripheral device (input/output)
    Output: Monitor, Printer, Loudspeaker
    Both input and output: Floppy disk drive, Hard disk drive, Optical disc drive, Teleprinter
Computer busses
    Short range: RS-232, SCSI, PCI, USB
    Long range (computer networking): Ethernet, ATM, FDDI
Software
Main article: Computer software
Software refers to parts of the computer which do not have a material form, such as programs,
data, protocols, etc. When software is stored in hardware that cannot easily be modified (such as
the BIOS ROM in an IBM PC compatible), it is sometimes called "firmware" to indicate that it falls
into an uncertain area somewhere between hardware and software.

Computer software

Operating system
    GNU/Linux: List of Linux distributions, Comparison of Linux distributions
    Microsoft Windows: Windows 95, Windows 98, Windows NT, Windows 2000, Windows XP,
        Windows Vista, Windows 7
    DOS: 86-DOS (QDOS), PC-DOS, MS-DOS, DR-DOS, FreeDOS
    Mac OS: Mac OS classic, Mac OS X
    Embedded and real-time: List of embedded operating systems
    Experimental: Amoeba, Oberon/Bluebottle, Plan 9 from Bell Labs
Library
    Multimedia: DirectX, OpenGL, OpenAL
    Programming library: C standard library, Standard Template Library
Data
    Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
    File format: HTML, XML, JPEG, MPEG, PNG
User interface
    Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE,
        GEM, Aqua
    Text-based user interface: Command-line interface, Text user interface
Application
    Office suite: Word processing, Desktop publishing, Presentation program, Database
        management system, Scheduling & Time management, Spreadsheet, Accounting software
    Internet access: Browser, E-mail client, Web server, Mail transfer agent, Instant messaging
    Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant
        management, Robotic manufacturing, Supply chain management
    Graphics: Raster graphics editor, Vector graphics editor, 3D modeler, Animation editor,
        3D computer graphics, Video editing, Image processing
    Audio: Digital audio editor, Audio playback, Mixing, Audio synthesis, Computer music
    Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated
        development environment, Software performance analysis, Revision control, Software
        configuration management
    Educational: Edutainment, Educational game, Serious game, Flight simulator
    Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively
        multiplayer, Interactive fiction
    Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package
        management systems, File manager
Programming languages
Main article: Programming language
Programming languages provide various ways of specifying programs for computers to run.
Unlike natural languages, programming languages are designed to permit no ambiguity and to be
concise. They are purely written languages and are often difficult to read aloud. They are
generally either translated into machine code by a compiler or an assembler before being run, or
translated directly at run time by an interpreter. Sometimes programs are executed by a hybrid
method of the two techniques. There are thousands of different programming languages—some
intended to be general purpose, others useful only for highly specialized applications.
Programming languages

Lists of programming languages: Timeline of programming languages, List of programming
    languages by category, Generational list of programming languages, List of programming
    languages, Non-English-based programming languages
Commonly used assembly languages: ARM, MIPS, x86
Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran,
    Java, Lisp, Pascal, Object Pascal
Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl
Professions and organizations
As the use of computers has spread throughout society, there are an increasing number of careers
involving computers.
Computer-related professions

Hardware-related: Electrical engineering, Electronic engineering, Computer engineering,
    Telecommunications engineering, Optical engineering, Nanoengineering
Software-related: Computer science, Desktop publishing, Human–computer interaction,
    Information technology, Information systems, Computational science, Software engineering,
    Video game industry, Web design
The need for computers to work well together and to be able to exchange information has
spawned the need for many standards organizations, clubs and societies of both a formal and
informal nature.
Organizations

Standards groups: ANSI, IEC, IEEE, IETF, ISO, W3C
Professional societies: ACM, AIS, IET, IFIP, BCS
Free/Open source software groups: Free Software Foundation, Mozilla Foundation, Apache
    Software Foundation

See also
Information technology portal

• Computability theory
• Computer security
• Computer insecurity
• List of computer term etymologies
• List of fictional computers
• Pulse computation
Notes
1. ^ In 1946, ENIAC required an estimated 174 kW. By comparison, a modern laptop
computer may use around 30 W; nearly six thousand times less. "Approximate Desktop
& Notebook Power Usage". University of Pennsylvania. Retrieved 2009-06-20.
2. ^ Early computers such as Colossus and ENIAC were able to process between 5 and 100
operations per second. A modern "commodity" microprocessor (as of 2007) can process
billions of operations per second, and many of these operations are more complicated and
useful than early computer operations. "Intel Core2 Duo Mobile Processor: Features".
Intel Corporation. Retrieved 2009-06-20.
3. ^ computer, n. Oxford English Dictionary (2nd ed.). Oxford University Press. 1989.
Retrieved 2009-04-10.
4. ^ Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the
Quantum Computer. New York: John Wiley & Sons. ISBN 0471396710. From 2700 to
2300 BC, Georges Ifrah, p. 11.
5. ^ Berkeley, Edmund (1949). Giant Brains, or Machines That Think. John Wiley & Sons.
pp. 19. Edmund Berkeley
6. ^ According to advertising on Pickett's N600 slide rule boxes."Pickett Apollo Box
Scans". Copland.udel.edu. Retrieved 2010-02-20.
7. ^"Discovering How Greeks Computed in 100 B.C.". The New York Times. 31 July 2008.
Retrieved 27 March 2010.
8. ^"Heron of Alexandria". Retrieved 2008-01-15.
9. ^ Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine.
Chicago: Washington Institute. pp. 8. Dorr E. Felt
10. ^"Speaking machines". The parlour review, Philadelphia1 (3). January 20, 1838.
Retrieved October 11, 2010.
11. ^ Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine.
Chicago: Washington Institute. pp. 10. Dorr E. Felt
12. ^ "Pascal and Leibnitz, in the seventeenth century, and Diderot at a later period,
endeavored to construct a machine which might serve as a substitute for human
intelligence in the combination of figures" The Gentleman's magazine, Volume 202,
p.100
13. ^ Babbage's Difference engine in 1823 and his Analytical engine in the mid 1830s
14. ^ "It is reasonable to inquire, therefore, whether it is possible to devise a machine which
will do for mathematical computation what the automatic lathe has done for engineering.
The first suggestion that such a machine could be made came more than a hundred years
ago from the mathematician Charles Babbage. Babbage's ideas have only been properly
appreciated in the last ten years, but we now realize that he understood clearly all the
fundamental principles which are embodied in modern digital computers" Faster than
thought, edited by B. V. Bowden, 1953, Pitman publishing corporation
15. ^ "...Among this extraordinary galaxy of talent Charles Babbage appears to be one of the
most remarkable of all. Most of his life he spent in an entirely unsuccessful attempt to
make a machine which was regarded by his contemporaries as utterly preposterous, and
his efforts were regarded as futile, time-consuming and absurd. In the last decade or so
we have learnt how his ideas can be embodied in a modern digital computer. He
understood more about the logic of these machines than anyone else in the world had
learned until after the end of the last war" Foreword, Irascible Genius, Charles Babbage,
inventor by Maboth Moseley, 1964, London, Hutchinson
16. ^ In the proposal that Aiken gave IBM in 1937 while requesting funding for the Harvard
Mark I we can read: "Few calculating machines have been designed strictly for
application to scientific investigations, the notable exceptions being those of Charles
Babbage and others who followed him....After abandoning the difference engine,
Babbage devoted his energy to the design and construction of an analytical engine of far
higher powers than the difference engine....Since the time of Babbage, the development
of calculating machinery has continued at an increasing rate." Howard Aiken, Proposed
automatic calculating machine, reprinted in: The origins of Digital computers, Selected
Papers, Edited by Brian Randell, 1973, ISBN 3-540-06169-X
17. ^ "Parallel processors composed of these high-performance microprocessors are
becoming the supercomputing technology of choice for scientific and engineering
applications", 1993, "Microprocessors: From Desktops to Supercomputers". Science
Magazine. Retrieved 2011-04-23.
18. ^Intel Museum - The 4004, Big deal then, Big deal now
19. ^ Please read Sumlock ANITA calculator#History of ANITA calculators
20. ^From cave paintings to the internet HistoryofScience.com
21. ^ See: Anthony Hyman, ed., Science and Reform: Selected Works of Charles Babbage
(Cambridge, England: Cambridge University Press, 1989), page 298. It is in the
collection of the Science Museum in London, England. (Delve (2007), page 99.)
22. ^ The analytical engine should not be confused with Babbage's difference engine which
was a non-programmable mechanical calculator.
23. ^"Columbia University Computing History: Herman Hollerith". Columbia.edu. Retrieved
2010-12-11.
24. ^ ab"Alan Turing – Time 100 People of the Century". Time Magazine. Retrieved 2009-
06-13. "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or
a word-processing program, is working on an incarnation of a Turing machine"
25. ^"Atanasoff-Berry Computer". Retrieved 2010-11-20.
26. ^"Spiegel: The inventor of the computer's biography was published". Spiegel.de. 2009-
09-28. Retrieved 2010-12-11.
27. ^"Inventor Profile: George R. Stibitz". National Inventors Hall of Fame Foundation, Inc..
28. ^ Rojas, R. (1998). "How to make Zuse's Z3 a universal computer". IEEE Annals of the
History of Computing 20 (3): 51–54. doi:10.1109/85.707574.
29. ^ B. Jack Copeland, ed., Colossus: The Secrets of Bletchley Park's Codebreaking
Computers, Oxford University Press, 2006
30. ^"Robot Mathematician Knows All The Answers", October 1944, Popular Science.
Books.google.com. Retrieved 2010-12-11.
31. ^Lavington 1998, p. 37
32. ^ This program was written similarly to those for the PDP-11 minicomputer and shows
some typical things a computer can do. All the text after the semicolons are comments for
the benefit of human readers. These have no significance to the computer and are
ignored. (Digital Equipment Corporation 1972)
33. ^ It is not universally true that bugs are solely due to programmer oversight. Computer
hardware may fail or may itself have a fundamental problem that produces unexpected
results in certain situations. For instance, the Pentium FDIV bug caused some
Intel microprocessors in the early 1990s to produce inaccurate results for certain floating
point division operations. This was caused by a flaw in the microprocessor design and
resulted in a partial recall of the affected devices.
34. ^ Taylor, Alexander L., III (1984-04-16). "The Wizard Inside the Machine". TIME.
Retrieved 2007-02-17.
35. ^ Even some later computers were commonly programmed directly in machine code.
Some minicomputers like the DEC PDP-8 could be programmed directly from a panel of
switches. However, this method was usually used only as part of the booting process.
Most modern computers boot entirely automatically by reading a boot program from
some non-volatile memory.
36. ^ However, there is sometimes some form of machine language compatibility between
different computers. An x86-64 compatible microprocessor like the AMD Athlon 64 is
able to run most of the same programs that an Intel Core 2 microprocessor can, as well as
programs designed for earlier microprocessors like the Intel Pentiums and Intel 80486.
This contrasts with very early commercial computers, which were often one-of-a-kind
and totally incompatible with other computers.
37. ^ High level languages are also often interpreted rather than compiled. Interpreted
languages are translated into machine code on the fly, while running, by another program
called an interpreter.
38. ^ The control unit's role in interpreting instructions has varied somewhat in the past.
Although the control unit is solely responsible for instruction interpretation in most
modern computers, this is not always the case. Many computers include some
instructions that may only be partially interpreted by the control system and partially
interpreted by another device. This is especially the case with specialized computing
hardware that may be partially self-contained. For example, EDVAC, one of the earliest
stored-program computers, used a central control unit that only interpreted four
instructions. All of the arithmetic-related instructions were passed on to its arithmetic unit
and further decoded there.
39. ^ Instructions often occupy more than one memory address, so the program counter
usually increases by the number of memory locations required to store one instruction; a
simplified sketch of this behavior appears after these notes.
40. ^ David J. Eck (2000). The Most Complex Machine: A Survey of Computers and
Computing. A K Peters, Ltd. p. 54. ISBN 9781568811284.
41. ^ Erricos John Kontoghiorghes (2006). Handbook of Parallel Computing and Statistics.
CRC Press. p. 45. ISBN 9780824740672.
42. ^ Flash memory also may only be rewritten a limited number of times before wearing
out, making it less useful for heavy random access usage. (Verma & Mielke 1988)
43. ^ Donald Eadie (1968). Introduction to the Basic Computer. Prentice-Hall. p. 12.
44. ^ Arpad Barna; Dan I. Porat (1976). Introduction to Microcomputers and the
Microprocessors. Wiley. p. 85. ISBN 9780471050513.
45. ^ Jerry Peek; Grace Todino; John Strang (2002). Learning the UNIX Operating System:
A Concise Guide for the New User. O'Reilly. p. 130. ISBN 9780596002619.
46. ^ Gillian M. Davis (2002). Noise Reduction in Speech Applications. CRC Press. p. 111.
ISBN 9780849309496.
47. ^ However, it is also very common to construct supercomputers out of many pieces of
cheap commodity hardware, usually individual computers connected by networks. These
so-called computer clusters can often provide supercomputer performance at a much
lower cost than customized designs. While custom architectures are still used for most of
the most powerful supercomputers, there has been a proliferation of cluster computers in
recent years. (TOP500 2006)
48. ^ Agatha C. Hughes (2000). Systems, Experts, and Computers. MIT Press. p. 161.
ISBN 9780262082853. "The experience of SAGE helped make possible the first truly
large-scale commercial real-time network: the SABRE computerized airline reservations
system..."
49. ^"A Brief History of the Internet". Internet Society. Retrieved 2008-09-20.
50. ^"What is a computer?". Webopedia. Retrieved 25 February 2011.
51. ^ Most major 64-bit instruction set architectures are extensions of earlier designs. All of
the architectures listed in this table, except for Alpha, existed in 32-bit forms before their
64-bit incarnations were introduced.
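The behavior described in note 39 can be illustrated with a short sketch. The following C
program is purely hypothetical: the opcodes, their lengths, and the memory contents are
invented for illustration and do not describe any real processor. It simply fetches an opcode
and then advances the program counter by however many memory words that instruction occupies.

#include <stdio.h>

/* Invented opcodes for a toy machine (illustration only). */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2 };

/* Assumed instruction lengths, in memory words, for this toy machine. */
static int instruction_length(int opcode) {
    switch (opcode) {
        case OP_LOAD: return 2;   /* opcode + one address   */
        case OP_ADD:  return 3;   /* opcode + two addresses */
        default:      return 1;   /* OP_HALT and anything else */
    }
}

int main(void) {
    /* Toy program: LOAD 40; ADD 41, 42; HALT */
    int memory[] = { OP_LOAD, 40, OP_ADD, 41, 42, OP_HALT };
    int pc = 0;   /* program counter, an index into memory */

    while (memory[pc] != OP_HALT) {
        int opcode = memory[pc];
        printf("pc=%d  opcode=%d\n", pc, opcode);
        /* ...a real machine would decode and execute the instruction here... */
        pc += instruction_length(opcode);   /* advance past the whole instruction */
    }
    printf("halted at pc=%d\n", pc);
    return 0;
}

Because LOAD occupies two words and ADD occupies three, the counter in this sketch steps
from 0 to 2 to 5 rather than increasing by one each time.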
References
• Kempf, Karl (1961). Historical Monograph: Electronic Computers Within the Ordnance
Corps. Aberdeen Proving Ground (United States Army).
• Phillips, Tony (2000). "The Antikythera Mechanism I". American Mathematical Society.
Retrieved 2006-04-05.
• Shannon, Claude Elwood (1940). A symbolic analysis of relay and switching circuits.
Massachusetts Institute of Technology.
• Digital Equipment Corporation (1972). PDP-11/40 Processor Handbook (PDF). Maynard,
MA: Digital Equipment Corporation.
• Verma, G.; Mielke, N. (1988). Reliability performance of ETOX based flash memories.
IEEE International Reliability Physics Symposium.
• Meuer, Hans; Strohmaier, Erich; Simon, Horst; Dongarra, Jack (2006-11-13).
"Architectures Share Over Time". TOP500. Retrieved 2006-11-27.
• Lavington, Simon (1998). A History of Manchester Computers (2 ed.). Swindon: The
British Computer Society. ISBN 0902505018.
• Stokes, Jon (2007). Inside the Machine: An Illustrated Introduction to Microprocessors
and Computer Architecture. San Francisco: No Starch Press. ISBN 978-1-59327-104-6.
• Felt, Dorr E. (1916). Mechanical arithmetic, or The history of the counting machine.
Chicago: Washington Institute.
• Ifrah, Georges (2001). The Universal History of Computing: From the Abacus to the
Quantum Computer. New York: John Wiley & Sons. ISBN 0471396710.
• Berkeley, Edmund (1949). Giant Brains, or Machines That Think. John Wiley & Sons.
External links
Find more about Computer on Wikipedia's sister projects: definitions from Wiktionary;
images and media from Commons; learning resources from Wikiversity; news stories from
Wikinews; quotations from Wikiquote; source texts from Wikisource; textbooks from
Wikibooks.
• A Brief History of Computing – slideshow by Life magazine