Computer
Etymology
A human computer, with microscope and calculator, 1952
According to the Oxford English Dictionary, the first known use of the word "computer" was
in 1613 in a book called The Yong Mans Gleanings by English writer Richard Braithwait: "I
haue [sic] read the truest computer of Times, and the best Arithmetician that euer [sic]
breathed, and he reduceth thy dayes into a short number." This usage of the term referred
to a human computer, a person who carried out calculations or computations. The word
continued with the same meaning until the middle of the 20th century. During the latter
part of this period women were often hired as computers because they could be paid less
than their male counterparts.[1] By 1943, most human computers were women.[2]
The Online Etymology Dictionary gives the first attested use of "computer" in the 1640s,
meaning "one who calculates"; this is an "agent noun from compute (v.)". The same source
dates the use of the term to mean "'calculating machine' (of any type)" to 1897, and the
"modern use" of the term, meaning "programmable digital electronic computer", to "1945
under this name; [in a] theoretical [sense] from 1937, as Turing machine".[3]
History
Main article: History of computing hardware
Pre-20th century
A slide rule.
The slide rule was invented around 1620–1630, shortly after the publication of the concept
of the logarithm. It is a hand-operated analog computer for doing multiplication and
division. As slide rule development progressed, added scales provided reciprocals, squares
and square roots, cubes and cube roots, as well as transcendental functions such as
logarithms and exponentials, circular and hyperbolic trigonometry and other functions.
Slide rules with special scales are still used for quick performance of routine calculations,
such as the E6B circular slide rule used for time and distance calculations on light aircraft.
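The principle at work is that multiplication can be reduced to the addition of logarithms, since log(xy) = log x + log y; sliding the scales adds the two logarithmic lengths. A minimal sketch of that idea (illustrative Python, not part of the original text; the function name is invented for the example):

import math

def slide_rule_multiply(x, y):
    # Adding the logarithmic "lengths" of x and y corresponds to sliding
    # the two scales of a slide rule against each other.
    combined_length = math.log10(x) + math.log10(y)
    # Reading the result back off the scale is the inverse operation.
    return 10 ** combined_length

print(slide_rule_multiply(3, 7))   # ~21.0 (a real slide rule gives 2-3 significant figures)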
In the 1770s, Pierre Jaquet-Droz, a Swiss watchmaker, built a mechanical doll (automaton)
that could write holding a quill pen. By switching the number and order of its internal
wheels different letters, and hence different messages, could be produced. In effect, it could
be mechanically "programmed" to read instructions. Along with two other complex
machines, the doll is at the Musée d'Art et d'Histoire of Neuchâtel, Switzerland, and still
operates.[15]
In 1831–1835, mathematician and engineer Giovanni Plana devised a Perpetual Calendar
machine, which, through a system of pulleys and cylinders, could predict
the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of
leap years and varying day length. The tide-predicting machine invented by Sir William
Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of
pulleys and wires to automatically calculate predicted tide levels for a set period at a
particular location.
The differential analyser, a mechanical analog computer designed to solve differential
equations by integration, used wheel-and-disc mechanisms to perform the integration. In
1876, Lord Kelvin had already discussed the possible construction of such calculators, but
he had been stymied by the limited output torque of the ball-and-disk integrators.[16] In a
differential analyzer, the output of one integrator drove the input of the next integrator, or
a graphing output. The torque amplifier was the advance that allowed these machines to
work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential
analyzers.
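The integration these machines performed can be illustrated numerically: solving a differential equation amounts to repeatedly accumulating small increments of the quantity being integrated. The sketch below (illustrative Python using Euler's method, an assumption of this example rather than anything a differential analyser literally did) integrates dy/dt = y from y(0) = 1:

def integrate(dy_dt, y0, t_end, dt=0.001):
    # Repeatedly accumulate small increments: the numerical analogue of
    # the mechanical integrator's turning disc.
    y, t = y0, 0.0
    while t < t_end:
        y += dy_dt(y) * dt
        t += dt
    return y

print(integrate(lambda y: y, 1.0, 1.0))   # ~2.72, close to e = 2.71828...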
First computing device
A portion of Babbage's Difference engine.
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a
programmable computer. Considered the "father of the computer",[17] he conceptualized
and invented the first mechanical computer in the early 19th century. After working on his
revolutionary difference engine, designed to aid in navigational calculations, in 1833 he
realized that a much more general design, an Analytical Engine, was possible. The input of
programs and data was to be provided to the machine via punched cards, a method being
used at the time to direct mechanical looms such as the Jacquard loom. For output, the
machine would have a printer, a curve plotter and a bell. The machine would also be able to
punch numbers onto cards to be read in later. The Engine incorporated an arithmetic logic
unit, control flow in the form of conditional branching and loops, and integrated memory,
making it the first design for a general-purpose computer that could be described in
modern terms as Turing-complete.[18][19]
The machine was about a century ahead of its time. All the parts for his machine had to be
made by hand – this was a major problem for a device with thousands of parts. Eventually,
the project was dissolved with the decision of the British Government to cease funding.
Babbage's failure to complete the analytical engine can be chiefly attributed to political and
financial difficulties as well as his desire to develop an increasingly sophisticated computer
and to move ahead faster than anyone else could follow. Nevertheless, his son, Henry
Babbage, completed a simplified version of the analytical engine's computing unit (the mill)
in 1888. He gave a successful demonstration of its use in computing tables in 1906.
Analog computers
Main article: Analog computer
Sir William Thomson's third tide-predicting machine design, 1879–81
During the first half of the 20th century, many scientific computing needs were met by
increasingly sophisticated analog computers, which used a direct mechanical or electrical
model of the problem as a basis for computation. However, these were not programmable
and generally lacked the versatility and accuracy of modern digital computers.[20] The first
modern analog computer was a tide-predicting machine, invented by Sir William
Thomson in 1872. The differential analyser, a mechanical analog computer designed to
solve differential equations by integration using wheel-and-disc mechanisms, was
conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin.[16]
The art of mechanical analog computing reached its zenith with the differential analyzer,
built by H. L. Hazen and Vannevar Bush at MIT starting in 1927. This built on the
mechanical integrators of James Thomson and the torque amplifiers invented by H. W.
Nieman. A dozen of these devices were built before their obsolescence became obvious. By
the 1950s, the success of digital electronic computers had spelled the end for most analog
computing machines, but analog computers remained in use during the 1950s in some
specialized applications such as education (slide rule) and aircraft (control systems).
Digital computers
Electromechanical
By 1938, the United States Navy had developed an electromechanical analog computer
small enough to use aboard a submarine. This was the Torpedo Data Computer, which used
trigonometry to solve the problem of firing a torpedo at a moving target. During World War
II similar devices were developed in other countries as well.
ENIAC was the first electronic, Turing-complete device, and performed ballistics trajectory
calculations for the United States Army.
The ENIAC[37] (Electronic Numerical Integrator and Computer) was the first electronic
programmable computer built in the U.S. Although the ENIAC was similar to the Colossus, it
was much faster, more flexible, and it was Turing-complete. Like the Colossus, a "program"
on the ENIAC was defined by the states of its patch cables and switches, a far cry from
the stored program electronic machines that came later. Once a program was written, it
had to be mechanically set into the machine with manual resetting of plugs and switches.
The programmers of the ENIAC were six women, often known collectively as the "ENIAC
girls".[38][39]
It combined the high speed of electronics with the ability to be programmed for many
complex problems. It could add or subtract 5000 times a second, a thousand times faster
than any other machine. It also had modules to multiply, divide, and square root. High
speed memory was limited to 20 words (about 80 bytes). Built under the direction of John
Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and
construction lasted from 1943 to full operation at the end of 1945. The machine was huge,
weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum
tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors.[40]
Modern computers
Concept of modern computer
The principle of the modern computer was proposed by Alan Turing in his seminal 1936
paper,[41] On Computable Numbers. Turing proposed a simple device that he called
"Universal Computing machine" and that is now known as a universal Turing machine. He
proved that such a machine is capable of computing anything that is computable by
executing instructions (program) stored on tape, allowing the machine to be
programmable. The fundamental concept of Turing's design is the stored program, where
all the instructions for computing are stored in memory. Von Neumann acknowledged that
the central concept of the modern computer was due to this paper.[42] Turing machines are
to this day a central object of study in theory of computation. Except for the limitations
imposed by their finite memory stores, modern computers are said to be Turing-complete,
which is to say, they have algorithm execution capability equivalent to a universal Turing
machine.
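As a rough illustration of Turing's idea, the sketch below simulates a tiny Turing machine (illustrative Python, not from the article; the rule table is the classic two-state "busy beaver"). The machine repeatedly reads the symbol under its head, consults its rule table, writes a symbol, moves, and changes state until it halts; a universal machine is one whose rule table lets it read another machine's rules from the tape itself:

# A minimal Turing-machine simulator. Started on a blank tape, this rule
# table (the 2-state "busy beaver") writes four 1s and halts after six steps.
RULES = {
    ("A", 0): (1, +1, "B"),   # (state, symbol read) -> (symbol to write, move, next state)
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, state="A"):
    tape, head = {}, 0                      # unbounded tape; blank cells read as 0
    while state != "HALT":
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write                  # rewrite the current cell
        head += move                        # move the head left or right
    return tape

print(sorted(run(RULES).items()))           # [(-2, 1), (-1, 1), (0, 1), (1, 1)]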
Stored programs
Main article: Stored-program computer
The development of the MOS integrated circuit led to the invention of the
microprocessor,[85][86] and heralded an explosion in the commercial and personal use of
computers. While the subject of exactly which device was the first microprocessor is
contentious, partly due to lack of agreement on the exact definition of the term
"microprocessor", it is largely undisputed that the first single-chip microprocessor was
the Intel 4004,[87] designed and realized by Federico Faggin with his silicon-gate MOS IC
technology,[85] along with Ted Hoff, Masatoshi Shima and Stanley Mazor at Intel.[88][89] In
the early 1970s, MOS IC technology enabled the integration of more than 10,000 transistors
on a single chip.[59]
Systems on a chip (SoCs) are complete computers on a microchip (or chip) the size of a
coin.[90] They may or may not have integrated RAM and flash memory. If not integrated, the
RAM is usually placed directly above (known as package on package) or below (on the
opposite side of the circuit board) the SoC, and the flash memory is usually placed right
next to the SoC. This is all done to improve data transfer speeds, as the data signals do not
have to travel long distances. Since ENIAC in 1945, computers have advanced enormously,
with modern SoCs (such as the Snapdragon 865) being the size of a coin while also being
hundreds of thousands of times more powerful than ENIAC, integrating billions of
transistors, and consuming only a few watts of power.
Mobile computers
The first mobile computers were heavy and ran from mains power. The 50 lb IBM 5100 was
an early example. Later portables such as the Osborne 1 and Compaq Portable were
considerably lighter but still needed to be plugged in. The first laptops, such as the Grid
Compass, removed this requirement by incorporating batteries – and with the continued
miniaturization of computing resources and advancements in portable battery life, portable
computers grew in popularity in the 2000s.[91] The same developments allowed
manufacturers to integrate computing resources into cellular mobile phones by the early
2000s.
These smartphones and tablets run on a variety of operating systems and recently became
the dominant computing device on the market.[92] These are powered by System on a
Chip (SoCs), which are complete computers on a microchip the size of a coin.[90]
Types
See also: Classes of computers
Computers can be classified in a number of different ways, including:
By architecture
Analog computer
Digital computer
Hybrid computer
Harvard architecture
Von Neumann architecture
Complex instruction set computer
Reduced instruction set computer
By size, form-factor and purpose
Supercomputer
Mainframe computer
Minicomputer (term no longer used)
Server
o Rackmount server
o Blade server
o Tower server
Personal computer
o Workstation
o Microcomputer (term no longer used)
Home computer
o Desktop computer
Tower desktop
Slimline desktop
Multimedia computer (non-linear editing system computers,
video editing PCs and the like)
Gaming computer
All-in-one PC
Nettop (Small form factor PCs, Mini PCs)
Home theater PC
Keyboard computer
Portable computer
Thin client
Internet appliance
o Laptop
Desktop replacement computer
Gaming laptop
Rugged laptop
2-in-1 PC
Ultrabook
Chromebook
Subnotebook
Netbook
Mobile computers:
o Tablet computer
o Smartphone
o Ultra-mobile PC
o Pocket PC
o Palmtop PC
o Handheld PC
Wearable computer
o Smartwatch
o Smartglasses
Single-board computer
Plug computer
Stick PC
Programmable logic controller
Computer-on-module
System on module
System in a package
System-on-chip (Also known as an Application Processor or AP if it lacks circuitry
such as radio circuitry)
Microcontroller
Hardware
Main articles: Computer hardware, Personal computer hardware, Central processing unit,
and Microprocessor
Video demonstrating the standard components of a "slimline" computer
The term hardware covers all of those parts of a computer that are tangible physical
objects. Circuits, computer chips, graphic cards, sound cards, memory (RAM), motherboard,
displays, power supplies, cables, keyboards, printers and "mice" input devices are all
hardware.
History of computing hardware
Main article: History of computing hardware
First generation (mechanical/electromechanical)
o Calculators: Pascal's calculator, Arithmometer, Difference engine, Quevedo's analytical machines
o Programmable devices: Jacquard loom, Analytical engine, IBM ASCC/Harvard Mark I, Harvard Mark II, IBM SSEC, Z1, Z2, Z3
Second generation (vacuum tubes)
o Calculators: Atanasoff–Berry Computer, IBM 604, UNIVAC 60, UNIVAC 120
Third generation (discrete transistors and SSI, MSI, LSI integrated circuits)
o Minicomputer: HP 2116A, IBM System/32, IBM System/36, LINC, PDP-8, PDP-11
Fourth generation (VLSI integrated circuits)
o 64-bit microcomputer:[93] Alpha, MIPS, PA-RISC, PowerPC, SPARC, x86-64, ARMv8-A
o Personal computer: Desktop computer, Home computer, Laptop computer, Personal digital assistant (PDA), Portable computer, Tablet PC, Wearable computer
Theoretical/experimental
o Quantum computer, Chemical computer, DNA computing, Optical computer, Spintronics-based computer, Wetware/Organic computer
Other hardware topics
Peripheral device (input/output)
o Input: Mouse, keyboard, joystick, image scanner, webcam, graphics tablet, microphone
o Output: Monitor, printer, loudspeaker
Computer buses
o Long range (computer networking): Ethernet, ATM, FDDI
A general purpose computer has four main components: the arithmetic logic unit (ALU),
the control unit, the memory, and the input and output devices (collectively termed I/O).
These parts are interconnected by buses, often made of groups of wires. Inside each of
these parts are thousands to trillions of small electrical circuits which can be turned off or
on by means of an electronic switch. Each circuit represents a bit (binary digit) of
information so that when the circuit is on it represents a "1", and when off it represents a
"0" (in positive logic representation). The circuits are arranged in logic gates so that one or
more of the circuits may control the state of one or more of the other circuits.
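One of the simplest useful arrangements of logic gates is a half adder, which adds two bits using an XOR gate for the sum and an AND gate for the carry; it shows how the state of two circuits can determine the state of two others. A minimal sketch (illustrative Python, with the gates modelled as Boolean operators; this example is not from the original text):

def half_adder(a, b):
    # Two logic gates are enough to add two one-bit numbers:
    total = a ^ b      # XOR gate: sum bit
    carry = a & b      # AND gate: carry bit
    return carry, total

for a in (0, 1):
    for b in (0, 1):
        print(a, "+", b, "=", half_adder(a, b))   # printed as (carry, sum)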
Input devices
When unprocessed data is sent to the computer with the help of input devices, the data is
processed and sent to output devices. The input devices may be hand-operated or
automated. The act of processing is mainly regulated by the CPU. Some examples of input
devices are:
Computer keyboard
Digital camera
Digital video
Graphics tablet
Image scanner
Joystick
Microphone
Mouse
Overlay keyboard
Real-time clock
Trackball
Touchscreen
Output devices
The means through which a computer gives output are known as output devices. Some
examples of output devices are:
Computer monitor
Printer
PC speaker
Projector
Sound card
Video card
Control unit
Main articles: CPU design and Control unit
In simplified terms, the control unit carries out each program instruction by repeating the
following cycle:
1. Read the code for the next instruction from the cell indicated by the program counter.
2. Decode the numerical code for the instruction into a set of commands or signals for
each of the other systems.
3. Increment the program counter so it points to the next instruction.
4. Read whatever data the instruction requires from cells in memory (or perhaps from
an input device). The location of this required data is typically stored within the
instruction code.
5. Provide the necessary data to an ALU or register.
6. If the instruction requires an ALU or specialized hardware to complete, instruct the
hardware to perform the requested operation.
7. Write the result from the ALU back to a memory location or to a register or perhaps
an output device.
8. Jump back to step (1).
Since the program counter is (conceptually) just another set of memory cells, it can be
changed by calculations done in the ALU. Adding 100 to the program counter would cause
the next instruction to be read from a place 100 locations further down the program.
Instructions that modify the program counter are often known as "jumps" and allow for
loops (instructions that are repeated by the computer) and often conditional instruction
execution (both examples of control flow).
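The cycle above, including a jump that simply overwrites the program counter, can be sketched as a short program. The toy machine below is illustrative Python; its instruction set and register names are invented for this example and do not correspond to any real CPU:

# A toy CPU: "program" holds the instructions, "reg" holds working values,
# and "pc" is the program counter.
def run(program):
    reg = {"sum": 0, "n": 1}
    pc = 0                                     # index of the next instruction
    while True:
        op, *args = program[pc]                # fetch and decode
        pc += 1                                # point at the following instruction
        if op == "add":                        # add register b into register a
            a, b = args
            reg[a] += reg[b]
        elif op == "addi":                     # add an immediate constant to a register
            a, k = args
            reg[a] += k
        elif op == "jlt":                      # jump: if reg a < k, overwrite the pc
            a, k, target = args
            if reg[a] < k:
                pc = target
        elif op == "halt":
            return reg

program = [
    ("add",  "sum", "n"),    # 0: sum += n
    ("addi", "n", 1),        # 1: n += 1
    ("jlt",  "n", 6, 0),     # 2: while n < 6, jump back to instruction 0
    ("halt",),               # 3: stop
]
print(run(program))          # {'sum': 15, 'n': 6}, i.e. 1+2+3+4+5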
The sequence of operations that the control unit goes through to process an instruction is
in itself like a short computer program, and indeed, in some more complex CPU designs,
there is another yet smaller computer called a microsequencer, which runs
a microcode program that causes all of these events to happen.
Central processing unit (CPU)
Main articles: Central processing unit and Microprocessor
The control unit, ALU, and registers are collectively known as a central processing
unit (CPU). Early CPUs were composed of many separate components. Since the 1970s,
CPUs have typically been constructed on a single MOS integrated circuit chip called
a microprocessor.
Arithmetic logic unit (ALU)
Main article: Arithmetic logic unit
The ALU is capable of performing two classes of operations: arithmetic and logic.[96] The set
of arithmetic operations that a particular ALU supports may be limited to addition and
subtraction, or might include multiplication, division, trigonometry functions such as sine,
cosine, etc., and square roots. Some can only operate on whole numbers (integers) while
others use floating point to represent real numbers, albeit with limited precision. However,
any computer that is capable of performing just the simplest operations can be
programmed to break down the more complex operations into simple steps that it can
perform. Therefore, any computer can be programmed to perform any arithmetic
operation—although it will take more time to do so if its ALU does not directly support the
operation. An ALU may also compare numbers and return boolean truth values (true or
false) depending on whether one is equal to, greater than or less than the other ("is 64
greater than 65?"). Logic operations involve Boolean logic: AND, OR, XOR, and NOT. These
can be useful for creating complicated conditional statements and processing boolean logic.
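For instance, a machine whose ALU provides only addition and comparison can still multiply by repeated addition, exactly as described above. A minimal sketch (illustrative Python, non-negative integers only; not taken from the original text):

def multiply(a, b):
    # Multiplication built only from the "simple" operations:
    # repeated addition plus a comparison.
    product = 0
    count = 0
    while count < b:            # comparison (the ALU's "less than")
        product = product + a   # addition
        count = count + 1       # addition
    return product

print(multiply(7, 6))   # 42, using repeated additions instead of a multiply instruction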
Superscalar computers may contain multiple ALUs, allowing them to process several
instructions simultaneously.[97] Graphics processors and computers
with SIMD and MIMD features often contain ALUs that can perform arithmetic
on vectors and matrices.
Memory
Main articles: Computer memory and Computer data storage
In more sophisticated computers there may be one or more RAM cache memories, which
are slower than registers but faster than main memory. Generally computers with this sort
of cache are designed to move frequently needed data into the cache automatically, often
without the need for any intervention on the programmer's part.
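The idea can be sketched in software (illustrative Python; real hardware caches are built very differently): a small, fast store sits in front of a slow main memory, automatically keeps the most recently used words, and evicts the least recently used one when it fills up:

from collections import OrderedDict
import time

MAIN_MEMORY = {addr: addr * 2 for addr in range(1000)}   # stand-in for slow main memory

class Cache:
    def __init__(self, capacity=4):
        self.lines = OrderedDict()      # most recently used entries, newest last
        self.capacity = capacity

    def read(self, addr):
        if addr in self.lines:                    # cache hit: served from the fast store
            self.lines.move_to_end(addr)
            return self.lines[addr]
        time.sleep(0.001)                         # pretend main memory is slow
        value = MAIN_MEMORY[addr]                 # cache miss: fetch from main memory
        self.lines[addr] = value                  # ...and keep it for next time
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)        # evict the least recently used line
        return value

cache = Cache()
for addr in (1, 2, 1, 1, 2, 3):                   # repeated addresses hit the cache
    print(addr, cache.read(addr))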
Input/output (I/O)
Main article: Input/output
Software
Main article: Computer software
Software refers to parts of the computer which do not have a material form, such as
programs, data, protocols, etc. Software is that part of a computer system that consists of
encoded information or computer instructions, in contrast to the physical hardware from
which the system is built. Computer software includes computer programs, libraries and
related non-executable data, such as online documentation or digital media. It is often
divided into system software and application software. Computer hardware and software
require each other and neither can be realistically used on its own. When software is stored
in hardware that cannot easily be modified, such as with BIOS ROM in an IBM PC
compatible computer, it is sometimes called "firmware".
Operating system
o DOS: 86-DOS (QDOS), IBM PC DOS, MS-DOS, DR-DOS, FreeDOS
o Macintosh operating systems: Classic Mac OS, macOS (previously OS X and Mac OS X)
o Embedded and real-time: List of embedded operating systems
Library
o Programming library: C standard library, Standard Template Library
Data
o Protocol: TCP/IP, Kermit, FTP, HTTP, SMTP
o File format: HTML, XML, JPEG, MPEG, PNG
User interface
o Graphical user interface (WIMP): Microsoft Windows, GNOME, KDE, QNX Photon, CDE, GEM, Aqua
o Text-based user interface: Command-line interface, Text user interface
Application software
o Design and manufacturing: Computer-aided design, Computer-aided manufacturing, Plant management, Robotic manufacturing, Supply chain management
o Software engineering: Compiler, Assembler, Interpreter, Debugger, Text editor, Integrated development environment, Software performance analysis, Revision control, Software configuration management
o Educational: Edutainment, Educational game, Serious game, Flight simulator
o Games: Strategy, Arcade, Puzzle, Simulation, First-person shooter, Platform, Massively multiplayer, Interactive fiction
o Misc: Artificial intelligence, Antivirus software, Malware scanner, Installer/Package management systems, File manager
Languages
There are thousands of different programming languages—some intended to be general
purpose, others useful only for highly specialized applications.
Programming languages
o Commonly used assembly languages: ARM, MIPS, x86
o Commonly used high-level programming languages: Ada, BASIC, C, C++, C#, COBOL, Fortran, PL/I, REXX, Java, Lisp, Pascal, Object Pascal
o Commonly used scripting languages: Bourne script, JavaScript, Python, Ruby, PHP, Perl
Programs
The defining feature of modern computers which distinguishes them from all other
machines is that they can be programmed. That is to say that some type
of instructions (the program) can be given to the computer, and it will process them.
Modern computers based on the von Neumann architecture often have machine code in the
form of an imperative programming language. In practical terms, a computer program may
be just a few instructions or extend to many millions of instructions, as do the programs
for word processors and web browsers for example. A typical modern computer can
execute billions of instructions per second (gigaflops) and rarely makes a mistake over
many years of operation. Large computer programs consisting of several million
instructions may take teams of programmers years to write, and due to the complexity of
the task almost certainly contain errors.
Stored program architecture
Main articles: Computer program and Computer programming
The following example, written in the MIPS assembly language, computes the sum of the
numbers from 1 to 999:
begin:
  addi $8, $0, 0       # initialize sum to 0
  addi $9, $0, 1       # set first number to add = 1
loop:
  slti $10, $9, 1000   # check if the number is less than 1000
  beq $10, $0, finish  # if the number is no longer less than 1000, exit
  add $8, $8, $9       # update sum
  addi $9, $9, 1       # get next number
  j loop               # repeat the summing process
finish:
  add $2, $8, $0       # put sum in output register
Once told to run this program, the computer will perform the repetitive addition task
without further human intervention. It will almost never make a mistake and a modern PC
can complete the task in a fraction of a second.
Machine code
In most computers, individual instructions are stored as machine code with each
instruction being given a unique number (its operation code or opcode for short). The
command to add two numbers together would have one opcode; the command to multiply
them would have a different opcode, and so on. The simplest computers are able to
perform any of a handful of different instructions; the more complex computers have
several hundred to choose from, each with a unique numerical code. Since the computer's
memory is able to store numbers, it can also store the instruction codes. This leads to the
important fact that entire programs (which are just lists of these instructions) can be
represented as lists of numbers and can themselves be manipulated inside the computer in
the same way as numeric data. The fundamental concept of storing programs in the
computer's memory alongside the data they operate on is the crux of the von Neumann, or
stored program[citation needed], architecture. In some cases, a computer might store some or all
of its program in memory that is kept separate from the data it operates on. This is called
the Harvard architecture after the Harvard Mark I computer. Modern von Neumann
computers display some traits of the Harvard architecture in their designs, such as in CPU
caches.
While it is possible to write computer programs as long lists of numbers (machine
language) and while this technique was used with many early computers,[104] it is extremely
tedious and potentially error-prone to do so in practice, especially for complicated
programs. Instead, each basic instruction can be given a short name that is indicative of its
function and easy to remember – a mnemonic such as ADD, SUB, MULT or JUMP. These
mnemonics are collectively known as a computer's assembly language. Converting
programs written in assembly language into something the computer can actually
understand (machine language) is usually done by a computer program called an
assembler.
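What an assembler does can be sketched very simply: it replaces each mnemonic with its numeric opcode, turning the readable program into the list of numbers the machine actually stores. The opcode table and mnemonics below are invented for illustration (Python), not a real instruction set:

# Invented opcode table for a toy machine: each mnemonic maps to a number.
OPCODES = {"LOAD": 1, "ADD": 2, "SUB": 3, "STORE": 4, "JUMP": 5, "HALT": 6}

def assemble(lines):
    machine_code = []
    for line in lines:
        mnemonic, *operands = line.split()
        machine_code.append(OPCODES[mnemonic])         # the operation code
        machine_code.extend(int(x) for x in operands)  # its numeric operands
    return machine_code

source = [
    "LOAD 7",     # load the value at address 7
    "ADD 8",      # add the value at address 8
    "STORE 9",    # store the result at address 9
    "HALT",
]
print(assemble(source))   # [1, 7, 2, 8, 4, 9, 6], just a list of numbers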
A 1970s punched card containing one line from a Fortran program. The card reads: "Z(1) =
Y + W(1)" and is labeled "PROJ039" for identification purposes.
Programming language
Main article: Programming language
Programming languages provide various ways of specifying programs for computers to
run. Unlike natural languages, programming languages are designed to permit no
ambiguity and to be concise. They are purely written languages and are often difficult to
read aloud. They are generally either translated into machine code by a compiler or
an assembler before being run, or translated directly at run time by an interpreter.
Sometimes programs are executed by a hybrid method of the two techniques.
Low-level languages
Main article: Low-level programming language
Machine languages and the assembly languages that represent them (collectively
termed low-level programming languages) are generally unique to the particular
architecture of a computer's central processing unit (CPU). For instance, an ARM
architecture CPU (such as may be found in a smartphone or a hand-held videogame) cannot
understand the machine language of an x86 CPU that might be in a PC.[105] Historically a
significant number of other CPU architectures were created and saw extensive use, notably
including the MOS Technology 6502 and 6510 in addition to the Zilog Z80.
High-level languages
Main article: High-level programming language
Although considerably easier than in machine language, writing long programs in assembly
language is often difficult and is also error prone. Therefore, most practical programs are
written in more abstract high-level programming languages that are able to express the
needs of the programmer more conveniently (and thereby help reduce programmer error).
High level languages are usually "compiled" into machine language (or sometimes into
assembly language and then into machine language) using another computer program
called a compiler.[106] High level languages are less related to the workings of the target
computer than assembly language, and more related to the language and structure of the
problem(s) to be solved by the final program. It is therefore often possible to use different
compilers to translate the same high level language program into the machine language of
many different types of computer. This is part of the means by which software like video
games may be made available for different computer architectures such as personal
computers and various video game consoles.
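For comparison with the MIPS assembly example above, the same computation, summing the numbers from 1 to 999, is a single line in a high-level language; the compiler or interpreter, not the programmer, worries about registers and jumps. (Python is used here purely as an illustrative high-level language.)

# The same task as the MIPS example, expressed at a high level:
total = sum(range(1, 1000))   # add up 1, 2, ..., 999
print(total)                  # 499500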
Program design
Program design of small programs is relatively simple and involves the analysis of the
problem, collection of inputs, using the programming constructs within languages, devising
or using established procedures and algorithms, providing data for output devices and
solutions to the problem as applicable. As problems become larger and more complex,
features such as subprograms, modules, formal documentation, and new paradigms such as
object-oriented programming are encountered. Large programs involving thousands of lines
of code and more require formal software methodologies. The task of developing
large software systems presents a significant intellectual challenge. Producing software
with an acceptably high reliability within a predictable schedule and budget has historically
been difficult; the academic and professional discipline of software
engineering concentrates specifically on this challenge.
Bugs
Main article: Software bug
The actual first computer bug, a moth found trapped on a relay of the Harvard Mark II
computer
Errors in computer programs are called "bugs". They may be benign and not affect the
usefulness of the program, or have only subtle effects. But in some cases, they may cause
the program or the entire system to "hang", becoming unresponsive to input such
as mouse clicks or keystrokes, to completely fail, or to crash. Otherwise benign bugs may
sometimes be harnessed for malicious intent by an unscrupulous user writing an exploit,
code designed to take advantage of a bug and disrupt a computer's proper execution. Bugs
are usually not the fault of the computer. Since computers merely execute the instructions
they are given, bugs are nearly always the result of programmer error or an oversight
made in the program's design.[107] Admiral Grace Hopper, an American computer scientist
and developer of the first compiler, is credited for having first used the term "bugs" in
computing after a dead moth was found shorting a relay in the Harvard Mark II computer
in September 1947.[108]
Unconventional computers
Main article: Human computer
See also: Harvard Computers
A computer does not need to be electronic, nor even have a processor, nor RAM, nor even
a hard disk. While popular usage of the word "computer" is synonymous with a personal
electronic computer, the modern[111] definition of a computer is literally: "A device that
computes, especially a programmable [usually] electronic machine that performs high-
speed mathematical or logical operations or that assembles, stores, correlates, or otherwise
processes information."[112] Any device which processes information qualifies as a computer,
especially if the processing is purposeful.[citation needed]
Future
There is active research to make computers out of many promising new types of
technology, such as optical computers, DNA computers, neural computers, and quantum
computers. Most computers are universal, and are able to calculate any computable
function, and are limited only by their memory capacity and operating speed. However,
different designs of computers can give very different performance for particular
problems; for example, quantum computers can potentially break some modern encryption
algorithms (by quantum factoring) very quickly.
Computer architecture paradigms
There are many types of computer architectures.
Computer-related professions
The need for computers to work well together and to be able to exchange information has
spawned the need for many standards organizations, clubs and societies of both a formal
and informal nature.
Organizations
See also
Glossary of computers
Computability theory
Computer insecurity
Computer security
Glossary of computer hardware terms
History of computer science
List of computer term etymologies
List of fictional computers
List of pioneers in computer science
Pulse computation
TOP500 (list of most powerful computers)
Unconventional computing
References