Microprocessor
The integration of a whole CPU onto a single or a few integrated circuits using
Very-Large-Scale Integration (VLSI) greatly reduced the cost of processing power.
Integrated circuit processors are produced in large numbers by highly automated
metal–oxide–semiconductor (MOS) fabrication processes, resulting in a relatively
low unit price. Single-chip processors increase reliability because there are fewer
electrical connections that could fail. As microprocessor designs improve, the cost
of manufacturing a chip (with smaller components built on a semiconductor chip the
same size) generally stays the same according to Rock's law.
Before microprocessors, small computers had been built using racks of circuit
boards with many medium- and small-scale integrated circuits, typically of TTL
type. Microprocessors combined this into one or a few large-scale ICs. While there
is disagreement over who deserves credit for the invention of the microprocessor,
the first commercially available microprocessor was the Intel 4004, designed by
Federico Faggin and introduced in 1971.[2]
Structure
The ability to put large numbers of transistors on one chip makes it feasible to
integrate memory on the same die as the processor. This CPU cache has the advantage
of faster access than off-chip memory and increases the processing speed of the
system for many applications. Processor clock frequency has increased more rapidly
than external memory speed, so cache memory is necessary if the processor is not to
be delayed by slower external memory.
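The benefit of on-chip cache can be quantified with the standard average memory access time (AMAT) model: average latency equals the cache hit time plus the miss rate times the penalty of going to external memory. The sketch below uses illustrative latency figures chosen for this example, not measurements of any particular processor.

```python
# Average memory access time (AMAT): why on-chip cache keeps a fast
# processor from being delayed by slow external memory.
# All latency figures below are illustrative assumptions.

def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    """AMAT = hit time + miss rate * miss penalty (in nanoseconds)."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Without a cache, every access pays the full external-memory latency.
no_cache = amat(hit_time_ns=100.0, miss_rate=0.0, miss_penalty_ns=0.0)

# With an on-chip cache: fast hits, occasional misses to external memory.
with_cache = amat(hit_time_ns=1.0, miss_rate=0.05, miss_penalty_ns=100.0)

print(no_cache)    # 100.0 ns per access
print(with_cache)  # 6.0 ns per access on average
```

Even a modest 95% hit rate brings the average access close to the on-chip speed, which is why cache sizes grew as the gap between processor clocks and memory latency widened.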
Special-purpose designs
A microprocessor is a general-purpose entity. Several specialized processing
devices have followed:
Embedded applications
Thousands of items that were traditionally not computer-related include
microprocessors. These include household appliances, vehicles (and their
accessories), tools and test instruments, toys, light switches/dimmers and
electrical circuit breakers, smoke alarms, battery packs, and hi-fi audio/visual
components (from DVD players to phonograph turntables). Products such as cellular
telephones, DVD video systems and HDTV broadcast systems fundamentally require
consumer devices with powerful, low-cost microprocessors. Increasingly stringent
pollution control standards effectively require automobile manufacturers to use
microprocessor engine management systems to allow optimal control of emissions over
the widely varying operating conditions of an automobile. Non-programmable controls
would require a bulky or costly implementation to achieve the results possible with
a microprocessor.
History
See also: Microprocessor chronology
The advent of low-cost computers on integrated circuits has transformed modern
society. General-purpose microprocessors in personal computers are used for
computation, text editing, multimedia display, and communication over the Internet.
Many more microprocessors are part of embedded systems, providing digital control
over myriad objects from appliances to automobiles to cellular phones and
industrial process control. Microprocessors perform binary operations based on
Boolean logic, named after George Boole. That computer systems could be operated
using Boolean logic was first demonstrated in a 1938 thesis by master's student
Claude Shannon, who later became a professor and is widely regarded as the father
of information theory.
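The binary operations mentioned above reduce to Boolean logic in a concrete way: binary addition can be built entirely from AND, OR, and XOR gates. The sketch below shows the textbook one-bit full adder and a ripple-carry word adder built from it; this is the standard gate-level construction, not a description of any specific processor's circuitry.

```python
# A one-bit full adder built only from Boolean operations (AND, OR, XOR),
# illustrating how binary arithmetic reduces to Boolean logic.

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    sum_bit = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return sum_bit, carry_out

def add_words(x, y, width=8):
    """Ripple-carry addition of two unsigned integers, bit by bit."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

print(add_words(23, 42))  # 65
```

Chaining the carry output of each bit position into the next is exactly what the arithmetic logic unit of an early microprocessor did in hardware, one gate delay per stage.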
Following the development of MOS integrated circuit chips in the early 1960s, MOS
chips reached higher transistor density and lower manufacturing costs than bipolar
integrated circuits by 1964. MOS chips further increased in complexity at a rate
predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of
transistors on a single MOS chip by the late 1960s. The application of MOS LSI
chips to computing was the basis for the first microprocessors, as engineers began
recognizing that a complete computer processor could be contained on several MOS
LSI chips.[6] Designers in the late 1960s were striving to integrate the central
processing unit (CPU) functions of a computer onto a handful of MOS LSI chips,
called microprocessor unit (MPU) chipsets.
While there is disagreement over who invented the microprocessor,[2] the first
commercially produced microprocessor was the Intel 4004, released as a single MOS
LSI chip in 1971.[7] The single-chip microprocessor was made possible with the
development of MOS silicon-gate technology (SGT).[8] The earliest MOS transistors
had aluminium metal gates, which Italian physicist Federico Faggin replaced with
silicon self-aligned gates to develop the first silicon-gate MOS chip at Fairchild
Semiconductor in 1968.[8] Faggin later joined Intel and used his silicon-gate MOS
technology to develop the 4004, along with Marcian Hoff, Stanley Mazor and
Masatoshi Shima in 1971.[9] The 4004 was designed for Busicom, which had proposed
a multi-chip design in 1969 before Faggin's team at Intel reworked it into a
single-chip design. The 4-bit 4004 was soon followed by the 8-bit Intel 8008 in
1972.
Other embedded uses of 4-bit and 8-bit microprocessors, such as terminals,
printers, various kinds of automation etc., followed soon after. Affordable 8-bit
microprocessors with 16-bit addressing also led to the first general-purpose
microcomputers from the mid-1970s on.
Since the early 1970s, the increase in capacity of microprocessors has followed
Moore's law, which originally suggested that the number of components that can be
fitted onto a chip doubles every year. Observed progress slowed to a doubling
roughly every two years,[11] and Moore later revised the period accordingly.[12]
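Moore's law is simple exponential arithmetic, and a quick projection shows its force. The sketch below models a doubling every two years starting from the Intel 4004's commonly cited count of about 2,300 transistors; the model and starting figure are illustrative, not a claim about any particular chip's actual roadmap.

```python
# Moore's law as a simple doubling model: transistor count doubles
# roughly every `period` years. Starting figure (~2,300 transistors
# for the Intel 4004 in 1971) is the commonly cited count.

def projected_transistors(start_count, start_year, year, period=2):
    """Project transistor count assuming one doubling per `period` years."""
    doublings = (year - start_year) / period
    return start_count * 2 ** doublings

# Ten years (five doublings) from the 4004's introduction:
print(round(projected_transistors(2300, 1971, 1981)))  # 73600
```

Five doublings multiply the count by 32, which is why even a two-year doubling period compounds into orders-of-magnitude growth within a decade.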
First projects
Three projects delivered a microprocessor at about the same time: Garrett
AiResearch's Central Air Data Computer (CADC) (1970), Texas Instruments' TMS 1802NC
(September 1971) and Intel's 4004 (November 1971, based on an earlier 1969 Busicom
design). Arguably, the Four-Phase Systems AL1 microprocessor was also delivered in
1969.