Term Paper OF ECE-201 (Digital Electronics) Topic:-Study of Microprocessors
TERM PAPER
OF
ECE-201
(DIGITAL ELECTRONICS)
TOPIC: STUDY OF MICROPROCESSORS
RD6802B50
10810899
ACKNOWLEDGEMENT
1. INTRODUCTION
2. HISTORY
3. CLASSIFICATION OF MICROPROCESSORS
4. MICROPROCESSOR APPLICATIONS
5. BIBLIOGRAPHY
MICROPROCESSORS
INTRODUCTION:
Microprocessors are regarded as one of the most important devices in our everyday machines
called computers. Before we start, we need to understand what exactly microprocessors are
and their appropriate implementations. A microprocessor is an electronic circuit that functions
as the central processing unit (CPU) of a computer, providing computational control.
Microprocessors are also used in other advanced electronic systems, such as computer
printers, automobiles, and jet airliners. Typical microprocessors incorporate arithmetic and
logic functional units as well as the associated control logic, instruction-processing circuitry,
and a portion of the memory hierarchy. Portions of the interface logic for the input/output
(I/O) and memory subsystems may also be integrated, allowing cheaper overall systems. While
many microprocessors are single-chip designs, some high-performance designs rely on a few
chips to provide multiple functional units and relatively large caches.
When combined with other integrated circuits that provide storage for data and programs,
often on a single semiconductor base to form a chip, the microprocessor becomes the heart of
a small computer, or microcomputer.
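The CPU role described above, fetching instructions from memory, decoding them, and executing them with the arithmetic/logic unit, can be sketched with a toy machine. The three-instruction set (LOAD, ADD, HALT) below is purely illustrative and does not correspond to any real processor's instruction set.

```python
# Minimal sketch of the fetch-decode-execute cycle a microprocessor's
# control logic performs. The instruction set here is hypothetical.

def run(memory, pc=0):
    """Execute a tiny three-instruction machine: LOAD, ADD, HALT."""
    acc = 0  # accumulator register fed by the ALU
    while True:
        opcode, operand = memory[pc]      # fetch the next instruction
        pc += 1
        if opcode == "LOAD":              # decode and execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "HALT":
            return acc

program = [("LOAD", 2), ("ADD", 3), ("HALT", 0)]
print(run(program))  # → 5
```

A real microprocessor does the same loop in hardware, with the program counter, decoder, and ALU operating on binary-encoded instructions rather than Python tuples.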
HISTORY:
The first digital computers were built in the 1940s using bulky relay and vacuum-tube
switches. Relays had mechanical speed limitations. Vacuum tubes required considerable
power, dissipated a significant amount of heat, and suffered high failure rates. Some systems
achieved processing rates up to 1,000 operations per second. In 1947, Bell Laboratories
invented the transistor, which rapidly replaced the vacuum tube as a computer switch for
several reasons, including smaller size, faster switching speeds, lower power consumption
and dissipation, and higher reliability. In the 1960s Texas Instruments invented the integrated
circuit, allowing a single silicon chip to contain several transistors as well as their
interconnections.
The first microprocessor was the Intel 4004, produced in 1971. Originally developed for a
calculator, and revolutionary for its time, the 4-bit chip contained 2,300 transistors and
could perform 60,000 operations per second. The first 8-bit
microprocessor was the Intel 8008, developed in 1972 to run computer terminals. The Intel
8008 contained 3,300 transistors. The first truly general-purpose microprocessor, developed
in 1974, was the 8-bit Intel 8080, which contained 4,500 transistors and could execute
200,000 instructions per second. By 1989, 32-bit microprocessors containing 1.2 million
transistors and capable of executing 20 million instructions per second had been introduced.
Reductions in both device size and power dissipation are essential in achieving these high
densities. Smaller device sizes also allow faster switching speeds, which in turn permit higher
processor clock rates. Increased density also lets designers add circuitry to increase the
amount of work performed within a cycle. On many benchmarks, high-end microprocessors
are two orders of magnitude faster than the DEC VAX-11/780 minicomputer, a performance
standard in the 1970s. Key distinctions between mainframe and high-performance
microprocessor based systems often are simply physical size, ability to handle large amounts
of I/O, and software issues.
Microprocessors are fabricated using techniques similar to those used for other integrated
circuits, such as memory chips. Microprocessors generally have a more complex structure
than do other chips, and their manufacture requires extremely precise techniques. The first
step in producing a microprocessor is the creation of an ultra-pure silicon substrate, a silicon
slice in the shape of a round wafer that is polished to a mirror-like smoothness. At present,
the largest wafers used in industry are 300 mm (12 in) in diameter. Economical
manufacturing of microprocessors requires mass production. Several hundred dies, or circuit
patterns, are created on the surface of a silicon wafer simultaneously. Microprocessors are
constructed by a process of deposition and removal of conducting, insulating, and
semiconducting materials, one thin layer at a time, until, after hundreds of separate steps, a
complex sandwich is constructed that contains all the interconnected circuitry of the
microprocessor. Only the outer surface of the silicon wafer—a layer about 10 microns (about
0.01 mm/0.0004 in) thick, or about one-tenth the thickness of a human hair—is used for the
electronic circuit. The processing steps include substrate creation, oxidation, lithography,
etching, ion implantation, and film deposition. In the oxidation step, an electrically
nonconducting layer, called a dielectric, is placed between each conductive layer on the wafer.
The most important type of dielectric is silicon dioxide, which is “grown” by exposing
the silicon wafer to oxygen in a furnace at about 1000°C (about 1800°F).
The oxygen combines with the silicon to form a thin layer of oxide about 75 angstroms deep
(an angstrom is one ten-billionth of a meter). Nearly every layer that is deposited on the
wafer must be patterned accurately into the shape of the transistors and other electronic
elements. Usually this is done in a process known as photolithography, which is analogous to
transforming the wafer into a piece of photographic film and projecting a picture of the
circuit on it. A coating on the surface of the wafer, called the photoresist or resist, changes
when exposed to light, making it easy to dissolve in a developing solution. These patterns are
as small as 0.13 microns in size. Because the shortest wavelength of visible light is about 0.5
microns, short wavelength ultraviolet light must be used to resolve the tiny details of the
patterns. After photolithography, the wafer is etched—that is, the resist is removed from the
wafer either by chemicals, in a process known as wet etching, or by exposure to a corrosive
gas, in a special vacuum chamber.
In the next step of the process, ion implantation, also called doping, impurities such as boron
and phosphorus are introduced into the silicon to alter its conductivity. This is accomplished
by ionizing the boron or phosphorus atoms and propelling them at the wafer with an ion
implanter at very high energies. The ions become embedded in the surface of the wafer. The
thin layers used to build up a microprocessor are referred to as films. In the final step of the
process, the films are deposited using sputterers in which thin films are grown in a plasma; by
means of evaporation, whereby the material is melted and then evaporated coating the wafer;
or by means of chemical-vapor deposition, whereby the material condenses from a gas at low
or atmospheric pressure. In each case, the film must be of high purity and its thickness must
be controlled within a small fraction of a micron. Microprocessor features are so small and
precise that a single speck of dust can destroy an entire die. The rooms used for
microprocessor creation are called clean rooms because the air in them is extremely well
filtered and virtually free of dust. The purest of today's clean rooms are referred to as class 1,
indicating that there is no more than one speck of dust per cubic foot of air.
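The claim that several hundred dies fit on a 300 mm wafer can be checked with a rough geometric estimate. The 1 cm × 1 cm die size below is an assumed example, and the calculation ignores edge loss and scribe lines, so it only approximates real yields.

```python
import math

# Rough upper bound on dies per wafer: wafer area divided by die area.
# The die size is an assumed example, not a figure from the text.
wafer_diameter_mm = 300
die_side_mm = 10          # assumed 1 cm x 1 cm die

wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
die_area = die_side_mm ** 2
dies = int(wafer_area / die_area)
print(dies)  # → 706, i.e. several hundred dies, consistent with the text
```

This is why mass production is economical: one pass through the hundreds of processing steps patterns every die on the wafer simultaneously.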
CLASSIFICATION OF MICROPROCESSORS:
Microprocessors can be grouped into several functional classes. The types used most
frequently are as follows:
INTEL MICROPROCESSORS:
4004 (1971)
Intel's Ted Hoff and Federico Faggin designed and implemented (respectively) the first
general-purpose microprocessor. The 4004 processor, used in a hand-held calculator built by
Busicom of Japan, was part of a four-chip set called the 4000 Family:
· 4001 - 2,048-bit ROM memory
· 4002 - 320-bit RAM memory
· 4003 - 10-bit I/O shift register
· 4004 - 4-bit central processor
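A 4-bit processor such as the 4004 operates on values 0 through 15, so arithmetic results wrap around and produce a carry. This behavior can be sketched as follows; the helper below is illustrative and is not Intel's actual microarchitecture.

```python
# Sketch of 4-bit addition as a 4-bit datapath would perform it:
# the result is masked to 4 bits and the overflow becomes a carry flag.
def add4(a, b):
    total = a + b
    return total & 0xF, total > 0xF   # (4-bit result, carry-out)

print(add4(9, 8))   # → (1, True): 17 wraps to 1 with a carry
print(add4(3, 4))   # → (7, False): no overflow
```

Multi-digit decimal arithmetic on a calculator chip like the 4004 was built by chaining such 4-bit operations, one digit at a time, using the carry.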
8008 (1972)
The 8008 increased the 4004's word length from four to eight bits, and doubled the volume of
information that could be processed. It was still an invention in search of a market, however,
as the technology world was just beginning to view the microprocessor as a solution to many
needs.
8080 (1974)
The 8080 was 20 times as fast as the 4004 and contained twice as many transistors.
This 8-bit chip represented a technological milestone as engineers recognized its value and
used it in a wide variety of products. It was perhaps most notable as the processor in the first
kit computer, the Altair, which ignited the personal computing phenomenon.
8088 (1979)
Created as a cheaper version of Intel's 8086, the 8088 was a 16-bit processor with an 8-bit
external bus. This chip became the most ubiquitous in the computer industry when IBM
chose it for its first PC. The success of the IBM PC and its clones gave Intel a dominant
position in the semiconductor industry.
80286 (1982)
With 16 MB of addressable memory and 1 GB of virtual memory, this 16-bit chip is referred
to as the first "modern" microprocessor. Many novices were introduced to desktop computing
with a "286 machine," and it became the dominant chip of its time. It contained 130,000
transistors and packed serious computing power (a 12 MHz clock) into a tiny footprint.
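The 80286's memory figures quoted above follow directly from its address widths: 16 MB of addressable memory corresponds to a 24-bit physical address, and 1 GB of virtual memory to a 30-bit virtual address. A quick check:

```python
# Address-space arithmetic for the 80286's quoted memory limits.
physical_bits = 24                          # 2^24 bytes = 16 MB physical
virtual_bits = 30                           # 2^30 bytes = 1 GB virtual

assert 2 ** physical_bits == 16 * 2 ** 20   # 16 MB addressable memory
assert 2 ** virtual_bits == 1 * 2 ** 30     # 1 GB virtual memory
print(2 ** physical_bits, 2 ** virtual_bits)
```

Each additional address bit doubles the addressable space, which is why the jump from the 8088's 20-bit addressing (1 MB) to 24 bits was a sixteenfold increase.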
4-BIT MICROPROCESSORS:
Historically, the 4-bit microprocessor was the first general purpose microprocessor
introduced on the market. The basic design of the early microprocessors was derived from
that of the desk calculator. The Intel 4004, a 4-bit design, was the grandfather of
microprocessors. Introduced in late 1971, the 4004 was originally designed for a Japanese
manufacturer as the processing element of a desk calculator; it was not designed as a general-
purpose computer. The shortcomings of the 4004 were recognized as soon as it was
introduced, but it was the first general-purpose computing device on a chip to be placed on
the market. Many of the chips introduced at about the same time by other companies were, in
fact, mere calculator chips. Some of them were even serial-by-bit devices. Pin counts were
limited not by physical but by economic constraints: the industrial testers of the time were
generally limited to 40-pin DIPs. The ancestor of today's 8-bit microprocessors was the Intel
8008, introduced in 1972-1973. The 8008 was not intended to be a general-purpose
microprocessor; it was to be a CRT display controller for Datapoint. Despite all of its design
inadequacies and its limited performance, the 8008 was an overwhelming success.
MICROPROCESSOR APPLICATIONS:
When microprocessors appeared, they were first used in computer systems for a negative
reason. In the early 1970s there were few support chips and microprocessors were
programmed to perform functions that are now done by a wide variety of hardware chips. For
this reason assembling a complete microprocessor-based system required both hardware and
software expertise. Only five years later in 1976 companies realized that microprocessors
could be used to build inexpensive personal computers. It then took several more years to
manufacture computers that were adequate for business and professional purposes. Yet the
technology had been there all along. (Naturally, costs have diminished over time and
integrated circuits have improved.) Many of the early microprocessor applications found
markets by accident rather than by design. New product development had generally been a
direct result of the dissemination of technical information.
In the early 1970s the necessary combination of hardware and software expertise was rarely
found outside the computer manufacturing industry. This was not perceived as a problem,
because when microprocessors were introduced, the computer establishment saw them only
as low-cost processors for simple control applications. In fact, the first 8-bit microprocessor,
the Intel 8008, was designed for direct control of a CRT display. Microprocessors are now
used for controlling virtually every computer peripheral that does not require bipolar speeds.
Initially, such applications were limited by the relatively low speed of early microprocessors.
But now, with the faster microprocessors coupled with specialized peripheral controller
chips, such as CRT and floppy-disk controllers, it is possible to control fast devices such as
CRTs and disks. With microprocessors, we have now entered the era of distributed systems.
In distributed systems, intercommunication between a number of processors is reduced to a
minimum because they do not interact in real time but exchange data words or blocks. Each
processor is then a direct process controller that completely controls a process.
Such a network may involve multiple microprocessors. Traditionally, a multiprocessor system
is one in which several processors interact with each other in real-time for control purposes.
Most systems involving networks of microprocessors do not interact so closely and therefore
do not qualify as “multi-microprocessor systems.”
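The loose coupling described above, processors exchanging whole data blocks rather than interacting in real time, can be sketched with two threads standing in for two processors linked by a communication channel. The names and the averaging computation are invented for illustration.

```python
from queue import Queue
from threading import Thread

# Sketch of loosely coupled processors: a "controller" sends a whole
# block of samples over a link, and a "supervisor" consumes it whenever
# it is ready. Neither side blocks the other's real-time work.
link = Queue()

def controller(samples):
    link.put(samples)                    # hand off a complete block

def supervisor(results):
    block = link.get()                   # receive the block when available
    results.append(sum(block) / len(block))

results = []
t1 = Thread(target=controller, args=([4, 5, 6],))
t2 = Thread(target=supervisor, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # → [5.0]
```

A tightly coupled multiprocessor, by contrast, would share state and synchronize continuously, which is exactly the real-time interaction most microprocessor networks avoid.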
BIBLIOGRAPHY:
1. WWW.WIKIPEDIA.COM
2. WWW.BOOKSRAG.COM
3. WWW.SPINGAL.PLUS.COM
4. WWW.CLEMSON.EDU.COM