
CIT204: COMPUTER ARCHITECTURE AND ORGANIZATION 1

TEXTS: (1) Computer Architecture and Organization by Patterson and Hennessy.


(2) Computer Architecture by Null and Lobur.
(3) Computer Organization and Design by Robert L. Britton.
(4) MIPS Assembly Language Programming notes by Karen Miller and Dave Norris.
(5) Computer Maintenance, Troubleshooting and Repair Resource Book, by Heidi Neff, First Edition.

A. COMPUTER ARCHITECTURE VERSUS ORGANIZATION


According to Patterson and Hennessy:
▪ Computer architecture is the study of system components and how they are interconnected.
▪ Computer organization is the implementation of an architecture. Implementation differences include the following:
o Number of processors
o Memory sizes
o Method used for data transfers
o Execution times

B. BRIEF HISTORICAL DEVELOPMENT OF COMPUTERS


1. First Generation Computers: The first-generation computers, the ENIAC and UNIVAC, were built with vacuum tubes in the 1940s. Their power consumption was high and their reliability was very poor. They were also very expensive.

2. Second Generation Computers: With the invention of the transistor in 1948, the second generation of computers performed better because they were built with transistors. Although they were still massive, occupying a whole building, they could perform processing operations at amazing speed. They were called mainframes. Mainframe computers were very expensive, so only a very few rich organizations owned them.

3. Third Generation Computers: By 1958, integrated circuits had been developed, so computers manufactured thereafter used integrated-circuit technology. This caused a dramatic reduction in the size of computers, and the resulting machines were called minicomputers because of their size. Minicomputers were still expensive, and only big organizations could afford them.

4. Fourth Generation Computers: By the early 1970s, the central processing unit of a computer had been successfully integrated on a single silicon chip while trying to make a calculator on a chip. This was called a microprocessor, the Intel 4004. Improvements followed: the 8008, then the 8080 and 8085. IBM bought the idea and used the microprocessor to make small computers affordable to individuals; hence they were called personal computers (PCs). This was the birth of microcomputers. Intel was the industry leader in manufacturing microprocessors, launching a new, improved microprocessor compatible with the previous one roughly every other year: the 8086, 80186, 80286, 80386, 80486, Pentium (80586), Pentium Pro, Pentium II, III & IV, and Itanium. The microcomputer industry has since gone through a revolution; consequently there are various configurations: desktop, laptop, notebook and palmtop PCs.

5. Fifth Generation Computers: These are computers that are endowed with Artificial Intelligence, a mimicking of human intelligence in machines.



C. COMPUTING BASICS
The first computers were used primarily for numerical calculations. However, as any information can be
numerically encoded, people soon realized that computers are capable of general-purpose information
processing. Their capacity to handle large amounts of data has extended the range and accuracy of
weather forecasting. Their speed has allowed them to make decisions about routing telephone
connections through a network and to control mechanical systems such as automobiles, nuclear reactors,
and robotic surgical tools. They are also cheap enough to be embedded in everyday appliances and to
make clothes dryers and rice cookers “smart.” Computers have allowed us to pose and answer questions
that could not be pursued before. These questions might be about DNA sequences in genes, patterns of
activity in a consumer market, or all the uses of a word in texts that have been stored in a database.
Increasingly, computers can also learn and adapt as they operate.

Computers also have limitations, some of which are theoretical. For example, a computer asked to determine the truth of an undecidable proposition will (unless forcibly interrupted) continue indefinitely, a condition known as the "halting problem" (seen, for example, in the Turing machine). Other limitations reflect current
technology. Human minds are skilled at recognizing spatial patterns—easily distinguishing among human
faces, for instance—but this is a difficult task for computers, which must process information sequentially,
rather than grasping details overall at a glance. Another problematic area for computers involves natural
language interactions. Because so much common knowledge and contextual information is assumed in
ordinary human communication, researchers have yet to solve the problem of providing relevant
information to general-purpose natural language programs.

1. Types of Computers
Computers are classified according to the signals they handle, their size and their computing strength, as follows:
i. Analog computers: Analog computers use continuous physical magnitudes to represent
quantitative information. At first they represented quantities with mechanical components (see
differential analyzer and integrator), but after World War II voltages were used; by the 1960s digital
computers had largely replaced them. Nonetheless, analog computers, and some hybrid digital-analog
systems, continued in use through the 1960s in tasks such as aircraft and spaceflight simulation. One
advantage of analog computation is that it may be relatively simple to design and build an analog
computer to solve a single problem. Another advantage is that analog computers can frequently
represent and solve a problem in “real time”; that is, the computation proceeds at the same rate as the
system being modeled by it. Their main disadvantages are that analog representations are limited in
precision—typically a few decimal places but fewer in complex mechanisms—and general-purpose
devices are expensive and not easily programmed.

ii. Digital computers: In contrast to analog computers, digital computers represent information in
discrete form, generally as sequences of 0s and 1s (binary digits, or bits). The modern era of digital
computers began in the late 1930s and early 1940s in the United States, Britain, and Germany. The first
devices used switches operated by electromagnets (relays). Their programs were stored on punched
paper tape or cards, and they had limited internal data storage.

iii. Mainframe computer: During the 1950s and '60s, Unisys, International Business Machines
Corporation (IBM), and other companies made large, expensive computers of increasing power. They
were used by major corporations and government research laboratories, typically as the sole computer
in the organization. In 1959 the IBM 1401 computer rented for $8,000 per month (early IBM machines
were almost always leased rather than sold), and in 1964 the largest IBM S/360 computer cost several
million dollars. These computers came to be called mainframes, though the term did not become
common until smaller computers were built. Mainframe computers were characterized by having (for
their time) large storage capabilities, fast components, and powerful computational abilities. They were
highly reliable, and, because they frequently served vital needs in an organization, they were
sometimes designed with redundant components that let them survive partial failures. Because they
were complex systems, they were operated by a staff of systems programmers, who alone had access
to the computer. Other users submitted “batch jobs” to be run one at a time on the mainframe. Such
systems remain important today, though they are no longer the sole, or even primary, central
computing resource of an organization, which will typically have hundreds or thousands of personal
computers (PCs). Mainframes now provide high-capacity data storage for Internet servers, or, through
time-sharing techniques, they allow hundreds or thousands of users to run programs simultaneously.
Because of their current roles, these computers are now called servers rather than mainframes.

iv. Supercomputer: The most powerful computers of the day have typically been called
supercomputers. They have historically been very expensive and their use limited to high-priority
computations for government-sponsored research, such as nuclear simulations and weather modeling.
Today many of the computational techniques of early supercomputers are in common use in PCs. On
the other hand, the design of costly, special-purpose processors for supercomputers has been
supplanted by the use of large arrays of commodity processors (from several dozen to over 8,000)
operating in parallel over a high-speed communications network.

v. Minicomputer: Although minicomputers date to the early 1950s, the term was introduced in the
mid-1960s. Relatively small and inexpensive, minicomputers were typically used in a single department
of an organization and often dedicated to one task or shared by a small group. Minicomputers
generally had limited computational power, but they had excellent compatibility with various
laboratory and industrial devices for collecting and inputting data. One of the most important
manufacturers of minicomputers was Digital Equipment Corporation (DEC) with its Programmed Data
Processor (PDP). In 1960 DEC's PDP-1 sold for $120,000. Five years later its PDP-8 cost $18,000 and
became the first widely used minicomputer, with more than 50,000 sold. The DEC PDP-11, introduced
in 1970, came in a variety of models, small and cheap enough to control a single manufacturing process
and large enough for shared use in university computer centers; more than 650,000 were sold.
However, the microcomputer overtook this market in the 1980s.

vi. Microcomputer: A microcomputer is a small computer (see figure) built around a microprocessor
integrated circuit, or chip. Whereas the early minicomputers replaced vacuum tubes with discrete
transistors, microcomputers (and later minicomputers as well) used microprocessors that integrated
thousands or millions of transistors on a single chip. In 1971 the Intel Corporation produced the first
microprocessor, the Intel 4004, which was powerful enough to function as a computer although it was
produced for use in a Japanese-made calculator. In 1975 the first personal computer, the Altair, used a
successor chip, the Intel 8080 microprocessor. Like minicomputers, early microcomputers had
relatively limited storage and data-handling capabilities, but these have grown as storage technology
has improved alongside processing power. In the 1980s it was common to distinguish between
microprocessor-based scientific workstations and personal computers. The former used the most
powerful microprocessors available and had high-performance colour graphics capabilities costing
thousands of dollars. They were used by scientists for computation and data visualization and by
engineers for computer-aided engineering. Today the distinction between workstation and PC has
virtually vanished, with PCs having the power and display capability of workstations.

vii. Embedded processors: Another class of computer is the embedded processor. These are small
computers that use simple microprocessors to control electrical and mechanical functions. They
generally do not have to do elaborate computations or be extremely fast, nor do they have to have
great “input-output” capability, and so they can be inexpensive. Embedded processors help to control
aircraft and industrial automation, and they are common in automobiles and in both large and small
household appliances. One particular type, the digital signal processor (DSP), has become as prevalent

as the microprocessor. DSPs are used in wireless telephones, digital telephone and cable modems, and
some stereo equipment.

2. How a Computer Works


A computer is a fabulous instrument that turns human inputs into electronic information that it can then store or share/distribute through various output devices. A computer performs (if instructed to do so) the steps shown in the diagram below, using information that a user provides (such as a typed sentence):

Input (via keyboard, mouse or microphone)
→ Processing (the information is digitized, becoming a simple code that the computer can store)
→ Storage (the information is stored as a part of the computer's memory)
→ Further Processing (if instructed to do so, the information is edited or enhanced with input from the user)
→ Output (information is shared via monitor, printer, speakers or projector)

Amazingly, the information that the user inputs into a computer is processed so that it becomes simple codes made up of only two digits, zeros and ones, representing two signal levels (ON or OFF). But computers compensate for the very simple codes by using them in huge quantities. A single unit of this zeros/ones code is called a bit. Grouping 8 bits together makes a unit of information called a byte. These represent numbers and codes.

3. The Basic Computer Hardware Components


Hardware is the physical equipment needed for a computer to function properly. The basic hardware parts are briefly described below. A desktop computer is used for the description, but all of the units can also be found (in a more compact arrangement) in a laptop computer.

i. Computer Case / Casing: The computer case (also called a tower or housing) is the box that
encloses many of the parts shown below. It has attachment points, slots and screws that allow
these parts to be fitted onto the case. The case is also sometimes called the CPU, since it houses
the CPU (central processing unit or processor), but this designation can lead to confusion. Please
see the description of the processor, below.

ii. Power Supply: The power supply is used to connect all of the parts of the computer described below to electrical power. It is usually found at the back of the computer case.

iii. Fan: A fan is needed to disperse the significant amount of heat that is generated by the
electrically powered parts in a computer. It is important for preventing overheating of the various
electronic components. Some computers will also have a heat sink (a piece of fluted metal) located
near the processor to absorb heat from the processor.

iv. Motherboard: The motherboard is a large electronic board that is used to connect the power
supply to various other electronic parts, and to hold these parts in place on the computer. The
computer’s memory (RAM, described below) and processor are attached to the motherboard. Also
found on the motherboard is the BIOS (Basic Input and Output System) chip that is responsible for

some fundamental operations of the computer, such as linking hardware and software. The
motherboard also contains a small battery (that looks like a watch battery) and the chips that work
with it to store the system time and some other computer settings.

v. Drives: A computer’s drives are the devices used for long term storage of information. The main
storage area for a computer is its internal hard drive (also called a hard disk). The computer should
also have disk drives for some sort of removable storage media. A floppy disk drive was very
common until recent years, and is still found on many older desktop computers. It was replaced by
CD-ROM and DVD drives, which have higher storage capacities. The current standard is a DVD-RW
drive, which can both read and write information using both CD and DVD disks. The USB ports
(described later) on a computer can also be used to connect other storage devices such as flash
drives and external hard drives.

vi. Cards: This term is used to describe important tools that allow your computer to connect and
communicate with various input and output devices. The term “card” is used because these items
are relatively flat in order to fit into the slots provided in the computer case. A computer will
probably have a sound card, a video card, a network card and a modem.

vii. RAM: RAM is the abbreviation for random access memory. This is the short-term memory that
is used to store documents while they are being processed. The amount of RAM in a computer is
one of the factors that affect the speed of a computer. RAM attaches to the motherboard via some
specific slots. It is important to have the right type of RAM for a specific computer, as RAM has
changed over the years.

viii. Processor: The processor is the main “brain” of a computer system. It performs all of the
instructions and calculations that are needed and manages the flow of information through a
computer. It is also called the CPU (central processing unit), although this term can also be used to
describe a computer case along with all of the hardware found inside it. Another name for the
processor is a computer “chip” although this term can refer to other lesser processors (such as the
BIOS). Processors are continually evolving and becoming faster and more powerful. The speed of a
processor is measured in megahertz (MHz) or gigahertz (GHz). An older computer might have a
processor with a speed of 1000 MHz (equivalent to 1 GHz) or lower, but processors with speeds of
over 2 GHz are now common. One processor company, Intel, made a popular series of processors
called Pentium. Many reconditioned computers contain Pentium II, Pentium III and Pentium 4
processors, with Pentium 4 being the fastest of these.

ix. Peripheral hardware: Peripheral hardware is the name for the computer components that are
not found within the computer case. This includes input devices such as a mouse, microphone and
keyboard, which carry information from the computer user to the processor, and output devices
such as a monitor, printer and speakers, which display or transmit information from the computer
back to the user.



[Figure: Inside a desktop computer case, showing the power supply, drives, fan housing (with the processor underneath on the motherboard), RAM and cards.]

4. Computer Architectural Concepts and Units


These are foundational issues upon which computer developments are based. They include the von Neumann architecture concept, the stored program concept and Moore's law, amongst others. Based on these foundations, the main computer units evolved; they are discussed in the sub-sections below.

i. The Von Neumann Computer Architecture Concept


Despite the dramatic advances in computer technology, the architecture of mainframes, minicomputers and microcomputers has remained similar, i.e. the von Neumann architecture. Von Neumann proposed a one-processor computer architecture with memory elements and input/output ports for interfacing to the peripherals. Non-von Neumann architectures use more than one processor, e.g. the systolic array.

[Figure: The von Neumann computer architecture. The input unit feeds the central processing unit, which feeds the output unit; the memory unit is connected to the central processing unit.]



ii. The Stored Program Concept
This concept, usually credited to John von Neumann, is that the program is stored in the computer's memory before execution. Earlier on, instructions to be executed were input one after another; consequently, it was not possible to take advantage of the high-speed processing machine. The capability of the computer was thus limited to the rate at which a human types in the instructions.

iii. Moore's law


The CPU and RAM are integrated circuits (ICs)—small silicon wafers, or chips, that contain
thousands or millions of transistors that function as electrical switches. In 1965 Gordon Moore, one
of the founders of Intel, stated what has become known as Moore's law: the number of transistors
on a chip doubles about every 18 months. Moore suggested that financial constraints would soon
cause his law to break down, but it has been remarkably accurate for far longer than he first
envisioned. It now appears that technical constraints may finally invalidate Moore's law, since
sometime between 2010 and 2020 transistors would have to consist of only a few atoms each, at
which point the laws of quantum physics imply that they would cease to function reliably. There are four categories of integrated circuits in a typical computer, which incidentally represent the various hardware units of the computer, namely:
▪ CPU device (processor – i.e., The ALU, Datapath and Control elements or sub-units)
▪ Memory devices
▪ I/O devices
▪ Peripheral interface devices (chipsets), which control the buses that connect the integrated circuits together.
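As a rough numerical sketch of the growth Moore's law describes (an added illustration; the starting figure of about 2,300 transistors is the Intel 4004's count):

    def transistors(initial, years, doubling_months=18):
        # Moore's law: the transistor count doubles about every 18 months.
        doublings = years * 12 / doubling_months
        return int(initial * 2 ** doublings)

    print(transistors(2300, 30))  # roughly 2.4 billion after 30 years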

iv. Computer Hardware Units


Based on the above computer concepts, the physical elements of a computer, its hardware, are generally divided into the central processing unit (CPU), the memory unit, and the input/output (peripheral) unit.

a. Central processing unit


The CPU is responsible for fetching the instructions, decoding them and performing the indicated
operations on the correct data. The CPU consists of:
1. ALU (Arithmetic and logic unit) – the computation engine of the CPU.
▪ Usually has two data inputs and one data output
▪ Performs operations that often affect bits in the status register (status register bits
indicate overflow, etc.)
▪ Controlled by the control unit
▪ Uses registers for operations
▪ Registers are high-speed memory for storing temporary results and control information.
▪ They are implemented with D flip-flops:
o One D flip-flop → a 1-bit register
o For a 32-bit register, 32 D flip-flops are connected together
o The collection of D flip-flops must be clocked to work together: with a clock pulse, input enters the register and remains unchanged until the clock pulses again
▪ They are a fixed size on a given architecture: 16, 32 or 64 bits
▪ Different architectures have different numbers of registers; 16 and 32 are most common
▪ Some are special purpose; others are general purpose. A special-purpose register is used for one of the following tasks, while a general-purpose register can be used for all of them:
o store information
o shift values
o compare values
o count
o store temporary values (act as a scratch pad)
o control program looping
o manage stacks
o hold status or mode of operation (e.g. overflow, carry or zero conditions)
▪ They are addressed and manipulated by the control unit

2. Control Unit – interprets program instructions for memory, I/O devices and the datapath; it is also responsible for sequencing operations and for making sure the correct data is where it needs to be at the correct time. Hence, the control unit:
▪ Monitors execution of all instructions and transfer of all information
▪ Decodes instructions it takes from memory and tells ALU which registers to use
▪ Services interrupts
▪ Tells ALU what kind of operation to do
▪ “Sends signals that determine operations of the datapath, memory, input and output.”
(P&H)

3. Datapath – network of registers, ALU and bus connections where timing is controlled by the
system clock.
▪ Every computer has an internal clock
▪ CPU requires a fixed number of clock ticks to execute each instruction
▪ Time between clock ticks is called a clock cycle
o Performance measurement
o Clock cycle is reciprocal of clock frequency.
o If the bus clock rate is 133 MHz, then the length of a clock cycle is 1/133,000,000 s ≈ 7.52 ns (see the sketch after this list)
▪ Clock rate (speed) is the clock frequency
o Measured in MHz (1 MHz = 1 million cycles per second; 1 hertz = 1 cycle per second)
▪ Minimum clock cycle time has to be at least as long as maximum propagation delay of circuit
from each set of register outputs to register inputs
▪ Two kinds of clocks
o CPU clock is the system clock
o Bus clocks are slower and often cause bottleneck problems
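To make the clock-cycle arithmetic above concrete, here is a small Python sketch (an added illustration, not from the original note):

    def cycle_time_ns(frequency_hz):
        # The clock cycle is the reciprocal of the clock frequency.
        return 1e9 / frequency_hz

    print(round(cycle_time_ns(133e6), 2))  # 7.52 ns for the 133 MHz bus
    print(round(cycle_time_ns(2e9), 2))    # 0.5 ns for a 2 GHz CPU clock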

The CPU is presently integrated on a chip called a microprocessor. Although there are various types and manufacturers, Intel's Pentium processors have dominated the market. The functional blocks within a CPU are shown below. Every CPU/microprocessor has its own instruction set. The 8085, for instance, has 74 instructions in its set. The instruction set is the list of machine instructions the CPU can execute. For convenience, the instruction set is written in assembly language. Assembly language is a shorthand/mnemonic representation of machine code. The types of instructions and addressing modes are also used to classify CPUs as CISC (Complex Instruction Set Computer) or RISC (Reduced Instruction Set Computer) architectures. Intel and many early CPUs were designed with the CISC architecture, while the MIPS architecture is representative of RISC.



[Figure: Major functional blocks within a CPU (the 8085 as an example). An instruction register feeds the instruction decoder, which drives the control unit. The ALU works with the 8-bit registers A, B, C, D, E, H and L, and with the 16-bit stack pointer and program counter.]

The CPU provides the circuits that implement the computer's instruction set—its machine
language. It is composed of an arithmetic-logic unit (ALU) and control circuits. The ALU carries out
basic arithmetic and logic operations, and the control section determines the sequence of
operations, including branch instructions that transfer control from one part of a program to
another. Although the main memory was once considered part of the CPU, today it is regarded as
separate. The boundaries shift, however, and CPU chips now also contain some high-speed cache
memory where data and instructions are temporarily stored for fast access.

The ALU has circuits that add, subtract, multiply, and divide two arithmetic values, as well as circuits for logic operations such as AND and OR (where a 1 is interpreted as true and a 0 as false, so that, for instance, 1 AND 0 = 0). The ALU has several to more than a hundred registers that temporarily hold results of its computations for further arithmetic operations or for transfer to main memory.

The circuits in the CPU control section provide branch instructions, which make elementary
decisions about what instruction to execute next. For example, a branch instruction might be “If the
result of the last ALU operation is negative, jump to location A in the program; otherwise, continue
with the following instruction.” Such instructions allow “if-then-else” decisions in a program and
execution of a sequence of instructions, such as a “while-loop” that repeatedly does some set of
instructions while some condition is met. A related instruction is the subroutine call, which
transfers execution to a subprogram and then, after the subprogram finishes, returns to the main
program where it left off.

In a stored-program computer, programs and data in memory are indistinguishable. Both are bit
patterns—strings of 0s and 1s—that may be interpreted either as data or as program instructions,
and both are fetched from memory by the CPU. The CPU has a program counter that holds the
memory address (location) of the next instruction to be executed.

The basic operation of the CPU is the “fetch-decode-execute” cycle:


• Fetch the instruction from the address held in the program counter, and store it in a register.
• Decode the instruction. Parts of it specify the operation to be done, and parts specify the data on
which it is to operate. These may be in CPU registers or in memory locations. If it is a branch
instruction, part of it will contain the memory address of the next instruction to execute once the
branch condition is satisfied.



• Fetch the operands, if any.
• Execute the operation if it is an ALU operation.
• Store the result (in a register or in memory), if there is one.
• Update the program counter to hold the next instruction location, which is either the next memory
location or the address specified by a branch instruction.

At the end of these steps the cycle is ready to repeat, and it continues until a special halt instruction
stops execution. Steps of this cycle and all internal CPU operations are regulated by a clock that
oscillates at a high frequency (now typically measured in gigahertz, or billions of cycles per second).
Another factor that affects performance is the “word” size—the number of bits that are fetched at
once from memory and on which CPU instructions operate. Digital words now consist of 32 or 64 bits,
though sizes from 8 to 128 bits are seen.
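The cycle can be made concrete with a toy stored-program machine. The Python sketch below is a minimal illustration; its four-instruction set (LOAD, ADD, STORE, HALT) is invented for the example and is not the instruction set of any CPU discussed in this note:

    # Instructions and data share one memory, as in a stored-program computer.
    memory = [
        ("LOAD", 5),    # acc <- memory[5]
        ("ADD", 6),     # acc <- acc + memory[6]
        ("STORE", 7),   # memory[7] <- acc
        ("HALT", 0),
        0,              # unused location
        20,             # data at address 5
        22,             # data at address 6
        0,              # address 7 receives the result
    ]

    pc = 0    # program counter: address of the next instruction
    acc = 0   # a single accumulator register

    while True:
        op, operand = memory[pc]   # fetch the instruction (decoding is trivial here)
        pc += 1                    # update the program counter
        if op == "LOAD":
            acc = memory[operand]  # fetch the operand
        elif op == "ADD":
            acc += memory[operand] # execute an ALU operation
        elif op == "STORE":
            memory[operand] = acc  # store the result
        elif op == "HALT":
            break                  # the special halt instruction stops execution

    print(memory[7])  # 42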

Processing instructions one at a time, or serially, often creates a bottleneck because many program
instructions may be ready and waiting for execution. Since the early 1980s, CPU design has followed a
style originally called reduced-instruction-set computing (RISC). This design minimizes the transfer of
data between memory and CPU (all ALU operations are done only on data in CPU registers) and calls
for simple instructions that can execute very quickly. As the number of transistors on a chip has grown,
the RISC design requires a relatively small portion of the CPU chip to be devoted to the basic instruction
set. The remainder of the chip can then be used to speed CPU operations by providing circuits that let
several instructions execute simultaneously, or in parallel.

There are two major kinds of instruction-level parallelism (ILP) in the CPU, both first used in early
supercomputers. One is the pipeline, which allows the fetch-decode-execute cycle to have several
instructions under way at once. While one instruction is being executed, another can obtain its
operands, a third can be decoded, and a fourth can be fetched from memory. If each of these
operations requires the same time, a new instruction can enter the pipeline at each phase and (for
example) five instructions can be completed in the time that it would take to complete one without a
pipeline. The other sort of ILP is to have multiple execution units in the CPU—duplicate arithmetic
circuits, in particular, as well as specialized circuits for graphics instructions or for floating-point
calculations (arithmetic operations involving non-integer numbers, such as 3.27). With this
“superscalar” design, several instructions can execute at once.
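A back-of-the-envelope sketch of the pipeline speedup described above (an added illustration, assuming a five-stage pipeline and no stalls):

    def cycles(n_instructions, stages=5, pipelined=True):
        # Unpipelined: each instruction occupies all stages in turn.
        # Pipelined: after the pipe fills, one instruction completes per cycle.
        if pipelined:
            return stages + n_instructions - 1
        return stages * n_instructions

    print(cycles(100, pipelined=False))  # 500 cycles
    print(cycles(100, pipelined=True))   # 104 cycles, approaching a 5x speedup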

Both forms of ILP face complications. A branch instruction might render preloaded instructions in the
pipeline useless if they entered it before the branch jumped to a new part of the program. Also,
superscalar execution must determine whether an arithmetic operation depends on the result of
another operation, since they cannot be executed simultaneously. CPUs now have additional circuits to
predict whether a branch will be taken and to analyze instructional dependencies. These have become
highly sophisticated and can frequently rearrange instructions to execute more of them in parallel.



b. The Memory Units
The memory Unit is classified into primary and secondary memories. The various types are as
follows.
SEMICONDUCTOR MEMORIES
▪ Read Only (non-volatile): ROM, PROM, EPROM, EEPROM, etc.
▪ Read/Write (volatile): static RAM, dynamic RAM, flash disk, etc.

Writing - storing information; Reading - retrieving information.


Volatile - Memory loses its stored data (erases) when power supply is turned off.
Non-volatile - Memory retains its data (does not erase) when power is turned off.

1. Primary or Main memory


Semiconductor Integrated Circuit memories are in this category: Random Access Memory (RAM)
and Read Only Memory (ROM). ROMs are non-volatile Memories accessed (read from) for
programs to run (e.g. BIOS); while RAMs are Volatile memories accessed (written to for storage and
read from for retrieval) while the programs are running. Main memories are mostly DRAMs (dynamic random access memory), which have been used since 1975.
o RAM means memory accesses take the same amount of time regardless of where the memory is read (no searching for an address is required)
o Several DRAMs are used together to hold the instructions and data for a program

A type of RAM that uses different technology called SRAM (static random access memory) is the
Cache Memory. It is a small, fast memory that is less dense and more expensive than DRAM. It
behaves like a buffer for DRAM, that is, it stores most recently used portions of memory in the
expectation that those locations will be accessed again (locality of reference).

The earliest forms of computer main memory were mercury delay lines, which were tubes of
mercury that stored data as ultrasonic waves, and cathode-ray tubes, which stored data as charges
on the tubes' screens. The magnetic drum, invented about 1948, used an iron oxide coating on a
rotating drum to store data and programs as magnetic patterns.

In a binary computer any bistable device (something that can be placed in either of two states) can
represent the two possible bit values of 0 and 1 and can thus serve as computer memory.
Magnetic-core memory, the first relatively cheap RAM device, appeared in 1952. It was composed
of tiny, doughnut-shaped ferrite magnets threaded on the intersection points of a two-dimensional
wire grid. These wires carried currents to change the direction of each core's magnetization, while a
third wire threaded through the doughnut detected its magnetic orientation.

The first integrated circuit (IC) memory chip appeared in 1971. IC memory stores a bit in a
transistor-capacitor combination. The capacitor holds a charge to represent a 1 and no charge for a
0; the transistor switches it between these two states. Because a capacitor charge gradually decays,
IC memory is dynamic RAM (DRAM), which must have its stored values refreshed periodically (every
20 milliseconds or so). There is also static RAM (SRAM), which does not have to be refreshed.
Although faster than DRAM, SRAM uses more transistors and is thus more costly; it is used primarily
for CPU internal registers and cache memory.

In addition to main memory, computers generally have special video memory (VRAM) to hold
graphical images, called bit-maps, for the computer display. This memory is often dual-ported—a
new image can be stored in it at the same time that its current data is being read and displayed. It
takes time to specify an address in a memory chip, and, since memory is slower than a CPU, there is
an advantage to memory that can transfer a series of words rapidly once the first address is
specified. One such design is known as synchronous DRAM (SDRAM), which became widely used by
2001.

Nonetheless, data transfer through the "bus" (the set of wires that connect the CPU to memory and peripheral devices) is a bottleneck. For that reason, CPU chips now contain cache memory: a small amount of fast SRAM. The cache holds copies of data from blocks of main memory. A well-designed cache allows up to 85-90 percent of memory references to be done from it in typical programs, giving a several-fold speedup in data access.
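The speedup a cache gives can be estimated as an average memory access time. In the Python sketch below, only the 85-90 percent hit rate is quoted from the text above; the 2 ns and 50 ns latencies are illustrative assumptions:

    def amat(hit_rate, cache_ns, memory_ns):
        # Hits are served at cache speed, misses at main-memory speed.
        return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

    print(amat(0.85, 2, 50))  # 9.2 ns
    print(amat(0.90, 2, 50))  # 6.8 ns, several-fold faster than 50 ns alone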

The time between two memory reads or writes (cycle time) was about 17 microseconds (millionths
of a second) for early core memory and about 1 microsecond for core in the early 1970s. The first
DRAM had a cycle time of about half a microsecond, or 500 nanoseconds (billionths of a second),
and today it is 20 nanoseconds or less. An equally important measure is the cost per bit of memory.
The first DRAM stored 128 bytes (1 byte = 8 bits) and cost about $10, or $80,000 per megabyte
(millions of bytes). In 2001 DRAM could be purchased for less than $0.25 per megabyte. This vast
decline in cost made possible graphical user interfaces (GUIs), the display fonts that word
processors use, and the manipulation and visualization of large masses of data by scientific
computers.

2. Secondary memory
Secondary memory on a computer is storage for data and programs not in use at the moment. Its
main types include: Magnetic memories, Optical memories and Semiconductor Memories. They all
allow for permanent storage of data. They can be identified as Hard disk, Floppy disks/Zip disks and
magnetic tapes, CD-R/R-W and DVD-R/R-W, and Flash Disks and Memory cards.

In addition to punched cards and paper tapes, early computers also used magnetic tape for
secondary storage. Tape is cheap, either on large reels or in small cassettes, but has the
disadvantage that it must be read or written sequentially from one end to the other.

Magnetic disks are platters coated with iron oxide, like tape and drums. An arm with a tiny wire
coil, the read/write (R/W) head, moves radially over the disk, which is divided into concentric tracks
composed of small arcs, or sectors, of data. Magnetized regions of the disk generate small currents
in the coil as it passes, thereby allowing it to “read” a sector; similarly, a small current in the coil will
induce a local magnetic change in the disk, thereby “writing” to a sector. The disk rotates rapidly
(up to 15,000 rotations per minute), and so the R/W head can rapidly reach any sector on the disk.
Early disks had large removable platters. In the 1970s IBM introduced sealed disks with fixed
platters known as Winchester disks—perhaps because the first ones had two 30-megabyte platters,
suggesting the Winchester 30-30 rifle. Not only was the sealed disk protected against dirt, the R/W
head could also “fly” on a thin air film, very close to the platter. By putting the head closer to the
platter, the region of oxide film that represented a single bit could be much smaller, thus increasing
storage capacity. This basic technology is still being used in hard disks.



▪ A hard disk is a collection of platters on a spindle; platters are covered with magnetic recording
material on both sides and rotate under the read/write head (electromagnetic coil) that is attached
to a movable arm
▪ Since these are mechanical devices, access time is much slower than DRAMs (5-15 ms versus 40-80
ns – a factor of 100,000 times slower). However, cost per megabyte for hard disks in 2004 was 100
times less expensive than for DRAM for the same storage capacity (P&H).
NB: Cost, volatility and access time are the distinguishing features that mark the difference between
magnetic disks and main memory.

Optical storage devices, CD-ROM (compact disc, read-only memory) and DVD-ROM (digital videodisc, or versatile disc), appeared in the mid-1980s and '90s. They both represent bits as tiny pits in plastic, organized in a long spiral like a phonograph record, written and read with lasers. A CD-ROM can hold 2 gigabytes of data, but the inclusion of error-correcting codes (to correct for dust, small defects, and scratches) reduces the usable data to 650 megabytes. DVDs are denser, have smaller pits, and can hold 17 gigabytes with error correction; the DVD player uses a laser that is higher-powered and has a correspondingly finer focus point than the CD player's.

Optical storage devices are slower than magnetic disks, but they are well suited for making master
copies of software or for multimedia (audio and video) files that are read sequentially. There are
also writable and rewritable CD-ROMs (CD-R and CD-RW) and DVD-ROMs (DVD-R and DVD-RW)
that can be used like magnetic tapes for inexpensive archiving and sharing of data.

The decreasing cost of memory continues to make new uses possible. A single CD-ROM can store
100 million words, more than twice as many words as are contained in the printed Encyclopædia
Britannica. A DVD can hold a full-length motion picture. Nevertheless, even larger and faster
storage systems, such as three-dimensional optical media, are being developed for handling data
for computer simulations of nuclear reactions, astronomical data, and medical data, including X-ray
images requiring many terabytes (1 terabyte = 1,000 gigabytes) of storage, which can lead to
further complications in indexing and retrieval.

c. The Input/ Output (Peripherals) Units


The input unit comprises the input ports that have been standardized to work with various types of input peripherals such as keyboards, mice, joysticks, scanners, touch screens, voice recognition systems, etc. Similarly, the output ports have been standardized to work well with output peripherals such as monitors, printers, plotters, speakers, etc. These are represented by the computer ports discussed below.

Computer peripheral hardware consists of devices used to input information and instructions into a
computer for storage or processing and to output the processed data. In addition, devices that
enable the transmission and reception of data between computers are often classified as
peripherals. The peripheral hardware must attach to the computer so that it can transmit
information from the user to the computer (or vice versa). There are a variety of ports present on a
computer for these attachments. These ports have gradually changed over time as computers have
changed to become faster and easier to work with. Ports also vary with the type of peripheral
equipment that connects to the ports. A computer engineer or technician should become familiar
with the most common ports (and their uses), as described below.

1. Serial Port: This port for use with 9 pin connectors is no longer commonly used, but is found on many
older computers. It was used for printers, mice, modems and a variety of other digital devices.



2. Parallel Port: This long and slender port is also no longer commonly used, but was the most common
way of attaching a printer to a computer until the introduction of USB ports (see below). The most
common parallel port has holes for 25 pins, but other models were also manufactured.

3. VGA: The Video Graphics Array port is found on most computers today and is used to connect video
display devices such as monitors and projectors. It has three rows of holes, for a 15 pin connector.

4. PS/2: Until recently, this type of port was commonly used to connect keyboards and mice to
computers. Most desktop computers have two of these round ports for six pin connectors, one for the
mouse and one for the keyboard.

5. USB: The Universal Serial Bus is now the most common type of port on a computer. It was developed
in the late 1990s as a way to replace the variety of ports described above. It can be used to connect
mice, keyboards, printers, and external storage devices such as DVD-RW drives and flash drives. It has
gone through three different models (USB 1.0, USB 2.0 and USB 3.0), with USB 3.0 being the fastest at
sending and receiving information. Older USB devices can be used in newer model USB ports.

6. TRS: TRS (tip, ring and sleeve) ports are also known as ports for mini-jacks or audio jacks, and
commonly used to connect audio devices such as headphones and microphones to computers.

7. Ethernet: This port, which looks like a slightly wider version of a port for a phone jack, is used to
network computers via category 5 (CAT5) network cable. Although many computers now connect
wirelessly, this port is still the standard for wired networked computers. Some computers also have
the narrower port for an actual phone jack, used for modem connections over telephone lines.

[Figure: Back of a desktop computer showing its ports: serial and parallel ports, PS/2 ports, VGA port, USB ports, TRS (mini-jack) ports, phone/modem jacks and Ethernet port.]

I/O device communications are organized as follows: devices are not connected directly to the CPU; a port interface handles data transfers via the system bus; the CPU communicates via I/O registers; and interrupts play an important role. I/O can be memory-mapped or instruction-based. In memory-mapped I/O, registers in the I/O interface are mapped to memory locations; this uses memory address space but is fast. In instruction-based I/O, the CPU uses special instructions to perform I/O; no memory space is required, but it can be used only by CPUs that can execute those specific instructions. A behavioural sketch of the memory-mapped scheme follows.
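This Python sketch is an added illustration of the memory-mapped idea; the address map and the device are invented for the example:

    IO_BASE = 0xFF00       # invented boundary: addresses from here up are I/O space
    ram = {}               # ordinary memory locations
    device_output = []     # stands in for an output device's data register

    def store(address, value):
        # One ordinary store instruction serves both memory and I/O.
        if address >= IO_BASE:
            device_output.append(value)  # the store drives the device register
        else:
            ram[address] = value         # a normal memory write

    store(0x0010, 42)         # goes to memory
    store(0xFF00, ord("A"))   # same instruction, but routed to the device
    print(ram, device_output)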

d. Input devices
A plethora of devices falls into the category of input peripheral. Typical examples include keyboards,
mice, trackballs, pointing sticks, joysticks, digital tablets, touch pads, and scanners.

Keyboards contain mechanical or electromechanical switches that change the flow of current through
the keyboard when depressed. A microprocessor embedded in the keyboard interprets these changes
and sends a signal to the computer. In addition to letter and number keys, most keyboards also include
“function” and “control” keys that modify input or send special commands to the computer.

Mechanical mice and trackballs operate alike, using a rubber or rubber-coated ball that turns two
shafts connected to a pair of encoders that measure the horizontal and vertical components of a user's
movement, which are then translated into cursor movement on a computer monitor. Optical mice
employ a light beam and camera lens to translate mouse motions into cursor movement. Hence,
the mouse's evolution:
▪ It was originally electromechanical (1967): a large ball rolling across the surface caused x and y counters to be incremented.
▪ It is now optical: an optical mouse is a miniature optical processor with the following functions:
o An LED illuminates the surface under the mouse
o A camera takes 1500 sample pictures per second under this illumination
o The pictures are sent to an optical processor that compares the images and determines whether the mouse has moved and how far

Pointing sticks, which are popular on many laptop systems, employ a technique that uses a pressure-
sensitive resistor. As a user applies pressure to the stick, the resistor increases the flow of electricity,
thereby signaling that movement has taken place. Most joysticks operate in a similar manner.

Digital tablets and touch pads are similar in purpose and functionality. In both cases, input is taken
from a flat pad that contains electrical sensors that detect the presence of either a special tablet pen or
a user's finger, respectively.

A scanner is somewhat akin to a photocopier. A light source illuminates the object to be scanned, and
the varying amounts of reflected light are captured and measured by an analog-to-digital converter
attached to light-sensitive diodes. The diodes generate a pattern of binary digits that are stored in the
computer as a graphical image.

e. Output devices
Printers are a common example of output devices. New multifunction peripherals that integrate
printing, scanning, and copying into a single device are also popular. Computer monitors are sometimes
treated as peripherals. High-fidelity sound systems are another example of output devices often
classified as computer peripherals. Manufacturers have announced devices that provide tactile
feedback to the user—“force feedback” joysticks, for example. This highlights the complexity of
classifying peripherals—a joystick with force feedback is truly both an input and an output peripheral.

Early printers often used a process known as impact printing, in which a small number of pins were driven into a desired pattern by an electromagnetic printhead. As each pin was driven forward, it struck an inked ribbon and transferred a single dot the size of the pinhead to the paper. Multiple dots combined into a matrix to form characters and graphics, hence the name dot matrix.
Another early print technology, daisy-wheel printers, made impressions of whole characters with a
single blow of an electromagnetic printhead, similar to an electric typewriter. Laser printers have
replaced such printers in most commercial settings. Laser printers employ a focused beam of light to
etch patterns of positively charged particles on the surface of a cylindrical drum made of negatively
charged organic, photosensitive material. As the drum rotates, negatively charged toner particles
adhere to the patterns etched by the laser and are transferred to the paper. Another, less expensive
printing technology developed for the home and small businesses is inkjet printing. The majority of
inkjet printers operate by ejecting extremely tiny droplets of ink to form characters in a matrix of
dots—much like dot matrix printers.

Computer display (Monitor) devices have been in use almost as long as computers themselves. Early
computer displays employed the same cathode-ray tubes (CRTs) used in television and radar systems.
The fundamental principle behind CRT displays is the emission of a controlled stream of electrons that
strike light-emitting phosphors coating the inside of the screen. The screen itself is divided into multiple
scan lines, each of which contains a number of pixels—the rough equivalent of dots in a dot matrix
printer. The resolution of a monitor is determined by its pixel size. More recent liquid crystal displays
(LCDs) rely on liquid crystal cells that realign incoming polarized light. The realigned beams pass
through a filter that permits only those beams with a particular alignment to pass. By controlling the
liquid crystal cells with electrical charges, various colours or shades are made to appear on the screen.

f. Communication devices
The most familiar example of a communication device is the common telephone modem (from
modulator/demodulator). Modems modulate, or transform, a computer's digital message into an
analog signal for transmission over standard telephone networks, and they demodulate the analog
signal back into a digital message on reception. In practice, telephone network components limit
analog data transmission to about 48 kilobits per second. Standard cable modems operate in a similar
manner over cable television networks, which have a total transmission capacity of 30 to 40 megabits
per second over each local neighbourhood “loop.” (Like Ethernet cards, cable modems are actually
local area network devices, rather than true modems, and transmission performance deteriorates as
more users share the loop.) Asymmetric digital subscriber line (ADSL) modems can be used for
transmitting digital signals over a local dedicated telephone line, provided there is a telephone office
nearby—in theory, within 5,500 metres (18,000 feet) but in practice about a third of that distance.
ADSL is asymmetric because transmission rates differ to and from the subscriber: 8 megabits per
second “downstream” to the subscriber and 1.5 megabits per second “upstream” from the subscriber
to the service provider. In addition to devices for transmitting over telephone and cable wires, wireless
communication devices exist for transmitting infrared, radiowave, and microwave signals.

g. Computer Peripheral Interfaces (Buses)


A variety of techniques have been employed in the design of interfaces to link computers and
peripherals. An interface of this nature is often termed a bus. This nomenclature derives from the
presence of many paths of electrical communication (e.g., wires) bundled or joined together in a single
device. Hence, a computer bus is a bundle of wires that carry similar signals in the computer; or, a set of wires that acts as a shared common data path to connect multiple subsystems within the computer system. Multiple peripherals can be attached to a single bus, and the peripherals need not be homogeneous. An example is the small computer systems interface (SCSI; pronounced "scuzzy"). This popular standard allows heterogeneous devices to communicate with a computer by sharing a single bus. Under the auspices of various national and international organizations, many such standards have been established by manufacturers and users of computers and peripherals.

There are four buses in the computer, namely the control bus, data bus, address bus and power bus. Communication between the CPU, memory and I/O (peripheral) units is organized through these buses. Inside the CPU, the signals are arranged in three buses: the control, data and address buses.
[Figure: The bus architecture of the computer system. The input unit, output unit, central processing unit and memory unit are interconnected by the control, address and data buses.]

o Data bus lines are dedicated to moving data – actual information from one location to another.
o Control lines indicate which device has permission to use the bus and for what purpose (read/write from memory or from I/O). They also carry acknowledgements for bus requests, interrupts and clock synchronization signals.
o Address lines indicate location (e.g. in memory) that data should be read/written.
o Power lines provide electrical power.

Buses can be loosely classified as serial or parallel. Parallel buses have a relatively large number of
wires bundled together that enable data to be transferred in parallel. This increases the
throughput, or rate of data transfer, between the peripheral and computer. SCSI buses are parallel
buses. Examples of serial buses include the universal serial bus (USB). USB has an interesting
feature in that the bus carries not only data to and from the peripheral but also electrical power.
Examples of other peripheral integration schemes include integrated drive electronics (IDE) and
enhanced integrated drive electronics (EIDE). Predating USB, these two schemes were designed
initially to support greater flexibility in adapting hard disk drives to a variety of different computer
makers. Buses are often divided into master/slave: master initiates actions and slave responds to
requests by master. Point-to-point bus connects two specific components (e.g. serial
port→modem, ALU→control unit). Multipoint bus connects many components e.g. CPU to memory
to monitor to disk controller.



5. Memory Organisation
One memory cell holds one bit of information. Memory cells can be organized in various ways.
8 x 1 bit memory – 8 cells or locations whereby each cell stores 1 bit.

[Figure: Memory cell arrays. An 8 x 1-bit memory is a single column of 8 cells; an 8 x 8 memory (8 rows of 8 cells) holds 64 bits; a 16 x 4 memory (16 rows of 4 cells) also holds 64 bits.]



With the array arrangement, each cell can be uniquely addressed or identified. When there are n address lines, 2^n locations can be uniquely addressed.

MEMORY SIZES
The memory size is given as 2^n x m bits, where m is the word size, i.e. the number of cells in a row.
4 bits is called a nibble.
8 bits form a byte, thus 1 byte is an 8-bit word.
A 16 x 8-bit memory holds 16 bytes.

Memory sizes are in exponential powers of two, i.e.:

2, 4, 8, 16, 32, 64, 128, 256, 512, 1024 (1K), 2048 (2K), 4096 (4K), 8192 (8K), 16384 (16K), 32768 (32K), 65536 (64K), ... 1M, 2M, 4M, ... 1G, 2G, ...
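The 2^n x m sizing rule as a quick calculation (an added Python sketch):

    def memory_bits(address_lines, word_bits):
        # n address lines select 2**n locations of m bits each.
        return 2 ** address_lines * word_bits

    print(memory_bits(3, 1))        # 8 x 1-bit memory: 8 bits
    print(memory_bits(4, 4))        # 16 x 4 memory: 64 bits
    print(memory_bits(4, 8) // 8)   # 16 x 8-bit memory: 16 bytes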

RANDOM ACCESS MEMORY (RAM)


RAM is actually a read/write integrated circuit memory. The name RAM arises because the memory is accessed at random; in other words, all memory cells have equal access time. This is in contrast to magnetic tape memory, which has sequential access. A block diagram of a typical RAM chip is given in the figure below.

[Figure: Typical RAM chip symbol. The chip has address lines, data input lines and data output lines, plus two control inputs: CS, an active-low chip select, and R/W, which selects read when high and write when low.]

HOW THE RAM WORKS

To store data, present the data at the data input lines, set the address where you want the data stored, then, with CS low, make R/W low. To read, present the address, make CS low and R/W high. The data at the addressed location will appear at the data output lines.
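The protocol above, simulated behaviourally in Python (an added sketch of the chip's terminals, not a gate-level model):

    class RAMChip:
        def __init__(self, locations, word_bits):
            self.cells = [0] * locations
            self.mask = (1 << word_bits) - 1

        def access(self, cs, rw, address, data_in=0):
            # CS is active low; R/W reads when high and writes when low.
            if cs != 0:
                return None                  # chip not selected
            if rw == 1:
                return self.cells[address]   # read: data appears at the outputs
            self.cells[address] = data_in & self.mask   # write
            return None

    chip = RAMChip(locations=16, word_bits=4)           # a 2**4 x 4-bit RAM
    chip.access(cs=0, rw=0, address=9, data_in=0b1010)  # store 1010 at address 9
    print(bin(chip.access(cs=0, rw=1, address=9)))      # 0b1010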

TYPES OF RAM
▪ STATIC RAM (SRAM)
▪ DYNAMIC RAM(DRAM)
STATIC RAM
The storage cell in a static RAM contains a flip-flop and a combination of gates and circuitry for addressing, reading and writing. Static RAM cells are arranged in matrix form and do not need refreshing. IC SRAMs come in dual-in-line packages. For use in computer systems, the memory chips come already mounted on small printed circuit boards to facilitate installation on the motherboard. This is the SIMM (Single In-line Memory Module) that you plug into a motherboard to increase the computer's primary memory or to replace a faulty module.



DYNAMIC RAMs
The storage is by the inherent capacitance in the transistor. A charged capacitor represents a '1'; an uncharged capacitor represents a '0'. The drawback of DRAM is that the capacitor loses its charge with time, so for it to retain data it has to be refreshed, or rewritten, every now and then (about every 2 ms). This refreshing is done automatically.

DRAM/SRAM
The advantage of DRAM is that the storage cell is smaller than the storage cell in SRAM, thus a given silicon
area can contain more DRAM cells. SRAM is costlier than DRAM and also requires more power.

MSI RAMs
7489: 2^4 x 4-bit SRAM. 4164: 64K x 1-bit DRAM.
2101: 2^8 x 4-bit SRAM. 41464: 64K x 4-bit DRAM.
2114: 2^10 x 4-bit SRAM.

CASCADING RAM CHIPS


RAM chips can be cascaded to increase memory capacity or word size. For instance, two 2^4 x 4-bit RAM chips can be cascaded as shown in the first figure below to get a 2^4 x 8-bit RAM. Similarly, two can be cascaded to get a 2^5 x 4-bit RAM (second figure). A behavioural sketch follows the figures.

[Figure: Increasing word size (2^4 x 8-bit RAM). The two chips share the address lines A-D and the chip select CS; one chip carries data lines D0-D3/Q0-Q3 and the other D4-D7/Q4-Q7.]



[Figure: Increasing word capacity (2^5 x 4-bit RAM). The two chips share the address and data lines; the extra address line drives the chip selects so that only one chip is enabled at a time.]
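Both cascading schemes, sketched behaviourally in Python (an added illustration; each chip is modelled as a plain list of 4-bit values and the wiring detail is abstracted away):

    chip_a = [0] * 16   # one 2**4 x 4-bit chip
    chip_b = [0] * 16   # a second, identical chip

    # Scheme 1 - increasing word size (2**4 x 8 bits): both chips get the same
    # address; chip_a supplies the low nibble and chip_b the high nibble.
    def write_byte(address, value):
        chip_a[address] = value & 0x0F
        chip_b[address] = (value >> 4) & 0x0F

    def read_byte(address):
        return (chip_b[address] << 4) | chip_a[address]

    # Scheme 2 - increasing capacity (2**5 x 4 bits): the extra address bit
    # acts as the chip select, enabling only one chip at a time.
    def write_nibble(address, value):
        chip, base = (chip_a, 0) if address < 16 else (chip_b, 16)
        chip[address - base] = value & 0x0F

    write_byte(3, 0xAB)
    print(hex(read_byte(3)))   # 0xab
    write_nibble(20, 0x7)      # address 20 selects chip_b, location 4
    print(chip_b[4])           # 7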

READ ONLY MEMORY (ROM)


A ROM chip is also accessed at random, i.e. all the memory locations have equal access time. ROM has a similar matrix arrangement to the RAM, but its storage cell uses a different technology (Fig 9.5). For the basic ROM, its content can only be read; the user cannot write data into the ROM, hence the name Read Only Memory. The information is written into the chip during manufacture (mask-programmed ROM). ROM is non-volatile; its stored data is retained even when power is switched off, and cannot be modified or erased by the user. The ROM storage cell is simpler than the RAM storage cell; it can be a simple diode or a transistor, hence ROM is classified as a combinational circuit.
[Figure: ROM matrix concept. A 2-of-4 decoder drives the four word lines; a connection (e.g. a diode) at a row/column intersection stores one bit value and 'no connection' stores the other, giving the contents below.]

Address  ROM Content
00       0011
01       0100
10       0010
11       1000

Mask-programmed ROMs are only economical when required in huge numbers to justify the manufacturing cost, e.g. the ROM BIOS used in computer motherboards, and the ROMs used in microwave ovens, cars, etc.



PROGRAMMABLE READ ONLY MEMORIES (PROMs)
This is programmed by the user in the laboratory using special equipment called a PROM programmer. The PROM is manufactured with diodes/transistors at all row/column intersections (Fig 9.6). Each diode/transistor has a fusible link in series with it; thus, after manufacture, a '1' is stored in all memory locations. The PROM programmer equipment is used to convert some 1s to 0s as required: it applies twice the supply voltage to selected links, which are permanently burnt away to store a 0. Thus, the PROM is programmed once only.

[Figure: PROM structure. Decoder circuitry drives the four word lines; every row/column intersection has a transistor with a fusible link in series, so a fresh PROM reads all 1s.]

Address  PROM Content
00       1111
01       1111
10       1111
11       1111

MSI PROMs
74S188: 32 x 8-bit.
74S287: 256 x 4-bit.
74S472: 512 x 8-bit.
74S476: 1024 x 4-bit.

ERASABLE PROGRAMMABLE READ ONLY MEMORY (EPROM)


EPROM is very useful for small projects because it can be reprogrammed. The storage cell of an EPROM is a MOSFET with an insulated polysilicon floating gate. When the floating gate receives a high voltage, electrons are injected into it, which turns the MOSFET on. An ON MOSFET stores a '1' and an OFF MOSFET stores a '0'. Since the floating gate is completely insulated from the surrounding elements, the charge stays; it does not leak away even when power is switched off.
To erase the charge on the floating gate, the chip is exposed to UV light (about 2500 Å). EPROM chips are constructed with a tiny transparent window on top of the chip to let the UV light in. The EPROM eraser equipment works for about 7 to 30 minutes. Long exposure to sunlight can make an EPROM lose its memory, since sunlight contains UV rays.

MSI EPROMs
2716: 2048 x 8-bit.
2764: 8192 x 8-bit.
27128: 16,384 x 8-bit.
27512: 65,536 x 8-bit.



ELECTRICALLY ERASABLE PROGRAMMABLE READ ONLY MEMORY (EEPROM)
The disadvantage of EPROM is that it requires special programming and erasing equipment. EEPROM eliminates this drawback because the floating-gate technology it uses can be programmed and erased by equal and opposite voltages. With additional circuitry, both programming and erasing can be done in-circuit and in a very short time, and individual memory locations can be selectively erased; bulk erasing is not required as in EPROM.

MSI EEPROM
2816A 2048 x 8 bits.
2817A 2048 x 8 bits.
2865A 8192 x 8 bits.

PROGRAMMABLE LOGIC ARRAYS (PLAs)


A programmable logic array (PLA) is used for implementing random logic in sum-of-products form. The PLA structure comprises an AND array and an OR array, as shown in Fig 9.7. A PLA is specified by its number of inputs, product terms and outputs. The PLA is programmed with similar technology to the ROM. There are FPLAs (field-programmable PLAs), which you program in the lab, and mask-programmed PLAs, which are programmed at the factory. With IC technology, it has been possible to integrate flip-flops on the same chip as the PLA to realise sequential systems. A behavioural sketch of PLA evaluation follows the figure.

[Fig 9.7: PLA structure. The n input lines feed the AND array, which generates k word (product-term) lines; these feed the OR array, which drives the m output lines.]
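Below is a minimal Python sketch of how a PLA evaluates sum-of-products outputs (added for illustration; the half-adder programming shown is an invented example, not taken from the note):

    def pla(inputs, and_array, or_array):
        # AND array: each product term lists one literal per input
        # (1 = true literal, 0 = complemented literal, None = input not used).
        products = [
            all(lit is None or bit == lit for bit, lit in zip(inputs, term))
            for term in and_array
        ]
        # OR array: each output ORs together a subset of the product terms.
        return [int(any(products[i] for i in terms)) for terms in or_array]

    # Half adder in sum-of-products form: sum = a'b + ab', carry = ab.
    and_array = [(0, 1), (1, 0), (1, 1)]
    or_array = [[0, 1], [2]]

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", pla([a, b], and_array, or_array))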

