
Module-1 (Part-2)

Registers: In addition to the ALU and the control unit of the CPU, there are a number of internal registers that are used either as a pair or as a single unit. These registers hold control information about the computer's operation as well as the data and addresses needed during program execution.

Types of Register

i) Program Counter (PC): This register holds the address of the next instruction to be executed. It is incremented on the next clock-pulse when a function input from the control unit of the CPU is high. Since program instructions are stored in memory in sequential order, the PC
is usually incremented once per instruction. The PC is part of the address handling area of the CPU.

ii) Instruction Register (IR): This register holds the binary code of the instruction to be executed. The
Instruction Register (IR) is part of the instruction handling area.

iii) Memory Address Register (MAR): This register holds the address of the data to be accessed in
RAM. Since memory is usually much slower than CPU registers, the MAR is considered neither part
of the CPU nor the RAM, but sort of a go-between. The CPU can store an address in the MAR in one
clock-pulse, and the MAR can hold the address on the lines to the RAM for 3 or 4 clock-pulses until memory
is accessed.

iv) Memory Buffer Register (MBR): This register is used to transfer data to and from memory. Like
the MAR, the MBR is neither part of the CPU nor the memory unit, but acts as a buffer between
them. It can wait 3 or 4 clock-pulses for data to be accessed in the RAM, and transfer it to or from
the CPU in one clock cycle.

v) Accumulator: This register holds temporary data during calculations. The SR register is an example
of an accumulator. We will consider the SR register to be in the ALU part of the CPU.

vi) General Purpose Registers: These registers generally serve as temporary storage for data and
addresses. In some computers, the user may specify them as accumulators or program counters. The
A and B registers are examples of general purpose registers.

vii) Index Registers: These registers hold an address so that the CPU can access data anywhere in
memory. Index registers incorporate the feature of a counter in that they may be automatically
incremented and are usually used to sequentially access and process blocks of data. Index registers
may contain a relative address and may be added to a base register or a general purpose register to
obtain the actual address. Index registers are part of the address handling area of the CPU.

viii) Condition Code Register (CCR): This register holds 1-bit flags which represent the state of conditions inside the CPU. Because the state of these flags is the basis for computer decision-making for conditional instructions, the CCR is part of the instruction handling area of the CPU. (A short sketch after the register list below illustrates how these flags can be derived after an addition.)

Types of Flags found in CCRs

a) Carry: This flag is set to 1 if the last operation resulted in a carry from the most significant bit.

b) Zero: This flag is set to 1 if the last operation resulted in a zero.

c) Overflow: This flag is set to 1 if the last operation resulted in a two's complement overflow, a carry
into and out of the sign bit.

d) Sign: This flag is set to 1 if the most significant bit of the result of the last operation was a 1,
designating a negative two's complement number.

e) Parity: This flag is set to 1 if the result of the last operation contained an even number of 1s (called even parity).

f) Half-Carry: This flag is set to 1 if the last operation generated a carry from the lower half of the word into the upper half.

g) Interrupt Enable: This flag is set to 1 if an interrupt is allowed, 0 if not. An interrupt occurs when the program the computer is running is temporarily interrupted so that the CPU may handle some other task, such as input/output from a disk drive.

ix) Stack Pointer (SP): This register contains the address of the top of the stack. The SP is
incremented each time the CPU stores a word of data in RAM at the address "pointed" to by the SP.
The stack pointer is decremented each time the CPU uses the SP to retrieve a word of data from the
top of the stack. In this way the SP allows the CPU to build-up and build-down a stack of data in the
RAM. The SP is part of the address handling area of the CPU.

x) Flag Register: The flag register (also called the status register or condition code register (CCR)) is the collection of flag bits of a processor; the FLAGS register of x86 architecture based microprocessors is an example.
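As a small illustration (the function name, flag letters, and 8-bit width are assumptions, not tied to any particular processor), the flags described above can be derived after an addition as follows:

# Minimal sketch: deriving common CCR flags after an 8-bit addition a + b.
# The function name, flag letters, and bit widths are illustrative assumptions.

def add_with_flags(a, b, width=8):
    mask = (1 << width) - 1
    sign_bit = 1 << (width - 1)

    raw = (a & mask) + (b & mask)
    result = raw & mask

    carry = 1 if raw > mask else 0                       # carry out of the MSB
    zero = 1 if result == 0 else 0
    sign = 1 if result & sign_bit else 0                 # MSB of the result
    # Two's-complement overflow: both operands share a sign that differs from the result's.
    overflow = 1 if ((a ^ result) & (b ^ result) & sign_bit) else 0
    # Even parity: the result contains an even number of 1 bits.
    parity = 1 if bin(result).count("1") % 2 == 0 else 0
    # Half-carry: carry out of the lower half of an 8-bit word (bit 3 into bit 4).
    half = 1 if ((a & 0x0F) + (b & 0x0F)) > 0x0F else 0

    return result, {"C": carry, "Z": zero, "S": sign,
                    "V": overflow, "P": parity, "H": half}

# Example: 0x7F + 0x01 = 0x80 sets Sign and Overflow but not Carry.
print(add_with_flags(0x7F, 0x01))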

Basic Operational Concepts


The activity in a computer is governed by instructions. To perform a given task, an appropriate
program consisting of a set of instructions is stored in the main memory. Individual instructions are
brought from the memory into the processor, which executes the specified operations. Data to be
used as operands are also stored in the memory. For example, a typical instruction may be,

Add LOCA, R0

This instruction adds the operand at memory location LOCA to the operand in a register in the
processor, R0, and places the sum into register R0. The original contents of location LOCA are preserved, whereas those of R0 are overwritten. This instruction requires the performance of
several steps.

1) The instruction is fetched from the main memory into the processor.

2) The operand at LOCA is fetched and added to the contents of R0.

3) The resulting sum is stored in register R0.

The preceding Add instruction combines a memory access operation with an ALU operation. In many
modern computers, these two types of operations are performed by separate instructions for
performance reasons. The effect of the above instruction can be realized by the two-instruction
sequence

Load LOCA, R1   Transfers the contents of main memory location LOCA into processor register R1;

Add R1, R0      Adds the contents of registers R1 and R0 and places the sum into R0. Note that this destroys the former contents of register R1 as well as those of R0, whereas the original contents of memory location LOCA are preserved.
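As a rough illustration (the memory contents, register values, and Python representation are assumptions made only for this sketch), the effect of both forms can be mimicked as follows:

# Illustrative sketch: the effect of "Add LOCA, R0" and of the equivalent
# "Load LOCA, R1 / Add R1, R0" sequence, using a dictionary as main memory.
# LOCA and the initial values are made-up for the example.

memory = {"LOCA": 25}        # operand stored in main memory
R0, R1 = 10, 0               # processor registers

# Single-instruction form: Add LOCA, R0
R0 = R0 + memory["LOCA"]     # LOCA is preserved, R0 is overwritten
print(R0)                    # 35

# Two-instruction form on a load/store style machine:
R0 = 10                      # reset for the second demonstration
R1 = memory["LOCA"]          # Load LOCA, R1  (memory access)
R0 = R1 + R0                 # Add  R1, R0    (ALU operation)
print(R0)                    # 35; R1 now holds a copy of LOCA's value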

Transfers between the main memory and the processor are started by sending the address of the
memory location to be accessed to the memory unit and issuing the appropriate control signals. The
data are then transferred to or from the memory.

In addition to the ALU and the control circuitry, the processor contains a number of registers used
for temporary storage of data. The Instruction Register (IR) holds the instruction that is currently
being executed. Its output is available to the control circuits, which generate the timing signals that
control the various processing elements involved in executing the instruction. The Program Counter
(PC) register keeps track of the execution of a program. It contains the memory address of the
instruction currently being executed. During the execution of an instruction, the contents of the PC
are updated to correspond to the address of the next instruction to be executed. It is customary to
say that the PC points to the next instruction that is to be fetched from the memory. Besides the IR
and PC, figure 1.2 shows n general-purpose registers, R0 through Rn-1.

Finally, two registers facilitate communications with the main memory. These are the Memory
Address Register (MAR) and the Memory Data Register (MDR). The MAR holds the address of the
location to or from which data are to be transferred. The MDR contains the data to be written into
or read out of the addressed location.

Programs reside in the main memory and usually get there through the input unit. Execution of the
program starts when the PC is set to point to the first instruction of the program. The contents of the
PC are transferred to the MAR and a Read control signal is sent to the memory. After the time required to access the memory elapses, the addressed word (in this case, the first instruction of the program) is read out of the memory and loaded into the MDR. Next, the contents of the MDR are transferred to the IR. At this point, the instruction is ready to be decoded and executed.

If the instruction involves an operation to be performed by the ALU, it is necessary to obtain the
required operands. If an operand resides in the memory (it could also be in a general-purpose
register in the processor), it has to be fetched by sending its address to the MAR and initiating a
Read cycle. When the operand has been read from the memory into the MDR, it may be transferred
from the MDR to the ALU. After one or more operands are fetched in this way, the ALU can perform
the desired operation. If the result of this operation is to be stored in the memory, then the result is sent to the MDR. The address of the location where the result is to be stored is sent to the MAR, and a Write cycle is initiated. While an instruction is being executed, the contents of the PC are incremented so that the PC points to the next instruction to be executed. Thus, as soon as the execution of the current instruction is completed, a new instruction fetch may be started.
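The fetch-and-execute flow described above can be sketched in a few lines of Python; the two-field instruction format, the tiny memory layout, and the accumulator register are assumptions chosen only to keep the example short:

# Minimal sketch of the fetch/execute flow described above. The instruction
# format (op, address) and the small memory layout are assumptions.

memory = {0: ("Load", 100), 1: ("Add", 101), 2: ("Halt", None),
          100: 7, 101: 5}                     # program followed by data

PC, IR, MAR, MDR, ACC = 0, None, None, None, 0

while True:
    # Fetch: PC -> MAR, Read, memory word -> MDR -> IR, then advance the PC.
    MAR = PC
    MDR = memory[MAR]
    IR = MDR
    PC = PC + 1

    # Decode and execute.
    op, addr = IR
    if op == "Load":                 # operand fetch via MAR/MDR, then into ACC
        MAR = addr
        MDR = memory[MAR]
        ACC = MDR
    elif op == "Add":                # operand fetch, then ALU add into ACC
        MAR = addr
        MDR = memory[MAR]
        ACC = ACC + MDR
    elif op == "Halt":
        break

print(ACC)                           # 12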

In addition to transferring data between the main memory and the processor, the computer accepts
data from input devices and sends data to output devices. Thus, some machine instructions with the
ability to handle I/O transfers are provided.

Normal execution of programs may be pre-empted if some device requires urgent servicing. For
example, a monitoring device in a computer-controlled industrial process may detect a dangerous
condition. In order to deal with the situation quickly, the normal flow of the running program must
be interrupted. To do this, the device raises an interrupt signal. An interrupt is a request from an I/O
device for service by the processor. The processor provides the requested service by executing an
appropriate interrupt-service routine. Because such diversions may alter the internal state of the
processor, its state must be saved in memory locations before servicing the interrupt. Normally, the
contents of the PC, the general registers, and some control information are stored in memory. When
the interrupt-service routine is completed, the state of the processor is restored so that the
interrupted program may continue.
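The save-and-restore behaviour can be hinted at with a short sketch; what exactly is saved, the interrupt-service routine address, and the data structures are illustrative assumptions:

# Illustrative sketch of interrupt handling: the processor state (here just the
# PC and two general registers) is saved before the service routine runs and
# restored afterwards so the interrupted program can continue.

state = {"PC": 42, "R0": 7, "R1": 3}      # state of the interrupted program
saved_area = []                            # memory locations used for saving

def service_interrupt(current_state):
    saved_area.append(dict(current_state))    # save processor state to memory
    current_state["PC"] = 900                 # enter the interrupt-service routine
    # ... the device is serviced here ...
    restored = saved_area.pop()               # restore the saved state
    current_state.update(restored)            # interrupted program resumes

service_interrupt(state)
print(state)                               # {'PC': 42, 'R0': 7, 'R1': 3}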

Bus Structures

Various input/output devices and memory devices are connected to a CPU by groups of lines called
buses. A bus is a communication pathway connecting two or more devices. A key characteristic of a
bus is that it is a shared transmission medium to which multiple devices connect.

Types of Buses

1) Data Bus: Data bus, as the name suggests, carries data. This data can be input from some
device, such as the keyboard, or some value to be read from or written to memory.
Regardless of the nature of the data, it is the purpose of the data bus to serve as a conduit between the CPU and the other devices in the system for the sole purpose of sending data back and forth. The width of this bus determines the amount of data that can be sent in a
single memory access operation.

Figure : Diagram of a Simplified Computer System containing Two Buses, One for Memory and
One for the Remaining Devices. Each Bus is Broken Down into Three Sub-Sections: a Control
Bus, an Address Bus, and a Data Bus

For example, early computers had a data bus that was only eight wires wide. This meant that data
had to be sent in 8 bit (1 byte) chunks. So, a 32 bit single precision floating point value would require four memory access cycles to transmit across the bus. Modern computers, on the other hand, have
a data bus that is much wider, often 32 or 64 bits wide. In fact, some modern computers have even
wider data paths.
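The relationship between data bus width and the number of transfers needed follows from a simple calculation (a quick check, not part of the original figure):

# Number of bus cycles needed to move a value across a data bus of a given width.
import math

def transfers_needed(value_bits, bus_width_bits):
    return math.ceil(value_bits / bus_width_bits)

print(transfers_needed(32, 8))    # 4 cycles on an 8-bit bus
print(transfers_needed(32, 32))   # 1 cycle on a 32-bit bus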

2) Address Bus: The address bus provides a means for the CPU to specify where it intends to
read data from or write data to within the system.

For example, if the processor needs to read data from memory at a particular location, it would
place the address of that location on the address bus and then send a control signal to the memory
controller indicating that it wants to read data. The memory controller reads the address off the address bus and places the requested data onto the data bus. It then signals the CPU that the data is ready to read. The CPU can then read the data that it needs. A similar process is used in the writing of data.

The value placed on the address bus determines the memory location of the data being dealt with. Many
computer systems make no distinction between actual system memory locations and external
devices such as video cards and keyboard interfaces. These devices are assigned memory locations
and data is read from and written to these devices just as if they were system memory.

The width (number of wires) of the address bus determines the total number of memory locations that
the computer can access. Early personal computers had a 16 bit address bus which meant that they
could access only 65,536 memory locations. Since most of these computers used an 8 bit or 1 byte
memory location size, this meant that early computers could address only 64 kilobytes of memory.
Note that when discussing sizes of storage related to computers, a kilobyte is not 1000 bytes, but
rather the nearest base 2 equivalent, which in this case is 1024 (two to the tenth power) bytes. Thus,
64 kilobytes is actually equal to 65,536 bytes instead of 64,000. Modern computers use address
buses with widths between 32 and 64 bits. This allows these computers to access more than 4 billion address locations. Some computers with extreme memory requirements use even more than 64 bits for their memory bus.
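These figures follow directly from the bus width; the following quick check assumes byte-addressable memory:

# Addressable locations for a given address-bus width (byte-addressable memory assumed).
def addressable_locations(width_bits):
    return 2 ** width_bits

print(addressable_locations(16))    # 65,536 locations (64 kilobytes)
print(addressable_locations(32))    # 4,294,967,296 locations (4 gigabytes)
print(2 ** 10)                      # 1 kilobyte = 1024 bytes in this convention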

3) Control Bus: Control lines regulate the activity on the bus. CPU sends signals on the control bus to
enable the outputs of addressed memory devices or I/O port addresses.

Typical control bus signals are: Read, Write, Interrupt Request (INTR), Bus Request (BR), Bus Grant
(BG), Clock, Reset, Ready and so on.

As figure shows, many computer systems contain more than one bus. The configuration in the figure
shows a computer system with two large buses, each composed of a control, address, and data bus.
The common configuration shown above is used when the system designer wishes to operate
certain parts of the system at different clock speeds.

For example, since modern memory is far faster than the external input/output devices such as the
hard drive and keyboard, it is desirable to run the memory bus clock at a higher rate of speed so that
the transfer of data from memory can occur more quickly. At the same time, we do not want to
require the input/output devices to respond at speeds above those that are practical. So, to allow
these devices to use a slower clock, we simply create a bus with a slower clock rate and connect this
to our higher-speed bus through an interface. This interface bridges the gap between the high speed
bus connected to the memory and the processor and the slower speed bus that connects to the disk
drives and external peripheral devices of the computer system. It passes all of the necessary data
and communication between the two buses.

Bus Interconnection Scheme


Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of
transmitting signals representing binary 1 and binary 0.

Computer systems contain a number of different buses that provide pathways between components
at various levels of the computer system hierarchy. A bus that connects major computer
components (processor, memory, I/O) is called a system bus. The most common computer
interconnection structures are based on the use of one or more system buses.

A system bus consists of about 50 to 100 separate lines. Each line is assigned a particular meaning
or function. Although there are many different bus designs, on any bus, the lines can be classified
into three functional groups (figure 1.10): data, address, and control lines. In addition, there may be
power distribution lines that supply power to the attached modules.

The data lines provide a path for moving data between system modules. These lines, collectively, are
called the data bus. The data bus may consist of 32 to hundreds of separate lines, the number
of lines being referred to as the width of the data bus. Because each line can carry only 1 bit at a
time, the number of lines determines how many bits can be transferred at a time. The width of the
data bus is a key factor in determining overall system performance.

The address lines are used to designate the source or destination of the data on the data bus. For
example, if the processor wishes to read a word (8, 16, or 32 bits) of data from memory, it puts the address of the desired word on the address lines. Clearly, the width of the address bus determines the maximum possible memory capacity of the system.

The control lines are used to control the access to and the use of the data and address lines. Because
the data and address lines are shared by all components, there must be a means of controlling their
use. Control signals transmit both command and timing information between system modules.
Timing signals indicate the validity of data and address information. Command signals specify
operations to be performed.

1) Memory Write: Causes data on the bus to be written into the addressed location.
2) Memory Read: Causes data from the addressed location to be placed on the bus.
3) I/O Write: Causes data on the bus to be output to the addressed I/O port.
4) I/O Read: Causes data from the addressed I/O port to be placed on the bus.
5) Transfer ACK: Indicates that data have been accepted from or placed on the bus.
6) Bus Request: Indicates that a module needs to gain control of the bus.
7) Bus Grant: Indicates that a requesting module has been granted control of the bus.
8) Interrupt Request: Indicates that an interrupt is pending.
9) Interrupt ACK: Acknowledges that the pending interrupt has been recognized.
10) Clock: Used to synchronize operations.
11) Reset: Initializes all modules.

Functioning of Bus in Memory Transfers

Some devices that attach to a bus are active and can initiate bus transfers, whereas others are
passive and wait for requests. The active ones are called masters; the passive ones are called slaves.
When the CPU orders a disk controller to read or write a block, the CPU is acting as a master and the
disk controller is acting as a slave. However, later on, the disk controller may act as a master when it
commands the memory to accept the words it is reading from the disk drive.

Most bus masters are connected to the bus by a chip called a bus driver, which is essentially a digital
amplifier. Similarly, a bus receiver connects most slaves to the bus. For devices that can act as both
master and slave, a combined chip called a bus transceiver is used. These bus interface chips are
often tri-state devices, to allow them to float (disconnect) when they are not needed, or are hooked
up in a somewhat different way, called open collector, that achieves a similar effect.

Like a CPU, a bus also has address, data, and control lines. However, there is not necessarily a one-
to-one mapping between the CPU pins and the bus signals.

A typical bus might have one line for memory read, a second for memory write, a third for I/O read,
a fourth for I/O write, and so on. A decoder chip would then be needed between the CPU and such a
bus to match the two sides up, that is, to convert the 3-bit encoded signal into separate signals that
can drive the bus lines.
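Such a decoder can be pictured as a small mapping from an encoded CPU request to individual bus control lines; the 3-bit codes and line names below are assumptions used only for illustration:

# Illustrative sketch of a decoder between an encoded CPU request and individual
# bus control lines. The 3-bit codes and line names are made-up assumptions.

BUS_LINES = ["MEM_READ", "MEM_WRITE", "IO_READ", "IO_WRITE"]

def decode(request_code):
    # Drive exactly one control line for each recognised encoded request.
    lines = {line: 0 for line in BUS_LINES}
    mapping = {0b000: "MEM_READ", 0b001: "MEM_WRITE",
               0b010: "IO_READ",  0b011: "IO_WRITE"}
    if request_code in mapping:
        lines[mapping[request_code]] = 1
    return lines

print(decode(0b001))   # only MEM_WRITE asserted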

Multiple-Bus Hierarchies

If a great number of devices are connected to the bus, performance will suffer. There are two main
causes.

1) In general, the more devices attached to the bus, the greater the bus length and hence the greater
the propagation delay. This delay determines the time it takes for devices to coordinate the use of
the bus. When control of the bus passes from one device to another frequently, these propagation
delays can noticeably affect performance.

2) The bus may become a bottleneck as the aggregate data transfer demand approaches the
capacity of the bus. This problem can be countered to some extent by increasing the data rate that
the bus can carry and by using wider buses.

Accordingly, most computer systems use multiple buses, generally laid out in a hierarchy. A typical
traditional structure is shown in figure 1.10. There is a local bus that connects the processor to a
cache memory and that may support one or more local devices. The cache memory controller
connects the cache not only to this local bus, but also to a system bus to which the main memory module is attached.

Hence, main memory can be moved off of the local bus onto a system bus. In this way, I/O transfers
to and from the main memory across the system bus do not interfere with the processor's activity.

It is possible to connect I/O controllers directly onto the system bus. A more efficient solution is to
make use of one or more expansion buses for this purpose. An expansion bus interface buffers data
transfers between the system bus and the I/O controllers on the expansion bus. This arrangement
allows the system to support a wide variety of I/O devices and at the same time insulate memory-to-
processor traffic from I/O traffic.

This traditional bus architecture is reasonably efficient but begins to break down as higher and
higher performance is seen in the I/O devices. In response to these growing demands, a common
approach taken by industry is to build a high-speed bus that is closely integrated with the rest of the
system, requiring only a bridge between the processor's bus and the high-speed bus. This
arrangement is sometimes known as Mezzanine Architecture.

The advantage of this arrangement is that the high-speed bus brings high-demand devices into closer integration
with the processor and at the same time is independent of the processor. Thus, differences in
processor and high-speed bus speeds and signal line definitions are tolerated. Changes in processor
architecture do not affect the high- speed bus, and vice versa.

Von Neumann Concept

A program is made up of a sequence of numbers that represent individual operations. These operations are known as machine instructions, or just instructions, and the set of operations that a given processor can execute is known as its instruction set.

Stored Program Concept

A stored program concept is one in which first the program and data are stored in the main memory and then the processor fetches instructions and executes them, one after another.

In von Neumann's stored-program computer architecture the program instructions and data are
stored in the main memory units without distinguishing these words (bytes) from one another.
Figure 1.2 shows the memory architecture. Computers that have stored-program architecture are
also sometimes called von Neumann computers, after John von Neumann, one of the developers of
this concept.

Almost all computers in use today are stored-program computers. They represent programs as
numbers that are stored in the same address space as data.

The stored-program abstraction (representing instructions as numbers stored in memory) was one of the major breakthroughs in early computer architecture. Prior to this breakthrough, many computers were programmed by setting switches or rewiring circuit boards to define the new program, which required a great deal of time and was prone to errors.

Advantages of Stored Program Concept

The stored-program abstraction provides two major advantages over previous approaches:

1) Ease of Storing and Loading: It allows programs to be easily stored and loaded into the machine (processor) from the main memory. The set of control signals is the same for instruction and data fetches. Once a program has been developed and debugged, the numbers that represent its instructions can be written out onto a storage device, allowing the program to be loaded back into (main) memory at some point in the future.

2) Support for Self-Modifying Programs: The stored-program abstraction allows programs to treat themselves or other programs as data. Programs that treat themselves as data can function as self-modifying programs, in which some of the instructions in a program compute other instructions in the program. Self-modifying programs were common on early computers, because they were often faster than non-self-modifying programs, and because early computers implemented a small number of instructions, making some operations hard to do without self-modifying code. For example, loops and subroutines could not be implemented without modifying the instructions, as index register and stack pointer concepts and the corresponding addressing modes and instructions were not present. In fact, self-modifying code was the only way to implement a conditional branch on at least one early computer whose instruction set did not provide a conditional branch operation.

So programmers implemented conditional branches by writing self-modifying code that computed the destination addresses of unconditional branch instructions as the program executed.
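On a toy machine this trick can be sketched as follows; the instruction format and the patched jump are illustrative assumptions, not a description of any particular early computer:

# Toy sketch of a self-modifying "conditional branch": the program overwrites
# the target field of an unconditional jump before that jump executes.
# The (op, operand) instruction format is an assumption for illustration.

program = [
    ("SET_TARGET", None),            # 0: computes and stores the jump target
    ("JUMP", 0),                     # 1: unconditional jump; its target gets patched
    ("PRINT", "branch not taken"),   # 2
    ("HALT", None),                  # 3
    ("PRINT", "branch taken"),       # 4
    ("HALT", None),                  # 5
]

def run(program, condition):
    pc = 0
    while True:
        op, arg = program[pc]
        pc += 1
        if op == "SET_TARGET":
            # Self-modification: rewrite the operand of the jump at address 1.
            target = 4 if condition else 2
            program[1] = ("JUMP", target)
        elif op == "JUMP":
            pc = arg
        elif op == "PRINT":
            print(arg)
        elif op == "HALT":
            break

run(list(program), condition=True)     # prints "branch taken"
run(list(program), condition=False)    # prints "branch not taken"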

Self-modifying code has become much less common on more modern machines. With the more powerful instruction sets of newer machines it is rarely needed, and changing a program during execution makes it much harder to debug. As computers have become faster, ease of program implementation and debugging has become more important than the performance improvements achievable through self-modifying code in most cases. Also, memory systems with caches make self-modifying code less efficient, reducing the performance improvements that can be gained by using this technique.
