Computer Organization

Unit 1: Computer Arithmetic

Construction of ALU
The ALU is a digital circuit that performs arithmetic and logic operations. It is the fundamental
building block of a computer's central processing unit (CPU). A modern CPU has a very powerful
ALU that is complex in design; in addition to the ALU, it contains a control unit and a set of
registers. Most operations are performed by one or more ALUs, which load data from input
registers. Registers are a small amount of very fast storage available to the CPU. The control
unit tells the ALU what operation to perform on the available data, and after the calculation the
ALU stores the result in an output register.
The CPU can be divided into two sections: the data section and the control section. The data
section is also known as the data path.
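As a concrete (if highly simplified) illustration, here is a minimal ALU sketch in C, assuming a small set of opcodes and a single status flag; the enum values and the zero-flag output are illustrative choices, not a description of any particular CPU.

#include <stdint.h>

/* Illustrative opcodes; a real ALU decodes control lines from the control unit. */
enum alu_op { ALU_ADD, ALU_SUB, ALU_AND, ALU_OR };

/* Combinational ALU model: two input operands, a result, and a zero status flag. */
uint32_t alu(enum alu_op op, uint32_t a, uint32_t b, int *zero_flag)
{
    uint32_t result;
    switch (op) {
    case ALU_ADD: result = a + b; break;
    case ALU_SUB: result = a - b; break;
    case ALU_AND: result = a & b; break;
    case ALU_OR:  result = a | b; break;
    default:      result = 0;     break;   /* unknown operation */
    }
    *zero_flag = (result == 0);            /* status output read by the control unit */
    return result;
}

The control unit would select the operation (the op argument) and route the result from the output register to its destination.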

BUS
In early computers a bus was a set of parallel electrical wires with multiple hardware
connections. More generally, a bus is a communication system that transfers data between
components inside a computer, or between computers. It includes hardware components such
as wires and optical fibers, as well as software, including communication protocols. The
registers, the ALU, and the interconnecting bus are collectively referred to as the data path.

Types of buses


There are mainly three types of buses:

Address bus: Transfers memory addresses from the processor to components like storage and
input/output devices. It is one-way communication.
Data bus: Carries the data between the processor and other components. The data bus is
bidirectional.
Control bus: Carries control signals from the processor to other components; it also carries the
clock pulses. The control bus is unidirectional.
The bus can be dedicated, i.e., used for a single purpose, or multiplexed, i.e., used for multiple
purposes. Different kinds of buses give rise to different types of bus organizations.
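As a rough illustration, the sketch below models a single multiplexed memory bus in C, with an address line, a bidirectional data line, and a read/write control signal; the struct and function names are purely illustrative, not part of any real bus standard.

#include <stdint.h>

#define MEM_WORDS 1024

/* Toy model of the three buses: address, data (bidirectional), control. */
struct bus {
    uint16_t address;   /* address bus: one-way, processor -> memory/I-O  */
    uint32_t data;      /* data bus: carries the word being transferred   */
    int      read;      /* control bus: 1 = read cycle, 0 = write cycle   */
};

static uint32_t memory[MEM_WORDS];

/* One bus cycle as seen by the memory unit. */
void memory_cycle(struct bus *b)
{
    if (b->read)
        b->data = memory[b->address % MEM_WORDS];   /* memory drives the data bus    */
    else
        memory[b->address % MEM_WORDS] = b->data;   /* processor drives the data bus */
}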
Registers
In computer architecture, registers are very fast computer memory used to execute programs
and operations efficiently. They also act as gateways, passing data and signals to various
components to carry out small tasks. Register activity is directed by the control unit, which also
operates the registers.

Five commonly used registers are listed below:
Program Counter
A program counter (PC) is a CPU register in the computer processor which has the address of
the next instruction to be executed from memory. As each instruction gets fetched, the
program counter increases its stored value by 1. It is a digital counter needed for faster
execution of tasks as well as for tracking the current execution point.
Instruction Register
In computing, an instruction register (IR) is the part of a CPU’s control unit that holds the
instruction currently being executed or decoded. The instruction register specifically holds the
instruction and provides it to the instruction decoder circuit.
Memory Address Register
The Memory Address Register (MAR) is the CPU register that holds either the memory address
from which data will be fetched, or the address to which data will be sent and stored. It is a
temporary storage component in the CPU (central processing unit) that holds the address
(location) of the data exchanged with the memory unit until the instruction using that data is
executed.
Memory Data Register
The memory data register (MDR) is the register in a computer’s processor, or central processing
unit, CPU, that stores the data being transferred to and from the immediate access storage.
Memory data register (MDR) is also known as memory buffer register (MBR).
General Purpose Register
General-purpose registers are used to store temporary data within the microprocessor. They
are multipurpose registers that can be used freely by the programmer.
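To show how these registers cooperate, here is a minimal fetch-step sketch in C; the register widths, memory size, and one-word instruction format are assumptions made purely for illustration.

#include <stdint.h>

#define MEM_SIZE 256

static uint16_t memory[MEM_SIZE];

/* The registers described above (widths chosen only for illustration). */
static uint16_t PC;    /* program counter: address of the next instruction   */
static uint16_t IR;    /* instruction register: instruction being decoded    */
static uint16_t MAR;   /* memory address register                            */
static uint16_t MDR;   /* memory data register (buffer to and from memory)   */

/* One fetch step: PC -> MAR, memory read -> MDR, MDR -> IR, then PC is incremented. */
void fetch(void)
{
    MAR = PC;                        /* address of the next instruction goes to the MAR */
    MDR = memory[MAR % MEM_SIZE];    /* memory read completes into the data register    */
    IR  = MDR;                       /* the instruction is now available to the decoder */
    PC  = PC + 1;                    /* point to the following instruction              */
}

The decode and execute phases (not shown) would then use the instruction in IR and the general-purpose registers.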
What is Data Path?
Suppose the CPU needs to carry out a data processing action, such as copying data from
memory to a register (or vice versa), moving the contents of one register to another, or adding
two numbers in the ALU. Whenever such a data processing action takes place in the CPU, the
data involved follows a particular path, called the data path.

Data paths are made up of various functional components, such as multipliers or arithmetic
logic units. A data path is required to perform data processing operations.

One Bus Organization


In one bus organization, a single bus is used for multiple purposes. A set of general-purpose
registers, the program counter, the instruction register, the memory address register (MAR),
and the memory data register (MDR) are connected to this single bus. Memory read/write is
done through the MAR and MDR. The program counter points to the memory location from
where the next instruction is to be fetched, and the instruction register holds a copy of the
current instruction. In one bus organization, only one operand can be read from the bus at a
time.
As a result, if two operands are required for an operation, the read operation must be carried
out twice, which makes the process longer. The advantages of one bus organization are that it
is one of the simplest organizations and is very cheap to implement. The disadvantage is that
the single bus is shared by all general-purpose registers, the program counter, the instruction
register, the MAR, and the MDR, making every operation sequential. This architecture is rarely
used nowadays.

Two Bus Organization


To overcome the disadvantage of one bus organization, another architecture known as two bus
organization was developed. In two bus organization there are two buses, and the general-
purpose registers can read/write from both. Two operands can therefore be fetched at the
same time: one bus carries an operand to the ALU while the other carries an operand to a
register. When both buses are busy fetching operands, the ALU output can be held in a
temporary register and placed on a bus once one becomes free.
There are two versions of the buses in this organization, an in-bus and an out-bus: the general-
purpose registers read data from the in-bus and write data to the out-bus. The buses thus
become dedicated.
Three Bus Organization
In three bus organization there are three buses: OUT bus 1, OUT bus 2, and an IN bus. The
operands coming from the general-purpose registers are placed on the two out buses,
evaluated in the ALU, and the result is dropped onto the in bus so it can be sent to the
appropriate register. This implementation is a bit more complex but faster, because two
operands can flow into the ALU and one result out of it in parallel. It was developed to
overcome the busy-waiting problem of two bus organization: after execution the output can be
dropped onto the bus without waiting, thanks to the extra bus. The structure is given below in
the figure.
The main advantages of multiple bus organization over the single bus are as follows.
Increase in the size of the registers.
Reduction in the number of cycles needed for execution.
Faster execution overall.
Unit 2: Processor Design
Addressing Modes

Addressing modes in computer architecture refer to the techniques and rules used by
processors to calculate the effective memory address or operand location for data operations.
They define how instructions specify the source or destination of data within the system’s
memory or registers. Here are some common addressing modes:
- Implied Mode: In implied addressing, the operand is specified implicitly in the definition of the
instruction itself. For example, the instruction CLC (used to reset the Carry flag to 0) is an
implied-mode instruction.
- Immediate Addressing Mode: In this mode, data is present in the address field of the
instruction. For example, MOV AL, 35H moves the data 35H into AL register.
- Register Mode: In register addressing, the operand is placed in one of the 8-bit or 16-bit
general-purpose registers. For example, MOV AX, CX moves the contents of CX register to AX
register.
- Register Indirect Mode: In this addressing, the operand’s offset is placed in any one of the
registers BX, BP, SI, DI as specified in the instruction. For example, MOV AX, [BX] moves the
contents of memory location addressed by the register BX to the register AX.
- Auto Indexed (Increment Mode): Effective address of the operand is the contents of a register
specified in the instruction. After accessing the operand, the contents of this register are
automatically incremented to point to the next consecutive memory location.
These are just a few examples. There are other addressing modes as well, such as Direct,
Indirect, Relative, Indexed, Base Register, Auto-Decrement, and more. Each of these modes has
its own use cases and benefits depending on the specific requirements of the instruction or the
program.
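The sketch below illustrates, in C, how the effective operand would be obtained for a few of these modes; the registers (BX, SI), the memory array, and the function names are hypothetical and are not tied to any specific instruction set.

#include <stdint.h>

static uint8_t  memory[65536];     /* illustrative byte-addressable memory */
static uint16_t BX, SI;            /* illustrative registers               */

/* Immediate mode: the operand is carried inside the instruction itself.    */
/* e.g. MOV AL, 35H  ->  operand = 0x35                                     */
uint16_t operand_immediate(uint16_t imm) { return imm; }

/* Register mode: the operand is the content of a general-purpose register. */
/* e.g. MOV AX, CX  ->  operand = the value currently held in CX            */
uint16_t operand_register(uint16_t reg_value) { return reg_value; }

/* Register indirect mode: the register holds the operand's memory address. */
/* e.g. MOV AX, [BX]  ->  operand = memory[BX]                              */
uint8_t operand_register_indirect(void) { return memory[BX]; }

/* Auto-increment mode: use the address in SI, then advance SI to the next  */
/* consecutive memory location.                                             */
uint8_t operand_auto_increment(void) { return memory[SI++]; }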
Unit 3: Memory Organization
A memory unit is the collection of storage units or devices together. The memory unit stores
the binary information in the form of bits. Generally, memory/storage is classified into 2
categories:
Volatile Memory: This loses its data when power is switched off.
Non-Volatile Memory: This is permanent storage and does not lose any data when power is
switched off.

Memory Hierarchy

The total memory capacity of a computer can be visualized as a hierarchy of components. The
memory hierarchy consists of all storage devices in a computer system, from slow auxiliary
memory to faster main memory and the still smaller and faster cache memory.
Auxiliary memory access time is generally about 1000 times that of main memory, hence it sits
at the bottom of the hierarchy.
Main memory occupies the central position because it is equipped to communicate directly
with the CPU and with auxiliary memory devices through the input/output processor (I/O).
When a program not residing in main memory is needed by the CPU, it is brought in from
auxiliary memory. Programs not currently needed in main memory are transferred to auxiliary
memory to provide space in main memory for programs that are currently in use.
Cache memory is used to store program data that is currently being executed by the CPU.
The approximate access time ratio between cache memory and main memory is about 1 to 7~10.
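As a small worked example of what such a ratio means in practice, the snippet below computes an average access time assuming a cache roughly 10 times faster than main memory and a 90% hit rate; all three figures are illustrative, not measurements.

#include <stdio.h>

int main(void)
{
    /* Illustrative figures only: cache ~1 time unit, main memory ~10 units. */
    double t_cache = 1.0, t_main = 10.0, hit_rate = 0.90;

    /* average access time = hit_rate * t_cache + miss_rate * (t_cache + t_main) */
    double avg = hit_rate * t_cache + (1.0 - hit_rate) * (t_cache + t_main);

    printf("average access time = %.1f units\n", avg);   /* prints 2.0 */
    return 0;
}

Even a modest hit rate brings the average access time much closer to the cache speed than to the main-memory speed, which is why the hierarchy pays off.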

Memory Access Methods

Each memory type is a collection of numerous memory locations. To access data from any memory, it
must first be located and then read from that memory location. The following are the methods for
accessing information in memory:

Random Access: Main memories are random access memories, in which each memory location
has a unique address. Using this unique address, any memory location can be reached in the
same amount of time, in any order.
Sequential Access: This method allows memory to be accessed in a sequence, i.e., in order.
Direct Access: In this mode, information is stored on tracks, with each track having a separate
read/write head.

Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory and cache
memory is called main memory. It is the central storage unit of the computer system: a large
and fast memory used to store data during computer operations. Main memory is made up of
RAM and ROM, with RAM integrated circuit chips holding the major share.
RAM: Random Access Memory
DRAM: Dynamic RAM, is made of capacitors and transistors, and must be refreshed every
10~100 ms. It is slower and cheaper than SRAM.
SRAM: Static RAM, has a six-transistor circuit in each cell and retains data until power is removed.
NVRAM: Non-Volatile RAM, retains its data even when turned off. Example: flash memory.
ROM: Read Only Memory, is non-volatile and acts as permanent storage for information.
It also stores the bootstrap loader program, used to load and start the operating system when
the computer is turned on. PROM (Programmable ROM), EPROM (Erasable PROM)
and EEPROM (Electrically Erasable PROM) are some commonly used ROMs.
PROM (Programmable Read-Only Memory): This is a type of ROM that can be programmed only
once by a user. After programming the PROM, the information written to it becomes
permanent and cannot be erased.
EPROM (Erasable Programmable Read-Only Memory): This is a type of ROM that can be erased
and reprogrammed using ultraviolet light. It allows manufacturers to modify or reprogram the
chip.
EEPROM (Electrically Erasable Programmable Read-Only Memory): This is a type of ROM that
can be erased and reprogrammed using an electrical charge. It is a replacement for both PROM
and EPROM and is used in many applications, including computers, microcontrollers and smart
cards, to store, erase and reprogram data.

Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For example: Magnetic disks
and tapes are commonly used auxiliary devices. Other devices used as auxiliary memory are
magnetic drums, magnetic bubble memory and optical disks.
It is not directly accessible to the CPU, and is accessed using the Input/Output channels.

Cache Memory
The data or contents of main memory that are used again and again by the CPU are stored in
cache memory so that they can be accessed in a shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not
found in the cache, the CPU moves on to main memory. Blocks of recently used data are
transferred into the cache, and old data is deleted from the cache to accommodate the new
data.
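A minimal sketch of this behaviour, assuming a direct-mapped cache with 16-byte lines and 64 lines (all sizes and names chosen only for illustration):

#include <stdint.h>
#include <string.h>

#define LINE_SIZE 16               /* bytes per cache line              */
#define NUM_LINES 64               /* direct-mapped: one line per index */

struct cache_line {
    int      valid;
    uint32_t tag;
    uint8_t  data[LINE_SIZE];
};

static struct cache_line cache[NUM_LINES];
static uint8_t main_memory[1 << 20];          /* illustrative 1 MB main memory */

/* Read one byte: check the cache first, and fill the line from main memory on a miss.
   Addresses are assumed to be below 1 MB in this sketch.                              */
uint8_t read_byte(uint32_t addr)
{
    uint32_t offset = addr % LINE_SIZE;
    uint32_t index  = (addr / LINE_SIZE) % NUM_LINES;
    uint32_t tag    = addr / (LINE_SIZE * NUM_LINES);

    struct cache_line *line = &cache[index];
    if (!(line->valid && line->tag == tag)) {
        /* Miss: bring the whole block in from main memory, evicting the old line. */
        memcpy(line->data, &main_memory[addr - offset], LINE_SIZE);
        line->tag   = tag;
        line->valid = 1;
    }
    return line->data[offset];                /* hit path: served directly from the cache */
}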
Addressing Scheme for Main Memory: The addressing scheme for main memory refers to how
the CPU generates addresses to access specific locations in main memory. The address
generated by the CPU is divided into a segment number and an offset within the segment. The
memory management unit (MMU) uses the segment table, which contains the address (base)
of the page table and the segment limit.

Segmented Memory System: Segmentation is a memory management technique that divides a
computer's primary memory into segments or sections. Each segment can be allocated to a
process, and the segment table stores all the details about the segments.

Paged Segment Memory: Paged segmentation is a memory management technique that divides
a process's address space into segments and then divides each segment into pages. This allows
flexible allocation of memory: each segment can have a different size, while the pages within a
segment are of a fixed size. A sketch of this two-level translation is given below.
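The sketch assumes 4 KB pages, a 16-entry segment table, and per-segment page tables; these sizes and structures are illustrative only.

#include <stdint.h>

#define PAGE_SIZE 4096u                       /* assumed fixed page size */

struct segment_entry {
    uint32_t page_table;                      /* which page table this segment uses */
    uint32_t limit;                           /* segment length in bytes            */
};

static struct segment_entry segment_table[16];
static uint32_t page_tables[16][256];         /* page_tables[t][page] -> frame number */

/* Translate (segment number, offset) into a physical address; -1 signals a fault. */
int64_t translate(uint32_t seg, uint32_t offset)
{
    if (seg >= 16 || offset >= segment_table[seg].limit)
        return -1;                                        /* segment fault          */

    uint32_t page        = offset / PAGE_SIZE;            /* page number in segment */
    uint32_t page_offset = offset % PAGE_SIZE;            /* offset within the page */
    uint32_t frame = page_tables[segment_table[seg].page_table][page];

    return (int64_t)frame * PAGE_SIZE + page_offset;      /* physical address       */
}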

High-Speed Memories: High-speed memories are typically found in the CPU and include
registers and cache memory. Registers are small, high-speed memory units located in the CPU.
Cache memory is an extremely fast memory type that acts as a buffer between RAM and the
CPU.

Characteristics of Cache Memory: Cache memory is an extremely fast memory type that acts as
a buffer between RAM and the CPU. It holds frequently requested data and instructions so that
they are immediately available to the CPU when needed. Cache memory is costlier than main
memory or disk memory but more economical than CPU registers. By keeping this information
close at hand, it allows the processor to improve its performance and complete its tasks faster.

Optimization of memory hierarchy:


Optimization of the memory hierarchy is crucial for improving the performance of a computer
system. Some key aspects to consider are:
1. Memory Hierarchy Design: The memory hierarchy is designed to minimize access time. It is
based on a property of program behavior known as locality of reference. The memory hierarchy
includes several levels, each with different sizes, costs, and speeds; it typically consists of
registers, cache memory, main memory, and secondary storage.
2. Accessing Data: The way data is accessed varies across different types of memory. Some
memories provide faster access but have less size and are costlier, while others offer more
storage but are slower.
3. Performance Models: Performance models like Accesses Per Cycle (APC), Concurrent Average
Memory Access Time (C-AMAT), and Layered Performance Matching (LPM) consider both data
locality and memory access concurrency. These models help identify potential bottlenecks in a
memory hierarchy and transform a global memory system optimization into localized
optimizations at each memory layer.
4. Matching Data Access Demands: The LPM method matches the data access demands of the
applications with the underlying memory system design. This approach helps establish a unified
mathematical foundation for model-driven performance analysis and optimization of
contemporary and future memory systems.
Unit 4: System Organization
DMA Controller
A DMA controller is a hardware device that allows I/O devices to access memory directly, with
minimal participation of the processor. The DMA controller needs the usual interface circuits to
communicate with the CPU and the input/output devices.
Direct Memory Access uses dedicated hardware, called the DMA controller, to access memory.
Its job is to transfer data between input/output devices and main memory with very little
interaction with the processor. The DMA controller is thus a control unit whose job is to
transfer data.
The DMA controller works as an interface between the data bus and the I/O devices. As
mentioned, it transfers data without the intervention of the processor, although the processor
initiates and supervises the transfer. The DMA controller also contains an address unit, which
generates the address and selects an I/O device for the transfer of data. The block diagram of
the DMA controller is shown here.

Types of Direct Memory Access (DMA)


There are four popular types of DMA.
• Single-Ended DMA
• Dual-Ended DMA
• Arbitrated-Ended DMA
• Interleaved DMA
Single-Ended DMA: Single-ended DMA controllers operate by reading and writing from a single
memory address. They are the simplest type of DMA.

Dual-Ended DMA: Dual-ended DMA controllers can read and write from two memory
addresses. Dual-ended DMA is more advanced than single-ended DMA.

Arbitrated-Ended DMA: Arbitrated-ended DMA works by reading and writing to several
memory addresses. It is more advanced than dual-ended DMA.

Interleaved DMA: Interleaved DMA controllers read from one memory address and write to
another memory address.

Working of DMA Controller

The DMA controller has three registers, as follows.


• Address register – It contains the address to specify the desired location in memory.
• Word count register – It contains the number of words to be transferred.
• Control register – It specifies the transfer mode.
Note: All registers in the DMA appear to the CPU as I/O interface registers. Therefore, the CPU
can both read and write into the DMA registers under program control via the data bus.
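A rough model of these three registers in C, together with a device-side routine that moves one word per bus grant (a real controller does this in hardware; the layout and names here are assumptions):

#include <stdint.h>

/* The three DMA registers described above, as the CPU would see them. */
struct dma_controller {
    uint16_t address;       /* address register: next memory location          */
    uint16_t word_count;    /* word count register: words still to transfer    */
    uint16_t control;       /* control register: transfer mode (read or write) */
};

static uint16_t memory[65536];

/* CPU side: program the DMA registers before starting a transfer. */
void dma_setup(struct dma_controller *dma, uint16_t start, uint16_t count, uint16_t mode)
{
    dma->address    = start;
    dma->word_count = count;
    dma->control    = mode;
}

/* DMA side: on each bus grant, move one word from the device into memory
   and update the address and word count registers.                       */
void dma_transfer_one_word(struct dma_controller *dma, uint16_t device_data)
{
    if (dma->word_count == 0)
        return;                             /* transfer already complete */
    memory[dma->address] = device_data;     /* device -> memory write    */
    dma->address++;
    dma->word_count--;
}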
The figure below shows the block diagram of the DMA controller. The unit communicates with
the CPU through the data bus and control lines. The CPU selects a register within the DMA
through the address bus by enabling the DMA select (DS) and register select (RS) inputs. RD
(read) and WR (write) are bidirectional. When the BG (bus grant) input is 0, the CPU can
communicate with the DMA registers; when BG is 1, the CPU has relinquished the buses and
the DMA can communicate directly with the memory.

Working Diagram of DMA Controller


Explanation: The CPU initializes the DMA by sending the following information through the data
bus.
• The starting address of the memory blocks where the data is available (to read) or
where data are to be stored (to write).
• It also sends word count which is the number of words in the memory block to be read
or written.
• Control to define the mode of transfer such as read or write.
• A control to begin the DMA transfer

Modes of Data Transfer in DMA
There are 3 modes of data transfer in DMA that are described below.
• Burst Mode: In burst mode, the buses are handed back to the CPU only after the entire
block of data has been transferred, not before.
• Cycle Stealing Mode: In cycle stealing mode, the buses are handed back to the CPU after
the transfer of each byte, so the DMA must request bus control repeatedly. This mode works
well when higher-priority CPU tasks must not be delayed for long.
• Transparent Mode: In transparent mode, the DMA does not compete for the buses at all:
it transfers data only while the CPU is executing instructions that do not need the system buses.

8237 DMA Controller

The 8237 DMA controller is a DMA controller with a flexible number of channels, but it
generally works with 4 input-output channels. Among these channels, the channel to be given
the highest priority is decided by a priority encoder, and each channel in the 8237 has to be
programmed separately.

8257 DMA Controller

The 8257 DMA controller is a DMA controller that, when paired with a single Intel 8212 I/O
device, becomes a 4-channel DMA controller. In the 8257, the highest-priority channel is
acknowledged. It contains two 16-bit registers: the DMA address register and the terminal
count register.
Advantages of DMA Controller
• Direct Memory Access speeds up memory operations and data transfer.
• The CPU is not involved while the data is being transferred.
• DMA requires very few clock cycles to transfer data.
• DMA distributes the workload appropriately.
• DMA helps decrease the load on the CPU.

Disadvantages of DMA Controller


• Direct Memory Access is a costly operation because of additional operations.
• DMA suffers from Cache-Coherence Problems.
• DMA Controller increases the overall cost of the system.
• DMA Controller increases the complexity of the software.

Interrupt
An interrupt is a signal from a device attached to a computer, or from a program within the
computer, that requires the operating system to stop and figure out what to do next.
The idea behind an interrupt system is that while the CPU is executing a program, it may
request an I/O operation; the request is handed to the device and the CPU continues with other
processing until the I/O operation is ready. The I/O device then interrupts the CPU to indicate
that its data is available, and the remaining processing is done. Without interrupts, the CPU
would have to sit idle until the I/O operation completed; interrupts exist to avoid this CPU
waiting time.
How the processor handles interrupts
Whenever an interrupt occurs, the CPU stops executing the current program and control passes
to the interrupt handler, also called the interrupt service routine (ISR).
The steps by which the ISR handles an interrupt are as follows −
Step 1 − When an interrupt occurs, assume the processor is executing the i-th instruction; the
program counter will point to the next instruction, the (i+1)-th.
Step 2 − The program counter value is pushed onto the process stack, and the program counter
is loaded with the address of the interrupt service routine.
Step 3 − Once the interrupt service routine is completed, the address on the process stack is
popped and placed back into the program counter.
Step 4 − Execution then resumes from the (i+1)-th instruction.
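A small simulation of these four steps, in C; the stack, program counter, and ISR address are plain variables here, standing in for the processor's real mechanisms.

#include <stdint.h>

#define STACK_SIZE 64

static uint32_t PC;                   /* program counter                      */
static uint32_t stack[STACK_SIZE];    /* simulated process stack              */
static int      sp = -1;              /* stack pointer (index of top element) */

/* Steps 1-2: the interrupted PC (already pointing at instruction i+1) is
   pushed on the stack and the PC is loaded with the ISR's address.       */
void enter_interrupt(uint32_t isr_address)
{
    stack[++sp] = PC;          /* save the return address */
    PC = isr_address;          /* jump to the ISR         */
}

/* Steps 3-4: pop the saved address back into the PC and resume at i+1. */
void return_from_interrupt(void)
{
    PC = stack[sp--];
}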
Types of interrupts
There are two types of interrupts, as follows −
Hardware interrupts
Hardware interrupts are interrupt signals generated by external devices and I/O devices and
delivered to the CPU when their data or instructions are ready.
For example − in a keyboard, pressing a key to perform some action generates a signal that is
given to the processor; such interrupts are called hardware interrupts.
Hardware interrupts are classified into two types, as follows −
• Maskable Interrupt − A hardware interrupt that can be delayed when a higher-priority
interrupt has already occurred.
• Non-Maskable Interrupt − A hardware interrupt that cannot be delayed and must be
serviced by the processor immediately.
Software interrupts
Software interrupts are interrupt signals generated internally by software: when a program
needs to access a system call, a software interrupt is raised.
Software interrupts are divided into two types, as follows −
• Normal Interrupts − Interrupts that are caused by software instructions are called
normal (software) interrupts.
• Exception − An exception is an unplanned interruption that occurs while executing a
program. For example, if during execution a value is divided by zero, an exception is raised.

Programmed I/O
It is one of the simplest forms of I/O, where the CPU has to do all the work. This technique is
called programmed I/O.
Consider a user process that wants to print the five-character string ‘‘AARAV’’ on the printer
with the help of a serial interface.
The software first assembles the string in a buffer in user space, as shown in the figure −

Explanation
Step 1 − The user process acquires the printer for writing by using a system call to open it.
Step 2 − If the printer is currently in use by another process, this system call fails and returns an
error code, or it blocks until the printer is available, depending on the operating system and the
parameters of the call.
Step 3 − Once the printer is available, the user process makes a system call telling the operating
system to print the string on the printer.
Step 4 − The operating system generally copies the buffer with the string to an array.
Step 5 − It then checks to see if the printer is currently available. If not, it waits until it is.
Whenever the printer is available, the operating system copies the first character to the
printer’s data register (in this example using memory-mapped I/O). This action activates the
printer. The character may not appear yet, because some printers buffer a line or a page
before printing anything.
Step 6 − In the next figure, we see that the first character has been printed and that the system
has marked the second ‘‘A’’ as the next character to be printed.
Step 7 − Whenever it has copied a character to the printer, the operating system checks
to see if the printer is ready to accept another one.
Step 8 − Generally, the printer has a second register, which gives its status. The act of writing to
the data register causes the status to become not ready.
Step 9 − When the printer controller has processed the current character, it indicates its
availability by setting some bit in its status register or putting some value in it.
Step 10 − At this point the operating system waits for the printer to become ready again.
Step 11 − It then prints the next character, as shown in the third figure.
Step 12 − This loop continues till the entire string has been printed.
Step 13 − Then control returns to the user process.
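The steps above boil down to a short polling loop. The sketch below is written in the spirit of the classic programmed-I/O printer example; the register addresses, the READY bit, and the memory-mapped layout are all assumptions for an imaginary serial printer, not real hardware.

#include <stddef.h>

/* Hypothetical memory-mapped printer registers. */
#define PRINTER_READY 0x01
static volatile unsigned char * const printer_status = (unsigned char *)0xFF00;
static volatile unsigned char * const printer_data   = (unsigned char *)0xFF01;

/* Programmed I/O: the CPU copies every character itself, polling the
   status register until the printer can accept the next one.         */
void print_string(const char *p, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        while ((*printer_status & PRINTER_READY) == 0)
            ;                         /* busy-wait: the CPU does all the work     */
        *printer_data = p[i];         /* writing the data register clears "ready" */
    }
}

/* e.g. print_string("AARAV", 5); afterwards control returns to the user process. */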
