
CO2:

1. Concept of Program Execution/Interpretation

Program execution consists of the following steps:

1. Fetch the contents of the memory location pointed at by the PC. The contents of this location
are interpreted as an instruction to be executed. Hence, they are stored in the instruction
register (IR). Symbolically this can be written as:

IR <- [[PC]]

2. Increment the contents of the PC by 1.

PC <- [PC] + 1

3. Carry out the actions specified by the instruction stored in the IR.
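
A minimal Python sketch of this fetch-increment-execute cycle is shown below. The instruction
encoding, the memory contents, and the accumulator register are invented purely for illustration.

# Minimal sketch of the fetch/increment/execute cycle (illustrative encoding).
# Memory holds (opcode, operand) tuples; a real machine decodes binary fields.
memory = [
    ("LOAD", 7),   # ACC <- 7
    ("ADD", 3),    # ACC <- ACC + 3
    ("HALT", 0),
]

pc, acc = 0, 0
while True:
    ir = memory[pc]         # 1. IR <- [[PC]]   (fetch the instruction)
    pc = pc + 1             # 2. PC <- [PC] + 1 (point to the next instruction)
    opcode, operand = ir    # 3. carry out the actions specified by the IR
    if opcode == "LOAD":
        acc = operand
    elif opcode == "ADD":
        acc = acc + operand
    elif opcode == "HALT":
        break

print("ACC =", acc)         # prints: ACC = 10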

2. Single bus organization of the data path inside the CPU


The ALU and all the registers are interconnected via a single common bus. The control unit is
responsible for issuing the signals that control the operation of all the units inside the processor
and for interfacing with the memory bus.

3. Micro-operation and their RTL specification

There are four common types of micro-operations (micro-ops):

• Transfer: transfers data from one register to another

R0 <- R1

• Arithmetic: performs arithmetic on data in registers

R0 <- R1 + R2

• Logic/bit manipulation: performs bit (Boolean) operations on data

R0 <- R1 & R2; or R0 <- R1 | R2

• Shift: shift data in registers by one or more bit positions

R0 <- R1 << 3; or R0 <- R2 >> 2
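
The following short Python sketch mirrors the four micro-operation classes on hypothetical 8-bit
registers R0, R1, and R2 (the register width and the values are assumptions made for illustration):

# Each statement mirrors one RTL micro-operation on assumed 8-bit registers.
MASK = 0xFF                    # keep results within 8 bits

R1, R2 = 0b10101100, 0b01010011

R0 = R1                        # transfer:   R0 <- R1
R0 = (R1 + R2) & MASK          # arithmetic: R0 <- R1 + R2
R0 = R1 & R2                   # logic:      R0 <- R1 & R2
R0 = R1 | R2                   # logic:      R0 <- R1 | R2
R0 = (R1 << 3) & MASK          # shift:      R0 <- R1 << 3
R0 = R2 >> 2                   # shift:      R0 <- R2 >> 2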

Micro-Ops Transfer Parallel

 Parallel transfer is typically used for transfers between registers


 Ex: Transfer all the contents of B into A on one clock pulse

A <- B

 Control function: we can do this by structuring the RTL expression to indicate the controlling
condition. Ex: P: A <- B

 The control unit generates the load signal; only then is the data from B loaded into A, within
one clock cycle.
Micro-Ops Transfer Serial

 Serial transfer is used to specify that a collection of bits are to be moved, but that the
transfer is to occur one bit at a time

 Ex: S: A <- B, B <- B

 The control unit generates the shift signal; only then are the individual bits shifted serially,
one bit per clock pulse, from B into A (while the contents of B are recirculated).

Micro-Ops Transfer Bus

 A bus consists of a set of parallel data lines


 To transfer data using a bus: connect the output of the source register to the bus; connect
the input of the target register to the bus; when the clock pulse arrives, the transfer occurs
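
A toy Python sketch of such a bus transfer, assuming a dictionary of named registers and a single
shared bus value (both invented for illustration):

# Toy single-bus transfer: one source drives the bus, one destination loads from it.
registers = {"R0": 0, "R1": 25, "R2": 0}

def bus_transfer(src, dst):
    bus = registers[src]       # output of the source register is placed on the bus
    registers[dst] = bus       # destination register loads from the bus on the clock pulse

bus_transfer("R1", "R2")
print(registers["R2"])         # prints: 25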

Micro-Ops Transfer Memory

 Memory transfers are similar to register transfers, but memory-to-register transfers are
called read operations, while register-to-memory transfers are called write operations.
 RTL expressions for a read operation, assuming the use of an address register (AR) and a data
register (DR):

AR <- address

DR <- M[AR]

 RTL expressions for a write operation, assuming use of a data register:

AR <- address

DR <- value

M[AR] <- DR
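
A sketch of these read and write sequences in Python, modelling memory as a list and AR/DR as
ordinary variables (the memory size and the stored value are arbitrary):

# Memory modelled as a list of words; AR and DR follow the RTL above.
M = [0] * 16

# Write:  AR <- address,  DR <- value,  M[AR] <- DR
AR = 5
DR = 42
M[AR] = DR

# Read:   AR <- address,  DR <- M[AR]
AR = 5
DR = M[AR]
print(DR)                      # prints: 42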

MACHINE INSTRUCTION FORMAT

Microinstruction Format:

F1, F2, F3: Microoperation fields


CD: Condition for branching
BR: Branch field
AD: Address field
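
As a sketch, these fields can be extracted from a microinstruction word with simple masking and
shifting. The field widths used here (3 bits each for F1, F2, F3, 2 bits for CD and BR, 7 bits for AD,
giving a 20-bit word) are an assumed textbook layout, not something stated above:

# Split an assumed 20-bit microinstruction into F1 F2 F3 CD BR AD (widths 3,3,3,2,2,7).
def decode(word):
    ad = word & 0x7F           # bits 0-6   : address field
    br = (word >> 7) & 0x3     # bits 7-8   : branch field
    cd = (word >> 9) & 0x3     # bits 9-10  : condition for branching
    f3 = (word >> 11) & 0x7    # bits 11-13 : microoperation field F3
    f2 = (word >> 14) & 0x7    # bits 14-16 : microoperation field F2
    f1 = (word >> 17) & 0x7    # bits 17-19 : microoperation field F1
    return f1, f2, f3, cd, br, ad

print(decode(0b001_010_011_01_10_0000101))   # prints: (1, 2, 3, 1, 2, 5)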

4. Control Unit
 To execute an instruction, the control unit of the CPU must generate the required control
signals in the proper sequence.
 To generate the control signals in the proper sequence, a wide variety of techniques exist. Most
of these techniques, however, fall into one of two categories:
• Hardwired Control
• Micro-programmed Control

Hardwired Control:

 In the hardwired control technique, the control signals are generated by means of a
hardwired circuit. The main objective of the control unit is to generate the control signals in
the proper sequence.

 Each step in this sequence is completed in one clock cycle.


 A counter may be used to keep track of the control steps.
 In hardwired control, the control unit uses fixed logic circuits to interpret instructions
and generate control signals from them.
 The required control signals are determined by the following information.

1. The contents of the control step counter
2. The contents of the IR
3. The contents of the condition code flags
4. External input signals

Eg: Programmable Logic Array
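
A toy Python sketch of the hardwired idea: the asserted control signals are a fixed function of the
step counter, the opcode in the IR, and the flags. The signal names and the decode table below are
invented for illustration only:

# Hardwired control sketch: signals as fixed logic of (step counter, IR opcode, flags).
def control_signals(step, opcode, zero_flag=False):
    signals = set()
    if step == 0:
        signals |= {"PC_out", "MAR_in", "Read"}          # fetch, step 1
    elif step == 1:
        signals |= {"MDR_out", "IR_in", "PC_increment"}  # fetch, step 2
    elif step == 2 and opcode == "ADD":
        signals |= {"R1_out", "ALU_add", "R0_in"}        # execute ADD
    elif step == 2 and opcode == "BRZ" and zero_flag:
        signals |= {"Offset_out", "PC_in"}               # branch taken only if Z flag is set
    return signals

print(control_signals(2, "ADD"))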


Micro-programmed Control

 In micro-programmed control unit, the logic of the control unit is specified by a micro-
program.

 A micro-program consists of a sequence of instructions in a microprogramming


language. These are instructions that specify micro-operations.
 A micro-programmed control unit is a relatively simple logic circuit that is capable of (1)
sequencing through microinstructions and (2) generating control signals to execute each
microinstruction.
 The concept of a micro-program is similar to that of a computer program. In a computer
program, the complete set of instructions is stored in main memory, and during execution the
CPU fetches the instructions from main memory one after another. The sequence of
instruction fetches is controlled by the program counter (PC).
 Micro-programs are stored in the micro-program memory, and their execution is controlled by
the micro-program counter (µPC).
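
A toy sketch of this idea: microinstructions sit in a control store, a µPC steps through them, and
each microinstruction asserts a set of control signals. The signal names and the store contents are
invented for illustration:

# Toy control store: each microinstruction lists the signals it asserts plus the next uPC value.
control_store = [
    {"signals": ["PC_out", "MAR_in", "Read"], "next": 1},   # fetch, step 1
    {"signals": ["MDR_out", "IR_in"],         "next": 2},   # fetch, step 2
    {"signals": ["PC_increment"],             "next": 0},   # back to the start of fetch
]

uPC = 0
for _ in range(6):                      # run a few microinstruction cycles
    micro = control_store[uPC]
    print("asserting:", ", ".join(micro["signals"]))
    uPC = micro["next"]                 # sequencing through microinstructions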

Control Word (CW):

 A control word is defined as a word whose individual bits represent the various control
signals.

 The individual control words in a micro-program are referred to as microinstructions.


Memory Management

5. Concepts of semiconductor memory/ memory hierarchy

Memory is used to store data and instructions. Computer memory is the storage space in the
computer where the data to be processed and the instructions required for processing are stored.
Memory is divided into a large number of small parts called cells. Each location or cell has a
unique address.

Memory is primarily of three types −

 Cache Memory
 Primary Memory/Main Memory or Internal Memory
 Secondary Memory or External memory

Cache Memory:

Cache memory is a very high-speed semiconductor memory that speeds up the CPU's access to data
and instructions. It acts as a buffer between the CPU and the main memory. It is used to hold those
parts of the data and program that are most frequently used by the CPU.
Advantages
The advantages of cache memory are as follows −
 Cache memory is faster than main memory.
 Its access time is shorter than that of main memory.
 It holds the portions of a program that can be executed within a short period of time.
 It stores data for temporary use.
Disadvantages
The disadvantages of cache memory are as follows −

 Cache memory has limited capacity.


 It is very expensive.

Primary Memory/Main Memory or Internal Memory

Primary memory holds only the data and instructions on which the computer is currently
working. It has a limited capacity, and data is lost when the power is switched off. It is generally
made up of semiconductor devices. These memories are not as fast as registers. The data and
instructions required for processing reside in main memory. It is divided into two
subcategories: RAM and ROM.

1. RAM: It is a read/write memory that stores data only while the machine is powered on. Random
Access Memories are volatile in nature. As soon as the computer is switched off, the contents
of memory are lost.

 Types: SRAM, DRAM


 DRAM
 Low Cost
 High Density
 Medium Speed
 SRAM
 High Speed
 Ease of use
 Medium Cost

2. ROM: Read only memories are non-volatile in nature. The storage is permanent, but it is
read only memory. We cannot store new information in ROM.

 Types: Masked ROM, PROM, EPROM, EEPROM


MROM (Masked ROM): The very first ROMs were hard-wired devices that contained a pre-
programmed set of data or instructions.
PROM (Programmable ROM): PROM is read-only memory that can be modified only once by a
user.

EPROM (Erasable and Programmable Read Only Memory): Data is erased by exposure to UV light.

EEPROM (Electrically Erasable Programmable ROM): Data is erased and rewritten using electrical pulses.

6. CPU-memory interaction

CPU can interact with main memory in two ways:

a. It can write a byte/word to a given memory location.


i. The previous bits that were in that location are destroyed
ii. The new bits are saved for future use.
b. It can read a byte/word from a given memory location.
i. The CPU copies the bits stored at that location and stores them in a CPU
register
ii. The contents of the memory location are NOT changed

Main Memory Characteristics:

 Very closely connected to the CPU.


 Contents are quickly and easily changed.
 Holds the programs and data that the processor is actively working with.
 Interacts with the processor millions of times per second.
 Nothing permanent is kept in main memory.

Secondary Storage Characteristics:

 Connected to main memory through a bus and a device controller.


 Contents are easily changed, but access is very slow compared to main memory.
 Only occasionally interacts with CPU.
 Used for long-term storage of programs and data.
 Much larger than main memory (GBs vs. MBs).
7. Organization of memory modules
What is the five-state process model?
This process model contains five states that are involved in the life cycle of a process. Those are

1. New
2. Ready
3. Running
4. Blocked / Waiting
5. Exit

 New: When a new process is created, it is in the new state.
 Ready: All processes that are loaded into RAM and waiting for the CPU are in the ready state.
 Running: All processes that are currently executing on the CPU are in the running state.
 Blocked: All processes that leave the CPU to wait for some event (such as I/O completion) are in
the blocked state. When the awaited event completes, the process moves from the blocked state
back to the ready state, and from ready to running when it is dispatched.
 Exit / Terminated: A process that has finished execution and been removed from the CPU and
RAM is in the exit state.
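
The allowed transitions between these five states can be sketched as follows (a minimal Python
model; real operating systems attach far more bookkeeping to each state):

# Allowed transitions in the five-state process model.
TRANSITIONS = {
    "new":     {"ready"},                     # admitted by the OS
    "ready":   {"running"},                   # dispatched to the CPU
    "running": {"ready", "blocked", "exit"},  # preempted, waits for an event, or terminates
    "blocked": {"ready"},                     # awaited event completes
    "exit":    set(),
}

def move(state, target):
    if target not in TRANSITIONS[state]:
        raise ValueError("illegal transition: " + state + " -> " + target)
    return target

state = "new"
for nxt in ["ready", "running", "blocked", "ready", "running", "exit"]:
    state = move(state, nxt)
print(state)                                  # prints: exit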

Uni-Program and Multi-Program

Uni-Program:

 Uni-programming means one program sits in main memory at a time.


 In old operating systems (OS), only one program runs on the computer at a time: either the
browser, the calculator, or the word processor, but never more than one at once.
 Memory is split into two parts:
1. One for the operating system
2. One for the currently executing program


Multi-Program:

 In multiprogramming, multiple programs reside in main memory (RAM) at a time.

 An OS that handles multiple programs at a time is known as a multiprogramming operating
system. Memory is split into multiple parts:

 One part for the operating system

 A user part, which is subdivided to accommodate multiple processes.


Partitioning

1. Fixed size partitions


2. Variable size partitions

Cache Memory

 The CPU is a faster device, while main memory is a relatively slower one.
 Memory access is therefore the main bottleneck for performance. If a faster memory
device is inserted between the CPU and main memory, efficiency can be increased.
 The faster memory that is inserted between the CPU and main memory is termed cache
memory.

 High speed (towards CPU speed)


 Small size (power & cost)
Cache Performance:
When the processor needs to read or write a location in main memory, it first checks for a
corresponding entry in the cache.

 If the processor finds that the memory location is in the cache, a cache hit has occurred
and data is read from cache.
 If the processor does not find the memory location in the cache, a cache miss has
occurred. For a cache miss, the cache allocates a new entry and copies in data from main
memory, and then the request is fulfilled from the contents of the cache.
 The performance of cache memory is frequently measured in terms of a quantity
called Hit ratio.
 Hit ratio = hit / (hit + miss) = no. of hits/total accesses
 Cache performance can be improved by using a larger cache block size, higher associativity,
reducing the miss rate, reducing the miss penalty, and reducing the time to hit in the cache.
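
A short worked sketch of the hit-ratio formula, using made-up access counts and assumed access
times (10 ns for the cache, 100 ns for main memory):

# Hit ratio = hits / (hits + misses); the counts below are made up for illustration.
hits, misses = 950, 50
hit_ratio = hits / (hits + misses)
print(hit_ratio)                         # prints: 0.95

# Average access time with assumed latencies: cache 10 ns, main memory 100 ns.
t_cache, t_main = 10, 100
avg_access = hit_ratio * t_cache + (1 - hit_ratio) * (t_cache + t_main)
print(round(avg_access, 2), "ns")        # prints: 15.0 ns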
Cache memory mapping:

It is the method of loading data from main memory into cache memory. In a more technical
sense, the content of main memory that is referenced by the CPU is brought into cache memory.
This can be done in three ways –

1. Direct mapping

2. Associative mapping

3. Block-set-associative mapping

(i) Associative mapping technique

 Associative mapping is the fastest and most flexible cache organization; it uses an
associative memory.
 The associative memory stores both the address and the data word. In the associative mapping
technique, any word from main memory can be stored at any location in cache
memory.
 The 15-bit address generated by the CPU is placed in the argument register, and the associative
memory is searched for a matching address.
 If the address is present, the corresponding 12-bit data word is read from it and sent to the
CPU. If no match occurs for that address, the required word is accessed
from main memory, and this address-data pair is then transferred to the associative cache
memory.
(ii) Direct mapping
 Associative memories are more costly than random-access memories because matching logic is
added to each cell. In direct mapping, the 15-bit address generated by the CPU is
divided into two fields.
 The nine lower bits form the index field and the remaining six bits form
the tag field. Main memory requires an address that includes both the tag
and the index bits.
 Each word in the cache consists of the data and the tag
associated with it. When a new word is loaded into the cache, its tag bits are also
stored alongside the data bits. When the CPU generates a memory request, the
index field of the address is used to access the cache.
 The tag field of the address referenced by the CPU is compared with the tag in the word
read from the cache. If the two tags match, there is a hit and the desired
data word is available in the cache.
 If the two tags do not match, there is a miss: the required word is not
present in the cache and is read from main memory. It is then stored in the cache
along with its new tag.
 The disadvantage of the direct mapping technique is that the hit ratio can
drop considerably if two or more words whose addresses have the same index
but different tags are accessed repeatedly.
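
A sketch of this index/tag split in Python, using the 15-bit address with a 9-bit index and a 6-bit tag
described above (the cache here is a plain dictionary standing in for the cache lines):

# Direct mapping: split a 15-bit address into a 6-bit tag and a 9-bit index.
INDEX_BITS = 9
cache = {}                                     # index -> (tag, data); a stand-in for cache lines

def split(address):
    index = address & ((1 << INDEX_BITS) - 1)  # lower 9 bits select the cache line
    tag = address >> INDEX_BITS                # remaining 6 bits are stored and compared
    return tag, index

def read(address, main_memory):
    tag, index = split(address)
    if index in cache and cache[index][0] == tag:
        return cache[index][1]                 # hit: the tags match
    data = main_memory[address]                # miss: read the word from main memory
    cache[index] = (tag, data)                 # store it in the cache along with its new tag
    return data

main_memory = list(range(2 ** 15))
print(read(0b000011_000000101, main_memory))   # tag 3, index 5; prints: 1541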
(iii) Set-associative mapping
The disadvantage of direct mapping is that two words with the same index but different tags cannot
reside in the cache at the same time. A third type of cache organization, called set-associative
mapping, is an improvement over the direct-mapping organization. In this technique, each index
position in the cache can hold two or more data words together with their tags, and the tag-data
items stored under one index are said to form a set.
When a miss occurs in a set-associative cache and the set is full, it is necessary to replace one of the
tag-data items with a new value, using a cache replacement algorithm.
Cache Replacement Policies
The replacement algorithm determines which existing data is removed
from the cache to free space so that the required data can be placed in the cache.
1. Least Recently Used (LRU) – replace the cache line that has been in the cache the longest
with no references to it
2. First-in First-out (FIFO) – replace the cache line that has been in the cache the longest.
3. Least Frequently Used (LFU) – replace the cache line that has experienced the fewest
references.
4. Random – pick a line at random from the candidate lines.
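
A minimal LRU sketch for one cache set, using Python's OrderedDict; FIFO, LFU, and Random differ
only in which line they choose as the victim (the set size of 4 is an assumption):

from collections import OrderedDict

CAPACITY = 4                          # assumed number of lines in one set
lru_set = OrderedDict()               # tag -> data, least recently used first

def access(tag, fetch_from_memory):
    if tag in lru_set:
        lru_set.move_to_end(tag)      # hit: mark this line as most recently used
        return lru_set[tag]
    if len(lru_set) >= CAPACITY:
        lru_set.popitem(last=False)   # miss with a full set: evict the LRU line
    lru_set[tag] = fetch_from_memory(tag)
    return lru_set[tag]

# FIFO would evict the oldest inserted line regardless of use; LFU would keep a
# reference count per line; Random would pick any line in the set at random.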
Virtual Memory
 Virtual memory provides the illusion that the PC has enough
primary memory to run its programs. Even though a program to be
executed may sometimes be much bigger than the primary memory available, the user
never feels the need for a bigger primary storage to run that program.
 The parts of the program that are not currently being
executed are kept in secondary storage, and the parts of the program that are eventually executed
are first brought into main memory.
Address space and memory space:
 An address used by a programmer is called a virtual or logical address, and the set
of such addresses forms the address space. In this example, a virtual address is 20 bits wide.
 An address in main memory is called a physical address, and the set of such locations is
called the memory space. In this example, a physical address is 15 bits wide.
Pages and frames/blocks
 Both the virtual memory and the main memory are divided into small, equal-size partitions.
 The physical memory, or main memory, is broken into groups of equal size called
frames or blocks.
 The virtual memory, or logical memory, is broken into groups of equal size called
pages.

MMU (memory management unit)

The virtual address space is used to develop a process. The conversion of a virtual (logical) address
into a physical address is done by a special hardware unit called the Memory Management Unit
(MMU). When the required data are found in main memory, they are fetched into the cache memory
for further processing, and the CPU can work with them. If the required data are not in main
memory, the MMU, with the help of the operating system, brings the data from secondary storage
(disk) into main memory.

Conversion of a virtual/logical address (20 bits) to a physical address (15 bits)


 The MMU (memory management unit) converts a logical address to a physical address through
the page table.
 The page table records which frames or blocks of main memory have been allocated to each page.
 A logical address contains a page number and a word offset within the page. A physical address
contains a frame number and the same word offset.
Memory table for mapping a virtual address
Example for conversion from virtual address to physical address
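
As a sketch of the conversion, a 1K-word page size (10 offset bits) is assumed here since the notes do
not state it: the 20-bit virtual address splits into a 10-bit page number and a 10-bit offset, and the
15-bit physical address into a 5-bit frame number and the same offset. The page-table contents
below are made up for illustration:

# Translate a 20-bit virtual address to a 15-bit physical address via a page table.
OFFSET_BITS = 10                                      # assumed 1K-word pages

page_table = {0: 3, 1: 7, 2: 1}                       # page number -> frame number (made up)

def translate(virtual_address):
    page = virtual_address >> OFFSET_BITS                 # upper 10 bits: page number
    offset = virtual_address & ((1 << OFFSET_BITS) - 1)   # lower 10 bits: word within the page
    frame = page_table[page]                              # MMU page-table lookup (page fault if absent)
    return (frame << OFFSET_BITS) | offset                # 5-bit frame + 10-bit offset = 15 bits

print(bin(translate(0b0000000001_0000000101)))        # page 1 -> frame 7, offset 5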
