Unit I - Students
Computer
09-Aug-2024
Computer Types
• On the basis of data handling capabilities, the computer
is of three types:
• Analogue Computer
• Digital Computer
• Hybrid Computer
• On the basis of size, the computer can be of five types:
• Super Computer
• Mainframe Computer
• Mini Computer
• Workstation
• Micro Computer
Analogue Computer
Digital Computer
Hybrid Computer
Super Computer
• Supercomputers are the fastest computers, and they are also very expensive.
• A supercomputer can perform up to ten trillion individual calculations per second, which is what makes it so fast.
• It is used in the stock market and by big organizations for managing online currencies such as Bitcoin.
• It is used in scientific research for analyzing data obtained from exploring the solar system, satellites, etc.
Mainframe Computer
• It can support hundreds or thousands of users at the same time. It also supports multiple
programs simultaneously, so it can execute different processes simultaneously.
• All these features make the mainframe computer ideal for big organizations like the banking
and telecom sectors, which process high volumes of data.
Characteristics of Mainframe Computers
• It is also an expensive computer.
• It has high storage capacity and great performance.
• It can process a huge amount of data (like the data involved in the banking sector) very
quickly.
• It runs smoothly for a long time and has a long life.
Mini Computer
Workstation
Micro Computer
Functional Units
• A computer consists of five functionally independent main parts:
• Input
• Memory
• Arithmetic and Logic Unit (ALU)
• Output
• Control Unit
Program: A list of instructions that performs a task.
Data: Numbers and encoded characters that are used as operands by
the Instructions.
Bit: A binary digit; information in a computer is represented as strings of bits.
Binary-coded decimal (BCD)
Alphanumeric characters:
ASCII (American Standard Code for Information Interchange)
EBCDIC (Extended Binary-Coded Decimal Interchange Code)
Input Units
Memory Units
• The function of the memory unit is to store programs and data.
• It is divided into a number of blocks, and each block is divided into a number of cells.
• Each cell is capable of storing one bit of information at a time.
Primary storage
• It is made up of semiconductor material, so it is called semiconductor memory.
• Its data storage capacity is less than that of secondary memory, and it costs more than
secondary memory.
• The CPU can access its data directly, because it is an internal memory.
• Its data access speed is much faster than that of secondary memory.
• When the memory is accessed, usually only one word of data is read or written.
• The number of bits in each word is referred to as the word length of the computer,
typically 16, 32, or 64 bits.
• Examples: RAM and ROM
RAM vs ROM
• RAM (Random Access Memory): volatile memory; data is lost when the power turns off. It holds data and programs while they are running. It is a temporary storage medium, and the user can perform both read and write operations.
• ROM (Read Only Memory): non-volatile memory; it retains data even in the absence of a power source and stores programs between runs. It is a permanent storage medium, and the user can perform only read operations.
Cache Memory
• A small, fast memory that acts as a buffer for a slower, larger memory.
• At the start of program execution, the cache is empty. All program instructions and
required data are stored in the main memory.
• As execution proceeds, instructions are fetched into the processor chip and a copy of each
is placed in the cache.
• Cache memory holds the information the processor accesses most frequently.
Secondary Memory
• Secondary memory (nonvolatile storage) is a form of storage that retains data even in the
absence of a power source.
• It is made up of magnetic material, so it is called magnetic memory.
• Its data storage capacity is higher than that of primary memory.
• Its cost is much lower than that of primary memory.
• The CPU cannot access its data directly, because it is an external memory.
• Its data access speed is much slower than that of primary memory.
• Examples: magnetic disk, hard disk, CD, DVD, floppy disk
Control Unit
• The control unit coordinates the operation of the memory, arithmetic and logic, and
input and output units; it sends control signals to the other units and senses their states.
• The timing signals that govern the I/O transfers are generated by the control circuits.
• Timing signals also control data transfer between the processor and the memory.
• Timing signals are the signals that determine when a given action is to take place.
• A physically separate unit that interacts with other parts of the machine.
• A large set of control lines (wires) carries the signals used for timing and synchronization
of events in all units.
• The operations of a computer: The computer accepts information in the form of
programs and data through an input unit and stores it in the memory. Information stored
in memory is fetched, under program control, into an ALU, where it is processed.
Processed information leaves the computer through an output unit. All activities inside
the machine are directed by the control unit.
• The address bus carries the address location of the data or instruction.
• The data bus carries data from one component to another and the
control bus carries the control signals.
• The system bus is the common communication path that carries signals
to/from CPU, main memory and input/output devices.
• The input/output devices communicate with the system bus through the
controller circuit which helps in managing various input/output devices
attached to the computer.
Programming Example
Number Representation
• A number is represented in a computer system by a sequence of bits, called a binary number.
• INTEGERS: Three systems are used for representing integer numbers:
• Sign-and-magnitude
• 1's-complement
• 2's-complement
• In all three systems, the leftmost bit is 0 for positive numbers and 1 for negative numbers.
Sign and Magnitude
• Sign magnitude is a very simple representation of negative numbers.
• In sign magnitude, the first bit (MSB) is dedicated to representing the sign and hence is called the sign bit.
• Sign bit '1' represents a negative sign.
• Sign bit '0' represents a positive sign.
• In the sign-magnitude representation of an n-bit number, the first bit represents the sign and the remaining n-1 bits
represent the magnitude of the number.
• For example, +25 = 011001, where 11001 = 25 and the sign bit is 0 for '+'.
Number Representation
• -25 = 111001, where 11001 = 25 and the sign bit is 1 for '-'.
• Range of numbers represented by the sign-magnitude method for an n-bit number: -(2^(n-1) - 1) to +(2^(n-1) - 1)
• One problem with sign magnitude is that zero has two representations:
• +0 = 000000 and -0 = 100000
2’s complement method
• Two’s complement representation is a method of representing negative numbers in binary.
• In this representation, the most significant bit is used as a sign bit, with 0 indicating a positive number and 1
indicating a negative number.
• To represent a negative number in this form, first we need to take the 1’s complement of the number
represented in simple positive binary form and then add 1 to it.
• For example, to represent -8: positive 8 is 1000; its 1's complement is 0111; adding 1 gives 0111 + 1 = 1000.
• So, (-8) = (1000)
• Don't confuse (8) = 1000 with (-8) = 1000: with 4 bits we cannot represent a positive number
greater than 7, so 1000 represents -8 only.
• Range of numbers represented by 2's complement: -2^(n-1) to 2^(n-1) - 1
Number Representation
• 4 Bits
• 1 bit (MSB – Most Significant Bit) for sign
• 3 bits for value / magnitude
• Sign magnitude: -(2^(n-1) - 1) to +(2^(n-1) - 1) for an n-bit number; range -7 to +7
• 2's complement: -2^(n-1) to +(2^(n-1) - 1) for an n-bit number; range -8 to +7
Combine Everything
• Putting it all together:
• Sign bit: 1
• Exponent: 10000010
• Mantissa: 10101000000000000000000
• So, the IEEE 754 representation of -13.25 in single precision is:
• 1 10000010 10101000000000000000000
Combine Everything
• Putting it all together:
• Sign bit: 0
• Exponent: 10000000000
• Mantissa: 1110000000000000000000000000000000000000000000000000
• So, the IEEE 754 representation of 3.75 in double precision is:
0 10000000000 1110000000000000000000000000000000000000000000000000
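Both encodings can be cross-checked with Python's struct module, which packs floats in their raw IEEE 754 form (the helper name is ours, not from the slides):

```python
import struct

def float_bits(x, fmt, nbits):
    """Return the raw IEEE 754 bit pattern of x as a binary string."""
    packed = struct.pack('>' + fmt, x)   # big-endian; 'f' = single, 'd' = double
    return format(int.from_bytes(packed, 'big'), f'0{nbits}b')

bits32 = float_bits(-13.25, 'f', 32)
print(bits32[0], bits32[1:9], bits32[9:])
# 1 10000010 10101000000000000000000

bits64 = float_bits(3.75, 'd', 64)
print(bits64[0], bits64[1:12], bits64[12:])
# 0 10000000000 1110000000000000000000000000000000000000000000000000
```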
Arithmetic Operations
Example 1: Addition - Add −3 and 4
Convert to Binary (4-bit Two's Complement)
-3:
Positive 3 in binary: 0011₂
Invert bits: 1100₂
Add 1: 1100 + 1 = 1101₂
So, −3 in 4-bit two's complement is 1101₂
4: Binary: 0100₂
Add the Numbers
1101
+ 0100
------
1 0001
The result is 10001₂, but since we're using 4 bits, we discard the carry-out bit.
Discarding the leftmost bit: 0001₂
Result in decimal: 1
• Overflow / Carry: In computer arithmetic, overflow occurs when the result of an arithmetic operation is too large to be represented in the available number of bits. This can result in incorrect or unexpected results.
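The addition above can be checked with a small Python sketch; the `to_twos`/`from_twos` helpers are illustrative names for the conversions done by hand on the slide:

```python
MASK = 0b1111            # 4-bit word

def to_twos(value, bits=4):
    """Two's-complement encoding: keep only the low `bits` bits."""
    return value & ((1 << bits) - 1)

def from_twos(word, bits=4):
    """Interpret a `bits`-wide pattern as a signed value."""
    return word - (1 << bits) if word & (1 << (bits - 1)) else word

a = to_twos(-3)          # 0b1101
b = to_twos(4)           # 0b0100
total = (a + b) & MASK   # 1101 + 0100 = 1 0001; carry-out discarded -> 0001
print(format(total, '04b'), from_twos(total))   # 0001 1
```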
Arithmetic Operations
Example 2: Subtraction -> 3 − 5 - Convert the second number to its two's complement, then add.
Convert to Binary (4-bit Two's Complement)
-5:
Positive 5 in binary: 0101₂
Invert bits: 1010₂
Add 1: 1010 + 1 = 1011₂
So, −5 in 4-bit two's complement is 1011₂
3:
Binary: 0011₂
Subtract by Adding the Two's Complement
To compute 3 − 5, add 3 to the two's complement representation of −5:
0011
+ 1011
------
1110
The result is 1110₂, which in two's complement represents −2
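The same subtract-by-adding-the-complement trick, sketched in Python (variable names are ours):

```python
BITS = 4
MASK = (1 << BITS) - 1

def from_twos(word):
    """Interpret a 4-bit pattern as a signed two's-complement value."""
    return word - (1 << BITS) if word & (1 << (BITS - 1)) else word

x, y = 3, 5
neg_y = ((y ^ MASK) + 1) & MASK      # invert bits, add 1: 0101 -> 1010 -> 1011
diff = (x + neg_y) & MASK            # 0011 + 1011 = 1110
print(format(diff, '04b'), from_twos(diff))   # 1110 -2
```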
Arithmetic Operations
Example 3: Multiplication - Multiply −2 and 3.
Convert to Binary (4-bit Two's Complement)
-2:
• Positive 2 in binary: 0010₂
• Invert bits: 1101₂
• Add 1: 1101 + 1 = 1110₂
• So, −2 in 4-bit two's complement is 1110₂
3: Binary: 0011₂
Multiply the Numbers
Perform binary multiplication:
1110
x 0011
------
1110 (1110 * 1)
1110 (1110 * 1, shifted left by 1)
------
101010
The result is 101010₂.
Since we use a 4-bit representation, discard the upper bits. Result: 1010₂ - In decimal: −6
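A quick Python check of the truncated product (names are illustrative):

```python
BITS = 4
MASK = (1 << BITS) - 1

def from_twos(word):
    """Interpret a 4-bit pattern as a signed two's-complement value."""
    return word - (1 << BITS) if word & (1 << (BITS - 1)) else word

a = (-2) & MASK                      # 1110
b = 3 & MASK                         # 0011
product = (a * b) & MASK             # 101010 truncated to the low 4 bits -> 1010
print(format(product, '04b'), from_twos(product))   # 1010 -6
```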
Arithmetic Operations
Example 4: Division - Divide −8 by 2
Convert to Binary (4-bit Two's Complement)
-8:
Positive 8 in binary: 1000₂ (note: +8 itself does not fit in 4-bit two's complement)
Invert bits: 0111₂
Add 1: 0111 + 1 = 1000₂
So, −8 in 4-bit two's complement is 1000₂
2: Binary: 0010₂
Divide the Numbers - Divide the magnitudes by repeated subtraction, then apply the sign:
1000 − 0010 = 0110
0110 − 0010 = 0100
0100 − 0010 = 0010
0010 − 0010 = 0000
Subtracting 0010 four times reaches zero, so the magnitude of the quotient is 4 (0100₂).
Since the dividend is negative and the divisor is positive, the result is −4, i.e., 1100₂ in 4-bit two's complement.
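The repeated-subtraction scheme, sketched in Python (the function name is ours; the sign is handled separately from the magnitudes, as on the slide):

```python
def divide_repeated_subtraction(dividend, divisor):
    """Divide magnitudes by repeated subtraction, then reapply the sign."""
    sign = -1 if (dividend < 0) != (divisor < 0) else 1
    mag, d = abs(dividend), abs(divisor)
    quotient = 0
    while mag >= d:          # 1000 - 0010 - 0010 - 0010 - 0010 -> 0000
        mag -= d
        quotient += 1
    return sign * quotient

q = divide_repeated_subtraction(-8, 2)
print(q, format(q & 0b1111, '04b'))   # -4 1100
```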
Performance Measurements
▪ Indicates how fast a computer can execute programs.
▪ The speed with which a computer executes programs is affected by
✓ The technology in which the hardware is implemented.
✓ The design of its instruction set.
✓ Its hardware and software (ex: OS).
Performance Measurements
1.TECHNOLOGY
▪ The technology of Very Large Scale Integration (VLSI) that is used to fabricate the
electronic circuits for a processor on a single chip is a critical factor in the speed of
execution of machine instructions.
▪ The speed of switching between the 0 and 1 states in logic circuits is largely
determined by the size of the transistors that implement the circuits. Smaller
transistors switch faster.
▪ Advances in fabrication technology over several decades have reduced transistor
sizes dramatically.
Intel is mass-producing transistors 14 nanometers across, just 14 times wider than DNA molecules.
Performance Measurements
Advantages:
▪ instructions can be executed faster.
▪ more transistors can be placed on a chip, leading to more logic functionality and
more memory storage capacity.
2. PARALLELISM
▪ Performance can be increased by performing a number of operations in parallel.
▪ Parallelism can be implemented on many different levels.
✓ Instruction-level parallelism
✓ Multicore processors
✓ Multiprocessors
Performance Measurements
2.1 Instruction-level parallelism
▪ Simplest way to execute sequence of instructions in a processor in order to complete
all steps of the current instruction before starting the steps of the next instruction.
▪ If we overlap the execution of the steps of successive instructions, total execution
time will be reduced.
▪ For example, the next instruction could be fetched from memory at the same time
that an arithmetic operation is being performed on the register operands of the
current instruction.
▪ This form of parallelism is called pipelining - a process in which multiple
instructions are overlapped during execution.
Performance Measurements
2.1 Instruction-level parallelism – Pipelining
Execution in a pipelined processor: Using a space-time diagram.
▪ Consider a processor having 4 stages and let there be 2 instructions to be executed.
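Under the usual idealized model (not stated on the slide, but standard), n instructions on a k-stage machine take k·n cycles without pipelining and k + (n − 1) cycles with it. A quick check for the 4-stage, 2-instruction case:

```python
def cycles(n_instr, k_stages, pipelined):
    """Ideal cycle counts; ignores stalls and hazards."""
    return k_stages + (n_instr - 1) if pipelined else k_stages * n_instr

print(cycles(2, 4, pipelined=False))  # 8
print(cycles(2, 4, pipelined=True))   # 5
```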
Performance Measurements
2.2 Multicore Processors
▪ An integrated circuit with two or more processor cores attached, for enhanced performance and reduced power
consumption.
▪ A CPU is the overall component.
▪ A core is one part of that component.
Performance Measurements
2.3 Multiprocessors:
▪ A multiprocessor is a system with two or more central processing units (CPUs) that is capable of
performing multiple tasks.
▪ These systems either execute a number of different application tasks in parallel, or
they execute subtasks of a single large task in parallel.
▪ The main objective is to boost the system’s execution speed, fault tolerance and
application matching.
▪ Two types of multiprocessors - shared memory multiprocessor and distributed
memory multiprocessor.
Performance Measurements
2.3 Multiprocessors: Shared memory multiprocessor
▪ In shared memory multiprocessors, all the CPUs share a common memory, but in a
distributed memory multiprocessor, every CPU has its own private memory.
Performance Measurements
2.3 Multiprocessors: Shared memory multiprocessor
▪ Applications of shared memory multiprocessor:
▪ Single Instruction Single Data stream (SISD) (ex: Uniprocessors)
▪ Single Instruction Multiple Data stream (SIMD) (ex: Vector processing)
▪ Multiple Instruction Single Data stream (MISD) (ex: Pipelined processors)
▪ Multiple Instruction Multiple Data stream (MIMD) (ex: A single system
executing multiple, individual series of instructions in multiple perspectives)
▪ Benefits of using a Multiprocessor:
▪ Enhanced performance.
▪ Multiple applications.
▪ Multi-tasking inside an application.
▪ High throughput and responsiveness.
▪ Hardware sharing among CPUs.
Performance Measurements
2.3 Multiprocessors: Distributed memory multiprocessor
▪ A multicomputer system with multiple processors that are connected together to solve a problem.
▪ Each processor has its own memory, accessible only by that particular processor, and the processors can
communicate with each other via an interconnection network.
▪ These computers normally have access only to their own
memory units.
▪ When the tasks they are executing need to share data, they
do so by exchanging messages over a communication
network.
▪ This property distinguishes them from shared-memory
multiprocessors, leading to the name message passing
multicomputer.
Performance Measurements
3. PERFORMANCE AND METRICS
▪ For best performance, it is necessary to design the compiler, the machine instruction
set, and the hardware in a coordinated way.
▪ The operating system overlaps processing, disk transfers, and printing for several
programs to make the best possible use of the resources available.
▪ Performance is affected by the speed of the processor, the disk, and the printer.
3.1 Elapsed Time:
▪ The total time required to execute the entire program.
▪ The elapsed time for the execution of a program depends on all units in a computer
system.
Performance Measurements
3.2 Processor Time:
▪ It refers to the time the processor spends actively working on executing the program.
▪ The processor time depends on the hardware involved in the execution of individual
machine instructions.
▪ This hardware comprises the processor and the memory, which are usually connected
by a bus.
Performance Measurements
4. CACHE MEMORY
▪ At the start of execution, all program instructions and the required data are stored in
the main memory.
▪ As execution proceeds, instructions are fetched one by one over the bus into the
processor, and a copy is placed in the cache.
▪ When the execution of an instruction calls for data located in the main memory, the
data are fetched and a copy is placed in the cache.
▪ Later, if the same instruction or data item is needed a second time, it is read directly
from the cache.
▪ The processor and a relatively small cache memory can be fabricated on a single
integrated circuit chip.
▪ The internal speed of performing the basic steps of instruction processing on such
chips is very high and is considerably faster than the speed at which instructions and
data can be fetched from the main memory.
Performance Measurements
4. CACHE MEMORY
▪ A program will be executed faster if the movement of instructions and data between
the main memory and the processor is minimized, which is achieved by using the
cache.
▪ For example, suppose a number of instructions are executed repeatedly over a short
period of time, as happens in a program loop. These instructions are made available in
the cache, so they can be fetched quickly during the period of repeated use.
▪ The same applies to data that are used repeatedly.
Performance Measurements
5. PROCESSOR CLOCK
▪ Processor circuits are controlled by a timing signal called a clock.
▪ The clock defines regular time intervals, called clock cycles.
▪ To execute a machine instruction, the processor divides the action to be performed into a sequence of
basic steps, such that each step can be completed in one clock cycle.
▪ The length P of one clock cycle is an important parameter that affects processor performance.
▪ Its inverse is the clock rate, R = 1/P, which is measured in cycles per second.
▪ The standard electrical engineering term for 'cycles per second' is hertz (Hz).
▪ The term 'million' is denoted by the prefix Mega (M) and 'billion' by the prefix Giga (G).
Hence, 500 million cycles per second is usually abbreviated to 500 Megahertz (MHz), and 1250 million
cycles per second is abbreviated to 1.25 Gigahertz (GHz).
Performance Measurements
6. BASIC PERFORMANCE EQUATION
▪ Let T be the processor time required to execute a program that has been prepared in some high-
level language.
▪ Let N be the number of machine-language instructions executed. N is the actual number of
instruction executions, and is not necessarily equal to the number of machine instructions in
the object program.
▪ Some instructions may be executed more than once, as is the case for instructions inside a
program loop. Others may not be executed at all, depending on the input data used.
▪ Suppose that the average number of basic steps needed to execute one machine instruction is
S, where each basic step is completed in one clock cycle.
▪ If the clock rate is R cycles per second, the program execution time is given by

T = (N × S) / R
Performance Measurements
6. BASIC PERFORMANCE EQUATION
▪ If the clock rate is R cycles per second, the program execution time is given by
T = (N × S) / R
▪ The performance parameter T for an application program is much more important to the user
than the individual values of the parameters N, S or R.
▪ To achieve high performance, the value of T must be reduced, which means reducing the
values of N and S, and increasing the value of R.
▪ The value of N is reduced if the source program is compiled into fewer machine instructions.
▪ The value of S is reduced if instructions have a smaller number of basic steps to perform or if
the execution of instructions is overlapped.
▪ Using a higher-frequency clock increases the value of R, which means that the time required
to complete a basic execution step is reduced.
▪ We must emphasize that N, S, and R are not independent parameters: changing one may affect
another.
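The basic performance equation is easy to sanity-check numerically. A sketch with made-up illustrative numbers (not from the slides):

```python
def execution_time(n_instr, steps_per_instr, clock_rate_hz):
    """Basic performance equation: T = (N * S) / R, in seconds."""
    return n_instr * steps_per_instr / clock_rate_hz

# Illustrative values: 10^9 executed instructions, an average of
# S = 2 cycles per instruction, and a 2 GHz clock.
t = execution_time(1_000_000_000, 2, 2_000_000_000)
print(t)   # 1.0 (second)
```

Doubling R, or halving N or S, halves T, which is the trade-off the slide describes.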
Performance Measurements
7. CLOCK RATE
▪ There are two possibilities for increasing the clock rate, R.
▪ First, improving the integrated-circuit (IC) technology makes logic circuits faster,
which reduces the time needed to complete a basic step.
▪ This allows the clock period P, to be reduced and the clock rate R, to be increased.
▪ Second, reducing the amount of processing done in one basic step also makes it
possible to reduce the clock period, P.
▪ Increases in the value of R that are entirely caused by improvements in IC technology
affect all aspects of the processor’s operation equally with the exception of the time it
takes to access the main memory.
▪ In the presence of a cache, the percentage of accesses to the main memory is small.
▪ The value of T will be reduced by the same factor as R is increased because S and N
are not affected.
Performance Measurements
8. PIPELINING AND SUPERSCALAR OPERATION
▪ If the instructions are executed strictly one after another, all S basic steps (clock
cycles) of an instruction must complete before the next instruction can begin.
▪ A substantial improvement in performance can be achieved by overlapping the
execution of successive instructions, using a technique called pipelining.
▪ Ex: Add R1, R2, R3 // Adds the contents of registers R1 and R2 and stores the sum in R3
▪ Actual working:
✓ The contents of R1 and R2 are first transferred to the inputs of the ALU.
✓ After the add operation is performed, the sum is transferred to R3.
✓ The processor can read the next instruction from the memory while the addition
operation is being performed.
Performance Measurements
8. PIPELINING AND SUPERSCALAR OPERATION
▪ Key-point:
▪ Parallel execution must preserve the logical correctness of programs (results produced
must be the same as those produced by serial execution of program instructions)
▪ Many of today’s high-performance processors are designed to operate in this manner.
SUPERSCALAR OPERATION
▪ Several scalar instructions can be initiated simultaneously and executed independently.
PIPELINING OPERATION
▪ Also allows several instructions to be executed at the same time, but at different pipeline stages at a given moment.
Performance Measurements
9. INSTRUCTION SET: CISC AND RISC
RISC
▪ It is a Reduced Instruction Set Computer.
▪ It emphasizes software to optimize the instruction set.
▪ Uses hardwired control in the RISC processor.
▪ Requires multiple register sets to store instructions.
▪ RISC has simple decoding of instructions.
▪ Use of the pipeline is simple in RISC.
▪ Uses a limited number of instructions that execute in less time.
CISC
▪ It is a Complex Instruction Set Computer.
▪ It emphasizes hardware to optimize the instruction set.
▪ Uses a microprogrammed control unit in the CISC processor.
▪ Requires a single register set to store instructions.
▪ CISC has complex decoding of instructions.
▪ Use of the pipeline is difficult in CISC.
▪ Uses a large number of instructions, consuming more time to execute.
▪ The 2^k addresses constitute the address space of the computer, and the memory can
have up to 2^k addressable locations.
▪ A 32-bit address creates an address space of 2^32 or 4G (4 giga) locations, where 1G is
2^30. Other notational conventions that are commonly used are K (kilo) for the number
2^10 (1,024), and T (tera) for the number 2^40.
• Computers may have instructions of several different lengths containing a varying number of addresses. According to
the number of address references, there are three-address, two-address, one-address, and zero-address instructions. Let us see
examples of each of them.
Three address instructions
• The three address instruction can be represented symbolically as
• ADD A, B, C
• where A, B, and C are variables. These variable names are assigned to distinct locations in the memory. In this instruction,
operands A and B are called source operands, operand C is called the destination operand, and ADD is the operation to be
performed on the operands. Thus the general instruction format for a three-address instruction is
• Operation Source 1, Source 2, Destination
• The number of bits required to represent such instruction include:
• Bits required to specify the three memory addresses of the three operands. If n-bits are required to specify one
memory address, 3n bits are required to specify three memory addresses.
• Bits required to specify the operation.
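The size calculation above can be sketched directly; the 16-bit address and 8-bit opcode below are hypothetical values chosen for illustration:

```python
def three_address_width(addr_bits, opcode_bits):
    """Total instruction width: three n-bit addresses plus the opcode field."""
    return 3 * addr_bits + opcode_bits

# Hypothetical machine: 16-bit memory addresses, 8-bit opcode.
print(three_address_width(16, 8))   # 56 bits
```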
X = A×B + C×C
Explain how the above expression will be executed in one-address, two-address and three-address processors in an
accumulator organization.
Solution:
Using one-address instructions
LOAD A : AC ← M[A]
MUL B : AC ← AC * M[B]
STORE T : M[T] ← AC
LOAD C : AC ← M[C]
MUL C : AC ← AC * M[C]
ADD T : AC ← AC + M[T]
STORE X : M[X] ← AC
Using two-address instructions
MOV A, R1 : R1 ← M[A]
MUL B, R1 : R1 ← R1 * M[B]
MOV C, R2 : R2 ← M[C]
MUL C, R2 : R2 ← R2 * M[C]
ADD R2, R1 : R1 ← R1 + R2
MOV R1, X : M[X] ← R1
Using three-address instructions
MUL A, B, R1 : R1 ← M[A] * M[B]
MUL C, C, R2 : R2 ← M[C] * M[C]
ADD R1, R2, X : M[X] ← R1 + R2
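The one-address sequence can be traced as register-transfer steps in Python. The memory contents (A = 2, B = 3, C = 4) are made up for illustration:

```python
# Register-transfer trace of the one-address sequence for X = A*B + C*C
# on an accumulator machine; M models memory, AC the accumulator.
M = {'A': 2, 'B': 3, 'C': 4}     # illustrative memory contents

AC = M['A']          # LOAD A   : AC <- M[A]
AC = AC * M['B']     # MUL B    : AC <- AC * M[B]
M['T'] = AC          # STORE T  : M[T] <- AC
AC = M['C']          # LOAD C   : AC <- M[C]
AC = AC * M['C']     # MUL C    : AC <- AC * M[C]
AC = AC + M['T']     # ADD T    : AC <- AC + M[T]
M['X'] = AC          # STORE X  : M[X] <- AC

print(M['X'])        # 22, i.e. 2*3 + 4*4
```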
Branching
• It is not always possible to store a program in consecutive memory locations. After executing a decision-making
instruction, we have to follow one of two program sequences.
• In such cases we cannot use straight-line sequencing. Here, we have to use branch instructions to transfer program
control from one straight-line sequence of instructions to another.
• When we decide to transfer program control after checking a condition, the branch instructions used are called
conditional branch instructions.
• In branch instructions, a new address, called the target address or branch target, is loaded into the PC, and the next
instruction is fetched from that address instead of from the location that follows the branch instruction in sequential
address order.
• Conditional branch instructions are used for program looping. In looping, the program is instructed to execute a
certain set of instructions repeatedly, performing a particular task a number of times. For example, to add ten numbers
stored in consecutive memory locations we have to perform addition ten times.
Conditional Codes
• Condition code flags store the results of certain conditions that arise when operations are performed during
execution of the program. The condition code flags are stored in the status register. The status register is also referred to as
the flag register.
• ALU operations and certain register operations may set or reset one or more bits in the status register.
• Status bits lead to a new set of microprocessor instructions. These instructions permit the execution of a program to change
flow on the basis of the condition of bits in the status register, so the condition bits in the status register can be used to take
logical decisions within the program. Some of the common condition code flags are:
1. Carry/Borrow: The carry bit is set when the sum of two 8-bit numbers is greater than 1111 1111 (FFH). A borrow
is generated when a larger number is subtracted from a smaller number.
2. Zero: The zero bit is set when the contents of a register are zero after any operation. This happens not only when you
decrement the register, but also when any arithmetic or logical operation causes the contents of the register to become zero.
3. Negative or sign: In 2's complement arithmetic, the most significant bit is a sign bit. If this bit is logic 1, the number is
negative; otherwise it is positive. The negative or sign bit is set when any arithmetic or logical operation
gives a negative result.
4. Auxiliary carry: The auxiliary carry bit of the status register is set when an addition in the first 4 bits causes a carry into the
fifth bit. This is often referred to as a half carry or intermediate carry, and is used in BCD arithmetic.
5. Overflow: In 2's complement arithmetic, the most significant bit is used to represent the sign and the remaining bits
represent the magnitude of a number. This flag is set if the result of a signed operation is too large to fit in the number of bits
available (7 bits for an 8-bit number) to represent it.
6. Parity: When the result of an operation leaves the indicated register with an even number of 1's, the parity bit is set.
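A sketch of how the carry, zero, sign and overflow flags can be derived after an 8-bit addition (the function name and dictionary are illustrative, not a real status-register layout):

```python
def flags_after_add(a, b, bits=8):
    """Derive carry, zero, sign and overflow flags for an 8-bit addition."""
    mask = (1 << bits) - 1
    raw = (a & mask) + (b & mask)
    result = raw & mask
    carry = raw > mask                       # carry out of the top bit
    zero = result == 0
    sign = bool(result & (1 << (bits - 1)))  # MSB of the result
    # Overflow: both operands share a sign, but the result's sign differs.
    sa, sb = a & (1 << (bits - 1)), b & (1 << (bits - 1))
    overflow = (sa == sb) and (bool(sa) != sign)
    return {'C': carry, 'Z': zero, 'S': sign, 'V': overflow}

# 127 + 1 = 128: no carry, but the signed result wraps to -128,
# so the sign and overflow flags are set.
print(flags_after_add(0x7F, 0x01))
```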
Addressing Modes
• Addressing modes in computer architecture refer to the various methods used to specify the operands
(data) for instructions in a program.
• Different addressing modes provide flexibility in how data is accessed and manipulated. Here are
some common addressing modes:
• Immediate Addressing: Fast but limited to constant values.
• Register Addressing: Very fast, used for operations on data already in registers.
• Direct Addressing: Simplicity in addressing but limited by address space.
• Indirect Addressing: Flexibility with dynamic data access.
• Indexed Addressing: Useful for accessing elements in arrays.
• Base-Register Addressing: Efficient for accessing data structures with a base address.
• Relative Addressing: Useful for branching and loop control.
• Register Indirect Addressing: Allows flexible and dynamic memory access.
1. Immediate Addressing
Description: The operand is specified directly within the instruction itself.
Explanation: The instruction moves a constant value (e.g., 5) into register R1. The # symbol indicates an immediate value.
2. Register Addressing
Description: The operand is in a register, and the instruction specifies which register to use.
Explanation: Adds the value in register R2 to the value in register R1 and stores the result in register R1.
3. Direct Addressing
Description: The instruction specifies the exact memory address where the operand is located.
Explanation: Loads the data from memory address 1000 into register R1. The address 1000 is a direct address.
4. Indirect Addressing
Description: The instruction specifies a register or memory location that holds the address of the operand.
Explanation: If R2 contains the address 2000, the instruction loads the data from memory address 2000 into register R1. The value of R2 is used as a pointer to the actual data location.
5. Base-Register Addressing
Description: Combines a base address from a register with an offset to calculate the effective address.
Explanation: The effective address is computed as R2 + 1000. The instruction loads the data from this computed address into register R1.
6. Relative Addressing
Description: The effective address is determined by adding an offset to the current instruction address.
Explanation: If the current instruction address is 2000, the branch instruction will jump to address 2100. The offset 100 is added to the current address.
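The effective-address computations for several of these modes can be sketched in Python. The register and memory contents below are made up for the example:

```python
# Illustrative effective-address (EA) computation for a few addressing modes.
registers = {'R2': 2000}
memory = {1000: 42, 2000: 99, 3000: 7}   # invented contents

# Direct: the instruction carries the operand's address itself.
ea_direct = 1000
# Indirect: the register holds the address of the operand.
ea_indirect = registers['R2']
# Base-register: base register plus an offset from the instruction.
ea_base = registers['R2'] + 1000
# Relative: offset added to the current instruction address.
pc = 2000
ea_relative = pc + 100                   # a branch would jump to 2100

print(memory[ea_direct], memory[ea_indirect], memory[ea_base], ea_relative)
# 42 99 7 2100
```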