Unit 5
COMPUTER ORGANIZATION
AND ARCHITECTURE
UNIT-5
Contents
• Parallelism: Need, types, applications and challenges
• Architecture of Parallel Systems-Flynn’s classification
• ARM Processor: The thumb instruction set
• Processor and CPU cores, Instruction Encoding format
• Memory load and Store instruction
• Basics of I/O operations.
• Case study: ARM 5 and ARM 7 Architecture
Parallelism
• Executing two or more operations at the same time is known as parallelism.
• Parallel processing is a method to improve computer system performance by executing two or more instructions simultaneously.
• A parallel computer is a set of processors that are able to work cooperatively to solve a computational problem.
• Two or more ALUs in the CPU can work concurrently to increase throughput.
• The system may have two or more processors operating concurrently.
Goals of parallelism
• To increase the computational speed, i.e. to reduce the amount of time that you need to wait for a problem to be solved.
• To increase throughput, i.e. the amount of processing that can be accomplished during a given interval of time.
• To improve the performance of the computer for a given clock speed.
• To solve bigger problems that might not fit in the limited memory of a single CPU.
Applications of Parallelism
• Numeric weather prediction
• Socio-economic modelling
• Finite element analysis
• Artificial intelligence and automation
• Genetic engineering
• Weapon research and defence
• Medical applications
• Remote sensing applications
Types of parallelism
1. Hardware Parallelism
2. Software Parallelism
• Hardware Parallelism:
The main objective of hardware parallelism is to increase the processing speed. Based on the hardware architecture, hardware parallelism can be divided into two types: processor parallelism and memory parallelism.
• Processor parallelism
Processor parallelism means that the computer architecture has multiple nodes, multiple CPUs or multiple sockets, multiple cores, and multiple threads.
• Memory parallelism means shared memory, distributed memory, hybrid distributed-shared memory, multilevel pipelines, etc. A related abstract model is the parallel random access machine (PRAM): “It is an abstract model for parallel computation which assumes that all the processors operate synchronously under a single clock and are able to randomly access a large shared memory. In particular, a processor can execute an arithmetic, logic, or memory access operation within a single clock cycle.” This is what we call using overlapping or pipelining of instructions to achieve parallelism.
Hardware Parallelism
• One way to characterize the parallelism in a processor is by the number of instruction issues per machine cycle.
• If a processor issues k instructions per machine cycle, then it is called a k-issue processor.
• In a modern processor, two or more instructions can be issued per machine cycle.
• A conventional processor takes one or more machine cycles to issue a single instruction. These types of processors are called one-issue machines, with a single instruction pipeline in the processor.
• A multiprocessor system built with n k-issue processors should be able to handle a maximum of nk threads of instructions simultaneously.
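As a minimal illustration of the nk bound (the numbers here are assumed for the example, not taken from the slides):

```python
# Assumed example: n = 4 processors, each a k = 2-issue processor.
n_processors = 4
k_issue = 2

# A multiprocessor built with n k-issue processors can handle at most
# n * k threads of instructions simultaneously.
max_threads = n_processors * k_issue
print(max_threads)  # 8
```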
Software Parallelism
• It is defined by the control and data dependences of programs.
• The degree of parallelism is revealed in the program flow graph.
• Software parallelism is a function of algorithm, programming style, and compiler optimization.
• The program flow graph displays the patterns of simultaneously executable operations.
• Parallelism in a program varies during the execution period.
• It limits the sustained performance of the processor.
Software Parallelism - types
• Instruction-level parallelism
• Task-level parallelism
• Data parallelism
• Transaction-level parallelism
Instruction level parallelism
• Instruction-level parallelism (ILP) is a measure of how many operations can be performed in parallel at the same time in a computer.
• Consider, for example, three operations where two are independent and the third uses their results. If we assume that each operation can be completed in one unit of time, then these 3 operations can be completed in 2 units of time.
• The ILP factor is 3/2 = 1.5, which is greater than without ILP.
• A superscalar CPU architecture implements ILP inside a single processor, which allows faster CPU throughput at the same clock rate.
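The ILP factor can be computed from the dependence structure: sequential time is the number of operations, while parallel time is the length of the critical path through the dependence graph. A minimal sketch, assuming the classic three-operation example (two independent additions whose results feed a third):

```python
# Hypothetical dependence graph (names assumed, not from the slides):
#   op1: a = b + c   (no dependences)
#   op2: d = e + f   (no dependences)
#   op3: g = a + d   (depends on op1 and op2)
ops = {
    "op1": [],
    "op2": [],
    "op3": ["op1", "op2"],
}

def schedule_length(deps):
    """Critical-path length = time units needed with unlimited issue width."""
    level = {}
    def depth(op):
        if op not in level:
            level[op] = 1 + max((depth(d) for d in deps[op]), default=0)
        return level[op]
    return max(depth(op) for op in deps)

sequential_time = len(ops)            # 3 units, one operation per unit
parallel_time = schedule_length(ops)  # 2 units: op1 and op2 run together
ilp_factor = sequential_time / parallel_time
print(ilp_factor)  # 1.5
```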
Data-level parallelism (DLP)
• Data-level parallelism means performing the same operation concurrently on different elements of a data set.
DLP - example
• Let us assume we want to sum all the
elements of the given array of size n and the
time for a single addition operation is Ta time
units.
• In the case of sequential execution, the time
taken by the process will be n*Ta time unit
• if we execute this job as a data parallel job on
4 processors the time taken would reduce to
(n/4)*Ta + merging overhead time units.
21
DLP in Adding elements of array
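The chunked sum described above can be sketched as follows. This uses threads for simplicity; in CPython the GIL means real DLP speedup for a pure-Python sum would need processes or SIMD hardware, so this only illustrates the split/compute/merge pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Sum `data` by splitting it into `workers` chunks, summing each
    chunk concurrently, then merging the partial results."""
    size = (len(data) + workers - 1) // workers          # chunk length (~ n/4 here)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = list(pool.map(sum, chunks))           # each worker: (n/4)*Ta
    return sum(partials)                                 # merging overhead

data = list(range(100))
print(parallel_sum(data))  # 4950, same result as sum(data)
```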
DLP in matrix multiplication
Flynn’s Classification
Was proposed by researcher Michael J. Flynn in 1966.
It is the most commonly accepted taxonomy of computer organization.
In this classification, computers are classified by whether they process a single instruction at a time or multiple instructions simultaneously, and whether they operate on one or multiple data sets.
Flynn’s Classification
• This taxonomy distinguishes multi-processor computer architectures according to the two independent dimensions of instruction stream and data stream.
• An instruction stream is the sequence of instructions executed by the machine.
• A data stream is the sequence of data, including input and partial or temporary results, used by the instruction stream.
• Each of these dimensions can have only one of two possible states: Single or Multiple.
• Flynn’s classification depends on the distinction between the performance of the control unit and the data-processing unit, rather than on the machine’s operational and structural interconnections.
Flynn’s Classification
SISD
• A SISD computer has one control unit, one processor unit and a single memory unit.
• Such machines are also called scalar processors: one instruction at a time, and each instruction has only one set of operands.
• Single instruction: only one instruction stream is being acted on by the CPU during any one clock cycle.
• Single data: only one data stream is being used as input during any one clock cycle.
• Deterministic execution.
• Instructions are executed sequentially.
SIMD
• A type of parallel computer.
• Single instruction: all processing units execute the same instruction, issued by the control unit, at any given clock cycle.
• Multiple data: each processing unit can operate on a different data element. As shown in the figure below, the processors are connected to shared memory or an interconnection network providing multiple data to the processing units.
• A single instruction is thus executed by different processing units on different sets of data.
MISD
• A single data stream is fed into multiple processing units.
• Each processing unit operates on the data independently via an independent instruction stream.
• The single data stream is forwarded to different processing units, each connected to its own control unit, and each executes the instructions given to it by that control unit.
• The same data flows through a linear array of processors executing different instruction streams.
MIMD
• Multiple instruction: every processor may be executing a different instruction stream.
• Multiple data: every processor may be working with a different data stream.
• Execution can be synchronous or asynchronous, deterministic or nondeterministic.
• Different processors may each be processing a different task.
ARM Features Contd…
Thumb instruction set (T variant) Contd…
ARM Core dataflow model
Single-core computer
Single-core CPU chip
(figure: chip layout highlighting the single core)
Multi-core architectures
• Replicate multiple processor cores on a single die.
(figure: one die containing Core 1, Core 2, Core 3 and Core 4)
The cores run in parallel
(figure: thread 1 to thread 4, one thread running on each of cores 1–4)
Within each core, threads are time-sliced (just like on a uniprocessor)
(figure: several threads multiplexed onto each of cores 1–4)
Difference between Memory-mapped I/O and I/O-mapped I/O
Memory-Mapped I/O:
1. Each port is treated as a memory location.
2. The CPU’s memory address space is divided between memory and input/output ports.
3. A single instruction can transfer data between memory and a port.
4. Data transfer is by means of instructions like MOVE.
I/O-Mapped I/O:
1. Each port is treated as an independent unit.
2. There are separate address spaces for memory and input/output ports.
3. Two instructions are necessary to transfer data between memory and a port.
4. Each port is accessed by means of IN or OUT instructions.
Program Controlled I/O
◼ Program controlled I/O is one in which the processor repeatedly checks a status flag to achieve the
required synchronization between processor & I/O device.
◼ The processor polls the device.
◼ It is useful in small low speed systems where hardware cost must be minimized.
◼ It requires that all input/output operations be executed under the direct control of the CPU.
◼ The transfer is between CPU registers(accumulator) and a buffer register connected to the
input/output device.
◼ The I/O device does not have direct access to main memory.
◼ A data transfer from an input/output device to main memory requires the execution of
several instructions by the CPU, including an input instruction to transfer a word from the
input/output device to the CPU and a store instruction to transfer a word from CPU to main
memory.
◼ One or more additional instructions may be needed for address communication and data
word counting.
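The polling loop described above can be sketched as follows. This is a minimal simulation with assumed names (`Device`, `polled_input` are illustrative, not from the slides): the processor busy-waits on a status flag, then moves each word from the device's buffer register into main memory.

```python
# Minimal sketch of program-controlled (polled) I/O.
class Device:
    """A toy input device with a status flag and a buffer register."""
    def __init__(self, data):
        self._data = list(data)

    @property
    def ready(self):
        # Status flag the CPU repeatedly checks.
        return bool(self._data)

    def read(self):
        # Buffer register: yields one word per transfer.
        return self._data.pop(0)

def polled_input(device, count):
    """Transfer `count` words from the device to 'main memory'.

    Note: if the device never becomes ready, this busy-wait spins forever,
    which is exactly the cost of program-controlled I/O.
    """
    memory = []
    for _ in range(count):
        while not device.ready:   # poll: CPU does no other useful work
            pass
        word = device.read()      # input instruction: device -> CPU register
        memory.append(word)       # store instruction: CPU register -> memory
    return memory

dev = Device([10, 20, 30])
print(polled_input(dev, 3))  # [10, 20, 30]
```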
Typical Program controlled instructions
Name Mnemonic
Branch BR
Jump JMP
Skip SKP
Call CALL
Return RET
Compare CMP
Test (by ANDing) TST
Case study: ARM 5 and ARM 7
Architecture
Data Sizes and Instruction Sets
The ARM is a 32-bit architecture.
Register Organization Summary
(figure: banked register diagram)
• In User mode the general-purpose registers r0–r15 and the cpsr are visible.
• FIQ mode banks r8–r14; the other exception modes bank only r13 (sp) and r14 (lr).
• Each exception mode also has its own spsr, a saved copy of the cpsr.
• In Thumb state, r0–r7 are the Low registers and the remaining registers are the High registers.
N Z C V Q J (undefined) I F T mode
f s x c
• Condition code flags
– N = Negative result from ALU
– Z = Zero result from ALU
– C = ALU operation Carried out
– V = ALU operation oVerflowed
• Sticky Overflow flag - Q flag
– Architecture 5TE/J only
– Indicates if saturation has occurred
• J bit
– Architecture 5TEJ only
– J = 1: Processor in Jazelle state
• Interrupt Disable bits
– I = 1: Disables the IRQ
– F = 1: Disables the FIQ
• T Bit
– Architecture xT only
– T = 0: Processor in ARM state; T = 1: Processor in Thumb state
• Mode bits
– Specify the processor mode
Program Counter (r15)
When the processor is executing in ARM state:
All instructions are 32 bits wide
All instructions must be word aligned
Therefore the pc value is stored in bits [31:2] with bits [1:0] undefined (as instructions cannot be halfword or byte aligned).
CMP r0,#0 ; compare r0 with 0
MOVEQ r0,#1 ; r0 = 1 if r0 was 0
BLEQ func ; call func if the Z flag is set
• Set the flags, then use various condition codes:
if (a==0) x=0;
if (a>0) x=1;
CMP r0,#0 ; a in r0, x in r1
MOVEQ r1,#0 ; x = 0 if a == 0
MOVGT r1,#1 ; x = 1 if a > 0
• Use conditional compare instructions:
if (a==4 || a==10) x=0;
CMP r0,#4 ; compare a with 4
CMPNE r0,#10 ; if a != 4, compare a with 10
MOVEQ r1,#0 ; x = 0 if either compare set Z
Branch instructions
• Branch : B{<cond>} label
• Branch with Link : BL{<cond>} subroutine_label
31 28 27 25 24 23 0
Cond 1 0 1 L Offset
• The processor core shifts the offset field left by 2 positions, sign-extends it and adds it to the PC
– ±32 Mbyte range
– How to perform longer branches?
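The target computation above can be sketched as follows (a simplification: on real ARM hardware the PC value added to is the branch instruction's address plus 8, due to the pipeline):

```python
def branch_target(pc, offset24):
    """Branch target from the raw 24-bit offset field of a B/BL instruction.

    The field is shifted left by 2 (word -> byte offset), sign-extended
    from bit 23, and added to the PC.
    """
    offset = offset24 << 2
    if offset24 & (1 << 23):      # sign-extend: bit 23 is the sign bit
        offset -= 1 << 26
    return pc + offset

# Range check: roughly +/- 32 Mbyte around the PC.
print(branch_target(0, 0x7FFFFF))  # 33554428  (largest forward offset)
print(branch_target(0, 0x800000))  # -33554432 (largest backward offset)
```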
Data processing Instructions
• Consist of:
– Arithmetic: ADD ADC SUB SBC RSB RSC
(figure: ALU datapath — two operands, optional carry flag (CF) in or out, result written to the destination register)
• Immediate value
– 8-bit number, with a range of 0–255.
– Rotated right through an even number of positions.
– Allows an increased range of 32-bit constants to be loaded directly into registers.
Immediate constants
• Example: with ror #0 the 8-bit value sits in bits [7:0], giving the range 0–0x000000ff in steps of 0x00000001.
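The "8-bit value rotated right by an even amount" rule can be checked programmatically. A minimal sketch (the function name is illustrative, not a real library call):

```python
def is_arm_immediate(value):
    """True if `value` fits an ARM data-processing immediate:
    an 8-bit value rotated right through an even number of positions."""
    value &= 0xFFFFFFFF
    for rot in range(0, 32, 2):
        # Rotating left by `rot` undoes a right-rotation by `rot`;
        # if the result fits in 8 bits, the constant is encodable.
        v = ((value << rot) | (value >> (32 - rot))) & 0xFFFFFFFF
        if v < 256:
            return True
    return False

print(is_arm_immediate(0xFF))        # True  (0xFF ror #0)
print(is_arm_immediate(0xFF000000))  # True  (0xFF ror #8)
print(is_arm_immediate(0x101))       # False (needs 9 significant bits)
```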
• Cycle time
– Basic MUL instruction
• 2-5 cycles on ARM7TDMI
• 1-3 cycles on StrongARM/XScale
• 2 cycles on ARM9E/ARM102xE
– +1 cycle for ARM9TDMI (over ARM7TDMI)
– +1 cycle for accumulate (not on 9E though result delay is one cycle
longer)
– +1 cycle for “long”
• Above are “general rules” - refer to the TRM for the core you are
using for the exact details
Single register data transfer
LDR STR Word
LDRB STRB Byte
LDRH STRH Halfword
LDRSB Signed byte load
LDRSH Signed halfword load
• Syntax:
– LDR{<cond>}{<size>} Rd, <address>
– STR{<cond>}{<size>} Rd, <address>
e.g. LDREQB
Address accessed
• Address accessed by LDR/STR is specified by a base register
plus an offset
• For word and unsigned byte accesses, offset can be
– An unsigned 12-bit immediate value (i.e. 0–4095 bytes).
LDR r0,[r1,#8]
Post-indexed: STR r0,[r1],#12
(figure: the source register r0 (holding 0x5) is stored at the original base address in r1 (0x200); the base register is then updated to 0x20c = 0x200 + offset 12)
LDM / STM operation
• Syntax:
<LDM|STM> {<cond>}<addressing_mode> Rb{!}, <register list>
• 4 addressing modes:
LDMIA / STMIA increment after
LDMIB / STMIB increment before
LDMDA / STMDA decrement after
LDMDB / STMDB decrement before
(figure: for LDMxx/STMxx r10,{r0,r1,r4}, the diagram shows — for each of IA, IB, DA and DB — where r0, r1 and r4 sit in memory relative to the base register r10, with addresses increasing upwards; the lowest-numbered register always occupies the lowest address)
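The four addressing modes differ only in where the lowest address of the transfer block sits relative to the base. A minimal sketch (function name assumed for illustration):

```python
WORD = 4  # bytes per register transfer

def block_transfer_addresses(base, count, mode):
    """Addresses touched by an LDM/STM of `count` registers, with the
    lowest-numbered register at the lowest address (as ARM requires)."""
    if mode == "IA":    # increment after: first address is the base itself
        lowest = base
    elif mode == "IB":  # increment before: base is incremented first
        lowest = base + WORD
    elif mode == "DA":  # decrement after: base holds the highest address
        lowest = base - WORD * (count - 1)
    elif mode == "DB":  # decrement before: base is decremented first
        lowest = base - WORD * count
    else:
        raise ValueError(f"unknown addressing mode: {mode}")
    return [lowest + WORD * i for i in range(count)]

# STMIA r10,{r0,r1,r4} with r10 = 0x1000 stores at 0x1000, 0x1004, 0x1008.
print([hex(a) for a in block_transfer_addresses(0x1000, 3, "IA")])
```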
Software Interrupt (SWI)
31 28 27 24 23 0
Cond 1 1 1 1 SWI number
• Causes an exception trap to the SWI hardware
vector
• The SWI handler can examine the SWI number to
decide what operation has been requested.
• By using the SWI mechanism, an operating system
can implement a set of privileged operations which
applications running in user mode can request.
• Syntax:
– SWI{<cond>} <SWI number>
PSR Transfer Instructions
• MRS{<cond>} Rd,<psr> ; Rd = <psr>
• MSR{<cond>} <psr>[_fields],Rm ; <psr>[_fields] = Rm
where
– <psr> = CPSR or SPSR
– [_fields] = any combination of ‘fsxc’
• In User Mode, all bits can be read but only the condition flags (_f) can be written.
ARM Branches and Subroutines
• B <label>
– PC relative. ±32 Mbyte range.
• BL <subroutine>
– Stores return address in LR
– Returning implemented by restoring the PC from LR
– For non-leaf functions, LR will have to be stacked
(figure: the caller BLs to func1; func1 is a non-leaf function, so it saves its registers and lr with STMFD sp!,{regs,lr}, BLs to func2, and returns by restoring the PC with LDMFD sp!,{regs,pc}; the leaf function func2 returns with MOV pc, lr)
Thumb
• Thumb is a 16-bit instruction set
– Optimised for code density from C code (~65% of ARM code size)
– Improved performance from narrow memory
– Subset of the functionality of the ARM instruction set
• Core has additional execution state - Thumb
– Switch between ARM and Thumb using BX instruction
(figure: the 32-bit ARM instruction ADDS r2,r2,#1 maps to the 16-bit Thumb instruction ADD r2,#1)
For most instructions generated by the compiler:
Conditional execution is not used
Source and destination registers are identical
Only Low registers are used
Constants are of limited size
The inline barrel shifter is not used
(figure: an example AMBA-based system — the ARM core receives nIRQ and nFIQ from an interrupt controller; an arbiter, reset/remap/pause logic, a TIC, a timer, on-chip RAM, a decoder and an external bus interface sit on the system bus; a bridge connects to a peripheral bus with 8-bit ROM, external ROM/RAM and I/O peripherals)
• AMBA – Advanced Microcontroller Bus Architecture
• ACT – AMBA Compliance Testbench
• ADK – Complete AMBA Design Kit
• PrimeCell – ARM’s AMBA compliant peripherals