Module #1

CS203: Computer Organization & Architecture

Dr. Aruna Jain
Associate Professor
Department of Computer Science
B.I.T., Mesra
Lecture 1: Introduction to COA

Evolution of Computer Systems

• Computers have become part and parcel of our daily lives.
• They are everywhere (as embedded systems): laptops, tablets, mobile phones, intelligent appliances.
• It is therefore necessary to understand how a computer works:
  What is inside a computer?
  How does it work?
• We distinguish between two terms: Computer Architecture and Computer Organization.
Distinction between Computer Architecture and Computer Organization

• Computer Architecture:
Computer Architecture is a functional description of the requirements and design implementation of the various parts of a computer. It deals with the functional behavior of the computer system. While designing a computer, it comes before computer organization.
• Architecture describes what the computer does.
Distinction between Computer Architecture and Computer Organization (Contd.)

• Computer Organization:
Computer Organization is decided after the Computer Architecture has been fixed. Computer Organization is how operational attributes are linked together and contribute to realizing the architectural specification. Computer Organization deals with structural relationships.
• Organization describes how it does it.
Historical Perspective

• The constant quest to build automatic computing machines has driven the development of computers.
➢ Initial efforts: mechanical devices like pulleys, levers and gears.
➢ During World War II: mechanical relays to carry out computations.
➢ Vacuum tubes developed: first electronic computers like ENIAC.
➢ Semiconductor transistors developed, and the journey of miniaturization began:
SSI -> MSI -> LSI -> VLSI -> ULSI -> … billions of transistors per chip
Generations in the Evolution of Computer Systems
Differentiate between Computer Architecture and Computer Organization

1. Architecture describes what the computer does; Organization describes how it does it.
2. Computer Architecture deals with the functional behavior of the computer system; Computer Organization deals with structural relationships.
3. Architecture deals with high-level design issues; Organization deals with low-level design issues.
4. Architecture indicates the hardware; Organization indicates the performance.
5. For designing a computer, its architecture is fixed first; its organization is decided after the architecture.
6. Computer Architecture is also called the instruction set architecture; Computer Organization is frequently called the microarchitecture.
7. Computer Architecture comprises logical functions such as instruction sets, registers, data types and addressing modes; Computer Organization consists of physical units like circuit designs, peripherals and adders.
8. Architecture coordinates between the hardware and software of the system; Organization handles the segments of the network in the system.
Moore's Law

• Moore's Law refers to Moore's observation that the number of transistors on a microchip doubles roughly every two years, while the cost of computers is halved. Moore's Law states that we can expect the speed and capability of our computers to increase every couple of years, and that we will pay less for them.
• Another tenet of Moore's Law asserts that this growth is exponential.
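As a quick illustration of that exponential growth, here is a sketch in Python (the starting transistor count and time span are invented for illustration, not figures from these slides):

    # Doubling every two years is exponential growth: count = start * 2**(years/2).
    start_transistors = 2_300          # roughly the scale of an early-1970s chip (assumed)
    for years in (0, 10, 20, 30, 40):
        projected = start_transistors * 2 ** (years / 2)
        print(f"after {years:2d} years: ~{projected:,.0f} transistors")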
Simplified Block Diagram of a Computer System

• All instructions and data are stored in memory.
• An instruction and its data are brought into the processor for execution.
• Input and output devices interface with the outside world.
• This model is referred to as the Von Neumann architecture.
Inside the Processor

• Also called the Central Processing Unit (CPU).
• Consists of a Control Unit and an Arithmetic Logic Unit (ALU).
• All calculations happen inside the ALU.
• The Control Unit generates the sequence of control signals needed to carry out all operations.
• The processor fetches an instruction from memory for execution.
• An instruction specifies the exact operation to be carried out.
• It also specifies the data that are to be operated on.
• A program refers to a set of instructions required to carry out some specific task (e.g., sorting a set of numbers).
What is the Role of the Control Unit?

• Acts as the nerve center that senses the states of the various functional units and sends control signals to control their states.
• To carry out a specific operation (e.g., R1 ← R2 + R3), the control unit must generate control signals in a specific sequence (see the sketch after this list):
  • Enable the outputs of registers R2 and R3.
  • Select the addition operation.
  • Store the output of the adder circuit into register R1.
• When an instruction is fetched from memory, the operation (opcode) is decoded by the control unit, and the control signals are issued.
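A minimal sketch of that control sequence, modeled in Python (the register names and step-by-step structure are illustrative only, not an actual control-unit implementation):

    # Model registers as a dictionary and the control steps as explicit actions.
    registers = {"R1": 0, "R2": 5, "R3": 7}

    # Step 1: enable outputs of R2 and R3 (drive their values onto the ALU inputs).
    alu_in_a, alu_in_b = registers["R2"], registers["R3"]
    # Step 2: select the addition operation in the ALU.
    alu_out = alu_in_a + alu_in_b
    # Step 3: store the adder output into R1.
    registers["R1"] = alu_out

    print(registers["R1"])   # 12, i.e. R1 <- R2 + R3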
What is the Role of the ALU?

• It contains several registers, some general purpose and some special purpose, for temporary storage of data.
• It contains circuitry to carry out logic operations like AND, OR, NOT, Shift, Compare, etc.
• It contains circuitry to carry out arithmetic operations like addition, subtraction, multiplication, division, etc.
• During instruction execution, the data (operands) are brought in and stored in some registers, the desired operation is carried out, and the result is stored back in some register or memory.
Inside the Memory Unit

• Two main types of memory subsystems:
  • Primary (Main) Memory, which stores the active instructions and data for the program being executed on the processor.
  • Secondary Memory, which is used as a backup and stores all active and inactive programs and data, typically as files.
• The processor only has direct access to primary memory.
• In reality, the memory system is implemented as a hierarchy of several levels: L1 cache, L2 cache, L3 cache, primary memory, secondary memory.
• The objective is to provide faster memory access at affordable cost.
Different Types of Memory

• Random Access Memory (RAM), which is used for the cache and primary memory subsystems. Read and write access times are independent of the location being accessed.
• Read Only Memory (ROM), which is used as part of the primary memory to store some fixed data that cannot be changed.
• Magnetic Disk, which uses the direction of magnetization of tiny magnetic particles on a metallic surface to store data. Access times vary depending on the location being accessed; it is used as secondary memory.
• Flash Memory, which is replacing magnetic disks as secondary memory devices. It is faster, but smaller in size as compared to disk.
Input Unit

• Used to feed data into the computer system from the external environment.
• Data are transferred to the processor/memory after appropriate encoding.
• Common input devices are:
  • Keyboard
  • Mouse
  • Joystick
  • Camera …
Output Unit

• Used to send the results of some computation to the outside world.
• Data are transferred from the processor/memory and presented after appropriate decoding.
• Common output devices are:
  • LCD/LED screen
  • Printer & plotter
  • Speaker/buzzer
  • Projection system
Processor

• The processor is the electronic circuitry within the computer system.
• The processor carries out the instructions of a computer program by performing the basic arithmetic, logic, and input/output operations they specify.
Main Memory

• Random Access Memory (RAM) is the main memory of the computer system. The main memory stores the operating system software, application software, and other information while they are in use. RAM is one of the fastest kinds of memory, and it allows data to be both read and written.
Secondary Memory

• We can store data and programs on a long-term basis in secondary memory. Hard disks and optical disks are the common secondary devices. Secondary memory is slower and cheaper than primary memory, and it is not connected to the processor directly.
• It has a large capacity to store data; a hard disk may hold 500 gigabytes or more. The data and programs on a hard disk are organized into files, and a file is a collection of data on the disk. Secondary storage is not directly accessible by the CPU; that is what distinguishes it from primary storage.
Register Organization:

• Program Counter –
A CPU register that holds the address of the next instruction to be executed from memory. As each instruction gets fetched, the program counter is incremented to point to the next one. It is a digital counter needed for fast execution of tasks as well as for tracking the current execution point.
• Instruction Register –
The instruction register (IR) is the part of a CPU's control unit that holds the instruction currently being executed or decoded. It specifically holds the instruction and provides it to the instruction decoder circuit.
• Memory Address Register –
The Memory Address Register (MAR) is the CPU register that stores either the memory address from which data will be fetched into the CPU, or the address to which data will be sent and stored. It is a temporary storage component in the CPU that holds the address (location) of the data exchanged with the memory unit until the instruction for that particular data is executed.
• Memory Data Register –
The Memory Data Register (MDR) is the register in a computer's processor (CPU) that stores the data being transferred to and from the immediate access storage. The MDR is also known as the Memory Buffer Register (MBR).
• General Purpose Registers –
General-purpose registers are used to store temporary data within the microprocessor. They are multipurpose registers that the programmer can use as needed.
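A toy fetch-decode-execute loop sketching how these registers interact (a deliberately simplified illustration with a made-up one-word instruction format, not any real processor's behavior):

    # Toy machine: instruction words are (opcode, operand_address) pairs.
    memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("HALT", None),
              10: 5, 11: 7}                       # data words
    PC, IR, MAR, MDR, ACC = 0, None, None, None, 0

    while True:
        MAR = PC                    # address of the next instruction
        MDR = memory[MAR]           # word read from memory
        IR = MDR                    # instruction latched for decoding
        PC += 1                     # point to the next instruction
        opcode, addr = IR
        if opcode == "LOAD":
            MAR, MDR = addr, memory[addr]; ACC = MDR
        elif opcode == "ADD":
            MAR, MDR = addr, memory[addr]; ACC += MDR
        elif opcode == "HALT":
            break

    print(ACC)   # 12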
Number System Representation

• A computer understands only binary language.
• It is an electronic machine which can only understand two states, i.e., on and off.
• So, it only understands the binary language, i.e., 1 and 0.
• Binary circuits are required in computers for reasons of reliability.
• The use of binary numbers in computers maximizes the expressive power of the binary circuits.
Number System Representation (Contd.)

The number system is a way to represent or express numbers. You have heard of various types of number systems, such as the whole numbers and the real numbers. But in the context of computers, we define number systems by their base (binary, octal, decimal, hexadecimal).
Binary to Decimal Conversion

• Write the binary number.
• Write the weights 1, 2, 4, 8, … under the binary digits.
• Cross out any weight under a zero.
• Add the remaining weights.

Examples:
1. N = 11001
     = 1x2^4 + 1x2^3 + 0x2^2 + 0x2^1 + 1x2^0
     = 16 + 8 + 0 + 0 + 1
     = 25
2. N = 11001001.101
     = 1x2^7 + 1x2^6 + 1x2^3 + 1x2^0 + 1x2^-1 + 1x2^-3
     = 128 + 64 + 8 + 1 + 1/2 + 1/8
     = 201 + 0.5 + 0.125
     = 201.625
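The same weighted-sum procedure, sketched in Python (a hand-rolled converter for illustration; Python's built-in int(s, 2) handles the integer case directly):

    def binary_to_decimal(s: str) -> float:
        """Convert a binary string, optionally with a fractional part, to decimal."""
        int_part, _, frac_part = s.partition(".")
        value = sum(int(bit) * 2**i for i, bit in enumerate(reversed(int_part)))
        value += sum(int(bit) * 2**-(i + 1) for i, bit in enumerate(frac_part))
        return value

    print(binary_to_decimal("11001"))         # 25
    print(binary_to_decimal("11001001.101"))  # 201.625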
Decimal to Binary Conversion (Double Dabble Method)

Conversion steps:
• Divide the number by 2.
• Get the integer quotient for the next iteration.
• Get the remainder for the binary digit.
• Repeat the steps until the quotient is equal to 0.

Example #1: Convert 13₁₀ to binary:

Division by 2   Quotient   Remainder
13/2            6          1   (LSB)
6/2             3          0
3/2             1          1
1/2             0          1   (MSB)

So 13₁₀ = 1101₂
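The double-dabble (repeated division) steps in Python (a sketch; the built-in bin(n) is the usual shortcut):

    def decimal_to_binary(n: int) -> str:
        """Repeatedly divide by 2, collecting remainders from LSB to MSB."""
        if n == 0:
            return "0"
        bits = []
        while n > 0:
            n, r = divmod(n, 2)   # quotient for the next step, remainder = next bit
            bits.append(str(r))
        return "".join(reversed(bits))

    print(decimal_to_binary(13))   # 1101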
More Examples

Example #2: Convert 13.25₁₀ to binary

Division by 2   Quotient   Remainder
13/2            6          1   (LSB)
6/2             3          0
3/2             1          1
1/2             0          1   (MSB)

For the fraction, multiply by 2 and collect the integer parts:
0.25 x 2 = 0.50, integer part 0
0.50 x 2 = 1.00, integer part 1

So 13.25₁₀ = 1101.01₂

Example #3: Convert 23.6₁₀ to binary (see the sketch below; the fraction 0.6 does not terminate in binary).
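A sketch of the multiply-by-2 method for the fractional part, which also shows why 23.6 has no finite binary representation (the digit cutoff is an arbitrary limit I chose):

    def fraction_to_binary(frac: float, max_digits: int = 12) -> str:
        """Multiply by 2 repeatedly; each integer part is the next binary digit."""
        digits = []
        while frac > 0 and len(digits) < max_digits:
            frac *= 2
            digit = int(frac)       # the integer part (0 or 1)
            digits.append(str(digit))
            frac -= digit
        return "".join(digits)

    print(fraction_to_binary(0.25))  # 01
    print(fraction_to_binary(0.6))   # 100110011001 ... the pattern repeats forever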
Octal ➔ Decimal Conversion

• To convert an octal number to a decimal number, multiply each octal digit by the weight of its position and sum the results.
For example, 412₈ = 4x8^2 + 1x8^1 + 2x8^0 = 256 + 8 + 2 = 266₁₀.
Octal ➔ Binary Conversion

• The octal number system is the base-8 number system, and uses the digits 0 to 7.
• Each octal digit is replaced by its 3-bit binary equivalent.
For example, 472₈ = 100 111 010₂:

4     7     2     Octal number
100   111   010   Binary number
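Both octal conversions in a short Python sketch (the function names are my own; int(s, 8) is the built-in equivalent of the weighted sum):

    def octal_to_decimal(s: str) -> int:
        """Weighted sum of octal digits (equivalent to int(s, 8))."""
        return sum(int(d) * 8**i for i, d in enumerate(reversed(s)))

    def octal_to_binary(s: str) -> str:
        """Replace each octal digit by its 3-bit binary equivalent."""
        return " ".join(format(int(d), "03b") for d in s)

    print(octal_to_decimal("412"))  # 266
    print(octal_to_binary("472"))   # 100 111 010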
Fractional Conversion using the Remainder Method (Decimal ➔ Binary)

1. Convert the integer part by the remainder (repeated division) method.
2. Then convert the fraction by repeatedly multiplying it by the base we want to convert to, taking the integer part at each step; if the fraction becomes zero, stop.
Storing Real Numbers

• There are two major approaches to storing real numbers (i.e., numbers with a fractional component) in modern computing: (i) fixed point notation and (ii) floating point notation. In fixed point notation, there are a fixed number of digits after the decimal point, whereas a floating point number allows for a varying number of digits after the decimal point.
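A small sketch contrasting the two approaches (a toy fixed-point format with exactly 2 fractional decimal digits, stored as a scaled integer; the format choice is mine, purely illustrative):

    # Fixed point: always exactly 2 digits after the point, stored as an integer
    # number of hundredths. Precision is uniform but range is limited.
    def to_fixed(x: float) -> int:
        return round(x * 100)        # 3.14159 -> 314 (i.e., 3.14)

    # Floating point: a (sign, significand, exponent) form lets the point "float",
    # so very large and very small magnitudes share one format.
    print(to_fixed(3.14159) / 100)   # 3.14 -- fixed, 2 fractional digits
    print(3.14159e-8)                # a tiny value the 2-digit fixed format cannot hold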
Fixed Point Representation
Data Representation (Contd.)
Floating Point Operations:
1. Addition
2. Subtraction
3. Multiplication
4. Division

Examples:
Add 1.1x10^3 and 50
Subtract 0.521x10^3 from 0.534x10^3
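Worked solutions for the two examples (aligning the exponents first, a step the slide leaves implicit):

Addition:    1.1x10^3 + 50 = 1.1x10^3 + 0.05x10^3 = 1.15x10^3
Subtraction: 0.534x10^3 - 0.521x10^3 = 0.013x10^3 = 1.3x10^1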


Difference
between Fixed
and Floating
Point
Representation
IEEE 754 Floating Point Representation

• It is a technical standard for floating-point computation which was established in 1985 by the Institute of Electrical and Electronics Engineers (IEEE).
• The standard addressed many problems found in the diverse floating point implementations of the time that made them difficult to use reliably and reduced their portability.
• IEEE Standard 754 floating point is the most common representation today for real numbers on computers, including Intel-based PCs, Macs, and most Unix platforms.
• It has 3 basic components:
1. The sign of the mantissa –
0 represents a positive number while 1 represents a negative number.
2. The biased exponent –
The exponent field needs to represent both positive and negative exponents. A bias is added to the actual exponent in order to get the stored exponent.
3. The normalised mantissa –
The mantissa is the part of a number in scientific notation or a floating-point number consisting of its significant digits. Here we have only 2 digits, i.e., 0 and 1, so a normalised mantissa is one with only a single 1 to the left of the binary point.
• IEEE 754 numbers are divided into two formats based on the above three components: single precision and double precision.
IEEE 754 Floating Point Representation

Bias:
Single precision: 127
Double precision: 1023

Example: Show the IEEE 754 binary representation of the number -0.75₁₀.
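A worked sketch of the example, with a check in Python (struct is the standard library module; the bit-slicing below is just for display):

-0.75₁₀ = -0.11₂ = -1.1₂ x 2^-1
Sign = 1; stored exponent = -1 + 127 = 126 = 01111110₂; mantissa field = 1000…0.
Single precision: 1 01111110 10000000000000000000000

    import struct

    # Pack -0.75 as a 32-bit IEEE 754 single and print its three bit fields.
    bits = int.from_bytes(struct.pack(">f", -0.75), "big")
    s = f"{bits:032b}"
    print(s[0], s[1:9], s[9:])   # 1 01111110 10000000000000000000000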
Booth's Algorithm:

• It gives a procedure for multiplying binary integers in signed 2's complement representation efficiently, i.e., with fewer additions/subtractions required.
• It exploits the fact that strings of 0s in the multiplier require no addition, just shifting, and that a string of 1s in the multiplier running from bit weight 2^k down to weight 2^m can be treated as 2^(k+1) - 2^m.
• Examples:
Multiply 7x3 and -7x3 (see the sketch below).
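A runnable sketch of Booth's algorithm in Python (the word size n and variable names are my choices; the slides' 7x3 and -7x3 examples are used as checks):

    def booth_multiply(multiplicand: int, multiplier: int, n: int = 8) -> int:
        """Multiply two signed integers using Booth's algorithm on n-bit words."""
        mask = (1 << n) - 1
        A = 0                     # accumulator (upper half of the product)
        Q = multiplier & mask     # lower half of the product
        Q_1 = 0                   # bit to the right of Q (the Q-1 bit)
        M = multiplicand & mask

        for _ in range(n):
            q0 = Q & 1
            if q0 == 1 and Q_1 == 0:      # end of a string of 1s: A = A - M
                A = (A - M) & mask
            elif q0 == 0 and Q_1 == 1:    # start of a string of 1s: A = A + M
                A = (A + M) & mask
            # arithmetic right shift of the combined A, Q, Q-1
            Q_1 = q0
            Q = ((Q >> 1) | ((A & 1) << (n - 1))) & mask
            sign = A >> (n - 1)           # replicate the sign bit
            A = ((A >> 1) | (sign << (n - 1))) & mask

        product = (A << n) | Q
        if product >> (2 * n - 1):        # interpret the 2n-bit result as signed
            product -= 1 << (2 * n)
        return product

    print(booth_multiply(7, 3))    # 21
    print(booth_multiply(-7, 3))   # -21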
Assessing and Understanding Performance

Response Time / Execution Time:
• The total time required for the computer to complete a task, including disk accesses, memory accesses, I/O activities, operating system overheads, CPU execution time, etc.

Throughput:
• The total amount of work done in a given time.
• To maximize performance, we want to minimize response time/execution time:
  Performance_X = 1 / Execution Time_X

Relative Performance:
  Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = α

Measuring Performance:
• CPU execution time: the actual time the CPU spends computing for a specific task.
• User CPU time: the CPU time spent in the program itself.
• System CPU time: the CPU time spent in the operating system performing tasks on behalf of the program.
• Clock cycle: the time for one clock period, usually of the processor clock, which runs at a constant rate.
• Clock period: the length of each clock cycle.

CPU Execution Time for a program = CPU clock cycles for the program x Clock cycle time
Alternately,
CPU Execution Time for a program = CPU clock cycles for the program / Clock rate

Hardware/Software Interface

CPU Performance and its Factors:
• Clock Cycles per Instruction (CPI): the average number of clock cycles per instruction for a program or program fragment.
• Basic Performance Equation:
  CPU Time = Instruction Count x CPI x Clock Cycle Time
  Alternately,
  CPU Time = Instruction Count x CPI / Clock Rate
• CPI varies by application, as well as among implementations with the same instruction set.
• Sometimes it is possible to compute the CPU clock cycles by looking at the different types of instructions and using their individual clock cycle counts. In such cases (see the sketch below),
  CPU Clock Cycles = Σ (CPI_i x C_i), summed over i = 1 … n,
  where C_i is the count of instructions of class i executed, CPI_i is the average number of cycles per instruction for that class, and n is the number of instruction classes.
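A quick sketch of these equations in Python (the instruction mix, per-class CPIs, and clock rate are invented numbers for illustration):

    # Hypothetical instruction mix: (class, CPI_i, count C_i).
    mix = [("ALU", 1, 2_000_000), ("load/store", 2, 1_200_000), ("branch", 3, 400_000)]

    clock_rate = 2e9                                     # 2 GHz (assumed)
    cycles = sum(cpi * count for _, cpi, count in mix)   # CPU clock cycles = sum(CPI_i * C_i)
    instr_count = sum(count for _, _, count in mix)
    cpu_time = cycles / clock_rate                       # CPU time = cycles / clock rate
    avg_cpi = cycles / instr_count

    print(f"cycles={cycles:,}  average CPI={avg_cpi:.2f}  CPU time={cpu_time*1e3:.2f} ms")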
Understanding Program Performance:

H/W or S/W component           Affects what?          How?

Algorithm                      IC, possibly CPI       Determines the number of source program statements
                                                      executed and hence the number of processor instructions
                                                      executed. The algorithm may also affect the CPI, by
                                                      favoring slower or faster instructions.

Programming Language           IC, CPI                Affects the IC, since statements in the language are
                                                      translated to processor instructions, which determine
                                                      IC. The language may also affect the CPI because of
                                                      its features.

Compiler                       IC, CPI                Determines the translation of the source language
                                                      statements into computer instructions.

Instruction Set Architecture   IC, CPI, Clock Rate    Affects the instructions needed for a function, the
                                                      cost in cycles of each instruction, and the overall
                                                      clock rate of the processor.
Examples:
1. Our favorite program runs in 10 seconds on computer A, which has a 4 GHz clock. We are trying to help a computer designer build a computer B that will run this program in 6 seconds. The designer has determined that a substantial increase in the clock rate is possible, but this increase will affect the rest of the CPU design, causing computer B to require 1.2 times as many clock cycles as computer A for this program. What clock rate should we tell the designer to target?
2. Suppose we have two implementations of the same ISA. Computer A has a clock cycle time of 250 ps and a CPI of 2.0 for some program, and computer B has a clock cycle time of 500 ps and a CPI of 1.2 for the same program. Which computer is faster for this program, and by how much?
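Worked solutions as a Python sketch (the arithmetic follows directly from the performance equations above):

    # Example 1: cycles_A = time_A * rate_A; B needs 1.2x those cycles in 6 s.
    cycles_A = 10 * 4e9                 # 40e9 cycles
    rate_B = 1.2 * cycles_A / 6         # = 8e9 Hz
    print(f"target clock rate for B: {rate_B/1e9:.0f} GHz")    # 8 GHz

    # Example 2: time per instruction = CPI x clock cycle time.
    time_A = 2.0 * 250   # 500 ps per instruction
    time_B = 1.2 * 500   # 600 ps per instruction
    print(f"A is faster by a factor of {time_B/time_A:.1f}")   # 1.2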