1 - Introduction To Computer System

The document provides an introduction to computer systems, organization and architecture. It discusses the differences between computer organization and architecture, as well as CISC and RISC architectures. The document also covers performance measures such as clock cycle time, CPI, MIPS and MFLOPS.

Uploaded by Aliaa Tarek

Introduction to Computer Systems
Computer Organization (2022/2023)
Eng. Hossam Mady
Teaching Assistant and Researcher at Aswan Faculty of Engineering
Organization & Architecture
• Computer architecture is a functional description of the requirements
and design implementation for the various parts of a computer. It
also refers to those attributes that have a direct impact on the
logical execution of a program (architecture describes what the
computer does).
• Computer organization refers to the operational units and their
interconnections that realize the architectural specifications
(organization describes how it does it).
Organization & Architecture
• When designing a computer, its organization is decided after its
architecture.
• Computer architecture comprises logical functions such as
instruction sets, registers, data types and addressing modes.
• Computer organization consists of physical units like circuit
designs, peripherals and adders.
Organization & Architecture
• For example, it is an architectural design issue whether a computer
will have a multiply instruction. It is an organizational issue
whether that instruction will be implemented by a special multiply
unit or by a mechanism that makes repeated use of the add unit of
the system.
• The organizational decision may be based on the anticipated
frequency of use of the multiply instruction, the relative speed of
the two approaches, and the cost and physical size of a special
multiply unit.
Organization & Architecture
• Furthermore, a particular architecture may span many years and
encompass a number of different computer models, its organization
changing with changing technology (many computer manufacturers
offer a family of computer models, all with the same architecture
but with differences in organization).
Components of Computer System: Input, Output, Processor and Storage
Basic Functions
• There are four basic functions that a computer can perform:
➢ Data processing
➢ Data storage
✓ Short-term
✓ Long-term
➢ Data movement
➢ Control
CISC & RISC
• Computer architects have always been striving to increase the
performance of their architectures. This drive has taken a number of
forms, among them:
• Complex instruction set computers (CISC): the philosophy that by
doing more in a single instruction, one can use a smaller number of
instructions to perform the same job.
• Reduced instruction set computers (RISC): the philosophy that
promotes the optimization of architectures by speeding up those
operations that are most frequently used while reducing the
instruction complexities and the number of addressing modes.
CISC & RISC
• Complex instruction set computers (CISC): attempt to minimize
the number of instructions per program, but at the cost of an
increase in the number of cycles per instruction.
➢ The immediate consequence of this is the need for fewer memory
read/write operations and an eventual speedup of operations.
➢ It was also argued that increasing the complexity of instructions and
the number of addressing modes has the theoretical advantage of
reducing the “semantic gap” between the instructions in a high-level
language and those in the low-level (machine) language.
CISC & RISC
• Reduced instruction set computers (RISC): reduce the cycles per
instruction at the cost of the number of instructions per program.
• The two philosophies in architecture design have led to an
unresolved controversy as to which architecture style is “best.” It
should, however, be mentioned that studies have indicated that
RISC architectures would indeed lead to faster execution of
programs. The majority of contemporary microprocessor chips
seem to follow the RISC paradigm.
Technological Development
• Computer technology has shown an unprecedented rate of
improvement. This includes the development of processors and
memories. Indeed, it is the advances in technology that have fueled
the computer industry.
• The number of transistors (a transistor is a controlled on/off
switch) integrated into a single chip has increased from a few
hundred to millions.
• The number of transistors that could be put on a single chip was
observed to double roughly every year (Moore’s Law).
Performance Measures
• We focus our discussion on a number of performance measures
that are used to assess computers.
• There are various facets to the performance of a computer. For
example, a user of a computer measures its performance based on
the time taken to execute a given job (program).
• On the other hand, a laboratory engineer measures the
performance of his system by the total amount of work done in a
given time.
Performance Measures
• While the user considers the program execution time a measure
of performance, the laboratory engineer considers the throughput
a more important measure of performance.
• A metric for assessing the performance of a computer helps in
comparing alternative designs.
Performance Measures
• We define the clock cycle time as the time between two
consecutive rising (leading) edges of a periodic clock signal (Fig. 1.1).
• The time required to execute a job by a computer is often
expressed in terms of clock cycles.
Performance Measures
• We denote the number of CPU clock cycles for executing a job by
the cycle count (CC), the cycle time by CT, and the clock frequency
by f = 1/CT.
• The time taken by the CPU to execute a job can be expressed as

CPU time = CC × CT = CC / f

• Comparison of clock speeds on different processors does not tell
the whole story about performance.
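As a quick sketch of the formula above, the cycle count and clock frequency below are made-up illustrative values, not taken from the slides:

```python
# Sketch of CPU time = CC × CT = CC / f with illustrative values.
CC = 4_000_000        # cycle count: clock cycles needed by the job (assumed)
f = 200e6             # clock frequency: 200 MHz (assumed)
CT = 1 / f            # cycle time: 5 ns

cpu_time = CC / f     # seconds; equals CC × CT since CT = 1/f
print(cpu_time)       # 0.02 s, i.e. 20 ms
```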
Performance Measures (CPI)
• The average number of clock cycles per instruction (CPI) has been
used as an alternate performance measure. The following equations
show how to compute the CPI and the corresponding CPU time.

CPI = CPU clock cycles for the program / Instruction count

CPU time = Instruction count × CPI × Clock cycle time
         = (Instruction count × CPI) / Clock rate
Performance Measures (CPI)
• It is known that the instruction set of a given machine consists of a
number of instruction categories: ALU (simple assignment and
arithmetic and logic instructions), load, store, branch, and so on. In
the case that the CPI for each instruction category is known, the
overall CPI can be computed as

CPI = Σ(i=1..n) (CPI_i × I_i) / Instruction count
Performance Measures (CPI)
• Example: Consider computing the overall CPI for a machine A for
which the following performance measures were recorded when
executing a set of benchmark programs. Assume that the clock rate
of the CPU is 200 MHz.
Instruction category Percentage of occurrence No. of cycles per instruction
ALU 38 1
Load & store 15 3
Branch 42 4
Others 5 5
Performance Measures (CPI)
• Assuming the execution of 100 instructions, the overall CPI can be
computed as

CPI = Σ(i=1..n) (CPI_i × I_i) / Instruction count
    = (38 × 1 + 15 × 3 + 42 × 4 + 5 × 5) / 100 = 2.76
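The weighted-average calculation above can be sketched in Python using the machine-A figures from the table:

```python
# Overall CPI as a weighted average over instruction categories
# (machine-A data: counts per 100 instructions, CPI per category).
categories = {
    "ALU":        (38, 1),
    "load/store": (15, 3),
    "branch":     (42, 4),
    "others":     ( 5, 5),
}
instruction_count = sum(n for n, _ in categories.values())            # 100
cpi = sum(n * c for n, c in categories.values()) / instruction_count
print(cpi)   # 2.76
```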
Performance Measures (MIPS)
• A different performance measure that has been given a lot of
attention in recent years is MIPS (million instructions per second,
the rate of instruction execution per unit time), which is defined as

MIPS = Instruction count / (Execution time × 10^6) = Clock rate / (CPI × 10^6)
Performance Measures (MIPS)
• Example: Suppose that the same set of benchmark programs
considered above were executed on another machine, call it machine B,
for which the following measures were recorded. What is the MIPS
rating for the machine considered in the previous example (machine A)
and machine B, assuming a clock rate of 200 MHz?
Instruction category Percentage of occurrence No. of cycles per instruction
ALU 35 1
Load & store 30 2
Branch 15 3
Others 20 5
Performance Measures (MIPS)

CPI_a = (38 × 1 + 15 × 3 + 42 × 4 + 5 × 5) / 100 = 2.76

MIPS_a = (200 × 10^6) / (2.76 × 10^6) ≈ 72.46

CPI_b = (35 × 1 + 30 × 2 + 15 × 3 + 20 × 5) / 100 = 2.4

MIPS_b = (200 × 10^6) / (2.4 × 10^6) ≈ 83.33
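The two ratings can be recomputed in a few lines; note that 200/2.76 and 200/2.4 round to 72.46 and 83.33 MIPS respectively:

```python
# MIPS = Clock rate / (CPI × 10^6) for machines A and B at 200 MHz.
clock_rate = 200e6

def mips(cpi):
    return clock_rate / (cpi * 1e6)

cpi_a = (38 * 1 + 15 * 3 + 42 * 4 + 5 * 5) / 100    # 2.76
cpi_b = (35 * 1 + 30 * 2 + 15 * 3 + 20 * 5) / 100   # 2.4
print(round(mips(cpi_a), 2), round(mips(cpi_b), 2))  # 72.46 83.33
```

Machine B wins on MIPS even though both run the same clock, because its instruction mix yields a lower average CPI.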
Performance Measures (MIPS)
• Thus MIPS_b > MIPS_a. It is interesting to note here that although
MIPS has been used as a performance measure for machines, one
has to be careful in using it to compare machines having different
instruction sets. This is because MIPS does not track execution
time.
Performance Measures (MFLOPS)
• Million floating-point operations per second, MFLOPS (the rate of
floating-point operation execution per unit time), has also been
used as a measure of a machine’s performance. It is defined as

MFLOPS = Number of floating-point operations / (Execution time × 10^6)
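A minimal sketch of this ratio; the operation count and execution time are made-up illustrative values:

```python
# MFLOPS = floating-point operation count / (execution time × 10^6).
fp_operations = 12_000_000   # floating-point operations in the run (assumed)
exec_time = 0.5              # seconds (assumed)

mflops = fp_operations / (exec_time * 1e6)
print(mflops)   # 24.0
```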
Benchmark suite
• Benchmark suite:
• A collection of programs, defined in a high-level language.
• Together they attempt to provide a representative test of a computer in a particular
application or system programming area.
• Standard Performance Evaluation Corporation (SPEC): a non-profit
corporation formed to establish, maintain and endorse standardized
benchmarks and tools to evaluate performance and energy efficiency for
the newest generation of computing systems.
Arithmetic Mean & Geometric Mean
• The performance of a machine on one particular program might not
be interesting to a broad audience. The use of arithmetic and geometric
means is the most popular way to summarize performance over
larger sets of programs (e.g., benchmark suites). These are defined below.

Arithmetic mean = (1/n) Σ(i=1..n) Execution time_i

Geometric mean = (Π(i=1..n) Execution time_i)^(1/n)
Arithmetic Mean & Geometric Mean
• The following table shows an example of computing these metrics.
Item CPU time on computer A (s) CPU time on computer B (s)
Program1 50 10
Program2 500 100
Program3 5000 1000
Arithmetic mean 1850 370
Geometric mean 500 100
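A short Python check of the table’s values (note that (50 + 500 + 5000)/3 = 1850, and the geometric means come out to round numbers because the times form geometric progressions):

```python
# Arithmetic and geometric means of the CPU times in the table above.
import math

times_a = [50, 500, 5000]    # computer A
times_b = [10, 100, 1000]    # computer B

def arithmetic_mean(ts):
    return sum(ts) / len(ts)

def geometric_mean(ts):
    # nth root of the product of the n execution times
    return math.prod(ts) ** (1 / len(ts))

print(arithmetic_mean(times_a), round(geometric_mean(times_a)))  # 1850.0 500
print(arithmetic_mean(times_b), round(geometric_mean(times_b)))  # 370.0 100
```

The geometric mean preserves ratios: computer B is 5x faster than A on every program, and its geometric mean is exactly 5x smaller, while the arithmetic mean is dominated by the longest-running program.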
Harmonic Mean
• Harmonic Mean:

HM = n / ((1/x_1) + … + (1/x_n)) = n / Σ(i=1..n) (1/x_i),   x_i > 0
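The definition translates directly to code; the sample values below are illustrative:

```python
# Harmonic mean HM = n / Σ(1/x_i), defined only for x_i > 0.
def harmonic_mean(xs):
    assert all(x > 0 for x in xs), "harmonic mean requires x_i > 0"
    return len(xs) / sum(1 / x for x in xs)

print(round(harmonic_mean([50, 500, 5000]), 1))   # 135.1
```

Because reciprocals weight small values heavily, the harmonic mean of these times is far below both the arithmetic and geometric means, which is why it suits averaging rates.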
Amdahl’s law
• We consider speedup as a measure of how a machine performs after
some enhancement relative to its original performance. The following
relationship formulates Amdahl’s law.

SU_0 = Performance after enhancement / Performance before enhancement

Speedup = Execution time before enhancement / Execution time after enhancement

• Consider, for example, a possible enhancement to a machine that will
reduce the execution time for some benchmarks from 25 s to 15 s. We
say that the speedup resulting from such a reduction is SU_0 = 25/15 ≈ 1.67.
Amdahl’s law
• In its given form, Amdahl’s law accounts for cases whereby
improvement can be applied to the entire instruction execution time.
However, sometimes it may be possible to achieve performance
enhancement for only a fraction of the time, Δ:

SU_0 = 1 / ((1 − Δ) + (Δ / SU_Δ))

• It should be noted that when Δ = 1, that is, when enhancement is
possible at all times, then SU_0 = SU_Δ, as expected.
Amdahl’s law
• Consider, for example, a machine for which a speedup of 30 is
possible after applying an enhancement. If under certain conditions
the enhancement was only possible for 30% of the time, what is the
speedup due to this partial application of the enhancement?

SU_0 = 1 / ((1 − Δ) + (Δ / SU_Δ)) = 1 / ((1 − 0.3) + (0.3 / 30))
     = 1 / (0.7 + 0.01) ≈ 1.41
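The slide’s example can be checked with a short function implementing the fractional form of Amdahl’s law:

```python
# Amdahl's law: overall speedup when an enhancement with speedup su_delta
# applies only during a fraction delta of the execution time.
def overall_speedup(delta, su_delta):
    return 1 / ((1 - delta) + delta / su_delta)

# A 30x enhancement usable only 30% of the time yields barely 1.41x overall.
print(round(overall_speedup(0.3, 30), 2))   # 1.41
# Sanity check: with delta = 1 the overall speedup equals SU_delta itself.
print(round(overall_speedup(1.0, 30), 6))   # 30.0
```

This illustrates the usual lesson drawn from Amdahl’s law: the unenhanced fraction (1 − Δ) bounds the achievable speedup no matter how large SU_Δ becomes.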