CH02-HP Computer Abstractions and Technology

The document discusses the evolution of computer technology, highlighting the impact of Moore's Law and the various classes of computers, including desktops, servers, and embedded systems. It covers the architecture of computers, the importance of high-level programming languages, and the metrics for measuring performance such as response time and throughput. Additionally, it addresses challenges like the power wall and the role of multicore processors in enhancing performance through parallelism.


Computer Abstractions and Technology
Performance: response time, throughput, CPU performance equations, and the Power Wall
§1.1 Introduction
The Computer Revolution
• Progress in computer technology
  • Underpinned by Moore's Law
• Makes novel applications feasible
  • Computers in automobiles
  • Cell phones
  • Human genome project
  • World Wide Web
  • Search engines
• Computers are pervasive
Classes of Computers
• Desktop computers
  • General purpose, variety of software
  • Subject to cost/performance tradeoff
• Server computers
  • Network based
  • High capacity, performance, reliability
  • Range from small servers to building-sized
• Embedded computers
  • Hidden as components of systems
  • Stringent power/performance/cost constraints
§1.2 Below Your Program
Below Your Program
• Application software
  • Written in high-level language
• System software
  • Compiler: translates HLL code to machine code
  • Operating system: service code
    • Handling input/output
    • Managing memory and storage
    • Scheduling tasks & sharing resources
• Hardware
  • Processor, memory, I/O controllers
Levels of Program Code
• High-level language
  • Level of abstraction closer to problem domain
  • Provides for productivity and portability
• Assembly language
  • Textual representation of instructions
• Hardware representation
  • Binary digits (bits)
  • Encoded instructions and data
Benefits of High-Level Languages
• Allow programmers to think in a more natural language, often specific to their intended use
  • Fortran for scientific computation, COBOL for business, LISP for symbol processing, etc.
• Increase productivity
  • Fewer lines of code required
• Allow programs to be independent of the computer on which they are developed
  • Portable, via the use of compilers
§1.3 Under the Covers
Under the Covers
• Hardware performs the same basic functions across different types of computers:
  • Input data
  • Output data
  • Process data
  • Store data
The BIG Picture

Components of a Computer
• Same components for all kinds of computer (desktop, server, embedded):
  • Input
  • Output
  • Memory
  • Datapath
  • Control
Components of a Computer
• Input/output includes
  • User-interface devices
    • Display, keyboard, mouse
  • Storage devices
    • Hard disk, CD/DVD, flash
  • Network adapters
    • For communicating with other computers
Anatomy of a Computer
[Figure: anatomy of a desktop PC, labeling the output device, network cable, and input devices]
Inside the Processor (CPU)
• Datapath: performs operations on data
• Control: sequences datapath, memory, ...
• Cache memory
  • Small, fast SRAM memory for immediate access to data
  • SRAM is typically used for CPU caches
Abstractions
• Abstraction helps us deal with complexity
  • Hide lower-level detail
• Instruction set architecture (ISA)
  • The hardware/software interface
• Application binary interface (ABI)
  • The ISA plus the system software interface
• Implementation
  • The details underlying the interface
A Safe Place for Data
• Volatile main memory
  • Loses instructions and data when power is off
• Non-volatile secondary memory
  • Magnetic disk
  • Flash memory
  • Optical disk (CD-ROM, DVD)
Networks
• Communication and resource sharing
• Local area network (LAN): Ethernet
  • Within a building
• Metropolitan area network (MAN)
  • Within a city
• Wide area network (WAN)
  • Covering a wider geographical area (country, continent, and beyond)
• Wireless network: WiFi, Bluetooth
Technology Trends
• Electronics technology continues to evolve
  • Increased capacity and performance
  • Reduced cost (e.g., DRAM capacity)

Year   Technology                    Relative performance/cost
1951   Vacuum tube                   1
1965   Transistor                    35
1975   Integrated circuit (IC)       900
1995   Very large scale IC (VLSI)    2,400,000
2005   Ultra large scale IC          6,200,000,000
Manufacturing ICs
• The manufacturing of an IC begins with silicon
  • Silicon is a natural element and a semiconductor
• With special chemical processes, materials can be added to silicon to transform it into one of three devices:
  • Excellent conductors of electricity
  • Excellent insulators from electricity
  • Areas that can conduct or insulate under special conditions (a switch)
Response Time and Throughput
• Response time
  • How long it takes to do a task
• Throughput
  • Total work done per unit time
  • e.g., tasks/transactions/... per hour
• How are response time and throughput affected by
  • Replacing the processor with a faster version?
  • Adding more processors?
• We'll focus on response time for now...
Relative Performance
• Define Performance = 1 / Execution Time
• "X is n times faster than Y":

Performance_X / Performance_Y = Execution Time_Y / Execution Time_X = n

• Example: time taken to run a program
  • 10 s on A, 15 s on B
  • Execution Time_B / Execution Time_A = 15 s / 10 s = 1.5
  • So A is 1.5 times faster than B
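The relative-performance definition can be computed directly; a minimal sketch (the `speedup` helper is illustrative, not from the slides):

```python
# Performance = 1 / Execution Time, so the speedup of X over Y is
# Execution Time_Y / Execution Time_X.
def speedup(time_y, time_x):
    """Return n such that X is n times faster than Y."""
    return time_y / time_x

# The slide's example: 10 s on A, 15 s on B.
print(speedup(15.0, 10.0))  # 1.5, so A is 1.5 times faster than B
```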
Measuring Execution Time
• Elapsed time
  • Total response time, including all aspects
  • Processing, I/O, OS overhead, idle time
  • Determines system performance
• CPU time
  • Time spent processing a given job
  • Doesn't include I/O time or other jobs' shares
  • Divided into user CPU time (time spent in the program) and system CPU time (time spent in the OS performing tasks on behalf of the program)
  • Different programs are affected differently by CPU and system performance
CPU Clocking
• Operation of digital hardware is governed by a constant-rate clock
[Figure: clock waveform showing the clock period; data transfer and computation occur within a cycle, and state is updated at the clock edge]
• Clock period: duration of a clock cycle
  • e.g., 250 ps = 0.25 ns = 250×10⁻¹² s
• Clock frequency (rate): cycles per second
  • e.g., 4.0 GHz = 4000 MHz = 4.0×10⁹ Hz
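Since clock rate and clock period are reciprocals, the example numbers can be checked with a short sketch (the function name is illustrative):

```python
# Clock rate (Hz) is the reciprocal of clock period (seconds).
def clock_rate_hz(period_s):
    return 1.0 / period_s

rate = clock_rate_hz(250e-12)  # 250 ps period
print(rate / 1e9)              # 4.0 (GHz)
```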
Computer Organization & Design:The HW/SW Interface 5th Edition
CPU Time

CPU Time = CPU Clock Cycles × Clock Cycle Time
         = CPU Clock Cycles / Clock Rate

• Performance improved by
  • Reducing number of clock cycles
  • Increasing clock rate
• Hardware designer must often trade off clock rate against cycle count
CPU Time Example
• Computer A: 2 GHz clock, 10 s CPU time
• Designing Computer B
  • Aim for 6 s CPU time
  • Can use a faster clock, but that causes 1.2× the clock cycles
• How fast must Computer B's clock be?

Clock Rate_B = Clock Cycles_B / CPU Time_B = (1.2 × Clock Cycles_A) / 6 s
Clock Cycles_A = CPU Time_A × Clock Rate_A = 10 s × 2 GHz = 20×10⁹
Clock Rate_B = (1.2 × 20×10⁹) / 6 s = (24×10⁹) / 6 s = 4 GHz
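The same arithmetic as a minimal sketch with the example's numbers (variable names are illustrative):

```python
# Computer A: 2 GHz clock, 10 s CPU time; B needs 1.2x the cycles in 6 s.
rate_a = 2e9
time_a = 10.0
time_b = 6.0
cycle_factor = 1.2

cycles_a = time_a * rate_a                 # 20e9 cycles on A
rate_b = cycle_factor * cycles_a / time_b  # required clock rate for B
print(rate_b / 1e9)                        # 4.0 (GHz)
```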
Instruction Count and CPI

Clock Cycles = Instruction Count × Cycles per Instruction (CPI)
CPU Time = Instruction Count × CPI × Clock Cycle Time
         = (Instruction Count × CPI) / Clock Rate

• Instruction count for a program
  • Determined by program, ISA, and compiler
• Average cycles per instruction
  • Determined by CPU hardware
  • If different instructions have different CPI (clock cycles per instruction), average CPI is affected by the instruction mix
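A minimal sketch of the CPU-time equation (the helper name and example numbers are illustrative, not from the slides):

```python
# CPU Time = Instruction Count x CPI / Clock Rate
def cpu_time(instruction_count, cpi, clock_rate_hz):
    return instruction_count * cpi / clock_rate_hz

# e.g., 1 billion instructions at CPI 2.0 on a 2 GHz clock:
print(cpu_time(1e9, 2.0, 2e9))  # 1.0 (seconds)
```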
CPI Example
• Computer A: Cycle Time = 250 ps, CPI = 2.0
• Computer B: Cycle Time = 500 ps, CPI = 1.2
• Same ISA (instruction set architecture)
• Which is faster, and by how much?

CPU Time_A = Instruction Count × CPI_A × Cycle Time_A = I × 2.0 × 250 ps = I × 500 ps   ← A is faster...
CPU Time_B = Instruction Count × CPI_B × Cycle Time_B = I × 1.2 × 500 ps = I × 600 ps
CPU Time_B / CPU Time_A = (I × 600 ps) / (I × 500 ps) = 1.2   ← ...by this much
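Checking the comparison numerically, as a sketch (names are illustrative); since both machines run the same instruction count I, it cancels and only CPI × cycle time matters:

```python
# Time per instruction = CPI x cycle time; the shared instruction count cancels.
def time_per_instr_ps(cpi, cycle_time_ps):
    return cpi * cycle_time_ps

a = time_per_instr_ps(2.0, 250)  # Computer A: 500 ps per instruction
b = time_per_instr_ps(1.2, 500)  # Computer B: 600 ps per instruction
print(b / a)                     # 1.2: A is 1.2 times faster
```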
CPI in More Detail
• If different instruction classes take different numbers of cycles:

Clock Cycles = Σ (CPI_i × Instruction Count_i), summed over i = 1 to n

• Weighted average CPI:

CPI = Clock Cycles / Instruction Count = Σ (CPI_i × (Instruction Count_i / Instruction Count))

where Instruction Count_i / Instruction Count is the relative frequency of class i.
CPI Example
• Alternative compiled code sequences using instructions in classes A, B, C

Class               A   B   C
CPI for class       1   2   3
IC in sequence 1    2   1   2
IC in sequence 2    4   1   1

• Sequence 1: IC = 5
  • Clock Cycles = 2×1 + 1×2 + 2×3 = 10
  • Avg. CPI = 10/5 = 2.0
• Sequence 2: IC = 6
  • Clock Cycles = 4×1 + 1×2 + 1×3 = 9
  • Avg. CPI = 9/6 = 1.5
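The two sequences can be checked with a small sketch of the weighted-average CPI formula (the function name is illustrative):

```python
# Weighted-average CPI = sum(CPI_i * IC_i) / total instruction count.
def avg_cpi(class_cpi, instr_counts):
    cycles = sum(c * n for c, n in zip(class_cpi, instr_counts))
    return cycles / sum(instr_counts)

cpi = [1, 2, 3]                 # CPI for classes A, B, C
print(avg_cpi(cpi, [2, 1, 2]))  # sequence 1: 10 cycles / 5 instructions = 2.0
print(avg_cpi(cpi, [4, 1, 1]))  # sequence 2: 9 cycles / 6 instructions = 1.5
```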
Performance Summary

The BIG Picture

CPU Time = (Instructions / Program) × (Clock Cycles / Instruction) × (Seconds / Clock Cycle)

• Performance depends on
  • Algorithm: affects IC, possibly CPI
  • Programming language: affects IC, CPI
  • Compiler: affects IC, CPI
  • Instruction set architecture: affects IC, CPI, T_c
Reducing Power
• Suppose a new CPU has
  • 85% of the capacitive load of the old CPU
  • 15% voltage reduction and 15% frequency reduction

P_new / P_old = (C_old × 0.85 × (V_old × 0.85)² × (F_old × 0.85)) / (C_old × V_old² × F_old) = 0.85⁴ ≈ 0.52
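The ratio can be checked numerically; a sketch assuming the slide's dynamic-power model P ∝ C × V² × F:

```python
# New CPU: 85% capacitive load, 85% voltage, 85% frequency.
c_scale, v_scale, f_scale = 0.85, 0.85, 0.85
power_ratio = c_scale * v_scale**2 * f_scale  # = 0.85**4
print(round(power_ratio, 2))                  # 0.52
```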
• The power wall
  • We can't reduce voltage further
  • We can't remove more heat
• How else can we improve performance?
Multiprocessors
• Multicore microprocessors
  • More than one processor per chip
• Requires explicitly parallel programming
  • Compare with instruction-level parallelism
    • Hardware executes multiple instructions at once
    • Hidden from the programmer
  • Hard to do
    • Programming for performance
    • Load balancing
    • Optimizing communication and synchronization
Concluding Remarks
• Cost/performance is improving
  – Due to underlying technology development
• Hierarchical layers of abstraction
  – In both hardware and software
• Instruction set architecture
  – The hardware/software interface
• Execution time: the best performance measure
• Power is a limiting factor
  – Use parallelism to improve performance
