Lecture 2

The document discusses various ways to measure computer performance. Response time or execution time refers to the total time to complete a task including operating system overhead and I/O activities. Throughput or bandwidth refers to the number of tasks completed per unit of time. CPU execution time refers only to the time the CPU spends computing and does not include wait times. CPU time can be divided into user CPU time spent in a program and system CPU time spent in the operating system on behalf of the program. Clock cycles refer to the discrete time intervals determined by the computer's clock.


1.3 Under the Cover

• Dynamic random-access memory (DRAM) is a type of random-access semiconductor memory that stores each bit of data in a separate tiny capacitor within an integrated circuit.
• The capacitor can either be charged or discharged; these two states are taken to
represent the two values of a bit, conventionally called 0 and 1.
• The electric charge on the capacitors slowly leaks off, so without intervention
the data on the chip would soon be lost.
• To prevent this, DRAM requires an external memory refresh circuit which
periodically rewrites the data in the capacitors, restoring them to their original
charge.
• This refresh process is the defining characteristic of dynamic random-access
memory, in contrast to static random-access memory (SRAM) which does not
require data to be refreshed.
• Unlike flash memory, DRAM is volatile memory (vs. non-volatile memory), since
it loses its data quickly when power is removed.
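• The refresh idea can be pictured with a toy software model (a sketch only; real DRAM refresh is done by the memory controller in hardware, and the cell count, charge values, and refresh interval below are arbitrary): each cell holds a charge level that leaks away over time, and a periodic refresh rewrites full charge into every cell that still reads as 1.

#include <stdio.h>

#define CELLS 8
#define THRESHOLD 50   /* a charge above this level reads as a 1 */

/* Toy model only: charge levels decay each "tick"; a refresh rewrites
   every cell that still reads as 1 back to full charge. */
static int charge[CELLS] = {100, 0, 100, 100, 0, 100, 0, 100};

static void leak(void) {
    for (int i = 0; i < CELLS; i++)
        if (charge[i] > 0) charge[i] -= 10;   /* capacitors slowly lose charge */
}

static void refresh(void) {
    for (int i = 0; i < CELLS; i++)
        charge[i] = (charge[i] > THRESHOLD) ? 100 : 0;  /* rewrite the stored bits */
}

int main(void) {
    for (int tick = 1; tick <= 20; tick++) {
        leak();
        if (tick % 4 == 0) refresh();         /* periodic refresh keeps the data alive */
    }
    for (int i = 0; i < CELLS; i++)
        printf("%d", charge[i] > THRESHOLD);  /* the original bits survive only because of refresh */
    printf("\n");
    return 0;
}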
1.3 Under the Cover . . .
• Central Processor Unit (CPU):
The processor is the active part of the board, following the
instructions of a program to the letter.
It adds numbers, tests numbers, signals I/O devices to activate, and
so on.
The processor is under the fan and covered by a heat sink.

• The processor logically comprises two main components: datapath and control.

• The datapath performs the arithmetic operations, and control tells the
datapath, memory, and I/O devices what to do according to the wishes of
the instructions of the program.

• Chapter 4 explains the datapath and control for a higher-performance design.
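• As a rough software sketch of that split (a toy accumulator machine, not the design from Chapter 4), the control part of the loop below decides what each instruction means, while the datapath part does the actual arithmetic and data movement.

#include <stdint.h>
#include <stdio.h>

/* Toy accumulator machine: illustrates the control/datapath split only. */
enum { OP_LOAD, OP_ADD, OP_STORE, OP_HALT };

typedef struct { uint8_t op; uint8_t addr; } Instr;

int main(void) {
    int memory[16] = {0};
    memory[0] = 5; memory[1] = 7;
    Instr program[] = {
        {OP_LOAD, 0}, {OP_ADD, 1}, {OP_STORE, 2}, {OP_HALT, 0}
    };
    int acc = 0, pc = 0;

    for (;;) {
        Instr ir = program[pc++];          /* fetch the next instruction */
        switch (ir.op) {                   /* control: decode and select the action */
        case OP_LOAD:  acc = memory[ir.addr];        break;  /* datapath: move data */
        case OP_ADD:   acc = acc + memory[ir.addr];  break;  /* datapath: ALU add */
        case OP_STORE: memory[ir.addr] = acc;        break;  /* datapath: move data */
        case OP_HALT:  printf("memory[2] = %d\n", memory[2]); return 0;
        }
    }
}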
1.3 Under the Cover . . .
• Inside the processor is another type of memory—cache memory.
• Cache memory consists of a small, fast memory that acts as a buffer for the
DRAM memory.
• Cache is built using a different memory technology, static random access
memory (SRAM).
• SRAM is faster but less dense, and hence more expensive, than DRAM.
• You may have noticed a common theme in both the software and the
hardware descriptions: delving into the depths of hardware or software
reveals more information or, conversely, lower-level details are hidden to
offer a simpler model at higher levels.
• The use of such layers, or abstractions, is a principal technique for designing
very sophisticated computer systems.
• One of the most important abstractions is the interface between the
hardware and the lowest-level software.
• Because of its importance, it is given a special name: the instruction set
architecture, or simply architecture, of a computer.
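• A minimal sketch of the buffering idea (a toy direct-mapped cache in software, not how a hardware cache is built): a small, fast array holds copies of recently used words, and a lookup consults it before going to the larger, slower DRAM.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINES 8   /* toy cache: 8 direct-mapped lines, one word per line */

typedef struct { bool valid; uint32_t tag; int data; } Line;

static int dram[1024];        /* stands in for the large, slower DRAM */
static Line cache[LINES];     /* stands in for the small, fast SRAM cache */

static int read_word(uint32_t addr) {
    uint32_t index = addr % LINES;   /* which cache line the address maps to */
    uint32_t tag   = addr / LINES;   /* identifies which block occupies that line */

    if (cache[index].valid && cache[index].tag == tag) {
        printf("addr %u: hit\n", addr);
        return cache[index].data;            /* fast path: served from the cache */
    }
    printf("addr %u: miss, fetching from DRAM\n", addr);
    cache[index].valid = true;               /* slow path: fill the line from DRAM */
    cache[index].tag   = tag;
    cache[index].data  = dram[addr];
    return cache[index].data;
}

int main(void) {
    for (int i = 0; i < 1024; i++) dram[i] = i * i;
    read_word(5);    /* miss: first use of this address */
    read_word(5);    /* hit: the copy is still in the cache */
    read_word(13);   /* miss: maps to the same line and evicts address 5 */
    return 0;
}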



Abstraction
• Delving into the depths reveals more information
• An abstraction omits unneeded detail and helps us cope with complexity
Levels of Abstraction (can be studied top-down, from the programming language down, or bottom-up, from the electronics up):
• High-level Programming Language
• Assembly Language
• OS Machine Level
• Instruction Set Architecture (ISA)
• Conventional Machine Level
• Microprogramming Level
• Digital Level
• Electronics Level
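As a small illustration (not from the slides; the assembly in the comment is only one plausible translation, assuming a RISC-V target), the same addition can be viewed at several of these levels:

#include <stdio.h>

int main(void) {
    /* High-level language level: one statement expresses the intent. */
    int a = 6, b = 7;
    int c = a + b;

    /* Assembly language level: for a RISC-V target a compiler might emit
       something like
           add  a5, a4, a3     # a5 = a4 + a3
       Conventional machine level: that instruction is stored as a single
       32-bit binary word, which the control decodes and the datapath executes. */

    printf("c = %d\n", c);
    return 0;
}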
1.3 Under the Cover . . .
• The instruction set architecture includes anything programmers need to know
to make a binary machine language program work correctly, including
instructions, I/O devices, and so on.
• Typically, the operating system will encapsulate the details of doing I/O,
allocating memory, and other low-level system functions so that application
programmers do not need to worry about such details.
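• For example (an illustrative sketch, not taken from the slides), an application that writes to a file uses only the portable C library interface; the operating system, reached through system calls underneath fopen and fprintf, handles the device-specific details.

#include <stdio.h>

int main(void) {
    /* The application sees only the portable interface below.  The OS,
       via system calls, takes care of buffering, scheduling, and the
       device-specific work of actually performing the I/O. */
    FILE *f = fopen("log.txt", "w");
    if (f == NULL) return 1;
    fprintf(f, "hello, abstraction\n");
    fclose(f);
    return 0;
}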





Computer Architecture
What is Computer Architecture?
DISCUSSIONS

Computer used to run large problems and usually accessed via a network?



DISCUSSIONS

Computer used to run large problems and usually accessed via a network?
Servers
DISCUSSIONS

A class of computers composed of hundreds to thousands of processors and terabytes of memory, having the highest performance and cost?



DISCUSSIONS

A class of computers composed of hundreds to thousands of processors and terabytes of memory, having the highest performance and cost?
Supercomputers
DISCUSSIONS

A computer used to run one predetermined application or collection of software?
Embedded computers



1.4 Performance . . .

• When trying to choose among different computers, performance is an important attribute.
• Accurately measuring and comparing different computers is critical to
purchasers and therefore to designers.



1.4 Performance
• Hence, understanding how best to measure performance and the limitations
of performance measurements is important in selecting a computer.
• This section describes different ways in which performance can be
determined; then, we describe the metrics for measuring performance from
the viewpoint of both a computer user and a designer.
• When we say one computer has better performance than another, what do
we mean?
• If you were running a program on two different desktop computers, you’d
say that the faster one is the desktop computer that gets the job done first.
• If you were running a datacenter that had several servers running jobs
submitted by many users, you’d say that the faster computer was the one
that completed the most jobs during a day.
• As an individual computer user, you are interested in reducing response
time—the time between the start and completion of a task—also referred to
as execution time.
• Datacenter managers are often interested in increasing throughput or
bandwidth— the total amount of work done in a given time.
1.4 Performance

• Response time / execution time:
The total time required for the computer to complete a task, including disk accesses, memory accesses, I/O activities, operating system overhead, CPU execution time, and so on.

• Throughput / bandwidth.
Another measure of performance, it is the number of tasks completed
per unit time.
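• As an illustrative sketch (not part of the lecture; the work loop is a stand-in for a real task), both metrics can be measured for a batch of identical tasks: elapsed wall-clock time divided by the number of tasks gives the response time per task, and tasks divided by elapsed time gives the throughput.

#include <stdio.h>
#include <time.h>

/* Placeholder work standing in for one "task". */
static void do_task(void) {
    volatile double x = 0.0;
    for (long i = 1; i <= 5000000; i++) x += 1.0 / i;
}

int main(void) {
    const int tasks = 20;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);   /* wall-clock (elapsed) time */
    for (int i = 0; i < tasks; i++) do_task();
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;

    printf("response time per task: %.3f s\n", elapsed / tasks);
    printf("throughput: %.1f tasks/s\n", tasks / elapsed);
    return 0;
}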




1.4 Performance
• CPU execution time or simply CPU time, is the time the CPU spends
computing for this task and does not include time spent waiting for I/O or
running other programs.
• Remember, though, that the response time experienced by the user will be
the elapsed time of the program, not the CPU time.
• CPU time can be further divided into the CPU time spent in the program,
called user CPU time, and the CPU time spent in the operating system
performing tasks on behalf of the program, called system CPU time.
• Differentiating between system and user CPU time is difficult to do
accurately, because it is often hard to assign responsibility for operating
system activities to one user program rather than another and because of
the functionality differences among operating systems.
• For consistency, we maintain a distinction between performance based on
elapsed time and that based on CPU execution time.
• We will use the term system performance to refer to elapsed time on an
unloaded system and CPU performance to refer to user CPU time.
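• On Unix-like systems this breakdown can be observed directly; the sketch below (an illustration, not part of the lecture) reads user and system CPU time with getrusage() and elapsed time with clock_gettime(), mirroring the real/user/sys figures printed by the shell's time command.

#include <stdio.h>
#include <time.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);      /* start of elapsed (response) time */

    volatile double x = 0.0;                  /* placeholder user-mode work */
    for (long i = 1; i <= 20000000; i++) x += 1.0 / i;

    clock_gettime(CLOCK_MONOTONIC, &t1);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);              /* user and system CPU time used so far */

    double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

    printf("elapsed: %.3f s, user CPU: %.3f s, system CPU: %.3f s\n",
           elapsed, user, sys);
    return 0;
}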



1.4 Performance

CPU execution time

• Also called CPU time. The actual time the CPU spends computing for a
specific task.
User CPU time

• The CPU time spent in a program itself.

System CPU time

• The CPU time spent in the operating system performing tasks on behalf of
the program.



1.4 Performance
• Almost all computers are constructed using a clock that determines when
events take place in the hardware.
• These discrete time intervals are called clock cycles (or ticks, clock ticks,
clock periods, clocks, cycles).
• Designers refer to the length of a clock period both as the time for a
complete clock cycle (e.g., 250 picoseconds, or 250 ps) and as the clock
rate (e.g., 4 gigahertz, or 4 GHz), which is the inverse of the clock period.
• In the next subsection, we will formalize the relationship between the
clock cycles of the hardware designer and the seconds of the computer
user.
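• For reference, the standard relationships that the following slides build on are:

\text{clock rate} = \frac{1}{\text{clock period}}, \qquad \text{e.g. } \frac{1}{250\ \text{ps}} = \frac{1}{250 \times 10^{-12}\ \text{s}} = 4 \times 10^{9}\ \text{Hz} = 4\ \text{GHz}

\text{CPU execution time} = \text{CPU clock cycles} \times \text{clock cycle time} = \frac{\text{CPU clock cycles}}{\text{clock rate}}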
