Input/Output & System Performance Issues


• System Architecture & I/O Connection Structure
  – Types of buses/interconnects in the system; isolated I/O system architecture.
• I/O Data Transfer Methods.
• System and I/O Performance Metrics:
  – I/O throughput, i.e., system throughput in tasks per second.
  – I/O latency (response time), i.e., the time the system takes to process an average task.
• Magnetic Disk Characteristics.
• I/O System Modeling Using Queuing Theory:
  – Little's queuing law (more specifically, steady-state queuing theory).
  – Single server/single queue I/O modeling: M/M/1 queue.
  – Multiple servers/single queue I/O modeling: M/M/m queue.
• Designing an I/O System & System Performance:
  – Determining the system performance bottleneck
    (i.e., which component creates a system performance bottleneck).

4th Edition: Chapter 6.1, 6.2, 6.4, 6.5


3rd Edition: Chapter 7.1-7.3, 7.7, 7.8
The Von-Neumann Computer Model
• Partitioning of the computing engine into components:
  1 – Central Processing Unit (CPU): control unit (instruction decode, sequencing of operations) and datapath (registers, arithmetic and logic unit, buses).
  2 – Memory: instruction (program) and operand (data) storage.
  3 – Input/Output (I/O): communication between the CPU/memory and the outside world.

System Architecture = System components and how the components are connected (system interconnects)
Components of total system execution time (or response time): CPU + Memory + I/O.

[Figure: Computer system with (1) CPU (control unit and datapath: registers, ALU, buses), (2) memory holding instructions and data, and (3) the I/O subsystem (input and output devices), all connected by system interconnects.]

System performance depends on many aspects of the system ("limited by the weakest link in the chain"): the system performance bottleneck.
Input and Output (I/O) Subsystem
• The I/O subsystem provides the mechanism for communication between the CPU/memory and the outside world (I/O devices, including users).

• Design factors:
  – I/O device characteristics (input, output, storage, etc.) and performance.
  – I/O connection structure (degree of separation from memory operations): isolated I/O system architecture.
  – I/O interface (the utilization of dedicated I/O and bus controllers).
  – Types of buses/system interconnects (processor-memory vs. I/O buses/interconnects).
  – I/O data transfer or synchronization method (programmed I/O, interrupt-driven, DMA).
Components of total system execution time (or response time): CPU + Memory + I/O.
Typical FSB-Based System Architecture
System Architecture = System Components + System Component Interconnects

[Figure: The CPU (microprocessor chip) connects over the system bus, or front side bus (FSB), the CPU-memory system interconnect (1), to the memory controller (chipset north bridge); a back side bus (BSB) connects the CPU to one or more cache levels; the I/O controller hub (chipset south bridge) provides the I/O system interconnect, i.e., the isolated I/O system interconnects (2), to the I/O subsystem. The north and south bridges form the system core logic.]

Current system architecture uses isolated I/O: separate memory (system) and I/O buses. Thus there are two types of system interconnects/buses:
1 – CPU-memory bus or interconnect.
2 – I/O buses/interfaces.
Typical FSB-Based System Architecture (Example Parameters)
System Architecture = System Components + System Component Interconnects

[Figure: Annotated version of the FSB-based system architecture, with typical parameters:]
• CPU core: 1 GHz - 3.8 GHz, 4-way superscalar, RISC or RISC-core (x86), deep instruction pipelines, dynamic scheduling, multiple FP/integer FUs, dynamic branch prediction, hardware speculation, all non-blocking caches.
• Caches (possibly on-chip): L1 16-128K, 1-2 way set associative, separate or unified; L2 256K-2M, 4-32 way set associative (on chip), unified; L3 2-16M, 8-32 way set associative (on or off chip), unified.
• Front Side Bus (FSB) examples: Alpha, AMD K7 (EV6): 200-400 MHz; Intel PII, PIII (GTL+): 133 MHz; Intel P4: 800 MHz.
• Memory bus options: SDRAM PC100/PC133, 100-133 MHz, 64-128 bits wide, 2-way interleaved, ~900 MB/s (64-bit); Double Data Rate (DDR) SDRAM PC3200, 200 MHz DDR, 64-128 bits wide, 4-way interleaved, ~3.2 GB/s (64-bit); RAMbus DRAM (RDRAM), 400 MHz DDR, 16 bits wide (32 banks), ~1.6 GB/s.
• Main I/O bus examples (via bus adapter): PCI, 33-66 MHz, 32-64 bits wide, 133-528 MB/s; PCI-X, 133 MHz, 64 bits wide, 1066 MB/s.
• I/O devices attached through the chipset (north bridge and south bridge, i.e., the system core logic / isolated I/O subsystem): I/O controllers, NICs, disks, displays, keyboards, networks.

Current system architecture uses isolated I/O: separate memory (system) and I/O buses. Thus there are two types of system interconnects/buses: 1 – CPU-memory bus or interconnect; 2 – I/O buses/interfaces.

Important issue: Which component creates a system performance bottleneck?
Main Types of Buses/Interconnects in The System
1 Processor-Memory Bus/Interconnect: AKA system bus or front side bus (FSB)
  – Should offer very high speed (bandwidth) and low latency.
  – Matched to the memory system performance to maximize memory-processor bandwidth.
  – Usually system design-specific (not an industry standard).
  – Examples: Alpha EV6 (AMD K7): peak bandwidth = 400 MHz x 8 bytes = 3.2 GB/s
              Intel GTL+ (P3): peak bandwidth = 133 MHz x 8 bytes = 1 GB/s
              Intel P4: peak bandwidth = 800 MHz x 8 bytes = 6.4 GB/s
              HyperTransport 2.0: 200 MHz - 1.4 GHz, peak bandwidth up to 22.8 GB/s
              Also Intel's QuickPath Interconnect (QPI), used in the Core i7 system architecture (a point-to-point system interconnect, not a bus).
2 I/O Buses/Interconnects: Sometimes called I/O channels or interfaces
  – Follow bus/interface industry standards.
  – Usually formed by I/O interface adapters to handle many types of connected I/O devices.
  – Wide range in data bandwidth and latency.
  – Not usually interfaced directly to memory; instead connected to the processor-memory bus via a bus adapter (system chipset south bridge): isolated I/O system architecture.
  – Examples: Main system I/O bus: PCI, PCI-X, PCI Express. Storage interfaces: SATA, PATA, SCSI.

System Architecture = System Components + System Component Interconnects
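The peak bandwidth figures quoted above are simply clock rate x transfer width. A minimal sketch of that arithmetic (Python; not part of the original slides, the function name is illustrative):

    # Peak bus bandwidth = effective clock rate (transfers/sec) x bytes/transfer.
    def peak_bandwidth_gb_s(clock_mhz: float, width_bytes: int) -> float:
        """Peak bandwidth in GB/s (1 GB = 10**9 bytes)."""
        return clock_mhz * 1e6 * width_bytes / 1e9

    # The FSB examples from this slide:
    for name, mhz, width in [("Alpha EV6 (AMD K7)", 400, 8),
                             ("Intel GTL+ (P3)", 133, 8),
                             ("Intel P4", 800, 8)]:
        print(f"{name}: {peak_bandwidth_gb_s(mhz, width):.2f} GB/s")
    # -> 3.20, ~1.06, and 6.40 GB/s respectively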
FSB-Based Single Processor Socket System Architecture

[Figure: The CPU connects via the system bus (front side bus, FSB) to the memory controller hub (chipset north bridge), which links to the graphics/GPU and to system memory; FSB bandwidth usually should match or exceed that of main memory. The I/O controller hub (chipset south bridge) provides the isolated I/O. The north and south bridges form the system core logic.]
Intel Pentium 4 and Core 2 System Architecture (Using The Intel 925 Chipset)
System Architecture = System Components + System Component Interconnects

[Figure: The CPU (including cache) connects via the system bus (front side bus, FSB) to the memory controller hub (chipset north bridge); FSB bandwidth usually should match or exceed that of main memory. The north bridge connects to the graphics/GPU over the graphics I/O bus (PCI Express) and to system memory over two 8-byte DDR2 channels. The I/O controller hub (chipset south bridge) provides the isolated I/O subsystem: storage I/O (Serial ATA), the main I/O bus (PCI), miscellaneous I/O interfaces, and the Basic Input Output System (BIOS). The north and south bridges form the system core logic.]

Current system architecture: isolated I/O, i.e., separate memory and I/O buses.

Source: http://www.anandtech.com/showdoc.aspx?i=2088&p=4
Intel Core i7 "Nehalem" System Architecture
Intel's QuickPath Interconnect (QPI), a point-to-point system interconnect, is used instead of the front side bus (FSB), and the memory controller is integrated on the processor chip (three DDR3 channels).

[Figure: The processor's integrated memory controllers connect directly to system memory; a QuickPath Interconnect (QPI) link (replacing the FSB) connects the processor to a partial north bridge (no memory controller) serving the graphics/GPU; further QPI link(s) reach the I/O controller hub (chipset south bridge), which provides the isolated I/O.]

QuickPath Interconnect: Intel's first point-to-point interconnect, introduced in 2008 with the Nehalem architecture as an alternative to HyperTransport.
Bus Characteristics

Option              High performance (e.g., FSB)            Low cost/performance
Bus width           Separate address & data lines           Multiplexed address & data lines
Data width          Wider is faster (e.g., 64 bits)         Narrower is cheaper (e.g., 16 bits)
Transfer size       Multiple words have less bus overhead   Single-word transfer is simpler
Bus masters         Multiple (requires arbitration)         Single master (no arbitration)
Split transaction?  Yes: separate request and reply         No: a continuous connection is cheaper
                    packets get higher bandwidth            and has lower latency
                    (needs multiple masters)
Clocking            Synchronous                             Asynchronous

FSB = Front Side Bus (processor-memory bus or system bus)
Example CPU-Memory System Buses (Front Side Buses, FSBs)

Bus                Summit     Challenge  XDBus      SP        P4
Originator         HP         SGI        Sun        IBM       Intel
Clock Rate (MHz)   60         48         66         111       800
Split transaction? Yes        Yes        Yes        Yes       Yes
Address lines      48         40         ??         ??        ??
Data lines         128        256        144        128       64
Clocks/transfer    4          5          4          ??        ??
Peak (MB/s)        960        1200       1056       1700      6400
Master             Multi      Multi      Multi      Multi     Multi
Arbitration        Central    Central    Central    Central   Central
Addressing         Physical   Physical   Physical   Physical  Physical
Length             13 inches  12 inches  17 inches  ??        ??

FSB bandwidth matched with single 8-byte channel SDRAM.
FSB bandwidth matched with dual-channel PC3200 DDR SDRAM (P4: 2 x 3,200 MB/s = 6,400 MB/s).
Main System I/O Bus Example: PCI, PCI-X, PCI Express

Specification                         Bus Width (bits)   Bus Frequency (MHz)   Peak Bandwidth (MB/sec)
PCI 2.3 (legacy PCI)                  32                 33.3                  133
PCI 2.3                               64                 33.3                  266
PCI 2.3                               64                 66.6                  533
PCI-X 1.0                             64                 133.3                 1066
PCI-X 2.0 (not implemented yet)       64                 266, 533              2100, 4200
PCI-Express (formerly Intel's 3GIO)   1-32 (lanes)       ???                   500-16,000

Addressing: physical.   Master: multi.   Arbitration: central.

PCI bus transaction latency:
  PCI requires 9 cycles @ 33 MHz (272 ns)
  PCI-X requires 10 cycles @ 133 MHz (75 ns)

PCI = Peripheral Component Interconnect
Storage I/O Interfaces/Buses

                     EIDE/Parallel ATA (PATA)   SCSI
Data Width           16 bits                    8 or 16 bits (wide)
Clock Rate           Up to 100 MHz              10 MHz (Fast), 20 MHz (Ultra),
                                                40 MHz (Ultra2), 80 MHz (Ultra3),
                                                160 MHz (Ultra4)
Bus Masters          1                          Multiple
Max no. devices      2                          7 (8-bit bus), 15 (16-bit bus)
Peak Bandwidth       200 MB/s                   320 MB/s (Ultra4)
Target Application   Desktop                    Servers

EIDE = Enhanced Integrated Drive Electronics    SCSI = Small Computer System Interface
ATA = Advanced Technology Attachment            PATA = Parallel ATA    SATA = Serial ATA
I/O Data Transfer Methods
1 • Programmed I/O (PIO): Polling (for low-speed I/O)
  – The I/O device puts its status information in a (memory-mapped) status register.
  – The processor must periodically check the status register.
  – The processor is totally in control and does all the work.
  – Very wasteful of processor time.
  – Used for low-speed I/O devices (mice, keyboards, etc.).
2 • Interrupt-Driven I/O (for medium-speed I/O):
  – An interrupt line from the I/O device to the CPU is used to generate an I/O interrupt indicating that the I/O device needs CPU attention (e.g., data is ready).
  – The interrupting device places its identity in an interrupt vector.
  – Once an I/O interrupt is detected, the current instruction is completed and an I/O interrupt handling routine (provided by the OS) is executed to service the device.
  – Used for moderate-speed I/O (optical drives, storage, networks, ...).
  – Allows overlap of CPU processing time and I/O processing time:
        Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)

[Figure: timeline comparing no overlap vs. overlap of CPU processing time and I/O processing time.]
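As a quick numeric illustration of the overlap equation (Python; the timing values here are made up for the example):

    # Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)
    # Hypothetical values: 10 s of CPU work and 6 s of I/O.
    t_cpu, t_io = 10.0, 6.0
    print(t_cpu + t_io - 0.0)   # no overlap (polling-style): 16 s
    print(t_cpu + t_io - 6.0)   # full overlap of the I/O time: 10 s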
I/O Data Transfer Methods (continued)
3 • Direct Memory Access (DMA) (for high-speed I/O):
• Implemented with a specialized controller that transfers data between an I/O
device and memory independent of the processor.
• The DMA controller becomes the bus master and directs reads and writes
between itself and memory.
• Interrupts are still used only on completion of the transfer or when an error
occurs.
• Even lower CPU overhead, used in high speed I/O (storage, network interfaces)
• Allows more overlap of CPU processing time and I/O processing time than
interrupt-driven I/O.
• DMA transfer steps:
1 – The CPU sets up DMA by supplying device identity, operation, memory
address of source and destination of data, the number of bytes to be
transferred.
2 – The DMA controller starts the operation. When the data is available it
transfers the data, including generating memory addresses for data to be
transferred.
3 – Once the DMA transfer is complete, the controller interrupts the processor,
which determines whether the entire operation is complete.
I/O Interface/Controller
I/O interface, I/O controller, or I/O bus adapter:
  – Specific to each type of I/O device/interface standard.
  – To the CPU and I/O device, it consists of a set of control and data registers (usually memory-mapped) within the I/O address space.
  – On the I/O device side, it forms a localized I/O bus which can be shared by several I/O devices (e.g., IDE, SCSI, USB, ...): industry-standard interfaces.
  – Why? It handles I/O details (originally done by the CPU), off-loading low-level I/O processing from the CPU, such as:
      • Assembling bits into words,
      • Low-level error detection and correction,
      • Accepting or providing words in word-sized I/O registers.
  – Presents a uniform interface to the CPU regardless of the I/O device.
I/O Controller Architecture

[Figure: The CPU (processor with cache) connects over the front side bus (FSB), i.e., the CPU-memory interconnect, to memory and to the chipset north bridge (part of the system core logic). The peripheral or main I/O bus (PCI, PCI-X, etc.) from the chipset south bridge (also part of the system core logic) reaches the I/O controller, which contains a peripheral bus interface/DMA engine, a micro-controller or embedded processor with its own memory (µProc, ROM), a data buffer, and an I/O channel interface to the I/O devices (SCSI, IDE, USB, ...): industry-standard interfaces.]

Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)
I/O: A System Performance Perspective
• CPU performance: improvement of ~60% per year.
• I/O subsystem performance (i.e., storage devices such as hard drives): limited by mechanical delays (disk I/O); improvement less than 10% per year (in I/O rate per second or MB per second).
• From Amdahl's Law: overall system speedup is limited by the slowest component.
  Suppose I/O is 10% of current processing time (originally CPU-bound: I/O = 10%, CPU = 90%):
  – Increasing CPU performance by 10 times gives only ~5 times system performance increase (Speedup = 5.2, a ~50% loss relative to the CPU speedup); I/O now accounts for 53% of the time and the CPU for 47%.
  – Increasing CPU performance by 100 times gives only ~10 times system performance (Speedup = 9.2, a ~90% loss); I/O now accounts for 92% of the time and the CPU for 8%. The workload has become I/O-bound.
• The I/O system performance bottleneck diminishes the benefit of faster CPUs on overall system performance.
System performance depends on many aspects of the system ("limited by the weakest link in the chain"): the system performance bottleneck.
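The speedup figures above follow directly from Amdahl's Law; a small sketch of the arithmetic (Python; the function name is illustrative):

    # Amdahl's Law: speedup = 1 / ((1 - f) + f / s), where f is the fraction
    # of time being sped up (here the 90% CPU portion) and s is its speedup.
    def system_speedup(cpu_fraction: float, cpu_speedup: float) -> float:
        return 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / cpu_speedup)

    print(system_speedup(0.9, 10))    # ~5.26: 10x faster CPU, I/O = 10%
    print(system_speedup(0.9, 100))   # ~9.17: 100x faster CPU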
System & I/O Performance Metrics/Modeling
• Diversity: The variety of I/O devices that can be connected to the system.
• Capacity: The maximum number of I/O devices that can be connected to the system.

I/O performance modeling:
• Producer/server model of I/O: The producer (user, OS, or CPU) creates I/O tasks to be performed and places them in a task buffer (FIFO queue); the server (I/O device plus its controller) takes tasks from the queue and performs them.

I/O (or entire system) performance metrics:
1 • I/O throughput: The maximum data rate that can be transferred to/from an I/O device or subsystem, or the maximum number of I/O tasks or transactions completed by I/O in a certain period of time.
  ⇒ Maximized when the task queue is never empty (server always busy).
2 • I/O latency or response time: The time an I/O task takes from the time it is placed in the task buffer or queue until the server (I/O system) finishes the task. Includes I/O device service time and buffer waiting (or queuing) time.
  ⇒ Minimized when the task queue is always empty (no queuing time).

Response Time = Service Time + Queuing Time
System & I/O Performance Metrics: Throughput
• Throughput is a measure of speed: the rate at which the I/O or storage system delivers data.
• I/O throughput is measured in two ways:
1 • I/O rate:
  – Measured in I/O tasks/sec: accesses/second, Transactions Per Second (TPS), or I/O Operations Per Second (IOPS).
  – The I/O rate is generally used for applications where the size of each request is small, such as in transaction processing (i.e., server applications).
2 • Data rate, measured in bytes/second or megabytes/second (MB/s, GB/s, ...):
  – The data rate is generally used for applications where the size of each request is large, such as in scientific and multimedia applications.
System & I/O Performance Metrics: Response Time
• Response time measures how long a storage (or I/O, or entire) system takes to process an I/O request and access data: the I/O request latency or total processing time per I/O request, i.e., the time it takes the system to process an average task.
• This time can be measured in several ways. For example, one could measure time from the user's perspective, the operating system's perspective, or the disk controller's perspective, depending on what you view as the storage or I/O system.

Is response time always = 1 / throughput?

Response time (from the I/O request being initiated by the CPU, user, or OS, until the request is done) =
  CPU time + I/O bus transfer time + queue time + I/O controller time + I/O device service time + ...

The utilization of DMA and I/O device queues, and multiple I/O devices servicing a queue, may make throughput >> 1 / response time.
I/O Modeling: The Producer-Server Model
Time in system for a task = Timesystem = Response Time = Queuing Time + Service Time

[Figure: Single queue + single server. The producer (user, OS, or CPU) generates I/O tasks at an average arrival rate of r tasks/sec; tasks wait in a FIFO queue (queue wait time Tq) and are then handled by the server (I/O device + controller) with service time Tser per task.]

• Throughput:
  – The number of tasks completed by the server in unit time.
  – Throughput is maximized when:
      • The server is never idle.
      • The queue is never empty.
• Response time:
  – Begins when a task is placed in the queue.
  – Ends when it is completed by the server.
  – Response time is minimized when:
      • The queue is empty (no waiting time in queue).
      • The server will be idle at times.
Producer-Server Model: Throughput vs. Response Time
Single queue + single server: tasks arrive at rate r from the user or CPU, wait Tq in the FIFO queue, and are serviced in Tser by the I/O device + controller.
Response Time = Timesystem = Timequeue + Timeserver = Tq + Tser

[Figure: Response time plotted against throughput. At low utilization the queue is almost empty most of the time, so tasks spend little time in the queue; as throughput approaches the server's capacity the queue is full most of the time and tasks spend more time in the queue, so response time grows rapidly.]

Utilization (AKA loading factor): U ranges from 0 to 1 (0% to 100%).
I/O Performance: Throughput Enhancement

[Figure: Two queues + two servers. The producer (e.g., the CPU) distributes I/O tasks across two FIFO queues, each feeding its own server (I/O device + controller) with queue time Tq and service time Tser.]

• In general, throughput can be improved by:
  – Throwing more hardware at the problem.
  – This reduces load-related latency (less queuing time).
• Response time is much harder to reduce:
  – e.g., it requires a faster I/O device (i.e., a faster server).

Response Time = Timesystem = Timequeue + Timeserver = Tq + Tser
(Ignoring CPU I/O processing time and other system delays.)
Storage I/O Systems: Magnetic Disks
Characteristics:
• Diameter (form factor): 1.8 in - 3.5 in.
• Rotational speed: 5,400 RPM - 15,000 RPM.
• Tracks per surface.
• Sectors per track: outer tracks contain more sectors.
• Recording or areal density: tracks/in x bits/in (bits/in^2). Current areal density ~500 Gbits/in^2.
• Cost per megabyte.
• Seek time (2-12 ms): the time needed to move the read/write head arm. Reported values: minimum, maximum, average.
• Rotational latency or delay (2-8 ms): the time for the requested sector to be under the read/write head (~ time for half a rotation). Current rotation speeds: 7,200-15,000 RPM.
• Transfer time: the time needed to transfer a sector of bits.
• Type of controller/interface: SCSI, EIDE (PATA, SATA).
• Disk controller delay or time.
• Average time to access a sector of data =
    average seek time + average rotational delay + transfer time + disk controller overhead
  (ignoring queuing time)
Access time = average seek time + average rotational delay
Basic Disk Performance Example
• Given the following disk parameters:
  – Average seek time is 5 ms
  – Disk spins at 10,000 RPM
  – Transfer rate is 40 MB/sec
  – Controller overhead is 0.1 ms
• Assume that the disk is idle, so no queuing delay exists.
• What is the average disk read or write service time for a 500-byte (0.5 KB) sector?

Average service time = avg. seek + avg. rotational delay (time for half a rotation) + transfer time + controller overhead
  = 5 ms + 0.5/(10,000 RPM / 60) + 0.5 KB / (40 MB/s) + 0.1 ms
  = 5 + 3 + 0.013 + 0.1 = 8.11 ms  (this is the access time)

The actual time to process the disk request, Tservice (or Tser, the disk service time for this request), is greater and may include CPU I/O processing time and queuing time.
Here: 1 KByte = 10^3 bytes, 1 MByte = 10^6 bytes, 1 GByte = 10^9 bytes.
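The same calculation as a small sketch (Python; the function name and structure are mine, the parameters come from the example):

    # Average disk service time = avg seek + avg rotational delay (half a
    # rotation) + transfer time + controller overhead. Result in ms.
    def disk_service_time_ms(seek_ms, rpm, sector_kb, rate_mb_s, ctrl_ms):
        half_rotation_ms = 0.5 / (rpm / 60.0) * 1000.0
        transfer_ms = sector_kb * 1e3 / (rate_mb_s * 1e6) * 1000.0
        return seek_ms + half_rotation_ms + transfer_ms + ctrl_ms

    # The example above: 5 + 3 + 0.0125 + 0.1 = ~8.11 ms
    print(disk_service_time_ms(5.0, 10_000, 0.5, 40.0, 0.1))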
Historic Perspective of Hard Drive Characteristics Evolution: Areal Density

[Figure: areal density over time; an 8.5-million-times increase.]

Drive areal density has increased by a factor of 8.5 million since the first disk drive, IBM's RAMAC, was introduced in 1957. Since 1991, the rate of increase in areal density has accelerated to 60% per year, and since 1997 this rate has further accelerated to an incredible 100% per year.

Current areal density ~640 Gbits/in^2.
Historic Perspective of Hard Drive Characteristics Evolution: Internal Data Transfer Rate

[Figure: internal data transfer rate over time; a ~100x increase over the last 20 years.]

The internal data transfer rate increase is influenced by the increase in areal density.
Historic Perspective of Hard Drive Characteristics Evolution: Access/Seek Time

[Figure: access/seek time over time; less than a 3x improvement over 15 years. Access time = average seek time + average rotational delay.]

Access/seek time is a big factor in the service (response) time for small/random disk requests. Improvement is limited by mechanical rotation speed and seek delay.
Historic Perspective of Hard Drive Characteristics Evolution: Cost

[Figure: cost per MByte over time; more than a 100,000x cost drop.]

The price per megabyte of disk storage has been decreasing at about 40% per year based on improvements in data density, even faster than the price decline for flash memory chips. Recent trends in HDD price per megabyte show an even steeper reduction.

Actual hard disk storage cost (second quarter 2012): ~0.00005 dollars per MByte, or about 20 GBytes per dollar.
Historic Perspective of Hard Drive Characteristics Evolution: Roadmap

[Figure: areal density roadmap. Current areal density ~640 Gbits/in^2.]
Introduction to Queuing Theory (Steady State)

[Figure: tasks arrive at average rate r tasks/sec into a black-box system and depart; in steady state, arrival rate = departure rate.]

• Queuing theory here is concerned with long-term, steady-state behavior rather than startup, where arrival rate r = departure rate.
• Little's Law (steady state):
      Lsys = r x Tsys
  i.e., the mean number of tasks in the system (Lsys, length) = arrival rate (r) x mean response time (Tsys, system time).
• Applies to any system in equilibrium, as long as nothing in the black box is creating or destroying tasks.
I/O Performance & Little's Queuing Law

[Figure: FIFO system (single queue + single server). The producer (CPU, OS, or user) issues tasks at arrival rate r tasks/sec; tasks wait Tq in the queue and are serviced in Tser by the server, i.e., the I/O device and its I/O controller (IOC). Tsys = Tq + Tser.]

• Given an I/O system in equilibrium (input rate equal to output rate) and:
  – Tser: average time to service a task = 1 / service rate
  – Tq: average time per task in the queue
  – Tsys: average time per task in the system, or the response time, the sum of Tser and Tq; thus Tsys = Tser + Tq (ignoring CPU processing time and other system delays)
  – r: average number of arriving tasks/sec (i.e., the task arrival rate)
  – Lser: average number of tasks in service
  – Lq: average length of the queue
  – Lsys: average number of tasks in the system, the sum of Lq and Lser
• Little's Law states:
      Lsys = r x Tsys   (applied to the system)
      Lq = r x Tq       (applied to the queue)
• Server utilization (AKA loading factor) = u = r / service rate = r x Tser.
  u must be between 0 and 1; otherwise more tasks would be arriving than could be serviced.
Here a server is the device (e.g., a hard drive) and its I/O controller (IOC).
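A minimal sketch of these steady-state relations (Python; helper names and example numbers are illustrative):

    # Little's Law in steady state: mean tasks in a box = arrival rate x
    # mean time in the box, applied to the queue (Lq = r x Tq) and to the
    # whole system (Lsys = r x Tsys). Utilization u = r x Tser must be < 1.
    def utilization(r: float, t_ser: float) -> float:
        u = r * t_ser
        assert u < 1.0, "arrivals exceed service capacity: no steady state"
        return u

    def tasks_in_system(r: float, t_sys: float) -> float:
        return r * t_sys                     # Lsys = r x Tsys

    # Hypothetical numbers: 50 tasks/s, 10 ms service, 25 ms in system.
    print(utilization(50.0, 0.010))          # u = 0.5
    print(tasks_in_system(50.0, 0.025))      # Lsys = 1.25 tasks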
A Little Queuing Theory

FIFO system (single queue + single server): tasks arrive at rate r tasks/sec, wait Tq in the queue, and are serviced (Proc -> IOC -> device) with task service time Tser.

• The server spends a variable amount of time with customers:
  – Arithmetic mean time: m1 = f1 x T1 + f2 x T2 + ... + fn x Tn,
      where Ti is the time for task i and fi is the frequency of task i.
  – Variance = (f1 x T1^2 + f2 x T2^2 + ... + fn x Tn^2) - m1^2
      • Must keep track of the unit of measure (100 ms^2 vs. 0.1 s^2). Variance = (standard deviation)^2.
  – Squared coefficient of variance: C^2 = variance / m1^2 (a unitless measure).
• Distributions:
  – Exponential (Poisson) distribution, C^2 = 1: most tasks short relative to the average, a few others long; 90% < 2.3 x average, 63% < average.
  – Hypoexponential distribution, C^2 < 1: most tasks close to the average; C^2 = 0.5 => 90% < 2.0 x average, only 57% < average.
  – Hyperexponential distribution, C^2 > 1: further from the average; C^2 = 2.0 => 90% < 2.8 x average, 69% < average.
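A short sketch computing m1, the variance, and C^2 from (frequency, time) pairs (Python; the example workload values are made up for illustration):

    # Weighted mean, variance, and squared coefficient of variance C^2
    # for a service-time distribution given (frequency, time) pairs.
    def c_squared(freq_times):
        m1 = sum(f * t for f, t in freq_times)                 # mean
        var = sum(f * t * t for f, t in freq_times) - m1 * m1  # variance
        return var / (m1 * m1)                                 # unitless

    # Hypothetical workload: 80% of tasks take 10 ms, 20% take 40 ms.
    print(c_squared([(0.8, 10.0), (0.2, 40.0)]))   # 0.5625, i.e., C^2 < 1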
A Little Queuing Theory (continued)

FIFO system (single queue + single server): the producer (CPU, OS, or user) issues tasks at arrival rate r tasks/sec; tasks wait Tq in the queue and are serviced in Tser by the I/O controller (IOC) and device. Tsys = Tq + Tser.

• Service time completions vs. waiting time for a busy server: a randomly arriving task joins a queue of arbitrary length when the server is busy; otherwise it is serviced immediately.
  – Unlimited-length queues are the key simplification.
• A single server queue: the combination of a servicing facility that accommodates one task at a time (the server) + a waiting area (the queue), together called a system.
• The server spends a variable amount of time servicing tasks, on average Timeserver.

Response time: Timesystem = Timequeue + Timeserver = Tsys = Tq + Tser
(Ignoring CPU processing time and other system delays; this response time does not account for other factors such as CPU time.)

Timequeue = Lengthqueue x Timeserver + time for the server to complete the current task
Time for the server to complete the current task = server utilization x remaining service time of the current task
Lengthqueue = arrival rate x Timequeue   (Little's Law)

We need to estimate the waiting time in the queue (i.e., Timequeue = Tq).
Here a server is the device (e.g., a hard drive) and its I/O controller (IOC).
A Little Queuing Theory: Average Queue Wait Time Tq
(For a single queue + single server)
• Calculating the average wait time in queue, Tq:
  – If something is at the server, it takes on average m1(z) = 1/2 x Tser x (1 + C^2) to complete (the average residual service time).
  – The chance the server is busy is u; the average delay is u x m1(z) = 1/2 x u x Tser x (1 + C^2).
  – All customers in line must complete, each taking Tser on average:
      Timequeue = time for the server to complete the current task + Lengthqueue x Timeserver
      Timequeue = average residual service time + Lengthqueue x Timeserver

      Tq = u x m1(z) + Lq x Tser = 1/2 x u x Tser x (1 + C^2) + Lq x Tser
      Tq = 1/2 x u x Tser x (1 + C^2) + r x Tq x Tser      (Lq = r x Tq by Little's Law)
      Tq = 1/2 x u x Tser x (1 + C^2) + u x Tq             (rearrange, since r x Tser = u)
      Tq x (1 - u) = Tser x u x (1 + C^2) / 2
      Tq = Tser x u x (1 + C^2) / (2 x (1 - u))

• Notation:
  r     average number of arriving tasks/second
  Tser  average time to service a task
  u     server utilization (0..1): u = r x Tser    (What if utilization u = 1?)
  Tq    average time per request in the queue
  Lq    average length of the queue: Lq = r x Tq

A version of this derivation is in the textbook, page 385 (3rd Edition: page 726).
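The derived formula as a small sketch (Python; with C^2 = 1 it reduces to the M/M/1 result used on the next slide):

    # Average queue wait time for a single queue + single server (M/G/1):
    # Tq = Tser * u * (1 + C^2) / (2 * (1 - u)), with u = r * Tser < 1.
    def tq_mg1(t_ser: float, r: float, c2: float) -> float:
        u = r * t_ser
        assert 0.0 <= u < 1.0, "need u < 1 for a steady state"
        return t_ser * u * (1.0 + c2) / (2.0 * (1.0 - u))

    # With C^2 = 1 (exponential service), Tser = 20 ms and r = 40/s:
    # u = 0.8 and Tq = Tser * u / (1 - u) = 80 ms.
    print(tq_mg1(0.020, 40.0, 1.0))   # 0.08 s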
A Little Queuing Theory: M/G/1 and M/M/1
(Single queue + single server)
• Assumptions so far:
  – System in equilibrium.
  – Times between two successive task arrivals are random (i.e., C^2 = 1 for the arrival distribution).
  – The server can start on the next task immediately after the prior one finishes.
  – No limit to the queue: it works First-In-First-Out (FIFO).
  – All tasks in line must complete, each taking Tser on average.
• This describes "memoryless" or Markovian request arrival (M for C^2 = 1, exponentially random), a general service distribution (G, no restrictions), and 1 server: the M/G/1 queue. (Queue notation: arrival distribution / service distribution / number of servers.)
• When service times also have C^2 = 1, it is an M/M/1 queue:
      Tq = Tser x u x (1 + C^2) / (2 x (1 - u)) = Tser x u / (1 - u)
  where:
      Tq    average time per task in the queue (the queuing time)
      Tser  average time to service a task
      Lq    average length of queue: Lq = r x Tq = u^2 / (1 - u)
      u     server utilization (0..1): u = r x Tser
(In textbook: page 726.)
Timesystem = Timequeue + Timeserver, i.e., Tsys = Tq + Tser.
Single Queue + Multiple Servers (Disks/Controllers) I/O Modeling: The M/M/m Queue
(Queue notation: arrival distribution / service distribution / number of servers)
• An I/O system with Markovian request arrival rate r (i.e., C^2 = 1).
• A single FIFO queue serviced by m servers (disks + controllers), each with Markovian service rate = 1/Tser (i.e., C^2 = 1), with requests distributed evenly among all servers:

      Tq = Tser x u / [m x (1 - u)]      where u = r x Tser / m

  m     number of servers, each with service time Tser
  Tser  average time to service a task
  u     server utilization (0..1): u = r x Tser / m
  Tq    average time per task in the queue
  Lq    average length of queue: Lq = r x Tq
  Tsys = Tser + Tq: time in system (mean response time)

Please note: we will use this simplified formula for M/M/m, not the version in the 4th Edition of the textbook on page 388 (3rd Edition: page 729). It treats the m servers as if they were a single server with an effective service time of Tser / m.
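A sketch of this simplified M/M/m wait-time formula (Python; the function name is illustrative):

    # Simplified M/M/m queue wait time used in these slides:
    # u = r * Tser / m, Tq = Tser * u / (m * (1 - u)).
    def tq_mmm(t_ser: float, r: float, m: int) -> float:
        u = r * t_ser / m
        assert 0.0 <= u < 1.0, "need u < 1 for a steady state"
        return t_ser * u / (m * (1.0 - u))

    # Sanity check: with m = 1 this is the M/M/1 formula (u = 0.8 -> 80 ms).
    print(tq_mmm(0.020, 40.0, 1))     # 0.08 s
    print(tq_mmm(0.0149, 4020, 100))  # ~0.22 ms, used in the last example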
I/O Queuing Performance: An M/M/1 Example
• A processor sends 40 disk I/O requests per second; requests and service are exponentially distributed (i.e., C^2 = 1); average disk service time = 20 ms.
• On average:
  – What is the disk utilization u?
  – What is the average time spent in the queue, Tq?
  – What is the average response time for a disk request, Tsys?
  – What is the number of requests in the queue, Lq? In the system, Lsys?
• We have:
  r     average number of arriving requests/second = 40
  Tser  average time to service a request = 20 ms (0.02 s)
• We obtain:
  u     server utilization: u = r x Tser = 40/s x 0.02 s = 0.8, or 80%
  Tq    average time per request in queue = Tser x u / (1 - u)
        = 20 x 0.8/(1 - 0.8) = 20 x 0.8/0.2 = 20 x 4 = 80 ms (0.08 s)
  Tsys  average time per request in system (i.e., mean response time): Tsys = Tq + Tser = 80 + 20 = 100 ms
  Lq    average length of queue: Lq = r x Tq = 40/s x 0.08 s = 3.2 requests in queue
  Lsys  average number of tasks in system: Lsys = r x Tsys = 40/s x 0.1 s = 4
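Reproducing the example numbers (Python sketch; variable names are mine):

    # M/M/1 example: r = 40 requests/s, Tser = 20 ms.
    r, t_ser = 40.0, 0.020
    u = r * t_ser                      # utilization: 0.8
    t_q = t_ser * u / (1.0 - u)        # queue time: 0.08 s
    t_sys = t_q + t_ser                # response time: 0.1 s
    l_q, l_sys = r * t_q, r * t_sys    # 3.2 in queue, 4 in system
    print(u, t_q, t_sys, l_q, l_sys)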
I/O Queuing Performance: An M/M/1 Example (Faster Disk)
• The previous example with a faster disk: average disk service time = 10 ms (changed from 20 ms).
• The processor still sends 40 disk I/O requests per second; requests and service are exponentially distributed (i.e., C^2 = 1).
• On average:
  – How utilized is the disk, u?
  – What is the average time spent in the queue, Tq?
  – What is the average response time for a disk request, Tsys?
• We have:
  r     average number of arriving requests/second = 40
  Tser  average time to service a request = 10 ms (0.01 s)
• We obtain:
  u     server utilization: u = r x Tser = 40/s x 0.01 s = 0.4, or 40%
  Tq    average time per request in queue = Tser x u / (1 - u)
        = 10 x 0.4/(1 - 0.4) = 10 x 0.4/0.6 = 6.67 ms (0.00667 s)
  Tsys  average time per request in system (i.e., mean response time): Tsys = Tq + Tser = 6.67 + 10 = 16.67 ms
• The response time is 100/16.67 = 6 times faster even though the new service time is only 2 times faster, due to lower queuing time (6.67 ms instead of 80 ms).
Factors Affecting System & I/O Performance
• CPU: I/O processing computational requirements:
  – CPU computations available for I/O operations.
  – Operating system I/O processing policies/routines.
  – I/O data transfer/processing method used.
      • CPU cycles needed: polling >> interrupt-driven > DMA.
• I/O: I/O subsystem performance:
  – Raw performance of I/O devices (e.g., magnetic disk performance): service time Tser, throughput.
  – I/O bus capabilities.
  – I/O subsystem organization, i.e., number of devices, array level, etc.
  – Loading level (u) of I/O devices (queuing delay Tq, response time).
• Memory: memory subsystem performance:
  – Available memory bandwidth for I/O operations (for DMA).
• OS: operating system policies:
  – File system vs. raw I/O.
  – File cache size and write policy.
  – File pre-fetching, etc.

Components of total system execution time: CPU + Memory + I/O.
System performance depends on many aspects of the system ("limited by the weakest link in the chain"): the system performance bottleneck.
System Design (Including I/O)
• When designing a system, the performance of the components that make it up should be balanced.
• Steps for designing I/O systems (an iterative refinement process):
  – List the types and performance of I/O devices and buses in the system.
  – Determine the target application's computational & I/O demands.
  – Determine the CPU resource demands for I/O processing:
      • CPU clock cycles directly for I/O (e.g., initiate, interrupts, complete).
      • CPU clock cycles due to stalls waiting for I/O.
      • CPU clock cycles to recover from I/O activity (e.g., cache flush).
  – Determine memory and I/O bus resource demands.
  – Assess the system performance of the different ways to organize these devices (i.e., system configurations):
      • For each system configuration, identify which system component (CPU, memory, I/O buses, I/O devices, etc.) is the performance bottleneck.
      • Improve the performance of the component that poses a system performance bottleneck.

System performance depends on many aspects of the system ("limited by the weakest link in the chain"): the system performance bottleneck.
Example: Determining the System Performance Bottleneck (Ignoring I/O Queuing Delays)
• Assume the following system components:
  – 500 MIPS CPU
  – 16-byte-wide memory system with 100 ns cycle time
  – 200 MB/sec I/O bus (main system I/O bus)
  – 20 SCSI-2 buses of 20 MB/sec each, with 1 ms controller overhead
  – 5 disks per SCSI bus: 8 ms seek, 7,200 RPM, 6 MB/sec (100 disks total)
• Other assumptions:
  – All devices/system components can be used to 100% utilization (i.e., u = 1).
  – Average I/O request size is 16 KB.
  – I/O requests are assumed spread evenly over all disks.
  – The OS uses 10,000 CPU instructions to process a disk I/O request.
  – Ignore disk/controller queuing delays (since I/O queuing delays are ignored here, 100% disk utilization is allowed).
• What is the average IOPS (i.e., the I/O throughput)?
• What is the average I/O bandwidth?
• What is the average response time per I/O operation?

Here: 1 KByte = 10^3 bytes, 1 MByte = 10^6 bytes, 1 GByte = 10^9 bytes.
Example: Determining the System I/O Bottleneck (Ignoring Queuing Delays)
• The performance of the I/O system is determined by the system component with the lowest performance (the system performance bottleneck):
  – CPU: (500 MIPS) / (10,000 instructions per I/O) = 50,000 IOPS
      CPU time per I/O = 10,000 / 500,000,000 = 0.02 ms
  – Main memory: (16 bytes) / (100 ns x 16 KB per I/O) = 10,000 IOPS
      Memory time per I/O = 1/10,000 s = 0.1 ms
  – I/O bus: (200 MB/sec) / (16 KB per I/O) = 12,500 IOPS
  – SCSI-2: (20 buses) / ((1 ms + (16 KB)/(20 MB/sec)) per I/O) = 11,111 IOPS
      SCSI bus time per I/O = 1 ms + 16/20 ms = 1.8 ms
  – Disks: (100 disks) / ((8 ms + 0.5/(7,200 RPM) + (16 KB)/(6 MB/sec)) per I/O) = 6,700 IOPS
      Tser = Tdisk = 8 ms + 0.5/(7,200 RPM) + (16 KB)/(6 MB/sec) = 8 + 4.2 + 2.7 = 14.9 ms
• Throughput: the disks limit the I/O performance to 6,700 IOPS.
• The average I/O bandwidth is 6,700 IOPS x (16 KB per I/O) = 107.2 MB/sec.
• Response time per I/O = Tcpu + Tmemory + Tscsi + Tdisk = 0.02 + 0.1 + 1.8 + 14.9 = 16.82 ms.
(Since I/O queuing delays are ignored here, 100% disk utilization is allowed.)

Here: 1 KByte = 10^3 bytes, 1 MByte = 10^6 bytes, 1 GByte = 10^9 bytes.
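A sketch reproducing the per-component IOPS computation (Python; variable names are mine, all parameters come from the example):

    # Per-component peak IOPS for the worked example (16 KB per I/O);
    # the minimum determines the system bottleneck.
    io_bytes = 16e3                                              # 1 KB = 10**3 B
    t_cpu_ms   = 10_000 / 500e6 * 1e3                            # 0.02 ms
    t_mem_ms   = (io_bytes / 16) * 100e-9 * 1e3                  # 0.1 ms
    t_iobus_ms = io_bytes / 200e6 * 1e3                          # 0.08 ms
    t_scsi_ms  = 1.0 + io_bytes / 20e6 * 1e3                     # 1.8 ms
    t_disk_ms  = 8.0 + 0.5 / (7200 / 60) * 1e3 + io_bytes / 6e6 * 1e3  # ~14.9 ms

    iops = {
        "CPU":     1e3 / t_cpu_ms,          # 50,000 IOPS
        "Memory":  1e3 / t_mem_ms,          # 10,000 IOPS
        "I/O bus": 1e3 / t_iobus_ms,        # 12,500 IOPS
        "SCSI-2":  20 * 1e3 / t_scsi_ms,    # ~11,111 IOPS (20 buses)
        "Disks":   100 * 1e3 / t_disk_ms,   # ~6,700 IOPS (100 disks)
    }
    bottleneck = min(iops, key=iops.get)    # -> "Disks"
    print(bottleneck, iops[bottleneck])
    # Response time per I/O, as on the slide (the slide rounds Tdisk to 14.9 ms):
    print(t_cpu_ms + t_mem_ms + t_scsi_ms + t_disk_ms)  # ~16.8 ms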
Example: Determining the I/O Bottleneck Accounting for I/O Queue Time (M/M/m Queue)
• Assume the following system components (here m = 100 disks):
  – 500 MIPS CPU
  – 16-byte-wide memory system with 100 ns cycle time
  – 200 MB/sec I/O bus (main system I/O bus)
  – 20 SCSI-2 buses of 20 MB/sec each, with 1 ms controller overhead
  – 5 disks per SCSI bus: 8 ms seek, 7,200 RPM, 6 MB/sec (100 disks total)
• Other assumptions:
  – All devices are used to 60% utilization (i.e., u = 0.6); thus the maximum utilization of any system component is fixed in this question at u = 0.6, or 60%.
  – Treat the I/O system as an M/M/m queue.
  – I/O requests are assumed spread evenly over all disks.
  – Average I/O size is 16 KB.
  – The OS uses 10,000 CPU instructions to process a disk I/O request.
• What is the average IOPS (i.e., the I/O throughput)? What is the average bandwidth?
• What is the average response time per I/O operation?

Here: 1 KByte = 10^3 bytes, 1 MByte = 10^6 bytes, 1 GByte = 10^9 bytes.
Example: Determining the I/O Bottleneck Accounting for I/O Queue Time (M/M/m Queue, continued)
• The performance of the I/O system is still determined by the system component with the lowest performance (the system performance bottleneck):
  – CPU: (500 MIPS) / (10,000 instructions per I/O) x 0.6 = 30,000 IOPS
      CPU time per I/O = 10,000 / 500,000,000 = 0.02 ms
  – Main memory: (16 bytes) / (100 ns x 16 KB per I/O) x 0.6 = 6,000 IOPS
      Memory time per I/O = 1/10,000 s = 0.1 ms
  – I/O bus: (200 MB/sec) / (16 KB per I/O) x 0.6 = 7,500 IOPS
  – SCSI-2: (20 buses) / ((1 ms + (16 KB)/(20 MB/sec)) per I/O) x 0.6 = 6,666.6 IOPS
      SCSI bus time per I/O = 1 ms + 16/20 ms = 1.8 ms
  – Disks: (100 disks) / ((8 ms + 0.5/(7,200 RPM) + (16 KB)/(6 MB/sec)) per I/O) x 0.6 = 6,700 x 0.6 = 4,020 IOPS
      Tser = 8 ms + 0.5/(7,200 RPM) + (16 KB)/(6 MB/sec) = 8 + 4.2 + 2.7 = 14.9 ms
• Throughput: the disks limit the I/O performance to r = 4,020 IOPS.
• The average I/O bandwidth is 4,020 IOPS x (16 KB per I/O) = 64.3 MB/sec.
• Tq = Tser x u / [m x (1 - u)] = 14.9 ms x 0.6 / [100 x 0.4] = 0.22 ms (using the simplified M/M/m expression for Tq given earlier).
• Response time = Tser + Tq + Tcpu + Tmemory + Tscsi = 14.9 + 0.22 + 0.02 + 0.1 + 1.8 = 17.04 ms
  (the total system response time, including CPU time and other delays).

Here: 1 KByte = 10^3 bytes, 1 MByte = 10^6 bytes, 1 GByte = 10^9 bytes.
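A sketch extending the previous computation to the 60%-utilization case with the simplified M/M/m queue (Python; values from the example):

    # With every component capped at u = 0.6, each peak rate scales by 0.6;
    # the disks remain the bottleneck at r = ~4020 IOPS (m = 100 servers).
    u, m, t_ser_s = 0.6, 100, 0.0149           # Tser ~14.9 ms per disk I/O
    r = (m / t_ser_s) * u                      # ~4020 IOPS disk throughput
    bandwidth_mb_s = r * 16e3 / 1e6            # ~64.3 MB/s at 16 KB per I/O
    t_q_s = t_ser_s * u / (m * (1.0 - u))      # ~0.22 ms (simplified M/M/m)
    # Total response time: Tser + Tq + CPU + memory + SCSI times (in ms):
    resp_ms = 14.9 + t_q_s * 1e3 + 0.02 + 0.1 + 1.8   # ~17.04 ms
    print(r, bandwidth_mb_s, t_q_s * 1e3, resp_ms)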
