
Operating Systems and System Programming

Meti Dejene
[email protected]
Haramaya University
Chapter 6 – Input Output Management
Introduction

 The processor and memory are not the only resources that the operating system must manage.
 Input/Output (I/O) devices also interact heavily with the operating system.
 The role of the operating system in computer I/O is to manage and control I/O devices and operations.


Cont.

 A device communicates with a computer system by sending signals via a connection point, or port.
 If devices share a common set of wires over which a set of messages are sent and received, the connection is called a bus.
 However, because I/O devices vary so widely in their function and speed, varied methods are needed to control them.


Device Drivers
 To encapsulate the details and oddities of different devices, the kernel of an operating system is structured to use device-driver modules.
 Device drivers are specialized low-level software programs designed to allow the operating system kernel to communicate with different hardware devices without worrying about the details of how the hardware works.
 Device drivers translate generic commands from the OS into commands that the hardware understands.
 They provide a standardized interface for the OS to send requests and receive responses from the hardware.


Device Controller

 Each I/O device generally consists of a controller.
 A device controller is a hardware unit on the motherboard, attached to the I/O bus, that is responsible for handling the incoming and outgoing signals of a device.
 It serves as an interface between the device and the computer's central processing unit (CPU), facilitating communication and controlling the device's operations.
Interaction with I/O Devices
 How does the processor give commands and data to a controller to accomplish an I/O transfer?
 The short answer is that the controller has one or more registers for data and control signals that are used for communicating.
 Typically, I/O device control consists of four registers:
1. The data-in register is read by the host to get input.
 When data is transmitted from an external device to the computer system, it is initially stored in the data-in register of the controller.


Cont.
2. The data-out register is written by the host to send output.
 When the CPU needs to send data to an external device, the data is placed in the data-out register of the device controller.
3. The status register contains bits that can be read by the host.
 These bits provide information about the device's current state, such as whether the current command has completed, whether a byte is available to be read from the data-in register, and whether a device error has occurred.
4. The control register can be written by the host to start a command or to change the mode of a device.
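 As a rough illustration, these four registers can be pictured as a small memory-mapped structure. The layout and bit names below are assumptions for this sketch, not those of any particular device:

#include <stdint.h>

/* Hypothetical memory-mapped register block of a simple device controller.
   Real controllers define their own layouts; this is only an illustration. */
typedef struct {
    volatile uint8_t data_in;   /* 1. read by the host to get input          */
    volatile uint8_t data_out;  /* 2. written by the host to send output     */
    volatile uint8_t status;    /* 3. state bits read by the host            */
    volatile uint8_t control;   /* 4. written by the host to start a command */
} device_registers;

/* Assumed status-register bits for this sketch. */
#define STATUS_BUSY   0x01      /* controller is working on a command        */
#define STATUS_READY  0x02      /* a byte is available in data_in            */
#define STATUS_ERROR  0x04      /* the last command failed                   */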
Cont.
 The processor communicates with the controller by reading and writing bit patterns in these registers.
 The controller indicates its state through the busy bit in the status register.
 The controller sets the busy bit when it is busy working and clears the busy bit when it is ready to accept the next command.
 To set a bit means to write a 1 into it; to clear a bit means to write a 0 into it.
 In this way, the controller accepts commands from the operating system, for example to read data from the device, and carries them out.


I/O Devices

 Devices vary on many dimensions.
1. Character-stream and block devices: A character-stream device transfers bytes one by one, whereas a block device transfers a block of bytes as a unit.
2. Sequential or random-access devices: A sequential device transfers data in a fixed order determined by the device, whereas the user of a random-access device can instruct the device to seek to any of the available data storage locations.
3. Sharable or dedicated devices: A sharable device can be used concurrently by several processes or threads; a dedicated device cannot.


Polling

 Polling refers to a situation where a program or process repeatedly checks the status of a device at regular intervals to determine whether the device is available.
 Basically, it is reading the status register over and over in a loop until the busy bit becomes clear.
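 A minimal busy-wait loop over the hypothetical register layout sketched earlier (the names are assumptions, not a real device API):

/* Busy-wait until the controller clears the busy bit in its status register.
   'dev' points to the hypothetical register block defined above. */
static void wait_until_idle(device_registers *dev)
{
    while (dev->status & STATUS_BUSY)
        ;   /* spin: keep re-reading the status register until busy is 0 */
}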
Interrupts
 Rather than polling repeatedly for an I/O completion, it may be more efficient to arrange for the hardware controller to notify the CPU when the device becomes ready for service.
 This mechanism, which enables a device to notify the CPU when it is ready for service, is called an interrupt.
 During I/O, the various device controllers raise interrupts.
 These interrupts signify that output has completed, that input data are available, or that a failure has been detected.
 The interrupt mechanism is also used to handle a wide variety of exceptions, such as dividing by zero, accessing a protected or nonexistent memory address, or attempting to execute a privileged instruction from user mode.


Interrupts Working Procedure
 The CPU hardware has a wire called the interrupt-request line that the CPU senses after executing every instruction.
 When the CPU detects that a controller has asserted a signal on the interrupt-request line, the CPU performs a state save and jumps to the interrupt-handler routine.
 The interrupt handler determines the cause of the interrupt, performs the necessary processing, performs a state restore, and executes a return-from-interrupt instruction to return the CPU to the execution state prior to the interrupt.
 We say that the device controller raises an interrupt by asserting a signal on the interrupt-request line, the CPU catches the interrupt and dispatches it to the interrupt handler, and the handler clears the interrupt by servicing the device.
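 A toy user-space simulation of the dispatch step: an "interrupt vector" of handler routines indexed by interrupt number. All names here are made up for illustration; real interrupt handling runs in kernel mode, not in a program like this.

#include <stdio.h>

#define NUM_VECTORS 4

/* Each entry in the interrupt vector is a pointer to a handler routine. */
typedef void (*interrupt_handler)(void);

static void keyboard_handler(void) { printf("keyboard: input byte available\n"); }
static void disk_handler(void)     { printf("disk: transfer complete\n"); }

/* Hypothetical vector table mapping interrupt numbers to handlers. */
static interrupt_handler vector_table[NUM_VECTORS] = {
    keyboard_handler,   /* IRQ 0 */
    disk_handler,       /* IRQ 1 */
    0, 0
};

/* What the CPU conceptually does when it catches an interrupt: save state
   (omitted here), look up the handler, run it, then restore state. */
static void dispatch_interrupt(int irq)
{
    if (irq >= 0 && irq < NUM_VECTORS && vector_table[irq])
        vector_table[irq]();   /* handler services the device, clearing the interrupt */
}

int main(void)
{
    dispatch_interrupt(1);     /* pretend the disk controller raised IRQ 1 */
    return 0;
}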


Programmed I/O (PIO)

 Programmed I/O (PIO) is a basic method of transferring data between the CPU and external devices where the CPU directly controls the data transfer between itself and the I/O device by actively reading or writing each piece of data.
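 A sketch of a PIO write loop that sends a buffer one byte at a time through the hypothetical registers shown earlier (the WRITE command code is an assumption):

#define CMD_WRITE 0x01   /* assumed command code for this sketch */

/* Send 'len' bytes to the device one at a time under direct CPU control,
   using the hypothetical device_registers layout and STATUS_BUSY bit above. */
static void pio_write(device_registers *dev, const uint8_t *buf, int len)
{
    for (int i = 0; i < len; i++) {
        while (dev->status & STATUS_BUSY)
            ;                        /* poll until the controller is idle  */
        dev->data_out = buf[i];      /* place the next byte in data-out    */
        dev->control  = CMD_WRITE;   /* tell the controller to transfer it */
    }
}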


Direct Memory Access (DMA)
 For a device that does large transfers, such as a disk drive, it seems wasteful to use an expensive general-purpose processor to watch status bits and to feed data into a controller register one byte at a time.
 As an alternative, computers avoid burdening the main CPU with PIO by offloading some of this work to a special-purpose hardware component called a direct memory access (DMA) controller, which manages these data transfers independently of the CPU.
 This allows the CPU to focus on executing instructions and performing computations rather than handling individual data transfers.


Cont.

 To initiate a DMA transfer, the CPU sets up the DMA controller by programming it with the necessary information for the data transfer, so it knows what to transfer where.
 This includes a pointer to the source of the transfer, a pointer to the destination of the transfer, and a count of the number of bytes to be transferred.
 Once the data transfer is finished, the DMA controller signals the CPU (often through an interrupt) that the transfer is complete.
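 A hypothetical command block the CPU might fill in to program a DMA controller; the field names and start command are assumptions for illustration:

#include <stdint.h>

/* Hypothetical registers of a DMA controller, as seen by the CPU. */
typedef struct {
    volatile uint32_t source;       /* pointer to the source of the transfer      */
    volatile uint32_t destination;  /* pointer to the destination of the transfer */
    volatile uint32_t byte_count;   /* number of bytes to be transferred          */
    volatile uint32_t command;      /* written last to start the transfer         */
} dma_controller;

#define DMA_START 0x1   /* assumed "go" command for this sketch */

/* The CPU programs the controller and returns immediately; the DMA controller
   carries out the copy on its own and raises an interrupt when it is done. */
static void start_dma(dma_controller *dma,
                      uint32_t src, uint32_t dst, uint32_t count)
{
    dma->source      = src;
    dma->destination = dst;
    dma->byte_count  = count;
    dma->command     = DMA_START;
}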


Spooling

 Spooling (Simultaneous Peripheral Operation On-Line) involves temporarily storing data/jobs for devices (such as printers) in a spool until the device is ready to accept the job.
 A spool is a dedicated queue on a disk that holds output for a device, such as a printer, that cannot accept interleaved data streams.
 Spooling is more about managing device access and queuing tasks for devices that cannot accept interleaved data streams.
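 A minimal in-memory sketch of the idea: finished jobs are queued in a spool and handed to the printer one complete job at a time (a real spooler keeps the queue on disk; all names below are made up):

#include <stdio.h>

#define MAX_JOBS 8

/* A toy spool: finished jobs wait in this queue until the printer is free. */
static const char *spool[MAX_JOBS];
static int head = 0, tail = 0;

static int spool_job(const char *job)      /* enqueue a completed job */
{
    if ((tail + 1) % MAX_JOBS == head)
        return -1;                          /* spool is full */
    spool[tail] = job;
    tail = (tail + 1) % MAX_JOBS;
    return 0;
}

static const char *next_job(void)          /* dequeue when the printer is ready */
{
    if (head == tail)
        return NULL;                        /* nothing queued */
    const char *job = spool[head];
    head = (head + 1) % MAX_JOBS;
    return job;
}

int main(void)
{
    spool_job("report.txt");
    spool_job("photo.png");
    for (const char *j; (j = next_job()) != NULL; )
        printf("printing %s\n", j);         /* jobs reach the printer one at a time */
    return 0;
}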


Buffering

 A buffer is a temporary storage area, often in RAM, that holds data while it is being transmitted between two devices or between a device and an application.
 Buffering helps manage the flow of data efficiently and optimize data transfer, especially when the speeds of the communicating components or devices differ.
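 A small example of the idea using standard C I/O: data is staged in a fixed-size RAM buffer and moved in buffer-sized chunks rather than one byte at a time (the file names are placeholders):

#include <stdio.h>

int main(void)
{
    /* Placeholder file names standing in for a data source and a destination. */
    FILE *src = fopen("input.dat", "rb");
    FILE *dst = fopen("output.dat", "wb");
    if (!src || !dst)
        return 1;

    char buffer[4096];                       /* temporary staging area in RAM     */
    size_t n;
    while ((n = fread(buffer, 1, sizeof buffer, src)) > 0)
        fwrite(buffer, 1, n, dst);           /* data moves in buffer-sized chunks */

    fclose(src);
    fclose(dst);
    return 0;
}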
Caching
 Caching refers to the mechanism of temporarily storing frequently accessed data and files in a faster, more accessible location called cache memory.
 A cache is a small, high-speed memory located near the CPU that temporarily stores copies of frequently used data, based on past access patterns, to expedite future access.
 Caching is designed to reduce memory access latency and improve system performance by bridging the speed gap between slower, larger storage (like hard drives) and faster computational units (like the CPU).


Cont.

 So, when an application requests data from a file, the OS first checks whether it is available in the cache.
 A "cache hit" occurs when the requested data is found in the cache, resulting in faster access.
 A "cache miss" happens when the required data is not in the cache, necessitating retrieval from the slower main storage.
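 A toy direct-mapped cache lookup showing the hit/miss decision; the table size, block contents, and storage-read function are assumptions for this sketch:

#include <stdio.h>
#include <stdint.h>

#define CACHE_SLOTS 64   /* assumed size of the toy cache */

/* One cached block: which address it holds and whether the slot is in use. */
typedef struct { uint32_t tag; int valid; uint8_t data; } cache_line;
static cache_line cache[CACHE_SLOTS];

static uint8_t read_from_storage(uint32_t addr)   /* stand-in for slow storage */
{
    return (uint8_t)addr;                         /* dummy data for the demo   */
}

static uint8_t cached_read(uint32_t addr)
{
    cache_line *line = &cache[addr % CACHE_SLOTS];     /* direct-mapped slot   */
    if (line->valid && line->tag == addr) {
        printf("cache hit  at %u\n", (unsigned)addr);  /* fast path            */
        return line->data;
    }
    printf("cache miss at %u\n", (unsigned)addr);      /* go to slower storage */
    line->data  = read_from_storage(addr);
    line->tag   = addr;
    line->valid = 1;
    return line->data;
}

int main(void)
{
    cached_read(42);   /* miss: fetched from storage and cached */
    cached_read(42);   /* hit: served from the cache            */
    return 0;
}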


Clocks and Timers
 Clocks provide a time reference for the system, allowing the OS to keep track of time, manage the scheduling of tasks and processes, and timestamp events.
 Most computers have hardware clocks and timers that provide three basic functions:
 Give the current time.
 Give the elapsed time.
 Set a timer to trigger operation X at time T.
 One common use of timers is in scheduling.
 The OS uses timers (e.g., the programmable interval timer) to schedule tasks or processes based on time slices or deadlines.
 The scheduler uses this mechanism to generate an interrupt that will preempt a process at the end of its time slice.
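 As a user-space analogue of a periodic timer interrupt, the POSIX setitimer call can deliver a SIGALRM signal at a fixed interval; the 100 ms interval below is an arbitrary choice for the sketch:

#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

static volatile sig_atomic_t ticks = 0;

/* Runs each time the timer fires, much like a scheduler's tick handler. */
static void on_tick(int sig)
{
    (void)sig;
    ticks++;
}

int main(void)
{
    struct itimerval t;
    signal(SIGALRM, on_tick);

    t.it_value.tv_sec  = 0;
    t.it_value.tv_usec = 100000;        /* first tick after 100 ms     */
    t.it_interval      = t.it_value;    /* then every 100 ms           */
    setitimer(ITIMER_REAL, &t, NULL);   /* start the periodic timer    */

    while (ticks < 10)
        pause();                        /* wait for timer "interrupts" */

    printf("received %d timer ticks\n", (int)ticks);
    return 0;
}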
