
IT2105

Input/Output Management

I/O Devices
Input/output (I/O) devices are hardware devices that are capable of accepting input, delivering output, and/or managing other processed data. Below are some examples of I/O devices:
• Scanner
• Mouse
• Laser printer
• Bar code reader
• Gigabit Ethernet
• Audio-related devices
• Switches
• Display adapters

Devices that are engaged with the computer I/O system can be grouped into three (3) categories (Stallings, 2018):
1. Human-readable devices – These are suitable for communicating with computer users. Examples: Keyboard and mouse
2. Machine-readable devices – These are suitable for communicating with electronic equipment. Examples: Sensors and controllers
3. Communication devices – These are suitable for communicating with remote devices. Examples: Digital line drivers and modems

Substantial differences exist across and within I/O device categories. The following are some of the key differences (Stallings, 2018):
• Data transfer rate – This involves the amount of digital data that is moved from one location to another within a specific time.
• Application – This pertains to the specific use of the device, which commonly depends on its actual purpose.
• Control complexity – This involves the level of difficulty in operating the device.
• Unit of transfer – This indicates whether data is transferred as a stream of bytes (a character device) or in large blocks (a block device).
• Data representation – This encompasses the data encoding scheme used by the device, including character codes and parity conventions.
• Error conditions – These involve the nature of errors, the way in which errors are reported, the consequences of each error, and the available responses.

An I/O device typically contains a mechanical component and an electronic component, where the electronic component is known as the device controller. A device controller works as an interface between the actual device and a device driver; as an interface, its main task is to convert serial bit streams to blocks of bytes and to perform error correction as necessary. A device driver, in turn, is a software module that can be plugged into the operating system (OS); it tells the OS and other software how to communicate with and handle a particular I/O device. Both elements are necessary for an I/O device to communicate successfully with the OS (Tutorialspoint, n.d.).

An I/O device communicates with a computer system by sending signals over a cable or even through the air. A connection point, known as a port, can be used by I/O devices. If devices share a common set of wires, the connection is called a bus. A bus is composed of wires and a rigidly defined protocol that specifies the set of messages that can be transmitted through the wires. Buses are widely used in computer architecture and vary in their signaling and connection methods, speed, and throughput (Silberschatz, Galvin, & Gagne, 2018).

Figure 1. A typical PC bus structure.
Source: Operating System Concepts (10th ed.), 2018, p. 491

Aside from the three (3) categories of I/O devices, it is also significant to know the distinction between the two (2) types of I/O devices based on the way they handle data (Stallings, 2018):
1. Block-oriented device – This type of device stores information in blocks that are usually fixed in size, and transfers are made one block at a time. Hard disks and flash drives are examples of block-oriented devices.

07 Handout 1 *Property of STI


[email protected] Page 1 of 5
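To make the device controller's serial-bits-to-bytes task above concrete, here is a minimal sketch (not from the handout) that groups a serial bit stream into bytes and applies an even-parity check as a stand-in for error detection. The 9-bit framing (8 data bits plus 1 parity bit) is an invented example; real controllers use protocol-specific framing and stronger error-correcting codes.

```python
def decode_frames(bits):
    """Group a serial bit stream into 9-bit frames (8 data bits followed by
    one even-parity bit) and return the decoded bytes.

    Raises ValueError when a frame fails its parity check. The framing is
    illustrative only, not any real controller's wire format.
    """
    out = []
    for i in range(0, len(bits), 9):
        frame = bits[i:i + 9]
        if len(frame) < 9:
            break                            # ignore a trailing partial frame
        data, parity = frame[:8], frame[8]
        if sum(data) % 2 != parity:          # even parity: 1-bits must sum even
            raise ValueError("parity error in frame %d" % (i // 9))
        out.append(sum(bit << (7 - k) for k, bit in enumerate(data)))
    return bytes(out)

# Example: this stream carries the byte 0x41 ('A') plus its parity bit.
decoded = decode_frames([0, 1, 0, 0, 0, 0, 0, 1, 0])
```

A real controller performs this kind of work in hardware, which is precisely why the processor can hand the job off instead of shifting bits itself.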
2. Stream-oriented device – This type of device transfers data in and out as a stream of bytes, without any block structure. All other devices that are not secondary storage are stream-oriented.

Logical Structure of the I/O Function
The following are some techniques for performing I/O:
1. Programmed I/O: The processor is programmed to repeatedly examine (poll) an I/O device in order to check its readiness for data transfer (Encyclopedia.com, n.d.).
2. Interrupt-driven I/O: The processor issues an I/O command on behalf of a process, which may lead to either of the following (Stallings, 2018):
o If the I/O instruction from the process is nonblocking, then the processor continues to execute instructions from the process that issued the I/O command.
o If the I/O instruction is blocking, then the next instruction that the processor executes is from the OS, which will put the current process in a blocked state and schedule another process.
3. Direct memory access (DMA): The DMA module controls the exchange of data between the main memory and the device driver. The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred. The following information is involved in this technique (Stallings, 2018):
o An indication of whether a read or a write is requested
o The address of the I/O device involved, communicated on the data lines
o The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register
o The number of words to be read or written, communicated on the data lines and stored in the data count register

The logical structure of the I/O function depends on the type of device and its application. The following layers are involved when considering the simplest case – a local peripheral device that communicates in a stream of bytes or records (Stallings, 2018):
• Logical I/O – This layer deals with the device as a logical resource and is concerned with managing general I/O functions on behalf of user processes, allowing them to deal with the device using device identifiers and simple commands.
• Device I/O – In this layer, the requested operations and data are converted into appropriate sequences of I/O instructions, channel commands, and controller orders.
• Scheduling and control – This layer involves the queueing and scheduling of I/O operations, as well as the control of the operations. Interrupts are handled in this layer, and I/O status is collected and reported. Thus, this layer performs the actual interaction with the I/O device.

Note that the I/O structure of most communication devices is almost the same as the structure just described. The major difference is that the logical I/O layer is replaced by a communication architecture layer.

Figure 2. Simple representation of an I/O structure.

In selecting the size of an I/O, the following cost-related areas are considered:
• Initializing buffers
• Making system calls
• Mode or context switching
• Allocating kernel metadata
• Checking process privileges and limits
• Mapping addresses to devices
• Executing kernel and driver code
• Freeing metadata and buffers
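Returning to DMA: the four pieces of information listed for that technique can be sketched as a request descriptor plus a simulated block transfer that "interrupts" (returns) only after the whole block has moved. All names here are illustrative, not an actual DMA programming interface.

```python
from dataclasses import dataclass

@dataclass
class DMARequest:
    """The information the processor hands to the DMA module (Stallings, 2018)."""
    is_write: bool        # whether a read or a write is requested
    device_address: int   # address of the I/O device, sent on the data lines
    start_address: int    # starting memory location, kept in the address register
    word_count: int       # number of words, kept in the data count register

def dma_transfer(req: DMARequest, memory: list, device_words: list) -> int:
    """Move the whole block without further processor involvement; returning
    stands in for the single completion interrupt after the entire block."""
    addr = req.start_address
    for i in range(req.word_count):
        if req.is_write:
            device_words[i] = memory[addr + i]   # memory -> device
        else:
            memory[addr + i] = device_words[i]   # device -> memory
    return req.word_count

# The processor issues one request and is "interrupted" only when it returns:
memory = [0] * 8
req = DMARequest(is_write=False, device_address=0x3F0, start_address=2, word_count=3)
done = dma_transfer(req, memory, device_words=[11, 22, 33])
```

Contrast this with programmed or interrupt-driven I/O, where the processor would be involved once per word rather than once per block.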

Note that increasing the I/O size is a common strategy used by applications to improve throughput. On the other hand, I/O latency may still be encountered; it can be lowered by selecting a smaller I/O size that closely matches what the application is requesting. Unnecessarily large I/O sizes can also waste cache space (Gregg, 2021).

I/O Buffering
Buffering is a technique that smooths out peaks in I/O demand. However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely when the average demand of the process is greater than the I/O device can service (Stallings, 2018). To improve write performance, data may be merged in a buffer before being sent to the next level; this increases the I/O size and operation efficiency (Gregg, 2021).

The following are the major reasons why buffering is performed (Silberschatz, Galvin, & Gagne, 2018):
1. To cope with a speed mismatch between the producer and consumer of a data stream;
2. To provide adaptations for devices that have different data-transfer sizes; and
3. To support copy semantics for application I/O. Copy semantics guarantees that the version of the data written to the disk is the same as the version at the time of the application system call, independent of any subsequent changes in the application's buffer.

Single Buffer. This is the simplest type of support an operating system can provide. When a user process issues an I/O request, the OS assigns a buffer in the system portion of the main memory to the operation.
• For block-oriented devices: Input transfers are made to the system buffer. Upon completion of the transfer, the process moves the block into user space and immediately requests another block. This is termed reading ahead (anticipated input) and is done in anticipation that the block will eventually be used.
• For stream-oriented devices: Single buffering can be used in a line-at-a-time operation, which is appropriate for scroll-mode terminals, or in a byte-at-a-time operation, which is used on forms-mode terminals where each keystroke is significant.

Double Buffer. This involves the assignment of two (2) system buffers to an operation. A process transfers data to (or from) one (1) buffer while the OS empties (or fills) the other buffer. This technique is also known as double buffering or buffer swapping. The details of the actual process may vary depending on the manner of data handling (block-oriented or stream-oriented) of the device.

Circular Buffer. Double buffering may be inadequate if the process performs rapid bursts of I/O. In this case, using more than two (2) buffers can alleviate the inadequacy. A collection of buffers is called a circular buffer, where each individual buffer is treated as one (1) unit.

Disk Scheduling and Cache
When a disk drive is operating, the disk rotates at a constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track. Track selection is performed by moving the head in a moving-head system or by electronically selecting one head in a fixed-head system. The disk performance parameters below are involved in actual disk I/O operations (Stallings, 2018):
• Seek time – This is the time required to move the disk arm to the required track. It is composed of two (2) components: the startup time and the time needed to traverse the tracks (a nonlinear function of the number of tracks). A typical average seek time on a contemporary hard disk is under 10 milliseconds.
• Rotational delay – This is the time required for the addressed area of the disk to rotate into a position where it is accessible by the read/write head. It is also known as rotational latency.
• Transfer time – This depends on the rotation speed of the disk and is equal to the number of bytes to be transferred (b) divided by the product of the rotation speed in revolutions per second (r) and the number of bytes on a track (N). Formula: T = b/(rN)

Disk Scheduling Policies
The most common reason for differences in disk performance can be linked to seek time. If sector access requests involve random track selection, the performance of the disk I/O system will be poor. Hence, the average time spent moving the disk arm to the required track must be reduced. For this purpose, various disk scheduling policies, sometimes referred to as disk scheduling algorithms, have been developed.

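The three disk performance parameters combine into a per-request access time. Below is a small sketch using the handout's transfer-time formula T = b/(rN), together with the common textbook assumption (not stated in the handout) that the average rotational delay is half a revolution, i.e., 1/(2r); the drive figures in the example are made up.

```python
def disk_access_time(seek_ms: float, rpm: float, bytes_to_transfer: float,
                     bytes_per_track: float) -> float:
    """Average time (ms) to service one request: seek + rotational delay + transfer.

    Assumes the average rotational delay is half a revolution, 1/(2r), and
    applies the transfer-time formula T = b/(rN).
    """
    r = rpm / 60.0                                   # rotation speed, rev/s
    rotational_delay_ms = 1000.0 / (2.0 * r)         # half a revolution, in ms
    transfer_ms = 1000.0 * bytes_to_transfer / (r * bytes_per_track)  # b/(rN)
    return seek_ms + rotational_delay_ms + transfer_ms

# Made-up example: 4 ms average seek, 7200-rpm disk, one 512-byte sector
# read from a track holding 500 sectors of 512 bytes each.
t = disk_access_time(seek_ms=4.0, rpm=7200.0,
                     bytes_to_transfer=512.0, bytes_per_track=512.0 * 500)
```

Note how, for a single small request, the seek and rotational components dominate the transfer component, which is exactly why the scheduling policies below focus on reducing arm movement.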
The process selection of the disk scheduling policies below is based on the attributes of the queue or the requestor (Stallings, 2018):
• First-in, first-out (FIFO) – This is the simplest form of scheduling policy; it processes items from the queue in sequential order. Since every request is acknowledged, this technique generally provides fairness between processes. If only a few processes require disk access and many of the requests are to clustered file sectors, then good performance can be expected. However, if many processes are competing for the disk, this technique approximates random scheduling, which results in poor performance. In that case, selecting a more sophisticated scheduling policy is highly suggested.
• Last-in, first-out (LIFO) – In transaction-processing systems, giving the device to the most recent user should result in little or no arm movement when moving through a sequential file. Taking advantage of this locality can improve throughput and may reduce queue lengths. However, if the disk is kept busy because of a large workload, there is a high possibility of starvation.
• Priority – In this scheduling policy, control of the scheduling is outside the control of the disk management software; the intent is not to optimize disk utilization but to meet other objectives within the OS. In most cases, batch and interactive jobs are given higher priority than jobs that require longer computation. This allows short jobs to be flushed out of the system quickly, resulting in good response time. However, longer jobs may have to wait excessively long. This type of scheduling policy tends to perform poorly for database systems.

The process selection of the following disk scheduling policies is in accordance with the requested item:
• Shortest service time first (SSTF) – This scheduling policy selects the disk I/O request that requires the least movement of the disk arm from its current position, hence selecting the request with the minimal seek time. This does not guarantee a minimal average seek time, but it still provides better performance than FIFO.
• SCAN – This is also known as the elevator algorithm because it operates much like an elevator. The arm is required to move in one (1) direction only, satisfying all outstanding requests en route, until it reaches the last track in that direction or until there are no more requests in that direction. The service direction is then reversed, and the scan proceeds in the opposite direction, again picking up all requests in order.
• Circular SCAN (C-SCAN) – This policy restricts scanning to one (1) direction only. When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again in the same direction. This reduces the maximum delay experienced by new requests.
• N-step SCAN – This scheduling policy segments the disk request queue into sub-queues of length N. Sub-queues are processed one (1) at a time, using SCAN. While a queue is being processed, new requests are added to the other queue. If fewer than N requests are available at the end of a scan, then all of them are processed in the next scan.
• FSCAN – This policy uses two (2) sub-queues. When the scan begins, all requests are placed in one of the sub-queues, while the other is empty. During the scan, all new requests are placed into the other, initially empty, sub-queue. Thus, the service of new requests is deferred until all of the old requests have been processed.

Redundant array of independent disks (RAID) is a standardized scheme for multiple-disk database design that is composed of seven (7) levels, from zero (0) to six (6). With the use of multiple disks, a wide variety of ways in which data can be organized is made possible. In addition, redundancy can be added to improve the reliability of data. Note that the levels do not imply a hierarchical relationship but encompass different design architectures that share three (3) common characteristics:
1. RAID is a set of physical disk drives viewed by the OS as a single logical drive.
2. Data are distributed across the physical drives of an array in a scheme known as striping.
3. Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.

Category           | Level | Large I/O Data Transfer Capacity                                             | Small I/O Request Rate
Striping           | 0     | Very high                                                                    | Very high
Mirroring          | 1     | Higher than a single disk for read; similar to a single disk for write       | Up to twice that of a single disk for read; similar to a single disk for write
Parallel access    | 2     | Highest of all listed alternatives                                           | Approximately twice that of a single disk
Parallel access    | 3     | Highest of all listed alternatives                                           | Approximately twice that of a single disk
Independent access | 4     | Similar to RAID 0 for read; significantly lower than a single disk for write | Similar to RAID 0 for read; significantly lower than a single disk for write
Independent access | 5     | Similar to RAID 0 for read; lower than a single disk for write               | Similar to RAID 0 for read; generally lower than a single disk for write
Independent access | 6     | Similar to RAID 0 for read; lower than RAID 5 for write                      | Similar to RAID 0 for read; significantly lower than RAID 5 for write
Table 1. Summary of RAID levels.
Source: Operating systems: Internals and design principles (9th ed.), 2018, p. 526
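The scheduling policies above can be compared by the total arm movement each produces on the same queue. Here is a toy sketch (not from the handout) of FIFO, SSTF, and SCAN; the track numbers are a made-up example, and SCAN is modeled as reversing once no requests remain in the current direction, as the handout describes.

```python
def fifo(requests, head):
    """FIFO: serve requests in arrival order; return total arm movement."""
    total = 0
    for track in requests:
        total += abs(track - head)
        head = track
    return total

def sstf(requests, head):
    """SSTF: always serve the pending request nearest the current head."""
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))
        pending.remove(nearest)
        total += abs(nearest - head)
        head = nearest
    return total

def scan(requests, head, direction=1):
    """SCAN (elevator): sweep one way through all requests, then reverse."""
    up = sorted(t for t in requests if t >= head)
    down = sorted((t for t in requests if t < head), reverse=True)
    first, second = (up, down) if direction == 1 else (down, up)
    total = 0
    for track in first + second:
        total += abs(track - head)
        head = track
    return total

# Made-up queue of track numbers, with the head starting at track 53:
queue = [98, 183, 37, 122, 14, 124, 65, 67]
```

On this queue, SSTF and SCAN both move the arm far less than FIFO, which matches the handout's point that FIFO degenerates toward random scheduling under load.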

Caches are used by operating systems to improve file system read performance, while their storage is often used as buffers to improve write performance. Memory allocation performance can also be improved through caching, since the results of commonly performed operations may be stored in a local cache for future use (Gregg, 2021).

The term cache memory indicates a memory that is smaller and faster to access than the main memory. It reduces the average memory access time by exploiting the principle of locality. The same principle can be applied to disk memory: a disk cache is a buffer in the main memory for disk sectors (Stallings, 2018).
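A disk cache like the one just described can be sketched as a bounded map from sector numbers to sector contents. The replacement policy below is least recently used (LRU), which is a common choice but an assumption here; the handout does not prescribe one.

```python
from collections import OrderedDict

class DiskCache:
    """A buffer in main memory for disk sectors, with LRU replacement."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._sectors = OrderedDict()   # sector number -> sector data

    def read(self, sector: int, read_from_disk) -> bytes:
        """Return the sector, calling read_from_disk only on a cache miss."""
        if sector in self._sectors:
            self._sectors.move_to_end(sector)     # mark as most recently used
            return self._sectors[sector]
        data = read_from_disk(sector)             # miss: go to the physical disk
        self._sectors[sector] = data
        if len(self._sectors) > self.capacity:
            self._sectors.popitem(last=False)     # evict the least recently used
        return data

# Usage with a fake disk that records every physical read:
disk_reads = []

def fake_disk(sector):
    disk_reads.append(sector)                     # count physical accesses
    return b"sector-%d" % sector

cache = DiskCache(capacity=2)
cache.read(7, fake_disk)   # miss: goes to disk
cache.read(7, fake_disk)   # hit: served from main memory
```

Because repeated reads of a hot sector are served from main memory, the disk sees only the first access, which is the whole point of the disk cache.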

References:
Encyclopedia.com. (n.d.). Programmed I/O. Retrieved December 13, 2021, from https://www.encyclopedia.com/computing/dictionaries-thesauruses-pictures-and-press-releases/programmed-io
Gregg, B. (2021). Systems performance: Enterprise and the cloud (2nd ed.). Pearson Education, Inc.
Silberschatz, A., Galvin, P., & Gagne, G. (2018). Operating system concepts (10th ed.). John Wiley & Sons, Inc.
Stallings, W. (2018). Operating systems: Internals and design principles (9th ed.). Pearson Education Limited.
Tutorialspoint. (n.d.). Operating system – I/O hardware. Retrieved December 13, 2021, from https://www.tutorialspoint.com/operating_system/os_io_hardware.htm
