Device - MGT
Serial Devices
Serial means one event at a time. It is the opposite of parallel, which means more than one event
happening at a time. In data transmission, the techniques of time division and space division are
used, where time separates the transmission of individual bits of information sent serially and
space (on multiple lines or paths) can be used to have multiple bits sent in parallel.
For computer hardware and data transmission, serial connection, operation, and media usually
indicate a simpler, slower operation and parallel indicates a faster operation. This does not
always hold true, however, since a serial medium (e.g., a fiber optic cable) can be much faster than
a slower medium that carries multiple signals in parallel.
Most processors and their programs operate in a serial manner, with the processor reading a
process and performing its instructions one after the other. However, computers with multiple
processors can perform instructions in parallel.
Parallel Devices
Parallel means more than one event happening at a time. As described above, data transmission
uses both time division and space division: time separates individual bits sent serially, while
space (multiple lines or paths) allows multiple bits to be sent in parallel.
Parallel connection and operation generally indicate faster operation. Again, this does not
always hold true, because a serial medium may be faster than a slower medium that carries
multiple signals in parallel.
The path between the operating system and almost all hardware not on the computer's
motherboard goes through a special program called a driver. Much of a driver's job is to
translate between the electrical signals of the hardware subsystems and the high-level
programming languages of the operating system and application programs. Drivers take data that
the operating system has defined as a file and translate it into streams of bits placed in
specific locations on storage devices.
Because there are such wide differences in the hardware controlled through drivers, there are
differences in the way that the driver programs function, but most are run when the device is
required, and function much like any other process. The operating system will frequently assign
high-priority blocks to drivers so that the hardware resource can be released and readied for use
again as quickly as possible.
One reason that drivers are separate from the operating system is so that new functions can be
added to the driver, and thus to the hardware subsystems, without requiring the operating system
itself to be modified, recompiled, and redistributed. Through the development of new hardware
device drivers, work often performed or paid for by the manufacturer of the subsystems rather
than the publisher of the operating system, the input/output capabilities of the overall system
can be greatly enhanced.
Managing input and output is mostly a matter of managing queues and buffers, special storage
facilities that take a stream of bits from a device, such as a keyboard or a serial port, hold those
bits, and release them to the processor at a rate slow enough for the processor to deal with. This
function is especially important when a number of processes are running and taking up processor
time. The operating system can instruct a buffer to continue taking input from the device, but
stop sending data to the processor while the process using the input is suspended. Then, when the
process needing input is made active once again, the operating system will command the buffer
to send data. This process allows a keyboard or a modem to deal with external users or
computers at a high speed even though there are times when the CPU cannot use input from
those sources.
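
The sketch below (in C, with illustrative names, fixed sizes, and no locking) shows the idea: the
device side keeps depositing bytes into a circular buffer, while a flag lets the operating system
stop releasing data to the processor until the consuming process is active again.

    #include <stdbool.h>
    #include <stddef.h>

    #define BUF_SIZE 256

    /* Device-to-processor input buffer. The device side keeps writing;
       the OS clears release_to_cpu while the consuming process sleeps. */
    struct input_buffer {
        unsigned char data[BUF_SIZE];
        size_t head;          /* next slot the device fills            */
        size_t tail;          /* next byte the processor will consume  */
        bool release_to_cpu;  /* false while the owning process sleeps */
    };

    /* Device side: store a byte; the oldest byte is dropped when full. */
    void buffer_put(struct input_buffer *b, unsigned char byte)
    {
        b->data[b->head] = byte;
        b->head = (b->head + 1) % BUF_SIZE;
        if (b->head == b->tail)
            b->tail = (b->tail + 1) % BUF_SIZE;   /* overwrote oldest */
    }

    /* Processor side: returns -1 if paused or empty, else the next byte. */
    int buffer_get(struct input_buffer *b)
    {
        if (!b->release_to_cpu || b->tail == b->head)
            return -1;
        int byte = b->data[b->tail];
        b->tail = (b->tail + 1) % BUF_SIZE;
        return byte;
    }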
Buffering Strategies
I/O is the process of transferring data between a program and an external device. The process of
optimizing I/O consists primarily of making the best possible use of the slowest part of the path
between the program and the device.
The slowest part is usually the physical channel, which is often slower than the CPU or a
memory-to-memory data transfer. The time spent in I/O processing overhead can reduce the
amount of time that a channel can be used, thereby reducing the effective transfer rate. The
biggest factor in maximizing this channel speed is often the reduction of I/O processing
overhead.
A buffer is a temporary storage location for data while the data is being transferred. Small I/O
requests can be collected into a buffer, and the overhead of making many relatively expensive
system calls can be greatly reduced. A collection buffer of this type can be sized and handled so
that the physical I/O requests made to the operating system match the physical characteristics of
the device being used.
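
As a sketch of such a collection buffer, the C fragment below (block size and names are
illustrative, and error handling is omitted) accumulates small writes and issues one system call
per full device-sized block.

    #include <string.h>
    #include <unistd.h>

    #define BLOCK 4096            /* hypothetical device block size */

    static char   collect[BLOCK]; /* the collection buffer          */
    static size_t fill;

    /* Accumulate small writes; issue one write() per full block. */
    void buffered_write(int fd, const char *p, size_t n)
    {
        while (n > 0) {
            size_t room = BLOCK - fill;
            size_t take = n < room ? n : room;
            memcpy(collect + fill, p, take);
            fill += take;
            p += take;
            n -= take;
            if (fill == BLOCK) {
                write(fd, collect, BLOCK);   /* one system call per block */
                fill = 0;
            }
        }
    }

    /* Push out any partial final block. */
    void buffered_flush(int fd)
    {
        if (fill > 0) {
            write(fd, collect, fill);
            fill = 0;
        }
    }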
During the write process, a buffer can be used as a work area where control words can be
inserted into the data stream (a process called blocking). The blocked data is then written to the
device. During the read process, the same buffer work area can be used to examine and remove
these control words before passing the data on to the user (a process called deblocking).
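
A minimal sketch of blocking and deblocking, assuming an invented record format in which each
record is prefixed by a four-byte length control word:

    #include <stdint.h>
    #include <string.h>

    /* Blocking: insert a 4-byte length control word, then the record.
       Returns the number of bytes added to the block. (Native byte
       order is used here; a real format would pin this down.) */
    size_t block_record(uint8_t *out, const uint8_t *rec, uint32_t len)
    {
        memcpy(out, &len, sizeof len);        /* insert control word  */
        memcpy(out + sizeof len, rec, len);   /* then the record data */
        return sizeof len + len;
    }

    /* Deblocking: examine and remove the control word, passing only
       the record data on to the user. Returns the record length. */
    size_t deblock_record(const uint8_t *in, uint8_t *rec_out)
    {
        uint32_t len;
        memcpy(&len, in, sizeof len);
        memcpy(rec_out, in + sizeof len, len);
        return len;              /* caller advances by sizeof len + len */
    }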
When data access is random, the same data may be requested many times. A cache is a buffer
that keeps recently requested data in case it is needed again. A cache that is
sufficiently large and/or efficient can avoid a large part of the physical I/O by having the data
ready in a buffer. When the data is often found in the cache buffer, it is referred to as having a
high hit rate. For example, if the entire file fits in the cache and the file is present in the cache, no
more physical requests are required to perform the I/O. In this case, the hit rate will be 100%.
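
The fragment below sketches a direct-mapped block cache with hit-rate accounting (all sizes and
names are illustrative); a request that finds its block already cached needs no physical I/O.

    #include <stdbool.h>
    #include <stdint.h>

    #define SLOTS 64

    struct cache_slot { uint64_t block_no; bool valid; };

    static struct cache_slot cache[SLOTS];
    static unsigned long hits, lookups;

    /* Returns true on a hit (no physical I/O needed). On a miss the
       slot is refilled, so a repeat of the same request will hit. */
    bool cache_lookup(uint64_t block_no)
    {
        struct cache_slot *s = &cache[block_no % SLOTS];
        lookups++;
        if (s->valid && s->block_no == block_no) {
            hits++;
            return true;
        }
        s->block_no = block_no;   /* miss: fetch and remember the block */
        s->valid = true;
        return false;
    }

    double hit_rate(void) { return lookups ? (double)hits / lookups : 0.0; }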
Running disks and the processor in parallel often improves performance. Therefore, it is useful to
keep the processor busy while data is being moved. To do this when writing, data can be
transferred to the buffer at memory-to-memory copy speed and an asynchronous I/O request can
be made. The control is then immediately returned to the program, which continues to execute as
though the I/O were complete (a process called write-behind). A similar process, often called
read-ahead, can be used while reading: data is read into a buffer before the actual request is
issued for it.
When it is needed, it is already in the buffer and can be transferred to the user at very high speed.
This is another form or use of a cache.
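
One way to sketch write-behind is with the POSIX asynchronous I/O interface: the data is copied
into a staging buffer at memory speed, an asynchronous write is queued with aio_write(), and
control returns to the program at once. Buffer size and names here are illustrative; a real
program would later check completion with aio_error() and aio_return().

    #include <aio.h>
    #include <string.h>
    #include <sys/types.h>

    static char staging[4096];           /* illustrative staging buffer */
    static struct aiocb cb;

    /* Copy the caller's data at memory speed, queue an asynchronous
       write, and return before the physical I/O completes. */
    int write_behind(int fd, const char *data, size_t n, off_t offset)
    {
        if (n > sizeof staging)
            return -1;
        memcpy(staging, data, n);        /* memory-to-memory copy */

        memset(&cb, 0, sizeof cb);
        cb.aio_fildes = fd;
        cb.aio_buf    = staging;
        cb.aio_nbytes = n;
        cb.aio_offset = offset;
        return aio_write(&cb);           /* returns immediately; the I/O
                                            proceeds in the background  */
    }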
Unbuffered I/O
The simplest form of buffering is none at all. This unbuffered I/O is known as raw I/O. For large,
well-formed requests, buffering is not necessary. It can add unnecessary overhead and delay.
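
On a POSIX system, raw I/O of this kind might look like the sketch below: one large,
well-formed read() per request, with no library buffer between the program and the operating
system (the chunk size and function name are illustrative, and error handling is minimal).

    #include <fcntl.h>
    #include <unistd.h>

    #define CHUNK (1 << 20)   /* 1 MiB: one large, well-formed request */

    /* Read one large chunk with a direct read() system call. */
    ssize_t read_raw(const char *path, char *dst, size_t len)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t got = read(fd, dst, len < CHUNK ? len : CHUNK);
        close(fd);
        return got;
    }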
Library buffering
The term library buffering refers to a buffer that the I/O library associates with a file. When a file
is opened, the I/O library checks the access, form, and any attributes declared on the assign
command to determine the type of processing that should be used on the file. Buffers are usually
a vital part of the processing.
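
The C library's stdio buffering is one familiar instance of library buffering (analogous to, though
not the same as, the library described here). The sketch below uses setvbuf() to give a file an
8 KB buffer, so the library issues one write() per 8192 bytes rather than one per fputc().

    #include <stdio.h>

    int main(void)
    {
        static char libbuf[8192];        /* the library buffer */
        FILE *f = fopen("out.dat", "w");
        if (!f)
            return 1;

        /* Fully buffered: the library collects output in libbuf and
           issues one write() per 8192 bytes, not one per fputc(). */
        setvbuf(f, libbuf, _IOFBF, sizeof libbuf);

        for (int i = 0; i < 100000; i++)
            fputc('x', f);               /* no system call here */

        fclose(f);                 /* flushes the final partial buffer */
        return 0;
    }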
System cache
The operating system or kernel uses a set of buffers in kernel memory for I/O operations. These
are collectively called the system cache. The I/O library uses system calls to move data between
the user memory space and the system buffer. The system cache ensures that the actual I/O to the
logical device is well formed, and it tries to remember recent data in order to reduce physical I/O
requests. In many cases, though, it is desirable to bypass the system cache and to perform I/O
directly between the user's memory and the logical device.
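
On Linux, one way to bypass the system cache is to open the file with O_DIRECT, as in the sketch
below; the program must then keep the request well formed itself, with buffer, length, and offset
aligned (the 4096-byte alignment used here is illustrative).

    #define _GNU_SOURCE              /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Read len bytes while bypassing the system cache. len (and the
       file offset, here 0) must be a multiple of the alignment. */
    int read_direct(const char *path, void **out, size_t len)
    {
        void *buf;
        if (posix_memalign(&buf, 4096, len) != 0)  /* aligned buffer */
            return -1;

        int fd = open(path, O_RDONLY | O_DIRECT);
        if (fd < 0) { free(buf); return -1; }

        ssize_t got = read(fd, buf, len);       /* no page-cache copy */
        close(fd);
        if (got < 0) { free(buf); return -1; }

        *out = buf;
        return 0;
    }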
Direct Memory Access (DMA) is a capability provided by some computer bus architectures that
allows data to be sent directly from an attached device to the memory on the computer's
motherboard. The processor is freed from involvement with the data transfer, thus speeding up
overall operation.
Usually a specified portion of memory is designated as an area to be used for direct memory
access. In the ISA bus standard, up to 16 megabytes of memory can be addressed for DMA.
Peripheral Component Interconnect (PCI) accomplishes DMA by using a bus master (with the
processor assigning I/O control to the PCI controller).
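
As a rough illustration only, a driver using the Linux kernel DMA API might set up a bus-master
transfer as sketched below. This is a fragment of kernel code, not a runnable program, and the
register names, offsets, and dev_regs mapping are hypothetical.

    #include <linux/dma-mapping.h>
    #include <linux/io.h>

    #define REG_DMA_ADDR 0x00        /* hypothetical register offsets */
    #define REG_DMA_LEN  0x04
    #define REG_DMA_CTRL 0x08
    #define DMA_START    0x01

    extern void __iomem *dev_regs;   /* hypothetical mapped registers */

    static int start_dma_transfer(struct device *dev, size_t len)
    {
        dma_addr_t bus_addr;
        /* cpu_addr is the kernel's view of the same buffer the device
           will reach via bus_addr. */
        void *cpu_addr = dma_alloc_coherent(dev, len, &bus_addr,
                                            GFP_KERNEL);
        if (!cpu_addr)
            return -ENOMEM;

        /* Program the bus master with the buffer's bus address and
           length, then start it; the CPU does not touch each byte. */
        writel(lower_32_bits(bus_addr), dev_regs + REG_DMA_ADDR);
        writel(len, dev_regs + REG_DMA_LEN);
        writel(DMA_START, dev_regs + REG_DMA_CTRL);

        return 0;   /* completion arrives later as an interrupt */
    }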
Recovery from Failures
If a device becomes unusable, either because the hardware has failed or because the device has
been taken down for some reason, the driver should reject all new and outstanding I/O requests
and return an appropriate error code indicating that the device is not usable. At each major entry
point, the driver should check the device state so that resources are not mistakenly committed to
a device that cannot perform its function.
If the driver discovers that hardware has failed in some way, the driver should log information
about the failure. This allows system management software and/or a human user to trace events
to determine the time and root cause of the failure.
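
Both rules appear in the sketch below (all names, including the do_hardware_read() helper, are
hypothetical): each entry point checks the device state before committing resources, and a
detected failure is logged and latched so that further requests are rejected.

    #include <errno.h>
    #include <stddef.h>
    #include <stdio.h>

    enum dev_state { DEV_READY, DEV_FAILED, DEV_OFFLINE };

    struct device_ctx {
        enum dev_state state;
        const char    *name;
    };

    /* Hypothetical helper that performs the actual hardware access. */
    int do_hardware_read(struct device_ctx *dev, void *buf, size_t len);

    int driver_read(struct device_ctx *dev, void *buf, size_t len)
    {
        /* Entry-point check: never commit resources to a device that
           cannot perform its function. */
        if (dev->state != DEV_READY)
            return -ENODEV;              /* "device not usable" */

        int rc = do_hardware_read(dev, buf, len);
        if (rc < 0) {
            dev->state = DEV_FAILED;     /* reject all further requests */
            /* Log enough to trace the time and root cause later. */
            fprintf(stderr, "dev %s: hardware failure, rc=%d\n",
                    dev->name, rc);
        }
        return rc;
    }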