Chapter 5
Test: Used to test various status conditions associated with an I/O module and its
peripherals. This command lets the processor determine whether the peripheral of interest
is powered on and available for use, whether the most recent I/O operation has
completed, and whether any errors occurred.
Read: Causes the I/O module to obtain an item of data from the peripheral and
place it in an internal buffer. The processor can then obtain the data item by
requesting that the I/O module place it on the data bus.
Write: Causes the I/O module to take an item of data (byte or word) from the data
bus and subsequently transmit that data item to the peripheral.
Depending on how they transmit data, interfaces are categorized into two types, namely,
serial and parallel.
In a parallel interface, there are multiple lines connecting the I/O module and the
peripheral, and multiple bits are transferred simultaneously, just as all of the bits of
a word are transferred simultaneously over the data bus.
In a serial interface, there is only one line used to transmit data, and bits must be
transmitted one at a time. A parallel interface is used for higher-speed peripherals,
such as tape and disk, while a serial interface is used for printers and terminals.
During the handshaking process, communication between the two components is managed by the
control register.
The control register is loaded by the processor to control the mode of operation and to define
signals. The control signals serve two main purposes: handshaking and interrupt request.
One control line is used by the sender as a DATA READY line, to indicate when the data are
present on the I/O data lines. Another line is used by the receiver as an ACKNOWLEDGE,
indicating that the data have been read and the data lines may be cleared. Another line may be
designated as an INTERRUPT REQUEST line and tied back to the system bus.
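As a rough sketch, the DATA READY / ACKNOWLEDGE exchange described above can be simulated in Python. The signal names follow the text; the sequential model (and the class name) is illustrative only, not real hardware timing:

```python
# Minimal sketch of a two-wire handshake, modeled sequentially.

class HandshakeLink:
    def __init__(self):
        self.data_ready = False   # asserted by the sender
        self.acknowledge = False  # asserted by the receiver
        self.data_lines = None

    def send(self, item):
        """Sender: place data on the lines and raise DATA READY."""
        self.data_lines = item
        self.data_ready = True

    def receive(self):
        """Receiver: see DATA READY, latch the data, raise ACKNOWLEDGE."""
        assert self.data_ready, "no data present on the I/O data lines"
        item = self.data_lines
        self.acknowledge = True
        return item

    def complete(self):
        """Sender: see ACKNOWLEDGE, then clear the data lines and DATA READY."""
        assert self.acknowledge
        self.data_lines = None
        self.data_ready = False
        self.acknowledge = False

link = HandshakeLink()
link.send(0x41)
value = link.receive()
link.complete()
print(hex(value))  # 0x41
```

After `complete()`, both control lines are deasserted and the link is ready for the next item, mirroring how the data lines "may be cleared" once the receiver acknowledges.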
Buffering
A buffer is an area in main memory used to hold data temporarily. The act
of storing data temporarily in the buffer is called buffering.
A buffer may be used when moving data between processes within a computer.
Buffers are typically used when there is a difference between the rate of received data
and the rate of processed data, for example, in a printer spooler or online video
streaming.
A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory,
simultaneously writing data into the queue at one rate and reading it at another rate.
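The rate-matching idea above can be sketched with a FIFO queue in Python; the producer and consumer here are stand-ins (e.g. a network source and a printer spooler), invented for illustration:

```python
# Sketch of buffering: a fast producer and a slower consumer share a FIFO
# queue, so neither has to match the other's rate.
from collections import deque

buffer = deque()

def produce(items):
    for item in items:          # e.g. data arriving in a burst
        buffer.append(item)     # write into the queue at one rate

def consume(n):
    out = []
    for _ in range(n):          # e.g. a printer draining the spool
        if buffer:
            out.append(buffer.popleft())  # read at another rate
    return out

produce([1, 2, 3, 4, 5])   # burst of incoming data
first = consume(2)         # consumer drains only part of it for now
rest = consume(3)          # ...and the remainder later
print(first, rest)         # [1, 2] [3, 4, 5]
```

The queue preserves arrival order, which is exactly the FIFO behavior a spooler or streaming buffer needs.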
Programmed I/O
With programmed I/O, data are exchanged between the processor and the I/O module. The
processor executes a program that gives it direct control of the I/O operation, including sensing
device status, sending a read or write command, and transferring the data. When the processor
issues a command to the I/O module, it must wait until the I/O operation is complete.
If the processor is faster than the I/O module, this is a waste of processor time.
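The busy-wait just described can be sketched as follows; the device model (its delay and data) is invented for illustration:

```python
# Sketch of programmed I/O: the processor repeatedly reads a (simulated)
# status register until the device reports READY, then reads the data
# register. Every loop iteration is processor time spent waiting.

READY, BUSY = 1, 0

class FakeDevice:
    def __init__(self, delay, data):
        self._countdown = delay   # polls remaining until the operation completes
        self._data = data

    def status(self):             # Test command: sample the status register
        if self._countdown > 0:
            self._countdown -= 1
            return BUSY
        return READY

    def read(self):               # Read command: fetch the buffered data item
        return self._data

def programmed_read(device):
    polls = 0
    while device.status() != READY:   # processor is tied up in this loop
        polls += 1                    # wasted cycles if the device is slow
    return device.read(), polls

data, polls = programmed_read(FakeDevice(delay=3, data=0x7F))
print(data, polls)  # 127 3
```

The `polls` count makes the cost visible: a slower device means more wasted iterations, which is precisely the drawback interrupt-driven I/O removes.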
Interrupt-driven I/O
The problem with programmed I/O is that the processor has to wait a long time for the I/O
module of concern to be ready for either reception or transmission of data. The processor, while
waiting, must repeatedly interrogate the status of the I/O module. As a result, the level of the
performance of the entire system is severely degraded.
The solution to the drawbacks of programmed I/O is to provide an interrupt mechanism. In this
approach, the processor issues an I/O command to a module and then goes on to do some other useful
work. The I/O module then interrupts the processor to request service when it is ready to
exchange data with the processor. The processor then executes the data transfer. Once the data
transfer is over, the processor resumes its former processing.
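The contrast with polling can be sketched in Python using a callback as a stand-in for the interrupt; the device class and its method names are illustrative only:

```python
# Sketch of interrupt-driven I/O: the processor issues a command, continues
# other work, and a handler (standing in for the interrupt service routine)
# performs the transfer only when the device signals readiness.

class InterruptingDevice:
    def __init__(self, data):
        self._data = data
        self._on_ready = None

    def start_read(self, handler):
        self._on_ready = handler      # processor registers a handler and moves on

    def becomes_ready(self):          # hardware event: device raises an interrupt
        self._on_ready(self._data)

received = []
other_work = 0

device = InterruptingDevice(data=0x2A)
device.start_read(received.append)   # issue the I/O command; do not wait

for _ in range(1000):                # useful work instead of polling
    other_work += 1

device.becomes_ready()               # interrupt arrives; transfer happens now
print(received, other_work)          # [42] 1000
```

All 1000 units of "other work" complete before the transfer, whereas programmed I/O would have spent that time interrogating the status register.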
these registers may be used by the interrupt handler. Typically, the interrupt handler will
begin by saving the contents of all registers on the stack.
7. The interrupt handler next processes the interrupt. This includes an examination of status
information relating to the I/O operation or other event that caused the interrupt.
8. When interrupt processing is complete, the saved register values are retrieved from the
stack and restored to the registers.
9. The final act is to restore the PSW and program counter values from the stack. As a result,
the next instruction to be executed will be from the previously interrupted program.
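Steps 7 through 9 can be sketched as a save/restore discipline; the register names and handler behavior here are invented for illustration:

```python
# Sketch of interrupt entry and exit: save the registers on a stack, run the
# handler (which is free to clobber them), then restore them so the
# interrupted program resumes exactly where it left off.

registers = {"pc": 0x100, "r1": 7, "r2": 9, "psw": 0b0101}
stack = []

def enter_interrupt():
    stack.append(dict(registers))     # push all register contents (step: save)

def handler():
    registers["r1"] = 0xDEAD          # handler may use the registers freely
    registers["r2"] = 0

def return_from_interrupt():
    saved = stack.pop()               # restore registers, PSW, and PC (steps 8-9)
    registers.update(saved)

snapshot = dict(registers)
enter_interrupt()
handler()
return_from_interrupt()
print(registers == snapshot)  # True
```

Because the saved PSW and program counter come back off the stack last, the next instruction executed belongs to the previously interrupted program, as step 9 states.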
The following diagram shows the basic instruction cycle, that is, without
interrupts.
Vectored and prioritized interrupts
In a vectored interrupt, the address to which control is transferred is determined by the cause of
the interrupt. In other words, vectored interrupts are those that have a fixed vector address
(the starting address of the service routine); when such an interrupt occurs, program control is
transferred to that address. For instance, a particular type of interrupt, say arithmetic
overflow, has its own address.
To record the information about such an interrupt, the processor needs two registers, one to
record the cause of the interrupt and one to record its address, namely the exception program
counter (EPC) and Cause.
EPC: A register used to hold the address of the affected instruction.
Cause: A register used to record the cause of the exception.
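The EPC/Cause mechanism can be sketched with a vector table; the cause codes and handler addresses below are assumed values for illustration, not those of any particular processor:

```python
# Sketch of a vectored interrupt: the Cause register selects a fixed handler
# address from a vector table, and EPC records where to resume afterwards.

OVERFLOW, SYSCALL = 12, 8              # example cause codes (assumed values)

vector_table = {                       # cause -> fixed handler start address
    OVERFLOW: 0x8000_0180,
    SYSCALL:  0x8000_0200,
}

epc = None     # exception program counter
cause = None   # cause register

def take_exception(current_pc, why):
    global epc, cause
    epc, cause = current_pc, why       # record the affected address and cause
    return vector_table[why]           # control transfers to the fixed address

target = take_exception(current_pc=0x0040_0068, why=OVERFLOW)
print(hex(target), hex(epc), cause)
```

Because each cause maps to a fixed address, the processor needs no polling to find the right handler; the saved EPC later lets it resume the affected instruction's program.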
A prioritized interrupt takes place when more than one device is requesting interrupt service at
the same time:
- The processor simply picks the interrupt line with the highest priority. This priority
scheme is employed by a technique called bus arbitration.
- Alternatively, interrupt priority may be determined by a daisy chain.
Daisy Chain can be defined as a method of device interconnection for determining interrupt
priority by connecting the interrupt sources serially.
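The daisy-chain rule reduces to walking the serial chain and granting the first requester; the device names below are illustrative:

```python
# Sketch of daisy-chain priority: devices sit in series on the acknowledge
# line, so the first requesting device in chain order absorbs the grant.

def daisy_chain_grant(chain, requesting):
    """Walk the chain; the first device that is requesting wins."""
    for device in chain:
        if device in requesting:
            return device
    return None                         # no device is requesting service

chain = ["disk", "network", "printer"]  # closest to the CPU = highest priority
winner = daisy_chain_grant(chain, requesting={"printer", "network"})
print(winner)  # network
```

Note that priority is fixed purely by physical position in the chain: even though the printer is also requesting, the network device is electrically closer and takes the acknowledge.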
Magnetic Disks
A disk is a circular platter constructed of nonmagnetic material, called the substrate,
coated with a magnetizable material. Traditionally, the substrate has been an aluminum or
aluminum alloy material. More recently, glass substrates have been introduced.
The glass substrate has a number of benefits, including the following:
Improvement in the uniformity of the magnetic film surface to increase disk reliability.
A significant reduction in overall surface defects to help reduce read-write errors.
Ability to support lower fly heights (described subsequently).
Better stiffness to reduce disk dynamics.
Greater ability to withstand shock and damage.
Magnetic Read and Write Mechanisms
Data are recorded on and later retrieved from the disk via a conducting coil named the head. In
many systems, there are two heads, a read head and a write head. During a read or write
operation, the head is stationary while the platter rotates under it. The write mechanism exploits
the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent
to the write head, and the resulting magnetic patterns are recorded on the surface below, with
different patterns for positive and negative currents. The write head itself is made of easily
magnetizable material and is in the shape of a rectangular doughnut with a gap along one side
and a few turns of conducting wire along the opposite side (Figure 5.3).
An electric current in the wire induces a magnetic field across the gap, which in turn magnetizes
a small area of the recording medium. Reversing the direction of the current reverses the
direction of the magnetization on the recording medium.
The traditional read mechanism exploits the fact that a magnetic field moving relative to a coil
produces an electrical current in the coil. When the surface of the disk rotates under the head, it
generates a current of the same polarity as the one already recorded. The structure of the head for
reading is in this case essentially the same as for writing and therefore the same head can be used
for both. Such single heads are used in floppy disk systems and in older rigid disk systems.
Contemporary rigid disk systems use a different read mechanism, requiring a separate read head,
positioned for convenience close to the write head. The read head consists of a partially shielded
magneto-resistive (MR) sensor. The MR material has an electrical resistance that depends on the
direction of the magnetization of the medium moving under it.
By passing a current through the MR sensor, resistance changes are detected as voltage signals.
The MR design allows higher- frequency operation, which equates to greater storage densities
and operating speeds.
Disk Performance Parameters
The actual details of disk I/O operation depend on the computer system, the operating system,
and the nature of the I/O channel and disk controller hardware. When the disk drive is operating,
the disk is rotating at constant speed.
To read or write, the head must be positioned at the desired track and at the beginning of the
desired sector on that track. Track selection involves moving the head in a movable-head system
or electronically selecting one head on a fixed-head system.
On a movable-head system, the time it takes to position the head at the track is known
as seek time. In either case, once the track is selected, the disk controller waits until the
appropriate sector rotates to line up with the head.
The time it takes for the beginning of the sector to reach the head is known as
rotational delay, or rotational latency.
The sum of the seek time, if any, and the rotational delay equals the access time, which
is the time it takes to get into position to read or write. Once the head is in position, the
read or write operation is then performed as the sector moves under the head; this is the
data transfer portion of the operation; the time required for the transfer is the transfer
time.
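These timing terms can be made concrete with a small worked example; the drive parameters (7200 rpm, 4 ms average seek, 500 sectors per track) are assumed values chosen for illustration:

```python
# Worked example of disk timing: access time = seek time + rotational delay,
# and total time adds the transfer time for one sector.

rpm = 7200
avg_seek_ms = 4.0

ms_per_rev = 60_000 / rpm              # one revolution in milliseconds
rotational_delay_ms = ms_per_rev / 2   # on average, half a revolution
transfer_ms = ms_per_rev / 500         # time for one of 500 sectors to pass

access_ms = avg_seek_ms + rotational_delay_ms   # time to get into position
total_ms = access_ms + transfer_ms

print(round(rotational_delay_ms, 3), round(access_ms, 3), round(total_ms, 3))
# 4.167 8.167 8.183
```

Notice that positioning (seek plus rotation) dominates: the transfer of a single sector costs well under a millisecond, which is why minimizing seeks matters so much for disk performance.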
In addition to the access time and transfer time, there are several queuing delays normally
associated with a disk I/O operation. When a process issues an I/O request, it must first wait in a
queue for the device to be available. At that time, the device is assigned to the process. If the
device shares a single I/O channel or a set of I/O channels with other disk drives, then there
may be an additional wait for the channel to be available. At that point, the seek is performed to
begin disk access.
Bus lines can be classified into three functional groups (Figure 5.5): data, address, and
control lines. In addition, there may be power distribution lines that supply power to the
attached modules.
The data lines provide a path for moving data among system modules. These lines,
collectively, are called the data bus. The data bus may consist of 32, 64, 128, or even more
separate lines, the number of lines being referred to as the width of the data bus. Because
each line can carry only one bit at a time, the number of lines determines how many bits can
be transferred at a time. The width of the data bus is a key factor in determining overall
system performance. For example, if the data bus is 32 bits wide and each instruction is 64
bits long, then the processor must access the memory module twice during each instruction
cycle.
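The arithmetic in that example generalizes directly, as this small check illustrates:

```python
# A 32-bit data bus fetching a 64-bit instruction needs two memory accesses
# per instruction cycle; a wider bus needs fewer.
import math

def accesses_per_instruction(instr_bits, bus_width_bits):
    return math.ceil(instr_bits / bus_width_bits)

print(accesses_per_instruction(64, 32))   # 2
print(accesses_per_instruction(64, 128))  # 1
```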
The address lines are used to designate the source or destination of the data on the data
bus. For example, if the processor wishes to read a word (8, 16, or 32 bits) of data from
memory, it puts the address of the desired word on the address lines. Clearly, the width of
the address bus determines the maximum possible memory capacity of the system.
Furthermore, the address lines are generally also used to address I/O ports. Typically, the
higher-order bits are used to select a particular module on the bus, and the lower-order bits
select a memory location or I/O port within the module. For example, on an 8-bit address
bus, addresses 01111111 and below might reference locations in a memory module (module
0) with 128 words of memory, and addresses 10000000 and above might refer to devices
attached to an I/O module (module 1).
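The 8-bit decoding in that example amounts to splitting the address into a module-select bit and an offset, which can be sketched as:

```python
# Address decoding for the 8-bit example above: the high-order bit selects
# the module, and the low-order 7 bits select a location or port within it.

def decode(address):
    module = (address >> 7) & 0b1      # higher-order bit: which module
    offset = address & 0b0111_1111     # lower-order bits: location or port
    return module, offset

print(decode(0b0111_1111))  # (0, 127): last word of the memory module
print(decode(0b1000_0000))  # (1, 0): first port of the I/O module
```

A real system would use more higher-order bits to select among many modules, but the split between module select and internal offset works the same way.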
The control lines are used to control the access to and the use of the data and address
lines. Because the data and address lines are shared by all components, there must be a
means of controlling their use. Control signals transmit both command and timing
information among system modules. Timing signals indicate the validity of data and address
information. Command signals specify operations to be performed. Typical control lines
include:
• Memory write: causes data on the bus to be written into the addressed location.
• Memory read: causes data from the addressed location to be placed on the bus.
• I/O write: causes data on the bus to be output to the addressed I/O port.
• I/O read: causes data from the addressed I/O port to be placed on the bus.
• Transfer ACK: indicates that data have been accepted from or placed on the bus.
• Bus request: indicates that a module needs to gain control of the bus.
• Bus grant: indicates that a requesting module has been granted control of the bus.
• Interrupt request: indicates that an interrupt is pending.
• Interrupt ACK: acknowledges that the pending interrupt has been recognized.
• Clock: is used to synchronize operations.
• Reset: initializes all modules.
The operation of the bus is as follows.
If one module wishes to send data to another, it must do two things:
1) Obtain the use of the bus, and
2) Transfer data via the bus.
If one module wishes to request data from another module, it must
1) Obtain the use of the bus, and
2) Transfer a request to the other module over the appropriate control and address lines. It
must then wait for that second module to send the data.
Compared with the shared bus, a point-to-point interconnect offers lower latency, higher data
rates, and better scalability.
An important and representative example of the point-to-point interconnect approach is Intel’s
Quick Path Interconnect (QPI), which was introduced in 2008. The following are significant
characteristics of point-to-point interconnect schemes:
Multiple direct connections: Multiple components within the system enjoy direct pairwise
connections to other components. This eliminates the need for arbitration found in shared
transmission systems.
Layered protocol architecture: As found in network environments, such as TCP/IP-based
data networks, these processor-level interconnects use a layered protocol architecture, rather
than the simple use of control signals found in shared bus arrangements.
Packetized data transfer: Data are not sent as a raw bit stream. Rather, data are sent as a
sequence of packets, each of which includes control headers and error control codes.
5.6. Multimedia Support
Multimedia System Architecture (MSA) is the term used to describe a computer system built
with the intent of displaying multimedia content. It is often used to describe computers designed
to display audio, video, images, and other digital information. Dedicated multimedia
processors are designed to manage the operations that can be performed on different multimedia
components, whereas general-purpose processors, as the name indicates, handle a variety of
operations.
Multimedia System Architecture is composed of several components, including hardware,
software, and communication infrastructure.
Hardware components of a MSA generally include displays, processors, storage, and
input/output devices.
Displays are the components that enable content to be seen by users. Displays can range
from a simple LED screen to more sophisticated flat-panel LCD or plasma displays.
Processors are the devices used to control the system and provide the computing power
needed to show multimedia content.
Storage devices are needed to store the content and can range from hard disk drives to
optical and flash memory drives.
Input/output devices enable the user to interact with the system and allow for data to be
exchanged with external systems.
Software components commonly include an operating system, multimedia software,
and additional utilities.
An operating system manages the connections between different components,
loading and unloading the resources needed for any particular task.
Multimedia software allows for the integration of audio and video data and
provides the necessary controls for manipulating the data.
Additional utilities such as media players, converters, and streaming
technologies also provide advanced functionality.