
Chapter 5

Interfacing and Communication


5.1. I/O Fundamentals
In addition to the processor and a set of memory modules, the third key element of a computer
system is a set of I/O modules. Each module interfaces to the system bus or central switch and
controls one or more peripheral devices. An I/O module is not simply a set of mechanical
connectors that wire a device into the system bus. Rather, the I/O module contains logic for
performing a communication function between the peripheral and the bus.
Interfacing: An interface is a shared boundary between two separate components of a computer
system, used to attach two or more components to the system for communication purposes. It
provides a method by which data are transferred between internal storage and external I/O
devices. All peripherals connected to a computer require special communication connections
for interfacing with the CPU.
 There are two types of interface: CPU Interface and I/O Interface.
The I/O interface is the route used for peripheral devices to interact with the computer
processor. A typical connection of the I/O interface to I/O devices is shown in the figure below.

Figure 5.1 I/O Interface


Peripherals connected to a computer need special communication links for interfacing with CPU.
There are special hardware components between the CPU and peripherals to control or
manage the input-output transfers. These components are called input-output interface
units because they provide communication links between processor bus and peripherals, for
transferring information between internal system and input-output devices.
An interface receives any of the following four commands:
 Control: Used to activate a peripheral and tell it what to do. These commands are
tailored to the particular type of peripheral device.

 Test: Used to test various status conditions associated with an I/O module and its
peripherals. This command helps the processor to know that the peripheral of interest
is powered on and available for use. It also indicates whether the most recent I/O
operation has completed and whether any errors occurred.
 Read: Causes the I/O module to obtain an item of data from the peripheral and
place it in an internal buffer. The processor can then obtain the data item by
requesting that the I/O module place it on the data bus.
 Write: Causes the I/O module to take an item of data (byte or word) from the data
bus and subsequently transmit that data item to the peripheral.
Depending on the way they transmit data, interfaces are categorized into two types:
serial and parallel.
 In a parallel interface, there are multiple lines connecting the I/O module and the
peripheral, and multiple bits are transferred simultaneously, just as all of the bits of
a word are transferred simultaneously over the data bus.
 In a serial interface, there is only one line used to transmit data, and bits must be
transmitted one at a time. A parallel interface is typically used for higher-speed
peripherals, such as tape and disk, while a serial interface is used for printers and terminals.

Figure 5.2 Parallel interface and serial interface
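The speed difference between the two interface types can be sketched with a small calculation. The line rate below is an assumed, purely illustrative value, not a figure for any real interface:

```python
# Rough comparison of parallel vs. serial transfer time for one byte.
# The per-line rate is an assumed, illustrative value.
LINE_RATE = 1_000_000  # 1 Mbit/s per line (assumption)

def parallel_time(bits, lines):
    """All `lines` carry one bit each per cycle, simultaneously."""
    cycles = -(-bits // lines)  # ceiling division
    return cycles / LINE_RATE

def serial_time(bits):
    """A single line carries the bits one at a time."""
    return bits / LINE_RATE

byte = 8
print(parallel_time(byte, lines=8))  # one cycle for all 8 bits
print(serial_time(byte))             # eight cycles, one bit per cycle
```

With eight lines, the parallel interface moves the byte in a single cycle, while the serial interface needs eight, which is why the faster peripherals above use parallel connections.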

The major I/O fundamentals are:


 Handshaking, Buffering, Programmed I/O, and Interrupt-driven I/O.
Handshaking:
In a normal lifestyle, handshaking resembles establishing communication or a friendly bond
between two people. In terms of the computer system also, it means somewhat the same. In
general handshaking can be defined as a communication link established between two different
components of a computer for data exchange.
In the handshaking process, the source first sends a test signal to the destination. The
destination then sends back an acknowledgment that the signal has been received, together
with a signal indicating whether the destination channel is free to receive data. By following
these steps, communication is established between the sender and the receiver, and the data
are then transferred between the two over the data bus.

Types of handshaking process


There are two types of handshaking process:
 Source Initiated Handshaking
 Destination Initiated Handshaking
a) Source Initiated Handshaking Process
In the source Initiated handshaking process, the sender needs to send the data and so the
handshaking process is initiated by the sender. So, in this process, after sending the valid data,
the receiver sends the acknowledgment that the data has been received. Hence, the signals
DATA VALID is sent by the sender before sending the data and the signal DATA ACCEPTED
is sent by the receiver after getting the data.
b) Destination Initiated Handshaking Process
In this process, the process of establishing the connection is initiated by the destination. This
means is that, the receiver needs to receive the data form the sender; hence the handshaking
process is initiated by the receiver. Therefore, in this process, the receiver has to first send the
request signal to the source channel. After that, the source sends the DATA VALID signal before
sending the data and the receiver then again sends a signal DATA ACCEPTED after the data is
received by it.
 In both types, the purpose of handshaking is to ensure that the two systems involved in
the communication:
1. agree on the parameters of the conversation, and
2. understand the communication process.
It helps ensure the reliability of communication between two devices and compatibility
with the protocols in use, which in turn supports more secure communication.
Handshaking is a simple timing mechanism. It is also beneficial in terms of speed: by
exchanging a handshaking pattern before the data exchange, the two systems can quickly
establish their communication parameters. As a result, the time needed for data transfer is
significantly reduced, allowing for faster transfers and less idle time.

During the handshaking process, the communication between two components is managed by a
control register.
The control register is loaded by the processor to control the mode of operation and to define
signals. The control signals serve two main purposes: handshaking and interrupt request.
One control line is used by the sender as a DATA READY line, to indicate when the data are
present on the I/O data lines. Another line is used by the receiver as an ACKNOWLEDGE,
indicating that the data have been read and the data lines may be cleared. Another line may be
designated as an INTERRUPT REQUEST line and tied back to the system bus.
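The DATA READY / ACKNOWLEDGE exchange described above can be sketched in software. This is a minimal model, not a real device API: the two `threading.Event` objects stand in for the control lines, and the names are illustrative only:

```python
import threading
import queue

# Sketch of source-initiated handshaking. Two events model the
# DATA READY and ACKNOWLEDGE control lines; a one-slot queue models
# the data lines. All names are illustrative.
data_ready = threading.Event()   # asserted by the sender
acknowledge = threading.Event()  # asserted by the receiver
data_lines = queue.Queue(maxsize=1)
received = []

def sender(item):
    data_lines.put(item)      # place data on the data lines
    data_ready.set()          # assert DATA READY
    acknowledge.wait()        # wait for ACKNOWLEDGE
    data_ready.clear()        # drop DATA READY; transfer complete

def receiver():
    data_ready.wait()         # wait until the data are valid
    received.append(data_lines.get())
    acknowledge.set()         # assert ACKNOWLEDGE

t = threading.Thread(target=receiver)
t.start()
sender(0x5A)
t.join()
print(received)  # [90]
```

The ordering guarantees are the point: the receiver never reads before DATA READY is asserted, and the sender never clears the lines before ACKNOWLEDGE arrives.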
Buffering
The buffer is an area in the main memory used to store or hold the data temporarily. The act
of storing data temporarily in the buffer is called buffering.
 A buffer may be used when moving data between processes within a computer.

 Buffers are typically used when there is a difference between the rate of received data
and the rate of processed data, for example, in a printer spooler or online video
streaming.
 A buffer often adjusts timing by implementing a queue or FIFO algorithm in memory,
simultaneously writing data into the queue at one rate and reading it at another rate.
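A FIFO buffer of this kind can be sketched with a double-ended queue. The producer and consumer rates below are illustrative, simply showing data written at one rate and read at another:

```python
from collections import deque

# Sketch of a FIFO buffer smoothing a fast producer against a slow
# consumer. Rates and item counts are illustrative.
buffer = deque()

def produce(items):           # fast side: write into the queue
    for item in items:
        buffer.append(item)

def consume(n):               # slow side: read at its own rate
    return [buffer.popleft() for _ in range(min(n, len(buffer)))]

produce(range(5))             # producer delivers 5 items at once
first = consume(2)            # consumer drains only 2 per step
print(first, len(buffer))     # [0, 1] 3
```

The items left in the queue are exactly the timing slack the buffer provides: the producer did not have to wait for the consumer to catch up.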
Programmed I/O
With programmed I/O, data are exchanged between the processor and the I/O module. The
processor executes a program that gives it direct control of the I/O operation, including sensing
device status, sending a read or write command, and transferring the data. When the processor
issues a command to the I/O module, it must wait until the I/O operation is complete.
 If the processor is faster than the I/O module, this wastes processor time.
Interrupt-driven I/O
The problem with programmed I/O is that the processor has to wait a long time for the I/O
module of concern to be ready for either reception or transmission of data. The processor, while
waiting, must repeatedly interrogate the status of the I/O module. As a result, the level of the
performance of the entire system is severely degraded.
The solution to the drawbacks of programmed I/O is to provide an interrupt mechanism. In this
approach, the processor issues an I/O command to a module and then goes on to do other useful
work. The I/O module then interrupts the processor to request service when it is ready to

exchange data with the processor. The processor then executes the data transfer. Once the data
transfer is over, the processor then resumes its former processing.
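The contrast with polling can be sketched by modeling the interrupt as a callback. This is a toy model with illustrative names, not how real interrupt hardware is programmed:

```python
# Sketch of interrupt-driven I/O: the processor registers a handler,
# issues the command, and keeps doing useful work until the device
# "interrupts" (modeled here as a plain callback). Names are illustrative.
class InterruptDevice:
    def __init__(self):
        self.handler = None
    def issue_read(self, handler):
        self.handler = handler          # processor returns immediately
    def complete(self, data):
        self.handler(data)              # device raises the "interrupt"

results = []
useful_work = 0

dev = InterruptDevice()
dev.issue_read(lambda data: results.append(data))

while not results:                      # processor does other work
    useful_work += 1
    if useful_work == 3:
        dev.complete(0x7F)              # device finishes asynchronously

print(results, useful_work)  # [127] 3
```

Unlike the programmed-I/O loop, the work counted here is productive: the processor only attends to the device at the moment the transfer is actually ready.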

5.2. Interrupt structures


An interrupt is an unscheduled event that comes from outside of the processor and disrupts
program execution. Virtually all computers provide a mechanism by which other modules (I/O,
memory) may interrupt the normal processing of the processor.
Interrupts are provided primarily as a way to improve processing efficiency. For example,
most external devices are much slower than the processor. Suppose that the processor is
transferring data to a printer using the instruction cycle scheme.
Interrupt Processing:
The occurrence of an interrupt triggers a number of events, both in the processor hardware and
in software. When an I/O device completes an I/O operation, the following sequence of
hardware events occurs:
1. The device sends an interrupt request signal to the processor.
2. The processor finishes execution of the current instruction before responding to the interrupt.
3. The processor tests for an interrupt. If an interrupt is pending, the processor sends an
acknowledgement signal to the device that sent the interrupt request. After receiving the
acknowledgement, the device removes its interrupt signal.
4. The processor now needs to prepare to transfer control to the interrupt routine. It must
save the information needed to resume the current program at the point of interrupt. The
minimum information required is the processor status word (PSW) and the location of the
next instruction to be executed, which is the content of the program counter (PC). These
can be pushed onto the system control stack.
 The program counter (PC) is a CPU register that contains the address of the next
instruction to be fetched from memory.
5. The processor now loads the program counter with the entry location of the interrupt
handling program that will respond to the interrupt.
6. At this point, the program counter and PSW of the interrupted program have been saved
on the system stack. In addition, further information about the current processor state
must be saved, including the contents of the processor registers, because
these registers may be used by the interrupt handler. Typically, the interrupt handler will
begin by saving the contents of all registers on the stack.
7. The interrupt handler next processes the interrupt. This includes an examination of status
information relating to the I/O operation or other event that caused the interrupt.
8. When interrupt processing is complete, the saved register values are retrieved from the
stack and restored to the registers.
9. The final act is to restore the PSW and program counter values from the stack. As a result,
the next instruction to be executed will be from the previously interrupted program.
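Steps 4, 5, and 9 of the sequence above can be sketched as stack operations. This is a toy model; the register values and handler entry address are illustrative assumptions:

```python
# Toy model of the interrupt entry/exit sequence: save PSW and PC on a
# stack, vector to the handler, then restore. The register values and
# handler address are illustrative assumptions.
stack = []
cpu = {"PC": 0x1000, "PSW": 0b0010}
HANDLER_ENTRY = 0x8000  # assumed entry point of the interrupt handler

def take_interrupt():
    stack.append(cpu["PSW"])      # step 4: save processor status word
    stack.append(cpu["PC"])       # ... and the return address
    cpu["PC"] = HANDLER_ENTRY     # step 5: vector to the handler

def return_from_interrupt():
    cpu["PC"] = stack.pop()       # step 9: restore PC ...
    cpu["PSW"] = stack.pop()      # ... and PSW, resuming the program

take_interrupt()
assert cpu["PC"] == HANDLER_ENTRY   # now executing the handler
return_from_interrupt()
print(hex(cpu["PC"]), bin(cpu["PSW"]))  # 0x1000 0b10
```

Because the saves and restores are strictly last-in first-out, the interrupted program resumes exactly where it left off, which is why a stack is the natural structure here.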

 The following diagram shows the basic instruction cycle, i.e., without interrupts.

 The following diagram shows the instruction cycle with interrupts.

Vectored and prioritized interrupts
In a vectored interrupt, the address to which control is transferred is determined by the cause
of the interrupt. In other words, vectored interrupts have a fixed vector address (the starting
address of the service routine), and when one occurs, program control is transferred to that
address. For instance, suppose a certain type of interrupt has occurred, say an arithmetic
overflow; it has its own address.
 To record the information about this interrupt, the processor needs two registers, which
record the cause of the interrupt and its address: the exception program counter (EPC) and
Cause.
 EPC: A register used to hold the address of the affected instruction.
 Cause: A register used to record the cause of the exception.
A prioritized interrupt occurs when more than one device is requesting interrupt service at
the same time. The processor simply picks the interrupt line with the highest priority. Bus
arbitration can employ such a priority scheme; in practice, interrupt priority is often
determined by a daisy chain.

A daisy chain can be defined as a method of device interconnection for determining interrupt
priority by connecting the interrupt sources serially.
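The serial priority resolution of a daisy chain can be sketched as follows. The device names are illustrative; the only assumption is that devices are ordered by their electrical position in the chain:

```python
# Sketch of daisy-chain priority resolution: the interrupt acknowledge
# propagates down the chain, and the first requesting device claims it.
# Device names are illustrative.
def daisy_chain_select(chain):
    """chain: list of (name, requesting) pairs ordered by position,
    closest to the processor (highest priority) first."""
    for name, requesting in chain:
        if requesting:
            return name        # device claims the acknowledge signal
    return None                # acknowledge passes off the end unused

chain = [("disk", False), ("printer", True), ("terminal", True)]
print(daisy_chain_select(chain))  # printer
```

Note that priority here comes purely from wiring order: the terminal also requested service, but the printer, being earlier in the chain, intercepts the acknowledge first.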

5.3. External Storage, Physical Organization, and Drives


External storage is also known as secondary memory or backing store. It is used to store a
huge amount of data because it has a large capacity. An important property of external memory
is that when the computer is switched off, the stored information is not lost. External memory
can be categorized into four types:

 Magnetic disk
 Optical memory
 RAID
 Magnetic tape

Magnetic Disks
 A disk is a circular platter constructed of nonmagnetic material, called the substrate,
coated with a magnetizable material. Traditionally, the substrate has been an aluminum or
aluminum alloy material. More recently, glass substrates have been introduced.
 The glass substrate has a number of benefits, including the following:
 Improvement in the uniformity of the magnetic film surface to increase disk reliability.
 A significant reduction in overall surface defects to help reduce read-write errors.
 Ability to support lower fly heights (described subsequently).
 Better stiffness to reduce disk dynamics.
 Greater ability to withstand shock and damage.
Magnetic Read and Write Mechanisms
Data are recorded on and later retrieved from the disk via a conducting coil named the head. In
many systems, there are two heads, a read head and a write head. During a read or write
operation, the head is stationary while the platter rotates under it. The write mechanism exploits
the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent
to the write head, and the resulting magnetic patterns are recorded on the surface below, with
different patterns for positive and negative currents. The write head itself is made of easily
magnetizable material and is in the shape of a rectangular doughnut with a gap along one side
and a few turns of conducting wire along the opposite side (Figure 5.3).

Figure 5.3 Inductive Write/Magneto-resistive Read Head

An electric current in the wire induces a magnetic field across the gap, which in turn magnetizes
a small area of the recording medium. Reversing the direction of the current reverses the
direction of the magnetization on the recording medium.
The traditional read mechanism exploits the fact that a magnetic field moving relative to a coil
produces an electrical current in the coil. When the surface of the disk rotates under the head, it
generates a current of the same polarity as the one already recorded. The structure of the head for
reading is in this case essentially the same as for writing and therefore the same head can be used
for both. Such single heads are used in floppy disk systems and in older rigid disk systems.
Contemporary rigid disk systems use a different read mechanism, requiring a separate read head,
positioned for convenience close to the write head. The read head consists of a partially shielded
magneto-resistive (MR) sensor. The MR material has an electrical resistance that depends on the
direction of the magnetization of the medium moving under it.
By passing a current through the MR sensor, resistance changes are detected as voltage signals.
The MR design allows higher-frequency operation, which equates to greater storage densities
and operating speeds.
Disk Performance Parameters
The actual details of disk I/O operation depend on the computer system, the operating system,
and the nature of the I/O channel and disk controller hardware. When the disk drive is operating,
the disk is rotating at constant speed.
To read or write, the head must be positioned at the desired track and at the beginning of the
desired sector on that track. Track selection involves moving the head in a movable-head system
or electronically selecting one head on a fixed-head system.
 On a movable-head system, the time it takes to position the head at the track is known
as seek time. In either case, once the track is selected, the disk controller waits until the
appropriate sector rotates to line up with the head.
 The time it takes for the beginning of the sector to reach the head is known as
rotational delay, or rotational latency.
 The sum of the seek time, if any, and the rotational delay equals the access time, which
is the time it takes to get into position to read or write. Once the head is in position, the
read or write operation is then performed as the sector moves under the head; this is the
data transfer portion of the operation; the time required for the transfer is the transfer
time.
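The relationships among these parameters can be made concrete with a worked example. The figures below are typical illustrative values, not the specification of any particular drive:

```python
# Worked example of disk timing. All figures are typical illustrative
# values, not a specific drive's specification.
seek_ms = 4.0                  # average seek time (assumed)
rpm = 7200                     # rotational speed (assumed)
sector_bytes = 512
track_bytes = 500_000          # bytes per track (assumed)

rotation_ms = 60_000 / rpm             # one full revolution
rotational_delay_ms = rotation_ms / 2  # average: half a revolution
transfer_ms = rotation_ms * sector_bytes / track_bytes

access_ms = seek_ms + rotational_delay_ms   # seek + rotational delay
total_ms = access_ms + transfer_ms          # plus the transfer itself
print(round(access_ms, 3), round(total_ms, 3))  # 8.167 8.175
```

Notice how the total is dominated by seek time and rotational delay; the transfer of a single sector is comparatively tiny, which is why minimizing head movement matters so much for disk performance.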

In addition to the access time and transfer time, there are several queuing delays normally
associated with a disk I/O operation. When a process issues an I/O request, it must first wait in a
queue for the device to be available. At that time, the device is assigned to the process. If the
device shares a single I/O channel or a set of I/O channels with other disk drives, then there
may be an additional wait for the channel to be available. At that point, the seek is performed to
begin disk access.

Figure 5.4 Timing of a Disk I/O Transfer

5.4. Buses: Bus Protocols, Arbitration, Direct-Memory Access (DMA)


Buses were for decades the dominant means of computer system component interconnection, and
bus structures are still commonly used in embedded systems, particularly microcontrollers.
A bus is a communication pathway connecting two or more devices.
A key characteristic of a bus is that it is a shared transmission medium. Multiple devices
connect to the bus, and a signal transmitted by any one device is available for reception by all
other devices attached to the bus. If two devices transmit during the same time period, their
signals will overlap and become garbled. Thus, only one device at a time can successfully
transmit.
Typically, a bus consists of multiple communication pathways, or lines. Each line is capable of
transmitting signals representing binary 1 and binary 0. Over time, a sequence of binary digits
can be transmitted across a single line. Taken together, several lines of a bus can be used to
transmit binary digits simultaneously (in parallel). For example, an 8-bit unit of data can be
transmitted over eight bus lines.
Computer systems contain a number of different buses that provide pathways between
components at various levels of the computer system hierarchy.
A bus that connects major computer components (processor, memory, I/O) is called a system
bus.
The most common computer interconnection structures are based on the use of one or more
system buses.

 Bus lines can be classified into three functional groups (Figure 5.5):
- data, address, and control lines. In addition, there may be power distribution lines that
supply power to the attached modules.
 The data lines provide a path for moving data among system modules. These lines,
collectively, are called the data bus. The data bus may consist of 32, 64, 128, or even more
separate lines, the number of lines being referred to as the width of the data bus. Because
each line can carry only one bit at a time, the number of lines determines how many bits can
be transferred at a time. The width of the data bus is a key factor in determining overall
system performance. For example, if the data bus is 32 bits wide and each instruction is 64
bits long, then the processor must access the memory module twice during each instruction
cycle.
 The address lines are used to designate the source or destination of the data on the data
bus. For example, if the processor wishes to read a word (8, 16, or 32 bits) of data from
memory, it puts the address of the desired word on the address lines. Clearly, the width of
the address bus determines the maximum possible memory capacity of the system.
Furthermore, the address lines are generally also used to address I/O ports. Typically, the
higher-order bits are used to select a particular module on the bus, and the lower-order bits
select a memory location or I/O port within the module. For example, on an 8-bit address
bus, addresses 01111111 and below might reference locations in a memory module (module
0) with 128 words of memory, and addresses 10000000 and above might refer to devices
attached to an I/O module (module 1).
 The control lines are used to control the access to and the use of the data and address
lines. Because the data and address lines are shared by all components, there must be a
means of controlling their use. Control signals transmit both command and timing
information among system modules. Timing signals indicate the validity of data and address
information. Command signals specify operations to be performed. Typical control lines
include:
• Memory write: causes data on the bus to be written into the addressed location.
• Memory read: causes data from the addressed location to be placed on the bus.
• I/O write: causes data on the bus to be output to the addressed I/O port.
• I/O read: causes data from the addressed I/O port to be placed on the bus.
• Transfer ACK: indicates that data have been accepted from or placed on the bus.

• Bus request: indicates that a module needs to gain control of the bus.
• Bus grant: indicates that a requesting module has been granted control of the bus.
• Interrupt request: indicates that an interrupt is pending.
• Interrupt ACK: acknowledges that the pending interrupt has been recognized.
• Clock: is used to synchronize operations.
• Reset: initializes all modules.
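The data-bus-width example above (a 64-bit instruction over a 32-bit bus) reduces to a small ceiling calculation, sketched here:

```python
import math

# Sketch of the data-bus-width example: number of bus transfers needed
# to fetch one instruction, given the width of the data bus.
def fetches_per_instruction(instr_bits, bus_width_bits):
    return math.ceil(instr_bits / bus_width_bits)

print(fetches_per_instruction(64, 32))  # 2  (32-bit bus: two accesses)
print(fetches_per_instruction(64, 64))  # 1  (64-bit bus: one access)
```

Doubling the bus width halves the number of memory accesses per instruction fetch, which is why bus width is a key factor in overall system performance.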
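The 8-bit address decoding example above, in which the high-order bit selects the module, can likewise be sketched directly:

```python
# Sketch of the 8-bit address decoding example: the high-order bit
# selects the module, the low-order seven bits select the word or port
# within it. The module names are illustrative.
def decode(addr):
    module = addr >> 7            # high-order bit: 0 = memory, 1 = I/O
    offset = addr & 0x7F          # low-order 7 bits: location in module
    return ("memory" if module == 0 else "io"), offset

print(decode(0b0111_1111))  # ('memory', 127)  last memory word
print(decode(0b1000_0000))  # ('io', 0)        first I/O port
```

This is the general pattern for address decoding on any bus: a few high-order bits act as a module select, and the remaining bits are the offset within the selected module.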
The operation of the bus is as follows.
If one module wishes to send data to another, it must do two things:
1) Obtain the use of the bus, and
2) Transfer data via the bus.
If one module wishes to request data from another module, it must
1) Obtain the use of the bus, and
2) Transfer a request to the other module over the appropriate control and address lines. It
must then wait for that second module to send the data.

Figure 5.5 Bus Interconnection Scheme


Arbitration: Any I/O module can temporarily function as “master.” A mechanism is provided to
arbitrate competing requests for bus control, using some sort of priority scheme.
5.5. Introduction to Networks
The shared bus architecture was the standard approach to interconnection between the processor
and other components (memory, I/O, and so on) for decades. But contemporary systems
increasingly rely on point-to-point interconnection rather than shared buses. The principal
reason driving the change from bus to point-to-point interconnect was the electrical
constraints encountered with increasing the frequency of wide synchronous buses. At higher
and higher data rates, it becomes increasingly difficult to perform the synchronization and
arbitration functions in a timely fashion.
Compared to the shared bus, the point-to-point interconnect has lower latency, higher data rate,

and better scalability.
An important and representative example of the point-to-point interconnect approach is Intel’s
QuickPath Interconnect (QPI), introduced in 2008. The following are significant
characteristics of point-to-point interconnect schemes:
 Multiple direct connections: Multiple components within the system enjoy direct pairwise
connections to other components. This eliminates the need for arbitration found in shared
transmission systems.
 Layered protocol architecture: As found in network environments, such as TCP/IP-based
data networks, these processor-level interconnects use a layered protocol architecture, rather
than the simple use of control signals found in shared bus arrangements.
 Packetized data transfer: Data are not sent as a raw bit stream. Rather, data are sent as a
sequence of packets, each of which includes control headers and error control codes.
5.6. Multimedia Support
Multimedia System Architecture (MSA) is the term used to describe a computer system built
with the intent of displaying multimedia content. It is often used to describe computers designed
to display audio, video, images, and other digital information. Dedicated multimedia processors
are designed to manage the operations performed on different multimedia components, while
general-purpose processors, as the name indicates, work for a variety of operations.
Multimedia System Architecture is composed of several components, including hardware,
software, and communication infrastructure.
 Hardware components of a MSA generally include displays, processors, storage, and
input/output devices.
 Displays are the components that enable content to be seen by users. Displays can range
from simple LED screens to more sophisticated flat-panel LCD or plasma displays.
 Processors are the devices used to control the system and provide the computing power
needed to show multimedia content.
 Storage devices are needed to store the content and can range from hard disk drives to
optical and flash memory drives.
 Input/output devices enable the user to interact with the system and allow for data to be
exchanged with external systems.
 Software components commonly include an operating system, multimedia software,
and additional utilities.

 An operating system manages the connections between different components,
loading and unloading the resources needed for any particular task.
 Multimedia software allows for the integration of audio and video data and
provides the necessary controls for manipulating the data.
 Additional utilities such as media players, converters, and streaming
technologies also provide advanced functionality.

5.7. RAID Architectures


RAID (Redundant Array of Independent Disks) refers to an organization of disks that uses an array of
small and inexpensive disks so as to increase both performance and reliability. With the use of multiple
disks, there is a wide variety of ways in which the data can be organized and in which redundancy can
be added to improve reliability.
In other words, redundant array of independent disks (RAID), can be defined as, a disk array in which
part of the physical storage capacity is used to store redundant information about user data stored on
the remainder of the storage capacity. The redundant information enables regeneration of user data in
the event that one of the array’s member disks or the access path to it fails.
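The regeneration property can be sketched with bytewise XOR parity, as used in RAID levels 4 and 5. The strip contents below are illustrative, and a real array works on whole disk blocks rather than two-byte strips:

```python
from functools import reduce

# Sketch of parity-based regeneration (as in RAID 4/5): parity is the
# bytewise XOR of the data strips, so any one missing strip can be
# rebuilt from the survivors. Strip contents are illustrative.
def parity(strips):
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xaa\xbb"
p = parity([d0, d1, d2])          # redundant strip on the parity disk

# Suppose the disk holding d1 fails: rebuild it from the survivors
# plus the parity strip.
rebuilt = parity([d0, d2, p])
print(rebuilt == d1)  # True
```

The trick is that XOR is its own inverse: XOR-ing the parity with all surviving strips cancels their contributions and leaves exactly the lost strip, at the cost of dedicating part of the array's capacity to redundancy.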

