Chapter-5: Interfacing and communication

I/O fundamentals: handshaking, buffering, programmed I/O, interrupt-driven I/O
I/O fundamentals:
 The basic components involved in I/O are the processor, memory, input devices, and output devices.
 The figure below shows a simple arrangement for connecting the processor and the memory in a given computer system to an input device and an output device.
 A single bus consisting of the required address, data, and control lines is used to connect the system's components.
Shared I/O arrangement (figure)
Handshaking:
 The basic principle of the two-wire handshaking method of data transfer is as follows:
 One control line runs from the source to the destination. It is used by the source unit to inform the destination unit whether there are valid data on the bus.
 A second control line runs from the destination to the source. It is used by the destination unit to inform the source unit whether it can accept data.
 For example, consider a printer connected to a system.
 The printer may print only 100 characters per second, while the microprocessor can send data far faster than that. The printer therefore asserts a busy signal while it is printing, and the processor waits until the printer is ready before sending the next character (see the sketch below).
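As an illustration, here is a minimal sketch in C of the source side of such a two-wire handshake. The register addresses DATA_BUS, DATA_VALID, and DATA_ACCEPT are made up for illustration; a real device would define its own.

#include <stdint.h>

/* Hypothetical memory-mapped register addresses, for illustration only. */
#define DATA_BUS    ((volatile uint8_t *)0x4000)  /* data lines            */
#define DATA_VALID  ((volatile uint8_t *)0x4001)  /* source -> destination */
#define DATA_ACCEPT ((volatile uint8_t *)0x4002)  /* destination -> source */

/* Source side of a two-wire handshake: place a byte on the bus,
 * raise DATA_VALID, wait for the destination to accept, then drop it. */
static void handshake_send(uint8_t byte)
{
    *DATA_BUS   = byte;            /* put valid data on the bus        */
    *DATA_VALID = 1;               /* tell the destination data are valid */
    while (*DATA_ACCEPT == 0)      /* wait until the destination accepts   */
        ;                          /* busy-wait (polling)              */
    *DATA_VALID = 0;               /* remove the valid indication      */
    while (*DATA_ACCEPT != 0)      /* wait for accept to be withdrawn  */
        ;
}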
Buffering:
 A buffer is a memory area that stores data while they are transferred between two devices or between a device and an application.
 Buffering of I/O is performed for (at least) three major reasons:
 Speed differences between two devices.
 Data transfer size differences.
 To support copy semantics.
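A common way to implement such a buffer is a ring (circular) buffer. The sketch below is a generic single-producer/single-consumer version; the buffer size and names are arbitrary and not tied to any particular device.

#include <stdint.h>
#include <stdbool.h>

#define BUF_SIZE 256  /* capacity chosen arbitrarily for illustration */

/* A simple ring buffer that smooths out speed differences between
 * a producer (e.g. the CPU) and a consumer (e.g. a slow device). */
struct ring {
    uint8_t  data[BUF_SIZE];
    unsigned head;   /* next slot to write      */
    unsigned tail;   /* next slot to read       */
    unsigned count;  /* bytes currently stored  */
};

static bool ring_put(struct ring *r, uint8_t byte)
{
    if (r->count == BUF_SIZE)
        return false;                       /* buffer full: caller must wait */
    r->data[r->head] = byte;
    r->head = (r->head + 1) % BUF_SIZE;
    r->count++;
    return true;
}

static bool ring_get(struct ring *r, uint8_t *byte)
{
    if (r->count == 0)
        return false;                       /* buffer empty: nothing to read */
    *byte = r->data[r->tail];
    r->tail = (r->tail + 1) % BUF_SIZE;
    r->count--;
    return true;
}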
Programmed I/O:
In programmed I/O, the CPU has direct control over the I/O operation:
 Sensing status
 Read/write commands
 Transferring data
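A minimal programmed-I/O sketch, assuming a hypothetical device with a status register and a one-character data register; the CPU senses the status and transfers each character itself.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical device registers, for illustration only. */
#define DEV_STATUS ((volatile uint8_t *)0x5000)  /* bit 0 = ready          */
#define DEV_DATA   ((volatile uint8_t *)0x5001)  /* one-character register */
#define READY_BIT  0x01

/* Programmed I/O: the CPU polls the status register and transfers
 * the data itself, one character per iteration. */
static void pio_write(const char *msg, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        while ((*DEV_STATUS & READY_BIT) == 0)  /* sense status (busy-wait) */
            ;
        *DEV_DATA = (uint8_t)msg[i];            /* transfer one character   */
    }
}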
I/O mapping:
 A complete instruction fetch, decode, and execute cycle will have to be executed for every input
and every output operation.
 Programmed I/O is useful in cases where one character at a time is to be transferred.
 I/O mapping can be done in two ways:
• Memory mapped I/O
• Isolated I/O
Memory mapped I/O:
• I/O looks just like memory read/write
• No special commands are needed for I/O
Isolated I/O:
 Separate address space is required for I/O
 Special commands are needed for I/O
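The difference can be sketched as follows. The memory-mapped address is made up, and the isolated-I/O variant uses the x86 out instruction (via GCC-style inline assembly), which exists only on architectures with a separate I/O address space.

#include <stdint.h>

/* Memory-mapped I/O: the device register occupies an ordinary memory
 * address, so a plain store is the "output instruction".
 * (0x6000 is a made-up address for illustration.) */
#define MM_DATA ((volatile uint8_t *)0x6000)

static void mmio_write(uint8_t value)
{
    *MM_DATA = value;          /* looks exactly like a memory write */
}

/* Isolated I/O: the device lives in a separate I/O address space and
 * needs a special instruction (x86 "out") to reach it. */
static void isolated_write(uint16_t port, uint8_t value)
{
    __asm__ volatile ("outb %0, %1" : : "a"(value), "Nd"(port));
}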
Interrupt-Driven I/O:
 Computers are provided with Interrupt Hardware capability in the form of
specialized Interrupt Lines to the processor.
 These lines are used to send interrupt signals to the processor.
 In a typical system there is more than one I/O device that can raise an interrupt.
 The processor should therefore be provided with a mechanism that enables it to handle simultaneous interrupt requests and to recognize the interrupting device.
 Two basic schemes can be implemented to achieve this:
I. Daisy Chain Bus Arbitration (DCBA) and
II. Independent Source Bus Arbitration (ISBA)
Interrupt Hardware: Daisy Chain Interrupt Arrangement (figure)

Interrupt Hardware: Independent Interrupt Arrangement (figure)
Interrupt in Operating Systems:
 When an interrupt occurs, the operating system gains control.
 The operating system saves the state of the interrupted process, analyzes
the interrupt, and passes control to the appropriate routine to handle the
interrupt.
 Several layers of software are involved in handling I/O operations.
Interrupt Structure:
 When a program enters a wait loop, it repeatedly checks the device status.
 During this period, the processor does not perform any useful work.
 With interrupt-driven I/O, the device instead uses the interrupt-request line to send a hardware signal, called the interrupt signal, to the processor.
 On receiving this signal, the processor can perform useful work during what would otherwise be the waiting period, and attend to the device only when the interrupt arrives.
 The routine executed in response to an interrupt request is called Interrupt
Service Routine.
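A sketch of such an Interrupt Service Routine, assuming hypothetical data and interrupt-flag registers; how the routine is installed in the interrupt vector table is processor-specific.

#include <stdint.h>

/* Hypothetical device registers, for illustration only. */
#define DEV_DATA  ((volatile uint8_t *)0x5001)  /* received character        */
#define DEV_IFLAG ((volatile uint8_t *)0x5002)  /* write 1 to clear request  */

static volatile uint8_t  rx_buf[64];
static volatile unsigned rx_len;

/* Interrupt Service Routine: invoked through the interrupt vector when the
 * device raises its request line (the registration mechanism is
 * processor-specific), so the CPU never sits in a status wait loop. */
void device_isr(void)
{
    if (rx_len < sizeof rx_buf)
        rx_buf[rx_len++] = *DEV_DATA;   /* move the character into a buffer */
    *DEV_IFLAG = 1;                     /* acknowledge / clear the request  */
}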
Cont..
Vectored Interrupt:
 A device requesting an interrupt may identify itself to the processor by sending a special code over the bus; the processor then starts executing the Interrupt Service Routine for that device.
 The code supplied by the device indicates the starting address of the ISR for that device.
 The code length typically ranges from 4 to 8 bits.
 The processor reads this address, called the interrupt vector, and loads it into the PC.
 When the processor is ready to receive the interrupt vector code, it activates the interrupt acknowledge (INTA) line.
 A common way of passing this acknowledgement among several devices is the Daisy Chain arrangement (DCBA) mentioned above.
Priority Interrupt:
 In a multiple-level priority scheme, we assign a priority level to the processor that can be changed under program control.
 The priority level of the processor is the priority of the program that is
currently being executed.
 The processor accepts interrupts only from devices that have priorities
higher than its own.
Privileged Instruction:
 The processor's priority is encoded in a few bits of the Processor Status Word (PSW).
 An instruction that changes this priority is privileged and can be executed only when the processor is in supervisor mode.
External Storage

 Main memory plays an important role in the working of a computer.
 We have seen that a computer works on the Von Neumann stored-program principle.
 We keep the information in main memory, and the CPU accesses it from there.
 Main memory is made up of semiconductor devices and is volatile by nature.
 For permanent storage of information we need some non-volatile memory.
 The memory devices needed to store information permanently are termed external memory.
 While working, the information is transferred from external memory to main memory.
 The devices used to store information permanently are either magnetic or optical devices.
Cont...
Magnetic Devices:
 Magnetic disk (hard disk)
 Floppy disk
 Magnetic tape

Optical Devices:
 CD-ROM
 CD-Recordable (CD-R)
 CD-R/W
 DVD

Drive:
 A drive is a medium capable of storing and reading information that is not easily removed, like a disk.
 The picture shows examples of the different drives listed in Microsoft Windows My Computer.
Buses

 A bus protocol is the set of rules that govern the behavior of the various devices connected to the bus.
 The bus lines used for transferring data are grouped into three types:
 Address lines
 Data lines
 Control lines
 During a data transfer operation, one device plays the role of a Master.
 The master device initiates the data transfer by issuing read/write commands on the bus. Hence it is also called the Initiator.
 The device addressed by the master is called the Slave or Target.
Types of Buses:

There are two types of buses:
 Synchronous Bus
 Asynchronous Bus
Synchronous Bus:-
 In synchronous buses, the steps of a data transfer take place at fixed clock cycles.
 Everything is synchronized to the bus clock, and clock signals are made available to both master and slave.
 A transfer may take multiple bus cycles depending on the speed parameters of the bus and the two ends of the transfer.
 Synchronous buses are simple and easily implemented.
 However, when devices with varying speeds are connected to a synchronous bus, the slowest device determines the speed of the bus.
 Also, the synchronous bus length may be limited to avoid clock-skew problems.
Asynchronous Bus:-
 There are no fixed clock cycles in asynchronous buses.
 Handshaking is used instead.
 The master asserts the data-ready line until it sees a data-accept signal.
 When the slave sees data-ready signal, it will assert the data-accept line.
 Asynchronous bus is appropriate for different speed devices.
 Bus Arbitration:
 It is the process by which the next device to become the bus master is
selected and the bus mastership is transferred to it.
 Types of Bus Arbitration:
 Centralized arbitration - (a single bus arbiter performs arbitration)
 Distributed arbitration - (all devices participate in the selection of the next bus master).
DIRECT MEMORY ACCESS
 A special control unit may be provided to allow the transfer of a large block of data at high speed directly between an external device and main memory, without continuous intervention by the processor. This approach is called DMA.
 DMA transfers are performed by a control circuit called the DMA
Controller.
 To initiate the transfer of a block of words, the processor sends:
 Starting address
 Number of words in the block
 Direction of transfer.
Cont..
 As the block of data is transferred, the DMA controller increments the memory address for successive words, keeps track of the number of words transferred, and informs the processor by raising an interrupt signal when the transfer is complete.
 While the DMA transfer is taking place, the program that requested the transfer cannot continue, but the processor can be used to execute another program.
 After the DMA transfer is completed, the processor returns to the program that requested the transfer.
 When the processor wishes to read or write a block of data, it issues a command to the DMA module, sending it the following information:
Cont..
 Whether a read or write is requested, using the read or write control line between the processor and the DMA module.
 The address of the I/O device involved, communicated on the data lines.
 The starting location in memory to read from or write to, communicated on the data lines and stored by the DMA module in its address register.
 The number of words to be read or written, again communicated via the data lines and stored in the data count register.
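The register layout below is invented for illustration, but it shows the three pieces of information the processor hands to the DMA controller before getting out of the way.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical DMA-controller register block, for illustration only. */
struct dma_regs {
    volatile uint32_t start_addr;   /* starting memory address              */
    volatile uint32_t word_count;   /* number of words in the block         */
    volatile uint32_t control;      /* bit 0: 1 = read from device, 0 = write
                                       to device; bit 1: start the transfer */
};

#define DMA ((struct dma_regs *)0x7000)   /* made-up base address */
#define DMA_DIR_READ  0x1
#define DMA_GO        0x2

/* The processor only programs the controller and returns; the DMA
 * controller then increments the address and decrements the count for
 * each word, and raises an interrupt when the count reaches zero. */
static void dma_start_read(void *dest, size_t words)
{
    DMA->start_addr = (uint32_t)(uintptr_t)dest;  /* starting address  */
    DMA->word_count = (uint32_t)words;            /* block length      */
    DMA->control    = DMA_DIR_READ | DMA_GO;      /* direction + start */
    /* The CPU is now free to run other code until the completion interrupt. */
}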
Introduction to networks
 A basic understanding of networking is important for anyone managing a server.
 Not only is it essential for getting your services online and running smoothly, but it also gives you the insight to diagnose problems.
 Connection: In networking, a connection refers to pieces of related information that are transferred
through a network.
 Packet: A packet is, generally speaking, the most basic unit that is transferred over a network.
 When communicating over a network, packets are the envelopes that carry your data (in pieces) from
one end point to the other.
 Packets have a header portion that contains information about the packet including the source and
destination, timestamps, network hops, etc.
 The main portion of a packet contains the actual data being transferred. It is sometimes called the body
or the payload.
 There are two types of transmission technology that are in widespread use:
 broadcast links
 point-to-point links.
Cont..
Point-to-point links connect individual pairs of machines.
 To go from the source to the destination on a network made up of point-to-point links, short messages, called packets, may have to visit one or more intermediate machines.
 Point-to-point transmission with exactly one sender and one receiver is sometimes called unicasting.
Broadcast systems usually also allow the possibility of addressing a packet to all destinations by using a special code in the address field; this is known as broadcasting.
 Transmission to a subset of the machines is known as multicasting.
 Networks can also be classified by the area they cover:
The classifications are PAN, LAN, MAN, and WAN.

 PANs (Personal Area Networks) let devices communicate over the range of a person.
A common example is a wireless network that connects a computer with its peripherals.
 A well-known short-range wireless network designed to connect these components without wires is Bluetooth.
 LAN (Local Area Network): A LAN is a privately owned network that operates within and nearby a single building like a home, office, or factory.
 A common example is Wi-Fi.
 MAN (Metropolitan Area Network): A MAN covers a city. The best-known examples of MANs are the cable television networks available in many cities.
 WAN (Wide Area Network): A WAN spans a large geographical area, often a country or continent. A typical example of a wired WAN is a company with branch offices in different cities connected through the network of an ISP (Internet Service Provider).
Open Systems Interconnection - OSI layer
There are seven layers in total:
1. Physical: The physical layer is responsible for handling the actual physical devices that are
used to make a connection.
2. Data Link: This layer is implemented as a method of establishing and maintaining reliable
links between different nodes or devices on a network using existing physical connections.
3. Network: The network layer is used to route data between different nodes on the network.
4. Transport: The transport layer is responsible for handing the layers above it a reliable
connection.
5. Session: The session layer is a connection handler. It creates, maintains, and destroys
connections between nodes in a persistent way.
6. Presentation: The presentation layer is responsible for mapping resources and creating
context.
7. Application: The application layer is the layer that the users and user-applications most
often interact with.
Protocol: A protocol is a set of rules and standards that basically
define a language that devices can use to communicate.
Types of protocol:
 TCP - (Transmission Control Protocol)
 UDP - (User Datagram Protocol)
 TCP - is a reliable connection-oriented protocol that allows a byte
stream originating on one machine to be delivered without error
on any other machine in the internet.
 UDP - is an unreliable, connectionless protocol for applications
that do not want TCP’s sequencing or flow control and wish to
provide their own.
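To make the contrast concrete, the sketch below sends a single UDP datagram using POSIX sockets: there is no connection setup and no delivery guarantee. The destination address and port are placeholders; passing SOCK_STREAM instead of SOCK_DGRAM would request a TCP connection.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one UDP datagram: no connection setup, no delivery guarantee.
 * 127.0.0.1:9999 is a placeholder destination for illustration. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);   /* SOCK_STREAM would be TCP */
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in dest = {0};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9999);
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    const char msg[] = "hello over UDP";
    sendto(fd, msg, sizeof msg, 0,
           (struct sockaddr *)&dest, sizeof dest);  /* fire and forget */

    close(fd);
    return 0;
}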
RAID architectures
 RAID is a technology that is used to increase the performance and/or
reliability of data storage.
 The abbreviation stands for Redundant Array of Inexpensive Disks.
 A RAID system consists of two or more drives working in parallel.
RAID levels:
 RAID 0 – striping
 RAID 1 – mirroring
 RAID 5 – striping with parity
 RAID 6 – striping with double parity
 RAID 10 – combining mirroring and striping
RAID architectures
 To achieve greater performance and higher availability, servers and larger systems use RAID disk technology.
 RAID is a family of techniques for using multiple disks as a parallel array of data storage devices, with redundancy built in to compensate for disk failure.
 RAID is a set of physical disk drives viewed by the operating system as a single logical drive.
 Data are distributed across the physical drives of an array.
 Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure.
 The RAID array creates significant performance and reliability gains.
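The parity that makes this recoverability possible is, in the common parity-based RAID levels, a bitwise XOR across the data disks. The sketch below computes the parity block for one stripe and rebuilds a lost block; the stripe width and block size are arbitrary and not tied to any particular RAID implementation.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define NDATA      4      /* data disks in the stripe (arbitrary) */
#define BLOCK_SIZE 512    /* bytes per block (arbitrary)          */

/* Parity block = XOR of the corresponding blocks on every data disk. */
static void compute_parity(uint8_t blocks[NDATA][BLOCK_SIZE],
                           uint8_t parity[BLOCK_SIZE])
{
    memset(parity, 0, BLOCK_SIZE);
    for (size_t d = 0; d < NDATA; d++)
        for (size_t i = 0; i < BLOCK_SIZE; i++)
            parity[i] ^= blocks[d][i];
}

/* If one disk fails, its block is the XOR of the parity block and the
 * blocks of the surviving disks. */
static void rebuild_block(uint8_t blocks[NDATA][BLOCK_SIZE],
                          uint8_t parity[BLOCK_SIZE],
                          size_t failed, uint8_t out[BLOCK_SIZE])
{
    memcpy(out, parity, BLOCK_SIZE);
    for (size_t d = 0; d < NDATA; d++)
        if (d != failed)
            for (size_t i = 0; i < BLOCK_SIZE; i++)
                out[i] ^= blocks[d][i];
}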
