Unit-4 COA Presentation
Peripheral Devices
A peripheral device is defined as a device which provides input/output functions for a computer and serves as an auxiliary computer device without computing-intensive functionality.
A peripheral device is connected to a computer system but is not part of the core computer system architecture.
Peripherals are not essential for the computer to perform its basic tasks; they can be thought of as enhancements to the user's experience.
Classification of Peripheral devices:
Peripheral devices are generally classified into three basic categories, given below:
1. Input Devices:
An input device converts incoming data and instructions into a pattern of electrical signals in binary code that is comprehensible to a digital computer.
A piece of equipment/hardware which helps us enter data into a computer is called an input device.
Example: Keyboard, mouse, scanner, microphone, Optical Character Recognition (OCR), Optical Bar Code Reader (OBR), Optical Mark Reader (OMR), Magnetic Ink Character Recognition (MICR), etc.
2. Output Devices:
A piece of equipment/hardware which gives out the result of the entered input, once it is
processed (i.e. converts data from machine language to a human-understandable language),
is called an output device.
Example:
Monitors, headphones, printers etc.
Input-Output peripherals:
Allow both input (from the outside world to the computer) as well as output (from the computer to the outside world).
Example: Touch screen etc.
3. Storage Devices:
Storage devices are used to store data in the system that is required for performing any operation. The storage device is one of the most essential peripheral devices and also provides better compatibility.
Example:
Hard disk, magnetic tape, Flash memory etc.
Advantages of Peripheral Devices:
Peripheral devices provide additional features that make operating the system easier. These are given below:
•They make it easy to provide input to the system.
•They provide output in a specific, usable form.
•They provide storage for information or data.
•They improve the efficiency of the system.
Introduction to Input-Output
Input-Output Interface
An input-output interface provides a method for transferring information between internal storage (such as memory and CPU registers) and external I/O devices.
Peripherals connected to a computer need special communication links for interfacing them
with the central processing unit.
The communication link resolves the following differences between the computer and
peripheral devices.
Unit of Information
Peripherals - Byte
CPU or Memory - Word
Operating Modes
Peripherals - Autonomous, Asynchronous
CPU or Memory – Synchronous
To resolve these differences, computer systems include special hardware components (interfaces) between the CPU and peripherals to supervise and synchronize all input and output transfers.
Functions of Input-Output Interface:
1.It synchronizes the operating speed of the CPU with respect to the input-output devices, synchronizing the data flow and supervising the transfer rate between the peripheral and the CPU or memory.
2.It decodes the device address (device code).
3.It decodes the commands (operations).
4.It provides control and timing signals.
5.It provides data buffering through the data bus.
6.It performs error detection.
7.It converts serial data into parallel data and vice versa.
8.It converts digital data into analog signals and vice versa.
A short sketch of such an interface, as seen by software, follows this list.
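As an illustration of these functions, the following C sketch models a hypothetical memory-mapped interface with status, control and data registers. The base address, register offsets and bit names are assumptions made purely for illustration, not a real device:

/* Hypothetical memory-mapped I/O interface: the addresses, offsets and
 * bit positions below are illustrative assumptions, not a real device. */
#include <stdint.h>

#define IFACE_BASE   0x40001000u
#define IFACE_STATUS (*(volatile uint32_t *)(IFACE_BASE + 0x0u))  /* status register  */
#define IFACE_CTRL   (*(volatile uint32_t *)(IFACE_BASE + 0x4u))  /* control register */
#define IFACE_DATA   (*(volatile uint32_t *)(IFACE_BASE + 0x8u))  /* data buffer      */

#define STATUS_READY (1u << 0)   /* device ready for a transfer       */
#define STATUS_ERROR (1u << 1)   /* error detected by the interface   */
#define CTRL_START   (1u << 0)   /* command decoded by the interface  */

/* Move one word through the interface's data buffer and report errors. */
int iface_write_word(uint32_t word)
{
    IFACE_DATA = word;          /* data buffering through the data bus     */
    IFACE_CTRL = CTRL_START;    /* command decoding inside the interface   */
    while (!(IFACE_STATUS & STATUS_READY))
        ;                       /* synchronize with the slower peripheral  */
    return (IFACE_STATUS & STATUS_ERROR) ? -1 : 0;   /* error detection    */
}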
Data Transfer
The internal operations in any individual unit of a digital system are synchronized using a clock pulse. That means a clock pulse is supplied to every register within the unit, and all data transfers among internal registers occur at the occurrence of a clock pulse.
Now, let's assume that two units of the digital system are designed independently, such as the CPU and an I/O interface. If the internal registers in the I/O interface share a common clock with the CPU registers, then data transfer between the units (two or more) is said to be synchronous.
But in most cases, the internal timing of each unit is independent of the others, so each unit uses its own clock for its registers. In this case, the units are asynchronous, and data transfer between them is called asynchronous data transfer.
Asynchronous Data Transfer
Asynchronous data transfer is a mode of data transfer in which the two components have different clocks. Asynchronous data transfer between two independent units requires that control signals be transmitted between the communicating units to indicate the times at which they transfer data.
We have two different methods of asynchronous data transfer: strobe control and handshaking.
1. Strobe Control Method
Source initiated strobe: In the figure, we can see that the source unit initiates the strobe. In the timing diagram, we can notice that the source unit first places the data on the data bus. Then, after a brief delay to ensure that the data has settled to a stable value, the source unit activates a strobe pulse. The strobe control signal and the data bus information remain in the active state long enough to permit the destination unit to receive the data.
The destination unit uses a falling edge of strobe control to transfer the contents of a data
bus to one of its internal registers. The source removes the data from the data bus after it
disables its strobe pulse. Thus, new valid data will be available only after the strobe is
enabled again.
Destination initiated strobe: In the below figure, we can see that the destination unit
initiates the strobe, and as shown in the timing diagram, the destination unit activates the
strobe pulse first by informing the source to provide the data.
In destination initiated transfer, the source unit will respond by placing the requested
information on the data bus. The transfer data must be valid and remain on the data bus
long enough for the destination unit to receive it. We can use the strobe pulse's falling edge
again to trigger a destination register. The destination unit then disables the strobe pulse.
Finally, the source unit removes the data from the bus after some determined time interval.
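The following C sketch mimics the source-initiated strobe sequence in software. The shared variables and function names are illustrative assumptions; real strobe signalling happens in hardware, and the calls below simply replay the steps of the timing diagram:

/* Software mock-up of a source-initiated strobe transfer.
 * All names are illustrative assumptions, not a real API.        */
#include <stdio.h>

static int data_bus;   /* models the shared data bus              */
static int strobe;     /* models the strobe control line          */
static int dest_reg;   /* destination unit's internal register    */

static void source_place_data(int value) { data_bus = value; }  /* step 1 */
static void source_enable_strobe(void)   { strobe = 1; }        /* step 2 */
static void source_disable_strobe(void)  { strobe = 0; }        /* step 3: falling edge */
static void source_remove_data(void)     { data_bus = 0; }      /* step 4: after strobe off */

/* The destination latches the bus on the falling edge of the strobe. */
static void destination_on_falling_edge(void) { dest_reg = data_bus; }

int main(void)
{
    source_place_data(0x5A);        /* data placed on the bus             */
    source_enable_strobe();         /* strobe held long enough to be seen */
    source_disable_strobe();        /* falling edge ...                   */
    destination_on_falling_edge();  /* ... triggers the destination latch */
    source_remove_data();           /* data removed only after strobe off */
    printf("destination register = 0x%X\n", (unsigned)dest_reg);
    return 0;
}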
The strobe control method for asynchronous data transfer has a disadvantage. The source
unit always assumes that the destination unit has received the data placed in the data bus.
Similarly, a destination unit that initiates the transfer has no way of knowing whether the
source unit has placed data on the bus.
This problem is solved by the handshaking method of data transfer.
2. Handshaking Method
In this method, each data item being transferred is accompanied by a control signal that indicates the presence of valid data on the bus. The unit receiving the data item responds with another control signal to acknowledge receipt of the data.
In this method, one control line is in the same direction as the data flow in the bus from the
source to the destination. The source unit uses it to inform the destination unit whether there
are valid data in the bus.
The other control line runs in the opposite direction, from the destination to the source. The destination unit uses it to inform the source whether it can accept data. Here too, the sequence of control depends on the unit that initiates the transfer, that is, on whether the transfer is initiated by the source or by the destination.
Source initiated handshaking: In the below block diagram, we can see that two handshaking lines
are "data valid", which is generated by the source unit, and "data accepted", generated by the
destination unit.
The timing diagram shows the timing relationship of the exchange of signals between the two units.
The source initiates a transfer by placing data on the bus and enabling its data valid signal. The
destination unit then activates the data accepted signal after it accepts the data from the bus.
The source unit then disables its valid data signal, which invalidates the data on the bus.
After this, the destination unit disables its data accepted signal, and the system goes into its initial
state. The source unit does not send the next data item until after the destination unit shows
readiness to accept new data by disabling the data accepted signal.
This sequence of events is described in its sequence diagram, which shows the state in which the system is present at any given time.
Destination initiated handshaking: In the below block diagram, you see that the two handshaking
lines are "data valid", generated by the source unit, and "ready for data" generated by the
destination unit.
Note that the signal generated by the destination unit has been renamed from data accepted to ready for data to reflect its new meaning.
Since the transfer is initiated by the destination, the source unit does not place data on the data bus until it receives the ready for data signal from the destination unit. After that, the handshaking process is the same as that of the source-initiated transfer.
The sequence of events is shown in its sequence diagram, and the timing relationship between signals
is shown in its timing diagram. Therefore, the sequence of events in both cases would be identical.
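The C sketch below replays the four steps of source-initiated handshaking in software. The variable names (data_bus, data_valid, data_accepted) mirror the signals in the block diagram but are otherwise illustrative assumptions:

/* Software mock-up of source-initiated handshaking.
 * Variable names mirror the diagram but are illustrative only.   */
#include <stdio.h>

static int data_bus;       /* shared data bus                      */
static int data_valid;     /* control line: source -> destination  */
static int data_accepted;  /* control line: destination -> source  */
static int dest_reg;

int main(void)
{
    /* 1. Source places data and enables data valid.               */
    data_bus   = 0x3C;
    data_valid = 1;

    /* 2. Destination takes the data and enables data accepted.    */
    dest_reg      = data_bus;
    data_accepted = 1;

    /* 3. Source sees data accepted, disables data valid and
     *    invalidates the data on the bus.                         */
    data_valid = 0;
    data_bus   = 0;

    /* 4. Destination disables data accepted; the system is back
     *    in its initial state and a new item may be sent.         */
    data_accepted = 0;

    printf("received 0x%X\n", (unsigned)dest_reg);
    return 0;
}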
Programmed I/O: Programmed I/O operations are the result of I/O instructions written in the computer program. Each data item transfer is initiated by an instruction in the program. Usually the transfer is to and from a CPU register and the peripheral. This method requires constant monitoring of the peripheral devices by the CPU.
Example of Programmed I/O: In this case, the I/O device does not have direct access to the memory unit. A transfer from an I/O device to memory requires the execution of several instructions by the CPU, including an input instruction to transfer the data from the device to the CPU and a store instruction to transfer the data from the CPU to memory. In programmed I/O, the CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This is a time-consuming process since it needlessly keeps the CPU busy. This situation can be avoided by using an interrupt facility.
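A minimal polling loop for programmed input might look like the C sketch below. The register addresses and the ready flag bit are assumed values chosen only to make the example concrete:

/* Programmed I/O sketch: the CPU busy-waits on the device status flag
 * before each transfer. Register addresses and bits are assumed.     */
#include <stdint.h>
#include <stddef.h>

#define DEV_STATUS (*(volatile uint8_t *)0x40002000u)  /* assumed status register */
#define DEV_DATA   (*(volatile uint8_t *)0x40002001u)  /* assumed data register   */
#define DEV_READY  (1u << 0)                           /* assumed ready flag      */

/* Read n bytes from the device into a memory buffer, one byte at a time. */
void programmed_input(uint8_t *buf, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        while (!(DEV_STATUS & DEV_READY))
            ;                 /* CPU stays in this loop until the device is ready */
        buf[i] = DEV_DATA;    /* input: device -> CPU register -> memory          */
    }
}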
Advantages:
•Programmed I/O is simple to implement.
•It requires very little hardware support.
•CPU checks status bits periodically.
Disadvantages:
•The processor has to wait for a long time for the I/O module to be ready for either
transmission or reception of data.
•The performance of the entire system is severely degraded.
Interrupt-initiated I/O
In the Programmed I/O, we saw that the CPU is kept busy unnecessarily. We can avoid this
situation by using an interrupt-driven method for data transfer. The interrupt facility and special commands instruct the interface to issue an interrupt request signal as soon as data is available from the device.
In the meantime, the CPU can execute other programs while the interface keeps monitoring the I/O device. Whenever the interface determines that the device is ready for a data transfer, it initiates an interrupt request signal to the CPU. As soon as the CPU detects
an external interrupt signal, it stops the program it was already executing, branches to the
service program to process the I/O transfer, and returns to the program it was initially
running.
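The C sketch below outlines the structure of interrupt-initiated I/O: the main program runs freely while a service routine moves the data when the device interrupts. The routine name, register and buffer are illustrative assumptions; how the routine is attached to the interrupt vector depends on the actual hardware and is not shown:

/* Interrupt-driven I/O sketch. The routine name, register and buffer are
 * illustrative assumptions; wiring the routine to the interrupt vector
 * depends on the actual hardware and is not shown.                      */
#include <stdint.h>

#define DEV_DATA (*(volatile uint8_t *)0x40002001u)  /* assumed data register */

static volatile uint8_t rx_buffer[64];
static volatile uint8_t rx_count;

/* Invoked when the device raises its interrupt request: the CPU suspends
 * the current program, services the transfer here, then resumes.        */
void device_isr(void)
{
    rx_buffer[rx_count % 64] = DEV_DATA;
    rx_count++;
    /* acknowledging/clearing the interrupt would also happen here       */
}

int main(void)
{
    for (;;) {
        /* The CPU is free to do useful foreground work here; there is
         * no polling of status bits in this loop.                       */
    }
}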
Advantages:
•It is faster and more efficient than Programmed I/O.
•It requires very little hardware support.
•CPU does not check status bits periodically.
Disadvantages:
•It can be tricky to implement if using a low-level language.
•It can be tough to get the various pieces to work well together.
•The hardware manufacturer / OS maker usually implements it, e.g., Microsoft.
Direct Memory Access (DMA)
Data transfer between fast storage media, such as a memory unit and a magnetic disk, is limited by the speed of the CPU. It is therefore best to allow the peripherals to communicate with the storage directly over the memory buses, removing the intervention of the CPU.
This mode of data transfer is known as Direct Memory Access (DMA). During Direct Memory Access, the CPU is idle and has no control over the memory buses. The DMA controller takes over the buses and directly manages the data transfer between the memory unit and the I/O devices.
Bus Request - The DMA controller uses the bus request signal to ask the CPU to relinquish control of the buses.
Bus Grant - The CPU activates bus grant to inform the DMA controller that it can take control of the buses. Once control is taken, data can be transferred in several ways.
Types of DMA transfer using DMA controller:
•Burst Transfer: In this mode, the DMA controller returns bus control only after the complete data transfer. A register is used as a byte count; it is decremented for every byte transferred, and once it reaches zero the DMA controller releases the buses. When the DMA controller operates in burst mode, the CPU is halted for the duration of the data transfer.
•Cycle Stealing: This is an alternative method in which the DMA controller transfers one word at a time and then returns control of the buses to the CPU. The CPU operation is delayed only for one memory cycle, allowing the data transfer to "steal" one memory cycle. (A register-setup sketch for a hypothetical controller follows this list.)
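The following C sketch shows how a hypothetical DMA controller might be programmed for a burst transfer; the register layout and control bits are assumptions for illustration only:

/* Setting up a hypothetical DMA controller for a burst transfer.
 * Register layout and control bits are assumptions only.          */
#include <stdint.h>

#define DMA_ADDR  (*(volatile uint32_t *)0x40003000u)  /* start address register */
#define DMA_COUNT (*(volatile uint32_t *)0x40003004u)  /* word count register    */
#define DMA_CTRL  (*(volatile uint32_t *)0x40003008u)  /* control register       */

#define DMA_CTRL_BURST (1u << 1)  /* assumed: burst mode rather than cycle stealing */
#define DMA_CTRL_START (1u << 0)  /* assumed: begin the transfer                    */

/* The controller decrements the count register for every word moved and
 * releases the buses when it reaches zero.                               */
void dma_start_burst(uint32_t mem_addr, uint32_t count)
{
    DMA_ADDR  = mem_addr;
    DMA_COUNT = count;
    DMA_CTRL  = DMA_CTRL_BURST | DMA_CTRL_START;
    /* From here the controller raises Bus Request, the CPU answers with
     * Bus Grant, and the transfer proceeds without CPU involvement.      */
}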
Interrupts
Data transfer between the CPU and the peripherals is initiated by the CPU. But the CPU
cannot start the transfer unless the peripheral is ready to communicate with the CPU. When a
device is ready to communicate with the CPU, it generates an interrupt signal. A number of
input-output devices are attached to the computer and each device is able to generate an
interrupt request.
The main job of the interrupt system is to identify the source of the interrupt. There is also a
possibility that several devices will request simultaneously for CPU communication. Then,
the interrupt system has to decide which device is to be serviced first.
Priority Interrupt
A priority interrupt is a system that decides the priority in which the various devices generating interrupt signals at the same time will be serviced by the CPU. The system also has the authority to decide which conditions are allowed to interrupt the CPU while another interrupt is being serviced.
Generally, devices with high speed transfer such as magnetic disks are given high priority and
slow devices such as keyboards are given low priority.
When two or more devices interrupt the computer simultaneously, the computer services the
device with the higher priority first.
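One simple way to realize such a priority scheme is software polling, where the interrupt dispatcher checks the devices in a fixed priority order. The device list and service routines in the C sketch below are illustrative assumptions:

/* Priority resolution by software polling: devices are checked in a fixed
 * order, so the fastest device (index 0) wins simultaneous requests.
 * The device list and service routines are illustrative only.            */
#include <stdio.h>

#define NUM_DEVICES 3

/* Pending-request flags, highest priority first: disk, printer, keyboard. */
static int pending[NUM_DEVICES];

static void service_disk(void)     { printf("servicing disk\n"); }
static void service_printer(void)  { printf("servicing printer\n"); }
static void service_keyboard(void) { printf("servicing keyboard\n"); }

static void (*const service[NUM_DEVICES])(void) = {
    service_disk, service_printer, service_keyboard
};

/* Entry point reached on any interrupt: scan the devices in priority order. */
static void interrupt_dispatch(void)
{
    for (int i = 0; i < NUM_DEVICES; i++) {
        if (pending[i]) {          /* first match = highest-priority request */
            pending[i] = 0;
            service[i]();
            return;
        }
    }
}

int main(void)
{
    pending[0] = pending[2] = 1;   /* disk and keyboard interrupt together   */
    interrupt_dispatch();          /* the disk (higher priority) goes first  */
    interrupt_dispatch();          /* then the keyboard                      */
    return 0;
}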
Types of Interrupts:
Following are some different types of interrupts:
Hardware Interrupts
When the signal to the processor comes from an external device or hardware, the interrupt is known as a hardware interrupt.
Let us consider an example: when we press any key on our keyboard to do some action, this key press generates an interrupt signal for the processor to perform a certain action.
•Normal Interrupt
The interrupts that are caused by software instructions are called normal software interrupts.
•Exception
Unplanned interrupts which are produced during the execution of some program are
called exceptions, such as division by zero.
Memory Organization in Computer Architecture
A memory unit is the collection of storage units or devices together. The memory unit stores the
binary information in the form of bits. Generally, memory/storage is classified into 2 categories:
•Volatile Memory: This loses its data when power is switched off.
•Non-Volatile Memory: This is permanent storage and does not lose any data when power is switched off.
Memory Hierarchy
The total memory capacity of a computer can be visualized as a hierarchy of components. The memory hierarchy system consists of all the storage devices contained in a computer system, from the slow auxiliary memory to the fast main memory and on to the smaller cache memory.
Auxiliary memory access time is generally 1000 times that of the main memory, hence it is at the
bottom of the hierarchy.
The main memory occupies the central position because it is equipped to communicate directly with
the CPU and with auxiliary memory devices through Input/output processor (I/O).
When programs not residing in main memory are needed by the CPU, they are brought in from auxiliary memory. Programs not currently needed in main memory are transferred to auxiliary memory to provide space in main memory for the programs that are currently in use.
The cache memory is used to store program data which is currently being executed in the CPU.
The approximate access-time ratio between cache memory and main memory is about 1 to 7~10.
Memory Access Methods
Each memory type is a collection of numerous memory locations. To access data from any memory, it must first be located, and then the data is read from the memory location. Following are the methods to access information from memory locations:
1.Random Access: Main memories are random access memories, in which each memory location has a unique address. Using this unique address, any memory location can be reached in the same amount of time, in any order.
2.Sequential Access: In this method, memory is accessed in a fixed linear sequence; a location can be reached only by passing through all the preceding locations, as in magnetic tape.
3.Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write head.
Main Memory
The memory unit that communicates directly with the CPU, auxiliary memory and cache memory is called main memory. It is the central storage unit of the computer system. It is a large and fast memory used to store data during computer operations. Main memory is made up of RAM and ROM, with RAM integrated circuit chips holding the major share.
• DRAM: Dynamic RAM is made of capacitors and transistors and must be refreshed every 10~100 ms. It is slower and cheaper than SRAM.
• SRAM: Static RAM has a six-transistor circuit in each cell and retains data until powered off.
•ROM: Read Only Memory is non-volatile and is more like permanent storage for information. It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on.
•PROM (Programmable ROM), EPROM (Erasable PROM) and EEPROM (Electrically Erasable PROM) are some commonly used ROMs.
Auxiliary/Secondary Memory
Devices that provide backup storage are called auxiliary memory. For example: Magnetic disks and
tapes are commonly used auxiliary devices. Other devices used as auxiliary memory are magnetic
drums, magnetic bubble memory and optical disks.
It is not directly accessible to the CPU, and is accessed using the Input/Output channels.
Cache Memory
The data or contents of the main memory that are used again and again by CPU, are stored in the cache
memory so that we can easily access that data in shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the data is not found in cache memory, then the CPU moves on to the main memory. It also transfers a block of recently used data into the cache and keeps deleting old data in the cache to accommodate the new data.
Hit Ratio
The performance of cache memory is measured in terms of a quantity called the hit ratio. When the CPU refers to memory and finds the word in the cache, it is said to produce a hit. If the word is not found in the cache, it is read from main memory, and this counts as a miss.
The ratio of the number of hits to the total number of CPU references to memory is called the hit ratio.
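The short C program below works through a hit-ratio calculation with assumed numbers (980 hits out of 1000 references, 10 ns cache access, 100 ns main-memory access, and a miss served at main-memory speed):

/* Worked hit-ratio example with assumed numbers: 980 hits out of 1000
 * references, 10 ns cache access, 100 ns main-memory access, and a
 * miss served at main-memory speed.                                   */
#include <stdio.h>

int main(void)
{
    double hits = 980.0, refs = 1000.0;
    double t_cache = 10.0, t_main = 100.0;   /* access times in ns */

    double hit_ratio = hits / refs;          /* hits / total CPU references */
    double avg_time  = hit_ratio * t_cache + (1.0 - hit_ratio) * t_main;

    printf("hit ratio = %.2f\n", hit_ratio);              /* 0.98    */
    printf("average access time = %.1f ns\n", avg_time);  /* 11.8 ns */
    return 0;
}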
Associative Memory
It is also known as content addressable memory (CAM). It is a memory chip in which each bit position can be compared. The content is compared in each bit cell, which allows very fast table lookups. Since the entire chip can be compared at once, contents are stored randomly without regard to an addressing scheme. These chips have less storage capacity than regular memory chips.
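The C sketch below models an associative (content-addressable) search: every stored word is compared against a masked key and one match bit is produced per word, similar to a CAM's match lines. The table contents are made-up example values:

/* Associative (content-addressable) search: every stored word is compared
 * against a masked key and one match bit is produced per word, much like
 * a CAM's match lines. The table contents are made-up example values.    */
#include <stdio.h>
#include <stdint.h>

#define CAM_WORDS 4

static const uint16_t cam[CAM_WORDS] = { 0x12AB, 0x7F00, 0x00FF, 0x12AB };

/* Compare the key with every word under the given mask (all words at once
 * in real hardware; a loop stands in for that parallelism here).          */
static unsigned cam_search(uint16_t key, uint16_t mask)
{
    unsigned match = 0;
    for (int i = 0; i < CAM_WORDS; i++)
        if ((cam[i] & mask) == (key & mask))
            match |= 1u << i;
    return match;
}

int main(void)
{
    /* Search by content: which words have 0x12 in the upper byte? */
    unsigned match = cam_search(0x1200, 0xFF00);
    printf("match lines = 0x%X\n", match);   /* bits 0 and 3 set -> 0x9 */
    return 0;
}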
Virtual Memory
Virtual memory is the separation of logical memory from physical memory. This
separation provides large virtual memory for programmers when only small physical
memory is available.
Virtual memory is used to give programmers the illusion that they have a very large
memory even though the computer has a small main memory. It makes the task of
programming easier because the programmer no longer needs to worry about the amount
of physical memory available.
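The section above describes virtual memory as the separation of logical from physical memory; one common way to realize that separation is paging, which is not detailed here. The toy C sketch below, with a made-up page size and page table, shows how a logical address could be translated to a physical address; it is an illustration only, not a description of any particular system:

/* Toy logical-to-physical address translation through a tiny page table.
 * Page size and table contents are made-up values for illustration only. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096u
#define NUM_PAGES 8            /* logical address space of 8 pages */

/* page_table[p] = physical frame currently holding logical page p. */
static const uint32_t page_table[NUM_PAGES] = { 5, 2, 7, 0, 3, 6, 1, 4 };

static uint32_t translate(uint32_t logical_addr)
{
    uint32_t page   = logical_addr / PAGE_SIZE;    /* logical page number */
    uint32_t offset = logical_addr % PAGE_SIZE;    /* offset within page  */
    return page_table[page] * PAGE_SIZE + offset;  /* physical address    */
}

int main(void)
{
    uint32_t logical = 2u * PAGE_SIZE + 0x10u;     /* byte 0x10 of page 2 */
    printf("logical 0x%X -> physical 0x%X\n",
           (unsigned)logical, (unsigned)translate(logical));
    return 0;
}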