
COMPUTER ORGANIZATION

UNIT-IV

I/O ORGANIZATION
1) INPUT-OUTPUT ORGANIZATION

Input/Output Subsystem

The I/O subsystem of a computer provides an efficient mode of communication
between the central system and the outside environment. It handles all the input-output
operations of the computer system.

Peripheral Devices

Input or output devices that are connected to a computer are called peripheral devices.
These devices are designed to read information into or out of the memory unit upon command
from the CPU and are considered to be part of the computer system. These devices are also
called peripherals.

For example: Keyboards, display units and printers are common peripheral devices.

There are three types of peripherals:

1. Input peripherals: Allow user input from the outside world to the computer. Example:
Keyboard, Mouse, etc.
2. Output peripherals: Allow information output from the computer to the outside world.
Example: Printer, Monitor, etc.
3. Input-output peripherals: Allow both input (from the outside world to the computer) as well
as output (from the computer to the outside world). Example: Touch screen, etc.

Interfaces

An interface is a shared boundary between two separate components of the computer system
which can be used to attach two or more components to the system for communication
purposes.

There are two types of interface:

1. CPU Interface

2. I/O Interface

Let's understand the I/O Interface in detail.


Input-Output Interface

Peripherals connected to a computer need special communication links for interfacing
with the CPU. In a computer system, there are special hardware components between the CPU and
peripherals to control or manage the input-output transfers. These components are called input-
output interface units because they provide communication links between the processor bus and
peripherals. They provide a method for transferring information between the internal system and
input-output devices.

The input-output interface is required because there exist many differences
between the central computer and each peripheral while transferring information. Some major
differences are:

1. Peripherals are electromechanical and electromagnetic devices and their manner of operation
is different from the operation of the CPU and memory, which are electronic devices. Therefore, a
conversion of signal values may be required.

2. The data transfer rate of peripherals is usually slower than the transfer rate of the CPU, and
consequently a synchronisation mechanism is needed.

3. Data codes and formats in peripherals differ from the word format in the CPU and Memory.

4. The operating modes of peripherals differ from each other, and each must be controlled
so as not to disturb the operation of the other peripherals connected to the CPU.

These differences are resolved through the input-output interface. The input-output
interface (interface unit) contains various components, each of which performs one or more vital
functions for the smooth transfer of information between the CPU and peripherals.
Input/Output Channels

A channel is an independent hardware component that coordinates all I/O to a set of
controllers. Computer systems that use I/O channels have special hardware components that
handle all I/O operations.

Channels use separate, independent and low-cost processors, called channel processors, for
their functioning.

Channel processors are simple, but contain sufficient memory to handle all I/O tasks.
When an I/O transfer is complete or an error is detected, the channel controller communicates
with the CPU using an interrupt, and informs the CPU about the error or the task completion.

Each channel supports one or more controllers or devices. Channel programs contain
lists of commands for the channel itself and for the various connected controllers or devices. Once
the operating system has prepared a list of I/O commands, it executes a single I/O machine
instruction to initiate the channel program; the channel then assumes control of the I/O
operations until they are completed.

IBM 370 I/O Channel

The I/O processor in the IBM 370 computer is called a Channel. A computer system
configuration includes a number of channels which are connected to one or more I/O devices.

Categories of I/O Channels

Following are the different categories of I/O channels:

Multiplexer

The Multiplexer channel can be connected to a number of slow and medium speed
devices. It is capable of operating a number of I/O devices simultaneously.

Selector

This channel can handle only one I/O operation at a time and is used to control one high-speed
device at a time.

Block-Multiplexer

It combines the features of both multiplexer and selector channels.


The CPU can communicate directly with the channels through control lines. The following diagram
shows the word format of channel operation.

2) PERIPHERAL DEVICES

Peripheral devices are those devices that are linked either internally or externally to
a computer. These devices are commonly used to transfer data. The most common processes
that are carried out in a computer are entering data and displaying processed data. Several
devices can be used to receive data and display processed data. The devices used to perform
these functions are called peripherals or I/O devices.

Peripherals read information from or write information into the memory unit on receiving a command
from the CPU. They are considered to be a part of the total computer system. As they require
a conversion of signal values, these devices can be referred to as electromechanical and
electromagnetic devices. The most common peripherals are the printer, scanner, keyboard,
mouse, tape device, microphone, and external modem that are externally connected to the
computer.

The following are some of the commonly used peripherals −

Keyboard

The keyboard is the most commonly used input device. It is used to provide commands
to the computer. The commands are usually in the form of text. The keyboard consists of many
keys such as function keys, numeric keypad, character keys, and various symbols.

Monitor

The most commonly used output device is the monitor. A cable connects the monitor
to the video adapters in the computer’s motherboard. These video adapters convert the
electrical signals to the text and images that are displayed. The images on the monitor are made
of thousands of pixels. The cursor is the characteristic feature of display devices. It marks the
position on the screen where the next character will be inserted.
Printer

Printers provide a permanent record of computer data or text on paper. We can classify
printers as impact and non-impact printers. Impact printers print characters due to the physical
contact of the print head with the paper. In non-impact printers, there is no physical contact.

Magnetic Tape

Magnetic tapes are used in most companies to store data files. Magnetic tapes use a
read-write mechanism. The read-write mechanism refers to writing data on or reading data
from a magnetic tape. The tapes store the data in a sequential manner. In this sequential
processing, the computer must begin searching at the beginning and check each record until
the desired data is found.

Magnetic tape is the cheapest medium for storage because it can store a large number
of binary digits, bytes, or frames on every inch of the tape. The advantages of using magnetic
tape include unlimited storage, low cost, high data density, rapid transfer rate, portability, and
ease of use.

Magnetic Disk

Another medium for storing data is the magnetic disk. Magnetic disks have high-
speed rotational surfaces coated with magnetic material. A read-write mechanism is used to
write on or read from the magnetic disk. Magnetic disks are generally used
for the volume storage of programs and information.

Some other peripheral devices found in computer systems include digital
incremental plotters, optical and magnetic readers, analog-to-digital converters, and various
data acquisition equipment.

3) MODE OF TRANSFER

The binary information that is received from an external device is usually stored in the
memory unit. The information that is transferred from the CPU to the external device
originates from the memory unit. The CPU merely processes the information, but the source and
target are always the memory unit. Data transfer between the CPU and the I/O devices may be done
in different modes.
Data transfer to and from the peripherals may be done in any of the three possible ways:

1. Programmed I/O.

2. Interrupt-initiated I/O.

3. Direct memory access (DMA).

Now let's discuss each mode one by one.

1. Programmed I/O:

Programmed I/O results from the I/O instructions that are written in the computer program.
Each data item transfer is initiated by an instruction in the program. Usually the transfer is to or from
a CPU register and memory. In this mode, constant monitoring of the
peripheral devices by the CPU is required.

Example of Programmed I/O: In this case, the I/O device does not have direct access to the
memory unit. A transfer from an I/O device to memory requires the execution of several
instructions by the CPU, including an input instruction to transfer the data from the device to the
CPU and a store instruction to transfer the data from the CPU to memory. In programmed I/O, the
CPU stays in a program loop until the I/O unit indicates that it is ready for data transfer. This
is a time-consuming process since it needlessly keeps the CPU busy. This situation can be
avoided by using an interrupt facility, which is discussed below.
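A schematic sketch of the busy-wait loop implied above may help. The DummyDevice class and its ready()/read() methods are invented for illustration; real programmed I/O would poll a device status register through port or memory-mapped instructions.

class DummyDevice:
    """Stand-in peripheral that hands out one byte per read."""
    def __init__(self, data):
        self.data = list(data)
    def ready(self):
        return bool(self.data)              # pretend the next byte is always ready
    def read(self):
        return self.data.pop(0)

def programmed_io_read(device, memory, dest, count):
    """Busy-wait (polling) transfer: the CPU moves every byte itself."""
    for i in range(count):
        while not device.ready():           # CPU loops on the device status
            pass
        memory[dest + i] = device.read()    # one input plus one store per byte

memory = bytearray(16)
programmed_io_read(DummyDevice(b"hello"), memory, dest=0, count=5)
print(memory[:5])                           # bytearray(b'hello')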

2. Interrupt- initiated I/O:

As seen above, in programmed I/O the CPU is kept busy unnecessarily. This situation can
very well be avoided by using an interrupt-driven method for data transfer. The interrupt
facility and special commands are used to inform the interface to issue an interrupt request signal
whenever data is available from any device. In the meantime the CPU can proceed with any
other program execution. The interface meanwhile keeps monitoring the device. Whenever it
determines that the device is ready for data transfer, it initiates an interrupt request signal to
the computer. Upon detection of an external interrupt signal, the CPU momentarily stops the
task that it was performing, branches to the service program to process the I/O transfer,
and then returns to the task it was originally performing.

Note: Both methods, programmed I/O and interrupt-driven I/O, require the active
intervention of the processor to transfer data between memory and the I/O module, and any
data transfer must traverse a path through the processor. Thus both of these forms of I/O suffer
from two inherent drawbacks.

 The I/O transfer rate is limited by the speed with which the processor can test and service a
device.

 The processor is tied up in managing an I/O transfer; a number of instructions must be
executed for each I/O transfer.

3. Direct Memory Access: The data transfer between fast storage media such as a
magnetic disk and the memory unit is limited by the speed of the CPU. Thus we can allow
the peripherals to communicate directly with the memory using the memory buses,
removing the intervention of the CPU. This type of data transfer technique is known as
DMA or direct memory access. During DMA the CPU is idle and has no control over
the memory buses. The DMA controller takes over the buses to manage the transfer
directly between the I/O devices and the memory unit.

Bus Request : It is used by the DMA controller to request the CPU to relinquish the control of
the buses.

Bus Grant : It is activated by the CPU to inform the external DMA controller that the buses
are in the high-impedance state and the requesting DMA controller can take control of the buses. Once the
DMA controller has taken control of the buses it transfers the data. This transfer can take place in
many ways.

Types of DMA transfer using DMA controller:

Burst Transfer: DMA returns the bus after the complete data transfer. A register is used as a byte
count, being decremented for each byte transferred, and upon the byte count reaching zero, the
DMA controller (DMAC) releases the bus. When the DMAC operates in burst mode, the CPU is halted for
the duration of the data transfer.
Steps involved are:

1. Bus grant request time.

2. Transfer the entire block of data at the transfer rate of the device, because the device is usually slower
than the speed at which the data can be transferred to the CPU.

3. Release the control of the bus back to the CPU.

So, total time taken to transfer the N bytes = bus grant request time + N * (memory transfer rate) + bus release control time.

Where,

X µsec = data transfer time or preparation time (words/block)

Y µsec = memory cycle time or transfer time (words/block)

% CPU idle (blocked) = (Y / (X + Y)) * 100

% CPU busy = (X / (X + Y)) * 100
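As a rough illustration, the sketch below plugs hypothetical values of N, the handshake times, and X and Y into the burst-mode expressions above; all numbers are assumed, not taken from the text.

def burst_transfer_time(n_bytes, grant_us, transfer_us, release_us):
    """Total burst-mode time in microseconds: grant + N * per-byte transfer + release."""
    return grant_us + n_bytes * transfer_us + release_us

# Hypothetical figures: X = preparation time, Y = memory cycle time per block.
X, Y = 40.0, 10.0                      # microseconds per block (assumed)
cpu_blocked = Y / (X + Y) * 100        # fraction of time the CPU is held off the bus
cpu_busy = X / (X + Y) * 100

print(burst_transfer_time(1024, 2.0, 0.5, 1.0))   # 2 + 512 + 1 = 515.0 us
print(f"CPU blocked: {cpu_blocked:.1f}%, CPU busy: {cpu_busy:.1f}%")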

Cycle Stealing: An alternative method in which the DMA controller transfers one word at a time,
after which it must return the control of the buses to the CPU. The CPU delays its operation
for only one memory cycle to allow the direct-memory I/O transfer to "steal" one memory
cycle.

Steps involved are:

1. Buffer the byte into the buffer.

2. Inform the CPU that the device has 1 byte to transfer (i.e. bus grant request).

3. Transfer the byte (at system bus speed).

4. Release the control of the bus back to the CPU.

Before moving on to transfer the next byte of data, the device performs step 1 again, so that the bus isn't tied
up and the transfer doesn't depend upon the transfer rate of the device.

So, for 1 byte of data, the time taken in cycle stealing mode is T = time required
for bus grant + 1 bus cycle to transfer data + time required to release the bus. For N bytes the total time is N x T.

In cycle stealing mode we always follow the pipelining concept: while one byte is being
transferred, the device is preparing the next byte in parallel. When "the fraction of CPU time to the
data transfer time" is asked for, cycle stealing mode is assumed.
Where,

X µsec = data transfer time or preparation time (words/block)

Y µsec = memory cycle time or transfer time (words/block)

% CPU idle (blocked) = (Y / X) * 100

% CPU busy = (X / Y) * 100
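A minimal sketch of the cycle-stealing timing, assuming hypothetical bus grant, bus cycle and release times; the N x T total mirrors the expression above.

def cycle_steal_time(n_bytes, grant_us, bus_cycle_us, release_us):
    """Each byte pays the full grant/transfer/release cost: total = N * T."""
    per_byte = grant_us + bus_cycle_us + release_us
    return n_bytes * per_byte

# Assumed figures; device preparation of the next byte overlaps these steals.
print(cycle_steal_time(1024, 0.1, 0.5, 0.1))   # 1024 * 0.7 = 716.8 us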

Interleaved mode: In this technique, the DMA controller takes over the system bus when the
microprocessor is not using it. The bus is shared in alternate half cycles, i.e. half cycle for DMA and half cycle
for the processor.

4) DIRECT MEMORY ACCESS (DMA)

Direct Memory Access (DMA) is a feature or method used in computer systems that
allows certain hardware components to access the system's main memory (RAM) without
involving the central processing unit (CPU) in each data transfer. DMA is generally used to
improve data transfer efficiency between peripheral devices and memory.

Functions of DMA
A Direct Memory Access controller's basic job is to enable efficient and direct data
transfers between peripheral devices and the system's main memory (RAM) without requiring
continual CPU participation. The following are the primary roles of a DMA controller:

1. Data Transfer Management: DMA controllers govern data transfers between
peripheral devices (such as hard drives, network cards, or GPUs) and system memory.
They can read data from or write data to memory on these devices' behalf.
2. Data Direction Control: DMA controllers can be designed to regulate the direction of
data transfers from a peripheral device to memory (read) or from memory to a
peripheral device (write).
3. Address Generation: DMA controllers produce memory addresses for data transfer
activities. They may be set to automatically increase or decrease memory addresses,
making them appropriate for block transfers.
4. Data Block Management: DMA controllers can transmit data in blocks rather than
sending each individual byte or word independently. This increases efficiency since
fewer control signals are required for greater transfers.
5. Prioritization: Some DMA controllers enable prioritization, which allows particular
devices or channels to take precedence over others when the DMA controller's
resources are in competition.
6. Error Handling: To preserve data integrity during transfers, DMA controllers may
include error detection and handling techniques. They can notify faults to the CPU so
that necessary action can be taken.
7. Channel Management: DMA controllers may contain numerous channels or request
lines, each of which is linked with a different peripheral device. They are in charge of
allocating these channels to devices that require DMA transfers.
8. Interrupt Generation: When a data transfer is completed, DMA controllers can
raise an interrupt to inform the CPU, letting it perform any post-transfer activities
or handle error conditions.
9. Bus Arbitration: In systems with several DMA-capable devices, the DMA controller
may execute bus arbitration to ensure that competing devices have equal access to the
system bus.
10. Buffering: Some DMA controllers may incorporate buffers to hold data temporarily
during transfers, which can assist smooth out fluctuations in data transfer speeds
between peripherals and memory.
11. Cycle Stealing: In systems with limited DMA capabilities, the DMA controller may
operate in cycle-stealing mode, taking control of the bus for brief periods of time and
transferring tiny quantities of data between peripheral and memory.

DMA's primary purpose is to increase overall system speed and efficiency by offloading data
transfer responsibilities from the CPU. DMA reduces CPU overhead by allowing peripheral devices
to directly access memory, ensuring that data transfers occur at the fastest feasible speed
enabled by the hardware. This is especially true for operations involving large amounts of data,
such as disk I/O, network connectivity, and multimedia processing.

How does DMA work?


Let's delve into a more detailed explanation of how Direct Memory Access (DMA) works:

o Initialization: The procedure begins with the CPU configuring the DMA controller.
The CPU provides the DMA controller with crucial data transfer information such as the
source and destination memory addresses, the number of bytes to transfer, and the transfer
direction (a configuration sketch follows this list).
o Peripheral Request: When a peripheral device (such as a network card, disc controller,
or sound card) needs to read or write to memory, it sends a request to the DMA
controller.
o Permission Granted: The DMA controller takes control of the system bus if the CPU
provides permission (usually via a handshake mechanism or by validating priority
settings).
o Data Transfer: With control of the system bus, the DMA controller begins data
transfer between the peripheral device and memory. Depending on the operation and
setup, it can transport data in blocks or in a streaming form.
o Parallel Operation: It is important to note that the CPU can continue to execute other
instructions and operations while the data transfer process is running. This parallel
procedure improves system efficiency since the CPU is not used by the transfer itself.
o Completion Notification: When the data transfer is complete, the DMA controller tells
the requesting peripheral device that the operation is complete. It can also raise an
interrupt to notify the CPU of the completion of the transfer or of any possible
problems.
o Bus Release: After completing the data transfer, the DMA controller relinquishes
control of the system bus, enabling the CPU and other devices to use it as needed.
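The initialization and transfer steps above can be pictured with a small behavioural model. The DMAController class, its register names and methods below are invented purely for illustration; real controllers expose device-specific registers and signalling.

from dataclasses import dataclass

@dataclass
class DMAController:
    source: int = 0                 # source address register (hypothetical layout)
    destination: int = 0            # destination address register
    count: int = 0                  # number of bytes to move
    write_to_memory: bool = True    # transfer direction flag
    busy: bool = False

    def configure(self, src, dst, count, write_to_memory):
        """CPU-side setup; after this the CPU is free to do other work."""
        self.source, self.destination = src, dst
        self.count, self.write_to_memory = count, write_to_memory

    def start(self, memory, device_data):
        """Model of the transfer the controller performs over the bus."""
        self.busy = True
        if self.write_to_memory:    # peripheral -> memory direction
            memory[self.destination:self.destination + self.count] = \
                device_data[:self.count]
        self.busy = False           # a real controller would raise an interrupt here

memory = bytearray(64 * 1024)
dma = DMAController()
dma.configure(src=0, dst=0x1000, count=512, write_to_memory=True)
dma.start(memory, bytes(range(256)) * 2)
print(memory[0x1000:0x1004])        # first bytes landed in memory without CPU copying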

Modes of DMA

DMA (Direct Memory Access) operates in a variety of modes or configurations to support
different data transmission scenarios. DMA's most popular modes are:

1. Block Transfer Mode: The DMA controller moves a fixed-size block of data from the
source to the destination in a single continuous operation in block transfer mode. This
mode is frequently used for large-scale data transfers, such as reading or writing a block
of data from or to a storage device or across memory regions.
2. Cycle Stealing Mode: The DMA controller can momentarily interrupt the CPU's
function to transmit a tiny amount of data at a time using cycle stealing mode. During
a single bus cycle, it takes control of the system bus, transmits a little amount of data,
and then relinquishes control. This mode is appropriate for situations when data flow is
minimal and intermittent.
3. Burst Transfer Mode: In burst transfer mode, the DMA controller takes over the
system bus for a prolonged period to transmit a bigger block of data, which is frequently
larger than in block transfer mode. When a bigger volume of data must be transported
more effectively, this mode is employed.
4. Demand Transfer Mode: When a peripheral device is ready to send data, it launches
a DMA request in demand transfer mode. The DMA controller controls the data
transmission and replies to the request, which is excellent for devices that run at
irregular intervals.
5. Scatter-Gather Mode: Scatter-gather mode is an advanced technique for efficiently
transferring non-contiguous data. During the transfer, the DMA controller can manage
data distributed across several memory regions and consolidate it into a single
contiguous block (a descriptor-list sketch appears below). This mode is frequently used in networking and storage systems.
6. Chain Transfer Mode: Chain transfer mode allows us to plan ahead of time a sequence
of DMA transfers. When one transfer is finished, the DMA controller automatically
advances to the next transfer in the chain without requiring any CPU interaction. This
mode is useful for handling a sequence of data transfer processes.
7. Cycle Stealing Burst Mode: The cycle stealing burst mode combines the advantages
of cycle stealing and burst transfer modes. Because the DMA controller can transport
data in bursts, temporarily pausing the CPU between bursts, it is suited for a broad range
of data transmission applications.

These DMA modes provide flexibility in optimizing data transfer efficiency for various
applications and contexts, allowing for effective data transfer management between peripheral
devices and memory while minimizing CPU interference. The mode used is determined by the
precise needs of the data transfer operation as well as the capabilities of the DMA controller in
use.
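As a rough sketch of scatter-gather mode, the snippet below walks an assumed descriptor list of (address, length) pairs and gathers non-contiguous regions into one buffer; the descriptor format is illustrative only, not that of any particular controller.

def scatter_gather_read(memory, descriptors):
    """Gather the regions named by the descriptor list into one contiguous buffer."""
    out = bytearray()
    for addr, length in descriptors:           # each descriptor: (start address, byte count)
        out += memory[addr:addr + length]
    return bytes(out)

ram = bytearray(range(256)) * 16               # 4 KiB of sample memory
chain = [(0x010, 8), (0x200, 4), (0x7F0, 16)]  # non-contiguous source regions
print(scatter_gather_read(ram, chain).hex())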

5) RANDOM ACCESS MEMORY (RAM)

RAM stands for Random Access Memory. It is the internal memory of the computer
which stores different types of data and information as per the requirement. Moreover, we
can also call it the main memory, primary memory, or read/write memory. RAM stores all
the data that the CPU requires during the execution of a program. Moreover, it is
a volatile memory, i.e. it loses data as soon as the power is cut off.

Besides, as per its name, data access is random. This means that we can
access any location in the memory directly, without having to go through the preceding
locations in sequence. Since RAM is a volatile memory, a backup is present in most systems in
the form of an uninterrupted power supply (UPS). Moreover, the speed and performance of a
system are directly proportional to the size of RAM.
Types of Random Access Memory (RAM)

There are basically two types of RAM. They are as follows:

1. SRAM (Static Random Access Memory)

2. DRAM (Dynamic Random Access Memory)

SRAM

The word static denotes that the data stays in the memory, but only as long as the power supply is ON.
Furthermore, an SRAM cell consists of 6 transistors and no capacitors. SRAM uses more chips than
DRAM for the same amount of storage, and hence its cost of manufacturing is greater than that
of DRAM.

Moreover, due to its fast access time, SRAM is used as the cache memory. We can list the
characteristics of SRAM as follows:

 Contains circuits similar to D flip flops.

 Expensive in nature.

 It requires more power.

 Contents remain safe until power is ON.

 Faster access time than DRAM.

 Cannot store much data on a single chip hence, requires more chips.

 Heat generation is more.

 Acts as cache due to its fast access speed.

 There is no need to refresh it again and again.


DRAM

In this memory, we have to refresh the data again and again to retain it. Furthermore, it performs
this task by placing the memory on a refresh circuit that rewrites the data many times. A DRAM cell consists
of only one transistor and one capacitor. We can list the characteristics of DRAM as follows:

 It is made up of capacitors which leak charge, and hence we need to refresh the data several times.

 It is inexpensive.

 Requires less power.

 It needs a recharge or refresh every millisecond to maintain the data.

 Slower access time than SRAM.

 It can store many bits on a single chip.

 Less heat generation.

 It acts as the main memory.

 Smaller in size.
Difference Between SRAM and DRAM

The differences between SRAM and DRAM are as follows:

DRAM | SRAM

It is less costly. | It is more expensive.

Slow performance since the access time is more. | Access time is much faster and synchronizes with the CPU; hence, performance is better.

It is used as the main memory. | It is used as the level 1 or level 2 cache memory.

One cell consists of only one transistor, so more cells fit in the same space. | One cell consists of 6 transistors, so fewer cells fit in the same space.

Power consumption is less. | Power consumption is more.

Storage capacity is more. | Storage capacity is less.

Volatile, and also requires circuits for refreshing the data. | Also volatile, but the data stays in memory as long as power is supplied, without any refreshing.

It is present as the main memory on the motherboard. | It acts as a cache and is therefore present between the CPU and main memory.

6) READ ONLY MEMORY (ROM)

As the name suggests, we can only read from this memory and cannot write to it.
Moreover, it is non-volatile in nature, which means that it does not lose data after the power supply
is cut off. Furthermore, its main function is to store the programs and instructions that are
required to boot (start) the system; this is the bootstrap process. Other than computers, many
devices like calculators, washing machines, ovens, etc. use ROM.
Features of Read Only Memory

The ROM has the features as follows:

 It is non-volatile in nature.

 Less costly than the RAM.

 As only the read operation is allowed, no changes can occur.

 It is easy to test the ROM.

 Due to its nature, it is more reliable than RAM.

 Does not require any refreshing.


Types of Read Only Memory

There are 4 types of ROM, of which 3 are the most common. These are as follows:

 MROM (masked read only memory)

 PROM (programmable read only memory)

 EPROM (erasable and programmable read only memory)

 EEPROM (electrically erasable and programmable read only memory)


MROM (Masked Read Only Memory)

These were the very first ROMs. Furthermore, these are hard-wired devices that contain a pre-
programmed set of data and instructions. Moreover, they are inexpensive in nature.

PROM (Programmable Read Only Memory)

It is a programmable ROM that the user can program, but only once. Furthermore, the user writes
the data and instructions using a PROM programmer. Moreover, after writing once, the user cannot
change or erase the data and instructions.
EPROM (Erasable and Programmable Read Only Memory)

We can reprogram this memory by erasing the data. Furthermore, to erase the data it has to be
exposed to ultraviolet light. During programming, a charge is trapped in the insulated gate
region. On exposing it to ultraviolet light for around 40 minutes, this charge is destroyed.
Hence, in this way, the data gets erased. After erasing the data we can reprogram the ROM.

EEPROM (Electrically Erasable and Programmable Read Only Memory)

We can program and erase this memory electrically. Furthermore, we do not require any
ultraviolet light to erase the data. Moreover, erasing and reprogramming is possible many times.
Besides, we can selectively erase any particular location of the memory; we can
delete only one byte from the memory at a time rather than erasing the whole chip. Therefore, the
process of reprogramming is flexible but slow.

Difference Between PROM and EPROM

PROM | EPROM

PROM is non-reusable. | It is reusable in nature.

Less costly. | More expensive than PROM.

If we write the data once, it is permanent and we cannot erase it. | Data is not permanent; we can erase and rewrite it.

The storage capacity is high. | Storage capacity is less than the PROM.

If there is any error or bug in the PROM's program, it becomes useless as we cannot rewrite it. | Whereas we can erase and fix the previous code in EPROM.

It uses a bipolar transistor. | It uses a MOS transistor.

7) MEMORY DECODING

In addition to requiring storage components in a memory unit, there is a need for decoding
circuits to select the memory word specified by the input address.

The storage part of the cell is modeled by an SR latch with associated gates to form a
D latch. Actually, the cell is an electronic circuit with four to six transistors. The select input
enables the cell for reading or writing, and the read/write input determines the operation of the
cell when it is selected. A 1 in the read/write input provides the read operation by forming a
path from the latch to the output terminal. A 0 in the read/write input provides the write
operation by forming a path from the input terminal to the latch.
The logical construction of a small RAM consists of four words of four bits each and
has a total of 16 binary cells. The small blocks labeled BC represent the binary cell with its
three inputs and one output. A memory with four words needs two address lines. The two
address inputs go through a 2 x 4 decoder to select one of the four words. The decoder is enabled
with the memory-enable input.

When the memory enable is 0, all outputs of the decoder are 0 and none of the memory
words are selected. With the memory select at 1, one of the four words is selected, dictated by
the value in the two address lines.

Once a word has been selected, the read/write input determines the operation. During
the read operation the four bits of the selected word go through OR gates to the output
terminals.

During the write operation, the data available on the input lines are transferred into the
four binary cells of the selected word. The binary cells that are not selected are disabled and
their previous binary values remain unchanged.

When the memory select input that goes into the decoder is equal to 0, none of the words
are selected and the contents of all cells remain unchanged, regardless of the value of the
read/write input.
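A behavioural sketch of the 4 x 4 RAM described above may help: a 2 x 4 decoder selects one word and the read/write input chooses the operation. The SmallRAM class below is a simplified model, not a gate-level circuit.

class SmallRAM:
    def __init__(self, words=4, bits=4):
        self.cells = [[0] * bits for _ in range(words)]

    def decode(self, address, enable):
        """2-to-4 decoder: exactly one select line is 1 when the memory is enabled."""
        return [int(enable and i == address) for i in range(len(self.cells))]

    def access(self, address, enable, read_write, data_in=None):
        select = self.decode(address, enable)
        for word, sel in zip(self.cells, select):
            if not sel:
                continue                    # unselected cells keep their contents
            if read_write == 1:             # read: selected word goes to the outputs
                return list(word)
            word[:] = data_in               # write: input lines are stored in the word
        return None

ram = SmallRAM()
ram.access(address=2, enable=1, read_write=0, data_in=[1, 0, 1, 1])   # write word 2
print(ram.access(address=2, enable=1, read_write=1))                  # -> [1, 0, 1, 1]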
Coincident Decoding

A decoder with k inputs and 2^k outputs requires 2^k AND gates with k inputs per gate.
The total number of gates and the number of inputs per gate can be reduced by employing two
decoders in a two-dimensional selection scheme.

In this scheme, two k/2-input decoders are used instead of one k-input decoder. One decoder performs the row
selection and the other the column selection in a two-dimensional matrix configuration.
For example, instead of using a single 10 x 1,024 decoder, we use two 5 x 32 decoders. With
the single decoder, we would need 1,024 AND gates with 10 inputs each; with two decoders, we
need only 64 AND gates with 5 inputs each. The five most
significant bits of the address go to input X and the five least significant bits go to input Y.
Each word within the memory array is selected by the coincidence of one X line and one Y
line. Thus each word in memory is selected by the coincidence between 1 of 32 rows and 1 of
32 columns, for a total of 1,024 words.
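The two-dimensional selection can be sketched as follows, assuming the 10-bit example above: the address is split into two 5-bit halves, one per 5 x 32 decoder, and the selected word lies at the intersection of one row line and one column line.

def coincident_select(address_10bit):
    x = (address_10bit >> 5) & 0x1F     # five most significant bits -> row decoder
    y = address_10bit & 0x1F            # five least significant bits -> column decoder
    row_lines = [int(i == x) for i in range(32)]
    col_lines = [int(i == y) for i in range(32)]
    return x, y, row_lines, col_lines

x, y, rows, cols = coincident_select(0b1011000101)   # word 709
print(x, y, x * 32 + y)                 # 22 5 709: row 22 and column 5 select word 709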

Address Multiplexing

Because of their large capacity, the address decoding of DRAMs is arranged in a two-
dimensional array, and larger memories often have multiple arrays. To reduce the number of
pins in the IC package, designers utilize address multiplexing, whereby one set of address input
pins accommodates the address components.

In a two-dimensional array, the address is applied in two parts at different times, with
the row address first and the column address second. Since the same set of pins is used for both
parts of the address, the size of the package is decreased significantly.

The memory consists of a two-dimensional array of cells arranged into 256 rows by
256 columns, for a total of 2^8 x 2^8 = 2^16 = 64K words. There is a single data input line, a single
data output line, and a read/write control, as well as an eight-bit address input and two
address strobes, the latter included for enabling the row and column address into their
respective registers. The row address strobe (RAS) enables the eight-bit row register and the
column address strobe (CAS) enables the eight-bit column register. The bar on top of the name
of the strobe symbol indicates that the registers are enabled on the zero level of the signal.
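A small sketch of the address multiplexing described above, assuming the 256 x 256 array: the 16-bit word address is presented on eight shared pins in two steps, row first (latched by RAS) and column second (latched by CAS).

def multiplex_address(addr_16bit):
    row = (addr_16bit >> 8) & 0xFF      # upper 8 bits, applied while RAS is asserted
    col = addr_16bit & 0xFF             # lower 8 bits, applied while CAS is asserted
    return row, col

row_reg, col_reg = multiplex_address(0xA35C)
print(f"row register = 0x{row_reg:02X}, column register = 0x{col_reg:02X}")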
8) ERROR DETECTION AND CORRECTION CODE

Error detection and correction codes play an important role in the transmission of data
from one source to another. Noise gets added to the data when it is transmitted from one
system to another, which causes errors in the received binary data at the other system. The bits of
the data may change (either 0 to 1 or 1 to 0) during transmission.

It is impossible to avoid the interference of noise, but it is possible to get back the original
data. For this purpose, we first need to detect whether an error is present or not using error
detection codes. If an error is present in the code, then we correct it with the help of error
correction codes.

Error detection code


Error detection codes are the codes used for detecting errors in the received
data bitstream. In these codes, some redundant bits are appended to the original bitstream.

Error detecting codes encode the message before sending it over the noisy channel. The
encoding scheme is performed in such a way that the decoder at the receiving end can find the
errors easily in the received data with a higher chance of success.

Parity Code
In a parity code, we add one parity bit, either to the right of the LSB or to the left of the MSB of the
original bitstream. On the basis of the type of parity chosen, two types of parity codes
are possible, i.e., even parity code and odd parity code.
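A minimal even-parity example, assuming the parity bit is appended at the end of the bitstream: the sender makes the total number of 1s even, and the receiver re-checks that count to detect a single-bit error.

def add_even_parity(bits):
    parity = sum(bits) % 2              # 1 if the data has an odd number of 1s
    return bits + [parity]

def parity_ok(word):
    return sum(word) % 2 == 0           # even total -> no (detectable) error

sent = add_even_parity([1, 0, 1, 1, 0, 0, 1])
received = sent.copy()
received[3] ^= 1                        # flip one bit in transit
print(parity_ok(sent), parity_ok(received))   # True False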

Features of Error detection codes


The following are the features of error detection codes:
o These codes are used when we use backward error correction techniques for
reliable data transmission. A feedback message is sent by the receiver to inform the
sender whether the message was received without any error or not at the receiver side. If
the message contains errors, the sender retransmits the message.
o In error detection codes, the message is contained in fixed-size blocks of bits, to which
redundant bits are added for correcting and detecting errors.
o These codes only check whether an error has occurred, no matter how many error bits there are
or what type of error it is.
o Parity check, checksum, and CRC are error detection techniques.

Error correction code


Error correction codes are generated by using a specific algorithm for detecting and removing
errors from the message transmitted over noisy channels. The error-correcting
codes find the correct number of corrupted bits and their positions in the message. There are
two types of ECCs (Error Correction Codes), which are as follows.

Block codes
In block codes, the message is contained in fixed-size blocks of bits, to which the redundant bits
are added for correcting and detecting errors.

Convolutional codes
The message consists of data streams of random length, and parity symbols are generated by
the sliding application of the Boolean function to the data stream.

The Hamming code technique is used for error correction.

Hamming Code

Hamming code is an example of a block code. Two simultaneous bit errors can be detected,
and single-bit errors can be corrected by this code. In the Hamming coding mechanism, the sender
encodes the message by adding redundant bits to the data. These bits are added at
specific positions in the message because they are the extra bits used for correction.
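A compact sketch of the classic Hamming (7,4) scheme (4 data bits, parity bits at positions 1, 2 and 4); the function names are ours, and the example corrupts and then corrects a single bit.

def hamming74_encode(d):                     # d = [d1, d2, d3, d4]
    code = [0] * 8                           # positions 1..7 used, index 0 ignored
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]    # covers positions whose index has bit 0 set
    code[2] = code[3] ^ code[6] ^ code[7]    # covers positions whose index has bit 1 set
    code[4] = code[5] ^ code[6] ^ code[7]    # covers positions whose index has bit 2 set
    return code[1:]

def hamming74_correct(received):             # received = 7 bits, positions 1..7
    code = [0] + list(received)
    syndrome = 0
    for p in (1, 2, 4):
        covered = [i for i in range(1, 8) if i & p]
        if sum(code[i] for i in covered) % 2:
            syndrome += p                    # syndrome points at the erroneous position
    if syndrome:
        code[syndrome] ^= 1                  # flip the bad bit
    return code[1:], syndrome

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                                 # corrupt position 5 in transit
fixed, pos = hamming74_correct(word)
print(pos, fixed == hamming74_encode([1, 0, 1, 1]))   # 5 True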
9) PROGRAMMABLE LOGIC ARRAY (PLA) AND PROGRAMMABLE ARRAY
LOGIC (PAL)

Programmable Logic Array (PLA) and Programmable Array Logic (PAL) are
categories of programmable logic devices. In a PLA, a large number of
functions can be implemented, whereas in a PAL only a limited number of functions can be
implemented.
The distinction between PLA and PAL is that PAL has a programmable AND array
and a fixed OR array, while PLA has a programmable AND array and a
programmable OR array.

What is Programmable Logic Array or PLA?

A Programmable Logic Array (PLA) is a digital device used to implement
combinational logic circuits. It consists of a programmable AND gate array and a
programmable OR gate array. The PLA allows users to configure its internal
connections to realize any Boolean function by programming the connections between
the inputs, the AND gates, and the OR gates. This flexibility makes
PLAs suitable for custom logic design and prototyping. Unlike fixed-function
logic devices, PLAs can be tailored to specific requirements by
programming, making them useful in applications where custom logic solutions are
required.
Applications of PLA
 Custom Logic Implementation: PLAs are used to build customized logic
circuits for specific functions, often in embedded systems and consumer devices.
 Prototyping: Designers use PLAs to model and test logic designs before committing
to ASIC or FPGA designs.
 Computation in Hardware: They can implement complex computations in hardware, such as
encoding/decoding schemes or custom data-processing logic.
 Digital Circuit Simplification: PLAs simplify the design of complex digital
circuits by allowing designers to program Boolean functions directly into hardware.
 Control Systems: Used in control systems for programmable state machines and to
implement specific control logic.

What is Programmable Array Logic or PAL?

Programmable Array Logic is a type of digital device used to implement
combinational logic circuits with a fixed architecture but programmable functionality.
PALs consist of a fixed OR array and a programmable AND array. Users program the
AND array to create specific logic functions, which are then combined using the fixed
OR array. This arrangement allows the creation of custom logic
functions without the need for complex wiring. PALs offer a simpler, more
cost-effective solution compared with PLAs for implementing straightforward logic and are widely
used in digital circuit designs for tasks like data routing and control
signaling.
Benefits of PAL
 Cost-effective: PALs are generally less expensive than PLAs and FPGAs for
simpler logic functions because of their fixed OR array and simpler
architecture.
 Ease of use: PALs are easier to program and configure compared with more
complex devices like FPGAs, making them accessible for straightforward logic.
 Speed: PALs typically offer faster logic operations because of their
fixed architecture, which reduces the time required for signal routing and processing.
 Reliability: The fixed OR array design contributes to higher reliability and
consistency in performance.
 Compact Design: PALs provide a compact solution for implementing combinational logic,
saving space on circuit boards.
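The structural difference between the two devices can be sketched behaviourally: both form product terms in an AND plane, but only the PLA lets every output OR gate be programmed, while the PAL ties each output to a fixed group of product terms. The connection matrices below are illustrative, not a real device's fuse map.

def and_plane(inputs, product_terms):
    """Each product term lists the literals it ANDs, e.g. ('A', "B'")."""
    value = {**inputs, **{k + "'": 1 - v for k, v in inputs.items()}}
    return [all(value[lit] for lit in term) for term in product_terms]

def or_plane(products, connections):
    """Each output ORs the product terms its connection row selects."""
    return [any(p for p, c in zip(products, row) if c) for row in connections]

inputs = {"A": 1, "B": 0, "C": 1}
terms = [("A", "B'"), ("B", "C"), ("A", "C")]      # shared AND plane

pla_or = [[1, 1, 0], [0, 1, 1]]   # PLA: any term may feed any output (programmable OR)
pal_or = [[1, 1, 0], [0, 0, 1]]   # PAL: each output owns a fixed group of terms

print(or_plane(and_plane(inputs, terms), pla_or))  # outputs F1, F2 with programmable OR
print(or_plane(and_plane(inputs, terms), pal_or))  # outputs F1, F2 with fixed OR groups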
Difference between PLA and PAL

S.NO | PLA | PAL

1. | PLA stands for Programmable Logic Array. | PAL stands for Programmable Array Logic.

2. | PLA's speed is lower than PAL's. | PAL's speed is higher than PLA's.

3. | The complexity of PLA is high. | PAL's complexity is less.

4. | A large number of functions can be implemented with PLA. | Only a limited number of functions can be implemented with PAL.

5. | The cost of PLA is high. | The cost of PAL is low.

6. | Programmable Logic Array is less readily available. | Programmable Array Logic is more readily available than Programmable Logic Array.

7. | A PLA design is built using a programmable set of AND gates and a programmable set of OR gates. | A PAL design is built using a programmable set of AND gates and a fixed set of OR gates.

8. | The flexibility of PLA is high as compared to PAL. | The flexibility of PAL is less.

9. | It is less used than PAL. | It is more used than PLA.


THANK YOU!
