

DPRINCE*INSTITUTE
CIT314 - COMPUTER ARCHITECTURE AND
ORGANIZATION II
LEVEL: 300, SEMESTER: 2nd SEMESTER



POP: EXAM - SUMMARY

QUESTION
DEFINE THE MAIN MEMORY

SUGGESTED ANSWER

Main memory is the storage unit that the CPU accesses directly during program execution. The principal technology used for the main memory is based on semiconductor integrated circuits.

QUESTION
MENTION THE PRIMARY AND SECONDARY TYPES OF AUXILIARY STORAGE DEVICES

SUGGESTED ANSWER

PRIMARY TYPES
• Magnetic tape
• Magnetic Disks
• Floppy Disks
• Hard Disks and Drives

SECONDARY OR AUXILIARY MEMORY TYPES
• Magnetic tapes
• Magnetic disks
• Floppy disks
• Compact Disks

• It is not directly accessible by the CPU.
• The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage.
• In auxiliary or secondary storage, the cost per bit of storage is low.
• The operating speed is slower than that of the primary memory.

Magnetic Tapes

Magnetic tape is a medium for magnetic recording, made of a thin, magnetisable coating on a long, narrow strip
of plastic film.


Figure 1.0: Magnetic Tape

Characteristics of Magnetic Tapes


• No direct access, but very fast sequential access.
• Resistant to different environmental conditions.
• Easy to transport and store, and cheaper than disk.
• Formerly widely used to store application data; nowadays mostly used for backups or archives (tertiary storage).

Figure 1.2: Magnetic Tape

Magnetic tape is used in a tape transport (also called a tape drive, tape deck, tape unit, or MTU), a device that
moves the tape over one or more magnetic heads. An electrical signal is applied to the write head to record data
as a magnetic pattern on the tape; as the recorded tape passes over the read head it generates an electrical signal
from which the stored data can be reconstructed. The two heads may be combined into a single read/write head.

• Magnetic tape has been used for offline data storage, backup, archiving, data interchange, and software distribution, and in the early days (before disk storage was available) also as online backing store.
• Magnetic tape is still extensively used for backup; for this purpose, interchange standards are of minor importance, so proprietary cartridge-tape formats are widely used.
• Magnetic tapes are used with large computers such as mainframes, where large volumes of data are stored for a long time. In PCs you can also use tapes in the form of cassettes.
• The cost of storing data on tape is low. Tapes consist of magnetic materials that store data permanently. A tape can be a 12.5 mm to 25 mm wide plastic film, 500 m to 1200 m long, coated with magnetic material.

Advantages of Magnetic Tape

• Compact: A 10-inch diameter reel of tape is 2400 feet long and is able to hold 800, 1600 or 6250 characters in each inch of its length. The maximum capacity of such a tape is 180 million characters. Thus data are stored much more compactly on tape.


• Economical: The cost of storing characters on tape is very low compared to other storage devices.
• Fast: Copying of data is easy and fast.
• Long-term storage and re-usability: Magnetic tapes can be used for long-term storage, and a tape can be used repeatedly without loss of data.

2 Magnetic Disks

You might have seen a gramophone record, which is circular like a disk and coated with magnetic material. Magnetic disks used in computers are made on the same principle.

- The presence of a magnetic spot represents a one bit (1) and its absence represents a zero bit (0).
- The information stored in a disk can be read many times without affecting the stored data, so the reading operation is non-destructive.
- But if you want to write new data, the existing data is erased from the disk and the new data is recorded.
- The data capacity of magnetic disk memories ranges from several tens of thousands up to several billion bits, and the average access time is 10-100 milliseconds.
- The two main types are the hard disk and the floppy disk.
- Data is stored on either or both surfaces of discs in concentric rings called "tracks".
- Each track is divided into a whole number of "sectors". Where multiple (rigid) discs are mounted on the same axle, the set of tracks at the same radius on all their surfaces is known as a "cylinder".

Floppy Disks

These are small removable disks that are plastic coated with magnetic recording material. Floppy disks are
typically 3.5″ in size (diameter) and can hold 1.44 MB of data.
This portable storage device is a rewritable media and can be reused a number of times.


Figure 1.4: Floppy Disks

• Read/Write head: A floppy disk drive normally has two read/write heads, making modern floppy disk drives double-sided drives. A head exists for each side of the disk, and both heads are used for reading and writing on the respective disk side.
• Head 0 and Head 1: Many people do not realize that the first head (head 0) is the bottom one and the top head is head 1. The top head is located either four or eight tracks inward from the bottom head, depending upon the drive type.
• Head Movement: A motor called the head actuator moves the head mechanism. The heads can move in and out over the surface of the disk in a straight line to position themselves over various tracks. The heads move in and out tangentially to the tracks that they record on the disk.
• Head: The heads are made of a soft ferrous (iron) compound with electromagnetic coils. Each head is a composite design with a R/W head centered within two tunnel-erasure heads in the same physical assembly. PC-compatible floppy disk drives spin at 300 or 360 r.p.m. The two heads are spring-loaded and physically grip the disk with small pressure; this pressure does not present excessive friction.

Recording Method

• Tunnel Erasure: As the track is laid down by the R/W heads, the trailing tunnel-erasure heads force the data to be present only within a specified narrow tunnel on each track. This process prevents the signals from reaching adjacent tracks and causing crosstalk.
• Straddle Erasure: In this method, the R/W and the erasure heads do recording and erasing at the same time. The erasure head is not used to erase data stored on the diskette. It trims the top and bottom fringes of recorded flux reversals. The erasure heads reduce the effect of crosstalk between tracks and minimize the errors induced by minor run-out problems on the diskette or diskette drive.
• Head alignment: Alignment is the placement of the heads with respect to the tracks that they must read and write. Head alignment can be checked only against some sort of reference: a standard disk recorded by a perfectly aligned machine. These types of disks are available, and one can be used to check the drive alignment.

Hard Disks and Drives


A hard disk drive (HDD), hard disk, hard drive or fixed disk is a data storage device that uses magnetic storage
to store and retrieve digital information using one or more rigid rapidly rotating disks (platters) coated with
magnetic material.

All primary computer hard drives are found inside a computer case and are attached to the computer motherboard
using an ATA, SCSI, or SATA cable, and are powered by a connection to the PSU (power supply unit). The hard
drive is typically capable of storing more data than any other drive, but its size can vary depending on the type of
drive and its age. Older hard drives had a storage size of several hundred megabytes (MB) to several gigabytes
(GB). Newer hard drives have a storage size of several hundred gigabytes to several terabytes (TB).


Hard Drive Components


As can be seen in the picture below, the desktop hard drive consists of the following components: the head actuator, read/write actuator arm, read/write head, spindle, and platter.

Figure 1.5: Hard Drive Components


External and Internal Hard drives

Although most hard drives are internal, there are also stand-alone devices called external hard drives, which can back up data on computers and expand the available disk space. External drives are often housed in an enclosure that helps protect the drive and allows it to interface with the computer, usually over USB or eSATA.

Figure 1.6: Hard Drive

QUESTION
DISCUSS THE HISTORY OF THE HARD DRIVE

SUGGESTED ANSWER

The first hard drive was introduced to the market by IBM on September 13, 1956. The hard drive was first used in the RAMAC 305 system, with a storage capacity of 5 MB and a cost of about $50,000 ($10,000 per megabyte). The hard drive was built into the computer and was not removable. The first hard drive to have a storage capacity of one gigabyte was also developed by IBM, in 1980. It weighed 550 pounds and cost $40,000. 1983 marked the introduction of the first 3.5-inch hard drive, developed by Rodime. It had a storage capacity of 10 MB. Seagate was the first company to introduce a 7200 RPM hard drive, in 1992. Seagate also introduced the first 10,000 RPM hard drive in 1996 and the first 15,000 RPM hard drive in 2000. The first solid-state drive (SSD) as we know it today was developed by SanDisk Corporation in 1991, with a storage capacity of 20 MB. However,


this was not a flash-based SSD; flash-based SSDs were introduced later, in 1995, by M-Systems. These drives did not require a battery to keep data stored on the memory chips, making them a non-volatile storage medium.

3.2.5 Compact Disk Read-Only Memory (CD-ROM)

CD-ROM disks are made of reflective metals. A CD-ROM is written during the manufacturing process by a high-power laser beam. Here the storage density is very high, storage cost is very low and access time is relatively fast. Each disk is approximately 4 1/2 inches in diameter and can hold over 600 MB of data. As a CD-ROM is read-only, we cannot write to it or make changes to the data it contains.

Figure 1.7: CD-Rom

3.2.5.1 Characteristics of the CD-ROM

• In PCs, the most commonly used optical storage technology is called Compact Disk Read-Only Memory
(CD-ROM).
• A standard CD-ROM disk can store up to 650 MB of data, or about 70 minutes of audio.
• Once data is written to a standard CD-ROM disk, the data cannot be altered or overwritten.

CD-ROM Speeds and Uses

• Storage capacity: one CD can store about 600 to 700 MB (600,000 to 700,000 KB). For comparison, a common A4 sheet of paper can hold an amount of information, in the form of printed characters, that would require about 2 kB of space on a computer. So one CD can store about the same amount of text information as 300,000 such A4 sheets.

Yellow Book standard

• The basic technology of CD-ROM remains the same as that for CD audio, but CD-ROM requires greater data integrity, because a corrupt bit that is not noticeable during audio playback becomes intolerable with computer data.
• So CD-ROM (Yellow Book) dedicates more bits to error detection and correction than CD audio (Red Book).
• Data is laid out in a format known as ISO 9660.

Advantages in comparison with other information carriers

• The information density is high.
• The cost of information storage per information unit is low.
• The disks are easy to store, to transport and to mail.
• Random access to information is possible.

Advantages

• Easier access to a range of CD-ROMs.


• Ideally, access from the user’s own workstation in the office or at home.
• Simultaneous access by several users to the same data.


• Better security avoids damage to discs and equipment.
• Less personnel time needed to provide disks to users.
• Automated, detailed registration of usage statistics to support management.

Disadvantages

• Costs of the network software and computer hardware.
• Increased charges imposed by the information suppliers.
• Need for expensive technical expertise to select, set up, manage, and maintain the network system.
• Technical problems when the CD-ROM product is not designed for use in a network.
• The network software component for the workstation side must be installed on each microcomputer before it can be used to access the CD-ROMs.

QUESTION
WHAT IS AN OPTICAL DISK

SUGGESTED ANSWER

An optical disk is made up of a rotating disk which is coated with a thin reflective metal. To record data on the
optical disk, a laser beam is focused on the surface of the spinning disk.

1. Read-only memory (ROM) disks, like the audio CD, are used for the distribution of standard programs and data files. These are mass-produced by mechanical pressing from a master die. The information is actually stored as physical indentations on the surface of the CD.
2. Write-once read-many (WORM) disks: The information stored on the disk cannot be changed or erased. A strong laser beam is focused on selected spots on the surface and pulsed. The energy melts the film at that point, producing a non-reflective void. In the read mode, a low-power laser is directed at the disk and the bit information is recovered by sensing the presence or absence of a reflected beam from the disk.
3. Re-writeable, write-many read-many (WMRM) disks, just like magnetic storage disks, allow information to be recorded and erased many times. Usually there is a separate erase cycle, although this may be transparent to the user. Some modern devices accomplish this with one over-write cycle. These devices are also called direct read-after-write (DRAW) disks.
4. WORM (write once, read many) is a data storage technology that allows information to be written to a disc a single time and prevents the drive from erasing the data. Because of this feature, WORM discs are well suited to archiving data that must not be altered.

Erasable Optical Disk: An erasable optical disk is one which can be erased and then loaded with new data content all over again. These generally come with a RW label. They are based on a technology popularly known as magneto-optical recording, which involves the application of heat to a precise point on the disk surface and magnetizing it using a laser. Magnetizing alters the polarity of the point, indicating data value '1'.

Touchscreen Optical Device: A touchscreen is an input and output device normally layered on the top of an electronic visual display of an information processing system. A user can give input or control the information processing system through simple or multi-touch gestures by touching the screen with a special stylus or one or more fingers. Some touchscreens use ordinary or specially coated gloves to work while others may only work using a special stylus or pen.

There are two types of overlay-based touch screens:

• Capacitive Touch Technology – Capacitive touch screens take advantage of the conductivity of the object to detect the location of a touch. While they are durable and last for a long time, they can malfunction if they get wet. Most smartphones and tablets have capacitive touch screens.


• Resistive Touch Technology – Resistive touch screens have moving parts. There is an air gap between two layers of transparent material; when the screen is pressed, the layers make contact, an electric circuit is completed and the location of the touch can be determined. Though they are cheaper to build compared to capacitive touch screens, they are also less sensitive and can wear out quickly.

There are mainly three types of perimeter-based technologies:

• Infrared Touch Technology – This technology uses beams of infrared lights to detect touch events.
• Surface Acoustic Wave Touch Technology – This type of touch screen uses ultrasonic waves to detect
touch events.
• Optical Touch Technology – This type of perimeter-based technology uses optical sensors, mainly CMOS sensors, to detect touch events. All of these touch screen technologies can also be integrated on top of a non-touch-based display such as an ordinary LCD, converting it into an open-frame touch monitor.

MEMORY ACCESS METHODS

Data need to be accessed from the memory for various purposes. There are several methods to access memory as
listed below:

• Sequential access
• Direct access
• Random access
• Associative access

Sequential Access Method

In the sequential memory access method, the memory is accessed in a linear, sequential way. The time to access data with this method depends on the location of the data.

Random Access Method


In the random access method, data at any location in the memory can be accessed directly; the access time is independent of the data's location.


Direct Access Method


The direct access method can be seen as a combination of the sequential and random access methods. Magnetic hard disks, for example, contain many rotating storage tracks: the drive moves directly to the neighbourhood of the data (the track) and then searches sequentially within it.

Example of direct access: memory devices such as magnetic hard disks.

Figure 1.10: Direct Access Method

Associative Access Method

Associative access method is a special type of random access method. It enables comparison of desired bit
locations within a word for a specific match and to do this for all words simultaneously.
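As a rough illustration (not actual hardware), the Python sketch below mimics associative matching by comparing the masked bit positions of a search key against every stored word; the list comprehension stands in for the comparison that a real associative memory performs on all words simultaneously. The word values and mask are made up for the example.

```python
# Hypothetical sketch of associative (content-addressed) matching.
def associative_match(words, key, mask):
    """Return indices of all stored words whose bits selected by `mask`
    equal the corresponding bits of `key`."""
    return [i for i, w in enumerate(words) if (w & mask) == (key & mask)]

memory = [0b10110101, 0b10010100, 0b00110111, 0b10110001]
# Match on the upper four bits only (mask = 0b11110000).
print(associative_match(memory, key=0b10110000, mask=0b11110000))  # -> [0, 3]
```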
MEMORY MAPPING AND VIRTUAL MEMORIES

Memory-mapping is a mechanism that maps a portion of a file, or an entire file, on disk to a range of addresses
within an application's address space.

Benefits of Memory-Mapping
The principal benefits of memory-mapping are efficiency, faster file access, the ability to share memory between
applications, and more efficient coding.

Faster File Access


Accessing files via memory map is faster than using I/O functions such as fread and fwrite. It only reads or writes
the file on disk when a specified part of the memory map is accessed, and then it only reads that specific part.
This provides faster random access to the mapped data.
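As an illustration, the short sketch below uses Python's standard mmap module to map a file and read a small slice at an arbitrary offset; the file name and sizes are invented for the example, but the pattern shows how only the touched region of the mapped file needs to be paged in.

```python
import mmap

# Create a ~1 MB sample file to map.
with open("sample.bin", "wb") as f:
    f.write(bytes(range(256)) * 4096)

with open("sample.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:      # map the entire file
        chunk = mm[500_000:500_016]           # random access: only this region
                                              # of the file needs to be read in
        mm[0:4] = b"DATA"                     # in-place update through the map

print(chunk)
```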

Efficiency


Memory-mapped files provide a mechanism by which applications can access data segments in an extremely large file without having to read the entire file into memory first.

VIRTUAL MEMORIES
Processes in a system share the CPU and main memory with other processes. However, sharing the main memory poses some special challenges. As demand on the CPU increases, processes slow down in some reasonably smooth way. But if too many processes need too much memory, then some of them will simply not be able to run. When a program is out of space, it is out of luck. Memory is also vulnerable to corruption. If some process inadvertently writes to the memory used by another process, that process might fail in some bewildering fashion totally unrelated to the program logic.
• It uses main memory efficiently by treating it as a cache for an address space stored on disk, keeping only
the active areas in main memory, and transferring data back and forth between disk and memory as needed.
• It simplifies memory management by providing each process with a uniform address space.
• It protects the address space of each process from corruption by other processes.

• Virtual memory is central. Virtual memory pervades all levels of computer systems, playing key roles in
the design of hardware exceptions, assemblers, linkers, loaders, shared objects, files, and processes.
• Virtual memory is powerful. Virtual memory gives applications powerful capabilities to create and destroy
chunks of memory, map chunks of memory to portions of disk files, and share memory with other processes.
VM as a Tool for Caching

Conceptually, a virtual memory is organized as an array of N contiguous byte-sized cells stored on disk. Each
byte has a unique virtual address that serves as an index into the array.

Figure 1.12: Memory as a Cache

Page Tables

As with any cache, the VM system must have some way to determine if a virtual page is cached somewhere in
DRAM. If so, the system must determine which physical page it is cached in. If there is a miss, the system must
determine where the virtual page is stored on disk, select a victim page in physical memory, and copy the virtual
page from disk to DRAM, replacing the victim page.


Figure 1.13: Page Table
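A minimal sketch of this lookup is shown below, assuming a 4 KB page size and a toy page table held in a Python dictionary; a real MMU performs this translation in hardware, and a real page-fault handler would fetch the page from disk rather than simply reporting the fault.

```python
PAGE_SIZE = 4096  # assumed page size in bytes

# Toy page table: virtual page number -> (valid bit, physical frame number)
page_table = {0: (1, 7), 1: (0, None), 2: (1, 3)}

def translate(vaddr):
    """Translate a virtual address to a physical address, or signal a page fault."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    valid, frame = page_table.get(vpn, (0, None))
    if not valid:
        raise RuntimeError(f"page fault: virtual page {vpn} is not in DRAM")
    return frame * PAGE_SIZE + offset

print(hex(translate(2 * PAGE_SIZE + 0x10)))   # VPN 2 -> frame 3 -> 0x3010
try:
    translate(1 * PAGE_SIZE)                  # VPN 1 is not resident
except RuntimeError as e:
    print(e)                                  # the OS would now bring the page in from disk
```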

Virtual memory was invented in the early 1960s, long before the widening CPU-memory gap spawned SRAM
caches. As a result, virtual memory systems use a different terminology from SRAM caches, even though many
of the ideas are similar.

VM as a Tool for Memory Protection

Any modern computer system must provide the means for the operating system to control access to the memory
system. A user process should not be allowed to modify its read-only text section. Nor should it be allowed to
read or modify any of the code and data structures in the kernel.

Integrating Caches and VM

In any system that uses both virtual memory and SRAM caches, there is the issue of whether to use virtual or
physical addresses to access the SRAM cache. Although a detailed discussion of the trade-offs is beyond our
scope here, most systems opt for physical addressing.

Speeding up Address Translation with a TLB

As we have seen, every time the CPU generates a virtual address, the MMU must refer to a PTE in order to
translate the virtual address into a physical address. In the worst case, this requires an additional fetch from
memory, at a cost of tens to hundreds of cycles.
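The toy sketch below (with an assumed 16-entry, fully associative TLB and a made-up page table) illustrates the idea: translations are served from the small TLB whenever possible, and the slower page-table walk happens only on a miss.

```python
PAGE_SIZE = 4096
page_table = {vpn: vpn + 100 for vpn in range(64)}   # toy mapping: VPN -> frame
tlb, TLB_ENTRIES = {}, 16                            # small translation lookaside buffer
hits = misses = 0

def translate(vaddr):
    """Check the TLB first; fall back to the (slow) page-table walk on a miss."""
    global hits, misses
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn in tlb:                         # TLB hit: no page-table access needed
        hits += 1
        frame = tlb[vpn]
    else:                                  # TLB miss: walk the page table
        misses += 1
        frame = page_table[vpn]
        if len(tlb) >= TLB_ENTRIES:        # simple replacement: evict the oldest entry
            tlb.pop(next(iter(tlb)))
        tlb[vpn] = frame
    return frame * PAGE_SIZE + offset

for addr in range(0, 8 * PAGE_SIZE, 64):   # a sequential scan reuses each page's entry
    translate(addr)
print(hits, misses)                        # most accesses hit in the TLB (504 hits, 8 misses)
```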

Replacement Algorithms

When a page fault occurs, the operating system has to choose a page to remove from memory to make room for the page that has to be brought in. If the page to be removed has been modified while in memory, it must be rewritten to the disk to bring the disk copy up to date.

• Optimal page replacement algorithm


• Not recently used page replacement
• First-In, First-Out page replacement
• Second chance page replacement
• Clock page replacement
• Least recently used page replacement


The Optimal Page Replacement Algorithm

The best possible page replacement algorithm is easy to describe but impossible to implement. It goes like this.
If one page will not be used for 8 million instructions and another page will not be used for 6 million instructions,
removing the former pushes the page fault that will fetch it back as far into the future as possible.

The Not Recently Used (NRU) Page Replacement Algorithm

NRU relies on two status bits kept for each page: R, set whenever the page is referenced, and M, set when the page is modified. If the hardware does not provide these bits, the operating system can simulate them: on the first reference to a page a fault occurs, and the operating system then sets the R bit (in its internal tables), changes the page table entry to point to the correct page, with mode READ ONLY, and restarts the instruction. If the page is subsequently written to, another page fault will occur, allowing the operating system to set the M bit and change the page's mode to READ/WRITE.
The R and M bits can be used to build a simple paging algorithm as follows. Periodically (e.g., on each clock interrupt), the R bit is cleared, to distinguish pages that have not been referenced recently from those that have been. When a page fault occurs, the operating system inspects all the pages and divides them into four categories based on the current values of their R and M bits (a short sketch of this classification follows the list below):

• Class 0: not referenced, not modified.


• Class 1: not referenced, modified.
• Class 2: referenced, not modified.
• Class 3: referenced, modified.
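The sketch below, using made-up (page, R, M) tuples, performs this classification; NRU then removes a page at random from the lowest-numbered nonempty class.

```python
import random

# Toy page descriptors: (page number, R bit, M bit).
pages = [(0, 1, 1), (1, 0, 1), (2, 1, 0), (3, 0, 0), (4, 0, 1)]

def nru_victim(pages):
    """Pick a victim at random from the lowest-numbered nonempty class,
    where class = 2*R + M (class 0 = not referenced, not modified)."""
    classes = {0: [], 1: [], 2: [], 3: []}
    for page, r, m in pages:
        classes[2 * r + m].append(page)
    for c in range(4):
        if classes[c]:
            return random.choice(classes[c])

print(nru_victim(pages))   # -> 3, the only class-0 page in this example
```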

The First-In, First-Out (FIFO) Page Replacement Algorithm

Another low-overhead paging algorithm is the First-In, First-Out (FIFO) algorithm: the operating system keeps the pages currently in memory on a list in the order they were loaded, and on a page fault it evicts the page at the head of the list, i.e. the oldest one. To illustrate how this works, consider a supermarket that has enough shelves to display exactly k different products. One day, some company introduces a new convenience food: instant, freeze-dried, organic yogurt that can be reconstituted in a microwave oven.

The Second Chance Page Replacement Algorithm


A simple modification to FIFO that avoids the problem of throwing out a heavily used page is to inspect the R bit
of the oldest page. If it is 0, the page is both old and unused, so it is replaced immediately.

The Clock Page Replacement Algorithm

Although second chance is a reasonable algorithm, it is unnecessarily inefficient because it is constantly moving pages around on its list. A better approach is to keep all the page frames on a circular list in the form of a clock, with a hand pointing to the oldest page.


Figure 1.14: The Clock Replacement Algorithm

When a page fault occurs, the page being pointed to by the hand is inspected. If its R bit is 0, the page is evicted,
the new page is inserted into the clock in its place, and the hand is advanced one position. If R is 1, it is cleared
and the hand is advanced to the next page.
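The following sketch (with hypothetical frame contents and R bits) walks through one eviction step of the clock algorithm exactly as just described.

```python
def clock_replace(frames, r_bits, hand, new_page):
    """One eviction step of the clock algorithm: sweep the hand until a page
    with R == 0 is found, clearing R bits along the way; replace that page."""
    while True:
        if r_bits[hand] == 0:
            frames[hand] = new_page            # evict and install the new page
            r_bits[hand] = 1                   # it has just been referenced
            return (hand + 1) % len(frames)    # advance the hand past it
        r_bits[hand] = 0                       # give the page a second chance
        hand = (hand + 1) % len(frames)

frames = ["A", "B", "C", "D"]
r_bits = [1, 0, 1, 1]
hand = 0
hand = clock_replace(frames, r_bits, hand, "E")
print(frames, r_bits, hand)   # B (whose R bit was 0) is evicted: ['A', 'E', 'C', 'D']
```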

The Least Recently Used (LRU) Page Replacement Algorithm

A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for a long time will probably remain unused, so when a page fault occurs, the page that has been unused for the longest time is thrown out.
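As an illustration, the sketch below simulates LRU replacement for a made-up page reference string, using Python's OrderedDict to track recency of use.

```python
from collections import OrderedDict

def simulate_lru(reference_string, num_frames):
    """Count page faults for a reference string under LRU replacement."""
    frames = OrderedDict()       # keys ordered from least to most recently used
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.move_to_end(page)           # mark as most recently used
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)     # evict the least recently used page
            frames[page] = True
    return faults

# 9 page faults for this reference string with 3 frames.
print(simulate_lru([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], num_frames=3))
```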

DATA TRANSFER MODES


The DMA mode of data transfer reduces the CPU's overhead in handling I/O operations. It also allows parallelism between CPU and I/O operations. Such parallelism is necessary to avoid wasting valuable CPU time while handling I/O devices whose speeds are much slower than the CPU's. The concept of DMA operation can be extended to relieve the CPU further from getting involved with the execution of I/O operations. This gives rise to the development of a special-purpose processor called an Input-Output Processor (IOP) or I/O channel. The Input-Output Processor (IOP) is just like a CPU that handles the details of I/O operations. It is equipped with more facilities than are available in a typical DMA controller.

Figure 1.15: The Block Diagram

The IOP can fetch and execute its own instructions, which are specifically designed to characterize I/O transfers. In addition to the I/O-related tasks, it can perform other processing tasks such as arithmetic, logic, branching and code translation. The main memory unit takes the pivotal role. It communicates with the processor by means of DMA.

The Input Output Processor is a specialized processor which loads and stores data into memory along with the
execution of I/O instructions.

Advantages
• The I/O devices can directly access the main memory without intervention by the processor in I/O processor-based systems.
• It is used to address the problems that arise in the direct memory access method.

Modes of Transfer
The binary information that is received from an external device is usually stored in the memory unit. The information that is transferred from the CPU to the external device originates from the memory unit.


Data transfer to and from the peripherals may be done in any of three possible ways:
• Programmed I/O.
• Interrupt-initiated I/O.
• Direct memory access (DMA).

1. Programmed I/O: This is the result of the I/O instructions that are written in the computer program. Each data-item transfer is initiated by an instruction in the program. In this case, the I/O device does not have direct access to the memory unit. A transfer from an I/O device to memory requires the execution of several instructions by the CPU, including an input instruction to transfer the data from the device to the CPU and a store instruction to transfer the data from the CPU to memory.
2. Interrupt-initiated I/O: In programmed I/O the CPU is kept busy unnecessarily; this situation can very well be avoided by using an interrupt-driven method for data transfer. With programmed I/O:
• The I/O transfer rate is limited by the speed with which the processor can test and service a device.
• The processor is tied up in managing an I/O transfer; a number of instructions must be executed for each I/O transfer.
3. Direct Memory Access: The data transfer between fast storage media such as a magnetic disk and the memory unit is limited by the speed of the CPU. During DMA the CPU is idle and has no control over the memory buses. The DMA controller takes over the buses to manage the transfer directly between the I/O devices and the memory unit.

Figure 1.16: Control lines for DMA


• Bus Request: Used by the DMA controller to request that the CPU relinquish control of the buses.
• Bus Grant: Activated by the CPU to inform the external DMA controller that the buses are in the high-impedance state and the requesting DMA controller can take control of the buses. Once the DMA controller has taken control of the buses it transfers the data. This transfer can take place in many ways.

PARALLEL PROCESSING

The quest for higher-performance digital computers seems unending. In the past two decades, the performance of
microprocessors has enjoyed an exponential growth. This growth is the result of a combination of two factors:

• Increase in complexity (related both to higher device density and to larger size) of VLSI chips, projected
to rise to around 10 M transistors per chip for microprocessors, and 1B for dynamic random-access memories
(DRAMs), by the year 2000


• Introduction of, and improvements in, architectural features such as on-chip cache memories, large instruction buffers, multiple instruction issue per cycle, multithreading, deep pipelines, out-of-order instruction execution, and branch prediction.

The motivations for parallel processing can be summarized as follows:

1. Higher speed, or solving problems faster. This is important when applications have “hard” or “soft”
deadlines. For example, we have at most a few hours of computation time to do 24-hour weather forecasting or
to produce timely tornado warnings.
2. Higher throughput, or solving more instances of given problems. This is important when many similar
tasks must be performed. For example, banks and airlines, among others, use transaction processing systems that
handle large volumes of data.
3. Higher computational power, or solving larger problems. This would allow us to use very detailed, and
thus more accurate, models or to carry out simulation runs for longer periods of time (e.g., 5-day, as opposed to
24-hour, weather forecasting).

A major issue in devising a parallel algorithm for a given problem is the way in which the computational load is
divided between the multiple processors. The most efficient scheme often depends both on the problem and on
the parallel machine’s architecture.

Example
Consider the problem of constructing the list of all prime numbers in the interval [1, n] for a given integer n > 0. A simple algorithm that can be used for this computation is the sieve of Eratosthenes: start with the list of numbers 2, 3, 4, ..., n and repeatedly mark the multiples of each remaining prime, beginning with 2; the numbers that are never marked are the primes. The computation steps for n = 30 are shown in the figure below.

Figure 3.17: The Block Diagram
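A sequential Python version of the sieve is sketched below; in a parallel setting, the list of candidate numbers (or the ranges to be marked) would be divided among the processors.

```python
def sieve(n):
    """Sieve of Eratosthenes: return all primes in [2, n]."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]             # 0 and 1 are not prime
    p = 2
    while p * p <= n:
        if is_prime[p]:
            # Cross out every multiple of p, starting from p*p.
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
        p += 1
    return [i for i, prime in enumerate(is_prime) if prime]

print(sieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```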

PARALLEL PROCESSING UPS AND DOWNS

L. F. Richardson, a British meteorologist, was the first person to attempt to forecast the weather using numerical computations. He estimated that forecasting the weather for a 24-hour period would require 64,000 slow "computers" (humans + mechanical calculators) and, even then, the forecast would take 12 hours to complete. He had the following idea or dream:

Imagine a large hall like a theater. The walls of this chamber are painted to form a map of the globe


Types of Parallelism: A Taxonomy


Parallel computers can be divided into two main categories of control flow and data flow. Control-flow parallel
computers are essentially based on the same principles as the sequential or von Neumann computer, except that
multiple instructions can be executed at any given time.


Figure 3.18: Pictorial Representation of Richardson's example


In 1966, M. J. Flynn proposed a four-way classification of computer systems based on the notions of instruction
streams and data streams. Flynn’s classification has become standard and is widely used.
Again, one of the four categories (GMMP) is not widely used. The GMSV class is what is loosely referred to as (shared-memory) multiprocessors.

Figure 1.19: Classes of Computer according to Flynn

At the other extreme, the DMMP class is known as (distributed-memory) multi-computers. Finally, the DMSV class, which is becoming popular in view of combining the implementation ease of distributed memory with the programming ease of the shared-variable scheme, is sometimes called distributed shared memory. When all processors in a MIMD-type machine execute the same program, the result is sometimes referred to as single-program multiple-data (SPMD, pronounced "spim-dee"). Although the figure lumps all SIMD machines together, there are in fact variations similar to those suggested above for MIMD machines.

Roadblocks to Parallel Computing

The list begins with the less serious, or obsolete, objections and ends with Amdahl’s law, which perhaps
constitutes the most important challenge facing parallel computer designers and users.


1. Grosch's law (economy of scale applies, or computing power is proportional to the square of cost). If this law did in fact hold, investing money in p processors would be foolish, as a single computer with the same total cost could offer p² times the performance of one such processor. Most applications have a pleasant amount of data access regularity and locality that help improve the performance.
2. The tyranny of IC technology (because hardware becomes about 10 times faster every 5 years, by the time a parallel machine with 10-fold performance is designed and implemented, uniprocessors will be just as fast). This objection might be valid for some special-purpose systems that must be built from scratch with "old" technology.
3. The tyranny of vector supercomputers (vector supercomputers, built by Cray, Fujitsu, and other companies, are rapidly improving in performance and additionally offer a familiar programming model and excellent vectorizing compilers). Most current vector supercomputers do in fact come in multiprocessor configurations for increased performance.
4. The software inertia (billions of dollars' worth of existing software makes it hard to switch to parallel systems; the cost of converting the "dusty decks" to parallel programs and retraining the programmers is prohibitive). The added information about concurrency and data dependencies would allow the sequential computer to improve its performance by instruction prefetching, data caching, and so forth.

PIPELINING

Similar to an assembly line, the success of a pipeline depends upon dividing the execution of an instruction among a number of subunits (stages), each performing part of the required operations. A pipeline system is like the modern-day assembly line setup in factories. For example, in a car manufacturing plant, huge assembly lines are set up with robotic arms performing a certain task at each point, after which the car moves on to the next arm.

Types of Pipeline:
It is divided into 2 categories:

• Arithmetic Pipeline - Arithmetic pipelines are usually found in most computers. They are used for floating-point operations, multiplication of fixed-point numbers, etc.
• Instruction Pipeline - Here a stream of instructions is executed by overlapping the fetch, decode and execute phases of the instruction cycle. This technique is used to increase the throughput of the computer system.

Pipeline Conflicts

There are some factors that cause the pipeline to deviate from its normal performance. Some of these factors are given below:

• Timing Variations: All stages cannot take the same amount of time. This problem generally occurs in instruction processing, where different instructions have different operand requirements and thus different processing times.
• Data Hazards: When several instructions are in partial execution, a problem arises if they reference the same data. We must ensure that a later instruction does not attempt to access the data before the current instruction has produced it, because this will lead to incorrect results.
• Interrupts: Interrupts inject unwanted instructions into the instruction stream and thus affect the execution of instructions.


• Data Dependency: This arises when an instruction depends upon the result of a previous instruction, but that result is not yet available.

Advantages of Pipelining

• The cycle time of the processor is reduced, which increases the throughput of the system.
• It makes the system reliable.

Disadvantages of Pipelining

• The design of a pipelined processor is complex and costly to manufacture.
• The instruction latency is higher.

Pipelining refers to the technique in which a given task is divided into a number of subtasks that need to be
performed in sequence. Each subtask is performed by a given functional unit. Figure 3.20 shows an illustration
of the basic difference between executing four subtasks of a given instruction (in this case fetching F, decoding
D, execution E, and writing the results W) using pipelining and sequential processing.

Figure 3.20: Pictorial Representation of a simple Pipelining Example

A possible saving of up to 50% in the execution time of these three instructions is obtained. In order to formulate some performance measures for the goodness of a pipeline in processing a series of tasks, a space-time chart (called a Gantt chart) is used.
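A rough calculation of this saving, under the idealized assumption that every stage takes one cycle and no stalls occur, is sketched below.

```python
def pipeline_times(num_instructions, num_stages, stage_time):
    """Compare sequential and (ideal, stall-free) pipelined execution times."""
    sequential = num_instructions * num_stages * stage_time
    # The first instruction needs num_stages cycles; each following
    # instruction completes one stage_time later.
    pipelined = (num_stages + num_instructions - 1) * stage_time
    return sequential, pipelined

seq, pipe = pipeline_times(num_instructions=3, num_stages=4, stage_time=1)
print(seq, pipe, seq / pipe)   # 12 cycles vs 6 cycles -> 2x speedup (50% saving)
```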

MODULE 2 MEMORY ADDRESSING AND HIERARCHY SYSTEMS

2.1 INTRODUCTION

A memory address is a unique identifier used by a device or CPU for data tracking. This binary address is defined
by an ordered and finite sequence allowing the CPU to track the location of each memory byte. Addressing modes
are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various
addressing modes that are defined in a given instruction set architecture define how the machine language
instructions in that architecture identify the operand(s) of each instruction. An addressing mode specifies how to
calculate the effective memory address of an operand by using information held in registers and/or constants
contained within a machine instruction or elsewhere.


In computer programming, addressing modes are primarily of interest to those who write in assembly languages
and to compiler writers. For a related concept see orthogonal instruction set which deals with the ability of any
instruction to use any addressing mode.

This module is divided into three units. The first unit explains memory addressing and the various modes
available. Unit two explains the elements of memory hierarchy while the last unit takes on virtual memory control
systems. All these are given below.

UNIT ONE: Memory Addressing


UNIT TWO: Elements of Memory Hierarchy
3.1.1 What is memory addressing mode?

Memory addressing mode is the method by which an instruction operand is specified. One of the functions of a microprocessor is to execute a sequence of instructions or programs stored in computer memory in order to perform a particular task. The way the operands are chosen during program execution depends on the addressing mode of the instruction. The addressing mode specifies a rule for interpreting or modifying the address field of the instruction before the operand is actually referenced. This technique is used by computers to give programming versatility to the user, by providing such facilities as pointers to memory, counters for loop control, indexing of data, and program relocation, as well as to reduce the number of bits in the addressing field of the instruction.

However, there are basic requirements for an operation to take effect. First, there must be an operator to indicate what action to take, and secondly, there must be an operand that represents the data to be operated on. For instance, if the numbers 5 and 2 are to be added to produce a result, this could be expressed numerically as 5 + 2. In this expression, our operator is (+), or addition, and the numbers 5 and 2 are our operands. It is important to tell the machine in a microprocessor how to get the operands needed to perform the task. The data stored with the operation code is the operand value or the result. A word that defines the address of an operand that is stored in memory is the effective address. The availability of the addressing modes gives the experienced assembly language programmer flexibility for writing programs that are more efficient with respect to the number of instructions and execution time.


Modes of addressing
There are many methods for specifying or computing the effective address of an operand. Such approaches are known as modes of addressing. Programs are usually written in a high-level language, as this is a simple way for the programmer to describe the variables and the operations to be performed on them. The following are the modes of addressing:

ADDRESSING MODE     EXAMPLE INSTRUCTION   MEANING                          WHEN USED
Register            ADD R4, R3            R4 <- R4 + R3                    When a value is in a register
Immediate           ADD R4, #3            R4 <- R4 + 3                     For constants
Indexed             ADD R3, (R1 + R2)     R3 <- R3 + M[R1 + R2]            When addressing an array; R1 = base of array, R2 = index amount
Register Indirect   ADD R4, (R1)          R4 <- R4 + M[R1]                 Accessing memory using a pointer or a computed address
Auto Increment      ADD R1, (R2)+         R1 <- R1 + M[R2]; R2 <- R2 + d   Stepping through an array in a loop; R2 = start of array, d = size of an element
Auto Decrement      ADD R1, -(R2)         R2 <- R2 - d; R1 <- R1 + M[R2]   Same as auto increment; both can also be used to implement stack push and pop
Direct              ADD R1, (1001)        R1 <- R1 + M[1001]               Useful in accessing static data

Note:
<- = assignment


M = the name for memory: M[R1] refers to contents of memory location whose address is given by the contents
of R1
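To make the table concrete, the hypothetical Python sketch below (with made-up register and memory contents) shows how each mode locates its operand; it illustrates the semantics only, not any particular instruction set.

```python
# Toy machine state for illustrating how each addressing mode finds its operand.
regs = {"R1": 8, "R2": 4, "R3": 5, "R4": 10}
mem = {8: 111, 12: 222, 1001: 333}        # M[address] -> value

def operand(mode, regs, mem, **kw):
    """Return the operand value selected by the given addressing mode."""
    if mode == "register":                  # value is already in a register
        return regs[kw["reg"]]
    if mode == "immediate":                 # constant encoded in the instruction
        return kw["const"]
    if mode == "direct":                    # address given in the instruction
        return mem[kw["addr"]]
    if mode == "register_indirect":         # register holds the operand's address
        return mem[regs[kw["reg"]]]
    if mode == "indexed":                   # base register + index register
        return mem[regs[kw["base"]] + regs[kw["index"]]]
    raise ValueError(mode)

print(operand("register", regs, mem, reg="R3"))                 # 5
print(operand("immediate", regs, mem, const=3))                 # 3
print(operand("direct", regs, mem, addr=1001))                  # 333  (M[1001])
print(operand("register_indirect", regs, mem, reg="R1"))        # 111  (M[8])
print(operand("indexed", regs, mem, base="R1", index="R2"))     # 222  (M[8 + 4] = M[12])
```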

3.1.3 Number of addressing modes

The addressing modes are as follows:
a. Register Addressing Mode
In this mode the operands are in registers that reside within the CPU. The particular register is selected from a register field in the instruction. A k-bit field can specify any one of 2^k registers.
b. Direct Addressing Mode and Indirect Address mode
In Direct Address Mode, the effective address is equal to the address part of the instruction. The operand resides
in memory and its address is given directly by the address field of the instruction. In a branch-type instruction the
address field specifies the actual branch address. But in the Indirect Address Mode, the address field of the
instruction gives the address where the effective address is stored in memory. Control fetches the instruction from
memory and uses its address part to access memory again to read the effective address. A few addressing modes
require that the address field of the instruction be added to the content of a specific register in the CPU. The
effective address in these modes is obtained from the following computation:
Effective address = address part of instruction + content of CPU register.
The CPU register used in the computation may be the program counter, an index register, or a base register. In
either case we have a different addressing mode which is used for a different application.
c. Immediate Addressing Mode
In this mode the operand is specified in the instruction itself. In other words, an immediate-mode instruction has an operand field rather than an address field. The operand field contains the actual operand to be used in conjunction with the operation specified in the instruction. Immediate-mode instructions are useful for initializing registers to a constant value. It was mentioned previously that the address field of an instruction may specify either a memory word or a processor register. When the address field specifies a processor register, the instruction is said to be in the register mode.
d. Register Indirect Addressing Mode
In this mode the instruction specifies a register in the CPU whose contents give the address of the operand in memory. In other words, the selected register contains the address of the operand rather than the operand itself. Before using a register indirect mode instruction, the programmer must ensure that the memory address of the operand is placed in the processor register with a previous instruction. A reference to the register is then equivalent to specifying a memory address. The advantage of a register indirect mode instruction is that the address field of the instruction uses fewer bits to select a register than would have been required to specify a memory address directly.
e. Indexed Addressing Mode
In this mode the content of an index register is added to the address part of the instruction to obtain the effective address. The index register is a special CPU register that contains an index value. The address field of the instruction defines the beginning address of a data array in memory. Each operand in the array is stored in memory relative to the beginning address. The distance between the beginning address and the address of the operand is the index value stored in the index register. Any operand in the array can be accessed with the same instruction provided that the index register contains the correct index value. The index register can be incremented to facilitate access to consecutive operands. Note that if an index-type instruction does not include an address field in its format, the instruction converts to the register indirect mode of operation. Some computers dedicate one CPU register to function solely as an index register. This register is involved implicitly when the index-mode instruction is used. In computers with many processor registers, any one of the CPU registers can contain the index number. In such a case the register must be specified explicitly in a register field within the instruction format.


f. Auto Increment Mode and Auto Decrement Mode


This is similar to the register indirect mode except that the register is incremented or decremented after (or before)
its value is used to access memory. When the address stored in the register refers to a table of data in memory, it
is necessary to increment or decrement the register after every access to the table. This can be achieved by using
the increment or decrement instruction. However, because it is such a common requirement, some computers
incorporate a special mode that automatically increments or decrements the content of the register after data
access. The address field of an instruction is used by the control unit in the CPU to obtain the operand from
memory. Sometimes the value given in the address field is the address of the operand, but sometimes it is just an
address from which the address of the operand is calculated. To differentiate among the various addressing modes
it is necessary to distinguish between the address part of the instruction and the effective address used by the
control when executing the instruction. The effective address is defined to be the memory address obtained from
the computation dictated by the given addressing mode. The effective address is the address of the operand in a
computational type instruction. It is the address where control branches in response to a branch-type instruction.

g. Relative Addressing Mode:


In this mode the content of the program counter is added to the address part of the instruction in order to obtain
the effective address. The address part of the instruction is usually a signed number which can be either positive
or negative. When this number is added to the content of the program counter, the result produces an effective
address whose position in memory is relative to the address of the next instruction. For instance, let’s assume that
the program counter contains the number 682 and the address part of the instruction contains the number 21. The
instruction at location 682 is read from memory during the fetch phase and the program counter is then
incremented by one to 683. The effective address computation for the relative address mode is 683 + 21 = 704.
This is 21 memory locations forward from the address of the next instruction. Relative addressing is often used
with branch-type instructions when the branch address is in the area surrounding the instruction word itself. It
results in a shorter address field in the instruction format since the relative address can be specified with a smaller
number of bits compared to the number of bits required to designate the entire memory address.

3.1.4 Advantages of addressing modes

The advantages of using addressing modes are as follows:

a. To provide the user with programming flexibility by offering such facilities as memory pointers, loop control counters, data indexing, and program relocation.
b. To reduce the number of bits in the addressing field of the instruction.


Uses of addressing modes

Some instruction set architectures, for instance Intel x86 and its successors, have a "load effective address" instruction. This performs a calculation of the effective operand address, but instead of acting on that memory location, it loads into a register the address that would have been accessed. This can be useful when passing the address of an array element to a subroutine. It can also be a slightly tricky way of achieving more additions than usual in one instruction; for example, using such an instruction with the addressing mode "base + index + offset" (detailed below) allows one to add two registers and a constant together in a single instruction.

What is memory hierarchy?

Memory is one of the important units in any computer system. Its serves as a storage for all the processed and the
unprocessed data or programs in a computer system. However, due to the fact that most computer users often
stored large amount of files in their computer memory devices, the use of one memory device in a computer
system has become inefficient and unsatisfactory. This is because only one memory cannot contain all the files
needed by the computer users and when the memory is large, it decreases the speed of the processor and the
general performance of the computer system.
Therefore, to curb this challenges, memory unit must be divided into smaller memories for more storage, speedy
program executions and the enhancement of the processor performance. The recently accessed files or programs
must be placed in the fastest memory. Since the memory with large capacity is cheap and slow and the memory
with smaller capacity is fast and costly. The organization of smaller memories to hold the recently accessed files
or programs closer to the CPU is term memory hierarchy. These memories are successively larger as they move
away from the CPU.

The strength and performance of a memory hierarchy can be measured using the model below:

Memory_Stall_Cycles = IC * Mem_Refs * Miss_Rate * Miss_Penalty

Where,
IC = Instruction Count
Mem_Refs = Memory References per Instruction
Miss_Rate = Fraction of accesses that are not in the cache
Miss_Penalty = Additional time to service the miss
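A worked example with hypothetical numbers (the instruction count, references per instruction, miss rate and miss penalty below are all assumed purely for illustration) is shown in the short calculation that follows.

```python
# Hypothetical numbers: 1,000,000 instructions, 1.5 memory references per
# instruction, a 2% miss rate, and a 100-cycle miss penalty.
IC           = 1_000_000
mem_refs     = 1.5
miss_rate    = 0.02
miss_penalty = 100

memory_stall_cycles = IC * mem_refs * miss_rate * miss_penalty
print(int(memory_stall_cycles))   # 3,000,000 stall cycles
```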

The memory hierarchy system encompasses all the storage devices used in a computer system. It ranges from the cache memory, which is small in size but fast in speed, to the relatively large but slow auxiliary memory. The smaller the size of the memory, the costlier it becomes.

The elements of the memory hierarchy include:

a. Cache memory,
b. Main memory and
c. Auxiliary memory

• The cache memory is the fastest and smallest memory. It is easily accessible by the CPU because it is closest to the CPU. Cache memory is very costly compared to the main memory and the auxiliary memory.


• The main memory, also known as primary memory, communicates directly with the CPU. It also communicates with the auxiliary memory through the I/O processor. During program execution, the files that are not currently needed by the CPU are often moved to the auxiliary storage devices in order to create space in the main memory for the currently needed files to be stored. The main memory is made up of Random Access Memory (RAM) and Read Only Memory (ROM).

• The auxiliary memory is very large in size and relatively slow in speed. It includes the magnetic tapes and magnetic disks which are used for the storage and backup of removable files. The auxiliary memories store programs that are not currently needed by the CPU. They are very cheap compared to both the cache and main memories.

3.2.2 Memory hierarchy diagram

The memory hierarchy system encompasses all the storage devices used in a computer system. It ranges from the fastest but smallest storage (cache memory) to the relatively fast but larger main memory, and to the slower but largest auxiliary memory. The cache memory is the smallest and fastest storage device; it is placed closest to the CPU for easy access by the processor logic. Moreover, cache memory helps to enhance the processing speed of the system by making currently needed programs and data available to the CPU at a very high speed. It stores segments of programs currently being processed by the CPU as well as the temporary data frequently needed in the current calculation.

If the CPU needs a program that is outside the main memory, the main memory will call in the program from the auxiliary memories via the input/output processor. The main difference between cache and main memories is the access time and processing logic: the processor logic is often faster than the main memory access time.

The auxiliary memory is made up of the magnetic tape and the magnetic disk. They are employed in the system to store and back up large volumes of data.

(Figure 2.1: memory hierarchy connections, showing the CPU linked to the cache memory and main memory, with the I/O processor linking the main memory to the auxiliary memory.)

As the storage capacity of the memory increases, the cost per bit for storing binary information decreases and the access time of the memory becomes longer. The diagram of a memory hierarchy is presented in Figure 2.1.

2.3 Characteristics of Memory Hierarchy


There are a number of parameters that characterize a memory hierarchy. They stand as the principles on which all the levels of the memory hierarchy operate. These characteristics are;
a. Access time,


b. Capacity,
c. Cycle time,
d. Latency,
e. Bandwidth, and
f. Cost

a. Access time: refers to the action that physically takes place during a read or write operation. When data or a program is moved from the top of the memory hierarchy to the bottom, the access time increases.
b. Capacity: the capacity of a level in the memory hierarchy increases as one moves from the top of the hierarchy to the bottom.
c. Cycle time: is defined as the time elapsed from the start of a read operation to the start of a subsequent read.
d. Latency: is defined as the time interval between the request for information and the access to the first bit of that information.
e. Bandwidth: this measures the number of bits that can be accessed per second.
f. Cost: the cost of a memory level is usually specified as dollars per megabyte. When data is moved from the bottom of the memory hierarchy to the top, the cost per bit increases.

Memory Hierarchy Design


The memory in a computer can be divided into five hierarchies based on speed as well as use. The primary memory, also known as internal memory, is directly accessible by the processor, while the secondary memory includes devices such as the magnetic disk and magnetic tape. The memory hierarchy design is presented in Figure 2.2.2 below.

Figure 2.2.2: Memory hierarchy design. Level 0: CPU registers; Level 1: cache memory (SRAM); Level 2: main memory (DRAM); Level 3: magnetic disk (disk storage); Level 4: optical disk and magnetic tape. Moving down the levels, capacity and access time increase; moving up, the cost per bit increases.

1. Registers: Registers are the fastest storage elements, located inside the CPU, and hold the data currently in use during an operation. Normally, a complex instruction set computer uses many registers to accept data from main memory.
2. Cache Memory: Cache memory can also be found in the processor; however, rarely, it may be another integrated circuit (IC) which is separated into levels.
3. Main Memory: This is the memory unit that communicates directly with the CPU. It is the primary storage unit in a computer system.


4. Magnetic Disks: A magnetic disk is a circular plate fabricated from plastic or metal and coated with magnetizable material.
5. Magnetic Tape: Magnetic tape is a magnetic recording medium consisting of a slender magnetizable coating on a long, thin strip of plastic film. It is mainly used to back up huge volumes of data.


3.2.5 Advantages of Memory Hierarchy


The advantages of a memory hierarchy include the following.

a. Memory distribution is simple and economical

b. Removes external fragmentation
c. Data can be spread all over
d. Permits demand paging & pre-paging
e. Swapping will be more proficient

Memory management systems

In a multiprogramming system, there is a need for a high-capacity memory, because many programs are stored in the memory at the same time. The programs must be moved around the memory to change the memory space used by a particular program, and a program must be prevented from altering other programs during read and write operations.

COMPONENTS OF MEMORY MANAGEMENT SYSTEM:

The principal components of the memory management system are;

a. A facility for dynamic storage relocation that maps logical memory references into physical memory
addresses.
b. A provision for sharing common programs stored in memory by different users.
c. Protection of information against unauthorized access between users, and prevention of users from changing operating system functions. The dynamic storage relocation hardware is a mapping process similar to the paging system.

3.3.2 Paging

Memory management is a crucial aspect of any computing device, and paging specifically is important to the
implementation of virtual memory. In the Paging method, the main memory is divided into small fixed-size blocks
of physical memory, which are called frames.

As an example, suppose that processes A2 and A4 are moved to the waiting state after some time. Eight frames therefore become empty, so other pages can be loaded into those empty blocks. A process A5 of size 8 pages (8 KB) is waiting in the ready queue.


In conclusion, paging is a function of memory management where a computer will store and retrieve data from a
device’s secondary storage to the primary storage. Memory management is a crucial aspect of any computing
device, and paging specifically is important to the implementation of virtual memory.

3.3.2.1 Paging Protection

The paging process should be protected by inserting an additional bit, called the valid/invalid bit, into each page table entry. Memory protection in paging is achieved by associating protection bits with each page; these bits are stored in the page table entries and specify the protection on the corresponding page.

3.3.2.2 Advantages and Disadvantages of Paging

Advantages

The following are the advantages of using Paging method:

a. No need for external Fragmentation


b. Swapping is easy between equal-sized pages and page frames.
c. Easy to use memory management algorithm

Disadvantages

The following are the disadvantages of using Paging method

a. May cause Internal fragmentation


b. Page tables consume additional memory.
c. Multi-level paging may lead to memory reference overhead.

3.3.3 Address mapping using paging

The table implementation of the address mapping is simplified if the information in the address space and the
memory space are each divided into groups of fixed size. The physical memory is broken down into groups of
equal size called blocks, which may range from 64 to 4096 words each. The term page refers to groups of address
space of the same size. For example, if a page or block consists of 1K words, address space is divided into 1024
pages and main memory is divided into 32 blocks. Although both a page and a block are split into groups of 1K
words, a page refers to the organization of address space, while a block refers to the organization of memory
space. The programs are also considered to be split into pages. Portions of programs are moved from auxiliary
memory to main memory in records equal to the size of a page. The term page frame is sometimes used to denote
a block.
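
A minimal Python sketch of this page-to-block mapping may help; the page-table contents and the 1K page size are assumptions used purely for illustration:

PAGE_SIZE = 1024                      # words per page and per block (assumed)
page_table = {0: 5, 1: 12, 2: 7}      # logical page number -> physical block number

def translate(logical_address):
    page = logical_address // PAGE_SIZE    # high-order bits: page number
    offset = logical_address % PAGE_SIZE   # low-order bits: word within the page
    block = page_table[page]               # look up the block that holds the page
    return block * PAGE_SIZE + offset      # physical address

print(translate(2 * PAGE_SIZE + 10))  # page 2, word 10 -> block 7, word 10 = 7178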


Address mapping using segments

Another mapping process, similar to the paging system, is implemented by the dynamic storage relocation hardware. Because of the large size of programs and their logical structure, the fixed page size employed in the virtual memory system poses a number of challenges for memory management; this motivates address mapping using variable-length segments.

Address mapping using segmented paging

One of the properties of logical space is that it uses variable-length segments. The length of each segment is
allowed to grow and contract according to the needs of the program being executed. One way of specifying the
length of a segment is by associating with it a number of equal-size pages. To see how this is done, consider the
logical address shown in Figure 2.3.3.
Figure 2.3.3: Address mapping using segmented paging. The logical address consists of a segment field, a page field and a word field; the segment field indexes the segment table, which points to a page table, and the page table entry supplies the block number that is combined with the word field to form the physical address (block and word).

The mapping of the logical address into a physical address is done by means of two tables, as shown in Figure 2.3.3. The segment number of the logical address specifies the address for the segment table. The entry in the segment table is a pointer address for a page table base.
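
The two-table lookup can be sketched as follows (Python, illustrative only; the segment table, page table contents and page size are invented):

PAGE_SIZE = 256                                    # words per page/block (assumed)
segment_table = {1: "pt_seg1"}                     # segment -> page-table identifier
page_tables = {"pt_seg1": {0: 40, 1: 41, 2: 99}}   # page -> block, per segment

def translate(segment, page, word):
    page_table = page_tables[segment_table[segment]]  # first table: find the page table
    block = page_table[page]                          # second table: find the block
    return block * PAGE_SIZE + word                   # physical address = block | word

print(translate(segment=1, page=2, word=5))  # block 99, word 5 -> 25349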

Multi-programming
Multiprogramming is the basic form of parallel processing in which several programs are run at the same time on
a single processor.


It executes multiple programs to avoid CPU and memory underutilization. It is also called a multiprogram task system, and it processes jobs faster than a batch processing system.

Advantages and Disadvantages of Multiprogramming

Below are the advantages and disadvantages of multiprogramming.

Advantages of Multiprogramming:

a. CPU never becomes idle


b. Efficient resources utilization
c. Response time is shorter
d. Short time jobs completed faster than long time jobs
e. Increased Throughput

Disadvantages of Multiprogramming:

a. Long time jobs have to wait long


b. Tracking all processes sometimes difficult
c. CPU scheduling is required
d. Requires efficient memory management
e. User interaction not possible during program execution

Virtual machines/memory and protection


Memory protection can be assigned to the physical address or the logical address. The protection of memory through the physical address can be done by assigning to each block in memory a number of protection bits that indicate the type of access allowed to its corresponding block.

Figure 2.3.4: Format of a typical segment descriptor, with fields: base address | length | protection.

Some of the access rights of interest that are used for protecting the programs residing in memory are listed below (a small illustrative check follows the list):

• Full read and write privileges


• Read only (write protection)
• Execute only (program protection)
• System only (operating system protection)
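
A small illustrative check of these access rights against a segment descriptor might look like the following sketch (Python; the descriptor fields and helper names are assumptions, not part of the course text):

FULL_RW   = "full read and write"
READ_ONLY = "read only"
EXEC_ONLY = "execute only"
SYS_ONLY  = "system only"

def access_allowed(protection, operation, is_os=False):
    # Return True if the requested operation is permitted by the protection field.
    if protection == FULL_RW:
        return operation in ("read", "write")
    if protection == READ_ONLY:
        return operation == "read"
    if protection == EXEC_ONLY:
        return operation == "execute"
    if protection == SYS_ONLY:
        return is_os                 # only the operating system may access
    return False

segment = {"base": 0x4000, "length": 0x800, "protection": READ_ONLY}
print(access_allowed(segment["protection"], "write"))  # False: protection violation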

Hierarchical memory systems


In computer system design, the memory hierarchy is used to enhance the organization of memory such that it can minimize the access time.
• External Memory or Secondary Memory: This is a permanent storage (non-volatile) and does not lose
any data when power is switched off. It is made up of Magnetic Disk, Optical Disk, Magnetic Tape i.e. peripheral
storage devices which are accessible by the processor via I/O Module.
• Internal Memory or Primary Memory: This memory is volatile in nature; it loses its data when power is switched off. It is made up of main memory, cache memory and CPU registers. It is directly accessible by the processor.


Properties of Hierarchical Memory Organization


There are three important properties for maintaining consistency in the memory hierarchy these three properties
are;

• Inclusion
• Coherence and
• Locality.

Drawbacks that occur in virtual memories


The following are the drawbacks of using virtual memory:
• Applications may run slower if the system is using virtual memory.
• Likely takes more time to switch between applications.
• Offers lesser hard drive space for your use.
• It reduces system stability.

The control unit is the main component of a central processing unit (CPU) in computers; it directs the operations during the execution of a program by the processor/computer. The main function of the control unit is to fetch and execute instructions from the memory of a computer. It receives the input instruction/information from the user and converts it into control signals, which are then given to the CPU for further execution. It is included as part of the Von Neumann architecture developed by John von Neumann. It is responsible for providing the timing signals and control signals, and it directs the execution of a program by the CPU. It is included as an internal part of the CPU in modern computers. This module describes complete information about the control unit.
This module is divided into three units.


QUESTION
WHAT IS CONTROL UNIT?

SUGGESTED ANSWER

Control Unit is the part of the computer’s central processing unit (CPU), which directs the operation of the
processor. It was included as part of the Von Neumann Architecture by John von Neumann. A control unit works
by receiving input information, which it converts into control signals that are then sent to the central
processor. The architecture of CPU varies from manufacturer to manufacturer. Examples of devices that require
a CU are:

• Central Processing Units (CPUs)


• Graphics Processing Units (GPUs)

Figure 3.1: Structure of the Control Unit

Major functions of the Control Unit –

• It coordinates the sequence of data movements into, out of, and between a processor’s many sub-units.
• It interprets instructions.
• It controls data flow inside the processor.
• It receives external instructions or commands to which it converts to sequence of control signals.
• It controls many execution units (i.e. ALU, data buffers and registers) contained within a CPU.
• It also handles multiple tasks, such as fetching, decoding, execution handling and storing results.

WHAT IS A HARDWIRED CONTROL UNIT


A hardwired control unit is a mechanism for producing control signals using finite state machines (FSMs). It is designed as a sequential logic circuit. The final circuit is constructed by physically connecting components such as gates, flip-flops, and decoders.
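
The behaviour of such an FSM can be mimicked with a short sketch (Python; the states and control-signal names are invented for illustration, and a real hardwired unit is built from gates and flip-flops rather than software):

# Next-state logic and the control signals asserted in each state.
NEXT_STATE = {"FETCH": "DECODE", "DECODE": "EXECUTE", "EXECUTE": "FETCH"}
CONTROL_SIGNALS = {
    "FETCH":   {"mem_read", "load_ir", "pc_increment"},
    "DECODE":  {"decode_ir"},
    "EXECUTE": {"alu_enable", "reg_write"},
}

state = "FETCH"
for _ in range(6):                        # run through two full instruction cycles
    print(state, sorted(CONTROL_SIGNALS[state]))
    state = NEXT_STATE[state]             # "wired" next-state decision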


Figure 3.2.1: Hardwired Control Unit

Design of a hardwired Control Unit


Control signals for an instruction execution have to be generated not in a single time point but during the entire
time interval that corresponds to the instruction execution cycle.
Advantages of Hardwired Control Unit:

1. Because combinational circuits are used to generate the signals, the hardwired control unit is fast.
2. The delay that can occur in the generation of control signals depends on the number of gates.
3. It can be optimized to produce a fast mode of operation.
4. It is faster than a micro-programmed control unit.

Disadvantages of Hardwired Control Unit:

1. The complexity of the design increases as we require more control signals to be generated (need of more
encoders & decoders)
2. Modifications in the control signals are very difficult because it requires rearranging of wires in the
hardware circuit.
3. Adding a new feature is difficult & complex.
4. Difficult to test & correct mistakes in the original design.
5. It is Expensive.

Design of a Micro-Programmed Control Unit


The fundamental difference between these unit structures and the structure of the hardwired control unit is the
existence of the control store that is used for storing words containing encoded control signals mandatory for
instruction execution. In microprogrammed control units, subsequent instruction words are fetched into the
instruction register in a normal way. However, the operation code of each instruction is not directly decoded to
enable immediate control signal generation but it comprises the initial address of a microprogram contained in
the control store.


Differences Between Hardwired and Microprogrammed Control

Advantages of Micro programmed Control Unit


The advantages of microprogrammed control are as follows:
• It permits a more systematic design of the control unit.
• It is simpler to debug and change.
• It retains the underlying structure of the control function.
• It makes the design of the control unit much simpler; hence, it is inexpensive and less error-prone.
• It supports an orderly and systematic design process.
• Control functions are implemented in software rather than hardware.
• It is more flexible.
• Complex functions can be carried out easily.

Disadvantages of Microprogrammed Control Unit


The disadvantages of microprogrammed control are as follows;


• Adaptability is obtained at more cost.


• It is slower than a hardwired control unit.

Organization of micro programmed control unit

The control memory is assumed to be a ROM, within which all control information is permanently stored.
• The control memory address register specifies the address of the microinstruction, and the control data register holds the microinstruction read from memory.
• The microinstruction contains a control word that specifies one or more microoperations for the data
processor. Once these operations are executed, the control must determine the next address.
• The location of the next microinstruction may be the one next in sequence, or it may be located somewhere
else in the control memory.
• While the microoperations are being executed, the next address is computed in the next address generator
circuit and then transferred into the control address register to read the next microinstruction.
• Thus a microinstruction contains bits for initiating microoperations in the data
processor part and bits that determine the address sequence for the control memory.
• The next address generator is sometimes called a micro-program sequencer, as it determines the address sequence that is read from control memory (a minimal sketch of this sequencing loop is given after this list).
• Typical functions of a micro-program sequencer are incrementing the control address register by one,
loading into the control address register an address from control memory, transferring an external address, or
loading an initial address to start the control operations.
• The control data register holds the present microinstruction while the next address is computed and read
from memory.
• The data register is sometimes called a pipeline register.
• It allows the execution of the microoperations specified by the control word simultaneously with the
generation of the next microinstruction.
• This configuration requires a two-phase clock, with one clock applied to the address register and the other
to the data register.
• The main advantage of the micro programmed control is the fact that once the hardware configuration is
established; there should be no need for further hardware or wiring changes.
• If we want to establish a different control sequence for the system, all we need to do is specify a different
set of microinstructions for control memory.
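
Putting the pieces above together, a minimal sketch of the fetch-and-sequence loop might look like this (Python; the control-memory contents and signal names are invented for illustration):

# Control memory (a ROM): each word holds a control word plus a next-address field.
control_memory = {
    0: {"control_word": ["mem_read", "load_ir"],  "next": 1},
    1: {"control_word": ["decode_ir"],            "next": 2},
    2: {"control_word": ["alu_add", "reg_write"], "next": 0},
}

car = 0                                   # control address register
for _ in range(6):
    cdr = control_memory[car]             # control data (pipeline) register
    print(car, cdr["control_word"])       # issue the micro-operations
    car = cdr["next"]                     # next-address generator (sequencer)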

Types of Micro-programmed Control Unit

• Horizontal Micro-programmed Control Unit:

The control signals are represented in decoded binary format, that is, 1 bit per control signal. For example, if 53 control signals are present in the processor, then 53 bits are required. More than one control signal can be enabled at a time.
• It supports a longer control word.
• It is used in parallel processing applications.
• It allows a higher degree of parallelism; if the degree is n, then n control signals can be enabled at a time.
• It requires no additional hardware (decoders), which means it is faster than the vertical micro-programmed control unit.
• It is more flexible than the vertical micro-programmed control unit.

Vertical Micro-programmed control Unit:

The control signals are represented in encoded binary format. For N control signals, log2(N) bits are required (a small comparison of the two encodings is sketched after the list below).


• It supports shorter control words.

• It supports easy implementation of new control signals; therefore it is more flexible in that respect.
• It allows a low degree of parallelism, i.e., the degree of parallelism is either 0 or 1.
• It requires additional hardware (decoders) to generate the control signals, which implies it is slower than the horizontal micro-programmed control unit.
• It is less flexible than the horizontal micro-programmed control unit but more flexible than a hardwired control unit.
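
A small comparison of the control-word widths for the two encodings, using the 53-signal example from the text (Python; illustrative calculation only):

import math

n_signals = 53
horizontal_bits = n_signals                      # 1 bit per control signal
vertical_bits = math.ceil(math.log2(n_signals))  # encoded: log2(N) bits, rounded up

print(horizontal_bits, vertical_bits)            # 53 bits versus 6 bits
# Horizontal: several signals can be active at once (one bit each).
# Vertical: the 6-bit field is decoded, so only one signal is selected at a time.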

Clock limitations

A circuit can only operate synchronously if all parts of it see the clock at the same time, at least to a reasonable
approximation.
As feature sizes reduce and chips encompass more functionality it is likely that the average proportion of the chip
which is doing something useful at any time will shrink. Therefore the global clock is becoming increasingly
inefficient.

Basic Concepts
There are a few key concepts fundamental to the understanding of asynchronous circuits:
the timing models used, the mode of operation and the signaling conventions.

Timing model

Asynchronous circuits are classified according to their behaviour with respect to circuit delays. If a circuit
functions correctly irrespective of the delays in the logic gates and the delays in the wiring it is known as delay-
insensitive.
Asynchronous circuits can operate in one of two modes. The first is called fundamental mode and assumes that no further input changes can be applied until all outputs have settled in response to a previous input. The second, input/output mode, allows inputs to change again as soon as the circuit has responded at its outputs, without requiring all internal signals to settle.

Asynchronous signaling conventions


A communication between two elements in an asynchronous system can be considered as having two or four
phases of operation and a single bit of information can be conveyed on either a single wire or a pair or wires
(known as dual-rail encoding).

Two-phase
In a two-phase communication the information is transmitted by a single transition or change in voltage level on
a wire. Figure 4.1(a) shows an example of two-phase communication.

Four-phase
With four-phase communication, two of the phases perform active communication while the other two permit recovery to a predefined state. Figure 4.2 shows an example of four-phase communication; in this example all wires are initialized to a logical low level.


Single-rail encoding
A single-rail circuit encodes information in a conventional level encoded manner. One wire is required for each
bit of information.


Dual-rail encoding
A dual-rail circuit requires two wires to encode every bit of information.
Of the two wires, one represents a logic ‘0’ and the other represents a logic ‘1’. In any communication an event
occurs on either the logic ‘0’ wire or the logic ‘1’ wire.
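
A minimal sketch of dual-rail, four-phase encoding (Python; the wire ordering and spacer convention are assumptions for illustration):

SPACER = (0, 0)                 # both wires low: no data present (recovery phase)

def encode(bit):
    # Return the (d1, d0) wire levels for one data bit.
    return (1, 0) if bit else (0, 1)

def decode(wires):
    if wires == SPACER:
        return None             # spacer: the channel is returning to its rest state
    d1, d0 = wires
    return 1 if d1 else 0

# Four-phase handshake for sending a single '1': data phase, then back to spacer.
for wires in (encode(1), SPACER):
    print(wires, decode(wires))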

Overall the design adheres to the bounded-delay timing model (although some parts may be considered delay-
insensitive) and its pipeline stages operate in fundamental mode.

Benefits of Asynchronous Control

Two major assumptions guide the design of today’s logic: all signals are binary, and time is discrete. Both of these assumptions are made in order to simplify logic design. By assuming binary values on signals, simple Boolean logic can be used to describe and manipulate logic constructs. By assuming time is discrete, hazards and feedback can largely be ignored.

Asynchronous circuits keep the assumption that signals are binary, but remove the assumption that time is discrete.

Clockless or asynchronous control design is receiving renewed attention, due to its potential benefits of modularity, low power, low electromagnetic interference and average-case performance.

This has several possible benefits:

• No clock skew - Clock skew is the difference in arrival times of the clock signal at different parts of the circuit.
• Lower power - Standard synchronous circuits have to toggle clock lines, and possibly pre-charge and discharge signals, in portions of a circuit unused in the current computation.
• Average-case instead of worst-case performance - Synchronous circuits must wait until all possible computations have completed before latching the results, yielding worst-case performance. Many asynchronous systems sense when a computation has completed, allowing them to exhibit average-case performance.
• Easing of global timing issues - In systems such as a synchronous microprocessor, the system clock, and thus system performance, is dictated by the slowest (critical) path.
• Better technology migration potential - Integrated circuits will often be implemented in several different technologies during their lifetime. In many asynchronous systems, migration of only the more critical system components can improve system performance on average, since performance is dependent on only the currently active path.
• Automatic adaptation to physical properties - The delay through a circuit can change with variations in fabrication, temperature, and power-supply voltage. Synchronous circuits must assume that the worst possible combination of factors is present and clock the system accordingly.

Robust mutual exclusion and external input handling -


Elements that guarantee correct mutual exclusion of independent signals and synchronization of external signals to a clock are subject to metastability. Also, since there is no clock with which signals must be synchronized,


asynchronous circuits more gracefully accommodate inputs from the outside world, which are by nature
asynchronous.

Limitations of Asynchronous Controllers


• Asynchronous circuits are more difficult to design in an ad hoc fashion than synchronous circuits. In a
synchronous system, a designer can simply define the combinational logic necessary to compute the given
function, and surround it with latches.
• By setting the clock rate to a long enough period, all worries about hazards (undesired signal transitions)
and the dynamic state of the circuit are removed. In contrast, designers of asynchronous systems must pay a great
deal of attention to the dynamic state of the circuit. Hazards must also be removed from the circuit, or not
introduced in the first place, to avoid incorrect results.
• The ordering of operations, which was fixed by the placement of latches in a synchronous system, must
be carefully ensured by the asynchronous control logic. For example, some asynchronous methodologies allow
only algebraic manipulations (associative, commutative, and DeMorgan's Law) for logic decomposition, and
many do not even allow these.
• Placement, routing, partitioning, logic synthesis, and most other CAD tools either need modifications for
asynchronous circuits, or are not applicable at all.
• Finally, even though most of the advantages of asynchronous circuits are towards higher performance, it
isn't clear that asynchronous circuits are actually any faster in practice.
• Asynchronous circuits generally require extra time due to their signaling policies, thus increasing average-
case delay.

Asynchronous Communication

Concurrent and distributed systems use communication as a means to exchange information. Communication can be of two kinds: synchronous and asynchronous. Microcontrollers have the ability to communicate asynchronously and synchronously. With asynchronous communication there is no clock wire between the two microcontrollers, so each microcontroller is essentially blind to the other's pulse rate. That means the two devices do not share a dedicated clock signal (a separate clock exists on each device). Each device must set up, ahead of time, a matching bit rate and how many bits to expect in a given transaction.

Asynchronous Transmission

The size of a character transmitted is 8 bits, with a start bit added at the beginning and a stop bit at the end, making it a total of 10 bits. It does not need a shared clock for synchronization; rather, it uses the start and stop bits to tell the receiver how to frame the data. It is straightforward, quick, cost-effective, and does not need two-way communication to function.
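
A minimal sketch of this 10-bit frame (Python; the bit polarity and least-significant-bit-first order are common conventions assumed here, not stated in the course text):

def frame_byte(value):
    data_bits = [(value >> i) & 1 for i in range(8)]  # 8 data bits, LSB first
    return [0] + data_bits + [1]                      # start bit = 0, stop bit = 1

frame = frame_byte(ord("A"))   # character 'A' (0x41)
print(frame, len(frame))       # 10 bits in total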

Characteristics of Asynchronous Communication

• Each character is preceded by a start bit and concluded with one or more stop bits.
• There may be gaps or spaces in between characters.

Examples of Asynchronous Communication

• Emails
• Forums
• Letters
• Radios


• Televisions

Synchronous vs. Asynchronous Transmission

1. In synchronous transmission data is transmitted in the form of chunks, while in asynchronous transmission
data is transmitted one byte at a time.
2. Synchronous transmission needs a clock signal between the source and target to let the target know of the new byte. In comparison, with asynchronous transmission a clock signal is not needed, because the start and stop bits attached to the data being transmitted serve as indicators of a new byte.
3. The data transfer rate of synchronous transmission is faster since it transmits in chunks of data, compared
to asynchronous transmission which transmits one byte at a time.
4. Asynchronous transmission is straightforward and cost-effective, while synchronous transmission is
complicated and relatively pricey.
5. Synchronous transmission is systematic and necessitates lower overhead figures compared to
asynchronous transmission.

Emerging application areas

Beyond more classical design targets, a number of novel application areas have recently emerged where
asynchronous design is poised to make an impact.

• Large-scale heterogeneous system integration. In multi- and many-core processors and systems-on-chip (SoCs), some level of asynchrony is inevitable in the integration of heterogeneous components.

• Ultra-low-energy systems and energy harvesting.

Continuous-time digital signal processors (CTDSP’s).

Another intriguing direction is the development of continuous-time digital signal processors, where input samples
are generated at irregular rates by a level-crossing analog-to-digital converter, depending on the actual rate of
change of the input’s waveform.

An early specialized approach, using finely discretized sampling, demonstrated a 10x power reduction.

Alternative computing paradigms.

Finally, there is increasing interest in asynchronous circuits as the organizing backbone of systems based on
emerging computing technologies

What is Fault Tolerance

Fault tolerance has been part of the computing community for quite a long time. To clarify and build our understanding of it, we should know that fault tolerance is the art and science of building computing systems that continue to operate satisfactorily in the presence of faults.

Basic Terms of fault Tolerance Computing


Fault tolerance can be built into a system to remove the risk of it having a single point of failure. To do so, the system must have no single component that, if it were to stop working effectively, would result in the entire system failing. Fault tolerance relies on aspects like load balancing and failover, which remove the risk of a single point of failure.

A fault is a physical defect, imperfection, or flaw that occurs within some hardware or software component.

An error is the manifestation of a fault. Specifically, an error is a deviation from accuracy or correctness. If the error results in the system performing one of its functions incorrectly, then a system failure has occurred. Essentially, a failure is the nonperformance of some action that is due or expected. A failure is also the performance of some function in a subnormal quantity or quality.

The concepts of faults, errors, and failures can be best presented by the use of a three-universe model that is an
adaptation of the four-universe models;

• first universe is the physical universe in which faults occur. The physical universe contains the
semiconductor devices, mechanical elements, displays, printers, power supplies, and other physical entities that
make up a system. A fault is a physical defect or alteration of some component within the physical universe.
• The second universe is the informational universe. The informational universe is where the error occurs.
Errors affect units of information such as data words within a computer or digital voice or image information. An
error has occurred when some unit of information becomes incorrect.
• The final universe is the external or user’s universe. The external universe is where the user of a system
ultimately sees the effect of faults and errors.
• The cause-effect relationship implied in the three-universe model leads to the definition of two important
parameters; fault latency and error latency.

• Fault latency is the length of time between the occurrence of a fault and the appearance of an error due to
that fault.
• Error latency is the length of time between the occurrence of an error and the appearance of the resulting
failure.

Characteristics of Faults
a) Causes/Source of Faults
b) Nature of Faults
c) Fault Duration
d) Extent of Faults
e) Value of faults

Sources of faults: Faults can be the result of a variety of things that occur within electronic components, external
to the components, or during the component or system design process. Problems at any of several points within
the design process can result in faults within the system.

• Specification mistakes, which include incorrect algorithms, architectures, or hardware and software
design specifications.
• Implementation mistakes. Implementation, as defined here, is the process of transforming hardware and
software specifications into the physical hardware and the actual software. The implementation can introduce
faults because of poor design, poor component selection, poor construction, or software coding mistakes.
• Component defects. Manufacturing imperfections, random device defects, and component wear-out are
typical examples of component defects. Electronic components simply become defective sometimes. The defect


can be the result of bonds breaking within the circuit or corrosion of the metal. Component defects are the most
commonly considered cause of faults.
• External disturbance; for example, radiation, electromagnetic interference, battle damage, operator
mistakes, and environmental extremes.

Nature of a fault: specifies the type of fault; for example, whether it is a hardware fault, a software fault, a fault in the analog circuitry, or a fault in the digital circuitry.

Fault Duration. The duration specifies the length of time that a fault is active.

• Permanent fault, which remains in existence indefinitely if no corrective action is taken.

• Transient fault, which can appear and disappear within a very short period of time.
• Intermittent fault, which appears, disappears, and then reappears repeatedly.

Fault Extent. The extent of a fault specifies whether the fault is localized to a given hardware or software module
or globally affects the hardware, the software, or both.

Fault value: the value of a fault can be either determinate or indeterminate. A determinate fault is one whose status remains unchanged throughout time unless externally acted upon. An indeterminate fault is one whose status at some time, T, may be different from its status at some increment of time greater than or less than T.

There are three primary techniques for maintaining a system’s normal performance in an environment where faults are of concern: fault avoidance, fault masking, and fault tolerance.

• Fault avoidance is a technique that is used in an attempt to prevent the occurrence of faults. Fault avoidance can include such things as design reviews, component screening, testing, and other quality control methods.
• Fault masking is any process that prevents faults in a system from introducing errors into the informational structure of that system.
• Fault tolerance is the ability of a system to continue to perform its tasks after the occurrence of faults. The ultimate goal of fault tolerance is to prevent system failures from occurring. Since failures are directly caused by errors, the terms fault tolerance and error tolerance are often used interchangeably.

Approaches for Fault Tolerance.

• Fault masking is one approach to tolerating faults.

• Reconfiguration is the process of eliminating a faulty entity from a system and restoring the system to some operational condition or state.
• Fault detection is the process of recognizing that a fault has occurred. Fault detection is often required before any recovery procedure can be implemented.
• Fault location is the process of determining where a fault has occurred so that an appropriate recovery can be implemented.
• Fault containment is the process of isolating a fault and preventing the effects of that fault from propagating throughout a system. Fault containment is required in all fault-tolerant designs.
• Fault recovery is the process of remaining operational or regaining operational status via reconfiguration even in the presence of faults.


Goals of Fault Tolerance


Fault tolerance is an attribute that is designed into a system to achieve design goals such as dependability, reliability, availability, safety, performability, maintainability, and testability; fault tolerance is one system attribute capable of fulfilling such requirements.

Dependability. The term dependability is used to encapsulate the concepts of reliability, availability, safety,
maintainability, performability, and testability.

Reliability. The reliability of a system is a function of time, R(t), defined as the conditional probability that the
system performs correctly throughout the interval of time, [t0,t], given that the system was performing correctly
at time t0.

Reliability is most often used to characterize systems in which even momentary periods of incorrect performance are unacceptable, or in which it is impossible to repair the system. In an application such as flight control, the time intervals of concern may be no more than several hours, but the probability of working correctly throughout that interval may be 0.9999999 or higher. It is a common convention when reporting reliability numbers to use 0.9^i to represent the fraction that has i nines to the right of the decimal point. For example, 0.9999999 is written as 0.9^7.

Availability. Availability is a function of time, A(t), defined as the probability that a system is operating correctly
and is available to perform its functions at the instant of time, t. Availability differs from reliability in that
reliability involves an interval of time, while availability is taken at an instant of time.
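
The "number of nines" notation and its availability/downtime implications can be illustrated with a small calculation (Python; the availability figures are examples only):

import math

def nines(availability):
    # 0.999 -> 3 nines, 0.9999 -> 4 nines, and so on.
    return -math.log10(1.0 - availability)

def downtime_minutes_per_year(availability):
    return (1.0 - availability) * 365 * 24 * 60

for a in (0.999, 0.9999, 0.99999):
    print(a, round(nines(a)), round(downtime_minutes_per_year(a), 1))
# 0.999   -> 3 nines, about 525.6 minutes of downtime per year
# 0.9999  -> 4 nines, about 52.6 minutes per year
# 0.99999 -> 5 nines, about 5.3 minutes per year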

Safety. Safety is the probability, S(t), that a system will either perform its functions correctly or will discontinue
its functions in a manner that does not disrupt the operation of other systems or compromise the safety of any
people associated with the system. Safety is a measure of the failsafe capability of a system; if the system does
not operate correctly, it is desired to have the system fail in a safe manner.

Performability. In many cases, it is possible to design systems that can continue to perform correctly after the occurrence of hardware and software faults, but the level of performance is somehow diminished. The performability of a system is a function of time, P(L,t), defined as the probability that the system performance will be at, or above, some level, L, at the instant of time, t. Performability differs from reliability in that reliability is a measure of the likelihood that all of the functions are performed correctly, while performability is a measure of the likelihood that some subset of the functions is performed correctly.

Graceful degradation is an important feature that is closely related to performability. Graceful degradation is
simply the ability of a system to automatically decrease its level of performance to compensate for hardware and
software faults. Fault tolerance can certainly support graceful degradation and performability by providing the
ability to eliminate the effects of hardware and software faults from a system, therefore allowing performance at
some reduced level.

Maintainability. Maintainability is a measure of the ease with which a system can be repaired, once it has failed.
In more quantitative terms, maintainability is the probability, M(t), that a failed system will be restored to an
operational state within a period of time t. The restoration process includes locating the problem, physically
repairing the system, and bringing the system back to its operational condition. Many of the techniques that are
so vital to the achievement of fault tolerance can be used to detect and locate problems in a system for the purpose
of maintenance.

Testability. Testability is simply the ability to test for certain attributes within a system. Measures of testability
allow one to assess the ease with which certain tests can be performed. Certain tests can be automated and
provided as an integral part of the system to improve the testability.


Fault Tolerant Systems

This fault-tolerance definition refers to the system's ability to continue operating despite failures or malfunctions.
A fault-tolerant system may be able to tolerate one or more fault-types including

• Transient, Intermittent or Permanent Hardware Faults,


• Software and Hardware Design Errors,
• Operator Errors
• Externally Induced Upsets or Physical Damage.

An extensive methodology has been developed in this field over the past thirty years, and a number of fault-tolerant machines have been developed, most dealing with random hardware faults, while a smaller number deal with software, design and operator faults to varying degrees. One widely used approach employs multiple redundant channels: each channel is designed to provide the same function, and a method is provided to identify if one channel deviates unacceptably from the others. The goal is to tolerate both hardware and software design faults. This is a very expensive technique, but it is used in very critical aircraft control applications.

Major building blocks of a Fault-tolerance System

The key benefit of fault tolerance is to minimize or avoid the risk of systems becoming unavailable due to component errors. This is particularly important in critical systems that are relied on to ensure people’s safety, such as air traffic control, and in systems that protect and secure critical data and high-value transactions. The core components for improving fault tolerance include:

Diversity: If a system’s main electricity supply fails, potentially due to a storm that causes a power outage or affects a power station, it will not be possible to access alternative electricity sources. In this event, fault tolerance can be achieved through diversity, which provides electricity from sources such as backup generators that take over when a main power failure occurs.
• Some diverse fault-tolerance options result in the backup not having the same level of capacity as the
primary source. This may, in some cases, require the system to ensure graceful degradation until the primary
power source is restored.
• Redundancy
• Fault-tolerant systems use redundancy to remove the single point of failure. The system is equipped with
one or more power supply units (PSUs), which do not need to power the system when the primary PSU functions
as normal. In the event the primary PSU fails or suffers a fault, it can be removed from service and replaced by a
redundant PSU, which takes over system function and performance.
• Alternatively, redundancy can be imposed at a system level, which means an entire alternate computer
system is in place in case a failure occurs.

Replication: Replication is a more complex approach to achieving fault tolerance. It involves using multiple
identical versions of systems and subsystems and ensuring their functions always provide identical results.
• Replication can either take place at the component level, which involves multiple processors running
simultaneously, or at the system level, which involves identical computer systems running simultaneously

Basic Characteristics of Fault Tolerant Systems

A fault tolerant system may have one or more of the following characteristics:

• No Single Point of Failure: This means if a capacitor, block of software code, a motor, or any single item
fails, then the system does not fail. As an example, many hospitals have backup power systems in case the grid


power fails, thus keeping critical systems within the hospital operational. Critical systems may have multiple
redundant schemes to maintain a high level of fault tolerance and resilience.

• No Single Point Repair Takes the System Down: Extending the single point failure idea, effecting a
repair of a failed component does not require powering down the system, for example.

• Fault isolation or identification: The system is able to identify when a fault occurs within the system and does not permit the faulty element to adversely influence its functional capability (i.e. losing data or making logic errors in a banking system). The faulty elements are identified and isolated. Portions of the system may have the sole purpose of detecting faults; built-in self-test (BIST) is an example.

Fault containment to prevent propagation of failure

• When a failure occurs it may result in damage to other elements within the system, thus creating a second
or third fault and system failure.
• For example, if an analog circuit fails it may increase the current across the system damaging logic circuits
unable to withstand
high current conditions. The idea of fault containment is to avoid or minimize collateral damage caused by a
single point failure.

Robustness or Variability Control

• When a system experiences a single point failure, the system changes.


• The change may cause transient or permanent changes affecting how the working elements of the system respond and function. Variation occurs, and when a failure occurs there is often an increase in variability. For example, when one of two power supplies fails, the remaining power supply takes on the full load of the power demand. This transition should occur without impacting the performance of the system. The ability to design and manufacture a robust system may involve design for six sigma, design of experiments optimization, and other tools to create a system able to operate when a failure occurs.

Availability of Reversion Mode

• There are many ways a system may alter its performance when a failure occurs, enabling the system to continue to function in some fashion.
• In some cases, the system may be able to operate with no or only minimal loss of functional capability, or the reversion operation may significantly restrict the system operation to a critical few functions.

Hardware and Software Fault Tolerant Issues

In everyday language, the terms fault, failure, and error are used interchangeably. In fault-tolerant computing parlance, however, they have distinctive meanings. A fault (or failure) can be either a hardware defect or a software (programming) mistake, i.e. a bug. In contrast, an error is a manifestation of the fault, failure or bug. As an example, consider an adder circuit with an output line stuck at 1; it always carries the value 1 independently
an example, consider an adder circuit, with an output line stuck at 1; it always carries the value 1 independently
of the values of the input operands. This is a fault, but not (yet) an error. This fault causes an error when the adder
is used and the result on that line is supposed to have been a 0, rather than a 1. A similar distinction exists between
programming mistakes and execution errors. Consider, for example, a subroutine that is supposed to compute
sin(x) but owing to a programming mistake calculates the absolute value of sin(x) instead. This mistake will result
in an execution error only if that particular subroutine is used and the correct result is negative.


Both faults and errors can spread through the system. For example, if a chip shorts out power to ground, it may
cause nearby chips to fail as well. Errors can spread because the output of one unit is used as input by other units.
To return to our previous examples, the erroneous results of either the faulty adder or the sin(x) subroutine can be
fed into further calculations, thus propagating the error.

To limit such contagion, designers incorporate containment zones into systems. These are barriers that reduce the
chance that a fault or error in one zone will propagate to another. For example, a fault-containment zone can be
created by ensuring that the maximum possible voltage swings in one zone are insulated from the other zones,
and by providing an independent power supply to each zone. In other words, the designer tries to electrically
isolate one zone from another. An error-containment zone can be created, as we will see in some detail later on,
by using redundant units, programs and voting on their output.

Hardware faults can be classified according to several aspects. Regarding their duration, hardware faults can be classified into permanent, transient, or intermittent. A permanent fault is just that: it reflects the permanent going out of commission of a component. As an example of a permanent fault, think of a burned-out light bulb.

A transient fault is one that causes a component to malfunction for some time; it goes away after that time and the functionality of the component is fully restored. As an example, think of random noise interference during a telephone conversation. Another example is a memory cell whose contents are changed spuriously due to some electromagnetic interference. The cell itself is undamaged: it is just that its contents are wrong for the time being, and overwriting the memory cell will make the fault go away.

An intermittent fault never quite goes away entirely; it oscillates between being quiescent and active. When the fault is quiescent, the component functions normally; when the fault is active, the component malfunctions. An example of an intermittent fault is a loose electrical connection. Another classification of hardware faults is into benign and malicious faults.
A fault that just causes a unit to go dead is called benign. Such faults are the easiest to deal with. Far more insidious are the faults that cause a unit to produce reasonable-looking, but incorrect, output, or that make a component “act maliciously” and send differently valued outputs to different receivers. Think of an altitude sensor in an airplane that reports a 1000-foot altitude to one unit and an 8000-foot altitude to another unit. These are called malicious (or Byzantine) faults.

3.0.1.4 Fault Tolerance VS High Availability

Why is it that we see industry-standard servers advertising five 9s of availability while NonStop servers acknowledge four 9s? Are these high-availability industry-standard servers really ten times more reliable than fault-tolerant NonStop servers? Of course not.

To understand this marketing discrepancy, let’s take a look at the factors which differentiate fault-tolerant systems
from high-availability systems. To start with, there is no reason to assume that a single NonStop processor is any
more or less reliable than an industry-standard processor. In fact, a reasonable assumption is that a processor will
be up about 99.5% of the time (that is, it will have almost three 9s availability) whether it be a NonStop processor
or an industry-standard processor. So how do we get four or five 9s out of components that offer less than three
9s of availability? Through redundancy, of course. NonStop servers are inherently redundant and are fault tolerant
(FT) in that they can survive any single fault. In the high-availability (HA) world, industry-standard servers are
configured in clusters of two or more processors that allow for re-configuration around faults. FT systems tolerate
faults; HA clusters re-configure around faults.

If you provide a backup, you double your 9s. Thus, in a two-processor configuration, each of which has an
availability of .995, you can be dreaming of five 9s of hardware availability. But dreams they are. True, you will


have at least one processor up 99.999% of the time; but that does not mean that your system will be available for
that proportion of time. This is because most system outages are not caused by hardware failures.

The causes of outages have been studied by many (Standish Group, IEEE Computer, Grey, among others), and
they all come up with amazingly similar breakdowns:

- Hardware: 10% – 20%
- Software: 30% – 40%
- People: 20% – 40%
- Environment: 10% – 20%
- Planned: 20% – 30%

These results are for single processor systems. However, we are considering redundant systems which will suffer
a hardware failure only if both systems fail. Given a 10-20% chance that a single system will fail due to a hardware
failure, an outage due to a dual hardware failure is only 1% to 4%. Thus, we can pretty much ignore hardware
failures as a source of failure in redundant systems. (This is a gross understatement for the new Nonstop Advanced
Architecture, which is reaching toward six or seven 9s for hardware availability.)

So, what is left that can be an FT/HA differentiator? Environmental factors (air conditioning, earthquakes, etc.)
and people factors (assuming good system management tools) are pretty much independent of the system. Planned
downtime is a millstone around everyone’s neck, and much is being done about this across all systems. This
leaves software as the differentiator.

Software faults are going to happen, no matter what. In a single system, 30-40% of all single-system outages will
be caused by software faults. The resultant availability of a redundant system is going to depend on how software
faults are handled. Here is the distinction between fault-tolerant systems and high-availability systems. A fault-
tolerant system will automatically recover from a software fault almost instantly (typically in seconds) as failed
processes switch over to their synchronized backups. The state of incomplete transactions remains in the backup
disk process and processing goes on with virtually no delay. On the other hand, a high-availability (HA) cluster
will typically require that the applications be restarted on a surviving system and that in-doubt transactions in
process be recovered from the transaction log. Furthermore, users must be switched over before the applications
are once again available to the users. This can all take several minutes. In addition, an HA switchover must often
be managed manually.

If an FT system and an HA cluster have the same fault rate, but the FT system can recover in 3 seconds and the
HA cluster takes 5 minutes (300 seconds) to recover from the same fault, then the HA cluster will be down 100
times as long as the FT system and will have an availability which is two 9s less. That glorious five 9s claim
becomes three 9s (as reported in several industry studies), at least so far as software faults are concerned.
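
To see how recovery time translates into 9s, the standard approximation availability = MTBF / (MTBF + MTTR) can be applied to the two recovery times quoted above. The sketch below is illustrative only: the 3-second and 300-second recovery times come from the text, while the fault rate (one software fault every 30 days) is an assumed figure.

```python
# Availability as a function of recovery time, using availability = MTBF / (MTBF + MTTR).
# MTTR values (3 s and 300 s) come from the text; the MTBF is an illustrative assumption.
mtbf = 30 * 24 * 3600                 # assume one software fault every 30 days (in seconds)

for name, mttr in [("FT system ", 3), ("HA cluster", 300)]:
    availability = mtbf / (mtbf + mttr)
    print(f"{name}: {availability:.7f}")

# FT system : 0.9999988   (between five and six 9s for this fault class)
# HA cluster: 0.9998843   (between three and four 9s)
# The HA cluster accumulates roughly 100 times the downtime, i.e. about two 9s less.
```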

So, the secret to high availability is in the recovery time. This is what the Tandem folks worked so hard on for
two decades before becoming the Nonstop people. Nobody else has done it. Today, Nonstop servers are the only
fault-tolerant systems out-of-the-box in the marketplace, and they hold the high ground for availability.

Redundancy
All of fault tolerance is an exercise in exploiting and managing redundancy. Redundancy is the property of having
more of a resource than is minimally necessary to do the job at hand. As failures happen, redundancy is exploited
to mask or otherwise work around these failures, thus maintaining the desired level of functionality.


There are four forms of redundancy that we will study: hardware, software, information, and time. Hardware
faults are usually dealt with by using hardware, information, or time redundancy, whereas software faults are
protected against by software redundancy.

Hardware redundancy is provided by incorporating extra hardware into the design to either detect or override the
effects of a failed component. For example, instead of having a single processor, we can use two or three
processors, each performing the same function. By having two processors, we can detect the failure of a single
processor; by having three, we can use the majority output to override the wrong output of a single faulty
processor. This is an example of static hardware redundancy, the main objective of which is the immediate
masking of a failure. A different form of hardware redundancy is dynamic redundancy, where spare components
are activated upon the failure of a currently active component. A combination of static and dynamic redundancy
techniques is also possible, leading to hybrid hardware redundancy.
Hardware redundancy can thus range from a simple duplication to complicated structures that switch in spare
units when active ones become faulty. These forms of hardware redundancy incur high overheads, and their use
is therefore normally reserved for critical systems where such overheads can be justified. In particular, substantial
amounts of redundancy are required to protect against malicious faults.
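
The two static arrangements mentioned above, duplication with comparison (which can only detect a fault) and triple modular redundancy with majority voting (which masks it), can be sketched as follows. This is an illustrative software model, not from the original text; in real hardware the comparison and voting are done by dedicated circuits, and the function names are made up for the example.

```python
# Illustrative model of static hardware redundancy.

def duplex_detect(out_a, out_b):
    """Duplication with comparison: a single fault can be detected but not masked."""
    return out_a if out_a == out_b else None      # None means 'mismatch: fault detected'

def tmr_vote(out_a, out_b, out_c):
    """Triple modular redundancy: a majority vote masks one faulty output."""
    if out_a in (out_b, out_c):
        return out_a
    if out_b == out_c:
        return out_b
    return None                                   # no majority: more than one unit is faulty

# Suppose unit B produces a wrong value:
print(duplex_detect(42, 41))   # None -> fault detected, but the correct value is unknown
print(tmr_vote(42, 41, 42))    # 42   -> the faulty output is outvoted (masked)
```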

The best-known form of information redundancy is error detection and correction coding. Here, extra bits (called
check bits) are added to the original data bits so that an error in the data bits can be detected or even corrected.
The resulting error-detecting and error-correcting codes are widely used today in memory units and various
storage devices to protect against benign failures. Note that these error codes (like any other form of information
redundancy) require extra hardware to process the redundant data (the check bits).
Error-detecting and error-correcting codes are also used to protect data communicated over noisy channels, which
are channels that are subject to many transient failures. These channels can be either the communication links
among widely separated processors (e.g., the Internet) or among locally connected processors that form a local
network. If the code used for data communication is capable of only detecting the faults that have occurred (but
not correcting them), we can retransmit as necessary, thus employing time redundancy.
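
The simplest example of such a code is a single parity check bit, which detects (but cannot correct) any single-bit error; on detection, the receiver falls back on retransmission, i.e. time redundancy. The sketch below is a minimal illustration, not a description of any particular memory or link format.

```python
# Minimal even-parity example (illustrative; real memories and links use stronger codes).

def add_parity(data_bits):
    """Append one check bit so the total number of 1s is even."""
    return data_bits + [sum(data_bits) % 2]

def parity_ok(word):
    """True if the received word still contains an even number of 1s."""
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])      # -> [1, 0, 1, 1, 1]
received = sent.copy()
received[2] ^= 1                     # a transient fault flips one bit in transit

if not parity_ok(received):
    print("single-bit error detected -> request retransmission (time redundancy)")
```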

In addition to transient data communication failures due to noise, local and wide-area networks may experience
permanent link failures. These failures may disconnect one or more existing communication paths, resulting in a
longer communication delay between certain nodes in the network, a lower data bandwidth between certain node
pairs, or even a complete disconnection of certain nodes from the rest of the network. Redundant communication
links (i.e., hardware redundancy) can alleviate most of these problems.

Computing nodes can also exploit time redundancy through re-execution of the same program on the same
hardware. As before, time redundancy is effective mainly against transient faults. Because the majority of
hardware faults are transient, it is unlikely that the separate executions will experience the same fault.

Time redundancy can thus be used to detect transient faults in situations in which such faults may otherwise go
undetected. Time redundancy can also be used when other means for detecting errors are in place and the system
is capable of recovering from the effects of the fault and repeating the computation. Compared with the other
forms of redundancy, time redundancy has much lower hardware and software overhead but incurs a high performance penalty.
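
A minimal sketch of this idea, assuming the operation being protected can simply be called again with the same inputs (the function names are illustrative):

```python
# Time redundancy by re-execution (illustrative sketch).
# A transient fault is unlikely to corrupt two separate runs in the same way.

def run_twice_and_compare(compute, *args, extra_runs=2):
    first = compute(*args)
    second = compute(*args)
    if first == second:
        return first                  # the runs agree: accept the result
    # Disagreement suggests a transient fault hit one run: re-execute and take a majority.
    for _ in range(extra_runs):
        third = compute(*args)
        if third in (first, second):
            return third
    raise RuntimeError("results keep disagreeing: the fault may not be transient")

print(run_twice_and_compare(lambda x, y: x + y, 2, 3))   # 5
```

Note the performance penalty mentioned above: every protected computation costs at least two executions.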

Software redundancy is used mainly against software failures. It is a reasonable guess that every large piece of
software that has ever been produced has contained faults (bugs). Dealing with such faults can be expensive: one
way is to independently produce two or more versions of that software (preferably by disjoint teams of
programmers) in the hope that the different versions will not fail on the same input. The secondary version(s) can
be based on simpler and less accurate algorithms (and, consequently, less likely to have faults) to be used only
upon the failure of the primary software to produce acceptable results. Just as for hardware redundancy, the
multiple versions of the program can be executed either concurrently (requiring redundant hardware as well) or
sequentially (requiring extra time, i.e., time redundancy) upon a failure detection.
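
The sequential arrangement described above is essentially a recovery block: run the primary version, check its result with an acceptance test, and fall back to the simpler secondary version if the primary crashes or fails the test. The sketch below is illustrative; the names and the acceptance test are made up for the example.

```python
# Illustrative recovery-block style software redundancy (names are made up).

def run_with_fallback(primary, secondary, acceptance_test, *args):
    try:
        result = primary(*args)
        if acceptance_test(result):
            return result                 # the primary produced an acceptable result
    except Exception:
        pass                              # the primary crashed: fall through to the backup
    return secondary(*args)               # simpler, independently written version

# Example: a (deliberately) buggy primary sort and a trivial but correct secondary.
buggy_sort = lambda xs: xs                                      # pretend this has a bug
safe_sort = lambda xs: sorted(xs)                               # simpler independent version
is_sorted = lambda xs: all(a <= b for a, b in zip(xs, xs[1:]))

print(run_with_fallback(buggy_sort, safe_sort, is_sorted, [3, 1, 2]))   # [1, 2, 3]
```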

Techniques of Redundancy
The concept of redundancy implies the addition of information, resources, or time beyond what is needed for
normal system operation. The redundancy can take one of several forms, including hardware redundancy,
software redundancy, information redundancy, and time redundancy. The use of redundancy can provide
additional capabilities within a system. In fact, if fault tolerance or fault detection is required then some form of
redundancy is also required. But it must be understood that redundancy can have a significant impact on a system in areas such as performance, size, weight, power consumption, and reliability.

Hardware Redundancy
The physical replication of hardware is perhaps the most common form of redundancy used in systems. As
semiconductor components have become smaller and less expensive, the concept of hardware redundancy has
become more common and more practical. The costs of replicating hardware within a system are decreasing
simply because the costs of hardware are decreasing.
There are three basic forms of hardware redundancy. First, passive techniques use the concept of fault masking
to hide the occurrence of faults and prevent the faults from resulting in errors. Passive approaches are designed
to achieve fault tolerance without requiring any action on the part of the system or an operator. Passive techniques,
in their most basic form, do not provide for the detection of faults but simply mask the faults.
