
Computer Organization and Architecture Tutorial

Computer Organization and Architecture Tutorial provides in-depth knowledge of the internal working, structuring, and implementation of a computer system.

Our Computer Organization and Architecture Tutorial covers topics such as the difference between computer architecture and organization, the functional units of a digital system, computer registers, computer instructions, the design of the control unit, logic gates, Boolean algebra, combinational circuits, flip-flops, integrated circuits, decoders, encoders, shift registers, and register transfer.

What is Computer Architecture and Organization?

In general terms, the architecture of a computer system can be considered as a catalogue of tools or attributes that are visible to the user, such as the instruction set, the number of bits used for data, and the addressing techniques. Organization of a computer system, on the other hand, defines the way the system is structured so that all those catalogued tools can be used. The significant components of computer organization are the CPU, the ALU, memory, and memory organization.

Computer Architecture VS Computer Organization

Computer Architecture: Computer Architecture is concerned with the way hardware components are connected together to form a computer system.
Computer Organization: Computer Organization is concerned with the structure and behaviour of a computer system as seen by the user.

Computer Architecture: It acts as the interface between hardware and software.
Computer Organization: It deals with the components of a connection in a system.

Computer Architecture: Computer Architecture helps us to understand the functionalities of a system.
Computer Organization: Computer Organization tells us how exactly all the units in the system are arranged and interconnected.

Computer Architecture: A programmer can view architecture in terms of instructions, addressing modes and registers.
Computer Organization: Organization expresses the realization of the architecture.

Computer Architecture: While designing a computer system, architecture is considered first.
Computer Organization: An organization is done on the basis of the architecture.

Computer Architecture: Computer Architecture deals with high-level design issues.
Computer Organization: Computer Organization deals with low-level design issues.

Computer Architecture: Architecture involves logic (instruction sets, addressing modes, data types, cache optimization).
Computer Organization: Organization involves physical components (circuit design, adders, signals, peripherals).

Evolution of Computing Devices

ENIAC (Electronic Numerical Integrator and Computer) was the first computing
system designed in the early 1940s. It consisted of about 18,000 electronic switches
called vacuum tubes, arranged in 42 panels, each 9' x 2' x 1', organized in a U-shape around the
perimeter of a room with forced-air cooling.
o Atanasoff-Berry Computer (ABC) design was known as the first digital
electronic computer (though not programmable). It was designed and built
by John Vincent Atanasoff and his assistant, Clifford E. Berry in 1937.

o In 1941, Z3 was invented by German inventor Konrad Zuse. It was the first
working programmable, fully automatic computing machine.

o Transistors were invented in 1947 at Bell Laboratories. They were a fraction of the
size of vacuum tubes and consumed less power, but complex circuits built from
individual transistors were still not easy to handle.

o Jack Kilby and Robert Noyce independently invented the Integrated Circuit at around
the same time; Noyce filed a patent for it in July 1959.

o In 1968, Robert Noyce co-founded Intel, which is still a global market leader in IC
manufacturing, research, and development.

o In 1983, Apple's Lisa was launched as the first commercially sold personal computer with a
graphical user interface (GUI); it ran on the Motorola 68000 and had dual floppy disk
drives, a 5 MB hard drive, and 1 MB of RAM.

o In 1990, Apple released the Macintosh Portable; it was heavy, weighing 7.3 kg (16
lb), and extremely expensive. It was not met with great success and was
discontinued only two years later.

o In 1990, Intel introduced the Touchstone Delta supercomputer, which had 512
microprocessors. This technological advancement was significant, as it served as a
model for some of the fastest multiprocessor systems in the world.

Functional Units of Digital System

o A computer organization describes the functions and design of the various units of
a digital system.

o A general-purpose computer system is the best-known example of a digital


system. Other examples include telephone switching exchanges, digital
voltmeters, digital counters, electronic calculators and digital displays.

o Computer architecture deals with the specification of the instruction set and the
hardware units that implement the instructions.

o Computer hardware consists of electronic circuits, displays, magnetic and optic


storage media and also the communication facilities.

o Functional units are the parts of the CPU (Central Processing Unit) that perform the
operations and calculations called for by the computer program.

o A computer consists of the following main units: an input unit, the central
processing unit, a memory unit, an arithmetic and logic unit, a control unit, and an
output unit.

Input unit

o Input units are used by the computer to read data. The most commonly used
input devices are the keyboard, mouse, joystick, trackball, and microphone.

o However, the most well-known input device is a keyboard. Whenever a key is


pressed, the corresponding letter or digit is automatically translated into its
corresponding binary code and transmitted over a cable to either the memory or
the processor.

Central processing unit

o The central processing unit, commonly known as the CPU, is the electronic
circuitry within a computer that carries out the instructions of a computer
program by performing the basic arithmetic, logical, control, and input/output (I/O)
operations specified by the instructions.

Memory unit

o The memory unit is the storage area in which running programs are kept, along
with the data needed by those programs.

o The Memory unit can be categorized in two ways namely, primary memory and
secondary memory.

o It enables the processor to access running applications and services that are
temporarily stored in specific memory locations.

o Primary storage is the fastest memory and operates at electronic speeds. Primary
memory contains a large number of semiconductor storage cells, each capable of
storing one bit of information. The word length of a computer is typically between 16 and 64 bits.

o It is a volatile form of memory, meaning that when the computer is shut
down, anything contained in RAM is lost.

o Cache memory is a kind of memory used to fetch data very quickly. It is tightly
coupled with the processor.

o The most common examples of primary memory are RAM and ROM.
o Secondary memory is used when a large amount of data and programs have to be
stored on a long-term basis.

o It is a non-volatile form of memory, meaning the data is stored permanently and
retained even when the computer is shut down.

o The most common examples of secondary memory are magnetic disks, magnetic
tapes, and optical disks.

Arithmetic & logical unit

o Most of the arithmetic and logical operations of a computer are executed in the
ALU (Arithmetic and Logic Unit) of the processor. It performs arithmetic
operations like addition, subtraction, multiplication, and division, as well as logical
operations like AND, OR, and NOT.

Control unit

o The control unit is a component of a computer's central processing unit that


coordinates the operation of the processor. It tells the computer's memory,
arithmetic/logic unit and input and output devices how to respond to a program's
instructions.

o The control unit is also known as the nerve center of a computer system.

o Let us consider an example of the addition of two operands by the instruction
Add LOCA, R0. This instruction adds the operand at memory location LOCA to the operand
in register R0 and places the sum in register R0. Internally, the instruction
performs several steps, as sketched below.
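As a rough illustration of those internal steps (not the exact micro-operation sequence of any particular machine), the following minimal Python sketch walks through the fetch, operand read, and add; the memory contents and register values are invented for the example.

```python
# Illustrative sketch of the internal steps behind "Add LOCA, R0".
# The step order follows the description above; the numeric values are made up.
LOCA = 100
memory = {0: "Add LOCA, R0", LOCA: 25}   # instruction at address 0, operand at LOCA
PC, R0 = 0, 10

IR = memory[PC]          # 1. Fetch the instruction into the instruction register
PC = PC + 1              # 2. Increment the program counter
operand = memory[LOCA]   # 3. Read the operand from memory location LOCA
R0 = R0 + operand        # 4. Add it to R0 and leave the sum in R0
print(R0)                # -> 35
```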

Output Unit

o The primary function of the output unit is to send the processed results to the
user. Output devices display information in a way that the user can understand.

o Output devices are pieces of equipment that are used to generate information or
any other response processed by the computer. These devices display information
that has been held or generated within a computer.

o The most common example of an output device is a monitor.

Basic Operational Concepts

o The primary function of a computer system is to execute a program, i.e., a sequence of
instructions. These instructions are stored in computer memory.

o These instructions are executed to process data which are already loaded in the
computer memory through some input devices.
o After processing the data, the result is either stored in the memory for further
reference, or it is sent to the outside world through some output port.

o To perform the execution of an instruction, in addition to the arithmetic logic unit,


and control unit, the processor contains a number of registers used for temporary
storage of data and some special function registers.

o The special function registers include the program counter (PC), the instruction register
(IR), the memory address register (MAR), and the memory data register (MDR).

General System Architecture

In Computer Architecture, the General System Architecture is divided into two major
classification units.

1. Store Program Control Concept

2. Flynn's Classification of Computers

Store Program Control Concept

The term Stored Program Control Concept refers to the storage of instructions in
computer memory to enable it to perform a variety of tasks in sequence or
intermittently.

The idea was introduced in the late 1940s by John von Neumann, who proposed that a
program be electronically stored in binary-number format in a memory device so that
instructions could be modified by the computer as determined by intermediate
computational results.

ENIAC (Electronic Numerical Integrator and Computer) was the first computing
system designed in the early 1940s. It was based on the Stored Program Concept, in which
the machine uses memory for processing data.

Stored Program Concept can be further classified in three basic ways:

1. Von-Neumann Model

2. General Purpose System

3. Parallel Processing
Flynn's Classification of Computers

M.J. Flynn proposed a classification for the organization of a computer system by the
number of instructions and data items that are manipulated simultaneously.

The sequence of instructions read from memory constitutes an instruction stream.

The operations performed on the data in the processor constitute a data stream.

Parallel processing may occur in the instruction stream, in the data stream, or both.


Flynn's classification divides computers into four major groups that are:

1. Single instruction stream, single data stream (SISD)

2. Single instruction stream, multiple data stream (SIMD)

3. Multiple instruction stream, single data stream (MISD)

4. Multiple instruction stream, multiple data stream (MIMD)

SISD

SISD stands for 'Single Instruction and Single Data Stream'. It represents the
organization of a single computer containing a control unit, a processor unit, and a
memory unit.

Instructions are executed sequentially, and the system may or may not have internal
parallel processing capabilities.

Most conventional computers have SISD architecture like the traditional Von-Neumann
computers.

Parallel processing, in this case, may be achieved by means of multiple functional units
or by pipeline processing.

Where CU = Control Unit, PE = Processing Element, and M = Memory.

Instructions are decoded by the Control Unit and then the Control Unit sends the
instructions to the processing units for execution.

Data Stream flows between the processors and memory bi-directionally.

Examples:

Older generation computers, minicomputers, and workstations

SIMD

SIMD stands for 'Single Instruction and Multiple Data Stream'. It represents an
organization that includes many processing units under the supervision of a common
control unit.

All processors receive the same instruction from the control unit but operate on different
items of data.

The shared memory unit must contain multiple modules so that it can communicate with
all the processors simultaneously.

SIMD is mainly dedicated to array processing machines. However, vector processors can
also be seen as a part of this group.

MISD

MISD stands for 'Multiple Instruction and Single Data stream'.


MISD structure is only of theoretical interest since no practical system has been
constructed using this organization.

In MISD, multiple processing units operate on one single data stream. Each processing
unit operates on the data independently via a separate instruction stream.

Where M = Memory Modules, CU = Control Unit, and P = Processor Units.

MIMD

MIMD stands for 'Multiple Instruction and Multiple Data Stream'.

In this organization, all processors in a parallel computer can execute different


instructions and operate on various data at the same time.

In MIMD, each processor has a separate program and an instruction stream is generated
from each program.

Where M = Memory Module, PE = Processing Element, and CU = Control Unit.

Computer Registers

Registers are a type of computer memory used to quickly accept, store, and transfer
data and instructions that are being used immediately by the CPU. The registers used by
the CPU are often termed as Processor registers.
A processor register may hold an instruction, a storage address, or any data (such as bit
sequence or individual characters).

The computer needs processor registers for manipulating data and a register for holding
a memory address. The register holding the memory location is used to calculate the
address of the next instruction after the execution of the current instruction is
completed.

Following is the list of some of the most common registers used in a basic computer:

Register Symbol Number of bits Function

Data register DR 16 Holds memory operand

Address register AR 12 Holds address for the memory

Accumulator AC 16 Processor register

Instruction register IR 16 Holds instruction code

Program counter PC 12 Holds address of the instruction

Temporary register TR 16 Holds temporary data

Input register INPR 8 Carries input character

Output register OUTR 8 Carries output character
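The register widths listed above can also be captured in a short sketch; the masking helper below is just one illustrative way to model a fixed-width register, not something defined by the tutorial.

```python
# Bit widths of the basic computer's registers, taken from the table above.
REGISTER_BITS = {
    "DR": 16, "AR": 12, "AC": 16, "IR": 16,
    "PC": 12, "TR": 16, "INPR": 8, "OUTR": 8,
}

def store(name: str, value: int) -> int:
    """Model storing a value: keep only as many bits as the register holds."""
    return value & ((1 << REGISTER_BITS[name]) - 1)

print(hex(store("AR", 0x1FFF)))   # 12-bit address register wraps to 0xfff
print(hex(store("DR", 0x1FFFF)))  # 16-bit data register wraps to 0xffff
```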

The following image shows the register and memory configuration for a basic computer.


Computer Instructions

Computer instructions are a set of machine language instructions that a particular


processor understands and executes. A computer performs tasks on the basis of the
instruction provided.

An instruction comprises groups of bits called fields. These fields include:

o The Operation code (Opcode) field which specifies the operation to be performed.
o The Address field which contains the location of the operand, i.e., register or
memory location.

o The Mode field which specifies how the operand will be located.

A basic computer has three instruction code formats which are:

1. Memory - reference instruction

2. Register - reference instruction

3. Input-Output instruction

Memory - reference instruction

In a memory-reference instruction, 12 bits of the instruction are used to specify an address and
one bit specifies the addressing mode 'I'.

Register - reference instruction

The Register-reference instructions are represented by the Opcode 111 with a 0 in the
leftmost bit (bit 15) of the instruction.

Note: The Operation code (Opcode) of an instruction refers to a group of bits that define
arithmetic and logic operations such as add, subtract, multiply, shift, and complement.

A Register-reference instruction specifies an operation on or a test of the AC


(Accumulator) register.

Input-Output instruction

Just like the Register-reference instruction, an Input-Output instruction does not need a
reference to memory and is recognized by the operation code 111 with a 1 in the
leftmost bit of the instruction. The remaining 12 bits are used to specify the type of the
input-output operation or test performed.
Note

o The three operation code bits in positions 12 through 14 should be equal to 111.
Otherwise, the instruction is a memory-reference type, and the bit in position 15
is taken as the addressing mode I.

o When the three operation code bits are equal to 111, the control unit inspects the bit
in position 15. If the bit is 0, the instruction is a register-reference type.
Otherwise, the instruction is an input-output type, with bit 15 equal to 1.
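The decoding rule in the note above can be mirrored in a short sketch. The 16-bit layout assumed here (bit 15 = I, bits 12-14 = opcode, bits 0-11 = address) is exactly the one described in this section.

```python
def classify(instr: int) -> str:
    """Classify a 16-bit basic-computer instruction word."""
    opcode = (instr >> 12) & 0b111   # bits 12 through 14
    i_bit = (instr >> 15) & 0b1      # bit 15
    if opcode != 0b111:
        # Memory-reference: bit 15 is the addressing mode I,
        # and bits 0 through 11 hold the operand address.
        return f"memory-reference (I={i_bit}, address={instr & 0xFFF:03X})"
    # Opcode 111: bit 15 distinguishes the two remaining types.
    return "register-reference" if i_bit == 0 else "input-output"

print(classify(0x7080))  # opcode 111, bit 15 = 0 -> register-reference
print(classify(0xF080))  # opcode 111, bit 15 = 1 -> input-output
print(classify(0x2123))  # opcode 010, I = 0      -> memory-reference
```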

Instruction Set Completeness

A set of instructions is said to be complete if the computer includes a sufficient number


of instructions in each of the following categories:

o Arithmetic, logical and shift instructions

o A set of instructions for moving information to and from memory and processor
registers.

o Instructions that control the program flow, together with instructions that check
status conditions.

o Input and Output instructions

Arithmetic, logic and shift instructions provide computational capabilities for processing
the type of data the user may wish to employ.

A huge amount of binary information is stored in the memory unit, but all computations
are done in processor registers. Therefore, one must possess the capability of moving
information between these two units.

Program control instructions such as branch instructions are used to change the sequence
in which the program is executed.

Input and Output instructions act as an interface between the computer and the user.
Programs and data must be transferred into memory, and the results of computations
must be transferred back to the user.

Design of Control Unit

The Control Unit is classified into two major categories:

1. Hardwired Control

2. Microprogrammed Control

Hardwired Control

The Hardwired Control organization involves the control logic to be implemented with
gates, flip-flops, decoders, and other digital circuits.

The following image shows the block diagram of a Hardwired Control organization.
o A Hard-wired Control consists of two decoders, a sequence counter, and a number
of logic gates.

o An instruction fetched from the memory unit is placed in the instruction register
(IR).

o The components of an instruction register include the I bit, the operation code, and
bits 0 through 11.

o The operation code in bits 12 through 14 is decoded with a 3 x 8 decoder.

o The outputs of the decoder are designated by the symbols D0 through D7.

o Bit 15 of the instruction is transferred to a flip-flop designated by the symbol I.

o Bits 0 through 11 of the instruction are applied to the control logic gates.

o The Sequence counter (SC) can count in binary from 0 through 15.
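A minimal software sketch of the decoding just described, assuming the 3 x 8 opcode decoder produces D0 through D7 and the sequence counter drives a 4 x 16 timing decoder producing T0 through T15 (the T-signal naming is an assumption for illustration).

```python
def decode_opcode(ir: int) -> list[int]:
    """3 x 8 decoder: bits 12-14 of IR activate exactly one of D0..D7."""
    opcode = (ir >> 12) & 0b111
    return [1 if i == opcode else 0 for i in range(8)]

def timing_signals(sc: int) -> list[int]:
    """4 x 16 decoder: the sequence counter activates one of T0..T15."""
    return [1 if i == (sc & 0xF) else 0 for i in range(16)]

ir = 0x2123                          # opcode 010 -> D2 active
print(decode_opcode(ir))             # [0, 0, 1, 0, 0, 0, 0, 0]
print(timing_signals(3).index(1))    # T3 is active when SC = 3
```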

Micro-programmed Control

The Microprogrammed Control organization is implemented by using the programming


approach.


In Microprogrammed Control, the micro-operations are performed by executing a


program consisting of micro-instructions.

The following image shows the block diagram of a Microprogrammed Control


organization.
o The Control memory address register specifies the address of the micro-
instruction.

o The Control memory is assumed to be a ROM, within which all control information
is permanently stored.

o The control register holds the microinstruction fetched from the memory.

o The micro-instruction contains a control word that specifies one or more micro-
operations for the data processor.

o While the micro-operations are being executed, the next address is computed in
the next address generator circuit and then transferred into the control address
register to read the next microinstruction.

o The next address generator is often referred to as a micro-program sequencer, as


it determines the address sequence that is read from control memory.
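The fetch-and-issue loop implied by these points can be sketched as below; the control-memory contents and the (control word, next address) field layout are invented purely for illustration.

```python
# Illustrative microprogram sequencer loop. Each control-memory word is modelled
# as a (control_word, next_address) pair -- a simplification of a real format.
control_memory = {
    0: ("FETCH", 1),
    1: ("DECODE", 2),
    2: ("EXECUTE", 0),   # wrap back to the fetch routine
}

car = 0                                              # control address register
for _ in range(6):                                   # run a few microinstruction cycles
    control_word, next_addr = control_memory[car]    # read the control memory (ROM)
    print(f"CAR={car}: issue {control_word}")        # micro-operations go to the datapath
    car = next_addr                                  # next-address generator reloads CAR
```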

Instruction Cycle

A program residing in the memory unit of a computer consists of a sequence of


instructions. These instructions are executed by the processor by going through a cycle
for each instruction.

In a basic computer, each instruction cycle consists of the following phases:

1. Fetch instruction from memory.

2. Decode the instruction.

3. Read the effective address from memory.

4. Execute the instruction.
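The four phases can be arranged into a simple loop, as in the sketch below; the tiny (operation, indirect flag, address) encoding is invented only to show the order of the phases.

```python
# Sketch of the instruction cycle: fetch, decode, read effective address, execute.
memory = {0: ("ADD", 0, 10), 1: ("HLT", 0, 0), 10: 5, 11: 10}
pc, ac = 0, 0

while True:
    op, indirect, addr = memory[pc]             # 1. Fetch the instruction from memory
    pc += 1                                     # 2. Decode it (here: unpack the fields)
    ea = memory[addr] if indirect else addr     # 3. Read the effective address
    if op == "ADD":                             # 4. Execute the instruction
        ac += memory[ea]
    elif op == "HLT":
        break

print(ac)   # -> 5
```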

Input-Output Configuration

In computer architecture, input-output devices act as an interface between the machine


and the user.

Instructions and data stored in the memory must come from some input device. The
results are displayed to the user through some output device.

The following block diagram shows the input-output configuration for a basic computer.
o The input-output terminals send and receive information.

o The information transferred is always eight bits of an alphanumeric code.

o The information generated through the keyboard is shifted into an input register
'INPR'.

o The information for the printer is stored in the output register 'OUTR'.

o Registers INPR and OUTR communicate with a communication interface serially


and with the AC in parallel.

o The transmitter interface receives information from the keyboard and transmits it
to INPR.

o The receiver interface receives information from OUTR and sends it to the printer
serially.

Design of a Basic Computer

A basic computer consists of the following hardware components.

1. A memory unit with 4096 words of 16 bits each

2. Registers: AC (Accumulator), DR (Data register), AR (Address register), IR


(Instruction register), PC (Program counter), TR (Temporary register), SC
(Sequence Counter), INPR (Input register), and OUTR (Output register).

3. Flip-Flops: I, S, E, R, IEN, FGI and FGO

Note: FGI and FGO are corresponding input and output flags which are considered as control
flip-flops.

4. Two decoders: a 3 x 8 operation decoder and a 4 x 16 timing decoder

5. A 16-bit common bus

6. Control Logic Gates

7. The Logic and Adder circuits connected to the input of AC.

Control Logic Gates

The control logic gates for a basic computer are the same as those used in the Hardwired
Control organization, and the block diagram is also similar to the one shown for that
organization.

Inputs for the Control Logic Circuit:

o The input for the Control Logic circuit comes from the two decoders, I flip-flop and
bits 0 through 11 of IR.

o The other inputs to the Control Logic are AC (bits 0 through 15), DR (bits 0
through 15), and the value of the seven flip-flops.

Outputs of the Control Logic Circuit:

o The control of the inputs of the nine registers

o The control of the read and write inputs of memory

o To set, clear, or complement the flip-flops

o S2, S1, and S0 to select a register for the bus

o The control of the AC adder and logic circuit.

Digital Computers

A Digital computer can be considered as a digital system that performs various


computational tasks.

The first electronic digital computer was developed in the late 1940s and was used
primarily for numerical computations.

By convention, the digital computers use the binary number system, which has two
digits: 0 and 1. A binary digit is called a bit.

A computer system is subdivided into two functional entities: Hardware and Software.


The hardware consists of all the electronic components and electromechanical devices
that comprise the physical entity of the device.

The software of the computer consists of the instructions and data that the computer
manipulates to perform various data-processing tasks.
o The Central Processing Unit (CPU) contains an arithmetic and logic unit for
manipulating data, a number of registers for storing data, and a control circuit for
fetching and executing instructions.

o The memory unit of a digital computer contains storage for instructions and data.

o The Random Access Memory (RAM) for real-time processing of the data.

o The Input-Output devices for generating inputs from the user and displaying the
final results to the user.

o The Input-Output devices connected to the computer include the keyboard,


mouse, terminals, magnetic disk drives, and other communication devices.

Logic Gates

o The logic gates are the main structural part of a digital system.

o A logic gate is a block of hardware that produces a signal of binary 1 or 0 when
its input logic requirements are satisfied.

o Each gate has a distinct graphic symbol, and its operation can be described by
means of algebraic expressions.

o The seven basic logic gates include: AND, OR, XOR, NOT, NAND, NOR, and XNOR.

o The relationship between the input-output binary variables for each gate can be
represented in tabular form by a truth table.

o Each gate has one or two binary input variables designated by A and B and one
binary output variable designated by x.

AND GATE:

The AND gate is an electronic circuit which gives a high output only if all its inputs are
high. The AND operation is represented by a dot (.) sign.
OR GATE:

The OR gate is an electronic circuit which gives a high output if one or more of its inputs
are high. The operation performed by an OR gate is represented by a plus (+) sign.

NOT GATE:

The NOT gate is an electronic circuit which produces an inverted version of the input at
its output. It is also known as an Inverter.

NAND GATE:

The NOT-AND (NAND) gate is equivalent to an AND gate followed by a NOT gate. The
NAND gate gives a high output if any of its inputs are low. It is represented
by an AND gate symbol with a small circle on the output; the small circle represents inversion.


NOR GATE:

The NOT-OR (NOR) gate is equivalent to an OR gate followed by a NOT gate. The NOR
gate gives a low output if any of its inputs are high. It is represented by an
OR gate symbol with a small circle on the output; the small circle represents inversion.
Exclusive-OR/ XOR GATE:

The 'Exclusive-OR' gate is a circuit which will give a high output if one of its inputs is high
but not both of them. The XOR operation is represented by an encircled plus sign.

EXCLUSIVE-NOR/Equivalence GATE:

The 'Exclusive-NOR' gate is a circuit that does the inverse operation to the XOR gate. It
will give a low output if one of its inputs is high but not both of them. The small circle
represents inversion.

Boolean algebra

Boolean algebra can be considered as an algebra that deals with binary variables and
logic operations. Boolean algebraic variables are designated by letters such as A, B, x,
and y. The basic operations performed are AND, OR, and complement.
The Boolean algebraic functions are mostly expressed with binary variables, logic
operation symbols, parentheses, and equal sign. For a given value of variables, the
Boolean function can be either 1 or 0. For instance, consider the Boolean function:

F = x + y'z

The logic diagram for the Boolean function F = x + y'z can be represented as:


o The Boolean function F = x + y'z is transformed from an algebraic expression into


a logic diagram composed of AND, OR, and inverter gates.

o Inverter at input 'y' generates its complement y'.

o There is an AND gate for the term y'z, and an OR gate is used to combine the two
terms (x and y'z).

o The variables of the function are taken to be the inputs of the circuit, and the
variable symbol of the function is taken as the output of the circuit.

Note: A truth table can represent the relationship between a function and its binary variables.
To represent a function in a truth table, we need a list of the 2^n combinations of n binary
variables.

The truth table for the Boolean function F = x + y'z can be represented as:
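Such a truth table can be generated mechanically. The short sketch below enumerates all eight combinations of x, y, and z and evaluates F = x + y'z.

```python
from itertools import product

# Enumerate the 2^3 combinations of x, y, z and evaluate F = x + y'z.
print(" x y z | F")
for x, y, z in product((0, 1), repeat=3):
    f = x | ((1 - y) & z)        # OR of x with the AND of y' and z
    print(f" {x} {y} {z} | {f}")
```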
Laws of Boolean algebra

The basic Laws of Boolean Algebra can be stated as follows:

o Commutative Law states that the interchanging of the order of operands in a


Boolean equation does not change its result. For example:

1. OR operator → A + B = B + A

2. AND operator → A * B = B * A

o The Associative Law of multiplication states that when the AND operation is applied to
three or more variables, the grouping of the operands does not affect the result. For example:
A * (B * C) = (A * B) * C

o The Distributive Law states that ORing a variable with the AND of two other variables
gives the same result as ANDing the two sums formed by ORing the variable with each
of them individually. For example:
A + BC = (A + B) (A + C).

o Annulment law:
A.0 = 0
A + 1 = 1

o Identity law:
A.1 = A
A + 0 = A

o Idempotent law:
A + A = A
A.A = A

o Complement law:
A + A' = 1
A.A'= 0

o Double negation law:


((A)')' = A
o Absorption law:
A.(A+B) = A
A + AB = A

De Morgan's Law, also known as De Morgan's theorem, is based on the concept of
duality. Duality means interchanging the operators and values in a function: replacing
0 with 1 and 1 with 0, the AND operator with the OR operator, and the OR operator with
the AND operator.

De Morgan stated two theorems, which help us in solving algebraic problems in
digital electronics. De Morgan's statements are:

1. "The negation of a conjunction is the disjunction of the negations", which means
that the complement of the product of two variables is equal to the sum of the
complements of the individual variables. For example, (A.B)' = A' + B'.

2. "The negation of a disjunction is the conjunction of the negations", which means
that the complement of the sum of two variables is equal to the product of the
complements of the individual variables. For example, (A + B)' = A'B'.

Simplification using Boolean algebra

Let us consider an example of a Boolean function: AB+A (B+C) + B (B+C)

The logic diagram for the Boolean function AB+A (B+C) + B (B+C) can be represented
as:

We will simplify this Boolean function on the basis of rules given by Boolean algebra.

AB + A (B+C) + B (B+C)

AB + AB + AC + BB + BC {Distributive law; A (B+C) = AB+AC, B (B+C) = BB+BC}

AB + AB + AC + B + BC {Idempotent law; BB = B}

AB + AC + B + BC {Idempotent law; AB+AB = AB}

AB + AC +B {Absorption law; B+BC = B}

B + AC {Absorption law; AB+B = B}

Hence, the simplified Boolean function will be B + AC.
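The simplification can be cross-checked by evaluating both expressions for every input combination, as in this small sketch.

```python
from itertools import product

# Verify that AB + A(B + C) + B(B + C) equals B + AC for all inputs.
for a, b, c in product((0, 1), repeat=3):
    original = (a & b) | (a & (b | c)) | (b & (b | c))
    simplified = b | (a & c)
    assert original == simplified
print("AB + A(B+C) + B(B+C) == B + AC for all 8 input combinations")
```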


The logic diagram for Boolean function B + AC can be represented as:

Map Simplification

The Map method involves a simple, straightforward procedure for simplifying Boolean
expressions.

Map simplification may be regarded as a pictorial arrangement of the truth table which
allows an easy interpretation for choosing the minimum number of terms needed to
express the function algebraically. The map method is also known as Karnaugh map or K-
map.

Each combination of the variables in a truth table is called a min-term.

Note: When expressed in a truth table a function of n variables will have 2^n min-terms,
equivalent to the 2^n binary numbers obtained from n bits.

There are four min-terms in a two-variable map. Therefore, the map consists of four
squares, one for each min-term. The 0's and 1's marked against the rows and columns
designate the values of variables x and y, respectively.

Two-variable map:

Representation of functions in the two-variable map:

Three variable map


There are eight min-terms in a three-variable map. Therefore, the map consists of eight
squares.

Three variable map:

o The map drawn in part (b) of the above image is marked with numbers in
each row and each column to show the relationship between the squares and the
three variables.

o Any two adjacent squares in the map differ by only one variable, which is primed
in one square and unprimed in the other. For example, m5 and m7 lie in the two
adjacent squares. Variable y is primed in m5 and unprimed in m7, whereas the
other two variables are the same in both the squares.

o From the postulates of Boolean algebra, it follows that the sum of two min-terms
in adjacent squares can be simplified to a single AND term consisting of only two
literals. For example, consider the sum of two adjacent squares say m5 and m7:
m5+m7 = xy'z+xyz= xz(y'+y)= xz.

Combinational Circuits

A combinational circuit comprises logic gates whose outputs at any time are
determined directly from the present combination of inputs, without any regard to
previous inputs.

A combinational circuit performs a specific information-processing operation fully


specified logically by a set of Boolean functions.

The basic components of a combinational circuit are: input variables, logic gates, and
output variables.

The 'n' input variables come from an external source whereas the 'm' output variables go
to an external destination. In many applications, the source or destination are storage
registers.

Design procedure of a Combinational Circuit


The design procedure of a combinational circuit involves the following steps:

1. The problem is stated.

2. The total number of available input variables and required output variables is
determined.

3. The input and output variables are allocated with letter symbols.

4. The exact truth table that defines the required relationships between inputs and
outputs is derived.

5. The simplified Boolean function is obtained from each output.

6. The logic diagram is drawn.

The combinational circuit that performs the addition of two bits is called a half adder and
the one that performs the addition of three bits (two significant bits and a previous carry)
is a full adder.

Half - Adder

A half-adder circuit needs two binary inputs and two binary outputs. The input variables
represent the augend and addend bits, whereas the output variables produce the sum and
carry. We can understand the function of a half-adder by formulating a truth table. The
truth table for a half-adder is:

o 'x' and 'y' are the two inputs, and S (Sum) and C (Carry) are the two outputs.

o The Carry output is '0' unless both the inputs are 1.

o 'S' represents the least significant bit of the sum.

The simplified sum of products (SOP) expressions are:

S = x'y+xy', C = xy
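A quick way to confirm these expressions is to simulate them, as in the sketch below; the output matches the truth table above.

```python
# Half adder: S = x'y + xy' (exclusive-OR), C = xy.
def half_adder(x: int, y: int) -> tuple[int, int]:
    s = ((1 - x) & y) | (x & (1 - y))   # sum bit
    c = x & y                           # carry bit
    return s, c

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", half_adder(x, y))
```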

The logic diagram for a half-adder circuit can be represented as:

Full - Adder
This circuit needs three binary inputs and two binary outputs. The truth table for a full-
adder is:

o Two of the input variable 'x' and 'y', represent the two significant bits to be added.

o The third input variable 'z', represents the carry from the previous lower
significant position.

o The outputs are designated by the symbol 'S' for sum and 'C' for carry.

o The eight rows under the input variables designate all possible combinations of
0's, and 1's that these variables may have.

o The input-output logical relationship of the full-adder circuit may be expressed in


two Boolean functions, one for each output variable.

o Each output Boolean function can be simplified by using a unique map method.

Maps for a full-adder:

The logic diagram for a full-adder circuit can be represented as:
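Since the simplified output functions themselves are not reproduced here, the sketch below uses the standard results of that map simplification as an assumption: S = x XOR y XOR z and C = xy + xz + yz.

```python
from itertools import product

# Full adder, using the standard simplified expressions (assumed, since the
# maps are not reproduced in the text): S = x ^ y ^ z, C = xy + xz + yz.
def full_adder(x: int, y: int, z: int) -> tuple[int, int]:
    s = x ^ y ^ z
    c = (x & y) | (x & z) | (y & z)
    return s, c

for x, y, z in product((0, 1), repeat=3):
    print(x, y, z, "->", full_adder(x, y, z))
```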

S-R Flip-flop/Basic Flip-Flop


Flip flops are an application of logic gates. A flip-flop circuit can remain in a binary state
indefinitely (as long as power is delivered to the circuit) until directed by an input signal
to switch states.

S-R flip-flop stands for SET-RESET flip-flops.

The SET-RESET flip-flop can be built from two cross-coupled NOR gates or from two NAND gates.

These flip-flops are also called S-R Latch.

The design of these flip flops also includes two inputs, called the SET [S] and RESET [R].
There are also two outputs, Q and Q'.

Clocked S-R Flip-Flop

The operation of a basic flip-flop can be modified by providing an additional control input
that determines when the state of the circuit is to be changed.

The limitation of an S-R flip-flop built with NOR or NAND gates is its invalid state. This
problem can be addressed with a clocked S-R flip-flop, whose outputs can change only when
the clock input permits it, regardless of momentary changes on the Set or Reset
inputs.

A clock pulse is given to the inputs of the AND Gate. If the value of the clock pulse is '0',
the outputs of both the AND Gates remain '0'.

D Flip-Flop
D flip-flop is a slight modification of clocked SR flip-flop.

From the above figure, you can see that the D input is connected to the S input and the
complement of the D input is connected to the R input.

When the value of CP is '1' (HIGH), the flip-flop moves to the SET state if D is '1'; if D is '0' (LOW),
the flip-flop switches to the CLEAR state.

J-K Flip-Flop

J-K flip-flop can be considered as a modification of the S-R flip-flop.

The main difference is that the intermediate state is more refined and precise than that
of an S-R flip-flop.

The characteristics of inputs 'J' and 'K' are the same as the 'S' and 'R' inputs of the S-R flip-flop.

J stands for SET, and 'K' stands for CLEAR.

When both the inputs J and K have a HIGH state, the flip-flop switches to the complement
state, so, for a value of Q = 1, it switches to Q=0, and for a value of Q = 0, it switches to
Q=1.

T Flip-Flop

The T flip-flop is a simpler version of the J-K flip-flop in which the J and K inputs are tied
together; it is therefore also called a single-input J-K flip-flop.
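The behaviour of the four flip-flops described above can be summarised as next-state functions, as in the sketch below (the invalid S = R = 1 case is simply reported as an error).

```python
# Next-state behaviour of the flip-flops described above (q is the present state).
def sr_next(q: int, s: int, r: int) -> int:
    if s and r:
        raise ValueError("S = R = 1 is the invalid state of the S-R flip-flop")
    return 1 if s else (0 if r else q)

def d_next(q: int, d: int) -> int:            # D flip-flop: output follows D on the clock pulse
    return d

def jk_next(q: int, j: int, k: int) -> int:   # J-K flip-flop: J = K = 1 complements the output
    if j and k:
        return 1 - q
    return 1 if j else (0 if k else q)

def t_next(q: int, t: int) -> int:            # T flip-flop: toggle when T = 1
    return 1 - q if t else q

print(jk_next(1, 1, 1))   # -> 0, the complement state
print(t_next(0, 1))       # -> 1
```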

Triggering of Flip-Flops

The state of a flip-flop is changed by a momentary change in the input signal. This
momentary change is known as a trigger, and the transition it causes is said to trigger
the flip-flop.

Pulses trigger clocked flip-flops.

A pulse starts from the initial value of '0', goes momentarily to '1', and after a short while
returns to its initial '0' value.

A clock pulse is either positive or negative.

A positive clock source remains at '0' during the interval between pulses and goes to 1
during the occurrence of a pulse.

The pulse goes through two signal transitions: from '0' to '1' and back from '1' to '0'.

Definition of clock pulse transition:

The positive transition is defined as the positive edge and the negative transition as the
negative edge.

Integrated Circuits
An integrated circuit (IC) is manufactured using silicon material and mounted in a
ceramic or plastic container (known as Chip). The basic components of an IC consist of
electronic circuits for the digital gates. The various gates are interconnected inside an IC
to form the required circuit.

The following categories can broadly classify an Integrated Circuit (IC):

SSI (Small Scale Integration Devices)

These devices contain several independent gates in a single package. The inputs
and outputs of the gates are connected directly to the pins of the package. The
number of logic gates is usually less than 10 and is limited by the number of pins
available on the IC.

MSI (Medium Scale Integration Devices)

These devices have a complexity of approximately 10 to 200 gates in a single
package. The basic components include decoders, adders, and registers.


LSI (Large Scale Integration Devices)

LSI devices contain about 200 to a few thousand gates in a single package. The basic
components of an LSI device include digital systems, such as processors, memory chips,
and programmable modules.

VLSI (Very Large Scale Integration Device)

These devices contain thousands of gates within a single package. The most
common example of a VLSI device is a complex microcomputer chip.

Digital integrated circuits are also classified by their specific circuit technology to which
they belong. The circuit technology is often referred to as Digital Logic Family. Each
technology has its own basic electronic circuit and functions to perform.

The most common component in each technology is either a NAND, a NOR, or an inverter
gate.

The most popular among the digital logic families include:

1. TTL (Transistor-transistor Logic)


2. ECL (Emitter-coupled logic)

3. MOS (Metal-oxide semiconductor)

4. CMOS (Complementary metal-oxide-semiconductor)

TTL (Transistor-transistor Logic)

TTL technology was an upgraded version of an earlier technology called DTL
(Diode-Transistor Logic). DTL used diodes and transistors to build the
basic NAND gate. TTL came into existence when these diodes were replaced with transistors
to improve circuit operation.

There are several variations of the TTL like high-speed TTL, low-power TTL, Schottky TTL,
low-power Schottky TTL, and advanced Schottky TTL.

The following circuit diagram shows a Standard TTL circuit and its configuration.

Features of TTL Family:

o The overall power supply voltage for TTL circuit is 5 volts, and the two logic levels
are approximately 0 and 3.5 volts.

o A TTL circuit can drive at most 10 gate inputs at its output (a fan-out of 10).

o The average propagation delay for a TTL circuit is about 9ns.

TTL Applications

o TTL is used as a switching device in driving lamps and relays.

o TTL is used in controller applications to provide 0 to 5 V logic levels.

o TTL families are mostly used in processors of minicomputers like DEC VAX.

o It is also used in printers and video display terminals.


ECL (Emitter-coupled Logic)

The ECL technology provides the highest-speed digital circuits in integrated form. An ECL
circuit is used in supercomputers and signal processors where high speed is essential.

The transistors in ECL gates operate in a non-saturated state, a condition that allows the
achievement of propagation delays of 1 to 2 nanoseconds.

Features of ECL Family

o The logic gates continuously draw current even in the inactive state. Hence power
consumption is more as compared to other logic families.

o ECL uses bipolar transistor logic where the transistors are not operated in the
saturation region.

o The average propagation delay for an ECL gate is about 0.5 to 2ns.

MOS (Metal-oxide semiconductor)

The MOS (Metal-oxide semiconductor) is a unipolar transistor that depends on the flow of
only one type of carrier, which may be electrons (n-channel) or holes (p-channel).

MOS technology is generally categorized in two basic forms:

1. A p-channel MOS is referred to as PMOS.

2. An n-channel MOS is referred to as NMOS.

PMOS

The operations performed by a PMOS logic family can be explained by considering a


PMOS NAND gate.

The following circuit diagram shows a two input PMOS NAND gate.
When a low logic level is applied to either A or B, the corresponding transistor turns on,
making a connection between the power supply and the output terminal, and the output is
raised to a logic high value. Otherwise, the output remains at logic low.

The pull-down resistor 'R' maintains the low logic unless a low logic is applied to either A
or B.

NMOS

The structure of NMOS logic is similar to that of PMOS. However, instead of using PMOS
transistors, here we will use NMOS transistors along with a pull-up resistor R.

The following circuit diagram shows a two input NMOS NAND gate.

As shown in the circuit diagram, an NMOS NAND gate has two NMOS transistors
connected in series from the output to the ground terminal.

A pull-up resistor is connected from the output terminal to the power supply.

When a high logic is applied to both inputs, both of the transistors get activated. This
makes a connection between the output terminal and ground.
CMOS (Complementary metal-oxide semiconductor)

The complementary MOS or CMOS technology uses PMOS and NMOS transistors
connected in a complementary manner in all circuits.

CMOS logic families are highly preferred in large-scale integrated circuits because of their
high noise immunity and low power dissipation.

The following circuit diagram shows a Standard CMOS circuit and its configuration.

Q1 and Q2 are the respective NMOS and PMOS transistors connected in a complementary
fashion.

Decoders

A Decoder can be described as a combinational circuit that converts binary information


from the 'n' coded inputs to a maximum of 2^n different outputs.

Note: A binary code of n bits is capable of representing up to 2^n distinct elements of the
coded information.

The most preferred or commonly used decoders are n-to-m decoders, where m<= 2^n.

An n-to-m decoder has n inputs and m outputs and is also referred to as an n * m


decoder.

The following image shows a 3-to-8 line decoder with three input variables, which are
decoded into eight outputs, each output representing one of the combinations of the three
binary input variables.

Three inverter gates provide the complements of the inputs, and the eight AND gates at the
output each generate a 1 for exactly one binary combination of the inputs.
The most common application of this decoder is binary-to-octal conversion.

The truth table for a 3-to-8 line decoder can be represented as:

x y z D0 D1 D2 D3 D4 D5 D6 D7

0 0 0 1 0 0 0 0 0 0 0

0 0 1 0 1 0 0 0 0 0 0

0 1 0 0 0 1 0 0 0 0 0

0 1 1 0 0 0 1 0 0 0 0

1 0 0 0 0 0 0 1 0 0 0

1 0 1 0 0 0 0 0 1 0 0

1 1 0 0 0 0 0 0 0 1 0

1 1 1 0 0 0 0 0 0 0 1
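A behavioural sketch of the 3-to-8 line decoder follows directly from the truth table above.

```python
def decoder_3to8(x: int, y: int, z: int) -> list[int]:
    """Exactly one of D0..D7 goes high, selected by the binary value xyz."""
    index = (x << 2) | (y << 1) | z
    return [1 if i == index else 0 for i in range(8)]

print(decoder_3to8(0, 0, 0))   # D0 high: [1, 0, 0, 0, 0, 0, 0, 0]
print(decoder_3to8(1, 1, 1))   # D7 high: [0, 0, 0, 0, 0, 0, 0, 1]
```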

Let us consider an example of a 2-to-4 line NAND gate decoder, which uses NAND gates
instead of AND gates in the central logic.

The following image shows a 2-to-4 line decoder with NAND gates.

The truth table for a 2-to-4 line decoder can be represented as:

E A1 A0 D0 D1 D2 D3

0 0 0 0 1 1 1

0 0 1 1 0 1 1
0 1 0 1 1 0 1

0 1 1 1 1 1 0

1 0 0 1 1 1 1

It is also possible to combine two or more decoders to form a large decoder whenever
needed. For instance, we can construct a 3 * 8 decoder by combining two 2 *4 decoders.

The following image shows a 3 * 8 decoder constructed with two 2 * 4 decoders.

Encoders

An encoder can also be described as a combinational circuit that performs the inverse
operation of a decoder. An encoder has a maximum of 2^n (or less) input lines and n
output lines.

In an Encoder, the output lines generate the binary code corresponding to the input
value.

The following image shows the block diagram of a 4 * 2 encoder with four input and two
output lines.

The truth table for a 4-to-2 line encoder can be represented as:


A3 A2 A1 A0 D1 D0

0 0 0 1 0 0

0 0 1 0 0 1

0 1 0 0 1 0

1 0 0 0 1 1

From the truth table, we can write the Boolean function for each output as:

D1 = A3 + A2
D0 = A3 + A1

The circuit diagram for a 4-to-2 line encoder can be represented by using two-input OR
gates.

The most common application of an encoder is the Octal-to-Binary encoder. Octal to


binary encoder takes eight input lines and generates three output lines.

The following image shows the block diagram of an 8 * 3 line encoder.

The truth table for an 8 * 3 line encoder can be represented as:

D7 D6 D5 D4 D3 D2 D1 D0 x y z

0 0 0 0 0 0 0 1 0 0 0

0 0 0 0 0 0 1 0 0 0 1
0 0 0 0 0 1 0 0 0 1 0

0 0 0 0 1 0 0 0 0 1 1

0 0 0 1 0 0 0 0 1 0 0

0 0 1 0 0 0 0 0 1 0 1

0 1 0 0 0 0 0 0 1 1 0

1 0 0 0 0 0 0 0 1 1 1

From the truth table, we can write the Boolean function for each output as:

x = D4 + D5 + D6 + D7
y = D2 + D3 + D6 + D7
z = D1 + D3 + D5 + D7

The circuit diagram for an 8 * 3 line encoder can be represented by using four-input OR
gates.
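The three output equations can be checked with a short sketch; it assumes exactly one input line is high at a time, as in the truth table above.

```python
def encoder_8to3(d: list[int]) -> tuple[int, int, int]:
    """Octal-to-binary encoder: x = D4+D5+D6+D7, y = D2+D3+D6+D7, z = D1+D3+D5+D7."""
    x = d[4] | d[5] | d[6] | d[7]
    y = d[2] | d[3] | d[6] | d[7]
    z = d[1] | d[3] | d[5] | d[7]
    return x, y, z

inputs = [0] * 8
inputs[5] = 1                  # only D5 is high
print(encoder_8to3(inputs))    # -> (1, 0, 1), i.e. binary 101
```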

De-Multiplexers

A De-multiplexer (De-Mux) can be described as a combinational circuit that performs the


reverse operation of a Multiplexer.

A De-multiplexer has a single input, 'n' selection lines and a maximum of 2^n outputs.

The following image shows the block diagram of a 1 * 4 De-multiplexer.

The function table for a 1 * 4 De - Multiplexer can be represented as:


S1 S0 y3 y2 y1 y0

0 0 0 0 0 I

0 1 0 0 I 0

1 0 0 I 0 0

1 1 I 0 0 0

From the above function table, we can write the Boolean function for each output as:

y3 = S1S0 I, y2 = S1S0' I, y1 = S1' S0 I, y0 = S1'S0' I

The above equations can be implemented using inverters and three-input AND gates.
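The same equations can also be exercised in a small sketch:

```python
def demux_1to4(i: int, s1: int, s0: int) -> list[int]:
    """Route the single input I to one of y0..y3 according to S1 S0."""
    y0 = (1 - s1) & (1 - s0) & i
    y1 = (1 - s1) & s0 & i
    y2 = s1 & (1 - s0) & i
    y3 = s1 & s0 & i
    return [y3, y2, y1, y0]    # listed as y3 y2 y1 y0, as in the function table above

print(demux_1to4(1, 1, 0))     # -> [0, 1, 0, 0]: the input appears on y2
```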

We can also implement higher order De-multiplexers using lower order De-multiplexers.
For instance, let us implement a 1 * 8 De-multiplexer using 1 * 2 De-multiplexer in the
first stage followed by two 1 * 4 De-multiplexers in the second stage.

The function table for a 1 * 8 De-multiplexer can be represented as:

S2 S1 S0 y7 y6 y5 y4 y3 y2 y1 y0

0 0 0 0 0 0 0 0 0 0 I

0 0 1 0 0 0 0 0 0 I 0

0 1 0 0 0 0 0 0 I 0 0

0 1 1 0 0 0 0 I 0 0 0

1 0 0 0 0 0 I 0 0 0 0

1 0 1 0 0 I 0 0 0 0 0

1 1 0 0 I 0 0 0 0 0 0
1 1 1 I 0 0 0 0 0 0 0

The block diagram for a 1 * 8 De-multiplexer can be represented as:

The Selection lines 'S1' and 'S0' are common for both of the 1 * 4 De-multiplexers.

Registers

o A Register is a fast memory used to accept, store, and transfer data and
instructions that are being used immediately by the CPU.

o A Register can also be considered as a group of flip-flops with each flip-flop


capable of storing one bit of information.

o A register with n flip-flops is capable of storing binary information of n-bits.

o The flip-flops hold the binary information, whereas the gates control when and how
the information is transferred into the register.

o Different types of registers are available commercially. A simple register consists


of only flip-flops with no external gates.

o The transfer of new data into a register is referred to as loading the register.

o The above figure shows a register constructed with four D-type flip-flops and a
common clock pulse-input.

o The clock pulse-input, CP, enables all flip-flops so that the information presently
available at the four inputs can be transferred into the four-bit register.

Shift - Registers

Shift - Registers are capable of shifting their binary information in one or both directions.
The logical configuration of a Shift - Register consists of a series of flip-flops, with the
output of one flip-flop connected to the input of the next flip-flop.
Note: To control the flow of shifts, i.e. the flow of binary information from one register to the
next, a common clock is connected to all of the registers connected in series. This clock
generates a clock pulse which initiates the shift from one stage to the next.

The following image shows the block diagram of a Shift - Register and its configuration.

The basic configuration of a Shift - Register contains the following points:

o The most general Shift - Register is often referred to as a Bidirectional Shift
Register with parallel load.

o A common clock is connected to each register in series to synchronize all


operations.

o A serial input line is associated with the left-most register, and a serial output line
is associated with the right-most register.

o A control state is provided that leaves the information in the register unchanged
even though clock pulses are applied continuously.
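A serial shift like the one described above can be sketched with a list of flip-flop states; the four-stage width matches the four-flip-flop register shown earlier, and the input bit stream is made up.

```python
# Sketch of a 4-stage serial-in, serial-out shift register.
stages = [0, 0, 0, 0]              # one entry per flip-flop, left to right

def clock_pulse(serial_in: int) -> int:
    """On each clock pulse, every stored bit moves one stage to the right."""
    serial_out = stages[-1]        # bit leaving the right-most stage
    stages[1:] = stages[:-1]       # shift the stored bits
    stages[0] = serial_in          # the left-most stage takes the serial input
    return serial_out

for bit in (1, 0, 1, 1):           # arbitrary input stream
    clock_pulse(bit)
print(stages)                      # -> [1, 1, 0, 1]
```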

Register Transfer Language

A digital computer system is an interconnection of digital modules such as
registers, decoders, arithmetic elements, and control logic.

These digital modules are interconnected with some common data and control paths to
form a complete digital system.

Moreover, digital modules are best defined by the registers and the operations that are
performed on the data stored in them.

The operations performed on the data stored in registers are called Micro-operations.


The internal hardware organization of a digital system is best defined by specifying:

o The set of registers and the flow of data between them.

o The sequence of micro-operations performed on the data which are stored in the
registers.

o The control paths that initiate the sequence of micro-operations.

The Register Transfer Language is the symbolic representation of notations used to


specify the sequence of micro-operations.
In a computer system, data transfer takes place between processor registers and
memory and between processor registers and input-output systems. These data transfer
can be represented by standard notations given below:

o Notations R0, R1, R2..., and so on represent processor registers.

o The addresses of memory locations are represented by names such as LOC,


PLACE, MEM, etc.

o Input-output registers are represented by names such as DATA IN, DATA OUT and
so on.

o The content of a register or memory location is denoted by placing square brackets
around the name of the register or memory location.

Register Transfer

The term Register Transfer refers to the availability of hardware logic circuits that can
perform a given micro-operation and transfer the result of the operation to the same or
another register.

Most of the standard notations used for specifying operations on various registers are
stated below.

o The memory address register is designated by MAR.

o Program Counter PC holds the next instruction's address.

o Instruction Register IR holds the instruction being executed.

o R1 (Processor Register).

o We can also indicate individual bits by placing them in parenthesis. For instance,
PC (8-15), R2 (5), etc.

o Data Transfer from one register to another register is represented in symbolic


form by means of replacement operator. For instance, the following statement
denotes a transfer of the data of register R1 into register R2.

R2 ← R1

o Typically, most of the users want the transfer to occur only in a predetermined
control condition. This can be shown by following if-then statement:
If (P=1) then (R2 ← R1); Here P is a control signal generated in the control section.

o It is more convenient to specify a control function (P) by separating the control variables from the register transfer operation. For instance, the following statement defines the data transfer operation under a specific control function (P).

1. P: R2 ← R1

The following image shows the block diagram that depicts the transfer of data from R1 to
R2.
Here, the letter 'n' indicates the number of bits for the register. The 'n' outputs of the
register R1 are connected to the 'n' inputs of register R2.


The control variable 'P' activates the load input of register R2, so the contents of R1 are transferred into R2 on the next clock pulse while P = 1.
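To tie the notation together, here is a minimal Python sketch of the conditional transfer P: R2 ← R1. It is purely illustrative; in hardware the transfer happens through the load input and clock described above.

# Hypothetical sketch of the conditional register transfer P: R2 <- R1.
R1, R2 = 0b1011, 0b0000   # example register contents
P = 1                      # control signal generated in the control section

if P == 1:                 # transfer occurs only when the control function is 1
    R2 = R1                # replacement operator: contents of R1 copied into R2

print(bin(R2))             # 0b1011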

Bus and Memory Transfers

A digital system is composed of many registers, and paths must be provided to transfer information from one register to another. The number of wires connecting all of the registers will be excessive if separate lines are used between each register and all other registers in the system.

A bus structure, on the other hand, is more efficient for transferring information between
registers in a multi-register configuration system.

A bus consists of a set of common lines, one for each bit of a register, through which binary information is transferred one register at a time. Control signals determine which register is selected by the bus during a particular register transfer.

The following block diagram shows a bus system for four registers. It is constructed with the help of four 4 * 1 multiplexers, each having four data inputs (0 through 3) and two selection inputs (S1 and S0).


We have used labels to make it more convenient for you to understand the input-output
configuration of a Bus system for four registers. For instance, output 1 of register A is
connected to input 0 of MUX1.
The two selection lines S1 and S0 are connected to the selection inputs of all four multiplexers. The selection lines choose the four bits of one register and transfer them into the four-line common bus.

When both of the select lines are at low logic, i.e. S1S0 = 00, the 0 data inputs of all four multiplexers are selected and applied to the outputs that form the bus. This, in turn, causes the bus lines to receive the content of register A, since the outputs of this register are connected to the 0 data inputs of the multiplexers.

Similarly, when S1S0 = 01, register B is selected, and the bus lines will receive the
content provided by register B.

The following function table shows the register that is selected by the bus for each of the
four possible binary values of the Selection lines.
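The function table is not reproduced here, but from the description above it can be reconstructed as: S1S0 = 00 selects register A, 01 selects B, 10 selects C, and 11 selects D. As a rough illustration, the following Python sketch (all names are hypothetical) models this multiplexer-based bus: the select lines decode to one of the four registers, whose bits are placed on the common bus lines.

# Hypothetical sketch of a 4-register common bus built from 4 * 1 multiplexers.
# Each register is 4 bits wide; select lines S1 S0 choose which register
# drives the bus (00 -> A, 01 -> B, 10 -> C, 11 -> D).
registers = {
    "A": [1, 0, 1, 0],
    "B": [0, 1, 1, 0],
    "C": [1, 1, 1, 1],
    "D": [0, 0, 0, 1],
}

def bus_output(s1, s0):
    selected = ["A", "B", "C", "D"][(s1 << 1) | s0]   # decode the select lines
    # Multiplexer k places bit k of the selected register on bus line k.
    return registers[selected]

print(bus_output(0, 0))   # contents of A appear on the bus
print(bus_output(0, 1))   # contents of B appear on the bus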

A bus system can also be constructed using three-state gates instead of multiplexers.

A three-state gate is a digital circuit that exhibits three output states. Two of the states are signals equivalent to logic 1 and logic 0, as in a conventional gate, while the third state is a high-impedance state.

The three-state gate most commonly used in bus systems is the buffer gate.

The graphical symbol of a three-state buffer gate can be represented as:


The following diagram demonstrates the construction of a bus system with three-state
buffers.

o The outputs generated by the four buffers are connected to form a single bus line.

o Only one buffer can be in the active state at any given time.

o The control inputs to the buffers determine which of the four normal inputs will
communicate with the bus line.

o A 2 * 4 decoder ensures that no more than one control input is active at any given
point of time.
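For comparison, here is a minimal Python sketch of the three-state-buffer version of a single bus line, under the assumption that a 2 * 4 decoder with an enable input drives the buffer controls (all names are illustrative).

# Hypothetical sketch of one bus line driven by four three-state buffers.
# A 2 * 4 decoder activates at most one buffer; the others stay in the
# high-impedance state, modelled here as None.
def decoder_2x4(s1, s0, enable):
    outputs = [0, 0, 0, 0]
    if enable:
        outputs[(s1 << 1) | s0] = 1
    return outputs

def bus_line(inputs, s1, s0, enable):
    controls = decoder_2x4(s1, s0, enable)
    active = [bit for bit, ctl in zip(inputs, controls) if ctl == 1]
    return active[0] if active else None   # None = high impedance (no driver)

print(bus_line([1, 0, 1, 0], s1=1, s0=0, enable=1))   # buffer 2 drives the line -> 1
print(bus_line([1, 0, 1, 0], s1=1, s0=0, enable=0))   # decoder disabled -> high impedance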

Memory Transfer

Most of the standard notations used for specifying operations on memory transfer are
stated below.

o The transfer of information from a memory unit to the user end is called
a Read operation.

o The transfer of new information to be stored in the memory is called a Write operation.

o A memory word is designated by the letter M.

o We must specify the address of memory word while writing the memory transfer
operations.

o The address register is designated by AR and the data register by DR.

o Thus, a read operation can be stated as:


1. Read: DR ← M [AR]

o The Read statement causes a transfer of information into the data register (DR)
from the memory word (M) selected by the address register (AR).

o And the corresponding write operation can be stated as:

1. Write: M [AR] ← R1

o The Write statement causes a transfer of information from register R1 into the
memory word (M) selected by address register (AR).
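A minimal Python sketch of these two transfers, with the memory modelled as a simple list (variable names chosen purely for illustration):

# Hypothetical sketch of the Read and Write memory transfers.
M = [0] * 16          # a tiny 16-word memory
AR = 5                # address register
DR = 0                # data register
R1 = 99               # a processor register

# Write: M[AR] <- R1
M[AR] = R1

# Read:  DR <- M[AR]
DR = M[AR]

print(DR)             # 99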

Arithmetic Micro-operations

In general, the Arithmetic Micro-operations deal with the operations performed on numeric data stored in the registers.

The basic Arithmetic Micro-operations are classified in the following categories:

1. Addition

2. Subtraction

3. Increment

4. Decrement

5. Shift

Some additional Arithmetic Micro-operations are classified as:

1. Add with carry

2. Subtract with borrow

3. Transfer/Load, etc.

The following table shows the symbolic representation of various Arithmetic Micro-
operations.

Symbolic Representation    Description

R3 ← R1 + R2               The contents of R1 plus R2 are transferred to R3.

R3 ← R1 - R2               The contents of R1 minus R2 are transferred to R3.

R2 ← R2'                   Complement the contents of R2 (1's complement).

R2 ← R2' + 1               2's complement the contents of R2 (negate).

R3 ← R1 + R2' + 1          R1 plus the 2's complement of R2 (subtraction).

R1 ← R1 + 1                Increment the contents of R1 by one.

R1 ← R1 - 1                Decrement the contents of R1 by one.
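As a rough illustration (hypothetical, not part of the original tutorial), the following Python sketch carries out several of these micro-operations on 4-bit register values, masking every result to four bits to mimic fixed-width hardware.

# Hypothetical sketch of arithmetic micro-operations on 4-bit registers.
BITS = 4
MASK = (1 << BITS) - 1

def complement(r):            # R <- R'           (1's complement)
    return (~r) & MASK

def negate(r):                # R <- R' + 1       (2's complement)
    return (complement(r) + 1) & MASK

R1, R2 = 0b0110, 0b0011
print((R1 + R2) & MASK)               # R3 <- R1 + R2
print((R1 + negate(R2)) & MASK)       # R3 <- R1 + R2' + 1  (R1 - R2)
print((R1 + 1) & MASK)                # R1 <- R1 + 1
print((R1 - 1) & MASK)                # R1 <- R1 - 1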

Binary Adder

The Add micro-operation requires registers that can hold the data and the digital
components that can perform the arithmetic addition.

A Binary Adder is a digital circuit that produces the arithmetic sum of two binary numbers of any length.

A Binary Adder is constructed using full-adder circuits connected in series, with the
output carry from one full-adder connected to the input carry of the next full-adder.

The following block diagram shows the interconnections of four full-adder circuits to
provide a 4-bit binary adder.


o The augend bits (A) and the addend bits (B) are designated by subscript numbers
from right to left, with subscript '0' denoting the low-order bit.

o The carry inputs, C0 through C3, are connected in a chain through the full-adders. C4 is the resultant output carry generated by the last full-adder circuit.

o The output carry from each full-adder is connected to the input carry of the next-
high-order full-adder.

o The sum outputs (S0 to S3) generate the required arithmetic sum of the augend and addend bits.

o The n data bits for the A and B inputs come from different source registers. For instance, the data bits for the A input come from source register R1, and the data bits for the B input come from source register R2.

o The arithmetic sum of the data inputs of A and B can be transferred to a third
register or to one of the source registers (R1 or R2).
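A minimal Python sketch of this ripple-carry arrangement, with each stage modelled by a full_adder function (the names and the low-order-first bit ordering are illustrative assumptions):

# Hypothetical sketch of a 4-bit binary adder built from full-adder stages.
def full_adder(a, b, c_in):
    s = a ^ b ^ c_in                          # sum bit
    c_out = (a & b) | (c_in & (a ^ b))        # carry bit
    return s, c_out

def binary_adder(A, B, c0=0):
    """A and B are lists of bits, index 0 = low-order bit (A0, B0)."""
    carry, S = c0, []
    for a, b in zip(A, B):                    # carry ripples through the chain
        s, carry = full_adder(a, b, carry)
        S.append(s)
    return S, carry                           # sum bits S0..S3 and output carry C4

S, C4 = binary_adder([1, 0, 1, 0], [1, 1, 0, 0])   # 0101 + 0011 = 1000
print(S, C4)                                        # [0, 0, 0, 1] 0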

Binary Adder-Subtractor

The Subtraction micro-operation can be carried out by taking the 2's complement of the addend bits (B) and adding the result to the augend bits (A).
The Arithmetic micro-operations like addition and subtraction can be combined into one
common circuit by including an exclusive-OR gate with each full adder.

The block diagram for a 4-bit adder-subtractor circuit can be represented as:

o When the mode input (M) is at a low logic, i.e. '0', the circuit acts as an adder, and when the mode input is at a high logic, i.e. '1', the circuit acts as a subtractor.

o An exclusive-OR gate is connected in series with each B input; each gate receives the mode input M and one of the B inputs.

o When M is at a low logic, we have B ⊕ 0 = B. The full-adders receive the value of B, the input carry is 0, and the circuit performs A plus B.

o When M is at a high logic, we have B ⊕ 1 = B' and C0 = 1. The B inputs are complemented, and a 1 is added through the input carry. The circuit performs the operation A plus the 2's complement of B.
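The same idea can be sketched in Python: the mode input M is XORed with every B bit and is also fed into the input carry C0 (names and bit ordering are illustrative assumptions).

# Hypothetical sketch of the 4-bit adder-subtractor.
# M = 0 -> A + B, M = 1 -> A + B' + 1 = A - B.
def full_adder(a, b, c_in):
    s = a ^ b ^ c_in
    c_out = (a & b) | (c_in & (a ^ b))
    return s, c_out

def adder_subtractor(A, B, M):
    carry, S = M, []                             # mode M is fed into the input carry C0
    for a, b in zip(A, B):
        s, carry = full_adder(a, b ^ M, carry)   # XOR gate complements B when M = 1
        S.append(s)
    return S, carry

print(adder_subtractor([1, 0, 1, 0], [1, 1, 0, 0], M=0))   # 5 + 3 = 8 -> [0, 0, 0, 1]
print(adder_subtractor([1, 0, 1, 0], [1, 1, 0, 0], M=1))   # 5 - 3 = 2 -> [0, 1, 0, 0], carry 1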

Binary Incrementer

The increment micro-operation adds one to the binary value stored in a register. For instance, if a 4-bit register holds the binary value 0110, it becomes 0111 when incremented by one.

The increment micro-operation is best implemented by a 4-bit combinational circuit incrementer. A 4-bit combinational circuit incrementer can be represented by the following block diagram.

o A logic-1 is applied to one of the inputs of the least significant half-adder, and the other input is connected to the least significant bit of the number to be incremented.

o The output carry from one half-adder is connected to one of the inputs of the
next-higher-order half-adder.

o The binary incrementer circuit receives the four bits from A0 through A3, adds
one to it, and generates the incremented output in S0 through S3.

o The output carry C4 will be 1 only after incrementing binary 1111.
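A minimal Python sketch of this half-adder chain (names and the low-order-first bit ordering are illustrative assumptions):

# Hypothetical sketch of a 4-bit binary incrementer built from half-adders.
def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def incrementer(A):
    """A is a list of bits, index 0 = least significant bit (A0)."""
    carry, S = 1, []             # logic-1 applied to the least significant stage
    for a in A:
        s, carry = half_adder(a, carry)
        S.append(s)
    return S, carry              # the final carry is C4

print(incrementer([0, 1, 1, 0]))   # 0110 -> 0111: ([1, 1, 1, 0], 0)
print(incrementer([1, 1, 1, 1]))   # 1111 -> 0000 with C4 = 1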


Memory Hierarchy

A memory unit is an essential component in any digital computer since it is needed for
storing programs and data.

Typically, a memory unit can be classified into two categories:

1. The memory unit that establishes direct communication with the CPU is
called Main Memory. The main memory is often referred to as RAM (Random
Access Memory).

2. The memory units that provide backup storage are called Auxiliary Memory. For
instance, magnetic disks and magnetic tapes are the most commonly used
auxiliary memories.

Apart from this basic classification, the memory hierarchy consists of all the storage devices available in a computer system, ranging from the slow but high-capacity auxiliary memory to the relatively faster main memory.

The following image illustrates the components in a typical memory hierarchy.

Auxiliary Memory

Auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage in a computer system. Auxiliary memory provides storage for programs and data that are kept for long-term storage or when not in immediate use. The most common examples of auxiliary memories are magnetic tapes and magnetic disks.

A magnetic disk is a digital computer memory that uses a magnetization process to write, rewrite and access data. For example, hard drives, zip disks, and floppy disks.

Magnetic tape is a storage medium that allows for data archiving, collection, and backup
for different kinds of data.

Main Memory

The main memory in a computer system is often referred to as Random Access Memory (RAM). This memory unit communicates directly with the CPU and with auxiliary memory devices through an I/O processor.

The programs that are not currently required in the main memory are transferred into
auxiliary memory to provide space for currently used programs and data.

I/O Processor

The primary function of an I/O Processor is to manage the data transfers between
auxiliary memories and the main memory.

Cache Memory
The data or contents of the main memory that are used frequently by CPU are stored in
the cache memory so that the processor can easily access that data in a shorter time.
Whenever the CPU requires accessing memory, it first checks the required data into the
cache memory. If the data is found in the cache memory, it is read from the fast
memory. Otherwise, the CPU moves onto the main memory for the required data.

We will discuss each component of the memory hierarchy in more detail later in this
chapter.

Main Memory

The main memory acts as the central storage unit in a computer system. It is a relatively
large and fast memory which is used to store programs and data during the run time
operations.

The primary technology used for the main memory is based on semiconductor integrated
circuits. The integrated circuits for the main memory are classified into two major units.

1. RAM (Random Access Memory) integrated circuit chips

2. ROM (Read Only Memory) integrated circuit chips

RAM integrated circuit chips

The RAM integrated circuit chips are further classified into two possible operating
modes, static and dynamic.

The primary compositions of a static RAM are flip-flops that store the binary information.
The nature of the stored information is volatile, i.e. it remains valid as long as power is
applied to the system. The static RAM is easy to use and takes less time performing read
and write operations as compared to dynamic RAM.


The dynamic RAM stores the binary information in the form of electric charges applied to capacitors. The capacitors are provided inside the chip by MOS transistors. The dynamic RAM consumes less power and provides a larger storage capacity in a single memory chip.

RAM chips are available in a variety of sizes and are used as per the system requirement.
The following block diagram demonstrates the chip interconnection in a 128 * 8 RAM
chip.

o A 128 * 8 RAM chip has a memory capacity of 128 words of eight bits (one byte)
per word. This requires a 7-bit address and an 8-bit bidirectional data bus.

o The 8-bit bidirectional data bus allows the transfer of data either from memory to
CPU during a read operation or from CPU to memory during a write operation.
o The read and write inputs specify the memory operation, and the two chip select
(CS) control inputs are for enabling the chip only when the microprocessor selects
it.

o The bidirectional data bus is constructed using three-state buffers.

o The output generated by three-state buffers can be placed in one of the three
possible states which include a signal equivalent to logic 1, a signal equal to logic
0, or a high-impedance state.

Note: The logic 1 and 0 are standard digital signals whereas the high-impedance state
behaves like an open circuit, which means that the output does not carry a signal and has no
logic significance.

The following function table specifies the operations of a 128 * 8 RAM chip.

From the functional table, we can conclude that the unit is in operation only when CS1 =
1 and CS2 = 0. The bar on top of the second select variable indicates that this input is
enabled when it is equal to 0.
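As a rough illustration of the chip-select behaviour described above, the following hypothetical Python sketch models a 128 * 8 RAM chip; the exact control combinations of the real function table may differ from this simplification.

# Hypothetical sketch of the 128 x 8 RAM chip interface: the chip responds only
# when CS1 = 1 and CS2 = 0 (the barred select); otherwise its data bus stays in
# the high-impedance state (modelled as None).
class RamChip:
    def __init__(self, words=128):
        self.mem = [0] * words                  # 128 words of 8 bits each

    def access(self, cs1, cs2_bar, rd, wr, address, data_in=0):
        if not (cs1 == 1 and cs2_bar == 0):
            return None                         # chip not selected: high impedance
        if rd == 1:
            return self.mem[address]            # read operation
        if wr == 1:
            self.mem[address] = data_in & 0xFF  # write operation (8-bit word)
        return None

chip = RamChip()
chip.access(1, 0, rd=0, wr=1, address=42, data_in=0x5A)    # write 0x5A to word 42
print(chip.access(1, 0, rd=1, wr=0, address=42))            # 90 (0x5A)
print(chip.access(0, 0, rd=1, wr=0, address=42))            # None: chip disabled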

ROM integrated circuit

The primary component of the main memory is RAM integrated circuit chips, but a
portion of memory may be constructed with ROM chips.

A ROM memory is used for keeping programs and data that are permanently resident in
the computer.

Apart from the permanent storage of data, the ROM portion of main memory is needed
for storing an initial program called a bootstrap loader. The primary function of
the bootstrap loader program is to start the computer software operating when power
is turned on.

ROM chips are also available in a variety of sizes and are also used as per the system
requirement. The following block diagram demonstrates the chip interconnection in a
512 * 8 ROM chip.
o A ROM chip has a similar organization as a RAM chip. However, a ROM can only
perform read operation; the data bus can only operate in an output mode.

o The 9-bit address lines in the ROM chip specify any one of the 512 bytes stored in
it.

o The value for chip select 1 and chip select 2 must be 1 and 0 for the unit to
operate. Otherwise, the data bus is said to be in a high-impedance state.

Auxiliary Memory

An auxiliary memory is known as the lowest-cost, highest-capacity and slowest-access storage in a computer system. It is where programs and data are kept for long-term storage or when not in immediate use. The most common examples of auxiliary memories are magnetic tapes and magnetic disks.

Magnetic Disks

A magnetic disk is a type of memory constructed using a circular plate of metal or plastic
coated with magnetized materials. Usually, both sides of the disks are used to carry out
read/write operations. However, several disks may be stacked on one spindle with
read/write head available on each surface.

The following image shows the structural representation for a magnetic disk.

o The memory bits are stored in the magnetized surface in spots along the
concentric circles called tracks.

o The concentric circles (tracks) are commonly divided into sections called sectors.

Magnetic Tape

Magnetic tape is a storage medium that allows data archiving, collection, and backup for
different kinds of data. The magnetic tape is constructed using a plastic strip coated with
a magnetic recording medium.

The bits are recorded as magnetic spots on the tape along several tracks. Usually, seven
or nine bits are recorded simultaneously to form a character together with a parity bit.

Magnetic tape units can be halted, started to move forward or in reverse, or can be
rewound. However, they cannot be started or stopped fast enough between individual
characters. For this reason, information is recorded in blocks referred to as records.
Associative Memory

An associative memory can be considered as a memory unit whose stored data can be
identified for access by the content of the data itself rather than by an address or
memory location.

Associative memory is often referred to as Content Addressable Memory (CAM).

When a write operation is performed on associative memory, no address or memory location is given to the word. The memory itself is capable of finding an empty unused location to store the word.

On the other hand, when the word is to be read from an associative memory, the content
of the word, or part of the word, is specified. The words which match the specified
content are located by the memory and are marked for reading.

The following diagram shows the block representation of an Associative memory.

From the block diagram, we can say that an associative memory consists of a memory
array and logic for 'm' words with 'n' bits per word.

The functional registers like the argument register A and key register K each have n bits,
one for each bit of a word. The match register M consists of m bits, one for each memory
word.

The words which are kept in the memory are compared in parallel with the content of the
argument register.

The key register (K) provides a mask for choosing a particular field or key in the
argument word. If the key register contains a binary value of all 1's, then the entire
argument is compared with each memory word. Otherwise, only those bits in the
argument that have 1's in their corresponding position of the key register are compared.
Thus, the key provides a mask for identifying a piece of information which specifies how
the reference to memory is made.

The following diagram can represent the relation between the memory array and the
external registers in an associative memory.
The cells present inside the memory array are marked by the letter C with two
subscripts. The first subscript gives the word number and the second specifies the bit
position in the word. For instance, the cell Cij is the cell for bit j in word i.

A bit Aj in the argument register is compared with all the bits in column j of the array, provided that Kj = 1. This process is done for all columns j = 1, 2, ..., n.

If a match occurs between all the unmasked bits of the argument and the bits in word i,
the corresponding bit Mi in the match register is set to 1. If one or more unmasked bits of
the argument and the word do not match, Mi is cleared to 0.
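A minimal Python sketch of this masked-match operation (names are illustrative; a real CAM performs the comparison in parallel hardware rather than in a loop):

# Hypothetical sketch of the associative (content-addressable) match operation.
# Only argument bits whose key-register bit is 1 take part in the comparison.
def cam_match(words, argument, key):
    match = []
    for word in words:
        hit = all(a == w for a, w, k in zip(argument, word, key) if k == 1)
        match.append(1 if hit else 0)          # Mi = 1 when all unmasked bits agree
    return match

memory   = [[1, 0, 1, 1],
            [0, 0, 1, 1],
            [1, 1, 0, 0]]
argument = [1, 0, 1, 0]    # argument register A
key      = [1, 1, 1, 0]    # key register K: the last bit is masked out

print(cam_match(memory, argument, key))   # [1, 0, 0] -> only word 0 matches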

Cache Memory

The data or contents of the main memory that are used frequently by CPU are stored in
the cache memory so that the processor can easily access that data in a shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the
data is not found in cache memory, then the CPU moves into the main memory.

Cache memory is placed between the CPU and the main memory. The block diagram for
a cache memory can be represented as:

The cache is the fastest component in the memory hierarchy and approaches the speed
of CPU components.

The basic operation of a cache memory is as follows:

o When the CPU needs to access memory, the cache is examined. If the word is
found in the cache, it is read from the fast memory.

o If the word addressed by the CPU is not found in the cache, the main memory is
accessed to read the word.

o A block of words containing the one just accessed is then transferred from main memory to cache memory. The block size may vary from one word (the one just accessed) to about 16 words adjacent to the one just accessed.

o The performance of the cache memory is frequently measured in terms of a quantity called hit ratio.

o When the CPU refers to memory and finds the word in cache, it is said to produce
a hit.
o If the word is not found in the cache, it is in main memory and it counts as a miss.

o The ratio of the number of hits divided by the total CPU references to memory
(hits plus misses) is the hit ratio.
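For example, under the (hypothetical) assumption that 920 out of 1000 memory references are found in the cache, the hit ratio can be computed as follows.

# Hypothetical worked example of the hit ratio.
hits = 920
misses = 80
hit_ratio = hits / (hits + misses)   # total CPU references = hits + misses
print(hit_ratio)                     # 0.92, i.e. 92% of references are satisfied by the cache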

Parallel Processing

Parallel processing can be described as a class of techniques which enables the system
to achieve simultaneous data-processing tasks to increase the computational speed of a
computer system.

A parallel processing system can carry out simultaneous data-processing to achieve faster execution time. For instance, while an instruction is being processed in the ALU component of the CPU, the next instruction can be read from memory.

The primary purpose of parallel processing is to enhance the computer processing capability and increase its throughput, i.e. the amount of processing that can be accomplished during a given interval of time.

A parallel processing system can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously. The data can be distributed among the multiple functional units.

The following diagram shows one possible way of separating the execution unit into eight
functional units operating in parallel.

The operation performed in each functional unit is indicated in each block of the diagram:

o The adder and integer multiplier perform arithmetic operations on integer numbers.

o The floating-point operations are separated into three circuits operating in parallel.

o The logic, shift, and increment operations can be performed concurrently on different data. All units are independent of each other, so one number can be shifted while another number is being incremented.

Pipelining

The term Pipelining refers to a technique of decomposing a sequential process into sub-
operations, with each sub-operation being executed in a dedicated segment that
operates concurrently with all other segments.
The most important characteristic of a pipeline technique is that several computations
can be in progress in distinct segments at the same time. The overlapping of
computation is made possible by associating a register with each segment in the
pipeline. The registers provide isolation between each segment so that each can operate
on distinct data simultaneously.

The structure of a pipeline organization can be represented simply by including an input register for each segment followed by a combinational circuit.

Let us consider an example of a combined multiplication and addition operation to get a better understanding of the pipeline organization.


The combined multiplication and addition operation is done with a stream of numbers
such as:

Ai * Bi + Ci    for i = 1, 2, 3, ..., 7

The operation to be performed on the numbers is decomposed into sub-operations, with each sub-operation implemented in a segment within a pipeline.

The sub-operations performed in each segment of the pipeline are defined as:

Segment 1:    R1 ← Ai, R2 ← Bi          Input Ai and Bi
Segment 2:    R3 ← R1 * R2, R4 ← Ci     Multiply and input Ci
Segment 3:    R5 ← R3 + R4              Add Ci to the product

The following block diagram represents the combined as well as the sub-operations
performed in each segment of the pipeline.

Registers R1, R2, R3, and R4 hold the data and the combinational circuits operate in a
particular segment.

The output generated by the combinational circuit in a given segment is applied as the input to the register of the next segment. For instance, from the block diagram, we can see that the register R3 is used as one of the input registers for the combinational adder circuit.
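As a rough illustration, the following hypothetical Python sketch steps the three segments once per clock cycle, showing how a new operand pair can enter the pipeline while earlier ones are still being multiplied and added.

# Hypothetical sketch of the three-segment pipeline computing Ai * Bi + Ci.
def pipeline(A, B, C):
    R1 = R2 = R3 = R4 = R5 = None
    results = []
    for cycle in range(len(A) + 2):                 # extra cycles drain the pipeline
        # Segment 3: R5 <- R3 + R4
        if R3 is not None:
            R5 = R3 + R4
            results.append(R5)
        # Segment 2: R3 <- R1 * R2, R4 <- Ci
        if R1 is not None:
            R3, R4 = R1 * R2, C[cycle - 1]          # Ci matches the pair loaded last cycle
        else:
            R3 = R4 = None
        # Segment 1: R1 <- Ai, R2 <- Bi
        if cycle < len(A):
            R1, R2 = A[cycle], B[cycle]
        else:
            R1 = R2 = None
    return results

print(pipeline([1, 2, 3], [4, 5, 6], [7, 8, 9]))    # [11, 18, 27]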

In general, the pipeline organization is applicable for two areas of computer design which
includes:
1. Arithmetic Pipeline

2. Instruction Pipeline

We will discuss both of them in our later sections.

Arithmetic Pipeline

Arithmetic Pipelines are mostly used in high-speed computers. They are used to
implement floating-point operations, multiplication of fixed-point numbers, and similar
computations encountered in scientific problems.

To understand the concept of an arithmetic pipeline in a more convenient way, let us consider an example of a pipeline unit for floating-point addition and subtraction.

The inputs to the floating-point adder pipeline are two normalized floating-point binary
numbers defined as:

X = A * 2^a = 0.9504 * 10^3
Y = B * 2^b = 0.8200 * 10^2

where A and B are two fractions that represent the mantissas, and a and b are the exponents.

The combined operation of floating-point addition and subtraction is divided into four
segments. Each segment contains the corresponding suboperation to be performed in
the given pipeline. The suboperations that are shown in the four segments are:

1. Compare the exponents by subtraction.

2. Align the mantissas.

3. Add or subtract the mantissas.

4. Normalize the result.

We will discuss each suboperation in a more detailed manner later in this section.

The following block diagram represents the suboperations performed in each segment of
the pipeline.
1. Compare exponents by subtraction:

The exponents are compared by subtracting them to determine their difference. The
larger exponent is chosen as the exponent of the result.

The difference of the exponents, i.e., 3 - 2 = 1 determines how many times the mantissa
associated with the smaller exponent must be shifted to the right.

2. Align the mantissas:

The mantissa associated with the smaller exponent is shifted according to the difference
of exponents determined in segment one.

X = 0.9504 * 10^3
Y = 0.08200 * 10^3

3. Add mantissas:

The two mantissas are added in segment three.

Z = X + Y = 1.0324 * 10^3

4. Normalize the result:

After normalization, the result is written as:

Z = 0.1324 * 10^4
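Putting the four suboperations together, a hypothetical Python sketch of the computation on this example (using decimal mantissas purely to mirror the numbers above) might look like this.

# Hypothetical sketch of the four floating-point addition suboperations.
def fp_add(mant_x, exp_x, mant_y, exp_y):
    # Segment 1: compare exponents by subtraction; keep the larger one.
    diff = exp_x - exp_y
    exponent = max(exp_x, exp_y)
    # Segment 2: align the mantissa associated with the smaller exponent.
    if diff > 0:
        mant_y /= 10 ** diff
    else:
        mant_x /= 10 ** (-diff)
    # Segment 3: add the mantissas.
    mantissa = mant_x + mant_y
    # Segment 4: normalize the result (mantissa must be a fraction less than 1).
    while mantissa >= 1:
        mantissa /= 10
        exponent += 1
    return mantissa, exponent

print(fp_add(0.9504, 3, 0.8200, 2))    # approximately (0.10324, 4)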

Instruction Pipeline

Pipeline processing can occur not only in the data stream but in the instruction stream as
well.

Most digital computers with complex instructions require an instruction pipeline to carry out operations like fetching, decoding and executing instructions.

In general, the computer needs to process each instruction with the following sequence
of steps.

1. Fetch instruction from memory.

2. Decode the instruction.

3. Calculate the effective address.

4. Fetch the operands from memory.

5. Execute the instruction.

6. Store the result in the proper place.

Each step is executed in a particular segment, and there are times when different
segments may take different times to operate on the incoming information. Moreover,
there are times when two or more segments may require memory access at the same
time, causing one segment to wait until another is finished with the memory.


The organization of an instruction pipeline will be more efficient if the instruction cycle is
divided into segments of equal duration. One of the most common examples of this type
of organization is a Four-segment instruction pipeline.
A four-segment instruction pipeline combines two or more of these steps into a single segment. For instance, the decoding of the instruction can be combined with the calculation of the effective address into one segment.

The following block diagram shows a typical example of a four-segment instruction pipeline. The instruction cycle is completed in four segments.

Segment 1:

The instruction fetch segment can be implemented using a first-in, first-out (FIFO) buffer.

Segment 2:

The instruction fetched from memory is decoded in the second segment, and eventually,
the effective address is calculated in a separate arithmetic circuit.

Segment 3:

An operand from memory is fetched in the third segment.

Segment 4:

The instructions are finally executed in the last segment of the pipeline organization.
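As a rough illustration, the following hypothetical Python sketch prints a space-time table for the four segments, labelled FI (fetch instruction), DA (decode and calculate the effective address), FO (fetch operand) and EX (execute). Each instruction advances one segment per clock cycle, and a new instruction enters the pipeline every cycle.

# Hypothetical sketch of a four-segment instruction pipeline timing table.
SEGMENTS = ["FI", "DA", "FO", "EX"]

def timing_table(num_instructions):
    total_cycles = num_instructions + len(SEGMENTS) - 1
    for i in range(1, num_instructions + 1):
        row = []
        for cycle in range(1, total_cycles + 1):
            stage = cycle - i                      # which segment instruction i occupies
            row.append(SEGMENTS[stage] if 0 <= stage < len(SEGMENTS) else "--")
        print("Instruction", i, ":", " ".join(row))

timing_table(4)    # 4 instructions complete in 4 + 3 = 7 cycles instead of 16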
