
SEN 207

COMPUTER ORGANIZATION AND ARCHITECTURE


Computer organization and computer architecture together describe how computer systems are designed.
Computer Architecture covers the attributes of a system that are visible to the programmer, such as addressing techniques, instruction sets, and the number of bits used for data, all of which directly affect how a program executes. It defines the system in an abstract manner and answers the question: what does the system do?
Computer Organization is the way in which the system is structured: its operational units and the interconnections between them that achieve the architectural specification. It is the realization of the abstract model and answers the question: how is the system implemented?

DIFFERENCE BETWEEN COMPUTER ARCHITECTURE AND COMPUTER ORGANIZATION

1. Architecture describes what the computer does; Organization describes how it does it.
2. Computer Architecture deals with the functional behaviour of computer systems; Computer Organization deals with the structural relationships among their components.
3. Architecture deals with high-level design issues; Organization deals with low-level design issues.
4. Architecture indicates the hardware; Organization indicates its performance.
5. A programmer views the architecture as a set of instructions, addressing modes, and registers; the implementation of the architecture is called the organization.
6. When designing a computer, the architecture is fixed first; the organization is decided after the architecture.
7. Computer Architecture is also called Instruction Set Architecture (ISA); Computer Organization is frequently called microarchitecture.
8. Computer Architecture comprises logical functions such as instruction sets, registers, data types, and addressing modes; Computer Organization consists of physical units such as circuit designs, peripherals, and adders.
9. The architectural categories found in computer systems are: (1) Von Neumann architecture, (2) Harvard architecture, (3) Instruction Set Architecture, (4) Micro-architecture, and (5) System design. CPU organization, by contrast, is classified into three categories based on the number of address fields: (1) single-accumulator organization, (2) general-register organization, and (3) stack organization.
10. Architecture makes the computer's hardware visible; Organization describes how well the computer performs.
11. Architecture coordinates the hardware and software of the system; Organization handles how the segments of the system are connected.
12. The software developer is aware of the architecture; the organization escapes the software programmer's detection.
13. Examples of architectures: the x86 processors created by Intel and AMD, the SPARC processor created by Sun Microsystems and others, and the PowerPC created by Apple, IBM, and Motorola. Organizational qualities are hardware details invisible to the programmer, such as the interfacing of computers and peripherals, memory technologies, and control signals.

BASIC STRUCTURE OF COMPUTERS


A computer is an electronic device that accepts data, performs operations, displays results, and stores the data or results as needed. It is a combination of hardware and software resources that work together to provide various functionalities to the user. Hardware comprises the physical components of a computer, such as the processor, memory devices, monitor, and keyboard, while software is the set of programs or instructions that the hardware resources require in order to function properly.

Components of a Computer
There are basically three important components of a computer:
Input Unit
Central Processing Unit (CPU)
Output Unit
1. Input Unit:
The input unit consists of input devices that are attached to the computer. These devices take
input and convert it into binary language that the computer understands. Some of the common
input devices are keyboard, mouse, joystick, scanner etc.
The Input Unit is formed by attaching one or more input devices to a computer.
A user enters data and instructions through input devices such as a keyboard or mouse.
The input unit supplies this data to the processor for further processing.
2. Central Processing Unit:
Once the information is entered into the computer by the input device, the processor processes
it. The CPU is called the brain of the computer because it is the control center of the computer.
It first fetches instructions from memory and then interprets them so as to know what is to be
done. If required, data is fetched from memory or input device. Thereafter CPU executes or
performs the required computation, and then either stores the output or displays it on the output
device. The CPU has three main components, which are responsible for different functions:
Arithmetic Logic Unit (ALU), Control Unit (CU) and Memory registers

A. Arithmetic and Logic Unit (ALU): The ALU, as its name suggests performs mathematical
calculations and makes logical decisions. Arithmetic calculations include addition, subtraction,
multiplication, and division. Logical decisions involve the comparison of two data items to see
which one is larger or smaller or equal.
Arithmetic Logical Unit is the main component of the CPU
It is the fundamental building block of the CPU.
Arithmetic and Logical Unit is a digital circuit that is used to perform arithmetic and logical
operations.
B. Control Unit: The Control unit coordinates and controls the data flow in and out of the
CPU, and also controls all the operations of ALU, memory registers and also input/output units.
It is also responsible for carrying out all the instructions stored in the program. It decodes the
fetched instruction, interprets it and sends control signals to input/output devices until the
required operation is done properly by ALU and memory.
The Control Unit is a component of the central processing unit of a computer that directs the
operation of the processor.
It instructs the computer’s memory, arithmetic, and logic unit, and input and output devices on
how to respond to the processor’s instructions.
In order to execute the instructions, the components of a computer receive signals from the
control unit.
It is also called the central nervous system or brain of the computer.
C. Memory Registers: A register is a temporary unit of memory in the CPU. These are used
to store data that is directly used by the processor. Registers can be of different sizes (16-bit, 32-bit, 64-bit, and so on), and each register inside the CPU has a specific function, such as storing data, storing an instruction, or storing the address of a location in memory. The user
registers can be used by an assembly language programmer for storing operands, intermediate
results, etc. The accumulator (ACC) is the main register in the ALU and contains one of the
operands of an operation to be performed in the ALU.
Memory attached to the CPU is used for the storage of data and instructions and is called
internal memory.
The internal memory is divided into many storage locations, each of which can store data or
instructions. Each memory location is of the same size and has an address. With the help of the
address, the computer can read any memory location easily without having to search the entire
memory. When a program is executed, its data is copied to the internal memory and stored in
the memory till the end of the execution. The internal memory is also called the Primary
memory or Main memory. This memory is also called RAM, i.e., Random Access Memory.
The time of access of data is independent of its location in memory, therefore, this memory is
also called Random Access memory (RAM).

The Memory Unit is the primary storage of the computer. It stores both data and instructions.
Data and instructions are held in this unit during execution so that they are available whenever
the processor needs them.
3. Output Unit:
The output unit consists of output devices that are attached to the computer. It converts the
binary data coming from the CPU to human understandable form. The common output devices
are monitors, printers, plotters, etc.
The output unit displays or prints the processed data in a user-friendly format.
The output unit is formed by attaching the output devices of a computer.
The output unit accepts the information from the CPU and displays it in a user-readable form.
PERFORMANCE OF COMPUTER IN COMPUTER ORGANIZATION
In computer organization, performance refers to the speed and efficiency at which a computer
system can execute tasks and process data. A high-performing computer system is one that can
perform tasks quickly and efficiently while minimizing the amount of time and resources
required to complete these tasks.
There are several factors that can impact the performance of a computer system, including:
Processor speed: The speed of the processor, measured in GHz (gigahertz), determines how
quickly the computer can execute instructions and process data.
Memory: The amount and speed of the memory, including RAM (random access memory)
and cache memory, can impact how quickly data can be accessed and processed by the
computer.
Storage: The speed and capacity of the storage devices, including hard drives and solid-state
drives (SSDs), can impact the speed at which data can be stored and retrieved.
I/O devices: The speed and efficiency of input/output devices, such as keyboards, mice, and
displays, can impact the overall performance of the system.
Software optimization: The efficiency of the software running on the system, including
operating systems and applications, can impact how quickly tasks can be completed.
Improving the performance of a computer system typically involves optimizing one or more of
these factors to reduce the time and resources required to complete tasks. This can involve
upgrading hardware components, optimizing software, and using specialized performance-
tuning tools to identify and address bottlenecks in the system.
Computer performance is the amount of work accomplished by a computer system. The word
performance in computer performance means “How well is the computer doing the work it is
supposed to do?”. It basically depends on the response time, throughput, and execution time of
a computer system. Response time is the time from the start to completion of a task. This also
includes:

• Operating system overhead.


• Waiting for I/O and other processes
• Accessing disk and memory
• Time spent executing on the CPU or execution time.
Throughput is the total amount of work done in a given time.
CPU execution time is the total time a CPU spends computing on a given task; it excludes time
spent waiting for I/O or running other programs. This is also referred to simply as CPU time.
Performance is determined by execution time as performance is inversely proportional to
execution time.
Performance = (1 / Execution time)
And,
(Performance of A / Performance of B)
= (Execution Time of B / Execution Time of A)
If given that Processor A is faster than Processor B that means the execution time of A is less
than that of the execution time of B. Therefore, the performance of A is greater than that of the
performance of B.
Example – Machine A runs a program in 100 seconds, and Machine B runs the same program
in 125 seconds
(Performance of A / Performance of B)
= (Execution Time of B / Execution Time of A)
= 125 / 100 = 1.25
That means Machine A is 1.25 times faster than Machine B. And, the time to execute a given
program can be computed as:

Execution time (T) = CPU clock cycles x clock cycle time (t_cycle)

Since the clock cycle time t_cycle and the clock rate are reciprocals,

Execution time = CPU clock cycles / clock rate

The number of CPU clock cycles can be determined by:

CPU clock cycles = (Instructions / Program) x (Clock cycles / Instruction) = N_instr x CPI

Which gives:

Execution time = Instruction Count x CPI x clock cycle time
               = Instruction Count x CPI / clock rate

T = N_instr x CPI x t_cycle

where:
T        = execution time
N_instr  = number of instructions executed per program
CPI      = average number of clock cycles per instruction
t_cycle  = time taken by one clock cycle

Example: Calculate the execution time (T) for a program that executes 5 million instructions,
given the instruction mix in the table below, while the CPU runs at a frequency of 2 GHz.

Instruction class   Frequency_instr   CPI_instr
ALU                 50%               3
Load                20%               5
Store               10%               4
Branch              20%               3

Solution:

Calculate the average number of cycles per instruction:

CPI = sum over instruction classes of (Frequency_instr x CPI_instr)

CPI = (0.5 x 3) + (0.2 x 5) + (0.1 x 4) + (0.2 x 3)

CPI = 1.5 + 1 + 0.4 + 0.6 = 3.5

Clock rate = 2 GHz = 2 x 10^9 cycles/second

T = (5 x 10^6 x 3.5) / (2 x 10^9) = 8.75 x 10^-3 s = 8.75 ms
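
To make the arithmetic above concrete, here is a small Python sketch that computes the average CPI and execution time from an instruction mix. The mix and clock rate are taken from the example; the function names are illustrative, not part of any standard library.

    def average_cpi(mix):
        """Average cycles per instruction from a {class: (frequency, cpi)} mix."""
        return sum(freq * cpi for freq, cpi in mix.values())

    def execution_time(n_instr, cpi, clock_rate_hz):
        """T = N_instr * CPI / clock rate."""
        return n_instr * cpi / clock_rate_hz

    mix = {
        "ALU":    (0.50, 3),
        "Load":   (0.20, 5),
        "Store":  (0.10, 4),
        "Branch": (0.20, 3),
    }

    cpi = average_cpi(mix)                     # 3.5
    t = execution_time(5_000_000, cpi, 2e9)    # 0.00875 s
    print(f"CPI = {cpi}, T = {t * 1e3} ms")    # CPI = 3.5, T = 8.75 ms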

How to Improve Performance?


To improve performance, you can do one or more of the following:
▪ Decrease the CPI (clock cycles per instruction) by using improved hardware.
▪ Decrease the clock cycle time (i.e., increase the clock rate) by reducing propagation delays or by
using pipelining.
▪ Decrease the number of required cycles by improving the ISA or the compiler.
USES AND BENEFITS OF PERFORMANCE OF COMPUTER
Some of the key uses and benefits of a high-performing computer system include:
Increased productivity: A high-performing computer can help increase productivity by
reducing the time required to complete tasks, allowing users to complete more work in less
time.
Improved user experience: A fast and efficient computer system can provide a better user
experience, with smoother operation and fewer delays or interruptions.
Faster data processing: A high-performing computer can process data more quickly, enabling
faster access to critical information and insights.
Enhanced gaming and multimedia performance: High-performance computers are better
suited for gaming and multimedia applications, providing smoother and more immersive
experiences.
Better efficiency and cost savings: By optimizing the performance of a computer system, it
is possible to reduce the time and resources required to complete tasks, leading to better
efficiency and cost savings.

Amdahl’s Law
Amdahl's law is an expression used to find the maximum expected improvement to an overall
system when only part of the system is improved. It is often used in parallel computing to
predict the theoretical maximum speed up using multiple processors.
Speedup is defined as the ratio of the performance for the entire task using the enhancement to
the performance for the entire task without it; equivalently, it is the ratio of the execution time
for the entire task without the enhancement to the execution time with it.

If P_e is the performance for the entire task using the enhancement when possible,

P_w is the performance for the entire task without using the enhancement,

E_w is the execution time for the entire task without using the enhancement, and

E_e is the execution time for the entire task using the enhancement when possible, then:

Speedup = P_e / P_w, or equivalently

Speedup = E_w / E_e

Amdahl’s law uses two factors to find speedup from some enhancement:
Fraction enhanced – The fraction of the computation time in the original computer that can be
converted to take advantage of the enhancement. For example- if 10 seconds of the execution
time of a program that takes 40 seconds in total can use an enhancement, the fraction is 10/40.
This obtained value is Fraction Enhanced. Fraction enhanced is always less than 1.
Speedup enhanced – The improvement gained by the enhanced execution mode; that is, how
much faster the task would run if the enhanced mode were used for the entire program. For
example – If the enhanced mode takes, say 3 seconds for a portion of the program, while it is
6 seconds in the original mode, the improvement is 6/3. This value is Speedup enhanced.
Speedup Enhanced is always greater than 1.
The overall speedup is the ratio of the execution times:

Overall Speedup (S_all) = Old Execution Time / New Execution Time

S_all = 1 / ((1 - Fraction_enhanced) + (Fraction_enhanced / Speedup_enhanced))

As Speedup_enhanced grows very large, the overall speedup approaches its theoretical maximum:

Overall Speedup (max) = 1 / (1 - Fraction_enhanced)

Of course, this is just a theoretical (ideal) limit and is not achievable in real-life conditions.
Likewise, in the case where Fraction_enhanced = 1, the entire task is enhanced and the overall
speedup simply equals Speedup_enhanced.
Amdahl’s law is a principle that states that the maximum potential improvement to the
performance of a system is limited by the portion of the system that cannot be improved. In
other words, the performance improvement of a system as a whole is limited by its bottlenecks.
The law is often used to predict the potential performance improvement of a system when
adding more processors or improving the speed of individual processors. It is named after Gene
Amdahl, who first proposed it in 1967.
The formula for Amdahl’s law is:
S = 1 / ((1 - P) + (P / N))
Where:
S is the speedup of the whole system,
P is the proportion of the system (the fraction of execution time) that can be improved, and
N is the speedup factor applied to that portion (for example, the number of processors it is spread across).
For example, if the portion of a system that can be parallelized occupies 20% of the total
execution time, and we spread that portion across 4 processors, the speedup would be:
S = 1 / ((1 - 0.2) + (0.2 / 4))
S = 1 / (0.8 + 0.05)
S = 1 / 0.85
S ≈ 1.176
This means that the overall performance of the system would improve by about 17.6% with the
addition of the 4 processors.
It’s important to note that Amdahl’s law assumes that the rest of the system is able to fully
utilize the additional processors, which may not always be the case in practice.
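
The law is easy to express directly in code. Below is a minimal Python sketch of the formula, reproducing the example above; the function name amdahl_speedup is an illustrative choice.

    def amdahl_speedup(p, n):
        """Overall speedup when a fraction p of execution time is sped up by factor n."""
        return 1.0 / ((1.0 - p) + p / n)

    print(amdahl_speedup(0.2, 4))      # ~1.176: 20% of the work spread over 4 processors
    print(amdahl_speedup(0.2, 1e12))   # ~1.25: approaches the limit 1 / (1 - p) as n grows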
Advantages of Amdahl’s law:
✓ Provides a way to quantify the maximum potential speedup that can be achieved by
parallelizing a program, which can help guide decisions about hardware and software
design.
✓ Helps to identify the portions of a program that are not easily parallelizable, which can
guide efforts to optimize those portions of the code.
✓ Provides a framework for understanding the trade-offs between parallelization and
other forms of optimization, such as code optimization and algorithmic improvements.
Disadvantages of Amdahl’s law:
❖ Assumes that the portion of the program that cannot be parallelized is fixed, which may
not be the case in practice. For example, it is possible to optimize code to reduce the
portion of the program that cannot be parallelized, making Amdahl’s law less accurate.
❖ Assumes that all processors have the same performance characteristics, which may not
be the case in practice. For example, in a heterogeneous computing environment, some
processors may be faster than others, which can affect the potential speedup that can be
achieved.
❖ Does not take into account other factors that can affect the performance of parallel
programs, such as communication overhead and load balancing. These factors can
impact the actual speedup that is achieved in practice, which may be lower than the
theoretical maximum predicted by Amdahl’s law.
What Kinds of Problems Do We Solve with Amdahl’s Law?
Recall how speedup was defined above; the following worked examples apply that definition.
Example 1: Assume a microprocessor widely used for scientific applications, with both integer
and floating-point instructions. The floating-point instructions are enhanced to run 3 times
faster than before, while the integer instructions are unenhanced. If floating-point instructions
account for 20% of the program, find the overall speedup.
Solution:
P = 20% = 0.2, N = 3, S = ?
S = 1 / ((1 - P) + (P / N))
S = 1 / ((1 - 0.2) + (0.2 / 3))
S = 1 / (0.8 + 0.0667)
S = 1 / 0.8667
S ≈ 1.154
Example 2: Suppose a task makes extensive use of floating-point operations, with 40% of its
execution time consumed by them. If a new hardware design speeds up the floating-point
module by a factor of 4, what is the overall speedup?
Solution:
P = 40% = 0.4, N = 4, S = ?
S = 1 / ((1 - P) + (P / N))
S = 1 / ((1 - 0.4) + (0.4 / 4))
S = 1 / (0.6 + 0.1)
S = 1 / 0.7
S ≈ 1.429

Assignment: In an enhancement of a CPU design, the speed of the floating-point unit has been
increased by 20% and that of the fixed-point unit by 10%. What is the overall speedup achieved
if the ratio of the number of floating-point operations to the number of fixed-point operations is
2:3, and a floating-point operation took twice the time taken by a fixed-point operation in the
original design?
COMPUTER ARCHITECTURE AND ORGANIZATION
Computer Architecture is the design of computers, including their instruction sets, hardware
components, and system organization.
Computer Architecture deals with the functional behaviour of computer systems and the design
and implementation of their various parts, while Computer Organization deals with the
structural relationships and operational attributes that are linked together to realize the
architectural specification.
There are two principal types of computer architecture:
i. Von Neumann architecture (also called Princeton architecture)
ii. Harvard architecture
Von Neumann architecture is a digital computer architecture based on the stored-program
concept, in which program data and instruction data are stored in the same memory. This
architecture was described by the famous mathematician and physicist John von Neumann in 1945.

[Figure: Von Neumann architecture. A single memory holds both program data and instruction data; the CPU accesses both over a shared bus.]

ADVANTAGES OF VON NEUMANN ARCHITECTURE


✓ Requires less physical space than the Harvard architecture
✓ Handling just one memory block is simpler and easier to achieve
✓ Cheaper to implement than the Harvard architecture
DISADVANTAGE OF VON NEUMANN ARCHITECTURE
❖ Shared memory - a defective program can overwrite another in memory, causing it to
crash
❖ Memory leaks - some defective programs fail to release memory when they are
finished with it, which could cause the computer to crash due to insufficient memory
❖ Data bus speed - the CPU is much faster than the data bus, meaning it often sits idle
(Von Neumann bottleneck)
❖ Fetch rate - data and instructions share the same data bus, even though the rate at
which each needs to be fetched is often very different.

Harvard Architecture is the digital computer architecture whose design is based on the
concept that there are separate storage and separate buses (signal paths) for instruction and
data. It was basically developed to overcome the bottleneck of Von Neumann Architecture. The
main advantage of having separate buses for instruction and data is that the CPU can access
instructions and read/write data at the same time.

[Figure: Harvard architecture. Separate memories and separate buses for instructions and data.]

ADVANTAGE OF HARVARD ARCHITECTURE


✓ Simultaneous instruction and data access: utilizes separate memory and buses for
instructions and data.
✓ Reduced resource conflicts: The distinct memory units and buses for instructions and
data reduce the likelihood of pipeline stalls caused by resource conflicts.
✓ Independent Cache Memory optimization: enables independent caching of instructions
and data. This feature allows for more effective cache memory usage, as the likelihood
of cache misses is diminished, contributing to speed and performance improvements.
✓ Enhanced parallelism: With its separate memory units and buses, Harvard Architecture
promotes parallelism in processing instructions and data.
DISADVANTAGE OF HARVARD ARCHITECTURE
❖ Increased design complexity: The architecture necessitates separate memory units,
buses, and management mechanisms for instructions and data, increasing system
complexity and potentially leading to a larger chip size.
❖ Higher implementation cost: Due to the increased complexity of the design,
implementing the Harvard Architecture may entail higher manufacturing expenses
when compared to the von Neumann Architecture.
❖ Code and data sharing limitations: The separation of instructions and data memory can
create challenges when code and data need to be shared.
❖ While the Harvard Architecture offers superior performance and efficiency for specific
use cases, such as digital signal processing and embedded systems, it may not always
be the optimal choice for general-purpose computing applications.
Difference between Von Neumann and Harvard Architecture
1. Von Neumann is an older computer architecture based on the stored-program concept; Harvard is a more modern architecture based on the relay-based Harvard Mark I model.
2. Von Neumann uses the same physical memory for instructions and data; Harvard uses separate physical memories for instructions and data.
3. Von Neumann has a common bus for data and instruction transfer; Harvard uses separate buses for transferring data and instructions.
4. Under Von Neumann, two clock cycles are required to execute a single instruction; under Harvard, an instruction can be executed in a single cycle.
5. Von Neumann is cheaper; Harvard is costlier than Von Neumann.
6. Under Von Neumann, the CPU cannot fetch an instruction and read/write data at the same time; under Harvard, it can.
7. Von Neumann is used in personal computers and small computers; Harvard is used in microcontrollers and signal processing.

Flynn’s taxonomy is a classification scheme for computer architectures proposed by Michael
Flynn in 1966. The taxonomy is based on the number of instruction streams and data streams
that can be processed simultaneously by a computer architecture.
i. Single Instruction Stream, Single Data Stream (SISD): In a SISD architecture, there
is a single processor that executes a single instruction stream and operates on a
single data stream. This is the simplest type of computer architecture and is used in
most traditional computers.

[Figure: SISD organization]
ii. Single Instruction Stream, Multiple Data Stream (SIMD): In a SIMD architecture,
there is a single processor that executes the same instruction on multiple data
streams in parallel. This type of architecture is used in applications such as image
and signal processing.
[Figure: SIMD organization]

iii. Multiple Instruction Stream, Single Data Stream (MISD): In a MISD architecture,
multiple processors execute different instructions on the same data stream. This
type of architecture is not commonly used in practice, as it is difficult to find
applications that can be decomposed into independent instruction streams.

[Figure: MISD organization]

iv. Multiple Instruction Stream, Multiple Data Stream (MIMD): In a MIMD
architecture, multiple processors execute different instructions on different data
streams. This type of architecture is used in distributed computing, parallel
processing, and other high-performance computing applications.

[Figure: MIMD organization]
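
As a loose software analogy for the SIMD idea (one instruction applied to many data elements at once), NumPy applies a single vectorized operation across an entire array. This only illustrates the programming model; it is not a claim about how any particular CPU implements SIMD.

    import numpy as np

    # One "instruction" (multiply by 2) applied to many data elements at once,
    # the way a SIMD unit applies one operation across several lanes.
    data = np.array([1, 2, 3, 4, 5, 6, 7, 8])
    result = data * 2     # a single vectorized operation over all elements
    print(result)         # [ 2  4  6  8 10 12 14 16]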
CPU ORGANIZATION AND MICRO-ARCHITECTURAL LEVEL DESIGN
CPU Organization
What is a CPU?
A Central Processing Unit is the most important component of a computer system. A CPU is
hardware that performs data input/output, processing, and storage functions for a computer
system. A CPU can be installed into a CPU socket. These sockets are generally located on the
motherboard. CPU can perform various data processing operations. CPU can store data,
instructions, programs, and intermediate results.
History of CPU
Since 1823, when Baron Jons Jakob Berzelius discovered silicon, which is still the primary
component used in the manufacture of CPUs today, the history of the CPU has experienced
numerous significant turning points. The first transistor was created by John Bardeen, Walter
Brattain, and William Shockley in December 1947. In 1958, the first working integrated circuit
was built by Robert Noyce and Jack Kilby.
The Intel 4004, unveiled in 1971, was the company’s first microprocessor; Ted Hoff played a
key role in its design. Intel followed with the 8008 CPU in 1972, the 8086 in 1978, and the
8088 in June 1979. The Motorola 68000, a 16/32-bit processor, was also released in 1979, and
Sun unveiled the SPARC CPU in 1987. AMD unveiled the AM386 CPU series in March 1991.
In January 1999, Intel introduced the Celeron 366 MHz and 400 MHz processors. AMD
followed in April 2005 with its first dual-core processor, and Intel introduced the Core 2 Duo
processor in 2006. Intel released the first Core i5 desktop processor with four cores in
September 2009. In January 2010, Intel released further processors, including the Core 2 Quad
Q9500 and the first Core i3 and i5 mobile and desktop processors.
In June 2017, Intel released the Core i9 desktop processor, and it introduced its first Core i9
mobile processor in April 2018.
What Does a CPU Do?
The main function of a computer processor is to execute instructions and produce an output.
Fetch, decode, and execute are the fundamental steps of this cycle:
Fetch: the CPU first gets the instruction, i.e., the binary values passed from RAM to the CPU.
Decode: once the instruction has entered the CPU, it is decoded by the control logic so the CPU
knows which operation to perform.
Execute: after the decode step, the instruction is executed, typically using the ALU.
Store: after the execute step, the result is written back to a register or to memory.
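
Below is a minimal Python sketch of this cycle, using a made-up two-field instruction format (opcode, operand) and a single accumulator. The encoding and opcodes are invented for illustration and do not correspond to any real instruction set.

    # Toy fetch-decode-execute loop: each instruction is an (opcode, operand) pair.
    program = [("LOAD", 7), ("ADD", 5), ("STORE", 0), ("HALT", 0)]
    memory = [0] * 16          # tiny data memory
    acc, pc = 0, 0             # accumulator and program counter

    while True:
        opcode, operand = program[pc]   # fetch
        pc += 1
        if opcode == "LOAD":            # decode + execute
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "STORE":
            memory[operand] = acc       # store the result back to memory
        elif opcode == "HALT":
            break

    print(acc, memory[0])               # 12 12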
Types of CPU
We have three different types of CPUs:
Single-Core CPU: The oldest type of computer CPU is the single-core CPU. These CPUs were
common in the 1970s. A single-core CPU has only one core to perform operations, so it can
process only one operation at a time and is not well suited to multitasking.
Dual-Core CPU: A dual-core CPU contains a single integrated circuit with two cores. Each
core has its own cache and controller, and these work together as a single unit, allowing a
dual-core CPU to work faster than a single-core processor.
Quad-Core CPU: A quad-core CPU packages four independent cores (the equivalent of two
dual-core processors) within a single integrated circuit (IC) or chip. These cores read and
execute the instructions provided to the CPU, increasing the overall speed of programs without
the clock speed itself having to be boosted.
Different Parts of the CPU
The CPU consists of 3 major units, which are:

• Control Unit
• Memory or Storage Unit
• ALU (Arithmetic Logic Unit)
Control Unit
The Control Unit is the part of the computer’s central processing unit (CPU), which directs the
operation of the processor. It was included as part of the Von Neumann Architecture by John
von Neumann. It is the responsibility of the control unit to tell the computer’s memory,
arithmetic/logic unit, and input and output devices how to respond to the instructions that have
been sent to the processor. It fetches internal instructions of the programs from the main
memory to the processor instruction register, and based on this register contents, the control
unit generates a control signal that supervises the execution of these instructions. A control unit
works by receiving input information which it converts into control signals, which are then sent
to the central processor. The computer’s processor then tells the attached hardware what
operations to perform. The functions that a control unit performs are dependent on the type of
CPU because the architecture of the CPU varies from manufacturer to manufacturer.
Examples of devices that require a CU are:

• Central Processing Units (CPUs)


• Graphics Processing Units (GPUs)
Functions of the Control Unit
❖ It coordinates the sequence of data movements into, out of, and between a processor’s
many sub-units.
❖ It interprets instructions.
❖ It controls data flow inside the processor.
❖ It receives external instructions or commands which it converts to a sequence of
control signals.
❖ It controls many execution units (i.e. ALU, data buffers, and registers) contained
within a CPU.
❖ It also handles multiple tasks, such as fetching, decoding, execution handling, and
storing results.
Advantages of a Well-Designed Control Unit
✓ Efficient instruction execution: A well-designed control unit can execute instructions
more efficiently by optimizing the instruction pipeline and minimizing the number of
clock cycles required for each instruction.
✓ Improved performance: A well-designed control unit can improve the performance of
the CPU by increasing the clock speed, reducing the latency, and improving the
throughput.
✓ Support for complex instructions: A well-designed control unit can support complex
instructions that require multiple operations, reducing the number of instructions
required to execute a program.
✓ Improved reliability: A well-designed control unit can improve the reliability of the
CPU by detecting and correcting errors, such as memory errors and pipeline stalls.
✓ Lower power consumption: A well-designed control unit can reduce power
consumption by optimizing the use of resources, such as registers and memory, and
reducing the number of clock cycles required for each instruction.
✓ Better branch prediction: A well-designed control unit can improve branch prediction
accuracy, reducing the number of branch mispredictions and improving performance.
✓ Improved scalability: A well-designed control unit can improve the scalability of the
CPU, allowing it to handle larger and more complex workloads.
✓ Better support for parallelism: A well-designed control unit can better support
parallelism, allowing the CPU to execute multiple instructions simultaneously and
improve overall performance.
✓ Improved security: A well-designed control unit can improve the security of the CPU
by implementing security features such as address space layout randomization and data
execution prevention.
✓ Lower cost: A well-designed control unit can reduce the cost of the CPU by minimizing
the number of components required and improving manufacturing efficiency.
Disadvantages of a Poorly-Designed Control Unit
❖ Reduced performance: A poorly designed control unit can reduce the performance of
the CPU by introducing pipeline stalls, increasing the latency, and reducing the
throughput.
❖ Increased complexity: A poorly designed control unit can increase the complexity of
the CPU, making it harder to design, test, and maintain.
❖ Higher power consumption: A poorly designed control unit can increase power
consumption by inefficiently using resources, such as registers and memory, and
requiring more clock cycles for each instruction.
❖ Reduced reliability: A poorly designed control unit can reduce the reliability of the CPU
by introducing errors, such as memory errors and pipeline stalls.
❖ Limitations on instruction set: A poorly designed control unit may limit the instruction
set of the CPU, making it harder to execute complex instructions and limiting the
functionality of the CPU.
❖ Inefficient use of resources: A poorly designed control unit may inefficiently use
resources such as registers and memory, leading to wasted resources and reduced
performance.
❖ Limited scalability: A poorly designed control unit may limit the scalability of the CPU,
making it harder to handle larger and more complex workloads.
❖ Poor support for parallelism: A poorly designed control unit may limit the ability of the
CPU to support parallelism, reducing the overall performance of the system.
❖ Security vulnerabilities: A poorly designed control unit may introduce security
vulnerabilities, such as buffer overflows or code injection attacks.
❖ Higher cost: A poorly designed control unit may increase the cost of the CPU by
requiring additional components or increasing the manufacturing complexity.

Memory or Storage Unit


What is Computer Memory?
Computer memory is just like the human brain: it is used to store data/information and
instructions. It is a data storage unit or device where the data to be processed and the
instructions required for processing are stored. Both the input and the output can be stored here.
Characteristics of Computer Memory (primary memory)
It is faster than secondary memory.
It is a semiconductor memory.
It is usually volatile and serves as the main memory of the computer.
A computer system cannot run without primary memory.
How Does Computer Memory Work?
When you open a program, it is loaded from secondary memory into primary memory; for
example, a program might be moved from a solid-state drive (SSD) into RAM. Because primary
storage is accessed more quickly, the opened program can communicate with the computer’s
processor more quickly. Primary memory is readily accessible from temporary memory slots or
other storage sites.
Memory is volatile, which means that data is kept in it only temporarily: data saved in volatile
memory is automatically destroyed when the computing device is turned off. When you save a
file, it is sent to secondary memory for storage.
Various kinds of memory are available, and how memory operates depends on the type of
primary memory used, but primary memory is normally semiconductor-based. Semiconductor
memory is made up of integrated circuits (ICs) with silicon-based metal-oxide-semiconductor
(MOS) transistors.
Types of Computer Memory
In general, computer memory is of three types:
1. Primary memory
2. Secondary memory
3. Cache memory
1. Primary Memory
It is also known as the main memory of the computer system. It is used to store data and
programs or instructions during computer operations. It uses semiconductor technology and
hence is commonly called semiconductor memory. Primary memory is of two types:
RAM (Random Access Memory): It is a volatile memory. Volatile memory stores
information based on the power supply. If the power supply fails/is interrupted/stopped, all the
data and information on this memory will be lost. RAM is used for booting up or starting the
computer. It temporarily stores programs/data which has to be executed by the processor. RAM
is of two types:
SRAM (Static RAM): SRAM uses transistors, and its circuits are capable of retaining their
state as long as power is applied. This memory consists of a number of flip-flops, with each
flip-flop storing 1 bit. It has a shorter access time and is therefore faster.
DRAM (Dynamic RAM): DRAM uses capacitors and transistors and stores data as charge on
the capacitors. DRAM chips contain thousands of memory cells. The charge on each capacitor
must be refreshed every few milliseconds, which makes this memory slower than SRAM.
ROM (Read-Only Memory): This is non-volatile memory, meaning it retains information even
when the power supply fails or is interrupted. ROM is used to store the information needed to
operate the system. As the name read-only memory suggests, we can only read the programs
and data stored on it. It contains electronic fuses that can be programmed with specific
information, which is stored in the ROM in binary format. It is also known as permanent
memory. ROM is of four types:
MROM (Masked ROM): Hard-wired devices with a pre-programmed collection of data or
instructions were the first ROMs. Masked ROMs are a type of low-cost ROM that works in
this way.
PROM (Programmable Read-Only Memory): This read-only memory is modifiable once
by the user. The user purchases a blank PROM and uses a PROM program to put the required
contents into the PROM. Its content can’t be erased once written.
EPROM (Erasable Programmable Read Only Memory): EPROM is an extension to PROM
where you can erase the content of ROM by exposing it to Ultraviolet rays for nearly 40
minutes.
EEPROM (Electrically Erasable Programmable Read-Only Memory): Here the written
contents can be erased electrically. An EEPROM can be erased and reprogrammed about
10,000 times. Erasing and programming take very little time, roughly 4 to 10 ms
(milliseconds), and any area of an EEPROM can be wiped and reprogrammed selectively.
2. Secondary Memory
It is also known as auxiliary memory and backup memory. It is a non-volatile memory and
used to store a large amount of data or information. The data or information stored in secondary
memory is permanent, and it is slower than primary memory. A CPU cannot access secondary
memory directly. The data/information from the auxiliary memory is first transferred to the
main memory, and then the CPU can access it.
Characteristics of Secondary Memory
It is a slow memory but reusable.
It is a reliable and non-volatile memory.
It is cheaper than primary memory.
The storage capacity of secondary memory is large.
A computer system can run without secondary memory.
In secondary memory, data is stored permanently even when the power is off.
Types of Secondary Memory
1. Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin, magnetic
coating on it that is used for magnetic recording. Bits are recorded on tape as magnetic patches
called RECORDS that run along many tracks. Typically, 7 or 9 bits are recorded concurrently.
Each track has one read/write head, which allows data to be recorded and read as a sequence
of characters. It can be stopped, started moving forward or backward, or rewound.

2. Magnetic Disks: A magnetic disk is a circular metal or a plastic plate and these plates are
coated with magnetic material. The disc is used on both sides. Bits are stored in magnetized
surfaces in locations called tracks that run in concentric rings. Sectors are typically used to
break tracks into pieces.

Hard discs are discs that are permanently attached and cannot be removed by a single user.

3. Optical Disks: It’s a laser-based storage medium that can be written to and read. It is
reasonably priced and has a long lifespan. The optical disc can be taken out of the computer by
occasional users.
Types of Optical Disks
CD-ROM
CD-ROM stands for Compact Disc, Read-Only Memory.
Information is written to the disc by using a controlled laser beam to burn pits into the disc
surface.
It has a highly reflective surface, usually aluminium.
The diameter of the disc is 5.25 inches.
The track density is 16,000 tracks per inch.
The capacity of a CD-ROM is about 600 MB, with each sector storing 2048 bytes of data.
The data transfer rate is about 4800 KB/sec, and the access time is around 80 milliseconds.
WORM- (WRITE ONCE READ MANY)
A user can only write data once.
The information is written on the disc using a laser beam.
It is possible to read the written data as many times as desired.
They keep lasting records of information but access time is high.
It is possible to rewrite updated or new data to another part of the disc.
Data that has already been written cannot be changed.
Usual sizes are 5.25-inch or 3.5-inch diameter.
Typical capacities of a 5.25-inch disk are 650 MB, 5.2 GB, etc.
DVDs
The term “DVD” stands for “Digital Versatile/Video Disc,” and there are two sorts of DVDs:
DVDR (writable)
DVDRW (Re-Writable)
DVD-ROMs (Digital Versatile Discs): These are read-only discs that can be used in a variety
of ways and can store far more data than CD-ROMs. A thick polycarbonate plastic layer serves
as a foundation for the other layers, and the data is read optically.
DVD-R: DVD-R is a recordable optical disc that can be written just once, much like WORM.
DVD-ROM capacities range from 4.7 to 17 GB; the capacity of a 3.5-inch disc is 1.3 GB.
3. Cache Memory
It is a type of high-speed semiconductor memory that can help the CPU run faster. Between
the CPU and the main memory, it serves as a buffer. It is used to store the data and programs
that the CPU uses the most frequently.
Advantages of Cache Memory
It is faster than the main memory.
When compared to the main memory, it takes less time to access it.
It keeps the programs that can be run in a short amount of time.
It stores data in temporary use.
Disadvantages of Cache Memory
Because of the semiconductors used, it is very expensive.
The size of the cache (amount of data it can store) is usually small.
Arithmetic and Logical Unit (ALU)
An arithmetic unit, or ALU, enables computers to perform mathematical operations on binary
numbers. They can be found at the heart of every digital computer and are one of the most
important parts of a CPU (Central Processing Unit).
In its simplest form, an arithmetic unit can be thought of as a simple binary calculator -
performing binary addition or subtraction on two inputs (A & B) to output a result (to explore
more on how this works check out our note: Binary Addition with Full Adders).

As well as performing basic mathematical operations, the arithmetic unit may also output a
series of 'flags' that provide further information about the status of a result: if it is zero, if there
is a carryout, or if an overflow has occurred. This is important as it enables a computational
machine to perform more complex behaviors like conditional branching.
Modern computational machines, however, contain 'arithmetic units' which are far more
complex than the one described above. These units may perform additional basic mathematical
operations (multiply & divide) and bitwise operations (AND, OR, XOR et al). As such, they
are commonly referred to as an ALU (Arithmetic Logic Unit).
ALUs enable mathematical procedures to be performed in an optimized manner, and this can
significantly reduce the number of steps required to perform a particular calculation.
Today, most CPUs (Central Processing Units) contain ALUs that can perform operations on
32 or 64-bit binary numbers. However, AUs & ALUs which process much smaller numbers
also have their place in the history of computing.
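
A small Python sketch of a 4-bit ALU slice in this spirit: it performs add/subtract/AND/OR on two inputs and reports zero, carry, and overflow flags. The bit width, operation names, and flag layout are illustrative choices, not a description of any particular CPU.

    def alu(op, a, b, bits=4):
        """Tiny ALU model: returns (result, flags) for two unsigned bits-wide inputs."""
        mask = (1 << bits) - 1
        if op == "ADD":
            raw = a + b
        elif op == "SUB":
            raw = a + ((~b) & mask) + 1        # two's-complement subtraction
        elif op == "AND":
            raw = a & b
        elif op == "OR":
            raw = a | b
        else:
            raise ValueError(op)
        result = raw & mask
        # Signed overflow: the sign of the result disagrees with what the operand
        # signs predict (only meaningful for ADD/SUB).
        sign = 1 << (bits - 1)
        if op == "ADD":
            overflow = bool(~(a ^ b) & (a ^ result) & sign)
        elif op == "SUB":
            overflow = bool((a ^ b) & (a ^ result) & sign)
        else:
            overflow = False
        flags = {
            "zero": result == 0,
            "carry": bool(raw >> bits),        # carry out of the top bit
            "overflow": overflow,
        }
        return result, flags

    print(alu("ADD", 0b0111, 0b0001))  # 7 + 1 = 8: carry=False, overflow=True in 4-bit signed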
COMPUTER REGISTER

Registers are a type of computer memory used to quickly accept, store, and transfer data and
instructions that are being used immediately by the CPU. The registers used by the CPU are
often termed as Processor registers.

A processor register may hold an instruction, a storage address, or any data (such as bit
sequence or individual characters).
The computer needs processor registers for manipulating data and a register for holding a
memory address. The register holding the memory location is used to calculate the address of
the next instruction after the execution of the current instruction is completed.

Following is the list of some of the most common registers used in a basic computer:

Register              Symbol   Bits   Function
Data register         DR       16     Holds memory operand
Address register      AR       12     Holds address for the memory
Accumulator           AC       16     Processor register
Instruction register  IR       16     Holds instruction code
Program counter       PC       12     Holds address of the instruction
Temporary register    TR       16     Holds temporary data
Input register        INPR     8      Carries input character
Output register       OUTR     8      Carries output character
[Figure: the common registers in a basic computer and their connection to memory.]
The following explains the various computer registers and their functions:
Accumulator Register (AC)
The Accumulator Register is a general-purpose Register. The initial data to be processed, the
intermediate result, and the final result of the processing operation are all stored in this register.
If no specific address for the result operation is specified, the result of arithmetic operations is
transferred to AC. The number of bits in the accumulator register equals the number of bits per
word.
Address Register (AR)
The Address Register holds the address of the memory location or register where data is stored
or retrieved. Because it contains an address, the size of the Address Register equals the width of
a memory address and is therefore directly related to the size of the memory: if the memory has
a size of 2^n x m, an address is specified using n bits.
Data Register (DR)
The operand from memory is stored in the Data Register. When a direct or indirect addressing
operand is found, it is placed in the Data Register, and this value is then used as data by the
processor during its operation. It is the same size as a word in memory.
Instruction Register (IR)
The instruction is stored in the Instruction Register. The instruction register contains the
currently executed instruction. Because it includes instructions, the number of bits in the
Instruction Register equals the number of bits in the instruction, which is n bits for an n-bit
CPU.
Input Register (INPR)
Input Register is a register that stores the data from an input device. The computer's
alphanumeric code determines the size of the input register.
Program Counter (PC)
The Program Counter serves as a pointer to the memory location where the next instruction is
stored. The size of the PC therefore equals the width of a memory address.
Temporary Register (TR)
The Temporary Register is used to hold data while it is being processed. As Temporary
Register stores data, the number of bits it contains is the same as the number of bits in the data
word.
Output Register (OUTR)
The data that needs to be sent to an output device is stored in the Output Register. Its size is
determined by the alphanumeric code used by the computer.

COMPUTER BUS
A computer bus consists of a set of parallel conductors, which may be conventional wires,
copper tracks on a PRINTED CIRCUIT BOARD, or microscopic aluminium trails on the
surface of a silicon chip. Each wire carries just one bit, so the number of wires determines the
largest data WORD the bus can transmit: a bus with eight wires can carry only 8-bit data words
and hence defines the device as an 8-bit device.
The bus is a communication channel.
The characteristic of the bus is shared transmission media.
The limitation of a bus is only one transmission at a time.
A bus used to communicate between the major components of a computer is called a System
bus.

System bus contains 3 categories of lines used to provide the communication between the
CPU, memory and IO named as:
1. Address lines (AL)
2. Data lines (DL)
3. Control lines (CL)
1. Address Lines:
Used to carry the address to memory and IO.
Unidirectional.
Based on the width of an address bus we can determine the capacity of a main memory
2. Data Lines:
Used to carry the binary data between the CPU, memory and IO.
Bidirectional.
Based on the width of a data bus we can determine the word length of a CPU.
Based on the word length we can determine the performance of a CPU.
3. Control Lines:
Used to carry the control signals and timing signals
Control signals indicate the type of operation.
Timing Signals are used to synchronize the memory and IO operations with a CPU clock.
Typical Control Lines may include Memory Read/Write, IO Read/Write, Bus Request/Grant,
etc.
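
As noted above, the address bus width fixes how much memory the CPU can address, and the data bus width fixes the word length. A quick Python sketch of those two relationships; the 16-bit address / 8-bit data figures below are just sample values.

    def addressable_locations(address_lines):
        """n address lines can select 2**n distinct memory locations."""
        return 2 ** address_lines

    # Sample values: a CPU with a 16-bit address bus and an 8-bit data bus.
    locations = addressable_locations(16)        # 65,536 locations
    word_bits = 8                                # data bus width (sample value)
    capacity_bytes = locations * word_bits // 8  # one 8-bit word per location
    print(f"{locations} locations = {capacity_bytes // 1024} KB of addressable memory")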
ADDRESSING MODES
The term addressing mode refers to the way in which the operand of an instruction is specified.
The addressing mode specifies a rule for interpreting or modifying the address field of the
instruction before the operand is actually referenced.
Addressing modes for 8086 instructions are divided into two categories:
1) Addressing modes for data
2) Addressing modes for branch.
The 8086 memory addressing modes provide flexible access to memory, allowing you to easily
access variables, arrays, records, pointers, and other complex data types. The key to good
assembly language programming is the proper use of memory addressing modes.
An assembly language program instruction consists of two parts: the operation code (opcode)
and the operand.

The memory address of an operand consists of two components: the starting address of the
memory segment and the offset within it.

IMPORTANT TERMS
Segment base: The starting address of the memory segment.
Effective address or Offset: An offset is determined by adding any combination of three
address elements: displacement, base, and index.
Displacement: An 8-bit or 16-bit immediate value given in the instruction.
Base: Contents of the base register, BX or BP.
Index: Contents of the index register, SI or DI.
According to different ways of specifying an operand by 8086 microprocessor, different
addressing modes are used by 8086.
Addressing modes used by 8086 microprocessors are discussed below:
Implied mode: In implied addressing the operand is specified in the instruction itself. In this
mode the data is 8 or 16 bits long and is part of the instruction. Zero-address instructions are
designed with the implied addressing mode.

Example: CLC (clears the carry flag, i.e., resets it to 0)


Immediate addressing mode (symbol #): In this mode the data is present in the address field
of the instruction, designed like a one-address instruction format.
Note: A limitation of the immediate mode is that the range of constants is restricted by the size
of the address field.

Example: MOV AL, 35H (move the data 35H into AL register)
Register mode: In register addressing the operand is placed in one of the 8-bit or 16-bit
general-purpose registers. The data is in the register that is specified by the instruction.
One register reference is required to access the data.

Example: MOV AX, CX (move the contents of CX register to AX register)


Register Indirect mode: In this addressing the operand’s offset is placed in any one of the
registers BX, BP, SI, or DI, as specified in the instruction. The effective address of the data is in
the base register or an index register that is specified by the instruction.
Two register references are required to access the data.

The 8086 CPUs let you access memory indirectly through a register using the register indirect
addressing modes.
Example: MOV AX, [BX] (move the contents of the memory location addressed by register BX
into register AX)
Auto-indexed (increment) mode: The effective address of the operand is the contents of a
register specified in the instruction. After the operand is accessed, the contents of this register
are automatically incremented to point to the next consecutive memory location; written (R2)+.
One register reference, one memory reference, and one ALU operation are required to access
the data.
Example:
ADD R1, (R2)+   // equivalent to:
R1 = R1 + M[R2]
R2 = R2 + d
This is useful for stepping through arrays in a loop, where R2 holds the start of the array and d
is the size of an element.
Auto-indexed (decrement) mode: The effective address of the operand is the contents of a
register specified in the instruction. Before the operand is accessed, the contents of this register
are automatically decremented to point to the previous consecutive memory location; written
-(R2).
One register reference, one memory reference, and one ALU operation are required to access
the data.
Example:
ADD R1, -(R2)   // equivalent to:
R2 = R2 - d
R1 = R1 + M[R2]
Auto-decrement mode mirrors auto-increment mode; together the two can be used to implement
a stack via push and pop, and they are useful for implementing "last-in, first-out" data
structures.
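
A small Python sketch of these two modes under stated assumptions: registers are modeled as a dict, memory as a list, and the element size d is 1. Using auto-decrement as push and auto-increment as pop gives the LIFO behavior described above.

    memory = [0] * 16
    regs = {"R2": 8}            # R2 plays the role of a stack pointer
    D = 1                       # element size

    def push(value):
        """Auto-decrement: step the register back, then store through it."""
        regs["R2"] -= D
        memory[regs["R2"]] = value

    def pop():
        """Auto-increment: load through the register, then step it forward."""
        value = memory[regs["R2"]]
        regs["R2"] += D
        return value

    push(10); push(20)
    print(pop(), pop())         # 20 10  (last in, first out)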
Direct addressing / absolute addressing mode (symbol []): The operand’s offset is given in
the instruction as an 8-bit or 16-bit displacement element. In this addressing mode the 16-bit
effective address of the data is part of the instruction.
Only one memory reference operation is required to access the data.

Example: ADD AL, [0301]   // add the contents of offset address 0301 to AL
Indirect addressing Mode (symbol @ or ()): In this mode address field of instruction contains
the address of effective address. Here two references are required.
1st reference to get effective address.
2nd reference to access the data.
Based on the availability of the effective address, indirect mode is of two kinds:
Register Indirect: In this mode the effective address is in a register, and the corresponding
register name is held in the address field of the instruction. One register reference and one
memory reference are required to access the data.
Memory Indirect: In this mode the effective address is in memory, and the corresponding
memory address is held in the address field of the instruction. Two memory references are
required to access the data.
Indexed addressing mode: The operand’s offset is the sum of the contents of an index register
(SI or DI) and an 8-bit or 16-bit displacement.
Example: MOV AX, [SI + 05]
Based Indexed Addressing: The operand’s offset is the sum of the contents of a base register
(BX or BP) and an index register (SI or DI).
Example: ADD AX, [BX+SI]
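
The offset arithmetic in these modes is just addition of the pieces named earlier (base, index, displacement). Below is a Python sketch, with made-up register contents, of how an 8086-style effective address would be assembled for the two examples above.

    regs = {"BX": 0x1000, "BP": 0x2000, "SI": 0x0030, "DI": 0x0040}

    def effective_address(base=None, index=None, displacement=0):
        """Offset = base register + index register + displacement (any may be absent)."""
        ea = displacement
        if base:
            ea += regs[base]
        if index:
            ea += regs[index]
        return ea

    # MOV AX, [SI + 05]  -> indexed addressing
    print(hex(effective_address(index="SI", displacement=0x05)))   # 0x35
    # ADD AX, [BX + SI]  -> based indexed addressing
    print(hex(effective_address(base="BX", index="SI")))           # 0x1030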
Based on Transfer of control, addressing modes are:
PC relative addressing mode: PC relative addressing mode is used to implement intra
segment transfer of control, in this mode effective address is obtained by adding displacement
to PC.
EA= PC + Address field value
PC= PC + Relative value.
Base register addressing mode: Base register addressing mode is used to implement inter-segment transfer of control. In this mode the effective address is obtained by adding the base register value to the address field value.
EA= Base register + Address field value.
PC= Base register + Relative value.
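The two formulas above can be checked with toy numbers (the values here are arbitrary):

# Transfer-of-control target computation in both modes.
PC, base_register, address_field = 1000, 4000, 24

ea_pc_relative = PC + address_field              # EA = PC + address field value
ea_based       = base_register + address_field   # EA = base register + address field value
print(ea_pc_relative, ea_based)                  # 1024 4024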
Note:
1. Both the PC-relative and base-register addressing modes are suitable for program relocation at run time.
2. The base-register addressing mode is best suited for writing position-independent code.
Advantages of Addressing Modes
✓ To give programmers facilities such as pointers, counters for loop control, indexing of data, and program relocation.
✓ To reduce the number of bits in the addressing field of the instruction.
COMPUTER ARITHMETIC
BOOLEAN ALGEBRA
Boolean algebra is a type of algebra that operates on binary values. In the year 1854, George Boole, an English mathematician, proposed this algebra. It is a variant of Aristotle’s propositional logic that uses the symbols 0 and 1, or True and False. Boolean algebra is concerned with binary variables and logic operations.
Boolean Expression and Variables
A Boolean expression is an expression that produces a Boolean value when evaluated: true or false, the only two possible Boolean values. Boolean variables are variables that store Boolean values. P + Q = R is a Boolean expression in which P, Q, and R are Boolean variables that can each store only two values: 0 and 1. The computer performs all operations using binary 0 and 1, since the computer understands only machine language (0/1). Boolean logic, named after George Boole, is a form of algebra in which every value reduces to one of two possibilities: 1 or 0. To understand Boolean logic effectively, we must first understand its rules, as well as truth tables and logic gates.
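In a programming language these ideas map directly onto Boolean types; for instance, in Python (where True and False correspond to 1 and 0):

# P + Q = R as a Boolean expression: + is logical OR, and each variable
# holds only one of the two Boolean values.
P, Q = True, False
R = P or Q
print(R, int(P), int(Q), int(R))    # True 1 0 1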
Truth Tables
A truth table represents all possible combinations of input values and the corresponding outputs in tabular form. Because it shows every possibility of input and output, it is called a truth table. Truth tables are commonly used in logic problems such as Boolean algebra and electronic circuits. In a truth table, T or 1 denotes ‘True’ and F or 0 denotes ‘False’.
A B X=A.B
T T T
T F F
F T F
F F F
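Small truth tables like the one above are easy to generate programmatically; a sketch in Python:

# Enumerate every input combination for X = A.B and print each row.
from itertools import product

print("A     B     X=A.B")
for A, B in product([True, False], repeat=2):
    print(A, B, A and B)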

Logic Gates
A logic gate is a virtual or physical device that performs a Boolean function. Logic gates are used to build logic circuits and are the main components of any digital system. Such a circuit has exactly one output and one or more inputs, and the relation between the inputs and the output is governed by a specific logic. AND, OR, and NOT gates are examples of logic gates.
Types of Logic Gates
1. AND Gate (Product): A logic gate with two or more inputs and a single output is known as an AND gate. An AND gate operates according to the rules of logical multiplication. It can have any number of inputs, although two-input and three-input AND gates are the most common. If any of the inputs is low (0), the output is also low; when all of the inputs are high (1), the output is high as well.

Truth table: For AND gate, the output X is true if and only if both the inputs P and Q are
true. So the truth table of AND gate is as follows:
P Q X=P.Q
T T T
T F F
F T F
F F F

2. OR Gate (Sum): A logic gate that performs a logical OR operation is known as an OR gate. The logical OR operation produces a high output (1) if one or both of the gate’s inputs are high. If neither of the inputs is high, the result is a low output (0). Like an AND gate, an OR gate can have any number of inputs but only one output. A logical OR gate finds the maximum of two binary digits.

Truth table: For the OR gate, the output X is true if and only if any of the inputs P or Q is
true. So the truth table of OR gate is as follows:
P Q X=P+Q
T T T
T F T
F T T
F F F

3. NOT Gate (Complement): A NOT gate is an inverting device that takes a single input; its output is ordinarily at logic level 1 and goes low to logic level 0 when its input is at logic level 1. In other words, it inverts its input signal: a NOT gate’s output is high only when its input is at logic level 0. The output ~P (~ denotes NOT) of a single-input NOT gate is true only when the input P is false, that is, not true. It is also called an inverter gate because it produces the negation of the input Boolean expression.

Truth table: For the NOT gate, the output X is true if and only if input P is false. So the truth
table of NOT gate is as follows:
P ~P
T F
F T

4. NAND Gate: A logic gate known as a NAND gate provides a low output (0) only if all of its inputs are true, and a high output (1) otherwise. The NAND gate is therefore the inverse of an AND gate, and its circuit is created by joining an AND gate and a NOT gate. NAND means ‘Not AND’, and it produces false only when both inputs P and Q are true. NAND gates (together with NOR gates) are known as universal gates because any Boolean function can be implemented using only this type of gate; a demonstration follows the truth table below.
Truth table:
For the NAND gate, the output X is false if and only if both the inputs (i.e., P and Q) are true.
So the truth table of the NAND gate is as follows:

P Q ~(P.Q)
T T F
T F T
F T T
F F T
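As a quick illustration of that universality (a sketch, not a formal proof), NOT, AND, and OR can all be expressed with NAND alone:

# NOT, AND, and OR built only from NAND, checked over all inputs.
from itertools import product

def nand(p, q):
    return not (p and q)

def not_(p):
    return nand(p, p)               # NOT P   = P NAND P

def and_(p, q):
    return not_(nand(p, q))         # P AND Q = NOT (P NAND Q)

def or_(p, q):
    return nand(not_(p), not_(q))   # P OR Q  = (NOT P) NAND (NOT Q)

for p, q in product([True, False], repeat=2):
    assert and_(p, q) == (p and q) and or_(p, q) == (p or q)
print("NAND reproduces NOT, AND and OR")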

5. NOR Gate: A logic gate known as a NOR gate provides a high output (1) only if all of its
inputs are false, and low output (0) otherwise. As a result, the NOR gate is the inverse of an
OR gate, and its circuit is created by joining OR gate and NOT gate. NOR means ‘Not of
OR’ Gate & it results in true only when both the inputs P and Q are false.

Truth table:
For the NOR gate, the output X is true if and only if both the inputs (i.e., P and Q) are false.
So the truth table of the NOR gate is as follows:

P Q ~(P+Q)
T T F
T F F
F T F
F F T

6. XOR Gate: An XOR gate (also known as an Exclusive OR gate) is a digital logic gate that performs exclusive disjunction; it has two or more inputs and one output. The output of a two-input XOR gate is true only when exactly one of its inputs is true: if both of its inputs are false, or both are true, the output is false. XOR means ‘Exclusive OR’, and it produces true only when either of the two inputs P and Q is true, i.e., either P is true or Q is true, but not both.

Truth table:

P Q X=P⊕Q
T T F
T F T
F T T
F F F

7. XNOR Gate: An XNOR gate (also known as an Exclusive NOR gate) is a digital logic gate that is just the opposite of an XOR gate. It has two or more inputs and one output. When exactly one of its two inputs is true, it returns false. XNOR means ‘Exclusive NOR’, and its result is true only when its inputs P and Q are both true or both false.

Truth table:
P Q X=P XNOR Q
T T T
T F F
F T F
F F T
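All six two-input gates above can be summarised as one-line Python functions and checked against their truth tables:

# Each gate as a function of two Booleans; the printed rows follow the
# T T / T F / F T / F F ordering used in the tables above.
from itertools import product

gates = {
    "AND":  lambda p, q: p and q,
    "OR":   lambda p, q: p or q,
    "NAND": lambda p, q: not (p and q),
    "NOR":  lambda p, q: not (p or q),
    "XOR":  lambda p, q: p != q,
    "XNOR": lambda p, q: p == q,
}

for name, gate in gates.items():
    rows = [gate(p, q) for p, q in product([True, False], repeat=2)]
    print(name, rows)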

Laws for Boolean Logic

Following are some laws for boolean logic:


Law OR form AND form
Identity Law P+0=P P.1 = P
Idempotent Law P+P=P P.P = P
Commutative Law P+Q=Q+P P.Q = Q.P
Null Law 1+P=1 0.P = 0
Inverse Law P + (~P) = 1 P.(~P) = 0
Associative Law P + (Q + R) = (P + Q) + R P.(Q.R) = (P.Q).R
Distributive Law P + QR = (P + Q).(P + R) P.(Q + R) = P.Q + P.R
Absorption Law P + PQ = P P.(P + Q) = P
De Morgan’s Law ~(P + Q) = (~P).(~Q) ~(P.Q) = (~P) + (~Q)
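Since each variable takes only two values, all of these laws can be verified exhaustively; a short Python check for a few of them:

# Exhaustive verification of the identity, absorption and distributive laws.
from itertools import product

for P, Q, R in product([False, True], repeat=3):
    assert ((P or False) == P) and ((P and True) == P)    # Identity Law
    assert (P or (P and Q)) == P                          # Absorption Law
    assert (P or (Q and R)) == ((P or Q) and (P or R))    # Distributive Law
print("all checked laws hold")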

De Morgan’s laws

De Morgan’s Law states that:


Statement 1: The complement of the product (AND) of two Boolean variables (or expressions) is equal to the sum (OR) of the complements of each Boolean variable (or expression).
~(P.Q) = (~P) + (~Q)
Proof:
Statement: ~(P.Q) = (~P) + (~Q)
The truth table is:
P Q (~P) (~Q) ~(P.Q) (~P)+(~Q)
T T F F F F
T F F T T T
F T T F T T
F F T T T T
We can clearly see that truth values for ~(P.Q) are equal to truth values for (~P) + (~Q),
corresponding to the same input.
Statement 2: The complement of the sum (OR) of two Boolean variables (or expressions) is equal to the product (AND) of the complements of each Boolean variable (or expression).
~(P + Q) = (~P).(~Q)
Proof
Statement: ~(P+Q) = (~P).(~Q)
The truth table is:
P Q (~P) (~Q) ~(P + Q) (~P).(~Q)
T T F F F F
T F F T F F
F T T F F F
F F T T T T
We can clearly see that truth values for ~(P + Q) are equal to truth values for (~P).(~Q),
corresponding to the same input.
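Both statements can also be confirmed by generating the truth-table columns in Python:

# Columns: ~(P.Q), (~P)+(~Q), ~(P+Q), (~P).(~Q) for every input pair.
from itertools import product

for P, Q in product([True, False], repeat=2):
    s1_left  = not (P and Q)
    s1_right = (not P) or (not Q)
    s2_left  = not (P or Q)
    s2_right = (not P) and (not Q)
    assert s1_left == s1_right and s2_left == s2_right
    print(P, Q, s1_left, s1_right, s2_left, s2_right)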

Logic circuits

A logic circuit is an electric circuit to which we can apply one or more binary inputs (each assuming one of two states, on or off) and obtain a single binary output corresponding to the inputs, in a fashion that can be described as a function in symbolic logic. The AND, OR, and NOT gates are basic logic circuits that perform the logical functions AND, OR, and NOT, respectively. With circuits, computers can carry out more complicated tasks than they could with only a single gate.
Example: A chain of two logic gates is the smallest circuit. Consider the following circuit:

This logic circuit is for the Boolean expression: (P + Q).R.


Here, the OR gate is used first: P and Q are its inputs and P + Q is its output.
Then the AND gate is used: (P + Q) and R are its inputs and (P + Q).R is its output.
So the truth table is:
P Q R P+Q X = (P + Q).R
T T T T T
T T F T F
T F T T T
T F F T F
F T T T T
F T F T F
F F T F F
F F F F F
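The same truth table can be produced by expressing the circuit as a Python function:

# The circuit X = (P + Q).R as a function, evaluated over all 8 input rows.
from itertools import product

def circuit(p, q, r):
    return (p or q) and r           # OR gate feeding an AND gate

for P, Q, R in product([True, False], repeat=3):
    print(P, Q, R, P or Q, circuit(P, Q, R))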

Questions

Question 1. Design the logical circuit for: A.B + B.C


Solution:
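The circuit diagram for this solution is a figure in the original notes and is not reproduced here, but the circuit it depicts (two AND gates feeding an OR gate) can be sketched functionally in Python:

# A.B + B.C: two AND gates whose outputs feed an OR gate.
def circuit(a, b, c):
    return (a and b) or (b and c)

print(circuit(True, True, False))   # True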

Question 2. What will be the Boolean expression for the following logic circuit:

Solution:
X = ~(P + Q).R + S
Question 3. Verify using truth table: P + P.Q = P
Solution:
The truth table for P + P.Q = P
P Q P.Q P + P.Q
T T T T
T F F T
F T F F
F F F F
In the truth table, we can see that the truth values for P + P.Q are exactly the same as those for P, which verifies the absorption law.
