Architecture of Computer System
A computer is an electronic machine that makes performing tasks easy. The CPU executes each instruction given to it in a series of steps; this series of steps is called the machine cycle, and it is repeated for every instruction. One machine cycle involves fetching the instruction, decoding the instruction, transferring the data, and executing the instruction.
Computer system has five basic units that help the computer to perform operations,
which are given below:
1. Input Unit
2. Output Unit
3. Storage Unit
4. Arithmetic Logic Unit
5. Control Unit
Input Unit
The input unit connects the external environment with the internal computer system. It provides data and instructions to the computer system. Commonly used input devices are the keyboard, mouse, magnetic tape etc.
The input unit performs the following tasks:
1. It accepts data and instructions from the outside world.
2. It converts them into a form that the computer can understand.
3. It supplies the converted data to the computer system for further processing.
Output Unit
It connects the internal system of a computer to the external environment. It provides the results of any computation, or instructions, to the outside world. Some output devices are printers, monitors etc.
Storage Unit
This unit holds the data and instructions. It also stores the intermediate results before
these are sent to the output devices. It also stores the data for later use.
The storage unit of a computer system can be divided into two categories:
Primary Storage: This memory is used to store data that is currently being processed. It is used for temporary storage of data; the data is lost when the
computer is switched off. RAM is used as primary storage memory.
Secondary Storage: The secondary memory is slower and cheaper than
primary memory. It is used for permanent storage of data. Commonly used
secondary memory devices are hard disk, CD etc.
Control Unit
It controls all other units of the computer. It controls the flow of data and instructions between the storage unit and the ALU. Thus it is also known as the central nervous system of the computer.
CPU
It is the Central Processing Unit of the computer. The control unit and the ALU are together known as the CPU, and the CPU is the brain of the computer system. It performs the following tasks:
1. It performs all arithmetic and logical operations.
2. It takes all decisions.
3. It controls all the units of the computer.
Logic Gates
The logical operations of the ALU are built from logic gates. The basic logic gates are:
1. AND
2. OR
3. NOT
4. NAND
5. NOR
6. XOR
7. XNOR
AND Gate
The AND gate produces the AND logic function, that is, the output is 1 if input A and
input B are both equal to 1; otherwise the output is 0.
The algebraic symbol of the AND function is the same as the multiplication symbol of
ordinary arithmetic.
We can either use a dot between the variables or concatenate the variables without an
operation symbol between them. AND gates may have more than two inputs, and by
definition, the output is 1 if and only if all inputs are 1.
OR Gate
The OR gate produces the inclusive-OR function; that is, the output is 1 if input A or
input B or both inputs are 1; otherwise, the output is 0.
The algebraic symbol of the OR function is +, similar to arithmetic addition.
OR gates may have more than two inputs, and by definition, the output is 1 if any input
is 1.
NOR Gate
The NOR gate is the complement of the OR gate and uses an OR graphic symbol
followed by a small circle.
Exclusive-OR Gate
The exclusive-OR gate has a graphic symbol similar to the OR gate except for the
additional curved line on the input side.
The output of the gate is 1 if any input is 1 but excludes the combination when both
inputs are 1. It is similar to an odd function; that is, its output is 1 if an odd number of
inputs are 1.
A ⊕ B = A'.B + A.B'

A  B  A'  B'  A'.B  A.B'  OUT
0  0  1   1   0     0     0
0  1  1   0   1     0     1
1  0  0   1   0     1     1
1  1  0   0   0     0     0
Exclusive-NOR Gate
The exclusive-NOR is the complement of the exclusive-OR, as indicated by the small
circle in the graphic symbol.
The output of this gate is 1 only if both the inputs are equal to 1 or both inputs are equal
to 0.
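The behaviour of the gates described above is easy to check with a few lines of code. The following short Python sketch is written only for illustration (the gate functions and their names are our own, not a standard library); it models each gate on single bits and prints a combined two-input truth table.

    # Basic logic gates modelled on single bits (0 or 1).
    def AND(a, b):  return a & b
    def OR(a, b):   return a | b
    def NOT(a):     return 1 - a
    def NAND(a, b): return NOT(AND(a, b))
    def NOR(a, b):  return NOT(OR(a, b))
    def XOR(a, b):  return a ^ b
    def XNOR(a, b): return NOT(XOR(a, b))

    # Print a combined truth table for all two-input gates.
    print("A B | AND OR NAND NOR XOR XNOR")
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "|", AND(a, b), OR(a, b), NAND(a, b),
                  NOR(a, b), XOR(a, b), XNOR(a, b))

Running it reproduces the truth tables given in this section, including the exclusive-OR and exclusive-NOR columns.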
Registers in Computer Architecture
Register is a very fast computer memory, used to store data/instruction in-execution.
A Register is a group of flip-flops with each flip-flop capable of storing one bit of
information. An n-bit register has a group of n flip-flops and is capable of storing binary
information of n-bits.
A register consists of a group of flip-flops and gates. The flip-flops hold the binary
information and gates control when and how new information is transferred into a
register. Various types of registers are available commercially. The simplest register is
one that consists of only flip-flops with no external gates.
These days registers are also implemented as a register file.
1. Accumulator: This is the most common register, used to store data taken out from the memory.
2. General Purpose Registers: These are used to store data and intermediate results during program execution.
3. Special Purpose Registers: Users do not access these registers. These registers are used internally by the computer system, for example:
o MAR: Memory Address Register holds the address for the memory unit.
o MBR: Memory Buffer Register stores instructions and data received from the memory and sent to the memory.
Register Transfer
Information transferred from one register to another is designated in symbolic form by
means of replacement operator.
R2 ← R1
It denotes the transfer of the data from register R1 into R2.
Normally we want the transfer to occur only under a predetermined control condition. This can be shown by the following if-then statement: if (P=1) then (R2 ← R1)
Here P is a control signal generated in the control section.
Control Function
A control function is a Boolean variable that is equal to 1 or 0. The control function is
shown as:
P: R2 ← R1
The control condition is terminated with a colon. It shows that transfer operation can be
executed only if P=1.
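As an illustration only (the register values and the hard-wired control signal below are made up), the conditional transfer P: R2 ← R1 can be modelled in a few lines of Python:

    # Registers modelled as variables holding binary words.
    R1, R2 = 0b1010, 0b0000

    def control_P():
        # Control signal generated by the control section; hard-wired
        # to 1 here purely for demonstration.
        return 1

    # P: R2 <- R1  (the transfer happens only when P = 1)
    P = control_P()
    if P == 1:
        R2 = R1                    # contents of R1 are copied into R2
    print(format(R2, '04b'))       # -> 1010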
Micro-Operations
The operations executed on data stored in registers are called micro-operations. A
micro-operation is an elementary operation performed on the information stored in one
or more registers.
Example: Shift, count, clear and load.
Types of Micro-Operations
The micro-operations in digital computers are of 4 types:
1. Register transfer micro-operations: transfer binary information from one register to another.
2. Arithmetic micro-operations: perform arithmetic operations on numeric data stored in registers.
3. Logic micro-operations: perform bit manipulation operations on data stored in registers.
4. Shift micro-operations: perform shift operations on data stored in registers.
Arithmetic Micro-Operations
Some of the basic micro-operations are addition, subtraction, increment and decrement.
Add Micro-Operation
It is defined by the following statement:
R3 ← R1 + R2
The above statement instructs the data or contents of register R1 to be added to data or
content of register R2 and the sum should be transferred to register R3.
Subtract Micro-Operation
Let us again take an example:
R3 ← R1 + R2' + 1
Here R2' is the 1's complement of R2; adding 1 to it gives the 2's complement of R2, so the statement performs R3 ← R1 – R2.
Increment and decrement micro-operations are written in the same way, for example:
R1 ← R1 – 1
The basic arithmetic micro-operations are summarised below:
Symbolic Designation      Description
R3 ← R1 + R2              Contents of R1 plus R2 transferred to R3
R3 ← R1 – R2              Contents of R1 minus R2 transferred to R3
R2 ← R2'                  Complement the contents of R2 (1's complement)
R2 ← R2' + 1              2's complement the contents of R2 (negate)
R3 ← R1 + R2' + 1         R1 plus the 2's complement of R2 (subtraction)
R1 ← R1 + 1               Increment the contents of R1 by one
R1 ← R1 – 1               Decrement the contents of R1 by one
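A quick way to see how R1 + R2' + 1 implements subtraction is to model the registers as fixed-width binary words. The 4-bit width and the values below are chosen only for illustration:

    BITS = 4
    MASK = (1 << BITS) - 1             # keep results within the register width

    def add(r1, r2):                   # R3 <- R1 + R2
        return (r1 + r2) & MASK

    def sub(r1, r2):                   # R3 <- R1 + R2' + 1 (2's complement subtract)
        r2_complement = (~r2) & MASK   # 1's complement of R2
        return (r1 + r2_complement + 1) & MASK

    R1, R2 = 0b0111, 0b0011            # 7 and 3
    print(format(add(R1, R2), '04b'))  # 1010 (10)
    print(format(sub(R1, R2), '04b'))  # 0100 (4)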
Logic Micro-Operations
These are binary micro-operations performed on the bits stored in the registers. These
operations consider each bit separately and treat them as binary variables.
Let us consider the X-OR micro-operation on the contents of two registers R1 and R2:
P: R1 ← R1 X-OR R2
Shift Micro-Operations
These micro-operations are used for the serial transfer of data; they shift the bits of a register to the left or to the right.
a) Logical Shift
It transfers 0 through the serial input. The symbols "shl" and "shr" are used for logical shift left and shift right respectively. For example:
R1 ← shl R1
b) Circular Shift
This circulates or rotates the bits of register around the two ends without any loss of
data or contents. In this, the serial output of the shift register is connected to its serial
input. "cil" and "cir" is used for circular shift left and right respectively.
c) Arithmetic Shift
This shifts a signed binary number to the left or right. An arithmetic shift left multiplies a signed binary number by 2 and an arithmetic shift right divides the number by 2. Arithmetic shift micro-operations leave the sign bit unchanged, because the sign of the number remains the same when it is multiplied or divided by 2.
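The logic and shift micro-operations above can be sketched on a fixed-width register as shown below. The 8-bit width and the helper names (shl, shr, cil, cir, ashr) are illustrative conventions, not a standard API:

    BITS = 8
    MASK = (1 << BITS) - 1

    def xor_op(r1, r2): return r1 ^ r2                                   # R1 <- R1 XOR R2
    def shl(r):  return (r << 1) & MASK                                  # logical shift left: 0 enters LSB
    def shr(r):  return r >> 1                                           # logical shift right: 0 enters MSB
    def cil(r):  return ((r << 1) | (r >> (BITS - 1))) & MASK            # circular shift left
    def cir(r):  return ((r >> 1) | ((r & 1) << (BITS - 1))) & MASK      # circular shift right
    def ashr(r): return (r >> 1) | (r & (1 << (BITS - 1)))               # arithmetic shift right, sign bit kept

    R1 = 0b10110011
    print(format(cil(R1), '08b'))    # 01100111 - no bit is lost
    print(format(ashr(R1), '08b'))   # 11011001 - sign bit (MSB) preserved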
Computer Architecture:
Instruction Codes
A program, as we all know, is a set of instructions that specify the operations, operands, and the sequence in which processing has to occur. An instruction code is a group of bits that tells the computer to perform a specific operation.
In a computer with a single processor register, that register is known as the Accumulator (AC). An operation is performed on the memory operand and the content of the AC.
Load(LD)
The lines from the common bus are connected to the inputs of each register and data inputs of
memory. The particular register whose LD input is enabled receives the data from the bus during
the next clock pulse transition.
Before studying instruction formats, let us first study the operand and address parts of an instruction code.
When the 2nd part of an instruction code specifies the operand, the instruction is said to
have immediate operand. And when the 2nd part of the instruction code specifies the address of
an operand, the instruction is said to have a direct address. And in indirect address, the 2nd
part of instruction code, specifies the address of a memory word in which the address of the
operand is found.
Computer Instructions
The basic computer has three instruction code formats. The operation code (opcode) part of the instruction contains 3 bits, and the meaning of the remaining 13 bits depends upon the operation code encountered.
There are three types of formats:
1. Memory Reference Instruction
It uses 12 bits to specify the address and 1 bit to specify the addressing mode I (I = 0 for direct address, I = 1 for indirect address).
2. Register Reference Instruction
These instructions are recognized by the operation code 111 with a 0 in the leftmost bit of the instruction. The remaining 12 bits specify the register operation to be executed.
3. Input-Output Instruction
These instructions are recognized by the operation code 111 with a 1 in the left most bit of
instruction. The remaining 12 bits are used to specify the input-output operation.
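Based on the 16-bit layout described here (one leftmost mode bit, a 3-bit opcode and 12 remaining bits), the fields of an instruction word can be separated as in the following sketch. The example word and the function are purely illustrative:

    def decode(instruction):
        """Split a 16-bit basic-computer-style instruction into its fields."""
        i_bit   = (instruction >> 15) & 0x1    # leftmost bit: mode / I-O indicator
        opcode  = (instruction >> 12) & 0x7    # next 3 bits: operation code
        address = instruction & 0xFFF          # remaining 12 bits: address or operation bits
        return i_bit, opcode, address

    # Example word: I = 0, opcode 010, direct address 0x3A5 (arbitrary values).
    print(decode(0b0_010_001110100101))        # -> (0, 2, 933)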
Format of Instruction
The format of an instruction is depicted in a rectangular box symbolizing the bits of an
instruction. Basic fields of an instruction format are given below:
1. An operation code field that specifies the operation to be performed.
2. An address field that designates a memory address or a processor register.
3. A mode field that specifies the way the operand or effective address is determined.
Computers may have instructions of different lengths containing varying number of addresses.
The number of address field in the instruction format depends upon the internal organization of
its registers.
Immediate Mode
In this mode, the operand is specified in the instruction itself. An immediate mode
instruction has an operand field rather than the address field.
For example: ADD 7, which says Add 7 to contents of accumulator. 7 is the operand
here.
Register Mode
In this mode the operand is stored in the register and this register is present in CPU.
The instruction has the address of the Register where the operand is stored.
Advantages: The instruction is shorter and fetching the operand is fast, because accessing a register is much faster than accessing memory.
Disadvantages: The address space is very limited, because only a small number of registers is available.
Relative Addressing Mode
In this mode the effective address is obtained by adding the address part of the instruction (A) to the content of the program counter. The operand is A cells away from the current cell (the one pointed to by the PC).
Instruction Cycle
An instruction cycle, also known as fetch-decode-execute cycle is the basic
operational process of a computer. This process is repeated continuously by CPU from
boot up to shut down of computer.
Following are the steps that occur during an instruction cycle:
1. Fetch the instruction from memory.
2. Decode the instruction.
3. Read the effective address from memory, if the instruction has an indirect address.
4. Execute the instruction.
The cycle is then repeated by fetching the next instruction. Thus in this way the
instruction cycle is repeated continuously.
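The loop below is a toy illustration of this cycle in Python; its tiny "instruction set" (LOAD, ADD, HALT) is invented for the example and is not the basic computer's real instruction set.

    # A minimal fetch-decode-execute loop over a toy instruction set.
    memory = [("LOAD", 5), ("ADD", 7), ("ADD", 1), ("HALT", 0)]
    AC, PC = 0, 0                      # accumulator and program counter

    while True:
        opcode, operand = memory[PC]   # fetch the instruction
        PC += 1                        # point to the next instruction
        if opcode == "LOAD":           # decode and execute
            AC = operand
        elif opcode == "ADD":
            AC += operand
        elif opcode == "HALT":
            break

    print(AC)                          # -> 13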
Memory Organization in Computer Architecture
A memory unit is the collection of storage units or devices together. The memory unit
stores the binary information in the form of bits. Generally, memory/storage is classified
into 2 categories:
Volatile Memory: This loses its data when power is switched off.
Non-Volatile Memory: This is permanent storage and does not lose any data when the computer is powered off.
Memory Hierarchy
Memory can be accessed in the following ways:
1. Random Access: Main memories are random access memories, in which each memory location has a unique address. Using this unique address, any memory location can be reached in the same amount of time, in any order.
2. Sequential Access: Memory is accessed in a fixed sequence, as in magnetic tapes.
3. Direct Access: In this mode, information is stored in tracks, with each track having a separate read/write head, as in magnetic disks.
NVRAM: Non-Volatile RAM retains its data even when turned off. Example: Flash memory.
ROM: Read Only Memory is non-volatile and is more like a permanent storage for information. It also stores the bootstrap loader program, used to load and start the operating system when the computer is turned on.
Auxiliary Memory
Devices that provide backup storage are called auxiliary memory. For
example: Magnetic disks and tapes are commonly used auxiliary devices. Other
devices used as auxiliary memory are magnetic drums, magnetic bubble memory and
optical disks.
It is not directly accessible to the CPU, and is accessed using the Input/Output
channels.
Cache Memory
The data or contents of the main memory that are used again and again by CPU, are
stored in the cache memory so that we can easily access that data in shorter time.
Whenever the CPU needs to access memory, it first checks the cache memory. If the
data is not found in the cache memory, then the CPU moves on to the main memory. It also transfers a block of recent data into the cache and keeps deleting old data in the cache to accommodate the new one.
Hit Ratio
The performance of cache memory is measured in terms of a quantity called hit ratio.
When the CPU refers to memory and finds the word in cache it is said to produce a hit.
If the word is not found in the cache and has to be read from main memory, it counts as a miss.
The ratio of the number of hits to the total CPU references to memory is called hit ratio.
Hit Ratio = Hit/(Hit + Miss)
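As a quick illustrative check of the formula (the counts below are made up):

    hits, misses = 90, 10
    hit_ratio = hits / (hits + misses)   # Hit Ratio = Hit / (Hit + Miss)
    print(hit_ratio)                     # -> 0.9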
Associative Memory
It is also known as content addressable memory (CAM). It is a memory chip in which
each bit position can be compared. In this the content is compared in each bit cell which
allows very fast table lookup. Since the entire chip can be compared, contents are
randomly stored without considering addressing scheme. These chips have less
storage capacity than regular memory chips.
The transformation of data from main memory to cache memory is referred to as a mapping process. Three types of mapping procedures are commonly used:
1. Associative Mapping
2. Direct Mapping
3. Set-Associative Mapping
Direct Mapping
The CPU address of 15 bits is divided into 2 fields. In this the 9 least significant bits
constitute the index field and the remaining 6 bits constitute the tag field. The number
of bits in index field is equal to the number of address bits required to access cache
memory.
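Using the split described above (a 15-bit CPU address with a 9-bit index and a 6-bit tag), the two fields can be extracted as in the following sketch; the example address is arbitrary.

    INDEX_BITS = 9                         # bits needed to address the cache

    def split_address(addr15):
        index = addr15 & ((1 << INDEX_BITS) - 1)   # 9 least significant bits
        tag   = addr15 >> INDEX_BITS                # remaining 6 bits
        return tag, index

    address = 0b101010_111001111           # a 15-bit address (example value)
    tag, index = split_address(address)
    print(format(tag, '06b'), format(index, '09b'))   # -> 101010 111001111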
Set Associative Mapping
The disadvantage of direct mapping is that two words with same index address can't
reside in cache memory at the same time. This problem can be overcome by set
associative mapping.
In this we can store two or more words of memory under the same index address. Each
data word is stored together with its tag and this forms a set.
Replacement Algorithms
Data is continuously replaced with new data in the cache memory using replacement
algorithms. Following are the 2 replacement algorithms used:
FIFO - First in First out. Oldest item is replaced with the latest item.
LRU - Least Recently Used. Item which is least recently used by CPU is removed.
Virtual Memory
Virtual memory is the separation of logical memory from physical memory. This
separation provides large virtual memory for programmers when only small physical
memory is available.
Virtual memory is used to give programmers the illusion that they have a very large
memory even though the computer has a small main memory. It makes the task of
programming easier because the programmer no longer needs to worry about the
amount of physical memory available.
Parallel Processing and Data Transfer Modes in a Computer System
Instead of processing each instruction sequentially, a parallel processing system provides concurrent data processing to reduce the execution time.
In this the system may have two or more ALU's and should be able to execute two or
more instructions at the same time. The purpose of parallel processing is to speed up
the computer processing capability and increase its throughput.
NOTE: Throughput is the number of instructions that can be executed in a unit of time.
Parallel processing can be viewed from various levels of complexity. At the lowest level,
we distinguish between parallel and serial operations by the type of registers used. At
the higher level of complexity, parallel processing can be achieved by using multiple
functional units that perform many operations simultaneously.
Data Transfer Modes of a Computer System
Based on the number of instruction and data streams that can be processed simultaneously (Flynn's classification), computers are divided into 4 major groups:
1. SISD (Single Instruction stream, Single Data stream)
2. SIMD (Single Instruction stream, Multiple Data stream)
3. MISD (Multiple Instruction stream, Single Data stream)
4. MIMD (Multiple Instruction stream, Multiple Data stream)
What is Pipelining?
Pipelining is the process of feeding instructions to the processor through a pipeline, which allows storing and executing instructions in an orderly, overlapped manner. It is also known as pipeline processing.
Pipelining is a technique where multiple instructions are overlapped during execution.
Pipeline is divided into stages and these stages are connected with one another to form
a pipe like structure. Instructions enter from one end and exit from another end.
Pipelining increases the overall instruction throughput.
In pipeline system, each segment consists of an input register followed by a
combinational circuit. The register is used to hold data and combinational circuit
performs operations on it. The output of combinational circuit is applied to the input
register of the next segment.
Pipeline system is like the modern day assembly line setup in factories. For example in
a car manufacturing industry, huge assembly lines are setup and at each point, there
are robotic arms to perform a certain task, and then the car moves on ahead to the next
arm.
Types of Pipeline
It is divided into 2 categories:
1. Arithmetic Pipeline
2. Instruction Pipeline
Arithmetic Pipeline
Arithmetic pipelines are usually found in most of the computers. They are used for
floating point operations, multiplication of fixed point numbers etc. For example: The
input to the Floating Point Adder pipeline is:
X = A*2^a
Y = B*2^b
Here A and B are the mantissas and a and b are the exponents. The floating point addition is carried out in four segments:
1. Compare the exponents.
2. Align the mantissas.
3. Add or subtract the mantissas.
4. Normalise the result.
Registers are used for storing the intermediate results between the above operations.
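A rough, unpipelined sketch of what each segment computes is given below (the values and the simple normalisation rule are chosen only for illustration; a real adder works on binary fractions and keeps per-segment registers):

    def fp_add(A, a, B, b):
        """Add X = A*2^a and Y = B*2^b, one pipeline segment at a time."""
        # Segment 1: compare the exponents.
        # Segment 2: align the mantissa of the number with the smaller exponent.
        if a >= b:
            B = B / 2 ** (a - b)
            b = a
        else:
            A = A / 2 ** (b - a)
            a = b
        # Segment 3: add the mantissas.
        C, c = A + B, a
        # Segment 4: normalise so the mantissa lies in [0.5, 1).
        while C != 0 and abs(C) >= 1:
            C, c = C / 2, c + 1
        while C != 0 and abs(C) < 0.5:
            C, c = C * 2, c - 1
        return C, c

    print(fp_add(0.9504, 3, 0.8200, 2))   # X = 0.9504*2^3, Y = 0.82*2^2 -> (0.6802, 4)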
Instruction Pipeline
In this a stream of instructions can be executed by
overlapping fetch, decode and execute phases of an instruction cycle. This type of
technique is used to increase the throughput of the computer system.
An instruction pipeline reads instruction from the memory while previous instructions are
being executed in other segments of the pipeline. Thus we can execute multiple
instructions simultaneously. The pipeline will be more efficient if the instruction cycle is
divided into segments of equal duration.
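The gain from overlapping can be estimated with the usual rule that n instructions in a k-segment pipeline need about k + (n - 1) clock cycles, assuming segments of equal duration and no conflicts. A tiny sketch of that calculation, with arbitrary numbers:

    def cycles(n_instructions, k_segments, pipelined=True):
        # With a pipeline, the first instruction needs k cycles to pass through
        # all segments; after that, one instruction completes every cycle.
        if pipelined:
            return k_segments + (n_instructions - 1)
        return n_instructions * k_segments

    n, k = 100, 4
    print(cycles(n, k, pipelined=False))   # 400 cycles without pipelining
    print(cycles(n, k, pipelined=True))    # 103 cycles with a 4-segment pipeline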
Pipeline Conflicts
There are some factors that cause the pipeline to deviate from its normal performance. Some of these factors are given below:
1. Timing Variations
All stages cannot take the same amount of time. This problem generally occurs in
instruction processing where different instructions have different operand requirements
and thus different processing time.
2. Data Hazards
When several instructions are in partial execution, and if they reference same data then
the problem arises. We must ensure that the next instruction does not attempt to access the data before the current instruction has finished with it, because this would lead to incorrect results.
3. Branching
In order to fetch and execute the next instruction, we must know what that instruction is.
If the present instruction is a conditional branch, and its result will lead us to the next
instruction, then the next instruction may not be known until the current one is
processed.
4. Interrupts
Interrupts insert unwanted instructions into the instruction stream and affect the execution of the instructions already in the pipeline.
5. Data Dependency
It arises when an instruction depends upon the result of a previous instruction but this
result is not yet available.
Advantages of Pipelining
1. The cycle time of the processor is reduced.
2. It increases the throughput of the system
Disadvantages of Pipelining
1. The design of a pipelined processor is complex and costly to manufacture.
2. The instruction latency is higher.
Typical application areas of such parallel and vector processing include:
1. Petroleum exploration.
2. Medical diagnosis.
3. Data analysis.
4. Weather forecasting.
5. Image processing.
6. Artificial intelligence.
Superscalar Processors
The superscalar approach was first introduced in the late 1980s. A superscalar machine is designed to improve the performance of the scalar processor. In most applications, most of the operations are on scalar quantities. The superscalar approach therefore produces high performance general purpose processors.
The main principle of the superscalar approach is that it executes instructions independently in different pipelines. As we already know, instruction pipelining leads to parallel processing, thereby speeding up the processing of instructions. In a
Superscalar processor, multiple such pipelines are introduced for different operations,
which further improves parallel processing.
There are multiple functional units each of which is implemented as a pipeline. Each
pipeline consists of multiple stages to handle multiple instructions at a time which
support parallel execution of instructions.
It increases the throughput because the CPU can execute multiple instructions per clock
cycle. Thus, superscalar processors are much faster than scalar processors.
A scalar processor works on one or two data items, while the vector processor works
with multiple data items. A superscalar processor is a combination of both. Each
instruction processes one data item, but there are multiple execution units within each
CPU thus multiple instructions can be processing separate data items concurrently.
While a superscalar CPU is typically also pipelined, superscalar execution and pipelining are two different performance enhancement techniques. It is possible to have a non-pipelined superscalar CPU or a pipelined non-superscalar CPU. The superscalar technique is associated with some
characteristics, which are:
1. Instructions are issued from a sequential instruction stream.
2. The CPU hardware dynamically checks for data dependencies between instructions at run time.
3. The CPU can execute multiple instructions per clock cycle.
Array Processors
Since most array processors operate asynchronously from the host CPU, they improve the overall capacity of the system. An array processor has its own local memory, hence providing extra memory to systems with low memory.
Computer Architecture:
Input/Output Organisation
In this tutorial we will learn about how Input and Output is handled in a computer system.
Input/Output Subsystem
The I/O subsystem of a computer provides an efficient mode of communication between the
central system and the outside environment. It handles all the input-output operations of the
computer system.
Peripheral Devices
Input or output devices that are connected to computer are called peripheral devices. These
devices are designed to read information into or out of the memory unit upon command from the
CPU and are considered to be the part of computer system. These devices are also
called peripherals.
For example: Keyboards, display units and printers are common peripheral devices.
There are three types of peripherals:
1. Input peripherals: Allow user input from the outside world to the computer. Example: Keyboard, Mouse.
2. Output peripherals: Allow information output from the computer to the outside world. Example: Monitor, Printer.
3. Input-Output peripherals: Allow both input (from the outside world to the computer) as well as output (from the computer to the outside world). Example: Touch screen.
Interfaces
Interface is a shared boundary between two separate components of the computer system which
can be used to attach two or more components to the system for communication purposes.
There are two types of interface:
1. CPU Interface
2. I/O Interface
Input-Output Interface
Peripherals connected to a computer need special communication links for interfacing with CPU.
In computer system, there are special hardware components between the CPU and peripherals to
control or manage the input-output transfers. These components are called input-output
interface units because they provide communication links between processor bus and
peripherals. They provide a method for transferring information between internal system and
input-output devices.
Data transfer between the CPU and I/O devices can take place in three different modes:
1. Programmed I/O
2. Interrupt-initiated I/O
3. Direct Memory Access (DMA)
Programmed I/O
Programmed I/O instructions are the result of I/O instructions written in computer program. Each
data item transfer is initiated by the instruction in the program.
Usually the program controls data transfer to and from CPU and peripheral. Transferring data
under programmed I/O requires constant monitoring of the peripherals by the CPU.
Interrupt Initiated I/O
In the programmed I/O method the CPU stays in the program loop until the I/O unit indicates
that it is ready for data transfer. This is time consuming process because it keeps the processor
busy needlessly.
This problem can be overcome by using interrupt initiated I/O. In this when the interface
determines that the peripheral is ready for data transfer, it generates an interrupt. After receiving
the interrupt signal, the CPU stops the task which it is processing, services the I/O transfer, and then returns to its previous processing task.
Computer Architecture:
Input/Output Processor
An input-output processor (IOP) is a processor with direct memory access capability. In
this, the computer system is divided into a memory unit and a number of processors.
Each IOP controls and manages its input-output tasks. The IOP is similar to a CPU except
that it handles only the details of I/O processing. The IOP can fetch and execute its own
instructions. These IOP instructions are designed to manage I/O transfers only.
The communication between the IOP and the devices is similar to the program control
method of transfer. And the communication with the memory is similar to the direct
memory access method.
In large scale computers, each processor is independent of other processors and any
processor can initiate the operation.
The CPU acts as the master and the IOP acts as a slave processor. The CPU assigns the task of initiating operations, but it is the IOP, not the CPU, that executes the instructions. CPU instructions provide the operations to start an I/O transfer. The IOP asks for the attention of the CPU through an interrupt.
Instructions that are read from memory by an IOP are also called commands to
distinguish them from instructions that are read by CPU. Commands are prepared by
programmers and are stored in memory. Command words make the program for IOP.
CPU informs the IOP where to find the commands in memory.
Computer Architecture: Interrupts
Data transfer between the CPU and the peripherals is initiated by the CPU. But the CPU
cannot start the transfer unless the peripheral is ready to communicate with the CPU.
When a device is ready to communicate with the CPU, it generates an interrupt signal.
A number of input-output devices are attached to the computer and each device is able
to generate an interrupt request.
The main job of the interrupt system is to identify the source of the interrupt. There is
also a possibility that several devices will request simultaneously for CPU
communication. Then, the interrupt system has to decide which device is to be serviced
first.
Priority Interrupt
A priority interrupt is a system which decides the order in which various devices, generating interrupt signals at the same time, will be serviced by the CPU. The
system has authority to decide which conditions are allowed to interrupt the CPU, while
some other interrupt is being serviced. Generally, devices with high speed transfer such
as magnetic disks are given high priority and slow devices such as keyboards are given
low priority.
When two or more devices interrupt the computer simultaneously, the computer
services the device with the higher priority first.
Types of Interrupts:
Following are some different types of interrupts:
Hardware Interrupts
When the signal for the processor comes from an external device or hardware, the interrupt is known as a hardware interrupt.
Let us consider an example: when we press any key on our keyboard to do some
action, then this pressing of the key will generate an interrupt signal for the processor to
perform certain action. Such an interrupt can be of two types:
Maskable Interrupt
The hardware interrupts which can be delayed when a much higher priority interrupt has occurred at the same time.
Non-Maskable Interrupt
The hardware interrupts which cannot be delayed and should be processed by the processor immediately.
Software Interrupts
The interrupt that is caused by any internal system of the computer system is known as
a software interrupt. It can also be of two types:
Normal Interrupts
The interrupts that are caused by software instructions are called normal interrupts.
Exception
Unplanned interrupts which are produced during the execution of a program are called exceptions, such as division by zero.
Modes Of Transmission
Data can be transmitted between 2 points by three different modes:
Simplex
A simplex line carries information in one direction only. In this mode the receiver cannot communicate back to the sender, for example to indicate the occurrence of errors; only the sender can send data. For example: Radio and Television Broadcasting.
Half Duplex
In half duplex mode, system is capable of transmitting data in both directions but data can be
transmitted in one direction only at a time. A pair of wires is needed for this mode. For
example: Walkie - Talkie.
Full Duplex
In this mode data can be sent and received in both directions simultaneously. A four-wire link is used for this mode. For example: Video calling, audio calling etc.
Types of Protocols
There are two types of protocols:
1. Synchronous: the sender and receiver share a common clock signal.
2. Asynchronous: the sender and receiver do not share a common clock; control signals such as strobe or handshaking are used to coordinate the transfer.
Input/Output Channels
A channel is an independent hardware component that co-ordinates all I/O to a set of controllers. Computer systems that use I/O channels have special hardware components
that handle all I/O operations.
Channels use separate, independent and low cost processors for their functioning, which are called channel processors.
Channel processors are simple, but contain sufficient memory to handle all I/O tasks.
When I/O transfer is complete or an error is detected, the channel controller
communicates with the CPU using an interrupt, and informs CPU about the error or the
task completion.
Each channel supports one or more controllers or devices. Channel programs contain a list of commands to the channel itself and to the various connected controllers or devices.
Once the operating system has prepared a list of I/O commands, it executes a single
I/O machine instruction to initiate the channel program, the channel then assumes
control of the I/O operations until they are completed.
There are three types of channels:
Multiplexer
This channel can handle several I/O operations at the same time and is used to connect a number of slow and medium speed devices.
Selector
This channel can handle only one I/O operation at a time and is used to control one high speed device at a time.
Block-Multiplexer
It combines the features of both multiplexer and selector channels.
The CPU can communicate with the channels directly through control lines. The following diagram shows the word format of a channel operation.
The computer system may have number of channels and each is assigned an address.
Each channel may be connected to several devices and each device is assigned an
address.
What is Interleaved Memory?
It is a technique for compensating for the relatively slow speed of DRAM (Dynamic RAM). In this technique, the main memory is divided into memory banks which can be accessed individually, without any dependency on the others.
For example: If we have 4 memory banks (4-way interleaved memory), each containing 256 bytes, then a block-oriented scheme (no interleaving) assigns virtual addresses 0 to 255 to the first bank, 256 to 511 to the second bank, and so on. In interleaved memory, virtual address 0 is in the first bank, 1 in the second bank, 2 in the third, 3 in the fourth, and then 4 in the first memory bank again.
Hence, CPU can access alternate sections immediately without waiting for memory to
be cached. There are multiple memory banks which take turns for supply of data.
Memory interleaving is a technique for increasing memory speed. It is a process that
makes the system more efficient, fast and reliable.
For example: In the above example of 4 memory banks, data with virtual addresses 0, 1, 2 and 3 can be accessed simultaneously as they reside in separate memory banks, hence we do not have to wait for one data fetch to complete before beginning the next.
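The bank selection in the example above is simply the address taken modulo the number of banks. A small sketch, using the 4 banks and 256-byte bank size from the example:

    BANKS = 4                            # 4-way interleaved memory

    def bank_of(address, interleaved=True, bank_size=256):
        if interleaved:
            return address % BANKS       # consecutive addresses hit different banks
        return address // bank_size      # block-oriented: addresses 0-255 in bank 0, etc.

    for addr in (0, 1, 2, 3, 4):
        print(addr, "-> bank", bank_of(addr))
    # 0 -> bank 0, 1 -> bank 1, 2 -> bank 2, 3 -> bank 3, 4 -> bank 0 again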
An interleaved memory with n banks is said to be n-way interleaved. In an interleaved memory system with two banks of DRAM, the system logically appears to be one bank of memory that is twice as large.
In the interleaved bank representation below with 2 memory banks, the first long word of bank 0 is followed by the first long word of bank 1, which is followed by the second long word of bank 0, which is followed by the second long word of bank 1, and so on.
The following figure shows the organization of two physical banks of n long words. All
even long words of logical bank are located in physical bank 0 and all odd long words
are located in physical bank 1.
Types of Interleaving
There are two methods for interleaving a memory:
2-Way Interleaved
Two memory banks are accessed at the same time for writing and reading operations.
4-Way Interleaved
Four memory banks are accessed at the same time.
RISC and CISC Processors
In this tutorial, we will learn about RISC Processor and CISC Processor and difference between
them.
RISC Processor
It is known as Reduced Instruction Set Computer. It is a type of microprocessor that has a limited
number of instructions. They can execute their instructions very fast because instructions are
very small and simple.
RISC chips require fewer transistors which make them cheaper to design and produce. In RISC,
the instruction set contains simple and basic instructions from which more complex instruction
can be produced. Most instructions complete in one cycle, which allows the processor to handle several instructions at the same time.
In RISC, instructions are register based and data transfer takes place from register to register.
CISC Processor
It is known as Complex Instruction Set Computer. It has a large set of complex instructions with variable formats, and a single instruction may take several clock cycles to execute.
A comparison of CISC and RISC:
Instruction size and format: CISC has a large set of instructions with variable formats (16-64 bits per instruction), while RISC has a small set of instructions with a fixed format (32 bit).
CPU control: CISC is mostly microcoded using control memory (ROM), though modern CISC also uses hardwired control; RISC is mostly hardwired, without control memory.
A related comparison of the two control-unit styles: a hardwired control unit is implemented through flip-flops, gates, decoders etc. and is used with a fixed instruction format, whereas in a microprogrammed control unit, microinstructions generate the signals that control the execution of instructions and the instruction format can be variable (16-64 bits per instruction).
Truth table of a three-input AND gate (the output is 1 only when all three inputs are 1):
A B C OUTPUT
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 1
By De Morgan's theorem, (A'.B')' = A'' + B'' = A + B, which the following truth table verifies:
A B A' B' A'.B' A+B (A'.B')'
0 0 1  1  1     0    0
0 1 1  0  0     1    1
1 0 0  1  0     1    1
1 1 0  0  0     1    1
Similarly, (A.B)' = A' + B'.
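Both De Morgan identities can be verified exhaustively over the four input combinations; in the sketch below the one-bit complement is written as 1 - x.

    def NOT(x):
        return 1 - x

    for A in (0, 1):
        for B in (0, 1):
            assert NOT(A & B) == NOT(A) | NOT(B)   # (A.B)' = A' + B'
            assert NOT(A | B) == NOT(A) & NOT(B)   # (A+B)' = A'.B'
    print("De Morgan's theorems hold for all inputs")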
A. Logic Gates using only NAND Gates
Logic Gates using only NOR Gates