Computer Hardware and Peripherals
Digital Computers
A digital computer is a digital system that performs various computational tasks.
The first electronic digital computer was developed in the late 1940s and was used
primarily for numerical computations.
By convention, the digital computers use the binary number system, which has two
digits: 0 and 1. A binary digit is called a bit.
The software of the computer consists of the instructions and data that the computer
manipulates to perform various data-processing tasks.
o The Central Processing Unit (CPU) contains an arithmetic and logic unit for
manipulating data, a number of registers for storing data, and a control circuit
for fetching and executing instructions.
o The memory unit of a digital computer contains storage for instructions and
data.
o The Random Access Memory (RAM) provides fast, temporary storage for the data being processed.
o The Input-Output devices accept input from the user and display the final results to the user.
o The Input-Output devices connected to the computer include the keyboard,
mouse, terminals, magnetic disk drives, and other communication devices.
Hardware
Computer hardware consists of interconnected electronic devices that together control a computer's operation, input, and output. Examples of hardware are the CPU, keyboard, mouse, hard disk, etc.
Hardware Components
Computer hardware is a collection of several components working together. Some components are essential while others add extra capability. Computer hardware is made up of the CPU and its peripherals.
Software
A set of instructions that directs a computer to perform specific tasks is called a program. Software instructions are written in a programming language, translated into machine language, and executed by the computer. Software can be categorized into two types −
System software
Application software
System Software
System software operates directly on the computer's hardware devices. It provides a platform to run applications and supports user functionality. Examples of system software include operating systems such as Windows, Linux, Unix, etc.
Application Software
Application software is designed for the benefit of users, to perform one or more specific tasks.
Examples of application software include Microsoft Word, Excel, PowerPoint, Oracle, etc.
Access time in RAM is independent of the address; each storage location inside the memory is as easy to reach as any other and takes the same amount of time. Data in RAM can be accessed randomly, but RAM is expensive per byte compared with secondary storage.
RAM is volatile, i.e. data stored in it is lost when we switch off the computer or if there is a
power failure. Hence, a backup Uninterruptible Power System (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can
hold.
RAM is of two types −
Static RAM (SRAM)
Dynamic RAM (DRAM)
ROM stands for Read Only Memory, a memory that can only be read, not written. This type of memory is non-volatile. The information is stored permanently in such memories during manufacture. A ROM stores the instructions that are required to start a computer; this operation is referred to as bootstrap. ROM chips are used not only in computers but also in other electronic items like washing machines and microwave ovens.
Advantages of ROM
The advantages of ROM are as follows −
Non-volatile in nature
Cannot be accidentally changed
Cheaper than RAMs
Easy to test
More reliable than RAMs
Static and do not require refreshing
Contents are always known and can be verified
Magnetic Memories
In a computer system, several types of secondary storage devices like HDD, CD, DVD, etc.
are used to store permanent data and information. These devices can be categorized into two
types namely – magnetic memory and optical memory.
A magnetic memory like HDD consists of circular disks made up of non-magnetic materials
and coated with a thin layer of magnetic material in which data is stored. On the other hand,
optical disks are made up of plastic and consist of layers of photo-sensitive materials in
which the data is stored using optical effects. A major advantage of the magnetic disk and
optical disk is that they are inexpensive storage devices.
The following series of steps depicts the working of associative memory in computer architecture:
Data is stored at the very first empty location found in memory.
In associative memory when data is stored at a particular location then no address is stored
along with it.
When the stored data need to be searched then only the key (i.e. data or part of data) is
provided.
A sequential search is performed in the memory using the specified key to find out the
matching key from the memory.
If matching content is found, the corresponding word is marked and set up for the next read from the memory.
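The steps above can be sketched in a few lines of Python. This is only an illustration of content-addressable lookup, matching stored words against a key under a bit mask; the function and field layout are our own invention, not a hardware interface.

```python
# Sketch of an associative (content-addressable) lookup: each stored
# word is compared against a key under a bit mask that selects the
# key field. All names and values here are illustrative.

def cam_search(memory, key, mask):
    """Return indices of words whose masked bits equal the masked key."""
    matches = []
    for i, word in enumerate(memory):
        if (word & mask) == (key & mask):   # compare only the key field
            matches.append(i)
    return matches

memory = [0b1010_1100, 0b1010_0001, 0b0111_1100]
# Search on the upper 4 bits only (the "key" field).
hits = cam_search(memory, key=0b1010_0000, mask=0b1111_0000)
print(hits)  # -> [0, 1]
```

Real associative memory performs all comparisons in parallel in hardware; the loop here mirrors the step-by-step description above.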
A computer can address more memory than the amount physically installed on the system.
This extra memory is actually called virtual memory and it is a section of a hard disk that's
set up to emulate the computer's RAM.
The main visible advantage of this scheme is that programs can be larger than physical
memory. Virtual memory serves two purposes. First, it allows us to extend the use of physical
memory by using disk. Second, it allows us to have memory protection, because each virtual
address is translated to a physical address.
The following are situations in which the entire program does not need to be fully loaded into main memory:
User-written error handling routines are used only when an error occurs in the data or computation.
Certain options and features of a program may be used rarely.
Many tables are assigned a fixed amount of address space even though only a
small amount of the table is actually used.
The ability to execute a program that is only partially in memory would confer many benefits:
Fewer I/O operations would be needed to load or swap each user program into memory.
A program would no longer be constrained by the amount of physical memory
that is available.
Each user program could take less physical memory, so more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput.
In modern microprocessors intended for general-purpose use, a memory management unit, or MMU, is built into the hardware. The MMU's job is to translate virtual addresses into physical addresses.
Virtual memory is commonly implemented by demand paging. It can also be implemented in
a segmentation system. Demand segmentation can also be used to provide virtual memory.
Demand Paging
A demand paging system is quite similar to a paging system with swapping, where processes reside in secondary memory and pages are loaded only on demand, not in advance. When a context switch occurs, the operating system does not copy any of the old program's pages out to the disk or any of the new program's pages into main memory. Instead, it just begins executing the new program after loading the first page and fetches that program's pages as they are referenced.
While executing a program, if the program references a page which is not available in main memory because it was swapped out earlier, the processor treats this invalid memory reference as a page fault and transfers control from the program to the operating system, which demands the page back into memory.
Advantages
Following are the advantages of Demand Paging −
Large virtual memory.
More efficient use of memory.
There is no limit on the degree of multiprogramming.
Cache Memory
Cache memory is a small, high-speed memory that is faster than the
main memory (RAM). The CPU can access it more quickly than the primary
memory, so it is used to keep pace with the high-speed CPU and improve its performance.
Cache memory can only be accessed by CPU. It can be a reserved part of the main
memory or a storage device outside the CPU. It holds the data and programs which
are frequently used by the CPU. So, it makes sure that the data is instantly available
for CPU whenever the CPU needs this data. In other words, if the CPU finds the
required data or instructions in the cache memory, it doesn't need to access the
primary memory (RAM). Thus, by acting as a buffer between RAM and CPU, it speeds
up the system performance.
The operands of the instructions can be located either in the main memory or in the CPU
registers. If the operand is placed in the main memory, then the instruction provides the
location address in the operand field. Many methods are followed to specify the operand
address. The different methods/modes for specifying the operand address in the instructions
are known as addressing modes.
Immediate Mode − In this mode, the operand is specified in the instruction itself. In other
words, an immediate-mode instruction has an operand field instead of an address field. The
operand field includes the actual operand to be used in conjunction with the operation
determined in the instruction. Immediate-mode instructions are beneficial for initializing
registers to a constant value.
Register Mode − In this mode, the operands are in registers that reside within the CPU. The
specific register is selected from a register field in the instruction. A k-bit field can determine any one of 2^k registers.
Register Indirect Mode − In this mode, the instruction defines a register in the CPU whose
contents provide the address of the operand in memory. In other words, the selected register
includes the address of the operand rather than the operand itself.
A reference to the register is then equivalent to specifying a memory address. The advantage
of a register indirect mode instruction is that the address field of the instruction uses fewer
bits to select a register than would have been required to specify a memory address directly.
Autoincrement or Autodecrement Mode − This is similar to the register indirect
mode except that the register is incremented or decremented after (or before) its value is used
to access memory. When the address stored in the register defines a table of data in memory,
it is necessary to increment or decrement the register after every access to the table. This can
be obtained by using the increment or decrement instruction.
Direct Address Mode − In this mode, the effective address is equal to the address part of the
instruction. The operand resides in memory and its address is given directly by the address
field of the instruction. In a branch-type instruction, the address field specifies the actual
branch address.
Indirect Address Mode − In this mode, the address field of the instruction gives the address
where the effective address is stored in memory. Control fetches the instruction from
memory and uses its address part to access memory again to read the effective address.
Indexed Addressing Mode − In this mode, the content of an index register is added to the
address part of the instruction to obtain the effective address. The index register is a special
CPU register that contains an index value. The address field of the instruction defines the
beginning address of a data array in memory.
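The modes above differ only in how the effective operand is located, which a short simulation makes concrete. The register names, memory contents, and mode labels below are invented for the sketch; no real instruction encoding is implied.

```python
# Toy illustration of how several addressing modes locate an operand.
# Registers, memory contents, and the mode names are all invented.

memory = {100: 42, 200: 100, 300: 7}
registers = {"R1": 200, "R2": 55}

def operand(mode, field, index=0):
    if mode == "immediate":          # operand is the field itself
        return field
    if mode == "register":           # operand sits in a CPU register
        return registers[field]
    if mode == "register_indirect":  # register holds the operand's address
        return memory[registers[field]]
    if mode == "direct":             # field is the operand's address
        return memory[field]
    if mode == "indirect":           # field addresses the effective address
        return memory[memory[field]]
    if mode == "indexed":            # effective address = field + index reg
        return memory[field + index]
    raise ValueError(mode)

print(operand("immediate", 5))              # -> 5
print(operand("register_indirect", "R1"))   # -> 100 (memory[200])
print(operand("indirect", 200))             # -> 42 (memory[memory[200]])
```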
Introduction
In computer architecture, the control unit is responsible for directing the flow of data
and instructions within the CPU. There are two main approaches to implementing a
control unit: hardwired and micro-programmed.
A hardwired control unit is a control unit that uses a fixed set of logic gates and
circuits to execute instructions. The control signals for each instruction are hardwired
into the control unit, so the control unit has a dedicated circuit for each possible
instruction. Hardwired control units are simple and fast, but they can be inflexible and
difficult to modify.
On the other hand, a micro-programmed control unit is a control unit that uses a
microcode to execute instructions. The microcode is a set of instructions that can be
modified or updated, allowing for greater flexibility and ease of modification. The
control signals for each instruction are generated by a microprogram that is stored in
memory, rather than being hardwired into the control unit.
Micro-programmed control units are slower than hardwired control units because they
require an extra step of decoding the microcode to generate control signals, but they
are more flexible and easier to modify. They are commonly used in modern CPUs
because they allow for easier implementation of complex instruction sets and better
support for instruction set extensions.
To execute an instruction, the control unit of the CPU must generate the required control signals in the proper sequence. There are two approaches to generating control signals in the proper sequence: the hardwired control unit and the micro-programmed control unit.
Hardwired Control Unit: The control hardware can be viewed as a state machine
that changes from one state to another in every clock cycle, depending on the contents
of the instruction register, the condition codes, and the external inputs. The outputs of
the state machine are the control signals. The sequence of the operation carried out by
this machine is determined by the wiring of the logic elements and hence named
“hardwired”.
Fixed logic circuits that correspond directly to the Boolean expressions are
used to generate the control signals.
Hardwired control is faster than micro-programmed control.
A controller that uses this approach can operate at high speed.
RISC architecture is based on the hardwired control unit.
Micro-programmed Control Unit –
The control signals associated with operations are stored in special memory
units inaccessible by the programmer as Control Words.
Control signals are generated by a program that is similar to machine
language programs.
The micro-programmed control unit is slower in speed because of the time
it takes to fetch microinstructions from the control memory.
Some Important Terms
1. Control Word: A control word is a word whose individual bits represent
various control signals.
2. Micro-routine: A sequence of control words corresponding to the control
sequence of a machine instruction constitutes the micro-routine for that
instruction.
3. Micro-instruction: Individual control words in this micro-routine are
referred to as microinstructions.
4. Micro-program: A sequence of micro-instructions is called a micro-
program, which is stored in a ROM or RAM called a Control Memory
(CM).
5. Control Store: The micro-routines for all instructions in the instruction set of a computer are stored in a special memory called the Control Store.
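The terms above fit together neatly in a toy example: each bit of a control word asserts one control signal, and a micro-routine is a sequence of such words. The signal names and the fetch micro-routine below are invented for illustration, not any real control store.

```python
# Toy control word: each bit position stands for one control signal.
# Signal names and the micro-routine are illustrative assumptions.

SIGNALS = ["PC_out", "MAR_in", "MEM_read", "MDR_out", "IR_in", "PC_inc"]

def decode(control_word):
    """List the control signals asserted by one microinstruction."""
    return [name for i, name in enumerate(SIGNALS) if control_word >> i & 1]

# A micro-routine for instruction fetch: a sequence of control words.
fetch_routine = [
    0b100011,   # PC_out, MAR_in, PC_inc: send PC to memory, bump PC
    0b000100,   # MEM_read: read the instruction word from memory
    0b011000,   # MDR_out, IR_in: latch the instruction into IR
]
for cw in fetch_routine:
    print(decode(cw))
```

Stepping through such words one per clock is exactly what a micro-programmed control unit does; a hardwired unit produces the same signals from fixed logic instead.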
RISC and CISC are two different types of computer architectures that are used to design the
microprocessors that are found in computers. The fundamental difference between RISC and
CISC is that RISC (Reduced Instruction Set Computer) includes simple instructions and
takes one cycle, while the CISC (Complex Instruction Set Computer) includes complex
instructions and takes multiple cycles.
What is RISC?
In the RISC architecture, the instruction set of the computer system is simplified to reduce
the execution time. RISC architecture has a small set of instructions that generally includes
register-to-register operations.
The RISC architecture uses a comparatively simple instruction format that is easy to decode. The instruction length is fixed and aligned on word boundaries. RISC processors can execute one instruction per clock cycle.
The following are some important characteristics of a RISC Processor −
A RISC processor has a few instructions.
RISC processor has a few addressing modes.
In the RISC processor, all operations are performed within the registers of the
CPU.
RISC instructions are of fixed length.
RISC uses hardwired rather than micro-programmed control.
RISC supports single-cycle instruction execution.
RISC processor has easily decodable instruction format.
RISC architectures are characterized by a small, simple instruction set and a highly efficient
execution pipeline. This allows RISC processors to execute instructions quickly, but it also
means that they can only perform a limited number of tasks.
What is CISC?
The CISC architecture comprises a complex instruction set. A CISC processor has a variable-length instruction format. In this processor architecture, instructions that require only register operands may take just two bytes.
In a CISC processor architecture, the instructions which require two memory addresses can
take five bytes to comprise the complete instruction code. Therefore, in a CISC processor,
the execution of instructions may take a varying number of clock cycles. The CISC
processor also provides direct manipulation of operands that are stored in the memory.
The primary objective of the CISC processor architecture is to support a single machine
instruction for each statement that is written in a high-level programming language.
The following are the important characteristics of a CISC processor architecture −
CISC can have variable-length instruction formats.
It supports a large number of instructions, typically from 100 to 250.
It has a large variety of addressing modes, typically from 5 to 20 different
modes.
CISC has some instructions which perform specialized tasks and are used
infrequently.
CISC architectures have a large, complex instruction set and a less efficient execution
pipeline. This allows CISC processors to perform a wider range of tasks, but they are not as
fast as RISC processors when executing instructions.
Introduction
Have you ever visited an industrial plant and seen the assembly lines there? A product passes through the assembly line and, while passing, it is worked on at different stages simultaneously. For example, take a car manufacturing plant. At the first stage, the automobile chassis is prepared; in the next stage, workers add a body to the chassis; further on, the engine is installed; then painting work is done, and so on.
The group of workers, after working on the chassis of the first car, don't sit idle. They start working on the chassis of the next car, and the next group takes the first chassis and adds a body to it. The same thing is repeated at every stage: after finishing work on the current car body, each group takes the next car body, which is the output of the previous stage.
Here, though the first car is completed in several hours or days, due to the
assembly line arrangement it becomes possible to have a new car at the end of an
assembly line in every clock cycle.
Similarly, the concept of pipelining works. The output of the first pipeline
becomes the input for the next pipeline. It is like a set of data processing unit
connected in series to utilize processor up to its maximum.
Consider five instructions flowing through a five-stage pipeline. The first instruction completes in 5 clock cycles. After the completion of the first instruction, a new instruction completes its execution in every new clock cycle.
Observe that when the instruction fetch of the first instruction is completed, the instruction fetch of the second instruction starts in the next clock cycle. This way the hardware never sits idle; it is always busy performing some operation. But no two instructions can occupy the same stage in the same clock cycle.
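The timing just described reduces to a small formula: with a k-stage pipeline issuing one instruction per clock, n instructions take k + (n − 1) cycles. A sketch (the function name is ours):

```python
# With a k-stage pipeline and one instruction issued per clock, the
# first instruction takes k cycles to fill the pipe, and each of the
# remaining n - 1 instructions completes one cycle later.

def pipeline_cycles(n_instructions, n_stages):
    return n_stages + (n_instructions - 1)

print(pipeline_cycles(5, 5))    # the 5-instruction example above -> 9
print(pipeline_cycles(100, 5))  # -> 104, vs 500 cycles unpipelined
```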
Types of Pipelining
In 1977 Handler and Ramamoorthy classified pipeline processors depending on
their functionality.
1. Arithmetic Pipelining
The arithmetic logic units are segmented so that successive arithmetic operations, such as floating-point additions, overlap in execution.
2. Instruction Pipelining
Here, the instructions are pipelined and the execution of the current instruction is overlapped by the execution of the subsequent instruction. It is also called instruction lookahead.
3. Processor Pipelining
Here, the processors are pipelined to process the same data stream. The data
stream is processed by the first processor and the result is stored in the memory
block. The result in the memory block is accessed by the second processor. The
second processor reprocesses the result obtained by the first processor and the
passes the refined result to the third processor and so on.
A pipeline that performs the same fixed function every time is a unifunction pipeline. On the other hand, a pipeline that performs multiple functions, either at different times or at the same time, is a multifunction pipeline.
A static pipeline performs a fixed function each time; it is unifunctional and executes the same type of instruction continuously. Frequent changes in the type of instruction may degrade the performance of the pipeline.
Scalar pipelining processes the instructions with scalar operands. The vector
pipeline processes the instruction with vector operands.
Pipelining Hazards
Whenever a pipeline has to stall for any reason, it is called a pipeline hazard. Below we discuss four pipelining hazards.
1. Data Dependency
Suppose an Add instruction writes its result to register R2 and the following Sub instruction reads R2. The Sub instruction needs the value of register R2 at cycle t3, before Add has produced it, so Sub has to stall two clock cycles. If it doesn't stall, it will generate an incorrect result. This dependence of one instruction on another instruction for data is data dependency.
2. Memory Delay
When an instruction or its operand must be fetched from main memory, for example on a cache miss, the pipeline stalls until the memory access completes.
3. Branch Delay
Suppose the four instructions are pipelined I1, I2, I3, I4 in a sequence. The instruction
I1 is a branch instruction and its target instruction is Ik. Now, processing starts and
instruction I1 is fetched, decoded and the target address is computed at the 4th stage
in cycle t3.
But by then, the instructions I2, I3, I4 have already been fetched in cycles 1, 2, and 3, before the target branch address is computed. Since I1 is found to be a branch instruction, the instructions I2, I3, I4 have to be discarded because the instruction Ik has to be processed next after I1. So, this delay of three cycles (1, 2, 3) is the branch delay.
Prefetching the target branch address reduces the branch delay. For example, if the branch target is identified at the decode stage, the branch delay reduces to 1 clock cycle.
4. Resource Limitation
If two instructions request access to the same resource in the same clock cycle, one of the instructions has to stall and let the other use the resource. This stalling is due to resource limitation. However, it can be prevented by adding more hardware.
Advantages
1. Pipelining improves the throughput of the system.
2. In every clock cycle, a new instruction finishes its execution.
3. Allow multiple instructions to be executed concurrently.
Parallel computing is a form of computing in which jobs are broken into discrete parts that can be executed concurrently. Each part is further broken down into a series of instructions. Instructions from each part execute simultaneously on different CPUs.
Parallel systems deal with the simultaneous use of multiple computer resources that
can include a single computer with multiple processors, a number of computers
connected by a network to form a parallel processing cluster or a combination of both.
Parallel systems are more difficult to program than computers with a single processor
because the architecture of parallel computers varies accordingly and the processes of
multiple CPUs must be coordinated and synchronized. The crux of parallel processing is the CPU.
Flynn’s taxonomy is a classification scheme for computer architectures proposed by
Michael Flynn in 1966. The taxonomy is based on the number of instruction streams
and data streams that can be processed simultaneously by a computer architecture.
There are four categories in Flynn’s taxonomy:
1. Single Instruction Single Data (SISD): In an SISD architecture, there is a
single processor that executes a single instruction stream and operates on a
single data stream. This is the simplest type of computer architecture and is
used in most traditional computers.
2. Single Instruction Multiple Data (SIMD): In a SIMD architecture, there is a
single processor that executes the same instruction on multiple data streams
in parallel. This type of architecture is used in applications such as image
and signal processing.
3. Multiple Instruction Single Data (MISD): In a MISD architecture, multiple
processors execute different instructions on the same data stream. This type
of architecture is not commonly used in practice, as it is difficult to find
applications that can be decomposed into independent instruction streams.
4. Multiple Instruction Multiple Data (MIMD): In a MIMD architecture,
multiple processors execute different instructions on different data streams.
This type of architecture is used in distributed computing, parallel
processing, and other high-performance computing applications.
Flynn’s taxonomy is a useful tool for understanding different types of computer
architectures and their strengths and weaknesses. The taxonomy highlights the
importance of parallelism in modern computing, and shows how different types of
parallelism can be exploited to improve performance.
Based on the number of instruction and data streams that can be processed simultaneously, computing systems are classified into four major categories:
Flynn’s classification –
1. Single-instruction, single-data (SISD) systems – An SISD computing
system is a uniprocessor machine which is capable of executing a single
instruction, operating on a single data stream. In SISD, machine instructions
are processed in a sequential manner and computers adopting this model are
popularly called sequential computers. Most conventional computers have
SISD architecture. All the instructions and data to be processed have to be stored in primary memory.
2. Single-instruction, multiple-data (SIMD) systems – An SIMD computing system is a multiprocessor machine capable of executing the same instruction on all the CPUs but operating on different data streams. A dominant representative of SIMD systems is Cray's vector processing machine.
3. Multiple-instruction, single-data (MISD) systems – An MISD computing system is a multiprocessor machine capable of executing different instructions on different PEs, with all of them operating on the same dataset. Example: Z = sin(x) + cos(x) + tan(x), where the system performs different operations on the same data set. Machines built using the MISD model are not useful in most applications; a few machines have been built, but none of them are available commercially.
4. Multiple-instruction, multiple-data (MIMD) systems – An MIMD system is a multiprocessor machine which is capable of executing multiple instructions on multiple data sets. Each PE in the MIMD model has separate instruction and data streams; therefore machines built using this model are capable of handling any kind of application. Unlike SIMD and MISD machines, PEs in MIMD machines work asynchronously.
1. SISD architecture: This is the simplest and most common type of computer
architecture. It is easy to program and debug, and can handle a wide range
of applications. However, it does not offer significant performance gains
over traditional computing systems.
2. SIMD architecture: This type of architecture is highly parallel and can offer
significant performance gains for applications that can be parallelized.
However, it requires specialized hardware and software, and is not well-
suited for applications that cannot be parallelized.
3. MISD architecture: This type of architecture is not commonly used in
practice, as it is difficult to find applications that can be decomposed into
independent instruction streams.
4. MIMD architecture: This type of architecture is highly parallel and can
offer significant performance gains for applications that can be parallelized.
It is well-suited for distributed computing, parallel processing, and other
high-performance computing applications. However, it requires specialized
hardware and software, and can be difficult to program and debug.
Overall, the advantages and disadvantages of different types of computer architectures
depend on the specific application and the level of parallelism that can be exploited.
Flynn’s taxonomy is a useful tool for understanding the different types of computer
architectures and their potential uses, but ultimately the choice of architecture depends
on the specific needs of the application.
Advantages of Scalar Processors
Low Cost: Scalar processors are typically much cheaper than vector processors, making them more accessible to many people.
Low Power Consumption: Scalar processors are much more efficient than
vector processors, reducing the amount of power needed to operate them.
Easier to Program: Scalar processors are simpler to program than vector
processors, making them easier to use for novice programmers.
Flexible: Scalar processors are more flexible than vector processors,
allowing them to be used in a variety of applications.
High Clock Speed: Scalar processors can process individual instructions at a high rate, increasing the speed of computations.
Good for Single-Threaded Tasks: Scalar processors are better suited for
single-threaded tasks, as they can process one operation at a time.
What is Multiprocessor?
A multiprocessor is a data processing system that can execute more than one program or more than one arithmetic operation simultaneously. It is also known as a multiprocessing system. A multiprocessor uses more than one processor and is similar to multiprogramming, which allows multiple threads to be used for a single procedure.
The term ‘multiprocessor’ can also be used to describe several separate computers running
together. It is also referred to as clustering. A system is called a multiprocessor system only
if it includes two or more elements that can implement instructions independently.
A multiprocessor system employs a distributed approach. In the distributed approach, a
single processor does not perform a complete task. Instead, more than one processor is used
to do the subtasks.
Advantages of Multiprocessor
The advantages of a multiprocessor are as follows −
Enhanced performance, since subtasks run on several processors in parallel.
Increased reliability, since the failure of one processor does not halt the whole system.
Increased throughput from sharing the workload across processors.
Distributed System
Sharing resources such as hardware, software, and data is one of the principles of distributed computing. With different levels of openness in the software and concurrency, it's easier to process data simultaneously through multiple processors. The more fault-tolerant an application is, the more quickly it can recover from a system failure.
Organizations have turned to distributed computing systems to handle data
generation explosion and increased application performance needs. These
distributed systems help businesses scale as data volume grows. This is
especially true because the process of adding hardware to a distributed
system is simpler than upgrading and replacing an entire centralized
system made up of powerful servers.
Distributed systems consist of many nodes that work together toward a
single goal. These systems function in two general ways, and both of them
have the potential to make a huge difference in an organization.
System architecture
System-level architecture focuses on the entire system and the placement
of components of a distributed system across multiple machines. The
client-server architecture and peer-to-peer architecture are the two major
system-level architectures that hold significance today. An example would be an ecommerce system that contains a service layer, a database, and a web front end.
i) Client-server architecture
In the client-server model, client nodes send requests to one or more central servers, which process the requests and return the results.
Parallel Transmission:
In parallel transmission, multiple bits flow together simultaneously from one computer to another. Parallel transmission is faster than serial transmission and is used over short distances.
A carry look-ahead adder has just two levels of gate delay from any input to any output, so the carry delay is independent of the number of bits in the operands. The cost is additional look-ahead logic whose complexity grows with the operand width.
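The look-ahead idea can be shown in software: compute generate g_i = a_i·b_i and propagate p_i = a_i⊕b_i, then obtain each carry from the recurrence c_{i+1} = g_i + p_i·c_i, which hardware flattens into two gate levels. The function below is our own sketch; the loop evaluates the recurrence sequentially, unlike the parallel hardware.

```python
# Carry look-ahead logic in miniature: generate g = a & b, propagate
# p = a ^ b, carries from c_{i+1} = g_i | (p_i & c_i), sum s_i = p_i ^ c_i.
# Operands are little-endian bit lists of arbitrary width.

def cla_add(a_bits, b_bits, c0=0):
    """Add two little-endian bit lists using look-ahead carries."""
    g = [a & b for a, b in zip(a_bits, b_bits)]   # carry generate
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # carry propagate
    carries = [c0]
    for i in range(len(a_bits)):
        carries.append(g[i] | (p[i] & carries[i]))
    s = [p[i] ^ carries[i] for i in range(len(a_bits))]
    return s, carries[-1]

# 6 + 7 with 4-bit operands (LSB first): 0110 + 0111 = 1101, carry 0
s, cout = cla_add([0, 1, 1, 0], [1, 1, 1, 0])
print(s, cout)  # -> [1, 0, 1, 1] 0
```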
The Booth algorithm is a multiplication algorithm that multiplies two signed binary integers in 2's complement representation. It is used to speed up the multiplication process and is very efficient.
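A software sketch of Booth's algorithm follows. It keeps the product register as one integer (multiplicand field, multiplier field, and the extra Booth bit), examines the last two bits each step, adds or subtracts the multiplicand accordingly, and arithmetic-right-shifts. The function name and register layout are our own; hardware would use separate registers.

```python
# Booth's multiplication on `bits`-bit two's complement operands.
# The register P holds [product-high | multiplier | Booth bit].

def booth_multiply(m, r, bits=8):
    """Multiply signed integers m and r via Booth's algorithm."""
    mask = (1 << bits) - 1
    width = 2 * bits + 1                  # total register width
    A = (m & mask) << (bits + 1)          # multiplicand, left-aligned
    S = ((-m) & mask) << (bits + 1)       # negated multiplicand
    P = (r & mask) << 1                   # multiplier plus extra 0 bit
    for _ in range(bits):
        last_two = P & 0b11
        if last_two == 0b01:              # 01: add the multiplicand
            P = (P + A) & ((1 << width) - 1)
        elif last_two == 0b10:            # 10: subtract the multiplicand
            P = (P + S) & ((1 << width) - 1)
        sign = P >> (width - 1)           # arithmetic right shift
        P = (P >> 1) | (sign << (width - 1))
    P >>= 1                               # drop the extra Booth bit
    if P >= 1 << (2 * bits - 1):          # reinterpret as signed
        P -= 1 << (2 * bits)
    return P

print(booth_multiply(-53, 6))   # -> -318
```

Runs of identical multiplier bits (00 or 11) trigger no addition at all, which is where the speedup over naive shift-and-add comes from.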
Division algorithms: Restoring and Non-Restoring.
Example − Suppose a number uses the 32-bit format: 1 sign bit, 8 bits for the biased exponent, and 23 bits for the fractional part. The leading 1 bit is not stored (as it is always 1 for a normalized number) and is referred to as a "hidden bit".
Then −53.5 is normalized as −53.5 = (−110101.1)_2 = (−1.101011)_2 × 2^5, which is represented with sign bit 1, biased exponent 5 + 127 = 132, and fraction bits 10101100...0.
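Assuming the 32-bit format described is IEEE 754 single precision (1 sign bit, 8-bit exponent biased by 127, 23 fraction bits with the hidden leading 1), the fields for −53.5 can be checked with Python's struct module:

```python
import struct

# Unpack the IEEE 754 single-precision bit pattern of -53.5 and split
# it into the sign, biased-exponent, and fraction fields.

bits = struct.unpack(">I", struct.pack(">f", -53.5))[0]
sign = bits >> 31               # 1: negative
exponent = (bits >> 23) & 0xFF  # 5 + 127 = 132
fraction = bits & 0x7FFFFF      # 101011 then zeros (hidden 1 dropped)
print(sign, exponent, bin(fraction))
# -> 1 132 0b10101100000000000000000
```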