Computer Organization and Architecture Notes

The document provides an overview of computer organization and architecture, detailing the principles, components, and types of computer systems. It distinguishes between computer architecture, which focuses on high-level design and functionality, and computer organization, which addresses the physical implementation of these designs. Key components such as the CPU, memory, input/output devices, and their interconnections are discussed, alongside the significance of the Von Neumann architecture.

UNDERSTAND COMPUTER ORGANISATION AND ARCHITECTURE

Element 1: Understand principles of computer organization and design

Introduction

This element helps us to explore the fundamental principles that govern how computers are

structured and function. You'll learn about the different components that make up a computer

system, how they interact with each other, and how they work together to process information.

Computers are incredible machines that have revolutionized the way we live, work, and

communicate. But have you ever stopped to think about how they actually work? What makes

them tick? How do they manage to perform complex tasks like calculations, data processing, and

communication? The answers lie in the principles of computer organization and design. These

principles are the foundation upon which all modern computers are built, and understanding them

is essential for designing, building, and using computers effectively.

Definition of terms

Computer: An electronic device that processes data according to a set of instructions known as

programs.

In the context of computers, organization refers to the way in which the various components of a

computer system are structured and interconnected to perform specific tasks. It encompasses the

design and arrangement of the hardware and software components, including the central processing

unit (CPU), memory, input/output devices, and storage devices, as well as the relationships

between them.

Key aspects of computer organization

Hardware organization: The physical layout and interconnection of hardware components, such as

buses, sockets, and slots.


Software organization: The structure and organization of software components, including

operating systems, application programs, and device drivers.

Data organization: The way in which data is stored, processed, and transmitted within the system.

Control organization: The flow of control between different parts of the system, including

sequencing, branching, and looping.

Types of computer organization

Von Neumann architecture: A classical organization model that separates the CPU from memory

and input/output devices.

Harvard architecture: An alternative to the von Neumann model that separates memory into two

types: one for instructions and one for data.

Pipelining: A technique used to improve instruction-level parallelism by breaking

instruction processing into stages so that several instructions can be in flight at once.
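To make the cycle-count benefit concrete, here is a toy pipelining model in Python (a hedged sketch; the stage count and function names are illustrative, not a real processor model). Without a pipeline, each instruction occupies the processor for all of its stages; with one, a new instruction can complete every cycle once the pipeline is full.

```python
# Toy model of pipelining: 3 stages (fetch, decode, execute).
# Illustrative only -- real pipelines also face hazards and stalls.

def unpipelined_cycles(n_instructions, n_stages=3):
    # Each instruction runs start to finish before the next one begins.
    return n_instructions * n_stages

def pipelined_cycles(n_instructions, n_stages=3):
    # The first instruction takes n_stages cycles to fill the pipeline;
    # after that, one instruction completes per cycle.
    return n_stages + (n_instructions - 1)

print(unpipelined_cycles(10))  # 30 cycles without pipelining
print(pipelined_cycles(10))    # 12 cycles with a 3-stage pipeline
```

The gap widens as the instruction stream grows, which is why pipelining is central to modern CPU design.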

Computer architecture refers to the conceptual design and fundamental operational structure of a

computer system. It encompasses the specifications that dictate how hardware and software

components interact to form a cohesive computing environment.

Key aspects of computer architecture

Instruction Set Architecture (ISA): The set of instructions that a CPU can execute, including the

format of these instructions and the operations they perform.

Memory Hierarchy: The organization and structure of memory within the system, including the

levels of cache memory, main memory, and secondary storage.

Input/Output (I/O) Organization: The way in which data is transferred between the CPU and

external devices such as keyboards, displays, and storage devices.


Bus Organization: The way in which different components communicate with each other using

buses or other interconnects.

Control Unit Organization: The organization of the control signals that manage the flow of data

between different parts of the system.

Types of computer architectures

Von Neumann Architecture: A classical architecture that separates the CPU from memory and I/O

devices.

Harvard Architecture: An alternative to the von Neumann architecture that separates memory into two

types: one for instructions and one for data.

RISC (Reduced Instruction Set Computing) Architecture: A design that focuses on simplicity and

efficiency by reducing the number of instructions in the instruction set.

CISC (Complex Instruction Set Computing) Architecture: A design that focuses on increasing

performance by using complex instructions that perform multiple operations in a single

instruction, even though such an instruction may take several clock cycles.

Computer Architecture vs. Organization: While computer architecture deals with the high-level

design and functional behavior of a computer system, computer organization focuses on the low-

level implementation details, such as how the hardware is arranged and how it operates to execute

instructions efficiently.

Computer Organization is the realization of what is specified by the computer architecture. It deals

with how operational attributes are linked together to meet the requirements specified by computer

architecture. Some organizational attributes are hardware details, control signals, peripherals.
Computer Architecture deals with the operational attributes of the computer, or the processor to be

specific. It deals with details like physical memory, the ISA of the processor, the number of bits used to

represent data types, the input/output mechanism, and techniques for addressing memory.

Difference between computer organization and architecture

Computer Architecture: Refers to the overall design, structure, and functional behavior of a
computer system. It defines how hardware components interact and what is needed to execute
software.
Computer Organization: Refers to the physical implementation of the system's architecture. It
deals with the operational units and their interconnections.

Computer Architecture: Focuses on the logical design, including instruction sets, data types,
addressing modes, and the overall operational principle of the system.
Computer Organization: Focuses on how the components of the computer are interconnected and
how they operate physically. It looks at the actual implementation of the design.

Computer Architecture: Operates at a higher level of abstraction than organization. It outlines
what the system does and how it works from a user perspective, such as the programming model
and instruction set.
Computer Organization: Operates at a lower level of abstraction than architecture. This includes
details about the hardware implementation and the specific physical layout of the components.

Computer Architecture: Components include the CPU (Central Processing Unit) design, memory
hierarchy (cache, RAM), I/O systems, and how different parts of the system communicate (e.g.,
buses, interconnects).
Computer Organization: Components include the actual hardware circuits, memory chips, buses,
control signals, and the arrangement of the internal registers.

Computer Architecture: Examples include instruction set architectures (ISA) like x86, ARM, and
MIPS, as well as design paradigms like RISC (Reduced Instruction Set Computing) and CISC
(Complex Instruction Set Computing).
Computer Organization: Examples include the physical layout of a CPU, how many cache levels
are implemented, how memory is organized (e.g., RAM modules, interfaces), and the specific
logic circuits used.

Components of computer system

A computer system typically consists of several components that work together to enable it to

perform tasks. Computer systems consist of two fundamental components: hardware and software.

Hardware refers to the physical, tangible parts of a computer system that can be seen and touched.
Examples include storage devices, input/output devices, etc. Software refers to the set of

instructions and programs that tell the hardware how to perform tasks. It is intangible and cannot

be physically touched. Examples include Operating systems, Programs designed for end-users,

such as word processors (Microsoft Word), web browsers (Google Chrome), and games.

The main components of a computer system are:

1. Hardware Components:

 Input Devices: Allow users to interact with the computer, such as:

 Keyboard

 Mouse

 Scanner

 Webcam

 Output Devices: Display information to the user, such as:

 Monitor

 Printer

 Speaker

 Central Processing Unit (CPU): Executes instructions and performs calculations, such as:

Microprocessor (e.g., Intel Core i5, AMD Ryzen 5)

 Memory (RAM): Temporary storage for data and programs, such as: Random Access

Memory (RAM)

 Storage Devices: Permanent storage for data and programs, such as: Hard Disk Drive

(HDD), Solid-State Drive (SSD)

 Power Supply: Converts Alternating Current (AC) power from the wall outlet to Direct

Current (DC) power for the computer's components.

2. Software Components:
 Operating System (OS): Manages computer resources, provides a platform for running

applications, and interacts with hardware components, such as:

 Windows

 macOS

 Linux

 Firmware: Permanent software stored in ROM (Read-Only Memory) that provides basic

instructions for the computer's hardware components.

3. System Components:

 Motherboard: Connects all hardware components together and provides a platform for them

to function.

 Bus: A communication pathway that allows data to be transferred between devices.

4. Network Components: Network Interface Card (NIC): Allows the computer to connect to a

network, such as a Local Area Network (LAN) or Wide Area Network (WAN).

5. Other Components:

 Cooling System: Helps to keep the computer's components at a safe temperature, such as

fans or liquid cooling systems.

 Case: The outer casing of the computer that holds all the components together.

6. Peripherals:

 Graphics Card: Enhances the computer's graphics capabilities, such as NVIDIA GeForce or

AMD Radeon.

 Sound Card: Enhances the computer's audio capabilities, such as Realtek or Creative Sound

Blaster.

7. Other Peripherals:

 Printer: Allows the computer to print documents and images.

 Scanner: Allows the computer to scan documents and images.

8. Accessory Devices:
 Headphones: Allow users to listen to audio output from the computer.

Structure of a computer system

While describing the structure of a computer we shall use the Von Neumann Architecture and

organization.

Structure of Von Neumann Architecture

The Von Neumann architecture consists of five primary components;

1. Central Processing Unit (CPU): is the primary component of a computer that performs

most of the data processing. It is often referred to as the "brain" of the computer. The CPU

is responsible for executing most instructions that the computer receives. It is made up of;

 Arithmetic Logic Unit (ALU): Performs all arithmetic and logical operations.

 Control Unit (CU): Directs the operation of the processor and manages the execution of

instructions.

 Registers: Small, fast storage locations within the CPU that hold data and instructions

temporarily.

2. Memory: Stores both program instructions and data. This unified memory structure allows

the CPU to fetch instructions and data from the same memory space, facilitating the

execution of programs.
3. Input Devices: Devices such as keyboards, mice, and scanners that allow users to input

data into the computer system.

4. Output Devices: Devices such as monitors and printers that present processed data to the

user.

5. System Bus: A communication pathway that connects the CPU, memory, and input/output

devices. It consists of three types of buses:

 Data Bus: Transfers actual data.

 Address Bus: Carries addresses from the CPU to other components, indicating where data

is to be read from or written to.

 Control Bus: Transmits control signals from the CPU to other components to coordinate

operations.
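As a rough sketch of how the three buses cooperate (illustrative Python, not real hardware; the names MEMORY and bus_cycle are made up): the address bus selects a location, the control bus indicates read or write, and the data bus carries the value.

```python
# Toy bus transaction model. MEMORY stands in for main memory;
# the function arguments play the roles of the three buses.

MEMORY = [0] * 16  # a tiny 16-word memory

def bus_cycle(address, control, data=None):
    """One transaction: 'address' is the address bus, 'control' the
    control bus ('READ'/'WRITE'), and 'data'/the return value the data bus."""
    if control == "WRITE":
        MEMORY[address] = data   # data bus carries the value into memory
        return None
    if control == "READ":
        return MEMORY[address]   # data bus carries the value back to the CPU
    raise ValueError("unknown control signal")

bus_cycle(3, "WRITE", 42)       # CPU writes 42 to address 3
value = bus_cycle(3, "READ")    # CPU reads it back
print(value)                    # 42
```

Because all components share these buses, only one transfer can use them at a time; this is the root of the "Von Neumann bottleneck" mentioned below.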

Key Features

 Unified Memory Structure: Both instructions and data are stored in the same memory,

simplifying the architecture.

 Sequential Instruction Processing: Instructions are executed one after another in a linear

sequence, which can lead to performance limitations known as the "Von Neumann

bottleneck."

 Shared System Bus: The components communicate through a single bus, which can restrict

simultaneous data and instruction access, affecting performance.

 Modularity: The architecture can be scaled to accommodate a wide range of computing

systems, from simple devices to complex supercomputers.

Function of computer components

Motherboard
As the name suggests, the motherboard is the central hub of the computer: every other

component either originates from it or connects to it. In a way,

it is a lot like what you would refer to your home country as: the motherland. It is basically a

circuit board of a decent size, depending on the size of the computer we're dealing with. It

facilitates the communication of the other components in the computer. There are ports on the

motherboard that face the outside of the computer, allowing you to plug in different components

into your computer and also to charge it. Most motherboards also allow you to scale up by

including slots that allow for expansion. You could add in components like CPUs and RAMs,

Video cards, and so on. You can also expand the motherboard by adding more ports that allow you

to connect even more auxiliary devices to your computer. In other words, you have control over

just what your computer’s capabilities are. Apart from this, the motherboard plays other roles like

storing some simple information when the computer is off, such as the system time. That’s why

your computer always tells you the correct time, even when you turn it on after a long time.

Functions of major components of the motherboard


1. CPU Socket: This is where the Central Processing Unit (CPU) is plugged in. The CPU

socket provides electrical and mechanical connections for the CPU, allowing it to connect

to the rest of the motherboard. It provides a secure and efficient connection for the CPU.

2. Basic Input/Output System (BIOS): BIOS is firmware that initializes hardware during the

booting process and provides runtime services for operating systems and programs. It is

essential for the system to start up and recognize hardware components.

3. CMOS Battery: The CMOS battery powers the CMOS chip, which stores BIOS settings

and system time. This allows the motherboard to retain configuration settings even when

the computer is powered off.

4. Northbridge and Southbridge: These are two chips that manage communication between

the CPU and other components;

 Northbridge: Connects directly to the CPU and handles high-speed data transfers,

particularly with RAM and graphics cards.

 Southbridge: Manages lower-speed peripherals such as USB ports, audio, and the system

BIOS, facilitating communication between the CPU and these devices.


5. Heat Sink and Cooling System: Heat sinks and fans are used to dissipate heat generated by

the CPU and other components. Proper cooling is essential to maintain optimal

performance and prevent overheating.

6. Memory Slots: These are where the RAM (Random Access Memory) modules are inserted.

The memory slots provide electrical connections for the RAM modules, allowing them to

communicate with the CPU. It allows the system to access and use the RAM modules.

7. Expansion Slots: These are used to add expansion cards, such as graphics cards, sound

cards, and network cards, to the system. The expansion slots provide electrical connections

for the expansion cards. It allows users to upgrade or add new functionality to the system

by adding expansion cards.

8. Chipset: The chipset is a group of chips that manage data transfer between different

components on the motherboard. It manages data transfer between the CPU, memory, and

storage devices.

9. BIOS Chip: This chip contains the Basic Input/Output System (BIOS) firmware that

controls the boot process, hardware settings, and other low-level system functions. It

initializes and configures the system during boot-up and provides basic input/output

operations.

10. Power Connectors: These connectors provide power to various components on the

motherboard, such as fans, hard drives, and peripherals. It provides power to components

on the motherboard.

11. USB Ports: These connectors provide a connection for USB devices such as keyboards,

mice, and flash drives. It provides a connection for USB devices.

12. SATA Ports: These connectors provide a connection for storage devices such as hard drives

and solid-state drives. It provides a connection for storage devices.


13. PCIe (Peripheral Component Interconnect) Slots: These slots provide a connection for

expansion cards that require faster data transfer speeds than traditional PCI slots. It

provides a connection for high-speed expansion cards.

14. Audio Connectors: These connectors provide audio output from the system to speakers or

headphones. It provides audio output from the system.

15. Networking Connectors: These connectors provide a connection for networking devices

such as Ethernet cables or Wi-Fi adapters. It provides a connection for networking devices.

Motherboard form factor

A motherboard form factor is the physical design and layout of a motherboard, which refers to the

overall shape, size, and design of the board. The form factor determines how the components are

arranged, connected, and integrated on the board, which in turn affects the functionality,

performance, and ease of use of the system. The form factor ensures parts are interchangeable

across different vendors and technology generations in the IBM PC compatible industry, and ensures that

server modules fit into existing rack mount systems in enterprise computing. It also dictates the

overall size of the computer case.

The most significant form factor in modern PCs is the ATX (Advanced Technology Extended),

introduced in 1995, which has become a standard for most desktop computers. Other form factors,

such as Mini-ITX and Micro-ATX, cater to smaller systems and specific use cases, like compact or

budget builds.

Power supply

As the name suggests, it powers all other components of the machine. It usually plugs into the

motherboard to power the other parts. The power supply connects to either an internal battery (on a

laptop) or a plug for an outlet (on a desktop). They have different input voltage depending on the

machine/computer specifications.
Central Processing Unit (CPU)

Sometimes it's referred to as the computer's brain. It is the workhorse of the machine. It performs

the calculations needed by a system, and can vary in speed. The work that a CPU does generates

heat, which is why your computer has a fan inside. A more powerful CPU is necessary for intense

computer work like editing high-definition video or programming complex software.

Random Access Memory (RAM)

RAM is a temporary memory. When you open an application on your computer, the computer will

place that application and all its data in the RAM. When you close the application, then space in

the RAM is freed. That is why your computer gets so slow when you have too many applications

open; your RAM is probably being used at capacity. Since RAM is only temporary, it has a volatile

nature. The minute you turn your computer off, all of the memory that is stored in RAM is lost.

That's why you're advised to keep saving the work you do in applications as you go along to

avoid losing all of it in case your computer suddenly goes off. The more RAM you have, the

greater the number of programs that you can run simultaneously.


Hard disk /Solid State Disk

Remember that we said RAM is volatile due to its temporary nature, which means the computer

still needs a more permanent form of data storage. That’s why the hard drive or solid state drive

exists. Traditionally, the hard drive is a drum with several platters piled on it and spinning, and the

physical arm then writes data onto these platters. These disks are very slow because of the

mechanics through which data is stored; the newer alternative, the solid state drive, is much

faster.

Solid state drives have the same kind of memory as the one on your phone or flash drive, also

known as flash memory. They cost more but are also faster and more efficient than traditional hard

drives.

The data stored in the hard drive does not disappear when you switch your computer off. It will be

there when you switch the computer back on. You are, however, advised to keep it far away from

magnets as they could damage it and cause you to lose your information.

Video Card
The video card is a dedicated unit for handling image output to be displayed by the monitor. They

come with their own RAM, dedicated to this singular purpose. If you do highly visual

work at very high resolutions, then you should get yourself a video card to take the load off your

RAM. Sometimes, the computer may have integrated graphics, where some of the RAM is

borrowed for graphics processing. This happens frequently on laptops, because there is a need to

save space. Using integrated graphics is much less expensive than using a graphics card, but is not

sufficient for intense graphics functions.

Optical devices

These have become a lot less common today, with many machines doing away with them

altogether. An optical drive is used to read CDs and DVDs, which can be used to listen to music or

watch movies. They can also be used to install software, play games, or write new information into

a disc.

Practical activity

Task: Determine the specification of a computer

To check the motherboard form factor using system information details, you can follow these

steps:
For Windows users:

&#61623; Press the Windows key + R to open the Run dialog box.

&#61623; Type msinfo32 and press Enter to open the System Information window.

&#61623; Select "System Summary" in the left-hand pane.

&#61623; Look for the "BaseBoard Manufacturer" and "BaseBoard Product" entries, which identify the

motherboard model; look that model up in the manufacturer's documentation to confirm its form

factor, such as ATX, Mini-ITX, or Micro-ATX.

For macOS users:

&#61623; Click the Apple menu and select "About This Mac".

&#61623; Click the "System Report" button to open System Information.

&#61623; Select "Hardware" at the top of the left-hand menu to see the hardware overview.

&#61623; Note the "Model Identifier"; Apple does not report a motherboard form factor directly, so use

the model identifier to look up the machine's specifications.

For Linux users:

&#61623; Open a terminal window.

&#61623; Type sudo dmidecode -t baseboard and press Enter.

&#61623; The "Base Board Information" output lists the manufacturer and product name of your

motherboard; look up that model to confirm its form factor
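As a sketch, baseboard details printed by a tool such as dmidecode can also be pulled out programmatically. The sample output and board name below are made up for illustration; on a real system you would capture the text with subprocess (run as root) instead of hardcoding it.

```python
# Parse a "Product Name" style field out of dmidecode-like output.
# sample_output is hardcoded, illustrative text; a real script would run
# e.g. subprocess.run(["dmidecode", "-t", "baseboard"], ...) with root rights.

sample_output = """\
Base Board Information
    Manufacturer: ExampleVendor
    Product Name: EX-B550M
    Version: Rev 1.x
"""

def field(text, name):
    """Return the value after 'name:' on any line, or None if absent."""
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(name + ":"):
            return line.split(":", 1)[1].strip()
    return None

print(field(sample_output, "Product Name"))   # EX-B550M
print(field(sample_output, "Manufacturer"))   # ExampleVendor
```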


Element 2: Understand Central Processing Unit functions

Introduction

This learning outcome covers explaining the Central Processing Unit design, CPU architecture,

role of registers, instruction representation and execution, prescribing CPU specifications and

verifying CPU specifications for a given computer.

Definition of key terms

Central Processing Unit (CPU)

The Central Processing Unit (CPU) is the electronic circuitry within a computer that carries out the

instructions of a computer program by performing the basic arithmetic, logical, control and

input/output (I/O) operations specified by the instructions. It is the primary component of a

computer system responsible for executing instructions and performing calculations. It serves as

the "brain" of the computer and is responsible for interpreting and carrying out instructions from

computer programs. The CPU typically consists of several key components, including the

Arithmetic Logic Unit (ALU), which performs arithmetic and logical operations, and the Control

Unit, which manages the execution of instructions and the flow of data within the CPU and

between other components of the computer system. The CPU also includes registers, which are

small, high-speed storage locations used to store data temporarily during processing. The CPU

fetches instructions from memory, decodes them to determine the operation to be performed,

fetches the required data from memory or registers, executes the operation, and then stores the

result back in memory or a register. This process is carried out repeatedly to execute the

instructions of a computer program.
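The fetch-decode-execute cycle described above can be sketched as a tiny simulation. The three "instructions" (LOAD, ADD, HALT) and the machine layout are invented for illustration; they are not any real processor's ISA.

```python
# Minimal fetch-decode-execute simulation: instructions live in one
# memory (von Neumann style); the program counter (pc) tracks progress.

memory = [
    ("LOAD", 0, 5),     # registers[0] = 5
    ("LOAD", 1, 7),     # registers[1] = 7
    ("ADD", 2, 0, 1),   # registers[2] = registers[0] + registers[1]
    ("HALT",),
]
registers = [0] * 4
pc = 0  # program counter

while True:
    instruction = memory[pc]   # FETCH the instruction the pc points at
    pc += 1
    opcode = instruction[0]    # DECODE: identify the operation
    if opcode == "LOAD":       # EXECUTE and store the result
        _, reg, value = instruction
        registers[reg] = value
    elif opcode == "ADD":
        _, dst, a, b = instruction
        registers[dst] = registers[a] + registers[b]
    elif opcode == "HALT":
        break

print(registers[2])  # 12
```

Each pass through the loop is one complete fetch-decode-execute cycle, repeated until the program halts.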


Registers

A register is a memory location within the actual processor that works at very fast speeds. It stores

instructions which are waiting to be decoded or executed. It is a small, high-speed storage location

within the Central Processing Unit (CPU) of a computer system. Registers are used to store

temporary data, addresses, and control information that the CPU needs to perform calculations and

execute instructions. Registers are often directly accessible by the CPU and are much faster than

accessing data from system memory. Registers are used to hold data that is being processed by the

CPU during computation, as well as to store intermediate results and memory addresses. They are

an important component of the CPU's operation, as they enable the CPU to quickly access and

manipulate data while performing calculations and executing instructions.

Processor

It is also known as a microprocessor or central processing unit (CPU), is a key electronic

component of a computer system that carries out instructions and performs arithmetic, logic, and

control tasks. There are 2 different types of processors namely:

 Complex instruction set computers (CISC)

 Reduced instruction set computers (RISC)

Complex instruction set computers (CISC)

Complex Instruction Set Computers (CISC) processors are a type of processor architecture that

emphasizes the completion of complex instructions in as few cycles as possible. CISC processors

are designed to handle a large number of complex, multi-step instructions that can be executed

efficiently on the processor itself. These processors can do complex operations, which can be

carried out in just one instruction. They have many different addressing modes and a wide range of

instructions that can be used. For example, a CISC processor might have a 'complicated' instruction
designed into the hardware called 'POWER'. This can take one number from a register, find the

power of that number, held in a different register, and then store the result in yet a third register.

So, 2 to the power 4 would be calculated as 16 and this would be stored in a third register. This

would all be done using one complex instruction, which might take about 3 or 4 CPU clock cycles

to complete. Complex Instruction Set Computers (CISC) are processors that have a large number

of instructions, each with a complex operation. CISC processors were popular in the 1980s and

early 1990s, but have largely given way to Reduced Instruction Set Computers (RISC).
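The 'POWER' example above can be sketched in code. Both functions below are illustrative stand-ins, not real machine instructions: the CISC-style version finishes the whole job in one "instruction", while the RISC-style version reaches the same result through a sequence of simple multiplies.

```python
# Contrast of the text's POWER example: reg2 = reg0 ** reg1.

def cisc_power(registers, dst, base_reg, exp_reg):
    # One complex instruction does everything (and may take several cycles).
    registers[dst] = registers[base_reg] ** registers[exp_reg]

def risc_power(registers, dst, base_reg, exp_reg):
    # Same result from simple steps: each loop pass is one MUL instruction.
    registers[dst] = 1
    for _ in range(registers[exp_reg]):
        registers[dst] *= registers[base_reg]

regs = [2, 4, 0]           # reg0 = 2, reg1 = 4
cisc_power(regs, 2, 0, 1)
print(regs[2])             # 16, matching the 2 to the power 4 example

regs = [2, 4, 0]
risc_power(regs, 2, 0, 1)
print(regs[2])             # 16 again, via several fetch-decode-execute cycles
```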

Characteristics of CISC

&#61623; Many instructions: CISC processors have a large number of instructions (often hundreds)

that can perform complex, multi-step operations in a single instruction.

 Complex instructions: Each instruction can execute multiple operations, such as loads,

stores, arithmetic, and logical operations.

 Long instruction lengths: CISC instructions are typically longer than RISC instructions,

which can lead to slower decode times.

 Microcode: CISC processors often use microcode to implement complex instructions,

which can increase power consumption and slow down the processor.

Examples of CISC processors include:

&#61623; Intel x86 (e.g., the 8086)

 Motorola 68000

Reduced instruction set computers (RISC)

Reduced Instruction Set Computers (RISC) processors are a type of processor architecture that

emphasizes simplicity and efficiency in instruction execution. RISC processors are designed to use

a smaller and more streamlined set of instructions, each of which performs a single, basic

operation. This approach allows RISC processors to execute instructions quickly and efficiently,
making them ideal for high-performance computing tasks. There are also CPUs that are known as

RISC (pronounced 'risk'), or Reduced Instruction Set Computers. RISC processors such as

UltraSPARC and Alpha use a much smaller, simpler set of instructions than CISC processors and so

to carry out any particular programming task may take many “fetch decode execute” cycles. RISC

processors, however, are much more efficient at processing huge blocks of data than CISC.

Reduced Instruction Set Computers (RISC) are designed to be simple and efficient. RISC

processors have a smaller number of instructions, each with a simple operation. RISC processors

are the norm today.

Characteristics of RISC

 Fewer instructions: RISC processors have a smaller number of instructions (often tens or

hundreds) that are designed to be simple and easy to execute.

 Simple instructions: Each instruction typically performs a single operation, such as a load,

store, or arithmetic operation.

 Short instruction lengths: RISC instructions are typically shorter than CISC instructions,

which reduces decode times and improves performance.

 No microcode: RISC processors do not use microcode to implement complex instructions,

reducing power consumption and increasing speed.

Examples of RISC processors include:

 SPARC

 PowerPC

 ARM

Advantages of RISC over CISC

 Faster execution: RISC processors can execute instructions faster due to their simplicity

and reduced decode times.


 Higher performance: RISC processors can achieve higher clock speeds and better

performance due to their simpler design.

 Lower power consumption: RISC processors typically consume less power than CISC

processors due to the reduced complexity of their design.

 Easier design: RISC processors are easier to design and maintain due to their simplicity.

Advantages of CISC over RISC

 More flexible: CISC processors can perform more complex operations in a single

instruction, making them more flexible.

 Better for legacy code: CISC processors can still execute older code written for earlier

CISC architectures.

Differences between Complex instruction set computers (CISC) and Reduced instruction set
computers (RISC)

CISC: CISC architectures have a large set of instructions, with many specialized instructions
that can perform multiple operations in a single instruction.
RISC: RISC architectures have a smaller set of simpler instructions, focusing on a few basic
operations that can be executed quickly.

CISC: Contains a wide variety of instructions (often hundreds) that can execute complex tasks,
including memory manipulation directly within an instruction. Instructions are variable in
length, meaning they can take different numbers of bits.
RISC: Has a limited number of instructions (typically dozens) that are highly optimized for
speed and efficiency. Most instructions are of fixed length, simplifying instruction decoding
and enabling faster execution.

CISC: Supports multi-step operations in one instruction (e.g., loading data from memory,
performing an operation, and storing the result in one instruction). The higher complexity of
individual instructions can lead to longer execution times due to variable instruction length
and complex addressing modes.
RISC: Each instruction typically performs one simple operation, so multiple instructions may
be required to accomplish what CISC does in one. Simpler, more uniform instructions lead to
easier pipelining and higher throughput.

CISC: Pipelining can be less effective due to the complexity and variable length of
instructions, which can lead to stalls and inefficient pipeline utilization.
RISC: Designed to take advantage of pipelining effectively. The simplicity and uniformity of
instructions allow a more predictable flow through the pipeline.

CISC: Instructions can directly manipulate memory; hence, fewer instructions may be needed
to perform certain tasks, which can lead to more compact code.
RISC: Generally, more instructions are needed to perform the same task compared to CISC.
However, the simpler instructions often allow better optimization for performance rather than
code size.

CISC: May offer better performance for certain complex tasks due to fewer total instructions
to execute, but this varies with the specific implementation and use case.
RISC: Typically achieves higher performance through efficient instruction execution,
pipelining, and optimized resource utilization, especially in applications requiring high
throughput and efficient parallel processing.

CISC: Commonly found where backward compatibility with older architectures is critical,
such as the x86 architecture used in most personal computers.
RISC: Widely used in modern computing, particularly in embedded systems, mobile devices,
and applications requiring high performance, such as the ARM architecture in smartphones
and tablets.
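The contrast in the comparison above can be sketched with a small, purely illustrative simulation (the "instructions" below are invented Python tuples, not a real ISA): a CISC-style machine finishes a memory-to-memory add in one instruction, while a RISC-style machine uses a four-instruction load/add/store sequence.

```python
memory = {0x10: 7, 0x14: 5, 0x18: 0}
registers = {"r1": 0, "r2": 0}

# CISC-style: one complex instruction reads both operands from memory,
# adds them, and writes the result back to memory.
def cisc_add_mem(dst, a, b):
    memory[dst] = memory[a] + memory[b]   # multi-step work, one instruction

# RISC-style: only LOAD and STORE touch memory; ADD works on registers.
risc_program = [
    ("LOAD",  "r1", 0x10),   # r1 <- mem[0x10]
    ("LOAD",  "r2", 0x14),   # r2 <- mem[0x14]
    ("ADD",   "r1", "r2"),   # r1 <- r1 + r2
    ("STORE", "r1", 0x18),   # mem[0x18] <- r1
]

def run_risc(program):
    for op, x, y in program:
        if op == "LOAD":
            registers[x] = memory[y]
        elif op == "ADD":
            registers[x] = registers[x] + registers[y]
        elif op == "STORE":
            memory[y] = registers[x]

cisc_add_mem(0x18, 0x10, 0x14)
cisc_result = memory[0x18]   # computed via 1 CISC-style instruction
memory[0x18] = 0
run_risc(risc_program)       # same result via 4 RISC-style instructions
```

Either way the result is the same; the difference is how much work each individual instruction performs.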

Pipelining

Pipelining is a technique used in the design of modern processors to improve their performance

and throughput by allowing multiple instruction phases to be processed simultaneously. This

concept is analogous to an assembly line in manufacturing, where different stages of production

occur simultaneously for different items.

Basic Concept of Pipelining


In a traditional, non-pipelined processor, each instruction goes through a series of phases

sequentially, typically including:

 Fetch: Retrieving the instruction from memory.

 Decode: Interpreting what the instruction means.

 Execute: Performing the operation specified by the instruction.

 Memory Access: Reading from or writing to memory (if necessary).

 Write-back: Writing the result back to the register file.

In a pipelined processor, these stages overlap. While one instruction is being executed, another can

be decoded, and yet another can be fetched. This means that multiple instructions are being

processed at different stages of execution at the same time.

Benefits of Pipelining

Increased Throughput: The primary advantage of pipelining is that it increases the number of

instructions that can be executed in a given time period. Ideally, after the initial few cycles, a new

instruction can be completed in every cycle.

Performance Improvement: Although each individual instruction still passes through every stage,

the overlap of instruction phases raises the overall completion rate, enabling faster processing.

Efficient Resource Utilization: Pipelining allows better utilization of CPU resources, as different

units in the CPU (such as ALU, memory, etc.) are used concurrently.
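Under ideal conditions the throughput gain is easy to estimate; the sketch below assumes the classic five-stage pipeline described in this section, one stage per clock cycle, and no stalls.

```python
STAGES = 5   # Fetch, Decode, Execute, Memory Access, Write-back

def non_pipelined_cycles(n_instructions):
    # Without pipelining, each instruction completes all 5 stages
    # before the next one begins.
    return STAGES * n_instructions

def pipelined_cycles(n_instructions):
    # With pipelining, the first instruction takes 5 cycles to fill the
    # pipeline; after that, one instruction completes every cycle.
    return STAGES + (n_instructions - 1)

assert non_pipelined_cycles(100) == 500   # 500 cycles without pipelining
assert pipelined_cycles(100) == 104       # 104 cycles with pipelining
```

For 100 instructions the idealized speedup is 500/104, close to the 5x bound set by the number of stages; real pipelines fall short of this because of stalls and hazards.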

Standard CPU specification factors

There are four key factors about CPU architecture that affect its performance:

1. Cores
A CPU can contain one or more processing units. Each unit is called a core. A core contains an

ALU, control unit and registers. It is common for computers to have two (dual), four (quad) or

even more cores. CPUs with multiple cores have more power to run multiple programs at the same

time.

2. Clock speed

The clock speed - also known as clock rate - indicates how fast the CPU can run. This is measured

in megahertz (MHz) or gigahertz (GHz) and corresponds with how many instruction cycles the

CPU can deal with in a second.

3. Cache size

Cache is a small amount of memory which is a part of the CPU - closer to the CPU than RAM. It is

used to temporarily hold instructions and data that the CPU is likely to reuse.

4. Processor type

Processor type can be either RISC (Reduced Instruction Set Computer) or CISC (Complex Instruction Set Computer).

CPU ARCHITECTURE

CPU itself has the following three components.

 Arithmetic Logic Unit (ALU)

 Control Unit

 Memory Unit (Registers)


Control Unit

This unit controls the operations of all parts of the computer but does not carry out any actual data

processing operations. Functions of this unit are −

 It is responsible for controlling the transfer of data and instructions among other units of a

computer.

 It manages and coordinates all the units of the computer.

 It obtains the instructions from the memory, interprets them, and directs the operation of the

computer.

 It communicates with Input/output devices for transfer of data or results from storage.

 It does not process or store data.

ALU (Arithmetic Logic Unit)

This unit consists of two subsections namely,

 Arithmetic Section

 Logic Section
Arithmetic Section

Function of the arithmetic section is to perform arithmetic operations like addition, subtraction,

multiplication, and division. All complex operations are done by making repetitive use of the

above operations.

Logic Section

Function of the logic section is to perform logic operations such as comparing, selecting, matching,

and merging of data.

Registers

A processor register is a quickly accessible storage location within a computer's central

processing unit. Registers are an integral part of the CPU. They are a type of memory that can

be accessed very quickly compared to other types of memory. The pieces of information they hold

are needed by the CPU to run each program instruction during a 'fetch-decode-execute cycle' or

can be used to hold values that are generated as part of the ALU working on data. There are a

number of very special registers that do very specific jobs.

Buses
A bus is a data connection between two or more devices connected to the computer. For example,

a bus enables the processor to communicate with the memory, or a video card to communicate with

the memory. Buses carry different types of signals, and each type of bus carries only one type of

signal, which keeps communication orderly and efficient. A computer consists of many

components, such as the motherboard, memory, and input/output devices, and these components

communicate with the help of buses. PC motherboards also have buses for expansion and external

devices, but all computers have three basic buses.

We have three different types of buses. These are:

1. DATA BUS: The data bus carries the actual data being transferred between components. It

is a group of wires that carries data between the CPU, memory, and input/output devices. It

is a bidirectional bus, meaning data can flow in both directions (e.g., from the CPU to

memory or from memory to the CPU). The data bus is typically used to transfer data

between the following components:

 CPU: The CPU uses the data bus to transfer data to and from memory or I/O devices.

 Memory: The memory uses the data bus to transfer data between itself and the CPU.

 I/O Devices: The I/O devices use the data bus to transfer data to and from the CPU.
The data bus is usually a parallel bus, meaning multiple bits are transferred simultaneously over

multiple wires.

2. ADDRESS BUS: The address bus is a group of wires that carries memory addresses

between the CPU and memory. It's a unidirectional bus, meaning addresses flow in one

direction only (e.g., from the CPU to memory). The address bus is used by the CPU to

specify the location of data in memory. It's essential for accessing specific memory

locations. The address bus is usually a parallel bus, meaning multiple bits are transferred

simultaneously over multiple wires. The number of address bits determines the total

number of unique addresses that can be accessed (2^n, where n is the number of address

bits).
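The 2^n relationship can be checked directly; for example, assuming byte-addressable memory:

```python
def addressable_locations(address_bits):
    # An n-bit address bus can select 2**n distinct memory locations.
    return 2 ** address_bits

assert addressable_locations(16) == 65_536            # 64 KiB of byte-addressable memory
assert addressable_locations(32) == 4_294_967_296     # 4 GiB
```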

3. CONTROL BUS: The control bus is a group of wires that carries control signals between

the CPU and other components. It's a bidirectional bus, meaning control signals can flow in

both directions (e.g., from the CPU to peripherals and from peripherals to the CPU).

Control signals are used to regulate the flow of data and control operations on the system.

Examples include:

 Read/Write signals: Indicate whether the CPU is reading or writing data.

 Chip Select signals: Select specific peripherals or devices on the system.

 Clock signals: Provide a timing reference for other components on the system.

 Interrupt signals: Signal to the CPU when an event has occurred that requires attention.

Instruction representation and execution

Fetch Execute Cycle

This is the basic operation (instruction) cycle of a computer (also known as the fetch decode

execute cycle). During the fetch-execute cycle, the computer retrieves a program instruction from

its memory. It then establishes and carries out the actions that are required for that instruction. The
cycle of fetching, decoding, and executing an instruction is continually repeated by the CPU whilst

the computer is turned on.

1. Fetch

The first step the CPU carries out is to fetch some data and instructions (program) from main

memory then store them in its own internal temporary memory areas. These memory areas are

called 'registers'. For this to happen, the CPU makes use of a vital hardware path called the 'address

bus'. The CPU places the address of the next item to be fetched on to the address bus. Data from

this address then move from main memory into the CPU by travelling along another hardware path

called the 'data bus'.

2. Decode

The next step is for the CPU to make sense of the instruction it has just fetched. This process is

called 'decode'. The CPU is designed to understand a specific set of commands. These are called

the 'instruction set' of the CPU. Each make of CPU has a different instruction set. The CPU

decodes the instruction and prepares various areas within the chip in readiness of the next step.

3. Execute
This is the part of the cycle when data processing actually takes place. The instruction is carried

out upon the data (executed). The result of this processing is stored in yet another register. Once

the execute stage is complete, the CPU sets itself up to begin another cycle once more.

Instruction Set

An instruction set is the collection of instruction codes used to perform tasks. The

instruction set, also called the ISA (instruction set architecture), is the part of a computer that pertains to

programming, which is essentially machine language. The instruction set provides commands to the

processor, to tell it what it needs to do. The instruction set consists of addressing modes,

instructions, native data types, registers, memory architecture, interrupt, and exception handling,

and external I/O.

An example of an instruction set is the x86 instruction set, which is common to find on computers

today. Different computer processors can use almost the same instruction set while still having

very different internal design.

Both the Intel Pentium and AMD Athlon processors use nearly the same x86 instruction set. An

instruction set can be built into the hardware of the processor, or it can be emulated in software,

using an interpreter. The hardware design is more efficient and faster for running programs than

the emulated software version. Examples of instruction set:

 ADD - Add two numbers together.

 COMPARE - Compare numbers.

 IN - Input information from a device, e.g., keyboard.

 JUMP - Jump to designated RAM address.

 JUMP IF - Conditional statement that jumps to a designated RAM address.

 LOAD - Load information from RAM to the CPU.

 OUT - Output information to device, e.g., monitor.


 STORE - Store information to RAM.
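As mentioned above, an instruction set can be emulated in software using an interpreter. The sketch below is a toy, not a real ISA: it assumes a single accumulator register and invented encodings for a few of the instructions listed above (LOAD, ADD, STORE, JUMP IF).

```python
def run(program, ram):
    acc = 0   # a single accumulator register
    pc = 0    # program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "LOAD":        # acc <- ram[arg]
            acc = ram[arg]
        elif op == "ADD":       # acc <- acc + ram[arg]
            acc += ram[arg]
        elif op == "STORE":     # ram[arg] <- acc
            ram[arg] = acc
        elif op == "JUMP_IF":   # jump to instruction arg when acc != 0
            if acc != 0:
                pc = arg
                continue
        pc += 1
    return ram

ram = {0: 2, 1: 3, 2: 0}
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2)]   # ram[2] = ram[0] + ram[1]
run(program, ram)
```

A real instruction set also defines addressing modes, native data types, and interrupt behaviour, which this sketch omits.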

Process of instruction execution in a pipelined architecture

Pipelining is a technique used to improve the performance of a processor by breaking down the

instruction execution process into a series of stages, allowing multiple instructions to be processed

simultaneously. Each stage is responsible for a specific task, and instructions flow through these

stages in sequence, hence the name "pipeline." The stages involved in instruction execution in a

pipelined architecture are:

 Instruction Fetch (IF): The processor fetches the next instruction from memory, using the

program counter to locate it.

 Instruction Decode (ID): The instruction is decoded, and the necessary information is

extracted (e.g., registers, operands, and destination).

 Operand Fetch (OF): The necessary operands (data) are fetched from registers or memory.

 Execution (EX): The instruction is executed according to its operation type (e.g., addition,

multiplication, jump, etc.).

 Memory Access (MA): If the instruction requires memory access (e.g., load or store), this

stage handles the memory operation.

 Write Back (WB): The results of the instruction execution are written back to the registers

or memory.

 Completion (COM): The instruction is considered complete, and the processor can move on

to the next instruction.

Example
Consider an ADD instruction moving through a simplified five-stage version of this pipeline; it takes 5 clock cycles to complete:

 Cycle 1: Fetch and decode

 Cycle 2: Fetch operands

 Cycle 3: Execute addition

 Cycle 4: Write back result

 Cycle 5: Completion
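The overlap between instructions can be sketched for the seven stages listed above. Assuming one stage per cycle and no stalls, instruction i occupies stage s during cycle i + s (counting from zero):

```python
STAGES = ["IF", "ID", "OF", "EX", "MA", "WB", "COM"]

def stage_in_cycle(instruction, cycle):
    # Which stage instruction `instruction` (0-based) occupies in `cycle`
    # (0-based); "--" means the instruction is not in the pipeline yet,
    # or has already completed.
    s = cycle - instruction
    return STAGES[s] if 0 <= s < len(STAGES) else "--"

# In cycle 2, instruction 0 is fetching operands while instruction 1 is
# decoding and instruction 2 is being fetched: three instructions in flight.
row = [stage_in_cycle(i, 2) for i in range(3)]
assert row == ["OF", "ID", "IF"]
```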

The pipeline can be filled with multiple instructions, allowing for significant performance

improvements over non-pipelined architectures. However, there are some challenges associated

with pipelining;

 Pipeline stalls: If an instruction requires data from a previous instruction, it can cause a stall

in the pipeline.

 Branch prediction: If an instruction is mispredicted (e.g., taken vs. not taken), it can cause

pipeline flushes or stalls.

 Exceptions: If an exception occurs during execution, it can cause the pipeline to flush or

stall.

To mitigate these challenges, pipelined architectures often employ various techniques such as

branch prediction, cache hierarchies, and exception handling mechanisms.


Checking for CPU specifications in a computer

1. Windows

System Information: Press the Windows key + R, type msinfo32, and press Enter. This will open

the System Information window, where the CPU specifications are listed next to "Processor" in the

"System Summary" section.

Device Manager: Press the Windows key + X, select Device Manager, and expand the

"Processors" section. Right-click on the CPU and select "Properties" to view its specifications.

Task Manager: Press the Ctrl + Shift + Esc keys to open Task Manager, click on the "Performance"

tab, and then click on the "CPU" tab. The CPU specifications will be displayed in the "Processor"

section.

2. macOS

System Information: Click the Apple menu and select "About This Mac." Then, click on "System

Report" and scroll down to the "Hardware" section. Click on "Hardware Overview" and then select

"CPU" to view its specifications.

Terminal: Open the Terminal app and type sysctl -n machdep.cpu.brand_string to display the CPU

model. Type sysctl -n hw.ncpu to display the number of logical CPU cores, or sysctl -n

hw.physicalcpu for the number of physical cores.

3. Linux

Terminal: Open a terminal and type lscpu to display detailed information about your CPU,

including its specifications.

cat /proc/cpuinfo: This command will display detailed information about your CPU, including its

specifications.

4. Third-Party Utilities

CPU-Z: A free downloadable utility that provides detailed information about your CPU, including

its model, clock speed, cache sizes, and supported instruction sets.

HWiNFO: A free downloadable tool that provides detailed information about your system's

hardware, including the CPU.

5. BIOS Settings

Restart your computer and press the appropriate key to enter the BIOS settings (usually F2, F12, or

Del). Look for the "Advanced" or "Performance" tab and scroll down to find the CPU settings.

Practical Exercise

1. Identify the following CPU specifications from your computer

 Clock Speed

 Manufacturer

 Generation

2. Case Study: Verification of CPU Specifications for a Computer Build

Scenario: A company is planning to build a high-performance workstation for graphic design and

video editing tasks. The IT department is tasked with selecting the appropriate CPU based on

specific requirements and verifying that the chosen CPU meets the desired specifications for the

computer build. Highlight the specifications needed to achieve this scenario and document the

steps taken.
Element 3: Understand Computer Memory Organization

Introduction

This learning outcome covers explaining memory organisation, various storage technologies,

cache and virtual memory, prescribing memory specifications for a user and verifying memory

specifications for a given computer.

Definition of key terms

 Memory: It is any physical device capable of storing information temporarily, or

permanently. Memory devices utilize integrated circuits and are used by operating

systems, software, and hardware. It can also be said to be a storage space in the

computer, where data is to be processed and instructions required for processing are

stored.

 Memory unit: It is the collection of storage units or devices together.

 Memory hierarchy: It is an organizational structure in which memory units are ranked

according to levels of importance. The computer memory hierarchy ranks components


in terms of response times, with processor registers at the top of the pyramid structure

and tape backup at the bottom.

 Virtual memory: Is a memory management capability of an operating system (OS)

that uses hardware and software to allow a computer to compensate for physical

memory shortages by temporarily transferring data from random access memory

(RAM) to disk storage. This technique creates an abstraction of the memory resources,

enabling more efficient utilization of memory and allowing systems to run larger

applications than would otherwise fit into the physical memory.

 Cache memory: a small-sized type of volatile computer memory that provides

high-speed data access to a processor and stores frequently used computer

programs, applications and data.

 Volatile Memory: It is a type of memory where data is lost when power is

switched off.

 Non-Volatile Memory: It is a type of memory with permanent storage

that does not lose any data when power is switched off.

Memory Organization in Computer Architecture

Generally, memory/storage is classified into 2 categories:

 Volatile Memory: A type of memory where data is lost when power is

switched off.

 Non-Volatile Memory: A type of memory that provides permanent storage and

does not lose any data when power is switched off.

The total memory capacity of a computer can be visualized by hierarchy of components. The

memory hierarchy system consists of all storage devices contained in a computer system from
the slow Auxiliary Memory to fast Main Memory and to smaller Cache memory.

Types of memory

Memory in a computer system is primarily classified into three types;

 Cache Memory

 Primary Memory/Main Memory

 Secondary Memory

1. Cache Memory

Cache memory is a small, high-speed storage area located inside or very close to the CPU (Central

Processing Unit) of a computer. It is used to temporarily hold frequently accessed data and instructions,

enabling faster retrieval than accessing data from the main memory (RAM). Cache memory

significantly improves the overall performance of a computer system by reducing the time it takes for

the CPU to access data and execute instructions. It is a very high speed semiconductor memory

which can speed up the CPU. It acts as a buffer between the CPU and the main memory. It is
used to hold those parts of data and program which are most frequently used by the CPU. The

parts of data and programs are transferred from the disk to cache memory by the operating

system, from where the CPU can access them.

Operations of the cache memory

 The CPU requests the contents of a memory location.

 Check cache for this data

 If present, get from cache (fast)

 If not present, read required block from main memory to cache

 Then deliver from cache to CPU

The cache uses tags to identify which block of main memory is present in each cache

slot.

Advantages

The advantages of cache memory are as follows;

 Cache memory is faster than main memory.

 It consumes less access time as compared to main memory.


 It stores the program that can be executed within a short period of time.

 It stores data for temporary use.

Disadvantages

The disadvantages of cache memory are as follows;

 Cache memory has limited capacity.

 It is very expensive.
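The access-time advantage can be quantified with the standard average memory access time (AMAT) formula, AMAT = hit time + miss rate * miss penalty. The timings below are illustrative assumptions, not measurements.

```python
def amat(hit_time_ns, miss_rate, miss_penalty_ns):
    # Every access pays the cache hit time, and a fraction of accesses
    # (the miss rate) also pays the main-memory penalty.
    return hit_time_ns + miss_rate * miss_penalty_ns

# A 1 ns cache with a 95% hit rate in front of 100 ns main memory:
assert amat(1, 0.05, 100) == 6.0   # far closer to cache speed than memory speed
```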

2. Primary Memory (Main Memory)

Primary memory holds only those data and instructions on which the computer is currently

working. It has a limited capacity and data is lost when power is switched off. (Volatile) It is

generally made up of semiconductor devices. These memories are not as fast as registers. The

data and instruction required to be processed resides in the main memory. It is divided into two

subcategories RAM and ROM.

a) RAM (Random Access Memory)

Random access memory (RAM) is the best known form of computer memory. It is a type of

volatile memory used in computers and other electronic devices to store data that is actively

being used or processed. It is characterized by its ability to allow data to be read and written in

any order (hence "random access"), making it much faster than other forms of storage such as
hard drives or solid-state drives. RAM is considered "random

access" because you can access any memory cell directly if you know the row and column that

intersect at that cell.

Types of RAM

 Static RAM (SRAM)

 Dynamic RAM (DRAM)

Static RAM (SRAM)

Static RAM (SRAM) is a type of random-access memory that uses latching circuits (flip-flops)

to store data bits. It is faster and more expensive than Dynamic RAM (DRAM) but does not

require refreshing like DRAM does. SRAM is commonly used in cache memory and other

applications where high speed and low power consumption are important.

 Stores data using the flip-flop state

 Retains value indefinitely, as long as it is kept powered.

 Mostly used to create cache memory of CPU.

 Faster and more expensive than DRAM.

Dynamic RAM (DRAM)

Dynamic RAM (DRAM) is a type of random-access memory that stores each bit of data in a

separate capacitor within an integrated circuit. DRAM needs to be refreshed to maintain the

stored data as the capacitors leak charge over time. DRAM is slower and less expensive than

SRAM, but it is more commonly used in computers and other devices for main memory due to
its higher density and lower cost per bit.

 Each cell stores a bit with a capacitor and transistor.

 Large storage capacity

 Needs to be refreshed frequently.

 Used to create main memory.

 Slower and cheaper than SRAM.

Non-Volatile Random Access Memory (NVRAM)

It is a type of memory that retains data even when power is turned off. This is achieved by using

a combination of volatile and non-volatile memory technologies. NVRAM typically stores data

using either battery-backed SRAM or flash memory. NVRAM is often used in applications

where it is essential to retain certain data even in the event of a power failure or system

shutdown. Examples of such applications include storing BIOS settings in a computer, storing

configuration data in networking devices, and storing critical data in industrial control systems.

 It is a category of Random Access Memory (RAM) that retains stored data even if the

power is switched off.

 NVRAM is often packaged as a tiny 24-pin dual inline package (DIP) integrated circuit chip,

which draws the power required to retain its contents from the CMOS battery on the motherboard.

 NVRAM stores several system parameters, such as the Ethernet MAC address, serial

number, date of manufacture, HOSTID, etc.

 NVRAM is a non-volatile memory type that provides the random access facility.

b) ROM (Read Only Memory)

Read Only Memory (ROM) is a type of non-volatile memory used in computers and other
electronic devices to permanently store data that is not intended to be modified or erased

frequently. Unlike Read/Write memory types, such as Random Access Memory (RAM), data in

ROM is typically written during the manufacturing process and cannot be easily modified or

overwritten. It is the memory from which we can only read but cannot write on it. This type of

memory is non-volatile. The information is stored permanently in such memories during

manufacture. ROM stores the instructions that are required to start a computer; this operation is

referred to as bootstrapping. ROM chips are used not only in computers but also in other electronic

devices such as washing machines and microwave ovens.

Types of ROM

There are five basic ROM types:

a) ROM - Read Only Memory: is a type of non-volatile memory that stores data permanently

and cannot be electronically modified after manufacture. The data in ROM is typically

programmed by the manufacturer and is maintained even when the power is turned off.

b) PROM - Programmable Read Only Memory: Programmable Read-Only Memory (PROM) is

a type of read-only memory (ROM) that allows users or manufacturers to program custom

data onto the memory chip after manufacture. PROM can be programmed by blowing fuses

on the chip to set the desired data pattern. Once the data has been programmed onto a

PROM chip, it cannot be changed or erased. This makes PROM similar to Mask ROM, as

the data programmed onto PROM is permanent. However, the advantage of PROM is that it

allows for customization of data by the user or manufacturer without requiring specialized

manufacturing processes. PROM was widely used in early computer systems and embedded

systems to store firmware, boot loaders, configuration data, and other essential data that

needed to be permanently stored and not easily modified. With the development of more

advanced types of programmable ROM, such as EEPROM and flash memory, PROM has

become less common in modern electronic devices.


c) EPROM - Erasable Programmable Read Only Memory: Erasable Programmable Read-Only

Memory (EPROM) is a type of read-only memory (ROM) that can be rewritten and

reprogrammed multiple times. EPROM chips are programmed by selectively applying an

electrical charge to specific memory cells, which changes the stored data. The unique feature

of EPROM is that it can be erased by exposing the memory chip to ultraviolet (UV) light,

typically using a special UV eraser device. This erases the previously stored data, allowing

the chip to be reprogrammed with new data. EPROM can be erased and reprogrammed

multiple times, making it a versatile memory option for applications that require frequent

updates or changes to the stored data. EPROM was commonly used in early computer

systems, embedded systems, and electronic devices where the firmware or software needed

to be updated periodically. It played a crucial role in the development of technology by

providing a flexible and reprogrammable memory solution. However, EPROM has been

largely replaced by Electrically Erasable Programmable Read-Only Memory (EEPROM)

and flash memory in modern electronic devices due to their faster erase and reprogram times

and lower power consumption.

d) EEPROM - Electrically Erasable Programmable Read Only Memory: Electrically Erasable

Programmable Read-Only Memory (EEPROM) is a type of non-volatile memory that can be

electrically erased and reprogrammed multiple times. EEPROM does not require ultraviolet light

for erasing, making it more convenient to use than EPROM. It can be programmed and erased in-

circuit, which means that the chip does not need to be removed from the circuit board for these

operations. EEPROM works by storing data in memory cells that can be individually erased and

reprogrammed using electrical signals. This allows for easy and flexible updates to the stored

data without the need for specialized equipment or UV light. EEPROM is commonly used in

electronic devices where small amounts of data need to be stored and updated, such as in

firmware, system configurations, and calibration data. It is particularly useful in applications that

require regular updates or changes to the stored information, as it allows for easy reprogramming
without the need for physical removal of the memory chip. EEPROM has largely replaced

EPROM in modern electronic devices due to its faster erase and reprogram times, lower power

consumption, and ease of use. It is widely used in consumer electronics, automotive systems,

industrial automation, and other applications that require reliable and flexible non-volatile

memory solutions.

e) Flash EEPROM memory: Flash EEPROM (Electrically Erasable Programmable Read-Only

Memory) is a type of non-volatile memory that combines the features of EEPROM and flash

memory. Flash EEPROM is electrically erasable like EEPROM, but it allows for bulk erasing of

entire blocks of memory at once, making it more efficient for updating and reprogramming large

amounts of data. Flash EEPROM memory is commonly used in electronic devices for storing

system firmware, operating system code, application software, and other types of data that need

to be retained even when power is turned off. It is widely used in USB drives, solid-state drives

(SSDs), memory cards, and embedded systems. Flash EEPROM memory allows for fast read and

write operations, and it can be updated more easily and quickly compared to traditional

EEPROM memory. It is also more cost-effective than EEPROM due to its larger storage

capacity and faster erase/write times.

Characteristics of Main Memory

Main memory, commonly referred to as RAM (Random Access Memory), is a critical component

of a computer system that plays a central role in the performance of the system. It stores data

that the CPU needs while performing tasks, allowing for quick access to information and

instructions. Here are the key characteristics of main memory in a computer system:

1) Main memory (RAM) is volatile, meaning it loses all stored data when the power is turned

off. This necessitates that any data needing to be preserved beyond a power cycle must

be saved to non-volatile storage (like hard disks or SSDs).


2) RAM is significantly faster than secondary storage (like hard disks and SSDs). This speed

allows the CPU to access data and instructions in real-time, thereby enhancing the overall

performance of the computer.

3) In RAM, any memory location can be accessed directly in constant time, regardless of the

physical location of the data. This contrasts with sequential access memory types, where

data must be accessed in order.

4) Main memory allows both reading and writing of data. The CPU can quickly read data

from RAM and write new data back to it, enabling dynamic operations as programs run.

5) The size of main memory can vary significantly between different systems and usage

cases. Modern computers typically have from a few gigabytes (GB) to several terabytes

(TB) of RAM, depending on their design and intended applications.

6) Main memory is organized in addressable units (usually bytes), which can be managed by

the CPU. This organization allows the CPU to reference specific memory locations

efficiently.

7) Modern computer architectures often implement a hierarchical memory structure. This

includes cache memory (L1, L2, L3), which is a smaller, faster type of volatile memory that

sits between the CPU and main memory to speed up data access.

8) In multi-core or multi-threaded processors, main memory can be accessed by different

CPU cores or threads simultaneously, allowing for parallel processing of tasks.

Main memory is typically installed in the form of memory modules (like DIMMs - Dual In-Line

Memory Modules) that are inserted into motherboard slots designed for RAM.
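To make points 3), 4), and 6) concrete, main memory can be pictured as a flat array of bytes in which any address can be read or written directly, in constant time. A toy sketch (the sizes and addresses are illustrative only):

```python
ram = bytearray(1024)   # model 1 KiB of byte-addressable main memory

ram[0x2A0] = 0x7F       # write a byte directly at address 0x2A0
value = ram[0x2A0]      # read it back; no scan through addresses 0..0x29F needed

# Any location is reachable the same way, regardless of position:
ram[0] = 0x01
ram[1023] = 0xFF
```

This is, of course, only a software model: real RAM is addressed by the CPU over the memory bus, but the constant-time, read/write, byte-addressable behaviour is the same.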

3. Secondary Memory
Secondary memory, also known as auxiliary storage or external memory, refers to storage

devices that are used to store data and programs that are not currently in use by the computer's

main memory (RAM). Unlike primary memory (RAM), secondary memory is non-volatile,

meaning it retains information even when the power is turned off. This makes it essential for

long-term data storage in computer systems. Secondary memory is slower than main memory and is used for storing data and information permanently. The CPU does not access these memories directly; instead, they are accessed via input-output routines. The contents of secondary memory are first transferred to main memory, and then the CPU can access them. Examples include hard disks, CD-ROMs, and DVDs.

Characteristics of Secondary Memory

Secondary memory, also known as auxiliary storage or external storage, is crucial for storing data

and programs that are not currently in use by the computer's primary memory (RAM). Here are

the key characteristics of secondary memory:

1) Secondary memory retains data even when the computer is powered off. This is essential

for long-term data storage, as all saved files, applications, and system updates remain

intact during power outages or shutdowns.


2) Secondary memory typically offers much larger storage capacities than primary memory

(RAM). For instance, modern hard drives and solid-state drives can store hundreds of

gigabytes to several terabytes of data, accommodating vast amounts of information.

3) Access times for secondary storage are generally slower than those for primary storage.

While RAM allows for quick read and write operations, secondary memory devices, such

as HDDs and SSDs, have longer retrieval times.

4) Secondary memory is usually cheaper per gigabyte compared to primary memory. This

cost-effectiveness makes it practical for storing large quantities of data.

5) Secondary memory often works in conjunction with primary memory in a hierarchical

storage system. Data that is accessed less frequently is kept in secondary storage, while

frequently accessed data resides in primary memory.

6) Some secondary memory devices (like HDDs with magnetic platters) have mechanical

components that may require sequential access for certain operations. In contrast, solid-

state drives (SSDs) offer random access capabilities similar to RAM.

7) Data stored in secondary memory can be retained for an extended period, making

secondary storage ideal for backups, archives, and long-term file retention.

8) Many secondary storage devices, such as USB flash drives and external hard drives, are

portable, allowing users to easily transfer data between different computers or locations.

Depending on whether the secondary memory device is permanently installed in the computer or not, there are two types of secondary memory: fixed and removable.

Differences between primary and secondary memory

1) Volatility: Primary memory (RAM) is volatile; secondary memory is non-volatile.

2) Speed: Primary memory is much faster; secondary memory has longer access times.

3) CPU access: The CPU accesses primary memory directly; secondary memory is accessed via input-output routines.

4) Capacity and cost: Secondary memory offers far larger capacity at a lower cost per gigabyte.

5) Purpose: Primary memory holds data and instructions in current use; secondary memory stores data and programs permanently.

Storage Devices

SSD Solid State Drive

It is a storage device that uses integrated circuit assemblies as memory to store data. SSDs are also known as solid-state disks, although they contain no physical disks. SSDs may use the form factors and protocols of traditional hard disk drives (HDDs), such as SATA and SAS, which greatly simplifies their use in computers. New form factors such as M.2, and new I/O protocols such as NVM Express, have been developed to address the specific requirements of the flash memory technology used in SSDs.

Characteristics
 There are no moving mechanical components in SSD. This makes them different from

conventional electromechanical drives such as hard disk drives (HDDs) or floppy disks,

which contain movable read/write heads and spinning disks.

 SSDs are typically more resistant to physical shock.

 They run silently and have quicker access times and lower latency than electromechanical devices.

Optical storage devices

This is an electronic data storage medium that can be written to and read from using a low

powered laser beam. Optical storage devices save data as patterns of dots that can be read using

light. A laser beam is the usual light source. The data on the storage medium is read by bouncing

the laser beam off the surface of the medium. Dots can be created using the laser beam (for

media that is writable such as CD-Rs). The beam is used in a high-power mode to actually mark

the surface of the medium, making a dot. This process is known as 'burning' data onto a disc.
Magnetic Storage Device

A magnetic disk is a storage device that uses a magnetization process to write, rewrite and

access data. It is covered with a magnetic coating and stores data in the form of tracks, spots

and sectors. Hard disks and zip disks are common examples of magnetic disks.

Virtual Memory

Virtual memory is a memory management capability implemented by operating systems that

allows a computer to compensate for physical memory shortages by temporarily transferring

data from random access memory (RAM) to disk storage. This technique creates an

abstraction of the memory resources, enabling more efficient utilization of memory and

allowing systems to run larger applications than would otherwise fit into the physical

memory. Virtual memory is thus a technique to increase the effective main memory capacity: data is swapped between RAM and an area of the hard disk that serves as an extension of main memory. The technique is implemented using both hardware and software. It maps the memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
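The address mapping can be illustrated with a minimal sketch. The page size and page-table contents below are made up for the example: a virtual address is split into a page number and an offset, and the page table maps the page number to a physical frame.

```python
PAGE_SIZE = 4096  # an assumed, but common, page size (4 KiB)

# Hypothetical page table: virtual page number -> physical frame number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr):
    """Translate a virtual address to a physical address via the page table."""
    page = virtual_addr // PAGE_SIZE      # which virtual page the address is in
    offset = virtual_addr % PAGE_SIZE     # position within that page
    if page not in page_table:
        # In a real system this triggers a page fault and the OS loads the page.
        raise LookupError("page fault: page %d is not in memory" % page)
    return page_table[page] * PAGE_SIZE + offset

physical = translate(4100)   # page 1, offset 4 -> frame 2 -> 2*4096 + 4 = 8196
```

When the page is missing, the lookup fails; in a real operating system that is precisely the page fault that causes a page to be swapped in from disk.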

Benefits of virtual memory

 Increased effective Memory: Virtual memory allows systems to run larger applications

and multiple tasks concurrently than the physical memory alone would allow.

 Memory Isolation: Each process operates in its own memory space, which enhances

security and stability by preventing one process from accidentally accessing or

modifying the memory of another.

 Simplified Memory Management: The operating system handles the allocation and

deallocation of memory, allowing programs to use memory without needing to manage

it directly.

Limitations of virtual memory

 Performance Costs: Although virtual memory provides significant advantages, accessing

data from disk (swapping) is much slower than accessing data from RAM. Frequent page

faults can lead to decreased performance, often referred to as "thrashing," where the

system spends more time swapping pages than executing processes.

 Disk Space Usage: Virtual memory relies on disk space (swap files or partitions). If the

disk space is insufficient, the system may encounter issues when attempting to allocate

memory, and performance may degrade.

Differences between Cache Memory and Virtual Memory

1) Objective: Cache memory increases CPU access speed; virtual memory increases the effective main memory capacity.

2) Memory unit: Cache memory is a physical memory unit that is very fast to access; virtual memory is a technique involving the hard disk and is slower to access.

3) Management: The CPU and related hardware manage cache memory; the operating system manages virtual memory.

4) Size: Cache memory is small; virtual memory is much larger than cache memory.

5) Operation: Cache memory keeps recently used data; virtual memory keeps programs that cannot be accommodated in main memory.

Redundant Array of Independent Disks (RAID)

A Redundant Array of Independent Disks (RAID) is a data storage virtualization technology that

combines multiple physical disk drive components into one or more logical units for the

purposes of data redundancy, performance improvement, or both. RAID is commonly used in

servers, workstations, and storage systems to enhance data reliability and performance. By

storing data across multiple disks, RAID can protect against data loss in the event of a single
disk failure or multiple failures, depending on the configuration. RAID configurations can

improve read and write speeds by leveraging the parallelism offered by multiple disks. RAID

presents multiple physical disks as a single logical unit to the operating system.

Common RAID Levels

1. RAID 0 (Striping): Data is split across multiple disks, which increases speed. There is no

redundancy; if one disk fails, all data is lost. Best for applications where performance is

critical and data loss is acceptable (e.g., gaming, video editing).

2. RAID 1 (Mirroring): Data is copied identically to two or more disks. Provides redundancy;

if one disk fails, the data remains available on the other disk(s). Suitable for critical data

storage where uptime is essential.

3. RAID 5: Data and parity information are striped across three or more disks. Can tolerate

the failure of one disk. Parity information allows for data recovery. Often used in

business applications where a balance of performance, capacity, and redundancy is

needed.

4. RAID 6: Similar to RAID 5, but uses two parity blocks instead of one. Can tolerate the

failure of two disks. Useful for applications where data availability is crucial, offering

higher fault tolerance than RAID 5.

5. RAID 10 (or 1+0): Combines RAID 1 and RAID 0 by mirroring data on pairs of disks and

striping across those pairs. Provides redundancy and improved performance; can

withstand multiple disk failures as long as they are not from the same mirrored pair.
Ideal for database applications or any applications requiring high availability and

performance.
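The parity mechanism behind RAID 5 (and, with two parity blocks, RAID 6) can be illustrated with XOR: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the remaining ones. The sketch below keeps parity on one disk for simplicity, as in RAID 4; RAID 5 actually rotates the parity blocks across all disks.

```python
def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data striped across three disks
parity = xor_blocks(d1, d2, d3)          # stored on a fourth disk

# If the disk holding d2 fails, its contents are recovered from the others,
# because d1 ^ d3 ^ (d1 ^ d2 ^ d3) == d2:
recovered = xor_blocks(d1, d3, parity)
```

This is why RAID 5 tolerates exactly one failed disk: with two blocks missing, the single XOR equation no longer has a unique solution, which is what RAID 6's second, independent parity block addresses.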

Additional RAID Levels

 RAID 2: Uses Hamming code for error correction and requires a dedicated disk for each

bit. It's rarely used today.

 RAID 3 and RAID 4: Both use dedicated parity disks for error correction but are less

common than RAID 5 and 6.

 RAID 50, RAID 60: Combinations of RAID 5 or 6 with RAID 0 for performance and

redundancy.

RAID Implementation

Hardware RAID: Uses a dedicated hardware controller to manage the disks. It can provide

better performance and additional features but requires specific hardware.

Software RAID: Managed by the operating system without special hardware. It's more flexible

and cost-effective but may consume system resources.

Advantages of RAID

 Increased Performance: Depending on the RAID level, you can achieve better read and

write speeds.

 Data Protection: RAID can prevent data loss through redundancy.

 Scalability: RAID configurations can often be expanded by adding more disks.


Disadvantages of RAID

 Complexity: Configuration and management can be complex, especially with higher

RAID levels.

 Cost: Additional disks and potentially hardware controllers increase costs.

 Not a Backup Solution: While RAID provides redundancy, it is not a substitute for regular

data backups.

Key RAM specifications to consider

Physical size: RAM modules vary in physical size based on the type of computer they're used for and the number of pins on the module. Dual Inline Memory Modules (DIMMs) with 168 pins are 5.25 inches long; DIMMs with fewer pins are typically smaller, with more pins meaning a physically larger module. DIMMs are commonly used in desktop computers, whereas laptops typically use Small Outline Dual Inline Memory Modules (SODIMMs). SODIMMs use the same technology but are physically smaller, allowing them to fit in laptops.

Amount of memory: The amount is another important specification to remember. Your

computer can only hold so much RAM and while going over the specified limit won't harm

your computer, your PC will only use as much of it as it was designed to use. Amount is

commonly measured in gigabytes (GB), though older or low-end computers may measure

maximum RAM in megabytes (MB). Some computers have two slots to install memory,

others have four and some have even more.

Type of memory: The memory type is important because this is where the majority of RAM's compatibility issues lie. Multiple variations of Double Data Rate (DDR) memory technology are used in various computers. DDR2 is faster than DDR memory, while DDR3 memory is faster than both. If your computer requires DDR3 memory, DDR2 memory won't work.

Memory Speed: Memory speed is frequently denoted by "PC-" followed by a number that

denotes the peak transfer rate and bandwidth of that type of memory. For example, PC-2400's

peak transfer rate is around 2,400 megabytes per second (MB/s). The peak transfer rate

basically

denotes the best performance possible for that memory. "PC2" and "PC3" simply refer to

DDR2 and DDR3 memory, respectively. The specifications may list the memory under a

name known as the "friendly name," which looks something like "DDR3-1066." In this case,

1066 represents the data transfer rate in millions of transfers per second. Altogether, the memory specification may read something like "2 GB PC3-6400 DDR3 SODIMM."
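The relationship between the two naming schemes can be computed directly: for DDR memory on a standard 64-bit (8-byte) data bus, the peak transfer rate in MB/s is the data rate (in millions of transfers per second) times 8. A small sketch (the module names in the comments are illustrative; marketed "PC-" numbers are sometimes rounded):

```python
BUS_WIDTH_BYTES = 8   # standard 64-bit DDR memory data bus

def peak_transfer_rate(data_rate_mts):
    """Peak transfer rate in MB/s from the DDR data rate (MT/s)."""
    return data_rate_mts * BUS_WIDTH_BYTES

# "DDR3-800" transfers 800 million times per second -> "PC3-6400" (6400 MB/s):
rate_800 = peak_transfer_rate(800)     # 6400
# "DDR3-1066" -> about 8528 MB/s, marketed (rounded) as PC3-8500:
rate_1066 = peak_transfer_rate(1066)   # 8528
```

So the "2 GB PC3-6400 DDR3 SODIMM" example above describes a DDR3-800 laptop module with a 6400 MB/s peak transfer rate.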

Memory Specification Factors in a Computer System

When discussing memory specifications in a computer system, several key factors and

specifications are essential to understand. These factors can significantly impact the
performance, efficiency, and capability of a computer. Here are the primary specifications and

factors to consider:

 Type of Memory

 Capacity: Capacity is measured in bytes (kilobytes, megabytes, gigabytes, terabytes) and

indicates how much data can be stored.

 Speed: Measured as the data transfer rate, often quoted in megahertz (MHz) or gigahertz (GHz), this specifies how fast data can be read from or written to memory; higher speeds result in better performance. Speed is also reflected in access time, the time it takes for the memory to retrieve data; lower access times indicate faster memory.

 Bandwidth: Refers to the amount of data that can be transmitted to and from the

memory over a given period. Higher bandwidth allows for greater data throughput,

which can significantly affect performance.

 Latency: refers to the delay before a transfer of data begins following an instruction.

Lower latency is preferable, as it allows for quicker response times.

Checking for memory specification factors;

For Windows:

a. Using System Information Utility

Press Windows Key + R to open the Run dialog.

Type msinfo32 and hit Enter. This will open the System Information window.

In the left pane, select Components and then Memory.


You will find information about the Total Physical Memory, Available Physical Memory, and

more.

b. Using Task Manager

Right-click the Taskbar and select Task Manager or press Ctrl + Shift + Esc to open it.

Go to the Performance tab.

Click on Memory on the left side. Here, you can see details about the total memory size,

memory speed, form factor, number of channels, and usage.

c. Using Command Prompt or PowerShell

Open Command Prompt (cmd) or PowerShell.

Run the command:

wmic memorychip get /format:list

This command will list detailed specifications of each installed memory module, including

capacity, speed, manufacturer, and part number.
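Note that `wmic` is deprecated on recent Windows versions; the PowerShell command `Get-CimInstance Win32_PhysicalMemory` reports the same per-module details. On Linux systems (not covered above), a rough equivalent is:

```shell
# Total and available physical memory on Linux:
grep -E 'MemTotal|MemAvailable' /proc/meminfo

# Per-module details (size, speed, manufacturer, part number); needs root:
sudo dmidecode --type memory
```

The `/proc/meminfo` figures describe memory as the kernel sees it, while `dmidecode` reads the hardware (SMBIOS) tables, so the two views can differ slightly.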


Element 4: Understand input-output functions

Introduction

This learning outcome covers categorizing peripheral devices, explaining input/output (I/O) processing, and explaining the role of the bus interface in I/O and the different modes of data transfer. It also involves prescribing I/O specifications for a user and verifying the I/O specifications of a given computer.

Definition of terms

In computer organization, INPUT-OUTPUT (I/O) refers to the process of exchanging data

between the computer system and external devices. Input devices allow users to input data or

commands into the computer, while output devices display or provide results to the user. The

input-output operation involves transferring data between the computer's memory and

input/output devices.

Peripheral: A peripheral device is an internal or external device that connects to a computer but

does not contribute to the computer's primary function, such as computing. It helps end users

access and use the functionalities of a computer.

Bus: A bus is a subsystem that transfers data between computer components inside a computer

or between computers.

Overview of peripheral devices


A peripheral is a device that can be attached to the computer processor. Peripheral devices can be

external, such as a mouse, keyboard, printer, monitor or scanner. Peripheral devices can also be

internal, such as a CD-ROM drive, DVD-R drive or modem.

Categories of peripheral devices

We have different categories. These include;

 Input devices

 Output devices

 Storage devices

Input Devices

A device that feeds data into a computer processor is called an input device. Input can take a

variety of forms, from commands you enter from the keyboard to data from another computer or

device.

Examples of input devices include the keyboard, mouse, scanner, microphone, webcam, touch screen, and joystick.


Output Devices

A device that shows data from a computer processor is an output device. Output can also appear

in a variety of forms - text, video, graphics, and so on. Examples of common output devices include the monitor, printer, speakers, and projector.
Backing Storage Devices

Backing storage is a device which holds and retains data. These devices allow the user to save

data in a more permanent way than RAM so that data is not lost and may be used at a later time.
Peripheral specifications

These specifications should match the user’s needs. If the user needs to save or store data, a device categorized as storage will be ideal. The user specification may entail a very long list of devices, depending on the special purpose at hand.

I/O Processing

An I/O processor is a processor, separate from the CPU, designed to handle only input/output

processes for a device or the computer. The I/O processor is capable of performing actions

without interruption or intervention from the CPU. The CPU only needs to initiate the I/O

processor by telling it what activity to perform. Once the necessary actions are performed, the

I/O processor then provides the results to the CPU.

Performing these actions allows the I/O processor to act as a bus to the CPU, carrying out activities by

directly interacting with memory and other devices in the computer. A more advanced I/O

processor may also have memory built into it, allowing it to perform actions and activities more

quickly.

Bus Interface unit (BIU)

The BIU takes care of all data and address transfers on the buses for the EU (Execution Unit), such as sending addresses, fetching instructions from the memory, reading data from the ports and the memory, and writing data to the ports and the memory. The EU has no direct connection to the system buses, so this is possible only through the BIU. The EU and BIU are connected by the internal bus.

Bus: a communication system that transfers data between components inside a computer, or between computers.

Interface: hardware circuitry between the microcomputer and the I/O devices. It handles all input/output transfers, connecting (interfacing) the computer and the peripherals.

Types of buses

Each bus defines its set of connectors to physically plug devices, cards or cables together. There

are two types of buses: internal and external. Internal buses are connections to various internal

components. External buses are connections to various external components. There are different

kinds of slots that internal and external devices can connect to.

Internal

Types of Slots

There are many different kinds of internal buses, but only a handful of popular ones. Different computers come with different kinds and numbers of slots. It is important to know what kind and number of slots you have on your computer before you go out and buy a card, so you don’t end up with a card for a slot you don’t have.

PCI
PCI (Peripheral Component Interconnect) is common in modern PCs. This kind of bus is being

succeeded by PCI Express. Typical PCI cards used in PCs include: network cards, sound cards,

modems, extra ports such as USB or serial, TV tuner cards and disk controllers. Video cards

have outgrown the capabilities of PCI because of their higher bandwidth requirements.

PCI Express

PCI Express was introduced by Intel in 2004. It was designed to replace the general-purpose PCI

expansion bus and the AGP graphics card interface. PCI Express is not a shared bus but a point-to-point connection of serial links called lanes. PCI Express cards have higher bandwidth than PCI cards, which makes them better suited for high-end video cards.

PCMCIA

PCMCIA (also referred to as PC Card) is the type of bus used for laptop computers. The name

PCMCIA comes from the group who developed the standard: Personal Computer Memory Card

International Association. PCMCIA was originally designed for computer memory expansion,

but the existence of a usable general standard for notebook peripherals led to many kinds of

devices being made available in this form. Typical devices include network cards, modems, and

hard disks.
AGP

AGP (Accelerated Graphics Port) is a high-speed point-to-point channel for attaching a graphics

card to a computer’s motherboard, primarily to assist in the acceleration of 3D computer

graphics. AGP has been replaced over the past couple years by PCI Express. AGP cards and

motherboards are still available to buy, but they are becoming less common.

Types of Cards

Video Card

A video card (also known as graphics card) is an expansion card whose function is to generate

and output images to a display. Some video cards offer added functions, such as video capture,

TV tuner adapter, the ability to connect multiple monitors, and others. Most video cards share similar components.

They include a graphics processing unit (GPU) which is a dedicated microprocessor optimized

for 3D graphics rendering. It also includes video BIOS that contains the basic program that

governs the video card’s operations and provides the instructions that allow the computer and

software to interface with the card. If the video card is integrated in the motherboard, it may use

the computer RAM memory. If not, it will have its own video memory called Video RAM.
This kind of memory can range from 128MB to 2GB. A video card also has a RAMDAC

(Random Access Memory Digital-to-Analog Converter) which takes responsibility for turning

the digital signals produced by the computer processor into an analog signal which can be

understood by the computer display. Lastly, they all have outputs such as an HD-15 connector

(standard monitor cable), DVI connector, S-Video, composite video or component video.

Sound Card

A sound card is an expansion card that facilitates the input and output of audio signals to/from a

computer under control of computer programs. Typical uses for sound cards include providing

the audio component for multimedia applications such as music composition, editing video or

audio, presentation/education, and entertainment. Many computers have sound capabilities built

in, while others require additional expansion cards to provide for audio capability.

Network Card

A network card is an expansion card that allows computers to communicate over a computer

network. It allows users to connect to each other either by using cables or wirelessly. Although

other network technologies exist, Ethernet has achieved near-ubiquity for a while now. Every
Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored

in ROM carried on the card.

External

Types of Connections

USB

USB (Universal Serial Bus) is a serial bus standard to interface devices. USB was designed to

allow many peripherals to be connected using a single standardized interface socket and to

improve the plug-and-play capabilities by allowing devices to be connected and disconnected

without rebooting the computer.

Other convenient features include providing power to low-consumption devices without the need

for an external power supply and allowing many devices to be used without requiring

manufacturer specific, individual device drivers to be installed. USB is by far the dominating bus

for connecting external devices to your computer.


Firewire

Firewire (technically known as IEEE 1394 and also known as i.LINK for Sony) is a serial bus

interface standard for high-speed communications and isochronous real-time data transfer,

frequently used in a personal computer. Firewire has replaced Parallel ports in many

applications. It has been adopted as the High Definition Audio-Video Network Alliance (HANA)

standard connection interface for A/V (audio/visual) component communication and control.

Almost all modern digital camcorders have included this connection.

PS/2

The PS/2 connector is used for connecting some keyboards and mice to a PC compatible

computer system. The keyboard and mouse interfaces are electrically similar with the main

difference being that open collector outputs are required on both ends of the keyboard interface

to allow bidirectional communication. If a PS/2 mouse is connected to a PS/2 keyboard port, the

mouse may not be recognized by the computer depending on configuration.


Modes of Data Transfer

There are different modes of data transfer. We are going to have a look at the following:

 Programmed I/O

 Interrupt initiated I/O

 Direct memory access(DMA)

a. Programmed I/O

Programmable I/O is one of the I/O techniques other than the interrupt-driven I/O and direct

memory access (DMA). The programmed I/O is the simplest type of I/O technique for the

exchanges of data or any types of communication between the processor and the external

devices. With programmed I/O, data is exchanged between the processor and the I/O module.

The processor executes a program that gives it direct control of the I/O operation, including

sensing device status, sending a read or write command, and transferring the data. When the

processor issues a command to the I/O module, it must wait until the I/O operation is complete.

If the processor is faster than the I/O module, this is wasteful of processor time. The overall

operation of the programmed I/O can be summarized as follows:

 The processor is executing a program and encounters an instruction relating to I/O

operation.
 The processor then executes that instruction by issuing a command to the appropriate I/O

module.

 The I/O module will perform the requested action based on the I/O command issued by

the processor (READ/WRITE) and set the appropriate bits in the I/O status register.

 The processor will periodically check the status of the I/O module until it finds that the

operation is complete.
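The four steps above can be sketched with a simulated device. The class and method names below are invented purely for illustration (real programmed I/O reads and writes device registers); the point is the busy-wait loop, which is where processor time is wasted:

```python
class FakeDevice:
    """Simulated I/O module for illustration (not a real driver API)."""
    def __init__(self, payload):
        self._payload = payload
        self._ticks = 0
    def issue_read(self):
        self._ticks = 3           # device needs 3 "cycles" to fetch the data
    def status_ready(self):
        if self._ticks > 0:
            self._ticks -= 1      # device makes progress each time we poll
        return self._ticks == 0
    def read_data(self):
        return self._payload

def programmed_io_read(dev):
    dev.issue_read()                  # 1) CPU issues the READ command
    polls = 0
    while not dev.status_ready():     # 2) CPU busy-waits, checking status
        polls += 1                    #    (this is the wasted processor time)
    return dev.read_data(), polls     # 3) CPU transfers the data itself

data, polls = programmed_io_read(FakeDevice(b"hello"))
```

Every iteration of the `while` loop is a status check the processor performs instead of doing useful work, which is exactly the inefficiency that interrupt-driven I/O and DMA are designed to remove.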

To execute an I/O-related instruction, the processor issues an address, specifying the particular

I/O module and external device, and an I/O command. There are four types of I/O commands

that an I/O module may receive when it is addressed by a processor:


 Control: Used to activate a peripheral and tell it what to do. For example, a magnetic tape

unit may be instructed to rewind or to move forward one record. These commands are

tailored to the particular type of peripheral device.

 Test: Used to test various status conditions associated with an I/O module and its

peripherals. The processor will want to know that the peripheral of interest is powered on

and available for use. It will also want to know if the most recent I/O operation is

completed and if any errors occurred.

 Read: Causes the I/O module to obtain an item of data from the peripheral and place it in

an internal buffer. The processor can then obtain the data item by requesting that the I/O

module place it on the data bus.

 Write: Causes the I/O module to take an item of data (byte or word) from the data bus

and subsequently transmit that data item to the peripheral.

Advantages and disadvantages of programmed I/O

The main advantage of programmed I/O is its simplicity: it requires little extra hardware and the control software is straightforward. Its main disadvantage is that the processor must busy-wait, periodically checking the device status, which wastes processor time whenever the processor is faster than the I/O module.

b. Interrupt-initiated I/O


Interrupt I/O is a way of controlling input/output activity whereby a peripheral or terminal that

needs to make or receive a data transfer sends a signal. This will cause a program interrupt to be

set at a time appropriate to the priority level of the I/O interrupt. The processor then enters an interrupt service routine. The function of the routine will depend

upon the system of interrupt levels and priorities that is implemented in the processor. The

interrupt technique requires more complex hardware and software, but makes far more efficient

use of the computer’s time and capacities.

For input, the device interrupts the CPU when new data has arrived and is ready to be retrieved

by the system processor. The actual actions to perform depend on whether the device uses I/O

ports or memory mapping.

For output, the device delivers an interrupt either when it is ready to accept new data or to

acknowledge a successful data transfer. Memory-mapped and DMA-capable devices usually


generate interrupts to tell the system they are done with the buffer. Here the CPU works on its

given tasks continuously. When an input is available, such as when someone types a key on the

keyboard, then the CPU is interrupted from its work to take care of the input data. The CPU can

work continuously on a task without checking the input devices, allowing the devices themselves

to interrupt it as necessary.

Basic Operations of Interrupt

1. CPU issues read command.

2. I/O module gets data from peripheral whilst CPU does other work.

3. I/O module interrupts CPU.

4. CPU requests data.

5. I/O module transfers data.

Interrupt Processing
 A device driver initiates an I/O request on behalf of a process.

 The device driver signals the I/O controller for the proper device, which initiates the

requested I/O.

 The device signals the I/O controller that it is ready to retrieve input, that the output is complete, or that an error has been generated.

 The CPU receives the interrupt signal on the interrupt-request line and transfers control to the interrupt handler routine.

 The interrupt handler determines the cause of the interrupt, performs the necessary processing, and executes a “return from interrupt” instruction.

 The CPU returns to the execution state prior to the interrupt being signaled.

 The CPU continues processing until the cycle begins again.
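As a miniature model of this flow (all names below are made up for illustration), the sketch lets the CPU keep computing and only hands control to a handler routine when an interrupt is pending, instead of polling the device on every step:

```python
pending_interrupts = []                    # models the interrupt-request line

def device_completes_io(data):
    """Device side: raise an interrupt to signal that data is ready."""
    pending_interrupts.append(data)

def interrupt_handler(data, received):
    """Determine the cause, retrieve the data, then 'return from interrupt'."""
    received.append(data)

def cpu_run(tasks):
    done, received = [], []
    for i, task in enumerate(tasks):
        done.append(task * 2)                   # CPU works on its own tasks
        if i == 1:
            device_completes_io("key press")    # input arrives mid-computation
        while pending_interrupts:               # control transfers to the handler,
            interrupt_handler(pending_interrupts.pop(0), received)
        # ...and the CPU resumes its prior execution state
    return done, received

done, received = cpu_run([1, 2, 3])
```

Note that the CPU's own results are unaffected by when the interrupt arrives; the handler simply runs between instructions of the main workload, which is the key contrast with the polling loop of programmed I/O.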


c. Direct Memory Access (DMA)

Unlike programmed I/O and interrupt-driven I/O, Direct Memory Access is a technique for transferring data between main memory and an external device without passing it through the CPU.

It is a way to improve processor utilization and the I/O transfer rate by taking over the job of transferring data from the processor and letting the processor do other tasks. This technique overcomes the drawbacks of the other two I/O techniques: the time-consuming process of issuing commands for data transfer, and tying up the processor in the transfer while other processing is neglected. The DMA method is most efficient when a large volume of data has to be transferred.

For DMA to be implemented, the processor has to share its system bus with the DMA module.

Therefore, the DMA module must use the bus only when the processor does not need it, or it

must force the processor to suspend operation temporarily. The latter technique is more commonly used and is referred to as cycle stealing.


Basic Operation of DMA

When the processor wishes to read or write a block of data, it issues a command to the DMA module by sending some information to the module. The information includes:

 Read or write command, sending through read and write control lines.

 Number of words to be read or written, communicated on the data lines and stored in the

data count register.

 Starting location in memory to read from or write to, communicated on data lines and

stored in the address register.

 Address of the I/O device involved, communicated on the data lines.
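As a rough sketch of the four pieces of information listed above (the field names here are invented for illustration, not taken from any real DMA controller), the command sent to the DMA module can be modeled as a small record:

```python
from dataclasses import dataclass

@dataclass
class DMACommand:
    # Read or write command, sent over the read/write control lines
    is_write: bool
    # Number of words to transfer, held in the data count register
    word_count: int
    # Starting location in memory, held in the address register
    start_address: int
    # Address of the I/O device involved in the transfer
    device_address: int

# Example: read 512 words from device 0x3F8 into memory at 0x1000
cmd = DMACommand(is_write=False, word_count=512,
                 start_address=0x1000, device_address=0x3F8)
print(cmd.word_count)  # 512
```

Once such a command is issued, the processor is free to continue with other work until the DMA module raises its completion interrupt.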


After the information is sent, the processor continues with other work. The DMA module then

transfers the entire block of data directly to or from memory without going through the

processor. When the transfer is complete, the DMA module sends an interrupt signal to the

processor to inform that it has finished using the system bus.

Advantages & Disadvantages of DMA

The main advantage of DMA is that it frees the processor from block data transfers, giving a high transfer rate for large volumes of data. The main disadvantages are the extra hardware required (the DMA controller) and the fact that cycle stealing can slow the processor's own use of the bus.

Key I/O Specifications for a computer

 Keyboard – Layout, Language and ease of use.

 Mouse – Type (Wireless, wired)

 Monitor – Resolution, Size

 Printer – Speed (pages per minute), colour or non-colour, resolution (dpi), type (impact vs. non-impact)


Element 5: Understand computer arithmetic and logic

Introduction

This learning outcome involves explaining number systems, demonstrating Integer and floating

point representations according to IEEE (Institute of Electrical and Electronics Engineers)

standard, explaining integer and floating point arithmetic, logic operators and logic operations

and demonstrating methods of representing logic operations.

The Number System

The number system is a structured way of representing and classifying numbers. It provides a

framework for mathematical operations and helps us understand the relationships between

different types of numbers. We have different types of numbering systems. They include:

1. Decimal number system

2. The binary number system

3. Octal number system

4. Hexadecimal number system

Decimal Number System

The decimal numbering system, also known as the base-10 system, is the most widely used

numbering system in everyday life. It is the standard system for denoting integer and non-integer

numbers. The decimal system is a positional numeral system based on the number 10. It uses ten

digits to represent numbers: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Each digit in a decimal number has a

position that reflects its value based on powers of 10. The decimal system can also represent

negative numbers by prefixing them with a minus sign (e.g., -3.75). For example, say we have three numbers: 734, 971 and 207. The value of 7 in each number is different:

 In 734, the value of 7 is 7 hundreds, or 700, or 7 × 100, or 7 × 10²

 In 971, the value of 7 is 7 tens, or 70, or 7 × 10, or 7 × 10¹

 In 207, the value of 7 is 7 units, or 7, or 7 × 1, or 7 × 10⁰

Binary Number System

The binary number system, also known as base-2, is a numeral system that uses only two digits:

0 and 1. It is the foundation of digital electronics and computing, as it directly relates to the

on/off states of electronic circuits. Each binary digit is also called a bit. The binary system is

based on the number 2, which means it uses two symbols (0 and 1) to represent values. Similar to the decimal system, each digit in a binary number (called a bit) has a positional value based on powers of 2.

In any binary number, the rightmost digit is called the least significant bit (LSB) and the leftmost digit is called the most significant bit (MSB).

Conversion of binary number to decimal number


The decimal equivalent of a binary number is the sum of the products of each digit with its positional value.

Example;

11010₂ = 1×2⁴ + 1×2³ + 0×2² + 1×2¹ + 0×2⁰

= 16 + 8 + 0 + 2 + 0

= 26₁₀

Conversion of decimal number to binary number

To convert a decimal number to binary, repeatedly divide the number by 2 and record the remainders, then read the remainders from bottom to top. Example:

Convert 13₁₀ to binary

13 ÷ 2 = 6 remainder 1

6 ÷ 2 = 3 remainder 0

3 ÷ 2 = 1 remainder 1

1 ÷ 2 = 0 remainder 1

Reading the remainders from bottom to top yields 1101.

Therefore:

13₁₀ = 1101₂
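The two conversion procedures above can be sketched in Python (a minimal illustration; Python's built-in bin() and int(s, 2) do the same job):

```python
def decimal_to_binary(n):
    """Repeatedly divide by 2, collecting remainders; read them bottom to top."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder of this division step
        n //= 2                  # the quotient carries into the next step
    return "".join(reversed(bits))  # bottom-to-top order

def binary_to_decimal(b):
    """Sum each bit multiplied by its positional power of 2."""
    return sum(int(bit) * 2**i for i, bit in enumerate(reversed(b)))

print(decimal_to_binary(13))       # '1101'
print(binary_to_decimal("11010"))  # 26
```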
Computer memory is measured in terms of how many bits it can store. Here is a chart for

memory capacity conversion.

1 byte (B) = 8 bits

1 Kilobyte (KB) = 1024 bytes

1 Megabyte (MB) = 1024 KB

1 Gigabyte (GB) = 1024 MB

1 Terabyte (TB) = 1024 GB

1 Petabyte (PB) = 1024 TB

1 Exabyte (EB) = 1024 PB

1 Zettabyte (ZB) = 1024 EB

1 Yottabyte (YB) = 1024 ZB

Octal Number System

The octal number system, also known as base-8, is a numeral system that uses eight symbols to

represent values. These symbols are the digits from 0 to 7. The octal system is less common than

the decimal (base-10) system and the binary (base-2) system but has been used in specific

applications, especially in computing and digital electronics. Similar to other numeral systems

like decimal and binary, each digit in an octal number has a positional value based on powers of

8.
Decimal equivalent of any octal number is the sum of the product of each digit with its positional

value.

726₈ = 7×8² + 2×8¹ + 6×8⁰

= 448 + 16 + 6

= 470₁₀

Conversion

Octal to Decimal

To convert an octal number to decimal, multiply each digit by the corresponding power of 8 and

sum the results. Example;

Convert 237₈ to decimal

Solution

237₈ = (2 × 8²) + (3 × 8¹) + (7 × 8⁰) = 128 + 24 + 7 = 159

Therefore:

237₈ = 159₁₀

Decimal to Octal
To convert a decimal number to octal, repeatedly divide the number by 8 and record the remainders, then read the remainders from bottom to top. Example:

Convert 159₁₀ to octal

Solution

159 ÷ 8 = 19 remainder 7

19 ÷ 8 = 2 remainder 3

2 ÷ 8 = 0 remainder 2

Reading the remainders from bottom to top yields 237₈.

Binary to Octal

To convert a binary number to octal, group the binary digits into sets of three (starting from the

right) and convert each group to its octal equivalent. Example;

Convert 1010111₂ to octal

Solution

Grouping the binary digits from the right into sets of three (padding the leftmost group with zeros) yields:

001

010

111

Converting each group to its octal equivalent:

001₂ = 1₈

010₂ = 2₈

111₂ = 7₈

Therefore:

1010111₂ = 127₈
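The three octal conversions above can be checked with a short Python sketch (Python's int() accepts an explicit base, which handles the octal-to-decimal direction directly):

```python
# Octal to decimal: each digit times its power of 8
assert int("237", 8) == 2*8**2 + 3*8**1 + 7*8**0 == 159

# Decimal to octal by repeated division by 8
def decimal_to_octal(n):
    digits = []
    while n > 0:
        digits.append(str(n % 8))  # remainder is the next octal digit
        n //= 8
    return "".join(reversed(digits)) or "0"

print(decimal_to_octal(159))  # '237'

# Binary to octal: pad to a multiple of 3 bits, convert each group
b = "1010111"
b = b.zfill((len(b) + 2) // 3 * 3)  # '001010111'
octal = "".join(str(int(b[i:i+3], 2)) for i in range(0, len(b), 3))
print(octal)  # '127'
```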

Hexadecimal Number System

The hexadecimal number system, also known as base-16, is a numeral system that uses sixteen

distinct symbols to represent values. These symbols are the digits from 0 to 9 (representing

values zero through nine) and the letters A, B, C, D, E, and F (representing values ten through

fifteen). The hexadecimal system is widely used in computing and digital electronics due to its

efficiency and compact representation of binary data. The hexadecimal system is based on the

number 16, which allows it to use sixteen symbols. The symbols are:

0, 1, 2, 3, 4, 5, 6, 7, 8, 9 (representing values 0 to 9)

A (10), B (11), C (12), D (13), E (14), F (15) (representing values 10 to 15)

Converting numbers between the hexadecimal (base-16) and decimal (base-10) number systems

involves understanding the value of each digit position in the respective bases. Below are the

methods for converting from hexadecimal to decimal and vice versa.

Convert Hexadecimal to Decimal


For a hexadecimal number, each position represents a power of 16, starting from 0 at the rightmost digit. Convert each hexadecimal digit to its decimal equivalent (0-9 remain the same, A=10, B=11, C=12, D=13, E=14, F=15), multiply each digit by 16 raised to its position, and sum the results. Example: Convert 2F3₁₆ to decimal

Solution

Assign powers of 16:

3 is in the 16⁰ position (1s place)

F (15 in decimal) is in the 16¹ position (16s place)

2 is in the 16² position (256s place)

This yields:

(2 × 16²) + (15 × 16¹) + (3 × 16⁰) = 512 + 240 + 3 = 755

Thus 2F3₁₆ = 755₁₀

Convert Decimal to Hexadecimal

Divide the decimal number by 16. Keep track of the remainder after each division; this will form

the hexadecimal digits. Repeat the division with the quotient until the quotient is zero. The

hexadecimal digits are read in reverse order (the last remainder is the most significant). Example:

Convert 255₁₀ to hexadecimal

Solution

255 ÷ 16 = 15 remainder 15 (which is F)

15 ÷ 16 = 0 remainder 15 (which is also F)

The remainders collected from bottom to top give the hexadecimal result.

So, 255 in decimal is represented as FF in hexadecimal.

Binary to Hexadecimal

To convert a binary number to hexadecimal number, these steps are followed;

 Starting from the least significant bit, make groups of four bits.

 If the leftmost group has fewer than four bits, 0s can be added before the most significant bit.

 Convert each group into its equivalent hexadecimal number.

Example;

Convert 10110110101₂ to hexadecimal

Solution

0101 1011 0101

Now, we convert each group of four binary digits to its hexadecimal equivalent:

0101 = 5

1011 = B

0101 = 5

Therefore:

10110110101₂ = 5B5₁₆
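The hexadecimal conversions above can likewise be sketched in Python (the built-ins hex() and int(s, 16) do the same work; this spells the steps out):

```python
HEX_DIGITS = "0123456789ABCDEF"

# Hexadecimal to decimal: each digit times its power of 16
assert int("2F3", 16) == 2*16**2 + 15*16**1 + 3*16**0 == 755

# Decimal to hexadecimal by repeated division by 16
def decimal_to_hex(n):
    digits = []
    while n > 0:
        digits.append(HEX_DIGITS[n % 16])  # remainder is the next hex digit
        n //= 16
    return "".join(reversed(digits)) or "0"

print(decimal_to_hex(255))  # 'FF'

# Binary to hexadecimal: pad to a multiple of 4 bits, convert each group
b = "10110110101".zfill(12)  # '010110110101'
hex_str = "".join(HEX_DIGITS[int(b[i:i+4], 2)] for i in range(0, len(b), 4))
print(hex_str)  # '5B5'
```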

Number System Relationship

American Standard Code for Information Interchange (ASCII)

ASCII, which stands for American Standard Code for Information Interchange, is a character

encoding standard that uses numerical values to represent characters and symbols. It plays a

critical role in computing and digital communication, particularly in defining how text is

represented in a computer's memory and transmitted between devices. ASCII consists of 128

predefined characters, which include:


 Standard English letters (A-Z, a-z)

 Digits (0-9)

 Control characters (like newline, carriage return)

 Punctuation marks (e.g., . , ; ! ?)

 Special symbols (e.g., @, #, $, %, &)

Each ASCII character is represented by a 7-bit binary number, allowing for 128 unique values

(from 0 to 127). For example, the ASCII value for 'A' is 65, while 'a' is 97, and '0' is 48.

Although ASCII is a 7-bit encoding scheme, it is often stored in 8 bits (1 byte), with the eighth

bit typically used for parity or extended character sets in some implementations.

ASCII was designed to facilitate the exchange of information between different manufacturers

and computer systems, helping establish uniformity in text representation.
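In Python, the built-ins ord() and chr() expose these ASCII codes directly, which makes the values quoted above easy to verify:

```python
# ord() gives a character's numeric code; chr() is the inverse mapping
assert ord('A') == 65
assert ord('a') == 97
assert ord('0') == 48
assert chr(65) == 'A'

# ASCII is a 7-bit code, so every ASCII character's value fits in 0..127
for ch in "Hello, World!":
    assert 0 <= ord(ch) <= 127

print("all ASCII checks passed")
```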


Unicode

Unicode is a standardized character encoding system designed to provide a consistent method for

representing text from different languages and writing systems across computers and networks. It

addresses the limitations of older character encodings, such as ASCII, which primarily focused

on the English language and had a limited number of characters. Unicode was developed to

create a universal character set that could accommodate all written languages, symbols, and

special characters used around the world. It aims to support global communication effectively by

ensuring that text can be represented and manipulated in digital form consistently. Unicode can

represent over 1.1 million characters, including scripts for nearly all languages, mathematical
symbols, emojis, and various special characters. Each character in Unicode is assigned a unique

identifier called a code point, represented in the format "U+XXXX," where "XXXX" is a

hexadecimal number. For example:

Latin letter 'A' is U+0041.

Emoji for a smiling face is U+1F600.
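The same built-ins work for any Unicode code point; the small helper below (hypothetical, written here just for illustration) formats a character in the U+XXXX notation used above:

```python
# A code point U+XXXX is simply a hexadecimal number
assert ord('A') == 0x0041        # Latin letter 'A' is U+0041
assert chr(0x1F600) == '😀'      # U+1F600 is the smiling face emoji

def code_point(ch):
    """Format a character as its 'U+XXXX' Unicode code point."""
    return f"U+{ord(ch):04X}"

print(code_point('A'))   # U+0041
print(code_point('😀'))  # U+1F600
```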

IEEE-based Integer and Floating point representations

The IEEE 754 standard defines the formats used for representing floating-point numbers in computer systems; integers, by contrast, are typically represented with fixed-width binary formats such as two's complement. Let's break down both integer representation and floating-point representation.

Integers

An integer is a whole number (not a fraction) that can be positive, negative, or zero. Therefore,

the numbers 10, 0, -25, and 5,148 are all integers. Integers are a commonly used data type in

computer programming. For example, whenever a number is being incremented, such as within a

"for loop" or "while loop," an integer is used. Integers are also used to determine an item's

location within an array.

When two integers are added, subtracted, or multiplied, the result is also an integer. However,

when one integer is divided by another, the result may be an integer or a fraction. For example,

6 divided by 3 equals 2, which is an integer, but 6 divided by 4 equals 1.5, which contains a

fraction. Decimal numbers may either be rounded or truncated to produce an integer result.
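A quick check of this behavior, using Python's division operators as the illustration:

```python
# Adding, subtracting, or multiplying integers always yields an integer
assert 6 + 3 == 9 and 6 - 3 == 3 and 6 * 3 == 18

# Division may produce a fraction
assert 6 / 3 == 2.0   # exact result, though Python returns a float
assert 6 / 4 == 1.5   # contains a fraction

# The fractional result can be truncated or rounded to an integer
assert 6 // 4 == 1        # truncation (floor division)
assert round(6 / 4) == 2  # rounding
```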

Floating Point numbers


As the name implies, floating point numbers are numbers that contain floating decimal points.

For example, the numbers 5.5, 0.001, and -2,345.6789 are floating point numbers. Computers

recognize real numbers that contain fractions as floating point numbers. When a calculation

includes a floating point number, it is called a "floating point calculation." Older computers used

to have a separate floating point unit (FPU) that handled these calculations, but now the FPU is

typically built into the computer's CPU.

Floating point arithmetic

Floating-point arithmetic is a method of representing real numbers (including fractions) in a way

that maintains a wide range of values by using a fixed number of digits (binary bits) for the

representation. The IEEE 754 standard specifies how floating-point arithmetic is performed to

ensure consistency and precision across different computing systems. Floating-point numbers in

the IEEE 754 format are represented using three primary components:

Sign bit (S): Indicates whether the number is positive (0) or negative (1).

Exponent (E): Represents the scale or magnitude of the number, using a biased exponent.

Mantissa or Fraction (F): Represents the significant digits of the number.
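These three fields can be inspected directly in Python, whose float is an IEEE 754 double precision number (1 sign bit, 11 exponent bits biased by 1023, and 52 fraction bits). This sketch uses the standard struct module to reinterpret the bytes of a float as an integer:

```python
import struct

def float_bits(x):
    """Unpack a double into its sign, biased exponent, and fraction fields."""
    (bits,) = struct.unpack(">Q", struct.pack(">d", x))
    sign = bits >> 63                  # 1 sign bit
    exponent = (bits >> 52) & 0x7FF    # 11 exponent bits, biased by 1023
    fraction = bits & ((1 << 52) - 1)  # 52 fraction (mantissa) bits
    return sign, exponent, fraction

# -1.0 = (sign 1) * 1.0 * 2^(1023 - 1023)
print(float_bits(-1.0))  # (1, 1023, 0)
# 2.0 = (sign 0) * 1.0 * 2^(1024 - 1023)
print(float_bits(2.0))   # (0, 1024, 0)
```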

Basic Operations

The main arithmetic operations are addition, subtraction, multiplication, and division. Each of

these operations involves several key steps:

Addition and Subtraction


Align the Exponents: If the exponents of the two numbers differ, adjust the smaller exponent and

its mantissa accordingly by shifting the mantissa. This may involve padding with zeros.

Perform the Operation: After aligning the exponents, perform the addition or subtraction on the

mantissas.

Normalize the Result: Ensure that the result fits into the normalized format. This involves

adjusting the mantissa and exponent so that the mantissa is in the range [1.0, 2.0).

Round the Result: Apply rounding (e.g., round to nearest) to maintain finite precision.

Multiplication

Multiply the Mantissas: Multiply the mantissas of the two numbers.

Add the Exponents: Add the exponents of the two numbers. Remember to subtract the bias from

the result.

Normalize the Result: Normalize the result if necessary.

Round the Result: Apply rounding as in addition/subtraction.

Division

Divide the Mantissas: Perform division on the mantissas.

Subtract the Exponents: Subtract the exponent of the divisor from the exponent of the dividend,

remembering to consider the bias.

Normalize and Round: Normalize the result and round as described earlier.

Important Considerations
Precision and Rounding: Floating-point arithmetic is subject to rounding errors due to the finite

precision of mantissas and exponents. Common rounding modes include round to nearest, round

toward zero, round toward positive infinity, and round toward negative infinity.

Overflow and Underflow: Overflow occurs when a result’s magnitude exceeds the maximum

representable number, leading to positive or negative infinity. Underflow occurs when a result’s

magnitude is too small to be represented, leading to zero (or denormalized numbers).

Examples;

Consider the learning guides for this unit
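As one concrete illustration of rounding error, overflow, and underflow, sketched in Python (whose float type is an IEEE 754 double):

```python
import math
import sys

# Rounding error: 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3
print(0.1 + 0.2)  # 0.30000000000000004
assert 0.1 + 0.2 != 0.3
assert math.isclose(0.1 + 0.2, 0.3)

# Overflow: a magnitude beyond the largest representable double
# becomes infinity
print(sys.float_info.max * 2)  # inf

# Underflow: a magnitude too small to represent collapses to zero
print(sys.float_info.min / 2**60)  # 0.0
```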

Logic operators and Logic Operations

Logic operators are fundamental components in digital circuits and programming that allow for

the manipulation of binary values—specifically, true/false values (1/0). They are primarily used

in Boolean algebra, which underlies computer logic, control flow, and conditional statements in

programming languages. We have a range of the logic operators which include OR, AND, NOT,

NOR, XOR and XNOR.

OR Operator

The OR logic operation returns True if any of its inputs is True. If all inputs are False, the output is also False. In computer programming, the OR operation is usually written as || (two vertical bars). In Boolean algebra, the OR of two inputs A and B can be written as A+B. Note: do not mistake the OR operation for arithmetic addition, even though they both use the "+" symbol; they are distinct operations. Below is the truth table and the circuit diagram of an OR logic gate.

Truth table
The circuit diagram

AND operator

The AND logic operation returns True only if all of its inputs are True. If any input is False, the output is False. In computer programming, the AND operation is usually written as && (two ampersands). In Boolean algebra, the AND of two inputs A and B can be written as AB. Below is the truth table and circuit diagram of an AND logic gate.
Circuit diagram

NOT Operator

The NOT logic operation returns True if its input is False, and False if its input is True. In

computer programming, the NOT operation is usually written as ! (an exclamation mark). In Boolean algebra, the NOT of an input A can be written as Ā (A with an overscore). Below is the circuit diagram of a NOT logic gate.


NOR Operator

The NOR logic operation (which stands for "NOT OR") returns True only if all of its inputs are False, and False if any of its inputs is True. In Boolean algebra, the NOR of two inputs A and B can be written as A+B with an overscore. NOR has the distinction of being one of two "universal" logic gates, because any other logic operation can be created using only NOR gates. (The other universal logic gate is NAND.) Below is the circuit diagram of a NOR logic gate.

XOR Operator
The XOR logic operation (which stands for "Exclusive OR") returns True if its inputs differ, and False if they are the same. In other words, if its inputs are a combination of true and false, the output of XOR is true; if its inputs are all true or all false, the output of XOR is false. In Boolean algebra, the XOR of two inputs A and B can be written as A⊕B (the XOR symbol resembles a plus sign inside a circle). Below is the XOR operation circuit diagram.

XNOR Operator

The XNOR logic operation (which stands for "Exclusive NOR") returns True if its inputs are the same, and False if they differ. In other words, if its inputs are a combination of true and false, the output of XNOR is false; if its inputs are all true or all false, the output of XNOR is true. In Boolean algebra, the XNOR of two inputs A and B can be written as A⊕B with an overscore. Below is the XNOR operation circuit diagram.
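The gate definitions above can be checked with a short Python sketch that evaluates each two-input gate over all four input combinations and prints its truth-table output column (illustrative only):

```python
# Each two-input gate as a Python expression over boolean inputs A, B
GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "NOR":  lambda a, b: not (a or b),
    "XOR":  lambda a, b: a != b,
    "XNOR": lambda a, b: a == b,
}

for name, gate in GATES.items():
    # Output column for inputs (0,0), (0,1), (1,0), (1,1)
    column = [int(gate(a, b)) for a in (False, True) for b in (False, True)]
    print(name, column)

# NOT takes a single input
assert (not True) is False and (not False) is True
```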


Karnaugh map

A Karnaugh map (K-map) is a graphical representation used to simplify Boolean expressions and

assist in minimizing the logic functions of digital circuits. Developed by Maurice Karnaugh in

1953, K-maps offer a systematic way to minimize complex logic expressions without resorting to

Boolean algebraic manipulation. The Karnaugh map can also be described as a special

arrangement of a truth table.

A K-map consists of a grid-like structure where each cell represents a possible combination of

input variables. The arrangement of cells follows the Gray code order, where only one bit

changes between adjacent cells, which helps in identifying groups of 1s (true values) that can be

combined or simplified.

To use a Karnaugh map, you fill in the map based on the output of the given Boolean function for each combination of input variables:

1s: Represent the minterms where the output is true (1).

0s: Represent the maxterms where the output is false (0).
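The Gray code ordering that K-map rows and columns follow can be generated with the standard binary-reflected formula; this small sketch (not part of the original notes) verifies the single-bit-change property between adjacent cells:

```python
def gray_code(n_bits):
    """Binary-reflected Gray code: adjacent entries differ in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** n_bits)]

# Two-variable K-map row/column labels
labels = [format(g, "02b") for g in gray_code(2)]
print(labels)  # ['00', '01', '11', '10']

# Verify that each adjacent pair of labels changes only one bit
for a, b in zip(labels, labels[1:]):
    assert sum(x != y for x, y in zip(a, b)) == 1
```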
