Chapter 1
Computer System Organization
Overview:
This chapter provides a foundational understanding of computer system organization and
architecture. It introduces the key concepts, components, and principles that form the basis of
computer systems. This chapter sets the stage for exploring the intricate details of how computer
systems are structured, how they function, and why their organization and architecture are
crucial.
Objective:
At the end of this chapter, students will be able to:
1. Identify the difference between computer system organization and computer
architecture.
2. Understand the structure, components, and design principles of organization and
architecture of computer systems.
Computer system organization and architecture refer to the structure, components, and design
principles of a computer system. While computer organization focuses on the physical aspects
and arrangement of hardware components, computer architecture deals with the conceptual
models and high-level design principles that define the behavior and functionality of a computer
system. Understanding both aspects is crucial for comprehending how computer systems are
structured and how they execute programs efficiently.
1. Central Processing Unit (CPU): The CPU is responsible for executing instructions and
performing calculations. It consists of the arithmetic logic unit (ALU) for arithmetic and
logical operations, control unit for instruction execution, and registers for temporary data
storage.
2. Memory Hierarchy: The memory hierarchy includes primary memory (e.g., RAM) and
secondary memory (e.g., hard drives). It explores the organization and management of
different levels of memory, such as caches, main memory, and virtual memory.
3. Input/Output (I/O) Subsystems: I/O subsystems facilitate communication between the
computer system and external devices. This includes input devices (e.g., keyboards, mice)
and output devices (e.g., displays, printers). The organization and management of I/O
devices and interfaces are essential considerations.
4. Bus Systems: Buses are communication channels that transfer data, addresses, and
control signals between different components of the computer system. This includes the
data bus, address bus, and control bus. The organization and protocols of bus systems
impact the overall system performance.
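The interplay of the control unit, ALU, registers, and memory described above can be sketched as a toy fetch-decode-execute loop. This is a minimal illustration only: the four opcodes (LOAD, ADD, STORE, HALT) and the single accumulator register are invented for this example and do not correspond to any real instruction set.

```python
# Illustrative sketch of a CPU's fetch-decode-execute cycle.
# LOAD/ADD/STORE/HALT are hypothetical opcodes, not a real ISA.

def run(program, memory):
    """Control unit fetches each instruction, the ALU performs the
    arithmetic, a register holds the intermediate value, and memory
    supplies and receives operands."""
    registers = {"ACC": 0}              # a single accumulator register
    pc = 0                              # program counter (control unit)
    while True:
        opcode, operand = program[pc]   # fetch
        pc += 1
        if opcode == "LOAD":            # decode + execute
            registers["ACC"] = memory[operand]
        elif opcode == "ADD":           # ALU performs the addition
            registers["ACC"] += memory[operand]
        elif opcode == "STORE":
            memory[operand] = registers["ACC"]
        elif opcode == "HALT":
            return memory

mem = {0: 2, 1: 3, 2: 0}
program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", None)]
run(program, mem)
print(mem[2])  # 5: the sum of memory cells 0 and 1
```

Even this toy loop shows the division of labor: the control unit sequences instructions, while the ALU and registers do the actual computation.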
Computer system organization is the way in which a system is structured: the operational units and the interconnections between them that realize the architectural specification. It is the realization of the abstract model, and it deals with how the system is implemented.
1. Instruction Set Architecture (ISA): ISA defines the interface between the hardware and
software components of a computer system. It specifies the set of instructions that a CPU
can execute and how they are encoded. Different ISAs have varying instruction formats
and addressing modes.
2. Pipelining and Parallelism: Pipelining involves dividing the execution of instructions into
stages, enabling multiple instructions to overlap and improve system throughput.
Parallelism explores techniques such as multi-core processors, vector processing, and
parallel computing to achieve faster execution.
3. Memory Hierarchy and Caching: Memory hierarchy design determines the organization
of different levels of memory, aiming to optimize memory access time and capacity.
Caching techniques, such as cache hierarchies and cache coherence protocols, are used
to minimize memory access latency.
4. Virtual Memory: Virtual memory allows a computer system to use disk storage as an
extension of main memory, enabling the execution of larger programs. It involves
techniques such as paging, segmentation, and demand paging.
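The caching idea in point 3 can be illustrated with a minimal direct-mapped cache simulation. This is a sketch under simplifying assumptions: four one-word lines, no block offset, and a made-up address stream.

```python
# Illustrative sketch: a direct-mapped cache with 4 one-word lines.
# The line count and address stream are invented example values.

NUM_LINES = 4

def simulate(addresses):
    """Each address maps to line (address % NUM_LINES); the remaining
    bits form the tag. A miss loads the block, evicting the old tag."""
    lines = [None] * NUM_LINES          # stored tag per cache line
    hits = misses = 0
    for addr in addresses:
        index = addr % NUM_LINES        # which line the address maps to
        tag = addr // NUM_LINES         # identifies the block in that line
        if lines[index] == tag:
            hits += 1
        else:
            misses += 1
            lines[index] = tag          # fill the line on a miss
    return hits, misses

# Addresses 0 and 8 map to the same line (index 0) and evict each other,
# so only the repeated access to address 1 hits:
print(simulate([0, 8, 0, 1, 1]))  # (1, 4)
```

The eviction between addresses 0 and 8 is exactly the conflict-miss behavior that larger associativity and multi-level cache hierarchies are designed to reduce.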
Computer system architecture comprises those attributes of a system that are visible to the programmer, such as addressing techniques, instruction sets, and the number of bits used to represent data, which have a direct impact on the logical execution of a program. It defines the system in an abstract manner, and it deals with what the system does.
The aim of computer system organization is to optimize the performance and efficiency of the hardware components, ensuring that they work together seamlessly to execute instructions and process data, whereas the aim of computer system architecture is to provide a framework for building efficient and scalable computer systems.
Chapter 2
Components of a Computer System
Overview:
The components of a computer system are the building blocks that work together to enable the
functionality and operation of computers. These components can be broadly categorized into
hardware and software. Hardware components are tangible physical devices, while software
components refer to the intangible programs and instructions that enable computer operations.
Objective:
At the end of this chapter, students will be able to:
1. Understand the brief history of computers
2. Identify and understand the main components of a computer system
First Generation. In this era, batch processing operating systems were mostly used. Punched cards, paper tape, and magnetic tape were used as input and output devices. The computer systems of this era used machine code as the programming language.
• Vacuum tube technology: Vacuum tube technology refers to the use of vacuum tubes, also
known as electronic valves, in electronic devices. Vacuum tubes are glass or metal tubes that
contain electrodes and are used to control the flow of electric current. They were a
fundamental component of early electronic devices, such as radios and early computers,
before the advent of transistors.
• Unreliable: Vacuum tube technology was relatively unreliable compared to modern
electronic components. Vacuum tubes had a tendency to fail or burn out frequently, requiring
regular replacement. This unreliability often resulted in system downtime and required
maintenance efforts.
• Supported machine language only: Vacuum tube technology-based computers typically
supported machine language as their primary programming language. Machine language is
the lowest level of programming language that directly corresponds to the instructions
executed by the computer's hardware. It consists of binary code that is difficult for humans
to read and write.
• Very costly: Vacuum tube technology was expensive to develop, manufacture, and maintain.
The production of vacuum tubes involved intricate processes, and large numbers of them were required for complex systems. Additionally, due to their limited lifespan, frequent
replacements added to the overall cost.
• Generated a lot of heat: Vacuum tubes consumed significant amounts of power and
generated substantial heat during operation. The heat dissipation required additional cooling
mechanisms, such as fans or specialized cooling systems, to prevent overheating and ensure
the proper functioning of the electronic devices.
• Slow input and output devices: Vacuum tube-based computers had relatively slow input and
output devices. Data input and output were primarily performed through punch cards,
magnetic tapes, or paper tapes, which had limited data transfer rates compared to modern
devices like solid-state drives or network connections.
• Huge size: Vacuum tube technology necessitated the use of large and bulky components. The
vacuum tubes themselves were sizeable, and the overall design of electronic devices utilizing
vacuum tubes required extensive space. Early computers using vacuum tubes often filled
entire rooms or even buildings.
• Need of AC: Vacuum tubes required high-voltage power supplies, typically alternating current
(AC), to operate effectively. Alternating current provided the necessary voltage levels to
power the vacuum tubes and maintain their functionality. Therefore, electronic systems
utilizing vacuum tubes needed access to AC power sources.
• Non-portable: The size, weight, and power requirements of vacuum tube technology made
electronic devices incorporating vacuum tubes non-portable. Moving or transporting these
devices was impractical due to their bulkiness, making them primarily fixed installations.
• Consumed a lot of electricity: Vacuum tubes were power-hungry devices, requiring a
significant amount of electricity to operate. The power consumption was considerably higher
compared to modern electronic components, leading to increased electricity bills and adding
to the overall cost of running and maintaining vacuum tube-based systems.
• ENIAC: Electronic Numerical Integrator and Computer was one of the earliest general-
purpose electronic computers. Developed during the 1940s at the University of Pennsylvania,
ENIAC was built using vacuum tube technology. It was an enormous machine that occupied a
large room and consisted of thousands of vacuum tubes, switches, and other electronic
components. ENIAC was primarily designed for calculating artillery firing tables for the United
States Army during World War II. It was programmed using a combination of plugboard wiring
and switches, making it a challenging and time-consuming process. Despite its limitations,
ENIAC played a significant role in advancing computer technology and laid the foundation for
future developments.
• EDVAC: Electronic Discrete Variable Automatic Computer was an early electronic computer
that was designed to overcome some of the limitations of ENIAC. Proposed by John von
Neumann and his team at the Institute for Advanced Study in the late 1940s, EDVAC
introduced the stored-program concept. This concept allowed instructions and data to be
stored in the computer's memory, providing more flexibility in programming. EDVAC used
binary code and stored data and instructions in a memory unit made of vacuum tubes and
magnetic drums. It was faster and more reliable than ENIAC and had a significant impact on
the development of modern computing architectures.
• UNIVAC: UNIVersal Automatic Computer was the first commercially successful electronic
computer. Developed by J. Presper Eckert and John Mauchly, the creators of ENIAC, UNIVAC
was built in the early 1950s. It employed vacuum tube technology and was primarily used for
scientific and business applications. UNIVAC introduced several innovations, such as
magnetic tape storage and the use of high-level programming languages like FORTRAN. One
of the notable achievements of UNIVAC was its successful prediction of the 1952 U.S.
presidential election results, which marked a significant milestone in demonstrating the
potential of computers for data processing and analysis.
• IBM-701: The IBM-701, also known as the Defense Calculator, was a computer system
developed by IBM in the early 1950s. It was one of the first large-scale electronic computers
produced by IBM for scientific and engineering applications. The IBM-701 utilized vacuum
tubes and magnetic core memory for data storage. It had a fixed instruction set and
supported both machine language and assembly language programming. The IBM-701 was
widely used in various scientific research projects, including nuclear energy research and
weather prediction. Its success paved the way for future generations of IBM computers and
contributed to the growth of the computing industry.
• IBM-650: The IBM-650, introduced in 1954, was an early computer system designed for business and
scientific applications. It was the world's first mass-produced computer and became very
popular in the business sector. The IBM-650 utilized vacuum tubes and electrostatic storage
tubes for memory. It supported both machine language and assembly language programming
and featured a decimal-based architecture, making it well-suited for financial calculations.
The IBM-650 was an important milestone in the advancement of computer technology, as it
made computing more accessible to businesses and helped automate various data processing
tasks.
ENIAC, EDVAC, UNIVAC, IBM-701, and IBM-650 were all significant contributions to the early
development of electronic computing. These machines marked important milestones in terms of
size reduction, program storage, commercial viability, and wider accessibility, laying the
foundation for the rapid advancement of computer technology in subsequent years.
Second Generation. The era of second-generation technology was from 1959-1965. In this era, transistors were used; they were cheaper, consumed much less power, and were more compact, more dependable, and faster than the first-generation machines built from vacuum tubes. In this era, magnetic cores were used as the primary memory, with magnetic tape and magnetic disks as secondary storage devices.
In this generation, assembly language and high-level programming languages like FORTRAN and COBOL were used. The computer systems used batch processing and multiprogramming operating systems.
• Use of transistors: The second generation of computers replaced the vacuum tubes used in
the first generation with transistors. Transistors are smaller, more reliable, and more
efficient electronic components that perform functions similar to vacuum tubes but with
significant advantages. Transistors enabled computers to be smaller, faster, and more
reliable than their vacuum tube-based predecessors.
• Reliable in comparison to first-generation computers: Transistors were much more reliable
than vacuum tubes. Vacuum tubes were prone to frequent failures, requiring regular
replacement and maintenance. Transistors, on the other hand, had longer lifespans and were
less susceptible to mechanical and electrical failures. This increased reliability reduced
downtime and improved overall system performance.
• Smaller size as compared to first-generation computers: The use of transistors allowed for
a significant reduction in the size of computer systems. Transistors were much smaller and
more compact than vacuum tubes, enabling the construction of more compact and portable
computers. Second-generation computers were typically room-sized rather than occupying
entire buildings like their vacuum tube-based predecessors.
• Generated less heat as compared to first-generation computers: Vacuum tubes generated
a substantial amount of heat during operation, requiring additional cooling mechanisms. The
use of transistors in second-generation computers significantly reduced heat generation.
Transistors were more energy-efficient and produced less heat, contributing to improved
system reliability and reducing the need for extensive cooling systems.
• Consumed less electricity as compared to first-generation computers: Vacuum tube-based
computers consumed large amounts of electricity. Transistors, being more energy-efficient,
consumed significantly less electricity. This reduction in power consumption not only led to
cost savings but also made it more feasible to operate computers for extended periods.
• Faster than first-generation computers: The second-generation computers were faster and
more powerful than their predecessors. Transistors switched on and off faster than vacuum
tubes, allowing for faster calculations and improved processing speeds. This increase in
speed facilitated more complex computations and improved overall system performance.
• Still very costly: Despite the advancements in technology, second-generation computers
remained relatively expensive. The development and production of transistors were still
costly, making the computers themselves expensive to manufacture. Additionally, the
infrastructure and components required for computer systems, such as magnetic core
memories and peripherals, contributed to the overall cost.
• AC required: Second-generation computers, like their first-generation counterparts,
required access to alternating current (AC) power sources to operate effectively. AC power
provided the necessary voltage levels for powering the transistors and other components of
the computer system.
• Supported machine and assembly languages: Second-generation computers continued to
support machine language, which was the lowest level of programming language. However,
they also introduced support for assembly languages, which offered a more human-readable
and mnemonic-based representation of machine instructions. Assembly languages made
programming more accessible and facilitated the development of more sophisticated
software.
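The step from machine language to assembly language described in the last point can be sketched with a toy assembler that maps mnemonics to binary words. The 4-bit opcodes and the 8-bit instruction format here are invented for illustration and do not correspond to any historical machine's real encoding.

```python
# Illustrative sketch: how an assembler turns human-readable mnemonics
# into binary machine code. Opcodes and format are made up.

OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(mnemonic, operand=0):
    """Pack a 4-bit opcode and a 4-bit operand into one 8-bit word."""
    return (OPCODES[mnemonic] << 4) | (operand & 0b1111)

word = assemble("ADD", 5)
print(f"{word:08b}")  # 00100101: opcode 0010 (ADD), operand 0101 (5)
```

A first-generation programmer would have had to write the string of bits `00100101` directly; the assembly mnemonic `ADD 5` conveys the same instruction in a form humans can read and check.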
The second generation of computers marked a significant leap in technology and laid the
groundwork for subsequent advancements. The transition from vacuum tubes to transistors
brought improvements in reliability, size, speed, energy efficiency, and programming flexibility,
setting the stage for the continued evolution of computing systems.
• IBM 1620: The IBM 1620, introduced in 1959, was a popular scientific and engineering
computer. It was designed as an affordable option for small to medium-sized businesses and
educational institutions. The IBM 1620 was notable for its relatively compact size and its
decimal-based architecture. It featured magnetic core memory, punched card input/output,
and supported both machine language and FORTRAN programming. The IBM 1620 was
widely used for scientific calculations, engineering simulations, and educational purposes.
• IBM 7094: The IBM 7094, released in 1962, was a powerful and versatile mainframe
computer. It was an improved version of the earlier IBM 7090, featuring faster transistor-
based circuitry and expanded memory options. The IBM 7094 was widely used in scientific
research, defense applications, and large-scale data processing. It supported a variety of
programming languages, including FORTRAN and COBOL. The IBM 7094 played a significant
role in advancing computer science and technology during the 1960s.
• CDC 1604: The CDC 1604, developed by Control Data Corporation (CDC) and released in 1960,
was a highly reliable and fast computer. It was designed for scientific and engineering
applications, particularly in the field of numerical simulations. The CDC 1604 was the first
computer to employ transistorized logic extensively, which improved its performance and
reliability. It had magnetic core memory and supported a variety of programming languages,
including FORTRAN and ALGOL. The CDC 1604 found widespread use in scientific research
and government organizations.
• CDC 3600: The CDC 3600, introduced in 1963, was a mainframe computer designed for
scientific and high-performance computing. It was known for its advanced architecture and
parallel processing capabilities. The CDC 3600 featured a 48-bit word length and supported
a variety of programming languages, including FORTRAN, COBOL, and ALGOL. It utilized a
unique peripheral system called the Peripheral Control Unit (PCU), which allowed for
efficient I/O operations. The CDC 3600 was widely used in scientific research, aerospace, and
government applications.
• UNIVAC 1108: The UNIVAC 1108, released in 1964, was a powerful mainframe computer
manufactured by Sperry Univac. It was part of the UNIVAC 1100 series, known for their
advanced architecture and high-performance capabilities. The UNIVAC 1108 utilized
transistorized logic and had a 36-bit word length. It supported a variety of programming
languages, including FORTRAN and ALGOL. The UNIVAC 1108 found applications in scientific
research, engineering, and large-scale data processing, providing significant computing
power for its time.
All of these computers played important roles in the advancement of computing technology
during the 1960s. They showcased improvements in speed, reliability, memory capacity, and
programming capabilities. These systems were used in a wide range of scientific, engineering,
and commercial applications, contributing to the growth of computer usage and the
development of modern computing architectures.
Third Generation. The era of third-generation technology was from 1965-1971. The computer systems of this generation used Integrated Circuits (ICs) in place of transistors. A single IC contains many transistors, resistors, and capacitors along with the associated circuitry. The IC was invented by Jack Kilby. This development made computer systems smaller in size, more reliable, and more efficient. In this generation, remote processing, time-sharing, and multiprogramming operating systems were used. High-level languages were used during this generation.
• Integrated Circuits (IC) used: The third generation of computers introduced the use of
integrated circuits (ICs). Integrated circuits are small electronic circuits that are etched onto
a single silicon chip. These ICs contained multiple transistors, resistors, and capacitors,
allowing for greater miniaturization and improved performance.
• More reliable in comparison to previous two generations: The use of integrated circuits
significantly improved the reliability of third-generation computers. The miniaturized
components on ICs were less prone to failure and required less maintenance compared to
the vacuum tubes and discrete transistors used in previous generations. The reliability of
computers increased, resulting in reduced system downtime.
• Smaller Size: Third-generation computers were smaller and more compact than their
predecessors. The introduction of integrated circuits allowed for higher component density,
reducing the physical size of the computers. This compactness made them more space-
efficient and facilitated easier installation and maintenance.
• Generated Less Heat: Integrated circuits generated less heat compared to the vacuum tubes
and discrete transistors used in earlier generations. The reduced heat generation resulted
from the miniaturization and increased efficiency of the integrated circuits. This
advancement led to improved system reliability and decreased the need for extensive
cooling mechanisms.
• Faster: Third-generation computers exhibited significant improvements in processing speed.
The use of integrated circuits allowed for faster switching and increased computational
power. This improved speed facilitated more complex calculations and enhanced overall
system performance.
• Lesser Maintenance: The reliability of third-generation computers, owing to the use of
integrated circuits, reduced the need for frequent maintenance. With fewer failures and
more stable operations, the computers required less troubleshooting and repair, resulting in
reduced maintenance efforts.
• Costly: Despite the advancements in technology, third-generation computers were still
relatively expensive. The development and production of integrated circuits involved
complex manufacturing processes and high costs. Additionally, the accompanying
infrastructure, peripherals, and software further contributed to the overall cost of these
systems.
• AC Required: Like previous generations, third-generation computers required access to
alternating current (AC) power sources to operate. AC power supplied the necessary voltage
and frequency for powering the integrated circuits and other components of the computer
system.
• Consumed Less Electricity: Third-generation computers consumed less electricity
compared to their predecessors. The integration of components onto ICs increased energy
efficiency, resulting in reduced power consumption. This not only led to cost savings but also
had a positive environmental impact.
• Supported High-Level Language: Third-generation computers marked the widespread
adoption of high-level programming languages. High-level languages like COBOL, FORTRAN,
and BASIC were developed during this era, allowing programmers to write more user-friendly
and human-readable code. This facilitated the development of complex software
applications and increased productivity.
• IBM-360 Series: The IBM-360 series, introduced in 1964, was a family of mainframe
computers developed by IBM. It was one of the most influential computer systems of its time
and played a crucial role in the widespread adoption of third-generation technology. The IBM-
360 series featured a range of models with different performance levels and configurations,
allowing businesses and organizations to choose a system that best suited their needs. It
supported a variety of programming languages and had advanced features like virtual
memory and multiprogramming. The IBM-360 series found extensive use in various industries
and set a standard for compatibility and scalability.
• Honeywell-6000 Series: The Honeywell-6000 series was a line of mainframe computers
introduced by Honeywell in the late 1960s. These computers were known for their reliability
and high performance. The Honeywell-6000 series featured advanced technologies like
integrated circuits, multiprogramming, and virtual memory. It supported multiple operating
systems and programming languages, making it versatile for different applications. The
Honeywell-6000 series was widely used in scientific research, engineering, and industrial
applications.
• PDP (Programmed Data Processor): The PDP (Programmed Data Processor) series, developed by
Digital Equipment Corporation (DEC), was a range of mini-computers introduced during the
third generation. The PDP series offered a more affordable and compact alternative to
mainframe computers. PDP systems were known for their versatility and were used in various
industries, including scientific research, manufacturing, and education. The PDP series
included models such as PDP-8 and PDP-11, which gained popularity for their ease of use,
reasonable cost, and wide range of available software.
• IBM-370/168: The IBM-370/168 was a specific model within the IBM System/370 series,
which was part of the third generation of computers. Introduced in 1972, the IBM-370/168
was a mid-range mainframe computer with significant computing power. It offered features
such as virtual memory, time-sharing, and improved I/O capabilities. The IBM-370/168 was
widely used in various industries for transaction processing, scientific applications, and data
processing tasks. It supported multiple operating systems and programming languages,
providing flexibility to users.
• TDC-316: The TDC-316, developed by TRW Data Systems Division, was a computer system
introduced in the mid-1960s. It was part of the third-generation technology and was known
for its high performance and reliability. The TDC-316 utilized integrated circuits and offered
advanced features such as multiprocessing and multitasking. It was commonly used in
scientific and industrial applications, including aerospace and defense projects.
These computers played important roles in advancing computing technology during the third
generation. They brought improved performance, reliability, and versatility, catering to the
evolving needs of businesses, research institutions, and other organizations. The third generation
marked a significant shift towards more accessible and powerful computing systems, setting the
stage for further advancements in subsequent generations.
Fourth Generation. The duration of the fourth generation was from 1971-1980. Very Large Scale Integrated (VLSI) circuits were used in this era. VLSI circuits, containing approximately 5000 transistors and other circuit elements with their associated circuitry on a single chip, made it feasible to build microcomputers.
Fourth-generation computer systems became more powerful, compact, reliable, and affordable. As a result, they gave rise to the Personal Computer (PC) revolution. In this era, time-sharing, real-time systems, networks, and distributed operating systems were used. High-level languages like C, C++, and dBASE were used in this era.
• Very Large-Scale Integration: The fourth generation of computers saw the widespread
adoption of Very Large Scale Integration (VLSI) technology. VLSI technology allowed for the
integration of a large number of transistors and other electronic components onto a single
chip, resulting in increased computational power and improved efficiency.
• Very Cheap: With advancements in semiconductor technology, the cost of manufacturing
computer components significantly decreased. The fourth-generation computers became
much more affordable, making them accessible to a wider range of users, including
individuals and small businesses.
• Portable and Reliable: The fourth-generation computers introduced smaller and more
compact designs, making them portable and easier to transport. Additionally, the
advancements in technology, such as integrated circuits and miniaturization, improved the
reliability and stability of the systems.
• Use of PCs: Personal Computers (PCs) became prevalent during the fourth generation. These
computers, designed for individual use, were compact, affordable, and user-friendly. PCs
revolutionized the way people interacted with computers, empowering individuals to have
computing power at their fingertips.
• Very Small Size: The fourth-generation computers were significantly smaller in size
compared to their predecessors. The miniaturization of components, thanks to VLSI
technology, allowed for more compact and efficient designs. This made it possible to have
powerful computing systems in a relatively small physical footprint.
• Pipeline Processing: Fourth-generation computers introduced the concept of pipeline
processing. Pipeline processing involves breaking down instructions into smaller stages and
executing them concurrently, improving overall processing speed and efficiency. This
technique enabled computers to perform multiple operations simultaneously, enhancing
their performance.
• No AC Required: With the advancement in power supply technology, fourth-generation
computers required less power and, in some cases, could operate using direct current (DC)
power sources. This reduced the dependency on alternating current (AC) power and made
computers more versatile in terms of power requirements.
• Concept of the internet was introduced: The fourth generation of computers witnessed the
introduction and development of the concept of the internet. Networks were established to
connect computers, enabling communication and sharing of information on a global scale.
This laid the foundation for the modern internet we use today.
• Great developments in the fields of networks: Along with the internet, significant
developments in networking technologies occurred during the fourth generation. Local Area
Networks (LANs) and Wide Area Networks (WANs) became more prevalent, facilitating
communication and data sharing between computers and across organizations.
• Computers became easily available: The fourth-generation computers became more easily
available to the general public. With affordable prices and user-friendly designs, computers
became commonplace in homes, schools, and offices. This widespread availability played a
crucial role in transforming various industries and revolutionizing the way people work and
communicate.
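The pipeline-processing idea described above can be quantified with a simple cycle count. This sketch assumes an idealized pipeline with no stalls or hazards, and the stage and instruction counts are example numbers, not figures from any real CPU.

```python
# Illustrative sketch: why pipelining improves throughput.

def cycles_unpipelined(n_instructions, n_stages):
    """Without pipelining, each instruction passes through every
    stage before the next instruction begins."""
    return n_instructions * n_stages

def cycles_pipelined(n_instructions, n_stages):
    """With an ideal pipeline, after the first instruction fills the
    stages, one instruction completes per cycle."""
    return n_stages + (n_instructions - 1)

n, stages = 100, 5
print(cycles_unpipelined(n, stages))  # 500
print(cycles_pipelined(n, stages))    # 104
```

As the instruction count grows, the speedup approaches the number of stages (here, about 5x), which is why deeper pipelines were a central lever for fourth-generation performance.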
The fourth generation of computers marked a significant shift towards more affordable,
portable, and powerful computing systems. The advancements in VLSI technology, networking,
and the introduction of PCs made computing more accessible and revolutionized various aspects
of society, from personal productivity to global connectivity.
• DEC 10: The DEC 10, also known as the PDP-10, was a mainframe computer developed by
Digital Equipment Corporation (DEC) in the late 1960s. It was one of the most powerful and
influential computers of its time. The DEC 10 was designed for time-sharing and high-
performance computing. It featured a 36-bit word length, supported multiple users
simultaneously, and had advanced features like virtual memory and multiprocessing. The DEC
10 found applications in scientific research, education, and large-scale data processing.
• STAR 1000: The STAR 1000, developed by Control Data Corporation (CDC), was a series of
supercomputers introduced in the 1970s. These computers were known for their exceptional
performance and were widely used in scientific and research applications. The STAR 1000
series utilized advanced technologies like vector processing and parallel computing. These
systems were capable of performing complex simulations and computations at high speeds.
• PDP 11: The PDP-11, developed by Digital Equipment Corporation (DEC), was a popular
minicomputer introduced in the early 1970s. It was known for its versatility and wide range
of applications. The PDP-11 series encompassed various models, offering different
configurations and performance levels. It supported multiple operating systems and
programming languages, making it popular among developers and researchers. The PDP-11
played a significant role in the growth of computer networks and was widely used in academic
institutions and businesses.
• CRAY-1 (Supercomputer): The CRAY-1, introduced in 1976, was a highly advanced
supercomputer developed by Seymour Cray. It was known for its innovative design and
exceptional computational power. The CRAY-1 utilized vector processing, which allowed for
rapid execution of mathematical operations. It had a distinctive cylindrical design and
employed liquid cooling to manage the heat generated by its powerful processors. The CRAY-
1 was widely used in scientific research, weather prediction, and other high-performance
computing applications.
• CRAY-X-MP (Supercomputer): The CRAY-X-MP, introduced in the 1980s, was the successor
to the CRAY-1 and another notable supercomputer developed by Cray Research. It featured
enhanced performance and additional features, including multiprocessing capabilities. The
CRAY-X-MP was widely used in scientific and engineering research, enabling complex
simulations and data analysis. Its advanced architecture and vector processing capabilities
contributed to its exceptional computational speed.
Fifth Generation. The fifth generation spans 1980 to the present. In the fifth generation, VLSI
technology evolved into ULSI (Ultra Large-Scale Integration) technology, resulting in
microprocessor chips with ten million or more digital electronic elements.
This generation is based on parallel processing hardware and AI (Artificial Intelligence)
software. AI is a developing area of computer science concerned with the means and methods of
making computer systems think like human beings. High-level languages such as C, C++, Java,
and .NET are used in this generation.
AI includes:
• Robotics: Robotics refers to the field of technology and engineering that deals with the
design, construction, operation, and programming of robots. Robots are machines that can
be programmed to perform various tasks autonomously or with human assistance. Robotics
encompasses various disciplines, including mechanical engineering, electronics, computer
science, and artificial intelligence. Robots can be found in various industries, including
manufacturing, healthcare, exploration, and entertainment, and they are designed to
perform tasks that are repetitive, dangerous, or require precision.
• Neural Networks: Neural networks are a subset of artificial intelligence that attempts to
mimic the structure and functioning of the human brain's neural networks. They are
composed of interconnected nodes, called artificial neurons or units, that work together to
process and transmit information. Neural networks excel at pattern recognition, learning
from data, and making predictions or classifications. They are trained using large datasets,
and through a process called backpropagation, the network adjusts its internal parameters
to improve its performance on a specific task. Neural networks have been successfully
applied in various domains, including image and speech recognition, natural language
processing, and autonomous vehicles.
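As a concrete illustration, the following Python sketch trains a single artificial neuron on the logical AND function; its weights are adjusted by the same error-driven update that backpropagation generalizes to whole networks. The learning rate, epoch count, and task are arbitrary choices for this toy example, not a production neural-network implementation.

```python
# Toy illustration: one artificial neuron learning the logical AND function by
# an error-driven weight update (backpropagation reduced to a single step).
# The learning rate and epoch count are arbitrary choices for this sketch.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for AND: ((input1, input2), expected output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0      # internal parameters adjusted by training
lr = 5.0                       # learning rate

for _ in range(2000):          # repeated passes over the training data
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = (out - target) * out * (1 - out)   # derivative of squared error
        w1 -= lr * grad * x1
        w2 -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
print(preds)  # [0, 0, 0, 1] — the neuron has learned AND
```

Real networks stack many such neurons in layers, but the adjust-weights-to-reduce-error idea is the same.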
• Game Playing: Game playing in the context of artificial intelligence refers to the development
of computer programs or algorithms capable of playing games. This includes traditional
board games like chess and Go, video games, and even complex strategy games. The
objective is to create game-playing agents that can make intelligent decisions, employ
strategies, and compete against human players or other AI agents. Game playing involves
developing algorithms that analyze the game state, simulate possible moves, and evaluate
potential outcomes to make optimal decisions. Game playing has been an important area of
AI research as it pushes the boundaries of decision-making, strategic planning, and real-time
problem-solving.
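The idea of simulating possible moves and evaluating outcomes can be sketched with the classic minimax algorithm. The tiny game tree and its payoff values below are invented purely for illustration:

```python
# Illustrative minimax search over a tiny hand-made game tree. Nested lists are
# choice points; integers are leaf payoffs for the maximizing player.

def minimax(node, maximizing):
    """Best achievable score from `node`, assuming both players play optimally."""
    if isinstance(node, int):        # leaf node: payoff for the maximizing player
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Root: the maximizer chooses a branch; the minimizer then picks a leaf.
tree = [[3, 5], [2, 9]]
print(minimax(tree, maximizing=True))  # 3 — branch [3, 5] guarantees at least 3
```

Practical game-playing programs add depth limits, heuristic evaluation of non-terminal positions, and pruning, but the recursive structure is the same.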
• Development of expert systems to make decisions in real-life situations: Expert systems are
computer programs or AI systems that possess specialized knowledge and expertise in a
specific domain. They are designed to emulate the decision-making capabilities of human
experts in solving complex problems. Expert systems use a knowledge base, which contains
domain-specific rules and facts, and an inference engine, which applies logical reasoning and
inference techniques to derive conclusions or make recommendations. Expert systems can
be used in various real-life situations, such as medical diagnosis, financial analysis, and
troubleshooting technical problems. They provide valuable insights, recommendations, and
solutions based on their deep knowledge of the subject matter.
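The knowledge-base-plus-inference-engine structure can be sketched minimally as follows. The if-then rules and facts here are hypothetical stand-ins, not real diagnostic knowledge:

```python
# Minimal sketch of an expert system: a knowledge base of if-then rules and a
# forward-chaining inference engine. The rules and facts are hypothetical.

rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
]

def infer(facts, rules):
    """Apply rules repeatedly until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)   # rule fires: add its conclusion
                changed = True
    return facts

print(sorted(infer({"fever", "cough", "short_of_breath"}, rules)))
# ['cough', 'fever', 'flu_suspected', 'see_doctor', 'short_of_breath']
```

Note how the second rule fires only after the first has derived "flu_suspected" — conclusions of one rule can satisfy the conditions of another, which is the essence of forward chaining.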
• Natural language understanding and generation: Natural language understanding (NLU) and
natural language generation (NLG) are areas of artificial intelligence focused on enabling
computers to understand and generate human language. NLU involves teaching computers
to comprehend and interpret human language, including speech and text, to extract meaning
and understand user intent. It involves tasks such as sentiment analysis, named entity
recognition, and language parsing. NLG, on the other hand, is about generating human-like
language, whether it's in the form of written text or spoken responses. NLG systems can
create coherent and contextually relevant responses based on input data and predefined
rules or patterns. NLU and NLG are crucial for applications such as virtual assistants, chatbots,
machine translation, and voice recognition systems.
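As a very rough illustration of rule-based language analysis, the sketch below classifies sentiment by counting words from hand-made lists. Real NLU systems use far more sophisticated statistical models; the word lists here are invented for the example:

```python
# Toy rule-based sentiment classifier — a crude stand-in for NLU sentiment
# analysis. The word lists are hypothetical and deliberately tiny.

POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "terrible", "awful", "hate"}

def sentiment(text):
    """Score the text by counting positive and negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great laptop"))   # positive
print(sentiment("the service was terrible"))   # negative
```

This keyword approach fails on negation ("not great") and context, which is exactly why modern NLU relies on learned models rather than word lists.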
• ULSI (Ultra Large Scale Integration) technology: The fifth generation of computers
witnessed the introduction of ULSI technology. ULSI involved integrating billions of
transistors and other electronic components onto a single chip, enabling higher
computational power and increased functionality.
• Development of true artificial intelligence: The goal of the fifth generation was to
develop true artificial intelligence (AI) systems capable of performing tasks that typically
require human intelligence. This involved creating AI algorithms and systems that could
reason, learn, understand natural language, and exhibit problem-solving capabilities.
• Development of Natural Language Processing (NLP): NLP refers to the ability of
computers to understand, interpret, and respond to human language in a natural and
meaningful way. Fifth-generation computers made significant advancements in NLP,
enabling better human-computer interaction, voice recognition, machine translation,
and language understanding.
• Advancement in Parallel Processing: Parallel processing involves carrying out multiple
tasks or instructions simultaneously, thereby significantly increasing computational
speed and efficiency. Fifth-generation computers leveraged advancements in parallel
processing, enabling them to tackle complex computations and process large amounts of
data more rapidly.
• Advancement in Superconductor technology: Superconductors, materials that exhibit
zero electrical resistance at very low temperatures, were explored in the fifth generation
for their potential in computer technology. Superconductor technology offered the
possibility of faster and more efficient computing systems with reduced energy
consumption.
• More user-friendly interfaces with multimedia features: Fifth-generation computers
focused on improving user interfaces and making computing more accessible to a
broader audience. Graphical user interfaces (GUIs) with icons, windows, and menus were
developed, allowing users to interact with computers more intuitively. Multimedia
features like audio, video, and graphics were incorporated, enhancing the user
experience.
• Availability of very powerful and compact computers at cheaper rates: Fifth-generation
computers brought about advancements in miniaturization and affordability. Powerful
computing systems became available in compact and portable forms, such as laptops and
handheld devices, at more affordable prices. This made computing technology accessible
to individuals and led to widespread adoption.
The fifth generation of computers aimed to create more intelligent and user-friendly systems,
leveraging advanced technologies and pushing the boundaries of what computers could achieve.
While some of the specific goals and features of the fifth generation were not fully realized, it
set the stage for ongoing developments in AI, NLP, parallel processing, and user interfaces that
continue to shape computing technology today.
Computer System
A computer system is composed of various interconnected components that work together to
perform computational tasks. Understanding the key components of a computer system is
essential for comprehending how it functions and how different hardware elements interact
with software. Here are the main components of a computer system:
1. Central Processing Unit (CPU): The CPU is the primary component responsible for executing
instructions and performing calculations. It consists of the arithmetic logic unit (ALU), control
unit, and registers. The ALU carries out arithmetic and logical operations, the control unit
manages instruction execution, and registers temporarily store data and instructions.
2. Memory: Memory refers to the storage units used to hold data and instructions that the CPU
accesses during program execution. The primary types of memory are random-access memory
(RAM), which temporarily holds the data and instructions in use, and read-only memory (ROM),
which permanently holds essential start-up instructions.
3. Storage Devices: Storage devices are used for long-term data storage. They provide non-
volatile memory and higher capacity than RAM.
4. Input Devices: Input devices allow users to input data or commands into the computer
system.
5. Output Devices: Output devices present processed information or results to the user.
6. Others (Bluetooth adapters, wireless cards, etc.).
Chapter 3
Computer System Architecture
Overview:
This chapter introduces system architecture. The chapter starts out with a discussion of
automated computing, including mechanical implementation, electronic implementation, and
optical implementation. Next, the discussion moves to computer capabilities. This discussion
includes a description of processors, formulas and algorithms, comparisons and branching,
storage capacity, and finally input/output capability. Computer hardware is discussed in detail,
including hardware used for processing, storage, external communication, and internal
communication. The discussion continues with a review of different types of computer hardware
and hardware configurations. The chapter concludes with a look at the role of software, system
software layer, and the economics of system and application development software.
Objectives:
At the end of this chapter, students will be able to:
1. Explain the fundamental structure and system architecture of a computer.
2. List computer system classes and their distinguishing characteristics or design limitations.
3. Understand computer components and their functions.
Based on how they are programmed, computers fall into two classes:
1. Fixed Program Computers - Their function is very specific, and they cannot be
programmed, e.g., calculators.
2. Stored Program Computers - These can be programmed to carry out many different
tasks; applications are stored on them, hence the name.
Store Program Control Concept
The term Stored Program Control Concept refers to the storage of instructions in computer
memory to enable it to perform a variety of tasks in sequence or intermittently.
The idea was introduced in the late 1940s by John von Neumann who proposed that a program
be electronically stored in the binary-number format in a memory device so that instructions
could be modified by the computer as determined by intermediate computational results.
A stored-program digital computer keeps both program instructions and data in read-write,
random-access memory (RAM). Stored-program computers were an advancement over
the program-controlled computers of the 1940s, such as the Colossus and the ENIAC. Those were
programmed by setting switches and inserting patch cables to route data and control signals
between various functional units. The vast majority of modern computers use the same memory
for both data and program instructions, but have caches between the CPU and memory, and, for
the caches closest to the CPU, have separate caches for instructions and data, so that most
instruction and data fetches use separate buses (split cache architecture).
ENIAC (Electronic Numerical Integrator and Computer), designed in the early 1940s, was among
the first electronic general-purpose computing systems. It was not a stored-program machine:
it was programmed externally, and the limitations of that approach helped motivate the
stored-program concept adopted by its successors.
Modern computer designs are commonly discussed in terms of the following models:
1. Von-Neumann Model
2. General Purpose System
3. Parallel Processing
The Von Neumann architecture is also known as the von Neumann model or Princeton
architecture. It is a computer architecture based on a 1945 description by John von Neumann
and others in the First Draft of a Report on the EDVAC. That document describes a design
architecture for an electronic digital computer with these components:
• A processing unit that contains an arithmetic logic unit and processor registers
• A control unit that contains an instruction register and program counter
• Memory that stores data and instructions
• External mass storage
• Input and output mechanisms
The term "von Neumann architecture" has evolved to mean any stored-program computer in
which an instruction fetch and a data operation cannot occur at the same time because they
share a common bus. This is referred to as the von Neumann bottleneck and often limits the
performance of the system.
Von Neumann proposed his computer architecture design in 1945, which later became known as
the Von Neumann Architecture. It consisted of a Control Unit, an Arithmetic and Logic Unit
(ALU), a Memory Unit, Registers, and Inputs/Outputs.
Von Neumann architecture is based on the stored-program computer concept, where instruction
data and program data are stored in the same memory. This design is still used in most computers
produced today.
The Von Neumann Architecture / Model is a foundational computer architecture proposed by
mathematician and computer scientist John von Neumann in 1945. It serves as the basis
for most modern computer systems and describes the fundamental structure and organization
of a computer. The Von Neumann Architecture consists of four main components:
1. Central Processing Unit (CPU): The CPU is responsible for executing instructions and
performing calculations. It comprises an Arithmetic Logic Unit (ALU) for carrying out
arithmetic and logical operations, and a Control Unit for managing the execution of
instructions.
2. Memory: The memory stores both data and instructions that the CPU needs to execute.
In the Von Neumann Architecture, a single memory unit is used to hold both program
instructions and data. This memory is accessed sequentially, meaning instructions and
data are fetched and processed one after another.
3. Input/Output (I/O) Devices: I/O devices facilitate interaction between the
computer system and the external world. They allow for the input of data and instructions
into the system and the output of processed results.
4. Bus: The bus is a communication pathway that enables the transfer of data and
instructions between the CPU, memory, and I/O devices. It serves as the medium for
exchanging information within the computer system.
The Von Neumann Architecture has the following key characteristics:
1. Stored-Program Concept: In this architecture, instructions and data are stored in the
same memory. This concept allows for flexibility in program execution and enables
computers to be easily reprogrammed.
2. Sequential Execution: Instructions are fetched from memory and executed in a
sequential order. This sequential execution implies that the CPU processes instructions
one at a time.
3. Single Bus Structure: The Von Neumann Architecture employs a single bus for
communication between the CPU, memory, and I/O devices. This shared bus can
potentially become a performance bottleneck if multiple components attempt to access
it simultaneously.
4. Shared Memory: In this architecture, instructions and data share the same memory
space. While this design simplifies the hardware implementation, it may limit the amount
of available memory for data storage.
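The stored-program, shared-memory, sequential-execution behavior described above can be modeled with a toy simulator. The four-instruction set (LOAD/ADD/STORE/HALT) and the two-field instruction format are invented for illustration:

```python
# Hedged sketch: a toy Von Neumann machine. Instructions and data live in the
# SAME memory and are fetched sequentially. The tiny instruction set is
# hypothetical, chosen only to demonstrate the stored-program concept.

def run(memory):
    acc, pc = 0, 0                      # accumulator and program counter
    while True:
        opcode, operand = memory[pc]    # instruction fetch from shared memory
        pc += 1                         # sequential execution
        if opcode == "LOAD":
            acc = memory[operand]       # data read from the SAME memory
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc
        elif opcode == "HALT":
            return memory

# Program at addresses 0-3, data at addresses 4-6: compute mem[4] + mem[5].
mem = [("LOAD", 4), ("ADD", 5), ("STORE", 6), ("HALT", 0), 7, 35, 0]
print(run(mem)[6])  # 42
```

Because the program is just data in memory, it could itself be modified by STORE instructions — exactly the flexibility (and the shared-bus bottleneck) that characterizes this architecture.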
Harvard Architecture
The Harvard Architecture / Model is a computer architecture design that separates the memory
for instruction and data. It was named after the Harvard Mark I computer, developed in the
1940s. There are separate memory units for storing instructions (instruction memory) and data
(data memory). This separation allows simultaneous access to both instruction and data memory,
which can improve system performance.
1. Separate Instruction and Data Memory: The Harvard Architecture has dedicated memory
units for instructions and data. This separation enables simultaneous access to instruction
and data memory, allowing for parallel fetching of instructions and data. This parallelism
can result in improved performance compared to the Von Neumann Architecture.
2. Independent Instruction and Data Buses: The Harvard Architecture employs separate
buses for instructions and data. This separation ensures that fetching instructions does
not interfere with data transfers and vice versa. It allows for simultaneous and
independent access to instruction and data memory.
3. Faster Instruction Fetch: With separate instruction memory, the Harvard Architecture
can fetch instructions at a faster rate since it does not have to contend with fetching data
simultaneously. This can lead to improved instruction execution and overall system
performance.
4. Reduced Instruction-Data Conflicts: In the Harvard Architecture, there are no conflicts
between instruction fetches and data accesses since they use separate memory units.
This reduces the chances of contention and improves the overall efficiency of instruction
execution.
5. Suitable for Embedded Systems: The Harvard Architecture is commonly used in
embedded systems, such as microcontrollers and digital signal processors (DSPs). These
systems often require high performance, predictable execution, and efficient memory
access, making the Harvard Architecture well-suited for their requirements.
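For contrast with the Von Neumann model, the toy simulator below keeps instructions and data in two separate memories, so an instruction fetch never touches data memory. The instruction set is again invented for illustration:

```python
# Hedged sketch: a toy Harvard machine. Instructions and data live in two
# SEPARATE memories, mirroring the split-memory idea described above.

def run(program, data):
    acc, pc = 0, 0
    while True:
        opcode, operand = program[pc]   # fetched from instruction memory only
        pc += 1
        if opcode == "LOAD":
            acc = data[operand]         # data memory accessed independently
        elif opcode == "ADD":
            acc += data[operand]
        elif opcode == "STORE":
            data[operand] = acc
        elif opcode == "HALT":
            return data

program = [("LOAD", 0), ("ADD", 1), ("STORE", 2), ("HALT", 0)]
print(run(program, [7, 35, 0]))  # [7, 35, 42]
```

In real Harvard hardware the two memories have their own buses, so the next instruction can be fetched while the current data access is still in flight — the parallelism this sequential sketch can only hint at.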
Modified Harvard Architecture
The Modified Harvard Architecture combines features of the Von Neumann and Harvard
architectural designs. The Modified Harvard Architecture possesses several notable
characteristics and advantages:
1. Flexible Data and Instruction Operations: Unlike the traditional Harvard Architecture,
the Modified Harvard Architecture allows for limited data operations on the instruction
memory or limited instruction operations on the data memory. This flexibility enables the
execution of specific tasks that may require occasional manipulation of instructions or
data from the alternate memory unit.
2. Improved Performance: The Modified Harvard Architecture retains the advantages of the
Harvard Architecture, such as faster instruction fetch and reduced instruction-data
conflicts. These features can lead to improved overall system performance and more
efficient execution of tasks.
3. Suitable for Specific Computing Scenarios: The Modified Harvard Architecture is
particularly useful in scenarios where there is a requirement for both the advantages of
strict separation between instruction and data, as offered by the Harvard Architecture,
and occasional interaction between the two memory units. This architectural model
provides the versatility to accommodate such requirements.
4. Comparison of Different Architectures: A comparison of different architectures,
including the Von Neumann, Harvard, and Modified Harvard Architectures, involves
evaluating their characteristics, advantages, and limitations. Factors for comparison may
include performance, flexibility, ease of programming, memory access efficiency, and
suitability for specific applications.
Chapter 4
Computer System Interconnections
Overview:
Computer system interconnections refer to the various methods and technologies used to
connect the components within a computer system, as well as to establish communication
between different computer systems. These interconnections play a crucial role in facilitating
data transfer, synchronization, and coordination among the system components. Understanding
computer system interconnections is essential for designing efficient and scalable systems,
enabling collaboration, and supporting seamless information exchange.
Objectives:
At the end of this chapter, students will be able to:
1. Describe the typical organization of a CPU and how it works inside a computer.
2. Describe the methods and technologies of computer system interconnections.
3. Identify the hardware components that enable various hardware to be connected.
4. Understand the importance of the bus network topology used for data transfer.
5. Describe the distinguishing attributes and benefits associated with the bus network
topologies.
Figure 4.2 The CPU Architecture.
CPU stands for Central Processing Unit. The CPU, or simply the processor, is the most
important part of the computer system; we cannot think of a computer without a CPU. The CPU is
frequently called the brain of the computer because it is the fundamental element intended to
process data, perform calculations, and move data.
The number of instructions a computer carries out in one second is used to measure its speed,
which is expressed in hertz. Nowadays the speed of a computer is quoted in gigahertz (GHz);
one gigahertz equals 1,000,000,000 hertz.
Figure 4.3 Inside the CPU.
The CPU is a very complex device with a very large set of electronic circuitry. A processor
executes the stored program instructions given to it by the user through input. Every type of
computer, whether small or large, must have a processor.
The computer is a very fast machine. A normal desktop computer can execute an instruction in
less than one millionth of a second, whereas a supercomputer (the fastest of all computers)
can execute an instruction in less than one billionth of a second!
The CPU's speed in executing instructions depends on its clock frequency, which is measured in
MHz (megahertz) or GHz (gigahertz); the higher the clock frequency, the faster the computer
executes instructions.
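The relationship between clock frequency and execution speed can be made concrete with a back-of-the-envelope calculation, under the simplifying (and not generally true) assumption that one instruction completes per clock cycle:

```python
# Sketch: converting clock frequency into per-cycle time. Assumes, purely for
# illustration, that one instruction completes per clock cycle.

def cycle_time_ns(freq_ghz):
    """Duration of one clock cycle, in nanoseconds, at `freq_ghz` gigahertz."""
    return 1.0 / freq_ghz

print(cycle_time_ns(1.0))  # 1.0  -> one nanosecond per cycle at 1 GHz
print(cycle_time_ns(2.0))  # 0.5  -> half a nanosecond per cycle at 2 GHz
```

Real processors pipeline and overlap instructions, so instructions-per-cycle varies; the calculation above only bounds the cycle time itself.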
When given a task, the CPU first reads the information linked with that task from RAM. After
reading the information, the CPU performs its calculations and moves the data as required.
Before the information can be processed further, it must travel through the system bus. A bus
in the computer is a communication system that is used to transfer data among all the
components of the computer.
The CPU is responsible for making sure that the data is processed and placed on the system bus
in the right order. Once the requested action is done, the user receives the processed,
calculated information, and the CPU stores the result back in the system's memory.
Components of CPU
• Control Unit
• Arithmetic and Logic Unit (ALU)
• Memory or Storage Unit
Control Unit
This part of the CPU is used to manage the operation of the CPU. It instructs the various
computer components to respond according to the program’s instruction. The computer
programs are stored in the storage devices (hard disks and SSDs) and when a user executes those
programs, they load straight into the primary memory (RAM) for their execution. No program
can be able to run without loading into primary memory. The control unit of the CPU is used to
direct the whole computer system to process program’s instruction using electrical signals. The
control unit of a CPU links with ALU and memory to carry out the process instructions. The
control unit does not carry out the instruction of the program, instead, it directs the other part
of the process. Without the control unit, the respective components will not be able to execute
the program as they don’t know what to do and when to do it. This unit controls the operations
of all parts of the computer but does not carry out any actual data processing operations.
Hardwired Control Unit. In a hardwired control unit, the control signals needed for
instruction execution are generated by specially designed hardware logic circuits, in which
the signal-generation method cannot be modified without physically changing the circuit
structure. The operation code of an instruction contains the basic data for control signal
generation. The operation code is decoded in the instruction decoder, which consists of a set
of decoders that decode different fields of the instruction opcode.
Microprogrammed Control Unit. The fundamental difference between this structure and the
hardwired control unit is the existence of a control store, used for storing words containing
the encoded control signals required for instruction execution. In microprogrammed control
units, subsequent instruction words are fetched into the instruction register in the normal
way; however, the operation code of each instruction is not directly decoded to generate
control signals immediately. Instead, it provides the initial address of a microprogram
contained in the control store.
Arithmetic Section
The function of arithmetic section is to perform arithmetic operations like addition, subtraction,
multiplication, and division. All complex operations are done by making repetitive use of the
above operations.
Logic Section
The function of logic section is to perform logic operations such as comparing, selecting,
matching, and merging of data.
Memory or Storage Unit
This unit stores instructions, data, and intermediate results. Its size affects the speed,
power, and capability of the computer. Primary memory and secondary memory are the two types
of memory in a computer.
Elements of CPU
Figure 4.5 The Elements of CPU.
Register
A Register is a very small place which is used to hold data of the processor. A register is used to
store data such as instruction, storage address and any kind of data like bit sequence or any
characters etc. A processor’s register should be large enough to store all the given information. A
64-bit processor should have at least 64-bit registers and 32-bit register for a 32-bit processor.
The register is the fastest of all the memory devices.
1. PC - program counter - stores the address of the next instruction to be fetched from RAM.
2. MAR - memory address register - stores the address of the memory location to be read
from or written to.
3. MDR - memory data register - stores the data that is to be sent to, or has just been
fetched from, memory.
4. CIR - current instruction register - stores the actual instruction that is being decoded
and executed.
5. ACC - accumulator - stores the results of calculations.
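The way these registers cooperate during the fetch-decode-execute cycle can be sketched in a toy simulation; the two-field (opcode, operand-address) instruction format and the tiny instruction set are hypothetical:

```python
# Toy fetch-decode-execute loop showing how the registers above cooperate.
# The instruction format and instruction set are invented for illustration.

def run(ram):
    pc, acc = 0, 0                  # program counter and accumulator
    while True:
        mar = pc                    # MAR holds the address to fetch from
        mdr = ram[mar]              # MDR receives the word read from memory
        cir = mdr                   # CIR holds the instruction being decoded
        pc += 1                     # PC now points at the next instruction
        opcode, addr = cir
        if opcode == "LOAD":
            mar, mdr = addr, ram[addr]  # a data fetch also goes via MAR/MDR
            acc = mdr
        elif opcode == "ADD":
            acc += ram[addr]
        elif opcode == "HALT":
            return acc              # ACC holds the result of the calculation

ram = [("LOAD", 3), ("ADD", 4), ("HALT", 0), 10, 32]
print(run(ram))  # 42
```

Notice that both instruction fetches and data fetches pass through MAR and MDR — the registers are the CPU's fixed "hands" on the memory system.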
• L2 cache: L2 cache has a larger data-holding capacity than L1 cache. It is situated on the
CPU chip, or on a separate chip connected to the CPU by a high-speed bus.
Buses
In computer architecture, buses are a crucial component that enables the transfer of data,
instructions, and control signals between different hardware components within a computer
system. A bus acts as a communication pathway or a set of electrical lines that connect various
hardware components, allowing them to exchange information.
A bus consists of several lines, each serving a specific purpose, such as data lines, address lines,
control lines, and power lines. These lines carry different types of signals, including data signals
for transmitting information, address signals for specifying memory locations, control signals for
coordinating operations, and power signals for supplying electrical power.
The primary function of buses is to facilitate communication and coordination among the
components of a computer system. They provide a means for data transfer, enabling the CPU to
access memory, input/output devices, and other peripherals. Buses ensure that data and control
signals are properly routed between components, allowing for effective synchronization and
coordination of operations. Buses can be categorized based on their purpose and scope within
the system. Some common types of buses include:
1. System Bus: The system bus, also known as the front-side bus, connects the CPU to
the main memory (RAM) and is responsible for high-speed communication between
these components. It carries data, instructions, and control signals.
2. Address Bus: The address bus carries the memory address information, specifying
the location in memory to read from or write to. The width of the address bus
determines the maximum amount of memory that can be addressed.
3. Data Bus: The data bus carries the actual data being transferred between
components. Its width determines the maximum amount of data that can be
transferred simultaneously.
4. Control Bus: The control bus carries control signals that coordinate and regulate the
operations of various components. These signals include read and write signals,
interrupt signals, and clock signals for synchronization.
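The claim that address-bus width limits addressable memory can be checked with a one-line calculation: each additional address line doubles the number of addressable locations.

```python
# Sketch: each address line doubles the number of addressable memory locations,
# assuming byte-addressable memory (one byte per address).

def max_addressable_bytes(address_lines):
    """Locations reachable with the given number of address lines."""
    return 2 ** address_lines

print(max_addressable_bytes(16))   # 65536 (64 KiB)
print(max_addressable_bytes(32))   # 4294967296 (4 GiB)
```

This is why 32-bit systems top out at 4 GiB of directly addressable memory, while wider address buses push the limit far higher.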
Bus Topologies
Bus topology is a network arrangement where all devices are connected to a central
communication channel called a bus. In this topology, devices share a common transmission
medium, and data is transmitted in a linear fashion, from one end of the bus to the other. Each
device on the bus can receive the transmitted data, but only the intended recipient processes it.
Bus topologies are defined by a set of inherent characteristics: all devices share a single
transmission medium, only one device can transmit successfully at a time, and a fault in the
shared bus can disable communication for every attached device.
Despite its limitations, bus topologies have been widely used in local area networks (LANs) and
small-scale networks due to their simplicity and cost-effectiveness.
Single Bus. The Single Bus architecture, also known as the Single-System Bus, utilizes a
single bus for communication between devices. All data, instructions, and control signals are
transferred through this shared bus.
Multi Bus. The Multi-Bus architecture employs multiple buses for data transfer and
communication within a computer system. Instead of relying on a single bus, this design
incorporates separate buses for specific tasks or components. For example, there may be
separate buses for memory access, I/O operations, and inter-processor communication. Multi-
bus architectures enhance efficiency and reduce contention by dedicating buses for specific
purposes.
Hierarchical Bus. The Hierarchical Bus architecture extends the concept of multi-bus design by
organizing buses in a hierarchical structure. It introduces multiple levels of buses, enabling better
organization and management of data transfers. Hierarchical bus topologies enhance system
performance by reducing contention and providing more efficient communication paths
between different components.
Switched Interconnects
Switched interconnects refer to a network architecture that utilizes switches to enable
communication and data transfer between multiple devices or nodes. In this architecture,
switches serve as intelligent devices that receive incoming data and forward it to the appropriate
destination based on the destination address. Unlike shared bus or multi-drop architectures,
where data is broadcast to all devices, switched interconnects provide a dedicated point-to-point
connection between sender and receiver. Switched interconnects offer several benefits,
including higher aggregate bandwidth, isolation of traffic between communicating pairs, and
easier scaling as nodes are added.
Multi-core CPUs
A multi-core processor embeds more than one processing core on a single CPU chip. These cores
work simultaneously, so a multi-core CPU achieves high performance while consuming less
power, and it handles multitasking and parallel processing efficiently. Because all the cores reside
on the same chip, communication between them is very fast.
• Because the cores are on a single chip, they can share on-chip resources and data does
not have to travel as far.
• The printed circuit board (PCB) needs less space when multi-core processors are used.
Single-core CPU
The single-core CPU is the oldest type of CPU, and it was used in most personal and office
computers. A single-core CPU can execute only one instruction at a time, so it is not efficient at
multitasking: there is a marked decline in performance when more than one application runs at
once. If one operation is started, a second process must wait until the first one finishes, and
feeding the CPU multiple operations drastically reduces performance. The performance of a
single-core CPU is measured mainly by its clock speed.
Dual-core CPU
A dual-core CPU is a single CPU comprising two cores, functioning like two CPUs acting as one.
Unlike a single-core CPU, which must switch back and forth among data streams, a dual-core CPU
manages multitasking effectively when two or more threads are executed. To utilize a dual-core
CPU effectively, the operating system and the running programs must support simultaneous
multi-threading (SMT) technology. A dual-core CPU is faster than a single-core CPU, but it is not
as powerful as a quad-core CPU.
Quad-Core CPU
The quad-core CPU is a refinement of the multi-core design, with four cores on a single chip. Like
a dual-core CPU, it divides the workload among its cores, enabling effective multitasking. This
does not mean any single operation runs four times faster; only applications and programs
written with SMT support can spread their work across the cores and gain speed. Such CPUs suit
users who need to run several demanding programs at the same time, such as gamers playing
titles like the Supreme Commander series, which is optimized for multiple cores.
Hexa-core Processors
A hexa-core processor is another multi-core processor, with six cores, and it can execute tasks
faster than quad-core and dual-core processors. For personal computer users, hexa-core
processors are commonplace; Intel launched a hexa-core Core i7 in 2010. Smartphone users long
had only dual-core and quad-core processors, but smartphones are now available with hexa-core
processors as well.
Octa-core Processors
Where dual-core processors have two cores, quad-core four, and hexa-core six, octa-core
processors are built with eight independent cores and execute tasks even faster than quad-core
processors. Typical octa-core processors comprise a dual set of quad-core processors that divide
activities between them: much of the time, the lower-powered set of cores handles routine
tasks, and when more performance is required, the faster set of four cores kicks in. In short, an
octa-core chip is best understood as a pair of quad cores that adjust the workload between
themselves to give effective performance.
Deca-core Processors
Following dual-core (two cores), quad-core (four), and hexa-core (six) designs, deca-core
processors come with ten independent cores to execute and manage tasks more effectively than
the processors developed before them. A PC or other device built with a deca-core processor is
faster than other processors and very effective at multitasking. Deca-core processors are
trending because of their advanced features, and many smartphones now ship with low-cost
deca-core processors, as most gadgets on the market are updated with new processors to serve
more purposes.
Types of CPU
In the past, computer processors used numbers to identify the processor and help identify faster
processors. For example, the Intel 80486 (486) processor is faster than the 80386 (386) processor.
After the introduction of the Intel Pentium processor (which would technically be the 80586), all
computer processors started using names like Athlon, Duron, Pentium, and Celeron.
Today, in addition to the different names of computer processors, there are different
architectures (32-bit and 64-bit), speeds, and capabilities. Below is a list of the more common
types of CPUs for home or business computers.
AMD PROCESSORS
K6-2 Sempron Turion 64 Phenom X3 Athlon II
K6-III Athlon 64 Athlon 64 X2 Athlon 6-series E2 series
Athlon Mobile Athlon 64 Turion 64 X2 Athlon 4-series A4 series
Duron Athlon XP-M Phenom FX Athlon X2 A6 series
Athlon XP Athlon 64 FX Phenom X4 Phenom II A8 series
A10 series
INTEL PROCESSORS
4004 Pentium Pentium 4 Pentium Extreme
8080 Pentium w/ MMX Mobile Pentium 4-M Edition
8086 Pentium Pro Pentium D Core Duo
8087 Pentium II Core 2 Duo
8088 Celeron Core i3
80286 (286) Pentium III Core i5
80386 (386) Pentium M Core i7
80486 (486) Celeron M
The AMD Opteron series and Intel Itanium and Xeon series are CPUs used in servers and high-
end workstation computers.
Some mobile devices, like smartphones and tablets, use ARM CPUs. These CPUs are smaller in
size, require less power, and generate less heat.
Chapter 5
Computer System Organization and Operation
Overview:
Computer system organization refers to the arrangement and structure of various hardware and
software components that collectively form a computer system. It involves understanding the
roles, interactions, and operations of these components to facilitate the execution of tasks and
achieve desired outcomes. Computer system operation encompasses the processes and
mechanisms by which these components work together to perform computations, store and
retrieve data, and provide a user-friendly interface.
Objectives:
At the end of this chapter, students will be able to:
1. Understand computer system organization and its operation.
2. Describe the system interactions and coordination.
3. Identify the input and output operations.
4. Understand the role of the operating system and computer system organization.
The CPU executes each instruction through a cycle of stages:
1. Fetch: In the fetch stage, the processor retrieves the next instruction from the memory.
The program counter (PC) holds the memory address of the next instruction to be fetched.
The instruction is then loaded into the instruction register (IR) within the CPU.
2. Decode: During the decode stage, the CPU decodes the fetched instruction to determine
the operation to be performed and the operands involved. The control unit interprets the
opcode (operation code) and generates control signals to coordinate subsequent
operations.
3. Execute: In the execute stage, the CPU performs the actual operation specified by the
instruction. This stage may involve arithmetic calculations, logical operations, data
transfers, or other specific operations depending on the instruction type.
4. Memory Access: Some instructions require accessing memory to read or write data. In
such cases, a memory access stage is included in the execution cycle. The CPU calculates
the memory address and retrieves or stores the data as required.
5. Write Back: In the write-back stage, the CPU updates the results of the executed
instruction. The result may be written back to a register or memory location, depending
on the instruction and the architecture.
6. Store. In the store stage, the results of the executed instruction are stored. This stage may
involve writing the result back to a register or memory location, updating the status flags,
or transferring control to a different part of the program.
These stages are repeated for each instruction in a program, allowing the CPU to execute
instructions sequentially or based on the program flow.
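The cycle above can be made concrete with a minimal instruction-cycle simulator. This is an illustrative Python sketch: the three-instruction ISA (LOAD, ADD, STORE) and the single accumulator register are invented for demonstration, not taken from any real architecture.

```python
# Minimal fetch-decode-execute simulator for a hypothetical toy ISA.
# Each instruction is an (opcode, operand) pair held in "program memory".

def run(program, data):
    pc = 0          # program counter: address of the next instruction
    acc = 0         # accumulator register for intermediate results
    while pc < len(program):
        # Fetch: read the instruction at the address held in the PC
        opcode, operand = program[pc]
        pc += 1
        # Decode + Execute: dispatch on the opcode
        if opcode == "LOAD":         # acc <- data[operand]
            acc = data[operand]
        elif opcode == "ADD":        # acc <- acc + data[operand]
            acc += data[operand]
        elif opcode == "STORE":      # data[operand] <- acc (write back)
            data[operand] = acc
        else:
            raise ValueError(f"unknown opcode {opcode}")
    return data

# Compute data[2] = data[0] + data[1]
memory = run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [7, 5, 0])
print(memory[2])  # 12
```

Each loop iteration mirrors one pass through the fetch, decode, execute, and write-back stages described above.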
Memory Hierarchy
Memory hierarchy refers to the organization and arrangement of different types of memory in a
computer system, ranging from high-speed, low-capacity memory to slower, higher-capacity
memory. The memory hierarchy is designed to optimize the performance and efficiency of
memory access by placing frequently accessed data closer to the CPU, while utilizing larger and
slower memory for storing less frequently accessed data.
Caching. Caching is an essential component of the memory hierarchy that utilizes high-speed,
small-capacity memory known as cache memory. The cache memory acts as a buffer between
the CPU and the main memory, storing frequently accessed data and instructions to improve
system performance. Caching exploits the principle of locality, which states that programs tend
to access a small portion of data or instructions repeatedly. By keeping this data in the cache, the
CPU can fetch it quickly, reducing the latency associated with accessing data from slower main
memory.
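The buffering behavior described above can be illustrated with a toy least-recently-used (LRU) cache in Python. The capacity, the address trace, and the dictionary standing in for slow main memory are all illustrative assumptions, not a description of real cache hardware.

```python
from collections import OrderedDict

class LRUCache:
    """Tiny LRU cache: keeps the `capacity` most recently used entries."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def read(self, address, slow_memory):
        if address in self.store:
            self.hits += 1
            self.store.move_to_end(address)              # mark most recently used
        else:
            self.misses += 1
            self.store[address] = slow_memory[address]   # fetch from "main memory"
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)           # evict least recently used
        return self.store[address]

memory = {a: a * 10 for a in range(100)}   # stand-in for slow main memory
cache = LRUCache(capacity=2)
for a in [1, 2, 1, 1, 3, 1]:               # locality: address 1 is "hot"
    cache.read(a, memory)
print(cache.hits, cache.misses)            # 3 3
```

Because the trace keeps returning to address 1 (the principle of locality), half the accesses are served from the small cache rather than the slow memory.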
Main Memory. Main memory, also known as primary memory or random-access memory (RAM),
is the next level in the memory hierarchy. It provides a larger storage capacity than cache memory
but with higher access latency. Main memory holds the program instructions and data that are
actively used by the CPU during execution. It is typically made up of dynamic random-access
memory (DRAM) modules, which offer faster access times compared to secondary storage
devices.
Virtual Memory. Virtual memory is a memory management technique that extends the
addressable space of the main memory beyond its physical capacity. It allows programs to utilize
more memory than is physically available by storing less frequently used data on secondary
storage devices, such as hard disk drives (HDDs) or solid-state drives (SSDs). The virtual memory
system automatically swaps data between main memory and secondary storage as needed,
ensuring that the active portions of a program remain in main memory while less frequently used
portions are temporarily stored in secondary storage.
Input/output (I/O) operations transfer data between the computer system and external devices.
1. Input Operations. Input operations involve the transfer of data or signals from external
devices to the computer system. Examples of input devices include keyboards, mice,
scanners, microphones, and sensors. When a user types on a keyboard, moves a mouse,
or scans a document, the input devices send signals or data to the computer system for
processing.
2. Output Operations. Output operations involve the transfer of data, results, or signals
from the computer system to external devices for display, storage, or other purposes.
Output devices include displays, printers, speakers, storage devices, and network
interfaces. The computer system generates output data or signals that are then
transferred to these devices for presentation or storage. For example, the system sends
display data to a monitor for visual output or sends data to a printer for physical
document creation.
I/O operations are facilitated by I/O controllers or interfaces that manage the communication
between the CPU and the external devices. These controllers handle the low-level details of data
transfer, timing, and signaling, allowing the CPU to focus on processing the data received from
or destined for the devices.
Polling. Polling is a technique used in I/O operations to determine the status of a device by
repeatedly checking its status register. In polling, the CPU continuously checks the status of an
I/O device to determine if it is ready for data transfer. It involves sending requests to the device,
waiting for a response, and then proceeding with the data transfer or operation. Polling can be
implemented using busy-wait loops or interrupts.
Interrupts. Interrupts are signals generated by devices to request attention from the CPU. When
an I/O device has completed an operation or requires CPU intervention, it sends an interrupt
signal to the CPU, causing the current execution to pause and transfer control to an interrupt
handler routine. Interrupts allow devices to asynchronously request service from the CPU,
improving system efficiency by reducing the need for continuous polling.
Direct Memory Access (DMA). It is a technique that allows devices to transfer data directly to
and from memory without CPU involvement. With DMA, the device gains control of the system
bus, bypassing the CPU to transfer data directly to memory. DMA reduces CPU overhead and
improves data transfer rates, making it particularly useful for high-speed data transfers, such as
disk I/O or network communication.
The relationship between the operating system and computer system organization has several
key aspects:
1. Interaction between Hardware and Operating System: The operating system interacts
closely with the underlying hardware components of a computer system. It manages the
central processing unit (CPU), memory, input/output (I/O) devices, and other system
resources. The OS provides an interface between the hardware and software, enabling
applications to utilize the system's resources effectively.
2. Resource Management: One of the key functions of an operating system is resource
management. It allocates and manages system resources such as CPU time, memory, disk
space, and I/O devices. The OS ensures efficient utilization of resources, implements
scheduling algorithms, and provides mechanisms for process synchronization and
communication.
3. Process and Thread Management: The operating system manages processes and
threads, which are the execution units of applications. It schedules processes for
execution, allocates resources, and provides mechanisms for inter-process
communication and synchronization. Thread management allows for parallel execution
within a process, enabling efficient utilization of multiple cores or processors.
4. File System and Storage Management: The operating system provides file system
services for organizing and accessing data stored on storage devices. It manages file
allocation, access control, and file I/O operations. Storage management involves disk
scheduling, data caching, and implementing techniques for data reliability, such as RAID
(Redundant Array of Independent Disks).
The relationship between the operating system and computer system organization is crucial for
the proper functioning of computer systems. The operating system manages system resources,
coordinates hardware components, and provides services that enable applications to run
efficiently. Understanding this relationship is fundamental for system administrators, software
developers, and anyone involved in computer system organization and operating system
management.
Interaction between Hardware and Software. The interaction between hardware and software
is crucial in managing I/O operations. The operating system provides abstractions and interfaces
that allow software applications to communicate with hardware devices. The software interacts
with the operating system's I/O subsystem, which in turn interacts with device drivers and I/O
controllers to facilitate data transfer and manage device resources.
Role of the Operating System in Managing System Resources. The operating system plays a vital
role in managing system resources, including I/O devices. It provides services and mechanisms to
control and coordinate I/O operations, allocate resources to devices, handle interruptions,
schedule I/O requests, and ensure data integrity. The operating system's resource management
ensures efficient utilization of system resources and provides a seamless interface for
applications to interact with I/O devices.
Chapter 6
Performance Evaluation and Optimization
Overview:
Performance evaluation and optimization are crucial processes in computer systems to ensure
efficient utilization of resources, enhance system performance, and meet user requirements.
Performance evaluation involves measuring and analyzing the system's performance
characteristics to identify areas for improvement. Performance optimization focuses on
addressing identified bottlenecks and implementing strategies to improve overall efficiency.
These processes are iterative and ongoing, adapting to changing workloads and system
requirements.
Objectives:
At the end of this chapter, students will be able to:
1. Understand the importance of performance evaluation in computer systems.
2. Identify the factors affecting performance of computer systems.
3. Understand the importance of performance optimization in computer systems.
4. Explain the role of performance evaluation and optimization in addressing the system
requirements.
Performance Evaluation
Performance evaluation involves measuring and analyzing the system's performance
characteristics to identify areas for improvement. Key aspects of performance evaluation include:
1. Performance Metrics: Defining relevant performance metrics based on system
requirements and user expectations. Common metrics include response time,
throughput, latency, resource utilization, and scalability.
2. Benchmarking: Conducting benchmark tests to measure the system's performance
against standardized workloads or specific application scenarios. Benchmarking helps
compare system performance against industry standards or similar systems.
3. Profiling and Monitoring: Profiling the system to collect data on resource usage,
execution time, and system behavior during different workloads. Monitoring tools and
techniques, such as performance counters, log analysis, and tracing, provide insights into
system performance and identify performance bottlenecks.
4. Workload Analysis: Analyzing the characteristics and patterns of the workload or
application running on the system. This helps identify specific tasks or operations that
have a significant impact on performance.
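As a small illustration of profiling in practice, the standard-library `timeit` module can compare two implementations of the same task. The string-building workload below is invented for demonstration; the point is the measurement technique, not the specific functions.

```python
import timeit

def concat_loop(n):
    """Repeated string concatenation: the slower candidate."""
    s = ""
    for _ in range(n):
        s += "x"
    return s

def concat_join(n):
    """Build the string once with join: the faster candidate."""
    return "".join("x" for _ in range(n))

# Measure wall-clock execution time of each candidate over 50 runs
t_loop = timeit.timeit(lambda: concat_loop(10_000), number=50)
t_join = timeit.timeit(lambda: concat_join(10_000), number=50)
print(f"loop: {t_loop:.4f}s  join: {t_join:.4f}s")

# Both produce the same result; only the cost differs
assert concat_loop(100) == concat_join(100)
```

Measurements like these identify which operation dominates the workload, which is the first step toward targeted optimization.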
1. Processor Speed and Architecture: The speed of the central processing unit (CPU) affects
the system's overall processing capability. Faster CPUs can execute instructions more
quickly, leading to improved performance. Additionally, the CPU architecture, including
the number of cores, cache size, and instruction set, can significantly impact performance,
especially for multi-threaded and computationally intensive workloads.
2. Memory Capacity and Access Speed: The amount of memory (RAM) available in the
system and its access speed affect the system's ability to store and retrieve data
efficiently. Insufficient memory can lead to frequent disk swapping, slowing down
performance. Faster memory access, such as high-speed RAM or cache memory, reduces
data retrieval latency and enhances overall system performance.
3. Storage System Performance: The performance of storage devices, such as hard disk
drives (HDDs) and solid-state drives (SSDs), directly impacts data access and transfer
rates. SSDs generally offer faster read/write speeds than HDDs, leading to improved
performance, especially for tasks involving frequent disk operations, such as file transfers
or database queries.
4. Input/Output (I/O) Subsystem: The performance of the I/O subsystem affects the speed
at which data can be transferred to and from peripheral devices. Factors such as the type
of interface (e.g., USB, Ethernet), I/O bus speed, device driver efficiency, and disk or
network latency can impact I/O performance. Slow I/O operations can cause system
bottlenecks and reduce overall performance.
5. Software Efficiency and Optimization: Well-written and optimized software can
significantly improve system performance. Efficient algorithms, proper data structures,
and optimized code can reduce computational complexity, minimize memory usage, and
enhance overall system responsiveness. Additionally, optimizing software configurations,
such as database settings or application parameters, can improve performance for
specific workloads.
6. System Configuration and Resource Allocation: Proper system configuration and
resource allocation play a critical role in performance. Optimizing settings such as CPU
scheduling, memory allocation, disk caching, and network configurations can help ensure
resources are allocated efficiently and prevent resource bottlenecks. Incorrect
configurations or inadequate resource allocation can hinder system performance.
7. Workload Characteristics: The nature and characteristics of the workload running on the
system impact its performance. Factors such as the type of applications, data access
patterns, concurrency requirements, and input/output demands influence system
performance. Understanding the workload characteristics helps optimize system
resources and tailor configurations to meet specific requirements.
8. Environmental Factors: Environmental conditions, such as temperature, humidity, and
power supply stability, can impact system performance. High ambient temperatures can
lead to thermal throttling and reduce CPU performance. Unstable power supply or
electrical noise can cause system interruptions or affect component performance.
9. Network Performance: For networked systems, network performance is crucial. Factors
such as network bandwidth, latency, packet loss, and network congestion can affect data
transfer rates and system responsiveness. Optimizing network configurations, using high-
speed connections, and implementing efficient network protocols can enhance network
performance.
10. Scalability: The ability of a system to scale and handle increasing workloads is important
for performance. Scalability considerations include factors such as system architecture,
load balancing, parallel processing, and distributed computing. A well-designed scalable
system can accommodate growing demands without significant degradation in
performance.
Understanding and optimizing these factors helps ensure optimal performance in computer
systems. Regular monitoring, performance analysis, and tuning are necessary to identify and
address performance bottlenecks, adapt to changing workloads, and maximize system efficiency.
Performance Optimization
Performance optimization aims to improve system performance and efficiency by addressing
identified bottlenecks. Optimization strategies can target various components of the system,
including hardware, software, and system configuration. Key approaches for performance
optimization include:
1. Algorithmic Optimization: Analyzing and optimizing algorithms and data structures used
in applications to reduce computational complexity and improve efficiency. This includes
selecting appropriate algorithms, optimizing data access patterns, and minimizing
redundant computations.
2. System Configuration: Optimizing system configuration settings, such as memory
allocation, CPU scheduling, I/O settings, and network configurations, to align with
workload requirements and improve system performance.
3. Parallelism and Concurrency: Leveraging parallel processing and concurrency techniques
to exploit system resources effectively. This includes utilizing multi-core processors,
parallel algorithms, threading, and task parallelism to improve performance through
simultaneous execution of multiple tasks.
4. Memory Optimization: Optimizing memory usage to minimize data access latency and
maximize cache efficiency. Techniques such as data locality optimization, caching
strategies, and memory allocation algorithms help improve memory performance.
5. I/O Optimization: Improving input/output performance through techniques such as
buffering, prefetching, and asynchronous I/O operations. These optimizations reduce I/O
overhead, enhance data transfer rates, and improve system responsiveness.
6. Compiler and Code Optimization: Utilizing compiler optimizations, code refactoring, and
performance-oriented programming techniques to generate optimized machine code and
reduce execution time. This includes loop unrolling, instruction pipelining, and
vectorization to improve code efficiency.
7. Hardware Upgrades: Upgrading hardware components, such as CPUs, memory, storage
devices, and network interfaces, to meet higher performance demands. Hardware
upgrades can significantly improve system performance, especially when existing
hardware becomes a bottleneck.
8. Profiling and Analysis: Continuously monitoring and analyzing system performance to
identify bottlenecks and measure the impact of optimizations. Profiling tools and
performance analysis techniques help evaluate the effectiveness of optimization
strategies and guide further improvements.
Performance evaluation and optimization are ongoing processes, as system requirements and
workloads change over time. By regularly evaluating system performance, identifying
bottlenecks, and implementing targeted optimizations, computer systems can deliver better
performance, improved efficiency, and enhanced user experiences.
Chapter 7
Computer Number System
Overview:
Computer number systems are the methods used to represent and manipulate numbers in digital
computer systems. They provide a systematic way to express numerical values using a set of
symbols and rules. The most commonly used number systems in computer systems are the
decimal (base-10), binary (base-2), octal (base-8), and hexadecimal (base-16) systems. Each
number system has its own unique properties and applications.
Objectives:
At the end of this chapter, students will be able to:
1. Explain the use of the binary (base-2) number system in computers.
2. Evaluate different types of number systems as they relate to computers.
3. Convert values among the decimal, binary, octal, and hexadecimal number systems.
4. Convert values with a fractional part among the decimal, binary, octal, and hexadecimal
number systems.
5. Conduct addition and subtraction in binary, octal, and hexadecimal number systems.
When we type letters or words, the computer translates them into numbers, since computers
can understand only numbers. A computer understands positional number systems, in which
there are only a few symbols called digits, and these symbols represent different values
depending on the position they occupy in the number. The value of each digit is determined by:
• The digit
• The position of the digit in the number
• The base of the number system (where the base is defined as the total number of digits
available in the number system)
The number system that we use in our day-to-day life is known as the decimal number system.
The decimal number system has base 10, since it uses ten digits, 0 through 9. In the decimal
number system, the successive positions to the left of the decimal point represent units, tens,
hundreds, thousands, and so on.
thousands, and so on. Each position represents a specific power of the base (10). For example,
the decimal number 2578 consists of the digit 8 in the units’ position, 7 in the tens position, 5 in
the hundreds position, and 2 in the thousands position. Its value can be written as:
(2 x 10^3) + (5 x 10^2) + (7 x 10^1) + (8 x 10^0)
= 2000 + 500 + 70 + 8 = 2578
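The positional rule above can be expressed as a short Python function (an illustrative sketch; the function name is ours, not from the text):

```python
def positional_value(digits, base):
    """Value of a number given its digits (most significant first) in `base`."""
    value = 0
    for d in digits:
        value = value * base + d   # shift left one position, then add the digit
    return value

print(positional_value([2, 5, 7, 8], 10))  # 2578
print(positional_value([1, 0, 1, 1], 2))   # 11
```

The same function works for any base, which is why the conversions in the rest of this chapter all follow the same pattern.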
Note:
In this lesson, two (2) solutions will be presented when converting any given number to another
base. These are Successive Division Method and either using Powers of 2 Method, or Multiples of
8 Method or Multiple of 16 Method.
Write the remainders from bottom to top.
The leftmost digit in the given base-2 number 100010 is the Most Significant Bit (MSB); it is
assigned the bit number 1.
Step 1: Choose a number from the middle column which is <=25. In this case 16 matches
the requirement. Mark the Status column with 1.
Step 2: Subtract 16 from 25. (25 - 16 = 9).
Step 3: Choose a number from the middle column which is <=9. In this case 8 matches the
requirement. Mark the Status column with 1.
Step 4: Subtract 8 from 9. (9 - 8 = 1).
Step 5: Choose a number from the middle column which is <=1. In this case 1 matches the
requirement. Mark the Status column with 1.
Step 6: Stop the process when you have a difference = 0. In this case 1 – 1 = 0.
Step 7: Copy the bits from bottom to top. Write a 0 digit for every blank Status entry in
between the MSB and LSB.
Step 1: Choose a number from the middle column which is <=34. In this case 32 matches
the requirement. Mark the Status column with 1.
Step 2: Subtract 32 from 34. (34 - 32 = 2).
Step 3: Choose a number from the middle column which is <=2. In this case 2 matches the
requirement. Mark the Status column with 1.
Step 4: Subtract 2 from 2. (2 - 2 = 0).
Step 5: Stop the process when you have a difference = 0. In this case 2 – 2 = 0.
Step 6: Copy the bits from bottom to top. Write a 0 digit for every blank Status entry in
between the MSB and LSB.
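Both worked examples can be checked with a small Python function implementing the Successive Division Method (an illustrative sketch, not part of the original text):

```python
def to_binary(n):
    """Convert a non-negative decimal integer to a binary string
    using the successive division method."""
    if n == 0:
        return "0"
    remainders = []
    while n > 0:
        remainders.append(n % 2)   # remainder of division by 2
        n //= 2                    # quotient carries to the next division
    # Write the remainders from bottom to top (last remainder is the MSB)
    return "".join(str(r) for r in reversed(remainders))

print(to_binary(25))   # 11001
print(to_binary(34))   # 100010
```

Each loop iteration performs one row of the division table: divide by 2, record the remainder, and continue with the quotient until it reaches 0.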
Binary numbers are used to represent digital information in computers, with 0 representing "off"
or "false" and 1 representing "on" or "true." For example, the binary number 1011 is calculated
as (1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0), which is equivalent to 11 in decimal representation.
16 8 4 2 1  Powers of 2 values
2^4 2^3 2^2 2^1 2^0  Powers of 2
Step 2: Multiply each bit by its corresponding power-of-2 value, and get the sum.
(1 x 32) + (0 x 16) + (0 x 8) + (0 x 4) + (1 x 2) + (0 x 1)
= 32 + 0 + 0 + 0 + 2 + 0 = 34
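The power-of-2 expansion can likewise be automated (an illustrative Python sketch):

```python
def binary_to_decimal(bits):
    """Sum each bit times its power-of-2 place value."""
    total = 0
    for position, bit in enumerate(reversed(bits)):
        total += int(bit) * (2 ** position)   # rightmost bit is position 0
    return total

print(binary_to_decimal("100010"))  # 34
print(binary_to_decimal("1011"))    # 11
```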
Therefore, 56₁₀ = 70₈
Table 7.3c Conversion Table from Decimal to Octal
Grp 0 Grp 1 Grp 2 Grp 3
0 0 0 0
1 8 64 512
2 16 128 1024
3 24 192 1536
4 32 256 2048
5 40 320 2560
6 48 384 3072
7 56 448 3584
Step 1: Choose a number from the table which is <=137. In this case 128. It can be located
from Table 7.3c; in Grp 2, Position 2.
Step 2: Subtract 128 from 137. (137 - 128 = 9).
Step 3: Choose a number from the table which is <=9. In this case 8. It can be located from
Table 7.3c; in Grp 1, Position 1.
Step 4: Subtract 8 from 9. (9 - 8 = 1).
Step 5: Choose a number from the table which is <=1. In this case 1. It can be located from
Table 7.3c; in Grp 0, Position 1.
Step 6: Stop the process when you have a difference = 0. In this case 1 – 1 = 0.
Table 7.3e Conversion Table from Decimal to Octal
Grp 0 Grp 1 Grp 2 Grp 3
0 0 0 0
1 8 64 512
2 16 128 1024
3 24 192 1536
4 32 256 2048
5 40 320 2560
6 48 384 3072
7 56 448 3584
Step 1: Choose a number from the table which is <=56. In this case 56. It can be located
from Table 7.3e; in Grp 1, Position 7.
Step 2: Subtract 56 from 56. (56 - 56 = 0).
Step 3: Choose a number from the table which is <=0. In this case 0. It can be located from
Table 7.3e; in Grp 0, Position 0.
Step 4: Stop the process when you have a difference = 0. In this case 0 – 0 = 0.
64 8 1  Powers of 8 values
8^2 8^1 8^0  Powers of 8
2 1 1  Octal digits
Step 2: Multiply each digit by its corresponding power-of-8 value, and get the sum.
(2 x 64) + (1 x 8) + (1 x 1)
= 128 + 8 + 1 = 137
8 1  Powers of 8 values
8^1 8^0  Powers of 8
7 0  Octal digits
Step 2: Multiply each digit by its corresponding power-of-8 value, and get the sum.
(7 x 8^1) + (0 x 8^0)
= (7 x 8) + (0 x 1)
= 56 + 0 = 56
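The octal conversions above can be checked with two short Python helpers (an illustrative sketch; the function names are ours):

```python
def decimal_to_octal(n):
    """Successive division by 8; remainders read from bottom to top."""
    digits = []
    while n > 0:
        digits.append(str(n % 8))
        n //= 8
    return "".join(reversed(digits)) or "0"

def octal_to_decimal(octal):
    """Sum each digit times its power-of-8 place value."""
    return sum(int(d) * 8 ** i for i, d in enumerate(reversed(octal)))

print(decimal_to_octal(137))     # 211
print(decimal_to_octal(56))      # 70
print(octal_to_decimal("211"))   # 137
```

Converting in one direction and back should always return the starting value, which makes a convenient self-check when working exercises by hand.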
Divisor  Number  Remainder
/16      223
/16       13     15 ---> (F)
           0     13 ---> (D)
Write the remainders from bottom to top. Therefore, 223₁₀ = DF₁₆
Step 1: Choose a number from the table which is <=223. In this case 208. It can be located
from Table 7.4c; in Grp 1, Position 13 (D).
Step 2: Subtract 208 from 223. (223 - 208 = 15).
Step 3: Choose a number from the table which is <=15. In this case 15. It can be located
from Table 7.4c; in Grp 0, Position 15 (F).
Step 4: Subtract 15 from 15. (15 - 15 = 0).
Step 5: Stop the process when you have a difference = 0. In this case 15 – 15 = 0.
Step 1: Choose a number from the table which is <=348. In this case 256. It can be located
from Table 7.4d; in Grp 2, Position 1.
Step 2: Subtract 256 from 348. (348 - 256 = 92).
Step 3: Choose a number from the table which is <=92. In this case 80. It can be located
from Table 7.4d; in Grp 1, Position 5.
Step 4: Subtract 80 from 92. (92 - 80 = 12).
Step 5: Choose a number from the table which is <=12. In this case 12. It can be located
from Table 7.4d; in Grp 0, Position 12 (C).
Step 6: Subtract 12 from 12. (12 - 12 = 0).
Step 7: Stop the process when you have a difference = 0. In this case 12 – 12 = 0.
16 1  Powers of 16 values
16^1 16^0  Powers of 16
D F  Hexadecimal digits
Step 2: Multiply each digit by its corresponding power-of-16 value, and get the sum.
(13 x 16) + (15 x 1)
= 208 + 15 = 223
256  16   1     Powers of 16 values
16²  16¹  16⁰   Powers of 16
1    5    C     Hexadecimal digits
Step 2: Multiply each digit by its corresponding power-of-16 value, then get the sum.
(1 x 256) + (5 x 16) + (12 x 1)
256 + 80 + 12 = 348
Therefore, 15C₁₆ = 348₁₀.
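The positional sum for hexadecimal can be sketched the same way as for octal (function name is ours, for illustration; letters A-F stand for 10-15):

```python
def hex_to_decimal(hex_str):
    # Multiply each hex digit by its power-of-16 value and sum them.
    digits = "0123456789ABCDEF"
    total = 0
    for position, ch in enumerate(reversed(hex_str.upper())):
        total += digits.index(ch) * (16 ** position)
    return total

print(hex_to_decimal("DF"))   # -> 223
print(hex_to_decimal("15C"))  # -> 348
```

Again, Python's `int("15C", 16)` gives the same result; the loop mirrors the hand method.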
Group the bits by 3, from the right going to the left, since 7 is the maximum digit in octal.
4 2 1
2² + 2¹ + 2⁰ = 7
4 2 1   4 2 1   4 2 1
0 1 0   0 1 1   1 1 0
  2       3       6
Therefore, 10011110₂ = 236₈
4 2 1   4 2 1   4 2 1   4 2 1
0 0 1   1 0 1   1 1 1   0 0 1
  1       5       7       1
Therefore, 1101111001₂ = 1571₈
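The 4-2-1 grouping above can be sketched in Python (an illustrative helper whose name is ours):

```python
def binary_to_octal(bits):
    # Pad to a multiple of 3, group by 3 from the right, and weight
    # each group 4-2-1, as in the examples above.
    bits = bits.zfill((len(bits) + 2) // 3 * 3)
    out = ""
    for i in range(0, len(bits), 3):
        g = bits[i:i + 3]
        out += str(int(g[0]) * 4 + int(g[1]) * 2 + int(g[2]))
    return out

print(binary_to_octal("10011110"))    # -> "236"
print(binary_to_octal("1101111001"))  # -> "1571"
```

The `zfill` call supplies the leading zeros that the hand method adds when the leftmost group is incomplete.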
Solution: 4-2-1 Method
  5       6       0
4 2 1   4 2 1   4 2 1
1 0 1   1 1 0   0 0 0
Therefore, 560₈ = 101110000₂
  2       4       7
4 2 1   4 2 1   4 2 1
0 1 0   1 0 0   1 1 1
Therefore, 247₈ = 010100111₂
Group the bits by 4, from the right going to the left, since 15 is the maximum number in
hexadecimal.
8 4 2 1
2³ + 2² + 2¹ + 2⁰ = 15
8 4 2 1   8 4 2 1
1 0 0 1   1 1 1 0
   9      14 (E)
Therefore, 10011110₂ = 9E₁₆
8 4 2 1   8 4 2 1   8 4 2 1
0 0 1 1   1 0 1 1   0 0 0 1
   3      11 (B)      1
Therefore, 1110110001₂ = 3B1₁₆
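The 8-4-2-1 grouping works the same way in code (an illustrative sketch; the name is ours):

```python
def binary_to_hex(bits):
    # Pad to a multiple of 4, group by 4 from the right, weight each
    # group 8-4-2-1, and map values 10-15 to letters A-F.
    digits = "0123456789ABCDEF"
    bits = bits.zfill((len(bits) + 3) // 4 * 4)
    out = ""
    for i in range(0, len(bits), 4):
        out += digits[int(bits[i:i + 4], 2)]
    return out

print(binary_to_hex("10011110"))    # -> "9E"
print(binary_to_hex("1110110001"))  # -> "3B1"
```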
Convert a number in Hexadecimal to Binary.
   C         A         B         0
8 4 2 1   8 4 2 1   8 4 2 1   8 4 2 1
1 1 0 0   1 0 1 0   1 0 1 1   0 0 0 0
Therefore, CAB0₁₆ = 1100101010110000₂
   B         E         D
8 4 2 1   8 4 2 1   8 4 2 1
1 0 1 1   1 1 1 0   1 1 0 1
Therefore, BED₁₆ = 101111101101₂
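Going from hexadecimal back to binary is a straight digit-by-digit expansion, sketched below (function name ours, for illustration):

```python
def hex_to_binary(hex_str):
    # Expand each hex digit into its 4-bit 8-4-2-1 pattern.
    return "".join(format(int(ch, 16), "04b") for ch in hex_str)

print(hex_to_binary("CAB0"))  # -> "1100101010110000"
print(hex_to_binary("BED"))   # -> "101111101101"
```

The `"04b"` format specifier zero-pads each digit's binary pattern to four bits, matching the 8-4-2-1 columns above.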
Example 1: Convert 111110.101₂ to its equivalent in Decimal.
1 1 1 1 1 0 . 1 0 1
(32 + 16 + 8 + 4 + 2 + 0) = 62        (0.5 + 0 + 0.125) = 0.625
Therefore, 111110.101₂ = 62.625₁₀
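Fractional bits use negative powers of 2 (1/2, 1/4, 1/8, ...), which can be sketched as follows (an illustrative helper; the name is ours):

```python
def binary_fraction_to_decimal(binary_str):
    # Integer bits use positive powers of 2; fractional bits use
    # negative powers (1/2, 1/4, 1/8, ...).
    whole, _, frac = binary_str.partition(".")
    value = int(whole, 2) if whole else 0
    for i, bit in enumerate(frac, start=1):
        value += int(bit) * 2 ** -i
    return value

print(binary_fraction_to_decimal("111110.101"))  # -> 62.625
```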
Table 7.8 Conversion Table from Hexadecimal w/ fractional part to Decimal
Power   Power Value       Digit
16⁻⁶    0.0000000596
16⁻⁵    0.0000009537      4
16⁻⁴    0.0000152588      1
16⁻³    0.0002441406      14 ---> (E)
16⁻²    0.0039062500      10 ---> (A)
16⁻¹    0.0625000000      7
16⁰     1                 12 ---> (C)
16¹     16                5
16²     256               1
16³     4096
16⁴     65536
16⁵     1048576
Integer part: (1 x 256) + (5 x 16) + (12 x 1) = 348
Binary Arithmetic
Binary arithmetic refers to the mathematical operations performed on binary numbers, which
use a base-2 number system. In binary arithmetic, only two digits, 0 and 1, are used to represent
numerical values. The binary number system is fundamental to digital systems and computer
architecture.
Binary arithmetic includes basic operations such as addition, subtraction, multiplication, and
division. These operations are carried out using specific rules and algorithms designed for binary
numbers. Binary arithmetic is an essential part of all digital computers and many other digital
systems.
Binary Addition
Binary addition is the process of adding two binary numbers together. It follows similar principles
to decimal addition, but with only two digits, 0 and 1, in the binary number system. Here's a step-
by-step explanation of binary addition:
It is a key for binary subtraction, multiplication, and division. There are four rules of binary
addition:
0 + 0 = 0
0 + 1 = 1
1 + 0 = 1
1 + 1 = 10
In the fourth case, binary addition creates a sum of (1 + 1 = 10), i.e., 0 is written in the given
column and a 1 is carried over to the next column.
Example: Addition
Binary Subtraction
Subtraction and Borrow, these two words will be used very frequently for binary subtraction.
There are four rules of binary subtraction:
0 - 0 = 0
1 - 0 = 1
1 - 1 = 0
0 - 1 = 1, with a borrow of 1 from the next column
Example: Subtraction
0011010 - 0001100 = 0001110
   11             (borrows)
  0011010 = 26₁₀
- 0001100 = 12₁₀
  0001110 = 14₁₀
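The carry and borrow rules can be sketched as column-by-column Python helpers (illustrative; the names and structure are ours):

```python
def binary_add(a, b):
    # Column-by-column addition: 1 + 1 = 10, so write 0 and carry 1.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for i in range(width - 1, -1, -1):
        total = int(a[i]) + int(b[i]) + carry
        out.append(str(total % 2))
        carry = total // 2
    if carry:
        out.append("1")
    return "".join(reversed(out))

def binary_sub(a, b):
    # Column-by-column subtraction (assumes a >= b): when a column
    # goes negative, borrow a group of 2 from the next column.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, out = 0, []
    for i in range(width - 1, -1, -1):
        diff = int(a[i]) - int(b[i]) - borrow
        borrow = 1 if diff < 0 else 0
        out.append(str(diff + 2 if diff < 0 else diff))
    return "".join(reversed(out))

print(binary_sub("0011010", "0001100"))  # -> "0001110" (26 - 12 = 14)
print(binary_add("0011010", "0001100"))  # -> "0100110" (26 + 12 = 38)
```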
Octal Arithmetic
Octal arithmetic refers to the mathematical operations performed on octal numbers, which use
a base-8 number system. In octal arithmetic, digits from 0 to 7 are used to represent numerical
values. Octal arithmetic follows similar principles to decimal arithmetic but operates with a
smaller set of digits.
Octal Addition
Octal addition is the process of adding two octal numbers together. In octal arithmetic, digits
from 0 to 7 are used, and addition follows similar principles to decimal addition. Below is a data
table representation of the octal addition table that will help you handle octal addition.
+ |  0   1   2   3   4   5   6   7
0 |  0   1   2   3   4   5   6   7
1 |  1   2   3   4   5   6   7  10
2 |  2   3   4   5   6   7  10  11
3 |  3   4   5   6   7  10  11  12
4 |  4   5   6   7  10  11  12  13
5 |  5   6   7  10  11  12  13  14
6 |  6   7  10  11  12  13  14  15
7 |  7  10  11  12  13  14  15  16
To use the table, simply follow the directions used in this example: Add 6₈ and 5₈. Locate 6 in
the left column, then locate the 5 in the top row. The cell where the row and column intersect is
the sum of the two numbers.
Consider: 6₈ + 5₈ = 13₈.
Example: Addition
Octal Subtraction
The subtraction of octal numbers follows the same rules as the subtraction of numbers in any
other number system. The only variation is in the borrowed number. In the decimal system, you
borrow a group of 10₁₀. In the binary system, you borrow a group of 2₁₀. In the octal system, you
borrow a group of 8₁₀.
Example: Subtraction
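The octal addition table and the borrow-of-8 rule can be sketched as Python helpers (illustrative; names and structure are ours, not the text's method):

```python
def octal_add(a, b):
    # When a column's sum reaches 8, write (sum - 8) and carry 1,
    # which is exactly what the addition table encodes.
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for i in range(width - 1, -1, -1):
        s = int(a[i]) + int(b[i]) + carry
        out.append(str(s % 8))
        carry = s // 8
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

def octal_sub(a, b):
    # Borrow a group of 8 when a column goes negative (assumes a >= b).
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    borrow, out = 0, []
    for i in range(width - 1, -1, -1):
        d = int(a[i]) - int(b[i]) - borrow
        borrow = 1 if d < 0 else 0
        out.append(str(d + 8 if d < 0 else d))
    return "".join(reversed(out))

print(octal_add("6", "5"))   # -> "13"
print(octal_sub("13", "5"))  # -> "06"
```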
Hexadecimal Arithmetic
Hexadecimal arithmetic refers to the mathematical operations performed on hexadecimal
numbers, which use a base-16 number system. In hexadecimal arithmetic, digits from 0 to 9 are
used for values 0 to 9, and letters A to F represent values 10 to 15.
Hexadecimal Addition
Hexadecimal addition is the process of adding two hexadecimal numbers together. In
hexadecimal arithmetic, digits from 0 to 9 represent values 0 to 9, and letters A to F represent
values 10 to 15. Hexadecimal addition follows similar principles to decimal addition, but with a
larger set of digits. Below is a data table representation of the hexadecimal addition table that
will help you handle hexadecimal addition.
+ 0 1 2 3 4 5 6 7 8 9 A B C D E F
0 0 1 2 3 4 5 6 7 8 9 A B C D E F
1 1 2 3 4 5 6 7 8 9 A B C D E F 10
2 2 3 4 5 6 7 8 9 A B C D E F 10 11
3 3 4 5 6 7 8 9 A B C D E F 10 11 12
4 4 5 6 7 8 9 A B C D E F 10 11 12 13
5 5 6 7 8 9 A B C D E F 10 11 12 13 14
6 6 7 8 9 A B C D E F 10 11 12 13 14 15
7 7 8 9 A B C D E F 10 11 12 13 14 15 16
8 8 9 A B C D E F 10 11 12 13 14 15 16 17
9 9 A B C D E F 10 11 12 13 14 15 16 17 18
A A B C D E F 10 11 12 13 14 15 16 17 18 19
B B C D E F 10 11 12 13 14 15 16 17 18 19 1A
C C D E F 10 11 12 13 14 15 16 17 18 19 1A 1B
D D E F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C
E E F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D
F F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E
To use the table above, simply follow the directions used in this example: Add A₁₆ and 5₁₆.
Locate A in the left column, then locate the 5 in the top row. The cell where the row and column
intersect is the sum of the two numbers.
Consider: A₁₆ + 5₁₆ = F₁₆
Example: Addition
  4A6 = 1190₁₀
+ 1B3 = 435₁₀
  659 = 1625₁₀
Hexadecimal Subtraction
The subtraction of hexadecimal numbers follows the same rules as the subtraction of numbers
in any other number system. The only variation is in the borrowed number. In the decimal system,
you borrow a group of 10₁₀. In the binary system, you borrow a group of 2₁₀. In the hexadecimal
system, you borrow a group of 16₁₀.
Example: Subtraction
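Hexadecimal carry and borrow follow the same column-by-column pattern, sketched below with borrow groups of 16 (illustrative helpers; names are ours):

```python
HEX = "0123456789ABCDEF"

def hex_add(a, b):
    # When a column's sum reaches 16, write (sum - 16) and carry 1.
    width = max(len(a), len(b))
    a, b = a.upper().zfill(width), b.upper().zfill(width)
    carry, out = 0, []
    for i in range(width - 1, -1, -1):
        s = HEX.index(a[i]) + HEX.index(b[i]) + carry
        out.append(HEX[s % 16])
        carry = s // 16
    if carry:
        out.append(HEX[carry])
    return "".join(reversed(out))

def hex_sub(a, b):
    # Borrow a group of 16 when a column goes negative (assumes a >= b).
    width = max(len(a), len(b))
    a, b = a.upper().zfill(width), b.upper().zfill(width)
    borrow, out = 0, []
    for i in range(width - 1, -1, -1):
        d = HEX.index(a[i]) - HEX.index(b[i]) - borrow
        borrow = 1 if d < 0 else 0
        out.append(HEX[d + 16 if d < 0 else d])
    return "".join(reversed(out))

print(hex_add("4A6", "1B3"))  # -> "659"
print(hex_sub("659", "1B3"))  # -> "4A6"
```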
Chapter 8
Computer Essentials
Overview:
Computer essentials refer to the fundamental knowledge and skills required to effectively use
and understand computer systems. These essentials encompass various components, concepts,
and skills that are essential for interacting with computers, managing data, navigating the digital
landscape, and ensuring the security and optimal functioning of computer systems.
Objectives:
At the end of this chapter, students will be able to:
1. Describe the different basic hardware and software.
2. Articulate the functionalities of each hardware and software.
3. Demonstrate the use of digital devices to facilitate information gathering.
The Computer
A computer is an electronic device that manipulates information, or data. It can store, retrieve,
and process data. You may already know that you can use a computer to type documents, send
email, play games, and browse the Web. You can also use it to edit or create spreadsheets,
presentations, and even videos.
• Hardware is any part of your computer that has a physical structure, such as the keyboard
or mouse. It also includes all the computer's internal parts, which you can see in the image
on the right.
• Software is any set of instructions that tells the hardware what to do and how to do it.
Examples of software include web browsers, games, and word processors. Below, you can
see an image of Microsoft PowerPoint, which is used to create presentations.
Figure 8.2 Sample Software Interface (Microsoft PowerPoint)
Everything you do on your computer will rely on both hardware and software. For example, you
might view a lesson in a web browser (software) and use your mouse (hardware) to click from
page to page. As you learn about different types of computers, ask yourself about the differences in
their hardware. As you progress through this tutorial, you'll see that different types of computers
also often use different types of software.
Desktop Computers
Many people use desktop computers at work, home, and school. Desktop computers are
designed to be placed on a desk, and they're typically made up of a few different parts, including
the computer case, monitor, keyboard, and mouse (See Figure 8.3).
Laptop Computers
The second type of computer you may be familiar with is a laptop computer, commonly called a
laptop. Laptops are battery-powered computers that are more portable than desktops, allowing
you to use them almost anywhere. (See Figure 8.5)
Tablet Computers
Tablet computers, or tablets, are handheld computers that are even more portable than laptops.
Instead of a keyboard and mouse, tablets use a touch-sensitive screen for typing and navigation.
The iPad is an example of a tablet, as shown in (Figure 8.7).
• Smartphones: Many cell phones can do a lot of things computers can do, including
browsing the Internet and playing games. They are often called smartphones.
• Wearables: Wearable technology is a general term for a group of devices including fitness
trackers and smartwatches that are designed to be worn throughout the day. These
devices are often called wearables for short.
• Game consoles: A game console is a specialized type of computer that is used for playing
video games on your TV.
• TVs: Many TVs now include applications or apps that let you access various types of online
content. For example, you can stream video from the Internet directly onto your TV.
PCs
This type of computer began with the original IBM PC that was introduced in 1981. Other
companies began creating similar computers, which were called IBM PC Compatible (often
shortened to PC). Today, this is the most common type of personal computer, and it typically
includes the Microsoft Windows operating system (See Figure 8.9).
Computer Case
The computer case is the metal and plastic box that contains the main components of the
computer, including the motherboard, central processing unit (CPU), and power supply. The front
of the case usually has an On/Off button and one or more optical drives.
Computer cases come in different shapes and sizes. A desktop case lies flat on a desk, and the
monitor usually sits on top of it. A tower case is tall and sits next to the monitor or on the floor.
All-in-one computers come with the internal components built into the monitor, which
eliminates the need for a separate case.
Monitor
The monitor works with a video card, located inside the computer case, to display images and
text on the screen. Most monitors have control buttons that allow you to change your monitor's
display settings, and some monitors also have built-in speakers.
Newer monitors usually have LCD (liquid crystal display) or LED (light-emitting diode) displays.
These can be made very thin, and they are often called flat panel displays. Older monitors use
CRT (cathode ray tube) displays. CRT monitors are much larger and heavier, and they take up
more desk space.
Keyboard
The keyboard is one of the main ways to communicate with a computer. There are many different
types of keyboards, but most are very similar and allow you to accomplish the same basic tasks.
Mouse
The mouse is another important tool for communicating with computers. Commonly known as a
pointing device, it lets you point to objects on the screen, click on them, and move them.
• The optical mouse uses an electronic eye to detect movement and is easier to clean
(Figure 8.13a).
• The mechanical mouse uses a rolling ball to detect movement and requires regular
cleaning to work properly (Figure 8.13b).
Figure 8.13a The Bottom of an Optical Mouse. Figure 8.13b The Bottom of a Mechanical Mouse.
Mouse Alternatives
There are other devices that can do the same thing as a mouse. Many people find them easier to
use, and they also require less desk space than a traditional mouse. The most common mouse
alternatives are below.
• Trackball: A trackball has a ball that can rotate freely. Instead of moving the device like a
mouse, you can roll the ball with your thumb to move the pointer.
• Touchpad: A touchpad, also called a trackpad, is a touch-sensitive pad that lets you
control the pointer by making a drawing motion with your finger. Touchpads are common
on laptop computers.
Figure 8.15 Touchpad.
1. Power Socket: This is where you'll connect the power cord to the computer.
2. Ethernet Port: This port looks a lot like the modem or telephone port, but it is slightly wider.
You can use this port for networking and connecting to the Internet.
3. Serial Port: This port is less common on today's computers. It was frequently used to connect
peripherals like digital cameras, but it has been replaced by USB and other types of ports.
4. Expansion Slots: These empty slots are where expansion cards are added to computers. For
example, if your computer did not come with a video card, you could purchase one and install
it here.
5. Parallel Port: This is an older port that is less common on new computers. Like the serial port,
it has now been replaced by USB.
6. Audio In/Audio Out: Almost every computer has two or more audio ports where you can
connect various devices, including speakers, microphones, and headsets.
7. USB Ports: On most desktop computers, most of the USB ports are on the back of the
computer case. Generally, you'll want to connect your mouse and keyboard to these ports
and keep the front USB ports free so they can be used for digital cameras and other devices.
8. Monitor Port: This is where you'll connect your monitor cable. In this example, the computer
has both a DisplayPort and a VGA port. Other computers may have other types of monitor
ports, such as DVI (digital visual interface) or HDMI (high-definition multimedia interface).
9. PS/2: These ports are sometimes used for connecting the mouse and keyboard. Typically, the
mouse port is green, and the keyboard port is purple. On new computers, these ports have
been replaced by USB.
1. FireWire (IEEE 1394): FireWire is a high-speed serial interface that was commonly used
for connecting peripherals, such as external hard drives, digital cameras, and audio
interfaces to computers. It provided a fast data transfer rate and the ability to daisy-chain
multiple devices. FireWire was popular in the early 2000s but has been largely phased out
in favor of other interfaces.
2. Thunderbolt: Thunderbolt is an interface technology developed by Intel in collaboration
with Apple. It combines high-speed data transfer and video output capabilities in a single
port. Thunderbolt uses a Mini DisplayPort connector and can support various protocols,
including DisplayPort, PCI Express, and USB. It allows for fast data transfer between
devices, such as external hard drives, monitors, and audio interfaces, and is commonly
found on Mac computers and some Windows PCs.
3. HDMI (High-Definition Multimedia Interface): HDMI is a widely used interface for
transmitting high-definition audio and video signals between devices. It is commonly
found on TVs, monitors, home theater systems, gaming consoles, and other audio/video
equipment. HDMI supports both video and audio transmission through a single cable,
eliminating the need for separate audio connections. It has undergone several revisions
over the years, with newer versions supporting higher resolutions, refresh rates, and
additional features like Ethernet connectivity.
It's important to note that FireWire and HDMI are primarily used for specific purposes like data
transfer or audio/video connectivity, whereas Thunderbolt combines various functionalities into
a single interface, including data transfer, video output, and peripheral connectivity.
If your computer has ports you don't recognize, you should consult your manual for more
information.
• Printers: A printer is used to print documents, photos, and anything else that appears on
your screen. There are many types of printers, including inkjet, laser, and photo printers.
There are even all-in-one printers, which can also scan and copy documents.
• Scanners: A scanner allows you to copy a physical image or document and save it to your
computer as a digital (computer-readable) image. Many scanners are included as part of
an all-in-one printer, although you can also buy a separate flatbed or handheld scanner.
• Speakers/headphones: Speakers and headphones are output devices, which means they
send information from the computer to the user—in this case, they allow you to hear
sound and music. Depending on the model, they may connect to the audio port or the
USB port. Some monitors also have built-in speakers.
Figure 8.20 Speaker system.
• Web cameras: A web camera or webcam is a type of input device that can record videos
and take pictures. It can also transmit video over the Internet in real time, which allows
for video chat or video conferencing with someone else. Many webcams also include a
microphone for this reason.
• Game controllers and joysticks: A game controller is used to control computer games.
There are many other types of controllers you can use, including joysticks, although you
can also use your mouse and keyboard to control most games.
Figure 8.23 Game controllers and Joystick.
• Digital cameras: A digital camera lets you capture pictures and videos in a digital format.
By connecting the camera to your computer's USB port, you can transfer the images from
the camera to the computer.
• Mobile phones, MP3 players, tablet computers, and other devices: Whenever you buy
an electronic device, such as a mobile phone or MP3 player, check to see if it comes with
a USB cable. If it does, this means you can most likely connect it to your computer.
Chapter 9
Internal Computer Hardware
Overview:
Internal computer hardware refers to the physical components that are housed within the
computer system unit or case. These components work together to process data, store
information, and perform various tasks. Understanding the key internal hardware components is
essential for building, upgrading, and troubleshooting computer systems.
Objectives:
After finishing this lesson, the student is expected to:
1. Recommend various internal hardware.
2. Discuss the relationship of each part regarding the overall functionality of the computer.
3. Interpret the power rating relative to the overall energy consumption of computer.
4. Design the specification for a desktop computer.
Inside a Computer
Inside a computer, there are various components and subsystems that work together to perform
different functions and carry out tasks. Here are some of the key components found inside a
typical desktop computer:
The Motherboard
Motherboard: The motherboard is a printed circuit board that acts as the main hub or
backbone of the computer. It connects and provides power to various components,
including the CPU, RAM, storage devices, expansion cards, and other peripherals. Figure
9.1, on the next page is a picture of the ASUS P5AD2-E motherboard with labels next to
each of its major components.
Motherboard and Components
Below are the different components for each of the motherboard components mentioned
in Figure 9.1.
1. Expansion Slots (PCI Express, PCI, and AGP). Expansion slots, such as PCI Express
(PCIe), PCI, and AGP, are physical slots on a motherboard that allow you to connect
expansion cards to enhance the capabilities of a computer system. Here's an overview
of each expansion slot:
• PCI Express (PCIe):
o PCI Express is the most common and fastest expansion slot used in modern
computers.
o It offers high-speed data transfer rates and is suitable for various peripherals,
including graphics cards, network cards, sound cards, and storage devices.
o PCIe slots come in different sizes, including x1, x4, x8, and x16, indicating the
number of lanes available for data transfer. Larger-sized slots provide more
bandwidth.
• PCI (Peripheral Component Interconnect):
o PCI is an older expansion slot that has been largely phased out but is still
present in some legacy systems.
o It offers lower data transfer rates compared to PCIe and is suitable for
connecting expansion cards such as sound cards, network cards, and legacy
devices.
o PCI slots are typically white and come in 32-bit (older) and 64-bit (newer)
variations.
• AGP (Accelerated Graphics Port):
o AGP is an older expansion slot primarily used for connecting graphics cards.
o It provided a dedicated high-speed channel for graphics data transfer,
enhancing graphics performance compared to using a standard PCI slot.
o AGP slots are no longer commonly found on modern motherboards, as they
have been replaced by PCIe for graphics card connections.
It's important to note that the compatibility between expansion slots and expansion cards
is crucial. For example, a PCIe card cannot be inserted into a PCI or AGP slot, and vice
versa. The type of slot available on a motherboard determines the compatibility of
expansion cards.
In modern systems, PCIe is the most prevalent and versatile expansion slot, providing
high-speed data transfer for a wide range of peripherals. PCI slots are typically used for
older or specialized devices, while AGP slots are outdated and not found in current
motherboard designs.
Figure 9.2 PCI Express slots.
Expansion Cards. Expansion cards, also known as expansion boards or add-on cards, are
hardware devices that can be inserted into expansion slots on a computer's
motherboard to add functionality or enhance the capabilities of the system. Here are
some common types of expansion cards:
• Graphics Card (GPU):
o A graphics card is an expansion card that handles the rendering of images, videos,
and 3D graphics on a computer.
o It connects to a PCIe slot and typically has its own graphics processing unit (GPU),
video memory, and video outputs (such as HDMI, DisplayPort, or DVI) to connect
to monitors or displays.
• Network Interface Card (NIC):
o A network interface card is an expansion card that enables a computer to connect
to a network, either through Ethernet or Wi-Fi.
o It can provide wired or wireless connectivity and is used for network
communication, data transfer, and internet connectivity.
• TV Tuner Card:
o A TV tuner card allows a computer to receive and display television signals, either
analog or digital.
o It can convert TV signals into a format that the computer can understand and
display on the monitor.
• RAID Controller Card:
o It provides hardware-based RAID functionality, allowing multiple hard drives to
be combined for improved data redundancy, performance, or a combination of
both.
Figure 9.12 USB Sound Card. Figure 9.13 Bluetooth USB dongle.
Figure 9.16 PCIe Sound Card
• Modem Card:
o A modem card enables a computer to connect to a telephone line or other
communication networks to transmit data over a dial-up or broadband
connection.
• Capture Card:
o A capture card is used to capture audio or video signals from external sources,
such as cameras, game consoles, or VCRs, and input them into a computer for
recording or live streaming.
These are just a few examples of expansion cards available for enhancing specific functionalities
or features of a computer system. The compatibility of expansion cards with a motherboard
depends on the type of expansion slot available, such as PCIe, PCI, or AGP, and the specifications
of the motherboard itself.
3-pin Case Fan Connectors
A 3-pin case fan connector, also known as a 3-pin fan header, is a type of connector commonly
used to connect case fans to the motherboard or fan controller. It provides power to the fan and
allows for control of its speed. Here is a description of the different pins found in a 3-pin case fan
connector:
• Ground (GND): This pin is usually marked with a black wire and serves as the ground
connection for the fan. It completes the electrical circuit and provides the reference
voltage for the fan's operation.
• +12V (Power): This pin, often marked in red, supplies the fan with a +12V power source.
It provides the necessary voltage for the fan to operate and spin.
• Tachometer (TACH): The third pin, usually marked in yellow or blue, is the tachometer
pin. It provides feedback to the motherboard or fan controller, reporting the fan's
rotational speed (RPM). The motherboard can monitor this signal to determine if the fan
is functioning correctly.
With a 3-pin case fan connector, the fan speed is typically controlled through voltage modulation.
The motherboard adjusts the voltage supplied to the fan, which in turn affects the fan's speed.
However, it's important to note that 3-pin connectors may provide limited control options
compared to more advanced 4-pin PWM (Pulse Width Modulation) connectors.
When connecting a 3-pin case fan, make sure to align the pins with the corresponding holes on
the fan header and ensure a secure connection. The connector is usually designed to be
polarized, meaning it can only be inserted in one direction.
It's worth mentioning that some motherboards or fan controllers may offer both 3-pin and 4-pin
fan headers to accommodate different types of fans. Additionally, adapters or splitters are
available that allow for connecting multiple fans to a single fan header or converting between
different connector types if needed.
Back Panel Connectors
Back panel connectors, also known as I/O ports or rear panel connectors, refer to the various
ports and connectors located on the back of a computer case or motherboard. These connectors
allow for the connection of external devices and peripherals. Here are some common back panel
connectors found on a typical computer:
• USB Ports: Universal Serial Bus (USB) ports are used for connecting a wide range of
devices such as keyboards, mice, printers, external hard drives, and USB flash drives. The
back panel typically features multiple USB ports, with newer versions supporting faster
transfer speeds.
• Audio Jacks: Audio jacks allow for the connection of audio devices such as speakers,
headphones, microphones, and audio input/output devices. Common audio jacks include
the Line-In, Line-Out, and Microphone jacks.
• Video Ports: Video ports enable the connection of monitors and display devices. Common
video connectors include VGA (analog), DVI (digital), HDMI (digital), and DisplayPort
(digital).
• Ethernet Port: The Ethernet port, also known as the LAN port, provides a connection for
wired network communication. It allows you to connect the computer to a local area
network (LAN) or the internet using an Ethernet cable.
• PS/2 Ports: PS/2 ports are used for connecting legacy peripherals such as keyboards and
mice. The purple port is for keyboards (PS/2 keyboard), while the green port is for mice
(PS/2 mouse).
• Serial and Parallel Ports: While less common in modern computers, some back panels
may include serial ports (for connecting serial devices like barcode scanners) and parallel
ports (for connecting printers and other parallel devices).
• Other Ports: Depending on the computer or motherboard, you may find additional
connectors such as eSATA ports (for external SATA storage devices), FireWire ports (for
high-speed data transfer), audio/video input ports (for capturing audio/video signals),
and more.
The colors mentioned in the definitions below represent the commonly used color-coding for the
connectors described.
• Keyboard (Purple): The keyboard port, typically colored purple, is used for connecting a
computer keyboard. It is a PS/2 port that allows the keyboard to send input signals to the
computer.
• Mouse (Green): The green-colored port is used for connecting a computer mouse. It is a
PS/2 port that allows the mouse to send input signals to the computer.
• Serial (Cyan): The cyan-colored port refers to a serial port, which is used for connecting
serial devices. Serial ports are less common in modern computers but were widely used
in the past for devices such as modems, barcode scanners, and serial printers.
• Printer (Violet): The violet-colored port, often referred to as a parallel port, is used for
connecting printers and other parallel devices. Parallel ports transmit data in parallel,
allowing for faster communication with compatible devices.
• Monitor (VGA - Video Graphics Array) - Blue: The blue-colored port represents a VGA
port, which is used for connecting a monitor to the computer. VGA ports are commonly
found on older computers and displays and transmit analog video signals.
• Monitor (DVI - Digital Visual Interface) - White: The white-colored port represents a DVI
port, which is used for connecting a monitor to the computer. DVI ports can transmit both
analog and digital video signals and are commonly found on a range of displays.
• Line Out - Lime Green: The lime green-colored port represents the line out port, which is
used for connecting audio output devices such as speakers or headphones. It allows the
computer to send audio signals to external devices.
• Microphone - Pink: The pink-colored port is used for connecting a microphone. It allows
the computer to receive audio input from an external microphone device.
• Audio In - Grey: The grey-colored port refers to the audio input port, which allows the
computer to receive audio input from external devices such as audio players or other
audio sources.
• Joystick - Yellow: The yellow-colored port is used for connecting a joystick or game
controller. It allows the computer to receive input signals from the joystick or controller
for gaming purposes.
Note: It's important to note that while the color-coding described here is commonly used, it may
vary depending on the manufacturer and specific computer or motherboard model.
Heat Sink.
A heat sink is a cooling device used in electronic devices, particularly in computers, to dissipate
heat generated by electronic components such as the central processing unit (CPU), graphics
processing unit (GPU), or other integrated circuits. Its primary function is to absorb and transfer
the heat away from the component to ensure optimal operating temperatures and prevent
overheating.
Heat sinks are typically made of thermally conductive materials like aluminum or copper. They
consist of a large surface area with fins or ridges that increase the contact area for heat
dissipation. The heat sink is attached directly to the heat-generating component, such as the CPU,
using thermal interface materials like thermal paste or thermal pads to improve heat transfer
efficiency.
The heat sink works based on the principle of conduction and convection. When the electronic
component generates heat during operation, the heat is conducted through the base of the heat
sink. The large surface area of the fins or ridges allows for efficient heat dissipation by increasing
the contact with the surrounding air. As the air flows over the fins, it carries away the heat,
cooling the heat sink and the component.
In some cases, heat sinks are equipped with fans, known as active cooling, to enhance the heat
dissipation process. These fans help to increase the airflow over the heat sink, thus improving
the cooling efficiency. The combination of a heat sink and fan is commonly referred to as a heat
sink and fan assembly (HSF) or a cooler.
Heat sinks are essential in maintaining the optimal temperature of electronic components.
Without proper cooling, components can overheat, leading to reduced performance, instability,
or even permanent damage. Therefore, heat sinks play a crucial role in ensuring the reliable and
efficient operation of electronic devices, particularly in high-performance computing systems
where heat generation is significant. There are two heat sink types: active and passive.
The addition of a fan to a heat sink improves the cooling capacity by facilitating higher
airflow and increased heat dissipation. The fan helps to move air over the heat sink's
fins or ridges, enhancing the convective cooling process. This increased airflow carries
away the heat more efficiently, resulting in improved cooling performance and lower
component temperatures.
Active heat sinks are commonly used in electronic devices where passive cooling alone
may not be sufficient to dissipate the heat generated by high-power components.
These include computer CPUs, GPUs, power amplifiers, and other heat-intensive
components. By combining the thermal conductivity of the heat sink with the forced
airflow from the fan, active heat sinks provide more effective cooling and help maintain
optimal operating temperatures.
The fan in an active heat sink may be directly attached to the heat sink, mounted on
top of it, or integrated into the heat sink design. Some active heat sinks also feature
additional technologies such as heat pipes or vapor chambers to further enhance heat
transfer and distribution within the heat sink.
It's worth noting that active heat sinks can produce noise due to the fan operation.
However, advancements in fan design and control technologies have led to quieter and
more efficient cooling solutions.
Tip: If you are looking to purchase a fan heat sink, we recommend those with ball-bearing motors, as they often last much longer than sleeve-bearing motors.
Passive heat sinks are typically made of thermally conductive materials, such as
aluminum or copper, and feature a large surface area with fins or ridges. The heat sink
is directly attached to the heat-generating component, such as a CPU or GPU, using
thermal interface materials to improve heat transfer efficiency.
The passive heat sink operates based on the principles of conduction and natural
convection. As the electronic component generates heat, the heat is conducted
through the base of the heat sink and transferred to the fins or ridges. The large surface
area of the heat sink allows for increased contact with the surrounding air. Heat is then
dissipated through natural convection, where cooler air replaces the heated air near
the heat sink, creating a continuous flow of air over the fins. This airflow carries away
the heat, cooling the heat sink and the component.
Passive heat sinks are commonly used in applications where noise reduction,
reliability, or power efficiency is crucial. Since they do not rely on fans, passive heat
sinks operate silently and have no moving parts that can fail or require maintenance.
They are particularly suitable for low-power or low-heat applications, where the heat
dissipation requirements can be adequately met through passive cooling alone.
However, it's important to note that passive heat sinks have limitations in terms of
cooling capacity compared to active heat sinks. They are less effective in dissipating
heat from high-power components or in environments with limited airflow. In such
cases, active heat sinks or other cooling methods may be necessary.
Passive heat sinks offer a reliable and silent cooling solution for electronic
components, ensuring proper heat dissipation and preventing overheating. Their
design simplicity, lack of noise, and low maintenance requirements make them well-
suited for specific applications where their cooling capabilities align with the thermal
requirements of the components.
The P4 power connector consists of a 4-pin male connector that mates with a corresponding 4-
pin female connector on the motherboard. It delivers additional power beyond what the main
motherboard power connector (usually a 20 or 24-pin connector) provides. This additional
power is necessary to meet the high power demands of modern CPUs, especially in high-
performance systems or systems with overclocked CPUs.
The P4 power connector provides a dedicated power supply to the CPU, ensuring stable voltage
delivery and preventing voltage drops during periods of high CPU activity. It helps to minimize
the risk of instability, system crashes, or damage to the CPU due to insufficient power.
To connect the P4 power connector, align the pins of the male connector with the corresponding
holes on the female connector on the motherboard. It is usually designed in a way that ensures
correct orientation and prevents incorrect insertion. Once properly aligned, gently press the
connectors together until they are fully seated and securely connected.
It's important to note that not all motherboards require a P4 power connector. Older systems
or systems with less power-hungry CPUs may not have this connector. However, for systems
that do require it, it is crucial to connect the P4 power connector to ensure stable and reliable
CPU performance.
The 4-pin P4 power connector plays a vital role in supplying adequate power to the CPU, helping
to maintain system stability and prevent issues related to insufficient power delivery.
Note: If you have a new power supply with an 8-pin connector and a motherboard that needs a
P4 connector, the 8-pin connector can usually serve as a P4 connector. Most 8-pin CPU power
connectors are backward compatible: they consist of two 4-pin halves that can be separated,
one of which plugs into the P4 socket.
voltage across the inductor that opposes the change in current. This property is known
as inductance and is measured in henries (H).
Inductors come in various shapes, sizes, and values, allowing them to be used in a wide
range of electronic circuits. They are represented by symbols in circuit diagrams and can
be found in electronic devices ranging from simple consumer electronics to complex
industrial systems.
(DC) while allowing alternating current (AC) to pass through, thereby separating
different frequencies and eliminating unwanted noise or ripple.
o Coupling and Decoupling: Capacitors are used for coupling or connecting
different stages of electronic circuits, allowing the AC signal to pass while
blocking DC components. They also provide decoupling or isolation by stabilizing
the power supply voltage, preventing fluctuations from affecting sensitive
components.
o Timing and Oscillation: Capacitors, in conjunction with resistors, are used to
create timing circuits and oscillators. They determine the rate of charging and
discharging, enabling the generation of precise time delays or frequency
oscillations.
o Voltage Regulation: Capacitors are employed in voltage regulator circuits to
stabilize and maintain a constant voltage level. They act as a buffer, supplying
extra energy when the voltage drops and absorbing excess energy when the
voltage rises.
o Power Factor Correction: Capacitors can improve the power factor of electrical
systems by compensating for reactive power, thus increasing efficiency and
reducing energy consumption.
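The timing behavior described above follows the RC charging law: a capacitor charging through a resistor reaches about 63% of the supply voltage after one time constant, τ = R × C. A small sketch (the component values are illustrative):

```python
import math

def capacitor_voltage(v_supply, r_ohms, c_farads, t_seconds):
    """Voltage across a charging capacitor after t seconds:
    V(t) = Vs * (1 - e^(-t / RC))."""
    tau = r_ohms * c_farads
    return v_supply * (1 - math.exp(-t_seconds / tau))

# Example: 5 V supply, 10 kOhm resistor, 100 uF capacitor -> tau = 1 second.
tau = 10_000 * 100e-6
print(round(capacitor_voltage(5, 10_000, 100e-6, tau), 2))  # 3.16 (about 63% of 5 V)
```

This is the same relationship a timing circuit exploits: choosing R and C sets how long the capacitor takes to reach a threshold voltage.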
Capacitors are available in various types, including ceramic, electrolytic, tantalum, film,
and more. Each type has specific properties such as capacitance, voltage rating,
temperature stability, and frequency response. Capacitors are represented by symbols
in circuit diagrams, and their values are measured in farads (F) or microfarads (μF),
picofarads (pF), or nanofarads (nF) for smaller capacitance values.
Capacitors are extensively used in electronic devices and systems, ranging from small
consumer electronics to industrial equipment and power distribution networks. Their
ability to store and release electrical energy in a controlled manner makes them
fundamental components in modern electronics.
CPU Socket
A CPU socket, also known as a processor socket or CPU slot, is a mechanical component on a
computer motherboard that serves as the interface between the central processing unit (CPU)
and the motherboard. It is designed to securely hold the CPU and provide electrical connections
for data transfer and power supply.
The CPU socket plays a crucial role in computer architecture as it determines the compatibility of
the CPU with the motherboard. Different CPU sockets are designed to accommodate specific CPU
models or families, each having a unique pin layout and physical design. Examples of commonly
used CPU socket types include:
PGA (Pin Grid Array) is a type of packaging technology used for integrated circuits (ICs),
particularly for processors (CPUs) and chipsets. It is a method of connecting the IC to the
printed circuit board (PCB) or socket.
In a PGA, the IC has an array of pins or contacts arranged in a regular grid pattern on the
underside of the package. These pins are typically in the form of small metal protrusions
that extend downward from the IC package.
The PGA package is designed to fit into a corresponding socket on the PCB or
motherboard. The socket has a matching grid of holes or slots that align with the pins on
the PGA package. The pins make contact with the electrical connections in the socket,
establishing the electrical connection between the IC and the PCB.
It's worth noting that there are different variations of PGA, such as PGA-ZIF (Zero Insertion
Force), which allows for easy insertion and removal of the IC package from the socket
without requiring any force. Additionally, the number of pins in a PGA package can vary
depending on the specific IC and application, ranging from a few dozen to several hundred
pins.
Figure 9.24 shows a PGA socket. It is an integrated circuit packaging standard used in most
second- through fifth-generation processors. Pin grid array packages were either rectangular or
square in shape, with pins arranged in a regular array. Pin grid array was preferred over dual
in-line packaging for processors with wider data buses, as it could handle the required number
of connections better.
The pin grid array started with the Intel 80286 microprocessor. It was mounted on a
printed circuit board either by insertion into a socket or occasionally by the through-hole
method. Pin grid arrays had many variations, such as plastic PGA (PPGA), ceramic PGA
(CPGA), and flip-chip PGA (FC-PGA).
It's important to note that the meanings of these terms can vary depending on the specific
context in which they are used. The definitions provided here relate specifically to their
common usage in the field of electronics and technology.
PGA is a common packaging technology used for ICs, particularly in CPUs and chipsets. It
provides a secure electrical connection, ease of replacement, and good thermal
performance, making it suitable for various computer and electronic applications.
• LGA (Land Grid Array): LGA sockets, used by Intel CPUs, employ an array of flat contacts
on the CPU that make direct contact with pads on the socket. The CPU is placed onto the
socket, and a locking mechanism secures it in position. The contacts provide the necessary
electrical connections between the CPU and the motherboard.
Figure 9.25 presents an LGA socket. LGA is an integrated circuit packaging design involving a
square grid of flat contacts (lands) on the underside of the chip that connect to other
components of a printed circuit board. In contrast to most other designs, LGA configurations
have the pins in the socket rather than on the chip; the socket's pins press against the lands
to make the electrical connections.
o Robustness and Reliability: LGA packages provide better mechanical strength and
robustness compared to PGA packages, as there are no fragile pins that can be
easily bent or damaged during handling or installation.
o Easy Replacement: LGA packages offer convenient replacement and upgrading of
ICs. They can be easily removed from the socket using specialized tools and
replaced with a new or upgraded IC without the need for soldering.
• CPU / Processor. The CPU, or processor, installs into the CPU socket described above, which
holds it securely and provides the electrical connections for data transfer and power supply.
Because each socket type accommodates only specific CPU models or families, the socket
determines which processors a motherboard can accept.
When selecting a CPU for a motherboard, it is crucial to ensure that the CPU matches the
motherboard's socket type. A CPU that does not match the socket either will not fit
physically or will not function correctly.
CPU sockets also dictate other specifications, such as the maximum supported power,
memory type, and chipset compatibility. It is essential to consult the motherboard
manufacturer's documentation or specifications to determine the supported CPU socket
type for a particular motherboard model.
Separate from the Southbridge: Historically, computer chipsets consisted of two main
components: the Northbridge and the Southbridge. The Northbridge handled memory
and high-speed peripherals, while the Southbridge managed slower I/O devices like USB,
SATA, and audio. However, modern chipsets have integrated many Southbridge functions
directly into the CPU or combined them into a single chipset component.
It's important to note that with the advent of newer processor architectures, such as
AMD's Infinity Fabric or Intel's Platform Controller Hub (PCH), the traditional Northbridge
and Southbridge distinction has become less relevant. However, the term "Northbridge"
is still used to describe the memory and high-speed component interface functions in
legacy or older computer systems.
• Screw Hole. A screw hole, also known as a threaded hole or tapped hole, is a hole with
internal threads that are designed to receive a screw, bolt, or fastener. See Figure 9.26
and Figure 9.27 below.
Figure 9.26 Screw hole in the motherboard. Figure 9.27 Screw, standoff, and paper washer.
Screw holes are essential for securely fastening components and objects together. They
provide a reliable method for creating strong connections and allow for easy disassembly
or reassembly when needed. Properly sized and threaded screw holes ensure the
effectiveness and longevity of the fastening mechanism.
• Memory Slot. A memory slot, also known as a RAM slot or DIMM slot, is a socket on a
computer motherboard that is designed to hold and provide connections for memory
modules. It allows for the installation of Random Access Memory (RAM) modules, which
provide temporary storage for data that the computer's processor can quickly access.
Memory slots come in different types and designs, depending on the motherboard and
the type of RAM being used.
Upgrading or adding RAM to a computer often involves installing new memory modules
into the available memory slots. It is important to check the motherboard specifications
and consult the user manual to determine the supported memory type, maximum
capacity, and recommended installation configurations. This ensures compatibility and
optimal performance when adding or upgrading system memory.
Figure 9.28 DIMM (Memory) Slots.
• RAM. RAM stands for Random Access Memory. It is a type of computer memory that
provides temporary storage space for data and instructions that are actively being used
by the computer's processor. See Figure 9.29a and 9.29b.
Figure 9.29a Dual In-Line Memory Module (DIMM) for desktop computers. Figure 9.29b Small
Outline Dual In-Line Memory Module (SODIMM) for laptops.
Types of RAM
▪ DDR (Double Data Rate) RAM: DDR RAM is the most common type used in
modern computers. It comes in various generations, such as DDR2, DDR3,
DDR4, and DDR5, each offering improved speed and efficiency.
▪ SRAM (Static Random Access Memory): SRAM is a faster and more expensive
type of RAM. It is often used in cache memory or specialized applications that
require high-speed access.
▪ DRAM (Dynamic Random Access Memory): DRAM is a more common type of
RAM used in computers. It is less expensive but slightly slower than SRAM.
DRAM requires constant refreshing of data to retain its contents.
o Memory Modules: RAM is typically installed on memory modules that plug into
the motherboard's memory slots. The most common form factors for RAM
modules are DIMM (Dual In-Line Memory Module) and SO-DIMM (Small Outline
Dual In-Line Memory Module), which are used in desktop and laptop computers,
respectively.
o Upgradeability: In most cases, RAM can be easily upgraded or expanded by adding
more modules or replacing existing ones. Increasing the amount of RAM in a
computer can improve overall system performance and allow for smoother
multitasking and running memory-intensive applications.
o Memory Hierarchy: RAM is part of the computer's memory hierarchy, which
includes multiple levels of cache memory and storage devices. Data is transferred
between these levels based on the proximity to the CPU and the speed of access
required.
RAM plays a critical role in the performance and responsiveness of a computer system.
By providing fast and temporary storage for actively used data, it allows the processor to
quickly access information, resulting in efficient computing operations.
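The "double data rate" in DDR RAM means two data transfers per clock cycle, so a module's peak bandwidth can be estimated from its memory clock and bus width. A rough sketch (a standard 64-bit module bus is assumed):

```python
def ddr_peak_bandwidth_mb_s(memory_clock_mhz, bus_width_bits=64):
    """Peak transfer rate in MB/s for a DDR module: two transfers per
    clock cycle * clock rate * bytes moved per transfer."""
    transfers_per_second = 2 * memory_clock_mhz * 1_000_000
    bytes_per_transfer = bus_width_bits // 8
    return transfers_per_second * bytes_per_transfer // 1_000_000

# DDR4-3200 runs its bus at 1600 MHz -> 3200 MT/s * 8 bytes = 25600 MB/s,
# which matches the module designation PC4-25600.
print(ddr_peak_bandwidth_mb_s(1600))  # 25600
```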
• Super I/O. Super I/O, short for Super Input/Output, is a type of integrated circuit (IC)
commonly found on computer motherboards. It is responsible for controlling various
input and output functions that are not directly handled by other specialized chips or
controllers. See Figure 9.30.
Here are key points about Super I/O:
o Function: Super I/O chips provide a range of I/O functions and interfaces on a
motherboard, including legacy ports, serial communication ports, parallel ports,
keyboard and mouse controllers, hardware monitoring, and floppy disk drive
support.
o Legacy Ports: Super I/O chips often include support for legacy ports such as serial
ports (COM ports) and parallel ports (LPT ports). These ports were more commonly
used in older computer systems and peripherals but have become less common in
modern systems.
o Serial Communication: Super I/O chips typically include UART (Universal
Asynchronous Receiver-Transmitter) controllers, which enable serial communication
for devices like modems, serial mice, and serial printers.
o Keyboard and Mouse Controllers: Super I/O chips may integrate keyboard and
mouse controllers, allowing for the connection and control of PS/2 or USB keyboards
and mice.
o Hardware Monitoring: Super I/O chips often include hardware monitoring
capabilities to monitor system parameters like temperature, fan speeds, and
voltages. This information can be accessed by system monitoring software or the
motherboard's BIOS.
o Floppy Disk Drive Support: Some Super I/O chips provide support for floppy disk
drives, enabling the system to read and write data to floppy disks. However, floppy
drives have become obsolete in most modern computer systems.
o Configuration and Interface: Super I/O chips are typically connected to the
motherboard's chipset through a bus, such as the Low Pin Count (LPC) bus or the
Industry Standard Architecture (ISA) bus. Configuration settings for the Super I/O
chip are often stored in the motherboard's BIOS.
o Integration and External Controllers: In modern motherboards, some of the
functions traditionally handled by Super I/O chips may be integrated into other chips,
such as the Southbridge chipset. Additionally, certain I/O functions may be offloaded
to dedicated controllers or interfaces, such as USB or SATA controllers.
Super I/O chips provide essential support for legacy I/O functions and interfaces on computer
motherboards. While their significance has decreased with the advancement of technology
and the phasing out of legacy ports, they still play a role in providing compatibility for older
peripherals and supporting certain I/O functionalities.
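On Linux systems, the hardware-monitoring readings that Super I/O chips expose (temperatures, fan speeds, voltages) are typically surfaced through the kernel's hwmon sysfs interface. The sketch below lists whatever temperature sensors happen to be visible; the chip names and sensor counts depend on the installed drivers, and an empty result simply means no hwmon devices were found:

```python
import glob
import os

def read_hwmon_temps():
    """Return (chip_name, temperature_C) tuples from /sys/class/hwmon,
    the standard Linux hardware-monitoring sysfs interface.
    Values are reported by the kernel in millidegrees Celsius."""
    readings = []
    for temp_file in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        chip_dir = os.path.dirname(temp_file)
        try:
            with open(os.path.join(chip_dir, "name")) as f:
                chip = f.read().strip()
            with open(temp_file) as f:
                millidegrees = int(f.read().strip())
            readings.append((chip, millidegrees / 1000.0))
        except (OSError, ValueError):
            continue  # sensor vanished or was unreadable; skip it
    return readings

if __name__ == "__main__":
    for chip, temp_c in read_hwmon_temps():
        print(f"{chip}: {temp_c:.1f} C")
```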
• Floppy Connection. The floppy connection refers to the interface used to connect a floppy
disk drive to a computer system. Floppy disk drives were once commonly used for data
storage and transfer, but they have become obsolete in modern computing. See Figure
9.31 and Figure 9.32.
Figure 9.31 Motherboard with IDE and floppy connectors. Figure 9.32 Floppy cable.
It's important to note that the floppy connection has become less prevalent and is rarely
found on modern motherboards or computer systems. Floppy disk drives have been largely
replaced by more advanced and higher-capacity storage devices, such as hard drives, solid-
state drives (SSDs), and USB flash drives.
• ATA/IDE. ATA/IDE is an older interface standard that was commonly used for connecting storage
devices, particularly hard disk drives, before the introduction of newer interfaces like
SATA (Serial ATA). The primary connection on an ATA/IDE interface is typically used for
the main hard drive in the system.
The primary connection consists of a 40-pin ribbon cable that connects the ATA/IDE hard
drive to the motherboard's ATA/IDE connector. The cable has three connectors: one for
the motherboard, one for the primary drive, and one for the secondary drive (if present).
The connectors are designed to be inserted in a specific orientation to ensure proper
communication between the drive and the motherboard.
The primary ATA/IDE connection supports data transfer rates up to 133 MB/s, depending
on the specific ATA/IDE standard supported by the hardware. It also provides power to
the connected hard drive through a separate power connector.
However, it's important to note that ATA/IDE interfaces have largely been replaced by
SATA interfaces, which offer higher data transfer rates, better performance, and more
compact cable connections. Most modern computer systems no longer include ATA/IDE
interfaces, and SATA interfaces are now the standard for connecting hard drives and other
storage devices.
If you have an older computer system that still utilizes ATA/IDE connections, it is crucial
to ensure compatibility with ATA/IDE hard drives and follow proper cable orientation and
configuration guidelines to establish a reliable connection between the hard drive and
the motherboard.
• PATA, short for Parallel ATA, is a legacy interface standard used for connecting storage
devices, including hard disk drives and optical drives, to a computer system. It is also
known as IDE (Integrated Drive Electronics) or ATA (Advanced Technology Attachment).
PATA was widely used before the introduction of SATA (Serial ATA) as the primary
interface for storage devices. See Figure 9.33.
PATA utilizes a parallel data transmission method, where multiple data bits are
transmitted simultaneously over multiple data lines. It uses a 40-pin ribbon cable to
connect the PATA devices to the motherboard's PATA connector. The ribbon cable has
three connectors: one for the motherboard, one for the primary drive, and one for the
secondary drive (if present).
PATA supports data transfer rates up to 133 MB/s, depending on the specific PATA
standard used. The interface also provides power to the connected devices through a
separate power connector.
PATA devices, such as hard drives and optical drives, have jumper settings to configure
their operation as the primary (master) or secondary (slave) device on the PATA interface.
Each device connected to the PATA interface should be set to a unique master or slave
setting to avoid conflicts.
It's important to note that PATA interfaces have largely been replaced by SATA interfaces
in modern computer systems. SATA offers several advantages over PATA, including higher
data transfer rates, better cable management, and improved scalability. As a result, PATA
interfaces are rarely found on newer motherboards, and PATA devices are becoming less
common.
However, some older systems or specialized devices may still utilize PATA interfaces. In
such cases, it is necessary to use PATA-compatible devices and ensure proper cable
connections and jumper settings for the devices on the PATA interface.
Overall, PATA was an important interface standard in the history of computer storage,
but it has been largely superseded by SATA due to its limitations in speed and cable
management.
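The speed gap between the two interfaces is easy to quantify: at PATA's 133 MB/s ceiling versus SATA III's 600 MB/s, moving the same data takes several times longer. A quick arithmetic sketch (ideal interface rates, ignoring protocol overhead and the drive's own speed limits):

```python
def transfer_seconds(size_mb, rate_mb_per_s):
    """Ideal (overhead-free) time to move size_mb at a given interface rate."""
    return size_mb / rate_mb_per_s

file_mb = 4000  # a 4 GB file
print(round(transfer_seconds(file_mb, 133), 1))  # 30.1 seconds over PATA/133
print(round(transfer_seconds(file_mb, 600), 1))  # 6.7 seconds over SATA III
```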
• 24-pin ATX power supply connector. The 24-pin ATX power supply connector is a primary
power connection used in modern computer systems to provide power to the
motherboard. It is also known as the ATX power connector or ATX main power connector.
See Figure 9.34.
The 24-pin ATX connector consists of a rectangular plastic connector with 24 pins
arranged in two rows. It is designed to mate with a corresponding 24-pin female
connector on the motherboard. The connector provides both power and signaling
connections between the power supply and the motherboard.
The 24-pin ATX power supply connector carries various voltages and signals required for
the proper functioning of the motherboard and its components. These include +3.3V, +5V,
+12V, -12V, and ground connections. The additional pins in the 24-pin connector
compared to the older 20-pin ATX connector provide additional power capacity and
support for newer hardware requirements.
To connect the 24-pin ATX power supply connector, align the pins of the connector with
the corresponding holes on the motherboard's ATX power connector. The connector is
designed in such a way that it can only be inserted in the correct orientation. Once
properly aligned, gently press the connector into the motherboard until it is fully seated
and securely connected.
The 24-pin ATX power supply connector is essential for providing stable power to the
motherboard, ensuring the proper operation of the computer system. It is designed to be
backward compatible, meaning that if a motherboard has a 20-pin power connector, a
24-pin power supply connector can still be used, but with the extra four pins left unused.
It's worth noting that some high-end motherboards and power supplies may feature
additional auxiliary power connectors, such as 4-pin or 8-pin CPU power connectors, to
provide extra power specifically to the CPU. These connectors are separate from the 24-
pin ATX connector and serve to meet the power requirements of high-performance CPUs.
The 24-pin ATX power supply connector is a vital component in modern computer
systems, delivering power from the power supply unit to the motherboard and ensuring
proper functionality of the entire system.
A power supply with a 24-pin connector (Figure 9.35) can be used on a motherboard with
a 20-pin connector by leaving the four additional pins disconnected. However, if your
motherboard has a 24-pin connector, all 24 pins need to be connected. If your power supply
does not have a 24-pin connector, you need to purchase a new power supply.
Warning: When using a connector like the one shown above, note the arrows on the two
halves. For the cable to be inserted correctly, the arrows must point toward each other.
• Serial ATA connections. Serial ATA (SATA) connections refer to the interface used to
connect storage devices, such as hard drives, solid-state drives (SSDs), or optical drives,
to a computer system. SATA is the most common interface used in modern computers for
data transfer and storage. See Figure 9.36.
o Compatibility: SATA connections are backward compatible, meaning that newer
SATA devices can be connected to older SATA interfaces, and vice versa. However,
the maximum data transfer speed will be limited to the capabilities of the slower
component.
o SATA Cables: SATA cables are relatively thin and flexible compared to older IDE
(Integrated Drive Electronics) cables. This flexibility makes them easier to route and
manage within a computer system, improving airflow and cable management.
o SATA Controllers: SATA connections are usually integrated into the motherboard's
chipset, providing native support for SATA devices. However, if additional SATA ports
are needed, expansion cards with SATA controllers can be installed.
o Multiple Drives: SATA connections allow for connecting multiple drives to a single
system. Motherboards often feature multiple SATA ports, enabling the installation
of multiple hard drives or SSDs for increased storage capacity.
SATA connections have become the standard for storage devices in modern computers
due to their high data transfer speeds, ease of use, and compatibility. They have largely
replaced older interfaces like IDE or SCSI for most consumer and enterprise storage
needs.
• Coin Cell Battery (CMOS backup battery). A coin cell battery, also known as a CMOS
backup battery, is a small, round, flat battery commonly used to provide power to the
CMOS (Complementary Metal-Oxide-Semiconductor) memory in a computer. See Figure
9.37.
o Battery Type: Coin cell batteries are generally lithium-based, providing a long shelf
life and stable voltage output. Lithium batteries are commonly used because they
have low self-discharge rates and can operate effectively in a wide range of
temperatures.
o Installation and Location: Coin cell batteries are typically mounted on the
motherboard of a computer or integrated into a CMOS backup battery holder.
They are easily replaceable by removing the old battery and inserting a new one
into the designated holder or socket.
o Lifespan: The lifespan of a coin cell battery varies depending on factors such as
the battery brand, usage patterns, and the power requirements of the CMOS
memory. Generally, coin cell batteries can last anywhere from several months to
several years before needing replacement.
o Voltage and Capacity: The voltage output of coin cell batteries is usually around 3
volts, which is suitable for powering the CMOS memory and associated circuitry.
The capacity of the battery, measured in milliampere-hours (mAh), determines
how long it can sustain power to the CMOS memory.
o Low Power Consumption: Coin cell batteries are designed to provide a small
amount of power to maintain the CMOS memory and do not support heavy loads
or power-intensive components. Their primary function is to retain the settings in
the memory rather than provide power for the entire system.
o Battery Warning: When the coin cell battery approaches the end of its lifespan,
the computer may display a warning message during the boot process indicating
a low or dead CMOS battery. This prompts the user to replace the battery to
ensure proper functioning of the system.
Replacing a coin cell battery is a simple procedure and requires minimal technical
expertise. It is important to use the correct battery type and observe proper safety
precautions, such as handling the battery with care and disposing of used batteries
according to local regulations.
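A rough service-life estimate follows directly from the capacity and load figures above: hours of retention are approximately the capacity in mAh divided by the standby draw in mA. The example numbers below are illustrative, not taken from any particular battery datasheet:

```python
def battery_life_years(capacity_mah, standby_current_ua):
    """Estimated retention time for a CMOS backup battery.
    capacity_mah: rated capacity in milliampere-hours.
    standby_current_ua: CMOS/RTC standby draw in microamperes."""
    hours = capacity_mah / (standby_current_ua / 1000.0)  # uA -> mA
    return hours / (24 * 365)

# Example: a 225 mAh CR2032 cell feeding a 5 uA real-time clock circuit.
print(round(battery_life_years(225, 5), 1))  # 5.1 years
```

Real lifespans are shorter than this ideal figure because of self-discharge and temperature effects, which is consistent with the multi-year range cited above.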
• RAID. RAID (Redundant Array of Independent Disks) is a data storage technology that
combines multiple physical disk drives into a single logical unit for improved performance,
data redundancy, or both. RAID is commonly used in servers, network-attached storage
(NAS) devices, and other systems that require high data availability and reliability.
Here are key points about RAID:
o Purpose: RAID aims to enhance data storage performance, reliability, and capacity by
combining multiple physical disks into a logical array. The array appears as a single
storage volume to the operating system and applications, offering benefits like
increased data transfer speeds, fault tolerance, and data redundancy.
o Levels of RAID: RAID offers several levels or configurations, each with its own
characteristics and trade-offs.
The most commonly used RAID levels are:
a. RAID 0 (Striping): RAID 0 stripes data across multiple disks, improving read and
write performance. However, it does not provide redundancy, meaning that the
failure of a single disk can result in data loss.
b. RAID 1 (Mirroring): RAID 1 mirrors data across two or more disks, creating an
exact copy of the data. It offers high data redundancy and fault tolerance, as data
remains accessible even if one disk fails. However, it has reduced storage capacity
as half of the total disk space is used for mirroring.
c. RAID 5 (Striping with Parity): RAID 5 stripes data across multiple disks and also
includes parity information to provide fault tolerance. It offers a good balance
between performance, storage capacity, and data redundancy. In the event of a
single disk failure, data can be reconstructed using the parity information.
d. RAID 6 (Striping with Dual Parity): RAID 6 is similar to RAID 5 but includes an
additional layer of redundancy with dual parity. This provides increased fault
tolerance, allowing for the simultaneous failure of two disks without data loss.
e. RAID 10 (Combination of RAID 1 and RAID 0): RAID 10 combines mirroring (RAID
1) and striping (RAID 0) to provide both performance and redundancy benefits. It
requires a minimum of four disks and offers high fault tolerance.
RAID technology offers various benefits depending on the chosen RAID level, such as
increased data performance, fault tolerance, data redundancy, and improved data
availability. The selection of the appropriate RAID level depends on specific requirements,
including desired performance, data protection, and storage capacity.
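The capacity trade-offs among the RAID levels above can be sketched with a small calculation. This is an illustrative sketch, assuming n identical disks and ignoring metadata overhead; real arrays reserve some additional space.

```python
# Usable capacity for common RAID levels, assuming n identical disks of
# disk_gb gigabytes each. These are the standard formulas; actual arrays
# lose a little extra space to controller metadata.

def raid_usable_gb(level, n, disk_gb):
    if level == 0:                 # striping: all space, no redundancy
        return n * disk_gb
    if level == 1:                 # mirroring: one disk's worth of space
        return disk_gb
    if level == 5:                 # one disk's worth used for parity
        return (n - 1) * disk_gb
    if level == 6:                 # two disks' worth used for dual parity
        return (n - 2) * disk_gb
    if level == 10:                # mirrored pairs, then striped
        return (n // 2) * disk_gb
    raise ValueError("unsupported RAID level")

for level in (0, 1, 5, 6, 10):
    print(f"RAID {level}: {raid_usable_gb(level, 4, 2000)} GB usable of 8000 GB raw")
```

Note how RAID 0 keeps all 8000 GB but tolerates no failures, while RAID 6 and RAID 10 each give up half the raw space in exchange for surviving disk failures.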
Figure 9.38 illustrates a RAID combination for highly utilized database servers or any server that performs many write operations.
Figure 9.29 RAID combination of Web Hosting firms.
• System Panel Connectors. System panel connectors, also known as front panel
connectors or header connectors, are a set of pins located on the motherboard of a
computer system. These connectors provide a means of connecting the buttons, LEDs,
and other front panel devices of the computer case to the motherboard, allowing for user
interaction and providing visual indicators. System panel connectors typically include a
set of pins with labels for specific functions. See Figure 9.39. The labels can vary
depending on the motherboard manufacturer, but common labels include:
o Power Switch: This connector allows the power button on the computer case to
turn the system on or off.
o Reset Switch: The reset switch connector enables the reset button on the case to
restart the system.
o HDD LED: This connector is for the hard drive activity LED, which indicates when the
hard drive is being accessed or in use.
o Power LED: The power LED connector is used for the power indicator LED, which
shows that the system is powered on.
o Speaker: Some motherboards include a speaker connector for attaching a system
speaker, which provides audible beep codes during system startup for diagnostic
purposes.
o USB Headers: Some front panel connectors also include USB headers, allowing for
the connection of USB ports on the front of the computer case.
To connect the front panel devices to the system panel connectors, the corresponding
wires or cables from the computer case must be attached to the appropriate pins on the
motherboard. The connectors and pins are usually labeled or color-coded for easy
identification. It's important to refer to the motherboard manual or documentation to
ensure proper pin placement and avoid incorrect connections, which could lead to
malfunctioning or non-functional front panel devices.
The exact layout and number of system panel connectors can vary depending on the
motherboard model and manufacturer. Some motherboards may have separate
connectors for each function, while others may combine multiple functions into a single
connector. It's important to consult the motherboard documentation to understand the
specific pin layout and functions for your particular motherboard.
Properly connecting the front panel devices to the system panel connectors allows for
convenient control of the system's power, reset functionality, and provides visual
indicators for power status and hard drive activity.
• FWH. FWH stands for Firmware Hub, which is an integrated circuit (IC) component used
in computer systems to store and provide firmware data to the motherboard or other
system components.
The FWH, also known as a BIOS (Basic Input/Output System) or firmware chip, contains
the system's firmware, which includes the BIOS code and configuration settings. The
firmware is responsible for initializing and configuring various hardware components
during the system's startup process.
The FWH is typically located on the motherboard and is connected to the system's chipset
or other relevant components. It communicates with the system's processor and other
devices to provide the necessary firmware information and instructions.
The FWH is typically a non-volatile memory chip, meaning it retains its data even when
power is removed from the system. This allows the firmware to be stored and accessed
each time the system is powered on or reset.
In modern computer systems, the FWH has been largely replaced by newer technologies
such as UEFI (Unified Extensible Firmware Interface) or SPI (Serial Peripheral Interface)
flash memory. These newer technologies offer enhanced functionality and performance
compared to traditional FWH chips.
However, it's important to note that FWH may still be found in older computer systems
or legacy hardware. These systems rely on the FWH chip to provide the necessary
firmware instructions for proper system operation.
Overall, the FWH (Firmware Hub) is an integral component in computer systems, serving
as a storage medium for the system's firmware and playing a crucial role in the
initialization and configuration of the hardware during the system's startup process.
Figure 9.40 shows an example of an FWH chip in a Plastic Lead Chip Carrier (PLCC).
o Power Management: The Southbridge also includes power management features to
regulate power usage and control system standby, sleep, and other power-related
functions.
The Southbridge chipset plays a vital role in providing I/O support and control for
peripheral devices in a computer system. It allows for seamless connectivity and
communication between the CPU, memory, storage devices, network devices, and other
peripherals, contributing to the overall functionality and performance of the system.
• Serial port connector. A serial port connector, also known as a serial connector or RS-232
connector, is a type of interface used to connect devices for serial communication. It is
commonly found on older computer systems, industrial equipment, and some specialized
devices. The serial port connector allows for the transmission of data one bit at a time over a single wire. See Figure 9.41 and Figure 9.42.
The most common type of serial port connector is the DE-9 (9-pin) connector, also
referred to as the DB-9 connector. It consists of a male or female connector with nine pins
arranged in two rows. Each pin has a specific function, including data transmission, data
reception, ground, and control signals.
Serial port connectors are often used for various applications, such as connecting
modems, printers, barcode scanners, serial mice, and other peripherals to a computer
system. They provide a simple and reliable method of data transfer between devices,
especially for devices that require a low-speed or asynchronous serial communication
protocol.
To use a serial port connector, the appropriate cable with matching connectors at both
ends is required. The cable connects the serial port connector on the computer or device
to the serial port connector on the peripheral or device being connected.
It's important to note that serial port connectors have become less common in modern
computer systems, as they have been largely replaced by USB (Universal Serial Bus) and
other high-speed interfaces. However, serial ports may still be available on certain devices
or legacy systems, and USB-to-serial adapters can be used to convert USB ports into serial
ports.
When working with serial port connectors, it's essential to ensure the proper
configuration of data settings, such as baud rate, parity, stop bits, and flow control, to
ensure successful communication between devices. These settings need to be matched
on both the sending and receiving devices to establish a reliable serial connection.
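As a rough illustration of the matching rule above, the sketch below compares the line settings of hypothetical devices. The names and values are illustrative placeholders, not a real serial API.

```python
# A serial link only works when both ends agree on the line settings
# described above: baud rate, parity, stop bits, and flow control.

REQUIRED = ("baud_rate", "parity", "stop_bits", "flow_control")

def settings_match(sender, receiver):
    """Return True if both ends use identical values for every setting."""
    return all(sender[k] == receiver[k] for k in REQUIRED)

# Hypothetical devices for illustration only.
pc      = {"baud_rate": 9600,   "parity": "N", "stop_bits": 1, "flow_control": None}
scanner = {"baud_rate": 9600,   "parity": "N", "stop_bits": 1, "flow_control": None}
modem   = {"baud_rate": 115200, "parity": "E", "stop_bits": 1, "flow_control": "RTS/CTS"}

print(settings_match(pc, scanner))  # matching settings -> reliable link
print(settings_match(pc, modem))    # mismatched settings -> garbled or no data
```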
The serial port connectors provide a straightforward method of serial communication and
have been widely used in the past for connecting various peripherals and devices to
computer systems. While less common in modern systems, they still serve an important
role in certain applications and legacy hardware.
• USB & 1394 Headers. USB (Universal Serial Bus) headers are internal connectors on a
computer motherboard used to connect USB devices directly to the motherboard.
1394 Headers
1394 headers, also known as FireWire or IEEE 1394 headers, are internal connectors on a
computer motherboard used to connect FireWire devices directly to the motherboard.
FireWire enables high-speed data transfer and is commonly used in professional
audio/video applications.
o Pin Configuration: 1394 headers typically consist of six pins arranged in two rows. The
pin configuration may vary depending on the specific motherboard or FireWire
version.
o FireWire Versions: There are multiple versions of the FireWire standard, including
FireWire 400 (IEEE 1394a) and FireWire 800 (IEEE 1394b). The pin configuration of the
1394 header may correspond to either version, depending on the motherboard
specifications.
o Connection: FireWire cables or expansion brackets with FireWire ports can be
connected to the 1394 headers on the motherboard using compatible connectors.
The connectors typically have a plastic guide to ensure proper alignment during
connection.
It's worth noting that USB has become the more widely used and supported interface for
connecting peripheral devices, while FireWire has seen reduced adoption in recent years.
However, some specialized equipment and legacy devices may still rely on FireWire
connections.
As can be seen in Figure 9.43 and Figure 9.44, both the 1394 and USB headers have nine pins and closely resemble each other. Every motherboard is different, however; the 1394 or USB header on your motherboard may have only four or five pins.
Figure 9.44 4-conductor (left) and 6-conductor (right) FireWire 400 alpha connectors.
Caution: Plugging a 1394 header cable into the USB header connection or the USB header
cable into a 1394 connection will damage a motherboard. Always consult your
motherboard manufacturer manual before connecting anything to the 1394 or USB
header.
When working with jumpers, it is crucial to consult the device's documentation, such as
the motherboard manual or product specifications, to understand the proper jumper
configuration. The documentation will provide information on the specific jumper
settings and their corresponding functions.
It's important to handle jumpers with care and ensure they are properly aligned and
securely connected. Incorrect jumper settings or loose connections can lead to system
instability, compatibility issues, or malfunctioning hardware.
In modern computer systems, jumpers have become less common as many hardware
settings and configurations can now be modified through software or firmware
interfaces. However, they are still found in certain devices and motherboards, particularly
in specialized or legacy systems.
Jumpers provide a simple and effective means of configuring and customizing the
behavior of electronic devices, allowing for hardware customization and adaptation to
specific requirements.
Types of Integrated Circuits: There are various types of integrated circuits, including:
o Digital Integrated Circuits: These circuits process and store digital signals,
operating with binary states (0s and 1s). They are used in applications such as
microprocessors, memory chips, and logic gates.
o Analog Integrated Circuits: Analog circuits process continuous electrical signals,
allowing for functions like amplification, filtering, and signal conditioning. They
are used in applications such as audio amplifiers, power management, and
sensor interfaces.
o Mixed-Signal Integrated Circuits: These circuits combine both analog and digital
functions, allowing for the processing of both continuous and discrete signals.
They are commonly used in applications like data converters, audio/video
processing, and communication systems.
• SPDIF. Also written as S/PDIF, SPDIF stands for Sony/Philips Digital Interface, a digital audio interface used to transmit high-quality audio signals between devices.
SPDIF can transmit digital audio signals in either a coaxial or optical format. The coaxial
version uses a single RCA connector, while the optical version uses a TOSLINK connector
(a fiber-optic cable with a square-shaped plug). Both formats support the same digital
audio data, but they differ in the method of transmission.
SPDIF is widely used in home theater systems, soundbars, audio interfaces, CD/DVD
players, gaming consoles, and other audio devices. It allows for the transfer of high-
fidelity audio streams without any loss of quality associated with analog connections.
o Digital Audio Transmission: SPDIF is designed to transmit digital audio signals,
allowing for the transfer of uncompressed or compressed audio data in a digital
format.
o Wide Compatibility: SPDIF is a widely adopted standard and is supported by a broad
range of audio devices and equipment. This ensures compatibility and seamless
integration between different audio components.
o High-Quality Audio: SPDIF supports high-quality audio formats, including stereo PCM
(Pulse-Code Modulation) and compressed formats such as Dolby Digital and DTS
(Digital Theater Systems), allowing for the transmission of surround sound audio.
o Simplicity and Ease of Use: Connecting devices with SPDIF is relatively
straightforward. You need an appropriate cable (coaxial or optical) to transmit the
audio signal between the SPDIF interfaces of the source and the receiver devices.
o Long Transmission Distance: SPDIF supports relatively long cable runs without signal
degradation. Coaxial SPDIF can transmit audio signals over tens of meters, while
optical SPDIF can reach even longer distances due to the nature of fiber-optic
transmission.
o Consumer and Professional Versions: There are two versions of SPDIF: consumer and
professional. The consumer version supports stereo and compressed surround sound
formats, while the professional version (known as AES/EBU) is used in the audio
industry for transmitting high-quality, uncompressed audio signals.
SPDIF is a widely used and reliable method for transmitting digital audio signals between
devices. It allows for high-fidelity audio reproduction and is a convenient solution for
connecting audio components that support digital audio interfaces.
• CD-IN. CD-IN, also known as CD Audio In, refers to a connection or input on a computer's sound card that allows the direct input of audio signals from an audio CD. Figure 9.48 shows a black four-pin connector and an example of what this connector looks like on a computer motherboard. Here are key points about CD-IN:
o Purpose: CD-IN was primarily used in earlier sound cards to provide a direct input method for audio signals from a CD player. It allowed users to connect the audio output of a CD player to the sound card, enabling the computer to play audio CDs without requiring additional software or decoding.
o Connection: CD-IN typically uses a 4-pin or 2-pin connector on the sound card. The
connector is designed to match the corresponding output connector on the CD
player. It may be labeled as "CD-IN" or "Aux In."
o Signal Format: CD-IN receives an analog audio signal from the CD player. Although audio CDs store sound digitally, the CD player's own digital-to-analog converter outputs an analog signal over this cable. If the computer needs to process or record the audio digitally, the sound card's analog-to-digital converter (ADC) digitizes the incoming signal for the computer's audio software.
o Usage: To use CD-IN, the audio output from the CD player is connected to the CD-
IN connector on the sound card using an appropriate cable. Once connected, the
sound card's audio settings may need to be configured to select the CD-IN as the
audio input source. The computer's audio software can then play the audio signals
from the connected CD player through the computer's speakers or headphones.
o Decline in Usage: With advancements in technology, the use of CD-IN has become
less common. The widespread adoption of digital audio formats and the
availability of software-based CD audio playback have made CD-IN connections
less necessary. Additionally, many modern sound cards no longer include CD-IN
connectors as digital audio interfaces, such as S/PDIF or USB, have become more
prevalent.
It's important to note that the availability and usage of CD-IN can vary depending on
the specific sound card or audio hardware. If you have a sound card with CD-IN
functionality, you may consult the product documentation or the sound card
manufacturer's website for specific instructions on how to use and configure the CD-
IN feature.
• Hard disk drive. A hard disk drive (HDD) is a non-volatile storage device used for storing
and retrieving digital data in computers and other electronic devices. It consists of one
or more rotating disks, called platters, coated with a magnetic material that allows data
to be written and read using a read/write head. Figures 9.48 and 9.49 present a hard disk drive (HDD) for desktop computers, while Figure 9.50 shows an HDD for laptop computers.
Figure 9.48 The desktop hard drive (external). Figure 9.49 The hard drive (internal).
Figure 9.50 The laptop Hard drive (external).
a. Platters: Circular disks coated with a magnetic material. Data is stored on these platters in concentric tracks.
b. Read/Write Heads: Positioned above and below the platters, the read/write heads magnetically read and write data to and from the platters.
c. Actuator: Moves the read/write heads across the platters to access different
areas of data.
d. Spindle: Rotates the platters at a high speed, typically measured in revolutions
per minute (RPM).
o Data Access and Transfer: The read/write heads move rapidly across the spinning
platters to access and transfer data. Data is organized into sectors, and the read/write
heads align with specific sectors to read or write data. The speed at which data is
accessed and transferred is influenced by factors such as rotational speed, data
density, and seek time.
o File System: Hard disk drives are typically formatted with a file system that organizes
and manages data on the drive. Common file systems include NTFS (Windows), HFS+
(Mac), and ext4 (Linux).
o Interface: Hard disk drives connect to a computer or other device through an
interface, such as SATA (Serial ATA) or PATA (Parallel ATA). These interfaces enable
data transfer between the hard disk drive and the device's motherboard.
o Applications: Hard disk drives are widely used in desktop and laptop computers,
servers, external storage devices, and other electronic devices requiring high-capacity
storage. They are suitable for storing operating systems, software applications,
documents, multimedia files, and more.
o Performance Factors: Factors that impact the performance of a hard disk drive
include rotational speed (higher RPMs result in faster data access), cache size
(temporary data storage for faster retrieval), and data transfer rates (measured in
megabytes or gigabytes per second).
o Reliability: Hard disk drives are susceptible to mechanical failures, such as head
crashes or motor failures, which can result in data loss. Regular backups and proper
handling are important to mitigate the risk of data loss.
o Solid-State Drives (SSDs): Solid-state drives, which use flash memory instead of
rotating platters, are an alternative to traditional hard disk drives. SSDs offer faster
data access speeds, lower power consumption, and greater durability but typically
have a higher cost per gigabyte compared to HDDs.
Hard disk drives have been a primary storage solution for decades, providing high-
capacity storage for a wide range of applications. While solid-state drives have gained
popularity due to their faster performance, HDDs continue to be widely used for cost-
effective, high-capacity storage needs.
• Disk Capacity. Disk capacity refers to the amount of data that can be stored on a disk or
storage device, such as a hard disk drive (HDD), solid-state drive (SSD), or optical disc. It
is a measure of the total space available for storing files, documents, programs, and other
digital data.
Disk capacity is typically measured in bytes (B) and larger units such as kilobytes (KB), megabytes (MB), gigabytes (GB), terabytes (TB), or even petabytes (PB) for larger storage systems. Each unit represents an increasing order of magnitude: a factor of 1,000 in the decimal convention used by drive manufacturers, or 1,024 in the binary convention often used by operating systems.
The specific capacity of a disk depends on the physical characteristics and technology
used in the storage device. For example:
o Hard Disk Drives (HDD): HDDs use magnetic platters to store data and are available
in various capacities. Common HDD capacities range from a few hundred gigabytes
(GB) to several terabytes (TB) in consumer-grade drives, while enterprise-grade HDDs
can reach even higher capacities.
o Solid-State Drives (SSD): SSDs use flash memory technology to store data and offer
faster data access speeds compared to HDDs. SSD capacities have been steadily
increasing over time, with consumer SSDs now available in capacities ranging from
128GB to several terabytes (TB).
o Optical Discs: Optical discs, such as CDs, DVDs, and Blu-ray discs, have limited
capacities compared to HDDs and SSDs. CDs typically hold around 700MB to 800MB
of data, DVDs can store 4.7GB or 8.5GB depending on the type, and Blu-ray discs have
capacities of 25GB, 50GB, or even 100GB for dual-layer discs.
It's important to note that the actual usable capacity of a disk may be slightly lower than
the advertised capacity due to formatting and file system overhead. Additionally, some
storage devices reserve a portion of the capacity for features like wear leveling in SSDs or
error correction in HDDs.
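Part of the gap between advertised and reported capacity comes from the unit convention: drive makers advertise decimal gigabytes (10^9 bytes), while many operating systems report binary gibibytes (2^30 bytes). A minimal sketch:

```python
# Convert an advertised decimal-gigabyte capacity into binary gibibytes,
# the unit many operating systems label "GB" when reporting drive size.

def advertised_gb_to_binary_gib(gb):
    return gb * 10**9 / 2**30

print(round(advertised_gb_to_binary_gib(500), 2))   # a "500 GB" drive ~ 465.66 GiB
print(round(advertised_gb_to_binary_gib(1000), 2))  # a "1 TB" drive ~ 931.32 GiB
```

This is why a drive sold as 500 GB typically appears as roughly 465 "GB" in the operating system, before any formatting overhead is subtracted.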
The disk capacity required for an individual or organization depends on their specific
needs and usage patterns. Factors to consider include the type of data being stored, the
number and size of files, and the expected growth of data over time.
• Partition capacity. Partition capacity refers to the amount of storage space allocated to a
specific partition on a hard disk drive or other storage device. When you partition a
storage device, you divide it into separate sections or partitions, each with its own
designated capacity.
Partition capacity plays a crucial role in managing and organizing data on a storage device.
By allocating the appropriate amount of capacity to each partition, you can optimize data
storage, facilitate data management, and ensure efficient utilization of your storage
resources.
For example, a 200 GB hard drive partitioned into two drives of 100 GB (C: and D: drive)
would report that the D: drive has a capacity of 100 GB even though it is part of a 200 GB
hard drive.
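The 200 GB example above can be sketched as follows; the drive letters and sizes are simply the figures from the example.

```python
# A single 200 GB physical drive divided into two logical partitions.
# Each partition reports only its own capacity to the operating system.

disk_gb = 200
partitions = {"C:": 100, "D:": 100}

# The allocations must fit within the physical disk.
assert sum(partitions.values()) <= disk_gb

print(partitions["D:"])  # D: reports 100 GB even though the disk is 200 GB
```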
• Power Supply Unit. A Power Supply Unit (PSU) is a hardware component in a computer
system that provides electrical power to the various components of the computer. It
converts the incoming AC (alternating current) power from the electrical outlet into the
DC (direct current) power required by the computer's internal components. See Figure
9.51.
The PSU serves as the main power source for the entire computer system, supplying
power to components such as the motherboard, processor (CPU), memory, storage
drives, graphics card, and other peripherals. It ensures that the components receive a
stable and consistent supply of power to operate effectively.
o Wattage and Power Output: The wattage rating of a PSU indicates the maximum
power it can deliver. It is crucial to choose a PSU with adequate wattage to meet the
power requirements of the components in the computer system. Insufficient power
can result in system instability or failure, while excessive power may lead to
unnecessary energy consumption.
o Efficiency Rating: PSU efficiency refers to the percentage of input power that is
converted into usable DC power for the components. Higher efficiency ratings indicate
a PSU that wastes less energy as heat. Common efficiency certifications include 80
Plus Bronze, Silver, Gold, Platinum, and Titanium.
o Connectors and Cables: The PSU provides various power connectors and cables to
connect to different components in the computer system. These include the 24-pin
ATX power connector for the motherboard, CPU power connectors, SATA power
connectors for drives, PCIe power connectors for graphics cards, and peripheral
connectors for other devices.
o Cooling and Fan: PSUs generate heat during operation, and many models incorporate
fans or other cooling mechanisms to dissipate heat and maintain optimal operating
temperatures. The fan helps to circulate air and prevent overheating.
o Modular vs. Non-modular: PSUs can be modular or non-modular. Non-modular PSUs
have fixed cables, while modular PSUs allow for the customization of cable
connections. Modular PSUs offer improved cable management by reducing cable
clutter inside the computer case.
Proper installation and connection of the PSU is also important, following the guidelines
and instructions provided by the manufacturer and ensuring proper grounding and
electrical safety precautions.
The Power Supply Unit (PSU) is a critical component in a computer system, supplying the
necessary electrical power to all components. It is responsible for converting and
delivering stable DC power, and selecting a suitable PSU with the right wattage and
features is important for the overall performance and reliability of the computer system.
o Voltage Regulation: The PSU regulates the output voltage to provide consistent and stable
power to the computer components. It maintains the voltages within specified tolerance
limits to prevent damage or instability in the system.
o Power Distribution: The PSU distributes the converted DC power to the various
components in the computer system. It provides separate power rails, such as +12V,
+5V, and +3.3V, to different components, including the motherboard, CPU, memory,
storage drives, and peripherals.
o Overvoltage and Overcurrent Protection: The PSU incorporates protection
mechanisms to safeguard the system components from voltage spikes or excessive
current. It monitors the power output and shuts down or reduces the power in case
of overvoltage or overcurrent conditions, preventing damage to the components.
o Cooling and Fan Control: The PSU includes a cooling system, typically with a fan, to
dissipate heat generated during operation. It monitors the internal temperature and
adjusts the fan speed accordingly to maintain optimal operating temperatures.
o Power Good Signal: The PSU provides a "Power Good" signal to the motherboard to
indicate that the power supply is functioning correctly and stable. This signal ensures
that the motherboard and other components receive a clean and reliable power
supply before initiating the system startup.
o Standby Power: The PSU provides standby power even when the computer is turned
off or in a low-power state, enabling functions such as Wake-on-LAN or standby power
for USB charging.
o Connectors and Cables: The PSU includes various connectors and cables to provide
power connections to the motherboard, CPU, graphics card, storage drives, and other
peripherals. These connectors ensure proper power delivery to the respective
components.
It's important to note that different PSUs may have varying features, efficiency ratings,
and signal specifications. The specific functions and signals can also depend on the PSU
model, wattage, and design. It's recommended to refer to the PSU manufacturer's
documentation for detailed information on the specific functions and signals of a
particular PSU model.
The PSU performs critical functions to convert, regulate, and distribute power to the
components in a computer system. It ensures reliable and stable power supply, protects
against power abnormalities, and supports the proper functioning and longevity of the
system.
o Wattage (Total Power): The wattage rating of a PSU represents the total power it can
deliver to the components in the system. It is typically indicated as a maximum value,
such as 500W, 750W, 1000W, etc.
The wattage rating determines the PSU's capacity to handle the power demands of the
system.
o Voltage Rails: PSUs provide different voltage levels to power different components in
the system. The main voltage rails include:
o +3.3V: This rail provides power to components such as memory modules and some
peripheral devices.
o +5V: This rail powers components like the motherboard, drives, and USB ports.
o +12V: The +12V rail is crucial for powering components like the CPU and graphics card.
Modern systems place a significant emphasis on the +12V rail, as it supplies power to
power-hungry components.
The wattage of the PSU is typically distributed among these voltage rails based on the
power requirements of the system components.
o Amperage (Current): The PSU output ratings also include the amperage or current
ratings for each voltage rail. It indicates the maximum amount of current that can be
provided by each rail. Amperage is calculated by dividing the wattage of a particular
voltage rail by the voltage level. For example, if a +12V rail has a rating of 20A, it can
provide a maximum of 240W (12V * 20A) of power.
o Efficiency Rating: PSUs also have efficiency ratings that indicate how effectively they
convert AC power from the electrical outlet into usable DC power for the components.
Efficiency is expressed as a percentage and represents the amount of input power
that is converted into output power. Higher efficiency ratings indicate more efficient
power conversion, resulting in less wasted energy as heat.
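The amperage arithmetic above reduces to watts = volts × amps, as a quick sketch shows:

```python
# Power delivered on a rail is the rail voltage multiplied by its
# maximum current rating.

def rail_watts(volts, amps):
    return volts * amps

print(rail_watts(12, 20))    # +12 V rail rated 20 A -> 240 W
print(rail_watts(5, 30))     # +5 V rail rated 30 A -> 150 W
print(rail_watts(3.3, 28))   # +3.3 V rail rated 28 A -> 92.4 W
```

The 5 A and 30 A figures here are illustrative; actual rail ratings vary by power supply model and are printed on the PSU's label.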
It's important to choose a PSU with sufficient wattage and appropriate current ratings to
meet the power requirements of the components in the system. Factors such as the
number and power requirements of the CPU, graphics card, drives, and other peripherals
should be considered when selecting a PSU.
Additionally, it's worth noting that PSUs with higher wattage ratings often have additional
connectors and cables to support more demanding systems with multiple components.
• Output Power. Output power, in the context of a Power Supply Unit (PSU), refers to the
amount of electrical power that the PSU can deliver to the components in a computer
system. It indicates the maximum power capacity of the PSU and is typically measured
in watts (W).
The output power of a PSU is an important specification to consider when selecting a PSU
for a computer system. It needs to be sufficient to meet the power requirements of the
components and peripherals in the system. Insufficient power output can lead to system
instability, crashes, or even component damage, while excessive power may result in
unnecessary energy consumption.
The output power of a PSU is usually divided into different voltage rails, including +3.3V,
+5V, and +12V, which correspond to the power requirements of various components in
the system. The wattage is distributed among these rails based on the power demands of
the components. For example, the +12V rail is critical for providing power to the CPU and
graphics card, which are often the most power-hungry components in a system.
When selecting a PSU, it's important to consider the total power requirements of the
components in the system. This can be determined by assessing the power consumption
values of each component, as specified by the manufacturers. It's recommended to
choose a PSU with a wattage rating that exceeds the total power requirement to allow
for future upgrades or additional components.
It's worth noting that the actual power consumption of a system may vary based on the
specific workload, usage patterns, and efficiency of the PSU. Additionally, PSU efficiency
can affect the amount of power drawn from the electrical outlet, as higher efficiency PSUs
convert more of the input power into usable output power.
Note (in Table 9.2) that the "negative voltages" are added to the total, not subtracted
from it. Here's a sample (actual) 300 W AT form factor power supply's distribution; you'll
see that the total is close to the power supply's rated specification:
For the ATX/NLX, SFX, and WTX form factors, which provide +3.3 V power (as well as +5 V
standby power and potentially others), there is an added complication: there is a
maximum rating for each of the +3.3 V and +5 V currents, but also a combined "+3.3 V /
+5 V" rating. The power supply will provide up to the combined total on these two
voltages, in any combination, as long as the individual current ratings are not exceeded.
Here's a sample (actual) 300 W ATX form factor power supply's distribution:
Understanding the output power of a PSU is crucial for selecting a suitable PSU that can
provide sufficient and stable power to all the components in a computer system. It
ensures reliable operation, prevents power-related issues, and supports the optimal
performance of the system.
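The combined-rating rule described above can be sketched in a few lines of Python. The rail currents and limits below are hypothetical illustrations, not figures from any specific power supply's datasheet:

```python
# Check a load against a PSU's per-rail current ratings AND the
# combined +3.3V/+5V wattage cap described above.
RAIL_VOLTS = {"+3.3V": 3.3, "+5V": 5.0, "+12V": 12.0}

def rails_ok(load_amps, rail_limits_amps, combined_3v3_5v_watts):
    """True if every rail stays within its own current rating and the
    +3.3V and +5V rails together stay under their combined wattage cap."""
    for rail, amps in load_amps.items():
        if amps > rail_limits_amps[rail]:
            return False
    combined = (load_amps["+3.3V"] * RAIL_VOLTS["+3.3V"]
                + load_amps["+5V"] * RAIL_VOLTS["+5V"])
    return combined <= combined_3v3_5v_watts

# Hypothetical 300 W-class supply: 20 A on +3.3V, 30 A on +5V, 15 A on
# +12V, but at most 180 W drawn from +3.3V and +5V combined.
limits = {"+3.3V": 20.0, "+5V": 30.0, "+12V": 15.0}
print(rails_ok({"+3.3V": 10.0, "+5V": 20.0, "+12V": 12.0}, limits, 180.0))
```

Note that a load can respect every individual rail rating yet still exceed the combined cap, which is exactly the complication the ATX-style rating introduces.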
Here are the key factors to consider when assessing system power requirements:
o Component Power Consumption: Each component in a computer system consumes
a certain amount of power. The major components to consider include:
o Processor (CPU): Different CPUs have varying power requirements based on their
architecture, clock speed, and number of cores.
o Graphics Card (GPU): High-performance GPUs used for gaming or professional
applications tend to have higher power demands.
o Memory (RAM): RAM modules have minimal power requirements compared to other
components.
o Storage Drives: Hard Disk Drives (HDDs) and Solid-State Drives (SSDs) have relatively
low power consumption.
o Motherboard: The motherboard itself consumes some power, but the amount is
generally minimal compared to other components.
o Peripherals: Additional components such as optical drives, network cards, sound cards,
and USB devices can contribute to the overall power requirements.
o TDP (Thermal Design Power): The Thermal Design Power rating specifies the maximum
amount of heat generated by a component under typical operating conditions. Although
TDP does not directly correlate to power consumption, it can give an indication of a
component's power requirements.
o Overclocking: If you plan to overclock your CPU or GPU, the power requirements will
increase significantly. Overclocking involves running components at higher frequencies or
voltages, which results in increased power consumption.
To determine the system power requirements, you can follow these steps:
o List the major components in the system and note each one's power consumption, as
specified by its manufacturer.
o Add the values together to estimate the total power requirement, allowing some
headroom for future upgrades or additional components.
o Select a PSU with an appropriate wattage rating that exceeds the total power
requirement. It's recommended to choose a reliable, high-quality PSU from a
reputable brand to ensure stable and efficient power delivery.
By accurately assessing the system power requirements and selecting a suitable PSU, you
can ensure that your computer system receives sufficient and reliable power for optimal
performance and stability.
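The assessment described above amounts to a simple sum plus headroom. A minimal sketch, where the component wattages are illustrative placeholders rather than manufacturer figures:

```python
# Estimate total system power draw and a recommended PSU wattage.
# The wattages below are hypothetical; use the values published by
# each component's manufacturer for a real build.
component_watts = {
    "CPU": 125, "GPU": 220, "RAM": 10, "SSD": 5,
    "Motherboard": 50, "Fans and peripherals": 30,
}

total = sum(component_watts.values())   # estimated peak draw in watts
recommended = total * 1.3               # ~30% headroom for upgrades

print(f"Estimated load: {total} W, recommended PSU: {recommended:.0f} W")
```

The 30% headroom factor is a common rule of thumb, not a standard; it also keeps the PSU operating nearer the middle of its load range, where efficiency is typically best.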
Chapter 10
File Formats
Overview:
File formats are standardized structures or specifications that define how data is organized,
stored, and encoded in a computer file. Each file format serves a specific purpose and determines
how data is represented and interpreted by software applications. Understanding file formats is
crucial for working with different types of files and ensuring compatibility across different
software and platforms.
Objectives:
At the end of this chapter, students will be able to:
1. Scrutinize the different file formats.
2. Illustrate the different applications and their native file formats.
3. Convert a native file format to a more accessible platform.
A file format is a standard way that information is encoded for storage in a computer file.
• A file format specifies how bits are used to encode information in a digital storage medium.
• File formats may be either proprietary or free and may be either unpublished or open.
• For example: text format, image file format, audio file format, and video file format.
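One concrete way a format "specifies how bits are used" is through a signature (magic bytes) at the start of the file. A minimal sketch of format sniffing, using the well-known signatures of a few formats covered in this chapter:

```python
# Identify a file format from its leading "magic" bytes rather than
# trusting the file extension. These signatures are the standard ones
# for PNG, JPEG, GIF, and ZIP.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"PK\x03\x04": "ZIP archive",
}

def sniff(first_bytes):
    """Match the start of a file's contents against known signatures."""
    for magic, name in SIGNATURES.items():
        if first_bytes.startswith(magic):
            return name
    return "unknown"

print(sniff(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8))  # a PNG header
```

Plain text has no signature at all, which is one reason "what a text file is" resists a universal definition: any byte sequence can be opened as text.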
Several text editors use the TXT file extension for text files. Text is a sequence of
characters, and the words they form, encoded in a computer-readable format. Although
there are various widely used encodings for text files, including ANSI (used on DOS and
Windows platforms) and ASCII (a cross-platform format), there is no universally accepted
definition of what a text file is.
There is something about writing or logging your day in text files that is quite different from
writing in a Microsoft Word Document, Apple Pages Document, Google Document, or even an
OpenOffice ODT format. Below are the benefits you can gain with plain text files:
Advantages
• Portability. One of the best things about plain text is that it is a portable format between
almost any operating system. You can use plain text files on Windows, Mac OS, iOS,
Android, Windows Phone, Linux, etc. All of these operating systems have ways of natively
showing you the contents of a text file as well and also allowing you to edit its contents.
• Easy to Use. Plain text files are at the zenith of ease of use. There isn't really anything to
learn; you just start typing text into a blank file. There are no keyboard shortcuts to learn,
no complicated menu structures, and no formatting to master. It's all about putting data
in a file.
You can create a new plain text file in any operating system using its built-in apps (e.g.,
TextEdit.app or Notepad.exe).
• No lock-in. Another great reason to love plain text is that there is no vendor
lock-in. This goes hand-in-hand with the portability reason mentioned above. There is no
"special app" that is the only software able to open text files, and there are no
compatibility issues to deal with. For all practical purposes, text files are just text files and
can be opened by pretty much any document or text creation software.
This is a great thing when you want your data to stick around for the long term. Even if
the .doc format dies in the next 80 years, it is hard to believe that no system in the world
will be able to open the simplest of data forms (even if you have to load it up in a
heads-up display embedded in your eyes).
• RTF
o RTF (Rich Text Format) is a file format used for text documents that supports basic
formatting, such as bold, italics, and font styles.
o It can be opened and edited by various word processing applications.
o RTF files have a .rtf file extension.
• HTML
o HTML (Hypertext Markup Language) is a file format used for creating web pages.
o It uses tags and elements to structure and format content on the web.
o HTML files can be opened and displayed by web browsers.
o HTML files have a .html or .htm file extension.
• PDF
o PDF (Portable Document Format) is a file format used for documents that are meant to be
viewed and printed consistently across different platforms and devices.
o PDF files retain the formatting, fonts, images, and other elements of a document.
o PDF files can be opened and viewed using PDF reader software.
o PDF files have a .pdf file extension.
• ZIP
o ZIP is a file format used for compressing and archiving multiple files into a single, smaller file.
o It reduces file size and allows for easier storage and transfer of multiple files.
o ZIP files can be created and extracted using compression software.
o ZIP files have a .zip file extension.
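The ZIP behavior described above can be demonstrated with Python's standard zipfile module, writing two files into one archive in memory and reading one back:

```python
# Create and read a ZIP archive entirely in memory, showing several
# files packed into a single, smaller compressed file.
import io
import zipfile

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("notes.txt", "plain text compresses well " * 50)
    zf.writestr("data.csv", "a,b,c\n1,2,3\n")

with zipfile.ZipFile(buffer) as zf:
    names = zf.namelist()                 # the archived file names
    restored = zf.read("data.csv").decode()

print(names, restored == "a,b,c\n1,2,3\n")
```

The same module handles extraction of archives created by other ZIP tools, which is what makes the format practical for storage and transfer across platforms.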
When you take a picture, the camera is effectively recording data, and that data is then
converted into a digital image. Every image you view online is stored as an image file, and
most of what you see printed on items like paper, plastic, or t-shirts also originated as an
image file. These files come in several formats, each tailored for a particular purpose. Use
the proper type for the job and your design will look exactly as you intended; use the
incorrect format and you can end up with a poor print, a subpar web image, an oversized
download, or a missing graphic in an email.
Using a photo editing program, you can retrieve and edit data in a wide variety of file formats.
Here are some important aspects and examples of image file formats.
Raster Image Formats: Raster images are made up of individual pixels arranged in a grid. They
are resolution-dependent, meaning they can lose quality when resized or scaled up.
Raster Images
A raster image is composed of a grid of dots known as pixels, and each pixel is assigned a
specific color. In contrast to vector images, raster images are resolution-dependent: they
exist at a single size. Raster images can become "pixelated" or blurry when enlarged, since
doing so stretches the pixels in the image. When you magnify an image, your software
essentially makes an educated guess about the missing picture data based on the
surrounding pixels, and the results are typically not great.
Raster images are typically used for photographs, digital artwork, and web graphics (such as
banner ads, social media content, and email graphics). Adobe Photoshop is the industry-standard
image editor that is used to create, design, and edit raster images as well as to add effects,
shadows, and textures to existing designs.
• CMYK stands for Cyan, Magenta, Yellow, and Key (black), the ink colors used in the CMYK
color model. CMYK is a subtractive color model primarily used for printed materials,
where colors are produced by layering inks on a physical medium.
• RGB stands for Red, Green, and Blue, which are the primary colors used in the RGB color
model. RGB is an additive color model primarily used for electronic displays, such as
computer monitors, televisions, and digital screens.
Key Differences:
• Color Representation:
o CMYK primarily represents printed colors by using a combination of ink colors
on a physical medium like paper.
o RGB represents colors on electronic displays by combining light emissions
from red, green, and blue pixels.
• Color Range:
o RGB has a wider color gamut and can represent more vibrant and saturated
colors, particularly in the blue and green spectrum.
o CMYK has a narrower color gamut and may not accurately reproduce some
highly saturated colors, particularly in the blue and green range.
• Usage:
o CMYK is typically used for printed materials such as brochures, magazines, and
other physical media.
o RGB is used for electronic displays, including websites, computer graphics,
digital images, and multimedia content.
• Conversion:
o Converting an RGB image to CMYK may result in a loss of color vibrancy and
gamut, as the CMYK color space is generally smaller.
o It is essential to consider the intended output when converting between CMYK
and RGB to ensure optimal color representation.
When working with images, it is important to consider the color model that aligns with the
specific requirements of the medium, such as print or digital display. Designers and
photographers often need to work in both color models, ensuring that their images appear
accurately and consistently across different platforms.
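The conversion caveat above can be made concrete with the common naive RGB-to-CMYK formula. Real print workflows use ICC color profiles; this sketch only illustrates the relationship between the two models:

```python
# Naive RGB -> CMYK conversion: K absorbs the shared darkness, and the
# remaining ink fractions are scaled by what K leaves behind.
def rgb_to_cmyk(r, g, b):
    """Convert 0-255 RGB values to CMYK fractions in [0, 1]."""
    r, g, b = r / 255, g / 255, b / 255
    k = 1 - max(r, g, b)
    if k == 1:                       # pure black: only key ink is needed
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

print(rgb_to_cmyk(255, 0, 0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
```

Because this mapping ignores the smaller CMYK gamut, a highly saturated RGB color converted this way may still print duller than it appears on screen, which is exactly the vibrancy loss noted above.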
Lossy Compression:
o Lossy compression is a data compression technique that reduces file size by
permanently discarding some information deemed less essential.
o During the compression process, non-essential or less noticeable details are
removed or approximated, resulting in a smaller file size.
o The discarded information cannot be fully recovered, leading to a loss of data or
quality.
o Lossy compression is commonly used for multimedia files, such as images (JPEG) and
audio (MP3), where minor loss of quality may not be easily perceivable to the
human senses.
o The level of compression can be adjusted to find a balance between file size
reduction and acceptable quality loss.
Lossless Compression:
o Lossless compression is a data compression technique that reduces file size without
any loss of data or quality.
o The compression algorithm rearranges and represents the data more efficiently,
allowing for full reconstruction of the original file.
o During decompression, the original data is perfectly restored, bit-for-bit, without
any information loss.
o Lossless compression is ideal for applications where maintaining the integrity and
exactness of the data is crucial, such as archiving, text documents, and data backups.
o File formats like PNG (for images) and FLAC (for audio) employ lossless compression.
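The distinction between the two techniques can be demonstrated with Python's standard zlib module for the lossless side, and a crude quantization step standing in for the "discard detail" idea behind lossy compression:

```python
# Lossless vs. lossy, in miniature. zlib round-trips data bit-for-bit;
# quantizing values (a stand-in for what JPEG/MP3-style codecs do)
# shrinks the data further but cannot be undone exactly.
import zlib

original = bytes(range(256)) * 8

# Lossless: decompression recovers the exact original bytes.
packed = zlib.compress(original)
assert zlib.decompress(packed) == original

# "Lossy": rounding each byte down to a multiple of 16 discards detail
# permanently; the original values are unrecoverable.
quantized = bytes((b // 16) * 16 for b in original)
print(quantized == original, len(zlib.compress(quantized)) < len(packed))
```

The quantized data compresses better precisely because information was thrown away, which is the file-size advantage lossy formats trade quality for.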
Key Differences:
o Lossy compression permanently discards less essential data, producing much smaller
files at the cost of some quality; lossless compression preserves every bit of the original.
o Typical lossy formats include JPEG (images) and MP3 (audio); typical lossless formats
include PNG (images) and FLAC (audio).
Choosing between lossy and lossless compression depends on the specific requirements of the
data and the intended use. Lossy compression is often used for multimedia files where some loss
of quality is tolerable, while lossless compression is favored for preserving data accuracy and
integrity.
Typically, lossy files are much smaller than lossless files, making them ideal to use online where
file size and download speed are vital.
You should use a JPEG when…
o Storing and Sharing Photographs: JPEG is commonly used for storing and sharing
digital photographs due to its ability to compress image files while maintaining
acceptable image quality. It achieves higher compression ratios by discarding non-
essential image information that may not be easily perceptible to the human eye.
o Web Images: JPEG is suitable for web images, especially when the focus is on
reducing file size for faster loading times. It is effective for photographic images
or complex graphics with a wide range of colors and subtle color transitions.
o Continuous-tone Images: JPEG is well-suited for continuous-tone images, which
include photographs and images with gradients or smooth color transitions. It
preserves the nuances of color and detail in these types of images.
o On-Screen Display: JPEG is optimized for on-screen display, making it ideal for
viewing images on computer screens, mobile devices, and other electronic
displays. It provides a good balance between file size and image quality, ensuring
efficient data transmission and storage.
o Large Image Libraries: When dealing with a large collection of images, such as in
digital photo albums or image archives, JPEG's ability to compress files allows for
efficient storage and management of a significant number of images.
o Flexibility in Compression Settings: JPEG offers flexibility in adjusting compression
settings to find the right balance between file size and image quality. Different
levels of compression can be chosen based on the specific requirements of the
image and the desired trade-off between file size and visual fidelity.
It's important to note that JPEG compression is lossy, meaning that some image quality is
sacrificed to achieve smaller file sizes. Therefore, it may not be suitable for images that require
pixel-perfect accuracy, such as line drawings, diagrams, or images with text. For such cases,
formats like PNG or GIF, which support lossless compression or transparency, might be more
appropriate.
You should not use a JPEG when…
o Repeated Editing: JPEG is a lossy format, so each time you edit and re-save a JPEG
image, the compression artifacts may become more pronounced. This can result
in a degradation of image quality over multiple editing sessions. To maintain image
integrity during extensive editing, it's preferable to work with lossless formats like
TIFF or PSD (Photoshop Document).
o Animation: JPEG does not support animation. If you need to create animated
images, formats like GIF or APNG (Animated Portable Network Graphics) are
commonly used.
o Graphics with Flat Colors or Limited Color Palette: JPEG is designed for
continuous-tone images with complex color gradients, such as photographs. If you
are working with images that have flat colors or a limited color palette, formats
like GIF or PNG with indexed color support may result in smaller file sizes and
better color accuracy.
Remember that the suitability of image formats depends on specific requirements, such as
image content, intended use, and desired trade-offs between file size and image quality.
Consider these factors when determining the most appropriate format for your particular
needs.
You should use a GIF when…
o Transparency: GIF supports transparency, allowing you to specify one color in the
image to be transparent. This is useful for overlaying images onto different
backgrounds or creating images with irregular shapes.
o Small File Size: GIF uses lossless compression, meaning there is no loss of image
quality during compression. This results in relatively small file sizes, making it ideal
for web graphics and situations where file size is a consideration, such as when
sharing images on websites or through email.
o Browser Compatibility: GIF is supported by virtually all web browsers, making it a
reliable choice for displaying images on websites. It ensures broader compatibility
across different platforms and devices.
o Text-Based Images: GIF is suitable for images containing text or simple graphics
with sharp edges. Unlike JPEG, which may introduce compression artifacts around
text or sharp edges, GIF preserves the crispness and readability of text.
o Image Sequences: GIF can be used to display a sequence of images in rapid
succession, creating the illusion of motion. This technique is often used in
tutorials, demonstrations, or storytelling.
o Limited Animation Effects: While GIF supports animation, it has limitations in
terms of the number of frames and color palette. GIF animations are typically
smaller in size and more straightforward in terms of effects compared to other
formats like APNG or video formats.
When considering using GIF, it's important to be mindful of its limitations, such as its limited color
palette and relatively low quality compared to other formats like JPEG or PNG. However, for
specific use cases like simple animations, transparency, or small file size requirements, GIF can
be a practical and widely supported choice.
o Limited or Looping Video Clips: GIF can be used to convert short video clips into
a format that can be easily shared and viewed on platforms that do not support
video formats. However, it's important to consider the file size limitations of GIFs
when using them for video clips, as they can quickly become large files if the
duration or complexity increases.
It's important to note that GIF has certain limitations, such as its limited color palette and
relatively low quality compared to formats like JPEG or PNG. It's not suitable for high-resolution
images or photographs with complex color gradients. Additionally, using GIFs for longer
animations or high-quality visuals may result in large file sizes and reduced image quality.
You should use a PNG when…
o Images with Textures or Patterns: PNG can effectively preserve images with fine
textures, patterns, or intricate details, such as fabric textures or detailed
illustrations.
o Archiving or Preservation: PNG is a suitable format for archiving or preserving
images due to its lossless compression and support for high-quality visuals. It
ensures that the original image is accurately stored and can be retrieved without
any loss of quality in the future.
It's important to note that PNG files tend to have larger file sizes compared to compressed
formats like JPEG. While PNG is ideal for preserving image quality and transparency, it may not
be the most efficient choice for large or bandwidth-sensitive applications. In such cases,
considerations regarding file size and loading times should be taken into account.
Consider these factors when determining the most appropriate image format for your specific
needs. While PNG offers lossless compression and supports transparency, it may not always be
the most efficient choice depending on the specific requirements and constraints of your project.
It's important to note that TIFF files tend to have larger file sizes compared to other compressed
formats like JPEG. This makes them less suitable for web or online use where file size and loading
times are critical. However, for applications that require maximum image quality, preservation
of data, and compatibility with professional workflows, TIFF remains a preferred choice.
Consider these factors when determining the most suitable image format for your specific needs.
While TIFF offers lossless compression, high-quality preservation, and extensive editing
capabilities, its larger file sizes and compatibility limitations make it less practical for certain
applications, particularly those involving web-based or size-sensitive environments.
BMP (Bitmap Image File):
o BMP is a basic image format that stores data pixel by pixel without compression.
o It is relatively large in file size but maintains high image quality.
o BMP files are commonly used in Windows environments and for simple graphics.
It's worth noting that BMP files tend to have larger file sizes compared to compressed formats
like JPEG or PNG. This makes them less practical for web or online use, where file size and loading
times are critical considerations. However, for applications that prioritize compatibility, lossless
quality, or platform-specific requirements, BMP can be a suitable choice.
o Web or Online Use: Due to their larger file sizes, BMP files are not optimized for
web or online use. Uploading or loading BMP files on websites can be slow and
consume significant bandwidth. It is more practical to use compressed formats like
JPEG or PNG for web graphics and online applications.
o Limited Compatibility: While BMP is widely supported, some software
applications, web browsers, or devices may have limitations in handling BMP files,
especially in terms of color depths or more advanced features. This can result in
compatibility issues when sharing or opening BMP files in different environments.
o Lossless Quality is Not Required: BMP stores image data without lossy compression,
preserving the original image quality. However, if lossless quality is not a critical
requirement, formats like JPEG or PNG, which offer effective compression while
maintaining acceptable image quality, may be more suitable.
o Animation or Interactivity: BMP does not support animation or interactivity. If you
need to create animated images or require interactive elements, other formats like
GIF or SVG (Scalable Vector Graphics) are more appropriate.
o Platform-Independent Use: BMP is often associated with specific operating
systems, such as Windows, and may not be as universally recognized or supported
across different platforms. If you require platform-independent compatibility,
formats like JPEG or PNG are more widely recognized and compatible.
o Limited Color or Transparency Options: BMP supports a wide range of color
depths, but it may not offer the same level of color or transparency options as
formats like PNG or GIF. If you need images with transparency, indexed colors, or
more advanced color features, consider using other formats.
Consider these factors when determining the most suitable image format for your specific needs.
While BMP offers lossless quality and compatibility with older systems, its larger file sizes and
limited features make it less practical for certain applications, particularly those involving web-
based or size-sensitive environments.
digital artwork, or photo manipulation, where maintaining flexibility and editing
control is crucial.
o Collaboration with Other Designers: PSD files are widely recognized and supported by
other designers, especially those using Adobe Creative Suite software. By sharing PSD
files, you can collaborate more effectively, allowing others to access and edit individual
layers, apply adjustments, or make modifications to the design.
o Printing and Professional Graphics: PSD files are commonly used in professional
printing and graphic design workflows. They support CMYK color space, high-resolution
images, and various color profiles, ensuring accurate color reproduction and
compatibility with printing processes.
o Preservation of Image Metadata: PSD files can store metadata such as color profiles,
author information, copyright details, and other relevant data. This makes them
suitable for archiving or preserving valuable information associated with the image.
o Multiple Variations or Versions: PSD allows you to save different variations or versions
of the same image within a single file, thanks to its layer-based structure. This makes
it convenient for creating design variations, mockups, or different compositions
without cluttering your file system with multiple files.
o Large File Sizes: Since PSD is primarily used for advanced image editing, it can handle
large file sizes without significant loss of performance. This is important when working
with high-resolution images or projects that require detailed edits and adjustments.
o Future Editing and Revisions: Saving your work as a PSD file ensures that you can
revisit and modify your project in the future. It preserves all layers, effects, and
adjustments, allowing you to make changes without starting from scratch or losing any
previous work.
It's important to note that PSD files may not be ideal for all scenarios, especially when sharing
images online or for web-based applications, as their larger file sizes and specific software
requirements may limit their usability. In such cases, formats like JPEG, PNG, or PDF might be
more suitable for sharing or displaying purposes.
like JPEG or PNG can still retain the final image with acceptable quality while reducing
file size and increasing compatibility.
o Simplified or Finalized Edits: If you have already completed your image edits and no
further adjustments or non-destructive editing is required, saving the file as a PSD
may not be necessary. Formats like JPEG or PNG can capture the final edited image
effectively and offer better compatibility for sharing or displaying purposes.
o Large-Scale File Distribution: PSD files tend to have larger file sizes due to their
support for layers and advanced editing features. If you need to distribute images on
a large scale, such as sending them via email or uploading them to a server, the larger
file sizes of PSDs can pose challenges in terms of file transfer and storage limitations.
o Quick Viewing or Basic Editing: If you simply need to view or perform basic edits on
an image without requiring the advanced features of a PSD file, using a simpler and
more widely supported format like JPEG or PNG is more practical. These formats are
universally recognized and can be easily opened and edited by a wide range of
software applications.
o Web or Mobile App Development: When developing web or mobile applications, PSD
files are generally not used directly in the production environment. Instead, web-
friendly formats like PNG or SVG (Scalable Vector Graphics) are commonly utilized to
optimize performance, reduce file size, and ensure compatibility across different
devices and browsers.
Consider these factors when deciding whether to use a PSD file. While PSD offers extensive
editing capabilities and layer preservation, its limited compatibility, larger file sizes, and software-
specific requirements may make it less suitable for certain scenarios, such as online sharing,
simplified edits, or widespread distribution.
Vector Image Formats: Unlike raster images, vector images are based on mathematical
equations and can be scaled without loss of quality.
o Graphics with Well-Defined Shapes and Lines: SVG excels at representing graphics
with well-defined shapes, lines, and geometric elements. It's particularly useful for
logos, icons, diagrams, and illustrations that rely on crisp lines, curves, and precise
shapes.
o Small File Sizes: SVG files are typically smaller in size compared to raster image formats
like JPEG or PNG. This is because SVG files are based on mathematical descriptions of
shapes and lines, rather than pixel data. Smaller file sizes result in faster loading times,
reduced bandwidth usage, and improved performance, especially in web applications.
o Editability: SVG files can be easily edited and modified using various vector graphics
editing software, such as Adobe Illustrator or Inkscape. You can adjust shapes, colors,
sizes, and other attributes without sacrificing quality. This flexibility is particularly
valuable for designers and developers who need to customize and adapt images to
different requirements.
o Animation and Interactivity: SVG supports animation and interactivity through CSS
(Cascading Style Sheets) or JavaScript. You can create dynamic and interactive
graphics, such as animated icons, infographics, or interactive maps, by manipulating
elements within the SVG file.
o Accessibility: SVG allows for the inclusion of semantic information and accessibility
features. It supports alternative text (alt text), ARIA (Accessible Rich Internet
Applications) attributes, and other accessibility enhancements, making it easier for
screen readers and assistive technologies to interpret and convey information to
visually impaired users.
o Cross-Platform Compatibility: SVG is widely supported across different platforms,
browsers, and devices, including desktops, laptops, tablets, and smartphones. It
ensures consistent rendering and appearance, providing a consistent experience for
users regardless of their device or operating system.
It's important to note that while SVG is suitable for many use cases, it may not be ideal for images
with complex gradients, high levels of detail, or images that rely on photographic content. In such
cases, raster image formats like JPEG or PNG may be more appropriate. Additionally, browser
support for SVG features may vary, so it's important to consider fallback options for older
browsers if advanced SVG functionality is utilized.
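Because an SVG file is just XML text describing shapes, it can be generated and inspected with ordinary string and XML tools. A minimal sketch building and parsing a one-circle SVG:

```python
# An SVG image is XML: shapes are described mathematically as elements
# and attributes, which is why SVG scales without quality loss and stays
# small for simple graphics.
import xml.etree.ElementTree as ET

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<circle cx="50" cy="50" r="40" fill="teal"/>'
    '</svg>'
)

root = ET.fromstring(svg)        # well-formed XML parses cleanly
circle = root[0]                 # the single circle element
print(root.tag.endswith("svg"), circle.get("r"))
```

Editing the `r` attribute (or any other) changes the rendered shape directly, which is the editability advantage described above: no pixels are ever touched.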
precise control over individual pixels, making them better suited for certain graphic
design or photo editing tasks.
o Displaying complex animations: While SVG supports basic animations, it may not be
the optimal choice for complex or high-fidelity animations. In such cases, other formats
like GIF, APNG, or HTML5-based animation solutions may provide better results.
o Targeting older web browsers or platforms: Although SVG has good support across
modern web browsers, older versions or less common platforms may not fully support
it. If compatibility with a specific browser or platform is crucial, it's important to check
its SVG support before using it.
o Working with continuous-tone images or gradients: SVG is not the most suitable
format for continuous-tone images or gradients that require smooth transitions of
colors. Raster formats like JPEG or PNG are better suited for handling these types of
images.
o Needing pixel-level photo manipulation: If your workflow requires detailed photo
manipulation or advanced editing features that are commonly found in dedicated
image editing software, a raster format like TIFF or PSD may be more appropriate.
While SVG is a versatile and widely supported vector format, it is essential to consider the
limitations and specific requirements of your project to determine whether SVG is the most
suitable choice for your particular use case.
AI (Adobe Illustrator):
o AI is the native file format of Adobe Illustrator.
o It stores vector-based graphics, allowing for flexible editing and scaling.
o AI files are commonly used in professional graphic design and illustration workflows.
or other digital platforms. You can export to formats like PDF, SVG, EPS, or raster
formats like JPEG or PNG as needed.
o Scaling without loss of quality: Vector graphics stored in AI format can be scaled to
any size without sacrificing quality. This is particularly important when your designs
need to be resized for different applications or when working on projects that require
scalability, such as logos or signage.
o Incorporating advanced effects and transparency: AI supports a wide range of design
effects, blending modes, and transparency settings. It allows you to apply gradients,
transparency, shadows, and other advanced effects to create visually appealing and
sophisticated artwork.
o Leveraging integration with Adobe Creative Cloud: If you use Adobe Creative Cloud
and its suite of design applications, AI seamlessly integrates with other Adobe software
like Photoshop, InDesign, or After Effects. This facilitates efficient cross-application
workflows and design asset management.
o Maintaining compatibility with Adobe Illustrator: Using the AI format ensures
compatibility with future versions of Adobe Illustrator. As new features and
improvements are introduced, you can confidently open and work with AI files without
concerns about compatibility issues.
Overall, AI is an excellent choice for working with vector graphics, preserving editing capabilities,
collaborating with other designers, and ensuring precise control over design elements. It offers a
comprehensive set of tools and features for creating, editing, and exporting professional-quality
vector artwork.
o Time Constraints: When working under tight deadlines, choosing software you are already
familiar with or that offers a more streamlined workflow may help you save time and
meet your deadlines more efficiently.
o Budget Constraints: Adobe Illustrator is a professional graphic design software that
comes with a subscription cost. If you have budget constraints and can't afford the
recurring subscription fees, exploring free or more affordable graphic design software
options could be a more practical solution. There are several free and open-source
design tools available that can serve your design needs without the associated costs.
These considerations can help guide your decision on whether to use Adobe Illustrator or opt for
alternative software based on your specific design requirements, skill level, budget, and time
constraints. It's important to choose the software that best aligns with your needs and provides
a smooth and efficient workflow for your design projects.
o Reproducing logos and brand assets: When creating or working with branded materials,
using EPS ensures consistency and fidelity in reproducing the logo across various media.
o Saving illustrations or graphics for archival purposes: EPS is a suitable format for
archiving vector-based artwork. It preserves the original vector data, ensuring that the
artwork can be accessed and edited in the future without loss of quality or resolution.
EPS is a versatile and widely supported format that excels in print production and embedding
vector graphics. It offers compatibility with various software applications and is suitable for
maintaining transparency, layering, and scalability. When working in professional print
environments or when vector fidelity is crucial, EPS remains a reliable choice.
Consider these factors when determining whether to use an AI file. While AI offers extensive
editing capabilities and compatibility with Adobe Illustrator, its limited compatibility, larger file
sizes, and specific software requirements may make it less suitable for certain scenarios, such as
online sharing, basic editing, or broad collaboration.
o Creating e-books or digital publications: PDF is commonly used for creating e-books
or digital publications. It provides a consistent reading experience across different
devices, maintaining the document's structure, and allowing for easy navigation.
o Combining multiple files into a single document: PDF supports merging multiple files,
such as text documents, images, or spreadsheets, into a single PDF file. This
consolidates related content into one document for easy sharing or distribution.
PDF is a versatile format suitable for sharing, printing, archiving, and securing documents while
preserving their original layout and content. Its compatibility, portability, and rich feature set
make it a widely adopted standard for document exchange in various industries and applications.
Consider these factors when determining whether to use a PDF file. While PDF offers extensive
document features and compatibility, its limitations in terms of image quality, web display, and
specialized image editing may make it less suitable for certain scenarios focused primarily on
images or online use.
These are just a few examples of image file formats, each with its own features, compression
methods, and recommended use cases. The choice of format depends on factors such as image
complexity, desired file size, transparency requirements, and intended use (web, print, or
editing). Understanding image file formats enables efficient image storage, sharing, and display
while maintaining optimal image quality.
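Because each format defines a fixed signature (its "magic bytes") at the start of the file, a program can identify an image's format without trusting the file extension. The sketch below checks a few well-known signatures; it is illustrative rather than exhaustive, and the function name is an arbitrary choice.

```python
def sniff_image_format(data: bytes) -> str:
    """Guess an image file's format from its leading 'magic' bytes.

    These signatures are fixed by each format's specification, so the
    check works regardless of what the file is named.
    """
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "PNG"
    if data.startswith(b"\xff\xd8\xff"):
        return "JPEG"
    if data.startswith((b"GIF87a", b"GIF89a")):
        return "GIF"
    if data.startswith(b"BM"):
        return "BMP"
    if data.startswith(b"II*\x00") or data.startswith(b"MM\x00*"):
        return "TIFF"
    return "unknown"

# Example: the first bytes of a PNG and a GIF file.
assert sniff_image_format(b"\x89PNG\r\n\x1a\n" + b"\x00" * 8) == "PNG"
assert sniff_image_format(b"GIF89a" + b"\x00" * 6) == "GIF"
```

In practice you would read the first dozen or so bytes of a file (`open(path, "rb").read(16)`) and pass them to this function.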
Image file formats possess various features that determine their capabilities, compression
methods, and suitability for different applications. Here are some key features commonly found
in image file formats:
1. Compression:
o Image file formats may employ different compression methods to reduce file size.
o Lossless Compression: Some formats use lossless compression, which allows for the exact
reconstruction of the original image without any loss in quality. Examples include PNG
and TIFF.
o Lossy Compression: Other formats use lossy compression, sacrificing some image details
to achieve smaller file sizes. JPEG is a well-known example of a lossy compressed format.
2. Transparency:
o Some image formats support transparency, allowing certain parts of an image to be fully
or partially transparent.
o GIF: GIF supports indexed transparency, where a single color is designated as transparent.
o PNG: PNG supports alpha-channel transparency, allowing for smooth and variable
transparency levels.
3. Animation:
o Certain image formats, like GIF, APNG (Animated PNG), and MNG (Multiple-image
Network Graphics), can store multiple frames to create animations.
4. Metadata:
o Image file formats often provide the ability to store additional metadata, such as camera
settings, geolocation, timestamps, and copyright information.
o Formats like JPEG and TIFF support metadata standards such as Exif (Exchangeable Image
File Format) and IPTC (International Press Telecommunications Council) for storing image-
related data.
5. Layer Support:
o Some image formats, such as PSD (Photoshop Document), allow for the preservation of
image layers, enabling advanced editing capabilities in graphic design software.
6. Scalability:
o Vector-based image formats, like SVG and AI, are inherently scalable as they are defined
by mathematical equations rather than pixels. They can be resized without any loss of
quality.
7. Platform Compatibility:
o Image formats vary in terms of their support across different operating systems, web
browsers, and image editing software.
o Common formats like JPEG, PNG, and GIF are widely supported on various platforms,
making them highly compatible.
These features contribute to the functionality and versatility of image file formats, allowing users
to select the most suitable format based on their specific needs, desired image quality, and
intended use.
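Several of the features above can be seen directly in a file's bytes. As an illustrative sketch using only Python's standard library, the snippet below builds a complete 1x1 PNG from scratch: the IDAT chunk is compressed with zlib's DEFLATE (the lossless algorithm PNG mandates, so decompression recovers the data exactly), and color type 6 gives the pixel an alpha channel for variable transparency.

```python
import struct
import zlib

def chunk(ctype: bytes, payload: bytes) -> bytes:
    """A PNG chunk: 4-byte length, type, payload, CRC-32 of type + payload."""
    return (struct.pack(">I", len(payload)) + ctype + payload
            + struct.pack(">I", zlib.crc32(ctype + payload)))

def tiny_png(r: int, g: int, b: int, a: int) -> bytes:
    """Build a 1x1 RGBA PNG entirely from scratch."""
    signature = b"\x89PNG\r\n\x1a\n"   # PNG's magic bytes
    # IHDR: width=1, height=1, bit depth=8, color type 6 (truecolor + alpha)
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0)
    # One scanline: a filter byte (0) followed by the RGBA sample.
    raw = b"\x00" + bytes([r, g, b, a])
    idat = zlib.compress(raw)
    assert zlib.decompress(idat) == raw   # lossless round trip
    return (signature + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", idat) + chunk(b"IEND", b""))

# A half-transparent red pixel (alpha = 128 out of 255).
png_bytes = tiny_png(255, 0, 0, 128)
assert png_bytes.startswith(b"\x89PNG\r\n\x1a\n")
```

A lossy format like JPEG could not make the round-trip assertion above: decoding a JPEG yields pixels that are close to, but not identical to, the originals.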
What is an audio file?
Audio files consist of audio samples that capture the amplitude (loudness) of the sound
waveform at different points in time. These samples are taken at a specific rate called the
sampling rate, which determines the quality and fidelity of the audio. The higher the sampling
rate, the more accurately the original sound can be reproduced.
Audio files can be categorized into different formats, each with its own characteristics and
features. Every format involves trade-offs in audio quality, file size, and compatibility, so the
choice depends on the requirements and constraints of your application, whether it's music
production, streaming, broadcasting, or personal listening. Let's explore the various categories
of audio formats:
• PCM (Pulse Code Modulation): PCM is the most basic and widely used uncompressed audio
format. It samples the audio waveform at regular intervals, quantizes the samples into
numerical values, and stores them as raw data. PCM is the standard format for audio CDs
and is commonly used in professional audio production.
• WAV (Waveform Audio File Format): WAV is a popular uncompressed audio format
developed by Microsoft and IBM. It stores audio data in the PCM format and supports
various bit depths, sample rates, and channels. WAV files are commonly used for storing
high-quality audio and are compatible with a wide range of software and hardware devices.
• AIFF (Audio Interchange File Format): AIFF is an uncompressed audio format developed by
Apple. It is similar to WAV and also stores audio data in PCM format. AIFF files are widely
used in Apple's macOS and iOS platforms and are supported by many audio applications and
devices.
• BWF (Broadcast Wave Format): BWF is an extension of the WAV format that adds additional
metadata specifically for broadcasting purposes. It includes timecode information, cue
markers, and other details that are useful in professional audio and video production
workflows.
These uncompressed audio formats provide a faithful representation of the original audio, but
they tend to result in larger file sizes compared to compressed formats. They are commonly used
in situations where audio quality is critical, such as professional music production, mastering,
audio archiving, and broadcasting. It's important to note that the choice of format depends on
the specific requirements of the audio project and the compatibility of the intended playback or
editing systems.
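The ideas above (sampling, quantization, and the small header a WAV file wraps around raw PCM data) can be sketched with Python's standard-library `wave` module. The tone parameters and the file name `tone.wav` are illustrative choices:

```python
import math
import struct
import wave

SAMPLE_RATE = 44_100   # CD-quality sampling rate (samples per second)
DURATION = 0.5         # seconds of audio
FREQUENCY = 440.0      # an A4 tone

# Pulse Code Modulation: measure the waveform's amplitude at regular
# intervals, then quantize each sample to a 16-bit signed integer.
samples = []
for n in range(int(SAMPLE_RATE * DURATION)):
    t = n / SAMPLE_RATE
    amplitude = math.sin(2 * math.pi * FREQUENCY * t)   # in [-1.0, 1.0]
    samples.append(int(amplitude * 32767))              # quantize to 16 bits

# A WAV file is essentially this raw PCM data plus a small header
# describing the channel count, sample width, and sampling rate.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)             # mono
    wav.setsampwidth(2)             # 16-bit samples = 2 bytes each
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack(f"<{len(samples)}h", *samples))
```

Doubling `SAMPLE_RATE` would double the file size while capturing higher frequencies more faithfully, which is exactly the quality-versus-size trade-off described above.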
• MP3 (MPEG-1 Audio Layer 3): MP3 is one of the most popular and widely supported audio
formats. It uses perceptual audio coding to remove audio data that is considered less audible
to the human ear. This compression technique allows for substantial file size reduction while
maintaining acceptable audio quality. MP3 files are commonly used for music streaming,
digital downloads, and portable music players.
• AAC (Advanced Audio Coding): AAC is a successor to MP3 and provides improved audio
quality at lower bit rates. It offers better compression efficiency and supports a wider range
of audio frequencies, making it suitable for various applications including music streaming,
online videos, and mobile devices. AAC is the default format for iTunes and is widely
supported by most media players and devices.
• OGG (Ogg Vorbis): OGG is an open and royalty-free audio format. It uses a lossy compression
algorithm to reduce file size while maintaining good audio quality. OGG files are commonly
used for streaming, online distribution, and gaming applications. The format is known for its
efficient compression and high-quality audio at lower bit rates.
• WMA (Windows Media Audio): WMA is a proprietary audio format developed by Microsoft.
It offers a range of compression options, including both lossy and lossless formats. Lossy
WMA files provide good audio quality at lower bit rates and are compatible with Windows-
based devices and software applications. WMA is commonly used for online music stores,
streaming services, and Windows Media Player.
These lossy compressed audio formats are widely supported, have good compatibility across
devices and platforms, and offer efficient file sizes suitable for streaming, online distribution, and
portable media. However, it's important to consider the desired level of audio quality, bit rate
settings, and the intended playback environment when choosing a specific format for your audio
needs.
• FLAC (Free Lossless Audio Codec): FLAC is a widely used lossless audio format known for its
excellent compression efficiency. It can compress audio files to about 50-60% of their original
size without any loss of audio quality. FLAC files are popular among audiophiles, music
archivists, and professionals who require high-quality audio without the storage requirements
of uncompressed formats.
• ALAC (Apple Lossless Audio Codec): ALAC is a lossless audio format developed by Apple. It
provides similar compression ratios as FLAC, preserving the original audio quality while
reducing file sizes. ALAC files are commonly used in Apple's ecosystem and are compatible
with iTunes, iOS devices, and macOS.
• WMA Lossless (Windows Media Audio Lossless): WMA Lossless is a lossless audio format
developed by Microsoft. It offers lossless compression with smaller file sizes compared to
uncompressed formats. WMA Lossless files are compatible with Windows-based devices and
software applications, making them suitable for Windows users.
• APE (Monkey's Audio): APE is a highly efficient lossless audio format that achieves high
compression ratios. It provides bit-perfect audio reproduction and is popular among
audiophiles and music enthusiasts who value preserving audio quality while minimizing
storage space.
These lossless audio formats are preferred when maintaining the highest audio fidelity is crucial,
such as in professional audio production, archiving, or personal music libraries. They are suitable
for applications where storage space is a concern, but uncompressed audio quality is desired. It's
important to note that lossless audio files are typically larger than their lossy counterparts, so
considerations regarding storage capacity and playback compatibility should be taken into
account.
What is video?
Video, in a broader sense, refers to the visual representation of a sequence of images in motion.
It is a medium for conveying visual content, capturing moments, and sharing stories. Videos can
contain various types of content, including movies, TV shows, documentaries, music videos,
advertisements, and user-generated videos. They are widely used for entertainment, information
dissemination, communication, and artistic expression.
Videos are created using cameras, video recording devices, or computer-generated graphics.
They can be edited, processed, and enhanced using video editing software to achieve desired
effects, transitions, and visual storytelling. Once created, videos can be stored, shared, and
played back using various devices and platforms, including computers, televisions, smartphones,
and streaming services.
Video file formats play a crucial role in ensuring compatibility, efficient storage, and reliable
playback of video content across different devices and software applications. The choice of video
format depends on factors such as intended use, quality requirements, file size considerations,
platform compatibility, and delivery methods.
Video formats are designed to balance factors such as video quality, file size, compatibility, and
playback efficiency. Different video formats employ different compression techniques to reduce
the file size while maintaining acceptable video quality. These formats also determine how the
audio is encoded and synchronized with the video.
Here are some of the most widely used video file formats you can choose from:
• MTS (MPEG Transport Stream)
o File Extension: .mts
o MTS is a video format used for AVCHD (Advanced Video Coding High Definition)
video recording.
o It supports high-definition video and is commonly used in camcorders.
• M4V:
o File Extension: .m4v
o M4V is a video format developed by Apple and is similar to MP4.
o It is primarily used for video playback in iTunes and supports DRM (Digital Rights
Management) protection.
• F4V:
o File Extension: .f4v
o F4V is a video format based on the ISO base media file format.
o It is commonly used for streaming video content over the internet, often
associated with Adobe Flash technology.
• WebM:
o File Extension: .webm
o WebM is an open-source video format developed by Google.
o It uses the VP8 or VP9 video codec and is widely supported by modern web
browsers for HTML5 video playback.
• 3GP:
o File Extension: .3gp
o 3GP is a video format commonly used for mobile devices and video sharing.
o It provides efficient compression for small file sizes and is compatible with many
mobile platforms.
• FLV (Flash Video) & SWF (Shockwave Flash):
o File Extensions: .flv, .swf
o FLV is a video format used for streaming video content over the internet, often
associated with Adobe Flash technology.
o SWF is a multimedia format used for vector graphics, animation, and interactive
content.
• MP4/MPEG-4:
o File Extension: .mp4
o MP4 is a popular video format widely supported across different platforms and
devices.
o It uses the MPEG-4 video compression standard and supports various codecs.
• DivX:
o File Extension: .divx
o DivX is a video codec known for its high-quality video compression.
o It provides efficient compression for smaller file sizes while maintaining good
video quality.
• MKV (Matroska):
o File Extension: .mkv
o MKV is an open-source container format that can hold multiple audio, video, and
subtitle streams.
o It supports high-quality video and audio and is often used for storing HD videos.
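As with image formats, video container formats announce themselves with fixed signatures in their first bytes. The sketch below distinguishes a few of the containers listed above; note that MKV and WebM share the same EBML header, so telling them apart would require parsing further into the file. The function name is an arbitrary choice.

```python
def sniff_video_container(data: bytes) -> str:
    """Guess a video container format from its leading bytes.

    Container formats, like image formats, begin with fixed signatures:
    MP4-family files carry an 'ftyp' box after a 4-byte size field,
    Matroska/WebM files start with an EBML header, and FLV files with
    their own three-letter tag.
    """
    if len(data) >= 8 and data[4:8] == b"ftyp":
        return "MP4 family (MP4/M4V/F4V)"
    if data.startswith(b"\x1a\x45\xdf\xa3"):
        return "Matroska/WebM (MKV or WebM)"
    if data.startswith(b"FLV"):
        return "FLV"
    if data.startswith(b"RIFF") and data[8:12] == b"AVI ":
        return "AVI"
    return "unknown"

assert sniff_video_container(b"\x00\x00\x00\x18ftypmp42") == "MP4 family (MP4/M4V/F4V)"
assert sniff_video_container(b"\x1a\x45\xdf\xa3" + b"\x00" * 8) == "Matroska/WebM (MKV or WebM)"
```

This kind of signature check is how media players decide which demuxer to use before any decoding begins.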