Comprehensive Guide to IT Hardware Components & Related Topics


ROM

ROM (Read-Only Memory) is a type of non-volatile memory used in computers and other electronic
devices. Unlike RAM (Random Access Memory), which is volatile and loses its data when the power is
turned off, ROM retains its stored information even when the device is powered down. This makes ROM
ideal for storing critical system instructions and firmware that are required to boot up the device and
load the operating system.

Key Characteristics of ROM:

• Non-Volatile: Data remains intact even without power.
• Permanent Storage: Typically used for firmware, BIOS, and other essential system software.
• Read-Only: Data is written during manufacturing and cannot be easily modified by end-users.

Types of ROM:

1. Mask ROM: Data is permanently written during manufacturing. No modifications are possible.
2. PROM (Programmable ROM): Can be programmed once by the user using a special device.
3. EPROM (Erasable Programmable ROM): Can be erased and reprogrammed using ultraviolet light.
4. EEPROM (Electrically Erasable Programmable ROM): Can be erased and reprogrammed electrically,
making it more flexible than EPROM.

Applications of ROM:

• Storing the BIOS or UEFI firmware in computers.
• Holding firmware in embedded systems, such as routers, printers, and gaming consoles.
• Preserving critical software in devices where changes are rarely needed.

ROM plays a foundational role in computing by ensuring that essential instructions are always available,
enabling devices to start up and function correctly. While it lacks the flexibility of RAM, its reliability and
permanence make it indispensable for system stability and long-term storage of unchanging data.

Types:

• Mask ROM: Factory-written, unchangeable.
• PROM: Programmable once.
• EPROM: Erasable with UV light.
• EEPROM: Electrically erasable (used in modern firmware).

Uses: Boot processes, embedded systems, firmware storage.


RAM

RAM (Random Access Memory) is the hardware in a computing device where the operating system (OS),
application programs, and data in current use are kept so the device's processor can quickly reach them.
RAM is the main memory in a computer, and it is much faster to read from and write to than other kinds
of storage, such as a hard disk drive (HDD), solid-state drive (SSD), or optical drive.

Random Access Memory is volatile. That means data is retained in RAM as long as the computer is on,
but it is lost when the computer is turned off. When the computer is rebooted, the OS and other files
are reloaded into RAM, usually from an HDD or SSD.

Because of its volatility, Random Access Memory can't store permanent data. RAM can be compared to
a person's short-term memory, and a hard drive to a person's long-term memory. Short-term memory is
focused on immediate work, but it can only keep a limited number of facts in view at any one time.
When a person's short-term memory fills up, it can be refreshed with facts stored in the brain's long-
term memory.

A computer also works this way. If RAM fills up, the computer's processor must repeatedly go to the
hard disk to overlay the old data in RAM with new data. This process slows the computer's operation.

The term random access as applied to RAM comes from the fact that any storage location, also known as
any memory address, can be accessed directly. Originally, the term Random Access Memory was used to
distinguish regular core memory from offline memory.

Offline memory typically referred to magnetic tape from which a specific piece of data could only be
accessed by locating the address sequentially, starting at the beginning of the tape. RAM is organized
and controlled in a way that enables data to be stored and retrieved directly to and from specific
locations.

Key Features:

• Volatile: Loses data when power is off.
• Fast Access: Much quicker than HDDs/SSDs.
• Dynamic (DRAM) vs. Static (SRAM): DRAM is common in PCs; SRAM is faster but more expensive (used in cache).

Uses: Running applications, OS operations, multitasking.

CPU (Central Processing Unit) - Complete Guide

The Central Processing Unit (CPU) is the primary computational and control unit of a computer, often
called the "brain" of the system. It executes instructions from software and hardware, performing
calculations, logic operations, and data management to run applications and the operating system.

1. CPU Architecture & Components

A modern CPU consists of several key components:


A. Control Unit (CU)

• Manages instruction execution flow.
• Fetches, decodes, and schedules operations.
• Coordinates communication between the CPU, RAM, and I/O devices.

B. Arithmetic Logic Unit (ALU)

• Performs mathematical (addition, subtraction) and logical (AND, OR, NOT) operations.
• Handles integer calculations (floating-point operations are often offloaded to the FPU).

C. Registers

• Ultra-fast memory locations (small in size) that hold data being processed.
• Examples:
o Program Counter (PC): Tracks the next instruction.
o Accumulator (ACC): Stores intermediate results.
o Instruction Register (IR): Holds the current instruction.

D. Cache Memory

• Small, high-speed memory inside the CPU to reduce RAM access delays.
o L1 Cache (32-64KB per core): Fastest but smallest.
o L2 Cache (256KB-1MB per core): Balances speed and capacity.
o L3 Cache (16-64MB, shared): Slower but larger, shared across cores.
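
One way to quantify the benefit of cache is the average memory access time (AMAT) formula: AMAT = hit time + miss rate × miss penalty. Below is a minimal sketch applying it, with illustrative latencies rather than figures for any particular CPU:

    # Average memory access time (AMAT) with a single cache level.
    # Latencies in nanoseconds; the figures are illustrative, not measured.
    def amat(hit_time_ns, miss_rate, miss_penalty_ns):
        return hit_time_ns + miss_rate * miss_penalty_ns

    # Assume a ~1 ns L1 hit, a 95% hit rate, and ~100 ns to reach RAM.
    print(amat(1.0, 0.05, 100.0))  # 6.0 ns on average
    # Without the cache, every access would pay the full ~100 ns.

Even a modest hit rate cuts effective memory latency dramatically, which is why CPUs dedicate so much die area to cache.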

E. Clock & Clock Speed

• The CPU operates in cycles (measured in Hertz, Hz).
• Base Clock: Minimum guaranteed speed (e.g., 3.0 GHz).
• Boost Clock: Maximum speed under load (e.g., 5.0 GHz).

2. How a CPU Works

1. Fetch: Retrieves the next instruction from RAM.
2. Decode: Interprets the instruction.
3. Execute: Performs the operation (calculation, data move, etc.).
4. Store: Writes results back to RAM or registers.

This cycle is known as the Fetch-Decode-Execute (FDE) cycle.
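
As a rough illustration, the toy loop below steps an invented three-instruction program through the cycle; the instruction set is made up for this sketch and does not correspond to any real CPU:

    # Toy fetch-decode-execute loop over an invented instruction set.
    memory = [("LOAD", 5), ("ADD", 3), ("STORE", 0)]  # program held in "RAM"
    registers = {"PC": 0, "ACC": 0}  # program counter and accumulator
    data = [0]  # a single data cell

    while registers["PC"] < len(memory):
        op, arg = memory[registers["PC"]]  # Fetch the next instruction
        registers["PC"] += 1
        if op == "LOAD":                   # Decode + Execute
            registers["ACC"] = arg
        elif op == "ADD":
            registers["ACC"] += arg
        elif op == "STORE":                # Store the result back
            data[arg] = registers["ACC"]

    print(data[0])  # 8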

3. CPU Performance Factors

Factor Impact Example

Cores More cores = better multitasking 4-core vs. 16-core CPU

Threads Hyper-Threading (Intel) / SMT (AMD) improves efficiency 4-core/8-thread CPU

Clock Speed Higher GHz = faster single-core performance 3.5 GHz vs. 5.0 GHz

Cache Size More cache = less RAM access = faster performance 16MB vs. 64MB L3 cache

IPC (Instructions Per Cycle) Efficiency of architecture AMD Zen 4 vs. Intel Raptor Lake

TDP (Thermal Design Power) Power consumption & cooling needs 65W vs. 125W CPUs
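
These factors combine in a common back-of-the-envelope estimate: peak instructions per second ≈ cores × clock × IPC. A minimal sketch with hypothetical numbers shows why core count can outweigh clock speed for parallel workloads:

    # Rough peak throughput: cores x clock (Hz) x instructions per cycle.
    # All figures below are hypothetical, for comparison only.
    def peak_ips(cores, clock_ghz, ipc):
        return cores * clock_ghz * 1e9 * ipc

    many_cores = peak_ips(8, 3.5, 4)   # 8 cores at 3.5 GHz
    high_clock = peak_ips(4, 5.0, 4)   # 4 cores at 5.0 GHz
    print(f"{many_cores:.2e} vs {high_clock:.2e}")  # 1.12e+11 vs 8.00e+10

Single-threaded work, by contrast, sees only clock × IPC, which is why both columns matter when comparing CPUs.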

4. Types of CPUs

A. By Core Count

• Single-Core: Old CPUs (Pentium 4).
• Dual-Core: Basic computing (Intel Core i3).
• Quad-Core: Mid-range (Intel Core i5, Ryzen 5).
• Hexa/Octa-Core: High-end (Core i7, Ryzen 7).
• 12+ Cores: Workstations/servers (Ryzen 9, Xeon).

B. By Use Case

Type Example Best For

Budget CPUs Intel Core i3, AMD Ryzen 3 Office tasks, web browsing

Gaming CPUs Intel Core i5/i7, Ryzen 5/7 High FPS gaming

Workstation CPUs AMD Threadripper, Intel Xeon Video editing, 3D rendering

Mobile CPUs Apple M2, Intel Core U-series Laptops, low power use

Server CPUs AMD EPYC, Intel Xeon Scalable Data centers, cloud computing

5. CPU Manufacturers

A. Intel

• Mainstream: Core i3, i5, i7, i9
• High-End Desktop (HEDT): Core X-series
• Server: Xeon

B. AMD

• Mainstream: Ryzen 3, 5, 7, 9
• HEDT: Threadripper
• Server: EPYC

C. ARM (Apple, Qualcomm, Samsung)

• Apple M1/M2: Used in MacBooks/iPads.
• Snapdragon: Android smartphones.

6. CPU vs. GPU vs. APU

Component Purpose Best For

CPU General computing, logic tasks OS, apps, game AI

GPU Parallel processing, graphics Gaming, AI, video rendering

APU CPU + GPU in one chip Budget PCs, laptops

GPU (Graphics Processing Unit) - Parallel Processing Powerhouse

The Graphics Processing Unit (GPU) is a specialized processor designed to accelerate graphics rendering
and parallel computations. Unlike CPUs (which handle general-purpose tasks), GPUs excel at performing
thousands of small calculations simultaneously, making them essential for gaming, AI, video editing, and
scientific computing.

Architectural Comparison:

• NVIDIA: CUDA cores, RT cores, Tensor cores
• AMD: Stream processors, Infinity Cache
• Intel Arc: Xe cores, Deep Link technology

VRAM Considerations:

• GDDR6 vs GDDR6X
• Bandwidth calculations (see the sketch after this list)
• Resolution requirements:
o 1080p: 8GB recommended
o 4K: 12GB+ ideal
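
The bandwidth calculation itself is standard: peak bandwidth (GB/s) = per-pin data rate (Gbps) × bus width (bits) ÷ 8. A minimal sketch using typical published GDDR6/GDDR6X configurations:

    # Peak VRAM bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8.
    def vram_bandwidth_gbs(data_rate_gbps, bus_width_bits):
        return data_rate_gbps * bus_width_bits / 8

    print(vram_bandwidth_gbs(14, 256))  # GDDR6, 14 Gbps, 256-bit -> 448.0 GB/s
    print(vram_bandwidth_gbs(21, 384))  # GDDR6X, 21 Gbps, 384-bit -> 1008.0 GB/s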

Professional vs Gaming GPUs:

• Quadro/Radeon Pro vs GeForce/Radeon RX
• Driver optimizations
• ECC memory in workstation cards

Storage Subsystem - Data Preservation

Technology Comparison Table:

Type Speed Durability Cost/GB Best Use Case

HDD 80-160MB/s 3-5 years $0.03 Cold storage

SATA SSD 500MB/s 5-10 years $0.08 General computing

NVMe SSD 3,500-7,000MB/s 5-10 years $0.12 High-performance

Optane 2,500MB/s 30+ DWPD $0.50 Enterprise caching
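
To make the speed column concrete, here is a quick calculation of how long a 100 GB sequential transfer takes at each interface's rough throughput (real-world times vary with workload):

    # Time to move 100 GB at each interface's approximate sequential speed.
    size_gb = 100
    for name, mb_per_s in [("HDD", 160), ("SATA SSD", 500), ("NVMe SSD", 7000)]:
        seconds = size_gb * 1000 / mb_per_s  # GB -> MB, divided by MB/s
        print(f"{name}: {seconds / 60:.1f} min")  # ~10.4, ~3.3, ~0.2 min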

Advanced Topics:

• RAID configurations (0, 1, 5, 10) - see the capacity sketch after this list
• TRIM and garbage collection
• Wear leveling algorithms
• ZFS/Btrfs filesystems
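
As a sketch of how the listed RAID levels trade capacity for redundancy, the helper below computes usable space for equal-size drives; it models capacity only and ignores controller overhead:

    # Usable capacity for common RAID levels with n equal drives.
    def raid_usable_tb(level, n, size_tb):
        if level == 0:    # striping: all capacity, no redundancy
            return n * size_tb
        if level == 1:    # mirroring: one drive's worth of capacity
            return size_tb
        if level == 5:    # single parity: needs >= 3 drives
            return (n - 1) * size_tb
        if level == 10:   # striped mirrors: needs an even n >= 4
            return (n // 2) * size_tb
        raise ValueError("unsupported RAID level")

    for lvl in (0, 1, 5, 10):
        print(f"RAID {lvl}: {raid_usable_tb(lvl, 4, 4)} TB from 4 x 4 TB drives")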

Motherboard - The Nervous System

Component Interconnections:

• PCIe Lanes: Gen 3/4/5 bandwidth differences
• Chipset Differences: Z790 vs B760 vs H610
• VRM Quality: Phase count and power staging

BIOS/UEFI Features:

• Secure Boot implementations
• Fan curve customization
• PCIe bifurcation support
• Memory training procedures

Power Delivery - The Circulatory System

Power Supply Specifications:

• 80 Plus efficiency levels explained
• Single-rail vs multi-rail designs
• Modular cabling benefits
• Power factor correction (PFC)

Load Calculations (see the sizing sketch below):

• GPU power spikes
• CPU turbo power requirements
• Peripheral power draw
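
A minimal sizing sketch for these load calculations: sum the nominal component draws, then add headroom for transient GPU spikes. The wattages are hypothetical examples, not measurements:

    # Naive PSU sizing: sum nominal draws, then add headroom for spikes.
    components_w = {"CPU (turbo)": 180, "GPU": 320, "Drives/fans/board": 80}
    nominal = sum(components_w.values())
    recommended = nominal * 1.4  # ~40% headroom for transients and efficiency
    print(f"Nominal load {nominal} W; suggest a >= {recommended:.0f} W PSU")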

Thermal Management - Preventing Overheating

Cooling Solutions:

• Air cooler performance metrics (CFM, static pressure)
• Liquid cooling:
o AIO vs custom loops
o Radiator efficiency
o Coolant types
• Phase-change and LN2 extreme cooling

Thermal Interface Materials:

• Thermal paste application methods
• Liquid metal considerations
• Graphite pad alternatives

Peripheral Ecosystem

Input Devices:

• Mechanical keyboard switch types
• Mouse sensor technologies (optical vs laser)
• Haptic feedback systems

Display Technologies:

• OLED burn-in mitigation
• Mini-LED backlighting
• Refresh rate vs response time

Firewall

In computing, a firewall is software or firmware that enforces a set of rules regarding which data packets are allowed to enter or leave a network. Firewalls are incorporated into a wide variety of networked devices to filter traffic and reduce the risk that malicious packets traveling over the public internet pose to the security of a private network. Firewalls may also be purchased as standalone software applications.

The term "firewall" is a metaphor that compares a type of physical barrier used to limit the damage a fire can cause with a virtual barrier that restricts damage from external or internal cyberattacks. When situated at the perimeter of a network, firewalls provide low-level network protection, as well as important logging and auditing functions.

While the two main types of firewalls are host-based and network-based, various other types can be found in different locations and serving different functions. A host-based firewall is installed on individual servers and monitors incoming and outgoing signals. A network-based firewall can be embedded in the cloud's infrastructure, or it can take the form of a virtual firewall service.

Types of firewalls

Other types of firewalls include packet-filtering firewalls, stateful inspection firewalls, proxy firewalls, and next-generation firewalls (NGFWs).

• A packet-filtering firewall examines packets in isolation and does not consider the packet's context (see the sketch after this list).

• A stateful inspection firewall analyzes network traffic to determine whether one packet is related to another.

• A proxy firewall inspects packets at the application layer of the Open Systems Interconnection (OSI) reference model.

• An NGFW employs a multilayered approach to integrate enterprise firewall capabilities with an intrusion prevention system (IPS) and application control.
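
As a sketch of the packet-filtering model from the first bullet, the stateless matcher below checks each packet against an ordered rule list in isolation, with a default-deny fallback; the rule fields are simplified for illustration:

    # Stateless packet filter: first matching rule wins, default deny.
    RULES = [
        {"proto": "tcp", "dst_port": 443, "action": "allow"},  # HTTPS in
        {"proto": "tcp", "dst_port": 22,  "action": "deny"},   # block SSH
    ]

    def filter_packet(packet):
        for rule in RULES:
            if (packet["proto"] == rule["proto"]
                    and packet["dst_port"] == rule["dst_port"]):
                return rule["action"]
        return "deny"  # no rule matched: default-deny policy

    print(filter_packet({"proto": "tcp", "dst_port": 443}))  # allow
    print(filter_packet({"proto": "udp", "dst_port": 53}))   # deny

A stateful inspection firewall would additionally track connection state (for example, allowing reply packets for connections initiated from inside), which this per-packet matcher cannot do.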

When organizations transitioned from mainframe computers and dumb clients to the client-server model, controlling access to the server became a priority. Before the first firewalls emerged from work done in the late 1980s, the primary form of network security was enforced through access control lists (ACLs) on routers. ACLs specified which Internet Protocol (IP) addresses were granted or denied access to the network.

Encryption

In computing, encryption is the method by which plaintext or any other type of data is converted from a
readable form to an encoded version that can only be decoded by another entity if they have access to a
decryption key. Encryption is one of the most important methods for providing data security, especially
for end-to-end protection of data transmitted across networks.

Encryption is widely used on the internet to protect user information being sent between a browser and
a server, including passwords, payment information and other personal information that should be
considered private. Organizations and individuals also commonly use encryption to protect sensitive
data stored on computers, servers and mobile devices like phones or tablets.

How encryption works

Unencrypted data, often referred to as plaintext, is encrypted using an encryption algorithm and an
encryption key. This process generates cipher text that can only be viewed in its original form if
decrypted with the correct key. Decryption is simply the inverse of encryption, following the same steps
but reversing the order in which the keys are applied. Today's most widely used encryption algorithms
fall into two categories: symmetric and asymmetric.
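
As a concrete symmetric example, the sketch below uses the Fernet recipe from the third-party Python cryptography package (installed with pip install cryptography); the same secret key both encrypts and decrypts:

    # Symmetric encryption and decryption with one shared key (Fernet).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # the decryption key; keep it secret
    f = Fernet(key)
    token = f.encrypt(b"card number 1234")  # plaintext -> ciphertext
    print(token)                            # unreadable without the key
    print(f.decrypt(token))                 # b'card number 1234'

In an asymmetric scheme, by contrast, a public key encrypts and a separate private key decrypts.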


Benefits of encryption

The primary purpose of encryption is to protect the confidentiality of digital data stored on computer
systems or transmitted via the internet or any other computer network. A number of organizations and
standards bodies either recommend or require sensitive data to be encrypted in order to prevent
unauthorized third parties or threat actors from accessing the data. For example, the Payment Card
Industry Data Security Standard requires merchants to encrypt customers' payment card data when it is
both stored at rest and transmitted across public networks.

Modern encryption algorithms also play a vital role in the security assurance of IT systems and
communications as they can provide not only confidentiality, but also the following key elements of
security:

• Authentication: the origin of a message can be verified.
• Integrity: proof that the contents of a message have not been changed since it was sent (illustrated in the sketch after this list).
• Nonrepudiation: the sender of a message cannot deny sending the message.
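
The authentication and integrity properties can be illustrated with a message authentication code from Python's standard library; the key and message below are placeholders, and note that a shared-key MAC alone does not provide nonrepudiation:

    # HMAC: a holder of the shared key can verify that a message is
    # unmodified (integrity) and came from a key holder (authentication).
    import hashlib
    import hmac

    key = b"shared-secret"  # placeholder key
    msg = b"transfer $100 to account 42"
    tag = hmac.new(key, msg, hashlib.sha256).hexdigest()

    # The receiver recomputes the tag; compare_digest resists timing attacks.
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(tag, expected))  # True; tampering -> False
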
Encryption is used to protect data stored on a system (encryption in place or encryption at rest); many
internet protocols define mechanisms for encrypting data moving from one system to another (data in
transit).

Cloud Computing

Cloud computing is a general term for the delivery of hosted services over the internet.

Cloud computing enables companies to consume a compute resource, such as a virtual machine (VM),
storage or an application, as a utility -- just like electricity -- rather than having to build and maintain
computing infrastructures in house. Whether you are running applications that share photos to millions
of mobile users or you’re supporting the critical operations of your business, a cloud services platform
provides rapid access to flexible and low-cost IT resources. With cloud computing, you don't need to
make large upfront investments in hardware and spend a lot of time on the heavy lifting of managing
that hardware. Instead, you can provision exactly the right type and size of computing resources you
need to power your newest bright idea or operate your IT department. You can access as many
resources as you need, almost instantly, and only pay for what you use.

Cloud computing provides a simple way to access servers, storage, databases and a broad set of
application services over the Internet. A cloud services platform such as Amazon Web Services owns and
maintains the network-connected hardware required for these application services, while you provision
and use what you need via a web application.

Six Advantages and Benefits of Cloud Computing

• Trade capital expense for variable expense
• Benefit from massive economies of scale
• Stop guessing capacity
• Increase speed and agility
• Stop spending money on running and maintaining data centers
• Go global in minutes

BIG DATA

Big data is a term that describes the large volume of data – both structured and unstructured – that
inundates a business on a day-to-day basis. But it’s not the amount of data that’s important. It’s what
organizations do with the data that matters. Big data can be analyzed for insights that lead to better
decisions and strategic business moves. While the term “big data” is relatively new, the act of gathering
and storing large amounts of information for eventual analysis is ages old. The concept gained
momentum in the early 2000s when industry analyst Doug Laney articulated the now-mainstream
definition of big data as the three Vs:

Volume. Organizations collect data from a variety of sources, including business transactions, social
media and information from sensor or machine-to-machine data. In the past, storing it would’ve been a
problem – but new technologies (such as Hadoop) have eased the burden.

Velocity. Data streams in at an unprecedented speed and must be dealt with in a timely manner. RFID tags, sensors and smart metering are driving the need to deal with torrents of data in near-real time.

Variety. Data comes in all types of formats – from structured, numeric data in traditional databases to unstructured text documents, email, video, audio, stock ticker data and financial transactions.

At SAS, we consider two additional dimensions when it comes to big data:

Variability. In addition to the increasing velocities and varieties of data, data flows can be highly
inconsistent with periodic peaks. Is something trending in social media? Daily, seasonal and event-
triggered peak data loads can be challenging to manage. Even more so with unstructured data.

Complexity. Today's data comes from multiple sources, which makes it difficult to link, match, cleanse
and transform data across systems. However, it’s necessary to connect and correlate relationships,
hierarchies and multiple data linkages or your data can quickly spiral out of control.
