Day 2 Intro To Servers

The document provides an introduction to server hardware, detailing its components, differences from personal computers, and types of servers. It covers key aspects like processors, memory, storage, and server architecture, emphasizing reliability, scalability, and performance. Additionally, it discusses server operating systems, administration, maintenance, and backup procedures essential for effective server management.

Introduction to Servers

Trainer: Rakesh Sinnya


Cypher Technology P. Ltd.
[email protected]

Co-Trainer: Raju Shrestha


Cypher Technology P. Ltd.
[email protected]
What is Server Hardware?

Server hardware refers to the physical components that make up a server. It is specifically designed to manage, store, process, and deliver data in a reliable, scalable, and efficient manner.
Difference Between PC and Server

PC: Designed for individual use, handling tasks like web browsing, document editing, gaming, and multimedia.

Server: Designed to provide services to multiple users or devices, such as hosting websites, managing databases, running applications, or handling network resources.
Hardware Differences

Processor

PC: Typically uses consumer-grade processors (e.g., Intel Core i7, AMD Ryzen), optimized for performance in single-user scenarios.

Server: Uses server-grade processors (e.g., Intel Xeon, AMD EPYC), designed for multitasking, reliability, and 24/7 operation.
Memory (RAM):

PC: Limited memory capacity, usually between 8GB and 64GB.

Server: Higher memory capacity, often supports ECC (Error-Correcting Code) RAM to ensure data integrity, with capacities ranging from 32GB to several TBs.
Storage:

PC: Focuses on speed and capacity for individual use, with SSDs and HDDs.

Server: Designed for redundancy and scalability, often using RAID configurations and hot-swappable drives.
Form Factor:

PC: Comes in a variety of sizes, from compact desktops to laptops.

Server: Usually rack-mounted, blade, or tower form, optimized for space efficiency and airflow in data centers.
Power and Cooling:

PC: Uses standard power supplies and cooling systems.

Server: Requires redundant power supplies and advanced cooling systems to handle 24/7 operation.
Software and Operating Systems

PC: Runs consumer-focused operating systems like Windows, macOS, or Linux desktop distributions. Applications are user-centric.

Server: Runs server-specific operating systems, such as Windows Server or Linux server distributions (e.g., Ubuntu Server, CentOS), designed for stability, scalability, and security.
Network and Connectivity

PC: Usually equipped with standard networking capabilities for personal internet access.

Server: Has advanced networking capabilities (multiple NICs, high-speed connections) to manage large data traffic and connect with other servers or devices.
Reliability and Availability

PC: Not designed for continuous operation; downtime is acceptable.

Server: Built for high reliability with features like redundancy (RAID, power supplies), hot-swappable components, and failover mechanisms.
Some Popular Server Manufacturers

• IBM (x86 server line now Lenovo)

• HP

• Dell

• Supermicro

• Huawei

• Cisco
X86 Server Architecture
Server Top View
Server Motherboard
Cooling Fans
Memory Slots
Processor Placement
Riser Card
Server Power Supply
Key Components of Server Hardware:

Processor (CPU):
• Multi-core processors designed for high-performance computing.
• Examples: Intel Xeon, AMD EPYC.
• Supports virtualization and large-scale parallel tasks.

Memory (RAM):
• High-capacity RAM (ECC—Error-Correcting Code memory) to ensure data integrity.
• Scalable for intensive workloads like database operations and virtualization.

Storage:
• Hard Disk Drives (HDDs): Cost-effective, high-capacity storage.
• Solid-State Drives (SSDs): High-speed data access with lower latency.
• NVMe Drives: Superior performance for demanding applications.
Key Components of Server Hardware:

Management Interfaces:
Integrated tools like IPMI (Intelligent Platform Management Interface) or vendor-specific
solutions (e.g., Dell iDRAC, HPE iLO) for remote monitoring and maintenance.

Network Interface Card (NIC):
High-speed NICs (e.g., 1GbE, 10GbE, or 40GbE) for efficient data transfer.

Motherboard:
Specialized for server environments with features like multiple CPU
sockets, expanded memory slots, and RAID controller integration.
Key Components of Server Hardware:

Cooling Systems:
Designed for continuous operation, including fans, liquid
cooling, or airflow management for rack systems.

RAID Controllers:
Manages multiple storage drives in various RAID
configurations for performance and redundancy.

Power Supply Unit (PSU):
Redundant and efficient power supplies to ensure uptime.
Server Types
• Rack Servers

• Tower Servers

• Blade Servers
Blade Servers

A blade server is a highly compact, modular server designed to fit into a chassis, which can house multiple blade servers in a single unit. Each blade is a server that includes processors, memory, and network connections but relies on the chassis for power, cooling, and management.
Use Case:
• Data Centers: To reduce space and improve energy efficiency
while handling large workloads.
• High-Performance Computing (HPC): For applications that
need significant computing power and need to manage many
servers with minimal overhead.
• Virtualization: Blade servers are often used in virtualized
environments where multiple virtual machines are hosted on
each server.
Blade Servers
Blade Servers Interconnectivity
Blade Servers Monitoring
Introduction
• Servers are specialized computers designed for specific purposes.
• Unlike regular desktop computers, servers are optimized for high performance, reliability, and
scalability to handle multiple users and heavy workloads simultaneously.
• Built for durability, longevity, and extended operational periods compared to desktop PCs.
Functions of a Server
Primary Role:
• Provides services and functionalities to other computers over a network.
• Connected computers are referred to as "clients" in the client-server model.
General Use:
• Any computer can act as a server using its operating system features.
• However, regular computers are limited by hardware and OS capacity, making them unsuitable
for handling large numbers of connections.
Server Hardware:
• Uses components similar to desktop PCs.
• Designed for durability and continuous operation in demanding conditions.
Server Architecture and Components
Definition:
The structural design and conceptual framework defining how a server is built,
deployed, and managed.
Core Components:
• Hardware: CPU, RAM, storage (HDD/SSD), NICs (Network Interface Cards), and power supply.
• Software: Operating systems (e.g., Linux, Windows Server), middleware, virtualization tools, and management
software.
• Network: Interfaces for communication, firewalls, and integration with data centers or cloud environments.
Design Principles:
• Scalability: Ensures support for increasing workloads.
• Reliability: Includes failover mechanisms and redundancy.
• Security: Implementation of firewalls, access controls, and encryption.
• Performance Optimization: Resource allocation, caching, and load balancing.
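As a toy illustration of the load-balancing principle above, the sketch below rotates incoming requests across a pool of back-end servers round-robin style; the host names are placeholders, and real deployments would use a dedicated load balancer (e.g., HAProxy or NGINX) with health checks.

```python
# Toy round-robin load balancing across placeholder back-end hosts.
from itertools import cycle

backends = cycle(["app-server-1", "app-server-2", "app-server-3"])

for request_id in range(6):
    # Each request is handed to the next server in the rotation.
    print(f"request {request_id} -> {next(backends)}")
```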
Deployment Types:
• On-Premises
• Virtualized (VM-based)
• Cloud-Based (e.g., IaaS)
Components of the Server
CPU:
Processors are the core components of a computer, responsible for executing
calculations, tasks, and functions. Server processors differ from PC processors as
they are built to manage more demanding and complex workloads.
Key CPU terms
Processing cores are the physical units of compute within a processor.
Threads are virtual lines of code that the processor core executes; most cores can
process up to two threads.
Frequency, or clock speed, measures how fast a core can process a thread.
Cache refers to the processor’s dedicated, onboard memory that helps the
processor execute workloads.
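To connect these terms to a running system, here is a minimal sketch (assuming a Linux x86 host and Python) that reports the logical CPU count and the frequency and cache figures exposed in /proc/cpuinfo; the field names are Linux-specific and may differ on other platforms.

```python
# Minimal sketch: summarize CPU details from the standard library and
# /proc/cpuinfo (Linux x86 assumed; fields may differ elsewhere).
import os

def cpu_summary():
    info = {}
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if ":" in line:
                    key, _, value = line.partition(":")
                    info.setdefault(key.strip(), value.strip())  # keep first CPU's values
    except FileNotFoundError:
        pass  # not a Linux system
    return {
        "logical_cpus": os.cpu_count(),        # cores x threads per core
        "model": info.get("model name"),
        "frequency_mhz": info.get("cpu MHz"),  # current clock of core 0
        "cache": info.get("cache size"),
    }

if __name__ == "__main__":
    print(cpu_summary())
```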
Single-Socket vs. Dual-Socket CPUs
In single-socket CPUs, a single processor handles all tasks, with multiple cores
managing the workload.
In dual-socket CPUs, two processors work in parallel, allowing for better
workload distribution and higher performance through parallel processing,
particularly for multi-threaded applications. This is common in high-performance
computing tasks like virtualization.
Memory:

Servers use ECC RAM (Error-Correcting Code RAM): if a memory error occurs, the module itself detects and corrects it, making the server more reliable. Server RAM also comes in much larger capacities (e.g., up to 128GB per module).
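The error-correcting idea behind ECC can be illustrated with a toy Hamming(7,4) code: 4 data bits are stored alongside 3 parity bits, and any single flipped bit can be located and corrected. This is only a sketch of the principle; real ECC DIMMs implement wider SECDED-style codes in the memory controller, not in software.

```python
# Toy Hamming(7,4) code: encode 4 data bits with 3 parity bits, then locate
# and fix a single bit flip. Illustrative only; real ECC runs in hardware.
def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]        # codeword positions 1..7

def correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]             # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]             # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]             # checks positions 4,5,6,7
    error_pos = s1 + 2 * s2 + 4 * s3           # 0 means no error detected
    if error_pos:
        c[error_pos - 1] ^= 1                  # flip the faulty bit back
    return [c[2], c[4], c[5], c[6]]            # recover the data bits

codeword = encode(1, 0, 1, 1)
codeword[4] ^= 1                               # simulate a single-bit memory error
print(correct(codeword))                       # -> [1, 0, 1, 1], error corrected
```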
Memory Placement in Single-Socket Servers
Single Socket:
• A single-socket server has one CPU, and its memory controller directly accesses the
installed RAM.
• Optimal memory placement: Populate memory evenly across available channels to
maximize bandwidth and reduce latency.
• Example: If the CPU supports four memory channels, install identical memory modules in
all four channels for balanced performance.
Dual Socket:
• A dual-socket server has two CPUs, each with its own memory controller.
• Memory for each CPU is local to it, but can also be accessed by the other CPU (referred to as
NUMA - Non-Uniform Memory Access).
• Optimal memory placement: Distribute memory evenly between the CPUs to ensure
balanced access and avoid bottlenecks.
• Example: If each CPU supports four memory channels, install an equal number of modules
for each CPU.
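On a Linux host, the NUMA layout described above can be inspected under /sys/devices/system/node. The sketch below (Linux-only assumption; these paths do not exist on other operating systems) lists each NUMA node and the CPUs attached to it, which on a dual-socket server roughly corresponds to one node per socket.

```python
# Minimal sketch: list NUMA nodes and their CPU ranges from sysfs (Linux only).
import glob
import os

def numa_topology():
    nodes = {}
    for path in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
        try:
            with open(os.path.join(path, "cpulist")) as f:
                nodes[os.path.basename(path)] = f.read().strip()  # e.g. "0-15,32-47"
        except OSError:
            nodes[os.path.basename(path)] = "unknown"
    return nodes

if __name__ == "__main__":
    for node, cpus in numa_topology().items():
        print(f"{node}: CPUs {cpus}")
```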
Storage:
Server hard disks are more durable and resistant to wear, tear, and vibrations
compared to cheaper desktop PC drives. Servers often use multiple hard drives
connected via RAID configurations, which automatically distribute data across the
drives. If a drive fails, RAID software rebuilds the data onto a new drive, ensuring data
integrity.
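To make the redundancy trade-offs concrete, the sketch below estimates usable capacity and drive-failure tolerance for common RAID levels, assuming identical drives and ignoring controller overhead and hot spares; it is a simplification, not a sizing tool.

```python
# Rough RAID sizing sketch: usable capacity (TB) and how many drive failures
# each level tolerates, assuming identical drives and no hot spares.
def raid_summary(level, drives, size_tb):
    if level == "RAID0":
        return drives * size_tb, 0                  # striping only, no redundancy
    if level == "RAID1":
        return size_tb, drives - 1                  # full mirror set
    if level == "RAID5":
        return (drives - 1) * size_tb, 1            # one drive's worth of parity
    if level == "RAID6":
        return (drives - 2) * size_tb, 2            # two drives' worth of parity
    if level == "RAID10":
        return drives // 2 * size_tb, 1             # at least one (per mirror pair)
    raise ValueError(f"unsupported RAID level: {level}")

for level in ("RAID0", "RAID1", "RAID5", "RAID6", "RAID10"):
    usable, tolerated = raid_summary(level, drives=4, size_tb=4.0)
    print(f"{level:<6} 4 x 4TB -> {usable:.0f} TB usable, survives {tolerated} failure(s)")
```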
Understanding Hard Drive Interfaces: Types and Their Functions
SATA: A common, affordable connection for most regular hard drives. It works well
for everyday storage needs but isn’t the fastest.
SAS: A faster and more reliable connection used in high-performance servers. It’s
built for heavy-duty tasks and can handle multiple drives at once.
PCIe: The fastest connection, mostly used for SSDs (solid-state drives). It’s ideal for
systems that need super-fast data access, like gaming or large databases.
Fibre Channel: A very fast connection used in big data centers where a lot of data
needs to move quickly between servers and storage devices.
SATA (Serial ATA)
  Speed: Up to 6 Gbps
  Protocol: AHCI
  Description: A common and affordable connection for regular hard drives. It uses the AHCI protocol, which was designed for spinning disks and does not optimize for the high-speed requirements of flash storage. It is slower and has higher latency compared to newer technologies.
  Key Points: Affordable and reliable but slower for modern storage needs; primarily used in consumer systems.

SAS (Serial Attached SCSI)
  Speed: Up to 12 Gbps
  Protocol: SCSI
  Description: A faster and more reliable connection for enterprise environments, using the SCSI protocol. It supports multi-drive configurations, provides redundancy, and is more robust than SATA, but still slower than PCIe-based connections.
  Key Points: Enterprise-grade with support for multiple drives, offering better reliability than SATA but still limited in speed.

PCIe (Peripheral Component Interconnect Express)
  Speed: Varies; up to 32 Gbps (PCIe Gen 4)
  Protocol: PCIe
  Description: A high-speed interface that connects components such as SSDs, graphics cards, and network cards to the CPU and memory. It enables faster data transfer by directly linking devices to the processor via PCIe lanes, providing high bandwidth and low latency.
  Key Points: High-speed data transfer suitable for modern systems, supporting a wide range of components like SSDs, GPUs, and network devices.

NVMe (Non-Volatile Memory Express) over PCIe
  Speed: Varies; up to 64 Gbps (PCIe Gen 5)
  Protocol: NVMe over the PCIe interface
  Description: A protocol designed for flash-based storage to exploit the full potential of PCIe lanes. NVMe uses direct communication between the storage device and CPU, reducing latency and increasing throughput by allowing multiple I/O queues. It leverages the PCIe interface for high bandwidth (up to 64 Gbps with PCIe Gen 5) and low-latency data access, making it ideal for data-intensive tasks.
  Key Points: Fastest storage technology with low latency and high throughput, ideal for real-time applications and data-heavy environments.

Fibre Channel
  Speed: Up to 128 Gbps
  Protocol: Fibre Channel
  Description: Used in large data centers, providing high-speed communication between servers and storage devices. It supports high-throughput, low-latency data transfer, making it suitable for high-performance environments. It operates on a separate network from standard Ethernet, designed for storage area networks (SANs).
  Key Points: Optimized for data centers and storage area networks (SANs), supporting extremely fast data transfer speeds in high-performance environments.
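The headline speeds above are line rates in gigabits per second; rough usable throughput in gigabytes per second also depends on the link's encoding. The sketch below converts a few of them, using the commonly cited 8b/10b encoding for SATA III and 128b/130b for PCIe Gen 4 (per lane); real-world figures are lower still because of protocol overhead.

```python
# Back-of-the-envelope conversion from interface line rate (Gbit/s) to rough
# usable throughput (GB/s), accounting only for line encoding overhead.
ENCODING_EFFICIENCY = {
    "8b/10b": 8 / 10,        # SATA III
    "128b/130b": 128 / 130,  # PCIe Gen 3/4
}

def usable_gb_per_s(line_rate_gbps, encoding):
    return line_rate_gbps * ENCODING_EFFICIENCY[encoding] / 8  # bits -> bytes

print(f"SATA III (6 Gbps):       ~{usable_gb_per_s(6, '8b/10b'):.2f} GB/s")
print(f"PCIe Gen 4, one lane:    ~{usable_gb_per_s(16, '128b/130b'):.2f} GB/s")
print(f"PCIe Gen 4, x4 NVMe SSD: ~{4 * usable_gb_per_s(16, '128b/130b'):.2f} GB/s")
```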
Server Types
Rack Server
  Definition: A standardized server housed in a rackmount chassis, designed for vertical stacking in a server rack. Typically used in data centers for efficient space utilization.
  Use Case: Suited for medium to large enterprises with on-premises server rooms or data centers, where space is limited but high processing power and scalability are required for resource-intensive applications.

Blade Server
  Definition: A compact server housed in a modular chassis, or "blade bay," where multiple servers (blades) are installed in a horizontal orientation to save space.
  Use Case: Ideal for medium to large businesses with high-density server environments where space efficiency is crucial, but multiple servers are needed to handle compute-heavy tasks and ensure redundancy.

Tower Server
  Definition: A standalone, vertically oriented server resembling a traditional desktop PC, but with enhanced hardware for server-specific functionality.
  Use Case: Typically used by small businesses or home offices requiring a single server for file storage, network resource management, or basic application hosting, with minimal need for future expansion or scalability.
Server OS
Operating systems suitable for server environments
Ubuntu Server:
• Open-source and widely used for web hosting and cloud services.
• Known for its simplicity, security, and large community support.
Oracle Linux:
• Enterprise-grade Linux distribution optimized for database applications.
• Provides stability, performance, and integration with Oracle's software and hardware products.
RHEL (Red Hat Enterprise Linux):
• Renowned for its stability, security, and long-term support.
• Commonly used in enterprise environments requiring reliable, high-performance systems.
Rocky Linux:
• A community-driven, open-source alternative to CentOS.
• Offers enterprise-level performance and stability, especially after CentOS’s shift to CentOS Stream.
Windows Server:
• A server-specific version of Microsoft Windows offering familiar GUI and integration with Active Directory,
file services, and other Microsoft applications.
• Ideal for organizations already using Windows-based infrastructure.
Server Administration and Configuration
Server Installation & Setup:
OS Installation:
• Install the server’s operating system (e.g., Ubuntu Server, Windows Server, CentOS).
• Configure system-level settings, including:
• Time zones.
• Network interfaces.
• Disk partitions.
Note: In a virtualized environment, attach a separate virtual disk for each volume instead of partitioning a single disk; this makes it easier to expand storage as the volumes reach capacity.
Role Assignment:
• Assign users to predefined roles (e.g., Admin, Manager, Analyst, Viewer).
• Each role corresponds to a specific set of permissions and access levels:
• Admin: Full access and control over the system and services.
• Manager: Access to manage certain services and settings, but not full control.
• Analyst: Read-only access for data analysis and reports.
• Viewer: Limited access, typically for viewing specific data or logs.
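A minimal sketch of how such a role-to-permission mapping might be expressed in code is shown below; the role names follow the slide, while the permission strings and the is_allowed helper are illustrative assumptions, not any particular product's API.

```python
# Minimal role-based access control sketch; permission names are illustrative.
ROLE_PERMISSIONS = {
    "Admin":   {"read", "write", "configure", "manage_users"},
    "Manager": {"read", "write", "configure"},
    "Analyst": {"read"},
    "Viewer":  {"read_limited"},
}

def is_allowed(role, action):
    """Return True if the given role includes the requested permission."""
    return action in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("Analyst", "write"))    # False: Analyst is read-only
print(is_allowed("Admin", "configure"))  # True: Admin has full control
```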
Server Maintenance
Patch Management
• Plan regular update schedules to avoid long downtimes and test
important systems to ensure updates don’t cause problems.
• Monitor CVEs (Common Vulnerabilities and Exposures) and
prioritize patches for high-risk vulnerabilities.
CVE databases: NIST NVD, Vulners
• Apply security patches and updates for operating systems,
firmware, and applications.
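One way to act on CVE monitoring is to order pending patches by CVSS score so the highest-risk items are applied first. The sketch below shows the idea with invented placeholder CVE IDs and scores; it does not query any real vulnerability feed.

```python
# Illustrative sketch: sort pending patches by CVSS score, highest risk first.
# The CVE IDs, scores, and components below are invented placeholders.
pending_patches = [
    {"cve": "CVE-0000-0001", "cvss": 9.8, "component": "openssl"},
    {"cve": "CVE-0000-0002", "cvss": 5.3, "component": "cups"},
    {"cve": "CVE-0000-0003", "cvss": 7.5, "component": "kernel"},
]

for patch in sorted(pending_patches, key=lambda p: p["cvss"], reverse=True):
    print(f"{patch['cve']}  CVSS {patch['cvss']:>4}  ({patch['component']})")
```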
Backup and Restore Procedures
• Configure incremental, differential, or full backups using software such as Veeam, Nakivo,
or native tools like Windows Backup or rsync.
• When configuring backups, it's essential to factor in Recovery Time Objective (RTO) and
Recovery Point Objective (RPO) to ensure your data recovery strategy aligns with business
needs.
RTO (Recovery Time Objective): This is the maximum amount of time your system or
data can be offline after an incident before it impacts business operations. Choose
backup tools and configurations that support fast recovery to meet your RTO
requirements.
RPO (Recovery Point Objective): This defines the maximum acceptable amount of
data loss measured in time (e.g., 4 hours of data). Configure backup frequency (e.g.,
hourly, daily) to ensure the data loss stays within the acceptable limit.
• Test disaster recovery (DR) plans regularly to validate the reliability and efficiency of the
backup system.
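As a small worked example of the RPO idea, the sketch below checks whether the gap between consecutive backup runs ever exceeds a 4-hour RPO; the timestamps are made-up sample data.

```python
# Hedged sketch: flag gaps between backups that would violate a 4-hour RPO.
# Timestamps are made-up sample data.
from datetime import datetime, timedelta

RPO = timedelta(hours=4)

backup_times = [
    datetime(2024, 1, 1, 0, 0),
    datetime(2024, 1, 1, 4, 0),
    datetime(2024, 1, 1, 10, 0),   # 6-hour gap -> exceeds the 4-hour RPO
]

for earlier, later in zip(backup_times, backup_times[1:]):
    gap = later - earlier
    status = "OK" if gap <= RPO else "RPO VIOLATION"
    print(f"{earlier:%H:%M} -> {later:%H:%M}: gap {gap}, {status}")
```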
Server Monitoring and Logging

Remote Management Tools
• Tools like IPMI (Intelligent Platform Management Interface), Integrated Dell Remote Access Controller (iDRAC), or iLO (Integrated Lights-Out) allow administrators to manage the server remotely, even when it is turned off or the OS is unresponsive.
Monitoring Software
• Monitoring tools such as Nagios and Zabbix track server performance (CPU load, memory usage, etc.) and alert administrators to potential issues.
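Full monitoring suites like Nagios and Zabbix collect metrics through agents and dashboards; the standard-library sketch below (Unix assumed; the thresholds are arbitrary examples) shows the underlying idea of polling a couple of metrics and alerting when a threshold is crossed.

```python
# Minimal monitoring loop (Unix assumed): alert when the 1-minute load average
# exceeds the core count or the root filesystem passes 90% full.
import os
import shutil
import time

LOAD_THRESHOLD = os.cpu_count() or 1   # sustained load above core count = busy
DISK_THRESHOLD = 0.90                  # alert at 90% usage

def check_once(path="/"):
    load_1min, _, _ = os.getloadavg()
    usage = shutil.disk_usage(path)
    used_ratio = usage.used / usage.total
    if load_1min > LOAD_THRESHOLD:
        print(f"ALERT: 1-min load {load_1min:.2f} exceeds {LOAD_THRESHOLD}")
    if used_ratio > DISK_THRESHOLD:
        print(f"ALERT: {path} is {used_ratio:.0%} full")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(60)                 # poll once a minute
```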
Performance Tuning:
What is Performance Tuning?
• The process of optimizing server performance.
• Identifies bottlenecks and resolves inefficiencies.
• Enhances speed, reliability, and resource utilization.
Key Areas of Performance Tuning:
Hardware Tuning:
CPU Allocation: Tuning CPU allocation helps ensure that an application has enough processing power to handle its workload efficiently. Adjustments might include:
• CPU pinning: Assigning specific CPU cores to certain applications to optimize performance (a minimal pinning sketch follows this list).
• Hyperthreading: Making use of hyperthreading for workloads that benefit from simultaneous multithreading (e.g., multi-threaded applications).

Storage Optimization: Choosing the appropriate storage medium and optimizing its configuration:
• RAID Configuration: Using RAID levels such as RAID 1, RAID 5, or RAID 10 to balance performance and redundancy.
• Solid-State Drives (SSD): Replacing hard drives with SSDs for faster data access speeds.
• Storage Caching: Implementing storage cache mechanisms to reduce read/write latency.
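As referenced above, here is a minimal CPU-pinning sketch (Linux only; the core IDs are arbitrary examples). It restricts the current process to two cores using the standard library, which is the same effect the taskset command achieves from the shell.

```python
# Minimal CPU-pinning sketch (Linux only): restrict this process to cores 0-1.
import os

os.sched_setaffinity(0, {0, 1})        # pid 0 = the calling process
print("Pinned to cores:", sorted(os.sched_getaffinity(0)))
```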

Operating System Tuning:
• Service Management: Disabling unnecessary services and background processes to free up system resources and reduce load.
• Systemd or Init Services: Disable services that are not needed for the specific workload (e.g., printing services, unused network services).
Network Tuning:
EtherChannel Configuration: Aggregating multiple physical network links into a single logical link using EtherChannel or Link Aggregation to
increase bandwidth, provide redundancy, and improve fault tolerance.
• Load Balancing: Distributing network traffic across the available links to prevent bottlenecks.
• Fault Tolerance: Ensuring that if one link fails, traffic will automatically reroute over the remaining links without disruption.
Tools for Performance Tuning:
1. System Monitoring:
• top: A command-line utility that provides real-time system information, including CPU and memory usage, processes, and
system load.
• htop: An enhanced version of top, providing a more user-friendly and interactive interface for monitoring system processes
and resource usage.
• Task Manager (Windows): A graphical utility in Windows for monitoring running applications, CPU usage, memory usage,
and disk activity.

2. Network Analysis:
• Wireshark: A network protocol analyzer that helps capture and analyze packets in real-time, making it ideal for diagnosing
network issues and performance bottlenecks.
• iperf: A network performance testing tool used to measure the bandwidth, latency, and packet loss between two devices
over a network.

3. Storage Benchmarking:
• IOmeter: A benchmarking tool used to measure the performance of storage subsystems and identify I/O bottlenecks.
THANK YOU
