Infrastructure Building Blocks: Compute

Exploring Computing Concepts and Technologies
A Comprehensive Journey Through the Evolution and Application of Computing
Table of contents

Introduction to Computing Concepts 01


Key Entities in the Field of Computing 02
Importance of Computational Thinking 03
Evolution of Computer Hardware 04
The ENIAC and Transistor-Based Computers 05
Microprocessors and Personal Computers 06
Types of Computer Housing 07
Blade Enclosures and Blade Servers 08
Challenges in Using Newer Server Blades 09
Understanding CPU Instruction Sets 10
Evolution of CPU Word Sizes 11
AMD vs. Intel in CPU Market 12
ARM Processors and Oracle SPARC 13
Table of contents

SPARC and IBM POWER Processors 14


Evolution of Computer Memory Technology 15
Components and Functions of Computer Memory 16
Evolution of Serial Communication Interfaces 17
Modern Laptop Connectors and Cables 18
Differences Between PCI and PCIe Bus Architectures 19
Types of PCI and PCIe Lanes 20
Compute Virtualization Concept 21
Virtualization Architecture Overview 22
VMware's Leadership in x86 Virtualization 23
Software Defined Compute (SDC) 24
Hypervisor Management of Virtual Machines 25
Drawbacks of Virtualization in IT Infrastructure 26
Table of contents

Emulation and Logical Partitions 27


Logical Partitions and Hypervisors 28
Virtualization Techniques and Hypervisor Types 29
Memory Management in Virtual Machines 30
Container Technology Overview 31
Implementation of Containers in Development 32
Container Security Implications 33
Repository and Container Orchestration 34
Mainframes in High-Volume Computing 35
Mainframe Architecture Overview 36
Processing Units in Mainframe Systems 37
Control Units and Mainframe Virtualization 38
Midrange Systems Overview 39
Table of contents

Evolution of Minicomputers 40
Midrange System Architecture 41
UMA Architecture in Midrange Systems 42
NUMA Architecture in Midrange Systems 43
Midrange Virtualization Technologies 44
x86 Servers Architecture Evolution 45
Virtualization on the x86 Platform 46
Performance of Computers and Moore's Law 47
CPU Execution Process and Clock Speed 48
CPU Caching and Memory Organization 49
CPU Pipelines and Instruction Execution 50
Prefetching and Branch Prediction 51
Superscalar CPUs and Parallel Execution 52
Table of contents

Limitations of CPU Clock Speeds 53


Multi-Core CPUs and Heat Generation 54
Impact of Moore's Law on CPU Cores 55
Virtualization Impact on CPU Usage 56
Physical and Virtual Security Measures 57
Minimizing Hypervisor Security Risks 58
Introduction to Computing Concepts

Definition of Compute: Refers to computers in datacenters, physical or virtual machines.
Components of Physical Computers: Power supplies, CPUs, BIOS, memory, network connectivity.
Groups of Compute Systems: Mainframes, midrange systems, x86 servers.
History of Computers: Evolution from manual calculations to programmable computers.
British Colossus Computer: The world's first programmable computer, created during World War II.

1 / Exploring Computing Concepts and Technologies


Key Entities in the Field of Computing

CPU (Central Processing Unit): definition and function; importance in computing.
RAM (Random Access Memory): definition and function; role in system performance.
Hyper-threading technology: definition and function; benefits in processing.
Intel CPUs: overview of Intel processors; key features and advancements.
Simultaneous Multithreading (SMT): definition and function; comparison with Hyper-threading.

2 / Exploring Computing Concepts and Technologies


Importance of Computational Thinking

Computational Thinking: A Fundamental Skill

Computational thinking is a fundamental skill in the digital age, emphasizing problem-solving through logical and algorithmic approaches.
It involves breaking down complex problems into smaller, manageable parts to develop step-by-step solutions.
Computational thinking fosters creativity, critical thinking, and the ability to analyze and solve problems efficiently.
By applying computational thinking, individuals can approach challenges in various fields, not just computer science.
Developing computational thinking skills is crucial for students and professionals to thrive in today's technology-driven world.

3 / Exploring Computing Concepts and Technologies


Evolution of Computer Hardware

Evolution from Manual to Programmable
Evolution from manual calculations to mechanical calculators and programmable computers.
Introduction of the British Colossus computer during World War II as the first programmable computer.

Microprocessor Revolution
Development of the first universal microprocessor, the Intel 4004, in 1971.
Exponential increase in CPU power since the introduction of microprocessors, following Moore's Law.

High-Performance Computing
Transition from single supercomputers to clustered computers in the 1990s for high-performance computing.
Utilization of specialized hardware like GPUs for specific calculations due to their massively parallel architecture.

4 / Exploring Computing Concepts and Technologies


The ENIAC and Transistor-Based Computers

ENIAC: A Pioneer in Computing
The ENIAC (Electronic Numerical Integrator and Computer) was one of the earliest general-purpose computers, developed during World War II.
ENIAC was massive, weighing about 30 tons and taking up a significant amount of space.
It used vacuum tubes for processing, making it prone to overheating and requiring frequent maintenance.

The Transition to Transistor-Based Computers
Transistor-based computers replaced vacuum tubes with transistors, leading to smaller, faster, and more reliable machines.
Transistors revolutionized computing by reducing power consumption and heat generation while increasing processing speed.
The transition from ENIAC to transistor-based computers marked a significant advancement in computer technology.

5 / Exploring Computing Concepts and Technologies


Microprocessors and Personal Computers

Microprocessors are the central processing units (CPUs) of personal computers, responsible for executing instructions and performing calculations.
Intel x86 processors, from the 8088 CPU to the 22-core E5-2699A Xeon processor, have been pivotal in shaping computer architectures.
AMD is a major competitor to Intel in the CPU market, offering alternative processor options for personal computers.
The evolution of CPU word sizes, from 4 bits to 64 bits, has influenced the capabilities of personal computers in handling data and memory.
Personal computers utilize microprocessors to run operating systems and applications efficiently, enhancing user experiences and productivity.

6 / Exploring Computing Concepts and Technologies


Types of Computer Housing

Stand-alone complete systems like pedestal or tower computers were the original computer housing.
Rack-mounted servers are complete machines requiring their own power, network, and SAN cables.
Blade servers lack their own power supply or expansion slots and are placed in blade enclosures for high server density.
Blade servers are connected to shared power supplies through a backplane, reducing costs by sharing components like power supplies and fans.

7 / Exploring Computing Concepts and Technologies


Blade Enclosures and Blade Servers

Blade enclosures house blade servers, providing a compact and high-density solution for data centers.
Blade servers are servers without their own power supply or expansion slots, designed to be placed in blade enclosures for efficient space utilization.
Newer blade servers may have higher power, cooling, and network bandwidth requirements, posing challenges for compatibility with existing enclosures.
Blade servers are connected to shared power supplies through a backplane, allowing for cost-effective sharing of components like power supplies and fans.
Enclosures are not limited to blade servers but also accommodate storage components like disks, controllers, and SAN switches.

8 / Exploring Computing Concepts and Technologies


Challenges in Using Newer Server Blades

Newer server blades may not fit existing enclosures due to increased power, cooling, or bandwidth requirements.
Upgrading power supplies may be necessary if newer blades require more power than the enclosure can provide.
Compatibility issues may arise if newer blades allow for higher network throughput not supported by the enclosure.
Blade enclosures are not only used for blade servers but also for storage components like disks and SAN switches.
Systems based on blade servers are generally more cost-effective than rack-mounted servers due to shared components like power supplies and fans.

9 / Exploring Computing Concepts and Technologies


Understanding CPU Instruction Sets

CPU instructions are represented as binary codes, with mnemonics for them defined in assembly language.
Programmers write assembly code using mnemonics, which an assembler translates into machine instruction codes.
Assembly language is specific to a particular CPU architecture and provides mnemonics for easier human understanding.
There is a one-to-one correspondence between assembly language instructions and machine code instructions.
The assembler translates mnemonics to machine instruction codes, which run directly on the CPU (see the sketch below).
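
As a toy illustration of that one-to-one mapping, a minimal Python sketch of an assembler (the two-byte instruction format and the opcode values are invented for illustration, not a real instruction set):

    # Toy assembler: each mnemonic maps one-to-one onto a machine opcode.
    OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}  # hypothetical ISA

    def assemble(source: str) -> bytes:
        """Translate 'MNEMONIC operand' lines into machine code bytes."""
        program = bytearray()
        for line in source.strip().splitlines():
            mnemonic, operand = line.split()
            program.append(OPCODES[mnemonic])  # one instruction, one opcode
            program.append(int(operand))       # one-byte operand
        return bytes(program)

    print(assemble("LOAD 10\nADD 32\nSTORE 10").hex())  # -> 010a0220030a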

10 / Exploring Computing Concepts and Technologies


Evolution of CPU Word Sizes

CPU word sizes have evolved over time, starting from 4 bits in early CPUs.
8-bit CPUs gained popularity quickly, allowing storage of values between 0 and 255 in a single memory register.
The Intel 8086, a 16-bit microprocessor, became widely used, leading to the x86 microprocessor family.
Today's 64-bit CPUs can address significantly larger memory spaces compared to 32-bit CPUs (see the calculation below).
The Intel x86 processors, including the 22-core E5-2699A Xeon processor, have been pivotal in computer architectures.
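
A quick check of what those word sizes mean for addressable memory, as a Python sketch:

    # Maximum byte-addressable memory for 32-bit vs. 64-bit addresses.
    GIB, EIB = 2**30, 2**60
    print(f"32-bit: {2**32} bytes = {2**32 // GIB} GiB")  # 4 GiB
    print(f"64-bit: {2**64} bytes = {2**64 // EIB} EiB")  # 16 EiB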

11 / Exploring Computing Concepts and Technologies


AMD vs. Intel in the CPU Market

AMD and Intel are major competitors in the CPU market.
Intel processors have been the de facto standard for many computer architectures.
AMD is the second-largest global supplier of CPUs, competing fiercely with Intel.
Intel's x86 processors have a long history, from the 8088 CPU to the 22-core E5-2699A Xeon processor.
AMD's x86 processors have also made significant advancements, challenging Intel's dominance in the market.

12 / Exploring Computing Concepts and Technologies


ARM Processors and Oracle SPARC

ARM Processors
ARM processors are a type of CPU architecture commonly used in mobile devices and embedded systems.
They are known for their energy efficiency and low power consumption, making them ideal for portable devices.
ARM processors are based on Reduced Instruction Set Computing (RISC) principles, focusing on simplicity and efficiency.

Oracle SPARC
SPARC (Scalable Processor Architecture) is a RISC-based CPU architecture developed by Sun Microsystems, now owned by Oracle.
SPARC processors are designed for high-performance computing tasks, such as database management and enterprise applications.
Known for their scalability and reliability, SPARC processors are commonly used in data centers and server environments.

13 / Exploring Computing Concepts and Technologies


SPARC and IBM POWER Processors

SPARC Processors
SPARC processors are fully open and non-proprietary, allowing any manufacturer to produce a SPARC CPU.
The latest SPARC model in 2017 is the 32-core SPARC M7 CPU, running at 4.1 GHz.

IBM POWER Processors
IBM POWER processors, the architecture behind the PowerPC family, were introduced by IBM in 1990 and are used in high-end server products.
The latest IBM POWER model in 2017 is the 24-core POWER9 CPU, running at 4 GHz.

14 / Exploring Computing Concepts and Technologies


Evolution of Computer Memory Technology

Introduction of Magnetic Core Memory: early computer memory technology using tiny magnetic rings to store data.
Transition to Semiconductor Memory: shift towards semiconductor-based memory like RAM and ROM for faster access.
Development of Dynamic RAM (DRAM): introduction of DRAM for higher memory density, but requiring constant refreshing.
Evolution to Static RAM (SRAM): advancement to SRAM for faster and more stable memory storage.
Emergence of Flash Memory: innovation of non-volatile flash memory for portable devices and data storage solutions.

15 / Exploring Computing Concepts and Technologies


Components and Functions of Computer Memory

Dynamic Random Access Memory (DRAM)
Utilizes capacitors to store data.
Requires regular refreshing to maintain data integrity.

Basic Input/Output System (BIOS)
Stored on a memory chip on the motherboard.
Controls the computer from startup to loading the operating system.
Updating BIOS software is crucial for system stability and performance enhancement.

Interfaces for External Peripherals
Different interfaces like RS-232 are used.
Facilitates serial communication.

16 / Exploring Computing Concepts and Technologies


Evolution of Serial Communication Interfaces

Serial communication interfaces have evolved from early simple connections to complex data transmission systems.
Initially, serial interfaces used a single data line to send and receive information sequentially.
Over time, advancements led to the development of faster serial interfaces with improved data transfer rates.
Modern serial communication interfaces utilize protocols like RS-232, USB, and Ethernet for diverse connectivity needs.
The evolution of serial interfaces has enabled efficient data exchange between devices in various industries.

17 / Exploring Computing Concepts and Technologies


Modern Laptop Connectors and Cables

USB Type-C Connector
Capable of transferring up to 100W of electricity.
Used for the USB 3.1 protocol and Thunderbolt 3.

Thunderbolt Technology
Thunderbolt 1: 10 Gbit/s bi-directional data transfers.
Thunderbolt 2: throughput of 20 Gbit/s.
Thunderbolt 3: maximum throughput of 40 Gbit/s, provides 100W power, backward compatible with USB 3.1.

PCI and PCIe Evolution
PCI (Peripheral Component Interconnect) is used in x86 servers for internal expansion slots.
PCI uses a shared parallel bus architecture; optional 64-bit support is available.
PCIe (PCI Express) offers faster data transfer speeds and is commonly used in modern systems.

18 / Exploring Computing Concepts and Technologies


Differences Between PCI and PCIe Bus Architectures

PCI Express (PCIe) uses point-to-point serial links, unlike PCI's shared parallel bus architecture.
PCIe connections are built from lanes, with devices supporting various lane configurations.
PCIe provides faster data transfers due to its serial link topology compared to PCI.
Thunderbolt technology, like Thunderbolt 3, can also use the USB Type-C connector for high-speed data transfers.
Despite PCIe's advantages, PCI remains common in computers because of its large installed base of compatible cards and devices.

19 / Exploring Computing Concepts and Technologies


Types of PCI and PCIe Lanes

PCI and PCIe lane types with corresponding speeds in Gbit/s:

PCI variants (shared parallel bus): 32-bit/33 MHz, 32-bit/66 MHz, 64-bit/33 MHz, 64-bit/66 MHz.

PCIe speeds in Gbit/s for x1, x2, x4, x8, x16, and x32 lane configurations:
PCIe 1.0: 2, 4, 8, 16, 32, 64
PCIe 2.0: 4, 8, 16, 32, 64, 128
PCIe 3.0: 8, 16, 32, 64, 128, 256
PCIe 4.0: 16, 32, 64, 128, 256, 512
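
The doubling pattern makes per-lane throughput easy to compute; a small Python sketch based on the approximate figures in the table above:

    # Approximate PCIe throughput in Gbit/s: PCIe 1.0 carries ~2 Gbit/s per
    # lane, and each later generation doubles the per-lane rate.
    def pcie_gbit_per_s(generation: int, lanes: int) -> int:
        per_lane = 2 * 2 ** (generation - 1)
        return per_lane * lanes

    print(pcie_gbit_per_s(generation=3, lanes=16))  # PCIe 3.0 x16 -> 128 Gbit/s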

20 / Exploring Computing Concepts and Technologies


Compute Virtualization Concept

Compute virtualization is the process of creating virtual versions of physical computers to optimize resource utilization.
Virtualization allows for the dynamic allocation of CPU, memory, disk, and networking resources to virtual machines.
Benefits of compute virtualization include cost savings on hardware, power, cooling, and maintenance, and reduced risk of hardware failures.
Virtualization platforms like VMware, Microsoft Hyper-V, and Citrix XenServer enable the efficient management and movement of virtual machines (see the sketch below).
Virtual machines can be provisioned quickly without the need for upfront hardware purchases, enhancing flexibility and scalability.
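
As an illustration of working with a hypervisor programmatically, a minimal Python sketch using the libvirt bindings (assuming a local KVM/QEMU host with the libvirt-python package installed; the connection URI varies per environment):

    # List the virtual machines known to a local hypervisor via libvirt.
    import libvirt

    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        for dom in conn.listAllDomains():    # running and defined-but-off VMs
            state = "running" if dom.isActive() else "shut off"
            print(f"{dom.name()}: {state}")
    finally:
        conn.close()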

21 / Exploring Computing Concepts and Technologies


Virtualization Architecture Overview

Virtualization decouples and isolates virtual machines from physical machines and other virtual machines.
Virtual machines are logical representations of physical computers in software, allowing for independent operation and isolation.
Benefits include quick provisioning without upfront hardware purchases and improved resource utilization.
The history of virtual machines dates back to IBM's mainframe System/370 in 1972, with a shift to x86 platforms in the early 2000s.
Key virtualization platforms include VMware ESX, Microsoft Hyper-V, Citrix XenServer, Oracle VirtualBox, and Red Hat RHEV.

22 / Exploring Computing Concepts and Technologies


VMware's Leadership in x86 Virtualization

VMware established leadership in x86 virtualization with products like VMware Workstation and VMware GSX.
Other vendors offering x86 virtualization include Citrix, Red Hat, and Microsoft.

Cost Savings and Efficiency
Cost savings on hardware, power, and cooling are achieved by consolidating physical computers into virtual machines on fewer, larger physical machines.
Maintenance costs and hardware failure risks are reduced with fewer physical machines needed.

Server Virtualization and Software Defined Compute
Server virtualization is described as Software Defined Compute, enabling efficient resource management and optimization through centralized management and APIs.

23 / Exploring Computing Concepts and Technologies


Software Defined Compute (SDC)

Involves managing virtual machines using a centralized system and APIs for efficient resource management and optimization.
Centralized management of resources and automatic optimization eases management efforts in server virtualization.
Enables systems managers to handle more machines with the same staff, enhancing operational efficiency.
Server virtualization can be viewed as Software Defined Compute, streamlining resource management and allocation.

24 / Exploring Computing Concepts and Technologies


Hypervisor Management of Virtual Machines

Hypervisors: manage virtual machines by allocating CPU, memory, disk, and networking resources dynamically.
Virtualization platforms: VMware ESX VMotion and Microsoft Hyper-V Live Migration enable automatic movement of virtual machines between physical machines.
Benefits: load balancing, hardware maintenance, and high availability features like automatic restarts in case of failures.
Virtual machine images: easily created and managed, though caution is needed to avoid creating an excessive number of virtual machines.

25 / Exploring Computing Concepts and Technologies


Drawbacks of Virtualization in IT Infrastructure

Virtual Machine Sprawl: virtual machine sprawl is a common issue in IT infrastructure due to the creation of numerous virtual machines for various purposes, leading to management challenges and resource consumption.
Additional Management Efforts: introducing an additional layer in the infrastructure for virtualization requires extra management efforts, including license fees, training, and maintenance of additional tools.
Specialized Hardware Requirements: some servers may require specialized hardware for virtualization, such as modem cards or high-speed I/O, which can add complexity and cost to the infrastructure.
Virtualization Overhead: virtualization overhead, especially in running high-performance databases, can impact overall system performance due to resource allocation and isolation requirements.
Lack of Vendor Support: lack of support from application vendors for virtualized environments can lead to challenges in troubleshooting and maintenance, requiring additional steps for issue resolution.

26 / Exploring Computing Concepts and Technologies


Emulation and Logical Partitions

Emulation
Emulation is a software process that enables programs to run on a different computer system than originally intended.
Emulators like Hercules, Charon, and Bochs allow for running programs on varied platforms.

Logical Partitions (LPARs)
Logical Partitions (LPARs) are subsets of a computer's hardware resources virtualized as separate computers.
Commonly used in mainframe and midrange systems.

27 / Exploring Computing Concepts and Technologies


Logical Partitions and Hypervisors

Logical Partitions (LPARs)
Subsets of a computer's hardware resources virtualized as separate computers.
Commonly utilized in mainframe and midrange systems.

Role of Hypervisors
Manage virtual machines.
Allow for dynamic allocation of resources like CPU, memory, disk, and networking to virtual machines.

28 / Exploring Computing Concepts and Technologies


Virtualization Techniques and Hypervisor Types

Virtualization Techniques
Emulation: software allowing programs to run on different computer systems.
Logical Partitions: subsets of hardware resources virtualized as separate computers.
Hypervisors: software managing virtual machines and their resources.

Types of Hypervisors
Type 1 hypervisor: installed directly on physical hardware, like VMware ESXi.
Type 2 hypervisor: installed on an operating system, like VMware Workstation.

Virtualization Benefits
Resource optimization: efficient use of hardware resources.
Flexibility: easy provisioning and management of virtual machines.
Isolation: ensuring security and independence of virtual environments.

29 / Exploring Computing Concepts and Technologies


Memory Management in Virtual Machines

Memory management in virtual machines involves allocating and managing memory resources for virtualized environments.
Virtual machines require memory allocation from the physical host machine to operate efficiently.
Memory management ensures that each virtual machine has access to the necessary memory resources without impacting other virtual machines.
Techniques like memory ballooning and memory sharing are used to optimize memory usage in virtualized environments.
Proper memory management is crucial for the performance and stability of virtual machines in datacenter environments.

30 / Exploring Computing Concepts and Technologies


Container Technology Overview

Container technology allows for multiple isolated user-space instances on a single operating system kernel.
Containers provide isolation, portability, and easy deployment of applications by encapsulating all components, including dependencies and services.
The history of containers dates back to UNIX-based containers in 1979, with container support integrated into the Linux kernel in 2008.
Key benefits of containers include isolation, portability, and easy deployment of applications.

31 / Exploring Computing Concepts and Technologies


Implementation of Containers in Development

Containers in development allow for encapsulating applications and their dependencies for easy deployment.
Implementation of containers involves creating isolated user-space instances on a single operating system kernel.
Containers provide portability, isolation, and efficient deployment of applications by encapsulating all necessary components.
Developers can leverage containers to run applications independently, sharing the same kernel for efficient resource utilization (see the sketch below).
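
To make this concrete, a minimal sketch using the Docker SDK for Python (assuming a running Docker daemon and the docker package installed; the image name is only an example):

    # Run a short-lived command inside an isolated container and capture its output.
    import docker

    client = docker.from_env()               # talk to the local Docker daemon
    output = client.containers.run(
        "python:3.12-slim",                  # example image with its own user space
        ["python", "-c", "print('hello from a container')"],
        remove=True,                         # delete the container when it exits
    )
    print(output.decode())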

32 / Exploring Computing Concepts and Technologies


Container Security Implications

Isolation: containers provide a high level of isolation for applications, ensuring that each operates independently and securely within its own environment.
Vulnerabilities: despite their benefits, containers can introduce security risks if not properly configured or managed, potentially leading to data breaches or unauthorized access.
Access Control: implementing strict access controls and monitoring mechanisms is crucial to prevent unauthorized access to containerized applications and sensitive data.
Patch Management: regularly updating and patching container images and runtime environments is essential to address known vulnerabilities and enhance overall security.
Network Security: securing container networking through measures like network segmentation, encryption, and firewalls is vital to protect against external threats and unauthorized communication.

33 / Exploring Computing Concepts and Technologies


Repository and Container Orchestration

Implementing a Repository
Implementing a repository with predefined and approved container components is crucial for managing unlicensed software effectively.

Container Orchestration
Container orchestration, also known as a datacenter operating system, abstracts the resources of a cluster of machines and provides services to containers.

Frameworks for Management
Various frameworks like Docker Swarm, Kubernetes, and Mesos are used for managing container images and orchestrating the container lifecycle.

Container Orchestrators
Container orchestrators enable containers to run anywhere on a cluster of machines and schedule them based on available resources (see the sketch below).

Popular Frameworks
Docker Swarm, Apache Mesos, Google's Kubernetes, Rancher, Pivotal CloudFoundry, and Mesosphere DC/OS are popular frameworks for container management.
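
The core placement idea, running each container on a node with enough free resources, can be sketched in a few lines of Python (a deliberately simplified model, not how any particular orchestrator is implemented):

    # Toy scheduler: place each container on the first node with enough
    # spare CPU and memory, mimicking resource-based placement.
    nodes = {"node-a": {"cpu": 4.0, "mem": 8.0}, "node-b": {"cpu": 8.0, "mem": 16.0}}

    def schedule(cpu: float, mem: float) -> str | None:
        for name, free in nodes.items():
            if free["cpu"] >= cpu and free["mem"] >= mem:
                free["cpu"] -= cpu          # reserve resources on the chosen node
                free["mem"] -= mem
                return name
        return None                         # no node can host this container

    print(schedule(cpu=1.0, mem=4.0))       # -> node-a
    print(schedule(cpu=4.0, mem=8.0))       # -> node-b (node-a has 3 CPUs left)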

34 / Exploring Computing Concepts and Technologies


Mainframes in High-Volume Computing

Mainframes Overview: mainframes are high-performance computers designed for high-volume, I/O-intensive computing. They were the first commercially available computers and are still widely used today.
Mainframe Optimization: mainframes are optimized for handling large volumes of data efficiently.
Market Presence: IBM is a major vendor in the mainframe market, holding a significant market share.
Reliability: mainframes offer high reliability, with built-in redundancy allowing hardware upgrades and repairs without downtime.

35 / Exploring Computing Concepts and Technologies


Mainframe Architecture Overview

Mainframe Components: mainframes consist of processing units, memory, I/O channels, control units, and devices placed in racks or frames.
Processing Units: mainframes use specialized Processing Units (PUs) within a Central Processor Complex (CPC).
Central Processor Complex: the CPC contains one to four book packages, each with processors, memory, and I/O connections.
Design and Reliability: mainframes are designed for high-volume, I/O-intensive computing and are highly reliable with built-in redundancy.
Market Presence: mainframes are still widely used today, with IBM being the largest vendor in the market.

36 / Exploring Computing Concepts and Technologies


Processing Units in Mainframe Systems

Mainframes use Processing Units (PUs) instead of CPUs for computation.
A mainframe typically contains multiple PUs within a Central Processor Complex (CPC).
The CPC consists of one to four book packages, each containing processors, memory, and I/O connections.
Specialized PUs, like the quad-core z10 mainframe processor, are utilized in mainframes instead of off-the-shelf CPUs.
Processors within the CPC start as equivalent PUs and can be characterized for specific tasks during installation or at a later time.

37 / Exploring Computing Concepts and Technologies


Control Units and Mainframe Virtualization

Control Units in Mainframe Systems: control units in mainframe systems are similar to expansion cards in x86 systems and are responsible for working with specific I/O devices.
Mainframe Virtualization: mainframes are designed for virtualization and can run multiple virtual machines with different operating systems.
Logical Partitions (LPARs): mainframes offer logical partitions (LPARs) as a virtualization solution, with each LPAR running its own mainframe operating system.
IBM Mainframe LPAR Limit: the largest IBM mainframe today has an upper limit of 54 LPARs, allowing for efficient resource allocation and management.

38 / Exploring Computing Concepts and Technologies


Midrange Systems Overview

Positioning of Midrange Systems
Midrange systems are positioned between mainframe and x86 platforms in terms of size, cost, workload, availability, performance, and maturity.

Production and Components
Typically produced by IBM, Hewlett-Packard, and Oracle.
Use parts from one vendor and run an operating system provided by that vendor.

High Availability and Security
Offer high availability and security due to their stable platform nature.

History of Midrange Systems
Traces back to the evolution of minicomputers.
The DEC PDP-8 was the first commercially successful minicomputer.
Enabled smaller businesses and scientific laboratories to have their own computing capabilities.

39 / Exploring Computing Concepts and Technologies


Evolution of Minicomputers

Introduction of Minicomputers: minicomputers emerged as a bridge between mainframes and microcomputers, offering more power than microcomputers but smaller scale than mainframes.
Features of Minicomputers: they were characterized by their moderate computing power, compact size, and affordability compared to mainframes.
Evolutionary Advancements: over time, minicomputers evolved to offer improved processing capabilities, storage capacities, and connectivity options.
Market Impact: minicomputers found popularity in small to medium-sized businesses, research institutions, and educational settings due to their cost-effectiveness and versatility.
Legacy and Influence: the legacy of minicomputers can be seen in modern computing devices, contributing to the development of personal computers and server technologies.

40 / Exploring Computing Concepts and Technologies


Midrange System Architecture

Positioned between mainframe and x86 platforms in terms of size, cost, workload, availability, performance, and maturity.

Production and Operating Systems
Typically produced by IBM, Hewlett-Packard, and Oracle.
Uses parts from one vendor and runs an operating system provided by that vendor.

Availability and Security
Offers higher availability and security due to their stable platform nature.

Evolution of Midrange Systems
Evolved over time to cater to a specific niche in the computing landscape.

History of Midrange Systems
Traces back to the era of minicomputers.
The DEC PDP-8 was a notable early success in this category.

41 / Exploring Computing Concepts and Technologies


UMA Architecture in Midrange Systems

Uniform Memory Access (UMA) Architecture

One of the earliest styles of multi-CPU architectures, typically used in systems with no more than 8 CPUs.
In a UMA system, the machine is organized into a series of nodes containing either a processor or a memory block, usually interconnected by a shared bus.
Each processor in a UMA system can access all memory blocks via the shared bus, creating a single memory address space for all CPUs.
UMA architecture provides a straightforward memory access model, allowing each processor equal access to all memory blocks in the system.

42 / Exploring Computing Concepts and Technologies


NUMA Architecture in Midrange Systems

NUMA Architecture Overview: NUMA stands for Non-Uniform Memory Access. Each processor can access its own local memory block directly, while memory on other nodes is reached over an interconnect.
System Structure: NUMA systems are organized as nodes, each pairing processors with local memory, interconnected by a network.
Performance Benefits: beneficial for systems with many CPUs, because local memory access avoids contention on a single shared bus.
Efficiency Optimization: optimizes memory access by reducing latency for local accesses, improving overall system efficiency.

43 / Exploring Computing Concepts and Technologies


Midrange Virtualization Technologies

Midrange virtualization technologies are provided by various vendors, each offering unique solutions tailored to specific system requirements.
Logical Partitions (LPARs): LPARs are commonly used in mainframe and midrange systems, allowing for the virtualization of hardware resources as separate computers.
IBM AIX Workload Partitions (WPARs): IBM AIX offers WPARs within LPARs, enabling the transfer of running workloads between physical systems seamlessly.
HP-UX Virtualization Technology: HP-UX virtualization technology includes nPARs and vPARs, offering comparable functionalities to LPARs and WPARs for efficient resource allocation.
Oracle Solaris Virtualization: Oracle Solaris virtualization features zones and containers, providing a protected environment for applications with shared kernels but individual configurations.

44 / Exploring Computing Concepts and Technologies


x86 Servers Architecture Evolution

Introduction of x86 Servers
x86 servers are a prevalent server architecture used in datacenters.

Origin and Popularity
Initially based on the IBM PC architecture, x86 servers gained popularity due to their low cost and compatibility with Windows and Linux.

x86 Architecture Building Blocks
x86 architecture is defined by building blocks integrated into specialized chips known as an x86 chipset.

Evolution of x86 Systems
Earlier x86 systems utilized a Northbridge/Southbridge architecture for data transfer between the CPU, RAM memory, and PCIe bus.

Diverse Implementations
x86 servers are implemented by various vendors like HP, Dell, HDS, and Lenovo, with diverse hardware components available.

45 / Exploring Computing Concepts and Technologies


Virtualization on the x86 Platform

Virtualization on the x86 platform allows running multiple operating systems like Windows or Linux on x86 servers.
x86 servers typically run one application per server, unlike midrange and mainframe systems.
Resource utilization is improved by running multiple operating systems in separate virtual machines on a single x86 server.
Popular virtualization platforms for x86 servers include VMware vSphere, Microsoft Hyper-V, Citrix XenServer, Oracle VirtualBox, and Red Hat RHEV.

46 / Exploring Computing Concepts and Technologies


Performance of Computers and Moore's Law

Performance of Computers
The performance of computers is influenced by server architecture, memory, CPU speed, and bus speed.

Moore's Law
Moore's Law observes that the number of transistors on a chip doubles roughly every two years, which is why CPU power has increased exponentially since the introduction of microprocessors in 1971 (see the projection below).
The Intel 4004 microprocessor, the first universal microprocessor, revolutionized computer technology by combining multiple IC functions into a single chip.
The 4004 chip had 2,300 transistors and could perform 60,000 instructions per second, making computers faster and smaller.
Today Moore's Law translates into more CPU cores on a single CPU rather than higher clock speeds, impacting computer performance.
Hyper-threading technology in Intel CPUs lets a single processor core appear as two logical processors, improving system performance.
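
A quick sanity check of that exponential growth, as a Python sketch (using the rough "doubling every two years" formulation of Moore's Law and the 4004's 2,300 transistors as the 1971 baseline):

    # Project transistor counts forward from the Intel 4004 (1971).
    baseline_transistors, baseline_year = 2_300, 1971

    for year in (1981, 1991, 2001, 2011, 2021):
        doublings = (year - baseline_year) / 2   # one doubling every two years
        projected = baseline_transistors * 2 ** doublings
        print(f"{year}: ~{projected:,.0f} transistors")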

47 / Exploring Computing Concepts and Technologies


CPU Execution Process and Clock Speed

CPU Execution Process
Fetching: instructions are retrieved from memory.
Decoding: instructions are translated into signals the CPU can understand.
Executing: instructions are carried out by the CPU.
Writing Back: results are stored back in memory.

Clock Speed
Clock Ticks: each step is triggered by a clock tick.
Clock Cycles: the number of clock ticks needed per instruction varies based on CPU architecture.
GHz: CPUs operate at speeds defined in GHz (billions of clock ticks per second).
Performance: higher clock speeds allow CPUs to execute instructions faster (see the worked example below).
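
The relationship between clock speed, cycles per instruction, and throughput can be worked through in a short Python sketch (the 3 GHz clock and 4 cycles per instruction are example figures, not measurements of any specific CPU):

    # Rough instruction throughput: clock ticks per second / ticks per instruction.
    clock_hz = 3_000_000_000         # 3 GHz: three billion clock ticks per second
    cycles_per_instruction = 4       # e.g., fetch, decode, execute, write back

    instructions_per_second = clock_hz / cycles_per_instruction
    print(f"{instructions_per_second / 1e6:,.0f} million instructions/second")  # 750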

48 / Exploring Computing Concepts and Technologies


CPU Caching and Memory Organization

CPU cache is crucial for optimizing memory access (see the demonstration below).
It is organized in levels, with L1 cache closest to the core.
L1 cache is fed by L2 cache, which in turn gets data from RAM.
Some multi-core CPUs feature a shared L3 cache for improved memory usage.
Exclusive cache systems ensure cached memory is in only one cache.

CPU Pipelines
Pipelines in CPUs allow for simultaneous handling of multiple instructions.
Processes are split into stages like fetching, decoding, executing, and writing back results.
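
Cache effects are visible even from Python with NumPy: traversing a large matrix along its contiguous rows is markedly faster than striding down its columns (a rough demonstration; exact timings depend on the machine):

    # Cache-locality demo: contiguous (row-wise) vs. strided (column-wise) sums.
    import time
    import numpy as np

    a = np.random.rand(4000, 4000)   # C-order: each row is contiguous in memory

    t0 = time.perf_counter()
    row_total = sum(a[i, :].sum() for i in range(4000))  # sequential, cache-friendly
    t1 = time.perf_counter()
    col_total = sum(a[:, j].sum() for j in range(4000))  # strided, many cache misses
    t2 = time.perf_counter()

    print(f"row-wise: {t1 - t0:.3f}s, column-wise: {t2 - t1:.3f}s")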

49 / Exploring Computing Concepts and Technologies


CPU Pipelines and Instruction Execution

01 Fetching: instructions are fetched from memory.
02 Decoding: instructions are decoded to understand their meaning.
03 Executing: instructions are executed by the CPU.
04 Writing Back: results are written back to memory after execution.

50 / Exploring Computing Concepts and Technologies


Prefetching and Branch Prediction

Prefetching: a technique used by CPUs to fetch data from memory before it is actually needed, aiming to reduce delays in instruction execution.
Branch Prediction: a method employed by CPUs to anticipate the outcome of branch instructions, enhancing performance by keeping the pipeline filled rather than stalling until the branch is resolved.
Optimization of Instruction Delivery: CPUs utilize prefetching and branch prediction to optimize instruction delivery, with over 80% of processor instructions being delivered from cache memory using these techniques.

51 / Exploring Computing Concepts and Technologies


Superscalar CPUs and Parallel Execution

Superscalar CPUs

Designed to process multiple instructions per clock tick by dispatching instructions to multiple functional units within the processor.
Achieve this by simultaneously executing multiple instructions in separate data paths within the CPU.
Parallel execution allows for more efficient use of clock cycles, enhancing the overall performance of the CPU.
Have multiple functional units like arithmetic logic units (ALUs) and multipliers to handle different types of instructions simultaneously.

52 / Exploring Computing Concepts and Technologies


Limitations of CPU Clock Speeds

Increasing clock speeds beyond 1 GHz can lead to interference on circuit boards due to high-frequency signals acting as radio antennas.
Circuit board interference can cause instability in the system, requiring very rigid circuit board designs to manage the effect.
Clock speeds above 3 GHz can result in signal phase issues on circuit boards, affecting the synchronization of data processing.
The physical limitations of circuit board design and signal propagation pose challenges in sustaining very high CPU clock speeds.
To address the limitations of high clock speeds, the industry has shifted towards multi-core CPUs as a solution for enhancing performance without solely relying on clock speed increments.

53 / Exploring Computing Concepts and Technologies


Multi-Core CPUs and Heat Generation

Prevalence of Multi-core CPUs: multi-core CPUs have become prevalent in modern computing systems.
Workload Distribution: the distribution of workload among multiple cores in multi-core CPUs helps in reducing power consumption.
Heat Generation Concerns: heat generation is a significant concern, with CPUs running between 70 and 90 degrees Celsius and not exceeding 95 degrees Celsius.
Addressing Heat Issues: the introduction of multi-core CPUs has addressed heat issues by spreading the workload across multiple cores.
Trend in Processor Development: the trend in processor development has shifted towards CPUs with multiple cores to enhance performance and efficiency.

54 / Exploring Computing Concepts and Technologies


Impact of Moore's Law on CPU Cores

Moore's Law and CPU Cores
Moore's Law now leads to an increase in the number of CPU cores on a single CPU rather than higher clock speeds.
The shift towards more CPU cores has become prominent because clock speeds can no longer be raised much further, while transistor density keeps growing.

Benefits of More CPU Cores
Having more CPU cores allows for better parallel processing and multitasking capabilities.
While individual CPU cores may not run significantly faster than older CPUs, overall performance is enhanced by having more cores.

BIOS Support and Hyper-Threading
BIOS support for technologies like hyper-threading is crucial for maximizing the benefits of multiple CPU cores (see the sketch below).
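
A quick way to see logical versus physical core counts on a running system, as a Python sketch (the physical-core query assumes the optional psutil package is installed):

    # Logical CPUs include hyper-threading/SMT siblings; physical cores do not.
    import os
    import psutil  # third-party package, assumed installed

    print("logical CPUs:", os.cpu_count())
    print("physical cores:", psutil.cpu_count(logical=False))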

55 / Exploring Computing Concepts and Technologies


Virtualization Impact on CPU Usage

Virtualization on x86 platforms allows for running multiple operating systems on a single server, optimizing resource utilization.
x86 servers typically run one application per server, unlike midrange and mainframe systems, leading to less efficient hardware usage.
By running multiple operating systems in virtual machines on a large x86 server, resource utilization can be improved.
The virtualization layer on x86 platforms achieves application isolation similar to midrange and mainframe systems.
Popular virtualization products for x86 platforms include VMware vSphere, Microsoft Hyper-V, Citrix XenServer, Oracle VirtualBox, and Red Hat RHEV.

56 / Exploring Computing Concepts and Technologies


Physical and Virtual Security Measures

Implementing Physical Security Measures: access control systems, surveillance cameras, biometric authentication.
Utilizing Encryption Protocols: secure socket layer (SSL) certificates safeguard data transmission over networks.
Implementing Cybersecurity Measures: firewalls, intrusion detection systems, antivirus software.
Regular Software and Firmware Updates: patch security vulnerabilities and enhance system resilience.
Conducting Regular Security Audits: penetration testing to identify and address potential security weaknesses.

57 / Exploring Computing Concepts and Technologies


Minimizing Hypervisor Security Risks

Implement regular security updates and patches to address vulnerabilities.
Utilize secure hypervisor configurations to reduce attack surfaces.
Employ network segmentation to isolate virtual machines and limit lateral movement.
Enable encryption for data at rest and in transit within the virtual environment.
Implement strong access controls and authentication mechanisms to prevent unauthorized access.
Regularly monitor and audit hypervisor activity for any suspicious behavior or unauthorized access attempts.

58 / Exploring Computing Concepts and Technologies
