Infrastructure Building Blocks: Compute and Technologies
A Comprehensive Journey Through the Evolution and Application of Computing
Table of contents
Evolution of Minicomputers
Midrange System Architecture
UMA Architecture in Midrange Systems
NUMA Architecture in Midrange Systems
Midrange Virtualization Technologies
x86 Servers Architecture Evolution
Virtualization on the x86 Platform
Performance of Computers and Moore's Law
CPU Execution Process and Clock Speed
CPU Caching and Memory Organization
CPU Pipelines and Instruction Execution
Prefetching and Branch Prediction
Superscalar CPUs and Parallel Execution
Topics covered: importance in computing, role in system performance, benefits in processing, key features and advancements, and comparison with hyper-threading.
Intel x86 processors, starting from the 8088 CPU to the latest 22-core E5-2699A
Xeon Processor, have been pivotal in shaping computer architectures.
The evolution of CPU word sizes, from 4 bits to 64 bits, has influenced the
capabilities of personal computers in handling data and memory.
Types of Computer Housing
Stand-alone complete systems, like pedestal or tower computers, were the original computer housing.
Blade enclosures are used to house blade servers, providing a compact and high-
density solution for data centers.
Blade servers are servers without their own power supply or expansion slots,
designed to be placed in blade enclosures for efficient space utilization.
Newer blade servers may have higher power, cooling, and network bandwidth
requirements, posing challenges for compatibility with existing enclosures.
Enclosures are not limited to blade servers but also accommodate storage
components like disks, controllers, and SAN switches.
CPU instruction sets are represented as binary codes and mnemonics in assembly
language.
Programmers write assembly code using mnemonics, which an assembler then translates into machine instruction codes.
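To make the assembler's role concrete, here is a minimal sketch of a toy assembler in Python. The mnemonics, opcode values, and instruction format are invented for illustration and do not correspond to any real CPU instruction set.

```python
# Minimal sketch of a toy assembler: translates mnemonics into machine
# instruction codes. Mnemonics and opcodes are invented for illustration
# and do not match any real CPU instruction set.

OPCODES = {
    "LOAD":  0x01,   # load a value into the accumulator
    "ADD":   0x02,   # add a value to the accumulator
    "STORE": 0x03,   # write the accumulator to memory
    "HALT":  0xFF,   # stop execution
}

def assemble(source: str) -> bytes:
    """Translate assembly source (one 'MNEMONIC [operand]' per line) to machine code."""
    machine_code = bytearray()
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic, operand = parts[0], parts[1:]
        machine_code.append(OPCODES[mnemonic.upper()])
        if operand:                          # one optional numeric operand
            machine_code.append(int(operand[0]) & 0xFF)
    return bytes(machine_code)

program = """
LOAD 10
ADD 32
STORE 200
HALT
"""
print(assemble(program).hex(" "))   # 01 0a 02 20 03 c8 ff
```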
Intel processors have been the de-facto standard for many computer
architectures.
AMD is the second-largest global supplier of CPUs, competing fiercely with Intel.
Intel's x86 processors have a long history, starting from the 8088 CPU to the
latest 22-core E5-2699A Xeon Processor.
ARM Processors
ARM processors are based on Reduced Instruction Set Computing (RISC) principles, focusing on simplicity and efficiency.
Known for their energy efficiency and low power consumption, making them ideal for portable devices.
SPARC Processors
SPARC processors are designed for high-performance computing tasks, such as database management and enterprise applications.
Known for their scalability and reliability, SPARC processors are commonly used in data centers and server environments.
SPARC processors are fully open and non-proprietary, allowing any manufacturer to produce a SPARC CPU.
RAM (DRAM): utilizes capacitors to store data and requires regular refreshing to maintain data integrity.
BIOS: stored on a memory chip on the motherboard; controls the computer from startup to loading the operating system.
Serial interfaces: different interfaces like RS-232 are used to facilitate serial communication.
Initially, serial interfaces used a single data line to send and receive information
sequentially.
Over time, advancements led to the development of faster serial interfaces with
improved data transfer rates.
Modern serial communication interfaces utilize protocols like RS-232, USB, and
Ethernet for diverse connectivity needs.
The evolution of serial interfaces has enabled efficient data exchange between
devices in various industries.
PCI Express (PCIe) uses point-to-point serial links, unlike PCI's shared parallel bus
architecture.
PCIe connections are built from lanes, with devices supporting various lane
configurations.
PCIe provides faster data transfers due to its serial link topology compared to PCI.
Thunderbolt technology, like Thunderbolt 3, can also use the USB Type-C
connector for high-speed data transfers.
Different types of PCI and PCIe lanes with corresponding speeds in Gbit/s:

            x1    x2    x4    x8    x16   x32
  PCIe 1.0  2     4     8     16    32    64
  PCIe 2.0  4     8     16    32    64    128
  PCIe 3.0  8     16    32    64    128   256
  PCIe 4.0  16    32    64    128   256   512

Conventional PCI (shared parallel bus) variants: 32-bit/33 MHz, 32-bit/66 MHz, 64-bit/33 MHz, 64-bit/66 MHz.
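Because each extra lane adds one more serial link, effective throughput scales roughly linearly with lane count. The sketch below, assuming the rounded per-lane Gbit/s figures from the table above, estimates link throughput for a given generation and lane count.

```python
# Approximate effective per-lane throughput in Gbit/s for each PCIe
# generation (rounded values, matching the table above).
PER_LANE_GBITS = {"1.0": 2, "2.0": 4, "3.0": 8, "4.0": 16}

def pcie_throughput(generation: str, lanes: int) -> float:
    """Estimate effective link throughput in Gbit/s for a generation and lane count."""
    return PER_LANE_GBITS[generation] * lanes

# A x16 PCIe 3.0 slot (typical for GPUs) is roughly 128 Gbit/s, i.e. about 16 GB/s.
gbits = pcie_throughput("3.0", 16)
print(f"PCIe 3.0 x16: {gbits} Gbit/s = {gbits / 8:.0f} GB/s")
```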
Compute Virtualization
Virtualization allows for the dynamic allocation of CPU, memory, disk, and
networking resources to virtual machines.
Virtual machines can be provisioned quickly without the need for upfront
hardware purchases, enhancing flexibility and scalability.
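As a simplified illustration of this dynamic allocation, the sketch below models a host handing out CPU and memory to newly provisioned virtual machines. The Host class, its capacity figures, and the VM names are hypothetical and not tied to any particular hypervisor API.

```python
# Minimal sketch of dynamic resource allocation on a virtualization host.
# The Host class and its capacity figures are hypothetical; real hypervisors
# expose this functionality through their own management APIs.

class Host:
    def __init__(self, cpus: int, memory_gb: int):
        self.free_cpus = cpus
        self.free_memory_gb = memory_gb
        self.vms = {}

    def provision(self, name: str, cpus: int, memory_gb: int) -> bool:
        """Allocate resources to a new VM if the host has enough capacity."""
        if cpus > self.free_cpus or memory_gb > self.free_memory_gb:
            return False                      # not enough capacity left
        self.free_cpus -= cpus
        self.free_memory_gb -= memory_gb
        self.vms[name] = (cpus, memory_gb)
        return True

host = Host(cpus=32, memory_gb=256)
print(host.provision("web-01", cpus=4, memory_gb=16))   # True
print(host.provision("db-01", cpus=16, memory_gb=128))  # True
print(host.provision("big-01", cpus=24, memory_gb=64))  # False: only 12 CPUs left
```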
Virtualization management involves managing virtual machines using a centralized system and APIs for efficient resource management and optimization.
Enables systems managers to handle more machines with the same staff,
enhancing operational efficiency.
Hypervisors such as VMware ESX and Microsoft Hyper-V manage virtual machines by allocating CPU, memory, disk, and networking resources dynamically.
VMotion and Live Migration enable automatic movement of virtual machines between physical machines, for example for load balancing or hardware maintenance.
High availability features, like automatic restarts in case of failures.
Virtual machines are easily created and managed, but caution is needed to avoid creating an excessive number of virtual machines.
Logical Partitions (LPARs): subsets of a computer's hardware resources virtualized as separate computers.
Role of Hypervisors: manage virtual machines.
Virtual machines require memory allocation from the physical host machine to
operate efficiently.
Memory management ensures that each virtual machine has access to the
necessary memory resources without impacting other virtual machines.
Techniques like memory ballooning and memory sharing are used to optimize
memory usage in virtualized environments.
Proper memory management is crucial for the performance and stability of virtual
machines in datacenter environments.
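To illustrate the ballooning technique mentioned above, the sketch below models a balloon driver reclaiming memory from guests when the host runs short. The class names and memory figures are assumed for illustration and do not reflect any specific hypervisor's implementation.

```python
# Simplified model of memory ballooning: when the host is short on memory,
# the balloon driver inside a guest "inflates", pinning guest memory so the
# hypervisor can hand those pages to other VMs. All figures are illustrative.

class Guest:
    def __init__(self, name: str, allocated_mb: int):
        self.name = name
        self.allocated_mb = allocated_mb   # memory currently backed by the host
        self.balloon_mb = 0                # memory reclaimed via the balloon driver

    def inflate_balloon(self, mb: int) -> int:
        """Reclaim up to `mb` megabytes from this guest and return the amount freed."""
        reclaimable = min(mb, self.allocated_mb)
        self.allocated_mb -= reclaimable
        self.balloon_mb += reclaimable
        return reclaimable

def rebalance(guests, needed_mb):
    """Inflate balloons across guests until `needed_mb` has been reclaimed for the host."""
    freed = 0
    for guest in guests:
        if freed >= needed_mb:
            break
        freed += guest.inflate_balloon(needed_mb - freed)
    return freed

guests = [Guest("vm-a", 4096), Guest("vm-b", 2048)]
print(rebalance(guests, needed_mb=5000))   # 5000 MB reclaimed (4096 from vm-a, 904 from vm-b)
```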
Container Technology
Container technology allows for multiple isolated user-space instances on a single operating system kernel.
Containers in Development
Containers provide a high level of isolation for applications, ensuring that each operates independently and securely within its own environment.
Despite their benefits, containers can introduce security risks if not properly configured or managed, potentially leading to data breaches or unauthorized access.
Implementing strict access controls and monitoring mechanisms is crucial to prevent unauthorized access to containerized applications and sensitive data.
Regularly updating and patching container images and runtime environments is essential to address known vulnerabilities and enhance overall security.
Securing container networking through measures like network segmentation, encryption, and firewalls is vital to protect against external threats and unauthorized communication.
Mainframes are high-performance computers designed for high-volume, I/O-intensive computing.
Mainframes are optimized for handling large volumes of data efficiently.
IBM is a major vendor in the mainframe market, holding a significant market share.
Mainframes offer high reliability, with built-in redundancy for hardware upgrades and repairs without downtime.
Mainframes consist of processing units, memory, I/O channels, control units, and devices placed in racks or frames.
Mainframes use specialized Processing Units (PUs) within a Central Processor Complex (CPC).
The CPC contains one to four book packages, each with processors, memory, and I/O connections.
Mainframes are designed for high-volume, I/O-intensive computing and are highly reliable with built-in redundancy.
Mainframes are still widely used today, with IBM being the largest vendor in the market.
The CPC consists of one to four book packages, each containing processors,
memory, and I/O connections.
Specialized PUs, like the quad-core z10 mainframe processor, are utilized in
mainframes instead of off-the-shelf CPUs.
Processors within the CPC start as equivalent PUs and can be characterized for
specific tasks during installation or at a later time.
Control Units in Mainframe Systems: control units in mainframe systems are similar to expansion cards in x86 systems and are responsible for working with specific I/O devices.
Mainframe Virtualization: mainframes are designed for virtualization and can run multiple virtual machines with different operating systems.
Logical Partitions (LPARs): mainframes offer logical partitions (LPARs) as a virtualization solution, with each LPAR running its own mainframe operating system.
IBM Mainframe LPAR Limit: the largest IBM mainframe today has an upper limit of 54 LPARs, allowing for efficient resource allocation and management.
Minicomputers emerged as a bridge between mainframes and microcomputers, offering more power than microcomputers at a smaller scale than mainframes.
They were characterized by their moderate computing power, compact size, and affordability compared to mainframes.
Over time, minicomputers evolved to offer improved processing capabilities, storage capacities, and connectivity options.
Minicomputers found popularity in small to medium-sized businesses, research institutions, and educational settings due to their cost-effectiveness and versatility.
The legacy of minicomputers can be seen in modern computing devices, contributing to the development of personal computers and server technologies.
History of Midrange Systems: traces back to the era of minicomputers. The DEC PDP-8 was a notable early success in this category.
Production and Operating Systems: midrange systems are typically produced by IBM, Hewlett-Packard, and Oracle. Each uses parts from one vendor and runs an operating system provided by that vendor.
Each processor in a UMA system can access all memory blocks via the shared bus, creating a single memory address space for all CPUs.
NUMA stands for Non-Uniform Memory Access; it allows each processor to access memory blocks directly in midrange systems.
NUMA systems have nodes interconnected by a network, enabling efficient memory access.
NUMA is beneficial for systems with multiple CPUs and enhances performance.
NUMA optimizes memory access by reducing latency, improving overall system efficiency.
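A rough way to see why node-local memory placement matters in a NUMA system is to model average latency as a mix of local and remote accesses. The nanosecond figures in the sketch below are assumed, order-of-magnitude values, not measurements of any particular system.

```python
# Toy model of NUMA memory latency. The nanosecond figures are assumed,
# order-of-magnitude values chosen only to illustrate the local/remote gap.

LOCAL_LATENCY_NS = 100    # access to memory attached to the CPU's own node
REMOTE_LATENCY_NS = 160   # access that must cross the node interconnect

def average_latency_ns(local_fraction: float) -> float:
    """Average memory latency when `local_fraction` of accesses hit node-local memory."""
    return local_fraction * LOCAL_LATENCY_NS + (1 - local_fraction) * REMOTE_LATENCY_NS

for frac in (0.5, 0.9, 1.0):
    print(f"{frac:.0%} local accesses -> {average_latency_ns(frac):.0f} ns average")
```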
Virtualization on the x86 platform allows running multiple operating systems like
Windows or Linux on x86 servers.
x86 servers typically run one application per server, unlike midrange and
mainframe systems.
The performance of computers is influenced by server architecture, memory, CPU speed, and bus speed.
Moore's Law states that CPU power has increased exponentially since the introduction of microprocessors in 1971.
The 4004 chip had 2,300 transistors and could perform 60,000 instructions per second; successive generations have made computers faster and smaller.
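A back-of-the-envelope reading of Moore's Law is that transistor counts double roughly every two years. Starting from the 4004's 2,300 transistors in 1971, the sketch below projects counts forward; the two-year doubling period is the common rule-of-thumb assumption, not a figure from this text.

```python
# Rule-of-thumb projection of Moore's Law: transistor count doubles roughly
# every two years. Starting point is the Intel 4004 (2,300 transistors, 1971);
# the doubling period is the usual assumption, not an exact law.

BASE_YEAR = 1971
BASE_TRANSISTORS = 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Project the transistor count of a leading CPU in a given year."""
    doublings = (year - BASE_YEAR) / DOUBLING_PERIOD_YEARS
    return BASE_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```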
Clock Speed
Clock Ticks: each step is triggered by a clock tick.
CPU cache is crucial for optimizing memory access. It is organized in levels, with L1 cache closest to the core. L1 cache is fed by L2 cache, which in turn gets data from RAM.
Pipelines in CPUs allow for simultaneous handling of multiple instructions. Processing is split into stages like fetching, decoding, executing, and writing back results.
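To see why pipelining improves throughput, the sketch below compares executing instructions one at a time against overlapping them in an idealized four-stage pipeline (fetch, decode, execute, write back); hazards and stalls are ignored, so this is a best-case model.

```python
# Idealized comparison of non-pipelined vs. pipelined execution using the
# four stages mentioned above: fetch, decode, execute, write back.
# Hazards, stalls, and branches are ignored, so this is a best-case model.

STAGES = 4  # fetch, decode, execute, write back

def cycles_without_pipeline(instructions: int) -> int:
    """Each instruction occupies the CPU for all stages before the next starts."""
    return instructions * STAGES

def cycles_with_pipeline(instructions: int) -> int:
    """After the pipeline fills, one instruction completes every cycle."""
    return STAGES + (instructions - 1)

n = 100
print(f"Without pipelining: {cycles_without_pipeline(n)} cycles")  # 400
print(f"With pipelining:    {cycles_with_pipeline(n)} cycles")     # 103
```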
Superscalar CPUs
Superscalar CPUs have multiple functional units, like arithmetic logic units (ALUs) and multipliers, to handle different types of instructions simultaneously.
Parallel execution allows for more efficient use of clock cycles, enhancing the overall performance of the CPU.
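As a simple illustration of how multiple functional units raise throughput, the sketch below models dispatching a mix of instructions to two ALUs and one multiplier each cycle; the unit counts and instruction mix are assumed purely for illustration.

```python
# Toy model of superscalar dispatch: per clock cycle, independent instructions
# are issued to whichever functional units are free. Unit counts and the
# instruction mix are assumed purely for illustration.

from collections import Counter

FUNCTIONAL_UNITS = {"alu": 2, "mul": 1}   # two ALUs, one multiplier

def cycles_needed(instructions):
    """Count cycles to issue all instructions, ignoring data dependencies."""
    pending = Counter(instructions)
    cycles = 0
    while any(pending.values()):
        for unit, count in FUNCTIONAL_UNITS.items():
            issued = min(pending[unit], count)   # issue up to `count` per cycle
            pending[unit] -= issued
        cycles += 1
    return cycles

workload = ["alu"] * 6 + ["mul"] * 3
print(cycles_needed(workload))  # 3 cycles; a single-issue CPU would need 9
```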
Multi-core CPUs have become prevalent in modern computing systems.
The distribution of workload among multiple cores in multi-core CPUs helps in reducing power consumption.
Heat generation is a significant concern, with CPUs running between 70 and 90 degrees Celsius and not exceeding 95 degrees Celsius.
The introduction of multi-core CPUs has addressed heat issues by spreading the workload across multiple cores.
The trend in processor development has shifted towards CPUs with multiple cores to enhance performance and efficiency.
Moore's Law and CPU Cores: Moore's Law now leads to an increase in the number of CPU cores on a single CPU rather than higher clock speeds. The shift towards more CPU cores has become prominent due to the limitations posed by Moore's Law on transistor density.
Benefits of More CPU Cores: having more CPU cores allows for better parallel processing and multitasking capabilities. While individual CPU cores may not run significantly faster than older CPUs, overall performance is enhanced by having more cores.
BIOS Support and Hyper-Threading: BIOS support for technologies like hyper-threading is crucial for maximizing the benefits of multiple CPU cores.
x86 servers typically run one application per server, unlike midrange and
mainframe systems, leading to less efficient hardware usage.