
Green Computing

Sugata Gangopadhyay
Green Computing Overview (1/2)
• Definition: Green computing involves the design, development, and
implementation of computing technologies and practices to minimize
environmental impact.

• Goals:
• Reduce energy consumption.
• Decrease electronic waste.
• Promote sustainability in technology use and manufacturing.
Green Computing Overview (2/2)
• Examples:
• Cloud computing to optimize resources.
• Energy-efficient processors and hardware.
• Virtualization for reducing physical server needs.
Moore's Law
• Moore's Law is named after Gordon E. Moore, a co-founder of Intel
Corporation, who observed in a 1965 paper that the number of
transistors on an integrated circuit (IC) doubles approximately every
two years while the cost per transistor decreases. This observation
has since become a guiding principle for the semiconductor industry
and is often extended to imply the exponential growth of
computational power over time.
Significance of Moore's Law in Energy-
Efficient Computing (1/3)
• Miniaturization and Energy Efficiency: As transistors become smaller,
they consume less power while providing higher performance. This
reduction in size and energy usage has enabled more energy-efficient
devices and systems.

• Power-Performance Trade-Off: Moore's Law has supported the
development of processors that perform more computations per watt
of energy, which is critical for applications where energy is
constrained, such as mobile devices and embedded systems.
Significance of Moore's Law in Energy-
Efficient Computing (2/3)
• Thermal Management: Smaller, more efficient transistors generate
less heat, enabling the design of compact, high-performance devices
without excessive cooling requirements.

• Advancements in Battery-Powered Devices: Energy-efficient
computing has allowed for longer battery life in portable devices like
smartphones, laptops, and IoT devices, supporting the growth of
mobile and edge computing.
Significance of Moore's Law in Energy-
Efficient Computing (3/3)
• Sustainability and Green Computing: By driving innovations in
energy-efficient chips, Moore's Law has contributed to reducing the
carbon footprint of computing infrastructure, which is important for
data centers and large-scale computing environments.

• Enabling New Technologies: The energy efficiency enabled by
Moore's Law has been pivotal for emerging technologies such as
artificial intelligence (AI), quantum computing, and wearable tech,
which rely on powerful yet efficient computation.
Conclusion
• While Moore's Law has faced challenges due to physical and
economic limitations in recent years, its principles have inspired
alternative approaches, such as 3D chip architectures, specialized
processors (e.g., GPUs and TPUs), and advanced materials, to
continue improving energy-efficient computing.
Amdahl's Law
• Amdahl's Law is named after Gene Amdahl, a computer scientist who introduced
it in 1967. Amdahl's Law is a formula that predicts the theoretical speedup of a
computational task when only part of the task can be parallelized. It is expressed
as:
$$S = \frac{1}{(1 - P) + \frac{P}{N}}$$
where
• $S$ is the speedup,
• $P$ is the proportion of the program that can be parallelized,
• $N$ is the number of processing units (e.g., cores).
• Amdahl's Law highlights the diminishing returns in performance improvement as
more processors are added, given that certain parts of a program are inherently
sequential.
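A minimal sketch of the formula in Python (function and parameter names are illustrative, not from the slides):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup S = 1 / ((1 - p) + p / n) for a parallel fraction p
    of the workload running on n processing units."""
    return 1.0 / ((1.0 - p) + p / n)

# Diminishing returns: the sequential fraction dominates as n grows.
print(amdahl_speedup(0.7, 4))      # ~2.11 with 4 processors
print(amdahl_speedup(0.7, 1000))   # ~3.33, approaching the 1/0.3 limit
```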
Significance of Amdahl's Law in Energy-
Efficient Computing
• Optimal Resource Utilization: Amdahl's Law helps identify the
diminishing returns of adding more processors or computational
resources. By avoiding overprovisioning, it aids in designing energy-
efficient systems that do not waste power on redundant parallelism.
• Workload Analysis: By understanding the balance between
sequential and parallelizable tasks, developers can focus on
optimizing energy usage for the most critical parts of the
computation.
Significance of Amdahl's Law in Energy-
Efficient Computing
• Energy-Performance Trade-Offs: Increasing the number of processors
increases energy consumption, but Amdahl's Law reveals the point
where adding more processors has minimal impact on performance.
This understanding leads to better energy-efficient designs.
• System Scalability: It guides the development of scalable systems
where performance gains are proportionate to energy costs,
especially for large-scale parallel systems such as supercomputers and
data centers.
Significance of Amdahl's Law in Energy-
Efficient Computing
• Design of Specialized Hardware: Amdahl's Law informs decisions
about integrating specialized accelerators, such as GPUs or TPUs, to
optimize parallel portions of workloads while ensuring energy
efficiency.
• Focus on Sequential Optimization: Since the sequential portion of
tasks imposes a hard limit on speedup, optimizing these bottlenecks
often yields significant energy efficiency gains.
• Energy-Aware Parallelism: Amdahl’s insights encourage careful
balancing of the degree of parallelism against energy overheads,
particularly in systems with constrained power budgets, such as
mobile and edge computing devices.
Significance of Amdahl's Law in Energy-
Efficient Computing
• Amdahl's Law complements energy-efficient computing by providing
a framework for assessing the trade-offs between performance
improvements and energy costs in parallel and hybrid computational
systems. It emphasizes that smarter design, rather than merely
adding resources, is crucial for achieving sustainable computing.
Problem 1 (Application of Moore’s law)
• A processor has 1 billion transistors in 2020. According to Moore's
Law, the number of transistors doubles every two years.
• How many transistors will the processor have in 2024?
• If the power consumption per transistor is reduced by 50% in the same
period, what is the total power consumption in 2024 compared to 2020?
• Solution:
• Number of transistors:
• 2020: $1 \times 10^9$ (1 billion)
• 2022: $2 \times 10^9$ (doubled)
• 2024: $4 \times 10^9$ (doubled again)
• So, the processor will have $4 \times 10^9$ transistors in 2024.
Problem 1
• Power consumption: If the power per transistor is halved, the power
per transistor in 2024 is $0.5 \times P_t$, where $P_t$ is the 2020 power per transistor.
• Total power consumption:
$$P_{2024} = \text{No. of transistors} \times \text{power per transistor} = 4 \times 10^9 \times 0.5 P_t = 2 \times 10^9 P_t$$
• Compared to $P_{2020} = 1 \times 10^9 \times P_t$:
$$\frac{P_{2024}}{P_{2020}} = 2$$
• The total power consumption in 2024 is 2 times higher than in 2020.
Problem 2 (Application of Amdahl’s law)
• A program consists of 70% parallelizable tasks and 30% sequential
tasks. What is the maximum speedup achievable if infinite processors
are available? What is the speedup if 4 processors are used?
• Solution: Amdahl's law:
$$S = \frac{1}{(1 - P) + \frac{P}{N}}$$
where $P = 0.7$ (parallel portion), $1 - P = 0.3$ (sequential portion),
and $N$ is the number of processors.
Problem 2 (Application of Amdahl’s law)
• Maximum speedup ($N = \infty$): as $N \to \infty$, the term $P/N \to 0$:
$$S = \frac{1}{1 - P} = \frac{1}{0.3} \approx 3.33$$
so the maximum speedup is about $3.33\times$.
• Speedup with 4 processors ($N = 4$): substituting $P = 0.7$ and $N = 4$:
$$S = \frac{1}{0.3 + \frac{0.7}{4}} = \frac{1}{0.475} \approx 2.11$$
Problem 3: Combining Moore's Law and
Amdahl's Law
• A computer is upgraded every two years, doubling the processing
power due to Moore's Law. A program has 50% of its workload
parallelizable. Initially, it takes 20 seconds to run on a single-core
processor. How long will the program take to run after two upgrades,
assuming parallelism is optimally utilized?
• Solution:
• Processing power increase after 4 years: $4 \times P_{init}$ (Moore's law)
• Amdahl's law with $N = 4$ (since the processing power is 4 times as before):
$$S = \frac{1}{(1 - P) + \frac{P}{N}} = \frac{1}{0.5 + \frac{0.5}{4}} = \frac{1}{0.625} = 1.6$$
• The new runtime is $\frac{\text{initial runtime}}{S} = \frac{20}{1.6} = 12.5$ sec.
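A short Python sketch that reproduces this calculation (the function name is illustrative):

```python
def runtime_after_upgrades(t0: float, p: float, upgrades: int) -> float:
    """Combine Moore's Law (processing power doubles per upgrade)
    with Amdahl's Law (speedup capped by the sequential fraction)."""
    n = 2 ** upgrades                        # effective parallel capacity
    speedup = 1.0 / ((1.0 - p) + p / n)      # Amdahl's Law
    return t0 / speedup

print(runtime_after_upgrades(20.0, 0.5, 2))  # 12.5 seconds, as derived above
```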
Conclusion
• These problems demonstrate the interplay of Moore’s Law (hardware
advancements) and Amdahl’s Law (limitations of parallelism) in
energy-efficient and high-performance computing.
Dennard's Scaling
• Dennard's Scaling, introduced in a 1974 paper by Robert Dennard and
colleagues, describes how the power density (power per unit area) of
transistors remains constant as they shrink in size, provided voltage
and current are scaled proportionally. This principle enabled the rapid
miniaturization of transistors while maintaining energy efficiency.
Key points to note
• As transistor dimensions (length, width) scale down by a factor 𝑆,
their:
• Voltage and current scale down by 𝑆,
• Power per transistor scales down by 𝑆 2 ,
• Circuit performance (frequency) improves by 𝑆.
• This scaling allowed processors to become smaller, faster, and more
energy-efficient without generating excess heat.
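These rules can be captured in a few lines of Python; the sketch below (illustrative names, ideal Dennard assumptions) reproduces the three worked examples that follow:

```python
def dennard_scale(voltage: float, power: float, freq: float, s: float) -> dict:
    """Ideal Dennard scaling by a linear factor s: voltage and current
    scale as 1/s, power per transistor as 1/s^2, frequency as s."""
    return {
        "voltage": voltage / s,    # e.g. 2 V -> 1 V for s = 2
        "power": power / s**2,     # e.g. 4 mW -> 1 mW, 10 W -> 2.5 W
        "frequency": freq * s,     # e.g. 1 GHz -> 2 GHz
        "density_gain": s**2,      # transistors per area: x4 for s = 2
    }

print(dennard_scale(2.0, 4e-3, 1e9, 2))
```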
Application to Energy-Efficient Computing
• Improved Energy Efficiency:
• Smaller transistors consume less power, allowing more transistors on a chip
while keeping power consumption manageable.
• Higher Performance per Watt:
• Faster circuits can operate within the same power budget, boosting
computational power without exceeding thermal limits.
• Compact Device Design:
• Scaling enables the creation of portable and energy-efficient devices like
smartphones and wearables.
Application to Energy-Efficient Computing
• Challenges:
• Dennard's scaling broke down around the mid-2000s due to physical
limitations, such as leakage currents and heat dissipation issues. This led to
the shift toward multicore processors, heterogeneous architectures, and
specialized accelerators.
Example 1: Power Scaling
• A transistor operates at a supply voltage of 2V and consumes 4mW of
power. If the transistor is scaled down by a factor of 2, what will its
power consumption be under Dennard's scaling?
• Scaling factor $S = 2$
• Voltage scales as $1/S$: $V' = \frac{2}{2} = 1$ V
• Current scales as $1/S$, so power scales as $1/S^2$: $P' = P \times \frac{1}{S^2}$. Substituting $P = 4$ mW and $S = 2$: $P' = 4 \times \frac{1}{4} = 1$ mW
• The scaled transistor consumes 1 mW of power.
Example 2: Performance Scaling
• A processor operates at a clock frequency of 1 GHz and consumes 10
W of power. If the transistor dimensions are scaled down by a factor
of 2 under Dennard's scaling:
• What is the new clock frequency?
• What is the new power consumption?
• Solution:
• Clock frequency scales as $S$: $f' = f \times S = 1\text{ GHz} \times 2 = 2$ GHz
• Power scales as $1/S^2$: $P' = P \times \frac{1}{S^2} = 10\text{ W} \times \frac{1}{4} = 2.5$ W
Example 3: Device Density
• A chip contains 1 million transistors consuming 10 W of power. If
transistor dimensions are scaled down by 𝑆 = 2, how many
transistors can the chip support while keeping power consumption
constant?
• Solution
• Power per transistor scales as $1/S^2$:
$$P_{\text{new per transistor}} = P_{\text{old per transistor}} \times \frac{1}{S^2}$$
• If total power is held constant, the number of transistors can scale as $S^2$:
$$N' = N \times S^2 = 1\text{ million} \times 2^2 = 4\text{ million}$$
ARM Cortex-A Series (Mobile and Embedded
Systems)
• Overview:
ARM’s Cortex-A series processors are designed for mobile and embedded systems, emphasizing energy
efficiency while maintaining high performance. These processors are used in smartphones, tablets, and IoT
devices.
• Key Features:
• Low Power Consumption: ARM chips use a reduced instruction set computing (RISC) architecture, which
leads to lower power consumption compared to complex instruction set computing (CISC) architectures.
• Dynamic Voltage and Frequency Scaling (DVFS): ARM processors implement DVFS, allowing them to adjust
voltage and frequency according to workload demands, reducing power consumption during idle or low-
performance periods.
• big.LITTLE Architecture: ARM introduced the big.LITTLE architecture, which pairs high-performance cores
with low-power cores. The high-performance cores are used for intensive tasks, while the low-power cores
handle less demanding operations, optimizing energy usage based on workload.
• Impact on Energy-Efficient Computing:
• The ARM Cortex-A processors are widely adopted in mobile devices because they provide a balance
between power efficiency and performance. The big.LITTLE design enables intelligent workload distribution,
achieving significant energy savings without compromising user experience.
What is ARM?
• ARM (originally Acorn RISC Machine, later renamed to Advanced
RISC Machines) is a company and architecture that designs RISC
(Reduced Instruction Set Computing) microprocessors and related
technology. ARM processors are widely used in various devices, from
smartphones and tablets to embedded systems, IoT devices, and
even laptops. ARM is known for its energy-efficient processors, which
provide a balance between performance and power consumption.
Intel’s Low Power Atom Processors (Mobile
and Netbooks)
• Overview:
Intel's Atom processors are designed specifically for energy-efficient computing in netbooks, tablets, and
other mobile devices. The Atom chips are based on Intel's x86 architecture but optimized for low power.
• Key Features:
• Power-Saving Technologies: Atom processors integrate various power-saving technologies, such as
Enhanced Intel SpeedStep Technology (EIST), which adjusts the processor’s voltage and frequency
dynamically based on workload.
• System on a Chip (SoC) Design: Atom processors often come as SoCs, which combine multiple components
(CPU, GPU, memory controller, etc.) onto a single chip, reducing the power overhead from interconnects and
enhancing power efficiency.
• Low-Voltage Operation: Atom chips are built to run at lower voltages (e.g., 0.8V-1.1V) compared to standard
processors, which directly translates into lower energy consumption.
• Impact on Energy-Efficient Computing:
• Atom processors have enabled the development of energy-efficient devices like netbooks and tablets,
offering low power consumption while still supporting the needs of lightweight computing tasks such as
browsing, media consumption, and office applications.
NVIDIA’s Volta Architecture (GPUs for Deep
Learning and AI)
• Overview:
NVIDIA’s Volta architecture (used in Tesla V100 GPUs) focuses on energy-efficient processing for high-performance computing
tasks like deep learning and AI. These GPUs have been a game-changer in terms of energy efficiency in specialized computing.
• Key Features:
• Tensor Cores: Volta architecture includes specialized tensor cores optimized for AI and machine learning tasks. These cores are
designed to execute operations with significantly higher throughput, thus improving performance-per-watt.
• NVLink: NVIDIA’s NVLink interconnect provides high-bandwidth communication between GPUs, allowing for efficient scaling of
workloads. This reduces the need for high power consumption from traditional interconnects like PCIe.
• Deep Learning Efficiency: The Volta GPUs are optimized for matrix operations, which are common in machine learning models. By
enhancing these operations' energy efficiency, the architecture offers better performance with lower power consumption.
• AI-Accelerated Performance: Volta chips support lower precision computations (e.g., FP16, INT8), which require less power than
full-precision floating-point operations but still offer high accuracy for AI workloads.
• Impact on Energy-Efficient Computing:
• The Volta architecture demonstrates how specialized hardware for AI and deep learning can achieve significant performance-per-
watt improvements, making large-scale AI research and applications more energy-efficient. These GPUs have made a significant
impact on industries like healthcare, finance, and autonomous vehicles.
Google’s Tensor Processing Units (TPUs) for
Machine Learning
• Overview:
Google designed Tensor Processing Units (TPUs) as custom accelerators specifically for machine learning workloads. TPUs are
optimized for high-throughput, low-latency, and energy-efficient execution of machine learning models.
• Key Features:
• Application-Specific Integrated Circuits (ASICs): Unlike general-purpose processors like CPUs and GPUs, TPUs are custom-designed
ASICs that are highly optimized for the specific operations required by machine learning, such as matrix multiplications and
convolutions.
• Low Power per Operation: TPUs are designed to consume less power per operation than GPUs or CPUs. They achieve this by using
specialized circuits and lower precision arithmetic (e.g., 8-bit integer operations) which reduce energy consumption while still
delivering excellent performance for ML tasks.
• High Throughput: TPUs are designed to maximize throughput for matrix operations that are typical in machine learning models,
leading to better energy efficiency for large-scale computations.
• Cloud Deployment: Google’s TPUs are available through Google Cloud, allowing for energy-efficient execution of ML models at
scale without requiring on-premises infrastructure.
• Impact on Energy-Efficient Computing:
• TPUs have revolutionized machine learning by delivering high performance with lower power consumption compared to general-
purpose processors. They allow for the scaling of AI workloads with a reduced environmental impact, supporting Google's energy-
efficient cloud computing goals.
Apple’s M1 Chip (System on a Chip for
Personal Computers)
• Overview:
Apple's M1 chip, based on ARM architecture, is designed for MacBooks, iMacs, and other Apple devices. The M1 chip integrates
CPU, GPU, Neural Engine, I/O, and memory into a single chip, allowing for high efficiency.
• Key Features:
• Unified Memory Architecture (UMA): The M1 uses a unified memory architecture, which allows both the CPU and GPU to access
the same memory pool, reducing the need for separate memory modules and minimizing energy overhead.
• Energy-Efficient ARM-Based Design: The ARM architecture used in the M1 chip is known for its low power consumption compared
to Intel's x86-based chips. Apple’s focus on power-efficient design provides long battery life in portable devices while maintaining
high performance.
• Advanced Power Management: The M1 chip uses sophisticated power management techniques, including dynamic power scaling,
which adjusts the power and performance of different cores depending on workload demands.
• Neural Engine: The inclusion of a dedicated neural engine for machine learning tasks helps offload AI workloads from the main
CPU, enabling faster computations with lower power consumption.
• Impact on Energy-Efficient Computing:
• The M1 chip is a prime example of how energy-efficient architecture can deliver substantial performance gains while keeping
power consumption low. Its success in consumer devices like the MacBook Air (with impressive battery life) demonstrates how
hardware design can optimize both power efficiency and performance for everyday computing tasks.
Microsoft Project Catapult (FPGAs for Data
Centers)
• Overview:
Microsoft’s Project Catapult integrates FPGAs (Field-Programmable Gate Arrays) into its data center infrastructure to accelerate
cloud-based services, including Bing search and Azure services, with a focus on energy efficiency.
• Key Features:
• Field-Programmable Gate Arrays (FPGAs): FPGAs are reconfigurable hardware devices that can be optimized for specific
workloads. In this case, FPGAs accelerate algorithms like search ranking, compression, and encryption.
• Low Power, High Throughput: FPGAs are capable of executing tasks with much lower power consumption than traditional CPUs
and GPUs because they are custom-configured for the task at hand.
• Integration with Data Centers: FPGAs are used in Microsoft's Azure data centers, allowing for the optimization of energy
consumption while maintaining high performance for cloud applications.
• Parallelism and Customization: FPGAs are highly parallel, meaning that they can process many tasks simultaneously, and their
configurability allows Microsoft to optimize for a wide variety of workloads with minimal energy waste.
• Impact on Energy-Efficient Computing:
• Project Catapult demonstrates how FPGAs, when integrated into data centers, can provide an energy-efficient alternative to
traditional processors for specific workloads, drastically improving performance-per-watt for cloud-based services.
Conclusion
• These case studies highlight the diversity of approaches to energy-
efficient computing in hardware and architecture. From ARM's mobile
chips to Google's specialized TPUs, each example demonstrates how
specific hardware optimizations—such as custom processors,
dynamic voltage scaling, specialized cores, and system integration—
can lead to significant improvements in energy efficiency while still
achieving high performance. These advancements are crucial for
addressing the increasing energy demands of modern computing,
particularly in areas like mobile devices, AI, and cloud infrastructure.
Google’s Strategies for Managing
Energy Consumption in Data
Centers
https://www.google.com/about/datacenters/efficiency/
Data Center Design and Infrastructure
• Efficient Cooling Systems: Google uses advanced cooling techniques
such as free cooling (using outside air), liquid cooling, and optimized
airflow management.

• Custom Hardware: Google designs custom servers that are energy
efficient and tailored for their workloads.
Software-Driven Optimization
• Workload Management: Google uses machine learning models to
predict and manage workloads. This includes adjusting workloads
dynamically based on energy availability, such as renewable energy
supply.
• Scheduling and Optimization Models:
• ILP and related optimization techniques can be applied to allocate workloads
to servers and schedule tasks in ways that minimize energy consumption or
match renewable energy supply.
• For example, the objective function in an ILP model could minimize total
energy consumption subject to constraints like workload deadlines, server
capacities, and thermal limits.
Use of Renewable Energy
• Energy Matching: Google strives to match its energy consumption
with renewable energy supply in real time. This involves sophisticated
forecasting models for renewable energy generation (e.g., solar and
wind) and aligning data center workloads accordingly.

• Battery and Grid Interaction: When renewable energy is not available,
Google leverages batteries or purchases energy from grids, potentially
applying optimization models to minimize costs and carbon footprint.
ILP and Optimization Approaches
• Typical ILP Model Components for Energy Management:
- Decision Variables: Server on/off states, task assignments, cooling
system settings.
- Objective Function: Minimize total energy consumption or costs.
- Constraints:
- Capacity constraints for servers and cooling systems.
- Task deadlines and dependencies.
- Temperature and thermal limits.
• Variants: Mixed Integer Linear Programming (MILP) models are often
used to handle more complex systems involving both continuous
(e.g., cooling) and discrete (e.g., server states) variables.
AI-Powered Control Systems
• Google has developed AI systems, such as DeepMind, to optimize
data center cooling. These systems use reinforcement learning to
adjust cooling settings in real time, achieving substantial energy
savings.
Open Research and Shared Knowledge
• Google has shared insights into its practices through research papers
and case studies. While the details of specific ILP models may not be
public, general approaches to data center optimization are well-
documented in academic and industry literature.
• https://www.google.com/about/datacenters/efficiency/
Data Center Energy Optimization Example
• Problem Statement
• A data center has:
1. 𝑁 servers, each with a capacity 𝐶𝑖 .
2. 𝑀 tasks, each requiring 𝑇𝑗 computing units.
3. Cooling systems with adjustable power settings 𝑃𝑐 .
4. Renewable energy availability 𝐸𝑟 .
• The objective is to allocate tasks to servers and configure cooling to
minimize total energy consumption while ensuring all tasks are
completed.
Mathematical Formulation
• Decision Variables
1. $x_{ij}$: binary variable; $x_{ij} = 1$ if task $j$ is assigned to server $i$, otherwise $x_{ij} = 0$.
2. $y_i$: binary variable; $y_i = 1$ if server $i$ is active, otherwise $y_i = 0$.
3. $P_c$: continuous variable representing the cooling system power consumption.
Objective Function
• Minimize total energy consumption:
$$E = \sum_{i=1}^{N} y_i \cdot E_s + P_c - E_r$$
where $E_s$ is the energy consumption of an active server.
Constraints
• Task Assignment: Each task must be assigned to exactly one server.
$$\sum_{i=1}^{N} x_{ij} = 1 \quad \forall j \in \{1, 2, \ldots, M\}$$
• Server Capacity: The total task load on a server must not exceed its
capacity.
$$\sum_{j=1}^{M} x_{ij} \cdot T_j \le y_i \cdot C_i \quad \forall i \in \{1, 2, \ldots, N\}$$
Constraints
• Server Activation: A server is active if at least one task is assigned to
it.
$$y_i \ge x_{ij} \quad \forall i, j$$
• Cooling Energy: Cooling power must cover the heat generated by
active servers.
$$P_c \ge \alpha \cdot \sum_{i=1}^{N} y_i \cdot E_s$$
• Renewable Energy: Ensure renewable energy usage doesn't exceed
its availability.
$$P_c - E_r \le 0$$
Interpretation

• This model demonstrates the interplay between:
1. Task allocation to servers.
2. Balancing energy consumption and renewable energy availability.
3. The role of cooling in overall energy efficiency.
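A minimal sketch of this model using the PuLP package (an assumption: PuLP is installed; the toy data below — 2 servers, 3 tasks, and the constants — are illustrative, not from the slides):

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum, value

# Illustrative toy data (not from the slides).
N, M = 2, 3                       # servers, tasks
C = [10, 8]                       # server capacities
T = [4, 3, 2]                     # task demands
E_s, alpha, E_r = 6.0, 0.3, 2.0   # server energy, cooling factor, renewables

prob = LpProblem("datacenter_energy", LpMinimize)
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary) for j in range(M)] for i in range(N)]
y = [LpVariable(f"y_{i}", cat=LpBinary) for i in range(N)]
P_c = LpVariable("P_c", lowBound=0)

# Objective: E = sum_i y_i * E_s + P_c - E_r
prob += lpSum(E_s * y[i] for i in range(N)) + P_c - E_r

for j in range(M):                                   # each task on one server
    prob += lpSum(x[i][j] for i in range(N)) == 1
for i in range(N):
    prob += lpSum(T[j] * x[i][j] for j in range(M)) <= C[i] * y[i]  # capacity
    for j in range(M):
        prob += y[i] >= x[i][j]                      # server activation
prob += P_c >= alpha * lpSum(E_s * y[i] for i in range(N))          # cooling
prob += P_c - E_r <= 0                               # renewable limit

prob.solve()
print([v.value() for v in y], P_c.value(), value(prob.objective))
```

With these numbers the renewable limit makes activating both servers infeasible, so the solver packs all tasks onto one server — a small illustration of how the constraints interact.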
Example problem
• The data center now has the following parameters:
• Energy Consumption:
• Zone 1: Each server consumes 4 kWh.
• Zone 2: Each server consumes 7 kWh.
• Computational Capacity:
• Zone 1: Each server provides 8 units of computational power.
• Zone 2: Each server provides 12 units of computational power.
• Constraints:
• The total computational demand is at least 96 units.
• The total number of servers activated across both zones cannot exceed 14.
• Due to cooling limitations, Zone 2 can activate at most 7 servers.
• The number of servers in each zone must be integers.
• Objective: Minimize the total energy consumption.
Mathematical Formulation
• Let
• $x_1$: number of servers activated in Zone 1
• $x_2$: number of servers activated in Zone 2
• Objective function: minimize $Z = 4x_1 + 7x_2$
• Constraints
• Computational demand: $8x_1 + 12x_2 \ge 96$
• Total servers: $x_1 + x_2 \le 14$
• Zone 2 cooling limitation: $x_2 \le 7$
• Non-negativity and integer constraints: $x_1, x_2 \ge 0$; $x_1, x_2 \in \mathbb{Z}$
Step 1: Solve the relaxed LP
• [Graphical solution slide: the integrality constraints are dropped and the resulting linear program is solved over the continuous feasible region.]
Step 2: Branch and Bound
• [Graphical solution slides: if the LP optimum is fractional, branch on a fractional variable, solve the left and right subproblems, and keep the best integer solution found.]
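A sketch that checks the result with SciPy's MILP solver (an assumption: SciPy ≥ 1.9 is available):

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

c = np.array([4, 7])                                 # minimize Z = 4*x1 + 7*x2 (kWh)
constraints = [
    LinearConstraint([[8, 12]], lb=96, ub=np.inf),   # demand: 8x1 + 12x2 >= 96
    LinearConstraint([[1, 1]], lb=-np.inf, ub=14),   # total servers <= 14
]
bounds = Bounds(lb=[0, 0], ub=[np.inf, 7])           # Zone 2 cooling: x2 <= 7
res = milp(c, constraints=constraints, bounds=bounds,
           integrality=np.ones(2))                   # both variables integer

print(res.x, res.fun)   # solver reports x1 = 12, x2 = 0, Z = 48 kWh
```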
Security and Energy
Public key cryptosystems

We define a cryptosystem by a five-tuple of sets
(𝒫, 𝒞, 𝒦, ℰ, 𝒟) where
• 𝒫 is the set of all possible plaintexts;
• 𝒞 is the set of all possible ciphertexts;
• 𝒦 is the key-space consisting of all possible
keys;
• ℰ and 𝒟 are sets of encryption and decryption
functions, respectively.
Public key cryptosystems

• For each 𝐾 ∈ 𝒦, there exists an encryption
function 𝑒𝐾 : 𝒫 → 𝒞 and
• a decryption function 𝑑𝐾 : 𝒞 → 𝒫 such that
𝑑𝐾(𝑒𝐾(𝑥)) = 𝑥, for all 𝑥 ∈ 𝒫.
Public key cryptosystem
• In public key cryptosystems each key
has a public component and a private
(secret) component. That is, 𝐾 = (𝐾𝑝𝑢𝑏, 𝐾𝑠𝑒𝑐).
• Alice, the sender, computes the
encryption function exclusively using
the knowledge of 𝐾𝑝𝑢𝑏 ; so we write
𝑒𝐾 = 𝑒𝐾𝑝𝑢𝑏 .
Public key cryptosystem
• The cryptosystem must be such that
without the knowledge of 𝐾sec , it is not
possible for anyone, including Alice, to
compute 𝑑𝐾 .
• The receiver, Bob, therefore, may
publish the public part of his key 𝐾𝑝𝑢𝑏 ,
and unlike secret-key cryptography,
need not share any secret key with Alice,
the sender.
Public key cryptosystem
• Alice encrypts a message 𝑚 to 𝑒𝐾𝑝𝑢𝑏(𝑚) and
sends it to Bob. Since computing 𝑑𝐾 from 𝑒𝐾𝑝𝑢𝑏
is computationally infeasible, the encrypted
text is secure from all except the intended
receiver Bob, who has the decryption function
𝑑𝐾 due to the privileged knowledge of 𝐾sec.
• Due to the use of a public key, we call this
model of encryption public-key cryptography
or asymmetric cryptography.
Diffie-Hellman 1976

• Diffie and Hellman proposed the first published public-key
scheme, based on modular exponentiation, now known
as the Diffie-Hellman key exchange protocol.
Diffie-Hellman key exchange protocol
• The Diffie-Hellman key exchange protocol uses
modular exponentiation in ℤ𝑝∗, where 𝑝 is a large
prime, to establish a shared secret key.
• A generator 𝑔 of the group ℤ𝑝∗ serves as a public
parameter.
• The underlying intractable problem is called the
Discrete Logarithm Problem (DLP).
Description the Diffie-Hellman protocol
• Diffie-Hellman Key Exchange Protocol
COMMON INPUT: $(p, g)$: $p$ is a large prime, $g$ is a generator of $\mathbb{Z}_p^*$
OUTPUT: an element of $\mathbb{Z}_p^*$ shared between Alice and Bob
— Alice picks $a \in_U [1, p-1]$; computes $g_a \leftarrow g^a \bmod p$; sends $g_a$ to Bob
— Bob picks $b \in_U [1, p-1]$; computes $g_b \leftarrow g^b \bmod p$; sends $g_b$ to Alice
— Alice computes $k \leftarrow g_b^a \bmod p$
— Bob computes $k \leftarrow g_a^b \bmod p$
For Alice $k = g^{ba} \bmod p$ and for Bob $k = g^{ab} \bmod p$;
since $ab = ba$, the two parties have computed the same value.
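A minimal sketch of one protocol run in Python (toy parameters; a real deployment would use a prime of at least 2048 bits or an elliptic-curve group):

```python
import secrets

def dh_exchange(p: int, g: int) -> int:
    """One run of Diffie-Hellman over Z_p^* with generator g."""
    a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
    b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
    g_a = pow(g, a, p)                 # Alice -> Bob
    g_b = pow(g, b, p)                 # Bob -> Alice
    k_alice = pow(g_b, a, p)           # Alice: (g^b)^a mod p
    k_bob = pow(g_a, b, p)             # Bob:   (g^a)^b mod p
    assert k_alice == k_bob            # both hold g^(ab) mod p
    return k_alice

print(dh_exchange(5, 2))   # tiny demo group, as in the example that follows
```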
Example: Diffie-Hellman protocol
• Let $p = 5$ and $g = 2$. Suppose Alice chooses $a = 2$ and sends
$g_a = 2^2 \bmod 5 = 4$ to Bob.
• Suppose Bob chooses $b = 3$ and sends $g_b = 2^3 \bmod 5 = 3$
to Alice.
• Alice computes $g_b^a = 3^2 \bmod 5 = 4$, and Bob computes
$g_a^b = 4^3 \bmod 5 = 4$.
• In this way Alice and Bob agree upon the secret key $K = 4$.
Security of Diffie-Hellman protocol
• Let $p$ be a prime number, $g \in \mathbb{Z}_p^*$ a
generator, and $h = g^a \bmod p$.
• The problem of finding $a$ given $p$, $g$,
and $h$ is the Discrete Logarithm Problem (DLP).
Security of Diffie-Hellman protocol
• Breaking the protocol directly is the Computational
Diffie-Hellman (CDH) problem: finding $g^{ab} \bmod p$
given $p$, $g$, $g^a \bmod p$, and $g^b \bmod p$, for any
choice of $a$ and $b$ with $1 < a, b < p - 1$.
• CDH is no harder than the DLP, and its assumed
intractability is the theoretical basis of the security
of the Diffie-Hellman protocol.
Rivest-Shamir-Adleman Cryptosystem

• In 1973, Clifford Cocks, working at the
Government Communications Headquarters
(GCHQ) of the United Kingdom, discovered a public-
key cryptosystem based on the difficulty of integer
factorization that is essentially the same as what is
now known as the Rivest-Shamir-Adleman (RSA)
cryptosystem.
Rivest-Shamir-Adleman Cryptosystem

• Cocks was influenced by a classified article on non-
secret encryption by James H. Ellis, also working for the
GCHQ. Another cryptographer at GCHQ, Malcolm J.
Williamson, invented what is now known as Diffie-
Hellman key exchange in 1974.
• All these developments were classified by the British
government until 1997.
Rivest-Shamir-Adleman Cryptosystem
• In 1976, Diffie and Hellman proposed their key
exchange protocol.
• In 1978, Rivest, Shamir, and Adleman proposed RSA.
• The next slide describes the RSA algorithm.
RSA algorithm
Let $n = pq$ where $p$ and $q$ are primes. Let $\mathcal{P} = \mathcal{C} = \mathbb{Z}_n$, and define
$$\mathcal{K} = \{(n, p, q, a, b) : ab \equiv 1 \pmod{\phi(n)}\}$$
For $K = (n, p, q, a, b)$, define
$$e_K(x) = x^b \bmod n$$
and
$$d_K(y) = y^a \bmod n$$
($x, y \in \mathbb{Z}_n$). The values $n$ and $b$ comprise the
public key, and the values $p$, $q$ and $a$ form the
private key. That is, $K_{pub} = (n, b)$ and $K_{sec} = (p, q, a)$.
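A toy numeric sketch of RSA in Python (classic textbook parameters; real keys use primes of 1024+ bits each):

```python
p, q = 61, 53
n = p * q                   # 3233
phi = (p - 1) * (q - 1)     # phi(n) = 3120
b = 17                      # public exponent with gcd(b, phi) = 1
a = pow(b, -1, phi)         # private exponent: a*b = 1 (mod phi) -> 2753

x = 65                      # plaintext in Z_n
y = pow(x, b, n)            # encrypt: e_K(x) = x^b mod n -> 2790
assert pow(y, a, n) == x    # decrypt: d_K(y) = y^a mod n recovers x
```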
Intersection of Diffie-Hellman and Green
Computing
• Efficient cryptographic practices, like optimized implementations of
the Diffie-Hellman key exchange, contribute to green computing by:
• Reducing Computational Load: Algorithms optimized for lower power
consumption reduce the energy footprint.
• Efficient Hardware Usage: Hardware acceleration (e.g., using GPUs or
specialized chips) for cryptographic tasks lowers energy per
operation.
• Sustainable Cryptography: Transitioning to quantum-resistant and
energy-efficient cryptographic schemes aligns with green computing
goals.
Internet of Things (IoT) Devices

•Examples: Smart sensors, wearables, home automation systems, and industrial IoT.
•Why Energy Efficiency Matters:
•These devices often run on small batteries with limited capacity.
•Cryptographic operations like DHKE can be computationally expensive, draining battery life.
•Use Case:
•Securely establishing shared encryption keys for communication between IoT devices.
•Approaches:
•Lightweight cryptographic protocols optimized for constrained devices.
Mobile Devices and Applications
• Examples: Smartphones, tablets, and mobile apps using end-to-end
encryption.
• Why Energy Efficiency Matters:
• Cryptographic operations during secure messaging or secure browsing can
impact battery life.
• Use Case:
• DHKE is used in protocols like Signal for secure key exchange in messaging
apps.
• Approaches:
• Leveraging elliptic curve variants (ECDH) for reduced computational
complexity.
Wireless Sensor Networks (WSNs)
• Examples: Environmental monitoring, health monitoring, and disaster
response systems.
• Why Energy Efficiency Matters:
• Sensors often have limited energy reserves and operate in remote locations.
• Use Case:
• Establishing secure communication channels between sensors.
• Approaches:
• Using optimized DHKE algorithms tailored for minimal energy consumption.
Vehicular Ad Hoc Networks (VANETs)

•Examples: Communication systems in autonomous or connected vehicles.
•Why Energy Efficiency Matters:
•Vehicles need to perform secure key exchanges rapidly and efficiently without taxing onboard systems.
•Use Case:
•Secure communication between vehicles and infrastructure (V2I/V2V).
•Approaches:
•Energy-efficient cryptographic modules integrated into vehicular hardware.
Low-Power Wide Area Networks (LPWANs)

•Examples: LoRaWAN, Sigfox, and other networks for long-range, low-power communication.
•Why Energy Efficiency Matters:
•Devices in LPWANs often operate for years on a single battery.
•Use Case:
•Secure key establishment between endpoints and base stations.
•Approaches:
•Incorporating lightweight DHKE implementations to align with LPWAN constraints.
Secure Embedded Systems
• Examples: Medical implants, smart cards, and RFID systems.
• Why Energy Efficiency Matters:
• These systems have strict size, power, and heat dissipation constraints.
• Use Case:
• Secure initialization and communication in embedded devices.
• Approaches:
• Hardware-based acceleration for energy-efficient DHKE.
Cloud-Connected Devices with Intermittent
Connectivity
• Examples: Remote monitoring stations or agricultural IoT.
• Why Energy Efficiency Matters:
• Devices rely on sporadic network access and limited local energy sources like
solar panels.
• Use Case:
• Secure key exchange to encrypt data uploaded to the cloud.
• Approaches:
• Efficient DHKE combined with session resumption to reduce frequent re-
negotiations.
Satellite Communication Systems
•Examples: CubeSats, nanosatellites, and space probes.
•Why Energy Efficiency Matters:
•Satellites have constrained power budgets and must minimize cryptographic overhead.
•Use Case:
•Secure key exchange for data transmission to and from ground stations.
•Approaches:
•Lightweight DHKE algorithms designed for space-grade hardware.
Optimizing DHKE for Energy Efficiency
•Elliptic Curve Diffie-Hellman (ECDH):
•Uses smaller key sizes for the same security level as traditional DHKE, reducing computational overhead (see the sketch after this list).
•Pre-computation:
•Offloading heavy computations during idle periods.
•Hardware Acceleration:
•Utilizing dedicated cryptographic hardware for low-power operations.
•Protocol Design:
•Minimizing unnecessary key exchanges and handshake steps.
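For the ECDH point above, a minimal sketch using the pyca/cryptography package's X25519 primitive (an assumption: the package is installed; in practice the raw shared secret would be passed through a KDF such as HKDF):

```python
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice_priv = X25519PrivateKey.generate()   # 256-bit curve keys
bob_priv = X25519PrivateKey.generate()

# Each side combines its private key with the peer's public key.
alice_secret = alice_priv.exchange(bob_priv.public_key())
bob_secret = bob_priv.exchange(alice_priv.public_key())

assert alice_secret == bob_secret          # 32-byte shared secret
```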
Optimizing DHKE for Energy Efficiency
• By optimizing DHKE for these scenarios, we can achieve secure
communication while minimizing energy consumption, making it
suitable for modern, resource-constrained systems.
Diffie-Hellman key exchange in secure
systems
• The Diffie-Hellman key exchange (DHKE) is widely used in secure
systems, especially in contexts where energy efficiency is critical due
to resource constraints. The preceding slides list the key areas where
DHKE is used and why energy efficiency matters in each.
Elliptic curve addition
• [Diagram slide: the sum of two points P and Q on an elliptic curve is formed by drawing the line through P and Q, taking its third intersection with the curve, and reflecting that point across the x-axis.]
Comparison of RSA and Elliptic Curve Diffie-
Hellman (ECDH) Computation Costs

Aspect            | RSA (e.g., 2048-bit keys)                | ECDH (e.g., 256-bit keys)
------------------|------------------------------------------|------------------------------------------
Key Size          | 2048 bits                                | 256 bits
Security Level    | 112 bits                                 | 128 bits
Computation Cost  | High (due to large key size)             | Low (smaller key size and operations)
Energy Efficiency | Low (more power needed for computation)  | High (optimized for resource constraints)
Comparison of RSA, ECDH, and NIST Standard PQC Computation Costs
Aspect             | RSA (e.g., 2048-bit keys)        | ECDH (e.g., 256-bit keys)             | NIST Standard PQC (e.g., CRYSTALS-Kyber)
-------------------|----------------------------------|---------------------------------------|------------------------------------------
Key Size           | 2048 bits                        | 256 bits                              | Public: ~800 bytes, Private: ~1632 bytes (Kyber-512)
Security Level     | 112 bits                         | 128 bits                              | 128 bits (Kyber-512)
Computation Cost   | High (large key size)            | Low (smaller key size and operations) | Moderate (matrix-based arithmetic)
Energy Efficiency  | Low (computationally expensive)  | High (optimized for small devices)    | Moderate (larger arithmetic but optimized designs)
Quantum Resistance | Not resistant                    | Not resistant                         | Resistant to quantum attacks
