
UNIT-IV

1. With the help of an example explain the fair queue scheduling algorithm in cloud computing.

Fair queue scheduling is a method used in cloud computing to allocate resources fairly among multiple
users or processes, ensuring that no single user monopolizes the system. It aims to provide each user
with an equitable share of resources while maintaining overall system efficiency.

Example:

Consider a cloud computing environment where three users (User A, User B, and User C) are running
applications that require CPU time. Each user has submitted a task that needs to be executed:

- User A: Task 1 requires 6 CPU time units.

- User B: Task 2 requires 4 CPU time units.

- User C: Task 3 requires 8 CPU time units.

Fair Queue Scheduling Steps:

1. Determine Time Quantum: The scheduler defines a time quantum (e.g., 2 time units). Each task gets to run for this duration in a round-robin fashion.

2. Execution Cycle:

- Cycle 1:

- User A runs for 2 units (4 units remaining).

- User B runs for 2 units (2 units remaining).

- User C runs for 2 units (6 units remaining).

- Cycle 2:

- User A runs for another 2 units (2 units remaining).

- User B runs for its remaining 2 units (0 units remaining, completes).

- User C runs for 2 units (4 units remaining).

- Cycle 3:

- User A runs for its remaining 2 units (0 units remaining, completes).

- User C runs for 2 units (2 units remaining).


- Cycle 4:

- User C runs for its remaining 2 units (0 units remaining, completes).

Result:

- Completion Order: User B (first), User A (second), User C (third).

- Resource Allocation: Each user gets equal CPU time slices in each cycle, ensuring fairness.
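The cycle above can be expressed as a short program. Below is a minimal Python sketch of round-robin fair queuing with a 2-unit quantum; the function name and data layout are illustrative, not from any particular cloud API:

```python
from collections import deque

def round_robin(tasks, quantum=2):
    """Simulate round-robin fair-queue scheduling.

    tasks: dict mapping user name -> required CPU time units.
    Returns (user, completion time) pairs in completion order.
    """
    queue = deque(tasks.items())
    order = []
    clock = 0
    while queue:
        user, remaining = queue.popleft()
        used = min(quantum, remaining)       # run for at most one quantum
        clock += used
        remaining -= used
        if remaining > 0:
            queue.append((user, remaining))  # go to the back of the queue
        else:
            order.append((user, clock))      # task completes
    return order

# The example above: A needs 6 units, B needs 4, C needs 8.
print(round_robin({"User A": 6, "User B": 4, "User C": 8}))
# [('User B', 10), ('User A', 14), ('User C', 18)]
```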

Benefits of Fair Queue Scheduling:

- Fairness: No user can hog the CPU; each gets an equal opportunity.

- Predictability: Users can expect consistent response times.

- Efficiency: Tasks are completed in a timely manner without significant delays for any user.

In summary, fair queue scheduling in cloud computing helps maintain balance among users, preventing
resource starvation and ensuring efficient utilization of available resources.

2. With the help of an example explain the start time fair queuing scheduling algorithm in cloud computing.

Start Time Fair Queuing (STFQ) is a scheduling algorithm used to ensure fair resource allocation in cloud computing. It calculates a virtual start time for each task, allowing the system to schedule tasks in an order that mimics a fair queuing system. This prevents any one task from monopolizing resources and promotes fairness among all tasks.

Example of STFQ Scheduling

Consider a cloud system with three tasks arriving at different times, and each with a certain processing
time requirement.

- Task A: Arrives at time 0 and requires 4 units of processing time.

- Task B: Arrives at time 2 and requires 2 units of processing time.

- Task C: Arrives at time 3 and requires 1 unit of processing time.


The cloud system processes one task at a time.

Steps in STFQ Algorithm

1. Arrival and Virtual Start Time Calculation:

For each task, STFQ calculates a virtual start time based on when it arrives and the current workload.
This start time is calculated as:

Virtual Start Time = max(Finish Time of Previous Task, Arrival Time of Current Task)

This ensures that each task "starts" after any prior tasks have completed or at its arrival time, whichever
is later.

2. Determine Scheduling Order:

The system orders tasks based on their virtual start times. Tasks with earlier virtual start times are
scheduled first, providing a fair distribution of resources.

3. Execute Tasks:

The system processes tasks in the order determined by their virtual start times.

Walkthrough of Example

Let's apply these steps to our example:

- Task A:

- Arrives at time 0 with a processing time of 4.

- Virtual Start Time for Task A = 0 (since it’s the first task).

- Task A finishes at time 4.

- Task B:

- Arrives at time 2 with a processing time of 2.

- Virtual Start Time for Task B = 4 (Task B arrives at 2 but must wait for Task A to finish).

- Task B finishes at time 6.


- Task C:

- Arrives at time 3 with a processing time of 1.

- Virtual Start Time for Task C = 6 (Task C arrives while Task B is executing).

- Task C finishes at time 7.

Resulting Scheduling Order: Based on their virtual start times, the tasks are scheduled as follows:

1. Task A: Virtual Start Time = 0

2. Task B: Virtual Start Time = 4

3. Task C: Virtual Start Time = 6

Execution:

| Task | Arrival Time | Processing Time | Virtual Start Time | Finish Time |
|------|--------------|-----------------|--------------------|-------------|
| A    | 0            | 4               | 0                  | 4           |
| B    | 2            | 2               | 4                  | 6           |
| C    | 3            | 1               | 6                  | 7           |
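A compact sketch of the simplified rule used in this walkthrough (real STFQ maintains a per-flow virtual time; this mirrors the single-server example above):

```python
def stfq_schedule(tasks):
    """Apply Virtual Start Time = max(previous finish, arrival).

    tasks: (name, arrival, processing) tuples in arrival order.
    """
    rows, prev_finish = [], 0
    for name, arrival, work in tasks:
        start = max(prev_finish, arrival)  # wait for earlier work or arrival
        finish = start + work
        rows.append((name, arrival, work, start, finish))
        prev_finish = finish
    return rows

for row in stfq_schedule([("A", 0, 4), ("B", 2, 2), ("C", 3, 1)]):
    print(row)
# ('A', 0, 4, 0, 4)
# ('B', 2, 2, 4, 6)
# ('C', 3, 1, 6, 7)
```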

Benefits of STFQ

- Fairness: Ensures each task is given a fair share of resources.

- Efficiency: Reduces resource monopolization in cloud environments.

- Improved Responsiveness: Smaller tasks are processed quickly, improving response times in mixed
workloads.

STFQ is especially valuable in cloud computing, where multiple tasks with diverse resource needs must
share limited resources effectively. It achieves a balance between fairness and efficiency, enhancing the
overall performance and user experience.

3. Explain some common mechanisms for monitoring and managing resource utilization in a cloud environment.

In a cloud environment, monitoring and managing resource utilization is essential for maintaining
performance, controlling costs, and ensuring scalability. Here are some common mechanisms used:

1. Metrics and Monitoring Dashboards

Cloud Provider Dashboards: Major cloud providers like AWS, Azure, and Google Cloud offer built-in
monitoring tools (e.g., AWS CloudWatch, Azure Monitor, Google Cloud Operations Suite). These tools
provide dashboards to track metrics like CPU, memory usage, disk I/O, and network traffic.

Custom Metrics: In addition to standard metrics, users can define custom metrics to monitor specific
aspects of their applications or infrastructure.

Data Visualization: Dashboards allow visualization of metrics over time, enabling users to spot trends
and anomalies quickly.

2. Alerts and Notifications

Threshold Alerts: Users can set thresholds for key metrics. When a resource exceeds these thresholds
(e.g., CPU usage above 80%), the system triggers an alert.

Event-Based Notifications: Alerts can be configured for certain events, such as instances being
added, removed, or failing health checks.

Integration with Notification Services: These alerts can be integrated with email, SMS, or third-party
services like Slack, PagerDuty, or Opsgenie for real-time notifications.
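A minimal sketch of threshold alerting logic (the metric names and the notify hook are hypothetical; in practice this role is played by managed services such as CloudWatch alarms):

```python
def check_thresholds(metrics, thresholds, notify):
    """Fire a notification for each metric that exceeds its threshold."""
    for name, value in metrics.items():
        limit = thresholds.get(name)
        if limit is not None and value > limit:
            notify(f"ALERT: {name} at {value} exceeds threshold {limit}")

# CPU above 80% triggers an alert; print stands in for email/SMS/Slack.
check_thresholds(
    metrics={"cpu_percent": 91.5, "memory_percent": 62.0},
    thresholds={"cpu_percent": 80.0, "memory_percent": 75.0},
    notify=print,
)
# ALERT: cpu_percent at 91.5 exceeds threshold 80.0
```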
3. Auto Scaling

Horizontal Scaling (Scaling Out/In): Based on predefined policies, additional instances can be added
when demand increases and removed when demand decreases.

Vertical Scaling (Scaling Up/Down): Adjusting the resources of existing instances (e.g., upgrading
CPU or memory) based on demand.

Dynamic Scaling: Some cloud providers offer dynamic scaling, automatically adjusting resources
in real-time based on usage patterns.

4. Cost Management and Optimization Tools

Budgets and Cost Alerts: Cloud providers allow users to set budgets and receive alerts if spending
approaches or exceeds those budgets.

Cost Analysis: Detailed cost reports and breakdowns by service, project, or department help identify
high-cost resources.

Resource Right-Sizing: Tools can recommend optimal instance types and storage options based on
actual usage, potentially reducing costs.

5. Log Management and Analysis

Centralized Logging: Services like AWS CloudTrail, Azure Log Analytics, and Google Cloud Logging allow aggregation of logs from multiple resources, making it easier to analyze and troubleshoot.

Log Analysis and Insights: Logs can reveal patterns in usage or issues (e.g., high error rates), supporting performance optimization and anomaly detection.

Retention Policies: Users can define log retention periods to balance storage costs with the need for
historical data.

6. Resource Tagging and Organization

Tagging Resources: Tags are metadata attached to resources that help in organizing, categorizing, and
identifying resources by purpose, owner, or environment (e.g., production vs. development).

Resource Grouping: Tags and labels enable users to create resource groups, simplifying the monitoring
and management of resources associated with specific projects or departments.

7. Capacity Planning and Forecasting

Historical Analysis: By analyzing past usage trends, cloud users can forecast future demands and
prepare for peak periods, reducing the risk of performance issues.

Predictive Scaling: Some cloud providers offer predictive scaling tools that automatically adjust
resources in anticipation of increased or decreased demand based on historical data.

8. Security and Compliance Monitoring

Security Posture Management: Tools like AWS Security Hub, Azure Security Center, and Google Cloud
Security Command Center assess and report on security and compliance posture.

Access Control Auditing: Monitoring access controls, permissions, and changes helps ensure only
authorized users have access to resources, mitigating security risks.

Using a combination of these mechanisms helps cloud administrators maintain high performance,
optimize costs, and support reliable operations in their cloud environments.

4. Write some common control algorithms and techniques used in task scheduling on a cloud platform.

Task scheduling on cloud platforms is crucial for optimizing resource utilization, improving performance,
and meeting quality-of-service (QoS) requirements. Here are some common control algorithms and
techniques used in cloud task scheduling:

1. Round Robin (RR) Scheduling

Tasks are assigned to resources in a cyclic manner without considering specific task requirements or
resource capabilities.

Pros: Simple and fair in resource allocation.

Cons: Inefficient for tasks with varying workloads or resource needs.

2. First-Come, First-Served (FCFS)

Tasks are scheduled in the order of their arrival.

Pros: Simple and straightforward.

Cons: Not suitable for environments with diverse task sizes, as it can lead to inefficiencies and delays.

3. Shortest Job Next (SJN)

Prioritizes tasks with shorter estimated execution times.

Pros: Minimizes the average waiting time and improves system throughput.

Cons: Estimating job length is difficult, and longer tasks might face starvation.

4. Priority-Based Scheduling

Each task is assigned a priority, and high-priority tasks are scheduled before lower-priority ones.

Pros: Enables QoS and handles critical tasks efficiently.


Cons: Low-priority tasks may be delayed or even starved.

5. Heuristic-Based Scheduling

Uses heuristics (e.g., Min-Min, Max-Min) to match tasks to resources based on task and resource
characteristics.

Min-Min: Selects tasks with the minimum completion time and assigns them to the most suitable
resource.

Max-Min: Prioritizes tasks with the maximum completion time to reduce overall delay.

Pros: Often effective in reducing makespan and improving efficiency.

Cons: Limited to specific workload types and requires careful heuristic selection.
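A sketch of the Min-Min heuristic under simple assumptions (a fixed execution-time matrix, no task dependencies; the data is illustrative):

```python
def min_min(exec_times):
    """Min-Min: repeatedly schedule the task with the smallest earliest
    completion time on the resource that achieves it.

    exec_times[task] = list of execution times, one per resource.
    """
    n = len(next(iter(exec_times.values())))
    ready = [0.0] * n                     # when each resource becomes free
    pending = list(exec_times)
    plan = []
    while pending:
        task, res, finish = min(
            ((t, r, ready[r] + exec_times[t][r])
             for t in pending for r in range(n)),
            key=lambda x: x[2],
        )
        ready[res] = finish               # resource is busy until `finish`
        pending.remove(task)
        plan.append((task, res, finish))
    return plan

# Three tasks on two resources.
print(min_min({"t1": [3, 5], "t2": [1, 2], "t3": [4, 1]}))
# [('t2', 0, 1), ('t3', 1, 1), ('t1', 0, 4)]
```

Max-Min is the same loop with the outer selection taking the task whose minimum completion time is largest.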

6. Genetic Algorithm (GA)

A population-based optimization technique that iteratively evolves a set of possible solutions (task
schedules) to find the best one.

Pros: Suitable for complex and dynamic scheduling environments.

Cons: High computational cost and may converge slowly.

7. Particle Swarm Optimization (PSO)

Uses a population of particles representing potential solutions, which "fly" through the search space,
adjusting their positions based on personal and global best-known solutions.

Pros: Fast convergence and relatively simple to implement.

Cons: Can suffer from premature convergence or local optima issues.

8. Dynamic Voltage and Frequency Scaling (DVFS)

Adjusts the CPU frequency and voltage based on task load to conserve energy.

Pros: Energy-efficient and useful in minimizing operational costs.


Cons: May impact performance if used excessively.

9. Load Balancing Techniques

Redistributes tasks among available resources to ensure balanced workloads.

Common algorithms include:

Least Connections: Assigns tasks to the resource with the least active connections.

Least Load: Prioritizes resources with the lowest CPU/memory utilization.

Pros: Prevents resource bottlenecks and improves response times.

Cons: Overhead from constant monitoring and reallocation.
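A minimal sketch of least-connections assignment (the server names are illustrative):

```python
class LeastConnectionsBalancer:
    """Route each new task to the server with the fewest active connections."""

    def __init__(self, servers):
        self.active = {s: 0 for s in servers}

    def assign(self):
        server = min(self.active, key=self.active.get)  # least-loaded first
        self.active[server] += 1
        return server

    def release(self, server):
        self.active[server] -= 1  # call when the task finishes

lb = LeastConnectionsBalancer(["vm-1", "vm-2", "vm-3"])
print([lb.assign() for _ in range(4)])  # ['vm-1', 'vm-2', 'vm-3', 'vm-1']
```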

10. Hybrid Scheduling Algorithms

Combines two or more scheduling techniques (e.g., GA-ACO, PSO-Min-Min) to leverage their strengths.

Pros: Provides flexibility and better performance across various metrics.

Cons: More complex to implement and may require careful tuning.

Each of these algorithms has strengths and weaknesses, and the choice often depends on the specific
requirements of the cloud environment, such as load patterns, resource diversity, and QoS constraints.

5. Describe how stability plays a role in the effectiveness of a two-level resource allocation architecture.

A server can be modeled as a closed-loop control system, so control theory principles can be applied to resource allocation. Here we discuss a two-level resource allocation architecture, based on control theory concepts, for the entire cloud. Automatic resource management relies on two levels of controllers, one for the service provider and one for the application (see Figure 6.2).

The main components of a control system are the inputs, the control system components, and the outputs. The inputs in such models are the offered workload and the policies for admission control, capacity allocation, load balancing, energy optimization, and QoS guarantees in the cloud. The system components are sensors, used to estimate relevant measures of performance, and controllers, which implement various policies; the output is the resource allocation to the individual applications. The controllers use the feedback provided by the sensors to stabilize the system; stability is related to the change of the output. If the change is too large, the system may become unstable. In our context the system could experience thrashing: the amount of useful time dedicated to the execution of applications becomes increasingly small, and most system resources are occupied by management functions.

There are three main sources of instability in any control system:

1. The delay in getting the system's reaction after a control action.

2. The granularity of the control: a small change enacted by the controllers can lead to very large changes in the output.

3. Oscillations, which occur when the changes of the input are too large and the control is too weak, so that the changes of the input propagate directly to the output.

Two types of policies are used in autonomic systems: (i) threshold-based policies and (ii) sequential decision policies based on Markovian decision models. In the first case, upper and lower bounds on performance trigger adaptation through resource reallocation. Such policies are simple and intuitive but require setting per-application thresholds. Experiments with the two levels of controllers and the two types of policies teach several lessons. First, the actions of the control system should be carried out at a rhythm that does not lead to instability; adjustments should be made only after the performance of the system has stabilized. The controller should measure the time an application needs to stabilize and adapt to the manner in which the controlled system reacts. Second, if upper and lower thresholds are set, instability occurs when they are too close to one another, the variations of the workload are large, and the time required to adapt does not allow the system to stabilize. The control actions consist of the allocation or deallocation of one or more virtual machines; sometimes allocating or deallocating a single VM, as required by one threshold, causes the other threshold to be crossed, which is another source of instability. The sketch below illustrates this effect.
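A toy simulation of threshold-based VM allocation, with made-up load and capacity figures; it only illustrates why the gap between the thresholds matters:

```python
def simulate(load, steps, low, high, capacity=100, vms=2):
    """Threshold policy: add a VM when utilization exceeds `high`,
    remove one when it falls below `low`. Returns VM counts per step."""
    history = []
    for _ in range(steps):
        util = load / (vms * capacity)
        if util > high:
            vms += 1                      # upper threshold crossed
        elif util < low and vms > 1:
            vms -= 1                      # lower threshold crossed
        history.append(vms)
    return history

# Well-separated thresholds: the allocation settles at 3 VMs.
print(simulate(load=170, steps=6, low=0.30, high=0.80))  # [3, 3, 3, 3, 3, 3]
# Thresholds too close: adding one VM drops utilization below the lower
# threshold, so the controller oscillates instead of stabilizing.
print(simulate(load=170, steps=6, low=0.60, high=0.80))  # [3, 2, 3, 2, 3, 2]
```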

6. Explain the mechanisms or protocols that can be used to enable effective coordination among specialized autonomic performance managers.

To enable effective coordination among specialized autonomic performance managers, several mechanisms and protocols can be implemented. These facilitate communication, decision-making, and resource allocation among decentralized systems. Here are some key approaches:

1. Service-Oriented Architecture (SOA)

Utilize SOA to create loosely coupled services that can interact with one another through standard protocols.

- Mechanism: Performance managers can expose their functionalities as services, allowing others to request performance metrics, adjust parameters, or receive alerts.

2. Event-Driven Architecture

Implement an event-driven model where performance managers react to specific events or triggers.

- Mechanism: Use publish-subscribe models, where managers publish events (e.g., performance degradation) and others subscribe to relevant events to take appropriate actions.
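A tiny publish-subscribe sketch (the topic names and handlers are hypothetical; production systems would use a message broker rather than an in-process bus):

```python
from collections import defaultdict

class EventBus:
    """In-process publish-subscribe bus for coordinating managers."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)

bus = EventBus()
# A capacity manager reacts to degradation events published by a monitor.
bus.subscribe("performance.degraded",
              lambda e: print(f"capacity manager: scale up {e['service']}"))
bus.publish("performance.degraded", {"service": "checkout", "latency_ms": 950})
# capacity manager: scale up checkout
```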

3. Multi-Agent Systems

Deploy multiple autonomous agents that represent different performance managers, allowing them to negotiate and collaborate.

- Mechanism: Agents can use protocols standardized by FIPA (the Foundation for Intelligent Physical Agents) for communication, negotiation, and coordination.

4. Distributed Consensus Protocols

Use consensus protocols like Paxos or Raft to ensure all managers agree on the current state and decisions.

- Mechanism: This allows for consistent updates across the performance managers, helping to maintain a unified approach to performance optimization.

5. Load Balancing and Resource Sharing Protocols

Implement protocols for load balancing and resource allocation based on current performance metrics.

- Mechanism: Use algorithms to dynamically distribute workloads among performance managers to optimize resource utilization.

6. Configuration Management Protocols

- Description: Utilize configuration management tools such as Chef or Puppet for managing configurations across systems.

- Mechanism: Ensure that all performance managers operate with consistent configurations, facilitating easier coordination and interoperability.

7. RESTful APIs

- Create RESTful APIs for each performance manager, allowing standardized communication.

- Mechanism: Managers can send and receive HTTP requests to perform actions, retrieve performance data, or update settings.

8. Data Sharing and Synchronization Protocols

Use protocols for data sharing and synchronization to maintain consistent performance metrics across managers.

- Mechanism: Implement database synchronization techniques or use technologies like Apache Kafka for real-time data streaming.

9. Machine Learning for Predictive Coordination

Apply machine learning models to predict performance issues and automate responses.

- Mechanism: Managers can share data and insights to train models that anticipate performance bottlenecks and coordinate actions preemptively.

10. Common Standards and Protocols

Adhere to common standards (e.g., TM Forum Frameworx, ITIL) for performance management.

- Mechanism: Ensure interoperability and a shared understanding among the performance managers, making coordination smoother.

7. How do cloud providers support customization and fine-grained control over resource allocation policies? Explain.

Cloud providers support customization and fine-grained control over resource allocation
policies through a variety of features and tools designed to meet diverse user needs. Here are
some key methods they employ:

1. User-defined Policies and Rules

Custom Policies: Users can define their own resource allocation policies based on specific
requirements, such as performance thresholds, cost constraints, or application priorities.

Tagging and Labels: Resources can be tagged or labeled, allowing for policies that apply to specific groups of resources, making management more granular.

2. Infrastructure as Code (IaC)

Configuration Management Tools: Tools like Terraform, AWS CloudFormation, and Ansible
allow users to define and manage infrastructure using code, enabling repeatable and version-
controlled configurations.

Custom Scripts: Users can write scripts to automate the deployment and configuration of
resources, tailoring their setups to specific operational needs.

3. Autoscaling and Load Balancing

Dynamic Scaling: Providers offer autoscaling features that allow users to automatically adjust
resources based on demand, enabling fine-grained control over how and when resources are
allocated.

Load Balancers: Users can configure load balancers to distribute traffic according to custom
rules, ensuring that resources are used efficiently based on real-time conditions.

4. Service Level Agreements (SLAs) and Quality of Service (QoS)

SLA Customization: Users can negotiate SLAs that specify performance and availability
guarantees tailored to their specific needs.

QoS Settings: Fine-grained QoS controls allow users to prioritize certain applications or services, influencing how resources are allocated and managed.

5. Resource Quotas and Limits

Quota Management: Cloud providers enable users to set quotas on the resources that can be
consumed by different teams or projects, providing control over budget and resource allocation.

Limit Configurations: Users can define limits on CPU, memory, and storage usage for specific
applications or services, helping to avoid over-provisioning and manage costs.
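A minimal sketch of quota enforcement (the team names and limits are made up):

```python
QUOTAS = {"team-a": {"vcpus": 16, "memory_gb": 64}}
USAGE = {"team-a": {"vcpus": 14, "memory_gb": 40}}

def can_allocate(team, request):
    """Reject any request that would push a resource past the team's quota."""
    quota, used = QUOTAS[team], USAGE[team]
    return all(used[k] + v <= quota[k] for k, v in request.items())

print(can_allocate("team-a", {"vcpus": 2, "memory_gb": 8}))  # True
print(can_allocate("team-a", {"vcpus": 4, "memory_gb": 8}))  # False: vCPU cap
```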

6. Cost Management Tools

Cost Allocation Tags: Users can assign cost allocation tags to resources to track expenses based on specific projects or departments, allowing for better financial management.

Budget Alerts: Many providers offer budgeting tools that send alerts when spending approaches defined limits, helping users manage resource allocation proactively.

7. Containerization and Microservices

Container Orchestration: Tools like Kubernetes allow users to define resource requests and
limits for individual containers, enabling precise control over how resources are allocated at a
granular level.

Service Mesh: Service meshes can manage traffic and resource allocation between
microservices, providing additional layers of customization.

8. API Access and Integration

Extensive APIs: Cloud providers offer APIs that allow programmatic access to resource
management, enabling users to build custom applications that can dynamically allocate
resources based on real-time conditions.

Integration with Third-party Tools: Users can integrate cloud resources with third-party
monitoring and management tools to create custom workflows and control mechanisms.

9. Monitoring and Analytics

Real-time Monitoring: Providers offer tools to monitor resource usage in real time, allowing
users to adjust allocations based on current performance and usage patterns.

Analytics and Reporting: Users can analyze historical data to identify trends and optimize their
resource allocation policies over time.

8. Explain how control theory enables dynamic adaptation and real-time decision making in task scheduling on a cloud platform.

Control theory provides a framework for managing and optimizing complex systems, making it particularly valuable for dynamic adaptation and real-time decision-making in task scheduling on cloud platforms. Here’s how control theory enables these capabilities:

1. Feedback Loops

Feedback loops are central to control theory, where the system continuously
monitors its outputs and adjusts inputs based on that feedback.

Application in Cloud Scheduling: In a cloud environment, task scheduling can use feedback from resource utilization (CPU, memory, I/O) and application performance (response times, throughput) to dynamically adjust task priorities or resource allocations.

2. Dynamic Modeling

Control theory involves creating mathematical models of systems to predict behavior under various conditions.

Implementation in Cloud: By modeling workload patterns and resource availability, cloud platforms can anticipate demand spikes or drops, allowing for proactive scheduling of tasks. These models help in simulating various scenarios to find optimal scheduling strategies.

3. Control Algorithms

Types: Control theory utilizes various algorithms, such as PID (Proportional-Integral-Derivative) control, adaptive control, and optimal control.

Real-time Scheduling: Algorithms can adjust scheduling decisions in real time based on current workload conditions, ensuring that resources are allocated efficiently to maintain performance and minimize latency.
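As an illustration, here is a bare-bones PID controller driving CPU utilization toward a target; the gains and readings are arbitrary, and a real autoscaler would map the output to adding or removing capacity:

```python
class PIDController:
    """Discrete PID controller: steers a measurement toward a setpoint."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt=1.0):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Target 60% CPU. A negative output means utilization is above target,
# i.e., the scheduler should shed load or add capacity.
pid = PIDController(kp=0.5, ki=0.1, kd=0.05, setpoint=60.0)
for cpu in [85.0, 78.0, 70.0, 63.0]:
    print(round(pid.update(cpu), 2))
```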

4. Performance Metrics and Control Parameters

Metrics Monitoring: Control theory emphasizes the importance of performance metrics, which are continuously monitored to assess system health.

Adaptive Parameters: By adjusting control parameters (like task execution priorities or resource limits) based on metrics, cloud platforms can dynamically adapt scheduling strategies to meet service level agreements (SLAs) or optimize resource utilization.

5. Stability and Robustness

Stability: Control systems aim for stability, ensuring that the system can handle fluctuations without significant degradation in performance.

Robust Scheduling: Cloud scheduling can be designed to be robust against uncertainties, such as unpredictable workload changes or failures in the underlying infrastructure, by applying control principles that ensure system resilience.

6. Predictive Control

Forecasting Demand: Control theory allows for predictive models that estimate future resource requirements based on historical data and trends.

Proactive Scheduling: By anticipating changes in demand, cloud platforms can preemptively schedule tasks and allocate resources to avoid bottlenecks before they occur.
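A naive moving-average forecast illustrates the idea (the demand figures and per-instance capacity are invented):

```python
import math

def forecast_next(history, window=3):
    """Predict next period's demand as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

hourly_requests = [120, 135, 150, 170, 190]
predicted = forecast_next(hourly_requests)   # 170.0 requests expected
instances = math.ceil(predicted / 100)       # at 100 requests per instance
print(predicted, instances)                  # provision 2 instances ahead
```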

7. Multi-variable Control

Complex Systems: Cloud environments often involve multiple interdependent variables (e.g., various tasks, different resource types).

Integrated Decision Making: Control theory facilitates the management of these variables through coordinated scheduling decisions that optimize overall system performance rather than focusing on individual components.

8. Simulation and Testing

Model Testing: Control theory allows for simulation of scheduling strategies under various conditions to evaluate their effectiveness before implementation.

Iterative Improvement: Scheduling algorithms can be refined through iterative testing and adjustment based on performance feedback, improving their effectiveness over time.
