Cloud Computing Module 3 Qn with Answers
1. Differentiate between centralized and distributed computing.
Ans:-
Aspect | Centralized Computing | Distributed Computing
Definition | All processing is done at a single location. | Processing is spread across multiple systems.
Resource Allocation | Resources are concentrated in a central server or data center. | Resources are distributed among multiple computers or nodes.
Scalability | Limited scalability due to centralized control. | Highly scalable, as additional nodes can be added.
Fault Tolerance | Single point of failure: if the main server fails, everything is affected. | High fault tolerance: if one node fails, others can continue.
Response Time | Can be slower due to congestion at the central system. | Faster response due to parallel processing.
Security | Easier to secure because of centralized control. | More complex security due to multiple nodes.
Cost Efficiency | Lower initial costs but may require expensive maintenance. | Higher initial setup cost but can be cost-effective in the long run.
Usage Examples | Used in traditional mainframe systems and banking servers. | Used in cloud computing, peer-to-peer networks, and blockchain.
Data Storage | All data is stored in a single centralized database. | Data is replicated and distributed across multiple locations.
Flexibility | Less flexible due to centralized management. | More flexible and adaptable to changes.
2. List and explain types of clouds with a neat diagram. 10
1. Public
2. Private
3. Hybrid
1. Public Cloud:
o Resources such as computing power, storage and networking are shared among multiple customers.
o Examples: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP).
o Disadvantages: Limited control over infrastructure, potential security concerns due to shared
environments.
2. Private Cloud:
o Dedicated cloud infrastructure for a single organization, either hosted on premises or by a third-party
provider.
o Ideal for industries with strict regulatory requirements such as healthcare and finance.
3. Hybrid Cloud:
o A combination of public and private cloud infrastructures to optimize performance and cost-efficiency.
o Workloads can be dynamically shifted between private and public clouds based on business needs.
The core of a cloud is the server cluster (or VM cluster). Cluster nodes are used as compute nodes.
A few control nodes are used to manage and monitor cloud activities.
The scheduling of user jobs requires that you assign work to virtual clusters created for users.
The gateway nodes provide the access points of the service from the outside world. These gateway
nodes can also be used for security control of the entire cloud platform.
In physical clusters and traditional grids, users expect a static demand for resources. Clouds are
designed to handle fluctuating workloads, and thus must supply variable resources dynamically.
Private clouds will satisfy this demand if properly designed and managed.
Data-center server clusters are typically built with a large number of servers, ranging from thousands
to millions of servers (nodes). For example, Microsoft has a data center in the Chicago area that
has 100,000 eight-core servers, housed in 50 containers.
In supercomputers, a separate data farm is used, while a data center uses disks on server nodes
plus memory cache and databases.
Data centers and supercomputers also differ in networking requirements, as illustrated in Figure.
Supercomputers use custom-designed high-bandwidth networks such as fat trees or 3D torus
networks. Data-center networks are mostly IP-based commodity networks, such as the 10 Gbps
Ethernet network, which is optimized for Internet access.
Figure shows a multilayer structure for accessing the Internet.
The server racks sit at the bottom, at Layer 2, and they are connected through fast switches (S) as
the hardware core.
The data center is connected to the Internet at Layer 3 through many access routers (ARs) and border
routers (BRs).
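To make the layered structure concrete, here is a small, illustrative Python model of such a topology (the node names and fan-out are assumptions for illustration, not taken from the figure):

# Hypothetical sketch of the multilayer data-center network described above.
# Layer 2: server racks connected through fast switches (S);
# Layer 3: access routers (AR) and border routers (BR) toward the Internet.
topology = {
    "BR1": ["AR1", "AR2"],        # border routers peer with the Internet
    "AR1": ["S1", "S2"],          # access routers fan out to Layer 2 switches
    "AR2": ["S3", "S4"],
    "S1": ["rack01", "rack02"],   # each switch aggregates server racks
    "S2": ["rack03", "rack04"],
    "S3": ["rack05", "rack06"],
    "S4": ["rack07", "rack08"],
}

def path_to_internet(rack, topo):
    # Walk upward from a rack to a border router (illustrative only).
    path, node = [rack], rack
    while True:
        parents = [p for p, children in topo.items() if node in children]
        if not parents:
            return path
        node = parents[0]
        path.append(node)

print(path_to_internet("rack05", topology))  # ['rack05', 'S3', 'AR2', 'BR1']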
4. Explain objectives of cloud computing 5
In traditional IT computing, users must acquire their own computer and peripheral equipment as
capital expenses.
In addition, they have to face operational expenditures in operating and maintaining the computer
systems, including personnel and service costs.
Figure 4.3(a) shows the addition of variable operational costs on top of fixed capital investments
in traditional IT.
The fixed cost is the dominant component, and it can be reduced only slightly as the number of users
increases. However, the operational costs may increase sharply with a larger number of users.
Therefore, the total cost escalates quickly with massive numbers of users.
On the other hand, cloud computing applies a pay-per-use business model, in which user jobs are
outsourced to data centers.
To use the cloud, one has no up-front cost in hardware acquisitions. Only variable costs are
experienced by cloud users, as demonstrated in Figure 4.3(b).
Overall, cloud computing will reduce computing costs significantly for both small users and large
enterprises.
Computing economics does show a big gap between traditional IT users and cloud users.
The saving from not acquiring expensive computers up front relieves a major burden for startup
companies.
The fact that cloud users only pay for operational expenses and do not have to invest in permanent
equipment is especially attractive to massive numbers of small users.
This is a major driving force for cloud computing to become appealing to most enterprises and
heavy computer users.
In fact, any IT users whose capital expenses are under more pressure than their operational
expenses should consider sending their overflow work to utility computing or cloud service
providers.
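As a toy illustration of this economic argument, the following Python sketch compares the two cost models; all rates are invented for demonstration and are not from the text:

# Hypothetical cost comparison: traditional IT (fixed capital plus variable
# operational cost) versus cloud pay-per-use. All numbers are illustrative.
def traditional_cost(users, capex=100_000, opex_per_user=500):
    return capex + opex_per_user * users

def cloud_cost(users, pay_per_use_per_user=300):
    return pay_per_use_per_user * users  # no up-front hardware investment

for n in (10, 100, 1000):
    print(n, traditional_cost(n), cloud_cost(n))
# With these assumed rates the cloud is always cheaper; with heavier
# per-user usage charges the two curves can cross.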
6. Explain cloud ecosystem with a neat diagram. 10
Components: The emergence of Internet clouds has created an ecosystem of providers, users, and
technologies.
Public Cloud Influence: The cloud ecosystem has evolved mainly around public clouds.
Private & Hybrid Clouds: These are not mutually exclusive with public clouds, which play a role in both cloud types.
Remote Access: Private/hybrid clouds allow access via web service interfaces like Amazon EC2.
Cloud Management Level: The cloud manager offers virtualized resources using an IaaS platform.
Virtual Infrastructure (VI) Management Level: The VI manager allocates VMs across multiple
server clusters.
Cloud Virtualization Tools: Eucalyptus and Globus Nimbus for virtualizing cloud infrastructure.
Cloud Access Interfaces: Amazon EC2 WS, Nimbus WSRF, and ElasticHosts REST for accessing cloud
services.
VM Management & Generation: OpenNebula, VMware vSphere for handling Xen, KVM, and
VMware tools.
1. Infrastructure-as-a-Service (IaaS)
o Provides essential computing resources (compute, storage, networking) as virtualized services over the
internet.
Key Characteristics:
Examples:
o Microsoft Azure Virtual Machines: Supports both Windows and Linux environments.
Advantages:
o Offers global reach and reduced latency through distributed data centers.
Challenges:
2. Platform-as-a-Service (PaaS):
o Includes integrated tools such as databases, runtime environments, and development frameworks.
Examples:
o Google App Engine: Scalable application hosting.
o Microsoft Azure App Services: PaaS offering for .NET, Java, Python, and more.
Benefits:
Challenges:
3. Software-as-a-Service (SaaS):
o Delivers software applications over the internet without requiring installation on local machines.
Examples:
Benefits:
Challenges:
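To make the IaaS model above concrete, here is a minimal, hedged sketch that rents a virtual machine from AWS EC2 using the boto3 SDK; the region, AMI ID, and instance type are placeholder assumptions, and valid AWS credentials are required:

# Hypothetical IaaS example: provisioning a VM programmatically on EC2.
# Requires: pip install boto3, plus configured AWS credentials.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")  # assumed region

instances = ec2.create_instances(
    ImageId="ami-12345678",   # placeholder AMI ID; substitute a real one
    InstanceType="t2.micro",  # small, inexpensive instance type
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)

The same pay-per-use idea applies to PaaS and SaaS; only the level of abstraction being rented changes.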
A critical core design of a data center is the interconnection network among all servers in the data-center
cluster. This network design must meet five special requirements: low latency, high bandwidth, low cost,
message-passing interface (MPI) communication support, and fault tolerance (a minimal MPI sketch is given
after the design goals below). Specific design considerations are:
• Network expandability: The interconnection network should be expandable to more servers. Server
containers make installation efficient, needing only power supply, network links, and cooling.
• Making common users happy: The data center should be designed to provide quality service to the
majority of users for at least 30 years.
• Controlled information flow: Information flow should be streamlined. Sustained services and high
availability (HA) are the primary goals.
• Multiuser manageability: The system must be managed to support all functions of a data center,
including traffic flow, database updating, and server maintenance.
• Scalability to prepare for database growth: The system should allow growth as the workload increases.
The storage, processing, I/O, power, and cooling subsystems should all be scalable.
• Reliability in virtualized infrastructure: Failover, fault tolerance, and VM live migration should be
integrated to enable recovery of critical applications from failures or disasters.
• Low cost to both users and providers: The cost to users and providers of the cloud system built over
the data centers should be reduced, including all operational costs.
• Security enforcement and data protection: Data privacy and security defense mechanisms must be
deployed to protect the data center against network attacks and system interrupts, and to maintain data
integrity against user abuse.
• Green information technology: Reducing power consumption and improving energy efficiency are in
high demand when designing and operating current and future data centers.
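As promised above, here is a minimal sketch of the MPI-style communication such an interconnect must support. The mpi4py library is my choice for illustration (the text names no library); it assumes mpi4py is installed and the script is launched with, e.g., mpirun -n 2 python ping.py:

# Point-to-point exchange between two processes, the kind of traffic a
# data-center interconnect must carry with low latency and high bandwidth.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    comm.send({"payload": "hello from node 0"}, dest=1, tag=11)
elif rank == 1:
    msg = comm.recv(source=0, tag=11)
    print("node 1 received:", msg)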
10. Explain different architectural design of compute and storage clouds 15
Enabling Technologies
Generic Architecture
Users interact via web interfaces; provisioning tools dynamically allocate resources.
2. Layered Cloud Architectural Development
Three-Layer Architecture:
Infrastructure Layer (IaaS): Virtualized compute, storage, and network resources.
Platform Layer (PaaS): Runtime environments, tools for development and testing.
Application Layer (SaaS): Cloud applications delivered to end users.
Includes service request examiner, admission control, and autonomic resource manager.
Hardware Virtualization
Storage Virtualization
IaaS Virtualization
Architectural design of compute and storage clouds involves a layered architecture, virtualized
infrastructure, dynamic provisioning, and resilient storage systems. By combining these aspects, cloud
platforms deliver scalable, reliable, and cost-efficient services.
11. Explain the cloud computing Architectural Design Challenges 10
Data lock-in and standardization: Proprietary APIs prevent easy data migration; standardized APIs would
enable interoperability.
Data privacy and security concerns: A public cloud exposes systems to attacks such as malware, DoS, and
hypervisor hijacking.
Unpredictable performance and bottlenecks: Bottleneck removal, wider links, and improved architecture can
help optimize performance.
AWS is a leading cloud service platform that follows the Infrastructure-as-a-Service (IaaS) model to
provide scalable, reliable, and cost-effective computing resources over the internet. Its architecture is
designed to deliver services such as computation, storage, database, networking, and messaging to
developers and businesses.
5. SimpleDB:
o A lightweight, non-relational data store for storing and querying structured data.
6. CloudWatch:
o Monitoring tool that provides metrics like CPU usage, memory, network traffic, etc.
8. Auto Scaling:
o Automatically adds or removes EC2 instances to match the observed load.
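A hedged sketch of reading one of these CloudWatch metrics with boto3 (the instance ID, region, and time window are placeholders):

# Hypothetical example: fetching average CPU utilization for one EC2
# instance from CloudWatch over the last hour.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")  # assumed region

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,               # one datapoint per five minutes
    Statistics=["Average"],
)
for point in stats["Datapoints"]:
    print(point["Timestamp"], point["Average"])

Auto Scaling policies typically consume exactly such metrics to decide when to add or remove instances.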
Microsoft Windows Azure (now Microsoft Azure) is a cloud computing platform introduced in 2008. It
provides a range of services to develop, host, and manage applications through Microsoft-managed data
centers. Azure supports IaaS, PaaS, and SaaS models.
3. Compute Services:
o Virtual machines are deployed using Web Roles (for web hosting) and Worker Roles (for
background processing).
4. Storage Services:
o SQL Azure allows access to relational data using familiar SQL Server tools.
5. Development Environment:
o Supports .NET and other programming frameworks.
6. Application Services:
7. Communication Protocols:
o Azure uses SOAP and REST to integrate with other platforms and services.
Resource provisioning in cloud computing refers to allocating computing resources (like VMs, CPU,
memory) based on user demand. There are three main static methods, illustrated in Figure 4.24.
1. Provisioning for peak load (overprovisioning)
Description: Capacity is provisioned at the expected peak demand, so it sits idle much of the time.
Graph: Shows a flat, high capacity line with the demand curve below it, leading to a shaded waste area.
2. Underprovisioning 1
Description: Resources are provisioned along the average or slightly above the expected demand.
Pros: Better resource utilization than peak provisioning.
Cons: Some user demands go unfulfilled, leading to dissatisfaction and lost revenue for the provider.
Graph: The capacity line cuts through the middle of the demand curve, with a shaded area above capacity
(lost demand).
3. Underprovisioning 2
Description: After repeated poor service, users abandon the platform, and demand shrinks below the fixed
capacity.
Cons: Severe waste when demand declines; users may abandon the service.
Graph: The capacity line is constant while the demand curve declines, creating a large waste area below
capacity.
These static methods are inefficient without elasticity. Both the cloud provider and the user may suffer
from lost revenue and poor experience. Dynamic, adaptive provisioning (like auto-scaling) is preferred in
real-world scenarios.
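A small, self-contained Python sketch of these trade-offs, using invented demand numbers to compute wasted and unmet capacity under the static policies:

# Illustrative comparison of static provisioning policies.
# Demand values are made up for demonstration only.
demand = [40, 55, 70, 90, 65, 50, 30]  # load over time (arbitrary units)

def evaluate(capacity, demand):
    waste = sum(max(capacity - d, 0) for d in demand)  # idle capacity
    unmet = sum(max(d - capacity, 0) for d in demand)  # lost demand
    return waste, unmet

peak = max(demand)                   # provisioning for peak load
average = sum(demand) / len(demand)  # underprovisioning

for name, cap in [("peak", peak), ("average", average)]:
    waste, unmet = evaluate(cap, demand)
    print(f"{name:8s} capacity={cap:.0f} waste={waste:.0f} unmet={unmet:.0f}")
# Peak provisioning wastes capacity; average provisioning loses demand.
# Dynamic auto-scaling adjusts capacity over time to reduce both.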
16. Explain Virtual machine creation and management with a neat diagram 10
Example: Amazon SQS (Simple Queue Service) supports message-based communication between
services.
Services can run independently of each other yet communicate via queues.
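For instance, a minimal boto3 sketch of SQS-based messaging between two decoupled services (the queue name is illustrative and AWS credentials are assumed):

# Hypothetical SQS example: one service enqueues work, another consumes it.
import boto3

sqs = boto3.resource("sqs", region_name="us-east-1")  # assumed region
queue = sqs.create_queue(QueueName="demo-jobs")       # illustrative name

# Producer side: services never call each other directly...
queue.send_message(MessageBody="resize image 42")

# Consumer side: ...they communicate only through the queue.
for msg in queue.receive_messages(WaitTimeSeconds=5):
    print("got:", msg.body)
    msg.delete()  # acknowledge so the message is not redelivered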
Examples:
o Google App Engine (GAE) and Microsoft Azure provide their own APIs.
o OpenNebula
o Amazon EC2
o French Grid’5000
5. Distributed VM Management
17. Explain intercloud exchange of cloud resources through brokering with a neat diagram 10
Application Broker
Negotiates with cloud coordinators to lease resources based on SLA (Service Level Agreement).
Cloud Coordinator
Auctioneer enables price negotiation via market models (e.g., auction or commodity market).
Working Mechanism:
Advantages
Market-driven pricing.
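A toy sketch of this market-driven mechanism: the broker collects price quotes from cloud coordinators and leases from the cheapest provider that meets the SLA (all provider names and numbers are invented):

# Illustrative intercloud brokering: pick the cheapest offer that
# satisfies the SLA. Providers and prices are made up.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str
    price_per_hour: float
    guaranteed_uptime: float  # fraction, e.g. 0.999

offers = [
    Offer("CloudA", 0.12, 0.999),
    Offer("CloudB", 0.09, 0.990),  # cheapest, but weaker uptime guarantee
    Offer("CloudC", 0.10, 0.999),
]

def broker_select(offers, sla_uptime=0.999):
    eligible = [o for o in offers if o.guaranteed_uptime >= sla_uptime]
    return min(eligible, key=lambda o: o.price_per_hour) if eligible else None

best = broker_select(offers)
print("leasing from:", best.provider, "at", best.price_per_hour, "per hour")
# -> leasing from: CloudC at 0.1 per hour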