Cloud Computing QNA
Ans:-
“Cloud is a parallel and distributed computing system consisting of a collection of inter-
connected and virtualized computers that are dynamically provisioned and presented as one or
more unified computing resources based on service-level agreements (SLA) established
through negotiation between the service provider and consumers.”
“Clouds are a large pool of easily usable and accessible virtualized resources (such as
hardware, development platforms and/or services). These resources can be dynamically
reconfigured to adjust to a variable load (scale), allowing also for an optimum resource
utilization.”
“This pool of resources is typically exploited by a pay-per-use model in which guarantees are
offered by the Infrastructure Provider by means of customized Service Level Agreements.”
“Clouds are hardware-based services offering compute, network, and storage capacity where
hardware management is highly abstracted from the buyer, buyers incur infrastructure costs as
variable OPEX, and infrastructure capacity is highly elastic.”
Cloud services are typically categorized into three main types based on the level of
management and the resources provided to the user. These types are:
2. Platform as a Service (PaaS):
a. What it is: PaaS provides a platform that allows customers to develop, run, and
manage applications without dealing with the underlying infrastructure.
b. Examples: Google App Engine, Microsoft Azure App Service, Heroku.
c. Use case: Suitable for developers who want to focus on writing code and
building applications without managing servers, storage, or networking.
3. Software as a Service (SaaS):
a. What it is: SaaS delivers software applications over the internet on a
subscription basis. The provider hosts the software and handles all
maintenance, updates, and security.
b. Examples: Google Workspace, Microsoft 365, Salesforce, Dropbox.
c. Use case: Perfect for businesses or individuals who need ready-to-use software
solutions without worrying about maintenance or infrastructure.
Cloud computing deployment models describe how cloud services are made available to
users. The four primary models are Public Cloud, Private Cloud, Hybrid Cloud, and
Community Cloud. Each model has unique measures and characteristics:
1. Public Cloud
Public clouds are owned and operated by third-party cloud providers and deliver services
over the internet. Examples include AWS, Microsoft Azure, and Google Cloud. Key measures
include:
2. Private Cloud
Private clouds are dedicated to a single organization and can be hosted on-premises or by a
third party. Key measures include:
Aishwarya Sonawane
3. Hybrid Cloud
Hybrid clouds combine public and private clouds, allowing data and applications to move
between them. Key measures include:
4. Community Cloud
Community clouds are shared by organizations with common goals, such as healthcare or
government entities. Key measures include:
6. Multi-Tenancy: Multiple customers share the same infrastructure while keeping their
data and operations isolated for privacy.
7. Automation: Many cloud services are automated for scaling, load balancing, and
failover management, reducing manual effort.
8. Data Redundancy: Cloud services offer backup and redundancy, ensuring high
availability and disaster recovery.
1. Cost Efficiency: Cloud computing reduces capital expenditures by eliminating the need
for physical infrastructure, offering a pay-per-use model.
2. Scalability: Businesses can quickly scale resources to meet growing or fluctuating
demands, avoiding over-provisioning or under-provisioning.
3. Improved Collaboration: Cloud services enable teams to access shared applications,
data, and tools remotely, improving collaboration across locations.
4. Disaster Recovery: Cloud providers offer automatic backup and recovery options,
reducing the risk of data loss and ensuring business continuity.
5. High Availability: Cloud providers typically offer Service Level Agreements (SLAs)
guaranteeing uptime and performance, ensuring reliability.
6. Automatic Software Updates: Cloud providers handle regular software updates and
patches, ensuring users always have the latest features and security enhancements.
7. Security: Advanced security measures like encryption, authentication, and monitoring
help protect data and applications from threats.
8. Global Reach: Cloud services can be accessed from anywhere, with data centers in
multiple locations providing low-latency access.
9. Environmental Sustainability: Cloud providers often use energy-efficient practices in
data centers, lowering the carbon footprint compared to traditional IT infrastructure.
10. Innovation: Cloud computing enables access to advanced technologies like AI,
machine learning, big data analytics, and IoT, promoting innovation in business
operations.
1. Insider Threats:
2. Cybercriminals:
3. Hackers:
4. Nation-State Actors:
5. Competitors:
6. Hacktivists:
7. Cloud Service Providers (CSPs):
• Description: Potential risks from the cloud provider itself due to vulnerabilities or
misconfigurations.
• Examples: Security lapses by the CSP exposing customer data.
8. Third-Party Vendors:
• Description: External vendors or partners with access to cloud systems, posing a risk if
compromised.
• Examples: Data breaches or weak security practices from third parties.
9. Malware and Bots:
• Description: Malicious software or bots that exploit cloud resources for criminal
activities.
• Examples: Ransomware infections, botnets using cloud resources for attacks.
Eavesdropping attacks are insidious because it’s difficult to know they are occurring. Once
connected to a network, users may unwittingly feed sensitive information — passwords,
account numbers, surfing habits, or the content of email messages — to an attacker.
–Tom King
1. Packet Sniffing: Tools like Wireshark allow attackers to capture and analyze packets on
a network, gaining access to sensitive data being transmitted.
2. Man-in-the-Middle Attacks (MITM): In this attack, the attacker intercepts and possibly
alters communications between two parties without their knowledge.
• Encryption: Use of protocols like HTTPS, SSL/TLS, or VPNs to encrypt data in transit,
making it unreadable to attackers.
• Network Segmentation: Isolating sensitive traffic on secure networks, preventing
unauthorized access.
• Strong Authentication: Ensuring that devices and users authenticate properly before
gaining access to the network.
• Firewalls and Intrusion Detection Systems (IDS): To detect and prevent unauthorized
access or malicious activities.
1. Malicious Intent Measures are security strategies designed to protect systems from
harmful activities, such as unauthorized access, data theft, or malware attacks.
2. Firewalls act as the first line of defense, monitoring and controlling incoming and
outgoing network traffic based on security rules to block unauthorized access.
3. Intrusion Detection and Prevention Systems (IDS/IPS) detect and block suspicious
activities, often in real time, preventing potential security breaches.
4. Encryption secures data by making it unreadable to unauthorized users.
5. Multi-Factor Authentication (MFA) strengthens security by requiring multiple forms of
verification (e.g., passwords, biometrics, or one-time codes), making it harder for
attackers to gain access.
6. Antivirus and anti-malware software are essential in identifying and removing
malicious software, such as viruses, ransomware, and spyware, which can
compromise systems and steal data.
7. These measures work together to safeguard systems from potential threats and
mitigate risks from malicious intent.
A Denial of Service (DoS) attack is a malicious attempt to disrupt the normal functioning of a
server, service, or network by overwhelming it with excessive traffic or resource requests. The
goal is to make the targeted system unavailable to legitimate users. DoS attacks exploit system
vulnerabilities, resource exhaustion, or bandwidth flooding to cause disruption.
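A common server-side mitigation against request floods is rate limiting. The sketch below is a minimal token-bucket limiter in Python (an illustration of the concept, not a production limiter): each client gets a bucket of tokens that refills at a steady rate, and a request is served only if a token is available.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: blunts floods by capping the request rate
    while still allowing short bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, then try to spend one.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=3)
results = [bucket.allow() for _ in range(5)]  # burst of 5 back-to-back requests
print(results)  # roughly the first 3 allowed, the rest rejected until refill
```

In practice, such limiters run per client IP or per API key at the load balancer or gateway, so one abusive source cannot exhaust resources for everyone else.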
12. Explain the terms encryption, decryption, plain text and ciphertext
Encryption
• Definition: The process of converting plain text into an unreadable format (ciphertext)
to protect it from unauthorized access.
• Purpose: Ensures data confidentiality during transmission or storage.
• Example:
Plain text: "Hello World"
Ciphertext: "H3LL0#WRL9@"
Decryption
• Definition: The process of converting ciphertext back into its original readable form
(plain text).
• Purpose: Enables authorized users to access the original data.
Plain Text
• Definition: The original, human-readable data before encryption is applied.
• Example: "Hello World"
Ciphertext
• Definition: Encrypted data that is unreadable without the appropriate decryption key.
• Example: Encrypted version of the plain text "Hello" might be "Xyz78@" depending on
the encryption algorithm.
Symmetric Encryption
• Definition: The same key is used for both encryption and decryption.
• Characteristics:
o Faster and more efficient for large datasets.
o Requires secure sharing of the key between parties.
• Example Algorithms: AES, DES, RC4.
• Use Case: Securing data in closed systems like databases.
Asymmetric Encryption
• Definition: Uses a pair of keys—public key for encryption and private key for decryption.
• Characteristics:
o More secure but slower compared to symmetric encryption.
o Eliminates the need to share the private key.
• Example Algorithms: RSA, ECC.
• Use Case: Securing communication over the internet (e.g., SSL/TLS)
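As a rough illustration of the symmetric idea (one shared key used for both directions), here is a toy XOR cipher in Python. This is not a real algorithm like AES; XOR with a repeating key is trivially breakable. It only demonstrates that applying the same key twice round-trips the data.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Applying it a second time with the same key restores the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"secret"
ciphertext = xor_cipher(b"Hello World", key)
plaintext = xor_cipher(ciphertext, key)  # same key decrypts
print(plaintext)  # b'Hello World'
```

Real symmetric ciphers such as AES follow the same shape (one key, encrypt and decrypt are inverses) but are designed to resist cryptanalysis.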
Hashing is a cryptographic process that transforms any given input (such as a password or file)
into a fixed-length string of characters, called a hash. Unlike encryption, hashing is a one-way
function, meaning it cannot be reversed to retrieve the original data. This makes hashing ideal
for purposes like data integrity and password storage.
Hashing ensures that even a small change in the input produces a significantly different hash
output. For instance, changing "Hello" to "hello" would result in a completely different hash
value, a property known as the Avalanche Effect.
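The fixed-length output and the Avalanche Effect are easy to demonstrate with Python's standard hashlib module (SHA-256 shown here as one common choice):

```python
import hashlib

def sha256_hex(text: str) -> str:
    """Return the SHA-256 digest of a string, hex-encoded (always 64 chars)."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

h1 = sha256_hex("Hello")
h2 = sha256_hex("hello")

# Same fixed length, but a one-character change flips most of the digest.
differing = sum(a != b for a, b in zip(h1, h2))
print(h1)
print(h2)
print(differing, "of 64 hex characters differ")
```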
Applications of Hashing
1. A digital signature is a cryptographic tool used to verify the authenticity, integrity, and
origin of digital data.
2. It ensures that a message or document is genuinely from the claimed sender and has
not been altered during transmission. This is achieved through a combination of hashing
and encryption.
3. To create a digital signature, the sender first generates a unique hash of the message
using a hash algorithm like SHA-256.
4. This hash is then encrypted with the sender’s private key, forming the digital signature,
which is sent along with the message.
5. The recipient verifies the signature by decrypting it with the sender’s public key and
comparing the decrypted hash with a newly generated hash of the received message. A
match confirms the message's integrity and authenticity.
6. Digital signatures provide authentication (verifying the sender’s identity), integrity
(ensuring the message is unchanged), and non-repudiation (preventing the sender
from denying their involvement). They are widely used in secure communication, legal
agreements, software verification, and financial transactions.
7. Common algorithms like RSA, DSA, and ECDSA ensure robust security, making digital
signatures a cornerstone of modern digital trust.
11. Verification: The recipient decrypts the digital signature using the sender's
public key to obtain the hash value.
12. Message Integrity: The recipient hashes the original message and compares it
with the decrypted hash value. If they match, the message is verified as intact
and authentic.
13. Key Benefits: Digital signatures ensure authenticity, confirming the sender's
identity, integrity, ensuring the message hasn’t been altered, and non-
repudiation, preventing the sender from denying the message.
14. Applications: Digital signatures are widely used in email security, software
distribution, e-commerce, and legal document signing to provide secure and
trusted digital communication.
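The hash-and-compare verification flow above can be sketched in Python. One caveat: real digital signatures use an asymmetric key pair (RSA, DSA, ECDSA), which the standard library does not provide, so HMAC with a demo key stands in for the private-key signing step purely to illustrate the verification logic.

```python
import hashlib
import hmac

DEMO_KEY = b"demo-signing-key"  # stand-in: real signatures use a private/public key pair

def sign(message: bytes) -> bytes:
    # Step 1: hash the message. Step 2: "sign" the hash (HMAC here stands in
    # for encrypting the hash with the sender's private key).
    digest = hashlib.sha256(message).digest()
    return hmac.new(DEMO_KEY, digest, hashlib.sha256).digest()

def verify(message: bytes, signature: bytes) -> bool:
    # Recipient recomputes the hash of the received message and checks
    # it against the signature; a match confirms integrity and authenticity.
    digest = hashlib.sha256(message).digest()
    expected = hmac.new(DEMO_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

sig = sign(b"Pay $100 to Alice")
print(verify(b"Pay $100 to Alice", sig))  # True: message is intact
print(verify(b"Pay $900 to Alice", sig))  # False: message was altered
```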
Unit – 2
1. Describe how cloud object storage works, including the role of APIs and
scalability. Provide examples of popular object storage systems offered by cloud
providers.
Ans:-
How It Works
Cloud object storage is a method of storing and managing data in a flat structure using
unique identifiers, typically called "objects." Each object includes the data itself, metadata
(key-value pairs that describe the data), and a unique identifier for retrieval.
Unlike traditional file or block storage, object storage does not organize data in a
hierarchy, making it highly scalable and suitable for unstructured data.
APIs (Application Programming Interfaces) play a crucial role in object storage, allowing
developers to interact with the storage system programmatically. These APIs provide
functions for uploading, downloading, deleting, and managing data. Examples include
RESTful APIs like Amazon S3 API or Google Cloud Storage API.
Scalability is another defining feature. Cloud providers distribute data across multiple
servers and regions, allowing the system to scale horizontally. This ensures consistent
performance even as data volume grows.
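The flat keyspace-plus-metadata model can be sketched as a small in-memory class (a hypothetical illustration, not a real provider SDK; the method names loosely mirror S3-style operations):

```python
class ObjectStore:
    """Minimal sketch of object storage: a flat keyspace mapping a unique
    identifier (key) to the data plus its metadata. There is no directory
    hierarchy; a key that looks like a path is just an opaque name."""
    def __init__(self):
        self._objects = {}

    def put_object(self, key, data, metadata=None):
        self._objects[key] = {"data": data, "metadata": metadata or {}}

    def get_object(self, key):
        return self._objects[key]

    def delete_object(self, key):
        del self._objects[key]

store = ObjectStore()
store.put_object("backups/2024/db.dump", b"\x00\x01",
                 {"content-type": "application/octet-stream"})
obj = store.get_object("backups/2024/db.dump")
print(obj["metadata"]["content-type"])
```

Real services expose the same put/get/delete operations over HTTP(S) via RESTful APIs and distribute the keyspace across many servers for scale.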
Examples of Popular Object Storage Systems:
Amazon S3 (Simple Storage Service)
Google Cloud Storage
Microsoft Azure Blob Storage
IBM Cloud Object Storage
2. What are the advantages of cloud object storage and how do these advantages
make it suitable for storing unstructured data? Illustrate your answer with relevant
use cases.
Scalability:
Object storage systems can handle massive amounts of unstructured data, making
them suitable for applications like video streaming, data lakes, and backups.
Cost Efficiency:
Pay-as-you-go pricing models allow organizations to save money by only paying for the
storage they use. Lower-cost tiers (e.g., cold storage) make it economical for
infrequently accessed data.
Durability and Redundancy:
Providers replicate data across multiple physical locations, ensuring high durability and
availability even in the event of hardware failures.
Accessibility and Integration:
APIs provide seamless integration with applications, enabling efficient data retrieval
and storage management.
Use Cases:
Media Streaming: Platforms like Netflix store videos and images for on-demand
streaming.
Big Data Analytics: Storing and processing large datasets in data lakes for machine
learning and analytics.
IoT Applications: Storing sensor data from millions of devices in real time.
Backup and Archival: Long-term storage for disaster recovery and compliance needs.
3. Discuss the challenges associated with cloud object storage. How can these
challenges, such as latency and compliance, be mitigated in practical scenarios?
Challenges:
Latency Issues: Retrieving large objects or accessing data from geographically distant
regions can lead to delays.
a. Mitigation: Use CDNs to cache frequently accessed data closer to end users.
Employ multi-region storage for low-latency access.
Compliance Requirements: Meeting regional regulations (e.g., GDPR, HIPAA) can be
complex.
b. Mitigation: Choose regions that meet compliance standards and implement
encryption for secure storage and transfer.
Security Risks: Unauthorized access to objects or malicious attacks can compromise
sensitive data.
c. Mitigation: Use strong authentication, encryption (TLS for in-transit data, AES-
256 for at-rest data), and access controls.
Data Management Complexity: As datasets grow, organizing and retrieving data
efficiently can become difficult.
d. Mitigation: Use lifecycle policies to archive older data, metadata tagging for
better organization, and analytics tools for insights.
These components ensure a smooth, secure, and consistent user experience across
distributed applications by maintaining state across multiple requests.
Session management in cloud computing faces key challenges due to the distributed nature of
cloud environments. These challenges include:
To address these challenges, two primary techniques are used: session stores and stateless
sessions.
1. Session Stores:
Session stores, such as Redis or Memcached, centralize session data and store it on the
server-side, providing scalable and persistent storage. By distributing session data across
multiple nodes, session stores ensure data consistency and availability across instances.
These stores enhance security by allowing encrypted session data and controlled access,
reducing the risks of session hijacking.
2. Stateless Sessions:
In stateless sessions, session information is embedded in tokens (e.g., JWT), which are sent
with each request. This method eliminates the need for centralized session storage, improving
scalability by allowing cloud systems to scale horizontally without worrying about session
synchronization. While stateless sessions are inherently secure (since data is not stored on the
server), they require proper token management, such as encryption, expiration, and secure
transmission over HTTPS.
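A minimal JWT-style stateless token can be sketched with only the Python standard library. The secret and payload here are illustrative; a real deployment would also include an expiry claim and send tokens only over HTTPS.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-secret"  # hypothetical signing key, kept only on the server

def b64url(data: bytes) -> bytes:
    # URL-safe base64 without padding, as used by JWTs
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def make_token(payload: dict) -> str:
    """Stateless session token: base64url(payload) + "." + base64url(signature)."""
    body = b64url(json.dumps(payload, sort_keys=True).encode())
    sig = b64url(hmac.new(SECRET, body, hashlib.sha256).digest())
    return (body + b"." + sig).decode()

def read_token(token: str) -> dict:
    """Verify the signature, then return the embedded session data."""
    body, sig = token.encode().split(b".")
    expected = b64url(hmac.new(SECRET, body, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = body + b"=" * (-len(body) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

token = make_token({"user": "alice", "role": "admin"})
print(read_token(token)["user"])  # alice
```

Because any server instance holding the secret can validate the token, no shared session store is needed, which is exactly what makes horizontal scaling straightforward.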
The Facebook API (Application Programming Interface) is a set of tools and protocols that
allow developers to interact with and integrate Facebook's features and data into third-party
applications or websites. The API enables developers to access a wide range of Facebook
services, including user profiles, posts, likes, comments, pages, and more. Here’s an overview
of its key components and functionalities:
• Graph API:
• The primary and most important Facebook API is the Graph API, which provides access
to Facebook data such as user profiles, posts, comments, and media, subject to the
permissions an app has been granted. It is a RESTful API, meaning it uses standard
HTTP methods like GET, POST, PUT, and DELETE. The Graph API uses an HTTP URL
structure to represent objects (e.g., users, photos, pages) and their relationships
(e.g., likes, comments).
• Marketing API:
This API enables developers to manage Facebook ads, campaigns, targeting, and reporting. It
allows businesses to create, manage, and optimize their advertising strategies on Facebook,
Instagram, and Messenger.
• Messenger API:
This API is used for integrating Facebook Messenger into applications, enabling chatbots,
customer service interactions, and rich media messages between businesses and users.
• Facebook Login:
• Graph API: The core API that enables access to Facebook data, such as user
profiles, pages, posts, photos, and relationships, using a graph-like structure.
• Key APIs: Includes the Marketing API (ad management), Login API (user
authentication), Messenger API (chatbots and messaging), and Instagram
Graph API (managing Instagram content and insights).
• Authentication: Uses OAuth 2.0 for access tokens, which are required to
interact with API endpoints. Tokens can be user-specific or page-specific.
• Permissions and App Review: Apps must request specific permissions (e.g.,
email, pages_manage_posts) to access data, with advanced permissions
requiring Facebook's review.
• API Requests and Responses: Developers send HTTP requests (e.g., GET /me)
to API endpoints, receiving JSON responses for integration into applications.
• Use Cases: Enables social sharing, ad campaign management, analytics,
chatbot integration, and user authentication for external apps.
• Benefits: Provides seamless integration with Facebook's platform, automation
of tasks, valuable analytics, and access to Facebook's extensive user base.
• Challenges: Includes privacy restrictions, rate limits on API usage, a detailed
app review process, and frequent platform updates that require ongoing
maintenance.
The Google API refers to a set of tools and protocols that enable developers to interact with
various Google services, including Google Maps, Google Drive, Gmail, and Google Cloud
Platform. These APIs allow external applications to access and integrate Google services into
their platforms.
1. Definition:
Google APIs provide access to a wide range of services. For example, the Google Drive API
allows users to store, share, and manage files in the cloud. The Gmail API can retrieve emails,
send messages, and manage email labels.
Google provides APIs for search and analytics. Developers can use Google Custom Search to
add search capabilities to their websites and use Google Analytics API to retrieve data about
website traffic and performance.
Google Cloud APIs offer access to powerful machine learning tools, including Cloud Vision,
Cloud Speech-to-Text, and Natural Language Processing (NLP). These APIs allow developers
to integrate AI and ML features into their applications.
The Google Maps API provides features like displaying maps, geolocation, creating routes, and
getting location data. Developers can integrate real-time location tracking, distances, and
directions into their applications.
Google APIs use OAuth 2.0 for user authentication. Developers must implement this protocol
to allow users to sign in and authorize the app to access their Google services (e.g., Gmail or
Google Drive).
• Data Security:
Google ensures that all data accessed through its APIs is encrypted and stored securely.
Developers are encouraged to follow best practices for securing data and handling user
authentication and authorization.
• Rate Limiting:
Google APIs impose rate limits to prevent abuse and ensure fair usage. This helps avoid
overloading servers and ensures that developers use resources efficiently.
5. Use Cases:
• Website Integration:
Developers use Google Maps API to embed interactive maps, and the Gmail API to add email
functionality to apps. Google Drive API is commonly used for cloud file storage and sharing in
web apps.
• Business Analytics:
Google Analytics API allows businesses to track website traffic and user behavior, providing
insights for improving marketing strategies and optimizing websites.
• AI and Automation:
With Google Cloud APIs, developers can integrate machine learning and artificial intelligence
into applications, such as image recognition, language translation, and speech-to-text
features.
Amazon SQS is a fully managed queuing service that helps decouple components of a
distributed system.
• Use Dead-Letter Queues (DLQs): DLQs capture failed messages that can't be processed
after multiple attempts. This ensures messages aren’t lost and helps in troubleshooting
errors.
The Visibility Timeout ensures that a message isn’t picked up by another consumer while it’s
being processed. Set this timeout based on how long it takes to process a message to avoid
duplicates.
Long Polling reduces unnecessary calls by allowing consumers to wait for messages to arrive,
instead of constantly checking for new ones. This improves efficiency and reduces costs.
• Use Batching:
Send and receive messages in batches to increase performance and reduce API calls. This is
efficient and saves both time and costs.
If the order of messages is important, use FIFO Queues to ensure messages are processed in
the exact order they were sent.
• Ensure Security:
Use IAM roles to control access to SQS queues, and enable Server-Side Encryption (SSE) to
protect the messages and data.
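The visibility-timeout idea can be sketched with a small in-memory queue in Python (an illustration of the concept only, not the SQS API): a received message is hidden from other consumers until the timeout elapses or it is explicitly deleted.

```python
import time

class VisibilityQueue:
    """Sketch of an SQS-style queue: receiving a message hides it for
    `visibility_timeout` seconds so no other consumer processes it twice."""
    def __init__(self, visibility_timeout: float):
        self.visibility_timeout = visibility_timeout
        self._messages = {}   # id -> (body, visible_at timestamp)
        self._next_id = 0

    def send(self, body: str) -> int:
        self._next_id += 1
        self._messages[self._next_id] = (body, 0.0)  # visible immediately
        return self._next_id

    def receive(self):
        now = time.monotonic()
        for msg_id, (body, visible_at) in self._messages.items():
            if visible_at <= now:
                # Hide the message while this consumer processes it
                self._messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id: int):
        # Called after successful processing, like DeleteMessage in SQS
        self._messages.pop(msg_id, None)

q = VisibilityQueue(visibility_timeout=30)
q.send("order-created")
first = q.receive()   # message is now invisible to other consumers
second = q.receive()  # None: nothing visible until the timeout expires
print(first, second)
```

If the consumer crashes before calling delete, the message reappears after the timeout, which is also how repeated failures eventually land in a dead-letter queue.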
RabbitMQ is an open-source message broker known for its reliability and flexible messaging
protocols.
To avoid downtime, use mirrored queues across multiple RabbitMQ nodes. This ensures that if
one node fails, messages are still accessible from other nodes.
Organize messages based on different tasks or types of work by creating multiple queues. This
helps in better managing workloads and improving performance.
• Message Acknowledgment:
Always use message acknowledgments to ensure messages are not lost. This way, if a
message is not processed correctly, it can be retried.
Set appropriate message TTL (time-to-live) and maximum queue lengths to prevent queues
from growing too large and consuming excessive resources.
Continuously monitor RabbitMQ performance using metrics to identify any issues early. Scale
RabbitMQ clusters as needed to handle higher loads or workloads.
10. What are ACI, OAuth, OpenID, and XACML for securing data in the cloud?
1. Application Centric Infrastructure (ACI)
ACI is a networking framework developed by Cisco that provides security and automation for
data center applications. It uses a policy-driven approach to secure and manage applications
within cloud environments. By defining policies for how traffic flows between applications, ACI
ensures that only authorized traffic is allowed, improving security and performance.
• Key Points:
o Controls how applications communicate with each other.
o Provides network security through policies.
o Offers automation and scalability.
2. OAuth (Open Authorization)
OAuth is an open standard for access delegation. It allows users to grant third-party
applications limited access to their resources without sharing their passwords. For example,
you can log into a website using your Google or Facebook account without giving that site your
credentials.
• Key Points:
o Used for safe third-party access to resources.
o Does not require sharing passwords.
o Commonly used for social logins and API access.
3. OpenID
OpenID is an authentication protocol that allows users to log in to multiple websites using a
single set of credentials (like Google or Facebook). It simplifies the login process, improving
user experience while maintaining security.
• Key Points:
o Provides a single sign-on (SSO) experience.
o Often used alongside OAuth.
o Reduces the need for multiple passwords.
4. XACML (eXtensible Access Control Markup Language)
XACML is a standard used for defining access control policies in a cloud environment. It
allows organizations to define who can access resources and under what conditions. XACML is
used to create detailed and flexible access policies for both users and systems.
• Key Points:
o Defines access control policies in a structured way.
o Used to decide who can access what and when.
o Supports fine-grained access control.
11. Explain in detail about securing data for transport in the cloud.
• In-transit Encryption: Data is encrypted during transmission using protocols
like TLS (Transport Layer Security) or SSL (Secure Sockets Layer) to protect it
from unauthorized access or tampering while traveling over networks.
• Virtual Private Networks (VPNs): Secure tunnels or VPNs are used to create
encrypted communication channels between users and cloud services,
ensuring data remains protected during transfer.
• Data Integrity: Techniques like hashing and message authentication codes
(MACs) are applied to verify that the data has not been altered or tampered with
during transport.
• Multi-factor Authentication (MFA): MFA adds an extra layer of security by
requiring multiple forms of verification before allowing data transfer, ensuring
only authorized users can send or receive data.
• Access Control: Robust access control mechanisms ensure that only
authorized systems or users can initiate or receive data transfers, further
securing data in transit.
• Compliance with Security Standards: Cloud providers often follow industry-
standard certifications and frameworks like ISO 27001, HIPAA, or GDPR to
ensure best practices in securing data transport.
• End-to-End Security: A combination of encryption, secure protocols,
authentication, and compliance measures ensures both the confidentiality and
integrity of data during transport in the cloud.
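The in-transit encryption point can be made concrete with Python's standard ssl module: create_default_context() enables certificate verification and hostname checking out of the box, which is the baseline HTTPS relies on.

```python
import ssl

# Client-side TLS context with certificate verification enabled
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: peer certificate is checked
print(context.check_hostname)                    # True: hostname must match the cert

# Usage sketch (not executed here): wrap a socket before sending data.
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
```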
The concept of scalability in applications and cloud services refers to the ability to handle
an increasing amount of work or traffic by adjusting resources efficiently, without
compromising performance. It ensures that an application or service can grow to meet
higher demands or decrease when the demand is low, all while maintaining optimal
performance.
In cloud computing, scalability allows for dynamic resource allocation, either by adding
more resources (horizontal scaling) or increasing the capacity of existing ones (vertical
scaling), ensuring that the system can accommodate changing workloads without
downtime or degradation in service.
Key Points:
Example: Consider you are the owner of a company whose database was small in the early
days; as time passed your business grew and the size of your database increased. In this
case you simply request your cloud service vendor to scale up your database capacity to
handle the heavier workload.
Unit – 3
DevOps is a set of practices that combine software development (Dev) and IT operations (Ops)
to shorten the development lifecycle and provide continuous delivery with high software
quality. DevOps promotes collaboration between development, operations, and other
departments to improve the overall efficiency of an organization.
Benefits of DevOps:
Challenges of DevOps:
• Cultural Shift: Resistance from teams used to traditional workflows can slow down
adoption.
• Integration Complexity: DevOps tools often need to be integrated with existing
systems, which can be complex.
• Security Risks: If not implemented correctly, automation could introduce security
vulnerabilities.
• Tool Overload: Choosing the right tools and managing them effectively can be
challenging.
• Skills Gap: Requires skilled professionals who understand both development and
operations.
Containerization is the process of packaging an application along with all its dependencies
into a single unit called a container, which can run consistently across any computing
environment. Docker is one of the most popular tools used for containerization.
Benefits of Docker:
• Isolation: Each container runs in its own isolated environment, preventing conflicts
between dependencies.
• Scalability: Easily scale applications by running multiple container instances.
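A typical Dockerfile illustrates the packaging step described above. This is a sketch for a hypothetical Node.js service; the base image, port, and entry file are assumptions, not a prescription.

```dockerfile
# Hypothetical Node.js service; image tag, port, and file names are illustrative.
FROM node:20-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and document the listening port
COPY . .
EXPOSE 3000

CMD ["node", "server.js"]
```

Building this file with `docker build` produces an image that bundles the runtime, dependencies, and code, so the container behaves the same on a laptop or in the cloud.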
Kubernetes and Terraform are essential tools for automating the management of cloud
applications and infrastructure, but they serve different purposes.
Together, Kubernetes and Terraform help streamline cloud application and infrastructure
management by providing automation, scalability, and consistency.
Automating infrastructure on the cloud involves using tools and scripts to provision,
configure, and manage cloud resources automatically, eliminating manual processes.
1. Infrastructure as Code (IaC): Tools like Terraform and AWS CloudFormation allow
defining and managing infrastructure through code, making deployments repeatable
and consistent.
2. Provisioning: Automatically creating cloud resources (servers, storage, networks)
based on predefined configurations.
3. Configuration Management: Tools like Ansible and Puppet automate the setup and
maintenance of cloud instances, ensuring consistency across environments.
4. Scaling: Cloud resources can be automatically scaled up or down based on demand
using auto-scaling features.
5. Monitoring and Logging: Automated monitoring systems track resource performance
and trigger actions (like scaling or sending alerts) when thresholds are met.
Benefits:
Popular tools include Terraform, CloudFormation, Ansible, and Kubernetes, each playing a
role in automating different aspects of cloud infrastructure.
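The IaC idea can be sketched in Terraform's HCL. This is a minimal example assuming AWS; the region, bucket name, and AMI ID are placeholders, and bucket names must be globally unique.

```hcl
# Minimal IaC sketch; provider, names, and IDs are illustrative.
provider "aws" {
  region = "us-east-1"
}

# An object-storage bucket declared as code
resource "aws_s3_bucket" "app_data" {
  bucket = "my-app-data-bucket-example"
}

# A small virtual machine declared as code
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"  # placeholder AMI ID
  instance_type = "t3.micro"
}
```

Running `terraform apply` against such a file provisions the declared resources, and re-running it reconciles any drift, which is what makes deployments repeatable and consistent.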
5. Explain app deployment and orchestration using ECS, ECR & EKS.
In AWS, ECS, ECR, and EKS are services designed for deploying, managing, and orchestrating
applications, particularly in containerized environments. These tools work together to
streamline application deployment and management.
1. Amazon ECS (Elastic Container Service)
• Overview: ECS is a fully managed container orchestration service that allows you to run
and manage Docker containers on a scalable and reliable infrastructure.
• App Deployment with ECS: You can deploy containerized applications using ECS by
defining tasks (the smallest deployable unit of an application) and services (long-
running tasks that maintain a specified number of task instances). ECS handles
provisioning the underlying EC2 instances and managing containerized workloads.
• Orchestration: ECS automatically schedules containers across a cluster of EC2
instances and supports load balancing, scaling, and managing the container lifecycle.
2. Amazon ECR (Elastic Container Registry)
• Overview: ECR is a fully managed Docker container registry service that makes it easy
to store, manage, and deploy Docker container images.
• Integration with ECS: Once your container images are stored in ECR, ECS can pull
those images to deploy and run containers. ECR ensures that images are securely
stored and accessible by ECS tasks.
• Benefits: ECR eliminates the need to manage your own container registry and
integrates seamlessly with ECS, simplifying the workflow from image storage to
container orchestration.
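ECR image URIs follow a fixed pattern, `<account>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>`, which is the address ECS pulls images from. The helper below just assembles that string; the account ID and repository name are illustrative:

```python
def ecr_image_uri(account_id, region, repository, tag="latest"):
    """Build the registry URI that ECS pulls a container image from."""
    return f"{account_id}.dkr.ecr.{region}.amazonaws.com/{repository}:{tag}"

# A typical workflow builds an image locally, tags it with this URI,
# and pushes it to ECR:
#   docker build -t web-app .
#   docker tag web-app <uri>
#   docker push <uri>
uri = ecr_image_uri("123456789012", "us-east-1", "web-app", "v1")
print(uri)  # 123456789012.dkr.ecr.us-east-1.amazonaws.com/web-app:v1
```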
3. Amazon Elastic Kubernetes Service (EKS):
• Overview: EKS is a fully managed Kubernetes service that makes it easy to run
Kubernetes clusters on AWS without needing to manage the Kubernetes control plane.
• App Deployment with EKS: EKS simplifies deploying, managing, and scaling
containerized applications using Kubernetes. Developers can deploy applications by
defining pods (the smallest deployable unit) and deployments within Kubernetes.
• Orchestration: EKS provides powerful orchestration features such as automated
scaling, self-healing (restarting failed containers), rolling updates, and service discovery
for containerized applications.
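On EKS, the deployment unit is an ordinary Kubernetes Deployment manifest. Expressed as a Python dictionary (the same structure you would write as YAML for `kubectl apply`), a minimal one looks like this; the names and image are illustrative:

```python
# Minimal Kubernetes Deployment: 3 replicas of one web container.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-app"},
    "spec": {
        "replicas": 3,                                 # keep 3 pods running
        "selector": {"matchLabels": {"app": "web"}},
        "template": {                                  # pod template
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web",
                     "image": "nginx:latest",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}
```

If a pod crashes, the Deployment controller notices the replica count has dropped below 3 and starts a replacement, which is the self-healing behaviour described above.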
6. Explain AWS Elastic Beanstalk and the steps to deploy an application with it.
AWS Elastic Beanstalk is a fully managed service designed to simplify the deployment and
management of applications in the cloud. It abstracts the complexity of infrastructure
provisioning, allowing developers to focus solely on writing and deploying code. Beanstalk
supports multiple programming languages, including Java, .NET, Node.js, Python, Ruby, PHP,
and Docker containers.
Steps to deploy an application with Elastic Beanstalk:
1. Create an Application:
a. Begin by creating an application in the Elastic Beanstalk Console. An
application is a logical grouping of environments where different versions of the
app can be deployed. Multiple environments can exist for different purposes
(e.g., production, staging).
2. Upload Application Code:
a. Upload your application code (e.g., a Java WAR file or a zipped Node.js or
Python application) through the AWS Management Console, the AWS CLI, or a
CI/CD pipeline (such as AWS CodePipeline).
b. Beanstalk supports a variety of code formats, depending on the application
stack you are using.
3. Select the Platform:
a. Elastic Beanstalk supports a wide range of pre-configured platforms (e.g.,
Tomcat for Java, Nginx for Node.js, .NET for Windows). You can choose the
appropriate platform for your application.
4. Choose Environment Type:
a. Beanstalk offers two types of environments:
i. Web Server Environment: Used for applications that handle HTTP(S)
requests, typically front-end applications.
ii. Worker Environment: Used for applications that process background
tasks asynchronously (e.g., queue workers).
5. Configuration and Customization:
a. You can configure your environment settings, including instance types, scaling
policies, database configurations, and networking settings.
b. Beanstalk allows custom configuration using configuration files
(e.g., .ebextensions) to manage settings beyond the default.
6. Deployment:
a. Elastic Beanstalk automatically handles the deployment of your application to
the environment. It provisions all the necessary AWS resources such as EC2
instances, load balancers, auto-scaling groups, RDS databases, and VPCs (if
needed).
b. Once deployed, Beanstalk continuously monitors your application's health,
automatically replacing instances if they become unhealthy.
7. Scaling:
a. Beanstalk provides auto-scaling based on the number of requests or other
custom metrics. This allows your application to scale up during high demand
and scale down when traffic decreases.
8. Monitoring and Logs:
a. Beanstalk integrates with Amazon CloudWatch to provide real-time monitoring
of application performance, CPU utilization, memory, and other metrics.
b. Logs, such as application logs and environment logs, can be accessed via the
Beanstalk console or through AWS CloudWatch Logs.
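Steps 5 and 7 above are usually expressed as Beanstalk *option settings*. The list below sketches the kind of settings involved; the namespaces shown (`aws:autoscaling:asg`, `aws:autoscaling:launchconfiguration`) are Beanstalk's standard ones, while the values are illustrative:

```python
# Beanstalk option settings: instance size plus auto-scaling bounds.
option_settings = [
    # Instance size for the environment's EC2 instances (step 5a)
    {"Namespace": "aws:autoscaling:launchconfiguration",
     "OptionName": "InstanceType", "Value": "t3.micro"},
    # Auto-scaling bounds: Beanstalk keeps between 1 and 4 instances (step 7)
    {"Namespace": "aws:autoscaling:asg",
     "OptionName": "MinSize", "Value": "1"},
    {"Namespace": "aws:autoscaling:asg",
     "OptionName": "MaxSize", "Value": "4"},
]
```

The same settings can also live in a `.ebextensions/*.config` file inside the application bundle, which is how step 5b customizes an environment beyond the defaults.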
7. Explain AWS OpsWorks.
AWS OpsWorks is a configuration management service that automates the deployment and
management of applications on AWS. It uses tools like Chef and Puppet to ensure consistent
server configurations across environments. In OpsWorks, resources are organized into stacks,
which are collections of infrastructure resources like EC2 instances and load balancers, and
layers, which define the roles of those resources (e.g., web server or database).
OpsWorks automates the configuration process using recipes (Chef) or manifests (Puppet) to
install software and manage settings. It also supports lifecycle events (such as Setup, Deploy,
and Configure) that automate tasks throughout the application’s lifecycle. With Auto Scaling,
OpsWorks adjusts the number of instances as traffic changes. Additionally, it integrates with
CloudWatch to monitor performance and logs.
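The lifecycle events can be pictured as an ordered list of hooks, each of which runs whatever recipes a layer has attached to it. This small sketch is purely illustrative (plain Python, not the OpsWorks API), and the recipe names are hypothetical:

```python
# OpsWorks-style lifecycle: each event triggers the recipes a layer
# has registered for it.
LIFECYCLE_EVENTS = ["Setup", "Configure", "Deploy", "Undeploy", "Shutdown"]

web_layer_recipes = {           # hypothetical Chef recipes for a web layer
    "Setup": ["apache2::install"],
    "Deploy": ["myapp::deploy"],
}

def recipes_to_run(layer_recipes, event):
    """Return the recipes a layer runs when a lifecycle event fires."""
    return layer_recipes.get(event, [])

print(recipes_to_run(web_layer_recipes, "Deploy"))  # ['myapp::deploy']
```

In a real stack, OpsWorks fires these events automatically, e.g., Setup when an instance boots and Deploy when a new application version is pushed.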
8. Explain the design of a RESTful Web API.
A RESTful Web API is an interface that allows communication between systems over HTTP. It
follows the principles of Representational State Transfer (REST) to provide simple and
scalable solutions for web applications.
Key Components:
• Resources: Every entity (e.g., a user or an order) is a resource identified by a URI
such as /users/1.
• HTTP Methods: Standard verbs map to operations: GET (read), POST (create),
PUT/PATCH (update), DELETE (remove).
• Representations: Resources are exchanged in a standard format, usually JSON
(sometimes XML).
• Statelessness: Each request carries all the information the server needs; no client
session is stored on the server.
• Status Codes: Responses use standard HTTP codes (200 OK, 201 Created, 404 Not
Found, etc.).
Steps to Design:
1. Identify the resources the API exposes and give each a noun-based URI (e.g.,
/users rather than /getUsers).
2. Map the HTTP methods onto those URIs for create, read, update, and delete
operations.
3. Choose the representation format (typically JSON) and define the request and
response bodies.
4. Return appropriate status codes and keep the interface stateless.
5. Version the API (e.g., /v1/users) so it can evolve without breaking clients.
Example:
• GET /users – list all users
• GET /users/1 – fetch user 1
• POST /users – create a new user
• PUT /users/1 – update user 1
• DELETE /users/1 – delete user 1
This structure makes the API simple, easy to understand, and scalable.
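These conventions can be sketched with a tiny in-memory handler — no web framework, just the REST mapping of HTTP methods onto a users collection. The resource name and data are made up for illustration:

```python
# Minimal in-memory REST-style resource: (method, path) -> action on data.
users = {1: {"id": 1, "name": "Asha"}}
next_id = 2

def handle(method, path, body=None):
    """Dispatch a (method, path) pair the way a REST API would."""
    global next_id
    parts = path.strip("/").split("/")
    if parts[0] != "users":
        return 404, None                              # unknown resource
    if method == "GET" and len(parts) == 1:
        return 200, list(users.values())              # GET /users -> list all
    if method == "POST" and len(parts) == 1:
        user = {"id": next_id, **body}                # POST /users -> create
        users[next_id] = user
        next_id += 1
        return 201, user
    if method == "GET" and len(parts) == 2:
        user = users.get(int(parts[1]))               # GET /users/1 -> one user
        return (200, user) if user else (404, None)
    if method == "DELETE" and len(parts) == 2:
        return 200, users.pop(int(parts[1]), None)    # DELETE /users/1
    return 405, None                                  # method not allowed

print(handle("GET", "/users"))
print(handle("POST", "/users", {"name": "Ravi"}))
```

Notice that the URI names the resource while the method names the operation, which is exactly the separation REST relies on.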
9. Explain in brief the PubNub API for IoT-to-cloud and IoT-to-mobile communication.
PubNub is a real-time messaging platform that enables communication between IoT devices,
the cloud, and mobile devices. It provides low-latency, scalable, and secure data streaming
solutions for IoT applications. Here's how it works:
• Publish/Subscribe Model: Devices publish messages to named channels, and any client
(a cloud service or a mobile app) subscribed to a channel receives those messages in
real time.
• IoT to Cloud: Sensors publish readings to a channel; cloud back-ends subscribe to
collect, store, and analyze the data.
• Cloud/Mobile to IoT: Mobile apps or cloud services publish commands to a device's
channel to control it remotely.
• Security and Reliability: PubNub provides encrypted transport (TLS), access control
on channels, and message buffering so devices that go briefly offline can catch up.
PubNub simplifies communication between IoT devices and cloud infrastructure, making it
easy to develop scalable, secure, and real-time IoT applications.
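PubNub's core abstraction is publish/subscribe over named channels. The toy class below simulates that pattern in plain Python; the real PubNub SDK (the `pubnub` package) handles keys, networking, and delivery, none of which appears here:

```python
# Toy publish/subscribe broker, imitating the channel model PubNub uses
# between IoT devices (publishers) and cloud/mobile apps (subscribers).
class Broker:
    def __init__(self):
        self.subscribers = {}        # channel name -> list of callbacks

    def subscribe(self, channel, callback):
        self.subscribers.setdefault(channel, []).append(callback)

    def publish(self, channel, message):
        for callback in self.subscribers.get(channel, []):
            callback(message)        # deliver to every subscriber

broker = Broker()
received = []
broker.subscribe("sensors/temp", received.append)   # mobile app listening
broker.publish("sensors/temp", {"celsius": 21.5})   # IoT device sending
print(received)  # [{'celsius': 21.5}]
```

The device publishing a reading never needs to know who is listening; the channel decouples IoT devices from the cloud and mobile consumers.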
10. Explain Mobile Cloud Access.
Mobile Cloud Access refers to the ability to access cloud computing services and resources
via mobile devices such as smartphones and tablets. It allows users to store, retrieve, and
interact with data and applications remotely, offering flexibility and convenience.
• Cloud Storage & Applications: Mobile devices access cloud storage (e.g., Google
Drive, Dropbox) and apps (e.g., Google Docs, Office 365), enabling seamless
synchronization across devices.
• Scalability & Flexibility: Cloud services scale dynamically, handling varying demands
efficiently, and users can access these resources anytime and anywhere with an
internet connection.
• Cost-Efficiency: Mobile cloud access eliminates the need for powerful hardware, as
the processing and storage occur in the cloud, reducing costs.
• Challenges: Issues like security (data breaches, device theft), connectivity (slow
internet speeds), and latency can impact performance.
In essence, mobile cloud access enhances productivity, collaboration, and accessibility, while
requiring careful attention to security and network quality.
Mobile Cloud Access refers to the use of cloud computing services through mobile devices,
such as smartphones, tablets, or laptops, enabling users to access applications, data, and
services remotely over the internet. This integration of cloud computing and mobile technology
allows users to perform tasks like storing, editing, and sharing files, as well as running cloud-
based applications, without being limited to a specific location or device.
Mobile cloud access works by connecting mobile devices to cloud servers via the internet. The
cloud servers handle the heavy processing and storage, while the mobile devices act as access
points, allowing users to interact with cloud-based services. For example, applications like
Google Drive or Dropbox allow users to store files on the cloud and access them from any
mobile device connected to the internet.
This concept offers benefits such as flexibility, as users can work from anywhere; scalability,
allowing cloud resources to expand or shrink as needed; and cost-efficiency, as users don't
need expensive hardware to run complex applications. However, it also presents challenges,
including security concerns, as data stored in the cloud is susceptible to breaches, and
dependency on internet connectivity, as a stable network is necessary for effective access.
In summary, mobile cloud access combines the power of cloud computing with the portability
of mobile devices, making it possible for users to stay connected and productive without the
need for physical infrastructure or powerful local devices.
Unit - 4 (assignment)