
Cloud Computing Question Bank

Unit-3: AWS Infrastructure Technology


1. Define AWS global infrastructure:
AWS global infrastructure consists of:
o Regions: Geographical locations hosting multiple data centers.
o Availability Zones (AZs): Isolated data centers within a Region,
ensuring redundancy.
o Edge Locations: Points for content delivery to improve latency.
2. Name three connectivity options available for AWS Management
Console:
o Direct Internet Access.
o AWS VPN.
o AWS Direct Connect.
3. Explain the difference between AWS Regions and Availability Zones:
A Region is an independent geographic area that contains multiple
Availability Zones. An Availability Zone is one or more isolated data
centers within a Region; AZs in the same Region are connected by
low-latency links so applications can replicate across them for fault
tolerance, while Regions are isolated from one another for geographic
redundancy.
4. What are Edge Locations?
Edge locations are physical data centers that AWS uses for caching
content and processing requests close to end users. They are part of the
AWS content delivery network (CDN), Amazon CloudFront, and are
distributed globally.

Role of Edge Locations


1. Content Delivery (Caching):
o Edge locations cache static and dynamic content, such as
images, videos, and APIs, for faster delivery to end users.
o This reduces latency as requests are served from the nearest
edge location rather than the origin server.
2. Improved Latency:
o By bringing content closer to the end user, edge locations
minimize the time required for data to travel over the internet,
resulting in faster response times.
3. Scalability and Reliability:
o Edge locations handle large volumes of requests, distributing
traffic efficiently and reducing load on origin servers.
o This ensures reliable service even during high traffic spikes.
4. Security Enhancements:
o They support AWS services like AWS Shield, AWS WAF (Web
Application Firewall), and Amazon Route 53 for DDoS
protection and domain name resolution.
o Security policies can be enforced closer to the users,
protecting the origin infrastructure.
5. Global Presence:
o Edge locations are located in major cities worldwide, ensuring
content delivery and low latency across regions where AWS
may not have full regions or availability zones.
6. Integration with CloudFront:
o As part of the Amazon CloudFront service, edge locations are
integral to content delivery and acceleration.
o They fetch content from the origin only when it's not in the
cache, reducing data transfer costs.
7. Dynamic Content and API Acceleration:
o They support dynamic content delivery and accelerate API
responses using services like AWS Global Accelerator.

Key Benefits of Edge Locations


• Faster Content Delivery: Reduced latency for end users due to
proximity.
• Cost Efficiency: Reduced data transfer and origin server costs via
caching.
• Enhanced Security: Protection against malicious traffic near the
source of the request.
• High Availability: Distributed network ensures resilience against
outages.

5. Demonstrate how to set up a VPN connection to AWS:
• Create a Customer Gateway in the VPC console with the public IP of
your on-premises router.
• Create a Virtual Private Gateway and attach it to your VPC.
• Create a Site-to-Site VPN Connection linking the Customer Gateway
and the Virtual Private Gateway.
• Download the device configuration file and apply it to the
on-premises router.
• Update the VPC route tables to send on-premises-bound traffic
through the Virtual Private Gateway.

6. Implement a solution to deploy a multi-region application using AWS
Regions and Availability Zones:
o Deploy instances in multiple Regions for geographic redundancy.
o Use Availability Zones within each Region for fault tolerance.
o Configure Route 53 with latency-based routing for user traffic.
o Set up cross-region replication for databases.
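The latency-based routing step above can be sketched in a few lines. The Region names and latency figures below are illustrative placeholders, not real measurements; Route 53 performs this selection automatically based on its own latency data.

```python
# Hypothetical latency-based routing: direct the user to the Region with
# the lowest measured latency, as Route 53's latency-based policy does.
REGION_LATENCY_MS = {        # illustrative numbers only
    "us-east-1": 120,
    "eu-west-1": 35,
    "ap-south-1": 210,
}

def pick_region(latencies):
    """Return the Region name with the lowest latency for this user."""
    return min(latencies, key=latencies.get)

print(pick_region(REGION_LATENCY_MS))  # eu-west-1
```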
Unit-4: AWS Services Part-I
Define the importance of AWS computing services:
AWS computing services provide scalable, flexible, and cost-efficient
solutions for businesses. They enable easy deployment of applications and
support diverse workloads.
Name three database migration tools available in AWS:
• AWS Database Migration Service (DMS).
• AWS Schema Conversion Tool (SCT).
• AWS Snowball.

Explain the difference between EC2-hosted databases and AWS-managed
databases:
The difference lies in the level of control, management, and operational
responsibility. Here's a detailed comparison:

• Definition:
o EC2-hosted: Databases installed and managed by the user on Amazon
EC2 instances.
o AWS-managed: Databases provided as fully managed services by AWS
(e.g., RDS, DynamoDB).
• Management Responsibility:
o EC2-hosted: Full responsibility lies with the user, including
installation, configuration, patching, backups, scaling, and
maintenance.
o AWS-managed: AWS handles most administrative tasks, including
backups, patching, scaling, and monitoring.
• Flexibility:
o EC2-hosted: Full control over the database software, version, and
configuration.
o AWS-managed: Limited control over configurations, but optimized for
ease of use and performance.
• Scaling:
o EC2-hosted: Manual scaling of compute and storage resources is
required.
o AWS-managed: Scaling is automatic or requires minimal intervention,
depending on the service (e.g., Aurora's auto-scaling).
• Operational Complexity:
o EC2-hosted: High; requires expertise to manage and optimize the
database environment.
o AWS-managed: Low; designed to simplify operations, allowing users
to focus on applications.
• Backup and Recovery:
o EC2-hosted: User must implement backup and recovery strategies
manually.
o AWS-managed: AWS provides automated backups, point-in-time
recovery, and snapshots as part of the service.
• Cost:
o EC2-hosted: Potentially lower cost for large-scale operations but
may require significant administrative overhead.
o AWS-managed: Slightly higher cost due to the managed service fee,
but saves time and reduces management overhead.
• Examples:
o EC2-hosted: Hosting MySQL, PostgreSQL, or Oracle databases on EC2
instances.
o AWS-managed: Using AWS RDS (Relational Database Service),
DynamoDB, Aurora, or Redshift.
• Security:
o EC2-hosted: Security measures must be configured and maintained by
the user (e.g., firewall rules, encryption).
o AWS-managed: AWS provides built-in security features like
encryption, IAM integration, and compliance certifications.
• Use Case:
o EC2-hosted: Suitable for complex, highly customized database
requirements or unsupported database types.
o AWS-managed: Ideal for applications needing high availability,
simplified management, and scalability.

Name three database migration tools available in AWS:


AWS Database Migration Service (DMS)
• Purpose: Migrates databases to AWS securely and with minimal
downtime.
• Supports: Homogeneous (e.g., Oracle to Oracle) and heterogeneous
(e.g., SQL Server to MySQL) migrations.
• Features: Continuous data replication, support for data warehouses,
and integration with various AWS services.
AWS Schema Conversion Tool (SCT)
• Purpose: Converts database schemas from one database engine to
another (e.g., Oracle to Amazon Aurora).
• Use Case: Often used alongside AWS DMS for heterogeneous
migrations.
• Features: Identifies schema incompatibilities and suggests fixes.
AWS Snowball
• Purpose: Migrates large datasets to AWS using physical storage
devices.
• Use Case: Ideal for environments with limited or unreliable network
bandwidth.
• Features: Secure data transfer with tamper-proof devices and
encryption.
Explain the difference between EC2-hosted databases and AWS-
managed databases:

• Definition:
o EC2-hosted: Databases manually installed and managed on Amazon EC2
instances.
o AWS-managed: Fully managed database services provided by AWS, like
RDS, DynamoDB, and Aurora.
• Management:
o EC2-hosted: User is responsible for installation, configuration,
updates, backups, and maintenance.
o AWS-managed: AWS handles administrative tasks such as setup,
backups, updates, and scaling.
• Control:
o EC2-hosted: Full control over the operating system, database
software, and configurations.
o AWS-managed: Limited control, optimized for simplicity and
reliability.
• Scaling:
o EC2-hosted: Manual effort required to scale compute or storage
resources.
o AWS-managed: Supports automatic or one-click scaling for compute
and storage.
• Operational Complexity:
o EC2-hosted: High; requires database administration expertise to
manage efficiently.
o AWS-managed: Low; simplified management allows focus on
application development.
• Backup and Recovery:
o EC2-hosted: Backups must be manually configured and managed.
o AWS-managed: Automatic backups, snapshots, and point-in-time
recovery are provided.
• High Availability:
o EC2-hosted: User must set up and maintain high-availability
configurations.
o AWS-managed: High availability, multi-AZ deployments, and failover
are built in.
• Security:
o EC2-hosted: User is responsible for implementing and maintaining
security measures.
o AWS-managed: AWS provides built-in security features like
encryption, IAM integration, and compliance support.
• Cost:
o EC2-hosted: Can be cost-effective for large-scale, custom setups
but requires significant management effort.
o AWS-managed: Slightly higher cost for managed services, but lower
operational overhead.
• Examples:
o EC2-hosted: Hosting MySQL, PostgreSQL, Oracle, or other databases
on an EC2 instance.
o AWS-managed: Using Amazon RDS, DynamoDB, or Aurora for fully
managed database solutions.

Describe the role of AWS Lambda in serverless computing:


Role of AWS Lambda in Serverless Computing
1. Event-Driven Execution
o Lambda allows you to execute code in response to a variety of
events, such as HTTP requests (via Amazon API Gateway),
changes in an S3 bucket, or updates in a DynamoDB table.
o It acts as a trigger-based service where functions are invoked
only when needed.
2. Serverless Architecture
o Lambda eliminates the need to manage servers. AWS
automatically provisions, scales, and manages the compute
resources needed to execute your code.
o This enables developers to focus solely on application logic.
3. Auto-Scaling
o AWS Lambda automatically scales up to handle high
concurrency and scales down when the demand drops,
ensuring efficient resource utilization.
o There's no need to configure manual scaling or worry about
server capacity.
4. Cost-Efficiency
o With Lambda, you pay only for the compute time you use,
measured in milliseconds, and the number of requests. There
are no charges for idle resources.
o This makes it highly cost-effective, especially for infrequent
workloads.
5. Integration with AWS Services
o Lambda seamlessly integrates with many AWS services, such
as S3, DynamoDB, SNS, SQS, API Gateway, and more, making
it a central component of serverless architectures.
o These integrations allow you to build end-to-end workflows
without additional infrastructure.
6. Supports Multiple Languages
o Lambda supports various programming languages, including
Python, Java, Node.js, Go, and .NET.
o Custom runtimes enable support for other languages, giving
developers flexibility.
7. Stateless Functionality
o Each Lambda invocation is stateless, meaning that no data is
retained between executions. This simplifies scaling and aligns
with serverless principles.
o Temporary storage (e.g., /tmp directory) is available for short-
term needs.
8. Secure by Design
o AWS Lambda runs code within a secured runtime environment
and integrates with AWS Identity and Access Management
(IAM) for fine-grained permissions.
o It ensures that functions only access the resources they are
authorized to use.
9. Microservices Architecture
o Lambda enables a microservices-based design by allowing
developers to split applications into smaller, independent, and
reusable functions.
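Lambda's pay-per-use model (point 4) can be made concrete with a rough cost estimate. The default per-request and per-GB-second rates below are illustrative; check the current AWS price list before relying on them.

```python
def lambda_monthly_cost(invocations, avg_ms, memory_mb,
                        price_per_million_req=0.20,
                        price_per_gb_second=0.0000166667):
    """Estimate monthly Lambda cost: request charges plus GB-seconds of
    compute time (duration is billed in 1 ms increments)."""
    request_cost = invocations / 1_000_000 * price_per_million_req
    gb_seconds = invocations * (avg_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * price_per_gb_second
    return round(request_cost + compute_cost, 2)

# e.g. 5 million invocations of 120 ms each at 512 MB of memory
print(lambda_monthly_cost(5_000_000, 120, 512))  # 6.0
```

Note there is no charge at all when the function is idle, which is what makes the model attractive for infrequent workloads.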

Demonstrate how to configure auto-scaling for an application on AWS


Configuring Auto-Scaling on AWS
1. Create a Launch Template:
o Go to EC2 Console → Launch Templates → Create Template.
o Define instance configurations like AMI, instance type, key
pair, and security group.
2. Create an Auto Scaling Group:
o Go to EC2 Console → Auto Scaling Groups → Create Group.
o Attach the launch template, set the minimum, maximum, and
desired instance count, and define subnets.
3. Configure Scaling Policies:
o Add a policy under the Automatic Scaling tab.
o Use a metric like CPU Utilization, e.g., scale up if CPU > 70%,
scale down if CPU < 30%.
4. Test and Monitor:
o Simulate traffic to observe scaling.
o Use CloudWatch for monitoring and alerts.
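The CPU-based policy in step 3 boils down to a simple decision rule, sketched here with the thresholds from the example above (in practice CloudWatch evaluates the metric and triggers the Auto Scaling group):

```python
def scaling_decision(cpu_percent, scale_up_at=70, scale_down_at=30):
    """Mirror the example policy: scale up if CPU > 70%, down if < 30%."""
    if cpu_percent > scale_up_at:
        return "scale_up"
    if cpu_percent < scale_down_at:
        return "scale_down"
    return "no_change"

print(scaling_decision(85))  # scale_up
print(scaling_decision(50))  # no_change
```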

Comparing EC2 Instance Types for Compute-Intensive Applications


For compute-heavy tasks like scientific simulations, AI/ML inference, or
batch processing, the C-Series (Compute-Optimized) is most suitable.
• Example Instance: c5.2xlarge
o vCPUs: 8
o RAM: 16 GiB
o Performance: Optimized for applications requiring high CPU
usage.
o Cost: Lower compared to other families for similar compute
power.
For extreme parallel computing or tasks requiring GPU acceleration,
consider P-Series (e.g., p3.2xlarge).
Summary
• Use C-Series for pure compute-intensive tasks.
• Opt for P-Series if the workload involves GPU processing.

AWS Lambda vs AWS Fargate: Key Differences in Use Cases


• Definition:
o Lambda: Serverless compute service that runs code in response to
events.
o Fargate: Serverless compute engine for running containers without
managing servers.
• Use Case:
o Lambda: Best for event-driven, short-lived tasks like data
processing, API handling, or real-time file processing.
o Fargate: Ideal for containerized applications, microservices, and
long-running tasks that need more control over the environment.
• Duration:
o Lambda: Designed for short-lived executions (up to 15 minutes).
o Fargate: Suitable for long-running tasks or services that may run
indefinitely.
• Scaling:
o Lambda: Automatically scales with the number of events, running
code only when triggered.
o Fargate: Automatically scales the number of running tasks, but you
manage container orchestration (via ECS or EKS).
• State Management:
o Lambda: Stateless execution model (data persistence must be
external).
o Fargate: Supports both stateless and stateful applications
(persistent storage can be managed within containers).
• Control over Infrastructure:
o Lambda: No control over infrastructure; AWS fully manages compute
resources.
o Fargate: More control; you specify container images, vCPUs, and
memory requirements.
• Common Use Cases:
o Lambda: Real-time data processing, backend logic for web apps,
file handling, IoT.
o Fargate: Microservices, web applications, APIs, batch processing,
machine-learning models in containers.
• Resource Management:
o Lambda: Limited resources (memory up to 10 GB, execution time up
to 15 minutes).
o Fargate: More flexible allocation (up to 16 vCPUs and 120 GB of
memory per task).
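A minimal rule of thumb can be derived from the comparison, assuming the limits quoted above (15-minute runtime and 10 GB memory for Lambda); this is a simplification, not an official AWS decision procedure:

```python
LAMBDA_MAX_SECONDS = 15 * 60        # Lambda's execution-time cap
LAMBDA_MAX_MEMORY_MB = 10 * 1024    # Lambda's memory cap

def suggest_service(runtime_s, memory_mb, needs_containers=False):
    """Pick Lambda unless the workload exceeds its limits or must run
    as a container image under your control."""
    if (needs_containers
            or runtime_s > LAMBDA_MAX_SECONDS
            or memory_mb > LAMBDA_MAX_MEMORY_MB):
        return "AWS Fargate"
    return "AWS Lambda"

print(suggest_service(runtime_s=30, memory_mb=256))     # AWS Lambda
print(suggest_service(runtime_s=3600, memory_mb=4096))  # AWS Fargate
```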

Unit-5: AWS Services Part-II


What is the purpose of Amazon Route 53?
Amazon Route 53 is a scalable and highly available Domain Name System
(DNS) web service designed to route end-user requests to the appropriate
resources in a reliable and efficient manner. Its key purposes include:
1. Domain Name Registration:
Route 53 allows you to register domain names (e.g., example.com)
directly through AWS.
2. DNS Routing:
It translates human-readable domain names into IP addresses,
helping direct user traffic to resources like EC2 instances, load
balancers, or S3 buckets.
3. Health Checking:
Route 53 can monitor the health of your resources (e.g., web servers)
and route traffic only to healthy endpoints, improving application
reliability.
4. Traffic Management:
With routing policies like weighted, latency-based, and geo-location
routing, Route 53 enables intelligent traffic distribution, ensuring
users are connected to the best-performing endpoints based on
location or other criteria.
5. Integrated with AWS Services:
It seamlessly integrates with other AWS services like ELB,
CloudFront, and S3, making it easier to manage DNS routing for
AWS-based applications.
6. Failover and High Availability:
Route 53 supports automatic failover between primary and
secondary endpoints to ensure high availability for your applications.

List the different Amazon S3 storage classes:


• S3 Standard.
• S3 Intelligent-Tiering.
• S3 Standard-IA (Infrequent Access).
• S3 One Zone-IA.
• S3 Glacier.
• S3 Glacier Deep Archive.

Describe the security features in Amazon VPC:
The security features in Amazon VPC are essential for protecting your
network and resources:
1. Isolation: VPC isolates your resources, ensuring they are not
exposed to the public unless configured.
2. Security Groups (SGs): Act as firewalls for EC2 instances,
controlling inbound and outbound traffic for fine-grained access
control.
3. Network ACLs: Provide additional traffic filtering at the subnet level,
ensuring only allowed traffic enters or leaves.
4. VPC Peering & Private Connectivity: Enables secure
communication between VPCs and on-premises systems, reducing
exposure to the public internet.
5. VPN & Transit Gateway: Secure connections between on-premises
and AWS, and between multiple VPCs, enhancing privacy.
6. Flow Logs: Monitor network traffic for security analysis and
troubleshooting.
7. Private & Public Subnets: Isolate sensitive resources in private
subnets and expose only necessary services to the internet.
8. AWS Firewall Manager: Centralizes management of security
policies across multiple accounts.
9. IAM: Controls access to VPC resources, ensuring only authorized
users can make changes.

Describe the differences between object storage and block storage

• Data Storage:
o Object storage: Stores data as objects (files) with metadata.
o Block storage: Stores data in fixed-size blocks (like a hard
drive).
• Structure:
o Object storage: Flat structure, typically used for unstructured
data (e.g., images, videos).
o Block storage: Hierarchical structure, used for structured data
(e.g., databases, file systems).
• Scalability:
o Object storage: Highly scalable with virtually unlimited capacity.
o Block storage: Limited by volume size but still scalable (e.g.,
multiple volumes).
• Access:
o Object storage: Accessed via HTTP/HTTPS using APIs (e.g., REST
APIs).
o Block storage: Accessed via block-level protocols (e.g., iSCSI,
Fibre Channel).
• Performance:
o Object storage: Optimized for throughput rather than low-latency
access.
o Block storage: Provides low-latency, high-performance access for
applications needing frequent read/write operations.
• Use Cases:
o Object storage: Ideal for static data, backups, archival, and
media storage.
o Block storage: Best for databases, virtual machines, and
transactional workloads.
• Durability:
o Object storage: High durability (e.g., 11 nines of durability in
Amazon S3).
o Block storage: Highly available but can be vulnerable to failure
without redundancy.
• Cost:
o Object storage: Generally lower cost for large volumes of data.
o Block storage: More expensive than object storage for similar
capacities.

Demonstrate how to create a VPC with custom security configurations:

1. Create a VPC
• Go to the VPC Dashboard and click Create VPC.
• Choose a CIDR block (e.g., 10.0.0.0/16), name the VPC, and create
it.
2. Create Subnets
• In Subnets, click Create subnet.
• Create public and private subnets with appropriate CIDR blocks
(e.g., 10.0.1.0/24 for public).
3. Create and Attach an Internet Gateway
• In Internet Gateways, click Create and name it.
• Attach it to your VPC for internet access.
4. Configure Route Tables
• Create a route table for the public subnet.
• Add a route: 0.0.0.0/0 → Internet Gateway.
• Associate the public subnet with this route table.
5. Set Up Security Groups
• Create a security group and allow inbound rules:
o HTTP (Port 80), HTTPS (Port 443), and SSH (Port 22) from
trusted IPs.
6. Create Network ACLs (Optional)
• Create Network ACLs and configure inbound and outbound rules for
public access.
7. Launch EC2 Instance (Optional)
• Launch an EC2 instance in the public subnet with the security
group.
This sets up a VPC with proper security configurations for public and
private access.
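The CIDR choices in steps 1 and 2 can be sanity-checked with Python's standard ipaddress module; every subnet must fall entirely inside the VPC's CIDR block:

```python
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")            # VPC CIDR from step 1
public_subnet = ipaddress.ip_network("10.0.1.0/24")  # subnet from step 2

# Verify the subnet is contained in the VPC's address range.
print(public_subnet.subnet_of(vpc))  # True
print(public_subnet.num_addresses)   # 256 (AWS reserves 5 per subnet)
```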

To configure Amazon Route 53 for DNS routing:

1. Create a Hosted Zone


• In the Route 53 console, click Create hosted zone.
• Enter your domain name (e.g., example.com) and select Public
Hosted Zone.
• Click Create.
2. Create DNS Records
• Click Create Record:
o A Record: For IP routing (e.g., EC2 instance IP).
▪ Name: Subdomain (e.g., www).
▪ Type: A - IPv4 address.
▪ Value: EC2 instance IP.
o CNAME Record: For routing to another domain (e.g., S3
bucket).
▪ Name: Subdomain (e.g., app).
▪ Type: CNAME.
▪ Value: Target domain (e.g., myapp.s3.amazonaws.com).
3. Update Domain Registrar
• Copy the NS records from Route 53.
• Update your domain registrar’s Name Servers to match the ones
from Route 53.
4. Test DNS
• Wait for propagation and test with dig or nslookup.
• Visit your domain (e.g., www.example.com) in a browser to confirm
it’s working.
This sets up DNS routing for your web application using Amazon Route 53.
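Programmatically, the A record from step 2 is expressed as a change batch in the shape that boto3's Route 53 client (`change_resource_record_sets`) expects. The zone, subdomain, and IP below are placeholders; no AWS call is made here, this only builds the request body:

```python
# Request body for route53.change_resource_record_sets(ChangeBatch=...).
change_batch = {
    "Changes": [{
        "Action": "UPSERT",  # create the record, or update it if present
        "ResourceRecordSet": {
            "Name": "www.example.com",  # placeholder subdomain
            "Type": "A",
            "TTL": 300,
            "ResourceRecords": [{"Value": "203.0.113.10"}],  # placeholder IP
        },
    }]
}

print(change_batch["Changes"][0]["Action"])  # UPSERT
```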

Compare Amazon S3 storage classes and recommend the best option


for archival storage:

1. S3 Standard
• Use Case: Frequently accessed data.
• Cost: High.
• Not ideal for archival.
2. S3 Glacier
• Use Case: Infrequent access, long-term archival.
• Cost: Lower than Standard.
• Access time: Minutes to hours.
• Good for archival.
3. S3 Glacier Deep Archive
• Use Case: Very infrequent access, long-term archival.
• Cost: Cheapest.
• Access time: 12 hours or more.
• Best option for archival storage.
4. S3 Intelligent-Tiering & One Zone-IA
• Use Case: Not ideal for archival due to higher costs or single AZ
storage.
Best option for archival: S3 Glacier Deep Archive due to its low cost
and suitability for long-term, infrequent access.
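The recommendation above can be expressed as a tiny chooser; the thresholds are simplifications of the comparison, not AWS rules:

```python
def recommend_storage_class(accesses_per_month, max_retrieval_hours):
    """Very rough mapping from access pattern to the classes compared
    above (illustrative thresholds only)."""
    if accesses_per_month >= 1:
        return "S3 Standard"              # frequently accessed data
    if max_retrieval_hours >= 12:
        return "S3 Glacier Deep Archive"  # cheapest; 12+ hour retrieval
    return "S3 Glacier"                   # archival; minutes-to-hours

print(recommend_storage_class(0, 24))  # S3 Glacier Deep Archive
```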

Differentiate between Amazon Lex and Amazon Kendra in terms of


their use cases:

1. Amazon Lex
• Use Case:
o Conversational interfaces (chatbots and voice bots).
o Enables the creation of intelligent conversational agents for
customer service, virtual assistants, or automation.
o Integrates with Amazon Alexa and other platforms.
• Functionality:
o Natural language understanding (NLU) for interpreting text or
voice inputs.
o Handles dialog management and context across
conversations.
• Examples:
o Customer service bots, virtual assistants, appointment
schedulers.
2. Amazon Kendra
• Use Case:
o Enterprise search solutions for unstructured data.
o Helps users search and retrieve information from large
datasets, documents, and knowledge repositories.
o Facilitates better document management and content
discovery.
• Functionality:
o Uses machine learning to improve search accuracy.
o Supports multiple data sources like websites, databases, and
files.
• Examples:
o Knowledge management systems, document search, FAQs,
internal company resources.

Unit-6: AWS Plans and Support


AWS compute pricing options:
1. On-Demand: Pay-per-use with no commitment. Ideal for
unpredictable workloads.
2. Reserved Instances: Commit for 1-3 years to get lower rates. Best for
predictable workloads.
3. Spot Instances: Bid for unused capacity at discounted rates, but
subject to interruptions.
4. Savings Plans: Commit to a usage amount for 1-3 years for flexible
savings across AWS services.
5. AWS Lambda: Pay per request and for compute time, billed in 1 ms
increments. Ideal for serverless workloads.
6. Dedicated Hosts: Rent physical servers for workloads requiring
isolation or compliance.
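The On-Demand vs Reserved trade-off is simple arithmetic. The hourly rates below are made up for illustration; real prices vary by Region, instance type, and commitment term:

```python
ON_DEMAND_HOURLY = 0.10  # illustrative On-Demand rate ($/hour)
RESERVED_HOURLY = 0.06   # illustrative effective rate with 1-year commitment

def yearly_cost(hourly_rate, hours=8760):
    """Cost of running one instance continuously for a year."""
    return round(hourly_rate * hours, 2)

print(yearly_cost(ON_DEMAND_HOURLY))  # 876.0
print(yearly_cost(RESERVED_HOURLY))   # 525.6
```

The gap only pays off if the instance genuinely runs most of the year, which is why Reserved pricing suits predictable workloads.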

What are the different storage tiers available in AWS?
1. S3 Standard: For frequently accessed data. High availability and
durability, but more expensive.
2. S3 Intelligent-Tiering: For unpredictable access patterns. Moves
data between two access tiers based on usage.
3. S3 One Zone-IA: For infrequently accessed data stored in a single
availability zone. Lower cost than Standard.
4. S3 Glacier: For archival data with retrieval times from minutes to
hours. Low-cost storage.
5. S3 Glacier Deep Archive: For long-term, rarely accessed data.
Cheapest storage option.
6. EBS (Elastic Block Store): Persistent block storage for EC2
instances with multiple volume types, including General Purpose
SSD, Provisioned IOPS SSD, and Cold HDD.
7. Amazon FSx: Managed file storage for Windows or Lustre file
systems.

Three Types of Data Transfer Charges in AWS:


1. Data Transfer Out: Charges for data transferred out of AWS to the
internet or to other AWS regions.
2. Data Transfer Between AWS Services: Charges for data transfer
between different AWS services in different regions or accounts.
3. Data Transfer Into AWS: Typically free for most AWS services but
may incur costs for services like Amazon EC2.

Explain the concept of resource billing in AWS:


Resource billing in AWS is based on actual usage of services, where
customers pay for the resources they consume. Key points include:
1. Usage-Based: Charges are based on compute (e.g., EC2), storage
(e.g., S3), data transfer, and service usage (e.g., Lambda).
2. Pricing Models: Options include On-Demand, Reserved, and Spot
Instances.
3. Free Tier: Limited free usage for new customers.
4. Cost Management: Tools like AWS Cost Explorer help track and
optimize spending.
In short, AWS bills customers based on how much they use, with various
pricing models to optimize costs.

Describe the relationship between budget management and cost


optimization in AWS:

1. Budget Management:
• Purpose: Helps set financial limits on AWS usage and track
spending.
• Tools: AWS provides tools like AWS Budgets to create custom
budgets and receive alerts when spending exceeds defined
thresholds.
• Goal: Ensure that costs do not exceed predefined limits, making it
easier to stay within financial constraints.
2. Cost Optimization:
• Purpose: Focuses on reducing AWS spending by using resources
more efficiently.
• Techniques: Includes rightsizing instances, using reserved or spot
instances, leveraging cheaper storage options, and eliminating
underused resources.
• Goal: Lower AWS costs without compromising performance or
availability.
Relationship:
• Budget management sets spending boundaries, while cost
optimization helps identify and implement strategies to reduce
costs within those boundaries.
• Together, they enable businesses to manage and control AWS
expenses effectively by both tracking costs and making informed
decisions to reduce unnecessary expenditures.

Demonstrate how to estimate costs using the AWS Pricing Calculator:


1. Access the AWS Pricing Calculator website.
2. Create a New Estimate by clicking "Create Estimate."
3. Select Services you want to estimate (e.g., EC2, S3, Lambda).
4. Configure the Service by entering usage details (e.g., instance
type, storage amount).
5. View the Estimate to see the cost breakdown.
6. Add More Services if needed.
7. Review and Save your estimate or export it.
8. Adjust for Cost Optimization by experimenting with different
configurations.

Set up a budget in AWS to monitor and control service usage:


1. Go to AWS Management Console > Billing and Cost Management
> Budgets.
2. Click Create budget and choose Cost or Usage.
3. Set the Budget name, Period, and Budget amount.
4. Define Alert thresholds (e.g., 80%, 100%) and set email alerts.
5. Click Create budget to finalize.
6. Monitor and adjust the budget as needed.
This allows you to track and control AWS service usage and spending.
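The alert thresholds in step 4 amount to a simple check, sketched here (AWS Budgets evaluates this server-side and sends the email alerts for you):

```python
def triggered_alerts(spend, budget, thresholds=(0.8, 1.0)):
    """Return the budget fractions (e.g., 80%, 100%) that current
    spend has crossed."""
    return [t for t in thresholds if spend >= budget * t]

print(triggered_alerts(spend=850, budget=1000))   # [0.8]
print(triggered_alerts(spend=1200, budget=1000))  # [0.8, 1.0]
```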
