Intern Report 409
Submitted by
BONTHU LOHITHA
Roll No: 216N1A0409
(2024-2025)
SRINIVASA INSTITUTE OF ENGINEERING AND TECHNOLOGY
(UGC – Autonomous Institution)
(Approved by AICTE, permanently affiliated to JNTUK, Kakinada, ISO 9001: 2015 certified Institution)
(Accredited by NAAC with 'A' Grade; Recognised by UGC under sections 2(f) & 12(B))
CERTIFICATE
This is to certify that the Summer Internship Report on “CLOUD VIRTUAL INTERNSHIP”, carried
out at AWS Academy, is submitted by Bonthu Lohitha (216N1A0409), a student of IV B.Tech.
Electronics and Communication Engineering at Srinivasa Institute of Engineering and Technology,
Amalapuram. She completed the internship during the summer vacation between the 3rd and 4th
years, from April to June 2024.
Principal
ACKNOWLEDGEMENT
I am immensely thankful to Dr. B. RATNA RAJU, Head of the Department, Electronics and
Communication Engineering, for his technical guidance; he has been an excellent guide and a great
source of inspiration for my work.
I extend my heartfelt gratitude to MISS. P. SATYA HARI CHANDANA for her invaluable
guidance, expertise, and unwavering support throughout my summer internship, which significantly
contributed to my growth and learning.
I would like to express my heartfelt gratitude to my parents, without whom I would not have been
privileged to achieve and fulfil my dreams. The satisfaction and euphoria that accompany the
successful completion of a task are incomplete without mentioning the people whose constant
guidance and encouragement crowned these efforts with success. In this context, I would like to
thank all the other staff members, both teaching and non-teaching, who extended their timely help
and eased my task.
BONTHU LOHITHA
Reg. No: 216N1A0409
ABSTRACT
This report provides an overview of my 10-week virtual internship in Cloud Architecture, hosted by
AWS. The internship focused on key cloud computing principles and practical applications, enabling
hands-on experience with AWS services. Throughout the program, I gained exposure to various cloud
infrastructure concepts such as virtualization, scalability, load balancing, and security.
Key services like EC2, S3, RDS, and IAM were explored in-depth, allowing me to understand their
configurations and best practices. Additionally, I learned about architecture design patterns, cloud cost
management, and disaster recovery strategies, which are essential for building resilient and cost-
effective cloud environments.
The internship also emphasized the importance of automation and Infrastructure as Code (IaC) using
AWS CloudFormation, as well as DevOps principles to streamline cloud operations. By the end of the
program, I had developed the ability to design, implement, and manage cloud-based solutions tailored
to real-world business needs. This report outlines the major tasks completed, challenges faced, and key
takeaways from the internship, highlighting my enhanced proficiency in cloud technologies.
WEEK-1
Introduction to cloud computing
1) Cloud computing is the on-demand delivery of IT resources via the internet with pay-as you-
go pricing.
2) Cloud computing enables you to think of (and use) your infrastructure as software.
3) There are three cloud service models: IaaS, PaaS, and SaaS. There are three cloud deployment
models: cloud, hybrid, and on-premises (private cloud).
4) Many AWS services have analogies in the traditional, on-premises IT space.
The key takeaways from this section of the module include the six advantages of cloud computing and
the three cloud service models, which are described in more detail below:
IaaS provides virtualized computing resources over the internet, allowing users to rent IT infrastructure
like servers, storage, and networking on a pay-as-you-go basis. With IaaS, businesses can avoid the
expense and complexity of buying and managing physical servers. Instead, they can scale resources up
or down depending on demand.
Examples: Amazon Web Services (AWS EC2), Microsoft Azure, Google Cloud Platform (GCP).
Use Case: Ideal for companies that need full control over their infrastructure but want the flexibility of
cloud hosting.
PaaS offers a platform that allows developers to build, test, and deploy applications without worrying
about the underlying infrastructure. The platform handles everything from hosting to database
management, enabling developers to focus on coding and innovation rather than hardware setup and
maintenance.
Key Features: Development tools, middleware, database management, and application hosting.
Examples: AWS Elastic Beanstalk, Google App Engine, Microsoft Azure App Services.
Use Case: Best for developers who want to focus on building applications without managing the
underlying infrastructure.
SaaS delivers fully functional software applications over the internet. Users access these applications
via a web browser without needing to install or maintain them on their own devices. The provider
manages everything, including infrastructure, application updates, and security.
Examples: Google Workspace (Gmail, Google Docs), Salesforce, Microsoft Office 365.
Use Case: Ideal for end-users who need ready-to-use software solutions without worrying about
infrastructure or platform management.
1.4: Introduction to Amazon Web Services (AWS):
1. AWS is a secure cloud platform that offers a broad set of global cloud-based products called
services that are designed to work together.
2. There are many categories of AWS services, and each category has many services to choose
from.
3. Choose a service based on your business goals and technology requirements.
4. There are three ways to interact with AWS services: the AWS Management Console, the AWS
Command Line Interface (AWS CLI), and software development kits (SDKs).
WEEK-2
Cloud infrastructure design
Amazon VPC can be thought of as a private cloud inside the cloud: a logical grouping of servers in a
specified network. The servers that you deploy in a Virtual Private Cloud (VPC) are completely
isolated from other servers deployed in Amazon Web Services. You have complete control over the IP
addresses assigned to the virtual machines, and over the route tables and gateways of the VPC. With
the help of security groups and network access control lists, you can protect your applications further.
The basic architecture of a properly functioning VPC consists of several distinct services such as
gateways, load balancers, and subnets. Together, these resources are grouped under a VPC to create
an isolated virtual environment, with security checks at multiple levels. The VPC is initially divided
into subnets, connected to each other via route tables, along with a load balancer.
Components:
1. VPC
You can launch AWS resources into a defined virtual network using Amazon Virtual Private Cloud
(Amazon VPC). With the advantages of utilizing the scalable infrastructure of AWS, this virtual
network closely mimics a conventional network that you would operate in your own data centre.
The user-defined address space can be as large as /16 (65,536 addresses).
2. Subnets
Subnets divide the large network into smaller, interconnected networks to reduce traffic. A VPC
supports up to 200 user-defined subnets.
3. Route Tables
Route tables are mainly used to define the rules for routing traffic between the subnets.
4. Network Access Control Lists (NACLs)
NACLs serve as a firewall for the VPC by managing both inbound and outbound rules at the subnet
level. Each VPC has a default NACL that cannot be deleted.
5. Internet Gateway (IGW)
The Internet Gateway makes it possible to link the resources in the VPC to the internet.
6. NAT Gateway
Network Address Translation (NAT) enables resources in a private subnet to reach the internet. A
sketch tying these components together follows below.
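To make the pieces concrete, here is a minimal sketch using the AWS SDK for Python (boto3); it
assumes configured AWS credentials and a default region, and all CIDR blocks are example values:

import boto3

ec2 = boto3.client("ec2")

# Create a VPC with a /16 address space (65,536 addresses).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a smaller subnet out of the VPC's range.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Attach an Internet Gateway so resources in public subnets can reach the internet.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# A route table with a default route sends non-local traffic to the gateway.
rt = ec2.create_route_table(VpcId=vpc_id)
ec2.create_route(RouteTableId=rt["RouteTable"]["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)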
A Security Group is essentially a virtual firewall that controls inbound and outbound traffic to cloud
resources such as virtual machines (EC2 instances in AWS). Security Groups allow you to define rules
that specify which types of traffic are allowed to access or leave your instances. They play a key role
in ensuring network-level security by managing the traffic flow to and from your resources.
Key Features:
1) Stateless vs. Stateful:
Security Groups in AWS are stateful. This means that if you allow an inbound request from a
particular IP address, the corresponding outbound response is automatically allowed, and vice
versa.
2) Inbound and Outbound Rules:
Security Groups can have specific rules for both inbound (traffic coming to your instance) and
outbound (traffic leaving your instance).
Each rule specifies:
Port Range: A specific port or range of ports (e.g., port 80 for HTTP traffic or 22 for SSH).
Source/Destination: IP address ranges (CIDR blocks) or other Security Groups that are allowed to
interact with your instances. A minimal example follows below.
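The following boto3 sketch creates a security group and opens two inbound ports; the VPC ID and the
admin CIDR block are placeholders:

import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(GroupName="web-sg",
                               Description="Allow HTTP and SSH",
                               VpcId="vpc-0123456789abcdef0")

# Inbound rules: HTTP from anywhere, SSH only from one admin network.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)
# Because security groups are stateful, return traffic for these
# connections is allowed automatically; no outbound rule is needed for it.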
WEEK-3
COMPUTE SERVICES
3.1: Working with Amazon EC2:
1) Choose an AMI: This is a pre-configured template that includes an operating system and
software.
2) Select an Instance Type: Pick an instance type based on CPU, memory, and storage requirements
(e.g., general-purpose, compute-optimized).
3) Configure Instance Details: Set options like the number of instances, network settings, and
scaling options.
4) Add Storage: Attach Elastic Block Store (EBS) volumes to store data.
5) Set Security Group Rules: Create firewall rules to allow or block traffic to your instance.
6) Connect to the Instance: Use SSH for Linux instances or RDP for Windows instances to log in
and manage your instance. These steps are illustrated in the sketch below.
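The same steps can be expressed with the AWS SDK for Python (boto3); this is only a sketch, and the
AMI ID, security group ID, and key pair name below are placeholders:

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # step 1: the AMI
    InstanceType="t3.micro",           # step 2: instance type
    MinCount=1, MaxCount=1,            # step 3: number of instances
    BlockDeviceMappings=[{             # step 4: attach an 8 GiB EBS volume
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},
    }],
    SecurityGroupIds=["sg-0123456789abcdef0"],  # step 5: firewall rules
    KeyName="my-key-pair",             # step 6: key used to SSH in
)
print(response["Instances"][0]["InstanceId"])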
Elastic Block Store (EBS): Persistent storage that remains even if the instance is stopped.
Instance Store: Temporary storage that is lost when the instance stops.
3. Elastic IP Address
An Elastic IP address provides a permanent public IP address that can be associated with your
instance, even if the instance restarts.
Auto Scaling adjusts the number of instances based on the demand. This ensures you always have the
right number of resources, automatically adding or removing instances as needed.
Amazon CloudWatch helps you monitor your EC2 instances by tracking metrics like CPU usage, disk
activity, and network traffic. You can set alarms to take action if certain thresholds are exceeded.
3.2: Auto Scaling:
Auto Scaling in AWS is a service that automatically adjusts the number of EC2 instances running in
response to the demand for your application. It works by adding more instances when traffic or
workload increases and removing them when demand decreases, ensuring that your application has the
right number of resources at all times.
Key Points:
a) Scaling Up: When your application's usage increases (e.g., more visitors to a website), Auto
Scaling adds more EC2 instances to handle the load.
b) Scaling Down: When usage decreases, Auto Scaling reduces the number of instances, saving
costs.
c) Triggered by Metrics: Auto Scaling uses performance metrics like CPU usage or network
traffic to decide when to add or remove instances (a sketch follows this list).
d) Cost-Effective: It helps optimize costs by only using the necessary resources based on real-
time demand.
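One common way to express this metric-driven behaviour is a target-tracking policy. The boto3 sketch
below keeps the group's average CPU near 50%; it assumes an Auto Scaling group named web-asg
already exists:

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        # Add instances when average CPU rises above 50%, remove them below it.
        "TargetValue": 50.0,
    },
)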
3.3: Load Balancing:
Load balancing in cloud computing is the process of distributing incoming network traffic across
multiple servers (or instances) to ensure no single server is overwhelmed. It helps improve
performance, reliability, and availability by efficiently managing traffic to avoid overloading any
single resource.
Distributes Traffic: Balances the load by sending user requests to different servers, preventing any
single server from becoming a bottleneck.
Increases Availability: If one server goes down, the load balancer reroutes traffic to healthy servers,
ensuring continuous availability.
Improves Performance: By spreading the load across multiple servers, it reduces response time and
enhances user experience.
Example in AWS:
In AWS, Elastic Load Balancing (ELB) is the service that distributes incoming application traffic
across multiple EC2 instances in different Availability Zones. ELB automatically detects unhealthy
instances and redirects traffic to healthy ones.
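A minimal sketch of creating such a load balancer with boto3 follows; the subnet and security group
IDs are placeholders, and using subnets in two Availability Zones is what gives the load balancer its
fault tolerance:

import boto3

elbv2 = boto3.client("elbv2")

lb = elbv2.create_load_balancer(
    Name="web-alb",
    Type="application",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # two Availability Zones
    SecurityGroups=["sg-0123456789abcdef0"],
)
print(lb["LoadBalancers"][0]["DNSName"])  # the address clients connect to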
3.4: Wireless Computing and AWS Lambda:
Wireless computing refers to the ability to connect and perform tasks using devices without the need
for physical wired connections. It relies on technologies like Wi-Fi, Bluetooth, and cellular networks,
enabling devices like smartphones, tablets, IoT (Internet of Things) devices, and laptops to interact
with applications and services in the cloud.
AWS Lambda is a serverless computing service that plays a key role in enabling wireless computing
by allowing developers to run code in response to specific events without managing servers. When
combined with wireless devices, AWS Lambda can create powerful, responsive, and scalable
applications, particularly in the world of IoT and mobile services.
1.Event-Driven Execution: Lambda automatically triggers functions when certain events occur, such
as data from IoT devices, user actions on a mobile app, or database changes. It responds to real-time
wireless device interactions.
2.Serverless Architecture: With Lambda, you don’t need to provision or manage servers. This is ideal
for wireless computing environments where services must be highly scalable and dynamic. You only
pay for the compute time used when your code runs.
3.Seamless Integration with AWS Services: Lambda integrates with services like AWS IoT,
DynamoDB, S3, and API Gateway, which are commonly used in wireless applications. For example,
data from IoT sensors can trigger Lambda functions to process and store information in a database.
4. Scalability: Lambda automatically scales to handle multiple events concurrently, making it perfect
for wireless applications where the number of devices and requests can fluctuate greatly. A minimal
handler sketch follows below.
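As an illustration, here is a minimal Lambda handler in Python; the event field names are assumptions
for a hypothetical IoT temperature sensor, not a fixed AWS schema:

import json

def lambda_handler(event, context):
    # 'event' carries the triggering payload, e.g. a reading published
    # by an IoT device via AWS IoT or API Gateway.
    reading = event.get("temperature")
    if reading is not None and reading > 30:
        print(f"High temperature reported: {reading}")
    return {"statusCode": 200, "body": json.dumps({"processed": True})}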
WEEK-4
STORAGE SOLUTIONS
Amazon S3 (Simple Storage Service) is an object storage service for storing and retrieving data in the
cloud. Its key features include:
1.Storage for Any File Type: You can store files like photos, videos, documents, backups, or even
application data.
2.Buckets: Your data is stored in buckets, which are like folders. You create a bucket, upload files into
it, and can organize them however you need.
3.Access from Anywhere: Since it's in the cloud, you can access your data from anywhere with an
internet connection.
4.Security: S3 lets you control who can access your data by setting permissions. You can make files
public, private, or give specific people access.
5.Durability and Availability: S3 automatically replicates your data across different locations,
ensuring it's safe and available even if something goes wrong.
Example Use:
If you run a website and need a place to store images, you can upload them to an S3 bucket. The
images are securely stored, and you can easily link to them from your site. S3 handles all the behind-
the-scenes work, like ensuring your files are safe and available.
Bucket policies in Amazon S3 are rules that define access permissions for your S3 buckets. They
allow you to control who can access your data and what actions they can perform (like reading or
writing files). These policies are written in JSON format and can grant or restrict access to different
users, AWS accounts, or even the public.
Key Points:
1.Control Access: You can specify who (users or AWS services) can access the bucket and what
actions they can take.
2.Public vs. Private Access:
Public: You can make a bucket or specific files publicly accessible so anyone on the internet can view
them (common for hosting websites or public data).
Private: By default, buckets are private, meaning only the bucket owner or specific users with
permissions can access the files.
3.Conditional Access: Policies can also allow or deny requests based on conditions. For example, the
following policy grants public read access to every object in a bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
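A policy like the one above can be attached with the AWS SDK for Python (boto3); this is a sketch,
and my-bucket-name is a placeholder:

import boto3, json

s3 = boto3.client("s3")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket-name/*",
    }],
}

# put_bucket_policy expects the policy document as a JSON string.
s3.put_bucket_policy(Bucket="my-bucket-name", Policy=json.dumps(policy))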
Amazon DocumentDB is a fully managed NoSQL document database service designed to store,
retrieve, and manage semi-structured data, like JSON documents. It is optimized for handling large
volumes of documents and provides high availability, scalability, and flexible querying capabilities. It
is often used for applications that require rapid data access and dynamic data structures.
AWS EFS (Elastic File System):
AWS Elastic File System (EFS) is a scalable file storage service that allows you to store and access
files in the cloud. It is designed to be elastic, meaning it can automatically grow and shrink as you add
or remove files. EFS is commonly used with AWS services like EC2 (Elastic Compute Cloud) to
provide shared storage across multiple instances, making it suitable for applications that require file-
level storage and concurrent access.
WEEK-5
IDENTITY AND ACCESS MANAGEMENT (IAM)
AWS Identity and Access Management (IAM) lets you control who can access your AWS resources
and what actions they can perform.
Key Features:
1. Permissions Control: Define policies that specify who can do what. Policies can grant or deny
access to AWS services and resources.
2. Role-Based Access: Use roles to grant permissions to applications or services (like EC2 instances)
without sharing long-term credentials.
3. Centralized Management: Manage user permissions and access across all AWS services from a
single interface.
Benefits:
1. Secure Access: Ensure that only authorized users can access sensitive resources.
2. Compliance and Auditing: Track who accessed what and when for compliance purposes.
3. Temporary Access: Provide temporary access to users or services without the need for
permanent credentials.
Key Components:
1. Users: Individual identities created in IAM to represent people or applications that need access to
AWS resources.
Characteristics:
a) Each user has unique credentials (username and password) for the AWS Management Console
and/or access keys for programmatic access.
b) Users can be assigned to groups to simplify permission management.
Use Case: You create a user for each employee who needs access to AWS services.
2. Roles: IAM roles are identities with specific permissions that can be assumed by AWS services,
applications, or users.
Characteristics:
a) Unlike users, roles do not have permanent credentials; they provide temporary access.
b) Roles can be used by AWS services (like EC2 or Lambda) to perform actions on your behalf.
c) Roles are great for granting permissions without sharing long-term access keys.
Use Case: An EC2 instance may assume a role to access an S3 bucket without needing to store AWS
credentials on the instance.
3. Policies: Policies are documents that define permissions for users, groups, or roles. They specify
what actions are allowed or denied on specific resources.
Characteristics:
1) Policies are written in JSON format and can be attached to users, groups, or roles.
2) There are two types of policies:
Managed Policies: Standalone policies that can be attached to multiple users, groups, or roles.
Inline Policies: Policies that are directly embedded within a single user, group, or role.
Use Case: A policy can allow a user to read from an S3 bucket but deny them the ability to delete
objects in that bucket, as in the sketch below.
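A minimal boto3 sketch of that use case follows; the user name and bucket name are placeholders:

import boto3, json

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        # Allow reads from the bucket...
        {"Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::my-bucket-name/*"},
        # ...but explicitly deny deletes (an explicit Deny always wins).
        {"Effect": "Deny", "Action": "s3:DeleteObject",
         "Resource": "arn:aws:s3:::my-bucket-name/*"},
    ],
}

iam.put_user_policy(UserName="example-user",
                    PolicyName="s3-read-no-delete",
                    PolicyDocument=json.dumps(policy))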
AWS Key Management Service (KMS):
What It Is: A service that helps you create and manage cryptographic keys for encrypting your data.
Key Features:
Key Types: AWS managed keys, which AWS creates on your behalf, and customer managed keys,
which you create and manage yourself.
Encryption approaches:
1. Data-at-Rest Encryption:
What It Is: Encrypting data where it is stored.
How It Works: For example, Amazon S3 can automatically encrypt your files when you upload them.
2. Data-in-Transit Encryption:
How It Works: Use protocols like TLS to secure data being sent between servers.
3. Application-Level Encryption:
What It Is: Encrypting data within your applications before it’s stored.
How It Works: Use libraries to encrypt sensitive data in your app before saving it in a database or
storage.
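As a small example of data-at-rest encryption with KMS, the boto3 sketch below uploads an object
that S3 encrypts with a KMS key; the bucket name and key alias are placeholders:

import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-bucket-name",
    Key="reports/confidential.txt",
    Body=b"sensitive data",
    ServerSideEncryption="aws:kms",      # encrypt at rest with KMS
    SSEKMSKeyId="alias/my-app-key",      # which key to use
)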
WEEK-6
AWS CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos,
applications, and APIs to users globally with low latency and high transfer speeds.
How It Works:
1. Edge Locations: CloudFront uses a network of edge locations around the world. When a user
requests content (like a webpage or video), CloudFront delivers it from the nearest edge location,
reducing loading time.
2. Caching: It caches copies of your content at these edge locations. If the content is not already
cached, CloudFront fetches it from the original source (like an S3 bucket or web server) and then
caches it for future requests (a cache-invalidation sketch appears at the end of this section).
3. Dynamic and Static Content: CloudFront can deliver both static content (like images, CSS, and
JavaScript) and dynamic content (like API responses).
4. Security: It provides features like SSL/TLS encryption, AWS Shield for DDoS protection, and
signed URLs for secure access.
Benefits:
a) Faster Loading Times: Users get content from the closest location, leading to quicker load
times.
b) Scalability: Automatically scales to handle varying amounts of traffic without requiring
manual intervention.
c) Global Reach: Delivers content to users around the world with low latency.
d) Cost-Effective: Pay only for what you use, with no upfront fees.
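One routine caching task is invalidating stale content so edge locations fetch fresh copies. A boto3
sketch, with a placeholder distribution ID:

import boto3, time

cloudfront = boto3.client("cloudfront")

cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},  # invalidate everything
        "CallerReference": str(time.time()),        # any unique string
    },
)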
Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service
designed to provide reliable domain name resolution.
How It Works:
1. Domain Registration: You can register new domain names directly through Route 53 or transfer
existing ones.
2. DNS Routing: Route 53 translates human-readable domain names into IP addresses and routes user
requests to the right resources.
3. Health Checks: It can monitor the health of your applications and automatically route traffic away
from unhealthy endpoints.
Benefits:
High Availability: Automatically routes traffic to healthy resources, ensuring your application
remains accessible.
Scalability: Easily handles large volumes of DNS queries without compromising performance.
1) AWS Direct Connect: A dedicated network connection between your on-premises data centre
and AWS.
How It Works: Instead of using the public internet, you establish a private connection to AWS
services, which can improve bandwidth and reduce latency.
2) AWS VPN (Virtual Private Network): A secure connection over the internet between your
on-premises network and your AWS Virtual Private Cloud (VPC).
How It Works: AWS VPN creates an encrypted tunnel over the internet, allowing you to
connect to your AWS resources securely.
3) Hybrid Cloud Connectivity: A combination of on-premises infrastructure and cloud resources
that work together.
How It Works: Hybrid cloud setups can use Direct Connect, VPN, or both to connect on-premises data
centres with cloud resources. This allows for seamless data transfer and application integration
between the two environments.
WEEK-7
Amazon CloudWatch is a monitoring and management service designed to provide visibility into your
AWS resources and applications. It helps you track performance, operational health, and logs, enabling
you to respond to changes and optimize performance.
Key Features:
1.Monitoring Metrics:
What It Does: Collects and tracks metrics (like CPU usage, memory usage, and request counts) from
AWS services and custom applications.
Use Case: Monitor the performance of EC2 instances, RDS databases, or any other AWS services in
real-time.
2.Alarms:
What It Does: Allows you to set alarms based on specific metrics. You can receive notifications or
automatically take actions (like scaling resources) when thresholds are crossed.
Use Case: Set an alarm to notify you if CPU usage exceeds 80% for a specified period (a sketch
follows at the end of this list).
3.Logs:
What It Does: Collects and stores log files from various AWS services and applications. You can
search and analyse these logs to troubleshoot issues.
Use Case: Use CloudWatch Logs to monitor logs from EC2 instances, Lambda functions, or any
application that generates log data.
4. Dashboards:
What It Does: Provides customizable dashboards that give you a visual representation of your metrics
and logs.
Use Case: Create a dashboard to display key metrics for all your applications in one view.
5.Events:
What It Does: Monitors and reacts to changes in your AWS environment. You can set rules to trigger
actions based on specific events.
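The 80% CPU alarm mentioned above could be created with a boto3 sketch like this; the instance ID
and SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # evaluate over 5-minute windows
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)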
Benefits:
a) Centralized Monitoring: Get a unified view of your AWS resources and applications.
b) Real-Time Insights: Quickly identify and respond to issues before they impact users.
c) Cost Management: Optimize resource usage by monitoring performance and usage metrics.
d) Enhanced Security: Keep track of logs for compliance and security auditing.
7.2: Cost optimization strategies:
1) Right-Sizing Resources: Regularly review and adjust the size of your compute instances to match
the actual usage.
How It Helps: Prevents overspending on underutilized resources (e.g., choosing smaller EC2 instances
for lower workloads).
2) Use Reserved Instances or Savings: Purchase Reserved Instances or commit to a Savings Plan for
predictable workloads.
How It Helps: Provides significant discounts (up to 75%) compared to on-demand pricing when you
commit to using specific resources over a period (1 or 3 years).
3) Automate Resource Management: Use automation tools to start and stop instances based on
schedules or usage patterns.
How It Helps: Reduces costs by ensuring that resources are only running when needed (e.g., turning
off development or testing environments during off-hours).
4) Leverage Spot Instances: Use Spot Instances for flexible, fault-tolerant applications where
interruptions are acceptable.
How It Helps: Offers substantial savings (up to 90%) over on-demand pricing.
5) Optimize Storage Cost: Regularly review storage usage and classify data based on access
frequency.
How It Helps: Move infrequently accessed data to lower-cost storage classes (like Amazon S3 Glacier)
to reduce costs.
6) Monitor and Analyse Usage: Use monitoring tools (like AWS Cost Explorer) to track and analyse
resource usage and costs (a sketch appears at the end of this list).
How It Helps: Identifies trends, areas of overspending, and opportunities for optimization.
7) Use Auto Scaling: Automatically adjust capacity to match demand.
How It Helps: Ensures you only pay for the resources you need during peak times while scaling down
during low demand.
8) Utilize Free Tier Services: Take advantage of AWS Free Tier offerings for testing and
development.
How It Helps: Allows you to explore services at no cost up to certain usage limits, reducing initial
costs for new projects.
9) Regularly Review and Clean Up Resources: Conduct regular audits to identify and remove
unused or underutilized resources.
How It Helps: Eliminates unnecessary spending on resources that are no longer needed.
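For the monitoring strategy above, spending can also be queried programmatically. A boto3 sketch
using the Cost Explorer API (the dates are examples, and Cost Explorer must be enabled on the
account):

import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-05-01", "End": "2024-06-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # cost per service
)
for group in result["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])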
WEEK-8
AWS Cloud Foundations is a set of fundamental concepts, best practices, and services that help
organizations establish a strong foundation for using Amazon Web Services (AWS) effectively and
securely. It covers the basics of cloud computing and how AWS services work together to meet
business needs.
Key Concepts:
1. Cloud Computing:
Definition: The delivery of computing resources (like servers, storage, databases, and applications)
over the internet (the "cloud") instead of on local servers or personal computers.
2. AWS Services:
Importance: These services enable businesses to build and deploy applications quickly and efficiently.
3. Security:
What It Is: Implementing security measures to protect data and applications in the cloud.
Key Features: Identity and Access Management (IAM) for controlling user access, encryption for data
protection, and compliance with regulations.
4. Cost Management:
What It Is: Understanding and managing costs associated with using AWS services.
Strategies: Use tools like AWS Cost Explorer to monitor spending and optimize resource usage.
Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define, provision, and
manage cloud infrastructure using a declarative configuration language. With Terraform, you can
automate the setup of infrastructure resources across various cloud providers, including AWS, Azure,
and Google Cloud.
Key Concepts:
1. Infrastructure as Code (IaC):
Definition: The practice of managing and provisioning computing infrastructure through code instead
of manual processes.
Benefits: Increases consistency, reduces human error, and allows for version control of infrastructure
configurations.
2. Configuration Files:
What They Are: Written in HashiCorp Configuration Language (HCL) or JSON, these files define the
desired state of your infrastructure.
Example: A configuration file might specify the creation of an EC2 instance, VPC, or S3 bucket.
3. Providers:
What They Are: Plugins that allow Terraform to interact with various cloud providers and services
(e.g., AWS, Azure, Google Cloud).
Use Case: Each provider exposes resources specific to that platform, allowing you to define
infrastructure on that provider.
4. Modules:
What They Are: Reusable configurations that encapsulate a group of resources and can be shared
across projects.
Use Case: Create a module for setting up a web server, which can be reused in multiple environments.
WEEK-9
Backup and restore strategies are essential practices for protecting data and ensuring business
continuity in case of data loss due to hardware failures, accidental deletions, or disasters. A good
strategy involves regularly creating copies of data (backups) and having a clear process to recover that
data (restoration) when needed.
1. Backup Types:
Full Backup:
What It Is: Backs up all data every time the backup runs.
Use Case: Provides a comprehensive snapshot but takes longer and requires more storage.
Incremental Backup:
What It Is: Backs up only the data that has changed since the last backup.
Use Case: Faster and uses less storage compared to full backups but requires the last full backup and
all incremental backups for restoration.
Differential Backup:
What It Is: Backs up all data changed since the last full backup.
Use Case: Faster restoration than incremental backups but requires more storage than incremental
backups.
2. Backup Frequency:
What It Is: How often backups are created (e.g., hourly, daily, weekly).
Use Case: Determine the frequency based on how critical the data is and how often it changes.
3. Backup Storage:
On-Premises Storage: Local storage devices (e.g., external hard drives, NAS).
Cloud Storage: Services like Amazon S3, Google Cloud Storage, or Azure Blob Storage for remote
backups.
4. Backup Automation:
What It Is: Scheduling backups to run automatically instead of relying on manual processes (a sketch
appears at the end of this section).
Benefits: Reduces human error, ensures backups are not forgotten, and provides consistent protection.
Disaster Recovery (DR) Strategies:
1. Backup Strategy:
What It Is: Regularly backing up data and configurations to recover from data loss.
Implementation: Use automated backup solutions, ensuring backups are stored securely offsite or in
the cloud.
3. Geographic Redundancy:
What It Is: Distributing resources across multiple locations to protect against regional disasters.
Implementation: Use multiple data centres or cloud regions to host replicas of critical applications and
data.
4. Disaster Recovery Testing:
What It Is: Periodically simulating disaster scenarios to ensure the disaster recovery plan (DRP) works
effectively.
Implementation: Conduct regular drills and tests to validate the recovery process, updating the plan
based on lessons learned.
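As a small example of backup automation, the boto3 sketch below snapshots one EBS volume; the
volume ID is a placeholder, and running it on a schedule (for example, from a Lambda function)
would automate daily backups:

import boto3
from datetime import datetime

ec2 = boto3.client("ec2")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description=f"daily-backup-{datetime.utcnow():%Y-%m-%d}",
)
print(snapshot["SnapshotId"])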
WEEK-10
Overview:
For the final project, I designed and deployed a cloud infrastructure for a [choose one: web application,
e-commerce site, or serverless data processing pipeline]. The project utilized several AWS services to
ensure high availability, fault tolerance, and scalability.
Key Components:
1. Amazon EC2 (Elastic Compute Cloud):Used for deploying and managing virtual servers for the
application.
2. Amazon RDS (Relational Database Service): Implemented a highly available database with
automated backups and failover.
3. Amazon S3 (Simple Storage Service): Used for object storage to host static content and backup
data.
4. Amazon CloudFront: Set up for content delivery and caching, ensuring faster load times and
reduced latency.
5. AWS Lambda: Integrated serverless functions to handle specific workloads like data processing
and event-driven tasks.
6. Amazon Route 53: Managed domain names and DNS routing for the web application.
Security Measures:
- Utilized IAM (Identity and Access Management) to ensure proper access controls and roles for users
and services.
- Implemented AWS Security Groups and Network Access Control Lists (NACLs) to restrict inbound
and outbound traffic to my cloud resources.
- Integrated AWS WAF (Web Application Firewall) for protection against common web exploits.
Challenge and Solution:
- Challenge: Handling fluctuating traffic without over-provisioning resources.
- Solution: Used AWS Auto Scaling to dynamically scale the infrastructure based on demand, ensuring
cost-effective use of resources without compromising performance.
CONCLUSION
In conclusion, my cloud virtual internship on AWS was a great experience. Over the past 10 weeks, I
learned a lot about how to work with cloud technology. I got hands-on experience setting up and
managing cloud resources, which helped me understand how AWS works. Working with my team
improved my communication and problem-solving skills. This internship strengthened my interest in
cloud computing and gave me useful skills for my future career. I am thankful for the support from my
mentors and colleagues, and I am excited to apply what I have learned as I move forward.