
SUMMER INTERNSHIP REPORT ON

CLOUD VIRTUAL INTERNSHIP

in partial fulfillment for the award of the degree


of
BACHELOR OF TECHNOLOGY
in
ELECTRONICS AND COMMUNICATION ENGINEERING

Submitted by
BONTHU LOHITHA
Roll No: 216N1A0409

Department of Electronics and Communication Engineering


SRINIVASA INSTITUTE OF ENGINEERING AND TECHNOLOGY
(Autonomous)

(2024-2025)
SRINIVASA INSTITUTE OF ENGINEERING AND TECHNOLOGY
(UGC – Autonomous Institution)
(Approved by AICTE, permanently affiliated to JNTUK, Kakinada, ISO 9001: 2015 certified Institution)

(Accredited by NAAC with 'A' Grade; Recognised by UGC under sections 2(f) & 12(B))

NH-216, Amalapuram-Kakinada Highway, Cheyyeru (V), Amalapuram-533216.

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

CERTIFICATE

This is to certify that the Summer Internship Report on “CLOUD VIRTUAL INTERNSHIP”, carried
out at AWS Academy, is submitted by Bonthu Lohitha (216N1A0409), a student of IV
B.Tech. Electronics and Communication Engineering at Srinivasa Institute of Engineering and
Technology, Amalapuram. She completed the internship during the summer vacation between the 3rd
and 4th years, from April to June 2024.

Mentor Head of the Department

Principal
ACKNOWLEDGEMENT

I am grateful to our principal, Dr. M. SREENIVASA KUMAR, Srinivasa Institute of Engineering
and Technology, who most ably runs the institution, for his constant encouragement and support in
carrying out my internship report at college.

I am immensely thankful to Dr. B. RATNA RAJU, Head of the Department, Electronics and
Communication Engineering, who has been an excellent guide and a great source of inspiration, for
his technical guidance throughout my work.

I extend my heartfelt gratitude to Miss P. SATYA HARI CHANDANA for her invaluable
guidance, expertise, and unwavering support throughout my summer internship, which significantly
contributed to my growth and learning.

I would like to express my heartfelt gratitude to my parents, without whom I would not have been
privileged to achieve and fulfil my dreams. The satisfaction and euphoria that accompany the
successful completion of a task are incomplete without mentioning the people whose constant
guidance and encouragement made it possible and crowned the effort with success. In this context, I
would like to thank all the other staff members, both teaching and non-teaching, who extended their
timely help and eased my task.

BONTHU LOHITHA

Reg. No: 216N1A0409
ABSTRACT

This report provides an overview of my 10-week virtual internship in Cloud Architecture, hosted by
AWS. The internship focused on key cloud computing principles and practical applications, enabling
hands-on experience with AWS services. Throughout the program, I gained exposure to various cloud
infrastructure concepts such as virtualization, scalability, load balancing, and security.

Key services like EC2, S3, RDS, and IAM were explored in-depth, allowing me to understand their
configurations and best practices. Additionally, I learned about architecture design patterns, cloud cost
management, and disaster recovery strategies, which are essential for building resilient and cost-
effective cloud environments.

The internship also emphasized the importance of automation and Infrastructure as Code (IaC) using
AWS CloudFormation, as well as DevOps principles to streamline cloud operations. By the end of the
program, I had developed the ability to design, implement, and manage cloud-based solutions tailored
to real-world business needs. This report outlines the major tasks completed, challenges faced, and key
takeaways from the internship, highlighting my enhanced proficiency in cloud technologies.
INDEX

S.NO  WEEK     CONTENT

1     Week-1   Introduction to cloud computing

2     Week-2   Cloud infrastructure design

3     Week-3   Compute services

4     Week-4   Storage solutions

5     Week-5   Security and identity management

6     Week-6   Networking and content delivery

7     Week-7   Monitoring and optimization

8     Week-8   Automation and infrastructure as code

9     Week-9   Backup, disaster recovery

10    Week-10  Cloud architecture implementation

11    -        Conclusion
WEEK-1
Introduction to cloud computing

1.1: Introduction to cloud computing:

Some key takeaways from this section of the module include:

1) Cloud computing is the on-demand delivery of IT resources via the internet with pay-as-you-
go pricing.
2) Cloud computing enables you to think of (and use) your infrastructure as software.
3) There are three cloud service models: IaaS, PaaS, and SaaS. There are three cloud deployment
models: cloud, hybrid, and on-premises (private cloud).
4) There are AWS service analogs for many parts of the traditional, on-premises IT space.

1.2: Advantages of cloud computing:

The key takeaways from this section of the module include the six advantages of cloud computing:

1) Trade capital expense for variable expense.
2) Benefit from massive economies of scale.
3) Stop guessing capacity.
4) Increase speed and agility.
5) Stop spending money on running and maintaining data centres.
6) Go global in minutes.

Figure: Cloud Computing Overview


1.3: Overview of cloud models:
Infrastructure as a Service (IaaS):

IaaS provides virtualized computing resources over the internet, allowing users to rent IT infrastructure
like servers, storage, and networking on a pay-as-you-go basis. With IaaS, businesses can avoid the
expense and complexity of buying and managing physical servers. Instead, they can scale resources up
or down depending on demand.

Key Features: Virtual machines, storage, networks, and operating systems.

Examples: Amazon Web Services (AWS EC2), Microsoft Azure, Google Cloud Platform (GCP).

Use Case: Ideal for companies needing full control over their infrastructure but want the flexibility of
cloud hosting.

Platform as a Service (PaaS):

PaaS offers a platform that allows developers to build, test, and deploy applications without worrying
about the underlying infrastructure. The platform handles everything from hosting to database
management, enabling developers to focus on coding and innovation rather than hardware setup and
maintenance.

Key Features: Development tools, middleware, database management, and application hosting.

Examples: AWS Elastic Beanstalk, Google App Engine, Microsoft Azure App Services.

Use Case: Best for developers who want to focus on building applications without managing the
underlying infrastructure.

Software as a Service (SaaS):

SaaS delivers fully functional software applications over the internet. Users access these applications
via a web browser without needing to install or maintain them on their own devices. The provider
manages everything, including infrastructure, application updates, and security.

Key Features: Web-based applications that users access via subscription.

Examples: Google Workspace (Gmail, Google Docs), Salesforce, Microsoft Office 365.

Use Case: Ideal for end-users who need ready-to-use software solutions without worrying about
infrastructure or platform management.
1.4: Introduction to Amazon Web Services (AWS):

The key takeaways from this section of the module include:

1. AWS is a secure cloud platform that offers a broad set of global cloud-based products called
services that are designed to work together.
2. There are many categories of AWS services, and each category has many services to choose
from.
3. Choose a service based on your business goals and technology requirements.
4. There are three ways to interact with AWS services: the AWS Management Console, the
AWS Command Line Interface (AWS CLI), and software development kits (SDKs).
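
As a small illustration of the SDK option, the following sketch uses boto3 (the AWS SDK for Python)
to list the S3 buckets in an account. It is a minimal example, assuming credentials have already been
configured (for instance with the AWS CLI).

import boto3  # AWS SDK for Python

# Create a client for S3; credentials and region are read from the
# environment or the shared AWS configuration files.
s3 = boto3.client("s3")

# list_buckets returns metadata for every bucket owned by the account.
response = s3.list_buckets()
for bucket in response["Buckets"]:
    print(bucket["Name"], bucket["CreationDate"])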
WEEK-2
Cloud infrastructure design

2.1: Amazon VPC (Virtual Private Cloud):

Amazon VPC can be thought of as a private cloud inside the cloud: a logical grouping of servers in a
specified network. Servers deployed in a Virtual Private Cloud (VPC) are completely isolated from
other servers deployed in AWS. You have full control over the IP addressing of your virtual machines,
as well as the route tables and gateways of the VPC. With the help of security groups and network
access control lists, you can protect your application at multiple levels.

Amazon VPC (Virtual Private Cloud) Architecture:

The basic architecture of a properly functioning VPC consists of several distinct services such as
gateways, load balancers, and subnets. Together, these resources are grouped under a VPC to create
an isolated virtual environment, with security checks at multiple levels. The VPC is divided into
subnets, which are connected to each other via route tables and a load balancer.

Figure: Amazon VPC (Virtual Private Cloud) Architecture

Components:
1. VPC

You can launch AWS resources into a defined virtual network using Amazon Virtual Private Cloud
(Amazon VPC). With the advantage of using the scalable infrastructure of AWS, this virtual
network closely mimics a conventional network that you would operate in your own data centre.
The user-defined address space can be as large as a /16 block (65,536 addresses).

2. Subnets

Subnets divide the big network into smaller, connected networks to reduce traffic. A VPC supports up
to 200 user-defined subnets, each carved out of the VPC's address range.

3. Route Tables

Route tables define the rules for routing traffic between the subnets.

4. Network Access Control Lists

Network Access Control Lists (NACL) for VPC serve as a firewall by managing both inbound and
outbound rules. There will be a default NACL for each VPC that cannot be deleted.

5. Internet Gateway (IGW)

The Internet Gateway (IGW) will make it possible to link the resources in the VPC to the Internet.

6. Network Address Translation (NAT)

Network Address Translation (NAT) will enable the connection between the private subnet and the
internet.
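
The following boto3 sketch shows how these components can be created programmatically. It is a
minimal illustration, assuming configured credentials and region; the CIDR ranges are examples only.

import boto3

ec2 = boto3.client("ec2")

# Create a VPC with a /16 address space (65,536 addresses).
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

# Carve a smaller /24 subnet out of the VPC's address range.
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Create an internet gateway and attach it to the VPC so that
# resources in public subnets can reach the internet.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Add a default route pointing at the internet gateway.
route_table = ec2.create_route_table(VpcId=vpc_id)
ec2.create_route(
    RouteTableId=route_table["RouteTable"]["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)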

2.2: Security Groups:

A Security Group is essentially a virtual firewall that controls inbound and outbound traffic to cloud
resources such as virtual machines (EC2 instances in AWS). Security Groups allow you to define rules
that specify which types of traffic are allowed to access or leave your instances. They play a key role
in ensuring network-level security by managing the traffic flow to and from your resources.

Key Features:
1) Stateless vs. Stateful:
Security Groups in AWS are stateful. This means that if you allow an inbound request from a
particular IP address, the corresponding outbound response is automatically allowed, and vice
versa.
2) Inbound and Outbound Rules:
Security Groups can have specific rules for both inbound (traffic coming to your instance) and
outbound (traffic leaving your instance).

Each rule specifies:

Protocol: Such as TCP, UDP, or ICMP.

Port Range: A specific port or range of ports (e.g., port 80 for HTTP traffic or 22 for SSH).

Source/Destination: IP address ranges (CIDR blocks) or other Security Groups that are allowed to
interact with your instances.
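
A short boto3 sketch, assuming an existing VPC (the VPC ID below is a placeholder), shows how such
inbound rules can be defined:

import boto3

ec2 = boto3.client("ec2")

# Create a security group inside an existing VPC.
sg = ec2.create_security_group(
    GroupName="web-server-sg",
    Description="Allow HTTP and SSH",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)

# Inbound rules: HTTP (port 80) from anywhere, SSH (port 22) from one CIDR.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},
    ],
)
# Because security groups are stateful, responses to this allowed
# inbound traffic are permitted automatically.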

WEEK-3

COMPUTE SERVICES
3.1: Working with Amazon EC2:

1. Launching EC2 Instances

To launch an EC2 instance:

1) Choose an AMI: This is a pre-configured template that includes an operating system and
software.
2) Select Instance Type: Pick an instance type based on CPU, memory, and storage requirements
(e.g., general-purpose, compute-optimized).
3) Configure Instance Details: Set options like the number of instances, network settings, and
scaling options.
4) Add Storage: Attach Elastic Block Store (EBS) volumes to store data.
5) Set Security Group Rules: Create firewall rules to allow or block traffic to your instance.
6) Connect to the Instance: Use SSH for Linux instances or RDP for Windows instances to log in
and manage your instance.
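
The same steps can be performed programmatically. The boto3 sketch below is illustrative only; the
AMI ID, key pair name, and security group ID are placeholders for values from your own account.

import boto3

ec2 = boto3.client("ec2")

# Launch one t2.micro instance from an AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # step 1: choose an AMI (placeholder)
    InstanceType="t2.micro",           # step 2: select an instance type
    MinCount=1, MaxCount=1,            # step 3: number of instances
    KeyName="my-key-pair",             # used later to connect over SSH
    SecurityGroupIds=["sg-0123456789abcdef0"],  # step 5: firewall rules
    BlockDeviceMappings=[{             # step 4: attach an 8 GiB EBS volume
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 8, "VolumeType": "gp3"},
    }],
)
print(response["Instances"][0]["InstanceId"])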

2. EC2 Storage Options

Elastic Block Store (EBS): Persistent storage that remains even if the instance is stopped.

Instance Store: Temporary storage that is lost when the instance stops.

3. Elastic IP Address

An Elastic IP provides a permanent public IP address that can be associated with your instance, even if
it restarts.

4. Auto Scaling EC2

Auto Scaling adjusts the number of instances based on the demand. This ensures you always have the
right number of resources, automatically adding or removing instances as needed.

5. Monitoring with CloudWatch

Amazon CloudWatch helps you monitor your EC2 instances by tracking metrics like CPU usage, disk
activity, and network traffic. You can set alarms to take action if certain thresholds are exceeded.

3.2: Autoscaling:

Auto Scaling in AWS is a service that automatically adjusts the number of EC2 instances running in
response to the demand for your application. It works by adding more instances when traffic or
workload increases and removing them when demand decreases, ensuring that your application has the
right number of resources at all times.

Key Points:

a) Scaling Up: When your application's usage increases (e.g., more visitors to a website), Auto
Scaling adds more EC2 instances to handle the load.
b) Scaling Down: When usage decreases, Auto Scaling reduces the number of instances, saving
costs.
c) Triggered by Metrics: Auto Scaling uses performance metrics like CPU usage or network
traffic to decide when to add or remove instances.
d) Cost-Effective: It helps optimize costs by using only the resources needed to meet real-time
demand.

Figure: Auto Scaling
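
As a sketch of how this can be configured with boto3 (the launch template name and subnet IDs below
are placeholders for resources in your own account), a target-tracking policy keeps average CPU near a
chosen value:

import boto3

autoscaling = boto3.client("autoscaling")

# Create an Auto Scaling group from an existing launch template.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template"},
    MinSize=1, MaxSize=5, DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Target-tracking policy: add or remove instances to keep average
# CPU utilization near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)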

3.3: Load balancing:

Load balancing in cloud computing is the process of distributing incoming network traffic across
multiple servers (or instances) to ensure no single server is overwhelmed. It helps improve
performance, reliability, and availability by efficiently managing traffic to avoid overloading any
single resource.

Key Benefits of Load Balancing:

Distributes Traffic: Balances the load by sending user requests to different servers, preventing any
single server from becoming a bottleneck.
Increases Availability: If one server goes down, the load balancer reroutes traffic to healthy servers,
ensuring continuous availability.

Improves Performance: By spreading the load across multiple servers, it reduces response time and
enhances user experience.

Example in AWS:

In AWS, Elastic Load Balancing (ELB) is the service that distributes incoming application traffic
across multiple EC2 instances in different Availability Zones. ELB automatically detects unhealthy
instances and redirects traffic to healthy ones.
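
A minimal boto3 sketch of this setup, with placeholder subnet and VPC IDs, creates an Application
Load Balancer, a health-checked target group, and a listener:

import boto3

elbv2 = boto3.client("elbv2")

# Internet-facing ALB across two subnets in different Availability Zones.
lb = elbv2.create_load_balancer(
    Name="web-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    Scheme="internet-facing",
    Type="application",
)

# Target group: the ALB health-checks members on HTTP port 80 and
# stops routing to any instance that fails the checks.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP", Port=80,
    VpcId="vpc-0123456789abcdef0",
    HealthCheckPath="/",
)

# Listener: forward incoming HTTP traffic to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb["LoadBalancers"][0]["LoadBalancerArn"],
    Protocol="HTTP", Port=80,
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"],
    }],
)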

3.4: Introduction to Wireless Computing with AWS Lambda:

Wireless computing refers to the ability to connect and perform tasks using devices without the need
for physical wired connections. It relies on technologies like Wi-Fi, Bluetooth, and cellular networks,
enabling devices like smartphones, tablets, IoT (Internet of Things) devices, and laptops to interact
with applications and services in the cloud.

AWS Lambda is a serverless computing service that plays a key role in enabling wireless computing
by allowing developers to run code in response to specific events without managing servers. When
combined with wireless devices, AWS Lambda can create powerful, responsive, and scalable
applications, particularly in the world of IoT and mobile services.

Key Features of AWS Lambda in Wireless Computing:

1. Event-Driven Execution: Lambda automatically triggers functions when certain events occur, such
as data from IoT devices, user actions on a mobile app, or database changes. It responds to real-time
wireless device interactions.

2. Serverless Architecture: With Lambda, you don’t need to provision or manage servers. This is ideal
for wireless computing environments where services must be highly scalable and dynamic. You only
pay for the compute time used when your code runs.

3. Seamless Integration with AWS Services: Lambda integrates with services like AWS IoT,
DynamoDB, S3, and API Gateway, which are commonly used in wireless applications. For example,
data from IoT sensors can trigger Lambda functions to process and store information in a database.

4. Scalability: Lambda automatically scales to handle multiple events concurrently, making it perfect
for wireless applications where the number of devices and requests can fluctuate greatly.

Use Case: IoT and Mobile Applications


In a wireless computing scenario, IoT devices can send data (e.g., temperature sensors, GPS trackers)
to AWS IoT Core. AWS IoT Core triggers Lambda functions to process this data, store it in a
database, or send alerts. Similarly, mobile apps can trigger Lambda to process backend tasks without
needing to maintain servers.
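
As an illustration, a minimal Python handler of the kind AWS IoT Core could invoke might look like
the following. The event fields are assumptions for the example, since the actual shape of the event
depends on the IoT rule that triggers the function.

import json

def lambda_handler(event, context):
    """Triggered when a sensor publishes a reading (shape assumed)."""
    temperature = event.get("temperature")

    if temperature is not None and temperature > 30:
        # In a real function this might write to DynamoDB or send an
        # SNS alert; here we just log the condition.
        print(f"High temperature alert: {temperature} C")

    return {"statusCode": 200, "body": json.dumps({"ok": True})}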

WEEK-4

STORAGE SOLUTIONS

4.1: AWS S3 (Simple Storage Service):


Amazon S3 (Simple Storage Service) is a cloud-based storage service that allows you to store and
retrieve any amount of data at any time. It's highly scalable, secure, and easy to use, making it ideal for
storing files, backups, and data from websites or applications.

Key Features of S3:

1. Storage for Any File Type: You can store files like photos, videos, documents, backups, or even
application data.

2. Buckets: Your data is stored in buckets, which are like folders. You create a bucket, upload files into
it, and can organize them however you need.

3. Access from Anywhere: Since it's in the cloud, you can access your data from anywhere with an
internet connection.

4. Security: S3 lets you control who can access your data by setting permissions. You can make files
public, private, or give specific people access.

5. Durability and Availability: S3 automatically replicates your data across different locations,
ensuring it's safe and available even if something goes wrong.

Example Use:

If you run a website and need a place to store images, you can upload them to an S3 bucket. The
images are securely stored, and you can easily link to them from your site. S3 handles all the behind-
the-scenes work, like ensuring your files are safe and available.

Bucket policies in Amazon S3 are rules that define access permissions for your S3 buckets. They
allow you to control who can access your data and what actions they can perform (like reading or
writing files). These policies are written in JSON format and can grant or restrict access to different
users, AWS accounts, or even the public.

Key Points:

1. Control Access: You can specify who (users or AWS services) can access the bucket and what
actions they can take, such as:

Read: Viewing or downloading files.

Write: Uploading or modifying files.

Delete: Removing files from the bucket.

2. Public or Private Access:

Public: You can make a bucket or specific files publicly accessible so anyone on the internet can view
them (common for hosting websites or public data).
Private: By default, buckets are private, meaning only the bucket owner or specific users with
permissions can access the files.

3. Conditional Access:

You can add conditions in the policy, such as:

1) Only allowing access from certain IP addresses.


2) Requiring that the connection be encrypted (using HTTPS).

Example of a Bucket Policy (JSON Format):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
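
The same policy can also be applied programmatically. The boto3 sketch below, with a placeholder
bucket name, attaches it to a bucket:

import boto3
import json

s3 = boto3.client("s3")

# The same public-read policy as above, built as a Python dictionary.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket-name/*",
    }],
}

# Attach the policy to the bucket (the bucket name is a placeholder).
s3.put_bucket_policy(Bucket="my-bucket-name", Policy=json.dumps(policy))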

Figure: Amazon S3 from a browser

Amazon DocumentDB (document database):

Amazon DocumentDB is a fully managed NoSQL document database service designed to store,
retrieve, and manage semi-structured data, like JSON documents. It is optimized for handling large
volumes of documents and provides high availability, scalability, and flexible querying capabilities.
It is often used for applications that require rapid data access and dynamic data structures.
AWS EFS (Elastic File System):

AWS Elastic File System (EFS) is a scalable file storage service that allows you to store and access
files in the cloud. It is designed to be elastic, meaning it can automatically grow and shrink as you add
or remove files. EFS is commonly used with AWS services like EC2 (Elastic Compute Cloud) to
provide shared storage across multiple instances, making it suitable for applications that require file-
level storage and concurrent access.

In summary:

1. DocumentDB is for document storage (NoSQL).

2. EFS is for file storage (shared file system).

WEEK-5

SECURITY AND IDENTITY MANAGEMENT

5.1: AWS Identity and Access Management (IAM):


AWS Identity and Access Management (IAM) is a service that helps you manage access to AWS
resources securely. It allows you to create and control user identities and set permissions to determine
what actions users can perform on specific resources.

Key Features

1. User Management: Create individual users and groups to control access.

2. Permissions Control: Define policies that specify who can do what. Policies can grant or deny
access to AWS services and resources.

3. Multi-Factor Authentication (MFA): Enhance security by requiring an additional verification
step, such as a code from a mobile app.

4. Role-Based Access: Use roles to grant permissions to applications or services (like EC2 instances)
without sharing long-term credentials.

5. Centralized Management: Manage user permissions and access across all AWS services from a
single interface.

Common Use Cases

1. Secure Access: Ensure that only authorized users can access sensitive resources.
2. Compliance and Auditing: Track who accessed what and when for compliance purposes.
3. Temporary Access: Provide temporary access to users or services without the need for
permanent credentials.

5.2: Users, roles and policies:

1. Users: Individual identities created in IAM to represent people or applications that need access to
AWS resources.

Characteristics:

a) Each user has unique credentials (username and password) for the AWS Management Console
and/or access keys for programmatic access.
b) Users can be assigned to groups to simplify permission management.

Use Case: You create a user for each employee who needs access to AWS services.

2. Roles: IAM roles are identities with specific permissions that can be assumed by AWS services,
applications, or users.

Characteristics:

a) Unlike users, roles do not have permanent credentials; they provide temporary access.
b) Roles can be used by AWS services (like EC2 or Lambda) to perform actions on your behalf.
c) Roles are great for granting permissions without sharing long-term access keys.

Use Case: An EC2 instance may assume a role to access an S3 bucket without needing to store AWS
credentials on the instance.

3. Policies: Policies are documents that define permissions for users, groups, or roles. They specify
what actions are allowed or denied on specific resources.

Characteristics:

1) Policies are written in JSON format and can be attached to users, groups, or roles.
2) There are two types of policies:

Managed Policies: Standalone policies created and managed by AWS or users.

Inline Policies: Policies that are directly embedded within a single user, group, or role.

Use Case: A policy can allow a user to read from an S3 bucket but deny them the ability to delete
objects in that bucket.
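
A boto3 sketch of this use case follows; the bucket name, policy name, and user name are placeholders:

import boto3
import json

iam = boto3.client("iam")

# Managed policy matching the use case above: allow reading objects
# from one bucket while explicitly denying deletion.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject", "s3:ListBucket"],
         "Resource": ["arn:aws:s3:::my-bucket-name",
                      "arn:aws:s3:::my-bucket-name/*"]},
        {"Effect": "Deny",
         "Action": "s3:DeleteObject",
         "Resource": "arn:aws:s3:::my-bucket-name/*"},
    ],
}

response = iam.create_policy(
    PolicyName="s3-read-only-no-delete",
    PolicyDocument=json.dumps(policy_document),
)

# Attach the managed policy to an existing user.
iam.attach_user_policy(
    UserName="example-user",
    PolicyArn=response["Policy"]["Arn"],
)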

5.3: Key management and encryption:

Key Management Service (KMS):

What It Is: A service that helps you create and manage cryptographic keys for encrypting your data.

Key Features:

1. Central Management: Control all your keys from one place.


2. Automatic Rotation: Keys can be rotated automatically for better security.
3. Access Control: Set who can use or manage the keys.
4. Audit Logging: Keep track of who used the keys for compliance.

Customer Managed Keys (CMKs):

What They Are: Keys that you create and manage yourself.

Types:

a) Symmetric Keys: Same key is used to encrypt and decrypt.


b) Asymmetric Keys: A pair of keys (public for encrypting and private for decrypting).

Encryption Strategies:

1. Data-at-Rest Encryption:

What It Is: Encrypting data stored on disks or storage services.

How It Works: For example, Amazon S3 can automatically encrypt your files when you upload them.

2. Data-in-Transit Encryption:

What It Is: Encrypting data as it travels over networks.

How It Works: Use protocols like TLS to secure data being sent between servers.

3. Application-Level Encryption:

What It Is: Encrypting data within your applications before it’s stored.

How It Works: Use libraries to encrypt sensitive data in your app before saving it in a database or
storage.
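
A small boto3 sketch, assuming a customer managed key already exists under the alias shown (a
placeholder), illustrates the encrypt and decrypt calls:

import boto3

kms = boto3.client("kms")

# Encrypt a small piece of data with a customer managed key. The
# Encrypt API is for small payloads (up to 4 KB); larger data would
# normally be encrypted with a generated data key instead.
ciphertext = kms.encrypt(
    KeyId="alias/my-app-key",  # placeholder key alias
    Plaintext=b"sensitive application data",
)["CiphertextBlob"]

# Decrypt it again; KMS identifies the key from the ciphertext itself.
plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]
print(plaintext)  # b"sensitive application data"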

WEEK-6

NETWORKING AND CONTENT DELIVERY

6.1: AWS CloudFront for Content Delivery:

AWS CloudFront is a fast content delivery network (CDN) service that securely delivers data, videos,
applications, and APIs to users globally with low latency and high transfer speeds.
How It Works:

1. Edge Locations: CloudFront uses a network of edge locations around the world. When a user
requests content (like a webpage or video), CloudFront delivers it from the nearest edge location,
reducing loading time.

2. Caching: It caches copies of your content at these edge locations. If the content is not already
cached, CloudFront fetches it from the original source (like an S3 bucket or web server) and then
caches it for future requests.

3. Dynamic and Static Content: CloudFront can deliver both static content (like images, CSS, and
JavaScript) and dynamic content (like API responses).

4. Security: It provides features like SSL/TLS encryption, AWS Shield for DDoS protection, and
signed URLs for secure access.

Benefits:

a) Faster Loading Times: Users get content from the closest location, leading to quicker load
times.
b) Scalability: Automatically scales to handle varying amounts of traffic without requiring
manual intervention.
c) Global Reach: Delivers content to users around the world with low latency.
d) Cost-Effective: Pay only for what you use, with no upfront fees.

6.2: Amazon Route 53 for DNS Management:

Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service
designed to provide reliable domain name resolution.

How It Works:

1. Domain Registration: You can register new domain names directly through Route 53 or transfer
existing ones.

2. DNS Management: Route 53 translates human-friendly domain names (like www.example.com)
into IP addresses that computers use to communicate.

3. Health Checks: It can monitor the health of your applications and automatically route traffic away
from unhealthy endpoints.

4. Routing Policies: Route 53 offers various routing options, such as:

1) Simple Routing: Direct traffic to a single resource.


2) Weighted Routing: Distribute traffic across multiple resources based on assigned weights.
3) Latency-Based Routing: Send users to the nearest resource to minimize latency.
4) Geolocation Routing: Route users based on their geographic location.
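
As an illustration of weighted routing with boto3 (the hosted zone ID, domain name, IP addresses, and
weights below are all placeholders):

import boto3

route53 = boto3.client("route53")

# Weighted routing: send roughly 70% of traffic to one endpoint and
# 30% to another, using two records that share a name and type.
for identifier, ip, weight in [("primary", "192.0.2.10", 70),
                               ("secondary", "192.0.2.20", 30)]:
    route53.change_resource_record_sets(
        HostedZoneId="Z0123456789ABCDEFGHIJ",
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "SetIdentifier": identifier,
                    "Weight": weight,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )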

Benefits:

High Availability: Automatically routes traffic to healthy resources, ensuring your application
remains accessible.

Scalability: Easily handles large amounts of DNS queries without compromising performance.

6.3: AWS Direct Connect, VPN and Hybrid Cloud Connectivity:

1) AWS DIRECT CONNECT: A dedicated network connection between your on-premises data
centre and AWS.
2) How It Works: Instead of using the public internet, you can establish a private connection to
AWS services, which can improve bandwidth and reduce latency.
3) AWS VPN (Virtual Private Network): A secure connection over the internet between your
on-premises network and your AWS Virtual Private Cloud (VPC).
How It Works: AWS VPN creates an encrypted tunnel over the internet, allowing you to
connect to your AWS resources securely.
4) Hybrid Cloud Connectivity: A combination of on-premises infrastructure and cloud resources
that work together.

How It Works: Hybrid cloud setups can use Direct Connect, VPN, or both to connect on-premises data
centres with cloud resources. This allows for seamless data transfer and application integration
between the two environments.

WEEK-7

MONITORING AND OPTIMIZATION

7.1: Amazon CloudWatch for Monitoring and Logs:

Amazon CloudWatch is a monitoring and management service designed to provide visibility into your
AWS resources and applications. It helps you track performance, operational health, and logs, enabling
you to respond to changes and optimize performance.

Key Features:
1. Monitoring Metrics:

What It Does: Collects and tracks metrics (like CPU usage, memory usage, and request counts) from
AWS services and custom applications.

Use Case: Monitor the performance of EC2 instances, RDS databases, or any other AWS services in
real-time.

2. Alarms:

What It Does: Allows you to set alarms based on specific metrics. You can receive notifications or
automatically take actions (like scaling resources) when thresholds are crossed.

Use Case: Set an alarm to notify you if CPU usage exceeds 80% for a specified period.
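
The alarm in that use case can be created with boto3 as in the following sketch; the instance ID and
SNS topic ARN are placeholders:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Notify an SNS topic when average CPU on one instance stays above
# 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)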

3. Logs:

What It Does: Collects and stores log files from various AWS services and applications. You can
search and analyse these logs to troubleshoot issues.

Use Case: Use CloudWatch Logs to monitor logs from EC2 instances, Lambda functions, or any
application that generates log data.

4. Dashboards:

What It Does: Provides customizable dashboards that give you a visual representation of your metrics
and logs.

Use Case: Create a dashboard to display key metrics for all your applications in one view.

5. Events:

What It Does: Monitors and reacts to changes in your AWS environment. You can set rules to trigger
actions based on specific events.

Use Case: Automatically back up data when an EC2 instance is launched.

Benefits:

a) Centralized Monitoring: Get a unified view of your AWS resources and applications.
b) Real-Time Insights: Quickly identify and respond to issues before they impact users.
c) Cost Management: Optimize resource usage by monitoring performance and usage metrics.
d) Enhanced Security: Keep track of logs for compliance and security auditing.
7.2: Cost optimization strategies:

1) Right-Sizing Resources: Regularly review and adjust the size of your compute instances to match
the actual usage.

How It Helps: Prevents overspending on underutilized resources (e.g., choosing smaller EC2 instances
for lower workloads).

2) Use Reserved Instances or Savings Plans: Purchase Reserved Instances or commit to a Savings
Plan for predictable workloads.

How It Helps: Provides significant discounts (up to 75%) compared to on-demand pricing when you
commit to using specific resources over a period (1 or 3 years).

3) Automate Resource Management: Use automation tools to start and stop instances based on
schedules or usage patterns (a sketch of this approach appears after this list).

How It Helps: Reduces costs by ensuring that resources are only running when needed (e.g., turning
off development or testing environments during off-hours).

4) Leverage Spot Instances: Use Spot Instances for flexible, fault-tolerant applications where
interruptions are acceptable.

How It Helps: Offers substantial savings (up to 90%) over on-demand pricing.

5) Optimize Storage Cost: Regularly review storage usage and classify data based on access
frequency.

How It Helps: Move infrequently accessed data to lower-cost storage classes (like Amazon S3 Glacier)
to reduce costs.

6) Monitor and Analyse Usage: Use monitoring tools (like AWS Cost Explorer) to track and analyse
resource usage and costs.

How It Helps: Identifies trends, areas of overspending, and opportunities for optimization.

7) Implement Auto-Scaling: Set up auto-scaling to automatically adjust the number of instances
based on demand.

How It Helps: Ensures you only pay for the resources you need during peak times while scaling down
during low demand.

8) Utilize Free Tier Services: Take advantage of AWS Free Tier offerings for testing and
development.
How It Helps: Allows you to explore services at no cost up to certain usage limits, reducing initial
costs for new projects.

9) Regularly Review and Clean Up Resources: Conduct regular audits to identify and remove
unused or underutilized resources.

How It Helps: Eliminates unnecessary spending on resources that are no longer needed.
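
The sketch referenced in strategy 3 above: a small Lambda-style function using boto3 that stops
running instances tagged Environment=dev, meant to be triggered on a schedule (for example, by an
Amazon EventBridge rule). The tag and schedule are assumptions for the example.

import boto3

ec2 = boto3.client("ec2")

def stop_dev_instances(event, context):
    """Stop running instances tagged Environment=dev."""
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [i["InstanceId"]
                    for r in reservations
                    for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
        print(f"Stopped: {instance_ids}")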

WEEK-8

AUTOMATION AND INFRASTRUCTURE AS CODE

8.1: Introduction to AWS Cloud Foundations:

AWS Cloud Foundations is a set of fundamental concepts, best practices, and services that help
organizations establish a strong foundation for using Amazon Web Services (AWS) effectively and
securely. It covers the basics of cloud computing and how AWS services work together to meet
business needs.

Key Concepts:

1. Cloud Computing:
Definition: The delivery of computing resources (like servers, storage, databases, and applications)
over the internet (the "cloud") instead of on local servers or personal computers.

Benefits: Scalability, flexibility, cost savings, and global reach.

2. AWS Services:

Overview: AWS offers a wide range of services, including:

Compute (e.g., EC2 for running virtual servers)

Storage (e.g., S3 for storing data)

Databases (e.g., RDS for managed relational databases)

Networking (e.g., VPC for creating private networks)

Importance: These services enable businesses to build and deploy applications quickly and efficiently.

3. Security and Compliance:

What It Is: Implementing security measures to protect data and applications in the cloud.

Key Features: Identity and Access Management (IAM) for controlling user access, encryption for data
protection, and compliance with regulations.

4. Cost Management:

What It Is: Understanding and managing costs associated with using AWS services.

Strategies: Use tools like AWS Cost Explorer to monitor spending and optimize resource usage.

8.2: Infrastructure Provisioning with Terraform:

Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define, provision, and
manage cloud infrastructure using a declarative configuration language. With Terraform, you can
automate the setup of infrastructure resources across various cloud providers, including AWS, Azure,
and Google Cloud.

Key Concepts:

1. Infrastructure as Code (IaC):

Definition: The practice of managing and provisioning computing infrastructure through code instead
of manual processes.
Benefits: Increases consistency, reduces human error, and allows for version control of infrastructure
configurations.

2. Terraform Configuration Files:

What It Is: Written in HashiCorp Configuration Language (HCL) or JSON, these files define the
desired state of your infrastructure. Example: A configuration file might specify the creation of an EC2
instance, VPC, or S3 bucket.

3. Providers:

What It Is: Plugins that allow Terraform to interact with various cloud providers and services (e.g.,
AWS, Azure, Google Cloud). Use Case: Each provider exposes resources specific to that platform,
allowing you to define infrastructure on that provider.

4. Modules:

What It Is: Reusable configurations that encapsulate a group of resources and can be shared across
projects. Use Case: Create a module for setting up a web server, which can be reused in multiple
environments.

WEEK-9

BACKUP, DISASTER RECOVERY

9.1: Implementing Backup and Restore Strategies:

Backup and restore strategies are essential practices for protecting data and ensuring business
continuity in case of data loss due to hardware failures, accidental deletions, or disasters. A good
strategy involves regularly creating copies of data (backups) and having a clear process to recover that
data (restoration) when needed.

Key Components of Backup and Restore Strategies:


1. Backup Types:

Full Backup:

What It Is: A complete copy of all selected data.

Use Case: Provides a comprehensive snapshot but takes longer and requires more storage.

Incremental Backup:

What It Is: Backs up only the data that has changed since the last backup.

Use Case: Faster and uses less storage compared to full backups but requires the last full backup and
all incremental backups for restoration.

Differential Backup:

What It Is: Backs up all data changed since the last full backup.

Use Case: Faster restoration than incremental backups but requires more storage than incremental
backups.

2. Backup Frequency:

What It Is: How often backups are created (e.g., hourly, daily, weekly).

Use Case: Determine the frequency based on how critical the data is and how often it changes.

3. Backup Storage:

On-Premises Storage: Local storage devices (e.g., external hard drives, NAS).

Cloud Storage: Services like Amazon S3, Google Cloud Storage, or Azure Blob Storage for remote
backups.

4. Backup Automation:

What It Is: Using tools or scripts to schedule and automate backups.

Benefits: Reduces human error, ensures backups are not forgotten, and provides consistent protection.
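
As a small automation sketch using boto3, an EBS snapshot can be created and tagged for later
housekeeping; the volume ID is a placeholder:

import boto3
import datetime

ec2 = boto3.client("ec2")

# Create a point-in-time snapshot of one EBS volume and tag it with
# the date so that old snapshots can be found and cleaned up later.
today = datetime.date.today().isoformat()
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description=f"Automated backup {today}",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "BackupDate", "Value": today}],
    }],
)
print(snapshot["SnapshotId"])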

Designing for Disaster Recovery:

1. Backup Strategy:

What It Is: Regularly backing up data and configurations to recover from data loss. Implementation:
Use automated backup solutions, ensuring backups are stored securely offsite or in the cloud.

2. Disaster Recovery Plan (DRP):


What It Is: A documented process outlining how to recover systems and data after a disaster.
Implementation: Develop a DRP that includes roles and responsibilities, recovery objectives (RTO and
RPO), and step-by-step recovery procedures.

3. Geographic Redundancy:

What It Is: Distributing resources across multiple locations to protect against regional disasters.
Implementation: Use multiple data centres or cloud regions to host replicas of critical applications and
data.

4. Regular Testing of DRP:

What It Is: Periodically simulating disaster scenarios to ensure the DRP works effectively.
Implementation: Conduct regular drills and tests to validate the recovery process, updating the plan
based on lessons learned.

WEEK-10

Cloud architecture implementation:

Overview:

For the final project, I designed and deployed a cloud infrastructure for a web application. The project
utilized several AWS services to ensure high availability, fault tolerance, and scalability.

Key Components:

1. Amazon EC2 (Elastic Compute Cloud): Used for deploying and managing virtual servers for the
application.
2. Amazon RDS (Relational Database Service): Implemented a highly available database with
automated backups and failover.
3. Amazon S3 (Simple Storage Service): Used for object storage to host static content and backup
data.
4. Amazon CloudFront: Set up for content delivery and caching, ensuring faster load times and
reduced latency.
5. AWS Lambda: Integrated serverless functions to handle specific workloads like data processing
and event-driven tasks.
6. Amazon Route 53: Managed domain names and DNS routing for the web application.

Security Measures:

- Utilized IAM (Identity and Access Management) to ensure proper access controls and roles for users
and services.

- Implemented AWS Security Groups and Network Access Control Lists (NACLs) to restrict inbound
and outbound traffic to my cloud resources.

- Integrated AWS WAF (Web Application Firewall) for protection against common web exploits.

Challenge: Balancing cost-efficiency while maintaining scalability.

Solution: Used AWS Auto Scaling to dynamically scale the infrastructure based on demand, ensuring
cost-effective use of resources without compromising performance.

CONCLUSION

In conclusion, my cloud virtual internship on AWS was a great experience. Over the past 10 weeks, I
learned a lot about how to work with cloud technology. I got hands-on experience setting up and
managing cloud resources, which helped me understand how AWS works. Working with my team
improved my communication and problem-solving skills. This internship strengthened my interest in
cloud computing and gave me useful skills for my future career. I’m thankful for the support from my
mentors and colleagues, and I’m excited to apply what I’ve learned as I move forward.
