Hands-on VPC
Amazon VPC, or Amazon Virtual Private Cloud, is a service that allows its users to launch
their virtual machines in a protected and isolated virtual environment that they define.
You have complete control over your VPC, from creation to customization and even deletion.
It is useful for organizations whose data is scattered and needs to be managed well. In
other words, VPC enables us to select the virtual address range of our private cloud, and we can
also define all the sub-constituents of the VPC, such as subnets, subnet masks, and Availability
Zones, on our own.
• We can place the necessary resources and manage access to those resources in the VPC,
a private area of Amazon that we control.
• A default VPC is generated when we register an AWS account, allowing us to manage the
virtual networking environment, including IP addresses, the construction of subnets,
route tables, and gateways.
What is Amazon VPC (Virtual Private Cloud)?
Amazon VPC can be thought of as a private cloud inside the cloud. It is a logical grouping
of servers in a specified network. The servers that you deploy in the Virtual Private
Cloud (VPC) are completely isolated from the other servers deployed in Amazon Web Services.
You have complete control over the IP addressing of the virtual machines, as well as the route
tables and gateways of the VPC. With the help of security groups and network access control
lists, you can further protect your application.
Amazon VPC (Virtual Private Cloud) Architecture
The basic architecture of a properly functioning VPC consists of many distinct services such
as Gateway, Load Balancer, Subnets, etc. Altogether, these resources are clubbed under a VPC
to create an isolated virtual environment. Along with these services, there are also security
checks on multiple levels.
The VPC is initially divided into subnets, which are connected with each other via route tables
along with a load balancer.
Amazon VPC (Virtual Private Cloud) Components
VPC
You can launch AWS resources into a defined virtual network using Amazon Virtual Private
Cloud (Amazon VPC). With the advantages of utilizing the scalable infrastructure of AWS, this
virtual network closely mimics a conventional network that you would operate in your own
data center. The largest address space you can define for a VPC is a /16 (65,536 IP addresses).
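As a quick illustration, here is a minimal boto3 sketch of creating a VPC with a /16 address space; the CIDR block and the Name tag value are placeholders, not values taken from the walkthrough later in this article.

```python
import boto3

# Assumes AWS credentials and a default region are already configured.
ec2 = boto3.client("ec2")

# Create a VPC with a /16 primary CIDR block (65,536 addresses).
response = ec2.create_vpc(
    CidrBlock="10.0.0.0/16",
    TagSpecifications=[{
        "ResourceType": "vpc",
        "Tags": [{"Key": "Name", "Value": "demo-vpc"}],  # placeholder name
    }],
)
vpc_id = response["Vpc"]["VpcId"]
print("Created VPC:", vpc_id)
```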
Subnets
To reduce traffic, subnets divide the big network into smaller, connected networks. A VPC
supports up to 200 user-defined subnets, each as large as a /16.
Route Tables
Route tables are mainly used to define the rules for routing traffic between the subnets.
Network Access Control Lists
Network Access Control Lists (NACL) for VPC serve as a firewall by managing both inbound
and outbound rules. There will be a default NACL for each VPC that cannot be deleted.
Internet Gateway (IGW)
The Internet Gateway (IGW) makes it possible to link the resources in the VPC to the
Internet.
Network Address Translation (NAT)
Network Address Translation (NAT) enables instances in a private subnet to connect to the
internet.
Amazon VPC (Virtual Private Cloud) Fundamentals
• If the subnet has internet access, then it is called Public Subnet.
• If the subnet doesn’t have internet access, then it is called Private Subnet.
• A subnet must reside entirely within one Availability Zone.
• Access between instances is managed by VPC security groups for both inbound and
outbound traffic (EC2-Classic security groups could only define inbound rules).
• We can specify subnet IP routing with the aid of the route table.
• If a server/instance in a private subnet needs to reach the internet, it must route
through a NAT device located in a public subnet.
Subnet
• A subnet is a smaller portion of the network that typically includes all the machines in
a certain area.
• We can add as many subnets as we need in one Availability Zone. Each subnet must
reside entirely within one Availability Zone.
• The public subnets will be attached to Internet Gateway which enables Internet access.
• The private subnets will not have internet access.
• Each and every subnet present in the VPC must be associated with a route table (see
the sketch after this list).
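As a rough boto3 sketch (with a placeholder VPC ID and illustrative CIDR blocks and Availability Zones), two subnets could be carved out of the VPC like this:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: replace with your VPC ID

# Each subnet must live entirely inside one Availability Zone.
public_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)
private_subnet = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)
print(public_subnet["Subnet"]["SubnetId"], private_subnet["Subnet"]["SubnetId"])
```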
Internet Gateway
• With the help of the IGW (Internet Gateway), the resources present in the VPC (e.g.,
EC2 instances) are able to access the Internet.
• One VPC can't have more than one IGW.
• An IGW cannot be detached from a VPC while resources are running in that particular
VPC (a sketch follows below).
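Under the same placeholder IDs as above, creating and attaching an internet gateway with boto3 might look like this sketch:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # placeholder: replace with your VPC ID

# A VPC can have at most one internet gateway attached to it.
igw = ec2.create_internet_gateway()
igw_id = igw["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
print("Attached IGW:", igw_id)
```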
Route Table
• A route table contains a set of rules, called routes, that determine where network
traffic is directed.
• A single VPC can have as many route tables as it requires.
• A route table can't be deleted while dependencies (such as subnet associations) are
attached to it (see the sketch below).
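A hedged boto3 sketch of making a subnet public by routing its internet-bound traffic to the internet gateway; the VPC, IGW, and subnet IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"        # placeholder VPC ID
igw_id = "igw-0123456789abcdef0"        # internet gateway attached to the VPC
subnet_id = "subnet-0123456789abcdef0"  # the subnet that should become public

# Create a route table and add a default route that sends
# internet-bound traffic (0.0.0.0/0) to the internet gateway.
rt = ec2.create_route_table(VpcId=vpc_id)
rt_id = rt["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Associate the route table with the subnet, which makes that subnet public.
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)
```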
NACL Network Access Control Lists
• The NACL security layer for VPC serves as a firewall to manage traffic entering and
leaving one or more subnets.
• The NACL for the default VPC is active and connected to the default subnets.
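For illustration only, a single inbound NACL rule could be added with boto3 roughly as follows (the network ACL ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")
nacl_id = "acl-0123456789abcdef0"  # placeholder: replace with your network ACL ID

# Allow inbound HTTPS (TCP port 443) from anywhere; NACL rules are
# evaluated in order of rule number, lowest first.
ec2.create_network_acl_entry(
    NetworkAclId=nacl_id,
    RuleNumber=100,
    Protocol="6",            # 6 = TCP
    RuleAction="allow",
    Egress=False,            # False = inbound rule
    CidrBlock="0.0.0.0/0",
    PortRange={"From": 443, "To": 443},
)
```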
Classless Inter-Domain Routing (CIDR)
• Classless Inter-Domain Routing (CIDR) is a technique for allocating IP addresses and
routing IP traffic; the prefix length ranges from 0 to 32.
• When setting up a VPC, we must specify a range of IPv4 addresses in CIDR notation
(for example, 10.0.0.0/16); this serves as the primary CIDR block for our VPC. A quick
way to sanity-check CIDR block sizes is shown below.
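As a small aside, Python's standard ipaddress module can be used to check how many addresses a CIDR block contains and whether one block fits inside another:

```python
import ipaddress

# A /16 network contains 65,536 addresses; a /24 contains 256.
vpc_block = ipaddress.ip_network("10.0.0.0/16")
subnet_block = ipaddress.ip_network("10.0.1.0/24")

print(vpc_block.num_addresses)            # 65536
print(subnet_block.num_addresses)         # 256
print(subnet_block.subnet_of(vpc_block))  # True: the subnet fits inside the VPC block
```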
RFC1918 Address (Private address)
• An enterprise organization assigns an internal host an IP address known as an
RFC 1918 address. These IP addresses are used in private networks and cannot be
reached from the internet.
The following networks make up the RFC 1918 (private) address space:
• 10.0.0.0 - 10.255.255.255 (10/8 prefix)
• 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
• 192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
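The same ipaddress module used earlier can confirm whether an address falls in these private ranges:

```python
import ipaddress

# is_private is True for the RFC 1918 ranges listed above
# (and for a few other special-purpose ranges such as loopback).
for addr in ["10.1.2.3", "172.20.0.5", "192.168.1.10", "8.8.8.8"]:
    print(addr, ipaddress.ip_address(addr).is_private)
# 10.1.2.3 True
# 172.20.0.5 True
# 192.168.1.10 True
# 8.8.8.8 False
```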
Amazon VPC Network Address Translation (NAT)
• RFC 1918 addresses are a workable solution to IPv4 address exhaustion issues thanks
to Network Address Translation (NAT).
• An internal host can communicate with an internet server with the help of NAT.
• The internet and a private network are separated by a NAT device.
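A sketch of provisioning a NAT gateway with boto3, assuming the public subnet and the private subnet's route table already exist (all IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
public_subnet_id = "subnet-0123456789abcdef0"  # the NAT gateway must sit in a public subnet
private_rt_id = "rtb-0123456789abcdef0"        # route table of the private subnet

# Allocate an Elastic IP and create a NAT gateway in the public subnet.
eip = ec2.allocate_address(Domain="vpc")
nat = ec2.create_nat_gateway(SubnetId=public_subnet_id, AllocationId=eip["AllocationId"])
nat_id = nat["NatGateway"]["NatGatewayId"]

# Wait until the NAT gateway is available, then send the private subnet's
# internet-bound traffic through it.
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])
ec2.create_route(RouteTableId=private_rt_id, DestinationCidrBlock="0.0.0.0/0", NatGatewayId=nat_id)
```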
Use cases of Amazon VPC
• Using VPC, you can host a public-facing website, a single-tier basic web application,
or just a plain old website.
• The connectivity between our web servers, application servers, and database can be
limited by VPC with the help of VPC peering (see the sketch after this list).
• By managing the inbound and outbound connections, we can restrict the incoming and
outgoing traffic of our application.
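As a rough illustration of VPC peering with boto3 (both VPC IDs are placeholders, and both VPCs are assumed to be in the same account and region):

```python
import boto3

ec2 = boto3.client("ec2")

# Request a peering connection between two VPCs.
peering = ec2.create_vpc_peering_connection(
    VpcId="vpc-0aaaaaaaaaaaaaaaa",      # requester VPC (e.g., web/application tier)
    PeerVpcId="vpc-0bbbbbbbbbbbbbbbb",  # accepter VPC (e.g., database tier)
)
pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

# The owner of the peer VPC must accept the request.
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Each side's route tables then need routes for the other VPC's CIDR block that
# point at pcx_id; security groups and NACLs still control which traffic is allowed.
```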
Amazon VPC (Virtual Private Cloud) Working
Follow the steps mentioned below to configure a Virtual Private Cloud (VPC).
Step 1: Log in to the AWS Console and navigate to the VPC service as shown below.
Step 2: After navigating to the VPC service, click on Create VPC.
Step 3: Configure all the details required to create the VPC as shown in the image below. Some
of the required settings are as follows:
• Name of the network.
• IPv4 CIDR.
• Tags of the VPC. After that, click on Create VPC.
Step 4: The Virtual Private Cloud is created successfully with the required settings.
Step 5: Check the VPC dashboard to see whether the created VPC (GFG-VPC) is available to use,
as shown in the image below.
Hands-on S3
Here we will enter a bucket name that should be globally unique. Let's see what will happen if
we provide the name my-s3-test-bucket. Now we will click on Create Bucket.
Here we can see that bucket with the same name already exists.
Now we will enter a new name for our bucket that is globally unique. After that, we will choose
a Region where our bucket will reside. I have chosen it to be US East (Ohio). You can choose
a region that is near to you. Note that the S3 console is Global.
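For reference, the same bucket creation can be done with boto3; the bucket name below is just an example and must be globally unique, and us-east-2 corresponds to US East (Ohio):

```python
import boto3

s3 = boto3.client("s3")

# Bucket names must be globally unique; this one is only a placeholder.
bucket_name = "my-globally-unique-bucket-name-12345"

# For any region other than us-east-1, a LocationConstraint is required.
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},  # US East (Ohio)
)
```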
Here we will block all public access to our S3 bucket, which is the default setting. This would
be unchecked if we required public access to our bucket, as in the case of hosting a website,
which we will cover in our next tutorial.
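The equivalent block-public-access configuration can be applied programmatically; this sketch reuses the placeholder bucket name from above:

```python
import boto3

s3 = boto3.client("s3")

# Block all public access (the console default) for the bucket.
s3.put_public_access_block(
    Bucket="my-globally-unique-bucket-name-12345",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```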
Currently, we will keep the Bucket Versioning disabled. We will play with it in the later
sections of this tutorial. We can also add tags to our bucket. We will leave the Server-side
encryption disabled for now and will see it in the later sections. Now we will click on Create
bucket.
We will leave the remaining settings to default and then click on Upload.
Here we can see the details of the object uploaded which include Properties, Permissions,
and Versions. Note that we can see an Object URL over here. If we copy this URL and paste
it into the browser let's see what happens.
The access to the file is denied and the reason is that we blocked public access while creating
our S3 bucket. Now let's see what happens if we click on the Open button present in the top
right corner.
Here we can see that our file is accessible now. Note that this is a pre-signed URL to access
this object. We can view the difference between both URLs.
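For completeness, a pre-signed URL like the one the console generates can also be produced with boto3; the bucket name is the placeholder used above and aws.png is the object from this walkthrough:

```python
import boto3

s3 = boto3.client("s3")

# Generate a pre-signed URL that grants temporary access to the object.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-globally-unique-bucket-name-12345", "Key": "aws.png"},
    ExpiresIn=300,  # the link expires after 5 minutes
)
print(url)
```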
Here, to delete an object permanently, we will explicitly select an object and then click
on Delete. To confirm the deletion, we will write permanently delete and then click on Delete
objects.
Now we will explore one more thing. After uploading the aws.png object again and disabling
the Show versions option we will select the object and then delete it.
Here we can see that deleting specified objects adds delete markers to them. Now we will
type delete and then click on Delete objects.
Here we will enable Show versions again to view the delete marker. Note that a delete marker
is a placeholder for a versioned object named in a simple DELETE request. When we delete
an object with versioning enabled the object is not deleted. The delete marker makes Amazon
S3 behave as if the object has been deleted. To restore the object here, we can delete the delete
marker.
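Restoring the object programmatically means finding the delete marker among the object's versions and removing it; a hedged boto3 sketch (with the placeholder bucket name used earlier) could look like this:

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-globally-unique-bucket-name-12345", "aws.png"  # placeholders

# Find the current delete marker for the object and remove it,
# which makes the previous version visible again.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)
for marker in versions.get("DeleteMarkers", []):
    if marker["Key"] == key and marker["IsLatest"]:
        s3.delete_object(Bucket=bucket, Key=key, VersionId=marker["VersionId"])
```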
Step-4: S3 Bucket Encryption
Now we will explore encryption in S3 buckets. When we get into the details of an object,
we can see that Default encryption is disabled; this setting lives at the bucket level. When it
is enabled, new objects uploaded to the S3 bucket are encrypted by default.
When we get into the Server-side encryption settings of the object, we can specify an
Encryption key. It can be either an Amazon S3 key (SSE-S3), which is an encryption key created,
managed, and used for us by Amazon S3, or an AWS Key Management Service key (SSE-
KMS), which is protected by AWS Key Management Service. Note that to upload an object
with SSE-C, which is a customer-provided encryption key, we need to use the AWS CLI, an AWS
SDK, or the Amazon S3 REST API (see the sketch below).
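A minimal boto3 sketch of an SSE-C upload, assuming the placeholder bucket from earlier and a local file named aws.png:

```python
import os
import boto3

s3 = boto3.client("s3")

# SSE-C: we supply our own 256-bit key; S3 uses it to encrypt the object
# but does not store the key. boto3 adds the required key-MD5 header for us.
customer_key = os.urandom(32)

with open("aws.png", "rb") as f:
    s3.put_object(
        Bucket="my-globally-unique-bucket-name-12345",  # placeholder bucket name
        Key="aws.png",
        Body=f,
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )
# The same key must be supplied again on every GET for this object.
```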
Now when we go to the bucket-level encryption, it is disabled by default. We will enable it
so that new objects stored in the bucket are automatically encrypted. Here, too, we can encrypt
with either SSE-S3 or SSE-KMS. If we want to use SSE-C, we have to use the AWS CLI, an AWS
SDK, or the Amazon S3 REST API, as stated above.
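Enabling default bucket encryption with SSE-S3 can also be scripted; again the bucket name is a placeholder:

```python
import boto3

s3 = boto3.client("s3")

# Enable default encryption (SSE-S3) at the bucket level so that every
# new object is encrypted automatically on upload.
s3.put_bucket_encryption(
    Bucket="my-globally-unique-bucket-name-12345",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```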
Now when we upload the object again to the bucket, we can see that default encryption is
enabled.
This will take us to a new screen where we can see the Bucket ARN, which we will use when we
generate our policy through the Policy generator. Now we will click on Policy generator.
This will lead us to a new screen where we can easily define a policy for our bucket through
a user-friendly UI. Here we can define policies that control access to AWS products and
resources. First, we will select the Policy Type, which in our case is S3 Bucket Policy. We can
also create different types of policies, like an IAM Policy, an S3 Bucket Policy, an SNS Topic
Policy, a VPC Endpoint Policy, and an SQS Policy. Then we will add a statement, which is a
formal description of a single permission. Here, for Effect we will select Deny. In Principal we
will write *. In Actions we will select PutObject, and in Amazon Resource Name we will
enter the value we found on the previous screen with the name Bucket ARN. Note that we will
add a /* after the Bucket ARN value. After that, we will click on Add Conditions.
Now we will define under what condition we don't want an object to be uploaded.
We will set the Condition to Null, the Key to s3:x-amz-server-side-
encryption, and the Value to true. This means we are denying the upload of any object for which
the s3:x-amz-server-side-encryption header is not set at all. Now we click on Add Condition.
Now we will click on Add Statement. Then we will add another statement in which all the
other settings remain the same except for the condition. For Condition we will
select StringNotEquals, for Key we will select s3:x-amz-server-side-encryption, and
the Value will be set to AES256. Now again we will click on Add Condition. This means
that if the value of the s3:x-amz-server-side-encryption header is not equal to AES256, the
object will not be allowed to be uploaded into our bucket.
Again we will click on Add Statement.
Now here we can see both our statements listed. Now we will click on Generate Policy.
It returns a JSON document which we will use for our S3 Bucket Policy. Note how easy it is
to create policies with the Policy generator.
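The generated policy should look roughly like the dictionary in the sketch below; applying it with put_bucket_policy is equivalent to pasting the JSON into the console's Bucket Policy editor. The bucket name (and therefore the ARN) is a placeholder for your own Bucket ARN:

```python
import json
import boto3

bucket = "my-globally-unique-bucket-name-12345"  # placeholder bucket name
bucket_arn = f"arn:aws:s3:::{bucket}"

# Deny unencrypted uploads: reject PutObject when the encryption header is
# missing (Null condition) or not set to AES256 (StringNotEquals condition).
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"{bucket_arn}/*",
            "Condition": {"Null": {"s3:x-amz-server-side-encryption": "true"}},
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"{bucket_arn}/*",
            "Condition": {"StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}},
        },
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```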
Now we will copy the above JSON document and paste it into our Bucket Policy and after that,
we will click on Save changes.
Testing AWS Bucket Policy
Now we will upload aws.png again into our S3 bucket but this time without specifying any
encryption key. Let's see what happens.
We will receive an error message, Upload Failed, and we can clearly see that access is
denied, since our object was not encrypted with AES256 (i.e., SSE-S3) when we uploaded it.
Now we will upload the object again but this time we will specify an encryption key by
overriding default encryption bucket settings and then select SSE-S3 and click on Upload.
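Programmatically, the same successful upload simply requests SSE-S3 explicitly, which sets the x-amz-server-side-encryption header to AES256 and satisfies the bucket policy (placeholder bucket name again):

```python
import boto3

s3 = boto3.client("s3")

# Upload the object with SSE-S3 explicitly requested so that the
# x-amz-server-side-encryption header is AES256, as the policy requires.
with open("aws.png", "rb") as f:
    s3.put_object(
        Bucket="my-globally-unique-bucket-name-12345",  # placeholder bucket name
        Key="aws.png",
        Body=f,
        ServerSideEncryption="AES256",
    )
```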
Here we can see that the object has been uploaded successfully, as it satisfies the conditions
set in our bucket policy.