AWS Security Specialist Notes
$ curl 169.254.169.254/latest/meta-data/iam/security-credentials/my_Instance_IAM_role
This will return the access key, secret access key, temporary token and token expiry. Hence this needs to be blocked (for non-root users) using iptables.
useradd developer
sudo su -
curl 169.254.169.254/latest/meta-data/iam/security-credentials/first-iam-role
IPTABLES:
iptables --append OUTPUT --proto tcp --destination 169.254.169.254 --match owner ! --uid-owner root --jump REJECT
Policy Variable
Policy variables allow us to create a more generalized, dynamic policy that can take some values at run time. For
ex: "arn:aws:iam::888913816489:user/${aws:username}"
The above resource ARN resolves based on the IAM username making the request, hence this policy can be attached to many different users and we won't have to mention a unique ARN every time.
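For example, a minimal sketch of a policy using this variable (the Sid and actions are illustrative assumptions; the account ID is the one from the example above) that lets each user manage only their own access keys:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowManageOwnAccessKeys",
      "Effect": "Allow",
      "Action": [
        "iam:ListAccessKeys",
        "iam:CreateAccessKey",
        "iam:DeleteAccessKey"
      ],
      "Resource": "arn:aws:iam::888913816489:user/${aws:username}"
    }
  ]
}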
NotPrincipal: When we use NotPrincipal with effect "Deny", it explicitly denies the actions to all principals except the users/principals mentioned in NotPrincipal.
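A hedged sketch of that pattern as a bucket policy (bucket name and principal ARNs are hypothetical): everyone except the listed principals is explicitly denied.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptListedPrincipals",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::888913816489:root",
          "arn:aws:iam::888913816489:user/admin-user"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::example-secure-bucket",
        "arn:aws:s3:::example-secure-bucket/*"
      ]
    }
  ]
}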
Lambda + KMS
Securing environment
variables https://docs.aws.amazon.com/lambda/latest/dg/configuration-envvars.html
Lambda encrypts environment variables with a key that it creates in your account (an AWS
managed customer master key (CMK)). Use of this key is free. You can also choose to
provide your own key for Lambda to use instead of the default key.
When you provide the key, only users in your account with access to the key can view or
manage environment variables on the function. Your organization might also have
internal or external requirements to manage keys that are used for encryption and to
control when they're rotated.
- kms:Decrypt – To view and manage environment variables that are encrypted with a
customer managed CMK.
You can get these permissions from your user account or from a key's resource-based
permissions policy. ListAliases is provided by the managed policies for Lambda. Key
policies grant the remaining permissions to users in the Key users group.
Users without Decrypt permissions can still manage functions, but they can't view
environment variables or manage them in the Lambda console. To prevent a user from
viewing environment variables, add a statement to the user's permissions that denies
access to the default key, a customer managed key, or all keys.
AWS Cloudwatch logs service has the capability to store custom logs and process
metrics generated from your application instances. Here are some example use cases for
custom logs and metrics
1. Web server (Nginx, Apache, etc.) access or error logs can be pushed to CloudWatch Logs, which acts as central log management for your applications running on AWS
2. Custom application logs (java, python, etc) can be pushed to cloudwatch and
you can setup custom dashboards and alerts based on log patterns.
3. Ec2 instance metrics/custom system metrics/ app metrics can be pushed to
cloudwatch.
https://devopscube.com/how-to-setup-and-push-serverapplication-logs-to-aws-cloudwatch/
Ephemeral ports
The example network ACL in the preceding section uses an ephemeral port range of
32768-65535. However, you might want to use a different range for your network ACLs
depending on the type of client that you're using or with which you're communicating.
The client that initiates the request chooses the ephemeral port range. The range varies
depending on the client's operating system.
Many Linux kernels (including the Amazon Linux kernel) use ports 32768-61000.
Requests originating from Elastic Load Balancing use ports 1024-65535.
Windows operating systems through Windows Server 2003 use ports 1025-5000.
Windows Server 2008 and later versions use ports 49152-65535.
A NAT gateway uses ports 1024-65535.
AWS Lambda functions use ports 1024-65535.
For example, if a request comes into a web server in your VPC from a Windows 10 client
on the internet, your network ACL must have an outbound rule to enable traffic destined
for ports 49152-65535.
If an instance in your VPC is the client initiating a request, your network ACL must have
an inbound rule to enable traffic destined for the ephemeral ports specific to the type of
instance (Amazon Linux, Windows Server 2008, and so on).
In practice, to cover the different types of clients that might initiate traffic to public-
facing instances in your VPC, you can open ephemeral ports 1024-65535. However, you
can also add rules to the ACL to deny traffic on any malicious ports within that range.
Ensure that you place the deny rules earlier in the table than the allow rules that open
the wide range of ephemeral ports.
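As a rough illustration (the ACL ID, rule numbers and the "malicious" port are assumptions), the deny entry gets a lower rule number so it is evaluated before the broad allow:
# deny a known-bad port first (lower rule number = evaluated first)
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 90 \
  --protocol tcp \
  --port-range From=3389,To=3389 \
  --egress \
  --cidr-block 0.0.0.0/0 \
  --rule-action deny
# then allow the wide ephemeral range for return traffic
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0123456789abcdef0 \
  --rule-number 100 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --egress \
  --cidr-block 0.0.0.0/0 \
  --rule-action allow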
S3 bucket ownership
If an object is uploaded from a different account, the uploader is the owner of the object and the bucket owner will not have access to it.
To give the bucket owner access over an object uploaded from a different account, the uploader needs to make sure the object ACL grants the bucket owner full control (the bucket-owner-full-control canned ACL), so that the bucket owner gets full control of the uploaded object as well.
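A minimal sketch of such an upload from the non-owner account (bucket and file names are hypothetical):
aws s3 cp report.pdf s3://owner-account-bucket/report.pdf --acl bucket-owner-full-control
# equivalent low-level call
aws s3api put-object --bucket owner-account-bucket --key report.pdf --body report.pdf --acl bucket-owner-full-control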
A role/user should not have permissions to view its own policy, since if the creds get compromised the attacker will use that permission to see which services it has access to and will attack those.
Pacu permissions enumeration: it's like Metasploit for the AWS cloud
https://www.youtube.com/watch?v=bDyBAFX4V7M. : BHack 2020 - Step by step AWS
Cloud Hacking - EN
SSM
SSM Agent is preinstalled, by default, on Amazon Machine Images (AMIs) for macOS.
You don't need to install SSM Agent on an Amazon Elastic Compute Cloud (Amazon
EC2) instance for macOS unless you have uninstalled it.
https://docs.aws.amazon.com/systems-manager/latest/userguide/install-ssm-agent-macos.html
SSM Agent also comes pre-installed on most Amazon Linux AMIs. Just check its status with:
sudo systemctl status amazon-ssm-agent
https://docs.aws.amazon.com/systems-manager/latest/userguide/ssm-agent-status-and-restart.html
We can run shell commands on the EC2 instances on which the SSM agent is running:
Go to SSM --> select 'Run Command' from the side panel and choose the 'Run Command' button --> choose platform 'Linux' from the 'command document' filter and then choose "AWS-RunShellScript".
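The CLI equivalent is roughly the following (the instance ID is a placeholder; the instance must be SSM-managed):
# run a quick health check on one instance
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
  --parameters 'commands=["uptime","df -h"]'
# fetch the output once it finishes (command ID comes from the previous output)
aws ssm get-command-invocation \
  --command-id <command-id> \
  --instance-id i-0123456789abcdef0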
https://policysim.aws.amazon.com/
Use this link to select users/groups/roles from the left-side panel, and then choose the service on which you want to test access on the right side of the screen. We can search for the service and, upon selecting it, it shows all the possible API calls that can be made. Select the API calls that you want to test and click on "Run Simulation". It will show whether you get access denied or not.
After the issue behind a finding is fixed, the finding moves under 'Resolved'.
AWS WAF
We can use WAF to safeguard resources such as API Gateway. We have to create a Web ACL, and in it we select the resource for which we are creating this ACL. Then we choose a RULE, which either comes preconfigured or we can create our own rule. Now this rule protects the server, so if a DDoS starts, the rule returns a Forbidden response to the attacker.
Firewalls usually operate at layer 3 or layer 4 of the OSI model, i.e. the network and transport layers. Because of this they don't read HTTP packets, leaving applications running at layer 7 vulnerable to attacks such as SQL injection and cross-site scripting. These are layer 7 attacks, hence something is needed to safeguard against them; we do this using rules, i.e. WAF.
Rules are created against HTTP requests since we want protection against layer 7 requests.
Multiple rules can be combined with logic such as OR, AND, etc. We can have conditions such as: "if" traffic is from "China" AND "if" the IP belongs to the "blacklist", then "block" the traffic.
There are preconfigured "AWS Managed Rules" which we can directly use in WAF, such as 'SQL Injection Protection'. This allows us to easily set up WAF without creating our own rules.
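A rule entry referencing such a managed rule group looks roughly like this inside a WAFv2 web ACL (rule name, priority and metric name are assumptions):
{
  "Name": "SQLiProtection",
  "Priority": 1,
  "Statement": {
    "ManagedRuleGroupStatement": {
      "VendorName": "AWS",
      "Name": "AWSManagedRulesSQLiRuleSet"
    }
  },
  "OverrideAction": { "None": {} },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "SQLiProtection"
  }
}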
Captcha: We can add a "Captcha" action which asks the user accessing the ALB (or some other resource instead of the ALB) to solve the captcha first.
We can also reply with a custom response from a rule, which is shown to whoever is trying to access the URL.
What AWS tools should you use to prepare in advance for incident handling?
- How do you monitor for key file changes at OS and application level?
- How will you be notified in a timely manner when something anomalous occurs?
Use the PDF below for the security course materials and resources:
Table+of+Resources.pdf
DNS zone walking basically enables you to fetch the DNS record information of a domain. It retrieves the information from the DNS zone and gives info such as IPs, the SOA record, etc.
Run Command: Allows us to run commands on EC2 instances. This also helps if some command needs to be run on 100+ instances (e.g. an update); Systems Manager's Run Command feature will run the commands on all remote EC2 instances that have the SSM agent.
We can specify on which instances we want to run the command
We can choose to run a shell script on the specified instances
Compliance: It shows whether instances with the SSM agent have a compliance issue such as a missing patch or a missing new package, and shows all the available updates they can be updated to. It marks such an instance as non-compliant under this feature.
Deploying SSM Agent: The SSM agent does not have permissions on the instance by default, hence it needs to be associated with a role which the agent uses to make changes to / fetch data from the instance.
It can be installed on EC2 and on-prem servers as well
You can use EC2 Instance Connect to connect to your instances using the Amazon EC2
console, the EC2 Instance Connect CLI, or the SSH client of your choice.
When you connect to an instance using EC2 Instance Connect, the Instance Connect API
pushes a one-time-use SSH public key to the instance metadata where it remains for 60
seconds. An IAM policy attached to your IAM user authorizes your IAM user to push the
public key to the instance metadata. The SSH daemon uses
AuthorizedKeysCommand and AuthorizedKeysCommandUser , which are
configured when Instance Connect is installed, to look up the public key from the
instance metadata for authentication, and connects you to the instance.
EC2 Instance Connect is an offering from the EC2 service and doesn't require any extra setup to connect to EC2. Just have a public IP and port 22 open, then click 'Connect' and this opens up an SSH connection with our EC2.
'Session Manager' is an offering from the Systems Manager service and hence requires a few steps: the SSM agent should be installed and an IAM role should be associated with the EC2. Also, the Session Manager 'quick setup' should be run to set up the service. The advantage of using the Session Manager option is that you don't need to open port 22 on the EC2. Session Manager connects to the EC2's SSM agent internally, hence it is a preferred way to connect to private EC2 instances without exposing them.
We can use EC2 Instance Connect from the EC2 service to SSH into EC2 directly (rather than using PuTTY or something similar).
Session Manager also has the ability to let you get a shell on your EC2 instance from the Systems Manager console.
Comparing the two: EC2 Instance Connect does not require an IAM role on the instance, but it does need the security group to allow SSH and a public IP for access over the internet. Session Manager can let you into the instance without a public IP or port 22, but it needs the SSM agent running and an IAM role attached to the EC2.
So basically we can use Session Manager to get into EC2 instances inside a private subnet. Normally a private EC2 instance would require a bastion host to SSH into it, but it's easier and cheaper to use Session Manager; just make sure the SSM agent is installed and running on these EC2 instances, and remember Session Manager requires an IAM role on the EC2.
Patch Manager
It automates the patching process for both security-related and other types of patches. It can identify/scan for the required patches and apply them for us.
Maintenance window: Provides a way to schedule the patching of systems. A cron/rate expression helps deploy patches during the lowest-traffic window.
SSM Automation
We can automate a lot of tasks using this, such as stopping instances, closing open SGs, etc.
These automations also support 'approval', which uses SNS to send an email to the subscriber. Only the IAM user/role mentioned as the approver of the automation is allowed to approve; no other user can approve the automation request.
SSM Inventory
1. SSM provides a way to look into EC2 and on-premises configurations of the
resource
2. It captures Application names with versions, Files, Network Configurations,
instance details, windows registry and others
3. We can query our inventory to find which specific application is stored on which instance. This helps to centrally check whether a particular application is present on all our EC2 instances or on-prem servers, along with version info.
4. We can send this info further to S3 using resource data sync and then query it using Athena.
5. We have to set up the inventory first. We can choose which instances to include, what to include and what not to include in the inventory, and also the frequency of the sync. We can also select S3 sync to push the data to S3.
AWS EventBridge
EventBridge delivers a stream of real-time data from event sources to targets. Ex: EC2 (EC2_Terminate) --> EventBridge --> SNS (alert sent regarding the terminated EC2).
Further, we can schedule these alerts by using EventBridge with "Schedule" as the source, so the targets are triggered at a specific time/schedule.
_____________________________________________________________________________________________
______________________________
Athena
It is used to run SQL-like queries on data that is present inside S3 buckets.
It can be used to query large datasets, for example CloudWatch or CloudTrail logs; we can query such logs directly using Athena.
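A hedged sketch of running such a query from the CLI, assuming a cloudtrail_logs table has already been created over the CloudTrail bucket (database, table and output location are hypothetical):
aws athena start-query-execution \
  --query-string "SELECT eventname, useridentity.arn, eventtime FROM cloudtrail_logs WHERE errorcode = 'AccessDenied' LIMIT 10" \
  --query-execution-context Database=security_logs \
  --result-configuration OutputLocation=s3://athena-query-results-bucket/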
_____________________________________________________________________________________________
_____________________________________________________________________________________________
____________________________________________________________
Trusted Advisor
It analyses the AWS environment and provides best-practice recommendations in 5 major categories:
Cost Optimization: Recommendations that allow us to save money, such as "Idle Load Balancers".
Performance: Recommendations that help improve performance, such as "EBS volume causing slowdown".
Security: All security-related advisories.
Fault Tolerance: Recommendations that help make resources more fault tolerant. Ex: checks whether all EC2 instances are in different AZs so that if one AZ goes down the other instances are not affected.
Service Limits: Checks service limits and lets us know if any limit is breached or about to be breached.
Each category has a number of checks; for example, security has many more such as "IAM Password Policy" and "CloudTrail Logging". Not all are enabled by default; to enable these other checks you have to upgrade your support plan. After upgrading the support plan, you get access to all the Trusted Advisor checks.
_____________________________________________________________________________________________
_____________________________________________________________________________________________
____________________________________________________________
CloudTrail
3 types of events are logged if selected:
1. Management Events: Normal (control plane) events performed on our AWS resources.
2. Data Events: Object-level logging such as S3 PutObject, GetObject, DeleteObject. We can select whether we want only read events or write events. These events can be seen at the S3 bucket or CloudWatch log group level. (These events won't show up on the CloudTrail dashboard, so they can only be seen in S3 or the CW log group, or the trail data can be sent to Splunk and queried there.) The preferred way is using CloudWatch log groups.
3. Insights Events: Identify unusual errors, activity or user behaviour in your account. Ex: a spike in EC2 instances being launched.
CT Log File Validation: This allows us to confirm whether the log files delivered to the S3 bucket have been tampered with. Validation lets us know if there has been a change in a file or not; it does this using RSA signing and SHA-256 hashing.
The validation option comes under the advanced settings of a trail. To check manually, you can download the zipped trail files and compute their SHA-256 hashes to compare against the digest files.
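The CLI also has a built-in check for this; a rough sketch (trail ARN and start time are placeholders, and log file validation must already be enabled on the trail):
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:us-east-1:111122223333:trail/management-trail \
  --start-time 2023-01-01T00:00:00Z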
Amazon Simple Storage Service (Amazon S3) object-level API activity (for example,
GetObject, DeleteObject, and PutObject API operations)
AWS Lambda function invocation activity (for example, InvokeFunction API
operations)
Amazon DynamoDB object-level API activity on tables (for example, PutItem,
DeleteItem, and UpdateItem API operations)
For instructions to activate data event logging, see Logging data events for trails.
For instructions to view data events, see Getting and viewing your CloudTrail log files.
Note: Additional charges apply for logging data events. For more information, see AWS
CloudTrail pricing.
AWS Macie
AWS Macie uses machine learning to search the S3 buckets for PII data, database backups, SSL private keys and various other sensitive data, and creates alerts about the same.
It also checks whether buckets are publicly exposed or publicly writable.
It has 'findings' like GuardDuty, such as 'SensitiveData:S3Object/Credentials'; this finding comes up if Macie finds credentials in an S3 bucket.
So finding sensitive data is its primary function, but it also checks for open S3 buckets, encryption, etc.
Custom Data Identifier: We can further create custom regular expressions (regex) to match a certain type of data present in S3, and if matching data is present, that finding type comes up. So scanning for sensitive data does not stop with the preconfigured findings; we can create our own regex and get custom findings.
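A rough CLI sketch of creating such a custom data identifier (the name and regex are made-up examples for an internal employee-ID format):
aws macie2 create-custom-data-identifier \
  --name internal-employee-id \
  --regex "EMP-[0-9]{6}" \
  --description "Matches internal employee IDs like EMP-123456"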
Scanning jobs need to be scheduled using a 'scheduled job' for repetitive scanning, or you can go with a 'one-time job' for just a one-time scan. You can check the running job inside the 'Jobs' section.
These findings can be stored for a long time but that needs configuration.
_____________________________________________________________________________________________
______________________________
S3 Event Notification
S3 event notifications allow us to receive a notification when certain events happen in your bucket.
Event Types: These are the events for which you receive alerts; for example, 'all object create events' alerts on Put, Post, Copy and multipart upload.
The event notification can then trigger or send data to different destinations:
lambda
SQS
SNS
So basically we can get an SNS alert if someone performs some specific tasks on the S3 bucket.
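A minimal sketch of wiring an SNS topic to object-create events (bucket name and topic ARN are hypothetical; the topic policy must also allow S3 to publish):
aws s3api put-bucket-notification-configuration \
  --bucket example-audit-bucket \
  --notification-configuration '{"TopicConfigurations":[{"TopicArn":"arn:aws:sns:us-east-1:111122223333:s3-object-created","Events":["s3:ObjectCreated:*"]}]}'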
_____________________________________________________________________________________________
______________________________
VPC FlowLogs
A VPC flow log is like a visitors' register: it keeps a log of who visited the AWS environment. VPC flow logs basically capture traffic at the network interface level.
VPC Flow logs is a feature that enables you to capture information about the IP traffic
going to and from 'network interfaces' in your VPC.
VPC Flow logs capture:
Traffic information about who is visiting the resource (e.g. EC2). Ex: if IP 192.12.3.3 is visiting my EC2, it records this detail about the IP, i.e. 192.12.3.3 ----> EC2 instance
Data on whether the resource connected to any outbound endpoint. Ex: EC2 ----> 193.13.12.2
A VPC flow log dashboard helps us get an idea of where, or from which country, the traffic is coming.
All the VPC flow log information is stored in CloudWatch log groups, which get created automatically around 15 mins after VPC flow logs are enabled.
Destination of VPC Flow Logs can be :
1. CloudWatch Logs Group :By default, once setup, log streams are stored in log
groups
2. S3 bucket : We can optionally push these logs into S3
3. Send to Kinesis Firehose same account
4. Send to Kinesis Firehose Different account
VPC flow logs capture interface-level logs and are not real time. We can choose to capture interface-level flow logs for a single instance, but it's better to log the complete VPC.
To check: what is the advantage of sending the logs directly to a destination vs first collecting them in a CW log group and then sending, and which is more costly.
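For reference, a rough sketch of enabling flow logs for a whole VPC with CloudWatch Logs as the destination (VPC ID, log group name and role ARN are placeholders):
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name /vpc/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/vpc-flow-logs-role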
_____________________________________________________________________________________________
______________________________
deliver data into Splunk HEC, but we can also choose to send the data into Kinesis Analytics for analysis if we want.
Data sent from different sources ---> goes into a Kinesis stream for storage ----> the Kinesis stream pushes it into Firehose for processing and delivery ---> Firehose delivers the data to Splunk HEC, or to S3 buckets for backup, or to Kinesis Analytics
Multiple sensors could be sending continuous data; this data can be stored and processed using Kinesis.
This requires 2 layers:
1. Storage Layer
2. Processing Layer
3 entities are involved in this process:
Producers: data producers like sensors
Stream store: where the data is going to be stored, like Kinesis
Consumers: these run analytics on top of the data and give useful info
Kinesis Agent: This is a software application that is used to send data to a Kinesis stream. It can also be done using the AWS SDK.
Consumers: We can select "Process data in real time" under the Kinesis stream dashboard. This makes Kinesis the consumer and shows some processed data. Another option is 'Kinesis Data Firehose', which means using a Firehose delivery stream to process and store the data.
After sending data to Kinesis, it returns a shard ID which can be used to refer to the data and fetch it.
4 Kinesis Features:
1. Kinesis Data Streams: captures, processes and stores data streams in real time
2. Kinesis Data Firehose: primarily moves data from point A to point B. Hence Firehose can be used to fetch data from a Kinesis data stream and then send this data to Splunk. When setting it up, we get the option to choose 'Source' as a Kinesis stream and 'Destination' as Splunk or S3 (S3 can also be chosen as a backup destination for the logs).
3. Kinesis Data Analytics: analyze streaming data in real time with SQL/Java code, so we can use SQL to run queries and analyse the data here.
4. Kinesis Video Streams: capture, process and store video streams
Best centralized logging solution: CW log group (not necessary for VPC flow logs, since VPC flow logs can be sent directly into a stream) + Kinesis Stream + Kinesis Firehose + Splunk + S3
Use CW log groups for central VPC flow logs, since if each VPC sends its logs individually into the stream it might cost more and doesn't seem centralised; rather, we can push VPC flow logs from different accounts into a central CW log group, then this log group pushes the logs into a Kinesis stream, and the stream sends the data into Firehose.
We can set up a centralized logging solution using the components below for CW logs and VPC flow logs (also sent to CW logs):
Usually we would have to set up Splunk so that it can reach each account's CloudWatch logs and fetch the appropriate logs. However, as a better deployment, we can send all the CloudWatch logs from different accounts to a central location and then set up Splunk to just consume the Kinesis data stream (via Firehose) to fetch the logs. Hence Splunk does not need to get into the different accounts to get this data; it can fetch all the data from a single Kinesis data stream which is in turn getting data from the different accounts.
https://www.youtube.com/watch?v=dTMlg5hY6Mk
Steps:
########## Recipient Account ###############
Trust Relationship
{
"Statement": {
"Effect": "Allow",
"Principal": { "Service": "logs.region.amazonaws.com" },
"Action": "sts:AssumeRole"
}
}
IAM Policy:
{
"Statement": [
{
"Effect": "Allow",
"Action": "kinesis:PutRecord",
"Resource": "arn:aws:kinesis:region:999999999999:stream/RecipientStream"
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "arn:aws:iam::999999999999:role/CWLtoKinesisRole"
}
]
}
Output:
{
"destination": {
"targetArn": "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-
data-stream",
"roleArn": "arn:aws:iam::037742531108:role/DemoCWKinesis",
"creationTime": 1548059004252,
"arn": "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination",
"destinationName": "testDestination"
}
}
{
"Version" : "2012-10-17",
"Statement" : [
{
"Sid" : "",
"Effect" : "Allow",
"Principal" : {
"AWS" : "585831649909"
},
"Action" : "logs:PutSubscriptionFilter",
"Resource" : "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"
}
]
}
Reference:
Kinesis Commands:
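A hedged sketch of the commands involved, reusing the ARNs from the output above (the log group name and policy file name are assumptions):
# recipient (log data) account: create the destination that fronts the Kinesis stream
aws logs put-destination \
  --destination-name "testDestination" \
  --target-arn "arn:aws:kinesis:ap-southeast-1:037742531108:stream/kplabs-demo-data-stream" \
  --role-arn "arn:aws:iam::037742531108:role/DemoCWKinesis"
# recipient account: attach the destination policy (the JSON above) allowing the sender account
aws logs put-destination-policy \
  --destination-name "testDestination" \
  --access-policy file://destination_policy.json
# sender account: subscribe a log group to the cross-account destination
aws logs put-subscription-filter \
  --log-group-name "vpc-flow-logs" \
  --filter-name "AllEvents" \
  --filter-pattern "" \
  --destination-arn "arn:aws:logs:ap-southeast-1:037742531108:destination:testDestination"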
_____________________________________________________________________________________________
______________________________
Bastion Host
SSH agent forwarding: This allows users to use their local SSH keys to perform operations on remote servers without the keys ever leaving the workstation. This is a secure deployment, since having SSH keys on the bastion risks the keys in case the bastion host is compromised.
Client workstation ----> Bastion EC2 (public subnet) ---> Private EC2
Normal way (not so secure):
1. Create SSH keys on your instance (client) using the ssh-keygen command. This creates 2 keys, one public and one private.
2. Now cat the public key on the client and copy the content to the bastion host. Basically we are trying to copy the public key from the client to the bastion, hence cat the public key into the authorized_keys file.
3. Do the same with the private EC2 instance as well, i.e. make sure to paste your public key into authorized_keys. Now all 3 instances have the same public key, but only the client has the private key as of now.
4. Now that all instances have the public key, we could copy the private key from the client to the bastion host and then use this private key on the bastion to SSH to the private EC2, but storing private keys on bastion hosts is not safe, hence we go with the safer approach.
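A minimal sketch of the safer approach using SSH agent forwarding, which keeps the private key on the workstation (user names and IPs are placeholders):
# on the workstation: load the private key into the local ssh-agent
ssh-add ~/.ssh/private-ec2-key.pem
# connect to the bastion with agent forwarding (-A); only signing requests are forwarded, never the key
ssh -A ec2-user@<bastion-public-ip>
# from the bastion, hop to the private instance using the forwarded agent
ssh ec2-user@<private-ec2-private-ip>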
Use Systems Manager to get into the private EC2 (very secure). Systems Manager now also works with on-prem resources, hence hybrid deployments are also supported. So if the company has on-prem instances, use SSM (Systems Manager) to get into those on-prem resources.
Steps:
Step 1: Generate Certificates:
sudo yum -y install git
git clone https://github.com/OpenVPN/easy-rsa.git
cd easy-rsa/easyrsa3
./easyrsa init-pki
./easyrsa build-ca nopass
./easyrsa build-server-full server nopass
./easyrsa build-client-full client1.kplabs.internal nopass
<key>
Contents of private key (.key) file
</key>
EC2/Resouce -->uses O
_______________________________________________________________________________
VPC peering also connects one AWS account to another AWS account, so basically we can use VPC peering to connect 2 different VPCs in 2 different AWS accounts.
Imp:
VPC peering will not work with overlapping CIDR ranges. Ex: 172.16.0.0/16 -----X----- can't peer -----X----- 172.16.0.0/16
Transitive VPC access is not possible, which means if VPC A and VPC B are connected to VPC C, then A<-->C and B<--->C are possible but you cannot reach B via C if you are A, i.e. A<--->C<--->B is not possible, so A<---->B is not possible.
Using an EC2 role, we can make CLI calls directly from the EC2 instance, which use that EC2 role instead of configured credentials.
VPC Endpoint Policies
When we create an endpoint to access a service such as S3, we can create a 'VPC endpoint policy' which makes sure that access to the resources reached via the VPC endpoint is secure and restricted. Otherwise, using a VPC endpoint without a policy would make all S3 buckets and many more resources accessible without restriction, which we don't want.
Hence using an endpoint policy we can specify which resources should be accessible and which should not, e.g. Bucket_A is accessible and Bucket_B is not; this can be managed using VPC endpoint policies.
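A minimal sketch of such an endpoint policy (bucket name is hypothetical): only the listed bucket is reachable through the endpoint, everything else is implicitly denied.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyBucketA",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": [
        "arn:aws:s3:::bucket-a",
        "arn:aws:s3:::bucket-a/*"
      ]
    }
  ]
}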
Interface Endpoint
An interface endpoint is a network interface with a private IP that serves as an entry point for traffic destined to a supported AWS service such as S3.
It uses a network interface with a policy. This network interface stays inside the subnet and gets an IP from the subnet.
To use this, create the interface endpoint; the EC2 can then use the interface endpoint to access services such as S3 privately, since the connection goes over the endpoint.
Security groups are attached to ENIs (elastic network interfaces), so if a service has an ENI, then a security group can be associated with it. There are a lot of services that use ENIs/EC2 indirectly and hence SGs can be applied to them also, such as:
We don't really know which port the client will use for the return traffic, hence in the case of a NACL the outbound rules leave all ports open.
In the case of a stateful firewall, i.e. a security group, if we only have an inbound rule opening port 22 and no outbound rules, the return traffic is still allowed out, since the firewall, being stateful, remembers the connection and does not need an outbound rule to send the traffic back. Hence even if there is no outbound rule on the security group and only an inbound rule exists, the connection still works.
Inbound rules are the rules that apply when traffic comes into our EC2, hence inbound is for connections initiated by a client.
Outbound rules: these are for connections initiated by the EC2 itself.
If I only create an inbound rule opening port 22 to my EC2 and delete all outbound rules, then from my laptop I can still SSH to my EC2 even though there is no outbound rule, since the SG is stateful and doesn't need a separate outbound rule to return the traffic.
However, if I now try to ping Google from the SSH'd EC2, it fails, since there is no outbound rule, which is what allows connections initiated by my EC2; hence the ping fails if no outbound rule is present in the SG.
So if your server only receives SSH connections from outside and never initiates a connection itself, then delete all outbound rules in the SG so that the EC2 cannot initiate a connection unless one is externally initiated.
This is why the default outbound rule of an SG is open to all: we trust connections initiated from within the EC2 and want to accept all the return traffic for connections initiated from the EC2.
So inbound and outbound rules are about who initiated the traffic. If an external server initiates the traffic, the SG's inbound rules kick in, and if the EC2 itself initiates the connection, the outbound rules kick in; they act independently of each other.
Stateless Firewall:
Ex: NACL
In this case, both inbound and outbound rules need to be created. With a stateless firewall we have to make sure both inbound and outbound rules exist for a complete connection.
Basically when we have some data in S3 which we have to share publicly, we would have to make the bucket public, however that is not safe. So rather than making the bucket public to the world and giving everyone our S3 bucket DNS name, we can use OAI. OAI is a method where we don't make the bucket publicly accessible to everyone; rather, we give everyone the CDN's (CloudFront) link, and only this CDN is allowed to make the GetObject API call against the bucket. This way our bucket doesn't get random requests from all around the world; it only gets requests from our CDN, and the CDN can manage the security.
This is done by updating the bucket policy so that only the CDN can read the S3 data and no one else:
{
"Version": "2012-10-17",
"Statement": {
"Sid": "AllowCloudFrontServicePrincipalReadOnly",
"Effect": "Allow",
"Principal": {
"Service": "cloudfront.amazonaws.com"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*",
"Condition": {
"StringEquals": {
"AWS:SourceArn":
"arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
}
}
}
}
CloudFront Signed URLs (signed URLs are for the special case where you want to send a unique link to fetch the data)
Signed URLs are special types of URLs generated for CDN distributions. These URLs are not the same as the domain name given when we create the distribution. If signed URLs are enforced, a normal access request via the distribution's URL gets access denied mentioning a 'missing key-pair'; this is because once we have mandated the use of signed URLs, non-signed URLs are rejected.
Basically this is the way to generate unique links for customers and allow time-limited downloads, like the links websites generate these days that let us download the content for some time and then expire; that is an example of using signed URLs, which come with expiries.
This is enabled by ticking 'Restrict Viewer Access', which makes sure that users can only access the content using the signed URL/cookies.
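As a rough sketch, the AWS CLI can generate a signed URL once a CloudFront key pair/signer has been configured (distribution domain, key-pair ID, key file and expiry date are placeholders):
aws cloudfront sign \
  --url https://d111111abcdef8.cloudfront.net/private/report.pdf \
  --key-pair-id K2JCJMDEHXQW5F \
  --private-key file://cf-signer-private-key.pem \
  --date-less-than 2025-01-01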
_____________________________________________________________________________________________
_____________________________________________________________________________________________
____________________________________________________________
DDOS Mitigation:
1. Be ready to scale, so that the service scales up in case of an attack and the server doesn't break down. Auto Scaling can help.
2. Minimize the attack surface area, i.e. decouple the infra, like the database being in a private subnet and the frontend in a public subnet.
3. Know what traffic is normal and what is not normal.
4. Have a plan ready in case of a DDoS attack.
_____________________________________________________________________________________________
_____________________________________________________________________________________________
____________________________________________________________
API Gateway
We can quickly create an API using API Gateway, select a method like GET, and integrate it with a Lambda as the backend.
Now we can take the invoke URL given by the API Gateway service, enter it in the browser and hit enter. This sends a GET request to our API, which in turn triggers the Lambda.
Since we can select a REST API as well, we also get the response sent by the Lambda back through the API.
_____________________________________________________________________________________________
_____________________________________________________________________________________________
____________________________________________________________
S3 and Lambda
We can set up our Lambda and, in the trigger, we get a choice to use S3 directly as a trigger.
Selecting S3 as a direct trigger for the Lambda does not require SNS or SQS; S3 acts as the direct trigger for the Lambda.
_____________________________________________________________________________________________
______________________________
_____________________________________________________________________________________________
______________________________
Ec2 Tenancy
Shared: The EC2 instance runs on shared hardware. Issue: security.
Dedicated: The EC2 instance runs on hardware that is shared only between resources of the same AWS account, hence 1 account - 1 hardware, so all EC2 instances running on our host are ours.
Sometimes licenses are tied to hardware, such as Oracle licenses, so we would not want our hardware to change after a stop/start; this is where dedicated hosts come in, which let us keep the same hardware.
Dedicated Hosts: The instance runs on a dedicated host with a very granular level of hardware visibility and control.
_____________________________________________________________________________________________
______________________________
AWS Artifact (a document portal for compliance)
The Artifact portal provides on-demand access to AWS security and compliance docs, also known as artifacts.
This is required during audits, when auditors ask for proof that AWS services are compliant with PCI DSS, HIPAA, etc.
_____________________________________________________________________________________________
______________________________
Lambda@Edge
Lambda@Edge allows us to run Lambda functions at the edge locations of the CDN, allowing us to do some processing/filtering of the data being delivered by the CDN.
There are 4 different points in the request/response flow where the Lambda can be used: viewer request -> origin request -> origin response -> viewer response (these hooks sit around the CloudFront cache). At any of these 4 points we can implement a Lambda.
_____________________________________________________________________________________________
______________________________
_____________________________________________________________________________________________
______________________________
Step Functions
These are sets of Lambda functions (steps) which depend upon one another.
So if we want Lambda 2 to run after Lambda 1, we use a Step Function to first run Lambda 1 and then trigger Lambda 2.
AWS Network Firewall (the main firewall for the VPC, unlike WAF which protects ALB, CDN and API Gateway only)
AWS Network Firewall is a stateful, managed firewall by AWS which provides intrusion detection and prevention for our VPCs.
AWS Network Firewall works together with AWS Firewall Manager so you can build policies based on
AWS Network Firewall rules and then centrally apply those policies across your VPCs and accounts.
AWS Network Firewall includes features that provide protections from common network
threats. AWS Network Firewall’s stateful firewall can:
This is the main firewall which protects all our resources inside a VPC. Right now we can otherwise only create NACLs to secure our subnets, SGs work at the network interface level, and WAF only protects ALB, CloudFront (CDN) and API Gateway. Network Firewall is the actual firewall that protects the VPC from malicious traffic.
Filtering and Protocol Identification : Incorporate context from traffic flows, like tracking
connections and protocol identification, to enforce policies such as preventing your VPCs from
accessing domains using an unauthorized protocol. We can also block based on TCP flags.
Active traffic Inspection: AWS Network Firewall’s intrusion prevention system (IPS) provides
active traffic flow inspection so you can identify and block vulnerability exploits using
signature-based detection.
Blocking Domains: AWS Network Firewall also offers web filtering that can stop traffic to
known bad URLs and monitor fully qualified domain names.
Stateful and stateless IP/domain filtering: We can do both stateless and stateful IP filtering. We can also upload a bad-domain list so that no resource in our VPC connects to those domains. Both stateless and stateful rules are supported.
VPC level: Since this service attaches the firewall at the VPC level, we can make a uniform deployment of this service so that all VPCs get uniform security. It also integrates with the AWS Firewall Manager service, which works at the org level. This helps in streamlining the firewall controls over VPCs throughout the org. So if some IPs and domains need to be blocked at the org level, use AWS Network Firewall and associate the rule with all the VPCs. This uniform deployment of a firewall cannot be achieved with any other service that can block both IPs and domains.
Sits between resource subnet and its Internet Gateway: After you create your
firewall, you insert its firewall endpoint into your Amazon Virtual Private Cloud
network traffic flow, in between your internet gateway and your customer subnet.
You create routing for the firewall endpoint so that it forwards traffic
between the internet gateway and your subnet. Then, you update the route
tables for your internet gateway and your subnet, to send traffic to the
firewall endpoint instead of to each other.
Earlier:
IGW <------> Subnets <-----> AWS resources
Earlier the traffic used to route directly between the IGW and the subnets inside which the resources are present.
Now: IGW <-------> Network Firewall <-----> Subnets <----> AWS resources
Now the Network Firewall sits between the subnets and the IGW, and we amend the route tables of the IGW and the subnet so that instead of exchanging traffic directly, the traffic goes through the firewall first from the subnet, and then from the firewall to the IGW.
Normally if we have to block some IPs for certain VPCs, we would have to use Splunk to first ingest the VPC flow logs and then cross-match those logs with a bad-IP list. This can be done easily using Network Firewall, since it allows us to upload a list of domains and IPs that need to be restricted.
High availability and automated scaling
AWS Network Firewall offers built-in redundancies to ensure all traffic is consistently
inspected and monitored. AWS Network Firewall offers a Service Level Agreement with
an uptime commitment of 99.99%. AWS Network Firewall enables you to automatically
scale your firewall capacity up or down based on the traffic load to maintain steady,
predictable performance to minimize costs.
Stateful firewall
The stateful firewall takes into account the context of traffic flows for more granular
policy enforcement, such as dropping packets based on the source address or protocol
type. The match criteria for this stateful firewall is the same as AWS Network Firewall’s
stateless inspection capabilities, with the addition of a match setting for traffic direction.
AWS Network Firewall’s flexible rule engine gives you the ability to write thousands of
firewall rules based on source/destination IP, source/destination port, and protocol. AWS
Network Firewall will filter common protocols without any port specification, not just
TCP/UDP traffic filtering.
Web filtering
AWS Network Firewall supports inbound and outbound web filtering for unencrypted
web traffic. For encrypted web traffic, Server Name Indication (SNI) is used for blocking
access to specific sites. SNI is an extension to Transport Layer Security (TLS) that remains
unencrypted in the traffic flow and indicates the destination hostname a client is
attempting to access over HTTPS. In addition, AWS Network Firewall can filter fully
qualified domain names (FQDN).
Intrusion prevention
AWS Network Firewall’s intrusion prevention system (IPS) provides active traffic flow
inspection with real-time network and application layer protections against vulnerability
exploits and brute force attacks. Its signature-based detection engine matches network
traffic patterns to known threat signatures based on attributes such as byte sequences
or packet anomalies.
Traffic Mirroring
AWS allows us to mirror the packets being sent on a network interface and send them to a location we can use for investigating the network traffic and the content of those packets.
Traffic Mirroring is an Amazon VPC feature that you can use to copy network traffic from
an elastic network interface of type interface . You can then send the traffic to out-
of-band security and monitoring appliances for:
Content inspection
Threat monitoring
Troubleshooting
The security and monitoring appliances can be deployed as individual instances, or as a
fleet of instances behind either a Network Load Balancer with a UDP listener or a
Gateway Load Balancer with a UDP listener. Traffic Mirroring supports filters and packet
truncation, so that you only extract the traffic of interest to monitor by using monitoring
tools of your choice.
https://docs.aws.amazon.com/vpc/latest/mirroring/traffic-mirroring-getting-started.html
EBS encryption: There is no direct way to encrypt existing unencrypted EBS volumes or snapshots; you encrypt them by creating a new (encrypted) volume or snapshot copy.
AWS Organizations
AWS Organizations allows us to manage multiple AWS accounts. It can be used to create new accounts and have single (consolidated) billing, and the 'all features' mode lets you leverage some services and features at the org level, such as org-wide CloudTrail trails. Different accounts can be put into different OUs (organizational units). Policies applied on a parent OU are inherited by child OUs.
SCP: Organizations also provides SCPs, which can be used to deny certain actions at the account level. Even the account root user is affected by an SCP and gets denied on actions if the SCP denies them.
Tag policies: Allow us to standardize the tags on our resources. We can create a policy that mandates a tag, and the format of the tag can also be defined.
AI services opt-out: Use this policy to opt out of AWS AI services storing our data for ML-related tasks and optimizations.
Backup policies: These help maintain consistency in how backups are managed across our org.
An org has the following 2 feature sets:
1. All features
2. Consolidated billing: the management account pays for all child accounts
Resource policy: These are attached directly to a resource instead of being attached to an IAM user or role, e.g. an S3 bucket policy, an SQS access policy, etc.
Identity-based policy: These are attached directly to an IAM user/role.
How is access decided if the resource has a resource-based policy and the user also has a policy?
Ans. If the resource and the user both have policies, then the union of all the actions allowed by the resource policy and the identity policy is allowed.
It is not required that both policies allow the action for it to work. If either the resource policy or the identity policy allows it, the user is allowed to make the API call.
Ex:
Below is an example in which there are 3 users whose policies are mentioned, and an S3 bucket whose bucket policy is mentioned below.
S3 bucket policy of Bucket A:
Alice: can list, read, write
Bob: can read
John: full access
Identity-based user policies:
Alice: can read and write on Bucket A
Bob: can read and write on Bucket A
John: no policy attached
Result when these users try to access Bucket A:
Alice: can list, read and write
Bob: can read and write
John: full access
So as we can observe, it doesn't matter which policy is giving the access; both policies are in the end added together, and the combined policy is then used to evaluate access to the resource.
So:
resource policy + identity policy = the full permission set that the user gets
This is for same-account access; however, for cross-account access, both policies need to allow the action.
Practicals:
A user 'temp' is created with no IAM policy.
A new key is created; in the key policy, the user is not mentioned at first.
The user's encrypt and decrypt API calls both fail since none of the policies give him access.
Then the 'encrypt' permission is added to the IAM user policy and the KMS key policy remains the same with no mention of the temp user; this time encrypt works but not decrypt.
Now decrypt permission is added only to the KMS key policy and not to the temp user's IAM policy; now both encrypt and decrypt work.
This means, as mentioned above, that when AWS evaluates permissions it sums up the permissions from the IAM policy and the KMS key policy, and the net access depends on what both give together. So if encrypt is in the IAM policy and decrypt is in the KMS key policy, the user gets the union of both policies as its permissions, so it is able to make both encrypt and decrypt calls.
This could be a bit risky, since even if the KMS key policy mentions nothing about the IAM user, just by setting up its own permissions the temp user can get complete access to the key, simply by mentioning the key in its IAM policy.
One way to get rid of this is to remove the "Sid": "Enable IAM User Permissions" statement, or to add a specific user ARN instead of "AWS": "arn:aws:iam::456841097289:root", since the root principal allows all users and roles in the account to access the key just by using their IAM permissions.
So the statement below in the KMS key policy is what allows IAM users to use their IAM policies to gain access to the KMS key even though the KMS key policy itself might not give them permissions.
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::456841097289:root"
},
"Action": "kms:*",
"Resource": "*"
},
If you don't want IAM users to use their own policies to access the key, and the key should only be accessible when the KMS key policy allows it, then remove the above segment, or at least mention the specific IAM user ARNs whose IAM policies you want to consider.
Not mentioning root makes sure that no one is able to access the key if the KMS key policy itself does not allow it. But please keep in mind that this might make the key unmanageable, and recovering access to it may then be difficult.
External ID
An external ID can be used to make cross-account role assumption secure.
In the trust policy of the role being assumed (in the destination account), the condition section of that trust policy would have:
"Condition": {
  "StringEquals": {
    "sts:ExternalId": "128361"
  }
}
Hence while assuming the role, make sure to mention this external ID, otherwise the role assumption fails due to no external ID being provided.
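A minimal sketch of the assume-role call from the source account, passing the external ID from the trust policy above (role ARN and session name are placeholders):
aws sts assume-role \
  --role-arn arn:aws:iam::999999999999:role/CrossAccountRole \
  --role-session-name external-id-demo \
  --external-id 128361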
While setting up the role, we get an "Options" section which gives the options "Require external ID" and "Require MFA"; tick "Require external ID" on the 'create role' page.
AWS STS
The STS service is responsible for issuing temporary tokens to give temporary access.
STS provides an access key, secret access key and session token, and these can be used to gain access.
Identity Broker
An identity broker is the intermediary that first allows users to log in with their own creds to the identity provider. The IdP (identity provider) then cross-checks the details with Microsoft AD; if the creds match, the IdP accepts the login.
Now that the IdP has confirmed the user, it reaches out to AWS STS to fetch access keys and tokens for the session. These details are fetched by the IdP and then forwarded to the user trying to authenticate; this is called the auth response. As soon as these keys and tokens are received by the user as the auth response, the user is automatically redirected to the AWS console using those creds; this is 'sign in with auth response token'.
So it's like a Cisco employee wearing a Cisco badge goes to an Amazon office and wants to enter Amazon:
The Cisco employee shows his Cisco ID card and asks the guard to let him into Amazon's building.
Since the ID card is not from Amazon, Amazon's guard will send his employee to verify with Cisco whether I am really a Cisco employee.
Cisco confirms: yes, I am a Cisco employee trying to get into Amazon's office for some quick work.
Now that it is confirmed that I am Cisco's employee, Amazon's guard creates a temp ID card for me by asking permission from the main Amazon building which I need to enter.
Now I can enter Amazon's building using temporary credentials. This is how federated login works. If we want to log in to Amazon using Google's creds, Amazon's console will first check with Google to verify my identity and then give me temporary credentials which I can use to explore Amazon. The guard who does all the verifications and gives me the temp ID card is the identity provider. The identity store would be Cisco's office, where my actual identity is stored.
Identity Provider: An example of an identity provider that we can add to AWS 'Identity Providers' in IAM is Okta. Okta is a very well-known identity provider; Okta provides us the metadata and SAML document to set up the login.
Steps:
1. The user logs in with username and password
2. These creds are given to the identity provider
3. The identity provider validates them against the AD
4. If the credentials are valid, the broker contacts the STS token service
5. STS shares the following 4 things:
Access key + secret key + session token + duration
6. The user can now use these to log in to the AWS console or CLI
AWS SSO
AWS SSO allows admins to centrally manage access to multiple AWS accounts and business
applications and provide users with single sign-on access to all their assigned accounts and
applications from one place.
So all the users need to do is sign in to AWS SSO, from where they are redirected to a page with different applications (such as the AWS console, Jenkins, Azure, etc.) to select from; once we select an application, we are redirected to the selected app.
The AWS CLI also integrates with AWS SSO, so we can use the CLI to log in using SSO.
In case of SSO, there are 3 parties:
SAML Flow
1. The flow starts when the user opens the IdP URL (it can be Okta, for example), enters their credentials and selects the application to which they want to gain access.
2. The IdP service validates the creds at its end, so Okta cross-checks whether the creds are correct; if the Okta creds are entered correctly, Okta sends back a SAML assertion as a SAML response.
3. The user now uses this SAML response against the SaaS sign-in page, and the service provider, i.e. AWS, validates the assertion.
4. On validation, AWS constructs the relevant temp creds and a temporary user sign-in link, which is sent to the user, and the user is redirected to the AWS console.
Active Directory
Active Directory is like a central server to store users in a single location.
AD makes it easy to manage users in just a single location.
Multiple other services can use AD to fetch user information instead of having to manage users in each app.
AWS has its own offering called 'AWS Directory Service'.
This is AWS's offering for Active Directory. It gives us 5 ways to set up or connect Active Directory to AWS:
1. AWS Managed Microsoft AD
2. Simple AD
3. AD Connector
4. Amazon Cognito User Pool
5. Cloud Directory
AWS Managed Microsoft AD: An actual Microsoft AD that is deployed in the cloud. It has 2 options:
Standard Edition: up to 500 users
Enterprise Edition: large orgs, up to 5000 users
AD Connector: AD Connector works like a middle-man/bridge service that connects your on-prem AD directory to the AWS cloud. So when a user logs in to the application, the AD Connector redirects the sign-in request to the on-prem AD for authentication.
Simple AD: This is a compatible Active Directory for AWS Directory Service which is similar to Microsoft AD but not the same; it's like a lighter alternative to Microsoft AD. It provides authentication, user management and much more. AWS also provides backups and daily snapshots of it. It does not support MFA. It's like a free version of AD. We can store usernames/passwords here and let engineers use an SSH command with their AD creds to SSH into the EC2.
The access on an S3 bucket is the sum of the bucket policy + the identity policy of the user.
So either the bucket policy or the user policy can give access to an S3 bucket; if both give access, that's fine too. So if a user does not have an IAM identity policy, he can still work on the S3 bucket if the S3 bucket policy gives the user permission. And if the bucket policy doesn't give the user permission but the user's identity policy gives access, then that bucket is still accessible by the user.
If there is no bucket policy, and block all public access is on, obviously an object such as a PDF in the S3 bucket cannot be accessed from the internet. It fails with the error below:
<Error><Code>AccessDenied</Code><Message>Access Denied</Message>
<RequestId>NNRSBY9FR3QKX41G</RequestId>
<HostId>P0oMA9QGoyr+kj3SnuYd3wlVT6pDw2a95WHuem+zh+Iqk1YSWr11J8ATYbb2
6V11PHL+x7XvkyE=</HostId></Error>
Now if I disable block all public access, and the bucket still does not have any bucket policy, then the object is still not accessible and I get the same error as above when trying to access the PDF from the internet, although the 'Access' prompt on the bucket starts to show 'Bucket can be public'. So with 'block all public access' off and no bucket policy, the objects still can't be accessed over the internet; it fails with the above error. This means just turning 'block public access' off doesn't really make the objects public.
However, as soon as I add the bucket policy below after turning 'block all public access' off:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect":
"Allow", "Principal": "*", "Action": "s3:GetObject", "Resource":
"arn:aws:s3:::triallajbsfljabf/*" } ]}
This time the object loads perfectly, because now public access is allowed and the bucket policy finally allows it.
Now let's say there is no bucket policy and 'block all public access' is also off. As seen above, although this shows 'Bucket can be public', if there is no bucket policy like the one above allowing access to "Principal": "*", the objects are still not accessible, as mentioned above.
This kind of means that without a bucket policy the objects can't be open to the world, but there is a catch here: object ACLs can be used to give access to the public.
So if there is no bucket policy, that doesn't mean the object cannot be made public. We can use the object ACL to make it readable to the world without having any bucket policy.
However, if we go to the object and try to edit the ACL, it will not allow us and
shows a message:
"This bucket has the bucket owner enforced setting applied for Object Ownership.
When bucket owner enforced is applied, use bucket policies to control access."
This means that if we want to control the bucket object permissions via ACLs and not
a bucket policy, we first need to:
Go to the S3 bucket ----> Permissions
Scroll down to "Object Ownership". It shows the same "bucket owner enforced"
message. Click "Edit".
Now after clicking Edit, you are taken to the "Edit Object Ownership" page where we
can enable object ACLs.
Here select "ACLs enabled". It shows "Objects in this bucket can be owned by
other AWS accounts. Access to this bucket and its objects can be specified
using ACLs.", which means we are now allowing ACLs to set permissions on
objects. ----> "I acknowledge" ---> choose "Bucket owner preferred" ---> click
"Save changes". Now your objects can use ACLs to grant permissions.
Hence ACLs need to be enabled first, before granting permissions to objects
via ACLs.
After enabling, now go back and select the object.
Navigate to the selected object's "Permissions" ---> this time the "Access
Control List" box has 'Edit' enabled ---> click "Edit".
Finally, choose Everyone (public access) ---> Object, check the 'Read'
option, and save.
Now if you try to access the object from the internet, it is accessible
without a bucket policy.
Conclusion:
To make objects public in an S3 bucket there are 2 ways; however, in both ways
'Block all public access' needs to be 'off', since if it is 'on' then, no matter what we use,
the objects will not be public. Hence, while triaging to understand whether objects are
public, first check whether 'Block all public access' is 'on': if it is, the public is
blocked no matter what the object ACL or the bucket policy says. So the first
requirement to make objects public is that 'Block all public access' is 'off',
and then come the 2 ways to give access:
1. Bucket policy: the bucket policy mentioned above can be applied to the bucket
after turning 'Block all public access' off. Without a bucket policy (or an ACL), the
objects can't be accessed.
2. Object ACLs: if we are not using a bucket policy and 'Block all public access' is off,
we can use object ACLs to make objects public. But this requires 2 things:
first enable "ACLs enabled" in the bucket permissions, then go to the object, edit its
ACL and grant "Read" access to Everyone (public access).
So to make objects public there are 2 levels of security. The first is
'Block all public access', which needs to be 'off' for the bucket to be public. The
second level is more object/folder specific, where we can either
use a bucket policy or an object ACL to give public access.
So "Block All Public Access" is like outside door of house (S3 bucket) that needs to be
opened to give access to inside rooms (objects). Now inside can be opened either via
normal door knobs (Bucket Policy) or inside rooms can be opened using latch(object
ACL), either of the two can be used to open the room.
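The same flow can be reproduced from the CLI. A rough sketch only, assuming the bucket and object names from above are illustrative and that policy.json contains the GetObject bucket policy shown earlier; the ownership-controls step is only needed for the ACL route:
# Door 1: turn off Block Public Access (needed for both routes)
aws s3api put-public-access-block --bucket triallajbsfljabf \
    --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
# Route A: attach the public-read bucket policy
aws s3api put-bucket-policy --bucket triallajbsfljabf --policy file://policy.json
# Route B: enable ACLs on the bucket, then make one object public via its ACL
aws s3api put-bucket-ownership-controls --bucket triallajbsfljabf \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'
aws s3api put-object-acl --bucket triallajbsfljabf --key sample.pdf --acl public-read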
Conditions:
1. For an HTTPS-only access restriction, use the condition {"Bool": {"aws:SecureTransport": "true"}}
on an Allow statement, or deny requests where it is "false".
Hence while sending objects from Account_B to Account_A, make
sure that:
The IAM role sending the logs to the destination bucket has the "PutObjectAcl" and
"GetObjectAcl" permissions added to its IAM policy.
The destination bucket allows the above role to make the "PutObjectAcl" and
"GetObjectAcl" API calls in its bucket policy.
The ACL of the object being sent can be chosen while sending the object, hence in
the CLI command while sending the logs cross-account, make sure to append --acl
bucket-owner-full-control so that the receiving bucket's owner can also view the
objects.
ex: aws s3 cp trialFile.txt s3://destinationbucketisthis --acl bucket-owner-full-control
Now when this file gets uploaded, the bucket owner also has permission to
view the file, not just the uploader. This is why most scripts mention --acl and an ACL
value while uploading. ACLs only really matter if the object is being sent cross-account,
because by default the ACL gives permissions to the account that uploads the object,
and if the source and destination accounts are the same it does not matter.
ACLs also kick in when objects are already present inside the bucket but some other
account wants to access them. In that case the permission needs to be changed to
something like "public-read: object owner has full control and all others have read
permission".
Command to verify the ACL information of a specific object:
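For example (bucket and key names are illustrative):
aws s3api get-object-acl --bucket triallajbsfljabf --key sample.pdf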
Pre-signed URL in S3
If, let's say, we created a website to download only purchased songs, a signed URL
can be generated and sent to customers so that they can download the
content.
The main use-case here is that the bucket and object can remain private and not
public; the signed URL enables a guest user to access the private object even
though they might not have an AWS account.
A pre-signed URL can be generated using the CLI with the 'presign' command, and it
also accepts an expiry time.
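For example, a rough sketch (bucket and key are illustrative); the URL expires after the given number of seconds:
aws s3 presign s3://my-private-bucket/songs/purchased-track.mp3 --expires-in 3600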
Versioning
This enables 2 things
1. It helps us recover a deleted object
2. If we upload the same file but with different text inside it, the previous one gets
overwritten; with versioning, however, we can recover the previous file as an
earlier version.
Versioning applies to complete bucket and not just some objects.
Once enabled, we can still suspend it. After suspending, the previously uploaded objects still
have their versions, but new objects will not get versioned.
Cross-region replication is done by creating a 'replication rule' on the source S3 bucket. Go to
'Replication Rules' of the S3 bucket and select the destination bucket, rule name, IAM role etc.
We can choose a destination bucket in the same account or even in a different AWS account. While
configuring, it prompts us to enable 'versioning' if versioning is not enabled.
Now after configuring this, uploading any object to the source region's bucket will
automatically copy the same object to the destination bucket in a different region.
Amazon S3 Inventory
Amazon S3 Inventory is one of the tools Amazon S3 provides to help manage your
storage. You can use it to audit and report on the replication and encryption status of
your objects for business, compliance, and regulatory needs. You can also simplify and
speed up business workflows and big data jobs using Amazon S3 Inventory, which
provides a scheduled alternative to the Amazon S3 synchronous List API operation.
Amazon S3 Inventory does not use the List API to audit your objects and does not affect
the request rate of your bucket.
You can configure multiple inventory lists for a bucket. You can configure what object
metadata to include in the inventory, whether to list all object versions or only current
versions, where to store the inventory list file output, and whether to generate the
inventory on a daily or weekly basis. You can also specify that the inventory list file be
encrypted.
You can query Amazon S3 Inventory using standard SQL by using Amazon Athena,
Amazon Redshift Spectrum, and other tools such as Presto, Apache Hive, and Apache
Spark. You can use Athena to run queries on your inventory files. You can use it for
Amazon S3 Inventory queries in all Regions where Athena is available.
Object Lock
Object Lock follows the WORM model. WORM stands for 'Write Once, Read Many',
which means that once an object is written it cannot be edited or changed;
however, it can be read many times. Object Lock only works with a versioned bucket,
hence versioning needs to be enabled for this.
This makes sure that the data cannot be tampered with after being written. This helps
in case of attacks such as ransomware, during which the attacker would encrypt all the
objects of the bucket. So if Object Lock is enabled, the attacker would not be able to
encrypt the data, since it cannot be changed.
MFA
We can implement an MFA condition in the IAM policy so that if the user is trying to make
an API call, they first need to log in with MFA.
If the user wants to make the API call via the CLI, they need to append --token-code '<mfa code>'
at the end of the get-session-token command.
The 'mfa code' is shown in your 'authenticator' app.
ex: aws sts get-session-token --serial-number arn:aws:iam::795574780076:mfa/sampleuser --token-code 774221
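A minimal sketch of such an MFA condition in an identity policy (the action and resource are illustrative); the call is only allowed when the session was authenticated with MFA:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyWithMFA",
      "Effect": "Allow",
      "Action": "ec2:StopInstances",
      "Resource": "*",
      "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } }
    }
  ]
}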
Permissions Boundary
It is like a limiting policy which doesn't really grant any access but restricts it. So if the
permissions boundary gives you EC2 read-only access and your own IAM policy gives
you admin permissions, you still only have EC2 read-only access due to the
permissions boundary (the effective permissions are the intersection of the two).
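A rough sketch of attaching such a boundary from the CLI (the user name is illustrative; the AWS managed policy AmazonEC2ReadOnlyAccess is used as the boundary):
# Cap the user's effective permissions at EC2 read-only, regardless of their identity policies
aws iam put-user-permissions-boundary \
    --user-name example-developer \
    --permissions-boundary arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess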
S3 and IAM
Version element: policy variables such as ${aws:username} work with policy version
2012-10-17 and not with version 2008-10-17, hence keep this in mind while creating a
policy that uses a policy variable.
Usually most companies have multiple AWS accounts to manage, maybe more
than 100. The main things that are managed are:
A single sign-on option so users can sign in to any account with a single
credential
A single dashboard to view compliance across the different AWS accounts
CFN templates to deploy various services in the different accounts,
which is manual work
Control Tower solves these requirements and many more by integrating with services
such as AWS SSO, Config and aggregators, CloudFormation stacks, best practices, AWS
Organizations etc.
Control Tower makes sense for companies that are newly deploying to AWS and want
to scale the environment into multiple accounts without much fuss with CFN templates.
Although, usually companies already have their own SSO solution (such as
StreamLine in Cisco) and would rather choose to create a CFN template that deploys
the architecture in a few clicks; why pay the extra cost.
So basically, when we choose an IAM role while deploying a stack (since we
want that CFN stack to use the permissions of that IAM role), the IAM
user we are currently using must have PassRole permissions, since it is
like passing a role to the resource. Also, whoever passes the role gets access
to the resource and can further exploit those permissions. (A sample PassRole
policy is sketched below.)
Examples when PassRole happens:
While assigning an IAM role to an EC2 instance; this IAM role can then be used after
logging into the EC2.
While assigning a role to a CFN template; the template can then be used to
create/delete resources based on the passed role's permissions.
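A minimal sketch of the PassRole statement mentioned above (the account ID, role name and the service condition are illustrative):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPassingDeployRoleToCloudFormation",
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/cfn-deploy-role",
      "Condition": { "StringEquals": { "iam:PassedToService": "cloudformation.amazonaws.com" } }
    }
  ]
}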
AWS WorkMail
Earlier it was very tough to manage your own mail server; it needed to be installed
and configured manually.
Not that advanced, just a very basic UI.
AWS KMS
Encrypting Data in KMS:
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --plaintext fileb://plain-text.txt
Decrypting Data in KMS:
aws kms decrypt --ciphertext-blob <value>
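A fuller round trip, as a sketch (the key ID and file names are illustrative); note that the CLI returns the ciphertext and plaintext base64-encoded:
# Encrypt: save the raw ciphertext to a file
aws kms encrypt --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --plaintext fileb://plain-text.txt \
    --query CiphertextBlob --output text | base64 --decode > ciphertext.bin
# Decrypt: for symmetric keys KMS identifies the CMK from the ciphertext itself
aws kms decrypt --ciphertext-blob fileb://ciphertext.bin \
    --query Plaintext --output text | base64 --decode > decrypted.txt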
KMS Architecture
KMS uses CloudHSM at the backend intensively.
Our view/access terminates at the KMS interface; behind that, the KMS interface is
connected to KMS hosts, which are then connected to the HSMs and CMKs.
A maximum of 4 KB of data can be encrypted directly with a CMK; for more data than that,
a data key is required.
Envelope Encryption
AWS KMS solution uses an envelope encryption strategy with AWS KMS keys.
Envelope encryption is the practice of encrypting plaintext data with a data key,
and then encrypting the data key under another key. Use KMS keys to generate,
encrypt, and decrypt the data keys that you use outside of AWS KMS to encrypt
your data. KMS keys are created in AWS KMS and never leave AWS KMS
unencrypted.
So to encrypt data:
1. First a customer master key (CMK) is generated.
2. From this CMK we request AWS to generate a data key, which comes back in two forms:
i. a ciphertext (encrypted) data key, ii. a plaintext data key.
3. The plaintext data key is used to encrypt the plaintext data. The data is now called
ciphertext data.
4. The ciphertext data key is stored with the ciphertext data. This ciphertext data key
itself is encrypted by the master key.
While decrypting:
1. First we call the KMS Decrypt API on the stored ciphertext data key; KMS decrypts it
with our CMK and returns the plaintext data key.
2. This plaintext data key is then used to decrypt the data which is stored with it.
Let's see how envelope encryption works; but before encrypting any data, the customer
needs to create one or more CMKs (Customer Master Keys) in AWS KMS.
Encryption
API request is sent to KMS to generate Data key using CMK. 'GenerateDataKey'
KMS returns response with Plain Data key and Encrypted Data key (using CMK).
Data is encrypted using Plain Data key.
Plain Data key is removed from memory.
Encrypted Data and Encrypted Data Key is packaged together as envelope and
stored.
Decryption
Encrypted Data key is extracted from envelope.
API request is sent to KMS using Encrypted Data key which has information about
CMK to be used in KMS for decryption.
KMS returns response with Plain Data Key.
Encrypted Data is decrypted using Plain Data key.
Plain Data Key is removed from memory.
So basically:
First a GenerateDataKey API call is made, which returns the same data key in two forms:
a plaintext copy, and a copy encrypted under the CMK. The plaintext copy is the one
used to encrypt the plaintext data.
The data is encrypted using the plaintext data key.
Since keeping this plaintext key around is not secure, the plaintext key is deleted, and the
encrypted version of this key is stored with the data.
So now the data is encrypted using the plaintext data key, and the data key is itself
encrypted using our main CMK.
When decryption is to happen, the encrypted data key is taken from alongside the data
(it also identifies which CMK was used).
This encrypted key, which was stored with the data, is first decrypted by KMS using the
CMK. Since this key is now decrypted, it can be used to decrypt the data.
So the data is decrypted and then the unencrypted key is deleted again.
This is why it is called envelope encryption: our plain data is encrypted
with the plain data key, and then the plain data key is encrypted with our
CMK.
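A rough end-to-end sketch of this flow from the CLI, assuming jq and OpenSSL 1.1.1+ are available; the alias and file names are illustrative, and the plaintext key is used as an OpenSSL passphrase purely for illustration:
# 1. GenerateDataKey returns the same key twice: plaintext and encrypted under the CMK
aws kms generate-data-key --key-id alias/my-cmk --key-spec AES_256 > datakey.json
jq -r .Plaintext datakey.json > data-key.b64                         # plaintext data key (base64)
jq -r .CiphertextBlob datakey.json | base64 --decode > data-key.enc  # data key encrypted by the CMK
# 2. Encrypt the data locally with the plaintext key, then delete the plaintext key
openssl enc -aes-256-cbc -pbkdf2 -in secret.txt -out secret.txt.enc -pass file:./data-key.b64
rm data-key.b64 datakey.json
# 3. Store secret.txt.enc together with data-key.enc (the "envelope")
# Decryption: ask KMS for the plaintext key back, decrypt the data, delete the key again
aws kms decrypt --ciphertext-blob fileb://data-key.enc \
    --query Plaintext --output text > data-key.b64
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.txt.enc -out secret.txt -pass file:./data-key.b64
rm data-key.b64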
Deleting KMS Keys
Deletion is irreversible.
Keys can be scheduled for deletion; the default is a 30-day wait period and the allowed
range is 7 to 30 days.
A key can be disabled immediately instead.
KMS keys have 2 parts, the key metadata and the key material. The key metadata includes the KMS
key ARN, key ID, key spec, usage, description etc.
An AWS managed key can't be deleted or edited and its permissions can't be modified; it is rotated
by AWS every 365 days. A customer managed CMK allows all of this, and is rotated either manually
or by enabling automatic rotation every 365 days.
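A few illustrative commands for the above (the key ID is a placeholder):
# Schedule deletion with the minimum 7-day waiting period (can be cancelled until it completes)
aws kms schedule-key-deletion --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --pending-window-in-days 7
aws kms cancel-key-deletion --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
# Disable immediately instead of deleting
aws kms disable-key --key-id 1234abcd-12ab-34cd-56ef-1234567890ab
# Turn on yearly automatic rotation for a customer managed key
aws kms enable-key-rotation --key-id 1234abcd-12ab-34cd-56ef-1234567890ab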
Asymmetric Encryption
Asymmetric encryption uses public and private keys to encrypt and decrypt the data.
The public key can be shared publicly while the private key remains restricted and secret.
So if person A has to receive a message from me, I'll use person A's public key to
encrypt the data. The encrypted data reaches person A and only they will be
able to decrypt it, since decryption requires their own private key.
Mostly used in the SSH and TLS protocols, but also used in Bitcoin, PGP, S/MIME.
Below are the 2 uses of asymmetric keys in AWS:
1. Encrypt and Decrypt
2. Sign and Verify
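A sketch of the Sign and Verify use, assuming an asymmetric KMS key with usage SIGN_VERIFY (the alias, algorithm and file names are illustrative):
# Sign a message; the private key never leaves KMS
aws kms sign --key-id alias/my-signing-key --message fileb://message.txt \
    --message-type RAW --signing-algorithm RSASSA_PKCS1_V1_5_SHA_256 \
    --query Signature --output text | base64 --decode > signature.bin
# Verify the signature with the same key
aws kms verify --key-id alias/my-signing-key --message fileb://message.txt \
    --message-type RAW --signing-algorithm RSASSA_PKCS1_V1_5_SHA_256 \
    --signature fileb://signature.bin --query SignatureValid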
Data Key Caching
A data key is needed for every encryption operation. Now if there are several
different applications/API calls requesting encryption, it becomes very hard to
keep generating and using data keys, and it also shoots up the cost.
As a solution, we can implement data key caching, which caches the data key
after generating it so that it can be used in multiple different operations without
generating it every time.
However, it is not very secure, since storing data keys is not recommended.
If the CMK which was used to encrypt an EBS volume gets deleted, that doesn't
mean the EBS volume is now useless. This is because when that EBS volume was attached to
the EC2 instance, a Decrypt call was made, so the EC2 instance already holds the plaintext
data key needed for the volume. So even if the KMS CMK which encrypted the EBS volume gets
deleted, the EC2 instance will still be able to access the volume without issues. However,
if the EBS volume is detached and then reattached, the CMK will be needed again, hence avoid
detaching the EBS volume from the EC2 instance if the key is deleted.
KMS Key Policy and Key Access
If a key doesn't have any policy associated, then no one has any access to that
key. So basically no user, including the root user and the key creator, has any access to the
key unless specified explicitly in the key policy. This is different from S3 buckets,
where the bucket policy can be left blank and users/roles can have access just with their
own access policies.
When the default KMS key policy is attached to the key, it enables administrators to
define permissions at the IAM level, but that doesn't mean everyone gets access: users
will still need a KMS-specific permissions policy attached to their user/role.
The KMS key policy actually has a first statement which allows the use of IAM policies by the
users to access the key; its principal mentions .../root so that it applies to all users
and roles in the account.
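That first statement of the default key policy looks roughly like this (account ID is illustrative); it delegates access decisions to IAM policies in the account:
{
  "Sid": "Enable IAM User Permissions",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
  "Action": "kms:*",
  "Resource": "*"
}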
Grant user: the user who has access to the CMK and has generated the grant.
Grantee: the principal who will be using the grant to access the key.
Generating a grant returns a grant token and a grant ID. The grant token is used by
the grantee for the encrypt/decrypt APIs, and the grant ID can be used by the grant user to
revoke/delete the grant later on.
The --operations part mentions what access is being given via the grant; if Encrypt is
mentioned, then when the grantee uses the grant they will be able to make the
Encrypt API call.
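A rough sketch of creating and later revoking a grant (the key ID and grantee ARN are illustrative):
# Returns a GrantToken (for the grantee to use) and a GrantId
aws kms create-grant --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
    --grantee-principal arn:aws:iam::111122223333:role/app-role \
    --operations Decrypt Encrypt
# Later, the grant user can revoke it using the GrantId
aws kms revoke-grant --key-id 1234abcd-12ab-34cd-56ef-1234567890ab --grant-id <grant-id>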
CiphertextBlob: this is the section of the output that we get after encrypting data. The
CiphertextBlob section has the actual encrypted text (base64-encoded), while the other sections
include key-related info.
Encryption Context
All KMS operations with symmetric CMKs have the option to send an encryption context
while encrypting data. If an encryption context was provided during
encryption, then while decrypting you need to provide the
same data again.
This acts like a 2nd security layer: each piece of data might be
encrypted with the same key, but if we provide a unique encryption context, then while
decrypting, even though the same key is used, every decryption requires
that unique encryption context to be sent. This is called Additional Authenticated
Data, or AAD.
The encryption context is provided while encrypting, as key-value pairs,
and has to be provided again while decrypting the data.
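A small sketch (the alias, file names and the context pair are illustrative); decryption fails unless the exact same context is supplied:
# Encrypt with an encryption context
aws kms encrypt --key-id alias/my-cmk --plaintext fileb://record.txt \
    --encryption-context department=finance \
    --query CiphertextBlob --output text | base64 --decode > record.enc
# Decrypt must supply the same key-value pair
aws kms decrypt --ciphertext-blob fileb://record.enc \
    --encryption-context department=finance \
    --query Plaintext --output text | base64 --decode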
Multi-Region Key
We can now create multi-Region keys instead of Region-specific keys.
This is achieved by something called a 'multi-Region primary key' and a 'multi-Region
replica key'.
The multi-Region primary key is the main key created in one Region. We can
then create a replica key which has the same key ID and key material as the
primary key, but in a different Region.
To create a multi-Region key, just start with the normal creation of a CMK; however, on the first
page of the setup where we select the key type as 'Symmetric', open 'Advanced options' and
choose 'Multi-Region key' instead of the default 'Single-Region key'. The rest
of the setup is the same.
Once created, we can select the CMK, go to the Regionality card and click on
"Create replica key". This gives the option of choosing a new Region for the replica key.
We can manage the key policy for this replica key separately.
A replica key itself cannot be replicated, and rotation cannot be handled from a replica key;
this all happens from the primary key. Multi-Region key IDs start with 'mrk-'.
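A rough CLI sketch (the description, key ID and Regions are illustrative):
# Create a multi-Region primary key (its key ID will start with mrk-)
aws kms create-key --multi-region --description "primary multi-Region key"
# Replicate it into another Region; the replica keeps the same key ID and key material
aws kms replicate-key --key-id mrk-1234abcd12ab34cd56ef1234567890ab --replica-region us-west-2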
S3 Encryption
1. Server Side Encryption
2. Client Side Encryption
SSE-S3: server-side encryption in which each object is encrypted with a unique key using
AES-256, one of the strongest block ciphers.
SSE-KMS (SSE with a CMK): we can use a KMS CMK to encrypt and decrypt the data.
SSE-C: in this case the key is provided by the customer while the data is sent to the bucket, so
the key needs to be provided along with the data.
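A few illustrative upload commands showing the three options (bucket name, key alias and key file are placeholders):
# SSE-S3 (AES-256 managed by S3)
aws s3 cp report.pdf s3://my-bucket/ --sse AES256
# SSE-KMS with a customer managed CMK
aws s3 cp report.pdf s3://my-bucket/ --sse aws:kms --sse-kms-key-id alias/my-cmk
# SSE-C: the customer supplies the key with the request
aws s3 cp report.pdf s3://my-bucket/ --sse-c AES256 --sse-c-key fileb://my-aes256-key.bin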
S3 Same Region Replication
Amazon S3 now supports automatic and asynchronous replication of newly uploaded S3
objects to a destination bucket in the same AWS Region. Amazon S3 Same-Region
Replication (SRR) adds a new replication option to Amazon S3, building on S3 Cross-
Region Replication (CRR) which replicates data across different AWS Regions
S3 Replication
Replication enables automatic, asynchronous copying of objects across Amazon S3
buckets. Buckets that are configured for object replication can be owned by the same
AWS account or by different accounts. You can replicate objects to a single destination
bucket or to multiple destination buckets. The destination buckets can be in different
AWS Regions or within the same Region as the source bucket.
To automatically replicate new objects as they are written to the bucket, use live
replication, such as Cross-Region Replication (CRR). To replicate existing objects to a
different bucket on demand, use S3 Batch Replication. For more information about
replicating existing objects, see When to use S3 Batch Replication.
To enable CRR, you add a replication configuration to your source bucket. The minimum
configuration must provide the following:
The destination bucket or buckets where you want Amazon S3 to replicate objects
An AWS Identity and Access Management (IAM) role that Amazon S3 can assume
to replicate objects on your behalf
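A minimal sketch of such a configuration applied from the CLI (bucket names, role ARN and rule contents are illustrative; versioning must already be enabled on both buckets):
aws s3api put-bucket-replication --bucket source-bucket \
    --replication-configuration file://replication.json
# replication.json
{
  "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateEverything",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::destination-bucket" }
    }
  ]
}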
Along with Shield, we can use WAF as well in case of a DDoS attack:
For application layer attacks, you can use AWS WAF as the primary mitigation. AWS WAF
web access control lists (web ACLs) minimize the effects of a DDoS attack at the
application layer.
AWS Shield Standard automatically protects your web applications running on AWS
against the most common, frequently occurring DDoS attacks. You can get the full
benefits of AWS Shield Standard by following the best practices of DDoS resiliency on
AWS.
AWS Shield Advanced manages mitigation of layer 3 and layer 4 DDoS attacks. This
means that your designated applications are protected from attacks like UDP Floods, or
TCP SYN floods. In addition, for application layer (layer 7) attacks, AWS Shield Advanced
can detect attacks like HTTP floods and DNS floods. You can use AWS WAF to apply
your own mitigations, or, if you have Business or Enterprise support, you can engage the
24X7 AWS Shield Response Team (SRT), who can write rules on your behalf to mitigate
Layer 7 DDoS attacks.
Advanced real-time metrics and reports: You can always find information about
the current status of your DDoS protection and you can also see the real-time
report with AWS CloudWatch metrics and attack diagnostics.
Cost protection for scaling: This helps you against bill spikes after a DDoS attack
that can be created by scaling of your infrastructure in reaction to a DDoS attack.
AWS WAF included: Mitigate complex application-layer attacks (layer 7) by
setting up rules proactively in AWS WAF to automatically block bad traffic.
You get 24×7 access to our DDoS Response Team (DRT) for help and custom
mitigation techniques during attacks. To contact the DRT you will need the
Enterprise or Business Support levels.
AssumeRoleWithWebIdentity
Calling AssumeRoleWithWebIdentity does not require the use of AWS security
credentials. Therefore, you can distribute an application (for example, on mobile devices)
that requests temporary security credentials without including long-term AWS
credentials in the application.
Use signed URLs when restricting access to individual files; however, use signed
cookies when there are multiple files, such as multiple video files of the same format.
Load Balancer
ALB: works at the HTTP and HTTPS traffic level. This means that whatever data is present at layer
7, like GET, PUT etc. requests, we can route traffic based on those headers. For example, ALB can
route traffic based on the User-Agent data present in the request.
ALB can do path-based routing: we can mention specific paths of a website, such as
/videos or /games, and ALB will route to different EC2 instances according to the target group
set for the path.
Listener: a component of the ALB that checks the protocol of the request and then
forwards the traffic based on the rules set for that protocol.
NLB: works at the UDP, TCP and TLS protocol level. Gives more performance than ALB. We
cannot associate a security group with an NLB. NLB works at the network layer and selects a target
based on a flow-hash algorithm.
Glacier vaults
In Glacier, the data is stored as archives.
A vault is the way archives are grouped together in Glacier.
We can manage access to a vault using IAM policies.
A resource-based policy can also be attached to the vault.
Vault Lock allows us to easily manage and enforce compliance controls for vaults
with a vault lock policy.
We can specify controls such as "Write Once Read Many" (WORM) in the vault lock policy
and lock the policy from future edits.
Vault lock policies are immutable, so once locked the policy can't be changed.
Initiating a vault lock (creating the vault lock policy) returns a lock ID.
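A rough sketch of the two-step lock process (vault name and policy file are illustrative; the file is assumed to hold the vault lock policy document). The initiate call returns the lock ID, and the lock must be completed with that ID within 24 hours or it is aborted:
# Step 1: attach the vault lock policy in the InProgress state; note the returned lockId
aws glacier initiate-vault-lock --account-id - --vault-name example-vault \
    --policy file://vault-lock-policy.json
# Step 2: complete (or abort) the lock using the lockId from step 1
aws glacier complete-vault-lock --account-id - --vault-name example-vault --lock-id <lock-id>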
DynamoDB Encryption
AWS KMS can be used for server-side encryption to encrypt data in
DynamoDB. This is encryption at rest. We can either choose the AWS-managed
KMS key or choose our own CMK.
We can also choose to do client-side encryption, which means we first
encrypt the data and then send it.
For client-side encryption, DynamoDB provides a library on GitHub called the
DynamoDB Encryption Client.
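A sketch of choosing our own CMK at table creation (the table schema and key alias are illustrative):
aws dynamodb create-table --table-name Orders \
    --attribute-definitions AttributeName=OrderId,AttributeType=S \
    --key-schema AttributeName=OrderId,KeyType=HASH \
    --billing-mode PAY_PER_REQUEST \
    --sse-specification Enabled=true,SSEType=KMS,KMSMasterKeyId=alias/my-cmk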
Secrets Manager
We can use Secrets Manager to store text/keys or strings in an encrypted manner.
Secrets Manager can work with RDS databases to store their creds and can be used by other
databases as well.
We can enable automatic rotation for a secret or choose manual rotation.
We also get sample code to use in our SDK and retrieve the secrets.
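A couple of illustrative commands (the secret name and values are placeholders):
# Store a secret
aws secretsmanager create-secret --name prod/app/db-credentials \
    --secret-string '{"username":"admin","password":"S3cr3tValue"}'
# Retrieve it from an application or script
aws secretsmanager get-secret-value --secret-id prod/app/db-credentials \
    --query SecretString --output text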
Secrets Manager integration with RDS
SM integrates with MySQL, PostgreSQL and Aurora on RDS very well to store the credentials,
and we can use auto-rotation with it.
When we create a secret for RDS, it actually creates a Lambda function which goes and
changes the creds on the RDS instance when rotation is needed. So if the RDS instance is inside a
VPC, make sure to allow inbound access for that Lambda, or have the Lambda deployed in the same VPC.
DNS Cache Poisoning
In this kind of attack, wrong info is fed into the DNS cache of a user so that they
get redirected to the wrong website. So if we wanted to go to google.com, cache poisoning
would insert a malicious IP which redirects us to a malicious server. Ex: ettercap in Kali.
DNSSEC
DNSSEC makes use of asymmetric keys, i.e. it uses a private and a public key.
DNSSEC creates a secure domain name system by adding cryptographic signatures
to existing DNS records. (It is similar to signing a file: signing produces a separate
signature file, e.g. sign.txt, and when we run verify, the verify command lets us know
if the message was tampered with.)
DNSSEC works similarly, in that the DNS response is verified: resolvers use a key to
verify that the response really came from the authoritative server, and only then does
the website load for the client.
It does include more steps and hence more computational power is needed.
We can enable DNSSEC in Route 53 for our domain; we need to enable "DNSSEC
signing". This creates a KSK (key-signing key).
When we request a website, we get back the website IP and the record's signature.
Exam Important Points
1. Dealing with exposed access keys
1. Determine the access associated with the key
2. Deactivate it
3. Add an explicit deny-all policy
4. Review logs to look for possible backdoors
2. Dealing with a compromised EC2 instance
1. Lock Down the Ec2 instance security group
2. take EBS snapshot
3. Take a memory dump
4. Perform the forensics analysis
3. GuardDuty
1. Whitelist EIP of Ec2 if we are doing pen testing
2. It uses VPC Flow logs, Cloudtrail logs and DNS logs
3. It can generate alerts if CloudTrail gets disabled.
4. We can now use a "pre-authorized scanning engine" for pen testing an AWS env without
asking for permission from AWS, as used to be required earlier.
5. Fields of VPC Flow logs:
1. version : flow log version
2. account id
3. network interface id
4. source address
5. destaddr: destination address
6. source port
7. dest port
8. protocol number
9. number of packets transferred
10. number of bytes transferred
11. start time in seconds
12. end time in seconds
13. action : accept vs reject
14. logging status of flow log
6. AWS Inspector scans targets based on various baselines:
1. Common Vulnerabilities and Exposures, i.e. CVE
2. CIS benchmarks
3. Security Best Practices
4. Network Reachability
7. Systems Manager:
1. Run Command: to run commands on EC2
2. Patch compliance: allows us to check the compliance status of EC2 instances with
respect to patching activity
3. Patch baseline: this determines which patches need to be installed on EC2.
We can also define the approval rules for the same
4. Maintenance window
5. Parameter Store
8. AWS Config use cases
1. Can be used to audit IAM policies assigned to users before and after a specific
event
2. Detect if CloudTrail is disabled
3. Verify EC2 instances use approved AMIs only
4. Detect security groups open to the world
5. Detect if EBS volumes are encrypted
6. It can work with lambda for automated remediations
9. AWS Athena
1. This doesn't require additional infra to query logs in S3
10. AWS WAF
1. Can be attached to ALB, CloudFront distributions and API Gateway
2. Blocks layer 7 attacks
3. Can also be used to block user-agent headers
11. CloudWatch Logs
1. Steps to setup and troubleshoot logs are imp
2. First assign an appropriate role to the EC2 instance, then install the CloudWatch
agent and finally configure the agent to log correctly.
3. Verify that the awslogs agent is running
4. CloudWatch metric filter can be used to get alerts
12. IP Packet inspection
1. If we want to inspect the IP packets for anomalies, we have to create a proxy
server and route all the traffic from VPC through that server.
2. install appropriate agent on the hosts and examine at the host level
13. AWS Cloudtrail
14. AWS Macie: Can be used to find PII data in s3 bucket
15. AWS Security Hub: helps with compliance management
16. Services that would help us in case of DDOS attack:
1. WAF
2. Autoscaling
3. AWS Shield (layer 3, layer 4 and layer 7)
4. R53
5. CloudWatch
17. EC2 key pair
1. If we create an AMI of an EC2 instance, copy it to a different region and launch an EC2
instance from that AMI, the new EC2 instance will still have the older public key in its
authorized_keys file. We can use the older private key to SSH into this new
EC2 instance
18. EBS secure data wiping
1. AWS wipes the data when provisioning it for a new customer
2. Before deleting an EBS volume, we can also wipe the data manually
19. CloudFront
1. We can use OAI so that only the required customers can view the content
2. We can use signed URLs for RTMP distributions; signed cookies are not supported for RTMP
3. We can use signed cookies when we have multiple files of the same format
requiring restricted access; hence they are used for multiple files.
4. CloudFront can use a custom TLS certificate.
20. VPC Endpoints
1. gateway endpoint
2. interface endpoint
21. NACLs
1. Stateless firewall offering
2. max 40 rules can be applied
22. AWS SES
1. used for emails
2. Can be encrypted
23. Host Based IDS
1. This can be installed manually on EC2 for "file integrity monitoring".
2. Can be integrated with CloudWatch for alerting
24. IPS
1. For an intrusion prevention system, this can be installed on EC2 instances to scan traffic and
send data to a central EC2 instance
25. ADFS
1. Active Directory Federation Services (ADFS) is an SSO solution by Microsoft.
2. Supports SAML
3. Imp: AD groups are associated with the IAM roles. AD GROUP -------> IAM ROLE
26. Cognito
1. Provides authentication, authorization and user management for web and mobile apps
2. good choice for mobile application auth needs
3. Catch word: Social Media website
27. KMS
1. If we have accidentally deleted the imported key material, we can download
a new wrapping key and import token and import the original key material into the
existing CMK
2. The Encrypt API can only encrypt data up to 4 KB.
3. GenerateDataKeyWithoutPlaintext: Returns a unique symmetric data key for
use outside of AWS KMS. This operation returns a data key that is encrypted
under a symmetric encryption KMS key that you specify. The bytes in the key
are random; they are not related to the caller or to the KMS key.
GenerateDataKeyWithoutPlaintext is identical to
the GenerateDataKey operation except that it does not return a plaintext copy
of the data key.
This operation is useful for systems that need to encrypt data at some point,
but not immediately. When you need to encrypt the data, you call
the Decrypt operation on the encrypted copy of the key.
It's also useful in distributed systems with different levels of trust. For example,
you might store encrypted data in containers. One component of your system
creates new containers and stores an encrypted data key with each container.
Then, a different component puts the data into the containers. That
component first decrypts the data key, uses the plaintext data key to encrypt
data, puts the encrypted data into the container, and then destroys the
plaintext data key. In this system, the component that creates the containers
never sees the plaintext data key.
4. To keep using an expiring CMK without changing either the CMK or the key
material, we have to re-import the same key material into the CMK; this gives
a new expiration date.
5. The import token used while importing CMK key material is only valid for 24 hours
6. Digital signing is supported with the new asymmetric keys feature of AWS KMS
28. S3
1. To restrict access to the bucket based on region, use the condition
StringLike: {"s3:LocationConstraint": "us-west-2"}
To Learn:
1. KMS Key imports concepts
2. Credential report contents
3. key rotation enable/disable and frequency
4. SCP should not be applied to root account
5. How and which service does Document Signing
Kinesis Client Library (KCL): requires DynamoDB and CloudWatch services and
permissions
Kinesis:
Basic monitoring: Sends stream level data to CW
Enhanced: Sends shard level data
KMS
The Encrypt API only encrypts up to 4 KB of data. You can use this operation to encrypt
small amounts of arbitrary data, such as a personal identifier or database password,
or other sensitive information. You don't need to use the Encrypt operation to
encrypt a data key. The GenerateDataKey and GenerateDataKeyPair operations
return a plaintext data key and an encrypted copy of that data key.