EC2
1. How does AWS Auto Scaling handle unpredictable traffic and sudden changes in demand?
AWS Auto Scaling dynamically adjusts the number of EC2 instances based on real-time
demand.
Ensures high availability and cost efficiency by scaling up or down as needed.
Key Mechanisms
Scaling Policies: Define how and when scaling actions occur.
Types:
Dynamic Scaling: Adjusts capacity based on real-time metrics.
Predictive Scaling: Uses machine learning to anticipate demand patterns.
CloudWatch Metrics:
Monitors instance performance (CPU utilization, memory usage).
CloudWatch alarms trigger scaling actions when predefined thresholds are breached.
Predefined Thresholds: Set upper and lower limits for resource usage (e.g., CPU utilization above 70% triggers a scale-out); see the example configuration below.
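As a concrete illustration (not part of the original notes, and the 70% CPU target is an assumed value), a minimal target-tracking configuration could look like the sketch below. Saved as config.json, it could be passed to aws autoscaling put-scaling-policy --policy-type TargetTrackingScaling --target-tracking-configuration file://config.json:
{
  "TargetValue": 70.0,
  "PredefinedMetricSpecification": {
    "PredefinedMetricType": "ASGAverageCPUUtilization"
  },
  "DisableScaleIn": false
}
With this configuration, the Auto Scaling group adds or removes instances to keep average CPU utilization near the target value.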
Benefits
Handles Traffic Spikes: Automatically adjusts to sudden changes in traffic.
Cost-Efficiency: Launches instances only when needed, reducing idle resources.
High Availability: Ensures sufficient capacity to handle demand without manual
intervention.
2. What is AWS Global Accelerator and how does it improve application performance?
AWS Global Accelerator uses AWS's global infrastructure to route traffic instead of the public internet.
How It Works
Traffic Routing via AWS Network: Routes user traffic through AWS’s private global network
for reduced latency and improved performance.
Benefits
Improved Performance:
Reduces latency by leveraging the AWS global network.
Optimizes routing to the closest and best-performing endpoint.
High Availability:
Provides automatic failover to healthy endpoints, ensuring uninterrupted service.
3. Explain AWS Lambda@Edge and its primary use cases.
Lambda@Edge runs Lambda functions at AWS edge locations via Amazon CloudFront.
It executes code closer to users, reducing latency and improving performance.
Features:
Low Latency Execution
Integration with CloudFront
Use Cases:
Dynamically modify content based on user location, device type, or request data.
Perform request validation (e.g., JWT checks) before forwarding to the origin server.
Serve different versions of content to users without changing backend configurations.
Rewrite URLs or generate static pages dynamically to improve search engine visibility.
Benefits:
Reduces latency by executing code at edge locations.
Enhances security by intercepting and validating requests early.
Provides scalability without managing infrastructure.
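For orientation, here is a minimal sketch (the function ARN is a placeholder) of how a Lambda@Edge function is attached to a CloudFront cache behavior through a LambdaFunctionAssociations entry in the distribution configuration; the ARN must point to a specific published version of a function in us-east-1:
{
  "LambdaFunctionAssociations": {
    "Quantity": 1,
    "Items": [
      {
        "LambdaFunctionARN": "arn:aws:lambda:us-east-1:111122223333:function:rewrite-request:3",
        "EventType": "viewer-request",
        "IncludeBody": false
      }
    ]
  }
}
The EventType (viewer-request, viewer-response, origin-request, or origin-response) determines where in the request/response flow the function runs.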
4. How does Amazon Aurora differ from standard MySQL or PostgreSQL databases on AWS?
• Performance: Aurora is up to 5x faster than standard MySQL and up to 3x faster than standard PostgreSQL; the standard engines provide baseline performance.
• Availability: Aurora spans multiple Availability Zones (AZs) automatically and includes self-healing storage; standard engines require manual configuration for high availability.
• Scaling: Aurora auto-scales storage up to 128 TB with no downtime; standard engines require manual intervention and potential downtime.
• Backups: Aurora provides automated backups and point-in-time recovery by default; standard engines rely on manual or semi-automated backups.
• Replication: Aurora offers low-latency read replicas, including across regions; standard replication may involve higher latency and manual setup.
• Maintenance: Aurora is fully managed by AWS, including patching and updates; standard engines require manual maintenance by the user.
5. What are placement groups in AWS and what are their different types?
Placement groups help control the placement of EC2 instances to optimize performance or resilience.
They determine how instances are physically or logically organized across AWS infrastructure.
Types:
• Cluster: Packs instances close together within a single Availability Zone for low-latency, high-throughput workloads.
• Spread: Places instances on distinct underlying hardware to reduce the risk of correlated failures.
• Partition: Groups instances into logical partitions that do not share hardware, useful for large distributed workloads (e.g., Hadoop, Kafka).
6. How do Amazon S3 event notifications work and what are the possible targets?
S3 Event Notifications enable triggering actions in response to specific events in an S3
bucket, such as object creation or deletion.
Supported Events
• Object Created Events: Triggered on actions like PUT, POST, COPY, or multipart
upload completion.
• Object Deleted Events: Triggered when an object is deleted.
• Reduced Redundancy Storage (RRS) Object Lost Events: Triggered when RRS
objects are lost.
Possible Targets
1. AWS Lambda
o Trigger Lambda functions to run code for tasks such as:
▪ Resizing images.
▪ Data format conversion.
▪ Custom business logic.
2. Amazon SQS (Simple Queue Service)
o Send event messages to an SQS queue for further processing.
o Ideal for decoupled and asynchronous workflows.
3. Amazon SNS (Simple Notification Service)
o Send notifications to SNS topics, which deliver messages to subscribers.
o Suitable for broadcasting updates or sending alerts.
Key Points
• Configure S3 Event Notifications using:
o S3 Management Console
o AWS CLI
o AWS SDKs
• Ensure the target service (Lambda, SQS, SNS) has the necessary permissions.
• Ideal for automating tasks such as:
o Processing new uploads.
o Alerting on deletions.
o Real-time data pipeline triggers.
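To make this concrete, here is a minimal sketch of a notification configuration (the function ARN and the .jpg suffix filter are illustrative placeholders) that invokes a Lambda function whenever a matching object is created, in the JSON shape accepted by the PutBucketNotificationConfiguration API:
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "resize-on-upload",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:resize-images",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "suffix", "Value": ".jpg" }
          ]
        }
      }
    }
  ]
}
The Lambda function's resource policy must also allow s3.amazonaws.com to invoke it.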
7. What is Amazon ECS cluster auto scaling and how does it work with Fargate and EC2?
ECS Cluster Autoscaling (CAS) adjusts the size of ECS clusters dynamically based on
application demand.
How It Works
1. With EC2 Launch Type
o Uses Auto Scaling Groups (ASG) to scale the number of EC2 instances up or
down.
o CAS monitors the resource utilization (CPU, memory) of running tasks and
adjusts EC2 instances accordingly.
o Ideal for applications with predictable traffic or resource requirements (see the capacity provider sketch after this list).
2. With Fargate Launch Type
o Scales the number of tasks directly, without managing EC2 instances.
o Since Fargate is serverless, resources are provisioned automatically to meet
demand.
o Ideal for applications with highly dynamic and unpredictable traffic.
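As a sketch of how the EC2 launch type in item 1 is wired up (the Auto Scaling group ARN and the 80% target capacity are placeholders), a capacity provider links the cluster to an ASG and enables ECS-managed scaling, roughly in the shape accepted by the CreateCapacityProvider API:
{
  "name": "ec2-capacity-provider",
  "autoScalingGroupProvider": {
    "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:111122223333:autoScalingGroup:example:autoScalingGroupName/ecs-asg",
    "managedScaling": {
      "status": "ENABLED",
      "targetCapacity": 80,
      "minimumScalingStepSize": 1,
      "maximumScalingStepSize": 10
    },
    "managedTerminationProtection": "ENABLED"
  }
}
With managed scaling enabled, ECS adjusts the ASG's desired capacity so that cluster utilization stays near the target capacity.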
Key Features
• Automatically balances cost and performance.
• Eliminates manual intervention for scaling.
• Integrates with CloudWatch for real-time metrics and scaling policies.
Use Cases
• EC2 Launch Type: Suitable for applications requiring control over the underlying
infrastructure.
• Fargate Launch Type: Suitable for serverless containerized applications needing
cost-effective, automated scaling.
8. Describe AWS Control Tower and its use in managing multi-account AWS
environments.
A service that simplifies setting up and managing multi-account AWS environments using
AWS best practices.
Key Features
1. Guardrails
o Pre-configured governance rules to enforce compliance and security.
o Types:
▪ Preventive Guardrails: Block actions that violate rules.
▪ Detective Guardrails: Monitor and notify of non-compliance.
2. Account Vending Machine (AVM)
o Automates the creation of AWS accounts with predefined configurations.
o Ensures newly created accounts adhere to organizational standards.
3. Centralized Logging
o Centralized hub for collecting and managing CloudTrail logs and Config
rules for security, auditing, and compliance.
Use Cases
• Large organizations managing multiple AWS accounts.
• Businesses requiring centralized governance for compliance and security.
Benefits
• Simplifies the setup and governance of multi-account AWS environments.
• Enhances compliance and security using guardrails.
• Reduces operational overhead with automation.
• Provides visibility and control over all accounts.
9. What is AWS Transit Gateway and how does it simplify network management?
AWS Transit Gateway acts as a central hub that connects multiple VPCs and on-premises networks through a single gateway.
Key Features
1. Simplifies VPC Peering
o Consolidates multiple VPC peering connections into a single gateway.
o Reduces the complexity of managing multiple peering connections between
VPCs.
2. Centralized Routing
o Centralized routing hub to manage traffic between VPCs and on-premises
networks.
o Simplifies the network design by providing a single entry and exit point for
data.
3. Scalability
o Automatically scales to accommodate increasing numbers of connections
and data throughput.
o Ideal for large environments with many VPCs.
4. Cost Efficiency
o Reduces the need for creating multiple VPC peering connections and VPNs.
o Consolidates traffic management, leading to cost savings.
Use Cases
• Large-Scale Architectures: For organizations with many VPCs that need
centralized routing and network management.
• Simplified Network Management: When a streamlined approach to routing and
managing network connectivity is needed.
Benefits
• Simplifies network setup by reducing the complexity of managing multiple peering
connections.
• Provides centralized, scalable, and cost-efficient network routing.
• Ideal for large, complex environments with multiple VPCs and hybrid architectures.
10. Explain how Amazon Cognito can be used for user authentication in serverless
applications.
Amazon Cognito is a service that enables user authentication and authorization for web
and mobile applications, particularly in serverless environments.
It provides scalable and secure user authentication without managing a custom identity
service.
Key Features
1. User Pools
o Manage user registration, authentication, and profile management directly in
Amazon Cognito.
o Ideal for handling user sign-up and sign-in processes for web and mobile
applications.
2. Identity Pools
o Provides temporary AWS credentials to authenticated users.
o Enables access to AWS services like S3, DynamoDB, etc., from serverless applications (see the example role trust policy after this list).
3. Social Identity Providers Integration
o Supports third-party identity providers (e.g., Google, Facebook, Amazon).
o Allows users to sign in using their existing social media or other federated
accounts.
4. Scalability
o Automatically scales to handle large numbers of users without managing
infrastructure.
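To illustrate how identity pools hand out temporary credentials (a sketch; the identity pool ID is a placeholder), the IAM role given to authenticated users typically carries a trust policy like the following, which lets Amazon Cognito assume the role on the user's behalf:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "us-east-1:00000000-0000-0000-0000-000000000000"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
Permissions policies attached to this role then define exactly which AWS resources (S3 prefixes, DynamoDB tables, etc.) the authenticated user may access.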
Use Cases
• Serverless applications that require secure user authentication and
authorization.
• Applications needing scalable and cost-effective authentication without
managing custom identity services.
Benefits
• Simplifies authentication and authorization, allowing developers to focus on the
application logic.
• Integrates with AWS services, enabling fine-grained access control.
• Supports social login options, improving user experience.
• Scalable solution for large or growing applications.
11. How do Amazon CloudFront signed URLs work to control access to content?
Signed URLs are used to grant temporary access to private content hosted on Amazon
CloudFront.
They allow you to control who can access the content and for how long.
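Under the hood, a signed URL embeds a policy statement plus a signature created with the private key of a trusted key group. A minimal sketch of a custom policy (the URL, expiry timestamp, and optional IP range are placeholders) looks like this:
{
  "Statement": [
    {
      "Resource": "https://d111111abcdef8.cloudfront.net/premium/video.mp4",
      "Condition": {
        "DateLessThan": { "AWS:EpochTime": 1767225600 },
        "IpAddress": { "AWS:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
The policy is base64-encoded, signed, and appended to the URL as query parameters; CloudFront rejects the request once the expiry time passes or if the conditions are not met.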
Use Cases
• Protecting Premium Content: Control access to videos, documents, or
downloadable files for paid users.
• Temporary Access: Grant time-limited access to content for specific users or
purposes (e.g., file downloads, video streaming).
Benefits
• Security: Provides temporary access to private content without exposing
permanent URLs.
• Customizable: You can specify both access duration and permissions for fine-
grained control.
• Ideal for Paid Content: Ensures that only authorized users can access premium or
restricted content.
12. What are the differences between instance store and EBS volumes in AWS?
• Storage Type: Instance store is temporary, local storage; EBS (Elastic Block Store) volumes provide persistent block-level storage.
• Data Persistence: Instance store data is lost if the instance stops or terminates; EBS data persists even after the instance stops or terminates.
• Use Case: Instance store suits temporary data (e.g., cache, buffers); EBS suits long-term, durable storage (e.g., databases, file storage).
• Attach/Detach: Instance store volumes cannot be detached or moved; EBS volumes can be detached and reattached to other instances.
• Cost: Instance store is typically cheaper (its cost is included in the instance price); EBS costs depend on volume size and performance characteristics.
13. How does AWS Direct Connect enhance network performance and security for enterprises?
AWS Direct Connect provides a dedicated, private network connection between an
enterprise's on-premises data center and the AWS environment, bypassing the public
internet.
Benefits:
Low Latency
Improved Security
Cost Efficiency
High Bandwidth Connectivity
14. What is AWS Step Functions and how does it improve workflow orchestration in serverless applications?
AWS Step Functions is a serverless orchestration service that allows you to coordinate
multiple AWS services to define and manage workflows. It simplifies the process of
building and managing complex workflows for serverless applications.
Key Features
1. State Machines:
o State Machines represent workflows where you define the series of tasks,
decision points, and branching logic in your application.
o These workflows can consist of multiple steps, allowing you to manage the flow of execution between tasks (see the example state machine after this list).
2. Task Definition:
o In Step Functions, tasks represent individual operations or services to be
executed as part of the workflow.
o Tasks can involve invoking Lambda functions, starting ECS tasks, interacting
with SQS queues, or other AWS services.
3. Error Handling:
o Built-in error handling mechanisms allow you to define automatic retries,
catch errors, and take appropriate actions like invoking fallback logic or
notifying stakeholders.
4. Automatic Retries:
o Step Functions supports automatic retries of failed tasks, allowing workflows
to recover from temporary issues without manual intervention.
5. Integration with AWS Services:
o Step Functions seamlessly integrates with many AWS services such as
Lambda, ECS, SQS, and more, making it easy to connect and orchestrate
various components of your serverless architecture.
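Pulling these features together, here is a minimal sketch of a state machine definition in Amazon States Language (the state names and Lambda ARNs are placeholders): a validation task with automatic retries, a catch that routes failures to a terminal state, and a follow-on task on success:
{
  "Comment": "Sketch: validate an order, retry transient failures, then charge the customer",
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ValidateOrder",
      "Retry": [
        { "ErrorEquals": ["States.TaskFailed"], "IntervalSeconds": 5, "MaxAttempts": 2, "BackoffRate": 2.0 }
      ],
      "Catch": [
        { "ErrorEquals": ["States.ALL"], "Next": "NotifyFailure" }
      ],
      "Next": "ChargeCustomer"
    },
    "ChargeCustomer": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:111122223333:function:ChargeCustomer",
      "End": true
    },
    "NotifyFailure": {
      "Type": "Fail",
      "Error": "OrderValidationFailed",
      "Cause": "Validation did not succeed after retries"
    }
  }
}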
19. Can You Change the Instance Type of a Running EC2 Instance?
• Cannot Change on the Fly:
o You cannot change the instance type while the EC2 instance is running.
o Steps to Change Instance Type:
1. Stop the EC2 instance.
2. Change the instance type via AWS Console, CLI, or SDK.
3. Start the instance again to reflect the changes.
If you need to launch more instances, you can request an increase in the service limit via
the AWS Support Center.
Auto scaling adjusts the number of EC2 instances to meet demand, ensuring the desired
number of instances is maintained.
30. You want to allow your team to have access to Amazon S3 but restrict their ability to delete objects. How would you implement this?
To allow your team to have access to Amazon S3 while restricting their ability to delete
objects, you need to create an IAM policy that grants necessary permissions while denying
the delete action.
Here’s how to implement this:
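One possible policy (a sketch; example-bucket is a placeholder and the allowed actions can be adjusted to your team's needs) grants read and write access while explicitly denying deletes:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    },
    {
      "Effect": "Deny",
      "Action": "s3:DeleteObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}
Attach it to the team's IAM group or role; because an explicit Deny always overrides an Allow, delete requests are rejected even if another policy grants s3:DeleteObject.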
31. An external consultant needs temporary access to an EC2 instance in your account. How would you grant it?
Create an IAM role with only the required permissions and let the consultant assume it via AWS STS, so access is provided through short-lived temporary credentials rather than shared keys.
Outcome:
• The consultant gains temporary access to the EC2 instance without sharing long-term credentials.
32. You have an IAM user who accidentally deleted some important data from your S3
bucket. How can you set up a policy to prevent users from deleting objects in the
future?
Preventing Object Deletion in S3
1. Policy Options:
o IAM Policy or S3 Bucket Policy:
▪ Explicitly deny the s3:DeleteObject action to prevent users from
deleting objects.
2. Enable MFA Delete:
o MFA Delete adds an extra layer of protection, requiring Multi-Factor
Authentication (MFA) for deleting objects.
o Only privileged users with MFA will be able to delete objects.
3. Steps to Implement:
o IAM Policy:
Create a policy that denies s3:DeleteObject for the S3 bucket, and attach it
to the relevant IAM user or group.
o S3 Bucket Policy:
Alternatively, apply a bucket policy that denies the DeleteObject action.
4. Accidental Deletion Protection:
o Enable Versioning and configure S3 Object Lock to protect objects from
accidental deletion.
5. Outcome:
o These measures prevent unauthorized or accidental object deletion in the
future.
33. Your organization uses different AWS accounts for different teams. How do you
manage permissions across these accounts for a central auditing team?
1. Implement Cross-Account IAM Roles:
o In each AWS account (for different teams), create an IAM role that grants
read-only access to the required resources.
2. Specify the Central Auditing Account:
o Trust Policy: Set the central auditing team's AWS account as the trusted entity in the IAM role's trust policy. This allows the auditing team to assume the role (see the example trust policy after this list).
3. Auditing Process:
o The auditing team can assume the role from their account and access the
necessary resources in other accounts for auditing purposes.
4. Role Assumption:
o Auditing team members log into their own account, assume the cross-
account IAM role, and perform the necessary auditing tasks.
5. Outcome:
o Centralized management of auditing permissions across multiple AWS
accounts, without sharing long-term credentials.
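As a sketch of the trust policy mentioned in step 2 (the auditing account ID 111122223333 is a placeholder), the role in each team account would trust the central auditing account like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}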
34. You need to allow an IAM user to access both EC2 and S3, but only from a specific
IP address range. How can you enforce this restriction?
Create an IAM Policy:
• Define a policy that grants access to both EC2 and S3 actions.
Use Condition Key for IP Restriction:
• In the policy, use the Condition element with the IpAddress condition key to
specify the allowed IP address range.
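For example, a sketch of such a policy (the 203.0.113.0/24 range is a placeholder, and the broad ec2:* and s3:* actions can be narrowed) could be:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*", "s3:*"],
      "Resource": "*",
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}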
Apply the Policy:
• Attach the policy to the IAM user.
• This policy will ensure that actions on EC2 and S3 are only allowed if the request
originates from the specified IP address range.
Result:
• The IAM user can only perform EC2 and S3 actions from the allowed IP address
range. Requests from other IP addresses will be denied.
35. How can you ensure that IAM users are forced to rotate their access keys
regularly?
1. Use IAM Access Analyzer:
o IAM Access Analyzer helps identify resources that are not properly secured.
o Can be used to audit IAM access key usage and detect stale or unused keys.
2. Configure a CloudWatch Event Rule:
o Set up a CloudWatch Event to trigger when access keys are older than a
specified period (e.g., 90 days).
o This rule can monitor the age of access keys for all IAM users.
3. Lambda Function for Automation:
o Create a Lambda function to:
▪ Disable or delete old access keys automatically.
▪ Notify IAM users about the need to rotate their keys.
4. Notification to Users:
o The Lambda function sends an email notification to users instructing them
to generate new keys.
5. Key Rotation Standard:
o Enforce a standard (e.g., every 90 days) for key rotation to ensure security
best practices are followed.
6. Result:
o Automated key rotation and notifications, ensuring IAM users regularly
update their access keys.
36. How can you restrict an IAM user to accessing only a specific DynamoDB table and
nothing else?
1. Create an IAM Policy:
o Create an IAM policy that allows only specific actions on DynamoDB, such
as:
▪ GetItem
▪ PutItem
▪ Scan
▪ Query
2. Specify Resource ARN:
o In the policy's resource section, specify the ARN (Amazon Resource Name)
of the specific DynamoDB table you want the user to access.
o This restricts the user to accessing only that particular table.
3. Example Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:Scan",
"dynamodb:Query"
],
"Resource": "arn:aws:dynamodb:region:account-id:table/example"
}
]
}
o Replace region, account-id, and example with your actual DynamoDB table's
details.
4. Result:
o This policy grants the IAM user access to only the specified DynamoDB table
and only the allowed actions, ensuring restricted access to other resources.
37. You need to track which IAM user made a specific API call in AWS. How would you
do this?
1. Use AWS CloudTrail:
o AWS CloudTrail is a service that tracks and logs all API calls made by IAM
users and other AWS resources.
2. CloudTrail Logs:
o The CloudTrail logs provide detailed information about:
▪ Which IAM user made the API call.
▪ What action was performed (e.g., creating or deleting a bucket).
▪ When the action was performed (timestamp).
▪ Where the action was performed from (source IP, region, etc.).
3. Auditing with CloudTrail:
o CloudTrail helps in auditing by capturing all the API activity in your AWS
account, including user actions, allowing you to track who did what, when,
and from where.
4. Example Use Cases:
o Identifying changes made by users, such as resource creation or deletion.
o Monitoring access patterns and troubleshooting security incidents.
5. Accessing CloudTrail Logs:
o You can view and analyze CloudTrail logs using the CloudTrail Console,
Amazon CloudWatch Logs, or export the logs to an S3 bucket for further
processing.
By enabling CloudTrail, you can effectively track IAM user actions across your AWS
environment for compliance and security auditing.
38. How do you prevent IAM users from launching EC2 instances outside a particular
instance type (e.g., t2.micro)?
1. Create an IAM Policy:
o Define a policy that allows users to perform the RunInstances action but with
a restriction on the instance type.
2. Use Condition Keys:
o In the IAM policy, use the Condition element with a condition key to restrict
the instance type to a specific value (e.g., t2.micro).
o Example policy for restricting to t2.micro instance type:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:RunInstances",
"Resource": "*",
"Condition": {
"StringEquals": {
"ec2:InstanceType": "t2.micro"
}
}
}
]
}
3. Policy Breakdown:
o Action: ec2:RunInstances allows launching EC2 instances.
o Condition: Restricts the ec2:InstanceType to t2.micro only.
o If users attempt to launch instances of other types, the action will be denied.
4. Attach the Policy:
o Attach this policy to the IAM user, group, or role that you want to restrict.
Outcome:
• Users will only be able to launch EC2 instances of the specified type (t2.micro) and
will be denied if they try to use any other instance type.
39. You want to enforce MFA for IAM users when accessing the AWS Management
Console. How do you implement this?
1. Create an IAM Policy to Enforce MFA:
o Define a policy that denies all actions if MFA is not enabled. This ensures that
IAM users cannot perform any actions unless MFA is enabled for their
account.
2. Policy Example:
o Here's an example IAM policy that denies all actions if MFA is not enabled:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Deny",
"Action": "*",
"Resource": "*",
"Condition": {
"BoolIfExists": {
"aws:MultiFactorAuthPresent": "false"
}
}
}
]
}
3. Policy Breakdown:
o Action: Denies all actions (Action: "*") on all resources (Resource: "*").
o Condition: The aws:MultiFactorAuthPresent condition checks if MFA is
enabled for the user. If MFA is not present (false), the actions are denied.
4. Attach the Policy:
o Attach this policy to the IAM users or groups that require MFA enforcement.
Outcome:
• IAM users will be required to enable MFA for their accounts. They will not be able to
perform any actions in the AWS Management Console unless MFA is configured and
used during login.
40. How can you automate the process of revoking all access for an IAM user when
they leave the company?
1. Use AWS Lambda and CloudWatch Events:
o Automate the process by using AWS Lambda functions triggered by specific
events.
2. Steps Involved:
o CloudWatch Event: Set up a CloudWatch event to listen for a termination
event (e.g., a user leaving the company).
o Trigger Lambda Function: When the termination event occurs, CloudWatch
triggers a Lambda function.
o Lambda Function Actions:
▪ Disable IAM User Account: The Lambda function will disable the IAM
user account to prevent further access.
▪ Remove Access Keys: It will remove all access keys associated with
the user to ensure no API access is available.
▪ Detach IAM Policies: Detach all policies assigned to the IAM user to
revoke permissions.
3. Process Overview:
o When a user is terminated or leaves the company, the CloudWatch event
listens for this change, triggering the Lambda function to revoke all resources
associated with that user.
4. Outcome:
o This automation ensures that when an IAM user leaves the company, all their
AWS access (including keys, roles, and permissions) is revoked immediately,
reducing the risk of unauthorized access.
Example Lambda Function Flow:
1. CloudWatch detects the termination event.
2. Lambda is triggered and performs the following actions:
o Disables the IAM user.
o Deletes all the user's access keys.
o Detaches policies from the user.
This automation ensures timely and secure revocation of access for users who leave the
company.
41. How would you allow an IAM user to manage EC2 instances only in specific
regions?
Define IAM Policy:
• Create an IAM policy that allows EC2 actions but restricts them to specific regions.
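A sketch of such a policy, using the aws:RequestedRegion global condition key to limit EC2 actions to us-east-1 (the region used in the outcome below; adjust the region and action lists as needed):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "us-east-1" }
      }
    }
  ]
}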
Outcome:
• The IAM user will only be allowed to manage EC2 instances in the us-east-1 region.
Any attempt to perform EC2 actions in other regions will be denied.
Implementation:
• Attach this policy to the IAM user or group to enforce region-specific access control
for EC2.
42. How would you restrict access to specific tags on an EC2 instance?
1. Define IAM Policy:
o Use an IAM policy to restrict access based on the tags attached to EC2
instances.
2. Use Condition for Tag-based Access Control:
o In the IAM policy, use the Condition block with the ec2:ResourceTag
condition key to restrict access based on specific tags.
3. Example IAM Policy:
o Action: Allow or deny EC2 actions (e.g., ec2:StartInstances,
ec2:StopInstances).
o Condition: Specify the required tag key-value pair (e.g., "Department":
"Finance").
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "ec2:DescribeInstances",
"Resource": "*",
"Condition": {
"StringEquals": {
"ec2:ResourceTag/Department": "Finance"
}
}
}
]
}
4. Outcome:
o This policy allows the IAM user to only perform actions (like
DescribeInstances) on EC2 instances that have a tag with Department:
Finance.
o If the EC2 instance does not have the specified tag, the IAM user will not be
able to perform the action on that instance.
5. Implementation:
o Attach the policy to the relevant IAM user, group, or role to enforce access
control based on EC2 instance tags.
43. You need to ensure that only IAM users with a certain tag (e.g., "Department") can
access a particular S3 bucket. How would you do that?
Define IAM Policy:
• Create an IAM policy that restricts access to an S3 bucket based on a specific user
tag (e.g., "Department: Finance").
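A sketch of such a policy (example-bucket is a placeholder) uses the aws:PrincipalTag global condition key so that only principals tagged Department=Finance get access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "StringEquals": { "aws:PrincipalTag/Department": "Finance" }
      }
    }
  ]
}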
Implementation:
• Attach the policy to the relevant IAM user, group, or role.
• Ensure that IAM users are tagged with the correct Department value (e.g., Finance)
for this policy to be effective.
44. How do you allow an IAM user to assume multiple roles in different AWS accounts?
Create IAM Policy for AssumeRole Action:
• Create an IAM policy for the user that allows them to assume roles using the AWS
Security Token Service (STS).
• The policy will specify the allowed roles in other AWS accounts.
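A sketch of the user's policy (the account IDs and role names are placeholders) listing the roles they may assume in other accounts:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::111122223333:role/TeamARole",
        "arn:aws:iam::444455556666:role/TeamBRole"
      ]
    }
  ]
}
Each of those roles must also name the user's account (or the user) as a trusted principal in its own trust policy, as noted in the summary below.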
Summary:
• IAM User Policy: Allows the user to call sts:AssumeRole for roles in other AWS
accounts.
• Role Trust Policy: Specifies the user's account as a trusted entity to assume the
role.
• Outcome: The IAM user can assume multiple roles across different AWS accounts
and work with the resources in those accounts using temporary credentials.