Chapter 13-1
This chapter covers the following official AWS Certified SysOps Administrator - Associate (SOA-C02) exam
domains:
(For more information on the official AWS Certified SysOps Administrator - Associate [SOA-C02] exam topics, see the
Introduction.)
VPC Flow Logs
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. What configuration tasks must be completed to enable flow logs on a VPC and view them in CloudWatch logs?
Answers
1. Answer: Create an IAM policy and role, a CloudWatch log group, and a VPC flow log.
VPC flow logs capture information about the IP traffic flowing in or out of network interfaces in a VPC. Flow logs can be
created for an entire VPC, a subnet, or an individual interface, and they can capture all traffic, accepted traffic only, or
rejected traffic only. They are useful for diagnosing security group and network ACL rules. VPC flow logs do not have a
performance impact. The captured data can be published to CloudWatch Logs or an S3 bucket. Flow logs do not provide a
real-time stream of traffic; records are aggregated and published at 10-minute intervals by default, but a 1-minute
aggregation interval can be configured for faster delivery.
The first step required in configuring VPC flow logs is to create the appropriate IAM role. This role must have the
permissions to publish VPC flow logs to CloudWatch logs. Figure 13.1 shows a sample IAM policy. There are prewritten
policies that you can copy from the AWS documentation.
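The policy in Figure 13.1 grants the role permission to write to CloudWatch Logs. A sketch of such a policy follows; the action list here is illustrative and should be checked against the current AWS documentation:

```python
import json

# Illustrative flow log permissions policy in the spirit of Figure 13.1.
# Copy the current version from the AWS documentation rather than this sketch.
flow_logs_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "logs:DescribeLogGroups",
                "logs:DescribeLogStreams",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(flow_logs_policy, indent=2))
```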
FIGURE 13.1 Flow log IAM policy
Now that the policy has been created, you must create an IAM role that includes that policy, as shown in Figure 13.2. You
must also give the new role a trust relationship with the VPC Flow Logs service so that the service can assume the role.
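That trust relationship can be expressed as a policy like the following sketch; the service principal for VPC flow logs is vpc-flow-logs.amazonaws.com:

```python
import json

# Trust policy allowing the VPC Flow Logs service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "vpc-flow-logs.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```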
The next step is to create a CloudWatch log group. In Figure 13.3, you can see the configuration of a log group.
Now you are finally ready to enable flow logs. In Figure 13.4, a new flow log is created on a VPC. This flow log will capture
all traffic information and send it to the CloudWatch log group shown in Figure 13.3. You could also send the records to an
S3 bucket. The flow log has the necessary permissions to do this based on the flow log role shown in Figure 13.2. This
flow log tracks all interfaces in the VPC; to target a specific subnet or interface, create the flow log on that object instead.
In Figure 13.5, you can see some of the captured flow logs. In this case, they are filtered based on a specific IP address
(74.76.58.81) to reduce the amount of information displayed. Notice that the traffic has been accepted by the security
group.
Figure 13.6 shows traffic that has been blocked by the security group. In this case, 74.76.58.81 is the source IP,
10.1.101.112 is the destination IP, and port 22 is the destination port.
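Flow log records follow a documented space-separated layout. A minimal parser, fed an invented record in the default (version 2) field order, looks like this:

```python
# Field names in the default (version 2) VPC flow log record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_log(record: str) -> dict:
    """Map a space-separated flow log record onto its field names."""
    return dict(zip(FIELDS, record.split()))

# Invented record: a rejected SSH attempt (destination port 22, protocol 6 = TCP).
rejected = parse_flow_log(
    "2 123456789012 eni-0a1b2c3d 74.76.58.81 10.1.101.112 "
    "47060 22 6 1 40 1620000000 1620000060 REJECT OK"
)
print(rejected["action"], rejected["dstaddr"], rejected["dstport"])
```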
ExamAlert
You must be capable of reading and interpreting flow logs and identifying source and destination IP addresses, ports,
and whether the traffic was blocked or allowed. Also, you need to understand that traffic can be blocked by a network
access control list, and that the NACL is evaluated before the security group on incoming traffic.
ELB Access Logs
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. What security configuration task must be completed for ELB access logs to function?
2. Are ELB access logs useful for troubleshooting issues such as spikes in request counts and Layer 7 response codes?
Answers
1. Answer: The logs are sent to an S3 bucket. The S3 bucket policy must be configured to grant the load balancer
permission to write the access logs.
2. Answer: Yes, ELB access logs capture request details and server responses.
ELB access logs are an optional feature that can be used to troubleshoot traffic patterns and issues with traffic as it hits the
ELB. ELB access logs capture details of requests sent to your load balancer such as the time of the request, the client IP,
latency, and server responses. Access logs are stored in an S3 bucket. Log files are published every five minutes, and
multiple logs can be published for the same five-minute period.
The S3 bucket must be in the same region as the ELB. The bucket policy must be configured to allow access logs to write
to the bucket. You can use tools such as Amazon Athena, Loggly, Splunk, or Sumo Logic to analyze the contents of ELB
access logs.
ELB access logs also include HTTP response codes from the target. If a connection to the target could not be established,
the target response code field is set to a dash (-). Figure 13.7 shows how to configure an S3 bucket as a destination for
ELB access logs.
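ALB access log entries are space-separated with some quoted fields. A short sketch, using a trimmed, invented log line, shows how the client IP and status codes can be extracted; real entries contain additional trailing fields:

```python
import shlex

# Trimmed, invented ALB access log entry for illustration only.
line = (
    'http 2024-05-01T12:00:00.000000Z app/my-alb/50dc6c495c0c9188 '
    '192.0.2.10:4321 10.0.0.5:80 0.000 0.001 0.000 200 200 34 366 '
    '"GET http://www.example.com:80/ HTTP/1.1" "curl/8.0.1" - -'
)

fields = shlex.split(line)           # honors the quoted request and user-agent fields
client_ip = fields[3].split(":")[0]  # client:port -> client IP
elb_status = fields[8]               # response code returned by the load balancer
target_status = fields[9]            # response code returned by the target ("-" if none)
print(client_ip, elb_status, target_status)
```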
ExamAlert
The S3 bucket that is used for ELB access logs must be in the same region as the load balancer and must have a bucket
policy that grants write permissions for ELB access log delivery.
AWS WAF Logs
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. What three services must be configured to perform comprehensive AWS WAF web ACL logging?
2. What is the purpose of Kinesis Data Firehose when configuring AWS WAF web ACL logging?
Answers
1. Answer: The AWS WAF web ACL, Kinesis Data Firehose, and S3.
2. Answer: The logs are received by Kinesis Data Firehose, which can trim the logs and reduce the amount of data that
gets stored in S3.
AWS WAF (Web Application Firewall) protects your resources and stops malicious traffic. Rules can be created based
on conditions like HTTP headers, HTTP body, URI strings, SQL injection, and cross-site scripting. You can enable logging
to capture information such as the time and nature of requests and the web ACL rule that each request matched.
The logs are received by Kinesis Data Firehose, which can be used to trim the logs and reduce the amount of data that
gets stored. The logs are commonly stored in S3 after being processed by Kinesis. The Kinesis delivery stream can easily
be created using a CloudFormation template that is available on the AWS website.
To configure AWS WAF ACL comprehensive logging, the first step is to create the S3 bucket that the data will be stored in.
You must configure an access policy to allow Kinesis Data Firehose to write to the S3 bucket. The next step is to create a
Kinesis Data Firehose and give it the necessary IAM role to write to the S3 bucket. Finally, you must associate the AWS
WAF with the Kinesis Data Firehose and enable logging.
These logs can be helpful when determining what types of rules should be created or modified. A web ACL can allow or
deny traffic based on the source IP address, country of origin of the request, string match or regular expression (regex)
match, or the detection of malicious SQL code or scripting. For example, a request could include a header with some
identifying information, such as the name of the department. A string or regex match could be used to identify that traffic,
and the logs could be used to determine the volume of matching requests.
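WAF logs are delivered as JSON records. The sketch below uses an invented, heavily trimmed record to show how the action, client IP, and a custom header could be pulled out for this kind of analysis:

```python
import json

# Invented, heavily trimmed WAF log record; real records carry many more fields.
record = json.loads("""
{
  "timestamp": 1620000000000,
  "webaclId": "example-web-acl",
  "action": "BLOCK",
  "httpRequest": {
    "clientIp": "198.51.100.7",
    "country": "US",
    "uri": "/login",
    "headers": [{"name": "x-department", "value": "finance"}]
  }
}
""")

# Flatten the name/value header list into a dict for easy lookup.
headers = {h["name"]: h["value"] for h in record["httpRequest"]["headers"]}
print(record["action"], record["httpRequest"]["clientIp"], headers.get("x-department"))
```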
ExamAlert
A web ACL can allow or deny traffic based on the source IP address, country of origin of the request, string match or
regular expression (regex) match, or the detection of malicious SQL code or scripting. You can also use the logs that are
generated to examine the number of requests, the nature of those requests, and where they originate from.
CloudFront Logs
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. You need to troubleshoot the success of connections using HTTP response codes. Which logs should you review?
2. When you enable logging on a CloudFront distribution, what configuration changes are required on the destination S3
bucket?
Answers
1. Answer: You should review the CloudFront and Application Load Balancer logs.
2. Answer: The destination bucket ACL is automatically updated to allow log delivery.
CloudFront is a content delivery network service that speeds up delivery of your static and dynamic web content. You can
enable standard logs on a CloudFront distribution and deliver them to an S3 bucket. Real-time logs are also possible and
enable you to view request information within seconds of the requests occurring.
As incoming requests reach the CloudFront edge locations, data is captured about the request in a log file that is specific to
a single distribution. These log files are saved to S3 periodically. If there are no requests for an hour, a log file is not
generated. When you configure logging on a distribution, the destination bucket ACL is automatically updated to allow log
delivery.
You should not choose a bucket that is an S3 origin to contain these logs. Also, buckets in the following regions are not
currently supported as CloudFront access log destinations: af-south-1, ap-east-1, eu-south-1, and me-south-1.
More than 30 fields are included in each log entry. They contain the date and time of the request and the edge location
where it was received. Other fields include the source IP, protocol, and port.
Much like ALB access logs, the CloudFront logs also include the HTTP status code of the server’s response. This is a
critical tool for analyzing the success of requests.
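Because CloudFront standard log files are tab-separated (the field order is given by the "#Fields:" header in each file), individual entries are easy to pick apart. The following sketch maps a trimmed, invented entry onto the first few documented field names:

```python
# First few field names from the CloudFront standard log format.
fields = ["date", "time", "x_edge_location", "sc_bytes", "c_ip",
          "cs_method", "cs_host", "cs_uri_stem", "sc_status"]

# Trimmed, invented log entry; real entries are longer.
entry = ("2024-05-01\t12:00:01\tIAD89-C1\t1045\t192.0.2.10\tGET\t"
         "d111111abcdef8.cloudfront.net\t/index.html\t200")

row = dict(zip(fields, entry.split("\t")))
print(row["x_edge_location"], row["c_ip"], row["sc_status"])
```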
You can also use Athena to perform overall analysis of CloudFront logs.
ExamAlert
You can use either CloudFront or ELB logs to analyze HTTP response codes and determine if requests were successfully
served.
CloudFront Caching and Metrics
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. Name one benefit and one drawback of increasing the TTL on objects in a CloudFront distribution.
2. What does the total error rate metric for a CloudFront distribution indicate?
Answers
1. Answer: Increasing the TTL means CloudFront will reach out to the origin for updated content less often, resulting in
fewer cache misses. The drawback is that your users are more likely to get stale data from the cache.
2. Answer: The total error rate metric indicates the percentage of requests to the origin that result in a 400-type or 500-type
response.
Improving Cache Hit Rate
The two major benefits of CloudFront are reducing latency experienced by requestors and reducing the hit count on your
origin (S3, web server, and so on). As more objects are served from CloudFront, this reduces the workload that the origin
must perform, and as a result, cost savings can be achieved.
The percentage of requests that are served by CloudFront (without pulling content from the origin) is called the cache hit
ratio. You can observe the cache hit ratio in the CloudFront console. Increasing the TTL is one way to improve this ratio
because CloudFront will reach out to the origin for updated content less often. Of course, this means that your users are
more likely to get stale data from the cache.
CloudFront serves the cached version of a file from an edge location until the file expires. After a file expires, CloudFront
forwards the request to the origin server. CloudFront may still have the latest version, in which case the origin returns the
status code 304 Not Modified. If a newer version exists in the origin, the origin returns the status code 200 OK and the
latest version of the file.
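The expiry-and-revalidation behavior described above can be modeled as a toy function; the ETag comparison here is a stand-in for CloudFront's actual conditional request to the origin:

```python
# Toy model of revalidation after a cached object expires: an unchanged
# object draws 304 Not Modified, a changed one draws 200 OK plus the new body.
def revalidate(cached_etag: str, origin_etag: str, origin_body: str):
    if cached_etag == origin_etag:
        return 304, None          # cached copy is still the latest version
    return 200, origin_body       # origin returns the newer version

print(revalidate("abc123", "abc123", "v1"))
print(revalidate("abc123", "def456", "v2"))
```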
You can also improve caching based on cookie values. Instead of forwarding all cookies, configure specific cookies for
CloudFront to forward to your origin. For example, assume that there are two cookies in a request, and each cookie has
two possible values. In this case, CloudFront forwards four requests to the origin, and all four responses are cached in
CloudFront, even if some of them are identical.
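The combinatorial effect of forwarded cookies on the cache can be illustrated directly (cookie names and values here are invented):

```python
from itertools import product

# Two forwarded cookies with two possible values each produce four distinct
# cache keys, matching the four origin requests described above.
cookies = {"session_type": ["guest", "member"], "theme": ["light", "dark"]}

variants = list(product(*cookies.values()))
print(len(variants), variants)
```

Forwarding only the cookies your origin actually varies on keeps this combination count, and therefore the number of cache misses, small.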
Similarly, you can configure CloudFront to cache based on only a limited set of specified headers instead of forwarding and
caching based on all headers.
When a cache miss occurs, the content must be retrieved from the origin. To the origin, this appears as a web request. The
origin may return an HTTP error code (4xx or 5xx status codes). You can monitor, alarm, and receive notifications that
include these HTTP response codes. CloudFront publishes six metrics with a one-minute granularity into Amazon
CloudWatch:
Requests: Total number of viewer requests for all HTTP methods.
Bytes Downloaded: Bytes downloaded by viewers for GET, HEAD, and OPTIONS requests.
Bytes Uploaded: Bytes uploaded to the origin using POST and PUT requests.
4xx Error Rate: Percentage of requests that result in a 400-type response.
5xx Error Rate: Percentage of requests that result in a 500-type response.
Total Error Rate: Percentage of requests that result in a 400-type or 500-type response.
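The Total Error Rate metric is simple arithmetic over the period's request counts, as this sketch (with invented numbers) shows:

```python
# Total Error Rate: the share of all requests in the period that drew a
# 4xx or 5xx response, expressed as a percentage.
def total_error_rate(errors_4xx: int, errors_5xx: int, requests: int) -> float:
    return 100.0 * (errors_4xx + errors_5xx) / requests

print(total_error_rate(30, 10, 2000))  # 2.0 percent
```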
ExamAlert
You can monitor HTTP response codes that are returned from the origin using the 4xx Error Rate, 5xx Error Rate, and Total
Error Rate metrics.
Direct Connect and VPN Troubleshooting
This section covers the following objective of Domain 5 (Networking and Content Delivery) from the official AWS Certified
SysOps Administrator - Associate (SOA-C02) exam guide:
CramSaver
If you can correctly answer these questions before going through this section, save time by skimming the Exam Alerts in
this section and then completing the Cram Quiz at the end of the section.
1. What task must be completed by the colocation provider for Direct Connect physical connectivity to be established?
2. A VPN connection establishment is failing during phase 1. What are some possible causes of this issue?
Answers
1. Answer: A cross-connect must be made between your device and the Direct Connect hardware.
2. Answer: IKE negotiation may fail because the customer gateway device does not meet the AWS VPN requirements.
Also, a misconfigured preshared key prevents phase 1 from completing.
Direct Connect issues can be difficult to diagnose. The ideal approach is to use the OSI model to help isolate the potential
issues and the underlying cause.
Layer 1 (physical) issues occur when you are having difficulty establishing physical connectivity to an AWS Direct Connect
device. When a Direct Connect circuit is established, a cross-connect must be made between your port and the Direct
Connect device. You can ask your colocation provider to validate that the cross-connect is properly established. You can
also troubleshoot that all devices under your control are powered on and that the fiber-optic connection is properly
connected.
Layer 2 (data link) issues occur when the physical connection is working properly, but the Direct Connect virtual interface
(VIF) does not come up. These problems are typically the result of a misconfigured VLAN, improperly configured VLAN
802.1Q tagging, or ARP issues. Direct Connect is technically a Layer 2 offering. If Layer 2 is working properly, you can
assume that any Layer 3 or Layer 4 issues are related to other configurations and not Direct Connect.
Layer 3 (network) and Layer 4 (transport) issues are routing related. Make sure that BGP is properly configured with the
correct peer IPs and ASNs.
If you are establishing a new IPsec VPN, the first phase is Internet Key Exchange (IKE). If this phase fails, make sure that
the customer gateway meets the AWS VPN requirements. IKEv1 and IKEv2 are supported, but other versions are not.
Make sure that both ends of the VPN are configured with the correct preshared key.
Phase 2 is IPsec tunnel establishment. You can examine IPsec debug logs to isolate the exact cause of a phase 2
failure. Verify that no firewalls are blocking Encapsulating Security Payload (ESP, IP protocol 50) or other IPsec
traffic. Phase 2 should use the SHA-1 hashing algorithm and AES-128 as the encryption algorithm. If you are using policy-
based routing (not BGP), make sure that you have properly identified the networks in both locations.
ExamAlert
Phase 1 of VPN establishment is IKE and relies on supported hardware and a correct preshared key. Phase 2 of VPN
establishment is the IPsec tunnel and relies on the correct hashing and encryption algorithms.
What Next?
If you want more practice on this chapter’s exam objectives before you move on, remember that you can access all of the
Cram Quiz questions on the Pearson Test Prep software online. You can also create a custom exam by objective with the
Online Practice Test. Note any objective you struggle with and go to that objective’s material in this chapter.