Elastic Load Balancing FAQs

General

Elastic Load Balancing (ELB) supports four types of load balancers; you can select the appropriate load balancer based on your application needs. If you need to load balance HTTP requests, we recommend the Application Load Balancer (ALB). For load balancing of network/transport protocols (layer 4: TCP, UDP) and for applications that need extreme performance and low latency, we recommend the Network Load Balancer. If your application is built within the Amazon Elastic Compute Cloud (Amazon EC2) Classic network, you should use the Classic Load Balancer. If you need to deploy and run third-party virtual appliances, you can use the Gateway Load Balancer.

Yes, you can privately access Elastic Load Balancing APIs from your Amazon Virtual Private Cloud (VPC) by creating VPC endpoints. With VPC endpoints, the routing between the VPC and Elastic Load Balancing APIs is handled by the AWS network without the need for an Internet gateway, network address translation (NAT) gateway, or virtual private network (VPN) connection. The latest generation of VPC Endpoints used by Elastic Load Balancing are powered by AWS PrivateLink, an AWS technology enabling the private connectivity between AWS services using Elastic Network Interfaces (ENI) with private IPs in your VPCs. To learn more about AWS PrivateLink, visit the AWS PrivateLink documentation.
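For example, a minimal boto3 sketch of creating such an interface endpoint. The VPC, subnet, and security group IDs are placeholders, and the service name shown assumes the us-east-1 Region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint for the Elastic Load Balancing API,
# powered by AWS PrivateLink (placeholder IDs).
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.elasticloadbalancing",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```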

Yes, Elastic Load Balancing guarantees a monthly availability of at least 99.99% for your load balancers (Classic, Application, or Network). To learn more about the SLA and to find out whether you qualify for a credit, see the Elastic Load Balancing Service Level Agreement.

Application Load Balancer

An Application Load Balancer supports targets with any operating system currently supported by the Amazon EC2 service.

An Application Load Balancer supports load balancing of applications using HTTP and HTTPS (Secure HTTP) protocols.

Yes. HTTP/2 support is enabled natively on an Application Load Balancer. Clients supporting HTTP/2 can connect to an Application Load Balancer over TLS.

You can forward traffic from your Network Load Balancer, which provides support for PrivateLink and a static IP address per Availability Zone, to your Application Load Balancer. Create an Application Load Balancer-type target group, register your Application Load Balancer to it, and configure your Network Load Balancer to forward traffic to the Application Load Balancer-type target group.
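A minimal boto3 sketch of this setup, using placeholder ARNs and IDs (the target group uses the ALB target type over TCP):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an Application Load Balancer-type target group (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name="my-alb-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="alb",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the Application Load Balancer (by ARN) as the target.
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{
        "Id": "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
        "Port": 80,
    }],
)

# Point a TCP listener on the Network Load Balancer at the new target group.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abcdef1234567890",
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```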

You can perform load balancing for the following TCP ports: 1-65535

Yes. WebSockets and Secure WebSockets support is available natively and ready for use on an Application Load Balancer.

Yes. Request tracing is enabled by default on your Application Load Balancer.

While there is some overlap, there is no feature parity between the two types of load balancers. Application Load Balancers are the foundation of our application layer load-balancing platform for the future.

Yes.

Yes.

No. Application Load Balancers require a new set of application programming interfaces (APIs).

The ELB Console will allow you to manage Application and Classic Load Balancers from the same interface. If you are using the command-line interface (CLI) or a software development kit (SDK), you will use a different ‘service’ for Application Load Balancers. For example, in the CLI you will describe your Classic Load Balancers using `aws elb describe-load-balancers` and your Application Load Balancers using `aws elbv2 describe-load-balancers`.

No, you cannot convert one load balancer type into another.

Yes. You can migrate to Application Load Balancer from Classic Load Balancer using one of the options listed in this document.

No. If you need Layer-4 features, you should use Network Load Balancer.

Yes, you can add listeners for HTTP port 80 and HTTPS port 443 to a single Application Load Balancer.

Yes. To receive a history of Application Load Balancing API calls made on your account, use AWS CloudTrail.

Yes, you can terminate HTTPS connection on the Application Load Balancer. You must install a Secure Sockets Layer (SSL) certificate on your load balancer. The load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to targets.
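For example, a boto3 sketch of adding an HTTPS listener that terminates TLS with an ACM certificate and forwards to a target group; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Add an HTTPS listener on port 443 using an ACM certificate (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/abcdef1234567890",
    }],
)
```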

You can either use AWS Certificate Manager (ACM) to provision an SSL/TLS certificate, or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a CA, and then uploading the certificate using either ACM or the AWS Identity and Access Management (IAM) service.

An Application Load Balancer is integrated with AWS Certificate Manager (ACM). Integration with ACM simplifies binding a certificate to the load balancer, thereby streamlining the entire SSL offload process. Purchasing, uploading, and renewing SSL/TLS certificates is otherwise a complex, manual, and time-consuming process. With ACM integration, this whole process is reduced to requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision with the load balancer.

No, only encryption is supported to the back-ends with an Application Load Balancer.

SNI is automatically enabled when you associate more than one TLS certificate with the same secure listener on a load balancer. Similarly, SNI mode for a secure listener is automatically disabled when you have only one certificate associated to a secure listener.

Yes, you can associate multiple certificates for the same domain to a secure listener. For example, you can associate:

  • ECDSA and RSA certificates
  • SSL/TLS certificates with different key sizes (for example, 2K and 4K)
  • Single-Domain, Multi-Domain (SAN) and Wildcard certificates
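
As an illustration, associating an additional certificate with an existing secure listener (which automatically enables SNI, as described above) might look like the following boto3 sketch; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Associate a second ACM certificate with an existing HTTPS listener.
# Having more than one certificate on the listener enables SNI automatically.
elbv2.add_listener_certificates(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/fedcba0987654321",
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/second-cert"}],
)
```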

Yes, IPv6 is supported with an Application Load Balancer.

You can configure rules for each of the listeners on the load balancer. The rules include conditions and corresponding actions if the conditions are satisfied. The supported conditions are Host header, path, HTTP headers, methods, query parameters, and source IP classless inter-domain routing (CIDR). The supported actions are redirect, fixed response, authenticate, and forward. Once you have set this up, the load balancer will use the rules to determine how a particular HTTP request should be routed. You can use multiple conditions and actions in a rule, and each condition can specify a match on multiple values.
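As an illustration, a boto3 sketch that adds a rule combining a path condition and an HTTP header condition with a forward action; the listener and target group ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route requests whose path starts with /img/ AND whose User-Agent matches
# *Mobile* to a dedicated target group (placeholder ARNs).
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/1234567890abcdef/fedcba0987654321",
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "Values": ["/img/*"]},
        {"Field": "http-header",
         "HttpHeaderConfig": {"HttpHeaderName": "User-Agent", "Values": ["*Mobile*"]}},
    ],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/images/abcdef1234567890",
    }],
)
```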

Your AWS account has default limits (quotas) for Application Load Balancers. See the Application Load Balancer limits documentation for the current values.

You can integrate your Application Load Balancer with AWS Web Application Firewall (WAF), a web application firewall that helps protect web applications from attacks by allowing you to configure rules based on IP addresses, HTTP headers, and custom uniform resource identifier (URI) strings. Using these rules, AWS WAF can block, allow, or monitor (count) web requests for your web application. Please see AWS WAF developer guide for more information.

You can use any IP address from the load balancer’s VPC CIDR for targets within load balancer’s VPC, and any IP address from RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer’s VPC (for example, targets in Peered VPC, Amazon EC2 Classic, and on-premises locations reachable over AWS Direct Connect or VPN connection).
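A minimal boto3 sketch of registering both an in-VPC and an on-premises IP target with an IP-type target group; the IDs and addresses are placeholders, and setting AvailabilityZone to "all" for the out-of-VPC target reflects our understanding of the requirement for such targets:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create a target group that addresses targets by IP (placeholder VPC ID).
tg = elbv2.create_target_group(
    Name="hybrid-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="ip",
)

# Register one target inside the load balancer's VPC and one on-premises
# target reachable over Direct Connect or VPN.
elbv2.register_targets(
    TargetGroupArn=tg["TargetGroups"][0]["TargetGroupArn"],
    Targets=[
        {"Id": "10.0.1.25", "Port": 80},
        {"Id": "10.10.5.12", "Port": 80, "AvailabilityZone": "all"},
    ],
)
```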

There are various ways to achieve hybrid load balancing. If an application runs on targets distributed between a VPC and an on-premises location, you can add them to the same target group using their IP addresses. To migrate to AWS without impacting your application, gradually add VPC targets to the target group and remove on-premises targets from the target group. 

If you have two different applications such that the targets for one application are in a VPC and the targets for the other application are in an on-premises location, you can put the VPC targets in one target group and the on-premises targets in another target group, and use content-based routing to route traffic to each target group. You can also use separate load balancers for VPC and on-premises targets and use DNS weighting to achieve weighted load balancing between VPC and on-premises targets.

You cannot load balance to EC2-Classic instances by registering their instance IDs as targets. However, if you link these EC2-Classic instances to the load balancer's VPC using ClassicLink and use the private IPs of these EC2-Classic instances as targets, then you can load balance to them. If you are using EC2-Classic instances today with a Classic Load Balancer, you can easily migrate to an Application Load Balancer.

Cross-zone load balancing is already enabled by default in Application Load Balancer.

You should use authentication through Amazon Cognito if:

  • You want to provide flexibility to your users to authenticate via social network identities (Google, Facebook, and Amazon) or enterprise identities (SAML) or via your own user directories provided by Amazon Cognito’s User Pool.
  • You are managing multiple identity providers including OpenID Connect and want to create a single authentication rule in Application Load Balancer (ALB) that can use Amazon Cognito to federate your multiple identity providers.
  • You need to actively manage user profiles with one or more social or OpenID Connect identity providers from one central place. For example, you can put users in groups and add custom attributes to represent user status and control access for paid users.

Alternatively, if you have invested in developing custom IdP solutions and simply want to authenticate with a single OpenID Connect-compatible identity provider, you may prefer using Application Load Balancer’s native OIDC solution.

The following three types of redirects are supported.

HTTP to HTTP
http://hostA to http://hostB

HTTP to HTTPS
http://hostA to https://hostB
https://hostA:portA/pathA to https://hostB:portB/pathB

HTTPS to HTTPS
https://hostA to https://hostB

The following content types are supported: text/plain, text/css, text/html, application/javascript, application/json.

HTTP(S) requests received by a load balancer are processed by its content-based routing rules. If the request content matches a rule whose action forwards it to a target group with a Lambda function as a target, the corresponding Lambda function is invoked. The content of the request (including headers and body) is passed to the Lambda function in JavaScript Object Notation (JSON) format, and the Lambda function's response must also be in JSON format. The load balancer transforms the response from the Lambda function into an HTTP response and sends it to the client. The load balancer invokes your Lambda function using the AWS Lambda Invoke API, and requires that you grant the Elastic Load Balancing service permission to invoke your Lambda function.
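As a rough illustration, a minimal Python Lambda handler that returns the JSON structure described above; the function name and target group ARN in the permission snippet (shown as a comment) are placeholders:

```python
# lambda_function.py -- minimal handler returning the JSON shape that the
# load balancer converts into an HTTP response.
def lambda_handler(event, context):
    # event carries the request method, path, headers, query string, and body.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "text/plain"},
        "body": f"Hello, {name}",
    }

# Separately, grant Elastic Load Balancing permission to invoke the function
# from your target group (run once, for example from a deployment script):
#
#   boto3.client("lambda").add_permission(
#       FunctionName="my-alb-function", StatementId="alb-invoke",
#       Action="lambda:InvokeFunction",
#       Principal="elasticloadbalancing.amazonaws.com",
#       SourceArn="<target group ARN>")
```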

Yes. Application Load Balancer supports Lambda invocation for requests over both HTTP and HTTPS protocol.

You can use Lambda as a target with the Application Load Balancer in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions.

Yes, Application Load Balancer is available in the Local Zone in Los Angeles. Within the Los Angeles Local Zone, Application Load Balancer will operate in a single subnet and scale automatically to meet varying levels of application load without manual intervention.

You are charged for each hour or partial hour that an Application Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used per hour.

An LCU is a new metric for determining how you pay for an Application Load Balancer. An LCU defines the maximum resource consumed in any one of the dimensions (new connections, active connections, bandwidth, and rule evaluations) in which the Application Load Balancer processes your traffic.
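As a rough illustration of the maximum-dimension rule, using the per-LCU values for non-Lambda targets quoted later in this FAQ (25 new connections/sec, 3,000 active connections/min, 1 GB/hour processed, 1,000 rule evaluations/sec), a sketch with hypothetical traffic numbers:

```python
# Hourly LCU estimate for an Application Load Balancer with non-Lambda targets.
PER_LCU = {
    "new_connections_per_sec": 25,
    "active_connections_per_min": 3000,
    "processed_gb_per_hour": 1.0,
    "rule_evaluations_per_sec": 1000,
}

# Hypothetical measured traffic for one hour.
usage = {
    "new_connections_per_sec": 10,       # 10 / 25    = 0.4 LCU
    "active_connections_per_min": 6000,  # 6000 / 3000 = 2.0 LCU
    "processed_gb_per_hour": 1.5,        # 1.5 / 1     = 1.5 LCU
    "rule_evaluations_per_sec": 200,     # 200 / 1000  = 0.2 LCU
}

# You are billed only on the highest dimension: 2.0 LCUs for this hour.
lcus = max(usage[d] / PER_LCU[d] for d in PER_LCU)
print(f"Billed LCUs this hour: {lcus:.1f}")
```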

No, Classic Load Balancers will continue to be billed for bandwidth and hourly usage.

We expose the usage of all four dimensions that constitute an LCU via Amazon CloudWatch.

No. The number of LCUs per hour is determined by the maximum resource consumed amongst the four dimensions that constitute an LCU.

Yes.

Yes. For new AWS accounts, a free tier for an Application Load Balancer offers 750 hours and 15 LCUs. This free tier offer is only available to new AWS customers, and is available for 12 months following your AWS sign-up date.

Yes. You can use both Classic and Application Load Balancers for 15 GB and 15 LCUs respectively. The 750 load balancer hours are shared between both Classic and Application Load Balancers.

Rule evaluations are defined as the product of number of rules processed and the request rate averaged over an hour.

Certificate key size affects only the number of new connections per second in the LCU computation for billing. The following lists the value of this dimension for different RSA and ECDSA certificate key sizes.

RSA certificates
  • Key size <= 2K: 25 new connections/sec
  • Key size <= 4K: 5 new connections/sec
  • Key size <= 8K: 1 new connection/sec
  • Key size > 8K: 0.25 new connections/sec

ECDSA certificates
  • Key size <= 256: 25 new connections/sec
  • Key size <= 384: 5 new connections/sec
  • Key size <= 521: 1 new connection/sec
  • Key size > 521: 0.25 new connections/sec

No. Since cross-zone load balancing is always on with Application Load Balancer, you are not charged for this type of regional data transfer.

No. There is no separate charge for enabling the authentication functionality in Application Load Balancer. When using Amazon Cognito with Application Load Balancer, Amazon Cognito pricing will apply.

You are charged as usual for each hour or partial hour that an Application Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used per hour. For Lambda targets, each LCU offers 0.4 GB processed bytes per hour, 25 new connections per second, 3,000 active connections per minute, and 1,000 rule evaluations per second. For the processed bytes dimension, each LCU provides 0.4 GB per hour for Lambda targets versus 1 GB per hour for all other target types like Amazon EC2 instances, containers, and IP addresses. Please note that usual AWS Lambda charges apply to Lambda invocations by Application Load Balancer.

Application Load Balancers emit two new CloudWatch metrics: the LambdaTargetProcessedBytes metric indicates the bytes processed by Lambda targets, and the StandardProcessedBytes metric indicates the bytes processed by all other target types.
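For example, a boto3 sketch that reads one of these metrics from CloudWatch for the last hour; the load balancer dimension value is a placeholder:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Sum of bytes processed by Lambda targets over the last hour
# (placeholder load balancer dimension value).
now = datetime.now(timezone.utc)
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApplicationELB",
    MetricName="LambdaTargetProcessedBytes",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/1234567890abcdef"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=3600,
    Statistics=["Sum"],
)
for point in resp["Datapoints"]:
    print(point["Timestamp"], point["Sum"])
```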

Network Load Balancer

Yes. Network Load Balancers support TCP, UDP, and TCP+UDP (Layer 4) listeners, as well as TLS listeners.

Network Load Balancer provides both TCP and UDP (Layer 4) load balancing. It is architected to handle millions of requests per second and sudden volatile traffic patterns, and provides extremely low latencies. In addition, Network Load Balancer also supports TLS termination, preserves the source IP of the clients, and provides stable IP support and zonal isolation. It also supports long-running connections that are useful for WebSocket type applications.

Yes. To achieve this, you can use a TCP+UDP listener. For example, for a DNS service using both TCP and UDP, you can create a TCP+UDP listener on port 53, and the load balancer will process traffic for both UDP and TCP requests on that port. You must associate a TCP+UDP listener with a TCP+UDP target group.
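A minimal boto3 sketch of that DNS example, using placeholder IDs and ARNs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# A TCP_UDP target group and listener for a DNS service on port 53
# (placeholder VPC and load balancer ARNs).
tg = elbv2.create_target_group(
    Name="dns-servers",
    Protocol="TCP_UDP",
    Port=53,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abcdef1234567890",
    Protocol="TCP_UDP",
    Port=53,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)
```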

Network Load Balancer preserves the source IP of the client, which is not preserved in the Classic Load Balancer. Customers can use proxy protocol with Classic Load Balancer to get the source IP. Network Load Balancer automatically provides a static IP per Availability Zone (AZ) to the load balancer and also enables assigning an Elastic IP to the load balancer per AZ. This is not supported with Classic Load Balancer.

Yes. You can migrate to Network Load Balancer from Classic Load Balancer using one of the options listed in this document.

Yes, please refer to Network Load Balancer limits documentation for more information.

Yes, you can use the AWS Management Console, AWS CLI, or the API to set up a Network Load Balancer.

No. To create a Classic Load Balancer, use the 2012-06-01 API. To create a Network Load Balancer or an Application Load Balancer, use the 2015-12-01 API.

Yes, you can create your Network Load Balancer in a single AZ by providing a single subnet when you create the load balancer.

Yes, you can use Amazon Route 53 health checking and DNS failover features to enhance the availability of the applications running behind Network Load Balancers. Using Route 53 DNS failover, you can run applications in multiple AWS Availability zones and designate alternate load balancers for failover across regions. 

In the event that you have your Network Load Balancer configured for multi-AZ, if there are no healthy Amazon EC2 instances registered with the load balancer in that AZ, or if the load balancer nodes in a given zone are unhealthy, then Route 53 will fail over to alternate load balancer nodes in other healthy AZs.

No. A Network Load Balancer’s addresses must be completely controlled by you, or completely controlled by ELB. This is to ensure that when using Elastic IPs with a Network Load Balancer, all addresses known to your clients do not change.

No. For each subnet associated with a Network Load Balancer, the load balancer can support only a single public/internet-facing IP address.

The Elastic IP Addresses that were associated with your load balancer will return to your allocated pool and be available for future use.

Network Load Balancer can be set up as an internet-facing load balancer or an internal load balancer, similar to what is possible with Application Load Balancer and Classic Load Balancer.

No. For each subnet associated with a Network Load Balancer, the load balancer can support only a single private IP address.

Yes, configure TCP listeners that route the traffic to targets that implement the WebSocket protocol (https://tools.ietf.org/html/rfc6455). Because WebSocket is a layer 7 protocol and Network Load Balancer operates at layer 4, no special handling exists in Network Load Balancer for WebSocket or other higher-level protocols.

Yes. You can use any IP address from the load balancer’s VPC CIDR for targets within load balancer’s VPC and any IP address from RFC 1918 ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) or RFC 6598 range (100.64.0.0/10) for targets located outside the load balancer’s VPC (EC2-Classic and on-premises locations reachable over AWS Direct Connect). Load balancing to IP address target type is supported for TCP listeners only, and is currently not supported for UDP listeners.

Yes, Network Load Balancers with TCP and TLS listeners can be used to set up AWS PrivateLink. You cannot set up PrivateLink with UDP listeners on Network Load Balancers.

While user datagram protocol (UDP) is connectionless, the load balancer maintains UDP flow state based on 5-tuple hash, ensuring that packets sent in the same context are consistently forwarded to the same target. The flow is considered active as long as traffic is flowing and until the idle timeout is reached. Once the timeout threshold is reached, the load balancer will forget the affinity, and the incoming UDP packet will be considered a new flow and load-balanced to a new target.

Network Load Balancer idle timeout for TCP connections is 350 seconds. The idle timeout for UDP flows is 120 seconds.

Each container on an instance can now have its own security group, and does not need to share security rules with other containers. You can attach security groups to an ENI, and each ENI on an instance can have a different security group. You can map a container to the IP address of a particular ENI to associate security group(s) per container. Load balancing using IP addresses also allows multiple containers running on an instance to use the same port (say, port 80). The ability to use the same port across containers allows containers on an instance to communicate with each other through well-known ports instead of random ports.

There are various ways to achieve hybrid load balancing. If an application runs on targets distributed between a VPC and an on-premises location, you can add them to the same target group using their IP addresses. To migrate to AWS without impacting your application, gradually add VPC targets to the target group and remove on-premises targets from the target group. You can also use separate load balancers for VPC and on-premises targets and use DNS weighting to achieve weighted load balancing between VPC and on-premises targets.

You cannot load balance to EC2-Classic instances by registering their instance IDs as targets. However, if you link these EC2-Classic instances to the load balancer's VPC using ClassicLink and use the private IPs of these EC2-Classic instances as targets, then you can load balance to them. If you are using EC2-Classic instances today with a Classic Load Balancer, you can easily migrate to a Network Load Balancer.

You can enable cross-zone load balancing only after creating your Network Load Balancer. You achieve this by editing the load balancer attributes section and then selecting the cross-zone load balancing support checkbox.
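For example, a boto3 sketch of enabling the attribute programmatically; the load balancer ARN is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Turn on cross-zone load balancing for an existing Network Load Balancer
# (placeholder ARN).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abcdef1234567890",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```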

Yes, you will be charged for regional data transfer between Availability Zones with Network Load Balancer when cross-zone load balancing is enabled. Check the charges in the data transfer section of the Amazon EC2 On-Demand Pricing page.

Yes. Network Load Balancer currently supports 200 targets per Availability Zone. For example, if you are in two AZs, you can have up to 400 targets registered with Network Load Balancer. If cross-zone load balancing is on, then the maximum targets reduce from 200 per AZ to 200 per load balancer. So, in the example above: When cross-zone load balancing is on, even though your load balancer is in two AZs, you are limited to 200 targets that can be registered to the load balancer.

Yes, you can terminate TLS connections on the Network Load Balancer. You must install an SSL certificate on your load balancer. The load balancer uses this certificate to terminate the connection and then decrypt requests from clients before sending them to targets.
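A minimal boto3 sketch of adding a TLS listener with an ACM certificate to a Network Load Balancer; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Terminate TLS on the Network Load Balancer using an ACM certificate and
# forward the decrypted traffic to a TCP target group (placeholder ARNs).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/my-nlb/abcdef1234567890",
    Protocol="TLS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/example"}],
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/tcp-targets/abcdef1234567890",
    }],
)
```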

Source IP continues to be preserved even if you terminate TLS on the Network Load Balancer.

You can either use AWS Certificate Manager (ACM) to provision an SSL/TLS certificate, or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a certificate authority (CA), and then uploading the certificate using either ACM or the AWS Identity and Access Management (IAM) service.

SNI is automatically enabled when you associate more than one TLS certificate with the same secure listener on a load balancer. Similarly, SNI mode for a secure listener is automatically disabled when you have only one certificate associated to a secure listener.

Network Load Balancer is integrated with AWS Certificate Manager (ACM). Integration with ACM makes it very simple to bind a certificate to the load balancer, thereby making the entire SSL offload process very easy. Purchasing, uploading, and renewing SSL/TLS certificates is otherwise a time-consuming, manual, and complex process. With ACM integration, this whole process is reduced to requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision with the load balancer. Once you create a Network Load Balancer, you can configure a TLS listener and then select a certificate from either ACM or AWS Identity and Access Management (IAM). This experience is similar to what you have in Application Load Balancer or Classic Load Balancer.

No, only encryption is supported to the back-ends with Network Load Balancer.

Network Load Balancer only supports RSA certificates with 2K key size. We currently do not support RSA certificate key sizes greater than 2K or ECDSA certificates on the Network Load Balancer.

You can use TLS Termination on Network Load Balancer in US East (N. Virginia), US East (Ohio), US West (Northern California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), EU (Frankfurt), EU (Ireland), EU (London), EU (Paris), South America (São Paulo), and GovCloud (US-West) AWS Regions.

You are charged for each hour or partial hour that a Network Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used by Network Load Balancer per hour.

An LCU is a new metric for determining how you pay for a Network Load Balancer. An LCU defines the maximum resource consumed in any one of the dimensions (new connections/flows, active connections/flows, and bandwidth) in which the Network Load Balancer processes your traffic.

The LCU metrics for the TCP traffic are as follows:

  • 800 new TCP connections per second.
  • 100,000 active TCP connections (sampled per minute).
  • 1 GB per hour for Amazon EC2 instances, containers, and IP addresses as targets.

The LCU metrics for the UDP traffic are as follows:

  • 400 new flows per second.
  • 50,000 active UDP flows (sampled per minute).
  • 1 GB per hour for Amazon EC2 instances, containers, and IP addresses as targets.

The LCU metrics for the TLS traffic are as follows:

  • 50 new TLS connections per second.
  • 3,000 active TLS connections (sampled per minute).
  • 1 GB per hour for Amazon EC2 instances, containers, and IP addresses as targets.

No, for each protocol you are charged only on one of the three dimensions (the highest for the hour).

No. Multiple requests can be sent in a single connection.

No. Classic Load Balancers will continue to be billed for bandwidth and hourly charge.

We expose the usage of all three dimensions that constitute an LCU via Amazon CloudWatch.

No. The number of LCUs per hour is determined by the maximum resource consumed amongst the three dimensions that constitute an LCU.

Yes.

Yes. For new AWS accounts, a free tier for a Network Load Balancer offers 750 hours and 15 LCUs. This free tier offer is only available to new AWS customers, and is available for 12 months following your AWS sign-up date.

Yes. You can use Application and Network each for 15 LCUs and Classic for 15 GB respectively. The 750 load balancer hours are shared between Application, Network, and Classic Load Balancers.

Gateway Load Balancer

You should use Gateway Load Balancer when deploying inline virtual appliances where network traffic is not destined for the Gateway Load Balancer itself. Gateway Load Balancer transparently passes all Layer 3 traffic through third-party virtual appliances, and is invisible to the source and destination of the traffic. For more details on how these load balancers compare, see the features comparison page.

Gateway Load Balancer runs within one AZ.

Gateway Load Balancer provides both Layer 3 gateway and Layer 4 load balancing capabilities. It is a transparent bump-in-the-wire device that does not change any part of the packet. It is architected to handle millions of requests per second and sudden, volatile traffic patterns while introducing extremely low latency. See Gateway Load Balancer features in this table.

Gateway Load Balancer does not perform TLS termination and does not maintain any application state. These functions are performed by the third-party virtual appliances it directs traffic to, and receives traffic from.

Gateway Load Balancer does not maintain application state, but it maintains flow stickiness to a specific appliance using 5-tuple (for TCP/UDP flows) or 3-tuple (for non-TCP/UDP flows).

By default, Gateway Load Balancer defines a flow as a combination of a 5-tuple that comprises Source IP, Destination IP, IP Protocol, Source Port, and Destination Port. Using the default 5-tuple hash, Gateway Load Balancer makes sure that both directions of a flow (i.e., source to destination, and destination to source) are consistently forwarded to the same target. The flow is considered active as long as traffic is flowing and until the idle timeout is reached. Once the timeout threshold is reached, the load balancer will forget the affinity, and an incoming packet will be considered a new flow and may be load-balanced to a new target.

The default 5-tuple (Source IP, Destination IP, IP Protocol, Source Port, and Destination Port) stickiness provides the most optimal traffic distribution to targets, and is suitable for most TCP and UDP based applications, with some exceptions. The default 5-tuple stickiness is not suitable for TCP or UDP based applications that use separate streams or port numbers for control and data, such as FTP, Microsoft RDP, Windows RPC and SSL VPN. Control and data flows of such applications can land on different target appliances and can cause traffic disruption. If you want to support such protocols, you should enable GWLB flow stickiness using 3-tuple (source IP, destination IP, IP protocol) or 2-tuple (source IP, destination IP).

Some applications do not use TCP or UDP transport at all, but instead use IP protocols such as SCTP and GRE. With the default 5-tuple stickiness of GWLB, traffic flows from these protocols can land on different target appliances and can cause disruption. If you want to support such protocols, you should enable GWLB flow stickiness using 3-tuple (source IP, destination IP, IP protocol) or 2-tuple (source IP, destination IP).

Please see flow stickiness documentation for how to change flow stickiness type.
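As a sketch only, switching a Gateway Load Balancer target group to 3-tuple stickiness with boto3 might look like the following; the attribute keys and values shown are our assumption of the documented target group attributes, and the target group ARN is a placeholder, so confirm the names against the flow stickiness documentation:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable 3-tuple flow stickiness on a Gateway Load Balancer (GENEVE) target
# group. Assumed values: "source_ip_dest_ip_proto" for 3-tuple and
# "source_ip_dest_ip" for 2-tuple (placeholder ARN).
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/appliances/abcdef1234567890",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "source_ip_dest_ip_proto"},
    ],
)
```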

Gateway Load Balancer idle timeout for TCP connections is 350 seconds. The idle timeout for non-TCP flows is 120 seconds. These timeouts are fixed and cannot be changed.

As a transparent bump-in-the-wire, GWLB itself does not fragment or reassemble packets.

GWLB forwards UDP fragments to/from target appliances. However, GWLB drops TCP and ICMP fragments because the layer 4 header is not present in these fragments.

In addition, if the target appliance converts the original incoming packet into fragments and sends the newly created fragments back to GWLB, GWLB drops these newly created fragments because they don't contain a layer 4 header. To prevent fragmentation from occurring on the target appliance, we recommend enabling jumbo frames on your target appliance or setting your target appliance's network interface to use the maximum desired MTU, thus achieving transparent forwarding behavior by keeping the original packet contents as is.

When a single virtual appliance instance fails, Gateway Load Balancer removes it from the routing list and reroutes traffic to a healthy appliance instance.

If all virtual appliances within an Availability Zone fail, Gateway Load Balancer will drop the network traffic. We recommend deploying Gateway Load Balancers in multiple AZs for greater availability. If all appliances fail in one AZ, scripts can be used to either add new appliances, or direct traffic to a Gateway Load Balancer in a different AZ.

Yes, multiple Gateway Load Balancers can point to the same set of virtual appliances.

Gateway Load Balancer is a transparent bump-in-the-wire device and listens to all types of IP traffic (including TCP, UDP, ICMP, GRE, ESP, and others). Hence, only an IP listener is created on a Gateway Load Balancer.

Yes, please refer to Gateway Load Balancer limits documentation for more information.

Yes, you can use the AWS Management Console, AWS CLI, or the API to set up a Gateway Load Balancer.

Yes, you can create your Gateway Load Balancer in a single availability zone by providing a single subnet when you create the load balancer. However, we recommend using multiple availability zones for improved availability. You cannot add or remove availability zones for a Gateway Load Balancer after you create it.

By default, cross-zone load balancing is disabled. You can enable cross-zone load balancing only after creating your Gateway Load Balancer. You achieve this by editing the load balancer attributes section and then selecting the cross-zone load balancing support checkbox.

Yes, you will be charged for data transfer between Availability Zones with Gateway Load Balancer when cross-zone load balancing is enabled. Check the charges in the data-transfer section at Amazon EC2 On-Demand Pricing page.

Yes. Gateway Load Balancer currently supports 300 targets per Availability Zone. For example, if you create a Gateway Load Balancer in 3 Availability Zones, you can have up to 900 targets registered. If cross-zone load balancing is on, then the maximum number of targets reduces from 300 per Availability Zone to 300 per Gateway Load Balancer.

You are charged for each hour or partial hour that a Gateway Load Balancer is running and the number of Load Balancer Capacity Units (LCU) used by Gateway Load Balancer per hour.

An LCU is an Elastic Load Balancing metric for determining how you pay for a Gateway Load Balancer. An LCU defines the maximum resource consumed in any one of the dimensions (new connections/flows, active connections/flows, and bandwidth) in which the Gateway Load Balancer processes your traffic.

The LCU metrics for TCP traffic are as follows:

  • 600 new flows (or connections) per second.
  • 60,000 active flows (or connections) (sampled per minute).
  • 1 GB per hour for EC2 instances, containers and IP addresses as targets.

No, you are charged only on one of the three dimensions (the highest for the hour).

No. Multiple requests can be sent in a single connection.

You can track usage of all three dimensions of an LCU via Amazon CloudWatch.

Yes.

In order to be valuable, virtual appliances need to introduce as little additional latency as possible, and traffic flowing to and from the virtual appliance must follow a secure connection. Gateway Load Balancer Endpoints create the secured, low-latency connections necessary to meet these requirements.

Using a Gateway Load Balancer Endpoint, appliances can reside in different AWS accounts and VPCs. This allows appliances to be centralized in one location for easier management and reduced operational overhead.

Gateway Load Balancer Endpoints are a new type of VPC endpoint that uses PrivateLink technology. As network traffic flows from a source (an Internet Gateway, a VPC, etc.) to the Gateway Load Balancer, and back, a Gateway Load Balancer Endpoint ensures private connectivity between the two. All traffic flows over the AWS network and data is never exposed to the internet, increasing both security and performance.

A PrivateLink Interface endpoint is paired with a Network Load Balancer (NLB) in order to distribute TCP and UDP traffic that is destined for the web applications. In contrast, Gateway Load Balancer Endpoints are used with Gateway Load Balancers to connect the source and destination of traffic. Traffic flows from the Gateway Load Balancer Endpoint to the Gateway Load Balancer, through the virtual appliances, and back to the destination over secured PrivateLink connections.
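As an illustration, a boto3 sketch of exposing a Gateway Load Balancer as an endpoint service and creating a Gateway Load Balancer Endpoint in a consumer VPC; all IDs and ARNs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Expose a Gateway Load Balancer as a VPC endpoint service (appliance side).
svc = ec2.create_vpc_endpoint_service_configuration(
    GatewayLoadBalancerArns=[
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/gwy/my-gwlb/abcdef1234567890"
    ],
    AcceptanceRequired=False,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Create the Gateway Load Balancer Endpoint in the consumer VPC
# (placeholder VPC and subnet IDs).
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    VpcId="vpc-0123456789abcdef0",
    ServiceName=service_name,
    SubnetIds=["subnet-0123456789abcdef0"],
)
```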

Gateway Load Balancer Endpoint is a VPC Endpoint and there is no limit on how many VPC Endpoints can connect to a service that uses Gateway Load Balancer. However, we recommend connecting no more than 50 Gateway Load Balancer Endpoints per one Gateway Load Balancer to reduce the risk of broader impact in case of service failure.

Classic Load Balancer

The Classic Load Balancer supports Amazon EC2 instances with any operating system currently supported by the Amazon EC2 service.

The Classic Load Balancer supports load balancing of applications using HTTP, HTTPS (Secure HTTP), SSL (Secure TCP) and TCP protocols.

You can perform load balancing for the following TCP ports:

  • [EC2-VPC] 1-65535
  • [EC2-Classic] 25, 80, 443, 465, 587, 1024-65535

Yes. Each Classic Load Balancer has an associated IPv4, IPv6, and dualstack (both IPv4 and IPv6) DNS name. IPv6 is not supported in VPC. You can use an Application Load Balancer for native IPv6 support in VPC.

Yes.

If you are using Amazon Virtual Private Cloud, you can configure security groups for the front end of your Classic Load Balancers.

Yes, you can map HTTP port 80 and HTTPS port 443 to a single Classic Load Balancer.

Classic Load Balancers do not cap the number of connections that they can attempt to establish with your load balanced Amazon EC2 instances. You can expect this number to scale with the number of concurrent HTTP, HTTPS, or SSL requests or the number of concurrent TCP connections that the Classic load balancers receive.

You can load balance Amazon EC2 instances launched using a paid AMI from AWS Marketplace. However, Classic Load Balancers do not support instances launched using a paid AMI from the Amazon DevPay site.

Yes. See the Elastic Load Balancing web page.

Yes. To receive a history of Classic Load Balancer API calls made on your account, simply turn on CloudTrail in the AWS Management Console.

Yes, you can terminate SSL on Classic Load Balancers. You must install an SSL certificate on each load balancer. The load balancers use this certificate to terminate the connection and then decrypt requests from clients before sending them to the back-end instances.

You can either use AWS Certificate Manager to provision an SSL/TLS certificate, or you can obtain the certificate from other sources by creating the certificate request, getting the certificate request signed by a CA, and then uploading the certificate using the AWS Identity and Access Management (IAM) service.

Classic Load Balancers are now integrated with AWS Certificate Manager (ACM). Integration with ACM makes it very simple to bind a certificate to each load balancer, thereby making the entire SSL offload process very easy. Typically, purchasing, uploading, and renewing SSL/TLS certificates is a time-consuming, manual, and complex process. With ACM integrated with Classic Load Balancers, this whole process is reduced to requesting a trusted SSL/TLS certificate and selecting the ACM certificate to provision with each load balancer.

You can enable cross-zone load balancing using the console, the AWS CLI, or an AWS SDK. See Cross-Zone Load Balancing documentation for more details.

No, you are not charged for regional data transfer between Availability Zones when you enable cross-zone load balancing for your Classic Load Balancer.