Azure Application Gateway
Application Gateway
Application Gateway manages the requests that client applications can send to a web app. Application
Gateway routes traffic to a pool of web servers based on the URL of a request. This is known as
application layer routing. The pool of web servers can be Azure virtual machines, Azure virtual machine
scale sets, Azure App Service, and even on-premises servers.
Application Gateway automatically load balances requests sent to the servers in each back-end
pool using a round-robin mechanism. However, you can configure session stickiness if you need to
ensure that all requests for a client in the same session are routed to the same server in a back-end
pool.
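The round-robin and stickiness behavior described above can be sketched in a few lines of Python. This is a conceptual illustration only; the class and method names are invented for the example and are not part of any Azure API.

```python
import itertools

class BackEndPool:
    """Toy model of a back-end pool with round-robin selection
    and optional session stickiness (illustrative, not Azure's code)."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)
        self._affinity = {}  # session id -> pinned server

    def pick_server(self, session_id=None, sticky=False):
        # With stickiness enabled, a known session keeps its server.
        if sticky and session_id in self._affinity:
            return self._affinity[session_id]
        server = next(self._cycle)  # otherwise, plain round-robin
        if sticky and session_id is not None:
            self._affinity[session_id] = server
        return server

pool = BackEndPool(["10.0.0.4", "10.0.0.5", "10.0.0.6"])
print(pool.pick_server())  # first server in rotation
print(pool.pick_server())  # next server in rotation
print(pool.pick_server("sess-1", sticky=True))  # pinned for this session
print(pool.pick_server("sess-1", sticky=True))  # same server again
```

Without stickiness, consecutive requests rotate through the pool; with stickiness, a session's first assignment is remembered and reused.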
Load balancing works with the OSI Layer 7 routing implemented by Application Gateway,
which means that it load balances requests based on the routing parameters (host names and paths)
used by the Application Gateway rules. In comparison, other load balancers, such as Azure Load
Balancer, function at OSI Layer 4 and distribute traffic based on the IP address of the target
of a request.
Operating at OSI Layer 7 enables load balancing to take advantage of the other features that
Application Gateway provides.
Additional features
● Support for the HTTP, HTTPS, HTTP/2, and WebSocket protocols.
There are two primary methods of routing traffic: path-based routing and multiple-site hosting.
Path-based routing
Path-based routing enables you to send requests with different paths in the URL to a different pool of
back-end servers. For example, you could direct requests with the path /video/* to a back-end pool
containing servers that are optimized to handle video streaming, and direct /images/* requests to a
pool of servers that handle image retrieval.
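The path-matching idea can be sketched as follows. The pattern syntax and pool names are illustrative assumptions for this example, not Application Gateway's actual rule engine.

```python
from fnmatch import fnmatch

# Toy rule table: URL path pattern -> back-end pool name
# (illustrative names, mirroring the /video/* and /images/* example above).
rules = [
    ("/video/*", "video-pool"),
    ("/images/*", "image-pool"),
]

def route(path, default_pool="default-pool"):
    # First matching pattern wins; unmatched paths fall through
    # to a default pool.
    for pattern, pool in rules:
        if fnmatch(path, pattern):
            return pool
    return default_pool

print(route("/video/stream.mp4"))  # routed to the video pool
print(route("/images/logo.png"))   # routed to the image pool
print(route("/index.html"))        # falls through to the default pool
```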
Multiple-site hosting
Multiple-site hosting enables you to configure more than one web application behind the same
Application Gateway, routing requests to different back-end pools based on the host name in the
URL. Multi-site configurations are useful for supporting multi-tenant applications, where each
tenant has its own set of virtual machines or other resources hosting a web application.
Additional features
● Redirection - Requests can be redirected to another site, or from HTTP to HTTPS.
● Rewrite HTTP headers - HTTP headers allow the client and server to pass additional information
with a request or response; Application Gateway can modify these headers as traffic passes through.
● Custom error pages - Application Gateway allows you to create custom error pages instead of
displaying default error pages. You can use your own branding and layout using a custom error page.
Front-end IP address
Client requests are received through a front-end IP address. You can configure Application Gateway
to have a public IP address, a private IP address, or both. Application Gateway can't have more than
one public and one private IP address.
Listeners
Application Gateway uses one or more listeners to receive incoming requests. A listener accepts traffic
arriving on a specified combination of protocol, port, host, and IP address. Each listener routes
requests to a back-end pool of servers following routing rules that you specify. A listener can be Basic
or Multi-site. A Basic listener only routes a request based on the path in the URL. A Multi-site listener
can also route requests using the hostname element of the URL. Listeners also handle SSL certificates
for securing your application between the user and Application Gateway.
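The distinction between Basic and Multi-site listeners can be illustrated with a small matching function. The dictionary fields here are assumptions made for the sketch, not the Azure SDK's data model.

```python
# Conceptual sketch: a Basic listener matches on protocol and port alone,
# while a Multi-site listener additionally requires a specific host name.
def listener_matches(listener, protocol, port, host):
    if listener["protocol"] != protocol or listener["port"] != port:
        return False
    # A Multi-site listener carries a host name; a Basic one does not.
    if listener.get("host") is not None:
        return listener["host"] == host
    return True

basic = {"protocol": "https", "port": 443, "host": None}
multi = {"protocol": "https", "port": 443, "host": "contoso.com"}

print(listener_matches(multi, "https", 443, "contoso.com"))   # matches
print(listener_matches(multi, "https", 443, "fabrikam.com"))  # wrong host
print(listener_matches(basic, "https", 443, "fabrikam.com"))  # host ignored
```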
Routing rules
A routing rule binds a listener to the back-end pools. A rule specifies how to interpret the hostname
and path elements in the URL of a request, and direct the request to the appropriate back-end pool.
A routing rule also has an associated set of HTTP settings. These settings indicate whether (and how)
traffic is encrypted between Application Gateway and the back-end servers, and other configuration
information such as: Protocol, Session stickiness, Connection draining, Request timeout period, and
Health probes.
Back-end pools
A back-end pool references a collection of web servers. You provide the IP address of each web server
and the port on which it listens for requests when configuring the pool. Each pool can specify a fixed
set of virtual machines, a virtual machine scale-set, an app hosted by Azure App Services, or a
collection of on-premises servers. Each back-end pool has an associated load balancer that distributes
work across the pool
Web application firewall
The web application firewall (WAF) is an optional component that handles incoming requests before
they reach a listener. WAF checks each request for many common threats, based on the Open Web
Application Security Project (OWASP). These include:
● SQL injection
● Cross-site scripting
● Command injection
● HTTP request smuggling
● HTTP response splitting
● Remote file inclusion
● Bots, crawlers, and scanners
● HTTP protocol violations and anomalies
OWASP has defined a set of generic rules for detecting attacks. These rules are referred to as the Core
Rule Set (CRS). The rule sets are under continuous review as attacks evolve in sophistication. WAF
supports two rule sets, CRS 2.2.9 and CRS 3.0. CRS 3.0 is the default and more recent of these rule
sets. If necessary, you can opt to select only specific rules in a rule set, targeting certain threats.
Additionally, you can customize the firewall to specify which elements in a request to examine, and
limit the size of messages to prevent massive uploads from overwhelming your servers.
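The general idea behind signature-based rules and message-size limits can be sketched as below. The two regex patterns are deliberately simplified toy examples, not the OWASP Core Rule Set, and the size threshold is an arbitrary illustrative value.

```python
import re

# Toy signatures for two of the threat classes listed above
# (vastly simpler than real CRS rules).
RULES = {
    "sql-injection": re.compile(r"('|--|;)\s*(or|and|drop|union)\b", re.IGNORECASE),
    "cross-site-scripting": re.compile(r"<\s*script\b", re.IGNORECASE),
}

def inspect(request_field, max_size=8192):
    # Oversized payloads are rejected outright, mirroring the
    # message-size limits described above.
    if len(request_field) > max_size:
        return ["request-too-large"]
    # Otherwise, report every signature that matches.
    return [name for name, pattern in RULES.items()
            if pattern.search(request_field)]

print(inspect("id=1' OR 1=1 --"))            # flags sql-injection
print(inspect("<script>alert(1)</script>"))  # flags cross-site-scripting
print(inspect("q=hello"))                    # clean request, no flags
```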
WAF is enabled on your Application Gateway by selecting the WAF tier when you create a gateway.
Health probes
Health probes help the load balancer determine which servers in a back-end pool are available.
Application Gateway uses a health probe to send a request to each server. If the server returns an
HTTP response with a status code between 200 and 399, the server is deemed healthy.
If you don't configure a health probe, Application Gateway creates a default probe that waits for 30
seconds before deciding that a server is unavailable.
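The probe rule above amounts to a status-code range check plus a timeout. The sketch below illustrates that logic in Python; the function names and the use of `urllib` are assumptions for this example, not the gateway's actual implementation.

```python
import urllib.request
from urllib.error import URLError

def is_healthy_status(status_code):
    # Per the rule above: 200 through 399 inclusive counts as healthy.
    return 200 <= status_code <= 399

def probe(url, timeout=30):
    # No response within the timeout (or a connection failure) marks the
    # server unavailable, mirroring the 30-second default described above.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return is_healthy_status(resp.status)
    except (URLError, OSError):
        return False
```

Note that a 404 or 500 response causes `urlopen` to raise an `HTTPError` (a subclass of `URLError`), so such servers are also reported as unhealthy, consistent with the 200-399 rule.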