
Azure Application Gateway

Application Gateway
Application Gateway manages the requests that client applications can send to a web app. Application
Gateway routes traffic to a pool of web servers based on the URL of a request. This is known as
application layer routing. The pool of web servers can be Azure virtual machines, Azure virtual machine
scale sets, Azure App Service, and even on-premises servers.

Application Gateway automatically load balances requests sent to the servers in each back-end
pool using a round-robin mechanism. However, you can configure session stickiness if you need to
ensure that all requests for a client in the same session are routed to the same server in a back-end
pool.
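
As a minimal Python sketch of these two behaviours (the class and its method are hypothetical, for illustration only, and not part of any Azure SDK):

import itertools

class BackendPool:
    """Toy model of round-robin distribution with optional session stickiness."""

    def __init__(self, servers):
        self.servers = servers
        self._round_robin = itertools.cycle(servers)
        self._sticky_sessions = {}  # session id -> pinned server

    def pick_server(self, session_id=None, sticky=False):
        # Without stickiness, each request simply takes the next server in turn.
        if not sticky or session_id is None:
            return next(self._round_robin)
        # With stickiness, the first request pins the session to a server
        # and later requests for the same session reuse it.
        if session_id not in self._sticky_sessions:
            self._sticky_sessions[session_id] = next(self._round_robin)
        return self._sticky_sessions[session_id]

pool = BackendPool(["10.0.1.4", "10.0.1.5", "10.0.1.6"])
print(pool.pick_server())                       # round robin: 10.0.1.4, then 10.0.1.5, ...
print(pool.pick_server("abc123", sticky=True))  # always the same server for this session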

Load balancing works with the OSI Layer 7 routing implemented by Application Gateway, which
means that requests are load balanced based on the routing parameters (host names and paths)
used by the Application Gateway rules. In comparison, other load balancers, such as Azure Load
Balancer, function at OSI Layer 4 and distribute traffic based on the IP address of the target
of a request.

Operating at OSI Layer 7 enables load balancing to take advantage of the other features that
Application Gateway provides.

Additional features
● Support for the HTTP, HTTPS, HTTP/2 and WebSocket protocols.

● A web application firewall to protect against web application vulnerabilities.

● End-to-end request encryption.

● Autoscaling, to dynamically adjust capacity as your web traffic load changes.

Application Gateway Routing


Clients send requests to your web apps through the IP address or DNS name of the gateway. The gateway
routes each request to a selected web server in the back-end pool, using a set of rules configured for
the gateway to determine where the request should go.

There are two primary methods of routing traffic: path-based routing and multiple-site hosting.
Path-based routing
Path-based routing enables you to send requests with different paths in the URL to a different pool of
back-end servers. For example, you could direct requests with the path /video/* to a back-end pool
containing servers that are optimized to handle video streaming, and direct /images/* requests to a
pool of servers that handle image retrieval.
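
A minimal Python sketch of that path-based rule (the pool names and path map are hypothetical; a real gateway expresses this as a URL path map, not code):

import fnmatch

# Hypothetical path map: URL path pattern -> back-end pool name.
PATH_MAP = {
    "/video/*":  "video-pool",
    "/images/*": "image-pool",
}
DEFAULT_POOL = "default-pool"

def route_by_path(path):
    """Return the back-end pool whose pattern matches the request path."""
    for pattern, pool in PATH_MAP.items():
        if fnmatch.fnmatch(path, pattern):
            return pool
    return DEFAULT_POOL

print(route_by_path("/video/stream01"))   # video-pool
print(route_by_path("/images/logo.png"))  # image-pool
print(route_by_path("/checkout"))         # default-pool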

Multiple-site hosting


Multiple-site hosting enables you to configure more than one web application on the same Application
Gateway instance. In a multi-site configuration, you register multiple DNS names (CNAMEs) for the IP
address of the Application Gateway, specifying the name of each site. Application Gateway uses
separate listeners to wait for requests for each site. Each listener passes the request to a different
rule, which can route the requests to servers in a different back-end pool. For example, you could
configure Application Gateway to direct all requests for http://contoso.com to servers in one back-
end pool, and requests for http://fabrikam.com to another back-end pool.
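
As a rough sketch (hypothetical names, not an Azure API), the per-site listeners behave like a host-name lookup on the request's Host header:

# Hypothetical mapping: host name from the Host header -> back-end pool.
SITE_LISTENERS = {
    "contoso.com":  "contoso-pool",
    "fabrikam.com": "fabrikam-pool",
}

def route_by_host(host_header):
    """Multi-site routing: pick the pool registered for the requested host."""
    host = host_header.split(":")[0].lower()  # strip any port, normalize case
    return SITE_LISTENERS.get(host)           # None means no listener matches

print(route_by_host("contoso.com"))      # contoso-pool
print(route_by_host("FABRIKAM.com:80"))  # fabrikam-pool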

Multi-site configurations are useful for supporting multi-tenant applications, where each tenant has
its own set of virtual machines or other resources hosting a web application.
Additional features
● Redirection - Requests can be redirected to another site, or from HTTP to HTTPS (a rough sketch of such a redirect follows this list).

● Rewrite HTTP headers - HTTP headers allow the client and server to pass additional information
with a request or response; Application Gateway can add, remove, or update these headers as traffic passes through.

● Custom error pages - Application Gateway allows you to create custom error pages instead of
displaying default error pages, so you can apply your own branding and layout.
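
For the first feature above, an HTTP-to-HTTPS redirect is essentially a 301 response whose Location header repeats the requested URL with the https scheme. A rough Python sketch (illustrative only, not how the gateway is configured):

def redirect_to_https(host, path):
    """Build a minimal 301 response that upgrades the request to HTTPS."""
    location = f"https://{host}{path}"
    return (
        "HTTP/1.1 301 Moved Permanently\r\n"
        f"Location: {location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    )

print(redirect_to_https("contoso.com", "/video/stream01"))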

Application Gateway Configuration


Application Gateway has a series of components that combine to route requests to a pool of web
servers and to check the health of these web servers.

Front-end IP address

Client requests are received through a front-end IP address. You can configure Application Gateway
to have a public IP address, a private IP address, or both. Application Gateway supports at most
one public and one private IP address.

Listeners

Application Gateway uses one or more listeners to receive incoming requests. A listener accepts traffic
arriving on a specified combination of protocol, port, host, and IP address. Each listener routes
requests to a back-end pool of servers following routing rules that you specify. A listener can be Basic
or Multi-site. A Basic listener only routes a request based on the path in the URL. A Multi-site listener
can also route requests using the hostname element of the URL. Listeners also handle SSL certificates
for securing your application between the user and Application Gateway.
Routing rules

A routing rule binds a listener to the back-end pools. A rule specifies how to interpret the hostname
and path elements in the URL of a request, and how to direct the request to the appropriate back-end pool.
A routing rule also has an associated set of HTTP settings. These settings indicate whether (and how)
traffic is encrypted between Application Gateway and the back-end servers, and other configuration
information such as: Protocol, Session stickiness, Connection draining, Request timeout period, and
Health probes.
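
Taken together, a listener, a routing rule, its HTTP settings, and a back-end pool form a small object graph. The sketch below uses hypothetical Python dataclasses to show the relationships; these are not Azure resource types:

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class HttpSettings:
    protocol: str = "Https"          # whether traffic to the back end is encrypted
    cookie_affinity: bool = False    # session stickiness
    request_timeout_s: int = 30
    connection_draining: bool = True

@dataclass
class Listener:
    protocol: str                    # "Http" or "Https"
    port: int
    host: Optional[str] = None       # None for a Basic listener, a host name for Multi-site

@dataclass
class RoutingRule:
    listener: Listener
    backend_pool: str                # name of the back-end pool the rule targets
    settings: HttpSettings = field(default_factory=HttpSettings)

# A Multi-site HTTPS listener bound to one back-end pool.
rule = RoutingRule(
    listener=Listener(protocol="Https", port=443, host="contoso.com"),
    backend_pool="contoso-pool",
)
print(rule.settings.protocol, rule.backend_pool)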

Back-end pools

A back-end pool references a collection of web servers. When configuring the pool, you provide the
IP address of each web server and the port on which it listens for requests. Each pool can specify a
fixed set of virtual machines, a virtual machine scale set, an app hosted by Azure App Service, or a
collection of on-premises servers. Each back-end pool has an associated load balancer that distributes
work across the pool.

Web application firewall

The web application firewall (WAF) is an optional component that handles incoming requests before
they reach a listener. The web application firewall checks each request for many common threats,
based on rules from the Open Web Application Security Project (OWASP). These include SQL injection,
cross-site scripting, command injection, HTTP request smuggling, HTTP response splitting, remote file
inclusion, bots, crawlers, and scanners, and HTTP protocol violations and anomalies.

OWASP has defined a set of generic rules for detecting attacks. These rules are referred to as the Core
Rule Set (CRS). The rule sets are under continuous review as attacks evolve in sophistication. WAF
supports two rule sets, CRS 2.2.9 and CRS 3.0. CRS 3.0 is the default and more recent of these rule
sets. If necessary, you can opt to select only specific rules in a rule set, targeting certain threats.
Additionally, you can customize the firewall to specify which elements in a request to examine, and
limit the size of messages to prevent massive uploads from overwhelming your servers.

WAF is enabled on your Application Gateway by selecting the WAF tier when you create a gateway.

Health probes

Health probes help the load balancer determine which servers in a back-end pool are available to
receive traffic. Application Gateway uses a health probe to send a request to each server. If the
server returns an HTTP response with a status code between 200 and 399, the server is deemed healthy.

If you don't configure a health probe, Application Gateway creates a default probe that waits for 30
seconds before deciding that a server is unavailable.
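
A rough Python sketch of this health check, using the requests library (the probe path and server addresses are made up; the real probe settings are configured on the gateway):

import requests

def is_healthy(server_ip, port=80, path="/", timeout_s=30):
    """Probe one back-end server; a 200-399 response counts as healthy."""
    try:
        resp = requests.get(f"http://{server_ip}:{port}{path}", timeout=timeout_s)
        return 200 <= resp.status_code <= 399
    except requests.RequestException:
        # No response within the timeout, or a connection error: mark unavailable.
        return False

healthy_servers = [ip for ip in ["10.0.1.4", "10.0.1.5"] if is_healthy(ip)]
print(healthy_servers)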

*****
