Zscaler Digital Transformation Admin Study Guide
Basic Data Protection Services......................................................................................117
Data Protection Overview.........................................................................................119
Protecting Data in Motion.........................................................................................123
Protecting Data at Rest............................................................................................131
Incident Management.............................................................................................. 133
Basic Troubleshooting Tools & Support......................................................................... 135
Zscaler Self Help Services.......................................................................................136
Zscaler Troubleshooting Process & Tools................................................................138
Zscaler Customer Support Services........................................................................ 146
Zscaler Digital Transformation Administrator
Exam Format
Certification name: Zscaler Digital Transformation Administrator (ZDTA)
Delivered through: Certiverse, our online testing platform
Exam series: Zscaler Digital Transformation
Seat time: 90 minutes
Number of items: 50
Format: Multiple Choice, Scenarios with Graphics, and Matching
Languages: English
Identity Services 4
Basic Connectivity 20
Platform Services 15
Access Control 15
Cybersecurity Services 20
Audience & Qualifications
The ZDTA exam is for Zscaler customers as well as everyone who sells and supports the Zscaler platform. By taking the exam, you demonstrate the deep understanding and knowledge needed to drive operational success.
Skills Required
● Ability to professionally design, implement, operate, and troubleshoot the Zscaler
platform
● Ability to adapt legacy on-premises technologies and legacy hub-and-spoke network
designs to modern cloud architectures
Recommended Training
Zscaler recommends that you have first attended the Zscaler for Users (EDU-200) course and
hands-on lab, or have solid hands-on experience with ZIA, ZPA and ZDX.
Core Skills
Identity Services
Identity Integration will teach you how to authenticate users to the Zero Trust Exchange. This chapter will enable you to understand how that authentication works and how user attributes are consumed for policy.
—–
Authentication and Authorization to the Zero Trust Exchange
The first thing we do when we connect to the Zero Trust Exchange is verify identity and context,
while also consuming attributes for policy. Usually that means connecting to a SAML identity
provider (IdP), but in the case of Zscaler Internet Access, it could be other methods such as
LDAP or a hosted database.
Once we understand the user context, we can control risk through inspection and data
protection, and then we can enforce policies such as Allow, Block, Isolate, and Prioritize. Based
on attributes of the user and the device, Zscaler Internet Access covers SaaS applications and
internet applications. Zscaler Private Access configures connectivity to private applications and
resources hosted at Infrastructure as a Service, Platform as a Service, or your private data
center.
Zscaler integrates with multiple partners and we're specifically going to talk about our Identity
Management framework and how we integrate with Active Directory, Azure Active Directory,
ADFS, Okta, Ping, or really any SAML 2.0-compliant identity provider.
In the immediate sections that follow, we will talk about configuring SAML and SCIM for Zscaler Internet Access and Zscaler Private Access, and the power of the Zscaler platform to ensure that users get access to the right resources under the right conditions.
SAML Authentication
Components
Service Provider (SP): Also known as the Relying Party (RP) to the Identity Provider (IdP). It employs the services of an IdP for the authentication and authorization of users.
Identity Provider (IdP): Provides identifiers and identity assertions for users that wish to access a service. IdP examples include: Okta, Ping, AD FS, Azure AD.
SAML Assertions: Issued to users by the IdP and presented to SPs / RPs to confirm authentication. Trust is based on PKI.
How SAML Authentication Works
We've got the identity provider (IdP) on the left, the users in the middle, and the service provider
(SP) on the right is Zscaler. The applications the users will attempt to access on the right-hand
side could be public applications like salesforce.com or internal applications that are made
available through Zscaler Private Access.
The first thing that happens is a request is made for an application. Since the user is not
authenticated, at steps 2 and 3, they are redirected to authenticate at either Zscaler Internet
Access or Zscaler Private Access.
Depending on whether the application is public or private, this request to authenticate will, in
turn, lead to a SAML authentication request being sent to the SAML identity provider, which is
steps 4 and 5.
A SAML authentication request is a message that indicates to the identity provider that a
user must authenticate and that a SAML assertion should be returned to the SAML
service provider. The identity provider must be configured to trust this particular service
provider in order to honor the request and ultimately return a SAML assertion. At this
point, the user would be challenged by the identity provider to authenticate. The
authentication policy is controlled by the configuration at the IdP, so this could be a
simple username and password. Maybe it's Kerberos, or it could be multifactor
authentication.
Beyond just authenticating users, identity providers can perform additional actions, such as
retrieving additional user attributes and group memberships. This data can be included in the
SAML assertion.
The final thing the identity provider does is assemble the SAML assertion and cryptographically
secure it using a digital signature. The SAML assertion will be delivered to the service provider
via the user's browser. This is delivered using a form POST that is automatically submitted via
JavaScript, so the experience to the end user is the same as an HTTP redirect. This is shown
here in steps 6 and 7.
When Zscaler receives the SAML assertion, it validates the digital signature to ensure it came
from a trusted source and that the data wasn't tampered with in transit. Assuming it's
cryptographically verified, Zscaler issues an authentication token here at step 8 to the Zscaler
Client Connector or a cookie to the user's browser, depending on what client type is being used.
At this point, the user is authenticated at Zscaler, and the request for the application can resume
via the Zscaler Zero Trust Exchange in step 9.
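To make the service-provider side of this flow concrete, here is a hedged sketch of the checks an SP performs once the form POST arrives: decode the Base64 SAMLResponse, then check the assertion's validity window and audience before trusting it. This is illustrative only – Zscaler's actual validation includes full XML signature verification, which is omitted here, and the field extraction below uses naive string matching rather than a real SAML library.

```typescript
// Illustrative SP-side checks on a SAML response (signature validation omitted).
// Field extraction is deliberately naive; real code would use a SAML/XML library.
function checkSamlResponse(base64Response: string, expectedAudience: string): boolean {
  // The IdP delivers the response via an auto-submitted form POST, Base64-encoded.
  const xml = Buffer.from(base64Response, "base64").toString("utf8");

  // Pull out the validity window and audience with simple regexes (illustration only).
  const notBefore = /NotBefore="([^"]+)"/.exec(xml)?.[1];
  const notOnOrAfter = /NotOnOrAfter="([^"]+)"/.exec(xml)?.[1];
  const audience = /<(?:saml2?:)?Audience>([^<]+)</.exec(xml)?.[1];

  const now = Date.now();
  if (notBefore && now < Date.parse(notBefore)) return false;        // not yet valid
  if (notOnOrAfter && now >= Date.parse(notOnOrAfter)) return false; // expired
  if (audience && audience !== expectedAudience) return false;       // issued for a different SP

  // In production, the XML digital signature must be verified against the IdP's
  // certificate before any of the above can be trusted.
  return true;
}
```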
SCIM Authorization
Now that you have an understanding of SAML authentication and its workflow, let’s take
a look at the next identity integration, SCIM, which works to provide authorization and
revoke access for disabled users.
What is SCIM?
The system for cross-domain identity management (SCIM) is the standard for automating the
exchange of user identity information between identity domains and provides
automatically-driven updates to user attributes on changes in the home directory. It supports the
addition, deletion, and updating of users as well as the ability to apply policy based on SCIM
user or group attributes.
A standard schema is in place for defining resources (e.g. users, groups). Complex resource types are supported (e.g. with attributes, sub-attributes, multivalued attributes), and resources are encoded as SCIM objects in JSON.
The following operations can be conducted:
● Create: Add a resource (e.g. user, group)
● Read: Get information about a resource
● Update: Update the attributes of a resource
● Delete: Remove a resource
With a group-based policy, it is likely more reliable to leverage SCIM criteria, whereas applying policy based on the user's accessing device (is it trusted or untrusted?) would require using a SAML attribute as criteria. Because the SAML assertion is generated as users authenticate, it can include additional contextual information related to the authentication transaction.
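As a concrete illustration of the provisioning operations listed above, the hedged sketch below shows roughly what a SCIM 2.0 "Create user" request from an IdP to a SCIM service provider looks like. The endpoint URL and bearer token are placeholders rather than real Zscaler values; in practice, the IdP (Okta, Azure AD, and so on) issues these calls automatically against the SCIM base URL and provisioning token configured in the Zscaler admin portals.

```typescript
// Hypothetical SCIM 2.0 "Create user" call, as an IdP would issue it.
// The base URL and token are placeholders for illustration only.
const SCIM_BASE_URL = "https://scim.example.invalid/v2"; // placeholder endpoint
const BEARER_TOKEN = "<provisioning-token>";             // placeholder token

async function createScimUser(): Promise<void> {
  const response = await fetch(`${SCIM_BASE_URL}/Users`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${BEARER_TOKEN}`,
      "Content-Type": "application/scim+json",
    },
    body: JSON.stringify({
      schemas: ["urn:ietf:params:scim:schemas:core:2.0:User"], // standard SCIM core user schema
      userName: "jdoe@example.com",
      name: { givenName: "Jane", familyName: "Doe" },
      active: true, // flipping this to false later is how a disabled user loses access
    }),
  });
  console.log("SCIM create returned HTTP", response.status); // expect 201 Created on success
}

createScimUser().catch(console.error);
```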
Advantages and disadvantages of SCIM
SAML Attributes
● SAML attributes are static
● Only applied on authentication
● Only changed on reauthentication
● Can include device and authentication attributes
SCIM Attributes
● SCIM attributes are dynamic
● User- and group-specific
● They will be updated after a change in the source directory
● Frequency is IdP controlled
Whether you base policy on SAML or SCIM attributes is dependent on your use cases.
ZPA Support for SCIM 2.0
Operations Supported
Add Users: As they are assigned to the ZPA SP in the source IdP.
Delete Users: Remove ZPA access for users that are either removed from the ZPA SP in the source IdP, or are removed from the directory completely.
With SCIM enabled, read-only lists are created in ZPA for:
● SCIM users
● SCIM groups
● SCIM attributes
Users can only be managed in the source directory/IdP. Users, groups, and attributes are updated from the source directory as changes are made.
Synchronization happens periodically using the API:
● Update interval of ~40 minutes
● Manually triggered at any time
Updates are queued for sync from the IdP when:
● Users are added/removed to/from a group mapped to the ZPA SP
● Users are individually assigned to or removed from the ZPA SP
● Users are removed from the source directory entirely
● User attributes are changed in the source directory
Basic Connectivity
This chapter will explain the different mechanisms to connect to the Zero Trust Exchange, depending on use cases and locations, with an emphasis on best practices.
—–
Connecting to the Zero Trust Exchange (ZTE)
Zero trust components are established in the cloud, and users/devices, IoT / OT
devices, or workloads must establish a connection to this cloud so security controls can
be enforced.
Zero trust connections are, by definition, independent of any network for control or
trust. Zero trust ensures access is granted by never sharing the network between the
originator (user/device, IoT / OT device, or workload) and the destination application.
By keeping these separate, zero trust can be properly implemented and enforced over
any network. The network can be located anywhere, or it could be built on IPv6, as the
network is simply the means of connecting initiators to destination apps.
Throughout this chapter, we will dive deeper into the connectivity services outlined in the image
above including:
Zscaler Client Connector
Authenticated Tunnels
There are a number of different modes for Zscaler Client Connector to function when it's
forwarding traffic to Zscaler Internet Access. The recommended mechanism is to use the
Zscaler tunnel. The tunnel-based approach intercepts traffic at the network level and forwards
that traffic through an encapsulated tunnel to the Zscaler platform.
There are three authenticated tunnel options (meaning that once the user is enrolled in Zscaler Client Connector, the tunnel is established toward the Zscaler cloud, all traffic that goes into the tunnel is identified as that user, and user-based policy is applied):
Packet filter based: On Windows, Zscaler Client Connector instruments packet filters that grab traffic and steer it toward the Zscaler Client Connector process, which can then make a decision to forward it to the Zscaler cloud.
Route based: Route-based mode instruments an additional network adapter, which becomes the route for traffic generated from client applications.
Tunnel with local proxy: Creates a loopback address that appears as an HTTP/HTTPS proxy, and instructs the operating system's proxy setting to point the browser at that local proxy. The traffic is then tunneled toward the Zscaler cloud.
In addition, there are two non-tunnel forwarding options:
Enforced PAC mode basically instruments the PAC file in the browser, similar to what you'd get from a group policy object. That means that the browser itself is forced to go to Zscaler Internet Access via a specified proxy.
None means that the policy is not going to do any configuration of proxy or tunneling mode, and relies on the group policy object or the default configuration within the browser.
Z-Tunnel 1.0 vs. 2.0
Tunnel modes come in two formats: the legacy Z-Tunnel 1.0 and the modern Z-Tunnel 2.0.
Tunnel Failure: There are also connection timeout options and additional options for redirecting traffic to a local listener (Tunnel with Local Proxy), providing safe fallback within the client if the tunnel mode connection is not successful.
Forwarding Profile: Trusted Network Detection
Hostname resolution: Does a specific FQDN resolve to an expected IP address? If those two match, then the condition is true.
DNS search domain: The client receives a DNS search domain, provided by DHCP. If it matches the trusted network criteria configuration, the user is on the matching network.
DNS server: The client looks at the primary network adapter and determines which DNS server is being provided to it through DHCP. If those values are equal, then the DNS server condition is true.
Combining these criteria, we can require that all of them be true to identify the user or the device as being on a trusted network. That trusted network (and any number of different defined networks) can then be a condition for deciding how the different forwarding mechanisms are used at that location.
Forwarding Profile: Multiple Trusted Networks
Now we just need to define each of our multiple Trusted Networks so that we can then make
the decision as to which forwarding profile matches our desired outcome.
Forwarding Profile: Profile Action for ZIA
Forwarding Profile: System Proxy Settings
Also referred to as a forwarding PAC file.
It's important to understand the behavior of GPO updates, of forcing Zscaler Client Connector to set a proxy setting, and of forcing Zscaler Client Connector to set a WPAD script, making sure that there is no conflict between these.
You can then decide whether to use exactly the same settings for the untrusted network, the VPN Trusted Network, and the Off Trusted Network criteria.
Summary: It's really important to understand the distinction between a forwarding PAC and how the forwarding PAC is implemented within Zscaler Client Connector. With a tunnel mode configuration, we do not want to set any forwarding PAC file; instead, the client intercepts the traffic natively as the browser or client resolves an internet address and routes toward the internet, and it tunnels that traffic toward the Zero Trust Exchange through the DTLS tunnels.
Application Profile
Custom PAC URL: References the PAC file configured in the ZIA Admin Portal, making decisions on traffic that should be forwarded to or bypassed from the Zero Trust Exchange.
Restart WinHTTP (specific to Windows devices): Ensures that the system refreshes all of the proxy configuration once Zscaler Client Connector is established.
Install Zscaler SSL Certificate: Covered more in the next section. If you aren't pushing out your own certificates from your own Certificate Authority, then simply enabling this option will use the one provided by Zscaler.
Tunnel Internal Client Connector Traffic: Ensures that the health updates and policy traffic pass through the Zscaler tunnels toward the Zero Trust Exchange. More specifically, this traffic doesn't go direct to the Zero Trust Exchange; it stays within the zero trust tunnels.
Cache System Proxy: Ensures that Zscaler Client Connector stores the system proxy state from before it was installed or enabled, so that when Zscaler Client Connector is uninstalled or disabled, the system proxy configuration is reverted and the user can continue to function as before. It also allows Zscaler Client Connector to revert to a previous version of the software in the event of an upgrade issue.
These last two are about supportability in cases where the client needs to be uninstalled or reverted to a previous version, making sure users retain business continuity in the event of any issue with updates.
Deploying Zscaler SSL Inspection Certificates
Tunnel 2.0 Configuration
When we look at the Z-Tunnel 2.0 configuration, there are a number of options that we need to
consider.
Zscaler also provides the ability for inclusions and exclusions of DNS requests. Zscaler is a
DNS resolver, but it's important to understand that the client is going to get a DNS server from
its DHCP. If the client gets a DNS server from DHCP that is within the RFC 1918 address range,
the client may query that directly, and it's only once the connection comes through Zscaler that
we'll be able to see the traffic and make a DNS re-resolution request.
DNS requests are tunneled to the Zscaler cloud, and the Zscaler cloud performs the DNS
resolution. It's not necessary to configure Zscaler as the DNS server since this configuration
intercepts any DNS request to any DNS server and redirects it or tunnels it to the Zscaler cloud
for the Zero Trust Exchange to perform the DNS resolution.
Forwarding Profile PAC vs App Profile PAC
Let's explain the difference between a Forwarding PAC and an App PAC in more detail.
Forwarding Profile PAC: Controls the system PAC file – which HTTP proxy is to be used for a URL: the tunnel with local proxy listener or another explicit proxy. It has no bearing on where Client Connector will route traffic, only where the user's apps will send traffic.
Application Profile PAC: Routes traffic AFTER the Client Connector has received it. Used to determine the geographically closest Zscaler Enforcement Node (ZEN).
A Forwarding Profile PAC gets defined within the forwarding profile and it steers traffic toward or away from Zscaler Client Connector. It's essentially the system PAC file, stating which HTTP proxy is going to be used for a specific URL. If it's the PAC file for a Tunnel with Local Proxy, it's going to point traffic at the loopback address or another explicit proxy. It has no bearing on where Zscaler Client Connector will route traffic, only where the user's applications will send traffic. A user's application could be the Internet Explorer browser, Edge browser, Chrome browser, or Firefox; they would receive the Forwarding Profile PAC, which makes the decision on how that browser will treat the HTTP traffic and what proxy server it will send it to.
The Application Profile PAC steers traffic toward or away from the Zscaler cloud – after traffic has been intercepted by the tunnel mode or after traffic has been directed to it with the local proxy. The Application Profile PAC then processes the traffic and makes a decision as to which Zscaler node (ZIA Public or Private Service Edge, or ZPA Public or Private Service Edge) is going to process the request afterwards. Finally, the App Profile PAC is used to determine the geographically closest Zscaler enforcement node to process it.
ZIA: PAC Files
Within the Zscaler Internet Access (ZIA) Admin Portal, we can define the PAC files that are hosted on the cloud. PAC files are essentially JavaScript functions that take two inputs – the full URL being requested and the hostname – that are dynamically provided by the browser, or in the case of the App Profile PAC, through the inspection process.
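To make that concrete, below is a minimal, hypothetical forwarding-style PAC function. PAC files are plain JavaScript; the declare statements exist only so this sketch stands alone as TypeScript, and the internal domain, address range, and local proxy port are illustrative assumptions rather than values taken from this guide.

```typescript
// Standard PAC helper functions supplied by the PAC runtime (declared here only
// so the sketch is self-contained as TypeScript).
declare function dnsDomainIs(host: string, domain: string): boolean;
declare function isInNet(ipOrHost: string, pattern: string, mask: string): boolean;
declare function dnsResolve(host: string): string;

// Every PAC file exposes this function; the browser (or Client Connector) calls
// it with the full URL and the hostname for each request.
function FindProxyForURL(url: string, host: string): string {
  // Keep internal destinations off the proxy (illustrative values).
  if (dnsDomainIs(host, ".corp.example.com")) {
    return "DIRECT";
  }
  if (isInNet(dnsResolve(host), "10.0.0.0", "255.0.0.0")) {
    return "DIRECT";
  }
  // Everything else: hand off to the local Zscaler Client Connector listener
  // (Tunnel with Local Proxy; the port shown is illustrative), falling back to DIRECT.
  return "PROXY 127.0.0.1:9000; DIRECT";
}
```

An App Profile PAC has the same FindProxyForURL structure, but instead of the local listener it typically returns the Zscaler node that should handle the request (Zscaler-hosted PAC files can use gateway variables for this).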
Migration:
If you're migrating from an on-premises proxy to the Zscaler Internet Access platform, existing PAC files may be migrated to Zscaler. Here you would migrate to Zscaler using Tunnel with Local Proxy and bring in the existing browser configuration, which remains the same. The PAC file simply returns Zscaler Client Connector as the proxy, and Zscaler Client Connector tunnels that traffic to the Zero Trust Exchange. Because it's an authenticated tunnel, the Zscaler Zero Trust Exchange understands who the user is.
As a migration step, it's a very simple process to take your existing PAC file and move it into Zscaler Client Connector. However, the recommendation is to use the Z-Tunnel 2.0 forwarding mechanism, so you need to consider how to move from an existing explicit proxy configuration to Zscaler tunnel mode.
ZIA: Browser Behavior - PAC to Tunnel Mode
Site Authentication
The browser automatically understands how it authenticates to intranet sites, so it will do Kerberos, NTLM (New Technology LAN Manager), and Integrated Windows Authentication (IWA) to websites that are automatically defined as being in the intranet zone. The intranet zone is defined as sites which bypass the proxy, so a site which has a DIRECT statement in the PAC file is automatically identified as an intranet site. Therefore, if such a site challenges for authentication, the browser will automatically authenticate and the user will be signed in. If you remove the PAC file configuration and move to tunnel mode, the browser's definition of what is an intranet site is lost. This means the user may be prompted to authenticate to intranet sites, which is often seen as a side effect of migrating from PAC file to tunnel mode.
Tunnel Mode - Packet Filter Based - Z-Tunnel 2.0
The application is going to generate some traffic, and it may understand what a Forwarding PAC file is – it might look at the Internet Explorer settings and bring in the PAC file. That PAC file decision determines whether the traffic should be sent to Zscaler Client Connector. The client is also going to recognize anything that is a ZPA (Zscaler Private Access) segment, based on how it resolves.
The bottom line is that there are going to be exclusions and inclusions. Anything that's included will be sent through to Zscaler Client Connector, or Zscaler Client Connector will physically intercept the traffic. Anything that is bypassed through the packet filters will go directly out to the internet.
Tunnel Mode - Packet Filter Based - Z-Tunnel 1.0
With Z-Tunnel 1.0, it's important to understand that at the network layer only traffic destined for ports 80 or 443 is intercepted. So if the application generates packets on port 80 or 443, they'll be intercepted and passed to Zscaler Client Connector.
If the traffic is explicitly proxied – for example with a Tunnel with Local Proxy configuration – it'll be passed to the local adapter that's listening and, again, into Zscaler Client Connector.
Or, if it's a Zscaler Private Access segment, we'll intercept that and send the traffic to Zscaler Client Connector. Anything that is none of those – not a ZPA segment and not on ports 80 or 443 – will inherently bypass Zscaler Client Connector and route directly out through the interface to the internet.
Tunnel with Local Proxy Flow
With Tunnel with Local Proxy as the only configuration, you explicitly need the Forwarding Profile PAC to target traffic directly at the local listener on Zscaler Client Connector. This means the application needs to be proxy-aware: it needs to understand the configuration of the Zscaler Client Connector proxy listener, and it will then forward the traffic to Zscaler Client Connector.
Zscaler Client Connector then passes traffic to Zscaler Internet Access. Any other traffic that's not ZPA will pass out directly to the internet.
Tunnel Mode - Route-Based Flow
In route-based mode there is an additional, routed network adapter configured on the machine. In a scenario where Tunnel with Local Proxy is not configured, the application generates traffic, it follows the routing table, and that traffic routes into the Zscaler Client Connector IP address. The Application Profile PAC processes the traffic, and the traffic either routes to the Zscaler cloud or bypasses and routes directly to the internet.
ZIA Enrollment Process
When Zscaler Client Connector is launched, it needs to enroll and authenticate in order to understand who the user is, what policy to apply, what tunnels to create, and how to identify the user through those tunnels.
As the Zscaler Client Connector launches, it's going to talk to the mobile admin portal (Zscaler
Client Connector Portal) and understand what domain the user is in and what SAML identity
provider the user should authenticate against.
The user receives that IdP redirect and they are redirected to their SAML IdP, such as Okta,
ADFS (Active Directory Federation Service), Azure AD (Active Directory). The user will sign into
the SAML IdP and receive a SAML response within the Zscaler Client Connector process. That
SAML response is provided to Zscaler Internet Access, which consumes the response,
validates it, and if the response is valid, then the user receives an authentication token back to
Zscaler Client Connector.
Zscaler Client Connector provides that token to the Zscaler Client Connector Portal, which
validates the token and registers the device. At this point, the Zscaler Client Connector Portal
understands who the user is, fingerprints the device, consumes that device information, and
passes that device registration through to Zscaler Internet Access.
Zscaler Internet Access then provides the client credentials so that when the user makes a
request through the Zscaler service, it can authenticate the user and it uses the Zscaler identity
token to authenticate the client through the platform.
ZPA Enrollment Process
With Zscaler Private Access, again, the client is launched as part of the authentication process. It already understands the domain that the user is in from the Zscaler Internet Access enrollment, so there's an immediate registration attempt, followed by a second IdP redirect, as Zscaler Internet Access and Zscaler Private Access are controlled as two separate SAML relying party trusts. During this second authentication round, where Zscaler Client Connector talks to the SAML IdP, the user will typically sign in transparently because they're already signed in from the Zscaler Internet Access enrollment. There may be a multifactor authentication at this point, but the IdP authenticates the user and returns the SAML response back to Zscaler Client Connector.
Zscaler Client Connector provides that response token and registers the device with the Zscaler Client Connector Portal, which passes that registration through to Zscaler Private Access. The Zscaler Private Access enrollment then enables the Zscaler Client Connector certificates to be generated, and Zscaler Client Connector is enrolled in Zscaler Private Access.
Zscaler Client Connector then generates the secure tunnels to the Zero Trust Exchange,
through which the profile and settings are downloaded so the client receives the information
about the Zscaler Private Access applications that they're able to access.
Client Connector Intervals
There are multiple intervals where Zscaler Client Connector will refresh the information it has
about the applications, the app profiles, the forwarding profiles, the PAC files, and the policy.
Every Hour
Every hour Zscaler Client Connector will
connect and download any policy updates
for the app profiles and forwarding profiles.
If the PAC files or URLs are changed, it will
automatically update every hour as this
counts as a profile change.
Every 15 Minutes
However every 15 minutes, Zscaler Client
Connector will download the PAC file of the
app profiles and the forwarding profiles in
case they have changed.
Manually
The end user can also manually initiate an update from within Zscaler Client Connector, forcing a check for software updates or a check for policy changes.
Rotating Passwords with App Profiles
Zscaler Client Connector is locked down to prevent users from logging out, disabling, or uninstalling the application. These actions are password-protected; the password is generated on a per-configuration basis and is available to support personnel through the administration interface, and it can be provided to users if there's a need to uninstall, disable, or log out for some service-affecting reason.
One-Time Passwords
Administrators and support teams are encouraged to only use the one-time, per-device password and NOT the global passwords within the App Profile, which can be reused.
Device Posture and Posture Checks
BYOD vs. Corporate Devices: Does the device trust a root CA which is only internal to an organization? This enables us to consider whether it's a BYOD (bring your own device) device versus a corporate device. We can also check for client certificates and ensure that the client certificate has a non-exportable private key.
Endpoint Protection: We can understand the security of the endpoint and use this in policy to provide access to applications. We can also interface with third-party endpoint protection, such as CarbonBlack, CrowdStrike, SentinelOne, and Defender, and use the CrowdStrike ZTA score to make policy decisions.
Installing Client Connector
To install and maintain the Client Connector, follow this basic process (always refer to the online help for each of the command line installation options):
Download the install file: The files are available through the Zscaler Client Connector Portal. The install files are hosted on AWS, so it is possible to copy the link in order to distribute it to users.
Install on devices: Command line options exist for Windows, Mac, and Linux clients, allowing for silent installations. The strictEnforcement option requires the cloudName and policyToken options, ensuring that the user is automatically directed to the right cloud with the right authentication token, and that the user cannot access the internet until they are enrolled.
Update the client: Group-based updates can be readily applied for automatic rollout, such that specific versions can be applied to specific groups of users – useful for testing and staggered rollouts.
Troubleshooting: Should you encounter any issues with the installation, logs can be exported from within the client or the built-in packet capture can be used.
App Connectors
App Connectors provide a secure authenticated interface between a customer’s servers and the
ZPA cloud. They do so by establishing connections out through the firewall to the Zscaler Cloud,
and the Zero Trust Exchange facilitates a reverse connection. At no point are the internal/private
servers exposed by public/external DNS or through inbound DMZ firewall holes.
Deploy Connector Groups – Always refer to the online help and use it as
a checklist when deploying app connectors.
group or the connectors will automatically perform DNS resolution and create synthetic server associations that advertise those applications. This is the default (recommended) configuration, and it is not recommended to move away from Dynamic Server Discovery unless there is a very specific reason.
IP addresses should be used sparingly and only where an application is accessible by an IP address, or where the payload indicates that it needs to be connected to by IP address.
Pulling it Together - Where Each Component Fits
Browser Access & Privileged Remote Access
Browser-Based Access provides connectivity to HTTP and HTTPS applications through a web browser, without the Zscaler Client Connector being installed. This core connectivity capability also provides access to Privileged Remote Access applications such as SSH or RDP.
Zscaler Browser Access enables users to authenticate to internal websites from anywhere,
without needing to manage a DMZ, Internet Edge, or a VPN—all with the same user experience
as a direct website connection. A User Portal provides a graphical view within the browser of
those browser-based access applications that the user has access to.
With added protection from the OWASP (Open Web Application Security Project) Top 10 and
Custom Signatures to inspect the web content, as well as ZTNA policy for least privileged
access, security gets a huge boost over legacy website access methods.
Now, we can immediately provide day-one access for subsidiaries, acquisitions, or partners to access applications. We can further provide limited access to suppliers, contractors, customers, and other third parties through this mechanism, without the need for them to install the Zscaler Client Connector software (BYOD included – which comes down to the trust of the users and the risk associated with accessing those applications without a client-based endpoint solution).
How Browser Access Works
Configuration Overview
Fundamentals: SSL is always used for the outside connection, whereas HTTP or HTTPS may be used internally.
It’s important that the client trusts that server certificate. This might
be publicly signed by a public certificate authority (the private key
never leaves the Zscaler cloud), or it could be an internal certificate
authority where only your internal clients trust that root certificate
authority. Once the user has the connection to the Zero Trust Exchange
and the policy permits them access, an inside-out tunnel is created
between the App Connector and the Zero Trust Exchange.
There are three tunnels: The client to the Zero Trust Exchange, the
App Connector to the Zero Trust Exchange, and Zero Trust Exchange
to the application through the App Connector.
ZPA Admin Configuration (always refer to the online help, using it as a checklist): As with any other application, the website application must be in an application segment. But in this case, as DNS is utilized, FQDNs are required. If desired, also create a User Portal for added user convenience, especially if there are multiple Browser Access enabled applications, linking those applications to the portal.
DNS Configuration: The Zscaler CNAME (alias) provided by ZPA will be put in the public DNS.
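As a quick sanity check after publishing that record, a hedged one-off script like the following can confirm that the public hostname now resolves to the ZPA-provided alias. The hostname used here is a placeholder, not a real value.

```typescript
// Hedged sketch: confirm the public CNAME for a Browser Access application.
// The hostname is a placeholder for illustration only.
import { resolveCname } from "node:dns/promises";

resolveCname("intranet.example.com")
  .then((records) => console.log(records)) // expect the ZPA-provided alias from the admin portal
  .catch((err) => console.error("CNAME lookup failed:", err));
```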
Privileged Remote Access (PRA)
PRA is an authenticated remote desktop gateway/SSH gateway that relies on Zscaler’s Service
Edge and the App Connector to allow a user to access IT and OT servers, desktops, and
workstations using their browser, typically through an authenticated web portal.
Similar to the user portal, users connect to the Zero Trust Exchange, and the Zero Trust Exchange provides authenticated access to jumphosts, workstations, servers, and IT and OT equipment, all within the browser. The console session is streamed, meaning that no data is stored on the user's device, and the user's device has no direct access to the application. This eliminates the need for firewalls and DMZs, and the jumphosts, workstations, and servers can be limited to accepting connections from the Zscaler App Connectors' IP addresses. And, based on policy, users can only access the consoles they're permitted to access.
The key driver is the common use case of supporting BYOD devices so the user never needs a
corporate device to be able to access privileged resources. Because it's an unmanaged device,
you can provide secure access for contractors, suppliers, and other third parties to perform
privileged access, such as administration and maintenance tasks.
As with Browser Access, specific configuration steps are contained within the online help and
should be referenced during the process.
Platform Services
Platform Services will allow you to explore fundamental capabilities that Zscaler offers including
Private Service Edges, Device Posture, TLS Inspection, Policy Framework, and Analytics &
Reporting. Gain an overview of Zscaler’s fundamental Platform capabilities. Dive deeper into
how these functionalities interact with other services within Zero Trust Exchange and gain
knowledge on how to configure Zscaler’s Platform Services as they relate to Zscaler best
practices.
—–
Zscaler's Platform Services Suite
Included in Zscaler’s holistic Zero Trust Exchange is Zscaler’s Platform Services suite. This
important suite contains a set of fundamental functionalities that are common across Zscaler’s
other services suites such as Connectivity, Access Control, Security, and Digital Experience.
Device Posture
Device Attributes
As the client initiates a SAML authentication request, whether for Zscaler Internet Access or Zscaler Private Access, a SAML response is returned. Once that response is consumed, Zscaler can apply policy based on its attributes.
TLS Inspection
As part of the Platform Services suite of capabilities, TLS Decryption or Inspection works to
inspect content and enable various Access Control, Cyber Protection, and Data Protection
functionalities to apply policy based on the content of those encrypted communications.
● Zscaler decrypts and inspects 100% of TLS traffic without constraints.
● Zscaler provides controlled and rapid deployment and operations.
● Zscaler ensures optimal cipher selection and key safeguards.
● Zscaler mitigates any access risk.
● Zscaler allows you to measure coverage and value, and troubleshoot, so you gain instant awareness.
Within the Zero Trust Exchange, there are several facets of how TLS inspection works. The first
is Access Control—URL Filtering and Cloud Firewall functionality apply policy based on the
request and the response.
The second part is based around compromise—the actual payload that's coming from or going
toward the web server such as malware inspection, antivirus, the Advanced Threat Protection,
the IPS signature, and cloud sandbox functionality.
Finally, there's the data loss or data protection side. Inline DLP means scanning the payload that's coming from the client to make sure that malicious or careless users are not leaking data out to the internet. It also enables Granular Application Controls, not just on the FQDN or URL that's being accessed, but across the entire URI that's being connected to.
All of this is built as a scalable platform, assuming 100% of the transactions will be SSL and 100% of those could be decrypted. Generating intermediate certificates at line speed for all users and all locations enables the best security and data protection outcomes.
As noted, without TLS inspection, security controls are effectively blind to any malicious payloads, data leakage, and emerging threats.
When you look at an encrypted HTTPS transaction, there is nothing visible to a viewer. The only item that one can see from a TLS handshake without inspection is the server or domain name, as seen in the image below.
When you decrypt an HTTPS/TLS transaction, the following becomes visible:
- HTTP Headers
- Request and Response Headers
- Full Request URL
- Request Method
- All of the Payload
How Does SSL Inspection Work?
ZIA: Here, ZIA is a forward proxy doing SSL man-in-the-middle inspection. The client makes a request to the website and the ZIA Service Edge sits in between. On the other side is a connection from the Service Edge to the originating web server, receiving the server certificate back and validating it (signed by a trusted issuer, date is valid, issuer is valid, and the content within the certificate is valid). ZIA will then generate a certificate on the fly, signed by a trusted issuer, and provide that certificate back to the client. As far as the client is concerned, they're getting a certificate for the web server, but we're generating that on the fly as a man-in-the-middle certificate.
ZPA: With ZPA it is in essence a reverse proxy, becoming the web server that the user is connecting to. There's a Client Hello toward ZPA, and then there's a connection from the App Connector to the application, where the real service certificate is provided to the App Connector. The App Connector or the ZPA Service Edge provides a real service certificate back to the client. That certificate may be the same one from the application, or it may be a certificate that was specifically uploaded to the ZPA platform for those websites that we are proxying through it.
Short version: Zscaler Internet Access (ZIA) is a forward proxy interception and Zscaler
Private Access (ZPA) is essentially a reverse proxy SSL termination interception.
From a client (end user) perspective, they could look at the certificate chain within their browser
session to see which certificate is actually being used. In this case we can see that while they
are going to a Wikipedia site, in reality the certificate being used is the one from Zscaler,
providing a clear indication that the traffic is now being intercepted and inspected.
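For a scripted version of that browser check, the hedged sketch below opens a TLS connection from the client and prints the issuer of the certificate actually presented; on a ZIA-inspected device, the issuer will show Zscaler's intermediate CA rather than the site's public CA. The destination is simply the example from the screenshot above.

```typescript
// Hedged sketch: show which issuer signed the certificate the client receives.
// On a ZIA-inspected device, expect a Zscaler intermediate issuer.
import * as tls from "node:tls";

const host = "www.wikipedia.org"; // example destination, as in the screenshot above

const socket = tls.connect(
  // rejectUnauthorized is disabled only because Node uses its own CA bundle,
  // which may not contain the organization's (or Zscaler's) inspection root.
  { host, port: 443, servername: host, rejectUnauthorized: false },
  () => {
    const cert = socket.getPeerCertificate(true); // include issuer details
    console.log("Subject:", cert.subject.CN);
    console.log("Issuer: ", cert.issuer.O, "/", cert.issuer.CN);
    socket.end();
  }
);

socket.on("error", (err) => console.error("TLS connection failed:", err.message));
```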
Safe to say, Zscaler knows perhaps more than any company on Earth how to roll out SSL
inspection at scale. Let’s break it down into manageable steps.
Pre-Work: Ensure that you have agreement within the organization that SSL inspection is going to be deployed, and that it is being deployed to support the business. It's not there to spy on what users are doing, but to ensure that the business can continue to function: making sure devices aren't infected with malware, and ensuring that data doesn't leak out in ways that affect customer perception or the reputation of the business. This includes defining the acceptable usage policy and any notifications that are going to be provided to users.
It's important to get buy-in from the legal teams, privacy leaders, and security teams, so they understand why this is being done.
This is often a misunderstood step, and lofty goals for deploying SSL inspection can get wrapped up in lots of red tape – workers' councils, for example, being concerned about spying on users and what they're doing. It's really important to set a level within the organization of why TLS inspection is being done: for the security of the business and the business's reputation, understanding what that encrypted data is, and how Zscaler handles it.
Zscaler obfuscates the user and device data and never stores any of the content of the payload to disk. It scans for data protection and for infection, making sure malware doesn't come into the organization and data doesn't leak out, and it helps identify and block command-and-control services.
Finally, develop a communication plan for the users: why are we doing this? Update the acceptable usage policy and send out notifications so users understand how to check whether or not SSL is being inspected, and what decisions they can make as to whether they're going to continue using their work device to connect to these websites while SSL inspection is taking place.
Root CA Enrollment: Once we have buy-in, we're going to roll out the root certificate authorities to devices and make sure that all client devices trust the intermediate certificate, and therefore the certificates presented for inspected sessions.
There are multiple kinds of SSL inspection. The default that comes with Zscaler is the Zscaler root certificate authority, which has a chain of trust through the intermediate certificate, the short-lived temporary certificate, and the web server domain certificate that's going to be issued on the fly.
Customers also have the ability to bring their own certificate authority
and you might take two different routes for this.
certificate will be installed automatically on devices that run the
Zscaler Client Connector.
Initial Roll-Out: Then we'll roll out and do some inspection for a select group of users and for specific categories, collecting feedback from the users on their experience.
This all starts with configuring the base set of rules before the pilot phase and continuing throughout it.
With this policy, we can enforce minimum TLS versions for both the client and the server, as well as control how certificate trust is handled. Is it passed through? Do we block untrusted certificates? Do we block undecryptable traffic? And will we perform an OCSP (Online Certificate Status Protocol) check for the certificate, to determine whether it is valid or has been revoked?
Granular rule-based engine
● User/group/department
● URL Category/Cloud App
● Destination IP/FQDN group
● Device: Name, OS, Trust Level
For a pilot rule set, we're going to specify that for users in a group called Pilot SSL, we're going to inspect some very specific categories. We're going to block untrusted certificates, do the OCSP revocation check, block undecryptable traffic, and ensure that the minimum client and server versions are TLS 1.0; the default rule is then 'do not inspect'.
It's also important then to block QUIC (and similarly Apple Private Relay) within the firewall, which forces the client to fall back to normal HTTP/HTTPS over TCP 443 so that SSL inspection can be performed for that traffic. We continue to roll out, and we take a look at applications and client environments that may have challenges with inspection.
Extended Roll-Out: Now we'll figure out deeper problems that might exist, such as certificate pinning, how to handle developer environments and IoT environments, and the policy around services like Office 365.
Certificate pinning or hard-coded certificates – what is that? It means
that the client is going to check for the certificate and expects a specific
certificate to be returned. The man-in-the-middle certificate that we
deliver will not be trusted. It's not the certificate that's been expected, so
the application will fail. A good example of this would be something like
Dropbox. The Dropbox client expects a specific certificate to be
delivered with a specific serial number, signed by a specific issuer,
therefore the man-in-the-middle certificate we deliver isn't trusted, so
the Dropbox client will fail. However, Dropbox within the browser will
continue to work.
Troubleshooting
A packet capture will help you understand what's going on. You could
look at the certificate when the client makes the request and understand
that the certificate was delivered to the client, but the client closed the
connection very quickly once the certificate was delivered. That
indicates that the client, although connected, decided it didn't trust the
certificate and closed down.
We can then also understand by looking in the SSL logs for those
transactions within the Zscaler administration interface (now known as
ZIA Admin Portal) that the client failed the handshake. The client closed
the connection, and therefore we need to do something about this. So
what can you do? You can bypass inspection based on the specific operating system, or on the operating system combined with an application, and you can also make that decision based on the user agent.
Measure & Report: Throughout the rollout, it makes sense to measure and report on the capabilities.
● How much SSL inspection has been done? Quantify the value of
SSL inspection towards the business.
● And then we can also see these TLS versions and ciphers as
we go through to see how this is changing over time and the
value that Zscaler brings for the SSL inspection journey.
The Protocol Report, Security Audit Report (Cyber Risk Report) and
SSL Policy Reason field are all go-to resources to measure the
success.
Policy Framework
Authentication Policy
● Client Connector - Was the user prompted to enter a username? The domain (after the @) maps to the Identity Provider, and the user is redirected to the Identity Provider to authenticate.
● Client Connector - Was the client installed with a userDomain option? The domain maps to the Identity Provider.
● Browser-Based Access - Multiple Identity Providers configured? The user is prompted for a username/email. The domain (after the @) maps to the Identity Provider, and the user is redirected to the Identity Provider to authenticate.
● Browser-Based Access - Single Identity Provider configured? The user will automatically be redirected to the Identity Provider.
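The bullets above boil down to a simple mapping from the domain portion of the user's login (or the configured user domain) to an Identity Provider. The hedged sketch below shows that selection logic in miniature; the domains and IdP names are invented for illustration and are not part of any Zscaler configuration.

```typescript
// Hedged sketch of domain-to-IdP selection (domains and IdP names are invented).
const idpByDomain: Record<string, string> = {
  "example.com": "Okta",
  "acquired-co.com": "Azure AD",
};

// Given what the user typed (or the configured user domain), pick the IdP that
// the authentication request should be redirected to.
function selectIdp(loginHint: string): string | undefined {
  const domain = loginHint.includes("@") ? loginHint.split("@")[1] : loginHint;
  return idpByDomain[domain.toLowerCase()];
}

console.log(selectIdp("jane@example.com")); // "Okta"
console.log(selectIdp("acquired-co.com"));  // "Azure AD" (domain passed directly)
```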
Looking at the different policy framework and components, think about how the Zero Trust
Exchange identifies the user. Can we give anything as part of install parameters or configuration
to understand which identity providers should be used to authenticate the user? Based on that,
should the user be allowed to connect to the Zero Trust Exchange?
We go through an authentication round. The SAML IdP will control whether or not it grants
SAML assertions to the user based on its policy. Zscaler consumes that assertion and then
makes a decision as to which parts of the Zero Trust Exchange the user is entitled to connect to.
Once the user is connected, we can assess information about how the user is connecting: whether they're using Browser-Based Access or Privileged Remote Access, whether we're going to drive them into isolation, or whether they're using Zscaler Client Connector. Zscaler Client Connector can give us the trusted network policy to understand which network the user is on and what services should be enabled.
Based on that network policy, we can make decisions on how the user connects. Should they
connect to public Service Edges or private Service Edges?
Based on the Zscaler Client Connector information, we can determine whether we should update the user's version of Zscaler Client Connector and which profiles should be installed – both the application configuration and the forwarding configuration on the client. Then, based on that information, the SAML information that's been provided, and device posture, we can make decisions on whether Zscaler Internet Access, Zscaler Private Access, or Zscaler Digital Experience is enabled.
There are multiple use cases where we might only want to enable Zscaler Internet Access for a user and not Zscaler Private Access or Zscaler Digital Experience, or vice versa. Then, based on those SAML authentications, SCIM provisioning, the SOAR solution, the SIEM solution, and analysis of user access, we can build a risk posture and pass that through to make decisions about whether the user is allowed to access applications through the Zero Trust Exchange, both public and private. We can understand if the device is managed or unmanaged and make policy decisions to allow access to applications.
And then also making decisions about how we inspect SSL traffic. What should we do about
inspection? Should traffic be inspected or not? How do we handle errors with the original web
server certificate? If it's unsigned, if it's invalid because of date or SNI (Server Name Indication)
missing, do we pass that through, or do we just inherently block that connection?
The Zero Trust Exchange needs to have some information about how to authenticate the user.
We're deploying the Zscaler Client Connector as we see in the top image. Or in the bottom
image there, we're using browser-based access. If we install a Zscaler Client Connector with
user domain and cloud name information, we'll automatically map to an identity provider and
redirect the user to that identity provider to authenticate. That authentication could then be
certificate-based, form-based, could be multifactor, could be transparent authentication.
Single vs Multiple Identity Providers
Most organizations will have a single Identity Provider
During Mergers & Acquisitions, or Cloud Migration, multiple Identity Providers may be necessary
Configuration:
● Add IDPs - Add the IDPs in the Administration Configuration
● Configure Domain - Domains map Identity Providers to user domains
● Login - In Zscaler Client Connector or Browser Based Access, the user may be
prompted to enter a credential
● Policy - The Policy maps the domain to the IDP. The user may be prompted, or the
prompt may be bypassed through installer options in Zscaler Client Connector
With browser-based access, if there is a single domain associated with the browser application
or the tenant, again, the user will be automatically redirected to the identity provider to
authenticate. However, if multiple domains exist, then the user may be prompted to enter a
credential to drive the decision criteria as to which identity provider we're going to redirect them
to.
That might be based on the location the user is coming from, on a hint provided by Zscaler Client Connector, or on the user being prompted to enter a credential that includes a domain, which Zscaler uses to trigger the redirect to the identity provider.
Obviously, we don't want to get in the way of a good user experience. If we can make the decision on behalf of the user, the user experience is better. There are times where we will have to prompt the user to enter something in order to redirect them, but in most cases we can avoid that by putting hints into the client or moving to a single identity provider environment.
Service Entitlement
After authenticating to Zscaler Internet
Access, the SAML attributes are consumed
and passed to Zscaler Client Connector
Portal, where the policy controls whether
the user will be enrolled in Zscaler Private
Access and Zscaler Digital Experience
Configuration:
● Add IDPs - Add the IDPs in Zscaler
Internet Access
● Configure Group Attributes - which
groups is the user in
● Set Entitlement Policy in Zscaler
Client Connector Portal - Which
groups are allowed access to ZPA or
ZDX
● Alternatively - Policy set to Enabled
by Default
The service entitlement can be based on user information (if the user is in a certain group), or
we can enable it for all users.
Do we enable things like Machine Tunnel? With Zscaler Internet Access we can base that on
device posture. We can understand the posture of the device as a managed or unmanaged
device, making the decision of whether the user even gets enrolled into Zscaler Internet Access.
Zscaler Digital Experience uses exactly the same approach as Zscaler Private Access, based on group attributes. Group attributes are synchronized to Zscaler Internet Access as the user authenticates, and those groups are then used for the entitlement policy for Zscaler Private Access or Zscaler Digital Experience.
SCIM attributes are synchronized periodically, and therefore flow periodically into the Zscaler
Client Connector Portal. Or if we're just using SAML, those group memberships are transferred
immediately through to the Zscaler Client Connector Portal for policy.
Analytics & Reporting
The logging of all transactions as they pass through the Zero Trust Exchange is extremely important so that organizations are able to report on user activity and perform analytics that inform decisions for future policy.
Logging Architecture
When a user makes a transaction through the Zero Trust Exchange, the logs pass through a log router, which decides where those logs will be stored.
A log entry is created on transaction completion: Web (including SSL), Firewall, and Sandbox transactions are logged. Content passing through an enforcement node is never stored, only processed and forwarded; the content (payload) is never stored, only the transaction is logged.
The data is then tokenized on the ZIA Public and Private Service Edges, and the ZPA Public and Private Service Edges, so that the amount of data is reduced while making sure that the data isn't human-readable.
Having been both reduced and tokenized, it is compressed and sent to the log servers for storage. The end result is a 50-to-1 or greater compression rate, with all of the indexing happening at the point the log is created, so that when the logs are consumed they are very efficient to analyze.
This enables the creation of big data output via interactive reports, meaning that you can drill
down into individual transactions. Whether it's SSL and certificate reports, URL category
reports, dashboards showing the company risk score, security audit reports, or threat insight
reports, the data and value is easy to extract.
Trending data analysis across the platform, and comparing your organization's capabilities with industry standards or with other organizations in your industry, further helps you understand how traffic flows through the platform from your users and where the applications they're accessing are located.
For the executive needs, the Zscaler Executive Insights tool enables executives to instantly get
reports on how Zscaler is helping their business, how they're reducing the number of threats
coming into the organization, and where they compare to peer organizations and cloud risk
scores.
For the threat hunting teams out there, there are even deeper forensic reports, historical sandbox data, and detailed information on patient zero potential infections.
Zscaler Digital Experience
Digital Experience will introduce you to Zscaler's digital experience monitoring capabilities, which work to analyze, resolve, and troubleshoot user experience issues. Dive deeper into the components of Zscaler Digital Experience, along with how to configure, monitor, and troubleshoot these features and functions as they relate to Zscaler best practices.
—–
70
Introduction to Zscaler Digital Experience
ZDX Overview
The rapid adoption of cloud and mobility initiatives within organizations and a shift to
work-from-anywhere have introduced new monitoring challenges for IT teams. Digital
experience monitoring for a hybrid workforce requires a modern and dynamic approach, as IT
teams need to continuously monitor and measure the digital experience for each user from the
user perspective, regardless of their location.
Traditional monitoring tools take a data center-centric approach to monitoring and collecting
metrics from fixed sites rather than directly from the user device. This approach does not
provide a unified view of performance based on a user device, network path, or application.
Zscaler provides this unified view through our Digital Experience monitoring solution that sits on
top of the Zero Trust Exchange. Zscaler Digital Experience (ZDX) helps IT teams monitor digital
experiences from the end user perspective to optimize performance and rapidly fix offending
application, network, and device issues.
71
The power of ZDX lies in being able to use these calculated scores to drill into issues when a score is visibly low. What, however, might cause a good score to go down? The items below highlight the common issues every organization must face:
72
Here are just 5 of the key features that are commonly utilized:
Visibility into SaaS & Private Applications
ZDX provides visibility not only into an organization's zero trust environments but into their private and SaaS applications as well.
73
Software & Device Inventory
Software and Device Inventory is based on the different endpoint metrics that Zscaler collects from the user's device.
Software Inventory
Software Inventory allows you to view current and historical information about software versions and updates on your users' devices.
Device Inventory
Device Inventory allows you to view current information about your organization's devices and their associated users.
74
Y-Engine (Automated Root Cause Analysis)
ZDX's Y-Engine (Automated Root Cause Analysis) allows an organization to automatically isolate root causes of performance issues, spend less time troubleshooting, eliminate finger-pointing, and get users back to work faster.
ZDX APIs
ZDX's APIs integrate digital experience insights with popular ITSM tools like ServiceNow to provide additional insights and trigger remediation workflows.
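As a rough illustration of the kind of workflow these APIs enable, the sketch below pulls poorly scoring devices and opens an ITSM incident. The base URLs, endpoint paths, and field names are hypothetical placeholders rather than the documented ZDX or ServiceNow API surface, so treat this as the shape of the integration, not a drop-in script.
```
import requests

ZDX_API = "https://api.zdx.example"  # hypothetical base URL, for illustration only
SNOW_API = "https://instance.service-now.example/api/now/table/incident"  # hypothetical

def low_scoring_devices(token: str, threshold: int = 40) -> list:
    """Fetch devices (hypothetical endpoint) and keep those below a ZDX Score threshold."""
    resp = requests.get(f"{ZDX_API}/v1/devices",
                        headers={"Authorization": f"Bearer {token}"}, timeout=10)
    resp.raise_for_status()
    return [d for d in resp.json().get("devices", []) if d.get("score", 100) < threshold]

def open_incident(device: dict, snow_auth: tuple) -> None:
    """Open a ServiceNow incident for a poorly scoring device (fields illustrative)."""
    payload = {"short_description": f"Low ZDX score ({device['score']}) on {device['name']}"}
    requests.post(SNOW_API, json=payload, auth=snow_auth, timeout=10).raise_for_status()
```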
75
ZDX Use Cases
Did you know that hybrid work has increased ticket resolution time by 30%? Did you also know
that almost 70% of businesses rely on virtual meetings (according to Metrigy research)?
Zscaler addresses both of these use cases and more with its powerful end-user monitoring
capabilities. Here are just six common use cases that ZDX addresses:
● Real-time Detection of SaaS Outages
● Baselining Performance Between Office and Working from Anywhere
● Visibility into Private Applications via ZPA
78
ZDX Architecture overview
Understanding the basics of ZDX's architecture is important, as it will help you to more clearly navigate, configure, and troubleshoot the various features, functionalities, and issues that arise within the Digital Experience console.
Applications come as predefined or custom. Predefined applications have minimal configuration needs, usually along the lines of providing the tenant ID, while custom applications require at least one web probe to be created.
79
Probes
Web Probes
Web Probes always pull objects from the server and are used to collect metrics like:
● Page Fetch Time - network fetch time for the specified URL
● DNS Time - time it took to resolve the DNS name
● Server Response Time - time to the first byte
● Availability - is the service available, yes or no

Cloud Path Probes
Cloud Path Probes discover the network elements of the application, basically what are the network hops the user is taking on the way to the application.
Metrics collected include:
● Hop Count
● Packet Loss - for each hop
● Latency Information
Protocols include:
● Adaptive - the best protocol for each leg in the cloud is selected by an auto-discovery process
● ICMP - default value, processed by router CPU
● TCP - processed by router ASIC, immune to rate limiting
● UDP - some routers only respond to UDP packets, RFC recommended port of 33434
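To make the web probe metrics concrete, here is a minimal sketch of what such a probe conceptually measures (DNS time, server response time, page fetch time, availability). It is an illustration of the metric definitions above, not the ZDX probe implementation, and the target URL is only an example.
```
import socket
import time
import urllib.request

def web_probe(url: str, host: str) -> dict:
    """Measure DNS resolution time, approximate time to first byte, and total fetch time."""
    t0 = time.perf_counter()
    socket.getaddrinfo(host, 443)              # DNS Time
    t_dns = time.perf_counter()

    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read(1)                           # first byte -> Server Response Time
        t_first = time.perf_counter()
        resp.read()                            # remaining body -> Page Fetch Time
    t_done = time.perf_counter()

    return {
        "dns_ms": round((t_dns - t0) * 1000, 1),
        "server_response_ms": round((t_first - t_dns) * 1000, 1),
        "page_fetch_ms": round((t_done - t0) * 1000, 1),
        "available": True,
    }

print(web_probe("https://www.example.com/", "www.example.com"))
```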
80
As the data from the Cloud Path Probe is collected, administrators and support staff have deep
visibility and insight at their fingertips, greatly reducing resolution time for existing issues and
even preventing future ones.
Command-Line View
81
Monitoring Digital Experience
The ZDX dashboard provides an overview of application performance and user experience, providing filters to focus on any scenario that might arise. Each application shows a ZDX Score based on the selected time range, and the probes act on the user's behalf, so there is no need for users to interact with the application to generate the data that drives the dashboard.
82
83
Y-Engine
Helps you get to the root cause of a problem quickly, automating your root cause analysis for the impacted ZDX Score. This includes the ability to compare the same data point to past ones.
UCaaS Monitoring
Monitoring and looking at call quality for users (Zoom, Teams) over time, with the ability to focus on specific meetings (participants, locations, devices…).
84
Software & Device Inventory
Provides the ability to drill down all the way on a specific user or device to learn what software is present, at what level, to help correlate whether specific software or devices might be impacted.
85
Access Control
Access Control Services extend segmentation and policy control with capabilities such as
Firewall, DNS, URL Filtering, and more. In this chapter we extend segmentation and policy
controls to understand how the Zero Trust Exchange applies policy for applications, as well as
how the Zero Trust Exchange handles DNS and shortest-path selection for application
experience optimization.
Gain an overview of Zscaler’s Access Control capabilities, dive deeper into specific policy
controls for applications, and gain knowledge on how to configure Zscaler’s Access Control
Services as they relate to Zscaler best practices.
—–
86
Access Control Overview
The challenge of legacy firewalls
Traditional legacy on-premises firewalls are no longer suitable in a world where users require access to the corporate network anywhere, anytime, on any device.
The challenge is that legacy firewall appliances use zone-based architectures. They establish barriers between trusted internal and untrusted external networks, where user policies and criteria are applied. This poses three main risks for organizations around security, performance, and cost and complexity.
Here are the common LEGACY FIREWALL risks and their consequences:
Zscaler solves these challenges through its holistic approach to providing a platform of services
that enables organizations to bring true zero trust to every endpoint - whether a user,
application, or IoT device.
87
Zscaler's Access Control Services Suite
Here are the three most important use cases that cloud generation firewalls enable enterprises to benefit from.
The first and foremost use case is being very adaptive and consistent when it comes to accessing applications, no matter where you are, especially post-COVID, with hybrid work becoming the norm for all enterprises. Making sure customers get the same stack of next generation firewall capabilities and security irrespective of their location is critical. As mentioned earlier, traditional on-premises appliances end up with inconsistent security postures because remote users have to be handled with different policies and different appliances than what sits at the physical sites or branch offices.
The second key use case is for customers to migrate from their hub-and-spoke architecture to a more direct-to-internet architecture, making their most important SaaS applications, like M365, Salesforce, and other key applications, more secure. It is very important to have a product with the capabilities to fully identify, gain visibility into, and prevent all sorts of threats from an access control perspective.
And the third use case: whenever an end user is trying to access an application over the internet, DNS plays a critical role. Optimizing and securing DNS acts as a first line of defense for many enterprises, preventing half of the threats right at the DNS level itself. And the most important capability from an NG (next generation) firewall perspective is to provide scalable cloud intrusion prevention and detection that has complete context.
URL Filtering
URL Filtering is the first line of defense that an organization needs to leverage in order to provide effective and efficient access security control for users. This allows those enterprises to:
Let's discuss the top use cases in which enterprises deploy URL Filtering or are leveraging this capability.
The basic use case starts with access control with deeper granularity: different departments and different users need to access certain websites but not others. So it acts as a simple access control mechanism based on business needs, covering who should access what content, from which device, and from which location they can access the content.
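A minimal sketch of that idea follows, assuming a toy first-match rule model with made-up categories and departments; the real URL Filtering policy engine has far more criteria (users, groups, locations, devices, time), but the evaluation shape is similar.
```
from dataclasses import dataclass

@dataclass
class UrlRule:
    categories: set   # URL categories the rule applies to
    departments: set  # who the rule applies to ("*" = everyone)
    action: str       # "ALLOW", "BLOCK", or "CAUTION"

rules = [  # illustrative rules; first match wins
    UrlRule({"Social Networking"}, {"Marketing"}, "ALLOW"),
    UrlRule({"Social Networking"}, {"*"}, "BLOCK"),
    UrlRule({"Uncategorized"}, {"*"}, "CAUTION"),
]

def evaluate(category: str, department: str) -> str:
    for rule in rules:
        if category in rule.categories and ({department, "*"} & rule.departments):
            return rule.action
    return "ALLOW"  # default action in this toy model

print(evaluate("Social Networking", "Marketing"))  # ALLOW
print(evaluate("Social Networking", "Finance"))    # BLOCK
```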
89
The second top use case we are increasingly seeing involves newly added websites. Oftentimes it is very difficult for any platform to quickly identify whether a website is good or bad, because a lot of these websites are registered while there isn't yet any content. By the time you flag a website as good, there is a possibility that the content has turned bad: bad actors have leveraged that domain for posting content and used it for phishing and other suspicious activity. So the key use case is when there is only a certain notion of whether a destination is safe or unsafe. That is where the isolation of web pages is very critical for customers. Oftentimes, things are not black or white, they could be gray, which means the categorization cannot be done immediately, or more time is needed to flag a site as genuinely good or as harmful. It takes a certain amount of time and insight to really get to that categorization.
90
certain supported operating systems of endpoints, and those use
cases can be easily handled as well.
● Cautioning Users
● User-Agent Based Policies
● Time-Based Policies
● Rule Expiration
● Bandwidth Quota Supported
Bandwidth Control
Bandwidth Control is one of the core capabilities of Zscaler's Access Control Services, which are ready to provide secure connectivity to the internet and private applications.
91
4. Need to limit bandwidth for Windows and iOS updates
Microsoft Office 365 (M365)
By leveraging various Access Control Services already discussed, including URL Filtering and Bandwidth Control, as well as additional Platform and Connectivity services such as TLS Inspection, the Policy Framework, and the Zscaler Client Connector, the Zscaler Zero Trust Exchange enables organizations to deploy M365 and ensure an optimized user experience.
92
The best practice is to:
93
Segmentation & Conditional Access through Policies
Zscaler's Private Application Access securely makes connections into an organization's private applications regardless of the user's location and device.
Why Segmentation?
Segmentation limits network access to only the application or resource required. Contrast this with traditional VPNs, which provide access to all resources on the network when the user or device connects.
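The contrast can be sketched in a few lines of Python. This is purely conceptual, with made-up application and group names: the point is that a VPN-style decision exposes the whole network, while a segmented decision brokers one application at a time.
```
def vpn_access(user_authenticated: bool) -> str:
    # Legacy model: once on the network, every resource is reachable.
    return "10.0.0.0/8 reachable" if user_authenticated else "no access"

APP_SEGMENTS = {"crm.internal.example": {"Sales", "Support"}}  # app -> allowed groups

def segmented_access(app: str, user_groups: set) -> str:
    # Zero trust model: access is brokered per application, never to the network.
    allowed = APP_SEGMENTS.get(app, set())
    return f"broker connection to {app}" if allowed & user_groups else "no access"

print(vpn_access(True))
print(segmented_access("crm.internal.example", {"Sales"}))
print(segmented_access("crm.internal.example", {"Engineering"}))
```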
94
The core use cases for leveraging Private Application Access
connections and segmentation are:
1. Remote Access
2. Third Party Access
3. Segmentation
4. Mergers & Acquisitions (M&A)
5. Transformation
95
Cybersecurity Services
Go deep into the essential security capabilities of the Zero Trust Exchange. The Zero Trust
Exchange is fundamentally a security platform, and in this chapter, we will explore its traffic
inspection capabilities and how Zscaler’s Single-Scan, Multi-Action functionality optimizes the
inspection of traffic. We will also learn about how deception works in a zero trust environment.
You will get an overview of Zscaler's cybersecurity and protection capabilities, dive deeper into
Zscaler's advanced threat protection and antivirus as part of the Zscaler security service suite,
and learn how Zscaler provides detection and response through its alerting framework.
—–
96
Cybersecurity Overview
Cyberattacks are becoming more and more common. Attackers’ techniques are becoming more
sophisticated as ransomware, phishing, malware, and other attacks hit one after the other.
More than ever, it is critical for every organization to have a set of cybersecurity services that
analyze organizational risk and defend against cyberattacks so they can rest assured they will
not be compromised.
Zscaler solves these challenges through a holistic platform of services that stops these attacks
before they can cause harm.
Before diving into what cybersecurity functions Zscaler provides and how they should be
configured, let's look at a clear overview of:
To start, we should understand how frequently these attacks occur. A significant data breach
makes the news at least every few months. For instance, we recently heard about the data
breach at companies like Twilio. The above are some examples of notorious attacks, like Colonial Pipeline and SolarWinds.
97
This gives a clear overview of both the cyberthreat and cybersecurity landscapes.
To understand the current cybersecurity landscape and some of the problems our customers
are facing, it is important to understand three points:
1. Attackers are increasingly using automation, and it has become exceedingly easy for just
anyone to launch an attack. Many attacks, the Colonial Pipeline attack for instance, use
ransomware-as-a-service, where ransomware is run on demand by a third party. Many
attacks also involve credential theft, including phishing, often done using phishing kits,
which are widely and readily available for any popular productivity suite, like Microsoft
365. With these premade kits and services, you don't have to be an expert at coding to
launch these attacks.
2. Over the last decade, many enterprise customers have invested a great deal of money in
cybersecurity, steadily acquiring multiple best-of-breed products to stop advanced
attacks. Unfortunately, acquiring many different point products creates operational
complexity, and integration is often difficult. Context is not shared across these products,
so it's fragmented, making it difficult for anyone to get the full picture of threats, in
addition to creating the third problem, the adoption gap.
3. The adoption gap has been an issue for some time, but when you have multiple point
products, it gets compounded. Generally, you have a lot of technical debt when you're
replacing a legacy product with the next-generation product. After replacement, you
have inertia to move away from that technical debt, compounded even further when your
point products don't talk to each other. Attackers take full advantage of this adoption gap:
for instance, a lot of these attacks happen with customers using VPN products.
98
If you focus on attacks, they all have basically the same story. The MITRE ATT&CK framework
breaks down 12 different stages of an attack, but you can simplify it into the above four
high-level stages.
The first stage is about the attack surface. Attackers are looking for exposed endpoints. These
could be your exposed public servers, your VPN users etc. Once attackers have found the
attack surface, they can use different techniques to execute their initial compromise. This is
where they may send a phishing document or spear phishing email. They'll try to lure victims to
a website where they can download a malicious file, or perhaps the website itself is running
active malicious JavaScript. Once they succeed and land on a target system, the first thing they
want to do is find your most critical and sensitive data and assets.
Attackers want to move laterally to identify those, and they can do this in various ways. If your
network or environment is not segmented, or if your applications are otherwise exposed, it
becomes very easy for them. They can use techniques like “living off the land” to find out what
your most sensitive assets are. Once they get to those assets, they start stealing data. For
example, in the case of ransomware, they can use the stolen data in a “double extortion” attack,
where they encrypt your data in addition to exfiltrating it, giving them extra leverage.
Any attack, from advanced supply chain attacks to ransomware, can be mapped to this simple
framework. Now, let’s look in a little more detail at how this manifests in an attack, and the
specific Zscaler products that can stop attacks at these stages. Suppose an attacker has gained
initial entry using a vulnerable VPN.
Once the attacker has gained initial entry, then they move on to the initial compromise, where
they will establish some level of foothold with spear phishing or broad credential phishing. They
may deliver malware using an innocuous looking .docx file with a malicious macro inside it. As
soon as the user opens or installs this file, the malware installs itself. At that point, the attack
can start moving laterally. It may use techniques like malvertising or keylogging to steal
credentials, or start figuring out what and where your other sensitive assets are, such as your
domain controllers.
99
In one of these attacks, attackers found that a specific domain controller held passwords for other domain controllers and other infrastructure, and that is what they used for privilege escalation. Once they do that, the next phase is to steal the data, and after that theft of data, they install the ransomware and demand payment. So this is a good example of how ransomware attacks proceed. Underneath, you can also see all the different capabilities that come together to stop this.
In our attack surface, we have ZPA capabilities. To prevent initial compromise we have a lot of
our ZIA capabilities around secure web gateway, IPS, Cloud Sandbox, and Cloud Browser
Isolation. Again, to eliminate lateral movement, we have ZPA for users, we have ZPA for
workloads, we have our Deception capabilities. And, last but not least, to stop data loss, we
have our data protection capabilities around cloud DLP, cloud CASB, and Workload protection.
So, now that we have understood how a lot of these attacks happen, what is the right approach to stopping them?
● First and foremost, we believe the right way to solve these attacks is a platform approach. Now, you may hear about a platform approach from everyone out there in the industry, but when we say a platform approach, it means a very adaptive platform. It means a platform that is scalable, that can inspect SSL at scale for all your users, without you having to worry about how much of the traffic you can decrypt. It has to be a platform that supports APIs, where you can signal into the platform and signal out of the platform, which makes it very programmable, and it has to be a platform that uses AI and ML to learn constantly, adapting itself to the most sophisticated attacks and delivering superior outcomes.
● The second thing that I want to talk about is an automated and integrated platform. Any
product that has to solve cyber security challenges today has to do both of these.
Essentially, we want to deliver accelerated outcomes for our customers by leveraging
automation and reducing the time it takes to detect and respond. Integration means that
we should be able to integrate with other products that a customer has acquired over the
last few years. We should be able to talk to their SIEM. We should be able to talk to their
EDR products. What we mean by automation is when we find out that something or a
specific endpoint or system is malicious, how do we quickly quarantine it? How do we
signal across the entire platform that this specific user is a compromised user and we
need to limit further damage?
100
● And last but not least, which is very, very critical, is the concept of layered defense. Now,
defense in depth has been around as a concept for a very long time, but when you put
layered defense as a platform approach versus point product, it works a lot better. The
reason it works a lot better is because the context is not fragmented, it is shared. With
the Zero Trust Exchange, you know what user, what identity is trying to get to what
resources, and you can inspect the content and you provide layers of protection so that
you increase the cost of the attacks so much for the attacker that they actually give up
and move on to a different target. A good example of this would be how advanced Cloud
Sandbox and Cloud Browser Isolation work together or work hand in hand. You can in
fact use both of them in tandem to not only secure your users, but also deliver a very
compelling user experience.
For example, while you analyze and detonate the file in a sandbox environment, you can
give your user a PDF-rendered safe version of the same file using isolation technology.
You can also protect them from going to suspicious websites. So we'll go into all those
details in each of these specific sections, but this is what we believe is the right approach
to stopping attacks.
101
Now, this is how it looks for the Zscaler Zero Trust Exchange Platform. At the bottom you have the connectivity services, above which sit the platform services, above which we have built access control, followed by the security services and data protection services, and then digital experience. We will go into each one of these sections in more detail and see how you actually use the Zscaler products to achieve the best possible security outcomes for customers. Now, if we step back a little and see how this all comes together, this is what it actually looks like.
Then further inside, you do a full and complete analysis using content inspection and SSL inspection, followed by the likes of PageRisk. PageRisk is a proprietary technology that we have built, which dynamically calculates, in-line, different risk attributes of any given website on the world wide web. Then we have technologies like Browser Isolation, where we can actually stop someone from going to a suspicious website. This kind of attack is called a watering hole attack, where a commonly known website has malicious content, like malicious JavaScript, running on it. This is where we can provide another layer of defense using Browser Isolation, followed by File Type Control, followed by our Cloud Sandbox technology, which uses advanced AI / ML and behavioral analysis to find out whether a file is malicious or not.
And at the end, we deliver safe content to the user. All of this comes together to look like a Zero
Trust Exchange Platform, which maps to the same four-stage attack model we discussed
above. So you have the reduced attack surface. This is where our ZPA product offering comes into play, where you can give privileged remote access and private access to applications. Your applications are not visible on the internet. Then the second piece is to stop
initial compromise. This is where a lot of ZIA capabilities around secure web gateway, advanced
threat protection, Cloud Sandbox, Cloud Firewall, IPS, and Browser Isolation come together,
followed by how you stop lateral movement. This is where Deception capabilities and policy
segmentation kind of capabilities come together, followed by the prevention of data loss. This is
where again, the secure web gateway DLP capabilities along with Browser Isolation and Cloud
Sandbox come together.
To summarize, we have covered why attacks continue to happen, what has changed, what the current threat landscape looks like, and how customers are struggling. We have also discussed why a platform approach is the right approach, with a little detail on the different components of that approach within the Zscaler Zero Trust Exchange Platform.
103
Advanced Threat Protection
Advanced Threat Protection is one of the key capabilities of Zscaler's Secure Web Gateway portfolio within Zscaler Internet Access (ZIA). It protects users going out to the internet against common attacks such as phishing.
Gaining access to phishing kits and creating phishing websites to enable these attacks has become extremely easy. This is why an organization's need for a threat protection capability is so strong in today's digital world.
It is also important to understand command and control channels as they are a part of every
cyber attack. Once a phishing attack occurs and a user is directed to malicious content, the
following typically happens:
One way to block any attack is to disrupt this command and control channel. Zscaler has the
power to do this through our Advanced Threat Protection capability.
Our controls allow us to disrupt these known and even unknown command and control channels
so that your users are always protected.
This further allows us to create an early warning system for your enterprise, ensuring that all of
these capabilities are working together to provide a layered defense.
104
This broad spectrum of services is what comprises Advanced Threat Protection:
● Reduce the attack surface with policy to control access to sanctioned SaaS applications, URLs, and categories.
● Identify and prevent access to potentially dangerous content, such as dangerous file types.
● Block known malicious sites, IPs, and URLs through IOC exchange with industry peers, Cloud Effect, Threat Research, and PageRisk.
Content Type
Content can be blocked using all of these individually configurable settings:
For example, looking just at high risk files, these could be binary files or executable files from untrusted locations.
Blocking any Windows executable files from websites that are not categorized is a great measure. It's completely okay for users to download an .exe file from a Windows website, but if it's an uncategorized website, then most likely that file is going to be malicious.
105
Newly Registered & Observed Domains
There are three domain approaches to implement when it comes to domain defense; they are:
106
● Newly revived domains are very unique and differentiated, because some really sophisticated attacks, such as the SolarWinds supply chain attack, have used a regular domain that had been sitting idle for many years. The attackers acquired that old existing domain, which had built a good reputation, and repurposed it to serve command and control activity for that specific attack. As you can see in the graph here, we get this feed (also from Farsight) where a domain shows certain activity within a few days and then just disappears. Anything that has gone offline for more than 10 days and has come back again is captured as a newly revived domain.
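A minimal sketch of that rule is shown below, using the 10-day threshold mentioned above; the activity-date input is made up for illustration and is not the Farsight feed schema.
```
from datetime import date, timedelta

def is_newly_revived(activity_dates: list, quiet_days: int = 10) -> bool:
    """Flag a domain as newly revived if its observed activity has a gap longer
    than `quiet_days` days and then resumes."""
    dates = sorted(activity_dates)
    return any((later - earlier) > timedelta(days=quiet_days)
               for earlier, later in zip(dates, dates[1:]))

seen = [date(2023, 1, 2), date(2023, 1, 3), date(2023, 5, 20)]  # long gap, then activity again
print(is_newly_revived(seen))  # True
```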
108
● Cloud Sandbox, where these malicious files are detonated in a sandbox environment. Here they are closely observed for what kind of servers they're establishing command and control channels to, and then, using the Cloud Effect, we deliver all of that intelligence through Advanced Threat Protection to all customers instantaneously (even a customer who does not have advanced Cloud Sandbox still gets this intelligence via another customer who may have actually downloaded a sample in advanced Cloud Sandbox).
Malicious Active Content & Server Side Vulnerabilities
Malicious active content and server-side vulnerabilities. These could be:
● Malicious content and sites
● Malicious ActiveX controls
● Browser exploits
● File format vulnerabilities
109
When it comes to cross-site scripting protection where a web
server has vulnerabilities that allow malicious threat actors to inject
code into the site, that is what we can block using these settings.
110
● Suspicious Content Protection (aka PageRisk)
○ A multi-data algorithm is applied to the web page (not the file)
○ The algorithm determines the riskiness
○ The page is blocked based on a customer-set threshold
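PageRisk itself is proprietary, so the sketch below only illustrates the general shape of the idea: several risk attributes contribute to a page score, and the page is blocked once the score crosses a customer-set threshold. The attribute names and weights are invented for the example.
```
WEIGHTS = {  # invented attributes and weights, for illustration only
    "obfuscated_javascript": 40,
    "newly_registered_domain": 30,
    "suspicious_forms": 20,
    "bad_reputation_links": 10,
}

def page_risk(attributes: dict) -> int:
    """Sum the weights of the risk attributes observed on the page."""
    return sum(WEIGHTS[name] for name, present in attributes.items()
               if present and name in WEIGHTS)

def verdict(attributes: dict, threshold: int = 50) -> str:
    return "BLOCK" if page_risk(attributes) >= threshold else "ALLOW"

print(verdict({"obfuscated_javascript": True, "suspicious_forms": True}))  # BLOCK (score 60)
```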
111
With this technique, we are even able to detect some of the most advanced phishing sites, which use man-in-the-middle attacks, where the adversary has created an entire phishing infrastructure that front-ends the entire website to the user while, on the backend, still transmitting all those credentials to the actual website to give the user a very native end-to-end experience. The user does not know that anything being entered on that front-ended website is actually being sent to the attacker. These attacks have become exceedingly common. We have seen this technique being used often, which is why this is a very powerful capability where we can even block patient zero phishing pages.
And there are some statistics here which will of course change over time, growing bigger and bigger. On a daily basis, we are able to discover more than a hundred botnets. The important thing to understand here is that we also have the ability to detect and block unknown command and control, which is a very strong way to block any attack, because every attack that you see today, from the most commodity-based to the most advanced, will always use command and control. So blocking and disrupting that command and control channel also allows us to block a lot of these attacks.
112
113
Antivirus / Malware Protection
Antivirus or Malware Protection is a key component of how Zscaler protects organizations and
their users from malicious files and attacks. Like Advanced Threat Protection, Antivirus sits
under Zscaler’s Cyber Protection capabilities in our Security Services suite.
To understand this capability, we first need to be able to identify common malware types that are
targeting the enterprise.
● Malware used to steal sensitive information from target systems. Common families include Trickbot, Qakbot, Agent Tesla, and Ursnif.
● Tools deployed after the adversary has gained access. Common tools include Mimikatz, Meterpreter, and Empire.
● Malware that can provide full remote access to a target system. Common families include Nanocore, njRAT, and Remcos.
114
Phishing
Phishing, specifically when you're delivering a file, is called spear phishing. This is using email to deliver malware, either as an attachment to that email or as a link the user will click; unbeknownst to them, the file will download. This is the most common delivery mechanism used today.
Exploit Kits
Exploit Kits are essentially malicious code looking to exploit browsers or vulnerabilities within browsers. This was very popular when Internet Explorer was still a common browser, but as Google Chrome has become more popular, we are seeing less and less of exploit kits. Still, there is plenty of Internet Explorer out there, there are still plenty of exploit kits out there, and we are seeing some advanced exploit kits even for the most common modern browsers like Google Chrome.
Watering Hole
A watering hole is when you take a very popular website and put malicious content on it. This could be malicious JavaScript or a malicious drive-by download. When anybody goes to that website, they will, unbeknownst to them, download a specific piece of malware that was placed on the website. The way attackers do this is typically by renting advertising space on the website, because the creator of that website may not completely track what advertisements are coming in and how they're being rendered. So a very common way to land malicious code on a website is to use the advertisement space.
Pre-existing Compromise
More than a few years ago, Forbes.com was one of the websites that was a victim of a watering hole attack. The last delivery mechanism is a pre-existing compromise. This means that the compromise or unauthorized access is initially executed by a different operator and then sold to the highest bidder. The attackers will compromise a device and then see that a different attacker can perhaps make better use of it, so they will sell that access off to the other attacker.
115
signatures that are mostly file based. With these we block the most common threats in malware. A lot of these signatures or engines make up the AV engine that is running. These are signatures that identify binary payloads, and many of them are based on MD5 hashes where we know that the file is malicious. A lot of this can also be done using AI / ML, where we can identify whether a file that is being downloaded is malicious or not.
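The hash-matching part of that is simple to picture. The sketch below checks a file's MD5 against a small set of known-bad hashes, using the public EICAR test string as the example; it only illustrates signature lookup and is in no way the Zscaler AV engine.
```
import hashlib

KNOWN_BAD_MD5 = {"44d88612fea8a8f36de82e1278abb02f"}  # MD5 of the EICAR test string

def verdict(file_bytes: bytes) -> str:
    """Block a download when its hash matches a known-malicious signature."""
    return "BLOCK" if hashlib.md5(file_bytes).hexdigest() in KNOWN_BAD_MD5 else "ALLOW"

eicar = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"
print(verdict(eicar))     # BLOCK
print(verdict(b"hello"))  # ALLOW
```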
116
Let's take a deeper look at how the detection and response workflow looks with this capability.
There's also a lot of contextualized information about this specific alert. TrickBot, for instance, is a banking trojan, which means someone has inadvertently downloaded a banking trojan. The natural next step is to find out who this specific user is, so you go to impacted systems. Once you are within impacted systems, you will see the number of systems by department and the number of impacted systems by location. Here you can find out that a total of 64 systems have been impacted. The usernames are shown along with the client IP, the first time these systems were impacted, what department they're in, and what location they are at.
For customization, admins can create their own alert rules and receive notification of these alerts outside of the UI via email or through webhook support for third-party applications including ServiceNow, Slack, Teams, OpsGenie, PagerDuty, and Splunk.
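As an illustration of the webhook path, the sketch below pushes a correlated alert to a generic incoming-webhook URL; the URL and payload fields are placeholders, and each third-party tool defines its own expected format.
```
import requests

WEBHOOK_URL = "https://hooks.example.com/alerts"  # placeholder for a Slack/Teams/ITSM webhook

def forward_alert(alert: dict) -> None:
    """Push a correlated alert out of the UI to a third-party webhook consumer."""
    message = {"text": (f"[{alert['severity']}] {alert['threat']} on "
                        f"{alert['impacted_systems']} systems (first seen {alert['first_seen']})")}
    requests.post(WEBHOOK_URL, json=message, timeout=5).raise_for_status()

forward_alert({"severity": "HIGH", "threat": "TrickBot",
               "impacted_systems": 64, "first_seen": "2024-01-15"})
```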
This is our capability around detection and response, where we built a correlation engine within
the ZIA product that can actually take all these logs, correlate them, and provide very
meaningful actionable consumable alerts that the SOC team can use to go and do meaningful
detection and response activity.
117
Basic Data Protection Services
Basic Data Protection Services will allow you to explore the breadth of the data protection
capabilities of the Zero Trust Exchange.
Gain an overview of Zscaler’s Data Protection capabilities, dive deeper into specific functions,
and gain knowledge on how to configure Zscaler’s Data Protection Services as they relate to
Zscaler best practices.
—–
118
Data Protection Overview
What is Zscaler Data Protection
The adoption of SaaS and public cloud has rendered data widely distributed and difficult, if not impossible, to secure with legacy protection appliances. As such, it is easy for both careless users and malicious actors to expose enterprise cloud data.
In addition, Zscaler realized that the number one data exfiltration channel is no longer the
USB and external drive on the endpoint, but rather a user's personal cloud storage,
collaboration, and cloud-based personal email applications.
The Zero Trust Platform far exceeds standalone DLP (data loss prevention) or CASB (cloud access security broker) products. This enables delivery with high performance, high scalability, high accuracy, and efficacy. And from a data security perspective, what we deliver is a solution that protects your data - data in motion, data at rest, as well as the data sitting on BYOD (bring your own device) and unmanaged assets. We do not deliver it in isolation; it is all part of our Secure Service Edge platform. Secure Service Edge (Zscaler Private Edge and Zscaler Public Edge) combines protection of data from external threats as well as insider threats - your employees and their activities in different cloud channels and other channels.
119
When Zscaler talks about data protection, essentially we are talking about two different
segments.
● DLP: From a DLP perspective, how do we do data loss prevention for cloud traffic? How do we protect your data on the endpoint? How do we protect your sensitive data through email, your corporate Exchange, and Gmail? And how do we protect your sensitive assets and crown jewels when the data is sitting with your private apps? All of these capabilities behind the scenes use our data classification engine, which is essentially a data loss prevention engine.
Let’s review, via 4 use cases, the drivers behind Zscaler’s Data Protection strategy:
Cloud Application Data Loss
How do we prevent data loss to internet and cloud applications? In this mode, we are a man-in-the-middle proxy. All your internet-bound traffic is egressing through us, and we are inspecting every single transaction on the wire. Most of your internet-bound traffic today is HTTPS-encrypted, so at ingress we will crack open that SSL connection, and once we do, we are ready to inspect your content, your payload. We do that with different types of DLP classification techniques. Then we enforce policy based on the user groups and departments they're coming from; on the destination side, this is based on cloud applications, URL category, specific activities within those applications, and so on.
The simple use case example here is: why are my users uploading zip folders and encrypted files to a random website, why am I seeing so many users uploading my Office documents to an application called Pdfconverter.com, or why is user John uploading sensitive PII (personally identifiable information) data to his personal OneDrive account? We are delivering this visibility in real time with our man-in-the-middle proxy, and all the policies that we are enforcing are real time. The actions could allow the transaction, block the transaction, or monitor the user, coach the user, send a user notification, and so on. Once again, these are all in-line transactions. This is our strength, this is our bread and butter, because when you have to monitor all your internet-bound traffic and all these transactions, you really have to think about scale, speed, and efficacy.
BYOD and Unmanaged Assets
How do you protect your sensitive data from BYOD and unmanaged assets? If you think about the situation we are in today, post-COVID, everybody's working from home. Most of the time they're using their corporate assets, but once in a while they're using their personal Mac or their personal Windows machine and going straight to their critical cloud-based applications like Office 365 and Salesforce.
Remember, these devices are unmanaged assets. That means there is no footprint on the device: there is no Zscaler Client Connector, there is no special PAC file.
The company, the IT admin, the DLP admin - they have absolutely no visibility into what users are doing when they're connecting to these critical cloud applications. So in this mode, we interject ourselves at authentication time, and through SAML proxy and identity proxy we identify whether the user is coming from an untrusted device. Once we identify an untrusted device, we forward the session to our Cloud Browser Isolation. Once the traffic is using Cloud Browser Isolation, we control that entire channel and enforce different types of conditional access policies so that we can always protect your data.
Data at Rest
How do we protect data at rest? When organizations deploy applications like Office 365 or Google, or even from a public cloud infrastructure perspective, they're probably storing a lot of data in their S3 (Amazon Web Services (AWS) Simple Storage Service) buckets, Azure Blob storage, and GCP (Google Cloud Platform). There is no concern if the user is storing sensitive data, because these applications essentially are their corporate applications - their official storage and collaboration applications. Once again, if a user from finance is uploading financial statements and storing them in corporate OneDrive, that behavior is okay. But what we see in the market is that once the data sits there, there is a tremendous amount of accidental data loss through data exposure.
Cloud Misconfiguration
A lot of data exfiltration and data loss today is happening because of cloud misconfigurations. Misconfiguration at the application level is done by admin users as well as end users. With our SSPM module and our third-party app integrations (SaaS-to-SaaS API connections), we monitor these misconfigurations, and whenever we find a serious violation, we trigger remediation actions.
These are some of the top use cases that are really driving our data protection adoption today. We have extended the same classification stack, the data protection stack, to protect the data that sits with your private apps. These private apps might be hosted in your on-premises locations or in public cloud infrastructure, but again, it is the same data classification engine protecting your data in private apps.
122
Protecting Data in Motion
Inline Data Protection
There are four Data Protection capabilities that Zscaler provides through the Zero Trust
Exchange to ensure data security for Data in Motion.
At the same time, from a Cloud Access Security Broker (CASB) perspective, Data in Motion
means that CASB is running in inline forward proxy mode and with Browser Isolation (Isolation
Proxy).
When you think about in-line data protection, there are several use cases that are very critical
for you to protect your data in-line in real time.
Shadow IT and Data Discovery
Everything starts with visibility. You can't really secure what you don't see, so application discovery for Shadow IT (applications that are not corporate-sanctioned and are perhaps being utilized by individual users) comes first, with different views available.
123
The risk algorithm is designed with about 75 attributes at the
backend.
Application
When our research team does application research, they're essentially taking that application and putting it in a sandbox:
● What kind of encryption and SSL channel is this application using for data in motion?
● Is this application evasive?
● If we close ports 80 and 443, does this application look for other open ports to get out?
User
From the actual users of these applications - from the threat characteristics perspective, we look at different user characteristics.
Hosting
From the hosting characteristics perspective, we are looking at what kind of certifications this application has:
● Is it PCI (Payment Card Industry)-certified, SOC (System and Organization Controls)-certified, GDPR (General Data Protection Regulation)-certified?
● What kind of Ts and Cs (terms and conditions) does this application offer?
124
applications - their terms and conditions will say, “If you
upload any data to our cloud, that data becomes our
intellectual property.” And the users never pay attention to
these terms and conditions. They continue to use this
application, putting the organization at a serious risk.
Cloud App Control
When you have visibility, with a single policy you can block all of these bad applications. You can also block specific activities within applications. So when you build a Cloud Application policy, you can say that all applications with a risk score higher than 4 should be automatically blocked, or you can build a very granular policy saying that all applications that are not PCI-certified should not be utilized by our finance team. The Shadow IT visibility eventually bubbles up into your policy construction: not only do you have complete visibility, but you can take different actions based on application risk score.
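The two example policies above can be pictured with a small sketch; the attribute names are made up and are not the actual Cloud App Control schema.
```
apps = [  # illustrative Shadow IT inventory entries
    {"name": "Pdfconverter.com", "risk_index": 5, "pci_certified": False},
    {"name": "Corporate OneDrive", "risk_index": 1, "pci_certified": True},
]

def blocked_by_risk(app: dict, max_risk: int = 4) -> bool:
    """Block any application whose risk index is higher than the allowed maximum."""
    return app["risk_index"] > max_risk

def blocked_for_finance(app: dict) -> bool:
    """Finance users may only use PCI-certified applications."""
    return not app["pci_certified"]

for app in apps:
    print(app["name"], "risk-block:", blocked_by_risk(app),
          "finance-block:", blocked_for_finance(app))
```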
125
DLP Inline for Web & SaaS
● Dictionaries - A DLP dictionary contains a set of patented algorithms that are designed to detect specific kinds of information in your users' traffic. The Zscaler service provides predefined dictionaries that you can modify and, in some cases, clone. You can also create custom dictionaries for content not covered by predefined dictionaries. For example, you can create custom dictionaries that trigger based on specific patterns and phrases, or trigger based on exact data matching.
● Exact Data Match (EDM) - With Zscaler EDM, you can easily find and control any occurrence of specific data. From employee records to customers' personal data and credit card numbers, EDM lets you fingerprint sensitive data and improve detection accuracy while reducing DLP false positives.
● Indexed Document Matching (IDM) - With Zscaler IDM, you can secure high-value documents that typically carry sensitive data. Fingerprint tax, medical, manufacturing, or other important forms and detect documents that use those templates across all your cloud data channels.
● Optical Character Recognition (OCR) - Data doesn't only appear in plain text, so you need DLP that secures visual data as well. Zscaler OCR scans images to perform data classification for files like PNGs and JPEGs, and for images embedded in other file types (e.g., Microsoft Word documents). It even works in tandem with EDM and IDM functions.
● Azure Information Protection (AIP) / Microsoft Information Protection (MIP) Labels - Microsoft Information Protection (MIP) provides sensitivity labels, which you can use to identify and protect files with sensitive content. These MIP labels are maintained by Microsoft and, through the addition of an MIP Account in the ZIA Admin Portal, these labels can be retrieved from Microsoft so that they can be used when defining a Data Loss Prevention (DLP) policy in the ZIA Admin Portal.
The first feature that is very popular within data loss prevention is policies based on file types. Zscaler DLP supports hundreds of file types, and you can simply pick and choose a specific file type and protect that data. The use case here is: if any of my users are uploading Office documents to an application called Pdfconverter.com, I want to block it. In this case, you go to our policy engine, you select a specific file type, and then for that specific file type and file size you allow or block applications, and you allow or block different actions and activities.
3 Levels of Inspection
Behind the scenes, when we enforce policy based on file type, we are not just looking at the file extension. If we did that, it would be very easy for malicious users to bypass the policy by simply changing the file extension. Instead, we do three levels of inspection:
● First, we look at some of the early bytes, which we call Magic Bytes.
● Second, we look at the MIME type.
● Third, we look at the file extension.
These three levels of inspection give us a lot of confidence that we are not generating false positives. File type based data loss prevention is a very popular, simple policy, and you can combine it with our Cloud App Control to enforce a granular policy based on a specific cloud application's activity.
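A minimal sketch of the layered file type check is shown below; the handful of magic-byte signatures is illustrative (real detection covers far more types), and the point is simply that a renamed extension alone does not fool the check.
```
import mimetypes

MAGIC_BYTES = {  # a few well-known signatures, for illustration only
    b"%PDF": "pdf",
    b"PK\x03\x04": "zip-based (e.g., docx/xlsx)",
    b"MZ": "windows executable",
}

def detect_by_magic(data: bytes) -> str:
    for magic, label in MAGIC_BYTES.items():
        if data.startswith(magic):
            return label
    return "unknown"

def classify(filename: str, data: bytes) -> dict:
    """Three checks: leading magic bytes, MIME type guess, and the file extension."""
    return {
        "magic": detect_by_magic(data),
        "mime": mimetypes.guess_type(filename)[0],
        "extension": filename.rsplit(".", 1)[-1].lower(),
    }

# A Windows executable renamed to .txt is still caught by its magic bytes.
print(classify("notes.txt", b"MZ\x90\x00rest-of-binary"))
```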
Now let's look at how we inspect the content and do deep content inspection with different types of DLP bells and whistles.
So the first one is the predefined dictionary. With the Zscaler DLP engine, we provide hundreds of predefined classifiers to identify PCI data, PII data, and PHI (protected health information) data. Within the PCI space, the credit card number is a very popular dictionary. If you are trying to protect PII data, it is perhaps someone's Social Security number in the US, a tax ID number for UK users, or a Canadian SIN (Social Insurance Number); different countries have different PII identifiers, and we support hundreds of them today. In the PHI world, we are looking at ICD-10 (International Classification of Diseases, Tenth Revision) codes, CPT (Current Procedural Terminology) codes, different medical dictionaries, and things like that.
Many of these predefined dictionaries are built on standard regex and PCRE (Perl Compatible Regular Expressions) engines, but in many of them we have also utilized AI and ML. For example, we have a dictionary that identifies source code. You cannot write a regex to identify source code, so we had to lean on AI and ML. The same goes for financial statements, profanity, adult language, and so on.
Besides predefined dictionaries, we also offer a custom dictionary engine. Our customers can build their own dictionaries based on different phrases, keywords, patterns, and regular expressions. Perhaps you are trying to protect documents that have a header or footer of company-confidential or internal-use-only; you can definitely leverage our custom dictionaries for that. The way a DLP policy works is that you take these predefined dictionaries and custom dictionaries and build a DLP engine, perhaps by combining both using Boolean logic, like AND or NOT. Once you build these DLP engines, you take those engines and apply them to a policy.
Let's take the Zscaler credit card dictionary: any document that has 50 credit card numbers, I'm interested to know about. Combine that with another predefined dictionary called employee first name, and then combine these two dictionaries with a custom dictionary that matches Company Confidential. Essentially, you have combined predefined dictionaries with a custom dictionary using Boolean logic. You build that DLP engine, and then you take that engine and apply it to a policy. The predefined dictionaries, the custom dictionaries, and the engines are the basic building blocks for DLP.
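The engine logic can be sketched as a Boolean combination of dictionary hits, mirroring the example above. The regex and matching here are deliberately naive stand-ins for the real patented dictionaries.
```
import re

def credit_card_hits(text: str) -> int:
    """Naive stand-in for the credit card dictionary: count 16-digit card-like numbers."""
    return len(re.findall(r"\b(?:\d[ -]?){15}\d\b", text))

def engine_matches(text: str, employee_first_names: set, min_cards: int = 50) -> bool:
    """Fire when (50+ card numbers) AND (an employee first name) AND ('Company Confidential')."""
    has_cards = credit_card_hits(text) >= min_cards
    has_name = any(name.lower() in text.lower() for name in employee_first_names)
    has_marker = "company confidential" in text.lower()
    return has_cards and has_name and has_marker

doc = "Company Confidential\nJim\n" + ("4111 1111 1111 1111\n" * 50)
print(engine_matches(doc, {"Jim", "Priya"}))  # True
```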
Many large enterprise customers - majors - go above and beyond just building DLP policies
using dictionaries and engines. Exact data match is a very popular feature within our existing
install base. Here, the customers want us to match their exact data and then based on their
exact data, they want us to take different actions. So here we are not triggering a DLP policy
because we saw a generic credit card number, but instead we are looking at a very specific
credit card number that belongs to that organization.
129
EDM (exact data match) was designed to learn from your structured data. Let's say you have a very large database and you want to protect the data, or you have a large CSV file with 200 million rows and 10 columns, where each row represents one of your employees' PII - their first name, last name, credit card number, Social Security number, and address. All of that structured data can be fed to Zscaler's EDM engine, and the EDM engine will learn from your own data. Once that learning happens, the EDM engine looks at all cloud transactions, matches your exact data, and triggers different types of actions.
The question is how you feed that data. We never ask our customers to upload their sensitive data to our cloud environment. Instead, we give our customers an on-premises VM, which is basically the index tool for EDM. You take that VM image, deploy it to your on-premises locations, and start feeding your data to it. By the way, nobody from Zscaler has access to this on-premises VM because, again, it is a VM image that you install, you deploy, and you control.
When you feed your structured data to this on-premises VM, it doesn't have to be a manual
process. You can automate that whole process. You can point your database to this
on-premises VM and then, incrementally, the index tool will fetch your data and then index your
data.
Next, we take each one of these data elements and create a hash, and there is a persistent connection between the index tool and our cloud. As it creates a hash for each and every data element, the index tool pushes that hash to our cloud. So in our cloud, we don't store your exact data; all we store are a bunch of hashes and tokens. Once that indexing is done, let's say someone is trying to exfiltrate Jim Smith's personal data via email: in the email we see Jim and Smith, and we see his exact credit card number. We do a hash-based lookup and hit an exact match, and because of that exact match, we take different actions.
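The hash-and-match flow can be pictured with a short sketch; the hashing scheme here is a simple illustration, not the actual EDM indexing format.
```
import hashlib

def token(value: str) -> str:
    """The index tool sends only a hash of each cell, never the raw value."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

# Hashes indexed from structured records; the raw data never leaves the customer's VM.
indexed = {token("Jim"), token("Smith"), token("4111111111111111")}

def edm_hits(candidate_values: list) -> int:
    """In the cloud, values extracted from a transaction are hashed and looked up
    against the indexed tokens; only exact hash matches count."""
    return sum(1 for value in candidate_values if token(value) in indexed)

email_fields = ["Jim", "Smith", "4111111111111111", "hello"]
print(edm_hits(email_fields))  # 3 exact-match hits
```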
131
Protecting Data at Rest
Out of Band Data Protection & SSPM
There are three things that you need to pay attention to.
132
ones are bad. For example, if the admin did not turn on
multi-factor authentication for all your Office 365 apps, that's
not a good idea. And we will highlight it based on the
signatures and predefined policies that we built.
133
Incident Management
Incident Management Capabilities
When adopting Data Protection capabilities such as DLP and CASB, there may be cases where
alerts are generated and Administrator teams are asked to troubleshoot and effectively manage
these incidents.
When it comes to incident management, the very first thing is how to enable the end user, so that a lot of the violations we see today can be delegated back to the user. There are different options available within Zscaler DLP and CASB. (Our CASB is known as SaaS Security API.)
Browser-Based
You can use a browser-based notification, where you can customize that page with your logo and your verbiage, and then send those notifications to tell the end user what's going on. If a user is trying to upload sensitive PCI data to their personal Dropbox account, you can block that transaction or allow that transaction, but at the same time you send a user notification through the browser.
Application-Based
In many organizations, the communication between IT and the end user does not happen through browsers. They prefer a connector where they can use Slack notification channels or Teams, and we have built both connectors into our solution. So when you see these violations, you can notify the user via Slack and Teams. When you notify or try to coach the user, you can also do a form POST where you ask for a justification, and that justification comes back to the admin.
Client Connector Pop-Up
The last option we give you is a pop-up through the Zscaler Client Connector running on your endpoint. The same idea applies: if you block a transaction, the Zscaler Client Connector will pop up, communicate with the user, and ask for a justification, or ask the user not to repeat this type of violation in the future.
Now focusing on the admin side, there are lots of options available for admins when they have to deal with DLP and CASB incidents. One of the options is email notification. We can also do incident management through the secure ICAP protocol, and of course we can stream real-time logs and feed them to your SIEM (security information and event management) tools.
135
Basic Troubleshooting Tools & Support
Troubleshooting & Support will teach you about Zscaler's support ecosystem and how to
troubleshoot common issues. Understanding what is happening within the Zero Trust Exchange
is important to troubleshoot issues, or to simply report on user access. This chapter explores the
reporting capabilities within the platform, and how to extract data. We will also explain how to
raise a support ticket, and how to provide necessary data to allow support to assist you in
troubleshooting any issues. Learn about Zscaler's Support Services ecosystem and how to
troubleshoot common issues by leveraging Zscaler's best practice processes and tools.
—–
136
Zscaler Self Help Services
Inevitably, as you go through your secure digital transformation, you will come across questions
that you would like answers to and problems that you would like to know how to resolve.
Zscaler provides a valuable Support Services ecosystem to help you more quickly find the
information you need in real time and troubleshoot any common issues that may occur.
To get you started in learning about the troubleshooting and support resources Zscaler has
available, let’s first explore our Self Help Service options.
Zscaler Help Documentation Portal    Zscaler’s Help Documentation portal is the first place you
want to go for questions about what something is, how it works, how to configure various
capabilities and features, as well as basic troubleshooting with Zscaler.
You can also go to the Submit a Ticket option, which provides a search function within the
Customer Portal.
Zscaler Knowledge Base (KB)    If you run into a more specific problem that you cannot find the
answer to within the Help Documentation portal, Zscaler’s Knowledge Base (KB) is the next
place you will want to look. The KB is maintained by Zscaler Support Engineers, and contains
documentation on specific symptoms and solutions that they have worked through with
customers.
138
Zscaler Troubleshooting Process & Tools
When issues do occur, it’s important to have a methodology and logical approach in determining
how to localize, isolate, and then diagnose the problem.
Zscaler provides a framework for troubleshooting common issues that occur within Zscaler
Internet Access (ZIA) and Zscaler Private Access (ZPA).
Troubleshooting Process
Localize    With an Internet access connection through Zscaler, an issue can occur in any
of the following areas:
■ Authentication
■ Traffic flow through Client Connector
■ Application connection - DNS, Firewall, TCP/UDP Networking
■ Policy - Allow/Deny, Reauth, Caution, Redirect
■ Security - TLS, Inspection, Sandbox
■ Data Protection - Rules/Triggers
■ Digital Experience - Probes, logs
Diagnose    Ascertain what the problem is from the information you gathered in the
previous steps and plan remedial action
Troubleshooting Tools
Proxy Test    The URL https://ip.zscaler.com/ can be used to verify if you are going
through the Zscaler service (see the sketch after this table).
Admin UI Logs    Load the Insights report from the Zscaler Analytics menu for modules
such as web, firewall, tunnel, or DNS, and export the related logs to a file.
Zscaler Analyzer Output    Run Zscaler Analyzer and capture the page load and latency data to
the destination in question, both with and without Zscaler.
ZCC Packet Capture    Run a ZCC packet capture to capture all the traffic from the machine.
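The Proxy Test in the table above can also be scripted. The sketch below assumes that the page returned by https://ip.zscaler.com/ mentions that you are going "via Zscaler" when traffic is proxied; the exact wording is an assumption, so check the page once from a known-proxied machine and adjust the string if needed.

import urllib.request

ZSCALER_CHECK_URL = "https://ip.zscaler.com/"

def via_zscaler():
    """Fetch ip.zscaler.com and look for text indicating the request was proxied by Zscaler."""
    with urllib.request.urlopen(ZSCALER_CHECK_URL, timeout=10) as resp:
        body = resp.read().decode("utf-8", "replace")
    # The phrase below is an assumption about the page wording, not an official contract.
    return "via zscaler" in body.lower()

if __name__ == "__main__":
    print("Going through Zscaler:", via_zscaler())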
141
Packet Captures: A packet capture is sometimes needed to analyze the network traffic, and we
have the option to capture it directly from the ZCC:
Enabling the Start Packet Capture Option – To enable packet capture for Zscaler Client
Connector.
Using the Start Packet Capture Option – When reproducing an issue that requires packet
capture.
Once we export the logs and extract them, we receive two pcap files: CaptureAdapters and
CaptureLWF.
A Zscaler best practice is to collect the log files using the Export Logs function, which also
includes the packet captures (if captured), so that they can be exported as a ZIP file and
attached to a support ticket.
142
ZCC Logs: Zscaler offers troubleshooting tools online and on the Zscaler Client Connector
application. On Zscaler Client Connector, you can see the available tools in the Troubleshoot
section of the More tab. However, you can enable or disable these tools for users from the
Client Connector Portal (Administration > Client Connector Support > App Supportability).
You can set different log modes determining the type of information the logs store:
Error logs only when the app encounters an error affecting functionality
Warn logs when the app is functioning but encountering potential issues or when
conditions for the Error log mode are met
Info logs general app activity or when conditions for the Warn log mode are met
Debug logs all app activity that could assist Zscaler Support in debugging issues or
when conditions for the Info log mode are met
To collect the log files manually, navigate to the following directories for each Operating System
(OS):
Windows C:\ProgramData\Zscaler
Linux /var/log/zscaler/.Zscaler/logs
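If you do need to gather these directories by hand, for example on a machine where the export option is not available, a small helper along the following lines can zip them up for a support ticket. This is a sketch only; the paths are the ones listed above and the archive name is arbitrary.

import platform
import shutil
from pathlib import Path

# Log directories as listed above; adjust if your deployment differs.
LOG_DIRS = {
    "Windows": Path(r"C:\ProgramData\Zscaler"),
    "Linux": Path("/var/log/zscaler/.Zscaler/logs"),
}

def archive_zcc_logs(dest="zcc-logs"):
    """Zip the local ZCC log directory so it can be attached to a support ticket."""
    log_dir = LOG_DIRS.get(platform.system())
    if log_dir is None or not log_dir.exists():
        raise FileNotFoundError("No known ZCC log directory on this system")
    return shutil.make_archive(dest, "zip", root_dir=log_dir)

# Example: archive_zcc_logs() creates zcc-logs.zip in the current directory.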
143
Exporting Logs from ZCC: Right-click the tray icon to export the logs, or use Export Logs from
the “More” options in debug mode. Exporting logs is the preferred method (ZIP file).
AppInfo    To leverage information about the system, applications, CPU utilization, route print,
etc., for troubleshooting issues.
Setupapi.dev    To troubleshoot installation issues (such as driver errors occurring during
installation).
ZSATunnel    To inspect the connection to the Service Edge, the Zscaler Client Connector
Portal, or any application we are accessing.
ZSAUpdate    To diagnose issues arising when the ZCC is attempting to update to a particular
version or to the latest version.
By checking the log files corresponding to the issue, you can identify where the issue is and
take the required action accordingly. For example, if you are facing an authentication issue,
you can refer to the ZSA Auth logs.
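As a quick way to apply that idea, the sketch below scans an exported log file for error-level lines. The file name in the example follows the names listed above but is hypothetical, and the keyword matching is deliberately naive because the exact log line format is not documented here.

from pathlib import Path

def find_errors(log_path, keywords=("error", "fail", "timeout")):
    """Return (line number, line) pairs from a ZCC log that contain error keywords."""
    hits = []
    text = Path(log_path).read_text(encoding="utf-8", errors="replace")
    for number, line in enumerate(text.splitlines(), start=1):
        if any(word in line.lower() for word in keywords):
            hits.append((number, line.strip()))
    return hits

# Example (hypothetical file name from an exported log bundle):
# for number, line in find_errors("ZSATunnel.log"):
#     print(number, line)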
You can use any browser to collect the SAML logs. A browser-specific (not Zscaler-specific)
SAML tracer extension can be downloaded for each browser:
● Chrome: Home > Extensions > SAML Message Decoder
● Firefox: Add-ons Manager > SAML Message Decoder
These tools help collect the SAML-encoded information, which can be decoded later with any
SAML decoder.
After collecting the headers, you can look for the POST requests against the
login.<cloudname>.net and/or samlsp.private.zscaler.com URLs.
144
From the same request, you can fetch the SAML code and that can be decoded with any SAML
decoder.
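Since any SAML decoder will do, a few lines of Python are enough for the common cases. This sketch assumes the value you copied is the SAMLResponse (or SAMLRequest) parameter from the request you located above: POST-binding values are plain Base64, while redirect-binding values are additionally DEFLATE-compressed, so the helper tries both.

import base64
import zlib
from urllib.parse import unquote

def decode_saml(value):
    """Decode a SAMLRequest/SAMLResponse parameter into readable XML."""
    raw = base64.b64decode(unquote(value))
    try:
        # Redirect binding: Base64 over raw DEFLATE-compressed XML.
        return zlib.decompress(raw, -15).decode("utf-8", "replace")
    except zlib.error:
        # POST binding: plain Base64-encoded XML.
        return raw.decode("utf-8", "replace")

# Example:
# print(decode_saml(copied_saml_response))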
145
Zscaler Customer Support Services
Zscaler has built multiple support offerings tailored to each organization’s unique requirements and
needs. The table below provides an overview of capabilities, deliverables, and SLAs:
Premium Support is a paid upgrade from the default Standard Support that is embedded with
the license purchase. Premium Support subscription customers meeting certain criteria qualify
for the elevated Premium Plus services, with the assignment of a Technical Account Manager
(TAM) supplemented by Zscaler’s senior support engineers for an enhanced support experience.
146
Submitting a support ticket via the admin console:
● Fastest way to submit a ticket
● Login to the admin console and submit a case
● Provide a “Preferred Contact Time Zone” to enable the support team to call you when you
are available
Submitting a support ticket via the web form:
● For customers who do not have access to the Zscaler console, tickets can be submitted via
a web form.
Remote Assistance
Phone Support
147
Zscaler Support Services: Components of Zscaler Support Ticket
Component Description
Issue Subject Provide a summary of the problem with the main symptom and scope. This is
a free-text field; it should be as concise as possible but give a complete
indication of the nature of the problem
Description Provide a detailed description of the problem. This is a free-text field that
allows you to fully explain what the nature of the problem is, what its
symptoms are, where and when the problem occurs, what process you
suspect is at fault, and what steps you have taken to identify the problem or
what corrective actions you have taken with no success
Ticket Type Select from the available types: “Problem,” “Question,” “Categorization,” or
“Provisioning.”
Ticket Priority Select from the available priorities: “Urgent,” “High,” “Medium,” or “Low.”
Traffic Forwarding Method    Which traffic forwarding method is used (IPsec Tunnel (VPN); GRE
Tunnel; PAC over IPsec; PAC over GRE; PAC Only; Proxy Chaining; Private or Virtual Service
Edge; Explicit Proxy; Zscaler Client Connector)?
Zscaler Data Centers Used    Which Zscaler data centers are used (the ZIA Public Service Edge
from the ip.zscaler.com output)?
Problem / Incidents Period    When did the problem start? When did it stop? Is it ongoing?
Issue Scope What is the scope (intermittent or always; all or some data centers; all or
some sites; all or some users; all or some end-user website
destinations) of the issue?
148
Zscaler Support Services: Zscaler Support Ticket – General Information Gathering:
Support Resources
Information to be gathered per issue type so the support engineer can investigate.
Information to be gathered:    Issue type:
Zscaler Client Connector logs    When ZCC is failing to connect to a Service Edge (connection
error), or there is a captive portal detection or firewall error.
Packet Captures    When we are not able to connect to the Service Edge or a particular
application is not loading.
Speedtest.zscaler.com    When we observe latency in the network and want to check the
network speed.
Web Insights    To check the response code for a particular URL and whether SSL inspection
is performed or not.
You explored the three self-help portals as a place to start on your troubleshooting journey, and
you discovered the process and tools Zscaler recommends that you utilize.
Later, should you proceed with EDU-202: Zscaler for Users – Advanced, you will be presented
with common troubleshooting scenarios across a wide range of situations, along with how to
localize, isolate, and diagnose the problem.
149