Nexpose 6.4 Full
User's Guide
Product version: 6.0
Table of contents
Table of contents 2
Revision history 14
Document conventions 18
Getting Started 20
Logging on 24
Discover 44
What is a site? 46
How are sites different from asset groups? 47
Default settings 48
Zero-day vulnerabilities 51
VMWare 53
Policy benchmarks 55
Choosing a grouping strategy for creating a site with manually selected assets 67
Adding an engine 74
Assigning a site to the new Scan Engine 78
The benefits 86
Creating a logon for Web site session authentication with HTTP headers 129
Creating a global blackout 142
Requirements for the vAsset Scan feature 192
Assess 234
Locating assets by operating systems 241
Overriding rule test results 292
Act 304
Best practices for using the Vulnerability Trends report template 359
Selecting risk trends to be included in the report 364
Prerequisites 367
Understanding the reporting data model: Overview and query design 372
Overview 372
For ASVs: Consolidating three report templates into one custom template 507
Working with human-readable formats 519
Tune 533
Collecting information about discovered assets 550
Change Scan Engine deployment 584
Changing your risk strategy and recalculating past scan data 615
Resources 626
Option 1 628
Option 2 628
Glossary 669
Revision history
Copyright © 2015 Rapid7, LLC. Boston, Massachusetts, USA. All rights reserved. Rapid7 and Nexpose are trademarks of
Rapid7, Inc. Other names appearing in this content may be trademarks of their respective owners.
June 6, 2012 - Nexpose 5.3: Added information on scan template configuration, including new discovery performance settings for scan templates; CyberScope XML Export report format; vAsset discovery; appendix on using regular expressions.

August 8, 2012 - Nexpose 5.4: Added information on vulnerability category filtering in reports and customization of advanced policies.

December 10, 2012 - Nexpose 5.5: Added information about working with custom report templates, uploading custom SCAP templates, and working with configuration assessment. Updated workflows for creating, editing, and distributing reports. Updated the glossary with new entries for top 10 report templates and shared scan credentials.

April 24, 2013 - Nexpose 5.6: Added information about elevating permissions.

May 29, 2013 - Updated Web spider scan template settings.

July 17, 2013 - Nexpose 5.7: Added information about creating multiple vulnerability exceptions and deleting multiple assets. Added information about the Vulnerability Trends Survey report template. Added information about new scan log entries for asset and service discovery phases.

July 31, 2013 - Deleted references to a deprecated feature.

September 18, 2013 - Added information about vulnerability display filters.

November 13, 2013 - Added information about validating vulnerabilities.

September 10, 2014 - Added information about VMware NSX integration.

September 17, 2014 - Added a link to a white paper on security strategies for managing authenticated scans on Windows targets.

October 10, 2014 - Made minor formatting changes.

October 22, 2014 - Nexpose 5.11: Added information about Scan Engine pooling, update scheduling, and cumulative scan results.

November 5, 2014 - Added PCI executive summary content to the Reporting Data model.

December 10, 2014 - Published PDF for localization.

December 23, 2014 - Updated information about the upcoming targeted scanning feature and support for VMware NSX versions for integration with Nexpose.

January 28, 2015 - Nexpose 5.12: Added information about the new Site Configuration panel and an import option for custom Root certificates. Reorganized the section on configuring sites and added a section on scenarios for creating sites for specific use cases.

April 8, 2015 - Nexpose 5.13: Added information about Dynamic Discovery via ActiveSync for mobile devices and via DHCP log queries for other assets; asset group scanning; linking matching assets across sites; scan scheduling enhancements.

May 27, 2015 - Nexpose 5.14: Added information about scan schedule blackouts.

June 24, 2015 - Nexpose 5.15: Added content on "listening" for syslog data as a collection method and using Infoblox Trinzic DDI as a data source for dynamic discovery via DHCP log queries. See Discovering assets through DHCP log queries on page 157. Added content on Importing AppSpider scan data on page 201. Added the protocol_id column to dim_asset_service_credential. See Understanding the reporting data model: Dimensions on page 439.

July 29, 2015 - Nexpose 5.16: Added instructions for Sending custom fingerprints to paired Scan Engines on page 624. Added information about Reporting data model version 2.0.1, which enables SQL queries on mobile device data. See dim_mobile_asset_attribute on page 451.

August 26, 2015 - Nexpose 5.17: Added instructions for Working with assets scanned in Project Sonar on page 185. Added new configuration features for Running a manual scan on page 204. Added instructions for Stopping all in-progress scans on page 221.

October 8, 2015 - Nexpose 6.0: Updated to reflect new look and feel. Added information to Automating security actions in changing environments on page 222.

October 21, 2015 - Updated the dim_host_type table in the reporting data model to include the 'Mobile' type. See dim_host_type on page 486. Corrected a misnamed column and added missing columns in fact_asset_vulnerability_instance on page 412.

January 20, 2016 - Updated section on NSX integration. Added section on Remote Registry Activation for Windows.
About this guide
This guide helps you to gather and distribute information about your network assets,
vulnerabilities, and configuration compliance using Nexpose. It covers the following activities:
- logging onto the Security Console and navigating the Web interface
- setting up a site
- running scans
- managing Dynamic Discovery
- viewing asset and vulnerability data
- applying Real Context with tags
- creating remediation tickets
- creating reports
- reading and interpreting report data
All features documented in this guide are available in the Nexpose Enterprise edition. Certain features are not available in other editions. For a comparison of features available in different editions, see http://www.rapid7.com/products/nexpose/compare-editions.jsp.
Document conventions
Words in italics are document titles, chapter titles, and names of Web interface pages.
Items in Courier font are commands, command examples, and directory paths.
Note: NOTES contain information that enhances a description or a procedure and provides
additional details that only apply in certain cases.
Tip: TIPS provide hints, best practices, or techniques for completing a task.
Warning: WARNINGS provide information about how to avoid potential data loss or damage or
a loss of system integrity.
If you haven’t used the application before, this section helps you to become familiar with the Web
interface, which you will need for running scans, creating reports, and performing other important
operations.
- Running the application on page 21: By default, the application is configured to run automatically in the background. If you need to stop and start it manually, or manage the application service or daemon, this section shows you how.
- Using the Web interface on page 24: This section guides you through logging on, navigating the Web interface, using configuration panels, and running searches.
Getting Started 20
Running the application
This section includes the following topics to help you get started with the application:
Nexpose is configured to start automatically when the host system starts. If you disabled the
initialize/start option as part of the installation, or if you have configured your system to not start
automatically as a service when the host system starts, you will need to start it manually.
Starting the Security Console for the first time will take 10 to 30 minutes because the database of
vulnerabilities has to be initialized. You may log on to the Security Console Web interface
immediately after the startup process has completed.
If you have disabled automatic startup, use the following procedure to start the application
manually:
By default the application starts automatically as a service when Windows starts. You can disable
this feature and control when the application starts and stops.
If you disabled the initialize/start option as part of the installation, you need to start the application
manually.
To start the application from the graphical user interface, double-click the Nexpose icon in the Internet folder of the Applications menu.
To start the application from the command line, take the following steps:
1. Go to the directory that contains the script that starts the application:
$ cd [installation_directory]/nsc
2. Run the script:
$ ./nsc.sh

To start, stop, or restart the application daemon, take the following steps:
1. Go to the directory that contains the daemon script:
$ cd [installation_directory]/nsc
2. Run the script. For the Security Console, the script file name is nscsvc. For a Scan Engine, the service name is nsesvc:
$ ./[service_name] start|stop
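Because the first Security Console startup can take 10 to 30 minutes while the vulnerability database initializes, it can be useful to poll the console's web port until it accepts connections before trying to log on. The following Python sketch assumes the default port 3780 on localhost; adjust the host and port for your deployment:

```python
import socket
import time

def wait_for_console(host="localhost", port=3780, timeout_minutes=30, interval=15):
    """Poll until the Security Console web port accepts TCP connections."""
    deadline = time.time() + timeout_minutes * 60
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5):
                return True  # port is accepting connections
        except OSError:
            time.sleep(interval)  # console still initializing; try again
    return False
```

A return value of False simply means the port never opened within the window; check the console logs in that case.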
Preventing the daemon from automatically starting with the host system
To prevent the application daemon from automatically starting when the host system starts, run
the following command:
This section includes the following topics to help you access and navigate the Security Console
Web interface:
If your Security Console is not connected to the Internet, you can find directions on updating and
activating on private networks. See the topic Managing versions, updates, and licenses in the
administrator’s guide.
Logging on
If you received a product key via e-mail, use the following steps to log on. You will enter the product key during this procedure. You can copy the key from the e-mail and paste it into the text box, or you can enter it with or without hyphens. Whether you choose to include or omit hyphens, do so consistently for all four sets of numerals.
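To illustrate the consistency rule, a small helper can re-join a pasted key so the groups are either all hyphenated or all joined. The 4-character group length used here is an assumption for the example, not a documented key format:

```python
def normalize_product_key(key, hyphens=True):
    """Normalize a product key so hyphens are used consistently.

    NOTE: the 4-character grouping is assumed for illustration only.
    """
    raw = key.replace("-", "").replace(" ", "").strip()
    groups = [raw[i:i + 4] for i in range(0, len(raw), 4)]
    return "-".join(groups) if hyphens else raw
```

For example, a key pasted with stray spaces comes out uniformly hyphenated or uniformly joined, whichever form you choose to type.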
If you do not have a product key, click the link to request one. Doing so will open a page on the
Rapid7 Web site, where you can register to receive a key by e-mail. After you receive the product
key, log on to the Security Console interface again and follow this procedure.
If you are running the browser on the same computer as the console, go to the following
URL: https://localhost:3780
If you are running the browser on a separate computer, substitute localhost with the
correct host name or IP address.
Tip: If there is a usage conflict for port 3780, you can specify another available port in the
httpd.xml file, located in [installation_directory]\nsc\conf. You also can switch the port after you
log on. See the topic Changing the Security Console Web server default settings in the
administrator’s guide.
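If you suspect a port conflict, you can confirm whether another process is already bound to 3780 before editing httpd.xml. This is a generic TCP bind test, not a Nexpose utility:

```python
import socket

def port_in_use(port, host="0.0.0.0"):
    """Return True if another process is already bound to host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False  # bind succeeded, so the port is free
        except OSError:
            return True   # something is already listening here
```

Calling `port_in_use(3780)` on the console host tells you whether you need to pick a different port at all.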
Note: If the logon window indicates that the Security Console is in maintenance mode, then
either an error has occurred in the startup process, or a maintenance task is running. See
Running in maintenance mode in the administrator’s guide.
2. Enter your user name and password that you specified during installation.
Logon window
If you are a first-time user and have not yet activated your license, the Security Console
displays an activation dialog box. Follow the instructions to enter your product key.
Activate License window
The first time you log on, you will see the News page, which lists all updates and improvements in
the installed system, including new vulnerability checks. If you do not wish to see this page every
time you log on after an update, clear the check box for automatically displaying this page after
every login. You can view the News page by clicking the News link that appears under the Help
icon dropdown. The Help icon can be found near the top right corner of every page of the console
interface.
For organizations that want additional security upon login, the product supports Two Factor
Authentication. Two Factor Authentication requires the use of a time-based one-time password
application such as Google Authenticator.
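For background, time-based one-time password apps such as Google Authenticator derive each code from a shared secret and the current time using the TOTP algorithm (RFC 6238). The sketch below shows the algorithm itself for illustration only; it is not Nexpose's implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 time-based one-time password over HMAC-SHA1.

    `secret` is the raw shared key as bytes, as provisioned to an
    authenticator app. Codes change every `step` seconds.
    """
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides compute the code from the same secret and clock, the server can verify a code without ever transmitting it.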
Two Factor Authentication can only be enabled by a Global Administrator on the Security
Console.
Once Two Factor Authentication is enabled, users will see a field where they can enter an access code when they log on. The first time, they should log in without specifying an access code. Once logged in, they can generate a token in the User Preferences page.
A Global Administrator can check whether users have completed the Two Factor Authentication
on the Manage Users page. The Manage Users page can be reached by going to the
Administration tab and clicking the Manage link in the Users section. A new field, Two Factor
Authentication Enabled, will appear in the table and let the administrator know which users have
enabled this feature.
If the user doesn’t create a token, they will still be able to log in without an access code. In this
case, you may need to take steps to enforce enablement.
You can enforce that all users log in with a token by disabling the accounts of any users who have
not completed the process, or by creating tokens for them and emailing them their tokens.
To disable users:
1. Go to the Manage users page by going to the Administration tab and clicking the Manage link
in the Users section.
2. Select the checkbox next to each user for whom the Two Factor Authentication Enabled
column shows No.
3. Select Disable users.
To create a token for a user:
1. Go to the Manage users page by going to the Administration tab and clicking the Manage link in the Users section.
2. Select Edit for that user.
3. Generate a token for that user.
4. Provide the user with the token.
5. Once the user logs in with their access code, they can change their token if they would like in
the User preferences page.
The Security Console includes a Web-based user interface for configuring and operating the
application. Familiarizing yourself with the interface will help you to find and use its features
quickly.
When you log on to the Home page for the first time, you see placeholders for information, but no information in them. After installation, the only information in the database is the account of the default Global Administrator and the product license.
The Home page also displays a chart that shows trends of risk score over time. As you add
assets to your environment your level of risk can increase because the more assets you have, the
more potential there is for vulnerabilities.
Each point of data on the chart represents a week. The darker blue line and measurements on
the left show how much your risk score has increased or decreased over time. The lighter blue
line displays the number of assets.
Note: This interactive chart shows a default of a year’s worth of data when available; if you have
been using the application for a shorter historical period, the chart will adjust to show only the
months applicable.
- In the search filter at the top left of the chart, you can enter a name of a site or asset group to narrow the results that appear in the chart pane to only show data for that specific site or group.
- Click and drag to select a smaller, specific timeframe and view specific details. Select the Reset/Zoom button to reset the view to the previous settings.
- Hover your mouse over a point of data to show the date, the risk score, and the number of assets for the data point.
- Select the sidebar menu icon on the top left of the chart window to export and print a chart image.
On the Site Listing pane, you can click controls to view and edit site information, run scans, and
start to create a new site, depending on your role and permissions.
On the Ticket Listing pane, you can click controls to view information about tickets and assets for
which those tickets are assigned.
On the Asset Group Listing pane, you can click controls to view and edit information about asset
groups, and start to create a new asset group.
A menu appears on the left side of the Home page, as well as every page of the Security
Console. Mouse over the icons to see their labels, and use these icons to navigate to the main
pages for each area.
Icon menu
The Home page links to the initial page you land on in the Security Console.
The Assets page links to pages for viewing assets organized by different groupings, such as the
sites they belong to or the operating systems running on them.
The Reports page lists all generated reports and provides controls for editing and creating report
templates.
The Administration page is the starting point for all management activities, such as creating and
editing user accounts, asset groups, and scan and report templates. Only Global Administrators
see this icon.
Some features of the application are supported in multiple languages. You have the option to set
your user preferences to view Help in the language of your choosing. You can also run Reports in
multiple languages, giving you the ability to share your security data across multi-lingual teams.
To select your language, click your user name in the upper-right corner and select User
Preferences. This will take you to the User Configuration panel. Here you can select your
language for Help and Reports from the corresponding drop down lists.
When selecting a language for Help, be sure to clear your cache and refresh your browser after
setting the language to view Help in your selection.
Setting your report language from the User Configuration panel will determine the default
language of any new reports generated through the Create Report Configuration panel. Report
configurations that you have created prior to changing the language in the user preferences will
remain in their original language. When creating a new report, you can also change the selected
language by going to the Advanced Settings section of the Create a report page. See the topic
Creating a basic report on page 341.
Throughout the Web interface, you can use various controls for navigation and administration.
- View Help.
- View the Support page to search FAQ pages and contact Technical Support.
- View the News page, which lists all updates.
- Product logo: Click the product logo in the upper-left area to return to the Home page.
- User: <user name> link: This link is the logged-on user name. Click it to open the User Configuration panel, where you can edit account information such as the password and view site and asset group access. Only Global Administrators can change roles and permissions.
- Log Out link: Log out of the Security Console interface. The Logon box appears. For security reasons, the Security Console automatically logs out a user who has been inactive for 10 minutes.
- Pause a scan.
- Resume a scan.
- Stop a scan.
- Initiate a filtered search for assets to create a dynamic asset group.
- Expand a drop-down list of options to create sites, asset groups, tags, or reports.
With the powerful full-text search feature, you can search the database using a variety of criteria,
such as the following:
Access the Search box on any page of the Security Console interface by clicking the magnifying glass icon near the top right of the page.
Enter your search criteria into the Search box and then click the magnifying glass icon again. For
example, if you want to search for discovered instances of the vulnerabilities that affect assets
running ActiveX, enter ActiveX or activex in the Search text box. The search is not case-
sensitive.
Starting a search

If you search for ActiveX, results appear in the Vulnerability Results table. At the bottom of each category pane, you can view the total number of results and change settings for how results are displayed.
Search results
In the Search Criteria pane, you can refine and repeat the search. You can change the search
phrase and choose whether to allow partial word matches and to specify that all words in the
phrase appear in each result. After refining the criteria, click the Search Again button.
When you run initial searches with partial strings in the Search box that appears in the upper-right corner of most pages in the Web interface, results include all terms that even partially match those strings. It is not necessary to use an asterisk (*) on the initial search. For example, you can enter Win to return results that include the word Windows, such as any Windows operating system.
If you want to modify the search after viewing the results, an asterisk is appended to the string in
the Search Criteria pane that appears with the results. If you leave the asterisk in, the modified
search will still return partial matches. You can remove the asterisk if you want the next set of
results to match the string exactly.
If you precede a string with an asterisk, the search ignores the asterisk and returns results that
match the string itself.
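The asterisk behavior described above can be summarized as a simple predicate. This is only an illustration of the documented matching rules, not the product's actual search implementation:

```python
def matches(term, candidate):
    """Illustrates the documented search rules: matching is case-insensitive,
    a leading asterisk is ignored, and a trailing asterisk (which the console
    appends when you refine a search) allows partial matches.
    """
    term = term.lower().lstrip("*")  # leading * is ignored
    candidate = candidate.lower()
    if term.endswith("*"):
        return candidate.startswith(term[:-1])  # partial match
    return candidate == term                    # exact match
```

So "Win*" matches "Windows", while removing the asterisk restricts the next set of results to the string itself.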
Certain words and individual characters, collectively known as stop words, return no results, even
if you enter them with asterisks. For better performance, search mechanisms do not recognize
stop words. Some stop words are single letters, such as a, i, s, and t. If you want to include one of
You can access a number of key Security Console operations quickly from the Administration
page. To go there, click the Administration icon. The page displays a panel of tiles that contain
links to pages where you can perform any of the following operations to which you have access:
Tiles that contain operations that you do not have access to because of your role or license
display a label that indicates this restriction.
After viewing the options, select an operation by clicking the link for that operation.
The Security Console provides panels for configuration and administration tasks:
Note: Parameters labeled in red denote required parameters on all panel pages.
Note: You can change the length of the Web interface session. See Changing Security Console
Web server default settings in the administrator’s guide.
By default, an idle Web interface session times out after 10 minutes. When an idle session
expires, the Security Console displays a logon window. To continue the session, simply log on
again. You will not lose any unsaved work, such as configuration changes. However, if you
choose to log out, you will lose unsaved work.
If a communication issue between your browser and the Security Console Web server prevents
the session from refreshing, you will see an error message. If you have unsaved work, do not
leave the page, refresh the page, or close the browser. Contact your Global Administrator.
Your product key is your access to all the features you need to start using the application. Before you can begin using the application, you must activate your license using the product key you received. Your license must be active so that you can perform operations like running scans and creating reports. If you received an error message when you tried to activate your license, you can try the troubleshooting techniques identified below before contacting Technical Support.
Product keys are good for one use; if you are performing the installation for a second time or if
you receive errors during product activation and these techniques have not worked for you,
contact Technical Support.
- If you are using a proxy, open the Security Console Configuration panel. Select Update Proxy to display the Proxy Settings section, and ensure that the address, port, domain, User ID, and password are entered correctly.
- If you are not using a proxy, ensure the Name or address field is specified as updates.rapid7.com. Changing this setting to another server address may cause your activation to fail. Contact Technical Support if you require a different server address and you receive errors during activation.
- Select the OS Diagnostics and Network Diagnostics checkboxes in the Security Console.
- Click Perform diagnostics to see the current status of your installation. The results column will provide valuable information, such as whether DNS name resolution is successful, whether firewalls are enabled, and whether the Gateway ping returns a 'DEAD' response.
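When diagnosing activation failures, it can also help to confirm that the host can reach the update server at all. The sketch below uses a plain TCP check on port 443 as a reachability signal; it does not replace the console's own diagnostics, and the port choice is an assumption for the example:

```python
import socket

def check_update_server(host="updates.rapid7.com", port=443, timeout=10):
    """Distinguish DNS failures from blocked connections to the update server."""
    try:
        socket.gethostbyname(host)  # DNS name resolution
    except OSError:
        return "dns-failure"
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ok"             # TCP connection succeeded
    except OSError:
        return "unreachable"        # likely a firewall or routing issue
```

A "dns-failure" result points at name resolution, while "unreachable" points at firewalls or routing between the console and the update server.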
To know what your security priorities are, you need to discover what devices are running in your
environment and how these assets are vulnerable to attack. You discover this information by
running scans.
First, if you don't know what a site is, see What is a site? on page 46. Then learn about different Site creation scenarios on page 48 for use cases such as discovering all the assets in your environment, running PCI scans, and dealing with Zero-day vulnerabilities.
The Discover section provides guidance on operations that enable you to prepare and run scans.
Creating and editing sites on page 56: Before you can run a scan, you need to create a site. A site
is a collection of assets targeted for scanning. A basic site includes assets, a scan template, a
Scan Engine, and users who have access to site data and operations. This section provides
steps and best practices for creating a basic static site.
Adding assets to sites on page 62: This section explains different ways to specify which assets
should be scanned, and it provides best practices for planning sites.
Selecting a Scan Engine or engine pool for a site on page 71: A Scan Engine is a requirement for a site. It is the component that will do the actual scanning of your target assets. By default, a site configuration includes the local Scan Engine that is installed with the Security Console. If you want to use a distributed or hosted Scan Engine, or an engine pool, for a site, this section guides you through the steps of selecting it.
Configuring distributed Scan Engines on page 74: Before you can select a distributed Scan Engine for your site, you need to configure it and pair it with the Security Console, so that the two components can communicate. This section shows you how.
Working with Scan Engine pools on page 78: You can improve the speed of your scans for large
numbers of assets in a single site by pooling your Scan Engines. This section shows you how to
use them.
Configuring scan credentials on page 87: To increase the information that scans can collect, you
can authenticate them on target assets. Authenticated scans inspect assets for a wider range of
vulnerabilities, as well as policy violations and adware or spyware exposures. They also can
collect information on files and applications installed on the target systems. This section provides
guidance for adding credentials to your site configuration. It also links to sections on elevating
permissions, working with PowerShell, and best practices.
Configuring scan authentication on target Web applications on page 124: Scanning Web
applications at a granular level of detail is especially important, since publicly accessible Internet
hosts are attractive targets for attack. Authenticated scans of Web assets can flag critical
vulnerabilities such as SQL injection and cross-site scripting. This section provides guidance on
authenticating Web scans.
Managing dynamic discovery of assets on page 146: If your environment includes virtual
machines, you may find it a challenge to keep track of these assets and their activity. A feature
called vAsset discovery allows you to find all the virtual assets in your environment and collect up-to-date information about their dynamically changing states. This section guides you through the
steps of initiating and maintaining vAsset discovery.
Configuring a dynamic site on page 182: After you initiate vAsset discovery, you can create a
dynamic site and scan these virtual assets for vulnerabilities. A dynamic site’s asset membership
changes depending on continuous vAsset discovery results. This section provides guidance for
creating and updating dynamic sites.
Integrating NSX network virtualization with scans on page 191: Integrating Nexpose with the
VMware NSX network virtualization platform gives a Scan Engine direct access to an NSX
network of virtual assets. This section provides guidance on setting up the integration.
Running a manual scan on page 204: After you create a site, you’re ready to run a scan. This
section guides you through starting, pausing, resuming, and stopping a scan, as well as viewing
the scan log and monitoring scan status.
What is a site?
A site is a collection of assets that are targeted for a scan. You must create a site in order to run a scan of your environment and find vulnerabilities. A site includes:

- target assets
- a scan template
- one or more Scan Engines
- other scan-related settings, such as schedules or alerts
Your first choice is how you want to add assets to your site. You can do this by manually inputting
individual assets and/or asset groups, or by dynamically discovering assets through a connection.
The main factor to consider is the fluidity of your scan target environment.
Note: You select how assets are added to your site on the Assets tab of the Site Configuration.
Specifying individual assets or ranges is a good choice for situations where the addresses of your
assets are likely to remain stable.
Specifying asset groups allows you to scan based on logical groupings that you have previously
created. In the case of scanning dynamic asset groups, you can scan based on whether assets
meet certain criteria. For example, you can scan all assets whose operating system is Ubuntu. To
learn more about asset groups, see Working with asset groups on page 305.
Adding assets through connection is ideal for a highly fluid target environment, such as a
deployment of virtualized assets. It is not unusual for virtual machines to undergo continual
changes, such as having different operating systems installed, being supported by different
resource pools, or being turned on and off. Because asset membership in such a site is based on
continual discovery of virtual assets, the asset list changes as the target environment changes, as
reflected in the results of each scan.
You can change asset membership in a site that populates assets through a connection by
changing the discovery connection or the criteria filters that determine which assets are
discovered. See Managing dynamic discovery of assets on page 146.
How are sites different from asset groups?
Asset groups provide different ways for members of your organization to grant access to, view,
scan, and report on asset information. You can create asset groups that contain assets across
multiple sites. See Working with asset groups on page 305.
Site creation scenarios

This section discusses "recipes" for sites to suit common needs. By selecting an appropriate template, assets, and configuration options, you can customize your site to suit specific goals.
Default settings
This scan template gives you thorough vulnerability checks on the majority of non-Web assets. It
runs faster than the scan template with the Web spider.
To check thoroughly for vulnerabilities, you should specify credentials. See Configuring scan
credentials on page 87 for more information.
As you establish your vulnerability scanning practice, you can create additional sites with various
scan templates and change your Scan Engine from the default as needed for your network
configuration.
Summary: The first step in checking for vulnerabilities is to make sure you are checking all the
assets in your organization. You can find basic information about the assets in your organization
by conducting a discovery scan. The application includes a built-in scan template for a discovery
scan.
Your discovery scan may vary depending on your organization's network configuration. We
recommend conducting a discovery scan on as wide a range of IP addresses as possible, in case
your organization has assets outside the typical range. For the initial discovery scan, we
recommend checking the entire private IPv4 address space (10.0.0.0/8, 172.16.0.0/12,
and 192.168.0.0/16) as well as all of the public IP addresses owned or controlled by the
organization. Doing so will help you find the largest possible number of hosts. This is essential
for organizations that actually use all of the private address space, but it is also worthwhile for
organizations with smaller networks, in order to make sure they find everything they can.
Note: Scanning so many assets could take some time. To estimate how long the scans will take,
see the Planning for capacity requirements section of the administrator's guide. In addition, a
discovery scan can set off alerts through your system administration or antivirus programs; you
may want to advise users before scanning.
1. Create a new static site (see Configuring a basic static site on page 1), including the following
settings:
l When specifying the included assets, specify a range in Classless Inter-Domain
Routing (CIDR) notation.
l Ports incorrectly showing as active: If a port unexpectedly shows as
active, it is likely that this result is not showing the actual network configuration, but is
being affected by something else such as a piece of security equipment (for example,
intrusion detection software, intrusion prevention software, or a load balancer).
Determine what is causing the unexpected result and make changes so that you can
get accurate scan information. For example, if a firewall is causing the inaccurate
results, whitelist Nexpose on the firewall.
l Ports incorrectly showing as inactive: You may find areas of the network where you
were not able to scan. For instance, there may be an address you know about that was
not found on the discovery scan. Check whether the omission was due to a firewall or
logical routing issue. If so, configure an additional Scan Engine on the other side of the
barrier and scan those assets.
If you have external IP addresses, you can check what someone could access from
outside. Set up a Scan Engine outside your network perimeter and see what it can
find. If you would like an "external" view of your firewall, perform a scan from an
engine that is external to the organization and treated the same as other external
machines. You may want to consider using a Rapid7 hosted engine.
l One of the most dangerous types of vulnerabilities is one that could let an
unauthenticated external user log on, such as an exposed Telnet port. Make it an
urgent priority to remediate such vulnerabilities.
l Otherwise, begin addressing the results by reducing the attack surface:
l Take down or block access to hosts that do not need to be public.
l Use firewall rules to restrict access to as many services and hosts as possible.
l Address the remaining external-facing vulnerabilities based on CVSSv2 score
and prevalence.
Zero-day vulnerabilities
When a high-risk vulnerability is newly announced, you may want to scan for just that
specific vulnerability, in order to find out as quickly as possible which of your assets are
affected.
You can create a custom scan template that checks just for specific vulnerabilities, and
scan your sites with this special template. You can use the Common Vulnerabilities and
Exposures Identifier (CVE-ID) to focus only on checks for that vulnerability.
Note: Check the Rapid7 Community for additional guidance related to recently announced
major vulnerabilities.
1. Typically, the best practice is to create a new scan template by copying an existing one.
The best one to copy will vary depending on the nature of the vulnerability, but Full Audit
with Web Spider or Full Audit without Web Spider are usually good starting points. For
more information on scan templates, see Scan templates on page 639.
2. Ensure the Vulnerabilities option is selected, and that the Web Spidering option is
selected if relevant. Clear the Policies option to focus the template on the checks
specific to this vulnerability.
3. Edit the scan template name and description so you will be able to recognize later that
the template is customized for this purpose.
4. Go to the Vulnerability Checks page. First, you will disable all checks, check categories,
and check types so that you can focus on scanning exclusively for items related to this
issue.
5. Expand the By Category section and click Remove categories.
6. Select the check box for the top row (Vulnerability Category), which will auto-select the
check boxes for all categories. Then click Save. Note that 0 categories are now
enabled.
7. Expand the By Individual Check section and click Add checks.
8. Enter or paste the relevant CVE-ID in the Search Criteria box and click Search. Select
the check box for the top row (Vulnerability Check), which will auto-select the check
boxes for all types. Then click Save.
9. Repeat step 8 for any additional CVE-IDs associated with the issue.
10. Save the scan template.
11. Create or edit a site to include:
l the new custom scan template
If you have assets in multiple locations, there are several factors to take into consideration:
To scan large numbers of assets, you may want to take advantage of Scan Engine pooling.
A Scan Engine pool can help with load balancing and serve as backup if one Scan Engine
fails. To learn more about configuring Scan Engine pools, see Working with Scan Engine
pools on page 78.
To scan Amazon Web Services (AWS) virtual assets, you need to perform some
preparation in your AWS environment and create a discovery connection specific to this
type of asset. To learn more, see Preparing for Dynamic Discovery in an AWS
environment on page 153.
VMware
To scan VMware virtual assets, you will need to perform some preparation steps in the
target VMware environment, and then create a discovery connection specific to this type
of asset. To learn more, see Preparing the target VMware environment for Dynamic
Discovery on page 155.
If your systems process, store, or transmit credit card holder data, you may be using
Nexpose to comply with the Payment Card Industry (PCI) Security Standards Council
Data Security Standards (DSS). The PCI internal audit scan template is designed to help
you conduct your internal assessments as required in the DSS.
To learn more about PCI DSS 3.0, visit our resource page.
1. As described in PCI DSS 3.0 section 6.1, you need to create a process to identify
security vulnerabilities. To do so, create one or more sites in Nexpose using the
following configurations:
a. Enter your organization information as required for PCI-specific reports in the
Organization section of the Info & Security tab of the Site Configuration.
b. Include the assets you need to scan for PCI compliance. (Generally these hosts
will comprise your Cardholder Data environment or “CDE”).
c. Use the PCI internal audit scan template.
d. Specify credentials for the scan. (These credentials should have privileges to
read the registry, file, and package management aspects of target systems).
2. As indicated in the PCI Data Security Standard requirements 11.2.1 and 11.2.3, you
need to create and examine reports to verify that you have scanned for and remediated
vulnerabilities. You should also keep copies of these reports to prove your compliance
with the PCI DSS.
a. Create a new report as indicated in Creating a basic report on page 341. You will
most likely want to use the PCI Executive Summary and PCI Vulnerability Details
reports. Follow this process for each of those templates. Specify the following
settings:
i. For the Scope of the report, specify the assets you are scanning for PCI.
ii. In the advanced settings, under Distribution, specify the e-mail sender
address and the recipients of the report.
5. Continue to scan and mitigate. You will need to scan internally quarterly until you have
remediated all high-risk vulnerabilities, as defined in sections 6.1 and 11.2.1 of the PCI
DSS. You will also need to scan after major changes, as defined in section 11.2.3. The
acceptable timeframes for applying remediations are outlined in section 6.2.
The application includes built-in scan templates that can be used for policy benchmarking.
These include CIS, DISA, and USGCB. Each of these templates contains a bundle of
policies to be used for different platforms; only the ones that apply are evaluated. Of the
three, CIS contains support for the widest variety of platforms. For more information on
these templates, see Scan templates on page 639.
All policy scan templates require a username and password pair used to gain access to
assets such as desktop and server machines. Typically this account will have the privileges
of an administrator or root user. For more on credentials, see Configuring scan credentials
on page 87.
The CIS scan template includes policy checks specific to databases, and requires a
username and password for database access.
Creating and editing sites
In this section you will learn how to create and configure sites. If you are a new user, you will learn
how to create your first basic site. Experienced users can find information on more advanced
practices and configurations.
Topics include:
Note: Not all of the procedures described are required for every kind of site. The basic
requirements to save a site are a name and at least one asset.
If you want to edit an existing site, click that site's Edit icon in the Sites table on the Home page.
If you want to create a new site, click the Create tab at the top of the page and then select Site
from the drop-down list.
OR
Click the Create Site button at the bottom of the Sites table.
Click the tabs in the Site Configuration to configure various aspects of your site and scans:
The Save & Scan and Save buttons are enabled after you enter the minimum required site
information, which includes the site name and at least one asset.
The top of each required tab (Info & Security and Assets) changes from red to green after you
enter the minimum required information, which includes the name of the site and at least one
asset to scan.
1. On the Site Configuration – Info & Security tab, type a name for your site.
Tip: You may want to name your site based on how the assets within that site are grouped.
For example, you could name them based on their locations, operating systems, or the types
of assets, such as those that need to be audited for compliance.
l The Very Low setting reduces the risk index to 1/3 of its initial value.
l The Low setting reduces the risk index to 2/3 of its initial value.
l High and Very High settings increase the risk index to twice and 3 times its initial value
respectively.
l A Normal setting does not change the risk index.
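The multipliers above can be sketched as a short calculation. The function and dictionary names below are hypothetical, chosen only for illustration; the multiplier values themselves come from the settings listed above, and this is not Nexpose's internal implementation.

```python
# Illustrative sketch of the site-criticality risk adjustment described above.
# Names here are placeholders; only the multipliers reflect the guide's list.

CRITICALITY_MULTIPLIERS = {
    "Very Low": 1 / 3,   # reduces the risk index to 1/3 of its initial value
    "Low": 2 / 3,        # reduces the risk index to 2/3 of its initial value
    "Normal": 1.0,       # leaves the risk index unchanged
    "High": 2.0,         # doubles the risk index
    "Very High": 3.0,    # triples the risk index
}

def adjusted_risk_index(initial_risk: float, criticality: str) -> float:
    """Return the risk index after applying the site criticality setting."""
    return initial_risk * CRITICALITY_MULTIPLIERS[criticality]
```

For example, a site with an initial risk index of 900 and a Very Low criticality would report an adjusted index of 300.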
5. Add business context tags to the site. Any tag you add to a site will apply to all of the member
assets. For more information and instructions, see Applying RealContext with tags.
6. Click Organization to enter your company information. These fields are used in PCI reports.
For more information on managing user access, see Giving users access to a site on page 60.
When editing a site, you can control which users have access to it. Allowing users to configure
and run scans on only those assets for which they are responsible is a security best practice, and
it ensures that different teams in your organization are able to manage targeted segments of your
network.
For example, your organization has an administrative office in Chicago, a sales office in Hong
Kong, and a research center in Berlin. Each of these locations has its own site with a dedicated IT
or security team in charge of administering its assets. By giving one team access to the Berlin site
and not to the other two sites, you allow that team to monitor and patch the research center
assets without being able to see sensitive information in the administrative or sales offices.
When a Global Administrator creates a user account, he or she can grant the user access to all
sites, or restrict access by adding the user to access lists for specific sites. See the topic
Configure general user account attributes in the administrator's guide.
After users are added to a site's access list, you can control whether they actually can view the
site as you are editing that site:
1. On the Home page, click the Edit icon for the site that you want to add users to.
2. Click the Info & Security tab.
3. Click Access.
4. The Site Access table displays every user in the site's access list. Select the check box for
every user whom you want to give access to the site.
To give access to all displayed users, select the check box in the top row.
Note: Global Administrators and users with access to all sites do not appear in the table. They
automatically have access to any site.
An asset is a single device on a network that the application discovers during a scan. In order to
create a site you must assign assets to it.
l If you want to add or remove assets to an existing site, click that site's Edit icon in the Sites
table on the Home page.
l If you want to add assets while creating a new site, click the Create site button on the Home
page.
Note: If you created the site through the integration with VMware NSX, you cannot edit assets,
which are dynamically added as part of the integration process. See Integrating NSX network
virtualization with scans on page 191.
You can either manually input your assets or asset groups, or specify a connection that discovers
assets.
Note: Switching between Name/Address and Connections methods will delete any unsaved
assets that have been included for scanning. Also, refreshing your browser will remove unsaved
assets.
Note: After you save a site, you cannot change the method for specifying assets. For example, if
you specify assets with a discovery connection and then save the site, you cannot manually add
IP addresses or host names afterward.
Use this method to create a site that scans a manually specified collection of assets or asset
groups. Such sites work best for scanning environments that have non-virtual assets and do not
often change. You can specify individual assets, ranges, asset groups, or a mixture.
Use this method to specify individual assets or ranges of assets. You can use only this method, or
also add asset groups to the same site.
To add assets:
Use any of the following notations. You can separate targets by typing a comma or pressing
Enter after each asset or range:
l 10.0.0.1
l 10.0.0.1 - 10.0.0.255
l 10.0.0.0/24
l 2001:db8::1
l 2001:db8::0 - 2001:db8::ffff
l 2001:db8::/112
l 2001:db8:85a3:0:0:8a2e:370:7330/124
l www.example.com
l 2001:db8::1
l 2001:db8:0:0:0:0:0:1
l 2001:0db8:0000:0000:0000:0000:0000:0001
If you use CIDR notation for IPv4 addresses (x.x.x.x/24), the Network Identifier (.0) and
Network Broadcast Address (.255) will be ignored, and the rest of the network will be scanned.
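The note above can be illustrated with Python's standard-library ipaddress module, which makes the same distinction between the network identifier, the broadcast address, and the scannable hosts in between. This is only a standard-library illustration, not Nexpose's own target parser.

```python
import ipaddress

# For an IPv4 /24, .0 is the network identifier and .255 the broadcast
# address; hosts() yields every address between them.
network = ipaddress.ip_network("10.0.0.0/24")
hosts = list(network.hosts())

print(network.network_address)    # 10.0.0.0
print(network.broadcast_address)  # 10.0.0.255
print(len(hosts))                 # 254 scannable addresses: 10.0.0.1 - 10.0.0.254
```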
You also can import a comma- or newline-delimited ASCII text file that lists the IP addresses and
host names of assets you want to scan by clicking Choose File or Browse, depending on your
browser.
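A minimal import file might look like the following. The addresses and host name are placeholders; any of the notations listed above can be mixed, one target per line or separated by commas.

```
10.0.0.1
10.0.0.20 - 10.0.0.50
10.1.0.0/24
www.example.com
```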
If you don't want to scan certain assets, enter their names or addresses in the Exclude pane.
You may, for example, want to avoid scanning a specific asset within an IP address range
either because it is unnecessary to scan, as with a printer, or it may require a different
template or scan window than other assets in the range. The same format notations apply.
Tip: For a list of your assets that you can copy to your clipboard, click next to the Browse
button.
Use this method to scan one or more asset groups that you have previously created based on
logical groupings. You can also combine the asset groups with individually specified assets or a
range, as described above. You can either scan all the assets with the same Scan Engine or
pool, or scan them each with the Scan Engine that was most recently used to scan the asset. To
learn more, see Determining how to scan each asset when scanning asset groups on page 72.
If you don't want to scan certain assets, enter their names or addresses in the Exclude pane.
You may, for example, want to avoid scanning a specific asset within an IP address range
either because it is unnecessary to scan, as with a printer, or it may require a different
template or scan window than other assets in the range. The same format notations apply.
Use this method to create a site in which the Security Console discovers assets via a connection
with a server that manages those assets. Asset membership in a site created this way is subject
to change under any of the following conditions:
For information on different types of discovery connections and best practices see Managing
dynamic discovery of assets on page 146.
Consider several things when selecting assets for a site. Asset selection can have an impact on
the quality of scans and reports.
Choosing a grouping strategy for creating a site with manually selected assets
There are many ways to divide network assets into sites. The most obvious grouping principle is
physical location. A company with assets in Philadelphia, Honolulu, Osaka, and Madrid could
have four sites, one for each of these cities. Grouping assets in this manner makes sense,
especially if each physical location has its own dedicated Scan Engine. Remember, each site is
assigned to a specific Scan Engine.
With that in mind, you may find it practical simply to base site creation on Scan Engine
placement. Scan engines are most effective when they are deployed in areas of separation and
connection within your network. So, for example, you could create sites based on subnetworks.
Other useful grouping principles include common asset configurations or functions. You may
want to have separate sites for all of your workstations and your database servers. Or you may
wish to group all your Windows 2008 Servers in one site and all your Debian machines in
another. Similar assets are likely to have similar vulnerabilities and to present identical logon
challenges.
If you are performing scans to test assets for compliance with a particular standard or policy, such
as Payment Card Industry (PCI) or Federal Desktop Core Configuration (FDCC), you may find it
helpful to create a site of assets to be audited for compliance. This method focuses scanning
resources on compliance efforts. It also makes it easier to track scan results for these assets and
include them in reports and asset groups.
When selecting assets for sites, flexibility can be advantageous. You can include an asset in more
than one site. For example, you may wish to run a monthly scan of all your Windows Vista
workstations with the Microsoft hotfix scan template to verify that these assets have the proper
Microsoft patches installed. But if your organization is a medical office, some of the assets in your
“Windows Vista” site might also be part of your “Patient support” site, which you may have to
scan annually with the HIPAA compliance template.
You can also define an asset group within a site, in order to scan based on a specific logical
grouping.
10.1.10.0/23
10.1.20.0/24
10.2.10.0/23
10.2.20.0/24
A potential problem with this grouping is that managing scan data in large chunks is time
consuming and difficult. A better configuration groups the elements into smaller scan sites for
more refined reporting and asset ownership.
In the following configuration, Example, Inc., introduces asset function as a grouping principle.
The New York site from the preceding configuration is subdivided into Sales, IT, Administration,
Printers, and DMZ. Madrid is subdivided by these criteria as well. Adding more sites reduces
scan time and promotes more focused reporting.
Site name                Address space  Number of assets  Component
New York Sales           10.1.0.0/22    254               Security Console
New York IT              10.1.10.0/24   25                Security Console
New York Administration  10.1.10.1/24   25                Security Console
New York Printers        10.1.20.0/24   56                Security Console
New York DMZ             172.16.0.0/22  30                Scan Engine 1
Madrid Sales             10.2.0.0/22    65                Scan Engine 2
Madrid Development       10.2.10.0/23   130               Scan Engine 2
An optimal configuration, seen in the following table, incorporates the principle of physical
separation. Scan times will be even shorter, and reporting will be even more focused.
Site name                     Address space   Number of assets  Component
New York Sales 1st floor      10.1.1.0/24     84                Security Console
New York Sales 2nd floor      10.1.2.0/24     85                Security Console
New York Sales 3rd floor      10.1.3.0/24     85                Security Console
New York IT                   10.1.10.0/25    25                Security Console
New York Administration       10.1.10.128/25  25                Security Console
New York Printers Building 1  10.1.20.0/25    28                Security Console
New York Printers Building 2  10.1.20.128/25  28                Security Console
New York DMZ                  172.16.0.0/22   30                Scan Engine 1
Madrid Sales Office 1         10.2.1.0/24     31                Scan Engine 2
Madrid Sales Office 2         10.2.2.0/24     31                Scan Engine 2
Madrid Sales Office 3         10.2.3.0/24     33                Scan Engine 2
Madrid Development Floor 2    10.2.10.0/24    65                Scan Engine 2
Madrid Development Floor 3    10.2.11.0/24    65                Scan Engine 2
Madrid Printers Building 3    10.2.20.0/24    35                Scan Engine 2
Madrid DMZ                    172.16.10.0/24  15                Scan Engine 3
Selecting a Scan Engine or engine pool for a site
A Scan Engine is one of the components that a site must have. It discovers assets during scans
and checks them for vulnerabilities or policy compliance. Scan Engines are controlled by the
Security Console, which integrates their data into the database for display and reporting.
If you have deployed distributed Scan Engines or engine pools, or you are using
Nexpose hosted Scan Engines, you will have a choice of engines or pools for this site. Otherwise,
your only option is the local Scan Engine that was installed with the Security Console. It is also
the default selection.
l If you are adding an engine while configuring a new site, click the Create site button on the
Home page.
l If you are adding a new engine option to an existing site, click that site's Edit icon in the Sites
table on the Home page.
1. Click the Engines tab of the Site Configuration.
2. If you are scanning an asset group, select the desired option for scanning assets. See
Determining how to scan each asset when scanning asset groups on page 72.
Note: Although this option appears in any site configuration, it only applies when scanning
asset groups.
Tip: If you have many engines or pools you can make it easier to find the one you want by
entering part of its name in the Filter text box.
When scanning asset groups, you have the option to use the same Scan Engine or Scan Engine
Pool to scan all the assets in a site, or to scan each asset with the Scan Engine that was
previously used. The best choice depends on your network configuration: for example, if your
assets are geographically dispersed, you may want to use the most recent Scan Engine for each
asset so they will be more likely to be scanned by a Scan Engine in the same location.
OR
Select Engine most recently used for that asset. This may result in different assets being
scanned by different Scan Engines.
Note: Even if you choose to scan with the engine most recently used for each asset, the engine
you select here will be used for any asset that has never been scanned before. Therefore, you
should select an engine regardless of which option you chose above.
Choosing to scan with the most recently used engine for each asset
If you select the option to scan with the engine most recently used for that asset, the Scans page
may display multiple Scan Engines in the Current Scans table and the Past Scans table.
On the page for a scan, you can view the Scan Engines Status table. To learn more, see
Running a manual scan on page 204.
Your organization may distribute Scan Engines in various locations within your network, separate
from your Security Console. Unlike the local Scan Engine, which is installed with the Security
Console, distributed engines need to be configured separately and paired with the console, as
explained in this section.
1. Install the Scan Engine. See the installation guide for instructions. You can download it from
the Support page in Help.
2. Start the Scan Engine. You can only configure a new Scan Engine if it is running.
By default, the Security Console initiates a TCP connection to Scan Engines over port 40814. If a
distributed Scan Engine is behind a firewall, make sure that port 40814 is open on the firewall to
allow communication between the Security Console and Scan Engine.
Adding an engine
The first step for integrating the Security Console and the new Scan Engine is adding information
about the Scan Engine.
If you are adding an engine while configuring a new site, click the Create site button on the Home
page.
If you are adding a new engine option to an existing site, click that site's Edit icon in the Sites table
on the Home page.
After you add the engine, the Security Console creates the consoles.xml file. You will need to edit
this file in the pairing process.
If you are a Global Administrator, you also have the option to add an engine through the
Administration tab:
After you add the engine, the Security Console creates the consoles.xml file. You will need
to edit this file in the pairing process.
Note: You must log on to the operating system of the Scan Engine as a user with administrative
permissions before performing the next steps.
Edit the consoles.xml file in the following steps to pair the Scan Engine with the Security Console.
1. Open the consoles.xml file using a text editing program. Consoles.xml is located in the
[installation_directory]/nse/conf directory on the Scan Engine.
2. Locate the line for the console that you want to pair with the engine. The console will be
marked by a unique identification number and an IP address.
3. Change the value for the Enabled attribute from 0 to 1.
The Scan Engine's consoles.xml file showing that the Security Console is enabled
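As a sketch of what such an entry might look like, consider the fragment below. The element and attribute names other than "enabled" are illustrative placeholders; rely on the actual contents of your own consoles.xml rather than this example.

```xml
<!-- Hypothetical consoles.xml entry. Only the "enabled" attribute is
     described in this guide; the other names are placeholders. -->
<consoles>
  <!-- enabled="1" pairs this console with the engine; "0" leaves it disabled -->
  <console id="1234ABCD" address="10.1.0.5" enabled="1"/>
</consoles>
```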
The Status column indicates with a color-coded arrow whether the Security Console or a
Scan Engine is initiating communication in each pairing. The color of the arrow indicates the
status of the communication. A green arrow indicates Active status, which means you can
now assign a site to this Scan Engine and run a scan with it.
For more information on communication status, see Managing the Security Console on
page 1
The Scan Engines table with the Refresh icon and Active status highlighted
Note: If you ever change the name of the Scan Engine, you will have to pair it with the Security
Console again. The engine name is critical to the pairing process.
On the Scan Engines page, you can also perform the following tasks:
l You can edit the properties of any listed Scan Engine by clicking Edit for that engine.
l You can delete a Scan Engine by clicking Delete for that engine.
l You can manually apply an available update to the scan engine by clicking Update for that
engine. To perform this task using the command prompt, see Using the command console in
the administrator's guide.
You can configure certain performance settings for all Scan Engines on the Scan Engines page
of the Security Console configuration panel. For more information, see Changing default Scan
Engine settings in the administrator's guide.
Note: If you have not yet set up a site, create one first. See Creating and editing sites on page 56.
If you are assigning a site to an engine while configuring it, see Selecting a Scan Engine or
engine pool for a site on page 71.
1. Go to the Sites page of the Scan Engine Configuration panel and click Select Sites.
The console displays a box listing all the sites in your network.
2. Click the check boxes for sites you wish to assign to the new Scan Engine and click Save.
The sites appear on the Sites page of the Scan Engine Configuration panel.
You can improve the speed of your scans for large numbers of assets in a single site by pooling
your Scan Engines. With pooling, the work it takes to scan one large site is split across multiple
Scan Engines.
Additionally, engine pooling provides fault tolerance. For example, if one Scan Engine in the
pool fails during a scan, its scanning tasks for that asset are transferred to another engine
within the pool.
Note: To verify that you are licensed for Scan Engine pooling, see Finding out what features
your license supports on page 627.
l If you are adding an engine pool while configuring a new site, click the Create site button on
the Home page.
l If you are adding a new engine pool to an existing site, click that site's Edit icon in the Sites
table on the Home page.
1. In the Site Configuration, click Engines.
2. Click Create Engine Pool.
If you are a Global Administrator, you can also create pools using the Administration tab:
The Scan Engine Pool Configuration page displays all of the engines available to you (hosted
and local engines cannot be pooled and won't appear), the number of pools each engine belongs
to, the number of associated sites, and its status.
Tip: For additional information on optimal deployment settings for Scan Engine pooling, see the
section titled Deploying Scan Engine Pools in the administrator's guide.
You may already have the application configured to match single Scan Engines to individual
sites. If you decide to start using pooling, you may not achieve optimal results by simply moving
those engines into a pool.
For optimal results, you can make the following adjustments to your site configuration:
Note: If you do create a large site to replace your smaller ones, you will lose any data from
pre-aggregated sites once you delete them.
l Schedule scans to run successively rather than concurrently.
l If you are going to run overlapping scans, stagger their start times as much as possible. This
will prevent queued scan tasks from causing delays.
Tip: You can make scans complete more quickly by increasing the number of scan threads used.
If the engine is already at capacity utilization, you can add more RAM to support more
threads. For more information on tuning scan performance, see Tuning performance with
simultaneous scan tasks on page 545.
You may need to scan different types of assets for different types of purposes at different
times. A scan template is a predefined set of scan attributes that you can select quickly rather
than manually define properties, such as target assets, services, and vulnerabilities. For a list
of scan templates and suggestions on when to use them, see Scan templates on page 639.
Nexpose includes a variety of preconfigured scan templates to help you assess your
vulnerabilities according to the best practices for a given need.
Using varied templates is a good idea, as you may want to look at your assets from different
perspectives. The first time you scan a site, you might just do a discovery scan to find out what
is running on your network. Then, you could run a vulnerability scan using the Full Audit
template, which includes a broad and comprehensive range of checks. If you have assets that
are about to go into production, it might be a good time to scan them with a Denial-of-Service
template. Exposing them to unsafe checks is a good way to test their stability without affecting
workflow in your business environment. You may also want to apply different templates to
different types of assets; for instance, Web audit for Web servers and Web applications.
A Global Administrator can also customize scan templates or create new ones to suit your
organization's particular needs. By creating sites of selected assets and applying the most
relevant scan template, you can conduct scans that are specific to your needs. See
Configuring custom scan templates on page 543 for more information. Keep in mind that the
scans must balance three critical performance factors: time, accuracy, and resources. If you
customize a template to scan more quickly by adding threads, for example, you may pay a
price in bandwidth.
If you want to change the scan template for an existing site, click that site's Edit icon in the
Sites table on the Home page.
If you want to select the scan template while creating a new site, click the Create site button
on the Home page.
Note: If you created the site through the integration with VMware NSX, you can change the
scan template but it will not affect the type of scan or the scan results. See Integrating NSX
network virtualization with scans on page 191.
The default is Full audit without Web Spider. This is a good initial scan, because it
provides full coverage of your assets and vulnerabilities, but runs faster than if Web
spidering were included.
1. Click the Copy icon next to the listed template you want to base the new one on, or click
Create Scan Template to start from scratch.
2. Change the template as desired. See Configuring custom scan templates on page 543 for
more information.
3. Click Save.
4. Click the Refresh icon at the top of the Scan Templates table to make the new template
appear.
Nexpose retains all vulnerability results based on different scan templates within a site. This
allows you to run targeted scans of your assets with different templates without affecting results
that are not part of the current scan configuration.
The benefits
When scheduling scans for your site, you can apply different templates to specific scan windows.
For example, schedule a recurring scan to run on the day after Patch Tuesday each month with a
template configured to verify the latest Microsoft patches. Then schedule scans with a different
template to run on other days.
You can check the same set of assets for different, specific vulnerabilities. If a zero-day threat
is reported, customize a template that only includes checks for that vulnerability. After
remediating the zero-day, resume scanning with a template that you routinely use for your site.
When you run successive scans for the same vulnerability, even if it was previously scanned with
a different template, the most current result replaces previous results in the scan history for the
affected site.
If your alternating scan templates include different target ports, your results depend on which
ports you are scanning for a specific vulnerability, as in the following example:
You run one scan to check for a self-signed certificate, using a template that includes port 80. The
results are positive. You run another scan for the same vulnerability, but this time you use a
template that does not include port 80. Regardless of the results of the second scan, your site's
scan data will include a positive result for self-signed certificate on port 80.
Scanning with credentials allows you to gather information about your network and assets that
you could not otherwise access. You can inspect assets for a wider range of vulnerabilities or
security policy violations. Additionally, authenticated scans can check for software applications
and packages and verify patches. When you scan a site with credentials, target assets in that site
authenticate the Scan Engine as they would an authorized user.
Topics in this section explain how to set up and test credentials for a site as well as shared scan
credentials, which you can use in multiple sites. Certain authentication options, such as SSH
public key and LM/NTLM hash, require additional steps, which are covered in related topics. You
can also learn best practices for getting the most out of credentials, such as expanding
authentication with elevated permissions.
Two types of scan credentials can be created in the application, depending on the role or
permissions of the user creating them:
The range of actions that a user can perform with each type depends on the user’s role or
permissions, as indicated in the following table:
The application uses an expert system at the core of its scanning technology in order to chain
multiple actions together to get the best results when scanning. For example, if the application is
able to use default configurations to get local access to an asset, then it will trigger additional
actions using that access. The Nexpose Expert System paper outlines the benefits of this
approach and can be found here: http://information.rapid7.com/using-an-expert-system-for-
deeper-vulnerability-scanning.html?LS=2744168&CS=web. The effect of the expert system is
that you may see scan results beyond those directly expected from the credentials you provided;
for example, if some scan targets cannot be accessed with the specified credentials, but can be
accessed with a default password, you will also see the results of those checks. This behavior is
similar to the approach of a hacker and enables Nexpose to find vulnerabilities that other
scanners may not.
The application provides features to protect your credentials from unauthorized use. It securely
stores and transmits credentials using encryption so that no end users can retrieve unencrypted
passwords or keys once they have been stored for scanning. Global Administrators can assign
permission to add and edit credentials to only those users that should have that level of access.
For more information, see the topic Managing users and authentication in the administrator's
guide. When creating passwords, make sure to use standard best practices, such as long,
complex strings with combinations of lower- and upper-case letters, numerals, and special
characters.
If you plan to run authenticated scans on Windows assets, keep in mind some security strategies
related to automated Windows authentication. Compromised or untrusted assets can be used to
steal information from systems that attempt to log onto them with credentials. This attack method
threatens any network component that uses automated authentication, such as backup services
or vulnerability assessment products.
There are a number of countermeasures you can take to help prevent this type of attack or
mitigate its impact. For example, make sure that Windows passwords for Nexpose contain 32 or
more characters generated at random, and change these passwords on a regular basis.
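The 32-character random password recommendation above can be implemented with standard Unix tools. This is a hedged sketch; the character set shown is only an illustrative choice and should be adapted to your organization's password policy.

```shell
# Generate a 32-character random password from /dev/urandom.
# The character classes below are an example set, not a requirement.
password=$(LC_ALL=C tr -dc 'A-Za-z0-9_!@#%^+=' < /dev/urandom | head -c 32)
echo "$password"
```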
In this topic, you will learn how to set up and test credentials for a site, how to restrict them to a
specific asset or port, and how to edit and enable the use of previously created credentials.
l Create a new set of credentials. Credentials created within a site are called site-specific
credentials and cannot be used in other sites.
l Enable a set of previously created credentials to be used in the site. This is an option if site-
specific credentials have been previously created in your site or if shared credentials have
been previously created and then assigned to your site.
Note: To learn about credential types, see Managing shared scan credentials on page 96.
The first action in creating new site-specific scan credentials is naming and describing them.
Think of a name and description that will help you recognize at a glance which assets the
credentials will be used for. This will be helpful, especially if you have to manage many sets of
credentials.
If you want to add credentials while configuring a new site, click the Create site button on the
Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to add credentials for an existing site, click that site's Edit icon in the Sites table on the
Home page.
If you do not know what authentication service to select or what credentials to use for that service,
consult your network administrator.
Note: All credentials are protected with RSA encryption and triple DES encryption before they
are stored in the database.
4. If you want to test the credentials or restrict them, see the following two sections. Otherwise,
click Create.
The newly created credentials appear in the Scan Credentials table, which you can view by
clicking Manage Authentication.
You can verify that a target asset in your site will authenticate the Scan Engine with the
credentials you’ve entered. It is a quick method to ensure that the credentials are correct before
you run the scan.
1. In the Add Credentials form, expand the Test Credentials section by clicking the arrow.
2. Enter the name or IP address of the authenticating asset.
Note: If you do not enter a port number, the Security Console will use the default port for the
service. For example, the default port for CIFS is 445.
3. Note the result of the test. If it was not successful, review and change your entries as
necessary, and test them again. The Security Console and scan logs contain information
about authentication failure when testing or scanning with these credentials. See Working
with log files in the administrator's guide.
4. If you want to restrict the credentials to a specific asset or port, see the following section.
Otherwise, click Create.
If a particular set of credentials is only intended for a specific asset and/or port, you can restrict
the use of the credentials accordingly. Doing so can prevent scans from running longer than
necessary due to authentication attempts on assets that don’t recognize the credentials.
Specifying a port allows you to limit your range of scanned ports in certain situations. For
example, you may want to scan Web applications using HTTP credentials. To avoid scanning all
Web services within a site, you can specify only those assets with a specific port.
OR
Enter the host name or IP address of the asset and the number of the port that you want to
restrict the credentials to.
Note: If you do not enter a port number, the Security Console will use the default port for the
service. For example, the default port for CIFS is 445.
3. When you have finished configuring the set of credentials, click Create.
Tip: To verify successful scan authentication on a specific asset, search the scan log for that
asset. If the message “A set of [service_type] administrative credentials have been verified.”
appears with the asset, authentication was successful.
If a set of credentials is not enabled for a site, the scan will not attempt authentication on target
assets with those credentials. Make sure to enable credentials if you want to use them.
1. To enable credentials for an existing site, click that site's Edit icon in the Sites table on the
Home page.
2. Click the Authentication link in the Site configuration.
The Scan Credentials table lists any site-specific credentials that were created for the site or
any shared credentials that were assigned to the site. For more information, see Managing
shared scan credentials on page 96.
3. Select the Enable check box for any set of credentials that you want to scan with.
4. Click the Save button for the site configuration.
Note: You cannot edit shared scan credentials in the Site Configuration panel. To edit shared
credentials, go to the Administration page and select the manage link for Shared scan
credentials. See Editing shared credentials that were previously created on page 99. You must
be a Global Administrator or have the Manage Site permission to edit shared scan credentials.
The ability to edit credentials can be very useful, especially if passwords change frequently. You
can only edit site-specific credentials in a Site Configuration panel.
1. To edit credentials for an existing site, click that site's Edit icon in the Sites table on the
Home page.
2. Click the Authentication tab in the Site configuration.
3. Click the hyperlink name of any set of credentials that you want to edit.
4. Change the configuration as desired. See the following topics for more information:
You can create and manage scan credentials that can be used in multiple sites. Using shared
credentials can save time if you need to perform authenticated scans on a high number of assets
in multiple sites that require the same credentials. It’s also helpful if these credentials change
often. For example, your organization’s security policy may require a set of credentials to change
every 90 days. You can edit that set in one place every 90 days and apply the changes to every
site where those credentials are used. This eliminates the need to change the credentials in every
site every 90 days.
To configure shared credentials, you must have a Global Administrator role or a custom role with
Manage Site permissions.
Note: To learn the differences between shared and site-specific credentials, see Shared
credentials vs. site-specific credentials on page 88.
After you create a set of shared scan credentials you can take the following actions to manage
them:
Tip: Think of a name and description that will help Site Owners recognize at a glance which
assets the credentials will be used for.
Configuring the account involves selecting an authentication method or service and providing all
settings that are required for authentication, such as a user name and password.
If you do not know what authentication service to select or what credentials to use for that service,
consult your network administrator.
You can verify that a target asset will authenticate a Scan Engine with the credentials you’ve
entered. It is a quick method to ensure that the credentials are correct before you run the scan.
Tip: To verify successful scan authentication on a specific asset, search the scan log for that
asset. If the message “A set of [service_type] administrative credentials have been verified.”
appears with the asset, authentication was successful.
For shared scan credentials, a successful authentication test on a single asset does not
guarantee successful authentication on all sites that use the credentials.
Note the result of the test. If it was not successful, review and change your entries as
necessary, and test them again.
7. Upon seeing a successful test result, configure any other settings as desired.
8. If you want to restrict the credentials to a specific asset or port, see the following section.
Otherwise, click Save.
If a particular set of credentials is only intended for a specific asset and/or port, you can restrict
the use of the credentials accordingly. Doing so can prevent scans from running longer than
necessary due to authentication attempts on assets that don’t recognize the credentials.
If you restrict credentials to a specific asset and/or port, they will not be used on other assets or
ports.
Specifying a port allows you to limit your range of scanned ports in certain situations. For
example, you may want to scan Web applications using HTTP credentials. To avoid scanning all
Web services within a site, you can specify only those assets with a specific port.
Note: If you do not enter a port number, the Security Console will use the default port for the
service. For example, the default port for CIFS is 445.
3. When you have finished configuring the set of credentials, click Save.
You can assign a set of shared credentials to one or more sites. Doing so makes them appear in
lists of available credentials for those site configurations. Site Owners still have to enable the
credentials in the site configurations. See Configuring scan credentials on page 87.
1. Go to the Site assignment page of the Shared Scan Credentials Configuration panel.
2. Select one of the following assignment options:
If you select the latter option, the Security Console displays a button for selecting sites.
4. Select the check box for each desired site, or select the check box in the top row for all sites.
Then click Add sites.
5. Configure any other settings as desired. When you have finished configuring the set of
credentials, click Save.
The Security Console displays a page with a table that lists each set of shared credentials
and related configuration information.
The ability to edit credentials can be very useful, especially if passwords change frequently.
The Security Console displays a page with a table that lists each set of shared credentials
and related configuration information.
You can use Nexpose to perform credentialed scans on assets that authenticate users with SSH
public keys.
This method, also known as asymmetric key encryption, involves the creation of two related keys,
or large, random numbers:
l a public key that any entity can use to encrypt authentication information
l a private key that only trusted entities can use to decrypt the information encrypted by its
paired public key
l The application supports SSH protocol version 2 RSA and DSA keys.
l Keys must be OpenSSH-compatible and PEM-encoded.
l RSA keys can range between 768 and 16384 bits.
l DSA keys must be 1024 bits.
This topic provides general steps for configuring an asset to accept public key authentication. For
specific steps, consult the documentation for the particular system that you are using.
The ssh-keygen process will provide the option to enter a passphrase. It is recommended that
you use a passphrase to protect the key if you plan to use the key elsewhere.
1. Run the ssh-keygen command to create the key pair, specifying a secure directory for storing
the new file.
This example involves a 2048-bit RSA key and incorporates the /tmp directory, but you
should use any directory that you trust to protect the file.
This command generates the private key file, id_rsa, and the public key file, id_rsa.pub.
2. Make the public key available for the application on the target asset.
3. Make sure that the target asset has a .ssh directory in the home directory of the scanning
user. If not, run the mkdir command to create it:
mkdir /home/[username]/.ssh
On the target asset, append the contents of the /tmp/id_rsa.pub file to the .ssh/authorized_
keys file in the home directory of a user with the appropriate access-level permissions that
are required for complete scan coverage.
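The key-generation and installation steps above can be sketched as follows. The /tmp staging path and the scanning user's home directory are assumptions; -N "" skips the passphrase prompt only to keep the sketch non-interactive, whereas a passphrase is recommended as noted above.

```shell
# On the machine where you generate the key pair:
ssh-keygen -t rsa -b 2048 -N "" -f /tmp/id_rsa   # creates /tmp/id_rsa and /tmp/id_rsa.pub

# On the target asset, as the scanning user: create .ssh if needed,
# append the public key, and restrict permissions so sshd accepts the file.
mkdir -p "$HOME/.ssh"
cat /tmp/id_rsa.pub >> "$HOME/.ssh/authorized_keys"
chmod 700 "$HOME/.ssh"
chmod 600 "$HOME/.ssh/authorized_keys"
```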
After generating the key pair, you provide the application with the private key when
configuring SSH public key authentication.
If you want to add SSH credentials while configuring a new site, click the Create site button on
the Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to add SSH credentials for an existing site, click that site's Edit icon in the Sites table
on the Home page.
Note: .ssh/authorized_keys is the default file for most OpenSSH- and Dropbear-based SSH
daemons. Consult the documentation for your Linux distribution to verify the appropriate file.
This authentication method is different from the method listed in the drop-down as Secure
Shell (SSH). This latter method incorporates passwords instead of keys.
You can elevate permissions for both Secure Shell (SSH) and Secure Shell (SSH) Public
Key services.
11. (Optional) Enter the appropriate user name. The user name can be empty for sudo
credentials. If you are using su credentials with no user name, the credentials will default to
root as the user name.
If the SSH credential provided is a root credential (user ID = 0), the permission elevation
credentials will be ignored. This applies to any account with user ID 0, whether it is named
root or otherwise.
12. When you have finished configuring the credentials, click Create if it is a new set, or Save if it
is a previously created set.
With SSH authentication you can elevate Scan Engine permissions to administrative or root
access, which is required for obtaining certain data. For example, Unix-based CIS benchmark
checks often require administrator-level permissions. Incorporating su (super-user), sudo
(super-user do), or a combination of these methods ensures that permission elevation is secure.
Permission elevation is an option available with the configuration of SSH credentials. Configuring
this option involves selecting a permission elevation method. Using sudo protects your
administrator password and the integrity of the server by not requiring an administrative
password. Using su requires the administrator password.
The option to elevate permissions appears when you create or edit SSH credentials in a site
configuration:
Permission elevation
l su: enables you to authenticate remotely using a non-root account without having to configure
your systems for remote root access through a service such as SSH. To authenticate using
su, enter the password of the user that you are trying to elevate permissions to. For example,
if you are trying to elevate permissions to the root user, enter the password for the root user in
the password field in Permission Elevation area of the Shared Scan Credential Configuration
panel.
l sudo: enables you to authenticate remotely using a non-root account without having to
configure your systems for remote root access through a service such as SSH. In addition, it
enables system administrators to explicitly control what programs an authenticated user can
run using the sudo command. To authenticate using sudo, enter the password of the user that
you are trying to elevate permission from. For example, if you are trying to elevate permission
to the root user and you logged in as jon_smith, enter the password for jon_smith in the
password field in Permission Elevation area of the Shared Scan Credential Configuration
panel.
l sudo+su: uses the combination of sudo and su together to gain information that requires
privileged access from your target assets. When you log on, the application will use sudo
authentication to run commands using su, without having to enter in the root password
anywhere. The sudo+su option will not be able to access the required information if access to
the su command is restricted.
l pbrun: uses BeyondTrust PowerBroker to allow Nexpose to run whitelisted commands as root
on Unix and Linux scan targets. To use this feature, you need to configure certain settings on
your scan targets. See the following section.
Before you can elevate scan permissions with pbrun, you will need to create a configuration file
and deploy it to each target host. The configuration provides the conditions that Nexpose needs
to scan successfully using this method:
l Nexpose can execute the user's shell, as indicated by the $SHELL environment variable, with
pbrun.
l pbrun does not require Nexpose to provide a password.
l pbrun runs the shell as root.
The following excerpt of a sample configuration file shows the settings that meet these
conditions:
RootUsers = {"user_name"};
RootProgs = {"bash"};
if (user in RootUsers && basename(command) in RootProgs) {
    runuser = "root";
    rungroup = "!g!";
    rungroups = {"!G!"};
    runcwd = "!~!";
    setenv("SHELL", "!!!");
    setenv("HOME", "!~!");
    setenv("USER", runuser);
    setenv("USERNAME", runuser);
    setenv("LOGNAME", runuser);
    setenv("PWD", runcwd);
    setenv("PATH", "/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin");
    CleanUp();
    accept;
}
Administrators of target assets can control and track the activity of su and sudo users in system
logs. When attempts at permission elevation fail, error messages appear in these logs so that
administrators can address and correct errors and run the scans again.
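For the su and sudo methods, the sudoers policy on the target must permit the scanning account to elevate. The entry below is a hypothetical sketch: the account name scanuser is an assumption, and the entry should be edited with visudo and adapted to your organization's policy.

```
# Hypothetical /etc/sudoers entry; "scanuser" is a placeholder account name.
# Allows the scanning account to run commands as root (password required,
# matching the sudo flow described above).
scanuser ALL=(root) ALL

# For sudo+su, access to the su command itself must not be restricted, e.g.:
# scanuser ALL=(root) /bin/su
```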
Nexpose can pass LM and NTLM hashes for authentication on target Windows or Linux
CIFS/SMB services. With this method, known as “pass the hash,” it is unnecessary to “crack” the
password hash to gain access to the service.
Several tools are available for extracting hashes from Windows servers. One solution is
Metasploit, which allows automated retrieval of hashes. For information about Metasploit, go to
www.rapid7.com.
When you have the hashes available, take the following steps:
If you want to add credentials while configuring a new site, click the Create site button on the
Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to add credentials for an existing site, click that site's Edit icon in the Sites table on the
Home page.
1. Select Authentication.
2. Click Add Credentials.
3. In the Add Credentials form, enter a name and description for a new set of credentials if
necessary.
4. Click Account under Add Credentials.
5. Select Microsoft Windows/Samba LM/NTLM Hash (SMB/CIFS) from the Service drop-down
list.
6. (Optional) Enter the appropriate domain.
7. Enter a user name.
8. Enter or paste in the LM hash followed by a colon (:) and then the NTLM hash. Make sure
there are no spaces in the entry. The following example includes hashes for the password
test:
01FC5A6BE7BC6929AAD3B435B51404EE:0CB6948805F797BF2A82807973B89537
9. Alternatively, using the NTLM hash alone is acceptable as most servers disregard the LM
response:
0CB6948805F797BF2A82807973B89537
10. When you have finished configuring the credentials, click Create if it is a new set or Save if it is
a previously created set.
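As a quick sanity check before pasting hashes into the form, you can verify that an entry matches the expected format: either an NTLM hash alone or an LM:NTLM pair, each hash exactly 32 hexadecimal characters with no spaces. This is a minimal sketch with an illustrative helper name.

```shell
# valid_hash_entry: returns success if the argument is either a single
# 32-hex-character NTLM hash or an LM:NTLM pair with no spaces.
valid_hash_entry() {
  echo "$1" | grep -Eq '^([0-9A-Fa-f]{32}:)?[0-9A-Fa-f]{32}$'
}

# The example hashes for the password "test" from the steps above:
valid_hash_entry "01FC5A6BE7BC6929AAD3B435B51404EE:0CB6948805F797BF2A82807973B89537" && echo "LM:NTLM format ok"
valid_hash_entry "0CB6948805F797BF2A82807973B89537" && echo "NTLM-only format ok"
```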
Windows PowerShell is a command-line shell and scripting language that is designed for system
administration and automation. As of PowerShell 2.0, you can use Windows Remote
Management to run commands on one or more remote computers. By using PowerShell and
Windows Remote Management with your scans, you can scan as though logged on locally to
each machine. PowerShell support is essential to some policy checks in SCAP 1.2, and more
efficiently returns data for some other checks.
In order to use Windows Remote Management with PowerShell, you must have it enabled on all
the machines you will scan. If you have a large number of Windows assets to scan, it may be
more efficient to enable it through group policy on your Windows domain.
For information on how to enable Windows Remote Management with PowerShell in a Windows
domain, the following resources may be helpful:
l http://blogs.msdn.com/b/wmi/archive/2009/03/17/three-ways-to-configure-winrm-
listeners.aspx
l http://www.briantist.com/how-to/powershell-remoting-group-policy/
l http://blogg.alltomdeployment.se/2013/02/howto-enable-powershell-remoteing-in-windows-
domain/
Additionally, when using Windows Remote Management with PowerShell via HTTP, you need to
allow unencrypted traffic.
Policies > Administrative Templates > Windows Components > Windows Remote
Management (WinRM) > WinRM Service
OR
If you want to add credentials while configuring a new site, click the Create site button on the
Home page.
If you want to add credentials for an existing site, click that site's Edit icon in the Sites table on the
Home page.
The application will automatically use PowerShell if the correct port is enabled, and if the correct
Microsoft Windows/Samba (SMB/CIFS) credentials are specified.
If you have PowerShell enabled, but don’t want to use it for scanning, you may need to define a
custom port list that does not include port 5985.
When scanning Windows assets, we recommend that you use domain or local administrator
accounts in order to get the most accurate assessment. Administrator accounts have the right
level of access, including registry permissions, file-system permissions, and either the ability to
connect remotely using Common Internet File System (CIFS) or Windows Management
Instrumentation (WMI) read permissions. In general, the higher the level of permissions for the
account used for scanning, the more exhaustive the results will be. If you do not have access, or
want to limit the use of domain or local administrator accounts within the application, then you can
use an account that has the following permissions:
l The account should be able to log on remotely and not be limited to Guest access.
l The account should be able to read the registry and file information related to installed
software and operating system information.
Note: If you are not using administrator permissions, the account will not be granted access to
administrative shares. In that case, non-administrative shares must be created to give the
account read access to the file system.
Nexpose and the network environment should also be configured in the following ways:
l For scanning domain controllers, you must use a domain administrator account because local
administrators do not exist on domain controllers.
l Make sure that no firewalls are blocking traffic from the Nexpose Scan Engine to port 135,
either 139 or 445 (see note), and a random high port for WMI on the Windows endpoint. You
can set the random high port range for WMI using WMI Group Policy Object (GPO) settings.
Note: Port 445 is preferred as it is more efficient and will continue to function when a name
conflict exists on the Windows network.
l If running an antivirus tool on the Scan Engine host, make sure that it whitelists the
application and all traffic that the application is sending to the network and receiving from the
network. Having the antivirus tool inspect this traffic can lead to performance issues and
potential false positives.
l Verify that the account being used can log on to one or more of the assets being assessed by
using the Test Credentials feature in the application.
l If you are using CIFS, make sure that assets being scanned have Remote Registry service
enabled. If you are using WMI, then the Remote Registry service is not required.
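The firewall requirement above can be spot-checked from the Scan Engine host with a simple TCP connect test. This bash sketch assumes target_host is replaced with a real Windows endpoint; it does not cover the random high WMI port.

```shell
# check_port: attempt a TCP connection to host ($1) on port ($2) with a
# 3-second timeout, using bash's /dev/tcp pseudo-device.
check_port() {
  if timeout 3 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 open on $1"
  else
    echo "port $2 blocked or closed on $1"
  fi
}

# Check the ports the Scan Engine needs: 135, plus 139 or 445.
for port in 135 139 445; do
  check_port target_host "$port"
done
```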
If your organization’s policies restrict or prevent any of the listed configuration methods, or if you
are not getting the results you expect, contact Technical Support.
For scanning Unix and related systems such as Linux, it is possible to scan most vulnerabilities
without root access. You will need root access for a few vulnerability checks, and for many policy
checks. If you plan to scan with a non-root user, you need to make sure the account has specified
permissions, and be aware that the non-root user will not find certain checks. The following
sections contain guidelines for what to configure and what can only be found with root access.
Due to the complexity of the checks and the fact they are updated frequently, this list is subject to
change.
l Elevate permissions so that you can run commands as root without using an actual root
account.
OR
l Configure your systems such that your non-root scanning user has permissions on specified
commands and directories.
One way to elevate scan permissions without using a root user or performing a custom
configuration is to use permission elevation, such as sudo or pbrun. These options require
specific configuration (for instance, for pbrun, you need to whitelist the user's shell), but do not
require you to customize permissions as described in Commands the application runs below. For
more information on permission elevation, see Authentication on Unix and related targets: best
practices on page 116.
The following section contains guidelines for what commands the application runs when
scanning. The vast majority of these commands can be run without root. As indicated above, this
list is subject to change as new checks are added.
The majority of the commands are required for one of the following:
Note: The application expects that the commands are part of the $PATH variable and there are
no non-standard $PATH collisions.
l ifconfig
l java
l sha1
l sha1sum
l md5
l md5sum
l awk
l grep
l egrep
l cut
l id
l ls
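On a target asset, you can confirm that the commands listed above resolve in $PATH for the scanning account; a minimal sketch:

```shell
# Report whether each command the application expects is resolvable in $PATH.
for cmd in ifconfig java sha1 sha1sum md5 md5sum awk grep egrep cut id ls; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: NOT in PATH"
  fi
done
```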
Nexpose will attempt to scan certain files, and will be able to perform the corresponding checks if
the user account has the appropriate access to those files. The following is a list of files or
directories that the account needs to be able to access:
For Linux, the application needs to read the following files, if present, to determine the
distribution:
On any Unix or related variants (such as Ubuntu or OS X), there are specific commands the
account needs to be able to perform in order to run specific checks. These commands should be
whitelisted for the account.
The account needs to be able to perform the following commands for certain checks:
l cat
l find
l mysqlaccess
l mysqlhotcopy
l sh
l sysctl
l dmidecode
l apt-get
l rpm
For the following types of distributions, the account needs execute permissions as indicated.
Debian-based Linux:
l uname
l dpkg
l egrep
l cut
l xargs
RPM-based Linux:
l uname
l rpm
l chkconfig
Mac OS X:
l /usr/sbin/softwareupdate
l /usr/sbin/system_profiler
l sw_vers
Solaris:
l showrev
l pkginfo
l ndd
Blue Coat:
l show version
Juniper:
l uname
l show version
VMware ESX/ESXi:
l vmware -v
l rpm
l esxupdate -a query || esxupdate query
AIX:
Cisco:
l show version (Note: this is used on multiple Cisco platforms, including IOS, PIX, ASA, and
IOS-XR)
FreeBSD:
l pkg_info
For certain vulnerability checks, root access is required. If you choose to scan with a non-root
user, be aware that these vulnerabilities will not be found, even if they exist on your system. The
following is a list of checks that require root access:
Note: You can search for the Vulnerability ID in the search bar of the Security Console to find the
description and other details.
Scanning Web applications at a granular level of detail is especially important, since publicly
accessible Internet hosts are attractive targets for attack. By giving the scan inside access with
authentication, you can inspect Web assets for critical vulnerabilities such as SQL injection and
cross-site scripting.
l Web site form authentication: Many Web authentication applications challenge users to log on
with forms. With this method, the Security Console retrieves a logon form from the Web
application. You specify credentials in that form that the Web application will accept. Then, a
Scan Engine submits those credentials to a Web site before scanning it. See Creating a logon
for Web site form authentication on page 125.
In some cases, it may not be possible to use a form. For example, a form may use a
CAPTCHA test or a similar challenge that is designed to prevent logons by computer
programs. Or, a form may use JavaScript, which is not supported for security reasons. If
these circumstances apply to your Web application, you may be able to authenticate the
application with the following method.
l Web site session authentication: The Scan Engine sends the target Web server an
authentication request that includes an HTTP header—usually the session cookie header—
from the logon page. See Creating a logon for Web site session authentication with HTTP
headers on page 129.
The authentication method you use depends on the Web server and authentication application
you are using. It may involve some trial and error to determine which method works better. It is
advisable to consult the developer of the Web site before using this feature.
Note: For HTTP servers that challenge users with Basic authentication or Integrated Windows
authentication (NTLM), configure a set of scan credentials using the service called Web Site
HTTP Authentication. To use this service, select Add Credentials and then Account in the
Authentication tab of the site configuration. See Configuring site-specific scan credentials on
page 90.
If you want to create a logon while configuring a new site, click the Create site button on the Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to create a logon for an existing site, click that site's Edit icon in the Sites table on
the Home page.
Tip: If you do not know any of the required information for configuring a Web form logon, consult
the developer of the target Web site.
5. In the Base URL text box, enter the main address from which all paths in the target Web site
begin.
The credentials you enter for logging on to the site will apply to any page on the site, starting
with the base URL. Include the protocol with the address. Example: http://example.com or
https://example.com
6. In the Logon Page URL text box, enter the page that contains the form for logging onto the
Web site. It should also include the protocol.
Example: http://example.com/logon.html
7. Click Next.
The Security Console contacts the Web server to retrieve any available forms. If it fails to
make contact or retrieve any forms, it displays a failure notification. If it retrieves forms, it
displays additional configuration steps.
1. From the Form drop-down list, select the form for logging onto the Web application. Based on
your selection, a table of fields appears for that particular form.
Note: If the original value was provided by the Web server, you must first clear the check box
before entering a new value. Only change the value to match what the server will accept at logon.
If you are not certain of what value to use, contact your Web administrator.
3. Click Save.
The Security Console displays the field table with any changed values according to your
edits. Repeat the editing steps for any other values that you want to change.
When all the fields are configured according to your preferences, continue with creating a regular
expression for logon failure and testing the logon:
1. Change the regular expression (regex) if you want to use one that is different from the default
value.
2. Click Test logon to make sure that the Scan Engine can successfully log on to the Web
application.
If logon failure occurs, change any settings as necessary and try again.
Tip: To find an appropriate regex, try logging onto the target Web site with incorrect credentials.
If the site displays a message such as Logon failed or Invalid credentials, you can use that string
for the regex.
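For example, if failed logons return a page containing a string like Logon failed or Invalid credentials, a pattern along these lines could serve as the failure regex (the exact strings are assumptions; substitute whatever your site actually returns):

```python
# Hypothetical logon-failure regex; adjust the alternatives to the
# strings your Web site actually displays on a failed logon.
import re

FAILURE_RE = re.compile(r"logon failed|invalid credentials", re.IGNORECASE)

def logon_failed(response_body):
    """True if the response body looks like a failed logon."""
    return FAILURE_RE.search(response_body) is not None
```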
When using HTTP headers to authenticate the Scan Engine, make sure that the session ID
header is valid between the time you save this ID for the site and when you start the scan. For
more information about the session ID header, consult your Web administrator.
Not every Web site supports the storage of cookies, so it is helpful to verify that header
authentication is possible on your target Web site before you use this method. Verification
involves exporting the cookie values from the target Web site. Various tools are available for this
task. For example, if you use Firefox as your browser, you can install the Cookie Exporter,
Cookie Importer, and Firebug add-ons. The following steps use Firefox as the browser for
illustration:
1. After installing Cookie Exporter, Cookie Importer, and Firebug, restart Firefox and enable
cookies.
2. Log onto the target Web site.
3. From the Firefox Tools menu, select Export Cookies... and save the exported cookies to a .txt
file.
4. Open the .txt file and delete all but the session cookies, since you’ll need those for
authentication. One header defines the credentials, and the other defines the session. Save
the updated file.
The exported cookies file with all but the session cookies removed.
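If you prefer to script step 4, note that in the Netscape cookies.txt format the fifth tab-separated field is the expiry timestamp, and session cookies carry an expiry of 0. A sketch based on that assumption (verify it against your own export):

```python
# Keep only session cookies (expiry field "0") from a Netscape-format
# cookies.txt export; comment and blank lines are dropped.
# Field order assumed: domain, flag, path, secure, expiry, name, value.
def session_cookies(lines):
    kept = []
    for line in lines:
        if not line.strip() or line.startswith("#"):
            continue
        fields = line.rstrip("\n").split("\t")
        if len(fields) >= 7 and fields[4] == "0":
            kept.append(line)
    return kept
```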
Creating a logon for Web site session authentication with HTTP headers 129
5. Restart the browser and clear your browser history.
6. From the Firefox Tools menu, select Import Cookies... Firefox displays a message indicating
that two cookies were imported.
7. Navigate to the target Web site. If header authentication is possible, you will bypass the logon
page, and you will immediately be authenticated.
After verifying that header authentication is possible, start the HTTP headers configuration:
If you want to configure HTTP headers while configuring a new site, click the Create site button
on the Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to configure HTTP headers for an existing site, click that site's Edit icon in the Sites
table on the Home page.
Continue with adding a header:
Tip: If you do not know any of the required information for configuring a Web form logon, consult
the developer of the target Web site.
1. In the HTTP Header Values table, click the Add Header hyperlink.
2. In the pop-up dialog box, enter a name/value pair for the header and click the Add Header
button.
l Name corresponds to a specific data type, such as the Web host name, Web server type,
session identifier, or supported languages. The name can only include letters and numerals. It
cannot include spaces or special characters.
l Value corresponds to the actual value string that the console sends to the server for that data
type. For example, the value for a session ID (SID) might be a uniform resource identifier
(URI).
For example, a name/value pair may specify a session ID. The name might be Session-id, and
the value might be a URI.
Name/value pair
If you are not sure what header to use, consult your Web administrator.
After you enter the name/value pair, it appears in the HTTP Header Values table.
HTTP Header Values table
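Outside the product, you can sanity-check a name/value pair by attaching it to a request and seeing whether the target site accepts it, as in the verification steps earlier. A minimal sketch with Python's standard library (the header name, value, and URL are placeholders):

```python
# Attach a hypothetical session header to a request and confirm it is set.
# "Session-id", its value, and the URL are placeholders; use the pair your
# Web application actually issues at logon.
import urllib.request

req = urllib.request.Request("http://example.com/protected/page")
req.add_header("Session-id", "0a1b2c3d4e5f")

# No network traffic yet; the header is simply part of the prepared request.
print(req.get_header("Session-id"))
```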
Continue with creating a regular expression for logon failure and testing the logon:
1. Change the regular expression (regex) if you want to use one that is different from the default
value.
The default value works in most logon cases. If you are unsure of what regular expression to
use, consult the Web administrator. For more information, see Using regular expressions
on page 633.
2. Click Test logon to make sure that the Scan Engine can successfully log on to the Web
application.
If logon failure occurs, change any settings as necessary and try again.
Setting up scan alerts
When a scan is in progress, you may want to know as soon as possible if certain things happen.
For example, you may want to know when the scan finds a severe or critical vulnerability or if the
scan stops unexpectedly. You can have the application alert you about scan events that are
particularly important to you.
This feature is not a required part of the site configuration, but it's a convenient way to keep track
of your scan when you don't have access to the Security Console Web interface or are simply not
checking activity on the console.
If you want to add an alert for an existing site, click that site's Edit icon in the Sites table on the
Home page.
If you want to add an alert while creating a new site, click the Create site button on the Home
page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
To set up alerts:
3. The Enable check box is selected by default to ensure that an alert is generated. If you
temporarily prefer not to receive the alert, you can clear the check box at any time to disable it
without having to delete it.
4. Enter a name for the alert.
5. Enter a value in the Maximum Alerts to Send field if you want to limit the number of this type of
alert that you receive during the scan.
6. Select the check boxes for types of events that you want to generate alerts for.
For example, if you select Paused and Resumed, an alert is generated every time the
application pauses or resumes a scan.
7. Select a severity level for vulnerabilities that you want to generate alerts for. For information
about severity levels, see Viewing active vulnerabilities on page 259.
8. Select the Confirmed, Unconfirmed, and Potential check boxes to receive those alerts.
9. Select a notification method from the drop-down box. Alerts can be sent via SMTP e-mail,
SNMP message, or Syslog message. Your selection will control which additional fields
appear below this box.
Creating an alert
Depending on your security policies and routines, you may schedule certain scans to run on a
monthly basis, such as patch verification checks, or on an annual basis, such as certain
compliance checks. It's a good practice to run discovery scans and vulnerability checks more
often—perhaps every week or two weeks, or even several times a week, depending on the
importance or risk level of these assets.
Scheduling scans requires care. Generally, it’s a good idea to scan during off-hours, when more
bandwidth is free and work disruption is less likely. On the other hand, your workstations may
automatically power down at night, or employees may take laptops home. In this case, you may
need to scan those assets during office hours. Make sure to alert staff of an imminent scan, as it
may tax network bandwidth or appear as an attack.
If you plan to run scans at night, find out if backup jobs are running, as these can eat up a lot of
bandwidth.
Your primary consideration in scheduling a scan is the scan window: How long will the scan take?
l A scan with an Exhaustive template will take longer than one with a Full Audit template for the
same number of assets. An Exhaustive template includes more ports in the scope of a scan.
l A scan with a high number of services to be discovered will take additional time.
l Checking for patch verification or policy compliance is time-intensive because of logon
challenges on the target assets.
l A site with a high number of assets will take longer to scan.
l A site with more live assets will take longer to scan than a site with fewer live assets.
l Network latency and loading can lengthen scan times.
l Scanning Web sites presents a whole subset of variables. A big, complex directory structure
or a high number of pages can take a lot of time.
Note: You cannot save a site configuration with overlapping schedules. Make sure any given
scan time doesn't even partially conflict with that of another.
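Two schedules conflict when their time windows intersect at any point, that is, when each starts before the other ends. A sketch of that check:

```python
# True if two scan windows overlap, even partially.
from datetime import timedelta

def windows_overlap(start_a, minutes_a, start_b, minutes_b):
    """Each window is [start, start + duration in minutes)."""
    end_a = start_a + timedelta(minutes=minutes_a)
    end_b = start_b + timedelta(minutes=minutes_b)
    return start_a < end_b and start_b < end_a
```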
By alternating scan templates in a site, you can check the same set of assets for different needs.
For example, you may schedule a recurring scan to run on a fairly routine basis with a template
that is specifically tuned for the assets in a particular site. Then you can schedule a monthly scan
to run with a special template for verifying Microsoft patches that have been applied after Patch
Tuesday. Or you can schedule a monthly or quarterly scan with an internal PCI template to
monitor compliance.
l If you want to set a schedule for an existing site, click that site's Edit icon in the Sites table on
the Home page.
l If you want to set a schedule while creating a new site, click the Create site button on the
Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
1. Click the Schedules tab of the Site Configuration.
2. Click Create Schedule.
3. Select the check box labeled Enable schedule.
The Security Console displays options for a start date and time, maximum scan duration in
minutes, and frequency of repetition.
OR
Select a date from the calendar that appears when you click inside the text box.
If you select the option to continue where the scan left off, the paused scan will continue at
the next scheduled start time.
If you select the option to restart the paused scan from the beginning, the paused scan will
stop and then start from the beginning at the next scheduled start time.
9. To make it a recurring scan, select the Repeat scan every check box. Select a number and
time unit.
10. Click Save.
The newly scheduled scan appears in the Scan Schedules table, which you can access by
clicking Manage Schedules.
Tip: You can edit a schedule by clicking its hyperlink in the table.
You may want to suspend a scheduled scan. For example, a particular set of assets may be
undergoing maintenance at a time when a scan is scheduled. You can enable and disable
schedules as your needs dictate.
Scan blackouts allow you to prevent scans from taking place during specified times when you
need to keep the network available for other traffic. For example, if your company makes
extensive backups on Fridays, you could create a recurring blackout period from 9 am to 9 pm
every Friday to prevent scans from running at that time.
l Global blackouts apply throughout your Nexpose workspace. Global blackouts are created
and managed from the Administration page. They can only be created and managed by
Global Administrators.
l Site-level blackouts apply only for specific sites. They are created and managed from the
Site Configuration. Site-level blackouts can be created and managed by Global
Administrators or by Site Managers for that site.
During a blackout period, any scheduled scans will not start. If anyone tries to start a manual scan
during a blackout period, they will see a message informing them of the blackout period. Global
Administrators will have the option to scan anyway. Others will be unable to proceed with the
scan.
If a scan is already in progress when a blackout period begins, the scan will be paused by the
system for the duration of the blackout period. The scan will resume once the blackout period is
over, in most cases. The exception is if a scheduled scan is paused by the system for a blackout
and reaches its maximum duration during the blackout period. In that case, the scan duration
takes precedence and the scan will not resume.
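In other words, a system-paused scan resumes only if its maximum-duration deadline has not yet passed when the blackout ends. As a sketch (names are hypothetical, not product API):

```python
# Sketch of the stated precedence: a scan paused by a blackout resumes
# afterward only if its maximum-duration deadline has not passed.
# Arguments can be datetimes or any comparable values.
def resumes_after_blackout(scan_deadline, blackout_end):
    return scan_deadline > blackout_end
```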
Note: Each scan takes approximately 30 seconds to shut down, and the scans shut down
sequentially. There will be network activity at the beginning of the blackout period while the scans
shut down. If you are creating a blackout period because you cannot have network activity during
a certain time period, set the blackout to begin earlier to allow for all the scans to shut down.
As previously mentioned, in order to create a site-level blackout, you must be a Site Manager for
that site, or a Global Administrator.
Before creating a new site-level blackout, you may want to review the existing site-level and
global blackouts that may apply to this site. Doing so will help you avoid creating overlapping or
conflicting blackouts.
In the Site Configuration, Site Managers and Global Administrators can edit site-level blackouts
and view global blackouts.
Note: If you modify a blackout that is currently in effect, it will be stopped and any running scans
will resume.
l To enable or disable a site-level blackout, select or clear the Enable check box. Global
blackouts can only be edited on the Administration page by Global Administrators.
l To edit a site-level blackout, click the start date. Edit the settings. Click Save on the Create
Blackout page and on the Site Configuration.
Before creating a global blackout, you may want to review the existing global blackouts in order to
avoid creating a new one that overlaps or conflicts.
Note: If you modify a blackout that is currently in effect, it will be stopped and any running scans
will resume.
1. Click the start date for the blackout you want to edit.
2. Edit the desired settings: Start date and time, maximum duration, whether to repeat the
blackout, and, if so, a repetition schedule. Select or clear the Enable blackout checkbox to
determine whether the blackout will take effect.
3. Click Save on the Manage Blackouts page.
4. Click Save Global Blackouts.
To manage disk space and ensure the integrity of scan results, administrators can delete
unused sites. Removing unused sites keeps inactive results from distorting scan data and risk
posture in reports. In addition, unused sites count against your license and can prevent the
addition of new sites. Regular site maintenance helps you manage your license so that you can
create new sites.
Note: To delete a site, you must have access to the site and have Manage Sites permission. The
Delete button is hidden if you do not have permission.
To delete a site:
l Click the Assets icon and then click on the number of sites at the top.
Note: You cannot delete a site that is being scanned. You will receive the message: “Scans are still in
progress. If you want to delete this site, stop all scans first.”
The Sites panel displays the sites that you can access based on your permissions.
All reports, scan templates, and scan engines are disassociated. Scan results are deleted.
If the delete process is interrupted, partially deleted sites will be automatically cleared.
It is not unusual for your organization’s assets to fluctuate in number, type, and state on a
fairly regular basis. As staff numbers grow or recede, so does the number of workstations.
Servers go on line and out of commission. Employees who are traveling or working from home
plug into the network at various times using virtual private networks (VPNs).
This fluidity underscores the importance of having a dynamic asset inventory. Relying on a
manually maintained spreadsheet is risky. There will always be assets on the network that are
not on the list. And, if they’re not on the list, they're not being managed. Result: added risk.
One way to manage a "dynamic inventory" is to run discovery scans on a regular basis. See
Configuring asset discovery on page 548. This approach is limited because a scan provides only a
snapshot of your asset inventory at the time of the scan. Another approach, Dynamic Discovery,
allows you to discover and track assets without running a scan. It involves initiating a connection
with a server or API that manages an asset environment, such as one for virtual machines, and
then receiving periodic updates about changes in that environment. This approach has several
benefits:
l As long as the discovery connection is active, the application periodically discovers assets "in
the background," without manual intervention on your part.
l You can create dynamic sites that update automatically based on dynamic asset discovery.
See Configuring a dynamic site on page 182. Whenever you scan these sites, you are
scanning the most current set of assets.
l You can concentrate scanning resources for vulnerability checks instead of running discovery
scans.
For connections to Amazon Web Services, DHCP log servers, and VMware servers, your
Nexpose license must enable the Dynamic Discovery option.
For ActiveSync connections that allow you to discover mobile devices, your license must enable
the Mobile option.
4. See if the Dynamic Discovery or Mobile feature is checked, depending on your needs.
An increasing number of users are connecting their personal mobile devices to corporate
networks. These devices increase and expand attack surfaces in your environment with
vulnerabilities that allow attackers to bypass security restrictions and perform unauthorized
actions or execute arbitrary code.
You can discover devices with Apple iOS or Google Android operating systems that are
connected to Microsoft Exchange over ActiveSync. All versions of iOS and Android are
supported.
The Security Console discovers mobile devices that are managed with Microsoft Exchange
ActiveSync protocol. The Dynamic Discovery feature currently supports Exchange versions 2010
You can connect to the mobile data via one of three Microsoft Windows server configurations:
The advantage of using one of the WinRM configurations is that asset data discovered through
one of these methods includes the most recent time that each mobile device was synchronized
with the Exchange server. This can be useful if you do not want your reports to include data from
old devices that are no longer in use on the network. You can create a dynamic asset group for
mobile devices with old devices filtered out. See Performing filtered asset searches on page 313.
Depending on which Windows server configuration you are using, you will need to take some
preliminary steps to prepare your target environment for discovery.
LDAP/AD
For the discovery connection, the Security Console requires credentials for a user with read
permission on the mobile device objects in Active Directory. The user must be a member of the
Organization Management Security Group in Microsoft Exchange or a user that has been
granted read access to the mobile device objects. This allows the Security Console to perform
LDAP queries.
1. Start the Active Directory Service Interfaces Editor (ADSI Edit) and connect to the AD
environment.
2. Select the OU that contains users with ActiveSync (Mobile) devices. In this example, the
Users OU contains users with ActiveSync devices.
3. Right-click the Users OU and select Properties.
4. Select the Security tab.
5. Click the Add button and add the user account that the Security Console will use for
connecting to the AD server.
6. Select the user and click Advanced.
7. Select the user and click Edit.
8. From the Applies to drop-down list, select Descendant msExchActiveSyncDevice objects.
Repeat the previous steps for any additional OUs containing ActiveSync (Mobile) devices.
The setup requirements and steps in the target environment are practically identical for
PowerShell and Office 365 configurations:
Note: The WinRM gateway may also be the Exchange server or Nexpose, if the Security
Console is running on Windows.
Note: Consult a Windows server administrator if you are unfamiliar with these procedures.
The WinRM gateway must have an available https WinRM listener at port 5986. Typical steps to
enable this include the following:
1. Verify that the server has a Server Authentication certificate installed that is not expired or
self-signed. For more information, see https://technet.microsoft.com/en-us/library/cc731183.aspx.
2. Enable the WinRM https listener:
C:\> winrm quickconfig -transport:https
3. Increase the WinRM memory limit with a PowerShell command (the minimum setting is 1024 MB,
but 2048 MB is recommended):
[PS] C:\> set-item wsman:localhost\Shell\MaxMemoryPerShellMB 2048
The following instructions are available for enabling WinRM for an account other than
administrator: http://docs.scriptrock.com/kb/using-winrm-without-admin-rights.html
If WinRM fails using a domain controller as the WinRM gateway, see the blog at
http://www.projectleadership.net/blogs_details.php?id=3154 for assistance. Typically, running
setspn -L [server_name] returns two WinRM configurations, but in this case none are
displayed.
If the PowerShell script fails with the error Process is terminated due to StackOverflowException,
the WinRM memory limit is insufficient. Increase the setting by running the PowerShell
command from step 3 above:
[PS] C:\> set-item wsman:localhost\Shell\MaxMemoryPerShellMB 2048
To verify and troubleshoot Exchange connectivity, open PowerShell on the Windows WinRM
gateway server with the WinRM credentials. Then run the following PowerShell commands with
your Exchange user credentials and the fully qualified domain name of your organization's
Exchange server:
$cred = Get-Credential
$s = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri
http://exchangeserver.domain.com/ -credential $cred
Import-PSSession $s
Get-ActiveSyncDevice
This will display a window for entering the credentials. If New-PSSession fails, the remote
PowerShell connection to the Exchange server failed.
If the Get-ActiveSyncDevice command returns no devices, your Exchange account may have
insufficient permission to perform the query.
The Office 365 configuration works exactly like the PowerShell configuration, except that it
communicates with Microsoft's Exchange server in the Cloud and connects to the gateway
somewhat differently via PowerShell.
After preparing your network for discovery, see Creating and managing Dynamic Discovery
connections on page 158.
To optimize discovery results on a continuing basis, observe some best practices for managing
your environment:
Test your ActiveSync environment to verify all components are working and communicating
properly. This will help improve your coverage.
Creating rules for ActiveSync devices in your network further expands your control. You can, for
example, create rules for approving quarantined devices.
Individual users in your organization may use multiple devices, each with its own partnership or
set of ActiveSync attributes created during the initial synchronization. Additionally, users routinely
upgrade from one version of a device to another, which also increases the potential number of
partnerships to support. Managing these partnerships is important for tracking ActiveSync
devices in your environment. It involves the removal of old devices from the Exchange server,
which can help create a more accurate mobile risk assessment.
If your organization uses Amazon Web Services (AWS) for computing, storage, or other
operations, Amazon may occasionally move your applications and data to different hosts.
In the AWS context, an instance is a copy of an Amazon Machine Image running as a virtual
server in the AWS cloud. The scan process correlates assets based on instance IDs. If you
terminate an instance and later recreate it from the same image, it will have a new instance ID.
That means that if you scan a recreated instance, the scan data will not be correlated with that of
the preceding incarnation of that instance. The two will appear as separate assets in the scan
results.
Before you initiate Dynamic Discovery and start scanning in an AWS environment, you need to:
l be aware of how your deployment of Nexpose components affects the way Dynamic
Discovery works
l create an AWS IAM user or IAM role
l create an AWS policy for your IAM user or IAM role
In configuring an AWS discovery connection, it is helpful to note some deployment and scanning
considerations for AWS environments.
It is a best practice to scan AWS instances with a distributed Scan Engine that is deployed within
the AWS network, also known as the Elastic Compute Cloud (EC2) network. This allows you to
scan private IP addresses and collect information that may not be available with public IP
addresses, such as internal databases. If you scan the AWS network with a Scan Engine
deployed inside your own network, and if any assets in the AWS network have IP addresses
identical to assets inside your own network, the scan will produce information about assets in
your own network with the matching addresses, not the AWS instances.
Note: The AWS network is behind a firewall, as are the individual instances or assets in the
network, so there are two firewalls to negotiate for AWS scans.
If the Security Console and Scan Engine that will be used for scanning AWS instances are
located outside of the AWS network, you will only be able to scan EC2 instances with Elastic IP
(EIP) addresses assigned to them. Also, you will not be able to manually edit the asset list in your
site configuration or in a manual scan window. Dynamic Discovery will include instances without
EIP addresses, but they will not appear in the asset list for the site configuration. Learn more
about EIP addresses.
If your Security Console is located outside the AWS network, the AWS Application Programming
Interface (API) must be able to recognize it as a trusted entity before allowing it to connect and
discover AWS instances. To make this possible, you will need to create an IAM user, which is an
AWS identity for the Security Console, with permissions that support Dynamic Discovery. When
you create an IAM user, you will also create an access key that the Security Console will use to
log onto the API.
Note: When you create an IAM user, make sure to select the option to create an access key ID
and secret access key. You will need these credentials when setting up the discovery connection.
You will have the option to download these credentials. Be sure to store them in a safe,
secure location.
Note: When you create an IAM user, make sure to select the option to create a custom policy.
If your Security Console is installed on an AWS instance and, therefore, inside the AWS network,
you need to create an IAM role for that instance. A role is simply a set of permissions. You will not
need to create an IAM user or access key for the Security Console.
Note: When you create an IAM role, make sure to select the option to create a custom policy.
When creating an IAM user or role, you will have to apply a policy to it. A policy defines your
permissions within the AWS environment. Amazon requires your AWS policy to include minimal
permissions for security reasons. To meet this requirement, select the option to create a custom
policy.
You can create the policy in JSON format using the editor in the AWS Management Console.
The following code sample indicates how the policy should be defined. (The Action list below is
completed with representative EC2 read-only describe permissions; verify the exact permissions
required for your deployment against the current Rapid7 and AWS documentation.)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1402346553000",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeImages",
        "ec2:DescribeAddresses"
      ],
      "Resource": [ "*" ]
    }
  ]
}
l Preparing the target VMware environment for Dynamic Discovery on page 155
l Creating and managing Dynamic Discovery connections on page 158
l Initiating Dynamic Discovery on page 167
l Using filters to refine Dynamic Discovery on page 170
l Configuring a dynamic site on page 182
An increasing number of high-severity vulnerabilities affect virtual targets and devices that
support them, such as the following:
l management consoles
l management servers
l administrative virtual machines
l guest virtual machines
l hypervisors
Merely keeping track of virtual assets and their various states and classifications is a challenge in
itself. To manage their security effectively you need to keep track of important details: For
example, which virtual machines have Windows operating systems? Which ones belong to a
particular resource pool? Which ones are currently running? Having this information available
keeps you in sync with the continual changes in your virtual asset environment, which also helps
you to manage scanning resources more efficiently. If you know what scan targets you have at
any given time, you know what and how to scan.
In response to these challenges the application supports dynamic discovery of virtual assets
managed by VMware vCenter or ESX/ESXi.
Once you initiate Dynamic Discovery it continues automatically as long as the discovery
connection is active.
l vCenter 4.1
l vCenter 4.1, Update 1
l vCenter 5.0
l ESX 4.1
l ESX 4.1, Update 1
l ESXi 4.1
l ESXi 4.1, Update 1
l ESXi 5.0
The preceding list of supported ESX(i) versions is for direct connections to standalone hosts. To
determine if the application supports a connection to an ESX(i) host that is managed by vCenter,
consult VMware’s interoperability matrix at http://partnerweb.vmware.com/comp_
guide2/sim/interop_matrix.php.
You must configure your vSphere deployment to communicate through HTTPS. To perform
Dynamic Discovery, the Security Console initiates connections to the vSphere application
program interface (API) via HTTPS.
If Nexpose and your target vCenter or virtual asset host are in different subnetworks that are
separated by a device such as a firewall, you will need to make arrangements with your network
administrator to enable communication, so that the application can perform Dynamic Discovery.
Make sure that port 443 is open on the vCenter or virtual machine host because the application
needs to contact the target in order to initiate the connection.
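As a quick sanity check before configuring the connection, you can confirm from the Security
Console host that the vCenter or ESX(i) target answers on port 443. A minimal sketch in Python
(not part of Nexpose; the host name shown is a placeholder):

```python
import socket

def https_port_open(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder host name -- substitute your vCenter or ESX(i) host:
# https_port_open("vcenter.example.com")
```

A successful TCP connection only shows the port is reachable; the Security Console still needs
valid credentials and an HTTPS-enabled vSphere API on that port.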
When creating a discovery connection, you will need to specify account credentials so that the
application can connect to vCenter or the ESX/ESXi host. Make sure that the account has
permissions at the root server level to ensure all target virtual assets are discoverable. If you
assign permissions on a folder in the target environment, you will not see the contained assets
unless permissions are also defined on the parent resource pool. As a best practice, it is
recommended that the account have read-only access.
Make sure that virtual machines in the target environment have VMware Tools installed on them.
Assets without VMware Tools can still be discovered and will appear in discovery results.
However, only assets with VMware Tools can be included in dynamic sites, which has significant
advantages for scanning. See Configuring a dynamic site on page 182.
l Preparing the target environment for Dynamic Discovery through DHCP Directory Watcher
method on page 157
l Creating and managing Dynamic Discovery connections on page 158
l Initiating Dynamic Discovery on page 167
l Using filters to refine Dynamic Discovery on page 170
l Configuring a dynamic site on page 182
This connection extends your visibility into your asset inventory by exposing assets that may not
be otherwise apparent. Scan Engines query DHCP server logs, which dynamically update with
fresh asset information every five seconds. The engines pass the results of these queries to the
Security Console. For each DHCP connection, you assign a specific Scan Engine.
On first connection, the method yields the current DHCP lease table. After that first connection,
the method discovers assets that the DHCP server has detected, or assets that have renewed
their IP addresses, since the connection was initiated.
You can leverage the number of distributed Scan Engines to communicate with multiple DHCP
servers and to connect with these servers in less accessible locations, such as behind firewalls or
on the network perimeter.
Note: The DHCP method only discovers assets that have not yet been discovered by Nexpose
through a different method or through a scan.
l Directory Watcher monitors a specified directory on a DHCP server host and uploads new
DHCP entries added to the directory at 10-second intervals. Use this method for log files that
roll over to new files, such as Microsoft DHCP or Internet Information Services (IIS) files.
l Syslog operates like a syslog server, listening on a TCP or UDP port to receive syslog
messages. Use this method if you are managing DHCP information with an Infoblox Trinzic
appliance.
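Conceptually, the Syslog method is a listener bound to the configured port that collects raw
messages for parsing. A simplified UDP sketch in Python (illustration only, not Nexpose code;
the real Scan Engine parses DHCP lease details out of each received message):

```python
import socket

def receive_syslog_messages(port: int, count: int = 1, timeout: float = 5.0) -> list:
    """Listen on a UDP port and return up to `count` raw syslog messages."""
    messages = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.bind(("0.0.0.0", port))
        while len(messages) < count:
            try:
                data, _addr = sock.recvfrom(8192)
            except socket.timeout:
                break  # stop collecting if no message arrives in time
            messages.append(data.decode("utf-8", errors="replace"))
    return messages
```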
Preparing the target environment for Dynamic Discovery through DHCP Directory Watcher
method
Note: The current implementation of DHCP discovery with the Directory Watcher method
supports Microsoft Server 2008 and 2012.
Tip: The default directory path of the DHCP log file in Windows 2008 is
%windir%\System32\Dhcp. The path in Windows 2012 is %systemroot%\System32\Dhcp.
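The Directory Watcher method can be pictured as a polling loop: snapshot the log directory, wait
the interval, and report any files that have appeared. A rough sketch in Python (illustration only,
not Nexpose code; the 10-second interval mirrors the behavior described above):

```python
import os
import time

def poll_for_new_logs(directory: str, known: set) -> tuple:
    """One polling cycle: return (newly added file names, updated known set)."""
    current = set(os.listdir(directory))
    return sorted(current - known), current

def watch_dhcp_logs(directory: str, interval: float = 10.0) -> None:
    """Poll `directory` indefinitely, reporting each newly added log file."""
    known = set(os.listdir(directory))
    while True:
        time.sleep(interval)
        new_files, known = poll_for_new_logs(directory, known)
        for name in new_files:
            print("new DHCP log file:", name)
```

Polling by file name suits logs that roll over to new files, such as the Microsoft DHCP and IIS
logs mentioned above.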
This action provides Nexpose the information it needs to contact a server or process that
manages the asset environment.
You must have Global Administrator permissions to create or manage Dynamic Discovery
connections. See the topic Managing users and authentication in the administrator's guide.
If you want to create a connection while configuring a new site, click the Create site button on the
Home page.
OR
Click the Create tab at the top of the page and then select Site from the drop-down list.
If you want to create a connection for an existing site, click that site's Edit icon in the Sites table on
the Home page.
Enter the information for a new connection with Exchange ActiveSync (LDAP):
1. Enter a unique name for the new connection on the General page.
2. Enter the name of the Active Directory (AD) server to which the Security Console will connect.
3. Select a protocol from the drop-down list.
LDAPS, which is LDAP over SSL, is the more secure option and is recommended if it is
enabled on your AD server.
4. Enter a user name and password for a member of the Organization Management Security
Group in Microsoft Exchange.
This account will enable the Security Console to discover mobile devices connected to the
AD server.
Enter the information for a new connection with Exchange ActiveSync (WinRM/PowerShell or
WinRM/Office 365):
1. Enter a unique name for the new connection on the General page.
2. Enter the name of the WinRM gateway server to which the Security Console will connect.
3. Enter a user name and password for an account that has WinRM permissions for the gateway
server.
4. Enter the fully qualified domain name of the Exchange server that manages the mobile device
information.
5. Enter a user name and password for an administrator account or a user account that has
View-Only Organizational Management or higher role of the Organization Management
Security Group in Microsoft Exchange.
6. Click Save. The connection appears in the Connection drop-down list, which you can view by
clicking Select Connection.
7. Continue with Initiating Dynamic Discovery on page 167.
1. Enter a unique name for the new connection on the General page.
2. From the drop-down list, select the geographic region where your AWS instances are
deployed.
3. If your Security Console and the Scan Engine you will use to scan the AWS environment are
deployed inside the AWS network, select the check box. This causes the application to scan
private IP addresses. See Inside or outside the AWS network? on page 153.
4. If you indicate that the Security Console and Scan Engine are inside the AWS network, the
Credentials link disappears from the left navigation pane. You do not need to configure
credentials, since the AWS API recognizes the IAM role of the AWS instance that the Security
Console is installed on. In this case, simply click Save and ignore the following steps.
5. Enter an Access Key ID and Secret Access Key with which the application will log on to the
AWS API.
6. Click Save. The connection appears in the Connection drop-down list, which you can view by
clicking Select Connection.
7. Continue with Initiating Dynamic Discovery on page 167.
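Behind these steps, the application authenticates to the EC2 API with the supplied access key
and enumerates instances in the selected region. The response it consumes follows the EC2
DescribeInstances shape; a sketch of extracting the discovery-relevant fields (Python, illustration
only; the sample response dictionary is hypothetical data):

```python
def extract_instances(describe_instances_response: dict) -> list:
    """Pull (instance ID, state, public IP) tuples from an EC2 DescribeInstances response."""
    instances = []
    for reservation in describe_instances_response.get("Reservations", []):
        for inst in reservation.get("Instances", []):
            instances.append((
                inst["InstanceId"],
                inst["State"]["Name"],
                inst.get("PublicIpAddress"),  # absent when no public (EIP) address is assigned
            ))
    return instances

# Hypothetical sample response:
sample = {"Reservations": [{"Instances": [
    {"InstanceId": "i-0abc", "State": {"Name": "running"}, "PublicIpAddress": "54.0.0.1"},
    {"InstanceId": "i-0def", "State": {"Name": "stopped"}},
]}]}
```

An instance with no public address corresponds to the EIP-less instances discussed earlier:
discoverable, but not scannable from outside the AWS network.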
1. Enter a unique name for the new connection on the General page.
Note: Syslog is the only available collection method for the Infoblox Trinzic event source.
The Security Console displays the General page of the Asset Discovery Connection panel.
l Exchange ActiveSync (LDAP) is for mobile devices managed by an Active Directory (AD)
server.
l Exchange ActiveSync (WinRM/PowerShell) is for mobile devices managed by an on-premise
Exchange server accessed with PowerShell.
l Exchange ActiveSync (WinRM/Office 365) is for mobile devices managed by a cloud-based
Exchange server running Microsoft Office 365.
l vSphere is for environments managed by VMware vCenter or ESX/ESXi.
l AWS is for environments managed by Amazon Web Services.
l DHCP Service is for assets that Scan Engines discover by collecting log data from DHCP
servers.
3. Enter the name of the Active Directory (AD) server to which the Security Console will connect.
4. Select a protocol from the drop-down list.
LDAPS, which is LDAP over SSL, is the more secure option and is recommended if it is
enabled on your AD server.
5. Click Credentials.
6. Enter a user name and password for a member of the Organization Management Security
Group in Microsoft Exchange.
This account will enable the Security Console to discover mobile devices connected to the
AD server.
7. Click Save.
8. Continue with Initiating Dynamic Discovery on page 167.
3. From the drop-down list, select the geographic region where your AWS instances are
deployed.
4. If your Security Console and the Scan Engine you will use to scan the AWS environment are
deployed inside the AWS network, select the check box. This causes the application to scan
private IP addresses. See Inside or outside the AWS network? on page 153.
5. If you indicate that the Security Console and Scan Engine are inside the AWS network, the
Credentials link disappears from the left navigation pane. You do not need to configure
credentials, since the AWS API recognizes the IAM role of the AWS instance that the Security
Console is installed on. In this case, simply click Save and ignore the following steps.
6. Click Credentials.
3. Enter a fully qualified domain name for the server that the Security Console will contact in
order to discover assets.
4. Enter a port number and select the protocol for the connection.
5. Click Credentials.
6. Enter a user name and password with which the Security Console will log on to the server.
Make sure that the account has access to any virtual machine that you want to discover.
7. Click Save.
8. Continue with Initiating Dynamic Discovery on page 167.
Note: Syslog is the only available data collection method for the Infoblox Trinzic event source.
1. Enter a unique name for the new connection on the General page.
2. Click Service.
3. On the Service page, select an event source type.
4. Select the Syslog collection method.
5. Select the number of the port that the syslog parser listens on for log entries related to asset
information.
6. Select the protocol for the port that the syslog parser listens on for log entries related to asset
information.
7. Select the Scan Engine that will collect the DHCP server log information.
8. Click Save.
9. Continue with Initiating Dynamic Discovery on page 167.
To view available connections or change a connection configuration take the following steps:
OR
On the Discovery Connections page, you can also delete connections or export connection
information to a CSV file, which you can view in a spreadsheet for internal purposes.
You cannot delete a connection that has a dynamic site or an in-progress scan associated with it.
Also, changing connection settings may affect asset membership of a dynamic site. See
Configuring a dynamic site on page 182. You can determine which dynamic sites are associated
with any connection by going to the Discovery Management page. See Monitoring Dynamic
Discovery on page 181.
If you change a connection by using a different account, it may affect your discovery results,
depending on which virtual machines the new account has access to. For example: You first
create a connection with an account that has access only to the advertising department’s virtual
machines. You then initiate discovery and create a dynamic site. Later, you update the
connection configuration with credentials for an account that has access only to the human
resources department’s virtual machines. Your dynamic site and discovery results will still include
the advertising department’s virtual machines; however, information about those machines will
no longer be dynamically updated. Information is only dynamically updated for machines to which
the connecting account has access.
This action involves having the Security Console contact the server or API and begin discovering
virtual assets. After the application performs initial discovery and returns a list of discovered
assets, you can refine the list based on criteria filters, as described in the following topic. To
perform Dynamic Discovery, you must have the Manage sites permission. See Configuring roles
and permissions in the administrator's guide.
1. After creating a connection (see Creating a connection in a site configuration on page 158),
click Select Connection.
2. Select the desired option from the drop-down list.
Note: Assets discovered through a dynamic connection also appear on the Assets page. See
Comparing scanned and discovered assets on page 237.
The Security Console displays the General page of the Asset Discovery Connection panel.
3. Select the appropriate discovery connection name from the drop-down list labeled
Connection.
4. Click Discover Assets.
Note: With new, changed, or reactivated discovery connections, the discovery process must
complete before new discovery results become available. There may be a slight delay before
new results appear in the Web interface.
After performing the initial discovery, the application continues to discover assets as long as the
discovery connection remains active. The Security Console displays a notification of any inactive
discovery connections in the bar at the top of the Security Console Web interface. You can also
check the status of all discovery connections on the Discovery Connections page. See Creating
and managing Dynamic Discovery connections on page 158.
If you create a discovery connection but don’t initiate discovery with that connection, or if you
initiate a discovery but the connection becomes inactive, you will see an advisory icon in the top,
left corner of the Web interface page. Roll over the icon to see a message about inactive
connections. The message includes a link that you can click to initiate discovery.
After Nexpose discovers assets, they also appear in the Discovered by Connection table on the
Assets page. See Locating and working with assets on page 235 for more information.
You can use filters to refine Dynamic Discovery results based on specific discovery criteria. For
example, you can limit discovery to assets that are managed by a specific resource pool or those
with a specific operating system.
Note: If a set of filters is associated with a dynamic site, and if you change filters to include more
assets than the maximum number of scan targets in your license, you will see an error message
instructing you to change your filter criteria to reduce the number of discovered assets.
Using filters has a number of benefits. You can limit the sheer number of assets that appear in the
discovery results table. This can be useful in an environment with a high number of virtual assets.
Also, filters can help you discover very specific assets. You can discover all assets within an IP
address range, all assets that belong to a particular resource pool, or all assets that are powered
on or off. You can combine filters to produce more granular results. For example, you can
discover all of Windows 7 virtual assets on a particular host that are powered on.
For every filter that you select, you also select an operator that determines how that filter is
applied. Then, depending on the filter and operator, you enter a string or select a value for that
operator to apply.
l Operating System
l User
l Last Sync Time (WinRM/PowerShell and WinRM/Office 365 connections only)
Operating System
With the Operating System filter, you can discover assets based on their operating systems. This
filter works with the following operators:
l contains returns all assets with operating systems whose names contain an entered string.
l does not contain returns all assets with operating systems whose names do not contain an
entered string.
User
With the User filter, you can discover assets based on their associated user accounts. This filter
works with the following operators:
l contains returns all assets with user accounts whose names contain an entered string.
l does not contain returns all assets with user accounts whose names do not contain an
entered string.
l is returns all assets with user accounts whose names match an entered string exactly.
l is not returns all assets with user accounts whose names do not match an entered string.
l starts with returns all assets with user accounts whose names begin with the same characters
as an entered string.
Note: This filter is only available with WinRM/PowerShell and WinRM/Office 365 Dynamic
Discovery connections.
Last Sync Time
With the Last Sync Time filter, you can track mobile devices based on the most recent time they
synchronized with the Exchange server. This filter can be useful if you do not want your reports to
include stale device information. It works with the following operators:
l earlier than returns all mobile devices that synchronized earlier than a number of preceding
days that you enter in a text box.
l within the last returns all mobile devices that synchronized within a number of preceding days
that you enter in a text box.
l Availability Zone
l Guest OS family
l Instance ID
l Instance Name
l Instance state
l Instance Type
l Region
Availability Zone
With the Availability Zone filter, you can discover assets located in specific Availability Zones. This
filter works with the following operators:
l contains returns all assets that belong to Availability Zones whose names contain an entered
string.
l does not contain returns all assets that belong to Availability Zones whose names do not
contain an entered string.
Guest OS family
With the Guest OS family filter, you can discover assets that have, or do not have, specific
operating systems. This filter works with the following operators:
l contains returns all assets that have operating systems whose names contain an entered
string.
l does not contain returns all assets that have operating systems whose names do not contain
an entered string.
Instance ID
With the Instance ID filter, you can discover assets that have, or do not have, specific Instance
IDs. This filter works with the following operators:
l contains returns all assets whose instance IDs contain an entered string.
l does not contain returns all assets whose instance IDs do not contain an entered string.
Instance name
With the Instance Name filter, you can discover assets that have, or do not have, specific
instance names. This filter works with the following operators:
l is returns all assets whose instance names match an entered string exactly.
l is not returns all assets whose instance names do not match an entered string.
l contains returns all assets whose instance names contain an entered string.
l does not contain returns all assets whose instance names do not contain an entered string.
l starts with returns all assets whose instance names begin with the same characters as an
entered string.
Instance state
With the Instance state filter, you can discover assets (instances) that are in, or are not in, a
specific operational state. This filter works with the following operators:
l is returns all assets that are in a state selected from a drop-down list.
l is not returns all assets that are not in a state selected from a drop-down list.
Instance type
With the Instance type filter, you can discover assets that are, or are not, a specific instance type.
This filter works with the following operators:
l is returns all assets that are a type selected from a drop-down list.
l is not returns all assets that are not a type selected from a drop-down list.
IP address range
With the IP address range filter, you can discover assets that have IP addresses, or do not have
IP addresses, within a specific range. This filter works with the following operators:
l is returns all assets with IP addresses that fall within the entered IP address range.
l is not returns all assets whose IP addresses do not fall within the entered IP address range.
When you select the IP address range filter, you will see two blank fields separated by the word
to. Enter the start of the range in the left field, and end of the range in the right field. The format for
the IP addresses is a “dotted quad.” Example: 192.168.2.1 to 192.168.2.254
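The range check amounts to a comparison of dotted-quad addresses. A sketch in Python's
standard-library terms (illustration only, not Nexpose code; inclusive endpoints are assumed
here):

```python
import ipaddress

def in_ip_range(address: str, start: str, end: str) -> bool:
    """Return True if a dotted-quad address falls inclusively between start and end."""
    addr = ipaddress.IPv4Address(address)
    return ipaddress.IPv4Address(start) <= addr <= ipaddress.IPv4Address(end)

# Using the example range above:
# in_ip_range("192.168.2.100", "192.168.2.1", "192.168.2.254") -> True
```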
Region
With the Region filter, you can discover assets that are in, or are not in, a specific geographic
region. This filter works with the following operators:
l is returns all assets that are in a region selected from a drop-down list.
l is not returns all assets that are not in a region selected from a drop-down list.
Regions include Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), EU
(Ireland), and South America (Sao Paulo).
l Cluster
l Datacenter
l Guest OS family
l Host
l IP address range
l Power state
l Resource pool path
l Virtual machine name
Cluster
With the Cluster filter, you can discover assets that belong, or don’t belong, to specific clusters.
This filter works with the following operators:
l is returns all assets that belong to clusters whose names match an entered string exactly.
l is not returns all assets that belong to clusters whose names do not match an entered string.
l contains returns all assets that belong to clusters whose names contain an entered string.
l does not contain returns all assets that belong to clusters whose names do not contain an
entered string.
l starts with returns all assets that belong to clusters whose names begin with the same
characters as an entered string.
Datacenter
With the Datacenter filter, you can discover assets that are managed, or are not managed, by
specific datacenters. This filter works with the following operators:
l is returns all assets that are managed by datacenters whose names match an entered string
exactly.
l is not returns all assets that are managed by datacenters whose names do not match an
entered string.
Guest OS family
With the Guest OS family filter, you can discover assets that have, or do not have, specific
operating systems. This filter works with the following operators:
l contains returns all assets that have operating systems whose names contain an entered
string.
l does not contain returns all assets that have operating systems whose names do not contain
an entered string.
Host
With the Host filter, you can discover assets that are guests, or are not guests, of specific host
systems. This filter works with the following operators:
l is returns all assets that are guests of hosts whose names match an entered string exactly.
l is not returns all assets that are guests of hosts whose names do not match an entered string.
l contains returns all assets that are guests of hosts whose names contain an entered string.
l does not contain returns all assets that are guests of hosts whose names do not contain an
entered string.
l starts with returns all assets that are guests of hosts whose names begin with the same
characters as an entered string.
IP address range
With the IP address range filter, you can discover assets that have IP addresses, or do not have
IP addresses, within a specific range. This filter works with the following operators:
l is returns all assets with IP addresses that fall within the entered IP address range.
l is not returns all assets whose IP addresses do not fall within the entered IP address range.
When you select the IP address range filter, you will see two blank fields separated by the word
to. Enter the start of the range in the left field, and end of the range in the right field. The format for
the IP addresses is a “dotted quad.” Example: 192.168.2.1 to 192.168.2.254
Power state
With the Power state filter, you can discover assets that are in, or are not in, a specific power
state. This filter works with the following operators:
l is returns all assets that are in a power state selected from a drop-down list.
l is not returns all assets that are not in a power state selected from a drop-down list.
Resource pool path
With the Resource pool path filter, you can discover assets that belong, or do not belong, to
specific resource pool paths. This filter works with the following operators:
l contains returns all assets that are supported by resource pool paths whose names contain an
entered string.
l does not contain returns all assets that are supported by resource pool paths whose names
do not contain an entered string.
You can specify any level of a path, or you can specify multiple levels, each separated by a
hyphen and right arrow: ->. This is helpful if you have resource pool path levels with identical
names.
For example, you may have two resource pool paths with the following levels:
Human Resources
Management
Workstations
Advertising
Management
Workstations
The virtual machines that belong to the Management and Workstations levels are different in
each path. If you only specify Management in your filter, the application will discover all virtual
machines that belong to the Management and Workstations levels in both resource pool paths.
However, if you specify Advertising -> Management -> Workstations, the application will only
discover virtual assets that belong to the Workstations pool in the path with Advertising as the
highest level.
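The matching behavior in this example can be sketched as a substring test against the fully joined
path, with levels separated by -> (Python, illustration only; the pool names are the hypothetical
ones from the example):

```python
def pool_path_matches(path_levels, filter_string):
    """Apply the `contains` operator to a resource pool path joined with ` -> `."""
    return filter_string in " -> ".join(path_levels)

pools = [
    ["Human Resources", "Management", "Workstations"],
    ["Advertising", "Management", "Workstations"],
]

# "Management" matches both paths; the fully qualified filter matches only one.
broad = [p for p in pools if pool_path_matches(p, "Management")]
qualified = [p for p in pools if pool_path_matches(p, "Advertising -> Management -> Workstations")]
```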
Virtual machine name
With the Virtual machine name filter, you can discover assets that have, or do not have, a specific
name. This filter works with the following operators:
l Host name
l IP address
l MAC address
Host name
With the Host name filter, you can discover assets based on host names. This filter works with the
following operators:
l is returns all assets with host names that match an entered string exactly.
l is not returns all assets with host names that do not match an entered string.
l contains returns all assets with host names that contain an entered string.
l does not contain returns all assets with host names that do not contain an entered string.
l starts with returns all assets with host names that begin with the same characters as an
entered string.
IP address range
With the IP address range filter, you can discover assets that have IP addresses, or do not have
IP addresses, within a specific range. This filter works with the following operators:
l is returns all assets with IP addresses that fall within the entered IP address range.
l is not returns all assets whose IP addresses do not fall within the entered IP address range.
MAC address
With the MAC address filter, you can discover assets based on MAC addresses. This filter works
with the following operators:
l is returns all assets with MAC addresses that match an entered string exactly.
l is not returns all assets with MAC addresses that do not match an entered string.
l contains returns all assets with MAC addresses that contain an entered string.
l does not contain returns all assets with MAC addresses that do not contain an entered string.
l starts with returns all assets with MAC addresses that begin with the same characters as an
entered string.
If you use multiple filters, you can have the application discover assets that match all the criteria
specified in the filters, or assets that match any of the criteria specified in the filters.
The difference between these options is that the all setting only returns assets that match the
discovery criteria in all of the filters, whereas the any setting returns assets that match any given
filter. For this reason, a search with all selected typically returns fewer results than any.
For example, a target environment includes 10 assets. Five of the assets run Ubuntu, and their
names are Ubuntu01, Ubuntu02, Ubuntu03, Ubuntu04, and Ubuntu05. The other five run
Windows, and their names are Win01, Win02, Win03, Win04, and Win05. Suppose you create
two filters. The first discovery filter is an operating system filter, and it returns a list of assets that
run Windows. The second filter is an asset filter, and it returns a list of assets that have “Ubuntu”
in their names.
If you discover assets with the two filters using the all setting, the application discovers assets that
run Windows and have “Ubuntu” in their asset names. Since no such assets exist, no assets will
be discovered. However, if you use the same filters with the any setting, the application discovers
assets that run Windows or have “Ubuntu” in their names. Five of the assets run Windows, and
the other five assets have “Ubuntu” in their names. Therefore, the result set contains all of the
assets.
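The ten-asset example above maps directly onto Python's built-in any and all (illustration only;
the asset names and operating systems are the hypothetical ones from the example):

```python
assets = (
    [{"name": f"Ubuntu0{i}", "os": "Ubuntu"} for i in range(1, 6)]
    + [{"name": f"Win0{i}", "os": "Windows"} for i in range(1, 6)]
)

filters = [
    lambda a: a["os"] == "Windows",   # operating system filter
    lambda a: "Ubuntu" in a["name"],  # asset name filter
]

matched_all = [a for a in assets if all(f(a) for f in filters)]  # empty: no asset satisfies both
matched_any = [a for a in assets if any(f(a) for f in filters)]  # all ten assets satisfy one
```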
Note: If a virtual asset doesn’t have an IP address, it can only be discovered and identified by its
host name. It will appear in the discovery results, but it will not be added to a dynamic site. Assets
without IP addresses cannot be scanned.
After you initiate discovery as described in the preceding section, and the Security Console
displays the results table, take the following steps to configure and apply filters:
A new filter row appears. Set up the new filter as described in the preceding step.
6. Add more filters as desired. To delete any filter, click the appropriate - icon.
After you configure the filters, you can apply them to the discovery results.
1. Select the option to match any or all of the filters from the drop-down list below the filters.
2. Click Filter.
The discovery results table now displays assets based on filtered discovery.
Since discovery is an ongoing process as long as the connection is active, you may find it useful
to monitor events related to discovery. The Discovery Statistics page includes several informative
tables:
l Assets lists the number of currently discovered virtual machines, hosts, data centers, and
discovery connections. It also indicates how many virtual machines are online and offline.
l Dynamic Site Statistics lists each dynamic site, the number of assets it contains, the number of
scanned assets, and the connection through which discovery is initiated for the site’s assets.
l Events lists every relevant change in the target discovery environment, such as virtual
machines being powered on or off, renamed, or being added to or deleted from hosts.
Dynamic Discovery is not meant to enumerate the host types of virtual assets. The application
categorizes each asset it discovers as a host type and uses this categorization as a filter in
searches for creating dynamic asset groups. See Performing filtered asset searches on page
313. Possible host types include Virtual machine and Hypervisor. The only way to determine the
host type of an asset is by performing a credentialed scan. So, any asset that you discover
through Dynamic Discovery and do not scan with credentials will have an Unknown host type, as
displayed on the scan results page for that asset. Dynamic Discovery only finds virtual assets, so
dynamic sites will only contain virtual assets.
If you attempt to create a dynamic site based on a number of discovered assets that exceeds
the maximum number of scan targets in your license, you will see an error message
instructing you to change your filter criteria to reduce the number of discovered assets. See
Using filters to refine Dynamic Discovery on page 170.
After creating and initiating a discovery connection, you can continue configuring a site.
If you have created and initiated a discovery connection outside of a site configuration, click the
Create Dynamic Site button on the Discovery page. The Security Console displays the Site
Configuration. Continue configuring the site.
As long as the connection for an initiated Dynamic Discovery is active, asset membership in a
dynamic site is subject to change whenever changes occur in the target environment.
You can also change asset membership by changing the discovery connection or filters. See
Using filters to refine Dynamic Discovery on page 170.
1. Click the Edit icon for the site you want to edit in the Sites table on the Home page.
2. Select the Assets tab.
3. The Connection option for specifying assets is already selected. Do not change it.
4. Click Select Connection.
5. Select a different connection from the drop-down list if desired.
6. Click the Filters button to change asset membership if desired. See Using filters to refine
Dynamic Discovery on page 170.
7. Click Save in the Site Configuration.
Another benefit is that if the number of discovered assets in the dynamic site exceeds the
maximum number of scan targets in your license, you will see a warning to that effect before
running a scan. This ensures that you do not unknowingly run a scan that excludes certain assets.
If you run a scan without adjusting the asset count, the scan will target only assets that were
previously discovered. You can adjust the asset count by refining the discovery filters for your site.
If you change the discovery connection or discovery filter criteria for a dynamic site that has been
scanned, asset membership will be affected in the following ways:
l All assets that have not been scanned and no longer meet the new discovery filter criteria will
be deleted from the site list.
l All assets that have been scanned and have scan data associated with them will remain on the
site list, whether or not they meet the new discovery filter criteria.
l All newly discovered assets that meet the new filter criteria will be added to the dynamic site
list.
Rapid7's Project Sonar is an initiative to improve security through the active analysis of public
networks. The data-gathering involves running non-invasive scans of IPv4 addresses across
internet-facing systems, organizing the results, and sharing the data with the information security
community.
You can import information about assets scanned by the Project Sonar lab by using a Dynamic
Discovery connection, filtered by domain name.
Because its data is gathered from an external vantage point, Project Sonar can provide a useful
"outsider" view of your environment. With its scope of discovery, Sonar may find assets belonging
to your organization that you previously were not aware of, or had not been tracking. This can
expand your view of your exposure surface area. This is a simple way to gain an initial snapshot
of all your public-facing assets.
After using the Dynamic Discovery feature to import assets found by Project Sonar, you can view,
sort, and tag these assets as you would any other assets in your Security Console database.
You can create granular subsets of these assets, using filtered asset searches and dynamic
asset groups. For example, you can organize the assets according to IP address ranges. Or, if
the assets in a given domain have a certain naming convention, you can separate out all assets
that have common elements in their host names. This refinement of data helps you make better
sense of what you are looking at, which is particularly useful if a given domain includes a high
number of assets.
Using dynamic asset groups, you can distribute focused reports to specific members of your
security team who may be responsible for subsets of the assets, assuming the discovered
domain belongs to your organization.
You can further analyze these assets for vulnerabilities or policy compliance by scanning the
dynamic asset groups.
Sonar data refreshes approximately on a weekly basis, so the asset information you retrieve at
any time may be "stale".
The Security Console connection discovers a maximum of 10,000 assets per dynamic site. These
are the first 10,000 assets returned by the lab servers,
and the list is subject to change at any time.
Sonar data should not be considered a definitive or comprehensive view. It is a starting point for
understanding a public Internet presence. You can use this information to get a closer look at the
environment and its exposure surface area.
Use the following workflow to get the most value out of the asset data you import from Sonar
Labs.
Your Nexpose installation must have the Dynamic Discovery feature enabled. Also, the Security Console
must be able to contact the Sonar Labs server (https://sonar.labs.rapid7.com) via port 443.
You can verify that the connection is live by taking the following steps:
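As a supplementary check from the command line, a short script can confirm that port 443 on the Sonar Labs server is reachable from the Security Console host. This is a sketch, not a Nexpose tool; it simply attempts a TCP connection:

```python
import socket

def port_reachable(host, port, timeout=5.0):
    """Attempt a TCP connection; return True if the port accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Run this from the Security Console host.
if port_reachable("sonar.labs.rapid7.com", 443):
    print("Sonar Labs server reachable on port 443")
else:
    print("Cannot reach sonar.labs.rapid7.com:443 -- check firewall or proxy rules")
```

A False result suggests a firewall, proxy, or DNS issue between the console and the Sonar Labs server.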
The following steps involve initiating a connection to the Sonar Labs server and then saving the
discovered assets in a site configuration.
Nexpose establishes the connection and queries the Sonar server. A table appears and lists
each asset that matches the query.
Note: It is unnecessary to add credentials or change the scan template for this site. You can,
however, create a scan schedule if you want to. See Scheduling scans on page 135.
The site appears in the Sites table on the Home page. At this point, the asset data is not yet
imported into the Security Console database. That happens when you scan the dynamic site you
have just created.
When you run a scan of the site based on the Sonar connection, the scan queries the Sonar
server and imports the Sonar data into the Security Console so that you can view, sort, and tag it
on the Assets page. This scan does not perform any checks, which is why it is unnecessary to
create credentials or change the scan template.
When the scan completes, the assets appear in the Scanned table of the Assets page with no
vulnerability counts because the scan did not include any checks.
By creating Dynamic Asset Groups based on filtered searches, you can organize your assets into
manageable subsets. And if you want to scan Sonar-discovered assets for vulnerabilities, you will
have a better idea of what you are scanning.
For example, by searching for assets based on host name, you can find all the assets that have
your organization's domain name. This helps you avoid scanning assets that do not belong to
your organization.
Warning: It is strongly recommended that you do not perform vulnerability scans on assets that
do not belong to your organization. Scans can be perceived as attacks and can be otherwise
disruptive to business operations.
By searching for assets based on IP address ranges, operating systems, or tags, you can isolate
specific assets to make sure that you scan the assets you need to assess for vulnerabilities or
policy compliance and that you avoid others.
For more information, see Performing filtered asset searches on page 313 and Creating a
dynamic or static asset group from asset searches on page 334.
After you create dynamic asset groups based on your Sonar data, you can create a site to scan
these assets for vulnerabilities or policy compliance.
5. Configure the rest of the site according to your preferences. See Creating and editing sites on
page 56.
Virtual environments are extremely fluid, which makes it difficult to manage them from a security
perspective. Assets go online and offline continuously. Administrators re-purpose them with
different operating systems or applications, as business needs change. Keeping track of virtual
assets is a challenge, and enforcing security policies on them is an even greater challenge.
The vAsset Scan feature addresses this challenge by integrating Nexpose scanning with the
VMware NSX network virtualization platform. The integration gives a Scan Engine direct access
to an NSX network of virtual assets by registering the Scan Engine as a security service within
that network. This approach provides several benefits:
l The integration automatically creates a Nexpose site, eliminating manual site configuration.
l The integration eliminates the need for scan credentials. As an authorized security service in
the NSX network, the Scan Engine does not require additional authentication to collect
extensive data from assets.
l Security management controls in NSX use scan results to automatically apply security policies
to assets, saving time for IT or security teams. For example, if a scan flags a vulnerability that
violates a particular policy, NSX can quarantine the affected asset until appropriate
remediation steps are performed.
Note: The vAsset Scan feature is a different feature and license option from vAsset Discovery,
which is related to the creation of dynamic sites that can later be scanned. For more information
about that feature, see Managing dynamic discovery of assets on page 146.
When you create a site through this NSX integration process, you cannot do the following actions
in the Site Configuration:
l Edit assets, which are dynamically added as part of the integration process.
l Change the Scan Engine, which is automatically configured as part of the integration process.
l Change the assigned scan template, which is Full Audit.
l Add scan credentials, which are unnecessary because the integration provides Nexpose with
the depth of access to target assets that credentials would otherwise provide.
To use the vAsset Scan feature, you need the following components:
l a Nexpose installation with the vAsset Scan feature enabled in the license
l VMware ESXi 5.5 hosts
l VMware vCenter Server 5.5
l VMware NSX 6.0 or 6.1
l Guest Introspection deployed
l VMware Tools installed with VMCI drivers
4. In the Installation pane, select the Service Deployments tab. Click the green plus sign ( )
and then select the check box for Guest Introspection. Then click the Next button to configure
the deployment.
1. In the Select clusters pane, select a datacenter and cluster to deploy the Guest Introspection
on. Then click Next.
2. In the Select storage pane, select a data store for the VMware Endpoint. Then click Next.
3. In the Configure management network pane, select a network and IP assignment for the
VMware Endpoint. Then click Next.
4. In the Ready to complete pane, click Finish.
Click the Update link on the Administration page under NSX Manager.
2. Verify the NexposeVASE.ovf file is accessible from the Security Console by typing the
following URL in your browser:
https://[Security_Console_IP_address]:3780/nse/ovf/NexposeVASE.ovf.
Nexpose must be registered with VMware NSX before it can be deployed into the virtual
environment.
5. On the Credentials page of the NSX Connection Manager panel, enter credentials for
Nexpose to use when connecting with NSX Manager.
6. Select the Callback IP address from the drop-down menu. If the Nexpose console has multiple
IP addresses, select the IP that can be reached by the NSX Manager.
Note: These credentials must be created on NSX in advance, and the user must have the NSX
Enterprise Administrator role.
This deployment authorizes the Scan Engine to run as a security service in NSX. It also
automatically creates a site in Nexpose.
5. In the Installation pane, click the green plus sign ( ) and then select the check box for
Rapid7 Nexpose Scan Engine. Then click the Next button to configure the deployment.
6. Select the cluster in which to deploy the Rapid7 Nexpose Scan Engine.
Note: One Scan Engine will be deployed to each host in the selected cluster.
7. Configure the deployment according to your environment settings. Then click Finish.
Note: The Service Status will display Warning while the Scan Engine is initializing.
This procedure involves creating a group of virtual machines for Nexpose to scan. You will apply
a security policy to this group in the following procedure.
This new policy applies the Scan Engine as a Guest Introspection service for the security group.
3. Click OK.
This machine will serve as a scan target to verify that the integration is operating correctly.
1. Power on a Windows Virtual Machine that has VMware Tools version 9.4.0 or later installed.
The rules of the policy will be enforced within the security group based on scan results.
For information about monitoring the scan, see Running a manual scan on page 204.
If you use Rapid7 AppSpider to scan your Web applications, you can import AppSpider data with
Nexpose scan data and reports. This allows you to view security information about your Web
assets side-by-side with your other network assets for more comprehensive assessment and
prioritization.
If you import the XML file on a recurring basis, you will build a cumulative scan history in Nexpose
about the referenced assets. This allows you to track trends related to those assets as you would
with any assets scanned in Nexpose.
Note: This import process works with AppSpider versions 6.4.122 or later.
1. Create a site if you want a dedicated site to include AppSpider data exclusively. See Creating
and editing sites on page 56.
Since you are creating the site to contain AppSpider scan results, you do not need to set up
scan credentials. You will need to include at least one asset, which is a requirement for
creating a site. However, it will not be necessary to scan this asset.
If you want to include AppSpider results in an existing site with assets scanned by Nexpose,
skip this step.
4. In the Site Summary table for that site, click the hypertext link labeled Import AppSpider
Assessment.
5. Click the button that appears, labeled Choose File. Find the VulnerabilitiesSummary.xml on
your local computer and click Open in Windows Explorer.
6. Click Import.
Note: Although you can include imported assets in dynamic asset groups, the data about these
imported assets is not subject to change with Nexpose scans. Data about imported assets only
changes with subsequent imports of AppSpider data.
Running an unscheduled scan at any given time may be necessary in various situations, such as
when you want to assess your network for a new zero-day vulnerability or verify a patch for that
same vulnerability. This section provides guidance for starting a manual scan and for useful
actions you can take while a scan is running:
To start a scan manually for a site right away from the Home page, click the Scan icon for a given
site in the Site Listing table of the Home page. Or click the Scan button that appears below the
table labeled Current Scans for All Sites.
Or, you can click the Scan button on the Sites page or on the page for a specific site.
Scanning a single asset at any given time can be useful. For example, a given asset may contain
sensitive data, and you may want to find out right away if it is exposed with a zero-day
vulnerability.
To scan a single asset, go to the page for that asset by linking to it from any Assets table on a site
page, asset group page, or any other pertinent location. Click the Scan asset now button that
appears below the asset information pane.
With asset linking enabled, an asset in multiple sites is regarded as a single entity. See Linking
assets across sites on page 628 for more information. If asset linking has been enabled in your
Nexpose deployment, be aware of how it affects the scanning of individual assets.
With asset linking, an asset will be updated with scan data in every site to which it belongs, even
if the user running the scan does not have access to some of those sites. For
example: A user wants to scan a single asset that belongs to two sites, Los Angeles and Belfast.
This user has access to the Los Angeles site, but not the Belfast site. But the scan will update the
asset in the Belfast site.
Blackouts are scheduled periods in which scans are prevented from running. With asset linking
enabled, if you attempt to scan an asset that belongs to any site with a blackout currently in effect,
the Security Console displays a warning and prevents the scan from starting. If you are a Global
Administrator, you can override the blackout.
When you start a manual scan, the Security Console displays the Start New Scan dialog box.
In the Manual Scan Targets area, select either the option to scan all assets within the scope of a
site, or to specify certain target assets. Specifying the latter is useful if you want to scan a
particular asset as soon as possible, for example, to check for critical vulnerabilities or verify a
patch installation.
Note: You can only manually scan assets that were specified as addresses or in a range.
If you select the option to scan specific assets, enter their IP addresses or host names in the text
box. Refer to the lists of included and excluded assets for the IP addresses and host names. You
can copy and paste the addresses.
Note: If you are scanning Amazon Web Services (AWS) instances, and if your Security Console
and Scan Engine are located outside the AWS network, you do not have the option to manually
specify assets to scan. See Inside or outside the AWS network? on page 153.
l If you are scanning a single asset that belongs to multiple sites, you can select the specific site
to scan it in. This can be useful in situations such as verification of a Patch Tuesday update on
a Windows asset.
l You can use a scan template other than the one assigned for the selected site. If, for example,
you've addressed an issue that caused the asset to fail a PCI scan, you can apply the
appropriate PCI template and confirm that the issue has been corrected.
l If you are scanning a site, you can use a Scan Engine other than the one assigned for the site.
If you know that the currently assigned engine is in use, you can switch to a free one. Or you
can change the perspective with which you will "see" the asset. For example, if the currently
assigned engine is a Rapid7 Hosted engine, which provides an "outsider" view of your
network, you can switch to a distributed engine located behind the firewall for an interior view.
When the scan starts, the Security Console displays a status page for the scan, which will display
more information as the scan continues.
When a scan starts, you can keep track of how long it has been running and the estimated time
remaining for it to complete. You can even see how long it takes for the scan to complete on an
individual asset. These metrics can be useful to help you anticipate whether a scan is likely to
complete within an allotted window.
You also can view the assets and vulnerabilities that the in-progress scan is discovering if you are
scanning with any of the following configurations:
l distributed Scan Engines (if the Security Console is configured to retrieve incremental scan
results)
l the local Scan Engine (which is bundled with the Security Console)
Viewing these discovery results can be helpful in monitoring the security of critical assets or
determining if, for example, an asset has a zero-day vulnerability.
OR
1. On the Home page, locate the Current Scan Listing for All Sites table.
2. In the table, locate the site that is being scanned.
3. In the Progress column, click the In Progress link.
When you click the progress link in any of these locations, the Security Console displays a
progress page for the scan.
At the top of the page, the Scan Progress table shows the scan’s current status, start date and
time, elapsed time, estimated remaining time to complete, and total discovered vulnerabilities. It
lists the number of assets that have been discovered, as well as the following asset information:
l Active assets are those that are currently being scanned for vulnerabilities.
l Completed assets are those that have been scanned for vulnerabilities.
l Pending assets are those that have been discovered, but not yet scanned for vulnerabilities.
These values appear below a progress bar that indicates the percentage of completed assets.
The bar is helpful for tracking progress at a glance and estimating how long the remainder of the
scan will take.
You can click the icon for the scan log to view detailed information about scan events. For more
information, see Viewing the scan log on page 215.
The Completed Assets table lists assets for which scanning completed successfully, failed due to
an error, or was stopped by a user.
The Incomplete Assets table lists assets for which the scan is pending, in progress, or has been
paused by a user. Additionally, any assets that could not be completely scanned because they
went offline during the scan are marked Incomplete when the entire scan job completes.
These tables list every asset's fingerprinted operating system (if available), the number of
vulnerabilities discovered on it, and its scan duration and status. You can click the address or
name link for any asset to view more details about it, such as all the specific vulnerabilities
discovered on it.
The table refreshes throughout the scan with every change in status. You can disable the
automatic refresh by clicking the icon at the bottom of the table. This may be desirable with scans
of large environments because the constant refresh can be a distraction.
The scan progress page also reports the status of the Scan Engine used for the site.
If you are scanning an asset group that is configured to use the Scan Engine most recently used
for each asset, you may see statuses reported for more than one Scan Engine. For more
information, see Determining how to scan each asset when scanning asset groups on page 72 .
l Unknown: The Scan Engine could not be contacted. You can check whether the Scan Engine
is running and reachable.
It is helpful to know the meaning of the various scan states listed in the Status column of the Scan
Progress table. While some of these states are fairly routine, others may point to problems that
you can troubleshoot to ensure better performance and results for future scans. It is also helpful
to know how certain states affect scan data integration or the ability to resume a scan. In the
Status column, a scan may appear to be in any one of the following states:
In progress: A scan is gathering information on a target asset. The Security Console is importing
data from the Scan Engine and performing data integration operations such as correlating assets
or applying vulnerability exceptions. In certain instances, if a scan’s status remains In progress for
an unusually long period of time, it may indicate a problem. See Determining if scans with normal
states are having problems on page 213.
Completed successfully: The Scan Engine has finished scanning the targets in the site, and the
Security Console has finished processing the scan results. If a scan has this state but there are
no scan results displayed, see Determining if scans with normal states are having problems on
page 213 to diagnose this issue.
Stopped: A user has manually stopped the scan before the Security Console could finish
importing data from the Scan Engine. The data that the Security Console had imported before
the stop is integrated into the scan database, whether or not the scan has completed for an
individual asset. You cannot resume a stopped scan. You will need to run a new scan.
In all cases, the Security Console processes results for targets that have a status of Completed
Successfully at the time the scan is paused. You can resume a paused scan manually.
Failed: A scan has been disrupted due to an unexpected event. It cannot be resumed. An
explanatory message will appear with the Failed status. You can use this information to
troubleshoot the issue with Technical Support. One cause of failure can be the Security Console
or Scan Engine going out of service. In this case, the Security Console cannot recover the data
from the scan that preceded the disruption.
Another cause could be a communication issue between the Security Console and Scan Engine.
The Security Console typically can recover scan data that preceded the disruption. You can
determine if this has occurred by one of the following methods:
l Check the connection between your Security Console and Scan Engine with an ICMP (ping)
request.
l Click the Administration tab and then go to the Scan Engines page. Click on the Refresh icon
for the Scan Engine associated with the failed scan. If there is a communication issue, you will
see an error message.
l Open the nsc.log file located in the \nsc directory of the Security Console and look for error-
level messages for the Scan Engine associated with the failure.
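The log check in the last bullet can be scripted. A minimal sketch that filters lines for error-level entries mentioning a given Scan Engine (the sample lines and bracketed field names are illustrative; the actual nsc.log format may vary by version):

```python
def engine_errors(log_lines, engine_name):
    """Return error-level log lines that mention the given Scan Engine."""
    return [line for line in log_lines
            if "[ERROR]" in line and engine_name in line]

# Hypothetical sample lines for illustration.
sample = [
    "2013-06-26T15:02:59 [INFO] [Scan: default:1] Starting scan",
    "2013-06-26T15:03:10 [ERROR] [Engine: chicago-engine] Connection refused",
]
for line in engine_errors(sample, "chicago-engine"):
    print(line)
```

In practice you would read the lines from the nsc.log file in the \nsc directory rather than from a hard-coded list.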
Aborted: A scan has been interrupted due to a crash or other unexpected event. The data that the
Security Console had imported before the scan was aborted is integrated into the scan database.
You cannot resume an aborted scan. You will need to run a new scan.
If a scan has an In progress status for an unusually long time, this may indicate that the Security
Console cannot determine the actual state of the scan due to a communication failure with the
Scan Engine. To test whether this is the case, try to stop the scan. If a communication failure has
occurred, the Security Console will display a message indicating that no scan with a given ID
exists.
If a scan has a Completed successfully status, but no data is visible for that scan, this may
indicate that the Scan Engine has stopped associating with the scan job. To test whether this is
the case, try starting the scan again manually. If this issue has occurred, the Security Console will
display a message that a scan is already running with a given ID.
If you are a user with appropriate site permissions, you can pause, resume, or stop manual scans
and scans that have been started automatically by the application scheduler.
To pause a scan, click the Pause icon for the scan on the Home, Sites, or specific site page; or
click the Pause Scan button on the specific scan page.
A message displays asking you to confirm that you want to pause the scan. Click OK.
To resume a paused scan, click the Resume icon for the scan on the Home, Sites, or specific site
page; or click the Resume Scan button on the specific scan page. The console displays a
message, asking you to confirm that you want to resume the scan. Click OK.
To stop a scan, click the Stop icon for the scan on the Home, Sites, or specific site page; or click
the Stop Scan button on the specific scan page. The console displays a message, asking you to
confirm that you want to stop the scan. Click OK.
The stop operation may take 30 seconds or more to complete pending any in-progress scan
activity.
The Security Console lists scan results by ascending or descending order for any category
depending on your sorting preference. In the Asset Listing table, click the desired category
column heading, such as Address or Vulnerabilities, to sort results by that category.
Two columns in the Asset Listing table show the numbers of known exposures for each asset.
The column with the exploit icon enumerates the number of vulnerability exploits known to exist for
each asset. The number may include exploits available in Metasploit and/or the Exploit Database.
The column with the malware kit icon enumerates the number of malware kits that can be used to exploit
the vulnerabilities detected on each asset.
To view the results of a scan, click the link for a site’s name on the Home page. The site page
lists the assets in the site, along with pertinent information about the scan results. On
this page, you also can view information about any asset within the site by clicking the link for its
name or address.
To troubleshoot problems related to scans or to monitor certain scan events, you can download
and view the log for any scan that is in progress or complete.
Scan log files have a .log extension and can be opened in any text editing program. A scan log’s
file name consists of three fields separated by hyphens: the respective site name, the scan’s start
date, and scan’s start time in military format. Example: localsite-20111122-1514.log.
If the site name includes spaces or characters not supported by the name format, these
characters are converted to hexadecimal equivalents. For example, the site name my site would
be rendered as my_20site in the scan log file name.
The following characters are supported by the scan log file format:
l numerals
l letters
l hyphens (-)
l underscores (_)
The file name format supports a maximum of 64 characters for the site name field. If a site name
contains more than 64 characters, the file name only includes the first 64 characters.
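Taken together, the naming rules above can be expressed as a short function. This is a sketch; the assumption that every unsupported character is encoded as an underscore followed by two hex digits is extrapolated from the my_20site example:

```python
import datetime

def sanitize_site_name(name, max_len=64):
    """Encode unsupported characters as _XX hex and truncate to 64 characters."""
    out = []
    for ch in name:
        if ch.isalnum() or ch in "-_":   # numerals, letters, hyphens, underscores
            out.append(ch)
        else:
            out.append(f"_{ord(ch):02x}")  # e.g. a space becomes _20
    return "".join(out)[:max_len]

def scan_log_name(site_name, start):
    """Build <site>-<YYYYMMDD>-<HHMM>.log as described in the text."""
    return f"{sanitize_site_name(site_name)}-{start:%Y%m%d}-{start:%H%M}.log"

start = datetime.datetime(2011, 11, 22, 15, 14)
print(scan_log_name("localsite", start))  # localsite-20111122-1514.log
print(scan_log_name("my site", start))    # my_20site-20111122-1514.log
```

Whether truncation happens before or after encoding is not specified in the text; this sketch truncates the encoded result.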
You can change the log file name after you download it. Or, if your browser is configured to
prompt you to specify the name and location of download files, you can change the file name as
you save it to your hard drive.
You can find and download scan logs wherever you find information about scans in the Web
interface. You can only download scan logs for sites to which you have access, subject to your
permissions.
To download a scan log, click the Download icon for that log.
A pop-up window displays the option to open the file or save it to your hard drive. You may select
either option.
If you do not see an option to open the file, change your browser configuration to include a default
program for opening a .log file. Any text editing program, such as Notepad or gedit, can open a
.log file. Consult the documentation for your browser to find out how to select a default program.
To ensure that you have a permanent copy of the scan log, choose the option to save it. This is
recommended in case the scan information is ever deleted from the scan database.
While the Web interface provides useful information about scan progress, you can use scan logs
to learn more details about the scan and track individual scan events. This is especially helpful if,
for example, certain phases of the scan are taking a long time. You may want to verify that the
prolonged scan is running normally and isn't "hanging". You may also want to use certain log
information to troubleshoot the scan.
2013-06-26T15:02:59 [INFO] [Thread: Scan default:1] [Site: Chicago_servers] Nmap will scan
1024 IP addresses at a time.
This entry states the maximum number of IP addresses each individual Nmap process will scan
before that Nmap process exits and a new Nmap process is spawned. These are the work units
assigned to each Nmap process. Only one Nmap process exists per scan at any given time.
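If you review scan logs programmatically, a simple regular expression can split an entry like the one above into its fields. A sketch based only on the entry format shown here:

```python
import re

# Pattern derived from the sample entry; other log lines may differ.
LOG_LINE = re.compile(
    r"^(?P<ts>\S+)\s+\[(?P<level>\w+)\]\s+\[Thread: (?P<thread>[^\]]+)\]"
    r"\s+\[Site: (?P<site>[^\]]+)\]\s+(?P<message>.*)$"
)

entry = ("2013-06-26T15:02:59 [INFO] [Thread: Scan default:1] "
         "[Site: Chicago_servers] Nmap will scan 1024 IP addresses at a time.")

m = LOG_LINE.match(entry)
if m:
    print(m.group("level"), m.group("site"))  # INFO Chicago_servers
    print(m.group("message"))
```

Named groups make it easy to collect, say, all messages for one site across a long log file.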
The following list indicates the most common reasons for discovery and port scan results as
reported by the scan:
You can quickly browse the scan history for your entire deployment by viewing the Scan
History page.
On any page of the Web interface, click the Administration icon. On the Administration page, click
the view link for Scan History.
The interface displays the Scan History page, which lists all scans, plus the total number of
scanned assets, discovered vulnerabilities, and other information pertaining to each scan. You
can click the date link in the Completed column to view details about any scan.
You can download the log for any scan as discussed in the preceding topic.
You may find it necessary on occasion to stop any in-progress scans before they complete. There
may, for example, be issues causing a disruption to operations on your network or your target
assets. Or perhaps scans are running longer than expected, and you need to stop them in order
to perform maintenance work on your assets.
If you have multiple scans running, you can stop them all simultaneously with one action.
When you run any of the stopped scans again, they start from the beginning.
Security-wise, things are always changing inside and outside your environment. Inside, new
assets come online every time your organization hires a new staff member or commissions a new
server to replace an old model. Or, previously scanned assets come back online after not being
visible in your network for some time.
Outside, new vulnerabilities keep coming into existence and threatening to expose your assets to
an ever-growing number of attacks.
By automating responses to these changes, you can keep your security team informed on the
latest developments and ready to take appropriate actions at any time. If a new asset comes
online, you can have it scanned immediately for any flaws or exposures. If a new high-risk
vulnerability is announced, you can find out right away which assets are affected by it.
The Automated Actions feature enables you to use events involving assets and vulnerabilities as
triggers for running scans and modifying sites.
You must be a Global Administrator and have a Nexpose Enterprise license to use this feature.
Each Nexpose content update adds a fresh set of new vulnerability checks that you can scan for.
After an update occurs, you may want to know right away if any of your assets are affected by
certain high-risk or high-severity vulnerabilities or those with high CVSS scores. If a hotfix content
update has checks for a zero-day exposure, you may want to scan your network for that
vulnerability as soon as possible.
• CVSS score
• Severity level
• Risk score
4. Depending on the metric you selected, enter a minimum value.
5. Click Next.
6. Select an action from the drop-down list. With new vulnerabilities, the only available action is a
scan.
7. Select a site to scan for the new vulnerabilities. For example, you might have a site containing
sensitive assets that you will want to scan right away.
8. Click Next.
9. Enter a name to help you remember the automated action.
10. Click Save Action.
The new action appears in the list of automated actions, with a status of Ready, which means that
any time a new content update is applied with vulnerabilities that match the filter criteria, a scan
for those vulnerabilities will run on the site you selected.
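The trigger logic amounts to a filter over the vulnerability checks added by a content update: if any new check meets your chosen metric threshold, the scan action fires on the selected site. The following is a conceptual Python sketch of that idea; the dictionaries and field names are illustrative, not part of the product or its API.

```python
# Conceptual sketch only: the data shapes and field names below are
# illustrative assumptions, not part of the product or its API.
def should_trigger_scan(new_vulnerabilities, metric, minimum):
    """Return True if any vulnerability from a content update meets the filter."""
    return any(vuln.get(metric, 0) >= minimum for vuln in new_vulnerabilities)

# Checks added by a hypothetical content update.
update = [
    {"id": "example-zero-day", "cvss": 9.8, "severity": 10},
    {"id": "example-low-risk", "cvss": 2.1, "severity": 3},
]

# A filter of "CVSS score of at least 7.0" would trigger a scan of the
# selected site; a minimum of 10.0 would not.
```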
Your attack surface changes with every new hire getting a laptop or an employee getting a
replacement workstation. Depending on the size of your organization, it may be difficult to keep
track of every new asset with manual effort. By using the Dynamic Discovery feature and running
scans, you can keep up to date with the latest changes to your asset inventory. You can also use
these mechanisms to trigger automatic actions to track "new" assets more closely and assess
any security flaws with them.
If you are using any of the following discovery methods, you can automate security-related
actions to track newly discovered assets or assets that were scanned in the past and then
disappeared and reappeared on your network:
• vSphere
• Amazon Web Services (AWS)
• DHCP Service
The Dynamic Discovery feature continuously finds any assets added to your environment.
For example, you may be concerned about new virtual machines coming online, as
detected by your vSphere connection. You may have assets in different resource pool paths
named for different departments, such as for Marketing or Sales. Maybe you want to make
sure that any new VM in your Sales resource pool gets scanned immediately.
In this example, you would select Resource Pool Path as the filter and Contains as the
operator. Then you would enter Sales as the value.
A new filter row appears. Set up the new filter as described in the preceding step.
Tip: Adding more filters typically narrows the field of assets because they have to match
more criteria. For more information about using filters, see Using filters to refine Dynamic
Discovery on page 170.
• Adding an asset to a site only (without scanning) will cause the asset to be scanned during the
next scheduled window, or when a user runs a manual scan of that site. This option is
preferable if scanning the new asset is less urgent and doesn't require tying up scanning
resources right away.
• Adding an asset to a site and scanning that site immediately is preferable if scanning the new
asset is more urgent. This may be the case with more sensitive assets.
9. Select a site to add the asset to.
Note: You can only select sites containing assets that were manually added, as opposed to
assets that were added via Dynamic Discovery connections.
The new action appears in the list of automated actions, with a status of Ready, which means that
any time a new asset matching the filter criteria is discovered, the action will be taken.
Change is a constant in your organization. Certain assets that have been scanned before
may "disappear" for a few weeks and then "resurface." Staff members go on vacation and do not
turn on their laptops while they are gone. Or IT may take workstations offline while repairing or
upgrading their systems.
To help you keep current with these types of changes, you can use automation to add "re-
discovered" assets to sites, scan them, or tag them for tracking and reporting:
For example, you may be concerned about new virtual machines coming online, as
detected by your vSphere connection. You may have assets in different resource pool paths
named for different departments, such as for Marketing or Sales. Maybe you want to make
sure that any new VM in your Sales resource pool gets scanned immediately.
In this example, you would select Resource Pool Path as the filter and Contains as the
operator. Then you would enter Sales as the value.
7. A new filter row appears. Set up the new filter as described in the preceding step.
Tip: Adding more filters typically narrows the field of assets because they have to match
more criteria. For more information about using filters, see Using filters to refine Dynamic
Discovery on page 170.
8. Select an action:
9. If you selected the site or site/scan option, select a site to add the asset to.
Note: You can only select sites containing assets that were manually added, as opposed to
assets that were added via Dynamic Discovery connections.
The new action appears in the list of automated actions, with a status of Ready, which means that
any time an asset matching the filter criteria is re-discovered, the action will be taken.
Remote Registry is a Windows service which allows a non-local user to read or make changes to
the registry on your Windows system when they are authorized to do so. Users may configure a
site to temporarily enable Remote Registry on all Windows devices as they are being scanned.
This allows information to be retrieved from the registry and means Nexpose can collect more
accurate data from the assets.
In the site configuration, a user will need to add credentials that have appropriate permissions on
the target systems to read from the registry. Once the scan is complete, the Remote Registry
service will be returned to its prior state. Only a Global Administrator or Administrator may enable
the Remote Registry Activation.
4. Under the Select Scan Template section, copy an existing template using the icons at the end
of the table row (or edit a custom template).
5. In the new window showing the Scan Template Configuration options, enable the check box
marked Allow Nexpose to enable Windows Services.
To disable Remote Registry for a site, an authorized user can update the template that is being
used for a site in the site configuration or select a different scan template that does not have the
option switched on.
After you discover all the assets and vulnerabilities in your environment, it is important to parse
this information to determine what the major security threats are, such as high-risk assets,
vulnerabilities, potential malware exposures, or policy violations.
Assess gives you guidance on viewing and sorting your scan results to determine your security
priorities. It includes the following sections:
Locating and working with assets on page 235: There are several ways to drill down through
scan results to find specific assets. For example, you can find all assets that run a particular
operating system or that belong to a certain site. This section covers these different paths. It also
discusses how to sort asset data by different security metrics and how to look at the detailed
information about each asset.
Working with vulnerabilities on page 259: Depending on your environment, your scans may
discover thousands of vulnerabilities. This section shows you how to sort vulnerabilities based on
various security metrics, affected assets, and other criteria, so that you can find the threats that
require immediate attention. The section also covers how to exclude vulnerabilities from reports
and risk score calculations.
Working with Policy Manager results on page 287: If you work for a U.S. government agency or
a vendor that transacts business with the government, you may be running scans to verify that
your assets comply with United States Government Configuration Baseline (USGCB) or Federal
Desktop Core Configuration (FDCC) policies. Or you may be testing assets for compliance with
customized policies based on USGCB or FDCC policies. This section shows you how to track
your overall compliance, view scan results for policies and the specific rules that make up those
policies, and override rule results.
Assess 234
Locating and working with assets
By viewing and sorting asset information based on scans, you can perform quick assessments of
your environment and any security issues affecting it.
Tip: While it is easy to view information about scanned assets, it is a best practice to create asset
groups to control which users can see which asset information in your organization. See Using
asset groups to your advantage on page 305.
You can view all discovered assets that you have access to by simply clicking the Assets icon and
viewing the Assets table on the Assets page.
The number of all discovered assets to which you have access appears at the top of the page, as
well as the number of sites, asset groups, and tagged assets to which you have access.
Note: If you are using a Dynamic Discovery connection, such as mobile, AWS, VMware, or
DHCP, the total asset count includes assets that have been discovered as well as those that
have been assessed.
Also near the top of the page are pie charts displaying aggregated information about the assets in
the Assets table below. With these charts, you can see an overview of your vulnerability status as
well as interact with that data to help prioritize your remediations.
The Assets by Operating System chart shows how many assets are running each operating
system. You can mouse over each section for a count and percentage of each operating system.
On the Exploitable Assets by Skill Level chart, your assets with exploitable vulnerabilities are
classified according to skill level required for exploits. Novice-level assets are the easiest to
exploit, and therefore the ones you want to address most urgently. Assets are not counted more
than once, but are categorized according to the most exploitable vulnerability on the asset. For
example, if an asset has a Novice-level vulnerability, two Intermediate-level vulnerabilities, and
one Expert-level vulnerability, that asset will fall into the Novice category. Assets without any
known exploits appear in the Non-Exploitable slice.
Note: A similar pie chart appears on the Vulnerabilities page, but that one classifies the individual
vulnerabilities rather than the assets. For more information, see Working with vulnerabilities on
page 259.
A third pie chart shows the numbers of assets that have been assessed for vulnerabilities and
policy compliance as well as those that have been discovered and not yet assessed, either by
scan or Dynamic Discovery connection.
If you use Dynamic Discovery (see Managing dynamic discovery of assets on page 146), the
Assets page displays two separate asset tables.
One table, the Scanned table, lists assets that have been scanned. The other table lists assets
that have been discovered through a Dynamic Discovery connection.
These latter assets have yet to be scanned for vulnerabilities or policy compliance. After any of
these latter assets are scanned for the first time, they are removed from the Discovered by
Connection table and displayed in the Scanned table.
Note: IP addresses are not listed for mobile devices. Instead the column displays the value
Mobile device for each of these assets.
If you have created at least one discovery connection but you have not initiated a connection to
actually discover assets, the Discovered by Connection table appears with no assets listed.
Viewing assets that have been discovered but not yet assessed is a good way to expose areas in
your environment that may have unknown security issues.
Note: The Discovered by Connection table does not list assets that have been scanned with a
discovery scan. Those assets appear in the Scanned table.
You can sort assets in the Assets table by clicking any column heading. For example, click the
heading of the Risk column to sort numerically by the total risk score for all vulnerabilities
discovered on each asset.
You can generate a comma-separated values (CSV) file of the asset list to share with others in
your organization. Click the Export to CSV icon. Depending on your browser settings, you will
see a pop-up window with options to save the file or open it in a compatible program.
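Once exported, the CSV file can be processed with standard tools. The sketch below reads an asset export and ranks the rows by risk score. The column headers shown here are assumptions for illustration; check the headers in your own export before relying on them.

```python
import csv
import io

# The header names below are assumptions for illustration; verify them
# against your own exported file.
sample_export = io.StringIO(
    "Name,Address,Vulnerabilities,Risk\n"
    "web-01,10.0.0.5,12,4521.0\n"
    "db-01,10.0.0.9,3,812.5\n"
)

def assets_by_risk(csv_file):
    """Read an exported asset CSV and return rows sorted by descending risk score."""
    rows = list(csv.DictReader(csv_file))
    return sorted(rows, key=lambda row: float(row["Risk"]), reverse=True)

ranked = assets_by_risk(sample_export)
```

Sorting the export this way gives the same prioritized view as clicking the Risk column heading, but in a form you can feed into spreadsheets or scripts.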
You can control the number of assets that appear in each table by selecting a value in the Rows
per page drop-down list at the bottom right of the table. Use the navigation options in that
area to view more asset records.
To view assets by sites to which they have been assigned, click the hyperlinked number of sites
displayed at the top of the Assets page. The Security Console displays the Sites page. From this
page you can create a new site.
Charts and graphs at the top of the Sites page provide a statistical overview of sites, including
risks and vulnerabilities.
Click the link for any site in the Site Listing pane to view its assets. The Security Console displays
a page for that site, including recent scan information, statistical charts and graphs.
The Site Summary page displays a trend chart as well as a scatter plot. The default selection for
the trend chart matches the Home page: risk and assets over time. You can also use the drop-
down menu to view vulnerabilities over time for this site. This vulnerabilities chart
populates with data starting from the time that you installed the August 6, 2014 product update. If
you recently installed the update, the chart will show limited data now, but additional data will be
gathered and displayed over time.
The scatter plot makes it easy to spot outliers, so you can identify assets that have above-
average risk. Assets with the highest risk and vulnerability counts appear outside of the cluster.
The position and color also indicate the asset's risk score: the further to the right and the redder
the color, the higher the risk. You can take action by selecting an asset directly from the chart,
which transfers you to the asset-level view.
If a site has more than 7,000 assets, a bubble chart view appears first. Select a bubble to refine
your view and display the scatter plot for the assets in that bubble.
The Assets table shows the name and IP address of every scanned asset. If your site includes
IPv4 and IPv6 addresses, the Address column groups these addresses separately. You can
change the order of appearance for these address groups by clicking the sorting icon in the
Address column.
Note: IP addresses are not listed for mobile devices. Instead the column displays the value
Mobile device for each of these assets.
In the Assets table, you can view important security-related information about each asset to help
you prioritize remediation projects: the number of available exploits, the number of vulnerabilities,
and the risk score.
You will see an exploit count of 0 for assets that were scanned prior to the January 29, 2010,
release, which includes the Exploit Exposure feature. This does not necessarily mean that these
assets do not have any available exploits. It means that they were scanned before the feature
was available. For more information, see Using Exploit Exposure on page 636.
To view information about an asset listed in the Assets table, click the link for that asset. See
Viewing the details about an asset on page 243.
To view assets by asset groups in which they are included, click the hyperlinked number of asset
groups displayed at the top of the Assets page. The Security Console displays the Asset Groups
page.
Charts and graphs at the top of the Asset Groups page provide a statistical overview of asset
groups, including risks and vulnerabilities. From this page you can create a new asset group. See
Using asset groups to your advantage on page 305.
Click the link for any group in the Scanned table to view its assets. The Security Console displays
a page for that asset group, including statistical charts and graphs and a list of assets. In the
Assets pane, you can view the scan, risk, and vulnerability information about any asset. You can
click a link for the site to which the asset belongs to view information about the site. You also can
click the link for any asset address to view information about it. See Viewing the details about an
asset on page 243.
To view assets by the operating systems running on them, see the Assets by Operating System
chart or table on the Assets page.
The Assets by Operating System pie chart offers drill down functionality, meaning you can select
an operating system to view a further breakdown of the category selected. For example, if
Microsoft is selected for the OS you will then see a listing of all Windows OS versions present,
such as Windows Server 2008, Windows Server 2012, and so on. Continuing to click on wedges
further breaks down the systems to specific editions and service packs, if applicable. A large
number of unknowns in your chart indicates that those assets were not fingerprinted successfully
and should be investigated.
Note: If your assets have more than 10 types of operating systems, the chart shows the nine
most frequently found operating systems, and an Other category. Click the Other wedge to see
the remaining operating systems.
The Assets by Operating System table lists all the operating systems running in your network and
the number of instances of each operating system. Click the link for an operating system to view
the assets that are running it. The Security Console displays a page that lists all the assets
running that operating system. You can view scan, risk, and vulnerability information about any
asset. You can click a link for the site to which the asset belongs to view information about the
site. You also can click the link for any asset address to view information about it. See Viewing
the details about an asset on page 243.
To view assets by the software running on them, see the Software Listing table on the
Assets page. The table lists any software that the application found running in your network, the
number of instances of each program, and the type of program.
The application only lists software for which it has credentials to scan. An exception to this would
be when it discovers a vulnerability that permits root/admin access.
Click the link for a program to view the assets that are running it.
The Security Console displays a page that lists all the assets running that program. You can view
scan, risk, and vulnerability information about any asset. You can click a link for the site to which
the asset belongs to view information about the site. You also can click the link for any asset
address or name to view information about it. See Viewing the details about an asset on page
243.
To view assets by the services they are running, see the Service Listing table on the Assets page.
The table lists all the services running in your network and the number of instances
of each service. Click the link for a service to view the assets that are running it. See Viewing the
details about an asset on page 243.
Regardless of how you locate an asset, you can find out more information about it by clicking its
name or IP address.
The Security Console displays a page for each asset determined to be unique. Upon discovering
a live asset, Nexpose uses correlation heuristics to identify whether the asset is unique within the
site. Factors considered include:
• MAC address(es)
• host name(s)
• IP address
• virtual machine ID (if applicable)
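The product's actual correlation heuristics are internal, but the general idea can be sketched: strong identifiers such as a virtual machine ID or a shared MAC address are enough to match two observations, while weaker evidence like an IP address is only trusted in combination with the host name. The following Python sketch is a hypothetical illustration, not the product's real logic.

```python
# Hypothetical illustration only: the product's real correlation logic is
# internal. This sketch just shows matching on the factors listed above.
def is_same_asset(a, b):
    """Treat two discovered assets as the same if a strong identifier matches,
    or if both the IP address and host name agree."""
    if a.get("vm_id") and a.get("vm_id") == b.get("vm_id"):
        return True
    if set(a.get("macs", [])) & set(b.get("macs", [])):
        return True
    # Weaker evidence: require both IP address and host name to agree.
    return (
        a.get("ip") is not None
        and a.get("ip") == b.get("ip")
        and a.get("hostname") == b.get("hostname")
    )

# A laptop that came back with a new DHCP lease still matches on MAC address.
laptop_before = {"macs": ["00:1a:2b:3c:4d:5e"], "ip": "10.0.0.12", "hostname": "acct-07"}
laptop_after = {"macs": ["00:1a:2b:3c:4d:5e"], "ip": "10.0.0.47", "hostname": "acct-07"}
```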
On the page for a discovered asset, you can view or add business context tags associated with
that asset. For more information and instructions, see Applying RealContext with tags on page
250.
The asset Trend chart gives you the ability to view risk or vulnerabilities over time for this specific
asset. Use the drop-down list to switch the view to risk or vulnerabilities.
You can view the Vulnerability Listing table for any reported vulnerabilities and any vulnerabilities
excluded from reports. The table lists any exploits or malware kits associated with vulnerabilities
to help you prioritize remediation based on these exposures.
Additionally, the table displays a special icon for any vulnerability that has been validated with an
exploit. Different icons indicate whether the vulnerability was validated with an exploit via a
Metasploit module or with an exploit published in the Exploit
Database. For more information, see Working with validated
vulnerabilities on page 269.
You can also view information about software, services, policy listings, databases, files, and
directories on that asset as discovered by the application. You can view any users or groups
associated with the asset.
You can view any asset fingerprints. Fingerprinting is a set of methods by which the application
identifies as many details about the asset as possible. By inspecting properties such as the
specific bit settings in reserved areas of a buffer, the timing of a response, or a unique
acknowledgement interchange, it can identify indicators about the asset’s hardware and
operating system.
In the Asset Properties table, you can run a scan or create a report for the asset.
In the Vulnerability Listing table, you can open a ticket for tracking the remediation of the
vulnerabilities. See Using tickets on page 531. For more information about the Vulnerabilities
Listing table and how you can use it, see Viewing active vulnerabilities on page 259 and Working
with vulnerability exceptions on page 272. The table lists different security metrics, such as CVSS
rating, risk score, vulnerability publication date, and severity rating. You can sort vulnerabilities
according to any of these metrics by clicking the column headings. Doing so allows you to order
vulnerabilities according to these different metrics and get a quick view of your security posture
and priorities.
If you have scanned the asset with Policy Manager Checks, you can view the results of those
checks in the Policy Listing table. If you click the name of any listed policy, you can view more
information about it, such as other assets that were tested against that policy or the results of
compliance checks for individual rules that make up the policy. For more information, see
Working with Policy Manager results on page 287.
If you have scanned the asset with standard policy checks, such as for Oracle or Lotus Domino,
you can review the results of those checks in the Standard Policy Listing table.
Deleting assets
If any of the preceding situations apply to your environment, a best practice is to create a dynamic
asset group based on a scan date. See Working with asset groups on page 305. Then you can
locate the assets in that group using the steps described in Locating and working with assets on
page 235. Using the bulk asset deletion feature described in this topic, you can delete multiple
inactive assets in one step.
If you delete an asset from a site, it will no longer be included in the site or any asset groups in
which it was previously included. If you delete an asset from an asset group, it will also be deleted
from the site that contained it, as well as any other asset groups in which it was previously
included. The deleted asset will no longer appear in the Web interface or reports other than
historical reports, such as trend reports. If the asset is rediscovered in a future scan it will be
regarded in the Web interface and future reports as a new asset.
You can only delete assets in sites or asset groups to which you have access.
To delete individual assets that you locate by using the site or asset group drill-down described in
Locating and working with assets on page 235, take the following steps:
1. After locating assets you want to delete, select the row for each asset in the Assets table.
2. Click Delete Assets.
To delete individual assets that you are viewing by using the drill-down described in Viewing the
details about an asset on page 243, take the following steps:
1. After locating assets you want to delete, click the row for the asset in the Assets table to go to
the Asset Details page.
2. Click Delete Assets.
1. After locating assets you want to delete, click the top row in the Assets table.
2. Click Select Visible in the pop-up that appears. This step selects all of the assets currently
displayed in the table.
3. Click Delete Assets.
To cancel your selection, click the top row in the Assets table. Then click Clear All in the
pop-up that appears.
Note: This procedure deletes only the assets displayed in the table, not all the assets in the site or
asset group. For example, if a site contains 100 assets, but your table is configured to display 25,
you can only select those 25 at one time. You will need to repeat this procedure or increase the
number of assets that the table displays to select all assets. The Total Assets Selected field on
the right side of the table indicates how many assets are contained in the site or asset group.
To delete assets that you locate by using the Asset, Operating System, Software, or Service
listing table as described in the preceding section, take the following step.
1. After locating assets you want to delete, click the Delete icon for each asset.
This action deletes an asset and all of its related data (including vulnerabilities) from any site or
asset group to which it belongs, as well as from any reports in which it is included.
Deleting assets located via the scanned and discovered by connection drill-downs
If you are globally linking matching assets across all sites (see Linking assets across sites on
page 628), you also have the option to remove an asset from a site, which breaks the link
between the site and the asset. Unlike a deleted asset, the removed asset is still available in other
sites in which it was already present. However, if the asset is only in one site, it will be deleted
from the entire workspace.
When tracking assets in your organization, you may want to identify, group, and report on them
according to how they impact your business.
For example, you have a server with sensitive financial data and a number of workstations in your
accounting office located in Cleveland, Ohio. The accounting department recently added three
new staff members. Their workstations have just come online and will require a number of
security patches right away. You want to assign the security-related maintenance of these
accounting assets to different IT administrators: A SQL and Linux expert is responsible for the
server, and a Windows administrator handles the workstations. You want to make these
administrators aware that these assets have high priority.
These assets are of significant importance to your organization. If they were attacked, your
business operations could be disrupted or even halted. The loss or corruption of their data could
be catastrophic.
The scan data distinguishes these assets by their IP addresses, vulnerability counts, risk scores,
and installed operating systems and services. It does not isolate them according to the unique
business conditions described in the preceding scenario.
Using a feature called RealContext, you can apply tags to these assets to do just that. You can
tag all of these accounting assets with a Cleveland location and a Very High criticality level. You
can tag your accounting server with a label, Financials, and assign it an owner named Chris, who
is a Linux administrator with SQL expertise. You can assign your Windows workstations to a
Windows administrator owner named Brett. And you can tag the new workstations with the label
First-quarter hires. Then, you can create dynamic asset groups based on these tags and send
reports on the tagged assets to Chris and Brett, so that they know that the workstation assets
should be prioritized for remediation. For information on using tag-related search filters to create
dynamic asset groups, see Performing filtered asset searches on page 313.
You also can use tags as filters for report scope. See Creating a basic report on page 341.
• You can tag and track assets according to their geographic or physical Locations, such as
data centers.
• You can associate assets with Owners, such as members of your IT or security team, who are
in charge of administering them.
• You can apply levels of Criticality to assets to indicate their importance to your business or the
negative impact resulting from an attack on them. A criticality level can be Very Low, Low,
Medium, High, or Very High. Additionally, you can apply numeric values to criticality levels and
use the numbers as multipliers that impact risk score. For more information, see Adjusting risk
with criticality on page 621.
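For example, once criticality levels are mapped to numeric multipliers, an asset's adjusted risk score is its base score times the multiplier for its level. The multiplier values in this Python sketch are hypothetical; you assign your own numeric values as described in Adjusting risk with criticality.

```python
# The multiplier values here are hypothetical; you assign your own numeric
# values to criticality levels in the risk strategy settings.
CRITICALITY_MULTIPLIERS = {
    "Very Low": 0.5,
    "Low": 0.75,
    "Medium": 1.0,
    "High": 1.5,
    "Very High": 2.0,
}

def adjusted_risk(base_score, criticality):
    """Scale an asset's base risk score by the multiplier for its criticality tag."""
    return base_score * CRITICALITY_MULTIPLIERS[criticality]
```

Under this scheme, a Very High asset with a base risk score of 1,000 would report an adjusted score of 2,000, moving it up your remediation priority list.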
You can also create custom tags that allow you to isolate and track assets according to any
context that might be meaningful to you. For example, you could tag certain assets PCI, Web site
back-end, or consultant laptops.
You can tag an asset individually on the details page for that asset. You also can tag a site or an
asset group, which would apply the tag to all member assets. The tagging workflow is identical,
regardless of where you tag an asset:
1. If you are creating or editing a site: Go to the General page of the Site Configuration panel,
and select Add tags.
If you are creating or editing a static asset group: Go to the General page of the Asset Group
Configuration panel, and select Add tags.
If you are creating or editing a dynamic asset group: In the Configuration panel for the asset
group, select Add tags.
If you have just run a filtered asset search: To tag all of the search results, select Add tags,
which appears above the search results table on the Filtered Asset Search page.
OR
If you are creating a new custom tag, select a color in which the tag name will appear. All
built-in tags have preset colors.
If you select Criticality, select a criticality level from the drop-down list.
4. Click Add.
5. If you are creating or editing a site or asset group, click Save to save the configuration
changes.
Another way to apply tags is by specifying criteria for which tags can be dynamically applied. This
allows you to apply business context based on filters without having to create new sites or
groups. It also allows you to add new criteria for which assets should have the tags as you think of
them.
1. Click the name of any tag to go to the details page for that tag.
2. Click Add Tag Criteria.
3. Select the search filters. The available filters are the same as those available in the asset
search filters. See Performing filtered asset searches on page 313. There are some
restrictions on which filters you can use with criticality tags. See Filter restrictions for criticality
tags on page 254.
4. Select Search.
5. Select Save.
To view the criteria for when a tag will be dynamically applied:
l On the details page for that tag, select View Tag Criteria.
To edit the criteria for a tag:
1. Click the name of any tag to go to the details page for that tag.
2. Click Edit Tag Criteria.
3. Edit or add the search filters. The available filters are the same as those available in the asset
search filters. See Performing filtered asset searches on page 313. There are some
restrictions on which filters you can use with criticality tags. See Filter restrictions for criticality
tags on page 254.
4. Select Search.
5. Select Save.
l On the details page for that tag, select Clear Tag Criteria to remove the criteria so that the tag
is no longer dynamically applied.
These actions let you view and modify the rules for tags.
Certain filters are restricted for criticality tags, in order to prevent circular references. These
restrictions apply to criticality tags applied through tag criteria, and to those added through
dynamic asset groups. See Performing filtered asset searches on page 313.
If a tag no longer accurately reflects the business context of an asset, you can remove it from that
asset. To do so, click the x button next to the tag name. If the tag name is longer than one line,
mouse over the ampersand below the name to expand it and then click the x button. Removing a
tag is not the same as deleting it.
If you tag a site or an asset group, all of the member assets will "inherit" that tag. You cannot
remove an inherited tag at the individual asset level. Instead, you will need to edit the site or asset
group in which the tag was applied and remove it there.
If a tag no longer has any business relevance at all, you can delete it completely.
Click the name of any tag to go to the details page for that tag. Then click the View All Tags
breadcrumb.
OR
Click the Assets icon, then click the number of tags listed for Tagged Assets, even if that number
is zero.
Go to the Asset Tag Listing table of the Tags page. Select the check box for any tag you want to
delete. To select all displayed tags, select the check box in the top row. Then, click Delete.
Tip: If you want to see which assets are associated with the tag before deleting it, click the tag
name to view its details page. This could be helpful in case you want to apply a different tag to
those assets.
Over time, the criticality of an asset may change. For example, a laptop may initially be used by a
temporary worker and not contain sensitive data, which would indicate low criticality. That laptop
may later be used by a senior executive and contain sensitive data, which would merit a higher
criticality level.
l If you apply a criticality level to a site and then change the criticality of a member asset, you
can only increase the criticality level. For example, if you apply a criticality level of Medium to a
site and then change the criticality level of an individual member asset, you can only change
the level to High or Very High.
l If you apply a criticality level to an asset group, and if any asset has had a criticality level
applied elsewhere (in sites, other asset groups, or individually), the asset will retain the
highest-applied criticality level. For example, an asset named Server_1 belongs to a site
named Boston with a criticality level of Medium. A criticality level of Very High is later applied
to Server_1 individually. If you apply a High criticality level to a new asset group that includes
Server_1, it will retain the Very High criticality level.
l If you apply a criticality level to an individual asset, you can later change the criticality to any
desired level.
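Taken together, these rules mean that an asset's effective criticality is the highest level applied to it from any source (site, asset group, or the asset itself). The following minimal sketch illustrates that rule; the function and level ordering are illustrative, not the product's actual code.

```python
# Ordered from lowest to highest criticality, per the product's five levels.
LEVELS = ["Very Low", "Low", "Medium", "High", "Very High"]

def effective_criticality(applied_levels):
    """Return the highest of all criticality levels applied to an asset."""
    if not applied_levels:
        return None
    return max(applied_levels, key=LEVELS.index)

# Server_1: Medium from its site, Very High individually, High from a group.
print(effective_criticality(["Medium", "Very High", "High"]))  # Very High
```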
You can create tags without immediately applying them to assets. This could be helpful if, for
example, you want to establish a convention for how tag names are written.
1. Click the Assets icon, then click the number of tags listed for Tagged Assets, even if that
number is zero.
OR
Click the Create tab at the top of the page and then select Tags from the drop-down list.
2. Click Add tags and add any tags as described in Tagging assets, sites, and asset groups on
page 251.
You may apply the same tag to an asset as well as an asset group that contains it. For example,
you might want to create a group based on assets tagged with a certain location or owner. This
may occasionally lead to a circular reference loop in which tags refer to themselves instead of the
assets or groups to which they were originally applied. This could prevent you from getting useful
context from the tags.
The following example shows how a circular reference can occur with criticality:
1. You create a dynamic asset group Priorities for all assets that have an original risk score of
less than 1,000. One of these assets is named Server_1.
2. You tag this group with a Very High criticality level, so that every asset in the group inherits the
tag.
3. Your Security Console has been configured to double the risk score of assets with a Very
High criticality level. See Adjusting risk with criticality on page 621.
4. Server_1 has its risk score doubled, which causes it to no longer meet the filter criteria of
Priorities. Therefore, it is removed from Priorities.
5. Since Server_1 no longer inherits the Very High criticality level applied to Priorities, it reverts
to its original risk score, which is lower than 1,000.
6. Server_1 now once again meets the criteria for membership in Priorities, so it once again
inherits the Very High criticality level applied to the asset group. This, again, causes its risk
score to double, so that it no longer meets the criteria for membership in Priorities. This is a
circular reference loop.
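The oscillation in this example can be sketched as a short simulation. The risk score, threshold, and doubling multiplier come from the example above; the function itself is purely illustrative, not the product's logic.

```python
ORIGINAL_RISK = 800   # Server_1's original risk score (assumed for illustration)
THRESHOLD = 1000      # "Priorities" membership criterion: risk score < 1,000
MULTIPLIER = 2.0      # Very High criticality doubles the risk score

def simulate(iterations=4):
    """Re-evaluate group membership repeatedly; it never stabilizes."""
    in_group = False
    states = []
    for _ in range(iterations):
        # The criticality inherited from the group adjusts the risk score...
        risk = ORIGINAL_RISK * (MULTIPLIER if in_group else 1.0)
        # ...and the adjusted score then re-determines group membership.
        in_group = risk < THRESHOLD
        states.append(in_group)
    return states

print(simulate())  # membership flips each pass: [True, False, True, False]
```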
The best way to prevent circular references is to look at the Tags page to see what tags have
been created. Then go to the details page for a tag that you are considering using to see
which assets, sites, and asset groups it is applied to. This is especially helpful if you have multiple
Security Console users and high numbers of tags and asset groups. To access the details
page for a tag, simply click the tag name.
Analyzing the vulnerabilities discovered in scans is a critical step in improving your security
posture. By examining the frequency, affected assets, risk level, exploitability and other
characteristics of a vulnerability, you can prioritize its remediation and manage your security
resources effectively.
Every vulnerability discovered in the scanning process is added to the vulnerability database. This
extensive, full-text, searchable database also stores information on patches, downloadable fixes,
and reference content about security weaknesses. The application keeps the database current
through a subscription service that maintains and updates vulnerability definitions and links. It
contacts this service for new information every six hours.
The database has been certified to be compatible with the MITRE Corporation’s Common
Vulnerabilities and Exposures (CVE) index, which standardizes the names of vulnerabilities
across diverse security products and vendors. The index rates vulnerabilities according to the
Common Vulnerability Scoring System (CVSS) Version 2.
An application algorithm computes the CVSS score based on ease of exploit, remote execution
capability, credentialed access requirement, and other criteria. The score, which ranges from 1.0
to 10.0, is used in Payment Card Industry (PCI) compliance testing. For more information about
CVSS scoring, go to the FIRST Web site (http://www.first.org/cvss/cvss-guide.html).
Viewing vulnerabilities and their risk scores helps you to prioritize remediation projects. You also
can find out which vulnerabilities have exploits available, enabling you to verify those
vulnerabilities. See Using Exploit Exposure on page 636.
Click the Vulnerabilities icon that appears on every page of the console interface.
The Security Console displays the Vulnerabilities page, which lists all the vulnerabilities for assets
that the currently logged-on user is authorized to see, depending on that user’s permissions.
Since Global Administrators have access to all assets in your organization, they will see all the
vulnerabilities in the database.
The charts on the Vulnerabilities page display your vulnerabilities by CVSS score and by the skill
level required to exploit them. The CVSS Score chart displays how many of your vulnerabilities fall into each of the
CVSS score ranges. This score is based on access complexity, required authentication, and
impact on data. The score ranges from 1 to 10, with 10 being the worst, so you should prioritize
the vulnerabilities with the higher numbers.
The Exploitable Vulnerabilities by Skill Level chart shows you your vulnerabilities categorized by
the level of skill required to exploit them. The most easily exploitable vulnerabilities present the
greatest threat, since there will be more people who possess the necessary skills, so you should
prioritize remediating the Novice-level ones and work your way up to Expert.
You can change the sorting criteria by clicking any of the column headings in the Vulnerability
Listing table.
For each discovered vulnerability that has at least one malware kit (also known as an exploit kit)
associated with it, the console displays a malware exposure icon. If you click the icon, the
console displays the Threat Listing pop-up window that lists all the malware kits that attackers
can use to write and deploy malicious code for attacking your environment through the
vulnerability. You can generate a comma-separated values (CSV) file of the malware kit list to
share with others in your organization. Click the Export to CSV icon. Depending on your
browser settings, you will see a pop-up window with options to save the file or open it in a
compatible program.
You can also click the Exploits tab in the pop-up window to view published exploits for the
vulnerability.
In the context of the application, a published exploit is one that has been developed in Metasploit
or listed in the Exploit Database (www.exploit-db.com).
For each discovered vulnerability with an associated exploit, the console displays an exploit icon. If
you click this icon, the console displays the Threat Listing pop-up window that lists descriptions
of all available exploits, their required skill levels, and their online sources. The Exploit
Database is an archive of exploits and vulnerable software. If a Metasploit exploit is available,
the console displays the Metasploit™ icon and a link to a Metasploit module that provides detailed exploit
information and resources.
There are three levels of exploit skill: Novice, Intermediate, and Expert. These map to
Metasploit's seven-level exploit ranking. For more information, see the Metasploit Framework
page (http://www.metasploit.com/redmine/projects/framework/wiki/Exploit_Ranking).
You can generate a comma-separated values (CSV) file of the exploit list and related data to
share with others in your organization. Click the Export to CSV icon. Depending on your
browser settings, you will see a pop-up window with options to save the file or open it in a
compatible program.
The CVSS Score column lists the score for each vulnerability.
The Published On column lists the date when information about each vulnerability became
available.
The Risk column lists the risk score that the application calculates, indicating the potential danger
that each vulnerability poses if an attacker exploits it. The application provides two risk scoring
models, which you can configure. See Selecting a model for calculating risk scores in the
administrator's guide. The risk model you select controls the scores that appear in the Risk
column. To learn more about risk scores and how they are calculated, see the PCI, CVSS, and
risk scoring FAQs, which you can access on the Support page.
The application assigns each vulnerability a severity level, which is listed in the Severity column.
The three severity levels—Critical, Severe, and Moderate—reflect how much risk a given
vulnerability poses to your network security. The application uses various factors to rate severity,
including CVSS scores, vulnerability age and prevalence, and whether exploits are available.
See the PCI, CVSS, and risk scoring FAQs, which you can access on the Support page.
Note: The severity ranking in the Severity column is not related to the severity score in PCI
reports.
Severity levels correspond to the following score ranges:
l 1 to 3 = Moderate
l 4 to 7 = Severe
l 8 to 10 = Critical
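As a sketch, the range-to-level mapping above could be expressed as follows. This is illustrative only; the application computes severity internally.

```python
def severity(score):
    """Map a 1-10 severity score to the three severity levels listed above."""
    if score >= 8:
        return "Critical"
    if score >= 4:
        return "Severe"
    return "Moderate"

print(severity(2.5))  # Moderate
print(severity(5.0))  # Severe
print(severity(9.3))  # Critical
```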
The Instances column lists the total number of instances of that vulnerability in your site. If you
click the link for the vulnerability name, you can view which specific assets are affected by the
vulnerability. See Viewing vulnerability details on page 268.
You can click the icon in the Exclude column for any listed vulnerability to exclude that
vulnerability from a report.
An administrative change to your network, such as new credentials, may change the level of
access that an asset permits during its next scan. If the application previously discovered certain
vulnerabilities because an asset permitted greater access, that vulnerability data will no longer be
available due to diminished access. This may result in a lower number of reported vulnerabilities,
even if no remediation has occurred. In this situation, using baseline comparison reports to list differences
between scans may yield incorrect results or provide more information than necessary, because
the differences may reflect the changed level of access rather than actual remediation.
The Vulnerability Categories and Vulnerability Check Types tables list all categories and check
types that the application can scan for. Your scan template configuration settings determine
which categories or check types the application will scan for. To determine if your environment
has a vulnerability belonging to one of the listed checks or types, click the appropriate link. The
Security Console displays a page listing all pertinent vulnerabilities. Click the link for any
vulnerability to see its detail page, which lists any affected assets.
Your scans may discover hundreds, or even thousands, of vulnerabilities, depending on the size
of your scan environment. A high number of vulnerabilities displayed in the Vulnerability Listing
table may make it difficult to assess and prioritize security issues. By filtering your view of
vulnerabilities, you can reduce the sheer number of those displayed, and restrict the view to
vulnerabilities that affect certain assets. For example, a Security Manager may only want to see
vulnerabilities that affect assets in sites or asset groups that he or she manages. Or you can
restrict the view to vulnerabilities that pose a greater threat to your organization, such as those
with higher risk scores or CVSS rankings.
Filtering your view of vulnerabilities involves selecting one or more filters, which are criteria for
displaying specific vulnerabilities. For each filter you then select an operator, which controls how
the filter is applied.
Site name is a filter for vulnerabilities that affect assets in specific sites. It works with the following
operators:
l The is operator displays a drop-down list of site names. Click a name to display vulnerabilities
that affect assets in that site. Using the SHIFT key, you can select multiple names.
l The is not operator displays a drop-down list of site names. Click a name to filter out
vulnerabilities that affect assets in that site, so that they are not displayed. Using the SHIFT
key, you can select multiple names.
Asset group is a filter for vulnerabilities that affect assets in specific asset groups. It works with
the following operators:
l The is operator displays a drop-down list of asset group names. Click a name to display
vulnerabilities that affect assets in that asset group. Using the SHIFT key, you can select
multiple names.
l The is not operator displays a drop-down list of asset group names. Click a name to filter out
vulnerabilities that affect assets in that asset group, so that they are not displayed. Using the
SHIFT key, you can select multiple names.
CVE ID is a filter for vulnerabilities based on the CVE ID. The CVE identifiers (IDs) are unique,
common identifiers for publicly known information security vulnerabilities. For more information,
see https://cve.mitre.org/cve/identifiers/index.html. The filter applies a search string to the CVE
IDs, so that the search returns vulnerabilities that meet the specified criteria. It works with the
following operators:
l is returns all vulnerabilities whose names match the search string exactly.
l is not returns all vulnerabilities whose names do not match the search string.
l contains returns all vulnerabilities whose names contain the search string anywhere in the
name.
l does not contain returns all vulnerabilities whose names do not contain the search string.
After you select an operator, you type a search string for the CVE ID in the blank field.
CVSS score is a filter for vulnerabilities with specific CVSS rankings. It works with the following
operators:
l The is operator displays all vulnerabilities that have a specified CVSS score.
l The is not operator displays all vulnerabilities that do not have a specified CVSS score.
l The is in the range of operator displays all vulnerabilities that fall within the range of two
specified CVSS scores and include the high and low scores in the range.
l The is higher than operator displays all vulnerabilities that have a CVSS score higher than a
specified score.
l The is lower than operator displays all vulnerabilities that have a CVSS score lower than a
specified score.
After you select an operator, enter a score in the blank field. If you select the range operator,
enter a low score and a high score to create the range. Acceptable values include any
number from 0.0 to 10.0. You can only enter one digit to the right of the decimal; additional digits
are not accepted.
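As an illustration, the numeric operators for score filters behave like the following sketch. The function is an assumption about the semantics, with the range operator inclusive of both endpoints as described above.

```python
def matches(score, operator, low, high=None):
    """Evaluate a numeric score filter, e.g. for CVSS or risk scores."""
    if operator == "is":
        return score == low
    if operator == "is not":
        return score != low
    if operator == "is in the range of":
        return low <= score <= high  # inclusive of both the low and high scores
    if operator == "is higher than":
        return score > low
    if operator == "is lower than":
        return score < low
    raise ValueError(f"unknown operator: {operator}")

print(matches(7.5, "is in the range of", 4.0, 7.5))  # True (range is inclusive)
```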
Risk score is a filter for vulnerabilities with certain risk scores. It works with the following
operators:
l The is operator displays all vulnerabilities that have a specified risk score.
l The is not operator displays all vulnerabilities that do not have a specified risk score.
l The is in the range of operator displays all vulnerabilities that fall within the range of two
specified risk scores and include the high and low scores in the range.
l The is higher than operator displays all vulnerabilities that have a risk score higher than a
specified score.
l The is lower than operator displays all vulnerabilities that have a risk score lower than a
specified score.
After you select an operator, enter a score in the blank field. If you select the range operator, you
would type a low score and a high score to create the range. Keep in mind your currently selected
risk strategy when searching for assets based on risk scores. For example, if the currently
selected strategy is Real Risk, you will not find assets with scores higher than 1,000. Learn about
different risk score strategies. Refer to the risk scores in your vulnerability and asset tables for
guidance.
Vulnerability category is a filter that lets you search for vulnerabilities based on the categories
that have been flagged on them during scans. Lists of vulnerability categories can be found in the
scan template configuration or the report configuration. It works with the following operators:
l contains returns all vulnerabilities whose category contains the search string. You can use an
asterisk (*) as a wildcard character.
l does not contain returns all vulnerabilities whose category does not contain the search
string. You can use an asterisk (*) as a wildcard character.
l is returns all vulnerabilities whose category matches the search string exactly.
l is not returns all vulnerabilities whose category does not match the exact search string.
l starts with returns all vulnerabilities whose categories begin with the same characters as the
search string.
l ends with returns all vulnerabilities whose categories end with the same characters as the
search string.
After you select an operator, you type a search string for the vulnerability category in the blank
field.
Vulnerability title is a filter that lets you search vulnerabilities based on their titles. The filter applies
a search string to vulnerability titles, so that the search returns a list of vulnerabilities that either
have or do not have the specified string in their titles. It works with the following operators:
l contains returns all vulnerabilities whose name contains the search string. You can use an
asterisk (*) as a wildcard character.
l does not contain returns all vulnerabilities whose name does not contain the search string.
You can use an asterisk (*) as a wildcard character.
l is returns all vulnerabilities whose name matches the search string exactly.
l is not returns all vulnerabilities whose names do not match the exact search string.
l starts with returns all vulnerabilities whose names begin with the same characters as the
search string.
l ends with returns all vulnerabilities whose names end with the same characters as the search
string.
After you select an operator, you type a search string for the vulnerability name in the blank field.
The Security Console displays vulnerabilities that meet all filter criteria in the table.
Currently, filters do not change the number of displayed instances for each vulnerability.
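The string-matching operators used by the vulnerability title and category filters can be sketched as follows. The wildcard-to-regex translation is an assumption about how the asterisk could be handled, not the product's implementation.

```python
import re

def _wildcard_to_regex(s):
    # Escape everything except *, which becomes "match anything".
    return ".*".join(re.escape(part) for part in s.split("*"))

def matches(title, operator, search):
    """Evaluate a string filter against a vulnerability title or category."""
    t, s = title.lower(), search.lower()
    if operator == "contains":
        return re.search(_wildcard_to_regex(s), t) is not None
    if operator == "does not contain":
        return re.search(_wildcard_to_regex(s), t) is None
    if operator == "is":
        return t == s
    if operator == "is not":
        return t != s
    if operator == "starts with":
        return t.startswith(s)
    if operator == "ends with":
        return t.endswith(s)
    raise ValueError(f"unknown operator: {operator}")

print(matches("Apache HTTPD: mod_ssl overflow", "contains", "apache*ssl"))  # True
```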
Click the link for any vulnerability listed on the Vulnerabilities page to view information about it.
The Security Console displays a page for that vulnerability.
At the top of the page is a description of the vulnerability, its severity level and CVSS rating, the
date that information about the vulnerability was made publicly available, and the most recent
date that Rapid7 modified information about the vulnerability, such as its remediation steps.
Below these items is a table listing each affected asset, port, and the site on which a scan
reported the vulnerability. You can click on the link for the device name or address to view all of its
vulnerabilities. On the device page, you can create a ticket for remediation. See Using tickets on
page 531. You also can click the site link to view information about the site.
The Port column in the Affected Assets table lists the port that the application used to contact the
affected service or software during the scan. The Status column lists a Vulnerable status for an
asset if the application confirmed the vulnerability. It lists a Vulnerable Version status if the
application determined that the version of the scanned service or software is known to be vulnerable.
The Proof column lists the method that the application used to detect the vulnerability on each
asset. It uses exploitation methods typically associated with hackers, such as inspecting registry keys,
banners, software version numbers, and other indicators of susceptibility.
The Exploits table lists descriptions of available exploits and their online sources. The Exploit
Database is an archive of exploits and vulnerable software. If a Metasploit exploit is available, the
console displays the Metasploit™ icon and a link to a Metasploit module that provides detailed exploit
information and resources.
The Malware table lists any malware kit that attackers can use to write and deploy malicious
code for attacking your environment through the vulnerability.
The References table, which appears below the Affected Assets pane, lists links to Web sites
that provide comprehensive information about the vulnerability. At the very bottom of the page is
the Solution pane, which lists remediation steps and links for downloading patches and fixes.
If you wish to query the database for a specific vulnerability, and you know its name, type all or
part of the name in the Search box that appears on every page of the console interface, and click
the magnifying glass icon. The console displays a page of search results organized by different
categories, including vulnerabilities.
There are many ways to sort and prioritize vulnerabilities for remediation. One way is to give
higher priority to vulnerabilities that have been validated, or proven definitively to exist. The
application uses a number of methods to flag vulnerabilities during scans, such as fingerprinting
software versions known to be vulnerable. These methods provide varying degrees of certainty
that a vulnerability exists. You can increase your certainty that a vulnerability exists by exploiting
it, which involves deploying code that penetrates your network or gains access to a computer
through that specific vulnerability.
As discussed in the topic Viewing active vulnerabilities on page 259, any vulnerability that has a
published exploit associated with it is marked with a Metasploit or Exploit Database icon. You can
integrate Rapid7 Metasploit as a tool for validating vulnerabilities discovered in scans and then
have Nexpose indicate that these vulnerabilities have been validated on specific assets.
Note: Metasploit is the only exploit application that the vulnerability validation feature supports.
See a tutorial for performing vulnerability validation with Metasploit.
1. After performing exploits in Metasploit, click the Assets tab of the Nexpose Security Console
Web interface.
2. Locate an asset that you would like to see validated vulnerabilities for. See Locating and
working with assets on page 235.
3. Double-click the asset's name or IP address.
The Security Console displays the details page for the asset.
4. If a vulnerability has been validated with an exploit via a Metasploit module, the Exploits
column displays the Metasploit icon.
If a vulnerability has been validated with an exploit published in the Exploit Database, the
column displays the Exploit Database icon.
5. To sort the vulnerabilities according to whether they have been validated, click the title row in
the Exploits column.
The descending sort order for this column is 1) vulnerabilities that have been validated
with a Metasploit exploit, 2) vulnerabilities that can be validated with a Metasploit exploit,
3) vulnerabilities that have been validated with an Exploit Database exploit, and 4)
vulnerabilities that can be validated with an Exploit Database exploit.
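This four-tier ordering can be modeled with a simple sort key. The tuple representation and rank values below are illustrative assumptions, not the interface's actual data model.

```python
# Rank 0 sorts first in the descending order described above.
RANK = {
    ("metasploit", True): 0,   # validated with a Metasploit exploit
    ("metasploit", False): 1,  # can be validated with a Metasploit exploit
    ("exploit-db", True): 2,   # validated with an Exploit Database exploit
    ("exploit-db", False): 3,  # can be validated with an Exploit Database exploit
}

def sort_vulns(vulns):
    """vulns: list of (name, exploit_source, validated) tuples."""
    return sorted(vulns, key=lambda v: RANK[(v[1], v[2])])

vulns = [
    ("vuln-a", "exploit-db", False),
    ("vuln-b", "metasploit", True),
    ("vuln-c", "exploit-db", True),
    ("vuln-d", "metasploit", False),
]
print([v[0] for v in sort_vulns(vulns)])  # ['vuln-b', 'vuln-d', 'vuln-c', 'vuln-a']
```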
All discovered vulnerabilities appear in the Vulnerabilities table of the Security Console Web
interface. Your organization can exclude certain vulnerabilities from appearing in reports or
affecting risk scores.
There are several possible reasons for excluding vulnerabilities from reports.
Compensating controls: Network managers may mitigate the security risks of certain
vulnerabilities, which, technically, could prevent their organization from being PCI compliant. It
may be acceptable to exclude these vulnerabilities from the report under certain circumstances.
For example, the application may discover a vulnerable service on an asset behind a firewall
because it has authorized access through the firewall. While this vulnerability could result in the
asset or site failing the audit, the merchant could argue that the firewall reduces any real risk
under normal circumstances. Additionally, the network may have host- or network-based
intrusion prevention systems in place, further reducing risk.
Acceptable use: Organizations may have legitimate uses for certain practices that the application
would interpret as vulnerabilities. For example, anonymous FTP access may be a deliberate
practice and not a vulnerability.
Acceptable risk: In certain situations, it may be preferable not to remediate a vulnerability if the
vulnerability poses a low security risk and if remediation would be too expensive or require too
much effort. For example, applying a specific patch for a vulnerability may prevent an application
from functioning. Re-engineering the application to work on the patched system may require too
much time, money, or other resources to be justified, especially if the vulnerability poses minimal
risk.
False positives: According to PCI criteria, a merchant should be able to report a false positive,
which can then be verified and accepted by a Qualified Security Assessor (QSA) or Approved
Scanning Vendor (ASV) in a PCI audit. Below are scenarios in which it would be appropriate to
exclude a false positive from an audit report. In all cases, a QSA or ASV would need to approve
the exception.
Backporting may cause false positives. For example, an Apache update installed on an older
Red Hat server may produce vulnerabilities that should be excluded as false positives.
If an exploit reports false positives on one or more assets, it would be appropriate to exclude
these results.
Your ability to work with vulnerability exceptions depends on your permissions. If you do not
know what your permissions are, consult your Global Administrator.
l Submit Vulnerability Exceptions: A user with this permission can submit requests to exclude
vulnerabilities from reports.
l Review Vulnerability Exceptions: A user with this permission can approve or reject requests
to exclude vulnerabilities from reports.
l Delete Vulnerability Exceptions: A user with this permission can delete vulnerability
exceptions and exception requests. This permission is significant in that it is the only way to
overturn a vulnerability request approval. In that sense, a user with this permission can wield a
check and balance against users who have permission to review requests.
Every vulnerability has an exception status, including vulnerabilities that have never been
considered for exception. The range of actions you can take with respect to exceptions depends
on the exception status, as well as your permissions, as indicated in the following table:
If the vulnerability has a given exception status and you have the corresponding permission,
you can take the following actions:
l never been submitted for an exception: with the Submit Exception Request permission, you
can submit an exception request
l previously approved and later deleted or expired: with the Submit Exception Request
permission, you can submit an exception request
l under review (submitted, but not approved or rejected): with the Review Vulnerability
Exceptions permission, you can approve or reject the request
l excluded for another instance, asset, or site: with the Submit Exception Request permission,
you can submit an exception request
l under review (and submitted by you): you can recall the exception request
l under review (submitted, but not approved or rejected): with the Delete Vulnerability
Exceptions permission, you can delete the request
l approved: with the Review Vulnerability Exceptions permission, you can view and change the
details of the approval, but not overturn the approval
l rejected: with the Submit Exception Request permission, you can submit another exception
request
l approved or rejected: with the Delete Vulnerability Exceptions permission, you can delete the
exception, thus overturning the approval
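The status-and-permission matrix above behaves like a simple lookup. This sketch is illustrative; the status and action strings are abbreviations of the table entries, not product identifiers.

```python
# (exception status, permission) -> allowed action; None means no action.
ACTIONS = {
    ("never submitted", "Submit Exception Request"): "submit an exception request",
    ("deleted or expired", "Submit Exception Request"): "submit an exception request",
    ("under review", "Review Vulnerability Exceptions"): "approve or reject the request",
    ("excluded elsewhere", "Submit Exception Request"): "submit an exception request",
    ("under review", "Delete Vulnerability Exceptions"): "delete the request",
    ("approved", "Review Vulnerability Exceptions"): "view/change approval details (cannot overturn)",
    ("rejected", "Submit Exception Request"): "submit another exception request",
    ("approved", "Delete Vulnerability Exceptions"): "delete the exception (overturns approval)",
    ("rejected", "Delete Vulnerability Exceptions"): "delete the exception (overturns approval)",
}

def allowed_action(status, permission):
    return ACTIONS.get((status, permission))

print(allowed_action("under review", "Review Vulnerability Exceptions"))
# approve or reject the request
```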
A vulnerability may be discovered once or multiple times on a certain asset. The vulnerability may
also be discovered on hundreds of assets. Before you submit a request for a vulnerability
exception, review how many instances of the vulnerability have been discovered and how many
assets are affected. It’s also important to understand the circumstances surrounding each
affected asset. You can control the scope of the exception by using one of the following options
when submitting a request:
l You can create an exception for all instances of a vulnerability on all affected assets. For
example, you may have many instances of a vulnerability related to an open SSH port.
However, if in all instances a compensating control is in place, such as a firewall, you may
want to exclude that vulnerability globally.
l You can create an exception for all instances of a vulnerability in a site. As with global
exceptions, a typical reason for a site-specific exclusion is a compensating control, such as all
of a site’s assets being located behind a firewall.
l You can create an exception for all instances of a vulnerability on a single asset. For example
one of the assets affected by a particular vulnerability may be located in a DMZ. Or perhaps it
only runs for very limited periods of time for a specific purpose, making it less sensitive.
l You can create an exception for a single instance of a vulnerability. For example, a
vulnerability may be discovered on each of several ports on a server. However, one of those
ports is behind a firewall. You may want to exclude the vulnerability instance that affects that
protected port.
A global vulnerability exception means that the application will not report the vulnerability on any
asset in your environment that has that vulnerability. Only a Global Administrator can approve
requests for global vulnerability exceptions. A non-admin user with the correct account
permissions can approve vulnerability exceptions that are not global.
Locate the vulnerability for which you want to request an exception. There are several ways to
locate a vulnerability. The following way is easiest for a global exception.
Tip: If a vulnerability has an action icon other than Exclude, see Understanding vulnerability
exception permissions on page 273.
3. Select All instances from the Scope drop-down list if it is not already selected.
4. Select a reason for the exception from the drop-down list.
For information about exception reasons, see Understanding cases for excluding
vulnerabilities on page 272.
These are especially helpful for a reviewer to understand your reasons for the request.
Note: If you select Other as a reason from the drop-down list, additional comments are
required.
6. Click Submit & Approve to have the exception take effect.
7. (Optional) Click Submit to place the exception under review and have another individual in
your organization review it.
After you approve an exception, the vulnerability no longer appears in the list on the
Vulnerabilities page.
Note: If you enabled the option to link matching assets across all sites after the April 8, 2015,
product update, you cannot use this Web interface feature to exclude vulnerabilities in sites after
enabling the linking option. Site-level exceptions created in the Web interface before the option
was enabled will continue to apply. See Linking assets across sites on page 628. You can use the
API to exclude vulnerabilities at the site level. See the API guide.
Note: The vulnerability information in the page for a scan is specific to that particular scan
instance. The ability to create an exception is available at more cumulative levels, such as the site
or the vulnerability listing, so that the vulnerability can be excluded in future scans.
Locate the vulnerability for which you want to request an exception. There are several ways to
locate a vulnerability. The following ways are easiest for a site-specific exception:
1. If you want to find a specific vulnerability, click the Vulnerabilities icon of the Security Console
Web interface.
2. Locate the vulnerability in the Vulnerabilities table, and click the link for it.
3. Find an asset in a particular site for which you want to exclude vulnerability instances in the
Affects table of the vulnerability details page.
OR
1. If you want to see what vulnerabilities are affecting assets in different sites, click the Assets
icon.
The Security Console displays the page for the selected site.
The Security Console displays the page for the selected asset.
5. Locate the vulnerability you want to exclude in the Vulnerabilities table and click the link for it.
1. Look at the Exceptions column for the located vulnerability. If an exception request has not
previously been submitted for that vulnerability, the column displays an Exclude icon. If it was
submitted and then rejected, the column displays a Resubmit icon.
2. Click the Exclude icon.
Note: If a vulnerability has an action link other than Exclude, see Understanding cases for
excluding vulnerabilities on page 272.
3. Select All instances in this site from the Scope drop-down list.
4. Select a reason for the exception from the drop-down list.
For information about exception reasons, see Understanding cases for excluding
vulnerabilities on page 272.
These are especially helpful for a reviewer to understand your reasons for the request. If you
select Other as a reason from the drop-down list, additional comments are required.
Locate the vulnerability for which you want to request an exception. There are several ways to
locate a vulnerability. The following ways are easiest for an asset-specific exception.
1. If you want to find a specific vulnerability, click the Vulnerabilities icon of the Security Console
Web interface.
OR
1. If you want to see what vulnerabilities are affecting specific assets that you find using different
grouping categories, click the Assets icon.
2. Select one of the options to view assets according to different grouping categories: sites they
belong to, asset groups they belong to, hosted operating systems, hosted software, or hosted
services. Or click the link to view all assets.
3. Depending on the category you selected, click through displayed subcategories until you find
the asset you are searching for. See Locating and working with assets on page 235.
The Security Console displays the page for the selected asset.
4. Locate the vulnerability that you want to exclude in the Vulnerabilities table and click the link
for it.
Note: If a vulnerability has an action link other than Exclude, see Understanding vulnerability
exception status and work flow on page 274.
1. Look at the Exceptions column for the located vulnerability. This column displays one of
several possible actions. If an exception request has not previously been submitted for that
vulnerability, the column displays an Exclude icon. If it was submitted and then rejected, the
column displays a Resubmit icon.
2. Click the icon.
3. Select All instances on this asset from the Scope drop-down list.
Note: If you select Other as a reason from the drop-down list, additional comments are required.
These are especially helpful for a reviewer to understand your reasons for the request.
This procedure is useful if you want to exclude a large number of vulnerabilities because, for
example, they all have the same compensating control.
1. After going to the Vulnerabilities table as described in the preceding section, select the row for
each vulnerability that you want to exclude.
OR
To select all the vulnerabilities displayed in the table, click the check box in the top row. Then
select the pop-up option Select Visible.
2. Click Exclude for vulnerabilities that have not been submitted for exception, or click Resubmit
for vulnerabilities that have been rejected for exception.
3. Proceed with the vulnerability exception workflow as described in the preceding section.
If you've selected multiple vulnerabilities but then want to cancel the selection, click the top
row. Then select the pop-up option Clear All.
Note: If you select all listed vulnerabilities for exclusion, it will only apply to vulnerabilities that
have not been excluded. For example, if the Vulnerabilities table includes vulnerabilities that are
under review or rejected, the global exclusion will not apply to them. The same applies for global
resubmission: It will only apply to listed vulnerabilities that have been rejected for exclusion.
Verify the exception (if you submitted and approved it). After you approve an exception, the
vulnerability no longer appears in the list on the Vulnerabilities page.
When you create an exception for a single instance of a vulnerability, the application will not
report the vulnerability against the asset if the device, port, and additional data match.
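As a conceptual illustration only (this is not the product's actual implementation, and all class, field, and value names below are invented for the example), the scope-matching behavior described above can be sketched as follows: an exception suppresses a finding when every field the exception specifies matches the finding, and fields left unspecified match anything.

```python
# Conceptual sketch of exception-scope matching. Names are illustrative
# assumptions, not the product's internal API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    vuln_id: str
    site: str
    asset: str
    port: Optional[int]  # None when the finding is not port-specific

@dataclass
class VulnException:
    vuln_id: str
    site: Optional[str] = None   # None = any site (global scope)
    asset: Optional[str] = None  # None = any asset
    port: Optional[int] = None   # None = all instances on the asset

    def suppresses(self, f: Finding) -> bool:
        """An exception suppresses a finding when every populated field matches."""
        if self.vuln_id != f.vuln_id:
            return False
        if self.site is not None and self.site != f.site:
            return False
        if self.asset is not None and self.asset != f.asset:
            return False
        if self.port is not None and self.port != f.port:
            return False
        return True

# A single-instance exception matches only the protected port:
finding_open = Finding("ssh-weak-cipher", "HQ", "10.1.1.5", 22)
finding_fw = Finding("ssh-weak-cipher", "HQ", "10.1.1.5", 2222)
instance_exc = VulnException("ssh-weak-cipher", site="HQ", asset="10.1.1.5", port=2222)
print(instance_exc.suppresses(finding_fw))   # True
print(instance_exc.suppresses(finding_open)) # False
```

A global exception is simply one with only the vulnerability identifier populated, so it matches that vulnerability on every asset in every site.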
Locate the instance of the vulnerability for which you want to request an exception. There are
several ways to locate a vulnerability. The following way is easiest for an instance-specific exception.
Note: If a vulnerability has an action link other than Exclude, see Understanding vulnerability
exception status and work flow on page 274 .
1. Look at the Exceptions column for the located vulnerability. This column displays one of
several possible actions. If an exception request has not previously been submitted for that
vulnerability, the column displays an Exclude icon. If it was submitted and then rejected, the
column displays a Resubmit icon.
2. Click the icon.
3. Select Specific instance on this asset from the Scope drop-down list.
If you select Other as a reason from the drop-down list, additional comments are required.
4. Enter additional comments. These are especially helpful for a reviewer to understand your
reasons for the request.
5. Click Submit & Approve to have the exception take effect.
6. (Optional) Click Submit to place the exception under review and have another individual in
your organization review it.
This procedure is useful if you want to exclude a large number of vulnerabilities because, for
example, they all have the same compensating control.
1. After going to the Vulnerabilities table as described in the preceding section, select the row for
each vulnerability that you want to exclude.
OR
2. To select all the vulnerabilities displayed in the table, click the check box in the top row. Then
select the pop-up option Select Visible.
3. Click Exclude for vulnerabilities that have not been submitted for exception, or click Resubmit
for vulnerabilities that have been rejected for exception.
4. Proceed with the vulnerability exception workflow as described in the preceding section.
If you've selected multiple vulnerabilities but then want to cancel the selection, click the top
row. Then select the pop-up option Clear All.
Note: If you select all listed vulnerabilities for exclusion, it will only apply to vulnerabilities that
have not been excluded. For example, if the Vulnerabilities table includes vulnerabilities that are
under review or rejected, the global exclusion will not apply to them. The same applies for global
resubmission: It will only apply to listed vulnerabilities that have been rejected for exclusion.
Verify the exception (if you submitted and approved it). After you approve an exception, the
vulnerability no longer appears in the list on the Vulnerabilities page.
You can recall, or cancel, a vulnerability exception request that you submitted if its status remains
under review.
Locate the exception request, and verify that it is still under review. The location depends on the
scope of the exception. For example, if the exception is for all instances of the vulnerability on a
single asset, locate that asset in the Affects table on the details page for the vulnerability. If the
link in the Exceptions column is Under review, you can recall it.
This procedure is useful if you want to recall a large number of requests because, for example,
you've learned since submitting them that the affected vulnerabilities need to be included in a
report.
1. After locating the exception requests as described in the preceding section, select the row for
each request that you want to recall.
OR
2. To select all the vulnerabilities displayed in the table, click the check box in the top row. Then
select the pop-up option Select Visible.
3. Click Recall.
4. Proceed with the recall workflow as described in the preceding section.
If you've selected multiple vulnerabilities but then want to cancel the selection, click the top
row. Then select the pop-up option Clear All.
Note: If you select all listed vulnerabilities for recall, it will only apply to vulnerabilities that are
under review. For example, if the Vulnerabilities table includes vulnerabilities that have not been
excluded, or have been rejected for exclusion, the global recall will not apply to them.
Upon reviewing a vulnerability exception request, you can either approve or reject it.
OR, to select all requests for review, select the top row.
If you want to select an expiration date for the review decision, click the calendar icon and
select a date. For example, you may want the exception to be in effect only until a PCI audit
is complete.
Note: You also can click the top row check box to select all requests and then approve or reject
them in one step.
OR, to select all requests for deletion, select the top row.
The entries no longer appear in the Vulnerability Exception Listing table. The affected
vulnerabilities appear in the appropriate vulnerability listing with an Exclude icon, which
means that a user with the appropriate permission can submit an exception request for each of them.
When you generate a report based on the default Report Card template, each vulnerability
exception appears on the vulnerability list with the reason for its exception.
Vulnerability exceptions can be important for the prioritization of remediation projects and for
compliance audits. Report templates include a section dedicated to exceptions. See Vulnerability
Exceptions on page 663. In XML and CSV reports, exception information is also available.
XML: The vulnerability test status attribute is set to one of the following values for vulnerabilities
suppressed due to an exception:
CSV: The vulnerability result-code column will be set to one of the following values for
vulnerabilities suppressed due to an exception. Each code corresponds to results of a
vulnerability check:
l ds (skipped, disabled): A check was not performed because it was disabled in the scan
template.
l ee (excluded, exploited): A check for an exploitable vulnerability was excluded.
l ep (excluded, potential): A check for a potential vulnerability was excluded.
l er (error during check): An error occurred during the vulnerability check.
l ev (excluded, version check): A check was excluded for a vulnerability that is identified
because the version of the scanned service or application is associated with known
vulnerabilities.
l nt (no tests): There were no checks to perform.
l nv (not vulnerable): The check was negative.
l ov (overridden, version check): A check for a vulnerability that would ordinarily be positive
because the version of the target service or application is associated with known
vulnerabilities was negative due to information from other checks.
l sd (skipped because of DoS settings): If unsafe checks were not enabled in the scan
template, the application skipped the check because of the risk of causing denial of service
(DoS). See Configuration steps for vulnerability check settings on page 562.
l sv (skipped because of inapplicable version): The application did not perform a check because
the version of the scanned item is not in the list of checks.
l uk (unknown): An internal issue prevented the application from reporting a scan result.
l ve (vulnerable, exploited): The check was positive. An exploit verified the vulnerability.
l vp (vulnerable, potential): The check for a potential vulnerability was positive.
l vv (vulnerable, version check): The check was positive. The version of the scanned service or
software is associated with known vulnerabilities.
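As an illustrative sketch of how an exported CSV report might be post-processed (the column name result-code follows the text above, but verify the header names against your own export before relying on them), the result codes can be mapped to descriptions and used to filter rows suppressed by exceptions:

```python
# Illustrative sketch: map the CSV result codes listed above to their
# descriptions and filter exception-suppressed rows. The sample data and
# the "result-code" column name are assumptions for the example.
import csv
import io

RESULT_CODES = {
    "ds": "skipped, disabled",
    "ee": "excluded, exploited",
    "ep": "excluded, potential",
    "er": "error during check",
    "ev": "excluded, version check",
    "nt": "no tests",
    "nv": "not vulnerable",
    "ov": "overridden, version check",
    "sd": "skipped because of DoS settings",
    "sv": "skipped because of inapplicable version",
    "uk": "unknown",
    "ve": "vulnerable, exploited",
    "vp": "vulnerable, potential",
    "vv": "vulnerable, version check",
}

# Codes indicating suppression due to a vulnerability exception
EXCEPTION_CODES = {"ee", "ep", "ev"}

def excluded_rows(csv_text):
    """Return rows whose result code marks an exception-suppressed finding."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row.get("result-code") in EXCEPTION_CODES]

sample = "asset,result-code\n10.1.1.5,ee\n10.1.1.6,nv\n"
for row in excluded_rows(sample):
    print(row["asset"], RESULT_CODES[row["result-code"]])
```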
If you work for a U.S. government agency, a vendor that transacts business with the
government, or a company with strict configuration security policies, you may be running scans to
verify that your assets comply with United States Government Configuration Baseline (USGCB)
policies, Center for Internet Security (CIS) benchmarks, or Federal Desktop Core Configuration
(FDCC). Or you may be testing assets for compliance with customized policies based on these
standards.
After running Policy Manager scans, you can view information that answers the following
questions:
Viewing the results of configuration assessment scans enables you to quickly determine the
policy compliance status of your environment. You can also view test results of individual policies
and rules to determine where specific remediation efforts are required so that you can make
assets compliant.
Note: You can only view policy test results for assets to which you have access. This is true for
Policy Manager and standard policies.
This section specifically addresses Policy Manager results. The Policy Manager is a license-
enabled feature that includes the following policy checks:
l USGCB 2.0 policies (only available with a license that enables USGCB scanning)
l USGCB 1.0 policies (only available with a license that enables USGCB scanning)
l Center for Internet Security (CIS) benchmarks (only available with a license that enables CIS
scanning)
l FDCC policies (only available with a license that enables FDCC scanning)
l Custom policies that are based on USGCB or FDCC policies or CIS benchmarks (only
available with a license that enables custom policy scanning)
Standard policies are available with all licenses and include the following:
l Oracle policy
l Lotus Domino policy
l Windows Group policy
l AS/400 policy
l CIFS/SMB Account policy
You can view the results of standard policy checks on a page for a specific asset that has been
scanned with one of these checks.
If you want to get a quick overview of all the policies for which you’ve run Policy Manager checks,
go to the Policies page by clicking the Policies icon on any page of the Web interface. The page
lists tested policies for all assets to which you have access.
The Policies table shows the number of assets that passed and failed compliance checks for
each policy. It also includes the following columns:
l Each policy is grouped in a category within the application, depending on its source, purpose,
or other criteria. The category for any USGCB 2.0 or USGCB 1.0 policy is listed as USGCB.
Another example of a category might be Custom, which would include custom policies based
on built-in Policy Manager policies. Categories are listed under the Category heading.
l The Asset Compliance column shows the percentage of tested assets that comply with each
policy.
l The table also includes a Rule Compliance column. Each policy consists of specific rules, and
checks are run for each rule. The Rule Compliance column shows the percentage of rules
with which assets comply for each policy. Any percentage below 100 indicates failure to
comply with the policy.
l The Policies table also includes columns for copying, editing, and deleting policies. For more
information about these options, see Creating a custom policy on page 589.
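As an illustrative example of the arithmetic behind these two columns (the data and code below are invented for the example and are not the product's implementation):

```python
# Illustrative arithmetic for the Asset Compliance and Rule Compliance
# columns; the test results below are made up for the example.
# results[asset] maps each rule name to True (pass) or False (fail).
results = {
    "10.1.1.5": {"rule-a": True, "rule-b": True, "rule-c": True},
    "10.1.1.6": {"rule-a": True, "rule-b": False, "rule-c": True},
}

# Asset Compliance: percentage of tested assets that pass every rule.
compliant = sum(1 for rules in results.values() if all(rules.values()))
asset_compliance = 100 * compliant / len(results)

# Rule Compliance: percentage of all rule checks that passed.
checks = [passed for rules in results.values() for passed in rules.values()]
rule_compliance = 100 * sum(checks) / len(checks)

print(asset_compliance)            # 50.0
print(round(rule_compliance, 1))   # 83.3
```

Note that an asset with a single failing rule counts as non-compliant for Asset Compliance, which is why Rule Compliance can be high while Asset Compliance is low.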
After assessing your overall compliance on the Policies page, you may want to view more specific
information about a policy. For example, a particular policy may show less than 100 percent rule
compliance (which indicates failure to comply with the policy) or less than 100 percent asset
compliance. You may want to learn why assets failed to comply or which specific rule tests
resulted in failure.
Tip: You can also view results of Policy Manager checks for a specific asset on the page for that
asset. See Viewing the details about an asset on page 243.
On the Policies page, you can view details about a policy in the Policies table by clicking the
name of that policy.
At the top of the page, a pie chart shows the ratio of assets that passed the policy check to those
that failed. Two line graphs show the five most and least compliant assets.
An Overview table lists general information about how the policy is identified. The benchmark ID
refers to an exhaustive collection of rules, some of which are included in the policy. The table also
lists general asset and rule compliance statistics for the policy.
The Tested Assets table lists each asset that was tested against the policy and the results of
each test, and general information about each asset. The Asset Compliance column lists each
asset’s percentage of compliance with all the rules that make up the policy. Assets with lower
compliance percentages may require more remediation work than other assets.
You can click the link for any listed asset to view more details about it.
l A Pass result means that the asset complies with all the rules that make up the policy.
l A Fail result means that the asset does not comply with at least one of the rules that makes up
the policy. The Policy Compliance column indicates the percentage of policy rules with which
the asset does comply.
l A Not Applicable result means that the policy compliance test doesn’t apply to the asset. For
example, a check for compliance with Windows Vista configuration policies would not apply to
a Windows XP asset.
Every policy is made up of individual configuration rules. When performing a Policy Manager
check, the application tests an asset for compliance with each of the rules of the policy. By
viewing results for each rule test, you can isolate the configuration issues that are preventing your
assets from being policy-compliant.
By viewing the test results for all assets against a rule, you can quickly determine which assets
require remediation work in order to become compliant.
2. In the Policies table, click the name of a policy for which you want to view rule details.
3. In the Policy Rule Compliance table, click the link for any rule that you want to view details for.
The Overview table displays general information that identifies the rule, including its name and
category, as well as the name and benchmark ID for the policy that the rule is a part of.
Every rule has a Common Configuration Enumerator (CCE) identifier. CCE is a standard for
identifying and correlating configuration data, allowing this data to be shared by multiple
information sources and tools.
You may find it useful to analyze a policy rule’s CCE data. The information may help you
understand the rule better or to remediate the configuration issue that caused an asset to fail the
test. Or, it may be simply useful to have the data available for reference.
2. In the Policies table, click the name of a policy for which you want to view rule details.
3. In the Tested Assets table, click the IP address or name of an asset that has been tested
against the policy.
4. In the Configuration Policy Rules table, click the name of the rule for which you want to view
CCE data.
Note: The application applies any current CCE updates with its automatic content updates.
l The Overview table displays the rule Common Configuration Enumerator (CCE) identifier,
the specific platform to which the rule applies, and the most recent date that the rule was
updated in the National Vulnerability Database. The application applies any current CCE
updates with its automatic content updates.
l The Parameters table lists the parameters required to implement the rule on each tested
asset.
l The Technical Mechanisms table lists the methods used to test compliance with the rule.
l The References table lists documentation sources to which the rule refers for detailed source
information as well as values that indicate the specific information in the documentation
source.
l The Configuration Policy Rules table lists the policy and the policy rule name for every
imported policy in the application.
You may want to override, or change, a test result for a particular rule on a particular asset for any
of several reasons:
When overriding a result, you will be required to enter your reason for doing so.
Another user can also override your override. Yet another user can perform another override,
and so on. For this reason, you can track all the overrides for a rule test back to the original result
in the Security Console Web interface.
The most recent override for any rule is also identified in the XCCDF Results XML Report format.
Overrides are not identified as such in the XCCDF Human Readable CSV Report format.
All overrides and their reasons are incorporated, along with the policy check results, into the
documentation that the U.S. government reviews in the certification process.
Your ability to work with overrides depends on your permissions. If you do not know what your
permissions are, consult your Global Administrator. These permissions apply specifically to
Policy Manager policies.
Note: These permissions also include access to activities related to vulnerability exceptions. See
Managing users and authentication in the administrator's guide.
l Submit Vulnerability Exceptions and Policy Overrides: A user with this permission can submit
requests to override policy test results.
l Review Vulnerability Exceptions and Policy Overrides: A user with this permission can
approve or reject requests to override policy rule results.
l Delete Vulnerability Exceptions and Policy Overrides: A user with this permission can delete
policy test result overrides and override requests.
When overriding a rule result, you will have a number of options for the scope of the override:
Global: You can override a rule for all assets in all sites. This scope is useful if assets are failing a
policy that includes a rule that isn’t relevant to your organization. For example, an FDCC policy
includes a rule for disabling remote desktop access. This rule does not make sense for your
organization if your IT department administers all workstations via remote desktop access. This
override will apply to all future scans, unless you override it again.
All assets in a specific site: This scope is useful if a policy includes a rule that isn’t relevant to a
division within your organization and that division is encompassed in a site. For example, your
organization disables remote desktop administration except for the engineering department. If all
of the engineering department’s assets are contained within a site, you can override a Fail result
for the remote desktop rule in that site. This override will apply to all future scans, unless you
override it again.
All scan results for a single asset: This scope is useful if a policy includes a rule that isn’t
relevant for a small number of assets. For example, your organization disables remote desktop
administration except on a few designated assets. You can override the Fail result for the remote
desktop rule on such an asset, and the override will apply to all future scans of that asset, unless
you override it again.
A specific scan result on a single asset: This scope is useful if a policy includes a rule that
wasn’t relevant at a particular point in time but will be relevant in the future. For example, your
organization disables remote desktop administration. However, unusual circumstances required
the feature to be enabled temporarily on an asset so that a remote IT engineer could troubleshoot
it. During that time window, a policy scan was run, and the asset failed the test for the remote
desktop rule. You can override the Fail result for that specific scan, and it will not apply to future
scans.
It may be helpful to review the overrides of previous users to give you additional context about the
rule or a tested asset.
3. In the Configuration Policy Rules table, click the rule for which you want to view the override
history.
4. See the rule’s Override History table, which lists each override for the rule, the date it
occurred, and the result after the override. The Override Status column lists whether the
override has been submitted, approved, rejected, or expired.
2. In the Policies table, click the name of the policy that includes the rule for which you want to
override the result.
3. In the Policy Rule Compliance table, click the Override icon for the rule that you want to
override.
l Fail indicates that you consider an asset to be non-compliant with the rule.
l Fixed indicates that the issue that caused a Fail result has been remediated. A Fixed
override will cause the result to appear as a Pass in reports and result listings.
l Not Applicable indicates that the rule does not apply to the asset.
OR
2. In the Policies table, click the name of the policy that includes the rule for which you want to
override the result.
4. In the Configuration Policy Rules table, click the Override icon for the rule that you want to
override.
l Fail indicates that you consider an asset to be non-compliant with the rule.
l Fixed indicates that the issue that caused a Fail result has been remediated. A Fixed
override will cause the result to appear as a Pass in reports and result listings.
l Not Applicable indicates that the rule does not apply to the asset.
8. If you only have override request permission, click Submit to place the override under review
and have another individual in your organization review it. The override request appears in the
Override History table of the rule page.
OR
l Fail indicates that you consider an asset to be non-compliant with the rule.
l Fixed indicates that the issue that caused a Fail result has been remediated. A Fixed
override will cause the result to appear as a Pass in reports and result listings.
l Not Applicable indicates that the rule does not apply to the asset.
OR
2. In the Policies table, click the name of the policy that includes the rule for which you want to
override the result.
l Fail indicates that you consider an asset to be non-compliant with the rule.
l Fixed indicates that the issue that caused a Fail result has been remediated. A Fixed
override will cause the result to appear as a Pass in reports and result listings.
l Not Applicable indicates that the rule does not apply to the asset.
8. If you only have override request permission, click Submit to place the override under review
and have another individual in your organization review it. The override request appears in the
Override History table of the rule page.
OR
Upon reviewing an override request, you can either approve or reject it.
OR, to select all requests for review, select the top row.
6. Enter comments in the Reviewer’s Comments text box. Doing so may be helpful for the
submitter.
7. If you want to select an expiration date for override, click the calendar icon and select a date.
8. Click Approve or Reject, depending on your decision.
The result of the review appears in the Review Status column. Also, if the rule has never been
previously overridden and the override request has been approved, its entry will switch to Yes in
the Active Overrides column in the Configuration Policy Rules table of the page. The override will
also be noted in the Override History table of the rule page.
Tip: You also can click the top row check box to select all requests and then delete them all
in one step.
3. In the Configuration Policy Override Listing table, select the check box next to the rule override
that you want to delete.
OR, to select all requests for deletion, select the top row.
After you discover what is running in your environment and assess your security threats, you can
initiate actions to remediate these threats.
Act provides guidance on making stakeholders in your organization aware of security priorities in
your environment so that they can take action.
Working with asset groups on page 305: Asset groups allow you to create logical groupings so
you can discover and scan assets. Asset groups also allow Global Administrators to control which
assets are available to different stakeholders.
Working with reports on page 337: With reports, you share critical security information with
different stakeholders in your organization. This section guides you through creating and
customizing reports and understanding the information they contain.
Using tickets on page 531: This section shows you how to use the ticketing system to manage
the remediation work flow and delegate remediation tasks.
Act 304
Working with asset groups
Asset groups provide different ways for members of your organization to grant access to, view,
scan, and report on asset information. Asset groups allow you to create logical groupings that you
can configure to dynamically incorporate new assets that meet specific criteria. You can define an
asset group within a site in order to scan based on these groupings.
One use case illustrates how asset groups can “spin off” organically from sites. A bank
purchases Nexpose with a fixed-number IP address license. The network topology includes one
head office and 15 branches, all with similar “cookie-cutter” IP address schemes. The IP
addresses in the first branch are all 10.1.1.x; the addresses in the second branch are 10.1.2.x;
and so on. For each branch, whatever integer equals .x is a certain type of asset. For example .5
is always a server.
The security team scans each site and then “chunks” the information in various ways by creating
reports for specific asset groups. It creates one set of asset groups based on locations so that
branch managers can view vulnerability trends and high-level data. The team creates another set
of asset groups based on that last integer in the IP address. The users in charge of remediating
server vulnerabilities will only see “.5” assets. If the “x” integer is subject to more granular
divisions, the security team can create more finely specialized asset groups. For example, .51
may correspond to file servers, and .52 may correspond to database servers.
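The last-octet convention described above lends itself to simple automation. The following sketch shows how such an addressing scheme could be used to classify assets programmatically; the mapping and function names are illustrative and not part of Nexpose.

```python
import ipaddress

# Hypothetical mapping from the last octet to an asset type,
# following the example's ".5 is always a server" convention.
ASSET_TYPE_BY_LAST_OCTET = {
    5: "server",
    51: "file server",
    52: "database server",
}

def classify(ip_string):
    """Return (branch_number, asset_type) for a 10.1.x.y address."""
    ip = ipaddress.IPv4Address(ip_string)  # validates the address
    octets = str(ip).split(".")
    branch = int(octets[2])  # third octet identifies the branch
    last = int(octets[3])    # last octet identifies the asset type
    return branch, ASSET_TYPE_BY_LAST_OCTET.get(last, "unknown")

print(classify("10.1.2.5"))  # branch 2, a server
```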
Another approach to creating asset groups is categorizing them according to membership. For
example, you can have an “Executive” asset group for senior company officers who see high-
level business-sensitive reports about all the assets within your enterprise. You can have more
technical asset groups for different members of your security team, who are responsible for
remediating vulnerabilities on specific types of assets, such as databases, workstations, or Web
servers.
The page for an asset group displays charts so you can track your risk or number of vulnerabilities
in relation to the assets in that group.
The Assets by Risk and Vulnerabilities chart, to the right of the Asset Risk and Vulnerabilities
Over Time line graph, appears as a scatter chart unless you have 7,000 assets or more in the
asset group. In that case, it appears as a bubble chart, and you can click a bubble to see a
scatter chart of a specific group of assets.
On the scatter chart, each dot represents an asset. Hover over the dot to see information about
the asset. Click it to go to the page for that asset.
This snapshot provides important information about your assets and the security issues affecting
them.
With Nexpose, you can create two different kinds of “snapshots.” The dynamic asset group is a
snapshot that potentially changes with every scan; and the static asset group is an unchanging
snapshot. Each type of asset group can be useful depending on your needs.
A dynamic asset group contains scanned assets that meet a specific set of search criteria. You
define these criteria with asset search filters, such as IP address range or hosted operating
systems. The list of assets in a dynamic group is subject to change with every scan. In this regard,
a dynamic asset group differs from a static asset group. See How are sites different from asset
groups? on page 47. Assets that no longer meet the group’s Asset Filter criteria after a scan will
be removed from the list. Newly discovered assets that meet the criteria will be added to the list.
Note that the list does not change immediately, but after the application completes a scan and
integrates the new asset information into the database.
An ever-evolving snapshot of your environment, a dynamic asset group allows you to track
changes to your live asset inventory and security posture at a quick glance, and to create reports
based on the most current data. For example, you can create a dynamic asset group of assets
with a vulnerability that was included in a Patch Tuesday bulletin. Then, after applying the patch
for the vulnerability, you can scan the dynamic asset group to determine if any assets still have
this vulnerability. If the patch application was successful, the group theoretically should not
include any assets.
You can create dynamic asset groups using the filtered asset search. See Performing filtered
asset searches on page 313.
You grant user access to dynamic asset groups through the User Configuration panel.
A static asset group contains assets that meet a set of criteria that you define according to your
organization’s needs. Unlike with a dynamic asset group, the list of assets in a static group does
not change unless you alter it manually.
Static asset groups provide useful time-frozen views of your environment that you can use for
reference or comparison. For example, you may find it useful to create a static asset group of
Windows servers and create a report to capture all of their vulnerabilities. Then, after applying
patches and running a scan for patch verification, you can create a baseline report to compare
vulnerabilities on those same assets before and after the scan.
You can create static asset groups through any of three options:
l using the Group Configuration panel; see Configuring a static asset group by manually
selecting assets on page 308
l using the filtered asset search; see Performing filtered asset searches on page 313
l copying and modifying an existing asset group; see Creating a dynamic or static asset group
by copying an existing one on page 311
Manually selecting assets is one of three ways to create a static asset group. This manual method
is ideal for environments that have small numbers of assets. For an approach that is ideal for
large numbers of assets, see Creating a dynamic or static asset group from asset searches on
page 334.
1. Click the Assets icon to go to the Assets page, and then click view next to Groups.
OR
Click the Create tab at the top of the page and then select Asset Group from the drop-down
list.
OR
Click the Administration icon to go to the Administration page, and then click manage next
to Groups.
2. Click New Static Asset Group to create a new static asset group.
3. Click Edit to change any group listed with a static asset group icon.
Note: You can only create an asset group after running an initial scan of assets that you wish to
include in that group.
The console displays the General page of the Asset Group Configuration panel.
2. Use any of these filters to find assets that meet certain criteria, then click Display matching
assets to run the search.
For example, you can select all of the assets within an IP address range that run on a
particular operating system.
OR
3. Click Display all assets, which is convenient if your database contains a small number of
assets.
Note: There may be a delay if the search returns a very large number of assets.
4. Select the assets you wish to add to the asset group. To include all assets, select the check
box in the header row.
5. Click Save.
When you use this asset selection feature to create a new asset group, you will not see any
assets displayed. When you use this feature to edit an existing asset group, you will see the list
of assets that you selected when you created, or most recently edited, the group.
You can create a new dynamic or static group by copying an existing one. This method is useful
when you want to create an asset group that is similar to an existing one, but with some
differences.
1. From the Home page, in the Asset Groups listing, select the Copy icon for the asset group you
want to copy.
2. The asset group configuration page appears. Make the changes to the settings and rename
the asset group appropriately.
Note: By default, Copy will be appended to the original name. Additional copies of the
original group will have a number appended (for example, Copy 2 and so on).
When dealing with networks of large numbers of assets, you may find it necessary or helpful to
concentrate on a specific subset. The filtered asset search feature allows you to search for assets
based on criteria that can include IP address, site, operating system, software, services,
vulnerabilities, and asset name. You can then save the results as a dynamic asset group for
tracking, scanning, and reporting purposes. See Using the search feature on page 35.
Using search filters, you can find assets of immediate interest to you. This helps you to focus your
remediation efforts and to manage the sheer quantity of assets running on a large network.
Click the Asset Filter icon, which appears below and to the right of the Search box in the
Web interface.
OR
Click the Create tab at the top of the page and then select Dynamic Asset Group from the drop-
down list.
OR
Click the Administration icon to go to the Administration page, and then click the dynamic link
next to Asset Groups.
OR
Click New Dynamic Asset Group if you are on the Asset Groups page.
Note: Performing a filtered asset search is the first step in creating a dynamic asset group.
A search filter allows you to choose the attributes of the assets that you are interested in. You
can add multiple filters for more precise searches. For example, you could create filters for a
given IP address range, a particular operating system, and a particular site, and then combine
these filters to return a list of all the assets that simultaneously meet all the specified criteria.
Using fewer filters typically increases the number of search results.
You can combine filters so that the search result set contains only the assets that meet all of the
criteria in all of the filters (leading to a smaller result set). Or you can combine filters so that the
result set contains any asset that meets the criteria in at least one of the filters (leading to a
larger result set).
To select filters in the Filtered asset search panel, take the following steps:
1. Select a filter. When you select a filter, the configuration options (operators) for that filter
dynamically become available.
2. Select the appropriate operator. Note: Some operators allow text searches. You can use the *
wildcard in any of the text searches.
3. Use the + button to add filters.
4. Use the - button to remove filters.
5. Click Reset to remove all filters.
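The all-versus-any combination logic described above can be sketched with ordinary predicates. The asset records and filter functions here are illustrative, not part of the product.

```python
# Sketch of combining asset filters with "all" (AND) versus "any" (OR)
# semantics. The asset data and filter predicates are made up.
assets = [
    {"name": "web01", "os": "Linux", "site": "Boston"},
    {"name": "db01", "os": "Windows", "site": "Boston"},
    {"name": "ws07", "os": "Windows", "site": "Akron"},
]

filters = [
    lambda a: a["os"] == "Windows",
    lambda a: a["site"] == "Boston",
]

# Assets meeting every filter: a smaller result set.
match_all = [a["name"] for a in assets if all(f(a) for f in filters)]
# Assets meeting at least one filter: a larger result set.
match_any = [a["name"] for a in assets if any(f(a) for f in filters)]

print(match_all)  # ['db01']
print(match_any)  # ['web01', 'db01', 'ws07']
```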
The asset name filter lets you search for assets based on the asset name. The filter applies a
search string to the asset names, so that the search returns assets that meet the specified
criteria. It works with the following operators:
l is returns all assets whose names match the search string exactly.
l is not returns all assets whose names do not match the search string.
l starts with returns all assets whose names begin with the same characters as the search
string.
l ends with returns all assets whose names end with the same characters as the search string.
l contains returns all assets whose names contain the search string anywhere in the name.
l does not contain returns all assets whose names do not contain the search string.
After you select an operator, you type a search string for the asset name in the blank field.
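As a rough model of these operator semantics, the sketch below assumes case-insensitive matching and `*` wildcards for the exact-match operators. It is an illustration only, not the product's actual matching code.

```python
import fnmatch

def matches(name, operator, pattern):
    """Illustrative model of the asset name filter operators."""
    name, pattern = name.lower(), pattern.lower()
    if operator == "is":
        return fnmatch.fnmatch(name, pattern)       # supports "*" wildcard
    if operator == "is not":
        return not fnmatch.fnmatch(name, pattern)
    if operator == "starts with":
        return name.startswith(pattern)
    if operator == "ends with":
        return name.endswith(pattern)
    if operator == "contains":
        return pattern in name
    if operator == "does not contain":
        return pattern not in name
    raise ValueError("unknown operator: " + operator)

print(matches("bos-web-01", "starts with", "bos"))  # True
print(matches("bos-web-01", "contains", "web"))     # True
print(matches("bos-web-01", "is", "bos-*-01"))      # True
```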
Filtering by CVE ID
The CVE ID filter lets you search for assets based on the CVE ID. The CVE identifiers (IDs) are
unique, common identifiers for publicly known information security vulnerabilities. For more
information, see https://cve.mitre.org/cve/identifiers/index.html. The filter applies a search string
to the CVE IDs, so that the search returns assets that meet the specified criteria. It works with the
following operators:
l is returns all assets whose CVE IDs match the search string exactly.
l is not returns all assets whose CVE IDs do not match the search string.
l contains returns all assets whose CVE IDs contain the search string anywhere in the ID.
l does not contain returns all assets whose CVE IDs do not contain the search string.
After you select an operator, you type a search string for the CVE ID in the blank field.
The Host type filter lets you search for assets based on the type of host system, where an asset
can be one or more types, such as a virtual machine or hypervisor.
You can use this filter to track, and report on, security issues that are specific to host types. For
example, a hypervisor may be considered especially sensitive because if it is compromised then
any guest of that hypervisor is also at risk.
The filter applies a search string to host types, so that the search returns a list of assets that either
match, or do not match, the selected host types.
l is returns all assets that match the host type that you select from the adjacent drop-down list.
l is not returns all assets that do not match the host type that you select from the adjacent drop-
down list.
You can combine multiple host types in your criteria to search for assets that meet multiple
criteria. For example, you can create a filter for “is Hypervisor” and another for “is virtual machine”
to find all-software hypervisors.
If your environment includes IPv4 and IPv6 addresses, you can find assets with either address
format. This allows you to track and report on specific security issues in these different segments
of your network. The IP address type filter works with the following operators:
l is returns all assets that have an address of the selected format.
l is not returns all assets that do not have an address of the selected format.
After selecting the filter and desired operator, select the desired format: IPv4 or IPv6.
The IP address range filter lets you specify a range of IP addresses, so that the search returns a
list of assets that are either in the IP range, or not in the IP range. It works with the following
operators:
l is returns all assets with an IP address that falls within the IP address range.
l is not returns all assets whose IP addresses do not fall within the IP address range.
When you select the IP address range filter, you will see two blank fields separated by the word
to. Use the left field to enter the start of the IP address range, and the right field to enter the
end of the range. For example:
192.168.2.1 to 192.168.2.254
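The inclusive-range test that the is operator describes can be sketched with Python's standard `ipaddress` module. This is an illustration of the concept, not the product's internal implementation.

```python
import ipaddress

def in_range(ip, start, end):
    """True if ip falls within the inclusive range [start, end]."""
    addr = ipaddress.ip_address(ip)
    return (ipaddress.ip_address(start) <= addr
            <= ipaddress.ip_address(end))

print(in_range("192.168.2.37", "192.168.2.1", "192.168.2.254"))  # True
print(in_range("192.168.3.1", "192.168.2.1", "192.168.2.254"))   # False
```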
The last scan date filter lets you search for assets based on when they were last scanned. You
may want, for example, to run a report on the most recently scanned assets. Or, you may want to
find assets that have not been scanned in a long time and then delete them from the database
because they are no longer considered important for tracking purposes. The filter works with
the following operators:
l on or before returns all assets that were last scanned on or before a particular date. After
selecting this operator, click the calendar icon to select the date.
l on or after returns all assets that were last scanned on or after a particular date. After
selecting this operator, click the calendar icon to select the date.
l between and including returns all assets that were last scanned between, and including, two
dates. After selecting this operator, click the calendar icon next to the left field to select the first
date in the range. Then click the calendar icon next to the right field to select the last date in the
range.
l earlier than returns all assets that were last scanned earlier than a specified number of days
preceding the date on which you initiate the search. After selecting this operator, enter a
number in the days ago field. The starting point of the search is midnight of the day that the
search is performed. For example, you initiate a search at 3 p.m. on January 23. You select
this operator and enter 3 in the days ago field. The search returns all assets that were last
scanned prior to midnight on January 20.
l within the last returns all assets that were last scanned within a specified number of preceding
days. After selecting this operator, enter a number in the days field. The starting point of the
search is midnight of the day that the search is performed. For example: You initiate the
search at 3 p.m. on January 23. You select this operator and enter 1 in the days field. The
search returns all assets that were last scanned since midnight on January 22.
l The search evaluates only the most recent scan date. If an asset was scanned within the time
frame specified in the filter, but that scan was not its most recent scan, the asset will not appear
in the search results.
l Dynamic asset group membership can change as new scans are run.
l Dynamic asset group membership is recalculated daily at midnight. If you create a dynamic
asset group based on searches with the relative-day operators (earlier than or within the last),
the asset membership will change accordingly.
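The midnight-based starting point described for the relative-day operators can be modeled as follows. The function names are illustrative, not from the product; only the midnight rule comes from the text above.

```python
from datetime import datetime, timedelta

def cutoff(now, days_ago):
    """Midnight of the search day, minus days_ago days."""
    midnight_today = now.replace(hour=0, minute=0,
                                 second=0, microsecond=0)
    return midnight_today - timedelta(days=days_ago)

def earlier_than(last_scan, now, days_ago):
    # "earlier than": last scanned before the cutoff.
    return last_scan < cutoff(now, days_ago)

def within_the_last(last_scan, now, days):
    # "within the last": last scanned at or after the cutoff.
    return last_scan >= cutoff(now, days)

now = datetime(2024, 1, 23, 15, 0)    # 3 p.m. on January 23
scan = datetime(2024, 1, 19, 22, 30)  # last scanned late on January 19
print(earlier_than(scan, now, 3))     # True: before midnight on Jan 20
print(within_the_last(scan, now, 1))  # False: before midnight on Jan 22
```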
Note: This filter is only available with WinRM/PowerShell and WinRM/Office 365 Dynamic
Discovery connections.
With the Last Synch Time filter, you can track mobile devices based on the most recent time they
synchronized with the Exchange server. This filter can be useful if you do not want your reports to
include data from old devices that are no longer in use on the network. It works with the following
operators.
l earlier than returns all mobile devices that synchronized earlier than a number of preceding
days that you enter in a text box.
l within the last returns all mobile devices that synchronized within a number of preceding days
that you enter in a text box.
Having certain ports open may violate configuration policies. The open port number filter lets you
search for assets with a specified port open. By isolating assets with open ports, you can then
close those ports and then re-scan them to verify that they are closed. Select an operator, and
then enter your port or port range. Depending on your criteria, search results will return assets
that have open ports, assets that do not have open ports, and assets with a range of open ports.
The operating system name filter lets you search for assets based on their hosted operating
systems. Depending on the search, you choose from a list of operating systems, or enter a
search string. The filter returns a list of assets that meet the specified criteria.
l contains returns all assets running on the operating system whose name contains the
characters specified in the search string. You enter the search string in the adjacent field. You
can use an asterisk (*) as a wildcard character.
l does not contain returns all assets running on the operating system whose name does not
contain the characters specified in the search string. You enter the search string in the
adjacent field. You can use an asterisk (*) as a wildcard character.
l is empty returns all assets that do not have an operating system identified in their scan results.
If an operating system is not listed for a scanned asset in the Web interface or reports, this
means that the asset may not have been fingerprinted. If the asset was scanned with
credentials, failure to fingerprint indicates that the credentials were not authenticated on the
target asset. Therefore, this operator is useful for finding assets that were scanned with failed
credentials or without credentials.
l is not empty returns all assets that have an operating system identified in their scan results.
This operator is useful for finding assets that were scanned with authenticated credentials and
fingerprinted.
This filter allows you to find assets that have other IPv4 or IPv6 addresses in addition to the
address(es) that you are aware of. When the application scans an IP address that has been
included in a site configuration, it discovers any other addresses for that asset. This may include
addresses that have not been scanned. For example: A given asset may have an IPv4 address
and an IPv6 address. When configuring scan targets for your site, you may have only been aware
of the IPv4 address, so you included only that address to be scanned in the site configuration.
When you run the scan, the application discovers the IPv6 address. By using this asset search
filter, you can search for all assets to which this scenario applies. You can add the discovered
address to a site for a future scan to increase your security coverage.
After you select the filter and operators, you select either IPv4 or IPv6 from the drop-down list.
l is returns all assets that have other IP addresses that are either IPv4 or IPv6.
The PCI status filter lets you search for assets based on whether they return Pass or Fail results
when scanned with the PCI audit template. Finding assets that fail compliance scans can help
you determine at a glance which assets require remediation in advance of an official PCI audit.
After you select an operator, select the Pass or Fail option from the drop-down list.
The service name filter lets you search for assets based on the services running on them. The
filter applies a search string to service names, so that the search returns a list of assets that either
have or do not have the specified service.
l contains returns all assets running a service whose name contains the search string. You can
use an asterisk (*) as a wildcard character.
l does not contain returns all assets that do not run a service whose name contains the search
string. You can use an asterisk (*) as a wildcard character.
After you select an operator, you type a search string for the service name in the blank field.
The site name filter lets you search for assets based on the name of the site to which the assets
belong.
This is an important filter to use if you want to control users’ access to newly discovered assets in
sites to which users do not have access. See the note in Using dynamic asset groups on page
307.
The filter applies a search string to site names, so that the search returns a list of assets that
either belong to, or do not belong to, the specified sites.
l is returns all assets that belong to the selected sites. You select one or more sites from the
adjacent list.
l is not returns all assets that do not belong to the selected sites. You select one or more sites
from the adjacent list.
The software name filter lets you search for assets based on software installed on them. The filter
applies a search string to software names, so that the search returns a list of assets that either
run or do not run the specified software.
l contains returns all assets with software installed such that the software’s name contains the
search string. You can use an asterisk (*) as a wildcard character.
l does not contain returns all assets that do not have installed software whose name contains
the search string. You can use an asterisk (*) as a wildcard character.
After you select an operator, you enter the search string for the software name in the blank field.
The Validated vulnerabilities filter lets you search for assets with vulnerabilities that have been
validated with exploits through Metasploit integration. By using this filter, you can isolate assets
with vulnerabilities that have been proven to exist with a high degree of certainty. For more
information, see Working with validated vulnerabilities on page 269.
l The are operator, combined with the present drop-down list option, returns all assets with
validated vulnerabilities.
l The are operator, combined with the not present drop-down list option, returns all assets
without validated vulnerabilities.
The user-added criticality level filter lets you search for assets based on the criticality tags that
you and your users have applied to them. For example, a user may set all assets belonging to
company executives to be of a “Very High” criticality in their organization. Using this filter, you
could identify assets with that criticality set, regardless of their sites or other associations. You
can search for assets with or without a specific criticality level, or assets whose criticality is above
or below a specified level.
After you select an operator, you select a criticality level from the drop-down menu. Available
criticality levels are Very High, High, Medium, Low, and Very Low.
The user-added custom tag filter lets you search for assets based on the custom tags that users
have applied to them. For example, your company may have assets involved in an online banking
process distributed throughout various locations and subnets, and a user may have tagged the
involved assets with a custom “Online Banking” tag. Using this filter, you could identify assets with
that tag, regardless of their sites or other associations. You can search for assets with or without
a specific tag, assets whose custom tags meet certain criteria, or assets with or without any user-
added custom tags. For more information on user-added custom tags, see Applying
RealContext with tags on page 250.
l is returns all assets with custom tags that match the search string exactly.
l is not returns all assets that do not have a custom tag that matches the exact search string.
l starts with returns all assets with custom tags that begin with the same characters as the
search string.
l ends with returns all assets with custom tags that end with the same characters as the search
string.
l contains returns all assets whose custom tags contain the search string anywhere in their
names.
l does not contain returns all assets whose custom tags do not contain the search string.
l is applied returns all assets that have any custom tag applied.
l is not applied returns all assets that have no custom tags applied.
The user-added tag (location) filter lets you search for assets based on the location tags that
users have applied to them. For example, a user may have created and applied tags for “Akron”
and “Cincinnati” to clarify the physical location of assets in a user-friendly way. Using this filter,
you could identify assets with that tag, regardless of their other associations. You can search for
assets with or without a specific tag, assets whose location tags meet certain criteria, or assets
with or without any user-added location tags. For more information on user-added location tags,
see Applying RealContext with tags on page 250.
l is returns all assets with location tags that match the search string exactly.
l is not returns all assets that do not have a location tag that matches the exact search string.
l starts with returns all assets with location tags that begin with the same characters as the
search string.
l ends with returns all assets with location tags that end with the same characters as the search
string.
l contains returns all assets whose location tags contain the search string anywhere in their
names.
l does not contain returns all assets whose location tags do not contain the search string.
l is applied returns all assets that have any location tag applied.
l is not applied returns all assets that have no location tags applied.
After you select an operator, you type a search string for the location tag in the blank field.
The user-added tag (owner) filter lets you search for assets based on the owner tags that users
have applied to them. For example, a company may have different people responsible for
different assets. A user can tag the assets each person is responsible for and use this information
to track the risk level of those assets. You can search for assets with or without a specific tag,
assets whose owner tags meet certain criteria, or assets with or without any user-added owner
tags. For more information on user-added owner tags, see Applying RealContext with tags on
page 250.
l is returns all assets with owner tags that match the search string exactly.
l is not returns all assets that do not have an owner tag that matches the exact search string.
l starts with returns all assets with owner tags that begin with the same characters as the
search string.
l ends with returns all assets with owner tags that end with the same characters as the search
string.
l contains returns all assets whose owner tags contain the search string anywhere in their
names.
l does not contain returns all assets whose owner tags do not contain the search string.
l is applied returns all assets that have any owner tag applied.
l is not applied returns all assets that have no owner tags applied.
After you select an operator, you type a search string for the owner tag in the blank field.
The following vAsset filters let you search for virtual assets that you track with vAsset discovery.
Creating dynamic asset groups for virtual assets based on specific criteria can be useful for
analyzing different segments of your virtual environment. For example, you may want to run
reports or assess risk for all the virtual assets used by your accounting department, and they are
all supported by a specific resource pool. For information about vAsset discovery, see
Discovering virtual machines managed by VMware vCenter or ESX/ESXi on page 155.
The vAsset cluster filter lets you search for virtual assets that belong, or do not belong, to specific
clusters. This filter works with the following operators:
l is returns all assets that belong to clusters whose names match an entered string exactly.
l is not returns all assets that belong to clusters whose names do not match an entered string.
l contains returns all assets that belong to clusters whose names contain an entered string.
l does not contain returns all assets that belong to clusters whose names do not contain an
entered string.
l starts with returns all assets that belong to clusters whose names begin with the same
characters as an entered string.
After you select an operator, you enter the search string for the cluster in the blank field.
The vAsset datacenter filter lets you search for assets that are managed, or are not managed, by
specific datacenters. This filter works with the following operators:
l is returns all assets that are managed by datacenters whose names match an entered string
exactly.
l is not returns all assets that are managed by datacenters whose names do not match an
entered string.
After you select an operator, you enter the search string for the datacenter name in the blank
field.
The vAsset host filter lets you search for assets that are guests, or are not guests, of specific host
systems. This filter works with the following operators:
l is returns all assets that are guests of hosts whose names match an entered string exactly.
l is not returns all assets that are guests of hosts whose names do not match an entered string.
l contains returns all assets that are guests of hosts whose names contain an entered string.
l does not contain returns all assets that are guests of hosts whose names do not contain an
entered string.
l starts with returns all assets that are guests of hosts whose names begin with the same
characters as an entered string.
After you select an operator, you enter the search string for the host name in the blank field.
The vAsset power state filter lets you search for assets that are in, or are not in, a specific power
state. This filter works with the following operators:
l is returns all assets that are in a power state selected from a drop-down list.
l is not returns all assets that are not in a power state selected from a drop-down list.
After you select an operator, you select a power state from the drop-down list. Power states
include on, off, or suspended.
The vAsset resource pool path filter lets you discover assets that belong, or do not belong, to
specific resource pool paths. This filter works with the following operators:
l contains returns all assets that are supported by resource pool paths whose names contain an
entered string.
l does not contain returns all assets that are supported by resource pool paths whose names
do not contain an entered string.
You can specify any level of a path, or you can specify multiple levels, each separated by a
hyphen and right arrow: ->. This is helpful if you have resource pool path levels with identical
names.
For example, you may have two resource pool paths with the following levels:
Human Resources -> Management -> Workstations
Advertising -> Management -> Workstations
The virtual machines that belong to the Management and Workstations levels are different in
each path. If you only specify Management in your filter, the search will return all virtual machines
that belong to the Management and Workstations levels in both resource pool paths.
However, if you specify Advertising -> Management -> Workstations, the search will only return
virtual assets that belong to the Workstations pool in the path with Advertising as the highest
level.
After you select an operator, you enter the search string for the resource pool path in the blank
field.
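The level-matching behavior described above can be sketched in Python. This is an illustrative sketch only: the function name, the assumption that matching is a simple substring test, and the use of " -> " with surrounding spaces are all assumptions, not the product's implementation.

```python
def path_contains(pool_path, search):
    # Assumed 'contains' semantics: the search string, which may name one
    # level or several levels joined by ' -> ', occurs anywhere in the path.
    return search in pool_path

paths = [
    "Human Resources -> Management -> Workstations",
    "Advertising -> Management -> Workstations",
]

# A single-level search matches both paths...
broad = [p for p in paths if path_contains(p, "Management")]

# ...while specifying multiple levels disambiguates identically named pools.
narrow = [p for p in paths if
          path_contains(p, "Advertising -> Management -> Workstations")]
```

Here `broad` contains both paths, while `narrow` contains only the path whose highest level is Advertising, mirroring the example above.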
The filters for the Common Vulnerability Scoring System (CVSS) risk vectors let you
search for assets based on vulnerabilities that pose different types or levels of risk to your
organization’s security.
These filters refer to the industry-standard vectors used in calculating CVSS scores and PCI
severity levels. They are also used in risk strategy calculations for risk scores. For detailed
information about CVSS vectors, go to the National Vulnerability Database Web site at
nvd.nist.gov/cvss.cfm.
Using these filters, you can find assets based on different exploitability attributes of the
vulnerabilities found on them, or based on the different types and degrees of impact to the asset
in the event of compromise through the vulnerabilities found on them. Isolating these assets can
help you to make more informed decisions on remediation priorities or to prepare for a PCI audit.
l is returns all assets that match a specific risk level or attribute associated with the CVSS
vector.
l is not returns all assets that do not match a specific risk level or attribute associated with the
CVSS vector.
After you select a filter and an operator, select the desired impact level or likelihood attribute from
the drop-down list:
l For each of the three impact vectors (Confidentiality, Integrity, and Availability), the options
are Complete, Partial, or None.
l For CVSS Access Vector, the options are Local (L), Adjacent (A), or Network (N).
l For CVSS Access Complexity, the options are Low, Medium, or High.
l For CVSS Authentication Required, the options are None, Single, or Multiple.
The vulnerability category filter lets you search for assets based on the categories of
vulnerabilities that have been flagged on them during scans. This is a useful filter for finding out at
a quick glance how many, and which, assets have a particular type of vulnerability, such as ones
related to Adobe, Cisco, or Telnet. Lists of vulnerability categories can be found in the
Vulnerability Checks section of the scan template configuration or the report configuration, where
you can filter report scope based on vulnerabilities.
The filter applies a search string to vulnerability categories, so that the search returns a list of
assets that either have or do not have vulnerabilities in categories that match that search string. It
works with the following operators:
l contains returns all assets with a vulnerability whose category contains the search string. You
can use an asterisk (*) as a wildcard character.
l does not contain returns all assets that do not have a vulnerability whose category contains
the search string. You can use an asterisk (*) as a wildcard character.
l is returns all assets that have a vulnerability whose category matches the search string
exactly.
l is not returns all assets that do not have a vulnerability whose category matches the exact
search string.
l starts with returns all assets with vulnerabilities whose categories begin with the same
characters as the search string.
l ends with returns all assets with vulnerabilities whose categories end with the same
characters as the search string.
After you select an operator, you type a search string for the vulnerability category in the blank
field.
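The contains operator with the * wildcard can be sketched using Python's fnmatch module. This is a rough model for illustration; the product's actual matching rules, including case handling, are assumptions here:

```python
from fnmatch import fnmatchcase

def category_contains(category, search):
    # 'contains' semantics: the search string may occur anywhere in the
    # category name, and '*' within it matches any run of characters.
    # Case-insensitive comparison is an assumption.
    return fnmatchcase(category.lower(), "*" + search.lower() + "*")

categories = ["Adobe Acrobat", "Microsoft Patch", "Cisco"]
matches = [c for c in categories if category_contains(c, "adobe")]
```

A wildcard search such as `category_contains(c, "micro*patch")` would match "Microsoft Patch" under this model.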
The Vulnerability CVSS score filter lets you search for assets with vulnerabilities that have a
specific CVSS score or fall within a range of scores. You may find it helpful to create asset groups
according to CVSS score ranges that correspond to PCI severity levels: low (0.0-3.9), medium
(4.0-6.9), and high (7.0-10). Doing so can help you prioritize assets for remediation.
l is returns all assets with vulnerabilities that have a specified CVSS score.
l is not returns all assets with vulnerabilities that do not have a specified CVSS score.
l is in the range of returns all assets with vulnerabilities that fall within the range of two specified
CVSS scores, including the high and low scores in the range.
l is higher than returns all assets with vulnerabilities that have a CVSS score higher than a
specified score.
l is lower than returns all assets with vulnerabilities that have a CVSS score lower than a
specified score.
After you select an operator, type a score in the blank field. If you select the range operator, you
would type a low score and a high score to create the range. Acceptable values include any
numeral from 0.0 to 10. You can only enter one digit to the right of the decimal. If you enter more
than one digit, the score is automatically rounded up. For example, if you enter a score of 2.25,
the score is automatically rounded up to 2.3.
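The rounding rule and the PCI-style score bands mentioned above can be modeled as follows. This is a sketch under stated assumptions: round-half-up rounding (which is consistent with the 2.25 to 2.3 example), and function names that are illustrative, not part of the product:

```python
from decimal import Decimal, ROUND_HALF_UP

def normalize_cvss(text):
    # Round an entered score to one decimal place, e.g. "2.25" -> 2.3.
    score = Decimal(text).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    if not Decimal("0.0") <= score <= Decimal("10.0"):
        raise ValueError("CVSS scores range from 0.0 to 10")
    return float(score)

def pci_band(score):
    # PCI-style severity bands described above:
    # low (0.0-3.9), medium (4.0-6.9), high (7.0-10).
    if score <= 3.9:
        return "low"
    if score <= 6.9:
        return "medium"
    return "high"
```

For example, `normalize_cvss("2.25")` yields 2.3, and `pci_band(7.0)` falls in the high band.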
The vulnerability exposures filter lets you search for assets based on the types of
exposures known to be associated with vulnerabilities discovered on those assets.
This is a useful filter for isolating and prioritizing assets that have a higher likelihood of
compromise due to these exposures.
The filter applies a search string to one or more of the vulnerability exposure types, so that the
search returns a list of assets that either have or do not have vulnerabilities associated with the
specified exposure types. It works with the following operators:
l includes returns all assets that have vulnerabilities associated with specified exposure types.
l does not include returns all assets that do not have vulnerabilities associated with specified
exposure types.
After you select an operator, select one or more exposure types in the drop-down list. To select
multiple types, hold down the <Ctrl> key and click all desired types.
The vulnerability risk score filter lets you search for assets with vulnerabilities that have a specific
risk score or fall within a range of scores. Isolating and tracking assets with higher risk scores, for
example, can help you prioritize remediation for those assets.
l is in the range of returns all assets with vulnerabilities that fall within the range of two specified
risk scores, including the high and low scores in the range.
l is higher than returns all assets with vulnerabilities that have a risk score higher than a
specified score.
l is lower than returns all assets with vulnerabilities that have a risk score lower than a specified
score.
After you select an operator, enter a score in the blank field. If you select the range operator, you
would type a low score and a high score to create the range. Keep in mind your currently selected
risk strategy when searching for assets based on risk scores. For example, if the currently
selected strategy is Real Risk, you will not find assets with scores higher than 1,000. Refer to the
risk scores in your vulnerability and asset tables for guidance.
The vulnerability title filter lets you search for assets based on the vulnerabilities that have been
flagged on them during scans. This is a useful filter to use for verifying patch applications, or
finding out at a quick glance how many, and which, assets have a particular high-risk
vulnerability.
l contains returns all assets with a vulnerability whose name contains the search string. You
can use an asterisk (*) as a wildcard character.
l does not contain returns all assets that do not have a vulnerability whose name contains the
search string. You can use an asterisk (*) as a wildcard character.
l is returns all assets that have a vulnerability whose name matches the search string
exactly.
l is not returns all assets that do not have a vulnerability whose name matches the exact search
string.
l starts with returns all assets with vulnerabilities whose names begin with the same characters
as the search string.
l ends with returns all assets with vulnerabilities whose names end with the same characters as
the search string.
After you select an operator, you type a search string for the vulnerability name in the blank field.
Combining filters
If you create multiple filters, you can have Nexpose return a list of assets that match all the criteria
specified in the filters, or a list of assets that match any of the criteria specified in the filters. You
can make this selection in a drop-down list at the bottom of the Search Criteria panel.
The difference between All and Any is that the All setting will only return assets that match the
search criteria in all of the filters, whereas the Any setting will return assets that match any given
filter. For this reason, a search with All selected typically returns fewer results than Any.
For example, suppose you are scanning a site with 10 assets. Five of the assets run Linux, and
their names are linux01, linux02, linux03, linux04, and linux05. The other five run Windows, and
their names are win01, win02, win03, win04, and win05.
Suppose you create two filters. The first filter is an operating system filter, and it returns a list of
assets that run Windows. The second filter is an asset filter, and it returns a list of assets that have
“linux” in their names.
If you perform a filtered asset search with the two filters using the All setting, the search will return
a list of assets that run Windows and have “linux” in their asset names. Since no such assets
exist, there will be no search results. However, if you use the same filters with the Any setting, the
search will return a list of assets that run Windows or have “linux” in their names. Five of the
assets match the operating system filter and the other five match the name filter, so the search
returns all 10 assets.
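The All/Any logic in the example above maps directly onto Python's built-in all() and any() functions. This sketch uses hypothetical asset records and filter predicates for illustration:

```python
# Ten hypothetical assets: five Linux, five Windows, named as in the example.
assets = (
    [{"name": f"linux{i:02d}", "os": "Linux"} for i in range(1, 6)]
    + [{"name": f"win{i:02d}", "os": "Windows"} for i in range(1, 6)]
)

filters = [
    lambda a: a["os"] == "Windows",   # operating system filter
    lambda a: "linux" in a["name"],   # asset name filter
]

# All: an asset must satisfy every filter -> no matches.
match_all = [a for a in assets if all(f(a) for f in filters)]

# Any: an asset must satisfy at least one filter -> all ten assets.
match_any = [a for a in assets if any(f(a) for f in filters)]
```

This makes concrete why All typically returns fewer results than Any: All intersects the filter results, while Any unions them.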
After you configure asset search filters as described in the preceding section, you can create an
asset group based on the search results. Using the asset search is the only way to create a
dynamic asset group. It is also one of two ways to create a static asset group and is better suited
for environments with large numbers of assets. For a different approach, which involves manually
selecting assets, see Configuring a static asset group by manually selecting assets on page 308.
Note: If you have permission to create asset groups, you can save asset search results as an
asset group.
(Optional) Click the Export to CSV link at the bottom of the table to export the results to a
comma-separated values (CSV) file that you can view and manipulate in a spreadsheet
program.
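The exported file can also be processed programmatically rather than in a spreadsheet. A minimal sketch using Python's csv module; the file name is hypothetical, and the column headers are whatever the export produced, not values defined here:

```python
import csv

def load_results(path):
    # Each row of the exported CSV becomes a dict keyed by the
    # file's own column headers.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))
```

For example, `load_results("asset_search_results.csv")` returns one dictionary per exported asset row.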
Note: Only Global Administrators or users with the Manage Group Assets permission can create
asset groups, so only these users can save Asset Filter search results.
3. Select either the Dynamic or Static option, depending on what kind of asset group you want
to create. See Comparing dynamic and static asset groups on page 307.
If you create a dynamic asset group, the asset list is subject to change with every scan. See
Using dynamic asset groups on page 307.
You must give users access to an asset group for them to be able to view assets or perform
asset-related operations, such as reporting, with assets in that group.
Note: You must be a Global Administrator or have Manage Asset Group Access permission to
add users to an asset group.
6. Select the check box for every user account that you want to add to the access list or select the
check box in the top row to add all users.
You can change search criteria for membership in a dynamic asset group at any time.
Click the Administration icon to go to the Administration page, and then click the
manage link below Groups.
Click the Assets icon to go to the Assets page, and then click the blue number above
Asset Groups.
2. Click Edit to find a dynamic asset group that you want to modify.
OR
Click the link for the name of the desired asset group.
3. Click Edit Asset Group or click View Asset Filter to review a summary of filter criteria.
Any of these approaches causes the application to display the Filtered asset search panel
with the filters set for the most recent asset search.
4. Change the filters according to your preferences, and run a search. See Configuring asset
search filters on page 313.
5. Click Save.
You may want any number of people in your organization to view asset and vulnerability data
without actually logging on to the Security Console. For example, a chief information security
officer (CISO) may need to see statistics about your overall risk trends over time. Or members of
your security team may need to see the most critical vulnerabilities for sensitive assets so that
they can prioritize remediation projects. It may be unnecessary or undesirable for these
stakeholders to access the application itself. By generating reports, you can distribute critical
information to the people who need it via e-mail or integration of exported formats such as XML,
CSV, or database formats.
Reports provide many, varied ways to look at scan data, from business-centric perspectives to
detailed technical assessments. You can learn everything you need to know about vulnerabilities
and how to remediate them, or you can just list the services that are running on your network assets.
You can create a report on a site, but reports are not tied to sites. You can parse assets in a
report any number of ways, including all of your scanned enterprise assets, or just one.
Note: For information about other tools related to compliance with Policy Manager policies, see
What are your compliance requirements?, which you can download from the Support page in
Help.
If you are verifying compliance with PCI, you will use the following report templates in the audit
process:
l Attestation of Compliance
l PCI Executive Summary
l Vulnerability Details
If you are verifying compliance with United States Government Configuration Baseline
(USGCB) or Federal Desktop Core Configuration (FDCC) policies, you can use the following
report formats to capture results data:
Note: You also can click the top row check box to select all requests and then approve or reject
them in one step.
Reports are primarily how your asset group members view asset data. Therefore, it’s a best
practice to organize reports according to the needs of asset group members. If you have an asset
group for Windows 2008 servers, create a report that only lists those assets, and include a
section on policy compliance.
Creating reports is very similar to creating scan jobs. It’s a simple process involving a
configuration panel. You select or customize a report template, select an output format, and
choose assets for inclusion. You also have to decide what information to include about these
assets, when to run the reports, and how to distribute them.
All panels have the same navigation scheme. You can either use the navigation buttons in the
upper-right corner of each panel page to progress through each page of the panel, or you can
click a page link listed on the left column of each panel page to go directly to that page.
Note: Parameters labeled in red denote required parameters on all panel pages.
To save configuration changes, click Save that appears on every page. To discard changes, click
Cancel.
You may need to view, edit, or run existing report configurations for various reasons:
l On occasion, you may need to run an automatically recurring report immediately. For
example, you have configured a recurring report on Microsoft Windows vulnerabilities.
Microsoft releases an unscheduled security bulletin about an Internet Explorer vulnerability.
You apply the patch for that flaw and run a verification scan. You will want to run the report to
demonstrate that the vulnerability has been resolved by the patch.
l You may need to change a report configuration. For example, you may need to add assets to
your report scope as new workstations come online.
The application lists all report configurations in a table, where you can view, run, or edit them, or
view the histories of when they were run in the past.
Note: On the View Reports panel, you can start a new report configuration by clicking the
New button.
1. Click the Reports icon that appears on every page of the Web interface. The Security Console
displays the Reports page.
2. Click the View reports panel to see all the reports of which you have ownership. A Global
Administrator can see all reports.
A table lists reports by name and most recent report generation date. You can sort reports by
either criterion by clicking the column heading. Report names are unique in the application.
Every time the application writes a new instance of a report, it changes the date in the Most
Recent Report column. You can click the link for that date to view the most recent instance of
the report.
For example, you may have a report that only includes Windows vulnerabilities for a given set of
assets. You may still want to create another report for those assets, focusing only on Adobe
vulnerabilities. Copying the report configuration would make the most sense if no other attributes
are to be changed.
Whether you click Edit or Copy, the Security Console displays the Configure a Report panel for
that configuration. See Creating a basic report on page 341.
l To view all instances of a report that have been run, click History in the tools drop-down menu
for that report. You can also see the history for a report that has previously run at least once by
clicking the report name, which is a hyperlink. If a report name is not a hyperlink, it is because
an instance of the report has not yet run successfully. By reviewing the history, you can see
any instances of the report that failed.
l Clicking Delete will remove the report configuration and all generated instances from the
application database.
There are additional configuration steps for the following types of reports:
l Export
l Configuring an XCCDF report
l Configuring an ARF report
l Database Export
l Baseline reports
l Risk trend reports
After you complete a basic report configuration, you will have the option to configure additional
properties, such as those for distributing the report.
You will have the options to either save and run the report, or just to save it for future use. For
example, if you have a saved report and want to run it one time with an additional site in it, you
could add the site, save and run, return it to the original configuration, and then just save. See
Viewing, editing, and running reports on page 339.
Note: Resetting the Search templates field by clicking the close X displays all templates in
alphabetical order.
l Document templates are designed for human-readable reports that
contain asset and vulnerability information. Some of the formats available for this
template type—Text, PDF, RTF, and HTML—are convenient for sharing information to
be read by stakeholders in your organization, such as executives or security team
members tasked with performing remediation.
l Export templates are designed for integrating scan information into external systems.
The formats available for this type include various XML formats, Database Export, and
CSV. For more information, see Working with report formats on page 519.
6. Click Close on the Search templates field to reset the search or enter a new term.
The Security Console displays template thumbnail images that you can browse, depending on
the template type you selected. If you selected the All option, you will be able to browse all
available templates. Click the scroll arrows on the left and the right to browse the templates.
You also can click the Preview icon in the lower right corner of any thumbnail (highlighted in
the preceding screen shot) to enlarge and click through a preview of the template. This can be
helpful to see what kind of sections or information the template provides.
When you see the desired template, click the thumbnail. It becomes highlighted and
displays a Selected label in the top right corner.
7. Select a format for the report. Formats not only affect how reports appear and are consumed,
but they also can have some influence on what information appears in reports. For more
information, see Working with report formats on page 519.
Tip: See descriptions of all available report templates to help you select the best template
for your needs.
If you are using the PCI Attestation of Compliance or PCI Executive Summary template, or a
custom template made with sections from either of these templates, you can only use the RTF
format. These two templates require ASVs to fill in certain sections manually.
8. (Optional) Select the language for your report: Click Advanced Settings, select Language,
and choose an output language from the drop-down list.
To change the default language of reports, click your user name in the upper-right corner,
select User Preferences, and select a language from the drop-down list. The newly
selected language becomes the default for reports you create afterward.
9. If you are using the CyberScope XML Export format, enter the names for the component,
bureau, and enclave in the appropriate fields. For more information see Entering
CyberScope information on page 345. Otherwise, continue with specifying the scope of your
report.
When configuring a CyberScope XML Export report, you must enter additional information, as
indicated in the CyberScope Automated Data Feeds Submission Manual published by the U.S.
Office of Management and Budget. The information identifies the entity submitting the data:
If you are creating one of the XCCDF reports, and you have selected one of the XCCDF-
formatted templates on the Create a report panel, take the following steps:
Note: You cannot filter vulnerabilities by category if you are creating an XCCDF or CyberScope
XML report.
The Policies option only appears when you select one of the XCCDF formats in the
Template section of the Create a report panel.
Use the Asset Reporting Format (ARF) export template to submit policy or benchmark scan
results to the U.S. government in compliance with Security Content Automation Protocol (SCAP)
1.2 requirements. To do so, take the following steps:
Note: To run ARF reports you must first run scans that have been configured to save SCAP
data. See Selecting Policy Manager checks on page 567 for more information.
1. Click Select sites, assets, asset groups, or tags in the Scope section of the Create a
report panel. The tags filter is available for all report templates except Audit Report,
Baseline Comparison, Executive overview, Database export, and XCCDF Human
Readable CSV Export.
2. To use only the most recent scan data in your report, select the Use the last scan data only
check box. Otherwise, the report will include all historical scan data.
Tip: The asset selection options are not mutually exclusive. You can combine selections of
sites, asset groups, and individual assets.
3. Select Sites, Asset Groups, Assets, or Tags from the drop-down list.
4. If you selected Sites, Asset Groups, or Tags, click the check box for any displayed site or
asset group to select it. You also can click the check box in the top row to select all options.
If you selected Assets, the Security Console displays search filters. Select a filter, an
operator, and then a value.
For example, if you want to report on assets running Windows operating systems, select the
operating system filter and the contains operator. Then enter Windows in the text field.
To add more filters to the search, click the + icon and configure your new filter.
Select an option to match any or all of the specified filters. Matching any filters typically
returns a larger set of results. Matching all filters typically returns a smaller set of results
because multiple criteria make the search more specific.
Click the check box for any displayed asset to select it. You also can click the check box in
the top row to select all options.
5. Click OK to save your settings and return to the Create a report panel. The selections are
referenced in the Scope section.
Reports can also be created to exclude a type of vulnerability or a list of categories. For example,
if there is an Adobe Acrobat vulnerability in your environment that is addressed with a scheduled
patching process, you can run a report that contains all vulnerabilities except those Adobe
Acrobat vulnerabilities. This produces a report that is easier to read because unnecessary
information has been filtered out.
Note: You can manage vulnerability filters through the API. See the API guide for more
information.
Organizations that have distributed IT departments may need to disseminate vulnerability reports
to multiple teams or departments. For the information in those reports to be the most effective,
the information should be specific for the team receiving it. For example, a security administrator
can produce remediation reports for the Oracle database team that only include vulnerabilities
that affect the Oracle database. These streamlined reports will enable the team to more
effectively prioritize their remediation efforts.
A security administrator can filter by vulnerability category to create reports that indicate how
widespread a vulnerability is in an environment, or which assets have vulnerabilities that are not
being addressed during patching. The security administrator can also include a list of historical
vulnerabilities on an asset after a scan template has been edited. These reports can be used to
monitor compliance status and to ensure that remediation efforts are effective.
The following document report template sections can include filtered vulnerability information:
l Discovered Vulnerabilities
l Discovered Services
l Index of Vulnerabilities
l Remediation Plan
l Vulnerability Exceptions
l Vulnerability Report Card Across Network
l Vulnerability Report Card by Node
l Vulnerability Test Errors
1. Click Filter report scope based on vulnerabilities on the Scope section of the Create a
report panel.
[Screenshot: Select Vulnerability Filters section with the option to include only validated vulnerabilities]
2. To filter vulnerabilities by severity level, select the Critical vulnerabilities or Critical and
severe vulnerabilities option. Otherwise, select All severities.
These are not PCI severity levels or CVSS scores. They map to numeric severity rankings
that are assigned by the application and displayed in the Vulnerability Listing table of the
Vulnerabilities page. Scores range from 1 to 10:
1-3= Moderate; 4-7= Severe; and 8-10= Critical.
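The ranking-to-category mapping above can be expressed as a small helper; the function name is illustrative, not part of the product:

```python
def severity_label(rank):
    # Map the application's numeric severity ranking (1-10) to its category.
    if not 1 <= rank <= 10:
        raise ValueError("severity rankings range from 1 to 10")
    if rank <= 3:
        return "Moderate"
    if rank <= 7:
        return "Severe"
    return "Critical"
```

For example, a vulnerability ranked 4 falls in the Severe category, while one ranked 8 is Critical.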
3. If you selected a CSV report template, you have the option to filter vulnerability result types.
To include all vulnerability check results (positive and negative), select the Vulnerable and
non-vulnerable option next to Results.
If you want to include only positive check results, select the Vulnerable option.
You can filter positive results based on how they were determined by selecting any of the
check boxes for result types:
4. If you want to include or exclude specific vulnerability categories, select the appropriate option
button in the Categories section.
Tip: Categories that are named for manufacturers, such as Microsoft, can serve as
supersets of categories that are named for their products. For example, if you filter by the
Microsoft category, you inherently include all Microsoft product categories, such as Microsoft
Patch and Microsoft Windows. This applies to other "company" categories, such as Adobe,
Apple, and Mozilla. To view the vulnerabilities in a category, see Configuration steps for
vulnerability check settings on page 562.
5. If you choose to include or exclude specific categories, the Security Console displays a text
box containing the words Select categories. You can select categories with two different
methods:
l Click the text box to display a window that lists all available categories. Scroll down the
list and select the check box for each desired category. Each selection appears in a text
field at the bottom of the window.
l Click the text box to display a window that lists all available categories. Enter part or all of a
category name in the Filter: text box, and select the categories from the list that appears. If
you enter a name that applies to multiple categories, all those categories appear. For
example, if you type Adobe or ado, several Adobe categories appear. As you select
categories, they appear in the text field at the bottom of the window.
If you use either or both methods, all your selections appear in a field at the bottom of the
selection window. When the list includes all desired categories, click outside of the window
to return to the Scope page. The selected categories appear in the text box.
Note: Existing reports will include all vulnerabilities unless you edit them to filter by
vulnerability category.
6. Click the OK button to save scope selections.
You can run the completed report immediately on a one-time basis, configure it to run after every
scan, or schedule it to run on a repeating basis. The third option is useful if you have an asset
group containing assets that are assigned to many different sites, each with a different scan
template. Since these assets will be scanned frequently, it makes sense to run recurring reports
automatically.
l Select the first option to run the completed report immediately on a one-time basis.
l Select Run a recurring report after each scan to generate a report every time a scan
is completed on the assets defined in the report scope.
l Select Run a recurring report on a repeated schedule if you wish to schedule reports
for regular time intervals.
If you selected either of the first two options, ignore the following steps.
If you selected the scheduling option, the Security Console displays controls for configuring
a schedule.
6. Enter an hour and minute for the start time, and click the Up or Down arrow to select AM or
PM.
7. Enter a value in the field labeled Repeat every and select a time unit from the drop-down list
to set a time interval for repeating the report.
If you select months on the specified date, the report will run every month on the selected
calendar date. For example, if you schedule a report to run on October 15, the report will run
on October 15 every month.
If you select months on the specified day of the month, the report will run every month on the
same ordinal weekday. For example, if you schedule the first report to run on October 15,
which is the third Monday of the month, the report will run every third Monday of the month.
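The difference between the two monthly options comes down to calendar-date versus ordinal-weekday arithmetic, which can be sketched as follows. The function names are illustrative, not the product's scheduler:

```python
from datetime import date, timedelta

def ordinal_of_weekday(d):
    # 1 = first occurrence of d's weekday in its month, 2 = second, ...
    return (d.day - 1) // 7 + 1

def nth_weekday(year, month, weekday, n):
    # Date of the nth given weekday (Monday=0) in year/month.
    first = date(year, month, 1)
    first += timedelta(days=(weekday - first.weekday()) % 7)
    return first + timedelta(weeks=n - 1)

start = date(2012, 10, 15)           # October 15, a third Monday
n = ordinal_of_weekday(start)        # 3
# The "specified date" option would next run on November 15, while the
# "specified day of the month" option runs on the third Monday of November:
next_run = nth_weekday(2012, 11, start.weekday(), n)
```

Under this model, `next_run` is November 19, 2012, the third Monday of that month.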
The frequency with which you schedule and distribute reports depends on your business needs and
security policies. You may want to run quarterly executive reports. You may want to run monthly
vulnerability reports to anticipate the release of Microsoft hotfix patches. Compliance programs,
such as PCI, impose their own schedules.
The amount of time required to generate a report depends on the number of included live IP
addresses, the number of included vulnerabilities—if vulnerabilities are being included—and the
level of detail in the report template. Generating a PDF report for 100-plus hosts with 2500-plus
vulnerabilities takes fewer than 10 seconds.
The application can generate reports simultaneously, with each report request spawning a new
thread. Technically, there is no limit on the number of supported concurrent reports. This means
that you can schedule reports to run simultaneously as needed. Note that generating a large
number of concurrent reports—20 or more—can take significantly more time than usual.
The remediation plan templates provide information for assessing the highest impact remediation
solutions. You can use the Remediation Display settings to specify the number of solutions you
want to see in a report. The default is 25 solutions, but you can set the number from 1 to 1000 as
you require. Keep in mind that if the number is too high, the report may contain an unwieldy
amount of data, and if it is too low, you may miss some important solutions for your assets.
You can also specify the criteria for sorting data in your report. Solutions can be sorted by
Affected asset, Risk score, Remediated vulnerabilities, Remediated vulnerabilities with known
exploits, and Remediated vulnerabilities with malware kits.
The Vulnerability Trends template provides information about how vulnerabilities in your
environment have changed over time. You can configure the time range for the
report to see whether you are improving your security posture and where you can make improvements.
To ensure readability of the report and clarity of the charts, there is a limit of 15 data points that
can be included in the report. The time range you set controls the number of data points that
appear in the report. For example, if you set your date range to a weekly interval over a two-
month period, you will have eight data points in your report.
Note: Ensure you schedule adequate time to run this report template because of the large
amount of data that it aggregates. Each data point is the equivalent of a complete report. It may
take a long time to complete.
To configure the time range of the report, use the following procedure:
To set a custom range, enter a start date and an end date, and specify the interval: days,
months, or years.
Vulnerability trend date range
After you complete a basic report configuration, you will have the option to configure additional
properties, such as those for distributing the report. You can access those properties by clicking
Configure advanced settings...
If you have configured the report to run in the future, either by selecting Run a recurring report
after every scan or Run a recurring report in a schedule in the Frequency section (see
Configuring report frequency on page 356), you can save the report configuration by clicking
Save the report or run it once immediately by clicking Save & run the report. Even if you
configure the report to run automatically with one of the frequency settings, you can run the report
manually any time you want if the need arises. See Viewing, editing, and running reports on
page 339.
If you configured the report to run immediately on a one-time basis, you will also see buttons
allowing you to either save and run the report, or just to save it. See Viewing, editing, and running
reports on page 339.
Designating an earlier scan as a baseline for comparison against future scans allows you to track
changes in your network. Possible changes between scans include newly discovered assets,
services and vulnerabilities; assets and services that are no longer available; and vulnerabilities
that were mitigated or remediated.
You must select the Baseline Comparison report template to be able to define a baseline.
See Starting a new report configuration on page 341.
4. Click Use first scan, Use previous scan, or Use scan from a specific date to specify which
scan to use as the baseline scan.
5. Click the calendar icon to select a date if you chose Use scan from a specific date.
6. Click Save & run the report or Save the report, depending on what you want to do.
Risks change over time as vulnerabilities are discovered and old vulnerabilities are remediated
on assets or excluded from reports. As system configurations are changed, assets or sites that
have been added or removed also will impact your risk over time. Vulnerabilities can lead to asset
compromise that might impact your organization’s finances, privacy, compliance status with
government agencies, and reputation. Tracking risk trends helps you assess threats to your
organization’s standings in these areas and determine if your vulnerability management efforts
are satisfactorily maintaining risk at acceptable levels or reducing risk over time.
A risk trend can be defined as a long-term view of the potential impact of an asset’s compromise,
which may change over time. Depending on your strategy, you can base your trend data on
average risk or total risk. Average risk is based on a calculation of the risk scores of your assets
over the report date range; it gives you an overview of how vulnerable your assets might be to
exploits, whether high, low, or unchanged. Total risk is an aggregated score of vulnerabilities on
assets over a specified period. See Prioritize according to risk score on page 530 for more
information about risk strategies.
Over time, vulnerabilities tracked on your organization’s assets indicate risks that may be
reflected in your reports. Using risk trends in reports will help you understand how remediated or
excluded vulnerabilities impact your organization. Risk trends
appear in your Executive Overview or custom report as a set of colored line graphs illustrating
how your risk has changed over the report period.
See Selecting risk trends to be included in the report on page 364 for information on including
risk trends in your Executive Overview report.
Changes in assets have an impact on risk trends; for example, assets added to a group may
increase the number of possible vulnerabilities because each asset may have exploitable
vulnerabilities that have not been accounted for or remediated. Using risk trends you can
demonstrate, for example, why the risk level per asset is largely unchanged despite a spike in the
overall risk trend due to the addition of an asset. The date that you added the assets will show an
increase in risk until any vulnerabilities associated with those assets have been remediated. As
vulnerabilities are remediated or excluded from scans your data will show a downward trend in
your risk graphs.
Changing your risk strategy will have an impact on your risk trend reporting. Some risk strategies
incorporate the passage of time in the determination of risk data. These time-based strategies will
demonstrate changes in risk even if there were no new scans and no assets or vulnerabilities
were added in the interim.
Configure your reports to display risk trends to show you the data you need. Select All assets in
report scope for an overall high-level risk trends report to indicate trends in your organization’s
exploitable vulnerabilities. Vulnerabilities that are not known to have exploits still pose a certain
amount of risk, but it is calculated to be much smaller. The highest-risk graphs demonstrate the
biggest contributors to your risk at the site, group, or asset level. These graphs disaggregate
your risk data, breaking out the highest-risk contributors for each type of asset collection included
in the scope of your report.
Note: The risk trend settings in the Advanced Properties page of the Report Configuration
panel will not appear if the selected template does not include ‘Executive overview’ or ‘Risk
Trend’ sections.
You can specify your report configuration on the Scope and Advanced Properties pages of the
Report Configuration panel. On the Scope page of the report configuration settings you can set
the assets to include in your risk trend graphs. On the Advanced Properties page you can specify
which asset collections within the scope of your report you want to include in risk trend graphs.
You can generate a graph representing how risk has changed over time for all assets in the
scope of the report. If you generate this graph, you can choose to display how risk for all the
assets has changed over time, how the scope of the assets in the report has changed over time
or both. These trends will be plotted on two y-axes. If you want to see how the report scope has
changed over the report period, you can do this by trending either the number of assets over the
report period or the average risk score for all the assets in the report scope. When choosing to
display a trend for all assets in the report scope, you must choose one or both of the two trends.
You may also choose to include risk trend graphs for the five highest-risk sites in the scope of
your report, or the five highest-risk asset groups, or the five highest risk assets. You can only
display trends for sites or asset groups if your report scope includes sites or asset groups,
respectively. Each of these graphs will plot a trend line for each asset, group, or site that
comprises the five-highest risk entities in each graph. For sites and groups trend graphs, you can
choose to represent the risk trend lines either in terms of the total risk score for all the assets in
each collection or in terms of the average risk score of the assets in each collection.
You can select All assets in report scope and you can further specify Total risk score and
indicate Scope trend if you want to include either the Average risk score or Number of
assets in your graph. You can also choose to include the five highest risk sites, five highest risk
asset groups, and five highest risk assets, depending on the level of detail you require in your report.
Tip: Including the five highest risk sites, assets, or asset groups in your report can help you
prioritize candidates for your remediation efforts.
Asset group membership can change over time. If you want to base risk data on asset group
membership for a particular period, you can include asset group membership history by
selecting Historical asset group membership on the Advanced Properties page of the Report
Configuration panel. You can also select Asset group membership at the time of report
generation to base each risk data point on the assets that are members of the selected groups at
the time the report is run. This allows you to track risk trends for date ranges that precede the
creation of the asset groups.
You must have assets selected in your report scope to include risk trend reports in your report.
See Selecting assets to report on on page 347 for more information.
1. Select the Executive Overview template on the General page of the Report Configuration
panel.
(Optional) You can also create a custom report template to include a risk trend section.
To include historical asset group membership in your reports, make sure that you have
selected at least one asset group on the Scope page of the Report Configuration panel and
that you have selected the 5 highest-risk asset group graph.
4. Set the date range for your risk trends. You can select Past 1 year, Past 6 months, Past 3
months, Past 1 month, or Custom range.
(Optional) You can select Use the report generation date for the end date when you set a
custom date range. This allows a report to have a static custom start date while dynamically
lengthening the trend period to the most recent risk data every time the report is run.
Your risk trend graphs will be included in the Executive Overview report on the schedule you
specified. See Selecting risk trends to be included in the report on page 364 for more information
about understanding risk trends in reports.
Risk trend reports are available as part of the Executive Overview reports. Risk trend reports are
not constrained to the scope of your entire organization; they can be customized to show the data
that is most important to you. You can view your overall risk for a high-level view of risk trends across
your organization or you can select a subset of assets, sites, and groups and view the overall risk
trend across that subset and the highest risk elements within that subset.
Overall risk trend graphs, available by selecting All assets in report scope, provide an
aggregate view of all the assets in the scope of the report. The highest-risk graphs provide
detailed data about specific assets, sites, or asset groups that are the five highest risks in your
environment. The overall risk trend report will demonstrate at a high level where risks are present
in your environment. Using the highest-risk graphs in conjunction with the overall risk trend report
will provide depth and clarity to where the vulnerabilities lie, how long the vulnerabilities have
been an issue, and where changes have taken place and how those changes impact the trend.
For example, Company A has six assets, one asset group, and 100 sites. The overall risk trend
report shows the trend covering a date range of six months from March to September. The
overall risk graph has a spike in March and then levels off for the rest of the period.
To explain the spike in the graph, the 5 highest-risk assets graph is included. You can see that in
March the number of assets increased from five to six. While the number of vulnerabilities has
seemingly increased, the additional asset is the reason for the spike. After the asset was added
you can see that the report levels off to an expected pattern of risk. You can also display the
Average risk score to see that the average risk per asset in the report scope has stayed
effectively the same, while the aggregate risk increased. The context in which you view changes
to the scope of assets over the trend report period will affect the way the data displays in the
graphs.
You can run SQL queries directly against the reporting data model and then output the results in
a comma-separated value (CSV) format. This gives you the flexibility to access and share asset
and vulnerability data that is specific to the needs of your security team. Leveraging the
capabilities of CSV format, you can create pivot tables, charts, and graphs to manipulate the
query output for effective presentation.
Prerequisites
To use the SQL Query Export feature, you will need a working knowledge of SQL, including
writing queries and understanding data types.
You will also benefit from reviewing Understanding the reporting data model: Overview and query
design on page 372, which maps database elements to business processes in your
environment.
Defining a query and running a report
The Security Console displays a box for defining a query and a drop-down list for selecting a
data model version. Currently, versions 1.2.0 and 1.1.0 are available. The most recent version
covers all functionality available in preceding versions.
3. Optional: If you want to focus the query on specific assets, click the control to Select Sites,
Assets, or Asset Groups, and make your selections. If you do not select specific assets, the
query results will be based on all assets in your scan history.
4. Optional: If you want to limit the query results with vulnerability filters, click the control to Filter
report scope based on vulnerabilities, and make your selections.
The Security Console displays a page for defining a query, with a text box that you can edit.
Tip: Click the Help icon to view a list of sample queries. You can select any listed query to
use it for the report.
7. Click the Validate button to view and correct any errors with your query. The validation
process completes quickly.
8. Click the Preview button to verify that the query output reflects what you want to include in the
report. The time required to run a preview depends on the amount of data and the complexity
of the query.
9. If necessary, edit the query based on the validation or preview results. Otherwise, click the
Done button to save the query and run a report.
Note: If you click Cancel, you will not save the query.
The Security Console displays the Create a report page with the query displayed for
reference.
10. Click Save & run the report or Save the report, depending on what you want to do. For
example, if you have a saved report and want to run it one time with an additional site in it,
you could add the site, save and run, return it to the original configuration, and then just save.
In either case, the saved SQL query export report appears on the View reports page.
On this page:
Overview
The Reporting Data Model is a dimensional model that allows customized reporting. Dimensional
modeling is a data warehousing technique that exposes a model of information around business
processes while providing flexibility to generate reports. The implementation of the Reporting
Data Model is accomplished using the PostgreSQL relational database management system,
version 9.0.13. As a result, the syntax, functions, and other features of PostgreSQL can be
utilized when designing reports against the Reporting Data Model.
The Reporting Data Model is available as an embedded relational schema that can be queried
against using a custom report template. When a report is configured to use a custom report
template, the template is executed against an instance of the Reporting Data Model that is
scoped and filtered using the settings defined with the report configuration. The following settings
will dictate what information is made available during the execution of a custom report template.
Report Owner
The owner of the report dictates what data is exposed with the Reporting Data Model. The report
owner’s access control and role specifies what scope may be selected and accessed within the
report.
Scope Filters
Scope filters define what assets, asset groups, sites, or scans will be exposed within the reporting
data model. These entities, along with matching configuration options like “Use only most recent
scan data”, dictate what assets will be available to the report at generation time. The scope filters
are also exposed within dimensions, allowing the designer to output information in the report
that identifies what the scope was at generation time, if desired.
Vulnerability Filters
Vulnerability filters define what vulnerabilities (and results) will be exposed within the data model.
There are three types of filters that are interpreted prior to report generation time:
1. Severity: filters vulnerabilities into the report based on a minimum severity level.
2. Categories: filters vulnerabilities into or out of the report based on metadata associated with
the vulnerability.
3. Status: filters vulnerabilities into the report based on what the result status is.
Query design
Access to the information in the Reporting Data Model is accomplished by using queries that are
embedded into the design of the custom report templates.
Dimensional Modeling
A dimension is the context that accompanies measured data and is typically textual. Dimension
tables are named with the prefix “dim_” to indicate that they store context data. Dimensions allow
facts to be sliced and aggregated in ways meaningful to the business. Each record in the fact
table does not specify a primary key but rather defines a one-to-many set of foreign keys that link
to one or more dimensions. Each dimension has a primary key that identifies the associated data
that may be joined on. In some cases the primary key of the dimension is a composite of multiple
columns. Every primary key and foreign key in the fact and dimension tables is a surrogate
identifier.
Unlike traditional relational models, dimensional models favor denormalization to ease the
burden on query designers and improve performance. Each fact and its associated dimensions
comprise what is commonly referred to as a “star schema”. Visually a fact table is surrounded by
multiple dimension tables that can be used to slice or join on the fact. In a fully denormalized
dimensional model that uses the star schema style there will only be a relationship between the
fact and a dimension, but the dimension is fully self-contained. When the dimensions are not fully
denormalized and instead link to further dimensions, the design is referred to as a “snowflake
schema”.
There are three different types of fact tables: (1) transaction, (2) accumulating snapshot, and (3)
periodic snapshot. The level of grain of a transaction fact is an event that takes place at a certain
point in time. Transaction facts identify measurements that accompany a discrete action,
process, or activity that is performed on a non-regular interval or schedule. Accumulating
snapshot facts aggregate information that is measured over time or multiple events into a single
consolidated measurement. The measurement shows the current state at a certain level of grain.
The periodic snapshot fact table provides measurements that are recorded on a regular interval,
typically by day or date. Each record measures the state at a discrete moment in time.
Types
Dimension tables are often classified based on the nature of the dimensional data they provide,
or to indicate the frequency (if any) with which they are updated.
Within a dimensional model it is an anti-pattern to have a NULL value for a foreign key within a
fact table. As a result, when a foreign key to a dimension does not apply, a default value for the
key will be placed in the fact record (the value of -1). This value allows a “natural” join against
the dimension(s) to retrieve either a “Not Applicable” or “Unknown” value. The value of “Not
Applicable” or “N/A” implies that the value is not defined for the fact record or dimension and
could never have a valid value. The value of “Unknown” implies that the value could not be
determined or assessed, but could have a valid value. This practice encourages the use of
natural joins (rather than outer joins) when joining between a fact and its associated dimensions.
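As a sketch of this practice, the following query uses a plain inner join between a fact table and a dimension described later in this section. The dim_scan key and name columns shown here are assumptions for illustration, not documented attributes:

```sql
-- Hypothetical sketch of a "natural" (inner) join between a fact and a dimension.
-- Because an absent dimension reference stores the sentinel -1 rather than NULL,
-- the inner join still returns a row, and the dimension supplies the
-- "Not Applicable" or "Unknown" value for that key; no outer join is needed.
-- dim_scan's scan_id and scan_name columns are assumed for illustration.
SELECT fa.asset_id, ds.scan_name
FROM fact_asset fa
JOIN dim_scan ds ON ds.scan_id = fa.last_scan_id
```

If the sentinel had been NULL instead of -1, this query would silently drop rows unless the join were rewritten as an outer join, which is exactly what the sentinel convention avoids.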
Query Language
As the dimensional model exposed by the Reporting Data Model is built on a relational database
management system, the queries to access the facts and dimensions are written using the
Structured Query Language (SQL). All SQL syntax supported by the PostgreSQL DBMS can be
leveraged. The use of the star or snowflake schema design encourages the use of a repeatable
SQL pattern for most queries. This pattern is as follows:
SELECT ...
FROM fact_table
JOIN dim_table ON ...
WHERE ...
... and other SQL constructs such as GROUP BY, HAVING, and LIMIT.
The SELECT clause projects all the columns of data that need to be returned to populate or fill
the various aspects of the report design. This clause can make use of aggregate expressions,
functions, and similar SQL syntax. The FROM clause is built by first pulling data from a single fact
table and then performing JOINs on the surrounding dimensions. Typically only natural joins are
required to join against dimensions, but outer joins may be required on a case-by-case basis. The
WHERE clause in queries against a dimensional model will filter on conditions from the data
either in the fact or dimension, based on whether the filter is numerical or textual.
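For example, a query following this pattern might look like the sketch below. The fact_asset table and dim_asset dimension are documented later in this guide, but the host_name attribute and the vulnerabilities and riskscore measures are assumptions here, shown only to illustrate the shape of the pattern:

```sql
-- Sketch of the repeatable pattern: project columns, pull from one fact table,
-- join its dimensions, filter, then sort and limit as needed.
SELECT da.host_name,        -- context from a dimension (assumed attribute)
       fa.vulnerabilities,  -- measures from the fact (assumed columns)
       fa.riskscore
FROM fact_asset fa
JOIN dim_asset da ON da.asset_id = fa.asset_id
WHERE fa.vulnerabilities > 0
ORDER BY fa.riskscore DESC
LIMIT 10
```

Note that only a natural join is used here; the sentinel-key convention described above makes outer joins unnecessary in the common case.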
The data types of the columns returned from the query will be any of those supported by the
PostgreSQL DBMS. If a column projected within the query is a foreign key to a dimension and
there is no appropriate value, a sentinel will be used depending on the data type. These values
signify either not applicable or unknown depending on the dimension. If the data type cannot
support translation to the text “Unknown” or a similar sentinel value, then NULL will be used.
Data type                      Sentinel value
macaddr                        NULL
inet                           NULL
character, character varying   ‘-’
bigint, integer                -1
Note: Data model 2.0.0 exposes information about linking assets across sites. All previous
information is still available, and in the same format. As of data model 2.0.0, there is a sites
column in the dim_asset dimension that lists the sites to which an asset belongs.
The following facts are provided by the Reporting Data Model. Each fact table provides access to
only information allowed by the configuration of the report. Any vulnerability status, severity or
category filters will be applied in the facts, only allowing those results, findings, and counts for
vulnerabilities in the scope to be exposed. Similarly, only assets within the scope of the report
configuration are made available in the fact tables. By default, all facts are interpreted to be asset-
centric, and therefore expose information for all assets in the scope of the report, regardless of
whether they were configured to be in scope with the use of an asset, scan, asset group, or site
selection.
For each fact, a dimensional star or snowflake schema is provided. For brevity and readability,
only one level in a snowflake schema is detailed, and only two levels of dimensions are displayed.
For more information on the attributes of these dimensions, refer to the Dimensions section
below.
When dates are displayed as measures of facts, they will always be converted to match the time
zone specified in the report configuration.
Only data from fully completed scans of assets is included in the facts. Results from aborted or
interrupted scans will not be included.
Common measures
It will be helpful to keep in mind some characteristics of certain measures that appear in the
following tables.
This attribute measures the ratio of assets that are compliant with the policy rule to the total
number of assets that were tested for the policy rule.
assets
This attribute measures the number of assets within a particular level of aggregation.
compliant_assets
This attribute measures the number of assets that are compliant with the policy rule (taking into
account policy rule overrides).
exploits
This attribute measures the number of distinct exploit modules that can be used to exploit
vulnerabilities on each asset. When the level of grain aggregates multiple assets, the total is the
summation of the exploits value for each asset. If there are no vulnerabilities found on the asset or
there are no vulnerabilities that can be exploited with an exploit module, the count will be zero.
malware_kits
This attribute measures the number of distinct malware kits that can be used to exploit
vulnerabilities on each asset. When the level of grain aggregates multiple assets, the total is the
summation of the malware kits value for each asset. If there are no vulnerabilities found on the
asset or there are no vulnerabilities that can be exploited with a malware kit, the count will be
zero.
noncompliant_assets
This attribute measures the number of assets that are not compliant with the policy rule (taking
into account policy rule overrides).
not_applicable_assets
This attribute measures the number of assets that are not applicable for the policy rule (taking into
account policy rule overrides).
riskscore
This attribute measures the risk score of each asset, which is based on the vulnerabilities found
on that asset. When the level of grain aggregates multiple assets, the total is the summation of
the riskscore value for each asset.
This attribute measures the ratio of policy rule test results that are compliant or not applicable to
the total number of rule test results.
vulnerabilities
This attribute measures the number of vulnerabilities discovered on each asset. When the level of
grain aggregates multiple assets, the total is the summation of the vulnerabilities on each asset.
If a vulnerability was discovered multiple times on the same asset, it will only be counted once per
asset. This count may be zero if no vulnerabilities were found on any asset in the latest
scan, or if the scan was not configured to perform vulnerability checks (as in the case of discovery
scans).
vulnerabilities_with_exploit
This attribute measures the total number of vulnerabilities on all assets that can be exploited
with a published exploit module. When the level of grain aggregates multiple assets, the total is
the summation of the vulnerabilities_with_exploit value for each asset. This value is guaranteed
to be no greater than the total number of vulnerabilities. If no vulnerabilities are present, or none
are subject to an exploit, the value will be zero.
vulnerabilities_with_malware_kit
This attribute measures the number of vulnerabilities on each asset that are exploitable with a
malware kit. When the level of grain aggregates multiple assets, the total is the summation of
the vulnerabilities_with_malware_kit value for each asset. This value is guaranteed to be no
greater than the total number of vulnerabilities. If no vulnerabilities are present, or none are
subject to a malware kit, the value will be zero.
vulnerability_instances
This attribute measures the number of occurrences of all vulnerabilities found on each asset.
When the level of grain aggregates multiple assets, the total is the summation of the
vulnerability_instances value for each asset. This value will count each instance of a vulnerability on each
asset. This value may be zero if no instances were tested or found vulnerable (e.g., discovery
scans).
fact_all
Level of Grain: The summary of the current state of all assets within the scope of the report.
Description: Summaries of the latest vulnerability details across the entire report. This is an
accumulating snapshot fact that updates after every scan of any asset within the report
completes. This fact will include the data for the most recent scan of each asset that is contained
within the scope of the report. As the level of aggregation is all assets in the report, this fact table
is guaranteed to always return exactly one row.
Columns

Column                            Data type         Nullable  Description
vulnerabilities                   bigint            No        The number of vulnerabilities across all assets.
critical_vulnerabilities          bigint            No        The number of critical vulnerabilities across all assets.
severe_vulnerabilities            bigint            No        The number of severe vulnerabilities across all assets.
moderate_vulnerabilities          bigint            No        The number of moderate vulnerabilities across all assets.
malware_kits                      integer           No        The number of malware kits across all assets.
exploits                          integer           No        The number of exploit modules across all assets.
vulnerabilities_with_malware_kit  integer           No        The number of vulnerabilities with a malware kit across all assets.
vulnerabilities_with_exploit      integer           No        The number of vulnerabilities with an exploit module across all assets.
vulnerability_instances           bigint            No        The number of vulnerability instances across all assets.
riskscore                         double precision  No        The risk score across all assets.
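Because fact_all always returns exactly one row, a minimal summary query needs no joins, filters, or aggregation. This sketch uses only the columns documented above:

```sql
-- One-row summary of the entire report scope.
SELECT vulnerabilities,
       vulnerabilities_with_exploit,
       vulnerability_instances,
       riskscore
FROM fact_all
```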
Dimensional model
fact_asset
Description: The fact_asset fact table provides the most recent information for each asset within
the scope of the report. For every asset in scope there will be one record in the fact table.
Columns

Column         Data type                 Nullable  Associated dimension  Description
asset_id       bigint                    No        dim_asset             The identifier of the asset.
last_scan_id   bigint                    No        dim_scan              The identifier of the scan with the most recent information being summarized.
scan_started   timestamp with time zone  No                              The date and time at which the latest scan for the asset started.
scan_finished  timestamp with time zone  No                              The date and time at which the latest scan for the asset completed.
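A minimal sketch against fact_asset, using only the columns documented above, lists each in-scope asset with its latest scan window:

```sql
-- Most recent scan window per asset; one row per asset in report scope.
-- Subtracting two "timestamp with time zone" values yields an interval
-- in PostgreSQL, used here as a rough scan duration.
SELECT asset_id,
       scan_started,
       scan_finished,
       scan_finished - scan_started AS scan_duration
FROM fact_asset
ORDER BY scan_finished DESC
```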
Description: This fact table provides a periodic snapshot for summarized values on an asset by
date. The fact table takes three dynamic arguments, which refine what data is returned. Starting
from startDate and ending on endDate, a summarized value for each asset in the scope of the
report will be returned for every dateInterval period of time. This will allow trending on asset
information by a customizable interval of time. In terms of a chart, startDate represents the lowest
value in the range, the endDate the largest value in the range, and the dateInterval is the
separation of the ticks of the range axis. If an asset did not exist prior to a summarization date, it
will have no record for that date value. The summarized values of an asset represent the state of
the asset in the most recent scan prior to the date being summarized; therefore, if an asset has
not been scanned before the next summary interval, the values for the asset will remain the
same.
Arguments
Columns
Dimensional model
fact_asset_discovery
Description: The fact_asset_discovery fact table provides an accumulating snapshot for each
asset within the scope of the report and details when the asset was first and last discovered. The
discovery date is interpreted as the precise time that the asset was first communicated with
during a scan, during the discovery phase of the scan. If an asset has only been scanned once
both the first_discovered and last_discovered dates will be the same.
Columns

Column    Data type  Nullable  Associated dimension  Description
asset_id  bigint     No        dim_asset             The identifier of the asset.
Dimensional model
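As a sketch, the first_discovered and last_discovered measures named in the description above can be queried directly; their exact data types are not shown in this excerpt:

```sql
-- Assets ordered by when they first appeared on the network.
-- first_discovered equals last_discovered for assets scanned only once.
SELECT asset_id, first_discovered, last_discovered
FROM fact_asset_discovery
ORDER BY first_discovered
```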
fact_asset_group
Description: The fact_asset_group fact table provides the most recent information for each
asset group within the scope of the report. Every asset group that any asset within the scope of
the report belongs to will have one record in the fact table.
Columns

Column                         Data type  Nullable  Associated dimension  Description
asset_group_id                 bigint     No        dim_asset_group       The identifier of the asset group. (Named asset_group_id in versions 1.2.0 and later of the data model; named group_id in version 1.1.0.)
assets                         bigint     No                              The number of distinct assets associated to the asset group. If the asset group contains no assets, the count will be zero.
vulnerabilities                bigint     No                              The number of all vulnerabilities discovered on assets in the asset group.
critical_vulnerabilities       bigint     No                              The number of all critical vulnerabilities discovered on assets in the asset group.
severe_vulnerabilities         bigint     No                              The number of all severe vulnerabilities discovered on assets in the asset group.
moderate_vulnerabilities       bigint     No                              The number of all moderate vulnerabilities discovered on assets in the asset group.
malware_kits                   integer    No                              The number of malware kits associated with vulnerabilities discovered on assets in the asset group.
exploits                       integer    No                              The number of exploits associated with vulnerabilities discovered on assets in the asset group.
vulnerabilities_with_malware   integer    No                              The number of vulnerabilities with a known malware kit discovered on assets in the asset group.
vulnerabilities_with_exploits  integer    No                              The number of vulnerabilities with a known exploit discovered on assets in the asset group.
Dimensional model
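A sketch of ranking asset groups by exposure using this fact. The dim_asset_group.name column is an assumption for illustration.

```sql
-- Sketch: asset groups ranked by vulnerabilities with known exploits.
SELECT dag.name, fag.assets, fag.vulnerabilities,
       fag.vulnerabilities_with_exploits
FROM fact_asset_group AS fag
JOIN dim_asset_group AS dag
  ON dag.asset_group_id = fag.asset_group_id
ORDER BY fag.vulnerabilities_with_exploits DESC;
```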
fact_asset_group_date
Level of Grain: An asset group and its summary information on a specific date.
Description: This fact table provides a periodic snapshot for summarized values on an asset
group by date. The fact table takes three dynamic arguments, which refine what data is returned.
Starting from startDate and ending on endDate, a summarized value for each asset group in the
scope of the report will be returned for every dateInterval period of time. This will allow trending
on asset group information by a customizable interval of time. In terms of a chart, startDate
represents the lowest value in the range, the endDate the largest value in the range, and the
dateInterval is the separation of the ticks of the range axis. If an asset group did not exist prior to a
summarization date, it will have no record for that date value. The summarized values of an asset
group represent the state of the asset group prior to the date being summarized; therefore, if the
assets in an asset group have not been scanned before the next summary interval, the values for
the asset group will remain the same.
Arguments

Column | Data type | Nullable | Description
startDate | date | No | The first date to return summarizations for.
endDate | date | No | The last date to return summarizations for.
dateInterval | interval | No | The interval between the start and end date to return summarizations for.

Columns
Column | Data type | Nullable | Description | Associated dimension
group_id | bigint | No | The identifier of the asset group. | dim_asset_group
assets | bigint | No | The number of distinct assets associated to the asset group. If the asset group contains no assets, the count will be zero. |
vulnerabilities | bigint | No | The number of all vulnerabilities discovered on assets in the asset group. |
critical_vulnerabilities | bigint | No | The number of all critical vulnerabilities discovered on assets in the asset group. |
severe_vulnerabilities | bigint | No | The number of all severe vulnerabilities discovered on assets in the asset group. |
moderate_vulnerabilities | bigint | No | The number of all moderate vulnerabilities discovered on assets in the asset group. |
malware_kits | integer | No | The number of malware kits associated with vulnerabilities discovered on assets in the asset group. |
exploits | integer | No | The number of exploits associated with vulnerabilities discovered on assets in the asset group. |
Dimensional model
fact_asset_group_policy_date
Description: This fact table provides a periodic snapshot for summarized policy values on an
asset group by date. The fact table takes three dynamic arguments, which refine what data is
returned. Starting from startDate and ending on endDate, the summarized policy value for each
asset group in the scope of the report will be returned for every dateInterval period of time.
Arguments
Column | Data type | Nullable | Description
startDate | date | No | The first date to return summarizations for.
endDate | date | No | The last date to return summarizations for.
dateInterval | interval | No | The interval between the start and end date to return summarizations for.
Columns
Column | Data type | Nullable | Description | Associated dimension
group_id | bigint | Yes | The unique identifier of the asset group. | dim_asset_group
day | date | No | The date on which the summarized policy scan results snapshot is taken. |
policy_id | bigint | Yes | The unique identifier of the policy within a scope. | dim_policy
scope | text | Yes | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have scope as "Custom". |
assets | integer | Yes | The total number of assets that are in the scope of the report and associated to the asset group. |
fact_asset_policy
Description: This table provides an accumulating snapshot of policy test results on an asset. It
displays a record for each policy that was tested on an asset in its most recent scan. Only policies
scanned within the scope of the report are included.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
last_scan_id | bigint | No | The identifier of the scan. | dim_scan
policy_id | bigint | No | The identifier of the policy. | dim_policy
Dimensional model
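A sketch of listing each asset's most recently tested policies from this fact. The dim_policy.title column is an assumed name for illustration.

```sql
-- Sketch: the policies tested on each asset in its most recent scan.
SELECT fap.asset_id, dp.title AS policy, fap.last_scan_id
FROM fact_asset_policy AS fap
JOIN dim_policy AS dp USING (policy_id)
ORDER BY fap.asset_id;
```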
fact_asset_policy_date
Description: This fact table provides a periodic snapshot for summarized policy values on an
asset by date. The fact table takes three dynamic arguments, which refine what data is returned.
Starting from startDate and ending on endDate, the summarized policy value for each asset in
the scope of the report will be returned for every dateInterval period of time. This will allow
trending on asset information by a customizable interval of time. In terms of a chart, startDate
represents the lowest value in the range, the endDate the largest value in the range, and the
dateInterval is the separation of the ticks of the range axis. If an asset did not exist prior to a
summarization date, it will have no record for that date value. The summarized policy values of an
asset represent the state of the asset prior to the date being summarized; therefore, if the asset
has not been scanned before the next summary interval, the values for the asset will remain the
same.
Arguments
Column | Data type | Nullable | Description
startDate | date | No | The first date to return summarizations for.
endDate | date | No | The last date to return summarizations for.
dateInterval | interval | No | The interval between the start and end date to return summarizations for.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | Yes | The unique identifier of the asset. | dim_asset
day | date | No | The date on which the summarized policy scan results snapshot is taken. |
scan_id | bigint | Yes | The unique identifier of the scan. | dim_scan
policy_id | bigint | Yes | The unique identifier of the policy within a scope. | dim_policy
fact_asset_policy_rule
Description: This table provides the rule results of the most recent policy scan for an asset within
the scope of the report. For each rule, only assets that are subject to that rule and that have a
result in the most recent scan are counted.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
policy_id | bigint | No | The identifier of the policy. | dim_policy
scope | text | No | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have scope as "Custom". |
rule_id | bigint | No | The identifier of the policy rule. | dim_policy_rule
scan_id | bigint | No | The identifier of the scan. | dim_scan
date_tested | timestamp without time zone | No | The end date and time for the scan of the asset that was tested for the policy, in the time zone specified in the report configuration. |
status_id | character(1) | No | The identifier of the status for the policy rule finding on the asset (taking into account policy rule overrides). | dim_policy_rule_status
compliance | boolean | No | Whether the asset is compliant with the rule. True if and only if all of the policy checks for this rule have not failed, or the rule is overridden with the value true on the asset. |
proof | text | Yes | The proof of the policy checks on the asset. |
override_id | bigint | Yes | The unique identifier of the policy rule override that is applied to the rule on an asset. If multiple overrides apply to the rule at different levels of scope, the identifier of the override having the true effect on the rule (the latest override) is returned. | dim_policy_rule_override
override_ids | bigint[] | Yes | The unique identifiers of the policy rule overrides that are applied to the rule on an asset. If multiple overrides apply to the rule at different levels of scope, the identifier of each override is returned in a comma-separated list. | dim_policy_rule_override
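A sketch of pulling noncompliant rule findings from this fact, using the compliance flag described above. Applying proofAsText to the policy proof markup is an assumption based on the function's use elsewhere in this data model.

```sql
-- Sketch: noncompliant rule findings from each asset's latest policy scan.
SELECT fapr.asset_id, fapr.rule_id, fapr.date_tested,
       proofAsText(fapr.proof) AS reason
FROM fact_asset_policy_rule AS fapr
WHERE NOT fapr.compliance;
```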
fact_asset_scan
Description: The fact_asset_scan transaction fact provides summary information for the results of
a scan of an asset. A fact record will be present for every asset and every scan in which the asset
was fully scanned. Only assets configured within the scope of the report and vulnerabilities filtered
within the report will take part in the accumulated totals. If no vulnerability checks were
performed during the scan (for example, as a result of a discovery scan), the vulnerability-related
counts will be zero.
Columns

Column | Data type | Nullable | Description | Associated dimension
scan_id | bigint | No | The identifier of the scan. | dim_scan
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_started | timestamp without time zone | No | The time at which the scan for the asset was started. |
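A minimal sketch of browsing an asset's scan history from this fact, most recent scan first.

```sql
-- Sketch: per-asset scan history ordered by start time, newest first.
SELECT fas.asset_id, fas.scan_id, fas.scan_started
FROM fact_asset_scan AS fas
ORDER BY fas.asset_id, fas.scan_started DESC;
```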
fact_asset_scan_operating_system
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset the operating system is associated to. | dim_asset
scan_id | bigint | No | The identifier of the scan the asset was fingerprinted in. | dim_scan
operating_system_id | bigint | No | The identifier of the operating system that was fingerprinted on the asset in the scan. If a fingerprint was not found, the value will be -1. | dim_operating_system
fingerprint_source_id | integer | No | The identifier of the source that was used to acquire the fingerprint. If a fingerprint was not found, the value will be -1. | dim_fingerprint_source
Dimensional model
fact_asset_scan_policy
Description: This table provides the details of policy test results on an asset during a scan. Each
record provides the policy test results for an asset for a specific scan. Only policies within the
scope of the report are included.
Columns
Note: As of version 1.3.0, passed_rules and failed_rules are now called compliant_rules and
noncompliant_rules.
Dimensional model
fact_asset_scan_software
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the scan. | dim_scan
software_id | bigint | No | The identifier of the software fingerprinted. | dim_software
fingerprint_source_id | bigint | No | The identifier of the source used to fingerprint the software. | dim_fingerprint_source
Dimensional model
fact_asset_scan_service
Description: The fact_asset_scan_service fact table provides the services detected during a
scan of an asset. If an asset had no services enumerated in a scan, there will be no records in this
fact table.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the scan. | dim_scan
Dimensional model
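A sketch of listing the services enumerated for each asset in a given scan. The column listing above is truncated, so the service_id join column and the dim_service.name column are assumptions for illustration; the scan identifier is a placeholder.

```sql
-- Sketch: services enumerated per asset in one scan (assumed columns).
SELECT fass.asset_id, ds.name AS service
FROM fact_asset_scan_service AS fass
JOIN dim_service AS ds USING (service_id)
WHERE fass.scan_id = 42;  -- placeholder scan identifier
```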
fact_asset_scan_vulnerability_finding
Description: This fact table provides an accumulating snapshot for all vulnerability findings on
an asset in every scan of the asset. This table will display a record for each unique vulnerability
discovered on each asset in every scan of the asset. If multiple occurrences of the same
vulnerability are found on the asset, they will be rolled up into a single row with a vulnerability_
instances count greater than one. Only vulnerabilities with no active exceptions applied will be
displayed.
Dimensional model
fact_asset_scan_vulnerability_instance
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the scan. | dim_scan
Dimensional model
fact_asset_scan_vulnerability_instance_excluded
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the scan. | dim_scan
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
date_tested | timestamp without time zone | No | The date and time at which the vulnerability finding was detected. This time is the time at which the asset completed scanning during the scan. |
status_id | character(1) | No | The identifier of the status of the vulnerability finding that indicates the level of confidence of the finding. | dim_vulnerability_status
proof | text | No | The proof indicating the reason that the vulnerability exists. The proof is exposed in formatting markup that can be stripped using the function proofAsText. |
key | text | Yes | The secondary identifier of the vulnerability finding that discriminates the result from similar results of the same vulnerability on the same asset. This value is optional and will be null when a vulnerability does not need a secondary discriminator. |
service_id | integer | No | The service the vulnerability was discovered on, or -1 if the vulnerability is not associated with a service. | dim_service
port | integer | No | The port on which the vulnerable service was running, or -1 if the vulnerability is not associated with a service. |
protocol_id | integer | No | The protocol the vulnerable service was running, or -1 if the vulnerability is not associated with a service. | dim_protocol
fact_asset_vulnerability_age
Description: This fact table provides an accumulating snapshot for vulnerability age and
occurrence information on an asset. For every vulnerability to which an asset is currently
vulnerable, there will be one fact record. The record indicates when the vulnerability was first
found, last found, and its current age. The age is computed as the difference between the time
the vulnerability was first discovered on the asset, and the current time. If the vulnerability was
temporarily remediated, but rediscovered, the age will be from the first discovery time. If a
vulnerability was found on a service, remediated and discovered on another service, the age is
still computed as the first time the vulnerability was found on any service on the asset.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The unique identifier of the asset. | dim_asset
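A sketch of finding the longest-lived vulnerabilities with this fact. The column listing in this extract is truncated, so the vulnerability_id and age output columns are assumptions based on the description above.

```sql
-- Sketch: the ten oldest open vulnerabilities (assumed columns).
SELECT fava.asset_id, fava.vulnerability_id, fava.age
FROM fact_asset_vulnerability_age AS fava
ORDER BY fava.age DESC
LIMIT 10;
```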
fact_asset_vulnerability_finding
Description: This fact table provides an accumulating snapshot for all current vulnerability
findings on an asset. This table will display a record for each unique vulnerability discovered on
each asset in the most recent scan of the asset. If multiple occurrences of the same vulnerability
are found on the asset, they will be rolled up into a single row with a vulnerability_instances count
greater than one. Only vulnerabilities with no active exceptions applied will be displayed.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the last scan for the asset in which the vulnerability was detected. | dim_scan
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
Dimensional model
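A sketch of joining current findings to vulnerability details. The dim_vulnerability.title column is an assumed name for illustration.

```sql
-- Sketch: current vulnerability findings with vulnerability titles.
SELECT favf.asset_id, dv.title
FROM fact_asset_vulnerability_finding AS favf
JOIN dim_vulnerability AS dv
  ON dv.vulnerability_id = favf.vulnerability_id
ORDER BY favf.asset_id;
```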
fact_asset_vulnerability_instance
Description: This table provides an accumulating snapshot for all current vulnerability instances
on an asset. Only vulnerability instances found to be vulnerable and with no exceptions actively
applied will be present within the fact table. If multiple occurrences of the same vulnerability
are found on the asset, a row will be present for each instance.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
scan_id | bigint | No | The identifier of the scan the vulnerability instance was found in. | dim_scan
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
vulnerability_exception_id | integer | Yes | The unique identifier of a vulnerability exception that is pending for the vulnerability instance. If a vulnerability instance has no pending exceptions, this value will be null. If multiple pending exceptions apply to the vulnerability at different levels of scope, the identifier of the exception at the lowest (most fine-grained) level is returned. | dim_vulnerability_exception
vulnerability_exception_ids | text | Yes | The unique identifiers of all vulnerability exceptions that are pending for the vulnerability instance. If a vulnerability instance has no pending exceptions, this value will be null. If multiple pending exceptions apply to the vulnerability at different levels of scope, the identifiers of all exceptions will be returned in a comma-separated value string. | dim_vulnerability_exception
date_tested | timestamp without time zone | No | The date and time at which the vulnerability finding was detected. This time is the time at which the asset completed scanning during the scan. |
status_id | character(1) | No | The identifier of the status of the vulnerability finding that indicates the level of confidence of the finding. | dim_vulnerability_status
proof | text | No | The proof indicating the reason that the vulnerability exists. The proof is exposed in formatting markup that can be stripped using the function proofAsText. |
key | text | Yes | The secondary identifier of the vulnerability finding that discriminates the result from similar results of the same vulnerability on the same asset. This value is optional and will be null when a vulnerability does not need a secondary discriminator. |
Dimensional model
fact_asset_vulnerability_instance_excluded
Level of Grain: A vulnerability instance on an asset with an active vulnerability exception applied.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
date_tested | timestamp without time zone | No | The date and time at which the vulnerability finding was detected. This time is the time at which the asset completed scanning during the scan. |
status_id | character(1) | No | The identifier of the status of the vulnerability finding that indicates the level of confidence of the finding. | dim_vulnerability_status
proof | text | No | The proof indicating the reason that the vulnerability exists. The proof is exposed in formatting markup that can be stripped using the function proofAsText. |
key | text | Yes | The secondary identifier of the vulnerability finding that discriminates the result from similar results of the same vulnerability on the same asset. This value is optional and will be null when a vulnerability does not need a secondary discriminator. |
service_id | integer | No | The service the vulnerability was discovered on, or -1 if the vulnerability is not associated with a service. | dim_service
port | integer | No | The port on which the vulnerable service was running, or -1 if the vulnerability is not associated with a service. |
protocol_id | integer | No | The protocol the vulnerable service was running, or -1 if the vulnerability is not associated with a service. | dim_protocol
fact_pci_asset_scan_service_finding
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The unique identifier of the asset. | dim_asset
scan_id | bigint | No | The unique identifier of the scan the service finding was found in. | dim_scan
fact_pci_asset_service_finding
Level of Grain: A service finding on an asset from the latest scan of the asset.
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The unique identifier of the asset. | dim_asset
scan_id | bigint | No | The unique identifier of the scan the service finding was found in. | dim_scan
service_id | integer | No | The identifier of the definition of the service. | dim_service
vulnerability_id | integer | No | The unique identifier of the vulnerability. | dim_vulnerability
protocol_id | smallint | No | The identifier of the protocol the service was utilizing. | dim_protocol
port | integer | No | The port the service was running on. |
fact_pci_asset_special_note
Columns

Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The unique identifier of the asset. | dim_asset
scan_id | bigint | No | The unique identifier of the scan. | dim_scan
service_id | integer | No | The identifier of the definition of the service. | dim_service
protocol_id | smallint | No | The identifier of the protocol the service was utilizing. | dim_protocol
port | integer | No | The port the service was running on. |
pci_note_id | integer | No | The unique identifier of the PCI special note applied to the vulnerability or service finding. | dim_pci_note
items_noted | text | No | A list of distinct identifiers for findings on a given asset, port, and protocol. |
fact_policy
Description: This table provides a summary for the results of the most recent policy scan for
assets within the scope of the report. For each policy, only assets that are subject to that policy's
rules and that have a result in the most recent scan with no overrides are counted.
Columns
Note: As of version 1.3.0, a separate value has been created for not_applicable_assets and is no
longer included in compliant_assets.
Dimensional model
fact_policy_group
Description: This table provides a summary of the results of each policy group's rules from the
most recent policy scan for assets within the scope of the report. All rules that directly or
indirectly descend from the group are counted.
Columns
Column | Data type | Nullable | Description | Associated dimension
scope | text | No | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have scope as "Custom". |
policy_id | bigint | No | The identifier of the policy. | dim_policy
group_id | bigint | No | The identifier of the policy group. | dim_policy_group
non_compliant_rules | integer | No | The number of rules that do not have 100% asset compliance (taking into account policy rule overrides). |
compliant_rules | integer | No | The number of rules that have 100% asset compliance (taking into account policy rule overrides). |
rule_compliance | numeric | Yes | The ratio of rule test results that are compliant or not applicable to the total number of rule test results within the policy group. If the group has no rules or no testable rules (a rule with no checks, hence no results exist), this will have a null value. |
fact_policy_rule
Description: This table provides a summary for the rule results of the most recent policy scan for
assets within the scope of the report. For each rule, only assets that are subject to that rule and
that have a result in the most recent scan are counted.
Columns

Column | Data type | Nullable | Description | Associated dimension
scope | text | No | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have scope as "Custom". |
policy_id | bigint | No | The identifier of the policy. | dim_policy
rule_id | bigint | No | The identifier of the policy rule. | dim_policy_rule
compliant_assets | integer | No | The number of assets that are compliant with the rule (taking into account policy rule overrides). |
noncompliant_assets | integer | No | The number of assets that are not compliant with the rule (taking into account policy rule overrides). |
not_applicable_assets | integer | No | The number of assets that are not applicable for the rule (taking into account policy rule overrides). |
asset_compliance | numeric | No | The ratio of assets that are compliant with the policy rule to the total number of assets that were tested for the policy rule. |
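A sketch of surfacing the worst-performing rules for a single policy with this fact. The policy identifier is a placeholder.

```sql
-- Sketch: rules with the lowest asset compliance for one policy.
SELECT fpr.rule_id, fpr.compliant_assets, fpr.noncompliant_assets,
       fpr.asset_compliance
FROM fact_policy_rule AS fpr
WHERE fpr.policy_id = 1  -- placeholder policy identifier
ORDER BY fpr.asset_compliance ASC;
```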
fact_remediation
Level of Grain: A solution with the highest level of supercedence and the effect applying that
solution would have on the scope of the report.
Description: A function which returns a result set of the top "count" solutions, showing their
impact as specified by the sorting criteria. The criteria can be used to find solutions that have a
desirable impact on the scope of the report, and can be limited to a subset of all solutions. The
aggregate effect of applying each solution is computed and returned for each record. Only the
highest-level superceding solutions will be selected; in other words, only solutions which have no
superceding solution.
Arguments

Column | Data type | Description
count | integer | The number of solutions to limit the output of this function to. The sorting and aggregation are performed prior to the limit.
Columns

Column | Data type | Nullable | Description | Associated dimension
solution_id | integer | No | The identifier of the solution. |
assets | bigint | No | The number of assets that require the solution to be applied. If the solution applies to a vulnerability not detected on any asset, the value may be zero. |
vulnerabilities | numeric | No | The total number of vulnerabilities that would be remediated. |
critical_vulnerabilities | numeric | No | The total number of critical vulnerabilities that would be remediated. |
severe_vulnerabilities | numeric | No | The total number of severe vulnerabilities that would be remediated. |
moderate_vulnerabilities | numeric | No | The total number of moderate vulnerabilities that would be remediated. |
malware_kits | integer | No | The total number of malware kits that would no longer be used to exploit vulnerabilities if the solution were applied. |
exploits | integer | No | The total number of exploits that could no longer be used to exploit vulnerabilities if the solution were applied. |
vulnerabilities_with_malware | integer | No | The total number of vulnerabilities with a known malware kit that would be remediated by the solution. |
vulnerabilities_with_exploits | integer | No | The total number of vulnerabilities with a published exploit module that would be remediated by the solution. |
vulnerability_instances | numeric | No | The total number of occurrences of any vulnerabilities that are remediated by the solution. |
riskscore | double precision | No | The risk score that is reduced by performing the remediation. |
Dimensional model
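A sketch of calling this function to rank remediations. The Arguments listing above shows only count, so the second argument expressing the sort criteria ('riskscore DESC') is an assumption; verify the exact argument list against your data model version.

```sql
-- Sketch: top 10 solutions by risk score reduced (assumed sort argument).
SELECT r.solution_id, r.assets, r.vulnerabilities, r.riskscore
FROM fact_remediation(10, 'riskscore DESC') AS r;
```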
Level of Grain: A solution with the highest level of supercedence and the effect applying that
solution would have on the scope of the report.
Description: Fact that provides a summarization of the impact that applying a subset of all
remediations would have on the scope of the report. The criteria can be used to find solutions that
have a desirable impact on the scope of the report, and can be limited to a subset of all solutions.
The aggregate effect of applying all solutions is computed and returned as a single record. This
fact will be guaranteed to return one and only one record.
Arguments
Column | Data type | Description
count | integer | The number of solutions to determine the impact for. The sorting and aggregation are performed prior to the limit.
Columns
Column | Data type | Nullable | Description | Associated dimension
solutions | integer | No | The number of solutions selected and for which the remediation impact is being summarized (will be less than or equal to count). |
assets | bigint | No | The total number of assets that require a remediation to be applied. |
vulnerabilities | bigint | No | The total number of vulnerabilities that would be remediated. |
critical_vulnerabilities | bigint | No | The total number of critical vulnerabilities that would be remediated. |
severe_vulnerabilities | bigint | No | The total number of severe vulnerabilities that would be remediated. |
moderate_vulnerabilities | bigint | No | The total number of moderate vulnerabilities that would be remediated. |
malware_kits | integer | No | The total number of malware kits that would no longer be used to exploit vulnerabilities if all selected remediations were applied. |
exploits | integer | No | The total number of exploits that would no longer be used to exploit vulnerabilities if all selected remediations were applied. |
Dimensional model
fact_scan
Description: The fact_scan fact provides summarized information for every scan in which any
asset within the scope of the report was scanned. For each scan, there will be a record in this
fact table with the summarized results.
Columns

Column | Data type | Nullable | Description | Associated dimension
scan_id | bigint | No | The identifier of the scan. | dim_scan
assets | bigint | No | The number of assets that were scanned. |
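A minimal sketch of listing the largest scans recorded in this fact.

```sql
-- Sketch: scans ordered by how many assets they covered.
SELECT fs.scan_id, fs.assets
FROM fact_scan AS fs
ORDER BY fs.assets DESC;
```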
Dimensional model
fact_site
Description: The fact_site table provides a summary record at the level of grain for every site
that any asset in the scope of the report belongs to. For each site, there will be a record in this fact
table with the summarized results, taking into account any vulnerability filters specified in the
report configuration. The summary of each site will display the accumulated information for the
most recent scan of each asset, not just the most recent scan of the site.
Columns

Column | Data type | Nullable | Description | Associated dimension
site_id | bigint | No | The identifier of the site. | dim_site
assets | bigint | No | The total number of assets in the site. |
last_scan_id | bigint | No | The identifier of the most recent scan for the site. |
vulnerabilities | bigint | No | The number of vulnerabilities discovered on assets in the site. |
critical_vulnerabilities | bigint | No | The number of critical vulnerabilities discovered on assets in the site. |
severe_vulnerabilities | bigint | No | The number of severe vulnerabilities discovered on assets in the site. |
moderate_vulnerabilities | bigint | No | The number of moderate vulnerabilities discovered on assets in the site. |
malware_kits | integer | No | The number of malware kits associated with vulnerabilities discovered on assets in the site. |
exploits | integer | No | The number of exploits associated with vulnerabilities discovered on assets in the site. |
vulnerabilities_with_malware | integer | No | The number of vulnerabilities with a malware kit discovered on assets in the site. |
vulnerabilities_with_exploits | integer | No | The number of vulnerabilities with an exploit discovered on assets in the site. |
vulnerability_instances | bigint | No | The number of vulnerability instances discovered on assets in the site. |
Dimensional model
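A sketch of ranking sites by exposure with this fact. The dim_site.name column is an assumed name for illustration.

```sql
-- Sketch: sites ranked by vulnerability count, with exploitability context.
SELECT ds.name, fs.assets, fs.vulnerabilities,
       fs.vulnerabilities_with_exploits
FROM fact_site AS fs
JOIN dim_site AS ds USING (site_id)
ORDER BY fs.vulnerabilities DESC;
```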
fact_site_date
Description: This fact table provides a periodic snapshot for summarized values on a site by
date. The fact table takes three dynamic arguments, which refine what data is returned. Starting
from startDate and ending on endDate, a summarized value for each site in the scope of the
report will be returned for every dateInterval period of time. This will allow trending on site
information by a customizable interval of time. In terms of a chart, startDate represents the lowest
value in the range, the endDate the largest value in the range, and the dateInterval is the
separation of the ticks of the range axis. If a site did not exist prior to a summarization date, it will
have no record for that date value. The summarized values of a site represent the state of the site
in the most recent scans prior to the date being summarized; therefore, if a site has not been
scanned before the next summary interval, the values for the site will remain the same.
Arguments
Columns
Column | Data type | Nullable | Description | Associated dimension
site_id | bigint | No | The identifier of the site. | dim_site
assets | bigint | No | The total number of assets in the site. |
last_scan_id | bigint | No | The identifier of the most recent scan for the site. |
vulnerabilities | bigint | No | The number of vulnerabilities discovered on assets in the site. |
critical_vulnerabilities | bigint | No | The number of critical vulnerabilities discovered on assets in the site. |
severe_vulnerabilities | bigint | No | The number of severe vulnerabilities discovered on assets in the site. |
moderate_vulnerabilities | bigint | No | The number of moderate vulnerabilities discovered on assets in the site. |
malware_kits | integer | No | The number of malware kits associated with vulnerabilities discovered on assets in the site. |
exploits | integer | No | The number of exploits associated with vulnerabilities discovered on assets in the site. |
vulnerabilities_with_malware | integer | No | The number of vulnerabilities with a malware kit discovered on assets in the site. |
vulnerabilities_with_exploits | integer | No | The number of vulnerabilities with an exploit discovered on assets in the site. |
vulnerability_instances | bigint | No | The number of vulnerability instances discovered on assets in the site. |
riskscore | double precision | No | The risk score of the site. |
fact_site_policy_date
Description: This fact table provides a periodic snapshot for summarized policy values on site by
date. The fact table takes three dynamic arguments, which refine what data is returned. Starting
from startDate and ending on endDate, the summarized policy value for each site in the scope of
the report will be returned for every dateInterval period of time. This will allow trending on site
information by a customizable interval of time. In terms of a chart, startDate represents the lowest
value in the range, the endDate the largest value in the range, and the dateInterval is the
separation of the ticks of the range axis. If a site did not exist prior to a summarization date, it will
have no record for that date value. The summarized policy values of a site represent the state of
the site prior to the date being summarized; therefore, if the site has not been scanned before the
next summary interval, the values for the site will remain the same.
Arguments
Column | Data type | Nullable | Description
startDate | date | No | The first date to return summarizations for.
endDate | date | No | The end of the period for which the scan results of an asset will be returned. If it is later than the current date, it will be replaced by the latter.
dateInterval | interval | No | The interval between the start and end date to return summarizations for.
Columns
Column | Data type | Nullable | Description | Associated dimension
site_id | bigint | Yes | The unique identifier of the site. | dim_site
day | date | No | The date on which the summarized policy scan results snapshot is taken.
policy_id | bigint | Yes | The unique identifier of the policy within a scope. | dim_policy
scope | text | Yes | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have "Custom" scope.
assets | integer | Yes | The total number of assets that are in the scope of the report and associated to the site.
compliant_assets | integer | Yes | The number of assets associated to the site that have not failed any policy rule tests and have passed at least one.
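The carry-forward behavior described above (a site that has not been rescanned keeps its last summarized values) can be sketched outside the database; the scan dates and values below are invented for illustration, and the real summarization happens inside the reporting engine:

```python
from datetime import date, timedelta

# Invented scan history: scan date -> compliant asset count.
scans = {date(2015, 1, 3): 40, date(2015, 1, 17): 55}

def summarize(start, end, interval):
    """One row per interval date, carrying the most recent scan value forward."""
    rows, day = [], start
    while day <= end:
        prior = [d for d in scans if d <= day]
        # No record exists before the site's first scan; afterwards the
        # value from the most recent scan on or before `day` is reported.
        rows.append((day, scans[max(prior)] if prior else None))
        day += interval
    return rows

for day, value in summarize(date(2015, 1, 1), date(2015, 1, 29), timedelta(days=7)):
    print(day, value)
```

Note how the January 8 and January 15 rows repeat the January 3 scan value, since no newer scan exists before those summary dates.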
fact_tag
Description: The fact_tag table provides an accumulating snapshot fact for the summary
information of a tag. The summary information provided is based on the most recent scan of
every asset associated with the tag. If a tag has no accessible assets, there will be a fact record
with zero counts. Only tags associated with assets, sites, or asset groups in the scope of the
report will be present in this fact.
Columns
Column | Data type | Nullable | Description | Associated dimension
tag_id | integer | No | The unique identifier of the tag. | dim_tag
assets | bigint | No | The total number of accessible assets associated with the tag. If the tag has no accessible assets in the current scope or membership, this value can be zero.
fact_tag_policy_date
Arguments
Column | Data type | Nullable | Description
startDate | date | No | The first date to return summarizations for.
endDate | date | No | The end of the period for which the scan results of an asset will be returned. If it is later than the current date, it will be replaced by the latter.
dateInterval | interval | No | The interval between the start and end date to return summarizations for.
Columns
Column | Data type | Nullable | Description | Associated dimension
tag_id | bigint | Yes | The unique identifier of the tag. | dim_tag
day | date | No | The date on which the summarized policy scan results snapshot is taken.
policy_id | bigint | Yes | The unique identifier of the policy within a scope. | dim_policy
scope | text | Yes | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have "Custom" scope.
assets | integer | Yes | The total number of assets that are in the scope of the report and associated to the tag.
compliant_assets | integer | Yes | The number of assets associated to the tag that have not failed any policy rule tests and have passed at least one.
fact_vulnerability
Description: The fact_vulnerability table provides a summarized record for each vulnerability
within the scope of the report. For each vulnerability, the count of assets subject to the
vulnerability is measured. Only assets with a finding in their most recent scan with no exception
applied are included in the totals.
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
affected_assets | bigint | No | The number of assets that have the vulnerability. This count may be zero if no assets are vulnerable.
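A common use of this fact table is ranking vulnerabilities by exposure. A sketch of such a query (invented rows; SQLite standing in for the reporting database):

```python
import sqlite3

# Mock slice of the schema with invented rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_vulnerability (vulnerability_id INTEGER, title TEXT);
CREATE TABLE fact_vulnerability (vulnerability_id INTEGER, affected_assets INTEGER);
INSERT INTO dim_vulnerability VALUES (10, 'OpenSSL Heartbleed'), (11, 'Weak SSH ciphers');
INSERT INTO fact_vulnerability VALUES (10, 7), (11, 42);
""")

# Rank vulnerabilities by how many assets they affect, skipping
# zero-count rows.
top = conn.execute("""
SELECT dv.title, fv.affected_assets
FROM fact_vulnerability fv
JOIN dim_vulnerability dv USING (vulnerability_id)
WHERE fv.affected_assets > 0
ORDER BY fv.affected_assets DESC
""").fetchall()
print(top)
```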
Dimensional model
Note: Data model 2.0.0 exposes information about linking assets across sites. All previous
information is still available, and in the same format. As of data model 2.0.0, there is a sites
column in the dim_asset dimension that lists the sites to which an asset belongs.
The following dimensions are provided to allow the report designer access to the specific
configuration parameters related to the scope of the report, including vulnerability filters.
dim_pci_note
Type: junk
Columns
Column | Data type | Nullable | Description | Associated dimension
pci_note_id | integer | No | The code that represents the PCI note description.
dim_scope_asset
Description: Provides access to the assets specifically configured within the configuration of the
report. This dimension will contain a record for each asset selected within the report
configuration.
Type: junk
Columns
dim_scope_asset_group
Description: Provides access to the asset groups specifically configured within the configuration
of the report. This dimension will contain a record for each asset group selected within the report
configuration.
Type: junk
Columns
dim_scope_filter_vulnerability_category_include
Description: Provides access to the names of the vulnerability categories that are configured to
be included within the scope of the report. One record will be present for every category that is
included. If no vulnerability categories are enabled for inclusion, this dimension table will be
empty.
Type: junk
Columns
Column | Data type | Nullable | Description | Associated dimension
name | text | No | The name of the vulnerability category. | dim_vulnerability_category
dim_scope_filter_vulnerability_severity
Description: Provides access to the severity filter enabled within the report configuration. The severity filter is exposed as the minimum severity score a vulnerability must have to be included within the scope of the report. This dimension is guaranteed to have only one record. If no severity filter is explicitly enabled, the minimum severity value will be 0.
Type: junk
Columns
Column | Data type | Nullable | Description | Associated dimension
min_severity | numeric(2) | No | The minimum severity that a vulnerability must have to be included in the scope of the report. If no filter is applied to severity, defaults to 0. | dim_vulnerability_category
severity_description | text | No | A human-readable description of the severity filter that is enabled.
dim_scope_filter_vulnerability_status
Description: Provides access to the vulnerability status filters enabled within the configuration of the report. A record will be present for every status filter that is enabled; between one and three statuses are guaranteed to be enabled.
Type: junk
Columns
dim_scope_policy
Columns
dim_scope_scan
Description: Provides access to the scans specifically configured within the configuration of the
report. This dimension will contain a record for each scan selected within the report configuration.
Type: junk
Columns
dim_scope_site
Description: Provides access to the sites specifically configured within the configuration of the
report. This dimension will contain a record for each site selected within the report configuration.
Type: junk
Columns
dim_asset
Description: Dimension that provides access to the textual information of all assets configured to
be within the scope of the report. Only the information from the most recent scan of each asset is
used to provide an accumulating summary. There will be one record in this dimension for every
single asset in scope, including assets specified through configuring scans, sites, or asset groups
to be within scope.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset.
mac_address | macaddr | Yes | The primary MAC address of the asset. If an asset has had no MAC address identified, the value will be null. If an asset has multiple MAC addresses, the primary or best address is selected.
ip_address | inet | No | The primary IP address of the asset. If an asset has multiple IP addresses, the primary or best address is selected. The IP address may be an IPv4 or IPv6 address.
host_name | text | Yes | The primary host name of the asset. If an asset has had no host name identified, the value will be null. If an asset has multiple host names, the primary or best name is selected. If the asset was scanned as a result of configuring the site with a host name target, that name is guaranteed to be selected as the primary host name.
operating_system_id | bigint | No | The identifier of the operating system fingerprint with the highest certainty on the asset. If the asset has no operating system fingerprinted, the value will be -1. | dim_operating_system
host_type_id | integer | No | The identifier of the type of host the asset is classified as. If the host type could not be detected, the value will be -1. | dim_host_type
sites | text | No | The sites to which the asset belongs.
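Because operating_system_id uses a -1 sentinel rather than null when no fingerprint exists, queries that join to dim_operating_system should use an outer join to keep unfingerprinted assets (assuming no matching -1 row exists in the dimension). A sketch with invented rows, SQLite in place of the real schema:

```python
import sqlite3

# Mock slice with invented rows; note the -1 sentinel on the asset with
# no fingerprinted operating system.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_asset (asset_id INTEGER, ip_address TEXT, host_name TEXT, operating_system_id INTEGER);
CREATE TABLE dim_operating_system (operating_system_id INTEGER, description TEXT);
INSERT INTO dim_asset VALUES (1, '10.0.0.5', 'web01', 100), (2, '10.0.0.9', NULL, -1);
INSERT INTO dim_operating_system VALUES (100, 'Ubuntu Linux 14.04');
""")

# LEFT JOIN keeps assets whose -1 sentinel matches no dimension row;
# COALESCE supplies readable fallbacks for the nullable columns.
rows = conn.execute("""
SELECT da.ip_address,
       COALESCE(da.host_name, 'Unknown') AS host_name,
       COALESCE(dos.description, 'Unknown OS') AS os
FROM dim_asset da
LEFT JOIN dim_operating_system dos USING (operating_system_id)
ORDER BY da.asset_id
""").fetchall()
print(rows)
```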
dim_asset_file
Description: Dimension for files and directories that have been enumerated on an asset. Each record represents one file or directory discovered on an asset. If an asset has no files or directories enumerated, there will be no records in this dimension for the asset.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
file_id | bigint | No | The identifier of the file or directory.
type | text | No | The type of the item: Directory, File, or Unknown.
name | text | No | The name of the file or directory.
size | bigint | No | The size of the file or directory in bytes. If the size is unknown, the value will be -1.
dim_asset_group_account
Description: Dimension that provides the group accounts detected on an asset during the most
recent scan of the asset.
Columns
dim_asset_group
Description: Dimension that provides access to the asset groups within the scope of the
report. There will be one record in this dimension for every asset group which any asset in the
scope of the report is associated to, including assets specified through configuring scans, sites, or
asset groups.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_group_id | integer | No | The identifier of the asset group.
name | text | No | The name of the asset group.
description | text | Yes | The optional description of the asset group. If no description is specified, the value will be null.
dynamic_membership | boolean | No | Indicates whether the membership of the asset group is computed dynamically using a dynamic asset filter, or is static (true if this group is a dynamic asset group).
dim_asset_group_asset
Description: Dimension that provides access to the relationship between an asset group and its
associated assets. For each asset group membership of an asset there will be a record in this
table.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_group_id | integer | No | The identifier of the asset group. | dim_asset_group
dim_asset_host_name
Description: Dimension that provides all primary and alternate host names for an asset. Unlike
the dim_asset dimension, this dimension will provide detailed information for the alternate host
names detected on the asset. If an asset has no known host names, a record with an unknown
host name will be present in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
host_name | text | No | The host name associated to the asset, or 'Unknown' if no host name is associated with the asset.
source_type_id | character(1) | No | The identifier of the type of source which was used to detect the host name, or '-' if no host name is associated with the asset. | dim_host_name_source_type
dim_asset_ip_address
Description: Dimension that provides all primary and alternate IP addresses for an asset. Unlike
the dim_asset dimension, this dimension will provide detailed information for the alternate IP
addresses detected on the asset. As each asset is guaranteed to have at least one IP address,
this dimension will contain at least one record for every asset in the scope of the report.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
dim_asset_mac_address
Description: Dimension that provides all primary and alternate MAC addresses for an asset.
Unlike the dim_asset dimension, this dimension will provide detailed information for the alternate
MAC addresses detected on the asset. If an asset has no known MAC addresses, a record with
null MAC address will be present in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset the MAC address was detected on. | dim_asset
address | macaddr | Yes | The MAC address associated to the asset, or null if the asset has no known MAC address.
dim_asset_operating_system
Description: Dimension that provides the primary and all alternate operating system fingerprints
for an asset. Unlike the dim_asset dimension, this dimension will provide detailed information for
all operating system fingerprints on an asset. If an asset has no known operating system, a
record with an unknown operating system fingerprint will be present in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
dim_asset_service
Description: Dimension that provides the services detected on an asset during the most recent
scan of the asset. If an asset had no services enumerated during the scan, there will be no
records in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
service_id | integer | No | The identifier of the service. | dim_service
protocol_id | smallint | No | The identifier of the protocol. | dim_protocol
port | integer | No | The port on which the service is running.
service_fingerprint_id | bigint | No | The identifier of the fingerprint for the service, or -1 if a fingerprint is not available. | dim_service_fingerprint
certainty | real | No | The confidence level of the fingerprint, which ranges from 0 to 1.0. If there is no fingerprint, the value is 0.
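Joining this dimension to dim_service yields a per-asset service inventory. A sketch with invented rows (SQLite standing in for the reporting schema):

```python
import sqlite3

# Mock slice with invented rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_asset_service (asset_id INTEGER, service_id INTEGER, port INTEGER);
CREATE TABLE dim_service (service_id INTEGER, name TEXT);
INSERT INTO dim_service VALUES (1, 'HTTP'), (2, 'SSH');
INSERT INTO dim_asset_service VALUES (7, 1, 80), (7, 2, 22), (8, 1, 8080);
""")

# One row per detected service per asset.
rows = conn.execute("""
SELECT das.asset_id, ds.name, das.port
FROM dim_asset_service das
JOIN dim_service ds USING (service_id)
ORDER BY das.asset_id, das.port
""").fetchall()
print(rows)
```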
dim_asset_service_configuration
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
service_id | integer | No | The identifier of the service. | dim_service
name | text | No | The name of the configuration value.
value | text | Yes | The configuration value, which may be empty or null.
port | integer | No | The port on which the service was running.
dim_asset_service_credential
Description: Dimension that presents the most recent credential statuses asserted for services
on an asset in the latest scan.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
service_id | integer | No | The identifier of the service. | dim_service
credential_status_id | smallint | No | The identifier of the credential status for the service credential. | dim_credential_status
protocol_id | smallint | No | The identifier of the protocol of the service. | dim_protocol
port | integer | No | The port on which the service is running.
dim_asset_software
Description: Dimension that provides the software enumerated on an asset during the most recent scan of the asset. If an asset had no software packages enumerated during the scan, there will be no records in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
software_id | bigint | No | The identifier of the software package. | dim_software
fingerprint_source_id | integer | No | The source which was used to detect the software. | dim_fingerprint_source
dim_asset_user_account
Description: Dimension that provides the user accounts detected on an asset during the most
recent scan of the asset.
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
name | text | Yes | The short, abbreviated name of the user account, which may be null.
full_name | text | Yes | The longer full name of the user account, which may be null.
dim_asset_vulnerability_solution
Description: Dimension that provides access to what solutions can be used to remediate a
vulnerability on an asset. Multiple solutions may be selected as the means to remediate a
vulnerability on an asset. This occurs when either a single solution could not be selected, or if
multiple solutions must be applied together to perform the remediation. The solutions provided
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The surrogate identifier of the asset. | dim_asset
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
solution_id | integer | No | The surrogate identifier of the solution that may be used to remediate the vulnerability on the asset. | dim_solution
dim_fingerprint_source
Description: Dimension that provides access to the means by which an operating system or
software package were detected on an asset.
Columns
Column | Data type | Nullable | Description | Associated dimension
fingerprint_source_id | integer | No | The identifier of the source of a fingerprint.
source | text | No | The description of the source.
dim_mobile_asset_attribute
Columns
Column | Data type | Nullable | Description | Associated dimension
asset_id | bigint | No | The identifier of the asset. | dim_asset
attribute_name | text | No | The name of the mobile asset attribute. Possible names include: Mobile Device ID, Mobile Device Useragent, Mobile Device Owner, Mobile Device Model, and Mobile Device OS.
dim_operating_system
Description: Dimension that provides access to all operating system fingerprints detected on assets in any scan of the assets within the scope of the report.
Columns
Column | Data type | Nullable | Description | Associated dimension
operating_system_id | bigint | No | The identifier of the operating system.
asset_type | integer | No | The type of asset the operating system applies to, which categorizes the operating system fingerprint. This type can distinguish the purpose of the asset that the operating system applies to.
description | text | No | The verbose description of the operating system, which combines the family, vendor, name, and version.
dim_policy
Description: This is the dimension for all metadata related to a policy. It contains one record for
every policy that currently exists in the application.
Columns
Column | Data type | Nullable | Description
policy_id | bigint | No | The identifier of the policy.
scope | text | No | The identifier for scope of policy. Policies that are automatically available have "Built-in" scope, whereas policies created by users have "Custom" scope.
title | text | No | The title of the policy as visible to the user.
description | text | | A description of the policy.
total_rules | bigint | | The sum of all the rules within the policy.
dim_policy_group
Description: This is the dimension for all the metadata for each group within a policy. It contains one record for every group within each policy.
Columns
dim_policy_rule
Description: This is the dimension for all the metadata for each rule within a policy. It contains one record for every rule within each policy.
Columns
Column | Data type | Nullable | Description
policy_id | bigint | No | The identifier of the policy.
parent_group_id | bigint | Yes | The identifier of the group the rule directly belongs to. If the rule belongs directly to the policy, this will be null.
scope | text | No | The identifier for scope of the policy.
rule_id | bigint | No | The identifier of the rule.
title | text | | The title of the rule, for each policy, that is visible to the user. It describes a state or condition with which a tested asset should comply.
description | text | | A description of the rule.
dim_policy_override
Description: Dimension that provides access to all policy rule overrides in any state that may apply to any assets within the scope of the report. This includes overrides that have expired or have been superseded by newer overrides.
Columns
dim_policy_override_scope
Description: Dimension for the possible scope for a Policy override, such as Global, Asset, or
Asset Instance.
Type: normal
Columns
dim_policy_override_review_state
Type: normal
Columns
dim_policy_result_status
Description: Dimension for the possible statuses for a Policy Check result, such as Pass, Fail, or
Not Applicable.
Type: normal
Columns
dim_scan_engine
Description: Dimension for all scan engines that are defined. A record is present for each scan
engine to which the owner of the report has access.
Columns
Column | Data type | Nullable | Description | Associated dimension
scan_engine_id | integer | No | The unique identifier of the scan engine.
name | text | No | The name of the scan engine.
dim_scan_template
Description: Dimension for all scan templates that are defined. A record is present for each scan
template in the system.
Columns
Column | Data type | Nullable | Description | Associated dimension
scan_template_id | text | No | The identifier of the scan template.
name | text | No | The short, human-readable name of the scan template.
description | text | No | The verbose description of the scan template.
dim_service
Description: Dimension that provides access to the name of a service detected on an asset in a
scan. This dimension will contain a record for every service that was detected during any scan of
any asset within the scope of the report.
Columns
dim_service_fingerprint
Description: Dimension that provides access to the detailed information of a service fingerprint. This dimension will contain a record for every service fingerprinted during any scan of any asset within the scope of the report.
Columns
Column | Data type | Nullable | Description | Associated dimension
service_fingerprint_id | bigint | No | The identifier of the service fingerprint.
vendor | text | No | The vendor name for the service. If the vendor was not detected, the value will be 'Unknown'.
family | text | No | The family name or product line of the service. If the family was not detected, the value will be 'Unknown'.
name | text | No | The name of the service. If the name was not detected, the value will be 'Unknown'.
version | text | No | The version name or number of the service. If the version was not detected, the value will be 'Unknown'.
dim_site
Description: Dimension that provides access to the textual information of all sites configured to
be within the scope of the report. There will be one record in this dimension for every site which
any asset in the scope of the report is associated to, including assets specified through
configuring scans, sites, or asset groups.
Columns
Column | Data type | Nullable | Description | Associated dimension
site_id | integer | No | The identifier of the site.
name | text | No | The name of the site.
description | text | Yes | The optional description of the site. If the site has no description, the value will be null.
dim_site_asset
Description: Dimension that provides access to the relationship between a site and its
associated assets. For each asset within the scope of the report, a record will be present in this
table that links to its associated site. The values in this dimension will change whenever a scan of
a site is completed.
Columns
dim_scan
Description: Dimension that provides access to the scans for any assets within the scope of the
report.
Columns
dim_site_scan
Description: Dimension that provides access to the relationship between a site and its
associated scans. For each scan of a site within the scope of the report, a record will be present in
this table.
Columns
dim_site_scan_config
Columns
Column | Data type | Nullable | Description | Associated dimension
site_id | integer | No | The unique identifier of the site. | dim_site
dim_site_target
Description: Dimension for all the included and excluded targets of a site. For all sites in the
scope of the report, a record will be present for each unique IP range and/or host name defined
as an included or excluded address in the site configuration. If any global exclusions are applied,
these will also be provided at the site level.
Columns
Column | Data type | Nullable | Description | Associated dimension
site_id | integer | No | The identifier of the site. | dim_site
type | text | No | Either host or ip to indicate the type of address.
included | boolean | No | True if the target is included in the configuration, or false if it is excluded.
target | text | No | The address of the target. If host type, this is the host name. If ip type, this is the IP address in text form (the result of running the HOST function).
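Since included and excluded targets share one table, a query for a site's effective targets filters on the included flag. A sketch (invented rows; SQLite stores the boolean flag as an integer here):

```python
import sqlite3

# Mock slice with invented rows; `included` is modeled as 1/0.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_site_target (site_id INTEGER, type TEXT, included INTEGER, target TEXT);
INSERT INTO dim_site_target VALUES
  (1, 'ip', 1, '10.0.0.1'),
  (1, 'host', 1, 'db01.example.com'),
  (1, 'ip', 0, '10.0.0.250');
""")

# The effective scan targets of a site are its included entries.
included = conn.execute("""
SELECT target FROM dim_site_target
WHERE site_id = 1 AND included = 1
ORDER BY target
""").fetchall()
print(included)
```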
dim_software
Description: Dimension that provides access to all the software packages that have been
enumerated across all assets within the scope of the report. Each record has detailed information
for the fingerprint of the software package.
Columns
Column | Data type | Nullable | Description | Associated dimension
software_id | bigint | No | The identifier of the software package.
vendor | text | No | The vendor that produced or published the software package.
family | text | No | The family or product line of the software package.
name | text | No | The name of the software.
version | text | No | The version of the software.
software_class_id | integer | No | The identifier of the class of software. | dim_software_class
cpe | text | Yes | The Common Platform Enumeration (CPE) value that corresponds to the software.
dim_software_class
Description: Dimension for the types of classes of software that can be used to classify or group
the purpose of the software.
Columns
Column | Data type | Nullable | Description | Associated dimension
software_class_id | integer | No | The identifier of the software class.
description | text | No | The description of the software class, which may be 'Unknown'.
dim_solution
Columns
Column | Data type | Nullable | Description | Associated dimension
solution_id | integer | No | The identifier of the solution.
nexpose_id | text | No | The identifier of the solution within the application.
estimate | interval(0) | No | The amount of time estimated to be required to implement this solution on a single asset. The minimum value is 0 minutes, and the precision is measured in seconds.
url | text | Yes | An optional URL link defined for getting more information about the solution. When defined, this may be a web page defined by the vendor that provides more details on the solution, or it may be a download link to a patch.
solution_type | solution_type | No | The type of the solution, which can be PATCH, ROLLUP, or WORKAROUND. A patch type indicates that the solution involves applying a patch to a product or operating system. A rollup patch type indicates that the solution supersedes other solutions and rolls up many workaround or patch type solutions into one step.
fix | text | Yes | The steps that are a part of the fix this solution prescribes. The fix will usually contain a list of procedures that must be followed to remediate the vulnerability. The fix will be provided in an HTML format.
summary | text | No | A short summary of the solution which describes the purpose of the solution at a high level and is suitable for use as a summarization of the solution.
dim_solution_supercedence
Description: Dimension that provides all superseding associations between solutions. Unlike dim_solution_highest_supercedence, this dimension provides access to the entire graph of superseding relationships. If a solution does not supersede any other solution, it will not have any records in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
solution_id | integer | No | The identifier of the solution. | dim_solution
superceding_solution_id | integer | No | The identifier of the superseding solution. | dim_solution
dim_solution_highest_supercedence
Description: Dimension that provides access to the highest-level superseding solution for every solution. If a solution has multiple superseding solutions that themselves are not superseded, all will be returned; therefore a single solution may have multiple records returned. If a solution is not superseded by any other solution, it will be marked as being superseded by itself (to allow natural joining behavior).
Columns
Column | Data type | Nullable | Description | Associated dimension
solution_id | integer | No | The identifier of the solution. | dim_solution
superceding_solution_id | integer | No | The surrogate identifier of a solution that is known to supersede the solution, and which itself is not superseded (the highest level of supersedence). If the solution is not superseded, this is the same identifier as solution_id. | dim_solution
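The self-reference convention means a plain join never drops solutions. A sketch of resolving each solution to its highest superseding fix (invented rows; SQLite in place of the reporting schema):

```python
import sqlite3

# Mock slice with invented rows. Solution 3 is not superseded, so per
# the convention above it references itself.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_solution (solution_id INTEGER, summary TEXT);
CREATE TABLE dim_solution_highest_supercedence (solution_id INTEGER, superceding_solution_id INTEGER);
INSERT INTO dim_solution VALUES (1, 'Apply hotfix A'), (2, 'Apply rollup B'), (3, 'Disable service');
INSERT INTO dim_solution_highest_supercedence VALUES (1, 2), (2, 2), (3, 3);
""")

# Resolve every solution to its highest-level fix in one pass; the
# self-reference row keeps unsuperseded solutions in the result.
rows = conn.execute("""
SELECT s.summary, best.summary
FROM dim_solution s
JOIN dim_solution_highest_supercedence h ON h.solution_id = s.solution_id
JOIN dim_solution best ON best.solution_id = h.superceding_solution_id
ORDER BY s.solution_id
""").fetchall()
print(rows)
```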
dim_solution_prerequisite
Description: Dimension that provides an association between a solution and all the prerequisite solutions that must be applied before it. If a solution has no prerequisites, it will have no records in this dimension.
Columns
Column | Data type | Nullable | Description | Associated dimension
dim_tag
Description: Dimension for all tags that any assets within the scope of the report belong to. Each tag has either a direct or an indirect association to an asset, based on site or asset group association or on dynamic membership criteria.
Columns
Column | Data type | Nullable | Description | Associated dimension
tag_id | integer | No | The identifier of the tag.
tag_name | text | No | The name of the tag. Names are unique for tags within a type.
tag_type | text | No | The type of the tag. The supported types are CRITICALITY, LOCATION, OWNER, and CUSTOM.
source | text | No | The original application that created the tag.
creation_date | timestamp | No | The date and time at which the tag was created.
dim_tag_asset
Description: Dimension for the association between an asset and a tag. For each asset there will
be one record with an association to only one tag. This dimension only provides current
associations. It does not indicate whether an asset was previously associated with a tag.
Columns
Column | Data type | Nullable | Description | Associated dimension
dim_vulnerability_solution
Description: Dimension that provides access to the relationship between a vulnerability and its
(direct) solutions. These solutions are only those which are directly known to remediate the
vulnerability, and does not include rollups or superceding solutions. If a vulnerability has more
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
solution_id | integer | No | The identifier of the solution that the vulnerability may be remediated with. | dim_solution
dim_vulnerability
Description: Dimension for all the metadata related to a vulnerability. This dimension will contain
one record for every vulnerability included within the scope of the report. The values in this
dimension will change whenever the risk model of the Security Console is modified.
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability.
description | text | No | Long description for the vulnerability.
nexpose_id | text | No | A textual identifier of a vulnerability unique to the application.
title | text | No | The short, succinct title of the vulnerability.
date_published | date | No | The date that the vulnerability was published by the source of the vulnerability (third-party, software vendor, or another authoring source).
dim_vulnerability_category
Description: Dimension that provides the relationship between a vulnerability and a vulnerability
category.
Type: normal
Columns
Column | Data type | Nullable | Description | Associated dimension
category_id | integer | No | The identifier of the vulnerability category.
dim_vulnerability_exception
Description: Dimension that provides access to all vulnerability exceptions in any state (including deleted) that may apply to any assets within the scope of the report. The exceptions available in this dimension will change as their state changes or as new exceptions are created over time.
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
scope_id | character(1) | No | The scope of the vulnerability exception, which dictates what assets the exception applies to. | dim_exception_scope
reason_id | character(1) | No | The reason that the vulnerability exception was submitted. | dim_exception_reason
additional_comments | text | Yes | Optional comments associated with the last state change of the vulnerability exception. | —
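As an illustration (not a shipped report), the scope and reason codes can be resolved to their human-readable descriptions by joining the static dimensions documented later in this section:

```sql
-- Illustrative only: list exceptions with readable scope and reason text.
SELECT dve.vulnerability_id,
       des.short_description AS exception_scope,
       der.description       AS exception_reason,
       dve.additional_comments
FROM dim_vulnerability_exception dve
JOIN dim_exception_scope  des ON des.scope_id  = dve.scope_id
JOIN dim_exception_reason der ON der.reason_id = dve.reason_id;
```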
dim_vulnerability_exploit
Description: Dimension that provides the relationship between a vulnerability and an exploit.
Type: normal
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
title | text | No | The short, succinct title of the exploit. | —
description | text | Yes | The optional verbose description of the exploit. If there is no description, the value is null. | —
dim_vulnerability_malware_kit
Description: Dimension that provides the relationship between a vulnerability and a malware kit.
Type: normal
Columns
Column | Data type | Nullable | Description | Associated dimension
dim_vulnerability_reference
Description: Dimension that provides the references associated to a vulnerability, which provide
links to external sources of data and information related to a vulnerability.
Type: normal
Columns
Column | Data type | Nullable | Description | Associated dimension
vulnerability_id | integer | No | The identifier of the vulnerability. | dim_vulnerability
source | text | No | The name of the source of the vulnerability information. The value is guaranteed to be provided in all upper-case characters. | —
reference | text | No | The reference that keys or links into the source of the vulnerability information. If the source is 'URL', the reference is a URL. Otherwise, the value is typically a key or identifier that indexes into the source repository. | —
The following dimensions are static in nature and all represent mappings of codes, identifiers,
and other constant values to human-readable descriptions.
dim_access_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the access vector type.
description | text | No | The description of the access vector type.
Values
type_id | Description | Notes & Detailed Description
'L' | 'Local' | A vulnerability exploitable with only local access requires the attacker to have either physical access to the vulnerable system or a local (shell) account.
'A' | 'Adjacent Network' | A vulnerability exploitable with adjacent network access requires the attacker to have access to either the broadcast or collision domain of the vulnerable software.
'N' | 'Network' | A vulnerability exploitable with network access means the vulnerable software is bound to the network stack and the attacker does not require local network access or local access.
dim_aggregated_credential_status
Description: Dimension containing the credential status aggregated across all available services
for the given asset in the given scan.
Type: normal
Columns
Column | Data type | Nullable | Description
aggregated_credential_status_id | smallint | No | The credential status ID associated with the fact_asset_scan_service.
aggregated_credential_status_description | text | No | The human-readable description of the credential status.
Values
status_id | Description | Notes & Detailed Description
1 | 'No credentials supplied' | One or more services for which credential status is reported were detected in the scan, but there were no credentials supplied for any of them.
2 | 'All credentials failed' | One or more services for which credential status is reported were detected in the scan, and all credentials supplied for these services failed to authenticate.
3 | 'Credentials partially successful' | At least two of the four services for which credential status is reported were detected in the scan, and for some services the provided credentials failed to authenticate, but for at least one there was a successful authentication.
4 | 'All credentials successful' | One or more services for which credential status is reported were detected in the scan, and for all of these services for which credentials were supplied, authentication with the provided credentials was successful.
-1 | 'N/A' | None of the four applicable services (SNMP, SSH, Telnet, CIFS) was discovered in the scan.
dim_credential_status
Description: Dimension for the scan service credential status in human-readable form.
Type: normal
Columns
Column | Data type | Nullable | Description
credential_status_id | smallint | No | The credential status ID associated with the fact_asset_scan_service.
credential_status_description | text | No | The human-readable description of the credential status.
dim_cvss_access_complexity_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the access complexity type.
description | text | No | The description of the access complexity type.
dim_cvss_authentication_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the authentication type.
description | text | No | The description of the authentication type.
Values
type_id | Description | Notes & Detailed Description
'M' | 'Multiple' | Exploiting the vulnerability requires that the attacker authenticate two or more times, even if the same credentials are used each time.
'S' | 'Single' | The vulnerability requires an attacker to be logged into the system (such as at a command line or via a desktop session or web interface).
'N' | 'None' | Authentication is not required to exploit the vulnerability.
dim_cvss_confidentiality_impact_type
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the confidentiality impact type.
description | text | No | The description of the confidentiality impact type.
Values
type_id | Description | Notes & Detailed Description
'P' | 'Partial' | There is considerable informational disclosure. Access to some system files is possible, but the attacker does not have control over what is obtained, or the scope of the loss is constrained.
'C' | 'Complete' | There is total information disclosure, resulting in all system files being revealed. The attacker is able to read all of the system's data (memory, files, etc.).
'N' | 'None' | There is no impact to the confidentiality of the system.
dim_cvss_integrity_impact_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the integrity impact type.
description | text | No | The description of the integrity impact type.
Values
type_id | Description | Notes & Detailed Description
'P' | 'Partial' | Modification of some system files or information is possible, but the attacker does not have control over what can be modified, or the scope of what the attacker can affect is limited.
'C' | 'Complete' | There is a total compromise of system integrity. There is a complete loss of system protection, resulting in the entire system being compromised. The attacker is able to modify any files on the target system.
'N' | 'None' | There is no impact to the integrity of the system.
dim_cvss_availability_impact_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the availability impact type.
description | text | No | The description of the availability impact type.
dim_exception_scope
Description: Dimension that provides all scopes a vulnerability exception can be defined on.
Type: normal
Columns
Column | Data type | Nullable | Description
scope_id | character(1) | No | The identifier of the scope of a vulnerability exception.
short_description | text | No | A succinct, one-word description of the scope.
description | text | No | A verbose description of the scope.
dim_exception_reason
Description: Dimension for all possible reasons that can be used within a vulnerability exception.
Type: normal
Columns
Column | Data type | Nullable | Description
reason_id | character(1) | No | The identifier for the reason of the vulnerability exception.
description | text | No | The description of the reason.
dim_exception_status
Type: normal
Columns
Column | Data type | Nullable | Description
status_id | character(1) | No | The identifier of the exception status.
description | text | No | The description or name of the exception status.
dim_host_name_source_type
Description: Dimension for the types of sources used to detect a host name on an asset.
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the source type.
description | text | No | The description of the source type code.
Values
type_id | Description | Notes & Detailed Description
'T' | 'User Defined' | The host name of the asset was acquired as a result of being specified as a target within the scan (in the site configuration).
'D' | 'DNS' | The host name discovered during a scan using the domain name system (DNS).
dim_host_type
Description: Dimension for the types of hosts that an asset can be classified as.
Type: normal
Columns
Column | Data type | Nullable | Description
host_type_id | integer | No | The identifier of the host type.
description | text | No | The description of the host type code.
Values
host_type_id | Description | Explanation
1 | 'Virtual Machine' | The asset is a generic virtualized asset resident within a virtual machine.
2 | 'Hypervisor' | The asset is a virtualized asset within a hypervisor.
3 | 'Bare Metal' | The asset is a physical machine.
4 | 'Mobile' | The asset type is a mobile device (added in version 2.0.1).
-1 | 'Unknown' | The asset type is unknown or could not be determined.
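A sketch of how this static dimension might be used in a query. It assumes dim_asset carries a host_type_id column linking to this dimension, which is not defined in this excerpt; the query is illustrative only.

```sql
-- Illustrative only: count assets by host type. Assumes dim_asset
-- exposes a host_type_id foreign key (an assumption, not documented here).
SELECT dht.description AS host_type,
       COUNT(*)        AS asset_count
FROM dim_asset da
JOIN dim_host_type dht ON dht.host_type_id = da.host_type_id
GROUP BY dht.description;
```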
dim_scan_status
Type: normal
Columns
Column | Data type | Nullable | Description
status_id | character(1) | No | The identifier of the status a scan can have.
Values
status_id | Description | Notes & Detailed Description
'A' | 'Aborted' | The scan was either manually or automatically aborted by the system. If a scan is marked as aborted, it usually terminated abnormally. Aborted scans can occur when an engine is interrupted (terminated) while a scan is actively running.
'C' | 'Successful' | The scan was successfully completed and no errors were encountered (this includes scans that were manually or automatically resumed).
'U' | 'Running' | The scan is actively running and is in a non-paused state.
'S' | 'Stopped' | The scan was manually stopped by the user.
'E' | 'Failed' | The scan failed to launch or run successfully.
'P' | 'Paused' | The scan is halted because a user manually paused the scan or the scan has met its maximum scan duration.
'-' | 'Unknown' | The status of the scan cannot be determined.
dim_scan_type
Type: normal
Columns
Column | Data type | Nullable | Description
type_id | character(1) | No | The identifier of the type a scan can be.
dim_vulnerability_status
Description: Dimension for the statuses a vulnerability finding result can be classified as.
Type: normal
Columns
Column | Data type | Nullable | Description
status_id | character(1) | No | The identifier of the vulnerability status.
description | text | No | The description of the vulnerability status.
dim_protocol
Description: Dimension that provides all possible protocols that a service can be utilizing on an
asset.
Type: normal
Columns
Column | Data type | Nullable | Description
protocol_id | integer | No | The identifier of the protocol.
name | text | No | The name of the protocol.
description | text | No | The non-abbreviated description of the protocol.
To ease the development and design of queries against the Reporting Data Model, several utility
functions are provided to the report designer.
Note: Data model 2.0.0 exposes information about linking assets across sites. All previous
information is still available, and in the same format. As of data model 2.0.0, there is a sites
column in the dim_asset dimension that lists the sites to which an asset belongs.
age
added in version 1.2.0
Description: Computes the difference in time between the specified date and now. Unlike the
built-in age function, this function takes as an argument the unit to calculate in, and will
compute the age and round based on the specified unit. The unit determines the precision of
the output.
The computation of age is not timezone aware, and uses heuristic values for time. In other words,
the age is computed as the elapsed time between the date and now, not the calendar time. For
example, a year is assumed to comprise 365.25 days, and a month 30.4 days.
Input: (timestamp, text) The date to compute the age for, and the unit of the computation.
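A sketch of the function in use. It assumes 'years' is among the valid unit values (the list of valid units is omitted above); the query is illustrative only.

```sql
-- Illustrative only: how long ago each vulnerability was published,
-- rounded to whole years ('years' as the unit is an assumption).
SELECT title,
       age(date_published, 'years') AS years_since_published
FROM dim_vulnerability;
```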
baselineComparison
Input: (bigint, bigint) The identifier of any value in either the new or old state, followed by the
identifier of the most recent state.
Output: (text) A value indicating whether the baseline evaluates to ‘New’, ‘Old’, or ‘Same’.
csv
added in version 1.2.0
htmlToText
added in version 1.2.0
Description: Formats HTML content and structure into a flattened, plain-text format. This function
can be used to translate fields with content metadata, such as vulnerability proofs, vulnerability
descriptions, solution fixes, etc.
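For example, the HTML markup in vulnerability descriptions can be flattened for plain-text output. The query below is an illustrative sketch, not a shipped report:

```sql
-- Illustrative only: render HTML vulnerability descriptions as plain text.
SELECT title,
       htmlToText(description) AS description_text
FROM dim_vulnerability;
```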
lastScan
maximumSeverity
added in version 1.2.0
Description: Returns the maximum severity value within an aggregated group. When used
across a grouping that contains multiple vulnerabilities with varying severities, this aggregate can
be used to select the highest severity of them all. For example, the aggregate of Severe and
Moderate is Severe. This aggregate should only be used on columns containing severity rankings
for a vulnerability.
Output: (text) The maximum severity value found within a group: Critical, Moderate, or Severe.
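A minimal sketch of the aggregate in use. The severity column referenced here is hypothetical (it is not part of the dim_vulnerability excerpt above); substitute whichever column in your query holds severity rankings.

```sql
-- Illustrative only: the highest severity ranking across all
-- vulnerabilities in scope. "severity" is a hypothetical text column
-- holding severity rankings (e.g. 'Moderate', 'Severe', 'Critical').
SELECT maximumSeverity(severity) AS highest_severity
FROM dim_vulnerability;
```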
previousScan
Description: Returns the identifier of the scan that took place prior to the most recent scan of the
asset (see the function lastScan).
Output: (bigint) The identifier of the scan that occurred prior to the most recent scan of the asset.
If an asset was only scanned once, this will return null.
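A sketch pairing this function with lastScan. The dim_asset table and its asset_id column are assumed from the broader data model; they are not defined in this excerpt.

```sql
-- Illustrative only: the most recent and the prior scan for each asset.
-- previousScan() returns null for assets scanned only once.
SELECT da.asset_id,
       lastScan(da.asset_id)     AS latest_scan_id,
       previousScan(da.asset_id) AS prior_scan_id
FROM dim_asset da;
```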
proofAsText
Deprecated as of version 1.2.0. Use htmlToText() instead.
Description: Formats the proof of a vulnerability instance to be output into a flattened, plain-text
format. This function is an alias for the htmlToText() function.
Output: (text) The proof value formatted for display as plain text.
scanAsOf
Description: Returns the identifier of the scan that took place on an asset prior to the specified
date (exclusive).
Input: (bigint, timestamp) The identifier of the asset and the date to search before.
Output: (bigint) The identifier of the scan that occurred prior to the specified date on the asset, or
null if no scan took place on the asset prior to the date.
Description: Returns the identifier of the scan that took place on an asset prior to the specified
date. See scanAsOf() if you are using a timestamp field.
Input: (bigint, date) The identifier of the asset and the date to search before.
Output: (bigint) The identifier of the scan that occurred prior to the specified date on the asset, or
null if no scan took place on the asset prior to the date.
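A sketch of scanAsOf in use. As above, dim_asset and its asset_id column are assumed from the broader data model, and the cutoff date is arbitrary.

```sql
-- Illustrative only: the last scan of each asset before 2015,
-- or null if the asset was not scanned before then.
SELECT da.asset_id,
       scanAsOf(da.asset_id, timestamp '2015-01-01') AS scan_before_2015
FROM dim_asset da;
```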
When configuring a report, you have a number of options related to how the information will be
consumed and by whom. You can restrict report access to one user or a group of users. You can
restrict sections of reports that contain sensitive information so that only specific users see these
sections. You can control how reports are distributed to users, whether they are sent in e-mails or
stored in certain directories. If you are exporting report information to external databases, you
can specify certain properties related to the data export.
After a report is generated, only a Global Administrator and the designated report owner can see
that report on the Reports page. You also can have a copy of the report stored in the report
owner’s directory. See Storing reports in report owner directories on page 494.
If you are a Global Administrator, you can assign ownership of the report to one of a list of users.
If you are not a Global Administrator, you will automatically become the report owner.
When the application generates a report, it stores it in the reports directory on the Security
Console host:
[installation_directory]/nsc/reports/[user_name]/
You can configure the application to also store a copy of the report in a user directory for the
report owner. It is a subdirectory of the reports folder, and it is given the report owner's user
name.
You can use string literals, variables, or a combination of these to create a directory path.
l $(report_name): the name of the report, which was created on the General section of the
Create a Report panel
After you create the path and run the report, the application creates the report owner’s user
directory and the subdirectory path that you specified on the Output page. Within this
subdirectory will be another directory with a hexadecimal identifier containing the report copy.
For example, if you specify the path windows_scans/$(date), you can access the newly
created report at:
reports/[report_owner]/windows_scans/$(date)/[hex_number]/[report_file_
name]
Consider designing a path naming convention that will be useful for classifying and organizing
reports. This will become especially useful if you store copies of many reports.
Another option for sharing reports is to distribute them via e-mail. Click the Distribution link in the
left navigation column to go to the Distribution page. See Managing the sharing of reports on page
496.
Every report has a designated owner. When a Global Administrator creates a report, he or she
can select a report owner. When any other user creates a report, he or she automatically
becomes the owner of the new report.
In the console Web interface, a report and any generated instance of that report, is visible only to
the report owner or a Global Administrator. However, it is possible to give a report owner the
ability to share instances of a report with other individuals via e-mail or a distributed URL. This
expands a report owner’s ability to provide important security-related updates to a targeted group
of stakeholders. For example, a report owner may want members of an internal IT department to
view vulnerability data about a specific set of servers in order to prioritize and then verify
remediation tasks.
Note: The granting of this report-sharing permission potentially means that individuals will be
able to view asset data to which they would otherwise not have access.
Sharing reports involves:
l configuring the application to redirect users who click the distributed report URL link to the
appropriate portal
l granting users the report-sharing permission
Note: If a report owner creates an access list for a report and then copies that report, the copy
will not retain the access list of the original report. The owner would need to create a new access
list for the copied report.
Report owners who have been granted report-sharing permission can then create a report
access list of recipients and configure report-sharing settings.
By default, URLs of shared reports are directed to the Security Console. To redirect users who
click the distributed report URL link to the appropriate portal, you have to add an element to the
oem.xml configuration file.
The element reportLinkURL includes an attribute called altURL, with which you can specify the
redirect destination.
Note: If you are creating the oem.xml file, make sure to include the opening tag at the beginning
of the file and the matching closing tag at the end.
2. Add or edit the reports sub-element to include the reportLinkURL element with the altURL
attribute set to the appropriate destination, as in the following example:
<reports>
<reportEmail>
<reportSender>[email protected]</reportSender>
<reportSubject>${report-name}</reportSubject>
</reportEmail>
<reportLinkURL altURL="base_url.net/directory_path${variable}?loginRedir="/>
</reports>
Global Administrators automatically have permission to share reports. They can also assign this
permission to other users or roles.
1. Go to the Administration page, and click the Create link next to Users.
6. Click Save when you have finished configuring the account settings.
1. Go to the Administration page, and click the manage link next to Users.
(Optional) Go to the Users page and click the Edit icon for one of the listed accounts.
Note: You also can grant this permission by making the user a Global Administrator.
5. Click Save when you have finished configuring the account settings.
If you are a Global Administrator, or if you have been granted permission to share reports, you
can create an access list of users when configuring a report. These users will only be able to view
the report. They will not be able to edit or copy it.
To create a report access list with the Web-based interface, take the following steps:
If you are a Global Administrator or have Super-User permissions, you can select a report
owner. Otherwise, you are automatically the report owner.
Report Access
3. Click Add User to select users for the report access list.
4. Select the check box for each desired user, or select the check box in the top row to select all
users.
5. Click Done.
Note: Adding a user to a report access list potentially means that individuals will be able to
view asset data to which they would otherwise not have access.
6. Click Run the report when you have finished configuring the report, including the settings for
sharing it.
Note: Before you distribute the URL, you must configure URL redirection.
You can share a report with your access list either by sending it in an e-mail or by distributing a
URL for viewing it.
Report Distribution
3. Enter the sender’s e-mail address and SMTP relay server. For example, E-mail sender
address: [email protected] and SMTP relay server: mail.server.com.
You may require an SMTP relay server for one of several reasons. For example, a firewall
may prevent the application from accessing your network’s mail server. If you leave the
SMTP relay server field blank, the application searches for a suitable mail server for sending
reports. If no SMTP server is available, the Security Console does not send the e-mails and
will report an error in the log files.
8. (Optional) Select the check box to send the report to all users with access to assets in the
report.
Adding a user to a report access list potentially means that individuals will be able to
view asset data to which they would otherwise not have access.
Note: You cannot distribute a URL to users who are not on the report access list.
10. Select the method to send the report as: File or Zip Archive.
11. Click Run the report when you have finished configuring the report, including the settings for
sharing it.
Creating a report access list and configuring report-sharing settings with the API
Note: This topic identifies the API elements that are relevant to creating report access lists and
configuring report sharing. For specific instructions on using API v1.1 and Extended API v1.2,
see the API guide, which you can download from the Support page in Help.
l With the Users sub-element of ReportConfig, you can specify the IDs of the users whom
you want to add to the report access list.
l With the Delivery sub-element of ReportConfig, you can use the sendToAclAs attribute to
specify how to distribute reports to your selected users.
Note: To obtain a list of users and their IDs, use the MultiTenantUserListing API, which is part of
the Extended API v1.2.
For general information on accessing the API and a sample LoginRequest, see the section
API overview in the API guide, which you can download from the Support page in Help.
2. Specify the user IDs you want to add to the report access list and the manner of report
distribution using the ReportSave API, as in the following XML example:
For additional, detailed information about the ReportSave API, see the API guide.
Every report is based on a template, whether it is one of the preset templates that ship with the
product or a customized template created by a user in your organization. A template consists of
one or more sections. Each section contains a subset of information, allowing you to look at scan
data in a specific way.
Security policies in your organization may make it necessary to control which users can view
certain report sections, or which users can create reports with certain sections. For example, if
your company is an Approved Scanning Vendor (ASV), you may only want a designated group of
users to be able to create reports with sections that capture Payment Card Industry (PCI)-related
scan data. You can find out which sections in a report are restricted by using the API (see the
section SiloProfileConfig in the API guide.)
The sub-element RestrictedReportSections is part of the SiloProfileCreate API for new silos and
SiloProfileUpdate API for existing silos. It contains the sub-element RestrictedReportSection for
which the value string is the name of the report section that you want to restrict.
In the following example, the Baseline Comparison report section will become restricted.
For general information on accessing the API and a sample LoginRequest, see the section
API overview in the API v1.1 guide, which you can download from the Support page in Help.
2. Identify the report section you want to restrict. This XML example of
SiloProfileUpdateRequest includes the RestrictedReportSections
element.
Note: To verify restricted report sections, use the SiloProfileConfig API. See the API guide.
The Baseline Comparison section is now restricted. This has the following implications for users
who have permission to generate reports with restricted sections:
l They can see Baseline Comparison as one of the sections they can include when creating
custom report templates.
l They can generate reports that include the Baseline Comparison section.
The restriction has the following implications for users who do not have permission to generate
reports with restricted sections:
l These users will not see Baseline Comparison as one of the sections they can include when
creating custom report templates.
l If these users attempt to generate reports that include the Baseline Comparison section, they
will see an error message indicating that they do not have permission to do so.
For additional, detailed information about the SiloProfile API, see the API guide.
Global Administrators automatically have permission to generate restricted reports. They can
also assign this permission to other users.
1. Go to the Administration page, and click the Create link next to Users.
Note: You also can grant this permission by making the user a Global Administrator.
1. Go to the Administration page, and click the manage link next to Users.
OR
2. (Optional) Go to the Users page and click the Edit icon for one of the listed accounts.
3. Click the Roles link in the User Configuration panel.
If you selected Database Export as your report format, the Report Configuration—Output page
contains fields specifically for transferring scan data to a database.
Before you type information in these fields, you must set up a JDBC-compliant database. In
Oracle, MySQL, or Microsoft SQL Server, create a new database called nexpose with
administrative rights.
You can configure warehousing settings to store scan data or to export it to a PostgreSQL
database. You can use this feature to obtain a richer set of scan data for integration with your
own internal reporting systems.
Note: Due to the amount of data that can be exported, the warehousing process may take a long
time to complete.
If you are an approved scan vendor (ASV), you must use the following PCI-mandated report
templates for PCI scans as of September 1, 2010:
l Attestation of Compliance
l PCI Executive Summary
l Vulnerability Details
You may find it useful and convenient to combine multiple reports into one template. For example,
you can create a template that combines sections from the Executive Summary, Vulnerability
Details, and Host Details templates into one report that you can present to the customer for the
initial review. Afterward, when the post-scan phase is completed, you can create another
template that includes the PCI Attestation of Compliance with the other two templates for final
delivery of the complete report set.
PCI Executive Summary contains the following sections:
l Cover Page
l Payment Card Industry (PCI) Scan Information
l Payment Card Industry (PCI) Component Compliance Summary
l Payment Card Industry (PCI) Vulnerabilities Noted
l Payment Card Industry (PCI) Special Notes
PCI Vulnerability Details contains the following sections:
l Cover Page
l Table of Contents
l Payment Card Industry (PCI) Scan Information
l Payment Card Industry (PCI) Vulnerability Details
For ASVs: Consolidating three report templates into one custom template 507
PCI Host Detail contains the following sections:
l Table of Contents
l Payment Card Industry (PCI) Scan Information
l Payment Card Industry (PCI) Host Details
Note: Due to PCI Council restrictions, section numbers of PCI reports are static and cannot
change to reflect the section structure of a customized report. Therefore, a customized report that
mixes PCI report sections with non-PCI report sections may have section numbers that appear
out of sequence.
Consolidated report template for ASVs.
3. Enter a name and description for your custom report on the View Reports page.
Note: Do not use sections related to “legacy” reports. These are deprecated and no longer
sanctioned by PCI as of September 1, 2010.
8. Click Save.
The Security Console displays the Manage report templates page with the new report
template.
Note: If you use sections from PCI Executive Summary or PCI Attestation of Compliance
templates, you will only be able to use the RTF format. If you attempt to select a different format,
an error message is displayed.
Configuring custom report templates
The application includes a variety of built-in templates for creating reports. These templates
organize and emphasize asset and vulnerability data in different ways to provide multiple looks at
the state of your environment’s security. Each template includes a specific set of information
sections.
If you are new to the application, you will find built-in templates especially convenient for creating
reports. To learn about built-in report templates and the information they include, see Report
templates and sections on page 644.
As you become more experienced with the application and want to tailor reports to your unique
informational needs, you may find it useful to create or upload custom report templates.
Creating custom report templates enables you to include as much, or as little, scan information in
your reports as your needs dictate. For example, if you want a report that lists assets organized
by risk level, a custom report might be the best solution. This template would include only the
Discovered System Information section. Or, if you want a report that only lists vulnerabilities, you
may create a document template with the Discovered Vulnerabilities section or create a data
export template with vulnerability-related attributes.
You can also upload a custom report template that has been created by Rapid7 at your request to
suit your specific needs. For example, custom report templates can be designed to provide high-
level information presented in a dashboard format with charts for quick reference that include
asset or vulnerability information tailored to your requirements. Contact your account
representative for information about having custom report templates designed for your needs.
Templates that have been created for you will be provided to you. Otherwise, you can download
additional report templates on the Rapid7 Community Web site at https://community.rapid7.com/.
After you create or upload a custom report template, it appears in the list of available templates
on the Template section of the Create a report panel. See Working with externally created report
templates on page 517.
You must have permission to create a custom report template. To find out if you do, consult your
Global Administrator. To create a custom report template, take the following steps:
The Security Console displays the Create a New Report Template panel.
1. Enter a name and description for the new template on the General section of the Create a
New Report Template panel.
Tip: If you are a Global Administrator, you can find out if your license enables a specific
feature. Click the Administration tab and then the Manage link for the Security Console. In
the Security Console Configuration panel, click the Licensing link.
Note: The Vulnerability details setting only affects document report templates. It does not
affect data export templates.
3. Select a level of vulnerability details from the drop-down list in the Content section of the
Create a New Report Template panel.
Vulnerability details filter the amount of information included in document report templates:
5. Select the sections to include in your template and click Add>. See Report templates and
sections on page 644.
Set the order for the sections to appear by clicking the up or down arrows.
You can create a new custom report template based on any built-in or existing custom report
template. This allows you to take advantage of some of a template's useful features without
having to recreate them as you tailor a template to your needs.
To create a custom template based on an existing template, take the following steps:
3. From the table, select a template that you want to base a new template on.
OR
If you have a large number of templates and don't want to scroll through all of them, start
typing the name of a template in the Find a report template text box. The Security Console
displays any matches. The search is not case-sensitive.
4. Hover over the tool icon of the desired template. If it is a built-in template, you will have the
option to copy and then edit it. If it is a custom template, you can edit it directly unless you
prefer to edit a copy. Select an option.
The Security Console displays the Create a New Report Template panel.
5. Edit settings as described in Editing report template settings on page 512. If you are editing a
copy of a template, give the template a new name.
6. Click Save.
By default, a document report cover page includes a generic title, the name of the report, the date
of the scan that provided the data for the report, and the date that the report was generated. It
also may include the Rapid7 logo or no logo at all, depending on the report template. See Cover
Page on page 658. You can easily customize a cover page to include your own title and logo.
\shared\reportImages\custom\silo\default.
2. Go to the Cover Page Settings section of the Create a New Report Template panel.
3. Enter the name of the file for your own logo, preceded by the word “image:” in the Add
logo field.
Example: image:file_name.png. Do not insert a space between the word “image:” and the
file name.
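The rule that no space may appear between the word "image:" and the file name is easy to encode if you generate report template configurations programmatically. A minimal sketch in Python; the helper name is ours, not part of the product:

```python
def logo_field(file_name):
    """Build the Add logo field value; no space is allowed after "image:"."""
    return "image:" + file_name.strip()

print(logo_field("acme_logo.png"))  # -> image:acme_logo.png
```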
The application provides built-in report templates and the ability to create custom templates
based on those built-in templates. Beyond these options, you may want to use compatible
templates that have been created outside of the application for your specific business needs.
These templates may have been provided directly to your organization or they may have been
posted in the Rapid7 Community at https://community.rapid7.com/community/nexpose/report-
templates.
See Fine-tuning information with custom report templates on page 511 for information about
requesting custom report templates.
Making one of these externally created templates available in the Security Console involves two
actions:
1. downloading the template to the workstation that you use to access the Security Console
2. uploading the template to the Security Console using the Reports configuration panel
Note: Your license must enable custom reporting for the template upload option to be available.
Also, externally created custom template files must be approved by Rapid7 and archived in the
.JAR format.
After you have downloaded a template archive, take the following steps:
3. Click New.
The Security Console displays the Create a New Report Template panel.
4. Enter a name and description for the new template on the General section of the Create a
New Report Template panel.
5. Select Upload a template file from the Template type drop-down list.
6. Click Browse in the Select file field to display a directory for you to search for custom
templates.
7. Select the report template file and click Open.
The report template file appears in the Select file field in the Content section.
Note: Contact Technical Support if you see errors during the upload process.
8. Click Save.
The custom report template file will now appear in the list of available report templates on the
Manage report templates panel.
The choice of a format is important in report creation. Formats not only affect how reports appear
and are consumed, but they also can have some influence on what information appears in
reports.
Several formats make report data easy to distribute, open, and read immediately:
Note: If you wish to generate PDF reports with Asian-language characters, make sure that UTF-
8 fonts are properly installed on your host computer. PDF reports with UTF-8 fonts tend to be
slightly larger in file size.
If you are using one of the three report templates mandated for PCI scans as of September 1,
2010 (Attestation of Compliance, PCI Executive Summary, or Vulnerability Details), or a custom
template made with sections from these templates, you can only use the RTF format. These
three templates require ASVs to fill in certain sections manually.
Tip: For information about XML export attributes, see Export template attributes on page 664.
That section describes similar attributes in the CSV export template, some of which have slightly
different names.
Various XML formats make it possible to integrate reports with third-party systems.
l Asset Report Format (ARF) provides asset information based on connection type, host name,
and IP address. This template is required for submitting reports of policy scan results to the
U.S. government for SCAP certification.
l XML Export, also known as “raw XML,” contains a comprehensive set of scan data with
minimal structure. Its contents must be parsed so that other systems can use its information.
l XML Export 2.0 is similar to XML Export, but contains additional attributes:
l Nexpose™ Simple XML is also a “raw XML” format. It is ideal for integration of scan data
with the Metasploit vulnerability exploit framework. It contains a subset of the data available in
the XML Export format:
l hosts scanned
l SCAP Compatible XML is also a “raw XML” format that includes Common Platform
Enumeration (CPE) names for fingerprinted platforms. This format supports compliance with
Security Content Automation Protocol (SCAP) criteria for an Unauthenticated Scanner
product.
l XML arranges data in clearly organized, human-readable XML and is ideal for exporting to
other document formats.
l XCCDF Results XML Report provides information about compliance tests for individual
USGCB or FDCC configuration policy rules. Each report is dedicated to one rule. The XML
output includes details about the rule itself followed by data about the scan results. If any
results were overridden, the output identifies the most recent override as of the time the report
was run. See Overriding rule test results.
l CyberScope XML Export organizes scan data for submission to the CyberScope application.
Certain entities are required by the U.S. Office of Management and Budget to submit
CyberScope-formatted data as part of a monthly program of reporting threats.
l Qualys* XML Export is intended for integration with the Qualys reporting framework.
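Because the “raw XML” formats must be parsed before other systems can use their information, a short script is often the first integration step. The sketch below uses Python's standard library; the element and attribute names in the sample fragment are illustrative only and may not match the actual XML Export schema, so check a real export before relying on them.

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment in the spirit of a "raw XML" export; the real
# schema's element and attribute names may differ.
SAMPLE = """
<NexposeReport>
  <node address="10.0.0.5">
    <test id="ssh-openssh-cve-2016-0777" status="vulnerable-exploited"/>
    <test id="http-options" status="not-vulnerable"/>
  </node>
</NexposeReport>
"""

def confirmed_findings(xml_text):
    """Return (address, test id) pairs whose status marks a positive check."""
    root = ET.fromstring(xml_text)
    hits = []
    for node in root.iter("node"):
        for test in node.iter("test"):
            if test.get("status", "").startswith("vulnerable"):
                hits.append((node.get("address"), test.get("id")))
    return hits

print(confirmed_findings(SAMPLE))
```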
You can open a CSV (comma separated value) report in Microsoft Excel. It is a powerful and
versatile format. Not only does it contain a significantly greater amount of scan information than is
available in report templates, but you can easily use macros and other Excel tools to manipulate
this data and provide multiple views of it. Two CSV formats are available:
The CSV Export format works only with the Basic Vulnerability Check Results template and any
Data-type custom templates. See Fine-tuning information with custom report templates on page
511.
Using Excel pivot tables to create custom reports from a CSV file
The pivot table feature in Microsoft Excel allows you to process report data in many different
ways, essentially creating multiple reports from one exported CSV file. Following are instructions for
using pivot tables. These instructions reflect Excel 2007. Other versions of Excel provide similar
workflows.
If you have Microsoft Excel installed on the computer with which you are connecting to the
Security Console, click the link for the CSV file on the Reports page. This will start Microsoft
Excel and open the file. If you do not have Excel installed on the computer with which you are
connecting to the console, download the CSV file from the Reports page, and transfer it to a
computer that has Excel installed. Then, use the following procedure.
Excel opens a new, blank sheet. To the right of this sheet is a bar with the title PivotTable
Field List, which you will use to create reports. In the top pane of this bar is a list of fields that
you can add to a report. Most of these fields are self-explanatory.
The result-code field provides the results of vulnerability checks. See How vulnerability
exceptions appear in XML and CSV formats on page 524 for a list of result codes and their
descriptions.
The severity field provides numeric severity ratings. The application assigns each
vulnerability a severity level, which is listed in the Severity column. The three severity levels—
Critical, Severe, and Moderate—reflect how much risk a given vulnerability poses to your
network security. The application uses various factors to rate severity, including CVSS
scores, vulnerability age and prevalence, and whether exploits are available.
Note: The severity field is not related to the severity score in PCI reports.
l 8 to 10 = Critical
l 4 to 7 = Severe
l 1 to 3 = Moderate
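The mapping from the numeric severity field to a severity level can be expressed directly. A sketch based on the ranges listed above; the function name is ours:

```python
def severity_level(score):
    """Map the numeric severity field from a CSV export to its level name.

    Ranges follow the documentation: 8-10 Critical, 4-7 Severe, 1-3 Moderate.
    """
    if 8 <= score <= 10:
        return "Critical"
    if 4 <= score <= 7:
        return "Severe"
    if 1 <= score <= 3:
        return "Moderate"
    raise ValueError("severity outside the documented 1-10 range: %r" % score)

print(severity_level(9))  # -> Critical
```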
The next steps involve choosing fields for the type of report that you want to create, as in the three
following examples.
Example 1: Creating a report that lists the five most numerous exploited vulnerabilities
The resulting report lists the five most numerous exploited vulnerabilities.
Example 2: Creating a report that lists required Microsoft hot-fixes for each asset
The resulting report lists required Microsoft hot-fixes for each asset.
Example 3: Creating a report that lists the most critical vulnerabilities and the systems that are at
risk
The resulting report lists the most critical vulnerabilities and the assets that are at risk.
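If you prefer scripting to Excel, Example 1 can also be reproduced with Python's standard library. The column names below are placeholders, not the exact export schema; match them to the header row of your actual CSV export.

```python
import csv
import io
from collections import Counter

# Stand-in for a downloaded CSV export; header names are illustrative only.
CSV_TEXT = """asset,vuln-title,result-code
10.0.0.5,OpenSSL Heartbleed,ve
10.0.0.6,OpenSSL Heartbleed,ve
10.0.0.7,Weak SSH Ciphers,vv
10.0.0.8,OpenSSL Heartbleed,ve
"""

def top_exploited(csv_text, n=5):
    """Count rows whose result code marks an exploited vulnerability (ve)."""
    counts = Counter(
        row["vuln-title"]
        for row in csv.DictReader(io.StringIO(csv_text))
        if row["result-code"] == "ve"
    )
    return counts.most_common(n)

print(top_exploited(CSV_TEXT))
```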
Vulnerability exceptions can be important for the prioritization of remediation projects and for
compliance audits. Report templates include a section dedicated to exceptions. See Vulnerability
Exceptions on page 663. In XML and CSV reports, exception information is also available.
XML: The vulnerability test status attribute will be set to one of the following values for
vulnerabilities suppressed due to an exception:
l ds (skipped, disabled): A check was not performed because it was disabled in the scan
template.
l ee (excluded, exploited): A check for an exploitable vulnerability was excluded.
l ep (excluded, potential): A check for a potential vulnerability was excluded.
l er (error during check): An error occurred during the vulnerability check.
l ev (excluded, version check): A check was excluded. It is for a vulnerability that can be
identified because the version of the scanned service or application is associated with known
vulnerabilities.
l nt (no tests): There were no checks to perform.
l nv (not vulnerable): The check was negative.
l ov (overridden, version check): A check for a vulnerability that would ordinarily be positive
because the version of the target service or application is associated with known
vulnerabilities was negative due to information from other checks.
l sd (skipped because of DoS settings): If unsafe checks were not enabled in the scan
template, the application skipped the check because of the risk of causing denial of service
(DoS). See Configuration steps for vulnerability check settings on page 562.
l sv (skipped because of inapplicable version): the application did not perform a check because
the version of the scanned item is not included in the list of checks.
l uk (unknown): An internal issue prevented the application from reporting a scan result.
l ve (vulnerable, exploited): The check was positive as indicated by asset-specific vulnerability
tests. Vulnerabilities with this result appear in the CSV report if the Vulnerabilities found result
type was selected in the report configuration. See Filtering report scope with vulnerabilities on
page 349.
l vp (vulnerable, potential): The check for a potential vulnerability was positive.
l vv (vulnerable, version check): The check was positive. The version of the scanned service or
software is associated with known vulnerabilities.
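For automated triage of a CSV export, the result codes above group naturally by what they tell you. A sketch; the grouping is our reading of the list, not an official classification:

```python
# Result codes from the list above, grouped by meaning.
POSITIVE   = {"ve", "vp", "vv"}        # check was positive: remediation candidates
EXCEPTED   = {"ee", "ep", "ev"}        # suppressed by a vulnerability exception
NEGATIVE   = {"nv", "ov"}              # check was negative (ov: overridden by other checks)
NOT_TESTED = {"ds", "nt", "sd", "sv"}  # check never actually ran
PROBLEM    = {"er", "uk"}              # error or unknown internal issue

def needs_remediation(code):
    """True only for codes that indicate a positive vulnerability result."""
    return code in POSITIVE

print([c for c in ("ve", "sd", "vp", "nv") if needs_remediation(c)])
```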
You can output the Database Export report format to Oracle, MySQL, and Microsoft SQL Server.
Nexpose provides a schema that shows what data is included in the report and how
the data is arranged, which is helpful for understanding how you can work with the
data. You can request the database export schema from Technical Support.
Reports contain a great deal of information. It’s important to study them carefully for better
understanding, so that they can help you make more informed security-related decisions.
The data in a report is a static snapshot in time. The data displayed in the Web interface changes
with every scan. Variance between the two, such as in the number of discovered assets or
vulnerabilities, is most likely attributable to changes in your environment since the last report.
For stakeholders in your organization who need fresh data but don’t have access to the Web
interface, run reports more frequently. Or use the report scheduling feature to automatically
synchronize report schedules with scan schedules.
In environments that are constantly changing, Baseline Comparison reports can be very useful.
If your report data turns out to be much different from what you expected, consider several
factors that may have skewed the data.
l Lack of credentials: If certain information is missing from a report, such as discovered files,
spidered Web sites, or policy evaluations, check to see if the scan was configured with proper
logon information. The application cannot perform many checks without being able to log onto
target systems as a normal user would.
l Policy checks not enabled: Another reason that policy settings may not appear in a report is
that policy checks were not enabled in the scan template.
l Discovery-only templates: If no vulnerability data appears in a report, check to see if the scan
was performed with a discovery-only scan template, which does not check for vulnerabilities.
l Certain vulnerability checks enabled or disabled: If your report shows more or fewer vulnerabilities than you
expected, check the scan template to see which checks have been enabled or disabled.
l Unsafe checks not enabled: If a report indicates that a check was skipped because of
denial of service (DoS) settings, as with the sd result code in CSV reports, then unsafe
checks were not enabled in the scan template.
l Manual scans: A manual scan performed under unusual conditions for a site can affect
reports. For example, an automatically scheduled report that only includes recent scan data is
related to a specific, multiple-asset site that has automatically scheduled scans. A user runs a
manual scan of a single asset to verify a patch update. The report may include that scan data,
showing only one asset, because it is from the most recent scan.
If you are disseminating reports using multiple formats, keep in mind that different formats affect
not only how data is presented, but what data is presented. The human-readable formats, such
as PDF and HTML, are intended to display data that is organized by the document report
templates. These templates are more “selective” about data to include. On the other hand, XML
Export, XML Export 2.0, CSV, and export templates essentially include all possible data from
scans.
Remediating confirmed vulnerabilities is a high security priority, so it’s important to look for
confirmed vulnerabilities in reports. However, don’t get thrown off by listings of potential or
unconfirmed vulnerabilities. And don’t dismiss these as false positives.
The application will flag a vulnerability if it discovers certain conditions that make it probable that
the vulnerability exists. If, for any reason, it cannot absolutely verify that the vulnerability is there, it
will list the vulnerability as potential or unconfirmed. Or it may indicate that the version of the
scanned operating system or application is vulnerable.
The fact that a vulnerability is a “potential” vulnerability or otherwise not officially confirmed does
not diminish the probability that it exists or that some related security issue requires your
attention. You can confirm a vulnerability by running an exploit if one is available. See Working
with vulnerabilities on page 259. You also can examine the scan log for the certainty with which a
potentially vulnerable item was fingerprinted. A high level of fingerprinting certainty may indicate
a greater likelihood of vulnerability.
You can find out the certainty level of a reported vulnerability in different areas:
l The PCI Audit report includes a table that lists the status of each vulnerability. Status refers to
the certainty characteristic, such as Exploited, Potential, or Vulnerable Version.
l The Report Card report includes a similar status column in one of its tables, which also lists
information about the test that the application performed for each vulnerability on each asset.
l The XML Export and XML Export 2.0 reports include an attribute called test status, which
includes certainty characteristics, such as vulnerable-exploited, and not-vulnerable.
l The CSV report includes result codes related to certainty characteristics.
l If you have access to the Web interface, you can view the certainty characteristics of a
vulnerability on the page that lists details about the vulnerability.
When reviewing reports, look beyond vulnerabilities for other signs that may put your network at
risk. For example, the application may discover a telnet service and list it in a report. A telnet
service is not a vulnerability. However, telnet is an unencrypted protocol. If a server on your
network is using this protocol to exchange information with a remote computer, it's easy for an
uninvited party to monitor the transmission. You may want to consider using SSH instead.
In another example, it may discover a Cisco device that permits Web requests to go to an HTTP
server, instead of redirecting them to an HTTPS server. Again, this is not technically a
vulnerability, but this practice may be exposing sensitive data.
A long list of vulnerabilities in a report can be a daunting sight, and you may wonder which
problem to tackle first. The vulnerability database contains checks for over 12,000 vulnerabilities,
and your scans may reveal more vulnerabilities than you have time to correct.
One effective way to prioritize vulnerabilities is to note which have real exploits associated with
them. A vulnerability with known exploits poses a very concrete risk to your network. The Exploit
ExposureTM feature flags vulnerabilities that have known exploits and provides exploit
information links to Metasploit modules and the Exploit Database. It also uses the exploit ranking
data from the Metasploit team to rank the skill level required for a given exploit. This information
appears in vulnerability listings right in the Security Console Web interface, so you can see right
away which vulnerabilities are exploitable.
Since you can’t predict the skill level of an attacker, it is a strongly recommended best practice to
immediately remediate any vulnerability that has a live exploit, regardless of the skill level
required for an exploit or the number of known exploits.
l Using most recent scan data: If old assets that are no longer in use still appear in your reports,
and if this is not desirable, make sure to enable the check box labeled Use the last scan data
only.
l Report schedule out of sync with scan schedule: If a report is showing no change in the
number of vulnerabilities despite the fact that you have performed substantial remediation
since the last report was generated, check the report schedule against the scan schedule.
Make sure that reports are automatically generated to follow scans if they are intended to
show patch verification.
l Assets not included: If a report is not showing expected asset data, check the report
configuration to see which sites and assets have been included and omitted.
l Vulnerabilities not included: If a report is not showing an expected vulnerability, check the
report configuration to see which vulnerabilities have been filtered from the report. On the
Scope section of the Create a report panel, click Filter report scope based on
vulnerabilities and verify that the filters are set to include the categories and severity
level you need.
Another way to prioritize vulnerabilities is according to their risk scores. A higher score warrants
higher priority.
The application calculates risk scores for every asset and vulnerability that it finds during a scan.
The scores indicate the potential danger that the vulnerability poses to network and business
security based on impact and likelihood of exploit.
Risk scores are calculated according to different risk strategies. See Working with risk strategies
to analyze threats on page 610.
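Combining the two prioritization rules above (exploitable vulnerabilities first, then descending risk score) can be sketched as a simple sort. The field names are hypothetical, not a Nexpose export schema:

```python
# Hypothetical triage list; fields mirror the guidance above: remediate
# exploitable vulnerabilities first, then order by risk score.
findings = [
    {"title": "Weak SSH ciphers", "risk": 350.0, "exploit": False},
    {"title": "Heartbleed",       "risk": 820.0, "exploit": True},
    {"title": "Telnet enabled",   "risk": 410.0, "exploit": True},
]

# False sorts before True, so "not exploit" puts exploitable findings first;
# negating risk sorts the highest scores first within each group.
queue = sorted(findings, key=lambda f: (not f["exploit"], -f["risk"]))
print([f["title"] for f in queue])
```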
You can use the ticketing system to manage the remediation work flow and delegate remediation
tasks. Each ticket is associated with an asset and contains information about one or more
vulnerabilities discovered during the scanning process.
Viewing tickets
Click the Tickets icon to view all active tickets. The console displays the Tickets page.
Click a link for a ticket name to view or update the ticket. See the following section for details
about editing tickets. From the Tickets page, you also can click the link for an asset's address to
view information about that asset, and open a new ticket.
The process of creating a new ticket for an asset starts on the Security Console page that lists
details about that asset. You can get to that page by selecting a view option on the Assets page
and following the sequence of console pages that ends with the page for that asset. See Locating and working
with assets on page 235.
Opening a ticket
When you want to create a ticket for a vulnerability, click the Open a ticket button, which appears
at the bottom of the Vulnerability Listings pane on the detail page for each asset. See Locating
assets by sites on page 238. The console displays the General page of the Ticket Configuration
panel.
On the Ticket Configuration–General page, type a name for the new ticket. Names do not have to be
unique. They appear in ticket notifications, reports, and the list of tickets on the Tickets page.
The status of the ticket appears in the Ticket State field. You cannot modify this field in the panel.
The state changes as the ticket issue is addressed.
Note: If you need to assign the ticket to a user who does not appear on the drop-down list, you
must first add that user to the associated asset group.
Assign a priority to the ticket, ranging from Critical to Low, depending on factors such as the
vulnerability level. The priority of a ticket is often associated with external ticketing systems.
You can close the ticket to stop any further remediation action on the related issue. To do so, click
the Close Ticket button on this page. The console displays a box with a drop-down list of reasons
for closing the ticket. Options include Problem fixed, Problem not reproducible, and Problem not
considered an issue (policy reasons). Add any other relevant information in the dialog box and
click the Save button.
Adding vulnerabilities
Click the Select Vulnerabilities... button. The console displays a box that lists all reported
vulnerabilities for the asset. You can click the link for any vulnerability to view details about it,
including remediation guidance.
Select the check boxes for all the vulnerabilities you wish to include in the ticket, and click the
Save button. The selected vulnerabilities appear on the Vulnerabilities page.
You can update coworkers on the status of a remediation project, or note impediments,
questions, or other issues, by annotating the ticket history. As Nexpose users and administrators
add comments related to the work flow, you can track the remediation progress.
3. Click Save.
As you use the application to gather, view, and share security information, you may want to adjust
the settings of the features that support these operations.
Tune provides guidance on adjusting or customizing settings for scans, risk calculation, and
configuration assessment.
Working with scan templates and tuning scan performance on page 534: After familiarizing
yourself with different built-in scan templates, you may want to customize your own scan
templates for maximum speed or accuracy in your network environment. This section provides
best practices for scan tuning and guides you through the steps of creating a custom scan
template.
Working with risk strategies to analyze threats on page 610: The application provides several
strategies for calculating risk. This section explains how each strategy emphasizes certain
characteristics, allowing you to analyze risk according to your organization’s unique security
needs or objectives. It also provides guidance for changing risk strategies and supporting custom
strategies.
Creating a custom policy on page 589: You can create custom configuration policies based on
USGCB and FDCC policies, allowing you to check your environment for compliance with your
organization’s unique configuration policies. This section guides you through the configuration steps.
Working with scan templates and tuning scan
performance
You may want to improve scan performance. You may want to make scans faster or more
accurate. Or you may want scans to use fewer network resources. The following section provides
best practices for scan tuning and instructions for working with scan templates.
Tuning scans is a sensitive process. If you change one setting to attain a certain performance
boost, you may find another aspect of performance diminished. Before you tweak any scan
templates, it is important for you to know two things:
Identify your goals and how they’re related to the performance “triangle.” See Keep the “triangle”
in mind when you tune on page 536. Doing so will help you look at scan template configuration in
the more meaningful context of your environment. Make sure to familiarize yourself with scan
template elements before changing any settings.
Also, keep in mind that tuning scan performance requires some experimentation, finesse, and
familiarity with how the application works. Most importantly, you need to understand your unique
network environment.
This introductory section talks about why you would tune scan performance and how different
built-in scan templates address different scanning needs:
See also the appendix that compares all of our built-in scan templates and their use cases:
Familiarizing yourself with built-in templates is helpful for customizing your own templates. You
can create a custom template that incorporates many of the desirable settings of a built-in
template and customize just a few settings, rather than creating a new template from scratch.
Before you tune scan performance, make sure you know why you’re doing it. What do you want
to change? What do you need it to do better? Do you need scans to run more quickly? Do you
need scans to be more accurate? Do you want to reduce resource overhead?
Your goal may be to increase overall scan speed, as in the following scenarios:
l Actual scan-time windows are widening and conflicting with your scan blackout periods. Your
organization may schedule scans for non-business hours, but scans may still be in progress
when employees in your organization need to use workstations, servers, or other network
resources.
l A particular type of scan, such as for a site with 300 Windows workstations, is taking an
especially long time with no end in sight. This could be a “scan hang” issue rather than simply
a slow scan.
Note: If a scan is taking an extraordinarily long time to finish, terminate the scan and contact
Technical Support.
l You need to be able to schedule more scans within the same time window.
l Policy or compliance rules have become more stringent for your organization, requiring you to
perform “deeper” authenticated scans, but you don't have additional time to do this.
l You have to scan more assets in the same amount of time.
l You have to scan the same number of assets in less time.
l You have to scan more assets in less time.
Your goal may be to lower the hit on resources, as in the following scenarios:
l Your scans are taking up too much bandwidth and interfering with network performance for
other important business processes.
l The computers that host your Scan Engines are maxing out their memory if they scan a
certain number of ports.
l The Security Console runs out of memory if you perform too many simultaneous scans.
Scans may not be giving you enough information, as in the following scenarios:
Any tuning adjustment that you make to scan settings will affect one or more main performance
categories.
These categories reflect the general goals for tuning discussed in the preceding section:
l accuracy
l resources
l time
If you lengthen one side of the triangle—that is, if you favor one performance category—you will
shorten at least one of the other two sides. It is unrealistic to expect a tuning adjustment to
lengthen all three sides of the triangle. However, you often can lengthen two of the three sides.
Reducing the time that scans take typically means making scans run faster. One use case is that of
a company that holds auctions in various locations around the world. Its asset inventory is slightly
over 1,000. This company cannot run scans while auctions are in progress because time-
sensitive data must traverse the network at these times without interruptions. The fact that the
company holds auctions in various time zones complicates scan scheduling. Scan windows are
extremely tight. The company's best solution is to use a lot of bandwidth so that scans can finish as
quickly as possible.
In this case it’s possible to reduce scan time without sacrificing accuracy. However, a high
workload may tap resources to the point that the scanning mechanisms become unstable. If that
happens, it may be necessary to reduce the level of accuracy by, for example, turning off
credentialed scanning.
There are various ways to increase scan speeds, including the following:
l Increase the number of assets that are scanned simultaneously. Be aware that this will tax
RAM on Scan Engines and the Security Console.
l Allocate more scan threads. Doing so will impact network bandwidth.
l Use a less exhaustive scan template. Again, this will diminish the accuracy of the scan.
l Add Scan Engines, or position them in the network strategically. If you have one hour to scan
200 assets over low bandwidth, placing a Scan Engine on the same side of the firewall as
those assets can speed up the process. When deploying a Scan Engine relative to target
assets, choose a location that maximizes bandwidth and minimizes latency. For more
information on Scan Engine placement, refer to the administrator’s guide.
Increasing accuracy
There are many ways to do this, each with its own “cost” according to the performance triangle:
Increase the number of discovered assets, services, or vulnerability checks. This will take more
time.
“Deepen” scans with checks for policy compliance and hotfixes. These types of checks require
credentials and can take considerably more time.
Scan assets more frequently. For example, peripheral network assets, such as Web servers or
Virtual Private Network (VPN) concentrators, are more susceptible to attack because they are
exposed to the Internet. It’s advisable to scan them often. Doing so will either require more time
or consume more of your scanning resources.
Be aware of license limits when scanning network services. When the application attempts to
connect to a service, it appears to that service as another “client,” or user. The service may have
a defined limit for how many simultaneous client connections it can support. If a service has
reached that client capacity when the application attempts a connection, the service will reject the
attempt. This is often the case with telnet-based services. If the application cannot connect to a
service to scan it, that service won’t be included in the scan data, which means lower scan
accuracy.
Making more resources available primarily means reducing how much bandwidth a scan
consumes. It can also involve lowering RAM use, especially on 32-bit operating systems.
Consider bandwidth availability in four major areas of your environment. Any one or more of
these can become a bottleneck:
l The computer that hosts the application can get bogged down processing responses from
target assets.
l The network infrastructure that the application runs on, including firewalls and routers, can get
bogged down with traffic.
l The network on which target assets run, including firewalls and routers, can get bogged down
with traffic.
l The target assets can get bogged down processing requests from the application.
Of particular concern is the network on which target assets run, simply because some portion of
total bandwidth is always in use for business purposes. This is especially true if you schedule
scans to run during business hours, when workstations are running and laptops are plugged into
the network. Bandwidth sharing also can be an issue during off hours, when backup processes
are in progress.
Two related bandwidth metrics to keep an eye on are the number of data packets exchanged
during the scan and the corresponding firewall connection states. If the application sends too
many packets per
second (pps), especially during the service discovery and vulnerability check phases of a scan, it
can exceed a firewall’s capacity to track connection states. The danger here is that the firewall will
start dropping request packets, or the response packets from target assets, resulting in false
negatives. So, taxing bandwidth can trigger a drop in accuracy.
There is no formula to determine how much bandwidth should be used. You have to know how
much bandwidth your enterprise uses on average, as well as the maximum amount of bandwidth
your network can support.
For example, if your network can handle a maximum of 10,000 pps without service disruptions,
and your normal business processes average about 3,000 pps at any given time, your goal is to
have the application work within a window of 7,000 pps.
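The arithmetic behind this example can be sketched in a few lines. The function name and the optional safety margin are illustrative additions, not product settings:

```python
# Estimate the packet-per-second budget available for scan traffic.
# The numbers below come from the example above; safety_margin is an
# illustrative assumption, not a product default.
def scan_pps_budget(network_max_pps, business_avg_pps, safety_margin=0.0):
    """Return the pps window left over for scanning.

    safety_margin is a fraction of network capacity to hold in reserve.
    """
    reserve = network_max_pps * safety_margin
    budget = network_max_pps - business_avg_pps - reserve
    return max(0, int(budget))

print(scan_pps_budget(10_000, 3_000))  # 7000 pps window, as in the example
```

If average business traffic ever exceeds network capacity, the function floors the budget at zero rather than reporting a negative window.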
The primary scan template settings for controlling bandwidth are scan threads and maximum
simultaneous ports scanned.
For example, a company operates full-service truck stops in one region of the United States. Its
security team scans multiple remote locations from a central office. Bandwidth is considerably
low due to the types of network connections. Because the number of assets in each location is
lower than 25, adding remote Scan Engines is not a very efficient solution. A viable solution in this
situation is to reduce the number of scan threads to between two and five, which is well below the
default value of 10.
There are various other ways to increase resource availability, including the following:
l Reduce the number of target assets, services, or vulnerability checks. The cost is accuracy.
l Reduce the number of assets that are scanned simultaneously. The cost is time.
l Perform less exhaustive scans. Doing so primarily reduces scan times, but it also frees up
threads.
Scan templates contain a variety of parameters for defining how assets are scanned. Most tuning
procedures involve editing scan template settings.
The built-in scan templates are designed for different use cases, such as PCI compliance,
Microsoft Hotfix patch verification, Supervisory Control And Data Acquisition (SCADA)
equipment audits, and Web site scans. You can find detailed information about scan templates in
the section titled Scan templates on page 639. This section includes use cases and settings for
each scan template.
Note: Until you are familiar with technical concepts related to scanning, such as port discovery
and packet delays, it is recommended that you use built-in templates.
You will notice that if you select the option to create a new template, many basic configuration
settings have built-in values. It is recommended that you do not change these values unless you
have a thorough working knowledge of what they are for. Use particular caution when changing
any of these built-in values.
If you customize a template based on a built-in template, you may not need to change every
single scan setting. You may, for example, only need to change a thread number or a range of
ports and leave all other settings untouched.
For these reasons, it’s a good idea to perform any customizations based on built-in templates.
Start by familiarizing yourself with built-in scan templates and understanding what they have in
common and how they differ. The following section is a comparison of four sample templates.
Understanding the phases of scanning is helpful in understanding how scan templates are
structured.
l asset discovery
l service discovery
l vulnerability checks
Note: The discovery phase in scanning is a different concept than that of asset discovery, which
is a method for finding potential scan targets in your environment.
During the asset discovery phase, a Scan Engine sends out simple packets at high speed to
target IP addresses in order to verify that network assets are live. You can configure timing
intervals for these communication attempts, as well as other parameters, on the Asset
Discovery and Discovery Performance pages of the Scan Template Configuration panel.
Upon locating the asset, the Scan Engine begins the service discovery phase, attempting to
connect to various ports and to verify services for establishing valid connections. Because the
application scans Web applications, databases, operating systems and network hardware, it has
many opportunities for attempting access. You can configure attributes related to this phase on
During the third phase, known as the vulnerability check phase, the application attempts to
confirm vulnerabilities listed in the scan template. You can select which vulnerabilities to scan for
in Vulnerability Checking page of the Scan Template Configuration panel.
Other configuration options include limiting the types of services that are scanned, searching for
specific vulnerabilities, and adjusting network bandwidth usage.
In every phase of scanning, the application identifies as many details about the asset as possible
through a set of methods called fingerprinting. By inspecting properties such as the specific bit
settings in reserved areas of a buffer, the timing of a response, or a unique acknowledgment
interchange, the application can identify indicators about the asset's hardware, operating system,
and, perhaps, applications running under the system. A well-protected asset can mask its
existence, its identity, and its components from a network scanner.
When you become familiar with the built-in scan templates, you may find that they meet different
performance needs at different times.
Tip: Use your variety of report templates to parse your scan results in many useful ways. Scans
are a resource investment, especially “deeper” scans. Reports help you to reap the biggest
possible returns from that investment.
You could, for example, schedule a Web audit to run on a weekly basis, or even more frequently,
to monitor your Internet-facing assets. This is a faster scan and less of a drain on resources. You
could also schedule a Microsoft hotfix scan on a monthly basis for patch verification. This scan
requires credentials, so it takes longer. But the trade-off is that it doesn't have to occur as
frequently. Finally, you could schedule an exhaustive scan on a quarterly basis to get a detailed,
all-encompassing view of your environment. It will take time and bandwidth but, again, it's a less
frequent scan that you can plan for in advance.
Note: If you change templates regularly, you will sacrifice the conveniences of scheduling scans
to run at automatic intervals with the same template.
Another way to maximize time and resources without compromising on accuracy is to alternate
target assets. For example, instead of scanning all your workstations on a nightly basis, scan a
third of them and then scan the other two thirds over the next 48 hours. Or, you could alternate
target ports in a similar fashion.
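The rotation scheme above can be sketched as a simple partition keyed to the day of the year. The three-way split and the function name are illustrative assumptions:

```python
# Sketch of alternating scan targets: split the asset inventory into
# thirds and scan one third per night, cycling every three days. The
# "parts" count and day-of-year keying are illustrative choices.
def nightly_slice(assets, day_of_year, parts=3):
    group = day_of_year % parts
    # Sort first so the same asset always lands in the same group.
    return [a for i, a in enumerate(sorted(assets)) if i % parts == group]
```

Over any three consecutive nights, every asset is scanned exactly once; the same idea applies to rotating target port ranges.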
Sometimes, tuning scan performance is a simple matter of turning off one or two settings in a
template. The fewer things you check for, the less time or bandwidth you'll need to complete a
scan. However, your scan will be less comprehensive, and so, less accurate.
Note: Credentialed checks are critical for accuracy, as they make it possible to perform “deep”
system scans. Be absolutely certain that you don't need credentialed checks before you turn
them off.
If the scope of your scan does not include Web assets, turn off Web spidering, and disable Web-
related vulnerability checks. If you don't have to verify hotfix patches, disable any hotfix checks.
Turn off credentialed checks if you are not interested in running them. If you do run credentialed
checks, make sure you are only running necessary ones.
An important note here is that you need to know exactly what's running on your network in order
to know what to turn off. This is where discovery scans become so valuable. They provide you
with a reliable, dynamic asset inventory. For example, if you learn, from a discovery scan, that
you have no servers running Lotus Notes/Domino, you can exclude those policy checks from the
scan.
To begin modifying a default template, go to the Administration page, and click Manage for Scan
Templates. The console displays the Scan Templates page.
You cannot directly edit a built-in template. Instead, make a copy of the template and edit that
copy. When you click Copy for any default template listed on the page, the console displays the
Scan Template Configuration panel.
To create a custom scan template from scratch, go to the Administration page, and click
create for Scan Templates.
Note: The PCI-related scanning and reporting templates are packaged with the application, but
they require purchase of a license in order to be visible and available for use. The FDCC template
is only available with a license that enables FDCC policy scanning.
The console displays the Scan Template Configuration panel. All attribute fields are blank.
Configuring templates to fine-tune scan performance involves trial and error and may include
unexpected results at first. You can prevent some of these by knowing your network topology,
your asset inventory, and your organization’s schedule and business practices. And always keep
the triangle in mind. For example, don’t increase simultaneous scan tasks dramatically if you
know that backup operations are in progress. The usage spike might impact bandwidth.
Familiarize yourself with built-in scan templates and how they work before changing any settings
or customizing templates from scratch. See Scan templates on page 639.
Many products provide default login user IDs and passwords upon installation. Oracle ships with
over 160 default user IDs. Windows users may neglect to disable the guest account on their systems. If
you don’t disable the default account vulnerability check type when creating a scan template, the
application can perform checks for these items. See Configuration steps for vulnerability check
settings on page 562 for information on enabling and disabling vulnerability check types.
l CVS
l Sybase
l AS/400
l DB2
l SSH
l Oracle
l Telnet
l CIFS (Windows File Sharing)
l FTP
l POP
l HTTP
l SNMP
l SQL/Server
l SMTP
To specify user IDs and passwords for logon, you must enter appropriate credentials during site
configuration. See Configuring scan credentials on page 87. If a specific asset is not chosen to
restrict credential attempts, the application will attempt to use these credentials on all assets.
If a specific service is not selected, it will attempt to use the supplied credentials to access all
services.
If you are creating a new scan template from scratch, start with the following steps:
1. On the Administration page, click the Create link for Scan templates.
OR
If you are in the Browse Scan Templates window for a site configuration, click Create.
2. On the Scan Template Configuration—General page, enter a name and description for the
new template.
3. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
You can configure your template to include all available types of scanning, or you can limit the
scope of the scan to focus resources on specific security needs. To select the type of scanning
you want to do, take the following steps.
l Asset Discovery—Asset discovery occurs with every scan, so this option is always selected. If
you select only Asset Discovery, the template will not include any vulnerability or policy
checks. By default, all other options are selected, so you need to clear the other option check
boxes to select asset discovery only.
l Vulnerabilities—Select this option if you want the scan to include vulnerability checks. To
select or exclude specific checks, click the Vulnerability Checks link in the left navigation
pane of the configuration panel. See Configuration steps for vulnerability check settings on
page 562.
l Web Spidering—Select this option if you want the scan to include checks that are performed in
the process of Web spidering. If you want to perform Web spidering checks only, you will need
to click the Vulnerability Checks link in the left navigation pane of the configuration panel and
disable non-Web spidering checks. See Configuration steps for vulnerability check settings
on page 562. You must select the vulnerabilities option first in order to select Web spidering.
l Policies—Select this option if you want the scan to include policy checks, including Policy
Manager. You will need to select individual checks and configure other settings, depending on
the policy. See Selecting Policy Manager checks on page 567, Configuring verification of
standard policies on page 570, and Performing configuration assessment on page 637.
3. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
You may want to improve scan performance by tuning the number of scan processes or tasks
that occur simultaneously.
Increasing the number of simultaneous scan processes against a host with an excessive number
of ports can reduce scan time. In “tar pits,” or scan environments with targets that have a very
high number of ports open, such as 60,000 or more, scanning multiple hosts simultaneously can
help the scan complete more quickly without timing out.
Note: If protocol fingerprinting exceeds one hour, it will stop and be reported as a timeout in the
scan log.
To access these settings, click the General tab of the Scan Template Configuration panel. The
settings appear at the bottom of the General page. To change the value for either default setting,
enter a different value in the respective text box.
For built-in scan templates, the default values depend on the scan template. For example, in the
Discovery Scan - Aggressive template, the default number of hosts to scan simultaneously per
Scan Engine is 25. This setting is higher than most built-in templates, because it is designed for
higher-speed networks.
You can optimize scan performance by configuring the number of simultaneous scan processes
against each host to match the average number of ports open per host in your environment.
Resource considerations
Scanning high numbers of assets simultaneously can be memory intensive. Consider lowering
these settings if you encounter short-term memory issues. As a general rule, keep the setting for
simultaneous host scanning within 10 hosts per 4 GB of memory on the Scan Engine.
Certain scan operations, such as policy scanning or Web spidering, consume more memory per
host. If such operations are enabled, you may need to reduce the number of hosts being scanned
in parallel to compensate.
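The rule of thumb above can be turned into a rough sizing sketch. The 0.5 reduction factor for memory-heavy checks is an illustrative assumption, not a documented value:

```python
# Rough sizing sketch for the "10 simultaneous hosts per 4 GB" rule of
# thumb. The halving applied when policy scanning or Web spidering is
# enabled is an assumption made here for illustration.
def max_simultaneous_hosts(engine_ram_gb, heavy_checks=False):
    hosts = (engine_ram_gb / 4) * 10
    if heavy_checks:  # policy scanning or Web spidering enabled
        hosts *= 0.5
    return max(1, int(hosts))

print(max_simultaneous_hosts(8))                      # 20
print(max_simultaneous_hosts(8, heavy_checks=True))   # 10
```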
If you choose not to configure asset discovery in a custom scan template, the scan will begin with
service discovery.
Determining whether target assets are live can be useful in environments that contain large
numbers of assets, which can be difficult to keep track of. Filtering out dead assets from the scan
job helps reduce scan time and resource consumption.
The potential downside is that firewalls or other protective devices may block discovery
connection requests, causing target assets to appear dead even if they are live. If a firewall is on
the network, it may block the requests, either because it is configured to block network access for
any packets that meet certain criteria, or because it regards any scan as a potential attack. In
either case, the application reports the asset to be DEAD in the scan log. This can reduce the
overall accuracy of your scans. Be mindful of where you deploy Scan Engines and how Scan
Engines interact with firewalls. See Make your environment “scan-friendly” on page 585.
Using more than one discovery method promotes more accurate results. If the application cannot
verify that an asset is live with one method, it will revert to another.
Note: The Web audit and Internet DMZ audit templates do not include any of these discovery
methods.
Peripheral networks usually have very aggressive firewall rules in place, which blunts the
effectiveness of asset discovery. So for these types of scans, it’s more efficient to have the
application skip asset discovery and treat all target assets as live.
By default, the Scan Engine uses ICMP protocol, which includes a message type called ECHO
REQUEST, also known as a ping, to seek out an asset during device discovery. A firewall may
discard the pings, either because it is configured to block network access for any packets that
meet certain criteria, or because it regards any scan as a potential attack. In either case, the
application infers that the device is not present, and reports it as DEAD in the scan log.
Note: Selecting both TCP and UDP for device discovery causes the application to send out
more packets than with one protocol, which uses up more network bandwidth.
You can select TCP and/or UDP as additional or alternate options for locating live hosts. With
these protocols, the application attempts to verify the presence of assets online by opening
connections. Firewalls are often configured to allow traffic on port 80, since it is the default HTTP
port, which supports Web services. If nothing is registered on port 80, the target asset will send a
“port closed” response, or no response, to the Scan Engine. This at least establishes that the
asset is online and that port scans can occur. In this case, the application reports the asset to be
ALIVE in scan logs.
If you select TCP or UDP for device discovery, make sure to designate ports in addition to 80,
depending on the services and operating systems running on the target assets. You can view
TCP and UDP port settings on default scan templates, such as Discovery scan and Discovery
scan (aggressive) to get an idea of commonly used port numbers.
TCP is more reliable than UDP for obtaining responses from target assets. It is also used by
more services than UDP. You may wish to use UDP as a supplemental protocol, as target
devices are also more likely to block the more common TCP and ICMP packets.
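The TCP discovery behavior described above, where even a "port closed" reply proves an asset is online, can be sketched as a simple connect probe. This is an illustrative stand-in, not the product's implementation; the port and timeout values are assumptions:

```python
import socket

# Minimal sketch of TCP-based asset discovery: a refused connection
# still proves the host is up; only a timeout or unreachable error
# leaves the asset looking DEAD.
def tcp_probe(host, port=80, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "ALIVE"        # port open: host is live
    except ConnectionRefusedError:
        return "ALIVE"            # "port closed" reply: host is live
    except (socket.timeout, OSError):
        return "DEAD"             # no response: filtered or offline
```

In a real deployment you would probe several ports, not just 80, for the reasons given above.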
If a scan target is listed as a host name in the site configuration, the application attempts DNS
resolution. If the host name does not resolve, it is considered UNRESOLVED, which, for the
purposes of scanning, is the equivalent of DEAD.
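The resolution step can be sketched the same way: a name that fails DNS lookup is treated like a dead asset and skipped. The function name is an illustrative assumption:

```python
import socket

# Sketch of host-name resolution before scanning: an UNRESOLVED name
# is treated the same as a DEAD asset and excluded from the scan job.
def resolve_target(name):
    try:
        return socket.gethostbyname(name)   # scannable IP address
    except socket.gaierror:
        return None                         # UNRESOLVED, skip this target
```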
UDP is a less reliable protocol for asset discovery since it doesn’t incorporate TCP’s handshake
method for guaranteeing data integrity and ordering. Unlike TCP, if a UDP port doesn’t respond
to a communication attempt, it is usually regarded as being open.
Asset discovery can be an efficient accuracy boost. Disabling it can also increase scan times,
because with discovery enabled the application only scans assets that it verifies are live, so dead
targets never consume scan time.
It is a good idea to enable ICMP and to configure intervening firewalls to permit the exchange of
ICMP echo requests and reply packets between the application and the target network.
Make sure that TCP is also enabled for asset discovery, especially if you have strict firewall rules
in your internal networks. Enabling UDP may be excessive, given the dependability issues of
UDP ports. To make the judgment call with UDP ports, weigh the value of thoroughness
(accuracy) against that of time.
If you do not select any discovery methods, scans assume that all target assets are live, and
immediately begin service discovery.
If the application uses TCP or UDP methods for asset discovery, it sends request packets to
specific ports. If the application contacts a port and receives a response that the port is open, it
reports the host to be “live” and proceeds to scan it.
The PCI audit template includes extra TCP ports for discovery. With PCI scans, it’s critical not to
miss any live assets.
You can collect certain information about discovered assets and the scanned network before
performing vulnerability checks. All of these discovery settings are optional.
The application can query DNS and WINS servers to find other network assets that may be
scanned.
Microsoft developed Windows Internet Name Service (WINS) for name resolution in the LAN
manager environment of NT 3.5. The application can interrogate this broadcast protocol to locate
the names of Windows workstations and servers. WINS usually is not required. It was developed
originally as a system database application to support conversion of NETBIOS names to IP
addresses.
If you enable the option to discover other network assets, the application will discover and
interrogate DNS and WINS servers for the IP addresses of all supported assets. It will include
those assets in the list of scanned systems.
Whois is an Internet service that obtains information about IP addresses, such as the name of the
entity that owns them. If a Whois server is unavailable on your network, you can improve Scan
Engine performance by not requiring interrogation of a Whois server for every discovered asset.
The application identifies as many details about discovered assets as possible through a set of
methods called IP fingerprinting. By scanning an asset’s IP stack, it can identify indicators about
the asset’s hardware, operating system, and, perhaps, applications running on the system.
Settings for IP fingerprinting affect the accuracy side of the performance triangle.
The retries setting defines how many times the application will repeat the attempt to fingerprint
the IP stack. The default retry value is 0. IP fingerprinting takes up to a minute per asset. If the
application can’t fingerprint the IP stack the first time, it may not be worth the additional time to
make a second attempt. However, you can set it to retry IP fingerprinting any number of times.
Whether or not you enable IP fingerprinting, the application uses other fingerprinting methods,
such as analyzing service data from port scans. For example, by discovering Internet Information
Services (IIS) on a target asset, it can determine that the asset is a Windows Web server.
The certainty value, which ranges from 0.0 to 1.0, reflects the degree of certainty with which an
asset is fingerprinted. If a particular fingerprint is below the minimum certainty value, the
application discards the IP fingerprinting information for that asset. As with the retries setting, this
affects the accuracy side of the performance triangle.
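The interplay of the retries and minimum-certainty settings can be sketched as a small loop. Here `fingerprint_once` is a hypothetical stand-in for the scanner's IP stack probe, not a real API:

```python
# Sketch of how the retries and minimum-certainty settings interact.
# fingerprint_once is a hypothetical callable standing in for one IP
# stack probe; it returns a (result, certainty) pair.
def fingerprint_asset(fingerprint_once, retries=0, min_certainty=0.5):
    for _ in range(retries + 1):
        result, certainty = fingerprint_once()
        if result is not None and certainty >= min_certainty:
            return result
    return None  # below the threshold: the fingerprint data is discarded
```

With the default of zero retries, a single low-certainty probe means the asset goes unfingerprinted.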
You can configure scans to report unauthorized MAC addresses as vulnerabilities. The Media
Access Control (MAC) address is a hardware address that uniquely identifies each node in a
network.
In IEEE 802 networks, the Data Link Control (DLC) layer of the OSI Reference Model is divided
into two sub layers: the Logical Link Control (LLC) layer and the Media Access Control (MAC)
layer. The MAC layer interfaces directly with the network media. Each different type of network
media requires a different MAC layer. On networks that do not conform to the IEEE 802
standards but do conform to the OSI Reference Model, the node address is called the Data Link
Control (DLC) address.
l SNMP must be enabled on the router or switch managing the appropriate network segment.
l The application must be able to perform authenticated scans on the SNMP service for the
router or switch that is controlling the appropriate network segment. See Enabling
authenticated scans of SNMP services on page 553.
l The application must have a list of trusted MAC addresses against which to check the set of
assets located during a scan. See Creating a list of authorized MAC addresses on page 554.
l The scan template must have MAC address reporting enabled. See Enabling reporting of
MAC addresses in the scan template on page 554.
l The Scan Engine performing the scan must reside on the same segment as the systems
being scanned.
To enable the application to perform authenticated scans to obtain the MAC address, take the
following steps:
1. On the Home page of the console interface, click Edit for the site for which you are creating
the new scan template.
The console displays the Site Configuration panel for that site.
3. Enter logon information for the SNMP service for the router or switch that is controlling the
appropriate network segment. This will allow the application to retrieve the MAC addresses
from the router using ARP requests.
4. Test the credential if desired.
For detailed information about configuring credentials, see Configuring scan credentials on
page 87.
5. Click Save.
6. Click the Save tab to save the change to the site configuration.
1. Using a text editor, create a file listing trusted MAC addresses. The application will not report
these addresses as violating the trusted MAC address vulnerability. You can give the file any
valid name.
2. Save the file in the application directory on the host computer for the Security Console.
C:\Program Files\[installation_directory]\plugins\java\1\NetworkScanners\1\[file_name]
/opt/[installation_directory]/java/1/NetworkScanners/1/[file_name]
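The trusted MAC check described in these steps can be sketched as follows. The one-address-per-line file format and the normalization rules are assumptions for illustration; the guide only says the file lists trusted addresses:

```python
# Sketch of checking scanned assets against a trusted MAC address
# file. Assumes one address per line; addresses are normalized so
# that "AA-BB-CC-DD-EE-FF" and "aa:bb:cc:dd:ee:ff" compare equal.
def _norm(mac):
    return mac.strip().lower().replace("-", ":")

def load_trusted_macs(path):
    with open(path) as f:
        return {_norm(line) for line in f if line.strip()}

def unauthorized_macs(scanned, trusted):
    # Addresses not in the trusted list would be reported as
    # violating the trusted MAC address vulnerability.
    return [m for m in scanned if _norm(m) not in trusted]
```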
To enable reporting of unauthorized MAC addresses in the scan template, take the following
steps:
With the trusted MAC file in place and the scanner value set, the application will perform trusted
MAC vulnerability testing. To do this it first makes a direct ARP request to the target asset to pick
up its MAC address. It also retrieves the ARP table from the router or switch controlling the
segment. Then, it uses SNMP to retrieve the MAC address from the asset and interrogates the
asset using its NetBIOS name to retrieve its MAC address.
Once the application verifies that a host is live, or running, it begins to scan ports to collect
information about services running on the computer. The target range for service discovery can
include TCP and UDP ports.
TCP ports (RFC 793) are the endpoints of logical connections through which networked
computers carry on “conversations.”
Well Known ports are those most commonly found to be open on the Internet.
The range of ports may be extended beyond Well Known Port range. Each vulnerability check
may add a set of ports to be scanned. Various back doors, trojan horses, viruses, and other
malware create ports after they have installed themselves on computers. Rogue programs and
hackers use these ports to access the compromised computers. These ports are not predefined,
and they may change over time. Output reports will show which ports were scanned during
vulnerability testing, including maliciously created ports.
Various types of port scan methods are available as custom options. Most built-in scan templates
incorporate the Stealth scan (SYN) method, in which the port scanner process sends TCP
packets with the SYN (synchronize) flag. This is the most reliable method. It's also fast. In fact, a
SYN port scan is approximately 20 times faster than a scan with the full-connect method, which is
one of the other options for the TCP port scan method.
The exhaustive template and penetration tests are exceptions in that they allow the application to
determine the optimal scan method. This option makes it possible to scan through firewalls in
some cases; however, it is somewhat less reliable.
Although most templates include UDP ports in the scope of a scan, they limit UDP ports to well-
known numbers. Services that run on UDP ports include DNS, TFTP, and DHCP. If you want to
be absolutely thorough in your scanning, you can include more UDP ports, but doing so will
increase scan time.
Scanning all possible ports takes a lot of time. If the scan occurs through a firewall, and the
firewall has been set up to drop packets sent to non-authorized devices, then a full-port scan may
span several hours to several days. If you configure the application to scan all ports, it may be
necessary to change additional parameters.
Service discovery is the most resource-sensitive phase of scanning. The application sends out
hundreds of thousands of packets to scan ports on a mere handful of assets.
Note: The application relies on network devices to return “ICMP port unreachable” packets for
closed UDP ports.
If you want to be a little more thorough, use the target list of TCP ports from more aggressive
templates, such as the exhaustive or penetration test template.
If you plan to scan UDP ports, keep in mind that aside from the reliability issues discussed earlier,
scanning UDP ports can take a significant amount of time. By default, the application will only
send two UDP packets per second to avoid triggering the ICMP rate-limiting mechanisms that
are built into TCP/IP stacks for most network devices. Sending more packets could result in
packet loss. A full UDP port scan can take up to nine hours, depending on bandwidth and the
number of target assets.
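The nine-hour figure follows directly from the default packet rate. Assuming one probe per port against a single asset:

```python
# Estimated duration of a full UDP port scan at the default rate of
# 2 packets per second (one probe per port, single asset assumed).
PORTS = 65535
RATE = 2  # UDP packets per second

seconds = PORTS / RATE
hours = seconds / 3600
print(round(hours, 1))  # ~9.1 hours
```

Retries, which default to 5 for UDP, can multiply this figure further, which is why reducing the UDP retry count is suggested later in this section.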
To reduce scan time, do not run full UDP port scans unless it is necessary. UDP port scanning
generally takes longer than TCP port scanning because UDP is a “connectionless” protocol. In a
UDP scan, the application interprets non-response from the asset as an indication that a port is
open or filtered, which slows the process. When configured to perform UDP scanning, the
application matches the packet exchange pace of the target asset. Oracle Solaris responds to
only 2 UDP packet failures per second as a rate-limiting feature, so scanning in this
environment can be very slow in some cases.
Tip: You can achieve the most “stealthy” scan by running a vulnerability test with port scanning
disabled. However, if you do so, the application will be unable to discover services, which will
hamper fingerprinting and vulnerability discovery.
If you want to scan additional TCP ports, enter the numbers or range in the Additional
ports text box.
4. Select which UDP ports you want to scan from the drop-down list.
If you want to scan additional UDP ports, enter the desired range in the Additional ports text box.
Note: Consult Technical Support to change the default service file setting.
5. If you want to change the service names file, enter the new file name in the text box.
This properties file lists each port and the service that commonly runs on it. If scans cannot
identify actual services on ports, service names will be derived from this file in scan results.
You can replace the file with a custom version that lists your own port/service mappings.
6. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
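The service names file described in step 5 maps ports to fallback service names. Its exact format is not documented here, so the sketch below assumes a simple, hypothetical port=service line format purely to illustrate how such a mapping supplies names in scan results:

```python
def load_service_names(text):
    """Parse a hypothetical port/service properties file.

    Assumes one `port=service` mapping per line and '#' comments;
    the real file format may differ.
    """
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        port, _, service = line.partition("=")
        mapping[int(port)] = service
    return mapping

sample = """\
# custom port/service mappings
80=http
8443=https-alt
"""
services = load_service_names(sample)
print(services[8443])  # name used when a scan cannot identify the service
```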
You can change default scan settings to maximize speed and resource usage during asset and
service discovery. If you do not change any of these discovery performance settings, scans will
auto-adjust based on network conditions.
Changing packet-related settings can affect the triangle. See Keep the “triangle” in mind when
you tune on page 536. Shortening send-delay intervals theoretically increases scan speeds, but it
also can lead to network congestion depending on bandwidth. Lengthening send-delay intervals
increases accuracy. Also, longer delays may be necessary to avoid blacklisting by firewalls or
IDS devices.
In the following explanation of how ports are scanned, the numbers indicated are default settings
and can be changed. The application sends a block of 10 packets to a target port, waits 10
milliseconds, sends another 10 packets, and continues this process for each port in the range. At
the end of the scan, it sends another round of packets to any unresponsive ports and waits 10
milliseconds for each block of packets.
If the application receives a response within the defined number of retries, it will proceed with the
next phase of scanning: service discovery. If it does not receive a response after exhausting all
discovery methods defined in the template, it reports the asset as being DEAD in the scan log.
When the target asset is on a local network segment (not behind a firewall), the scan occurs more
rapidly because the asset will respond that ports are closed. The difficulty occurs when the device
is behind a firewall, which consumes packets so that they do not return to the Scan Engine. In this
case the application will wait the maximum time between port scans. TCP port scanning can
exceed five hours, especially if it includes full-port scans of 65K ports.
Try to scan the asset on the local segment inside the firewall. Try not to perform full TCP port
scans outside a device that will drop the packets like a firewall unless necessary.
Note: For minimum retries, packet-per-second rate, and simultaneous connection requests, the
default value of 0 disables manual settings, in which case, the application auto-adjusts the
settings. To enable manual settings, enter a value of 1 or greater.
Maximum retries
This is the maximum number of attempts to contact target assets. If the limit is exceeded with no
response, the given asset is not scanned. The default number of UDP retries is 5, which is high
for a scan through a firewall. If UDP scanning is taking longer than expected, try reducing the
retry value to 2 or 3.
You may be able to speed up the scanning process by reducing the maximum retry count from the
default of 4; consider setting the retry value at 3. However, in a network with high traffic or strict
firewall rules, it is easier to lose packets, so raising the number of retries is a good accuracy
adjustment, with the trade-off that the scan will take longer.
Timeout interval
Set the number of milliseconds to wait between retries. You can set an initial timeout interval,
which is the first setting that the scan will use. You also can set a range. For maximum timeout
interval, any value lower than 5 ms disables manual settings, in which case, the application auto-
adjusts the settings. The discovery may auto-adjust interval settings based on varying network
conditions.
Scan delay
This is the number of milliseconds to wait between sending packets to each target host.
Increasing the delay interval for sending TCP packets will prevent scans from overloading
routers, triggering firewalls, or becoming blacklisted by Intrusion Detection Systems (IDS).
Increasing the delay interval for sending packets is another measure that increases accuracy at
the expense of time.
You can increase the accuracy of port scans by slowing them down with 10- to 25-millisecond
delays.
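The cost of a longer send delay grows linearly with the number of ports scanned. For a full 65,535-port TCP scan of a single asset:

```python
# Wall-clock time added by a per-packet send delay, assuming a single
# probe per port (retries would multiply this further).
def added_minutes(ports, delay_ms):
    return ports * delay_ms / 1000 / 60

print(round(added_minutes(65535, 10), 1))  # ~10.9 minutes at a 10 ms delay
print(round(added_minutes(65535, 25), 1))  # ~27.3 minutes at a 25 ms delay
```

This is why the 10- to 25-millisecond range is a reasonable compromise: it slows the scan measurably but not prohibitively, while reducing dropped packets.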
Packet-per-second rate
This is the number of packets to send each second during discovery attempts. Increasing this rate
can increase scan speed. However, more packets are likely to be dropped in congestion-heavy
networks, which can skew scan results.
Note: To enable the defeat rate limit, you must have the Stealth (SYN) scan method selected.
See Scan templates on page 639.
An additional control, called Defeat Rate Limit (also known as defeat-rst-rate limit), enforces the
minimum packet-per-second rate. This may improve scan speed when a target host limits its rate
of RST (reset) responses to a port scan. However, enforcing the packet setting under these
circumstances may cause the scan to miss ports, which lowers scan accuracy. Disabling the
defeat rate limit may cause the minimum packet setting to be ignored when a target host limits its
rate of RST (reset) responses to a port scan. This can increase scan accuracy.
Simultaneous connection requests
This is the number of discovery connection requests to be sent to target hosts simultaneously.
More simultaneous requests can mean faster scans, subject to network bandwidth. This setting
has no effect if values have been set for scan delay.
When the application fingerprints an asset during the discovery phases of a scan, it automatically
determines which vulnerability checks to perform, based on the fingerprint. On the Vulnerability
Checks page of the Scan Template Configuration panel, you can manually configure scans to
include more checks than those indicated by the fingerprint. You also can disable checks.
Unsafe checks include buffer overflow tests against applications like IIS and Apache, and against
services like FTP and SSH. Others exploit protocol errors in some database clients that trigger system
failures. Unsafe scans may crash a system or leave a system in an indeterminate state, even
though it appears to be operating normally. Scans will most likely not do any permanent damage
to the target system. However, if processes running in the system might cause data corruption in
the event of a system failure, unintended side effects may occur.
The benefit of unsafe checks is that they can verify vulnerabilities that threaten denial of service
attacks, which render a system unavailable by crashing it, terminating a service, or consuming
services to such an extent that the system using them cannot do any work.
You should run scheduled unsafe checks against target assets outside of business hours and
then restart those assets after scanning. It is also a good idea to run unsafe checks in a pre-
production environment to test the resistance of assets to denial-of-service conditions.
If you want to perform checks for potential vulnerabilities, select the appropriate check box. For
information about potential vulnerabilities, see Setting up scan alerts on page 133.
If you want to correlate reliable checks with regular checks, select the appropriate check box.
With this setting enabled, the application puts more trust in operating system patch checks to
attempt to override the results of other checks that could be less reliable. Operating system patch
checks are more reliable than regular vulnerability checks because they can confirm that a target
asset is at a patch level that is known to be not vulnerable to a given attack. For example, if a
vulnerability check is positive for an Apache Web server based on inspection of the HTTP banner,
but an operating system patch check determines that the Apache package has been patched for
this specific vulnerability, it will not report a vulnerability. Enabling reliable check correlation is a
best practice that reduces false positives.
Reliable check correlation is supported for assets running the following operating systems:
l Microsoft Windows
l Red Hat
l CentOS
l Solaris
l VMware
Note: To use check correlation, you must use a scan template that includes patch verification
checks, and you must typically include logon credentials in your site configuration. See
Configuring scan credentials on page 87.
A scan template may specify certain vulnerability checks to be enabled, which means that the
application will scan only for those vulnerability check types or categories with that template. If
you do not specifically enable any vulnerability checks, then you are essentially enabling all of
them, except for those that you specifically disable.
A scan template may specify certain checks as being disabled, which means that the application
will scan for all vulnerabilities except for those vulnerability check types or categories with that
template. In other words, if no checks are disabled, it will scan for all vulnerabilities. While the
exhaustive template includes all possible vulnerability checks, the full audit and PCI audit
templates exclude policy checks, which are more time-consuming. The Web audit template scans
only for Web-related vulnerabilities.
Note the order of precedence for modifying vulnerability check settings, which is described
at the top of the page.
A safe vulnerability check will not alter data, crash a system, or cause a system outage
during its validation routines.
Tip: To see which vulnerabilities are included in a category, click the category name.
4. Click the check boxes for those categories you wish to scan for, and click Save.
The console lists the selected categories on the Vulnerability Checks page.
Note: If you enable any specific vulnerability categories, you are implicitly disabling all other
categories. Therefore, by not enabling specific categories, you are enabling all categories.
5. Click Remove categories... to prevent the application from scanning for vulnerability
categories listed on the Vulnerability Checks page.
6. Click the check boxes for those categories you wish to exclude from the scan, and click Save.
The console displays the Vulnerability Checks page with those categories removed.
Tip: To see which vulnerabilities are included in a check type, click the check type name.
2. Click the check boxes for those categories you wish to scan for, and click Save.
3. To avoid scanning for vulnerability check types listed on the Vulnerability Checks page, click
the check boxes for those types you wish to exclude, and click Save.
The console displays the Vulnerability Checks page with those types removed.
Vulnerability check types include, for example, Default account and Safe.
The console displays a box where you can search for specific vulnerabilities in the database.
Note: The application only checks vulnerabilities relevant to the systems that it scans. It will not
perform a check against a non-compatible system even if you specifically selected that check.
4. Click Search.
The box displays a table of vulnerability names that match your search criteria.
5. Click the check boxes for vulnerabilities that you wish to include in the scan, and click Save.
The selected vulnerabilities appear on the Vulnerability Checks page.
6. Click Disable vulnerability checks... to exclude specific vulnerabilities from the scan.
7. Search for the names of vulnerabilities you wish to exclude.
8. Click the check boxes for vulnerabilities that you wish to exclude from the scan, and click
Save.
9. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
The fewer the vulnerabilities included in the scan template, the sooner the scan completes. It is
difficult to gauge how long exploit tests actually take. Certain checks may require more time than
others.
l The Microsoft IIS directory traversal check tests 500 URL combinations. This can take several
minutes against a busy Web server.
l Unsafe, denial-of-service checks take a particularly long time, since they involve large
amounts of data or multiple requests to target systems.
l Cross-site scripting (CSS/XSS) tests may take a long time on Web applications with many
forms.
Be careful not to sacrifice accuracy by disabling too many checks—or essential checks. Choose
vulnerability checks in a focused way whenever possible. If you are only scanning Web assets,
enable Web-related vulnerability checks. If you are performing a patch verification scan, enable
hotfix checks.
The application is designed to minimize scan times by grouping related checks in one scan pass.
This limits the number of open connections and time interval that connections remain open. For
checks relying solely on software version numbers, the application requires no further
communication with the target system once it extracts the version information.
If you have created custom vulnerability checks, use the custom vulnerability content plug-in to
ensure that these checks are available for selection in your scan template. The process involves
simply copying the check content into a directory of your Security Console installation.
The location is the plugins/java/1/CustomScanner/1 directory inside the root of your installation
path. For example, on Linux:
[installation_directory]/plugins/java/1/CustomScanner/1
On Windows:
[installation_directory]\plugins\java\1\CustomScanner\1
After copying the files, you can use the checks immediately by selecting them in your scan
template configuration.
If you work for a U.S. government agency, a vendor that transacts business with the government
or for a company with strict configuration security policies, you may be running scans to verify that
your assets comply with United States Government Configuration Baseline (USGCB) policies,
Center for Internet Security (CIS) benchmarks, or Federal Desktop Core Configuration (FDCC).
Or you may be testing assets for compliance with customized policies based on these standards.
The built-in USGCB, CIS, and FDCC scan templates include checks for compliance with these
standards. See Scan templates on page 639.
These templates do not include vulnerability checks, so if you want to run vulnerability checks
with the policy checks, create a custom version of a scan template using one of the following
methods:
l Add vulnerability checks to a customized copy of USGCB, CIS, DISA, or FDCC template.
l Add USGCB, CIS, DISA STIG, or FDCC checks to one of the other templates that includes
the vulnerability checks that you want to run.
l Create a scan template and add USGCB, CIS, DISA STIG, or FDCC checks and vulnerability
checks to it.
To use the second or third method, you will need to select USGCB, CIS, DISA STIG, or
FDCC checks by taking the following steps. You must have a license that enables the Policy
Manager and FDCC scanning.
1. Select Policies in the General page of the Scan Template Configuration panel.
2. Go to the Policy Manager page of the Scan Template Configuration panel.
3. Select a policy.
4. Review the name, affected platform, and description for each policy.
5. Select the check box for any policy that you want to include in the scan.
6. If you are required to submit policy scan results in Asset Reporting Format (ARF) reports to
the U.S. government for SCAP certification, select the check box to store SCAP data.
Note: Stored SCAP data can accumulate rapidly, which can have a significant impact on file
storage.
7. If you want to enable recursive file searches on Windows systems, select the appropriate
check box. It is recommended that you not enable this capability unless your internal security
practices require it. See Enabling recursive searches on Windows on page 568.
8. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
For information about verifying USGCB, CIS, or FDCC compliance, see "Working with Policy
Manager results" on page 287.
By default, recursive file searches are disabled for scans on assets running Microsoft Windows.
Searching every sub-folder of a parent folder in a Windows file system can increase scan times
on a single asset by hours, depending on the number of folders and files and other conditions.
Only enable recursive file searches if your internal security practices require it or if you require it
for certain rules in your policy scans. The following rules require recursive file searches:
l DISA-6/Win2008, SV-29465r1_rule: Remove Certificate Installation Files
l DISA-1/Win7, SV-25004r1_rule: Remove Certificate Installation Files
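The cost of a recursive search grows with every sub-folder the scan must visit. The following sketch (illustrative only) builds a small nested tree and counts the entries a recursive walk touches; on a real workstation the folder and file count can reach the hundreds of thousands, which is why scan times can grow by hours:

```python
import os
import tempfile

def count_entries(root):
    """Visit every sub-folder of `root`; work grows with folder depth and count."""
    total = 0
    for _dirpath, dirnames, filenames in os.walk(root):
        total += len(dirnames) + len(filenames)
    return total

# Build a tiny three-level tree with one file per level.
root = tempfile.mkdtemp()
path = root
for i in range(3):
    path = os.path.join(path, "sub%d" % i)
    os.makedirs(path)
    open(os.path.join(path, "cert.inf"), "w").close()

print(count_entries(root))  # 6: three folders plus three files
```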
To configure the application to test for Oracle policy compliance you must edit the default XML
policy template for Oracle (oracle.xml), which is located in [installation_directory]
/plugins/java/1/OraclePolicyScanner/1.
1. Go to the Credentials page for the site that will incorporate the new scan template.
2. Select Oracle as the login service domain.
3. Type a user name and password for an Oracle account with DBA access. See Configuring
scan credentials on page 87.
4. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
To configure the application to test for Lotus Domino policy compliance you must edit the default
XML policy template for Lotus Domino (domino.xml), which is located in [installation_directory]
/plugins/java/1/NotesPolicyScanner/1.
1. Go to the Credentials page for the site that will incorporate the new scan template.
2. Select Lotus Notes/Domino as the login service domain.
3. Type a Notes ID password in the text field. See Configuring scan credentials on page 87.
4. For Lotus Notes/Domino policy compliance scanning, you must install a Notes client on the
same host computer that is running the Security Console.
5. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
You can configure Nexpose to verify whether assets running with Windows operating systems
are compliant with Microsoft security standards. The installation package includes three different
policy templates that list security criteria you can use to check settings on assets.
These templates are the same as those associated with Windows Policy Editor and Active
Directory Group Policy. Each template contains all of the policy elements for one of the three
types of Windows target assets: workstation, general server, and domain controller.
A target asset must meet all the criteria listed in the respective template for the application to
regard it as compliant with Windows Group Policy. To view the results of a policy scan, create a
report based on the Audit or Policy Evaluation report template. Or, you can create a custom
report template that includes the Policy Evaluation section. See Fine-tuning information with
custom report templates on page 511.
The templates are .inf files located in the plugins/java/1/WindowsPolicyScanner/1 path relative to
the application base installation directory:
Note: Use caution when running the same scan more than once with less than the lockout policy
time delay between scans. Doing so could also trigger account lockout.
You also can import template files by using the Security Templates Snap-In in the Microsoft Group
Policy Management Console, and then saving each as an .inf file with a specific name
corresponding to the type of target asset.
You must provide the application with proper credentials to perform Windows policy scanning.
See Configuring scan credentials on page 87.
Nexpose can test account policies on systems supporting CIFS/SMB, such as Microsoft
Windows, Samba, and IBM AS/400:
This is the maximum number of failed logins a user is permitted before the asset locks out the
account.
This is the maximum number of failed logins a user is permitted before the asset locks out the
account. The number corresponds to the QMAXSIGN system value.
This number corresponds to the QPWDMINLEN system value and specifies the minimum
required password length.
This level corresponds to the minimum value that the QSECURITY system value should be
set to. The level values range from Password security (20) to Advanced integrity protection
(50).
5. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
This setting controls the permissions that the target system grants to any new files created
on it. If the application detects broader permissions than those specified by this value, it will
report a policy violation.
3. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
Nexpose can spider Web sites to discover their directory structures, default directories, the files
and applications on their servers, broken links, inaccessible links, and other information.
The application then analyzes this data for evidence of security flaws, such as SQL injection,
cross-site scripting (CSS/XSS), backup script files, readable CGI scripts, insecure password
use, and other issues resulting from software defects or configuration errors.
l Web audit
l HIPAA compliance
l Internet DMZ audit
l Payment Card Industry (PCI) audit
l Full audit
You can adjust the settings in these templates. You can also configure Web spidering settings in
a custom template. The spider examines links within each Web page to determine which pages
have been scanned. In many Web sites, pages that are yet to be scanned will show a base URL,
followed by a parameter-directed link, such as one ending in ?id=6, in the address bar.
If you do not enable the setting to include query strings, the spider will only check the base URL
without the ?id=6 parameter.
To gain access to a Web site for scanning, the application makes itself appear to the Web server
application as a popular Web browser. It does this by sending the server a Web page request as
a browser would. The request includes pieces of information called headers. One of the headers,
called User-Agent, defines the characteristics of a user’s browser, such as its version number
and the Web application technologies it supports. User-Agent represents the application to the
Web site as a specific browser, because some Web sites will refuse HTTP requests from
browsers that they do not support. The default User-Agent string represents the application to
the target Web site as Internet Explorer 7.
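The mechanism is ordinary HTTP header handling. The sketch below builds a request carrying an illustrative IE7-style User-Agent string; the application's actual default string may differ.

```python
import urllib.request

# Illustrative IE7-style User-Agent value; not the product's exact default.
UA = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"

req = urllib.request.Request("http://example.com/", headers={"User-Agent": UA})
# The server receiving this request sees the declared browser identity:
print(req.get_header("User-agent"))
```

A server that refuses unsupported browsers inspects exactly this header, which is why changing the Browser ID setting can alter the content the spider receives.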
Note: Selecting the check box to include query strings with Web spidering causes the spider to make many
more requests to the Web server. This will increase overall scan time and possibly affect the Web
server's performance for legitimate users.
3. Select the appropriate check box to include query strings when spidering if desired.
4. If you want the spider to test for persistent cross-site scripting during a single scan, select the
check box for that option.
This test helps to reduce the risk of dangerous attacks via malicious code stored on Web
servers. Enabling it may increase Web spider scan times.
Note: Changing the default user agent setting may alter the content that the application receives
from the Web site.
5. If you want to change the default value in the Browser ID (User-Agent) field enter a new
value.
If you are unsure of what to enter for the User-Agent string, consult your Web site developer.
6. Select the option to check the use of common user names and passwords if desired. The
application reports the use of these credentials as a vulnerability. It is an insecure practice
because attackers can easily guess them. With this setting enabled, the application attempts
to log onto Web applications by submitting common user names and passwords to discovered
authentication forms. Multiple logon attempts may cause authentication services to lock out
accounts with these credentials.
(Optional) Enable the Web spider to check for the use of weak credentials:
As the Web spider discovers logon forms during a scan, it can determine if any of these forms
accept commonly used user names or passwords, which would make them vulnerable to
automated attacks that exploit this practice. To perform the check, the Web spider attempts to log
on through these forms with commonly used credentials. Any successful attempt counts as a
vulnerability.
Note: This check may cause authentication services with certain security policies to lock out
accounts with these commonly used credentials.
1. Enter a maximum number of foreign hosts to resolve, or leave the default value of 100.
This option sets the maximum number of unique host names that the spider may resolve.
This function adds substantial time to the spidering process, especially with large Web sites,
because of the frequent cross-link checking involved. The acceptable host range is 1 to 500.
2. Enter the amount of time, in milliseconds, in the Spider response timeout field to wait for a
response from a target Web server. You can enter a value from 1 to 3600000 ms (1 hour).
The default value is 120000 ms (2 minutes). The Web spider will retry the request based on
the value specified in the Maximum retries for spider requests field.
3. Type a number in the field labeled Maximum directory levels to spider to set a directory
depth limit for Web spidering.
Limiting directory depth can save significant time, especially with large sites. For unlimited
directory traversal, type 0 in the field. The default value is 6.
Note: If you run recurring scheduled scans with a time limit, portions of the target site may remain
unscanned at the end of the time limit. Subsequent scans will not resume where the Web spider
left off, so it is possible that the target Web site may never be scanned in its entirety.
4. Type a number in the Maximum spidering time (minutes) field to set a maximum number of
minutes for scanning each Web site.
A time limit prevents scans from taking longer than allotted time windows for scan jobs,
especially with large target Web sites. If you leave the default value of 0, no time limit is
applied. The acceptable range is 1 to 500.
5. Type a number in the Maximum pages to spider field to limit the number of pages that the
spider requests.
This is a time-saving measure for large sites. The acceptable range is 1 to 1,000,000 pages.
Note: If you set both a time limit and a page limit, the Web spider will stop scanning the target
Web site when the first limit is reached.
6. Enter the number of times to retry a request after a failure in the Maximum retries for spider
requests field. Enter a value from 0 to 100. A value of 0 means do not retry a failed request.
The default value is 2 retries.
1. Enter a regular expression for sensitive data field names. The application reports field names
that are designated to be sensitive as vulnerabilities: Form action submits sensitive data in the
clear. Any matches to the regular expression will be considered sensitive data field names.
2. Enter a regular expression for sensitive content. The application reports as vulnerabilities
strings that are designated to be sensitive. If you leave the field blank, it does not search for
sensitive strings.
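Both settings are ordinary regular expressions matched against form field names or page content. The pattern in this sketch is an example only, not the application's default expression:

```python
import re

# Example pattern for sensitive data field names (not the default).
SENSITIVE_FIELDS = re.compile(r"(?i)(passw(or)?d|ssn|credit.?card|account)")

fields = ["username", "passwd", "CreditCardNumber", "comment"]
flagged = [f for f in fields if SENSITIVE_FIELDS.search(f)]
print(flagged)  # ['passwd', 'CreditCardNumber']
```

A form whose action submits any flagged field in the clear would be reported as a vulnerability.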
1. Select the check box to instruct the spider to adhere to standards set forth in the robots.txt
protocol.
Robots.txt is a convention that prevents spiders and other Web robots from accessing all or
part of a Web site that is otherwise publicly viewable.
Note: Scan coverage of any included bootstrap paths is subject to time and page limits that you
set in the Web spider configuration. If the scan reaches your specified time or page limit before
scanning bootstrap paths, it will not scan those paths.
2. Enter the base URL paths for applications that are not linked from the main Web site URLs in
the Bootstrap paths field if you want the spider to include those URLs.
Example: /myapp. Separate multiple entries with commas. If you leave the field blank, the
spider does not include bootstrap paths in the scan.
3. Enter the base URL paths to exclude in the Excluded paths field. Separate multiple entries
with commas.
If you specify excluded paths, the application does not attempt to spider those URLs or
discover any vulnerabilities or files associated with them. If you leave the field blank, the
spider does not exclude any paths from the scan.
Configure any other scan template settings as desired. When you have finished configuring the
scan template, click Save.
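The robots.txt convention from step 1 can be illustrated with Python's standard robotparser module; the rules and URLs below are examples:

```python
import urllib.robotparser

# Example robots.txt rules (normally fetched from the target site).
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "http://example.com/index.html"))      # True
print(rp.can_fetch("*", "http://example.com/private/a.html"))  # False
```

A spider configured to honor the protocol skips any URL for which this kind of evaluation returns a disallow.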
The Web spider crawls Web servers to determine the complete layout of Web sites. It is a
thorough process, which makes it valuable for protecting Web sites. Most Web application
vulnerability tests are dependent on Web spidering.
By default, the Web spider crawls a site using three threads and a per-request delay of 20 ms.
The amount of traffic that this generates depends on the amount of discovered, linked site
content. If you’re running the application on a multiple-processor system, increase the number of
spider threads to three per processor.
A complete Web spider scan will take slightly less than 90 seconds against a responsive server
hosting 500 pages, assuming the target asset can serve one page on average per 150 ms. A
scan against the same server hosting 10,000 pages would take approximately 28 minutes.
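Those figures follow from the stated page service rate. Assuming one page per 150 ms on average and a single-threaded crawl:

```python
# Spider duration estimate at an average page service time of 150 ms.
def spider_minutes(pages, ms_per_page=150):
    return pages * ms_per_page / 1000 / 60

print(spider_minutes(500))    # 1.25 minutes (75 seconds)
print(spider_minutes(10000))  # 25.0 minutes
```

Real crawls add per-request delays and retries, which accounts for the slightly higher 28-minute estimate cited above.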
When you configure a scan template for Web spidering, enter the maximum number of
directories, or depth, as well as the maximum number of pages to crawl per Web site. These
values can limit the amount of time that Web spidering takes. By default, the spider ignores cross-
site links and stays only on the end point it is scanning.
If your asset inventory doesn’t include Web sites, be sure to turn this feature off. It can be very
time consuming.
Mail relay is a feature that allows SMTP servers to act as open gateways through which mail
applications can send e-mail. Commercial operators, who send millions of unwanted spam e-
mails, often target mail relay for exploitation. Most organizations now restrict mail relay services
to specific domain users.
This e-mail address should be external to your organization, such as a Yahoo! or Hotmail
address. The application will attempt to send e-mail from this account to itself using any mail
services and mail scripts that it discovers during the scan. If the application receives the e-
mail, this indicates that the servers are vulnerable.
This is typically a Web form that spammers might use to generate spam e-mails.
4. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
Nexpose attempts to verify an SID on a target asset through various methods, such as
discovering common configuration errors and default guesses. You can now specify
additional SIDs for verification.
4. Enter the names of Oracle SIDs in the appropriate text field, to which it can connect. Separate
multiple SIDs with commas.
5. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
This setting is the interval at which the application retries accessing the mail server. The
default value is 30 seconds.
This setting is a threshold outside of which the application will report inaccurate time
readings by system clocks. The inaccuracy will be reported in the system log.
4. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
Nexpose tests a number of vulnerabilities in the Concurrent Versions System (CVS) code
repository. For example, in versions prior to v1.11.11 of the official CVS server, it is possible for
an attacker with write access to the CVSROOT/passwd file to execute arbitrary code as the cvsd
process owner, which usually is root.
DHCP Servers provide Border Gateway Protocol (BGP) information, domain naming help, and
Address Resolution Protocol (ARP) table information, which may be used to reach hosts that are
otherwise unknown. Hackers exploit vulnerabilities in these servers for address information.
Telnet is an unstructured protocol, with many varying implementations. This renders Telnet
servers prone to yielding inaccurate scan results. You can improve scan accuracy by providing
Nexpose with regular expressions.
7. Configure any other template settings as desired. When you have finished configuring the
scan template, click Save.
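The regular expressions you supply for Telnet prompt matching are of the kind sketched below. These patterns are hypothetical examples for illustration, not patterns shipped with the product:

```python
import re

# Hypothetical prompt-matching patterns of the kind you might supply so
# the scanner can parse differing Telnet server implementations.
login_prompt = re.compile(r"(?:login|username)\s*:\s*$", re.IGNORECASE)
failed_login = re.compile(r"login incorrect|access denied|authentication failed",
                          re.IGNORECASE)

banner = "FreeBSD/amd64 (host) (ttyp0)\n\nlogin: "
print(bool(login_prompt.search(banner)))  # prints: True
```

Anchoring the login pattern to the end of the received data (`$`) helps avoid false matches against banner text such as "Last login:" messages.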
If Nexpose gains access to an asset’s file system by performing an exploit or a credentialed scan,
it can search for the names of files in that system.
File name searching is useful for finding software programs that are not detected by
fingerprinting. It also is a good way to verify compliance with policies in corporate environments
that don't permit storage of certain types of files on workstation drives:
l copyrighted content
l confidential information, such as patient file data in the case of HIPAA compliance
l unauthorized software
The application reads the contents of these files but does not retrieve them. You can view the
names of scanned files in the File and Directory Listing pane of a scan results page.
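The name-matching portion of this kind of search can be sketched as follows. The patterns are hypothetical examples of file types a compliance policy might disallow, not product defaults:

```python
import fnmatch
import os

# Hypothetical patterns a policy might disallow on workstation drives.
DISALLOWED = ["*.mp3", "*.avi", "patient_*.xlsx"]

def find_disallowed(root):
    """Walk a file system subtree and report file paths whose names
    match any disallowed pattern (a name-based search)."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if any(fnmatch.fnmatch(name, pat) for pat in DISALLOWED):
                hits.append(os.path.join(dirpath, name))
    return hits
```
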
Beyond customizing scan templates, you can do other things to improve scan performance.
Depending on bandwidth availability, adding Scan Engines can reduce overall scan time, and it
can improve accuracy. Where you put Scan Engines is as important as how many you have. It’s
helpful to place Scan Engines on both sides of network dividing points, such as firewalls. See the
topic Distribute Scan Engines strategically in the administrator's guide.
Tailor your site configuration to support your performance goals. Try increasing the number of
sites and making sites smaller. Try pairing sites with different scan templates. Adjust your scan
schedule to avoid bandwidth conflicts.
Increase resources
l Network bandwidth
l RAM and CPU capacity of hosts
If your organization has the means and ability, enhance network bandwidth. If not, find ways to
reduce bandwidth conflicts when running scans.
Increasing the capacity of host computers is a little more straightforward. The installation
guide lists minimum system requirements for installation. Your system may meet those
requirements, but if you want to bump up maximum number of scan threads, you may find your
host system slowing down or becoming unstable. This usually indicates memory problems.
If increasing scan threads is critical to meeting your performance goals, consider installing the 64-
bit version of Nexpose. A Scan Engine running on a 64-bit operating system can use as much
RAM as the operating system supports, as opposed to a maximum of approximately 4 GB on 32-
bit systems. The vertical scalability of 64-bit Scan Engines significantly increases the potential
number of simultaneous scans that Nexpose can run.
Always keep in mind best practices for Scan Engine placement. See the topic Distribute
Scan Engines strategically in the administrator's guide. Bandwidth is also important to consider.
Any well constructed network will have effective security mechanisms in place, such as firewalls.
These devices will regard Nexpose as a hostile entity and attempt to prevent it from
communicating with the assets that they are designed to protect.
If you can find ways to make it easier for the application to coexist with your security
infrastructure—without exposing your network to risk or violating security policies—you can
enhance scan speed and accuracy.
For example, when scanning Windows XP workstations, you can take a few simple measures to
improve performance:
You can open firewalls on Windows assets to allow Nexpose to perform deep scans on those
targets within your network.
By default, Microsoft Windows XP SP2, Vista, Server 2003, and Server 2008 enable firewalls to
block incoming TCP/IP packets. Maintaining this setting is generally a smart security practice.
However, a closed firewall limits the application to discovering network assets during a scan.
Opening a firewall gives it access to critical, security-related data as required for patch or
compliance checks.
To find out how to open a firewall without disabling it on a Windows platform, see Microsoft’s
documentation for that platform. Typically, a Windows domain administrator would perform this
procedure.
During scans, Nexpose checks Web sites and TLS or SSL servers for specific Root certificates to
verify that these entities are validated by trusted Certificate Authorities (CAs).
The Security Console installation includes a number of preset certificates trusted by commonly
used browsers from Microsoft, Google, Mozilla, and Apple. Additionally, you can import Root
certificates that were expressly created by trusted CAs for targets that you want to scan.
The permission required for this task is Manage Global Settings, which typically belongs to a
Global Administrator.
Note: Make sure the custom certificate is a Root CA certificate. Otherwise, the full certificate
chain cannot be validated during a scan. Also, the certificate must be in Privacy Enhanced Mail
(PEM) base64 encoded format.
1. Open the certificate file on the computer hosting it, and copy the contents of the file, including
the entire BEGIN and END lines. Example:
-----BEGIN CERTIFICATE-----
MIIHQDCCBSigAwIBAgIJAMStF8UUH6doMA0GCSqGSIb3DQEBCwUAMIHBMQswCQYD
VQQGEwJVUzERMA8GA1UECBMITmVicmFza2ExFjAUBgNVBAcTDU5lYnJhc2thIENp
-----END CERTIFICATE-----
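Before uploading, you can sanity-check that a certificate file has the required PEM base64 framing. The sketch below is an assumption-level pre-check only; it verifies the framing, not whether the certificate is actually a Root CA:

```python
import base64
import re

def looks_like_pem_certificate(text):
    """Check that a string has PEM framing: BEGIN/END CERTIFICATE lines
    around a base64-encoded body. Does not validate the certificate
    itself or confirm it is a Root CA."""
    m = re.search(r"-----BEGIN CERTIFICATE-----(.*?)-----END CERTIFICATE-----",
                  text, re.DOTALL)
    if not m:
        return False
    body = "".join(m.group(1).split())  # drop line breaks inside the body
    try:
        base64.b64decode(body, validate=True)  # body must be valid base64
        return True
    except ValueError:
        return False
```
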
The certificate appears in the Custom Certificates table. You also can view preset certificates on
the Root Certificates page.
Removing certificates
Removing a root certificate from the trust store affects future scans in that any scanned
certificates that were signed by the root certificate authority will no longer be trusted. They will be
reported as vulnerabilities.
The permission required for removing custom certificates is Manage Global Settings, which
typically belongs to a Global Administrator.
Note: To edit policies you must have the Policy Editor license. Contact your account
representative if you want to add this feature.
You create a custom policy by editing copies of built-in configuration policies or other custom
policies. A policy consists of rules that may be organized within groups or sub-groups. You edit a
custom policy to fit the requirements of your environment by changing the values required for
compliance.
You can create a custom policy and then periodically check the settings to improve scan results or
adapt to changing organizational requirements.
For example, you need a different way to present vulnerability data to show compliance
percentages to your auditors. You create a custom policy to track one vulnerability to measure
the risks over time and show improvements. Or you show what percentage of computers are
compliant for a specific vulnerability.
l Built-in policies are installed with the application (Policy Manager configuration policies based
on USGCB, FDCC, or CIS). These policies are not editable.
Policy Manager is a license-enabled scanning feature that performs checks for compliance
with United States Government Configuration Baseline (USGCB) policies, Center for
Internet Security (CIS) benchmarks, and Federal Desktop Core Configuration (FDCC)
policies.
l Custom policies are editable copies of built-in policies. You can make copies of a custom
policy if you need custom policies with similar changes, such as policies for different locations.
You can determine which policies are editable (custom) on the Policy Listing table on the Policies
page. The Source column displays which policies are built-in and custom. The Copy, Edit and
Delete buttons display for only custom policies for users with Manage Policies permission.
You can edit policies during a scan without affecting your results. While you modify policies,
manual or scheduled scans that are in process or paused scans that are resumed use the policy
configuration settings in effect when the scan initially launched. Changes saved to a custom
policy are applied during the next scheduled scan or a subsequent manual scan.
If your session times out when you try to save a policy, reestablish a session and then save your
changes to the policy.
Editing a policy
Note: To edit policies, you need Manage Policies permissions. Contact your administrator about
your user permissions.
The following section demonstrates how to edit the different items in a custom policy. You can
edit the following items:
2. You can modify the Name to identify which policies are customized for your organization. For
example, add your organization name or abbreviation, such as XYZ Org -USGCB 1.2.1.0 -
Windows 7 Firewall.
3. (Optional) You can modify the Description to explain what settings are applied in the custom
policy.
4. Click Save.
The Policy Configuration panel displays the groups and rules in item order for the selected policy.
By opening the groups, you drill down to an individual group or rule in a policy.
1. Click View on the Policy Listing table to display the policy configuration.
2. Click the icon to expand groups or rules to display details on the Policy Configuration panel.
Use the policy Find box to locate a specific rule. See Using policy find on page 594.
3. Select an item (rule or group) in the policy tree (hierarchy) to display the detail in the right
panel.
For example, your organization has specific requirements for password compliance. Select
the Password Complexity rule to view the checks used during a scan to verify password
compliance. If your organization policy does not enforce strong passwords then you can
change the value to Disabled.
Use the policy find to quickly locate the policy item that you want to modify.
For example, type IPv6 to locate all policy items containing that text. Click the Up and Down
arrows to display the next or previous instance of IPv6 found by the policy find.
As you type, the application searches then highlights all matches in the policy hierarchy.
2. Click the Up and Down arrows to move to the next or previous items that match the
find criteria.
3. (Optional) Refine your criteria if you receive too many results. For example, replace
password with password age.
You modify the group Name and Description to change the description of items that you
customized. The policy find uses this text to locate items in the policy hierarchy. See Using policy
find on page 594.
You select a group in the policy hierarchy to display its details. You can modify this text to identify
which groups contain modified (custom) rules and to describe the changes made.
You can modify policy rules to get different scan results. You select a rule in the Policy
Configuration hierarchy to see the list of editable checks and values related to that rule.
2. Modify the checks for the rule using the fields displayed.
Refer to the guidelines about what value to apply to get the correct result.
For example, disable the Use FIPS compliant algorithms for encryption, hashing and signing
rule by typing ‘0’ in the text box.
For example, change the Behavior of the elevation prompt for administrators in Admin
Approval Mode check by typing a value for the total seconds. The guidelines list the options
for each value.
Deleting a policy
Note: To delete policies, you need Manage Policies permissions. Contact your administrator
about your user permissions.
You can remove custom policies that you no longer use. When you delete a policy, all scan data
related to the policy is removed. The policy must be removed from scan templates and report
configurations before it can be deleted.
Click Delete for the custom policy that you want to remove.
If you try to delete a policy while a scan is running, a warning message displays indicating that
the policy cannot be deleted.
Note: To perform policy checks in scans, make sure that your Scan Engines are updated to the
August 8, 2012 release.
You add custom policies to the scan templates to apply your modifications across your sites. The
Policy Manager list contains the custom policies.
Click Custom Policies to display the custom policies. Select the custom policies to add.
There is no one-size-fits-all solution for managing configuration security. The application provides
policies that you can apply to scan your environments. However, you may create custom scripts
to verify items specific to your company, such as health check scripts that prioritize security
settings. You can create policies from scratch, upload your custom content to use in policy scans,
and run it with your other policy and vulnerability checks.
Note: To upload policies you must have the Policy Editor capability enabled in your license.
Contact your account representative if you want to update your license.
File specifications
SCAP 1.0 policy files must be compressed to an archive (ZIP or JAR file format) with no folder
structure. The archive can contain only XML or TXT files. If the archive contains other file types,
such as CSV, then the application does not upload the policy.
l XCCDF file—This file contains the structure of the policy. It must have a unique name (title)
and ID (benchmark ID). This file is required.
The SCAP XCCDF benchmark file name must end with -xccdf.xml (For example, XYZ-
xccdf.xml).
l OVAL file—These files contain policy checks. The file names must end with -oval.xml (For
example, XYZ-oval.xml).
l accesstoken_test
l auditeventpolicysubcategories_test
l auditeventpolicy_test
l family_test
l fileeffectiverights53_test
l lockoutpolicy_test
l passwordpolicy_test
l registry_test
l sid_test
l unknown_test
l user_test
l variable_test
The following XML files can be included in the archive file to define specific policy information.
These files are not required for a successful upload.
l CPE files—These files contain the Uniform Resource Identifiers (URI) that correspond to
fingerprinted platforms and applications.
Each URI must begin with cpe: and include segments for the hardware facet, the operating
system facet, and the application environment facet of the fingerprinted item (For example,
cpe:/o:microsoft:windows_xp:-:sp3:professional).
l CCE files—These files contain CCE identifiers for known system configurations to facilitate
fast and accurate correlation of configuration data across multiple information sources and
tools.
l CVE files—These files contain CVE (Common Vulnerabilities and Exposures) identifiers to
known vulnerabilities and exposures.
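The archive rules above lend themselves to a simple pre-flight check before upload. This sketch mirrors the stated requirements (flat ZIP/JAR, only XML or TXT entries, exactly one benchmark file ending in -xccdf.xml); it is an illustration, not part of the product:

```python
import zipfile

def check_scap_archive(path):
    """Return a list of problems that would cause the upload rules
    described above to reject the archive (empty list means OK)."""
    with zipfile.ZipFile(path) as z:
        names = z.namelist()
    problems = []
    if any("/" in n for n in names):
        problems.append("archive must have no folder structure")
    if any(not n.lower().endswith((".xml", ".txt")) for n in names):
        problems.append("archive may contain only XML or TXT files")
    if sum(n.endswith("-xccdf.xml") for n in names) != 1:
        problems.append("exactly one file ending in -xccdf.xml is required")
    return problems
```
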
You can name your custom policies to meet your company’s needs. The application identifies
policies by the benchmark ID and title. You must create unique names and IDs in your
benchmark file to upload them successfully. The application verifies that the benchmark version
(for example, v1.2.1.0) is supported.
Note: Custom policies uploaded to the application can be edited with the Policy Manager. See
Creating a custom policy on page 589.
If you cannot see this button then you must log on as a Global Administrator.
To identify which policies are customized for your organization you can devise a file naming
convention. For example, add your organization name or abbreviation, such as XYZ Org -
USGCB 1.2.1.0 - Windows 7 Firewall.
4. Enter a description that explains what settings are applied in the custom policy.
5. Click the Browse button to locate the archive file.
6. Click the Upload button to upload the policy.
l If the policy uploads successfully, go to step 7.
l If you receive an error message the policy is not loaded. You must resolve the issue
noted in the error message then repeat these steps until the policy loads successfully.
For more information about errors, see Troubleshooting upload errors on page 604.
During the upload, a "spinning" progress image appears. The time to complete the upload
depends on the policy's complexity and size, which typically reflects the number of rules that
it includes.
When the upload completes, your custom policies appear in the Policy Listing panel on the
Policies page. You can edit these policies using the Policy Manager. See Creating a
custom policy on page 589.
7. Add your custom policies to the scan templates to apply to future scans. See Selecting Policy
Manager checks on page 567.
You can select any combination of datastream components or underlying benchmarks in the
following manner: Upload an SCAP 1.2 XML policy file using the steps described in Uploading custom
SCAP policies on page 600. After you specify the XML file for upload, the Security Console
displays a page for selecting individual components from the datastream collection. All
components are selected by default. To prevent any component from being included, clear the
check box for that component. Then, click Upload.
Policies are not uploaded to the application unless certain criteria are met. Error messages
identify the criteria that have not been met. You must resolve the issues and upload the policy
successfully to apply your custom SCAP policy to scans.
Each of the following errors (in italics) is listed with the resolution indented after it. In the error
messages, value is a placeholder for a specific reference in the error message.
There are characters positioned before the first bracket (<). For example:
l White space
l Byte Order Mark character in a UTF-8 encoded XML file, inserted by text editors such as
Microsoft® Notepad.
l Any other type of invisible character.
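A quick way to clear this error is to strip any leading bytes before the first bracket. A minimal sketch (assuming the file should be plain UTF-8 XML):

```python
import codecs

def strip_leading_bytes(path):
    """Remove a UTF-8 Byte Order Mark and any other leading white space
    so that an XML policy file starts at its first '<' character."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(codecs.BOM_UTF8):
        data = data[len(codecs.BOM_UTF8):]  # drop the BOM
    data = data.lstrip()  # drop leading white space
    with open(path, "wb") as f:
        f.write(data)
```
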
The SCAP XCCDF Benchmark file cannot be found. Verify that the SCAP XCCDF benchmark
file name ends in “-xccdf.xml” and is not under a folder in the archive.
The application cannot find the SCAP XCCDF benchmark file in the archive.
The SCAP XCCDF benchmark file name must end with -xccdf.xml (For example, XYZ-
xccdf.xml). The archive (ZIP or JAR) cannot have a folder structure.
Verify that the SCAP XCCDF benchmark file exists in the archive using the required naming
convention.
The SCAP XCCDF benchmark file must contain a valid schema version.
Add the schema version (SCAP policy) to the SCAP XCCDF benchmark file.
The SCAP XCCDF benchmark file must contain a version in supported format (for example,
1.1.4). The application currently supports version 1.1.4 or earlier.
Replace the version number using a valid format. Verify that there are no blank spaces.
The SCAP XCCDF Benchmark file [value] contains a Benchmark ID that contains an invalid
character: [value]. The Benchmark cannot be uploaded.
The SCAP XCCDF Benchmark file [value] contains a reference to an OVAL definition file [value]
that is not included in the archive.
Verify that the archive file contains all policy definition files referenced in the SCAP XCCDF
benchmark file. Or remove the reference to the missing definition file.
The SCAP XCCDF Benchmark file [value] contains a test [value] that is not supported within the
product. The test must be removed for the policy to be uploaded.
The SCAP XCCDF benchmark file includes a test that the application does not support.
Compress your policy files to an archive (ZIP or JAR) with no folder structure.
The SCAP XCCDF Benchmark file contains a rule [value] that refers to a check system that is
not supported. Please only use OVAL check systems.
Remove the unsupported items from the SCAP XCCDF benchmark file.
The item [value] is not a XCCDF Benchmark or Group. Only XCCDF Benchmarks or Groups
can contain other items.
Revise the SCAP XCCDF benchmark file so that only benchmarks or groups contain other
benchmark items.
A requirement in the SCAP XCCDF benchmark file is missing a reference to a group or rule.
Review the requirement specified in the error message to determine what group or rule to add.
The SCAP XCCDF item [value] requires a group or rule [value] to not be enabled that is not
present in the Benchmark and cannot be uploaded.
A conflict in the SCAP XCCDF benchmark file is referencing an item that is not recognized
or is the wrong item.
Review the conflict specified in the error message to determine which item to replace.
The SCAP XCCDF item [value] requires a group or rule [value] to not be enabled, but the item
reference is neither a group or rule. The Benchmark cannot be uploaded.
A conflict in the SCAP XCCDF benchmark file is missing a reference to a group or rule.
Review the conflict specified in the error message to determine what group or rule to add.
The SCAP XCCDF Benchmark contains two profiles with the same Profile ID [value]. This is
illegal and the Benchmark cannot be uploaded.
There are two profiles in the SCAP XCCDF benchmark file that have the same ID.
Revise the SCAP XCCDF benchmark file so that each <profile> has a unique ID.
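To locate the offending profiles before re-uploading, you can scan the benchmark file for repeated IDs. A sketch using the Python standard library (the namespace-agnostic tag match is an assumption for illustration):

```python
import collections
import xml.etree.ElementTree as ET

def duplicate_profile_ids(xccdf_path):
    """Report Profile IDs that appear more than once in an XCCDF
    benchmark file; each <Profile> must have a unique ID."""
    root = ET.parse(xccdf_path).getroot()
    # Match Profile elements regardless of the XCCDF namespace version.
    ids = [el.get("id") for el in root.iter() if el.tag.endswith("Profile")]
    return [i for i, n in collections.Counter(ids).items() if n > 1]
```
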
The SCAP XCCDF Benchmark contains a value [value] that does not have a default value set.
The value [value] must have a default value defined if there is no selector tag. The Benchmark
failed to upload.
A default selection must be included for items with multiple options for an element, such as a
rule.
If the item has multiple options that can be selected then you must specify the default option.
The SCAP XCCDF Benchmark [value] contains reference to a CPE platform [value] that is not
referenced in the CPE Dictionary. The SCAP XCCDF Benchmark cannot be uploaded.
The application does not recognize the CPE platform reference in the SCAP XCCDF
benchmark file.
Remove the CPE platform reference from the SCAP XCCDF benchmark file.
Review the SCAP XCCDF benchmark file to locate the infinite loop and revise the code to
correct this error.
The SCAP XCCDF Benchmark file [value] contains an item that attempts to extend another item
that does not exist, or is an illegal extension. The Benchmark cannot be uploaded.
There is an item referenced in the SCAP XCCDF benchmark file that is not included in the
Benchmark.
Revise the SCAP XCCDF benchmark file to remove the reference to the missing item or add the
item to the Benchmark.
There is a check referenced in the SCAP XCCDF benchmark file that is not included in the
Benchmark.
Revise the SCAP XCCDF benchmark file to remove the reference to the missing check or add
the check to the Benchmark.
[value] benchmark files were found within the archive, you can only upload one benchmark at a
time.
Create a separate archive for each benchmark and upload each archive to the application.
The SCAP XCCDF Benchmark Value [value] cannot be created within the policy [value].
The SCAP XCCDF benchmark file cannot be parsed due to the issue indicated at the end of
the error message.
The SCAP XCCDF item [value] does not reference a valid value [value] and the Benchmark
cannot be parsed.
Review the requirement specified in the error message to determine which item to replace.
The SCAP XCCDF Benchmark file contains a XCCDF Value [value] that has no value provided.
The Benchmark cannot be parsed.
Add a value to XCCDF value reference in the SCAP XCCDF benchmark file.
This parsing error identifies the issue preventing the SCAP OVAL file from loading.
Review the SCAP OVAL file and locate the issue listed in the error message to determine the
appropriate revision.
The application cannot find the SCAP OVAL Source file in the archive. This file must end
with -oval.xml or -patches.xml.
Verify that the SCAP OVAL Source file exists in the archive and the file name ends in the
correct format.
One of the biggest challenges to keeping your environment secure is prioritizing remediation of
vulnerabilities. If Nexpose discovers hundreds or even thousands of vulnerabilities with each
scan, how do you determine which vulnerabilities or assets to address first?
Each vulnerability has a number of characteristics that indicate how easy it is to exploit and what
an attacker can do to your environment after performing an exploit. These characteristics make
up the vulnerability’s risk to your organization.
Every asset also has risk associated with it, based on how sensitive it is to your organization’s
security. For example, if a database that contains credit card numbers is compromised, the
damage to your organization will be significantly greater than if a printer server is compromised.
The application provides several strategies for calculating risk. Each strategy emphasizes certain
characteristics, allowing you to analyze risk according to your organization’s unique security
needs or objectives. You can also create custom strategies and integrate them with the
application.
After you select a risk strategy you can use it in the following ways:
l Sort how vulnerabilities appear in Web interface tables according to risk. By sorting
vulnerabilities you can make a quick visual determination as to which vulnerabilities need your
immediate attention and which are less critical.
l View risk trends over time in reports, which allows you to track progress in your remediation
effort or determine whether risk is increasing or decreasing over time in different segments of
your network.
l Changing your risk strategy and recalculating past scan data on page 615
l Using custom risk strategies on page 617
l Changing the appearance order of risk strategies on page 619
Each risk strategy is based on a formula in which factors such as likelihood of compromise,
impact of compromise, and asset importance are calculated. Each formula produces a different
range of numeric values. For example, the Real Risk strategy produces a maximum score of
1,000, while the Temporal strategy has no upper bound, with some high-risk vulnerability scores
reaching the hundreds of thousands. This is important to keep in mind if you apply different risk
strategies to different segments of scan data. See Changing your risk strategy and recalculating
past scan data on page 615.
This strategy is recommended because you can use it to prioritize remediation for vulnerabilities
for which exploits or malware kits have been developed. A security hole that exposes your
environment to an unsophisticated exploit or an infection developed with a widely accessible
malware kit is likely to require your immediate attention. The Real Risk algorithm applies unique
exploit and malware exposure metrics for each vulnerability to CVSS base metrics for likelihood
and impact.
Specifically, the model computes a maximum impact between 0 and 1,000 based on the
confidentiality impact, integrity impact, and availability impact of the vulnerability. The impact is
multiplied by a likelihood factor that is a fraction always less than 1. The likelihood factor has an
initial value that is based on the vulnerability's initial exploit difficulty metrics from CVSS: access
vector, access complexity, and authentication requirement. The likelihood is modified by threat
exposure: likelihood matures with the vulnerability's age, growing ever closer to 1 over time. The
rate at which the likelihood matures over time is based on exploit exposure and malware
exposure. A vulnerability's risk will never mature beyond the maximum impact dictated by its
CVSS impact metrics.
The Real Risk strategy can be summarized as base impact, modified by initial likelihood of
compromise, modified by maturity of threat exposure over time. The highest possible Real Risk
score is 1,000.
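The shape of this model—a CVSS-derived impact scaled by a likelihood that matures toward 1 over time—can be sketched conceptually. The actual Real Risk algorithm is proprietary; every parameter and the maturing function below are stand-ins for illustration only:

```python
import math

def real_risk_sketch(impact, initial_likelihood, age_days, maturity_rate):
    """Conceptual sketch only, not Rapid7's formula: impact (0-1,000)
    is scaled by a likelihood that starts below 1 and grows toward 1
    as the vulnerability ages. The maturity_rate stand-in plays the
    role of exploit and malware exposure, which speed the maturing."""
    likelihood = 1 - (1 - initial_likelihood) * math.exp(-maturity_rate * age_days)
    # The score never exceeds the CVSS-derived maximum impact.
    return impact * likelihood
```

The key property the sketch preserves is that the score rises with age but stays bounded by the impact term, matching the statement that risk never matures beyond the maximum impact dictated by the CVSS impact metrics.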
TemporalPlus strategy
Like the Temporal strategy, TemporalPlus emphasizes the length of time that the vulnerability
has been known to exist. However, it provides a more granular analysis of vulnerability impact by
expanding the risk contribution of partial impact vectors.
The TemporalPlus risk strategy aggregates proximity-based impact of the vulnerability, using
confidentiality impact, integrity impact, and availability impact in conjunction with access vector.
The impact is tempered by an aggregation of the exploit difficulty metrics, which are access
complexity and authentication requirement. The risk then grows over time with the vulnerability
age.
The TemporalPlus strategy has no upper bound; some high-risk vulnerability scores reach the
hundreds of thousands.
This strategy distinguishes risk associated with vulnerabilities with “partial” impact values from
risk associated with vulnerabilities with “none” impact values for the same vectors. This is
especially important to keep in mind if you switch to TemporalPlus from the Temporal strategy,
which treats them equally. Making this switch will increase the risk scores for many vulnerabilities
already detected in your environment.
This strategy emphasizes the length of time that the vulnerability has been known to exist, so it
could be useful for prioritizing older vulnerabilities for remediation. Older vulnerabilities are
regarded as likelier to be exploited because attackers have known about them for a longer period
of time. Also, the longer a vulnerability has been in existence, the greater the chance that less
commonly known exploits exist.
The Temporal risk strategy aggregates proximity-based impact of the vulnerability, using
confidentiality impact, integrity impact, and availability impact in conjunction with access vector.
The impact is tempered by dividing by an aggregation of the exploit difficulty metrics, which are
access complexity and authentication requirement. The risk then grows over time with the
vulnerability age.
The Temporal strategy has no upper bound; some high-risk vulnerability scores reach the
hundreds of thousands.
Weighted strategy
The Weighted strategy can be useful if you assign levels of importance to sites or if you want to
assess risk associated with services running on target assets. The strategy is based primarily on
site importance, asset data, and vulnerability types, and it emphasizes the following factors:
The PCI ASV 2.0 Risk strategy applies a score based on the Payment Card Industry Data
Security Standard (PCI DSS) Version 2.0 to every discovered vulnerability. The scale ranges
from 1 (lowest severity) to 5 (highest severity). With this model, Approved Scan Vendors (ASVs)
and other users can assess risk from a PCI perspective by sorting vulnerabilities based on PCI
2.0 scores and viewing these scores in PCI reports. Also, the five-point severity scale provides a
simple way for your organization to assess risk at a glance.
You may choose to change the current risk strategy to get a different perspective on the risk in
your environment. Because making this change could cause future scans to show risk scores that
are significantly different from those of past scans, you also have the option to recalculate risk
scores for past scan data.
Doing so provides continuity in risk tracking over time. If you are creating reports with risk trend
charts, you can recalculate scores for a specific scan date range to make those scores consistent
with scores for future scans. This ensures continuity in your risk trend reporting.
For example, you may change your risk strategy from Temporal to Real Risk on December 1 to
do exposure-based risk analysis. You may want to demonstrate to management in your
organization that investment in resources for remediation at the end of the first quarter of the year
has had a positive impact on risk mitigation. So, when you select Real Risk as your strategy, you
will want to calculate Real Risk scores for all scan data since April 1.
Calculation time varies. Depending on the amount of scan data that is being recalculated, the
process may take hours. You cannot cancel a recalculation that is in progress.
Note: You can perform regular activities, such as scanning and reporting, while a recalculation is
in progress. However, if you run a report that incorporates risk scores during a recalculation, the
scores may appear to be inconsistent. The report may incorporate scores from the previously
used risk strategy as well as from the newly selected one.
To change your risk strategy and recalculate past scan data, take the following steps:
1. Click the arrow for any risk strategy on the Risk Strategies page to view information about it.
Changing your risk strategy and recalculating past scan data 615
Information includes a description of the strategy and its calculated factors, the strategy’s
source (built-in or custom), and how long it has been in use if it is the currently selected
strategy.
This allows you to see how different risk strategies have been applied to all of your scan data.
This information can help you decide exactly how much scan data you need to recalculate to
prevent gaps in consistency for risk trends. It also is useful for determining why segments of risk
trend data appear inconsistent.
Note the Status column, which indicates whether any calculations did not complete
successfully. This could help you troubleshoot inconsistent sections in your risk trend data by
running the calculations again.
3. Click the Change Audit tab to view every modification of risk strategy usage in the history of
your installation.
The table in this section lists every instance that a different risk strategy was applied, the
affected date range, and the user who made the change. This information may also be
useful for troubleshooting risk trend inconsistencies or for other purposes.
4. (Optional) Click the Export to CSV icon to export the change audit information to CSV format,
which you can use in a spreadsheet for internal purposes.
1. Click the radio button for the date range of scan data that you want to recalculate. If you select
Entire history, the scores for all of your data since your first scan will be recalculated.
2. Click Save.
Using custom risk strategies
You may want to calculate risk scores with a custom strategy that analyzes risk from perspectives
that are very specific to your organization’s security goals. You can create a custom strategy and
use it in Nexpose.
Each risk strategy is an XML document. It requires the RiskModel element, which contains the
id attribute, a unique internal identifier for the custom strategy.
- name: This is the name of the strategy as it will appear in the Risk Strategies page of the Web
interface. The datatype is xs:string.
- description: This is the description of the strategy as it will appear in the Risk Strategies page
of the Web interface. The datatype is xs:string.
Note: The Rapid7 Professional Services Organization (PSO) offers custom risk scoring
development. For more information, contact your account manager.
<RiskModel id="custom_risk_strategy">
<name>
</name>
<description>
</description>
<VulnerabilityRiskStrategy>
[formula]
</VulnerabilityRiskStrategy>
</RiskModel>
To make a custom risk strategy available in Nexpose, copy the strategy's XML file to the
following directory: [installation_directory]/shared/riskStrategies/custom/global.
The custom strategy appears at the top of the list on the Risk Strategies page.
To set the order for a risk strategy, add the optional order sub-element with a number greater
than 0 specified, as in the following example. Specifying a 0 would cause the strategy to appear
last.
<RiskModel id="janes_risk_strategy">
<description>
</description>
<order>1</order>
<VulnerabilityRiskStrategy>
[formula]
</VulnerabilityRiskStrategy>
</RiskModel>
You can change the order of how risk strategies are listed on the Risk Strategies page. This could
be useful if you have many strategies listed and you want the most frequently used ones listed
near the top. To change the order, you assign an order number to each individual strategy using
the optional order element in the risk strategy’s XML file. This is a sub-element of the
RiskModel element. See Using custom risk strategies on page 617.
1. Open the desired risk strategy XML file, which appears in one of the following directories:
For example: Three people in your organization create custom risk strategies: Jane’s Risk
Strategy, Tim’s Risk Strategy, and Terry’s Risk Strategy. You can assign each strategy an order
number. You can also assign order numbers to built-in risk strategies.
Note: The order of built-in strategies will be reset to the default order with every product update.
Custom strategies always appear above built-in strategies. So, if you assign the same number to
a custom strategy and a built-in strategy, or even if you assign a lower number to a built-in
strategy, custom strategies always appear first.
If you do not assign a number to a risk strategy, it will appear at the bottom of its respective group
(custom or built-in). For example, suppose one custom strategy and two built-in strategies are
each numbered 1, while another custom strategy, Tim’s, has a higher number than the two
numbered built-in strategies. Tim’s strategy still appears above the built-in strategies, because
custom strategies always precede built-in ones.
An asset goes through several phases of scanning before it has a status of completed for that
scan. An asset that has not gone through all the required scan phases has a status of in progress.
Nexpose only calculates risk scores based on data from assets with completed scan status.
If a scan pauses or stops, the application does not use results from assets that do not have
completed status in the computation of risk scores. For example: 10 assets are scanned in
parallel. Seven have completed scan status; three do not. The scan is stopped. Risk is calculated
based on the results for the seven assets with completed status. For the three in-progress assets,
it uses data from the last completed scan.
To determine scan status, consult the scan log. See Viewing the scan log on page 215.
The Risk Score Adjustment setting allows you to customize your assets’ risk score calculations
according to the business context of the asset. For example, if you have set the Very High
criticality level for assets belonging to your organization’s senior executives, you can configure
the risk score adjustment so that those assets will have higher risk scores than they would have
otherwise. You can specify modifiers for your user-applied criticality levels that will affect the
asset risk score calculations for assets with those levels set.
Note that you must enable Risk Score Adjustment for the criticality levels to be taken into account
in calculating the risk score; it is not enabled by default.
1. On the Administration page, in Global and Console Settings, click the Manage link for global
settings.
2. In the Global Settings page, select Risk Score Adjustment.
3. Select Adjust asset risk scores based on criticality.
4. Change any of the modifiers for the listed criticality levels, per the constraints listed below.
Constraints:
- Very High: 2
- High: 1.5
- Medium: 1
- Low: 0.75
- Very Low: 0.5
The Risk Strategy and Risk Score Adjustment are independent factors that both affect the risk
score.
To calculate the risk score for an individual asset, Nexpose uses the algorithm corresponding to
the selected risk strategy. If Risk Score Adjustment is set and the asset has a criticality tag
applied, the application then multiplies the risk score determined by the risk strategy by the
modifier specified for that criticality tag.
Both the original and context-driven risk scores are displayed for an individual asset.
The risk score for a site or asset group is based on the context-driven risk scores of the assets in it.
If Risk Score Adjustment is enabled, nearly every risk score you see in your Nexpose installation
will be the context-driven risk score that takes into account the risk strategy and the risk score
adjustment. The one exception is the Original risk score available on the page for a selected
asset. The Original risk score takes into account the risk strategy but not the risk score
adjustment. Note that the values displayed are rounded to the nearest whole number, but the
calculations are performed on more specific values. Therefore, the context-driven risk score
shown may not be the exact product of the displayed original risk score and the multiplier.
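As a sketch of the adjustment described above, the context-driven score is the strategy-based score multiplied by the criticality modifier, then rounded for display. The score value here is an assumption for illustration:

```shell
# Hypothetical asset: original (strategy-based) risk score of 1234.56,
# tagged Very High criticality, which carries a default modifier of 2.
original=1234.56
modifier=2

# Context-driven score = original score x criticality modifier,
# rounded to the nearest whole number for display.
adjusted=$(awk -v o="$original" -v m="$modifier" 'BEGIN { printf "%.0f", o * m }')
echo "$adjusted"   # prints 2469
```

Note that 1234.56 × 2 = 2469.12, so the displayed context-driven score (2469) is not the exact product of the displayed original score and the multiplier.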
When you first apply a criticality tag to an asset, the context-driven risk score on the page for that
asset should update very quickly. There will be a slight delay in recalculating the risk scores for
any sites or asset groups that include that asset.
If you develop custom fingerprints, you can have the Security Console distribute them
automatically to any paired Scan Engine that is currently in use when a scan is run. To do so,
simply copy the fingerprint files to the [installation_directory]/plugins/fp/custom/ directory on your
Security Console host.
You do not have to restart the Security Console afterward. The NSC.log file, located in the
[installation_directory]/nsc/logs/ directory, will display a message indicating the location and
number of the newly added fingerprints.
Custom fingerprint XML files must meet certain formatting criteria in order to work properly, as in
the following example:
<?xml version="1.0"?>
<fingerprints matches="ssh.banner">
<fingerprint pattern="^RomSShell_([\d\.]+)$">
<description>Allegro RomSShell SSH</description>
<example service.version="4.62">RomSShell_4.62</example>
<param pos="0" name="service.vendor" value="Allegro"/>
<param pos="0" name="service.product" value="RomSShell"/>
<param pos="1" name="service.version"/>
</fingerprint>
</fingerprints>
The first element is a fingerprints block with a matches attribute indicating what data the fingerprint
file is intended to match.
Every fingerprint contains a pattern attribute with the regular expression to match against the
data.
An optional flags attribute controls how the regular expression is to be interpreted. See the
Recog documentation for FLAG_MAP for more information.
At least one example element is included, though multiple example elements are preferable.
These elements are used in test coverage present in rspec, which validates that the provided
data matches the specified regular expression. Additionally, if the fingerprint is using the param
elements to extract field values from the data, you can add these expected extractions as
attributes for the example elements. In the preceding example the string
<example service.version="4.62">RomSShell_4.62</example>
tests that RomSShell_4.62 matches the provided regular expression and that the value of
service.version is 4.62.
Each param element contains a pos attribute that indicates which capture field from the pattern
should be extracted, or 0 for a static string.
The name attribute is the key that will be reported in a successful match. The value attribute is
either a static string (for pos values of 0) or omitted, in which case the value is taken from the
captured field.
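To illustrate the extraction described above, the following sketch mimics what the sample fingerprint's pos="1" param does with a RomSShell banner. The sed command is an illustration only, not how Recog itself is implemented:

```shell
# The sample fingerprint's pattern ^RomSShell_([\d\.]+)$ captures the
# version string as field 1; sed reproduces that capture here.
banner="RomSShell_4.62"
version=$(printf '%s' "$banner" | sed -nE 's/^RomSShell_([0-9.]+)$/\1/p')

# param pos="1" name="service.version" would report this key/value pair:
echo "service.version=$version"   # prints service.version=4.62
```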
Create a single fingerprint for each product as long as the pattern remains clear and readable. If
that is not possible, separate the pattern logically into additional fingerprints.
Create regular expressions that allow flexible version number matching. This ensures greater
probability of matching a product. For example, all known public releases of a product report
either major.minor or major.minor.build format version numbers. If the fingerprint strictly matches
this version number format, it would fail to match a modified build of the product that reports only
a major version number format.
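For example, a flexible capture in the style of ([0-9.]+) matches all three release formats mentioned above. The product name and banners below are made-up examples:

```shell
# One flexible pattern matches major, major.minor, and major.minor.build
# version formats alike, where a strict major.minor pattern would miss two.
versions=$(for banner in Product_4 Product_4.6 Product_4.6.2; do
  printf '%s\n' "$banner" | sed -nE 's/^Product_([0-9.]+)$/\1/p'
done)
echo $versions   # prints: 4 4.6 4.6.2
```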
You can test fingerprints from the command line by executing bin/recog_verify against the
fingerprint file:
$ bin/recog_verify xml/ssh_banners.xml
This section provides useful information and tools to help you get optimal use out of the
application.
Scan templates on page 639: This section lists all built-in scan templates and their settings. It
provides suggestions for when to use each template.
Report templates and sections on page 644: This section lists all built-in report templates and the
information that each contains. It also lists and describes report sections that make up document
report templates and data fields that make up CSV export templates. This information is useful
for configuring custom report templates.
Performing configuration assessment on page 637: This section describes how you can use the
application to verify compliance with configuration security standards such as USGCB and CIS.
Using regular expressions on page 633: This section provides tips on using regular expressions
in various activities, such as configuring scan authentication on Web targets.
Using Exploit Exposure on page 636: This section describes how the application integrates
exploitability data for vulnerabilities.
Glossary on page 669: This section lists and defines terms used and referenced in the
application.
Resources 626
Finding out what features your license supports
Some features of the application are only available with certain licenses. To determine whether
your license supports a particular feature, go to the Licensing page: the features that your license
enables are marked with a green check mark.
You can choose whether to link assets in different sites or treat them as unique entities. By linking
matching assets in different sites, you can view and report on your assets in a way that aligns with
your network configuration and reflects your asset counts across the organization. Below is some
information to help you decide whether to enable this option.
Option 1
A corporation operates a chain of retail stores, each with the same network mapping, so it has
created a site for each store. It does not link assets across sites, because each site reflects a
unique group of assets.
Option 2
A corporation has a global network with a unique configuration in each location. It has created
sites to focus on specific categories, and these categories may overlap. For example, a Linux
server may be in one site called Finance and another called Ubuntu machines. The corporation
links assets across sites so that in investigations and reporting, it is easier to recognize the
Linux server as a single machine.
An asset is a set of proprietary, unique data gathered from a target device during a scan. This
data, which distinguishes the scanned device when integrated into Nexpose, includes the
following:
- IP address
- host name
- MAC address
- vulnerabilities
- risk score
- user-applied tags
- site membership
- asset ID (a unique identifier applied by Nexpose when the asset information is integrated into
the database)
If the option to link assets across sites is disabled, Nexpose regards each asset as distinct from
any other asset in any other site, whether or not a given asset in another site is likely to be the
same device.
For example, an asset named server1.example.com, with an IP address of 10.0.0.1 and a MAC
address of 00:0a:95:9d:68:16 is part of one site called Boston and another site called PCI targets.
Because this asset is in two different sites, it has two unique asset IDs, one for each site, and thus
is regarded as two different entities.
Note: Assets are considered matching if they have certain proprietary characteristics in common,
such as host name, IP address, and MAC address.
If the option to link assets across sites is enabled, Nexpose determines whether assets in
different sites match, and if they do, treats the matching assets as a single entity.
The information below describes some considerations to take into account when deciding
whether to enable this option.
Use Cases
You have two choices when adding assets to your site configurations: you can link matching
assets across sites, or you can treat them as unique entities.
Security considerations
- Once assets are linked across sites, users will have a unified view of an asset. Access to an
asset will be determined by factors other than site membership. If this option is enabled, and a
user has access to an asset through an asset group, for instance, that user will have access to
all information about that asset from any source, whether or not the user has access to the
source itself. For example, the user will have access to data from scans in sites to which they
do not have access, from discovery connections, from Metasploit, or from other means of
collecting information about the asset.
Site-level controls
- With this option enabled, vulnerability exceptions cannot be created at the site level through
the user interface at this time. They can be created at the site level through the API. Site-level
exceptions created before the option was enabled will continue to apply.
- When this option is enabled, you will have two distinct options for removing an asset:
  - Removing an asset from a site breaks the link between the site and the asset, but the
  asset is still available in other sites in which it was already present. However, if the
  asset is only in one site, it will be deleted from the entire workspace.
  - Deleting an asset deletes it from throughout your workspace in the application.
Transition considerations
- Disabling asset linking after it has been enabled will result in each asset being assigned to the
site in which it was first scanned, which means that each asset’s data will be in only one site.
To preserve the possibility of returning to your previous scan results, back up your application
database before enabling the feature.
- The links across sites will be created over time, as assets are scanned. During the transition
period, until you have scanned all assets, some will be linked across sites and others will not.
Your risk score may also vary during this period.
- You may notice that some assets do not update with scans over time. As you scan, new data
for an asset will link with the most recently scanned asset. For example, if an asset with
IP address 10.0.0.1 is included in both the Boston and the PCI targets sites, the latest scan
data will link with one of those assets and continue to update that asset with future scans. The
non-linked, older asset will not appear to update with future scans. The internal logic for
selecting which older asset is linked depends on a number of factors, such as scan
authentication and the amount of information collected on each "version" of the asset.
- Your site risk scores will likely decrease over time because the score will be multiplied by
fewer assets.
Note: The cross-site asset linking feature is enabled by default for new installations as of the April
8, 2015, product update.
To disable linking so that matching assets in different sites are considered unique:
1. Review the above considerations. Also note that removing the links will take some time.
2. Log in to the application as a Global Administrator.
3. Go to the Administration page.
4. Under Global and Console Settings, next to Console, select Manage.
5. Select Asset Linking.
6. Clear the check box for Link all matching assets in all sites.
7. Click Save under Global Settings.
A regular expression, also known as a “regex,” is a text pattern used to search for a piece of
information or for a message that an application displays in a given situation. Regex notation
patterns can include letters, numbers, and special characters, such as dots, question marks, plus
signs, parentheses, and asterisks. These patterns instruct a search application not only what
string to search for, but how to search for it. The application uses regular expressions in several
activities:
- searching for file names on local drives; see How the file name search works with regex on
page 633
- searching for certain results of logon attempts to Telnet servers; see Configuring scans of
Telnet servers on page 581
- determining if a logon attempt to a Web server is successful; see How to use regular
expressions when logging on to a Web site on page 635
A regex can be a simple pattern consisting of characters for which you want to find a direct match.
For example, the pattern nap matches character combinations in strings only when exactly the
characters n, a, and p occur together and in that exact sequence. A search on this pattern would
return matches with strings such as snap and synapse. In both cases the match is with the
substring nap. There is no match in the string an aperture because it does not contain the
substring nap.
When a search requires a result other than a direct match, such as one or more n's or white
space, the pattern requires special characters. For example, the pattern ab*c matches any
character combination in which a single a is followed by 0 or more bs and then immediately
followed by c. The asterisk indicates 0 or more occurrences of the preceding character. In the
string cbbabbbbcdebc, the pattern matches the substring abbbbc.
The asterisk is one example of how you can use a special character to modify a search. You can
create various types of search parameters using other single and combined special characters.
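The patterns discussed above can be tried out with grep; the test strings come straight from the examples:

```shell
# Direct match: "nap" matches inside both "snap" and "synapse" ...
echo "snap synapse" | grep -o "nap"                     # prints "nap" twice
# ... but not inside "an aperture", which lacks the contiguous substring.
echo "an aperture" | grep -q "nap" || echo "no match"   # prints "no match"

# Special character: in ab*c, the asterisk means 0 or more b's, so the
# pattern matches the substring "abbbbc" within the test string.
m=$(echo "cbbabbbbcdebc" | grep -oE "ab*c")
echo "$m"   # prints "abbbbc"
```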
Nexpose searches for matching files by comparing the search string against the entire directory
path and file name. See Configuring file searches on target systems on page 583. Files and
directories appear in the results table if they have any greedy matches against the search pattern.
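As a sketch of this behavior, with a made-up path, the comparison runs against the entire directory path plus file name, not just the base name:

```shell
# Because the search string is compared against the full path, a pattern
# that mentions a directory name still matches a file inside that directory.
path="/var/www/html/config.bak"
printf '%s' "$path" | grep -qE 'html/.*\.bak' && echo "match"   # prints "match"
```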
When Nexpose makes a successful attempt to log on to a Web application, the Web server
returns an HTML page that a user typically sees after a successful logon. If the logon attempt
fails, the Web server returns an HTML page with a failure message, such as “Invalid password.”
Configuring the application to log on to a Web application with an HTML form or HTTP headers
involves specifying a regex for the failure message. During the logon process, it attempts to
match the regex against the HTML page with the failure message. If there is a match, the
application recognizes that the attempt failed. It then displays a failure notification in the scan logs
and in the Security Console Web interface. If there is no match, the application recognizes that
the attempt was successful and proceeds with the scan.
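A minimal sketch of that check, assuming a made-up failure page and the failure regex Invalid password:

```shell
# HTML returned by the web server after a failed logon (assumed content).
page='<html><body>Invalid password. Please try again.</body></html>'

# If the failure-message regex matches the page, the logon attempt failed;
# if there is no match, the logon is treated as successful.
if printf '%s' "$page" | grep -qE 'Invalid password'; then
  result="logon failed"
else
  result="logon succeeded"
fi
echo "$result"   # prints "logon failed"
```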
With Nexpose Exploit Exposure™, you can now use the application to target specific
vulnerabilities for exploits using the Metasploit exploit framework. Verifying vulnerabilities through
exploits helps you to focus remediation tasks on the most critical gaps in security.
For each discovered vulnerability, the application indicates whether there is an associated exploit
and the required skill level for that exploit. If a Metasploit exploit is available, the console displays
the Metasploit icon and a link to a Metasploit module that provides detailed exploit information.
On a logistical level, exploits can provide critical access to operating systems, services, and
applications for penetration testing.
Also, exploits can afford better visibility into network security, which has important implications for
different stakeholders within your organization:
- Penetration testers and security consultants use exploits as compelling proof that security
flaws truly exist in a given environment, eliminating any question of a false positive. Also, the
data they collect during exploits can provide a great deal of insight into the seriousness of the
vulnerabilities.
- Senior managers demand accurate security data that they can act on with confidence. False
positives can cause them to allocate security resources where they are not needed. On the
other hand, if they refrain from taking action on reported vulnerabilities, they may expose the
organization to serious breaches. Managers also want metrics to help them determine
whether or not security consultants and vulnerability management tools are good
investments.
- System administrators who view vulnerability data for remediation purposes want to be able
to verify vulnerabilities quickly. Exploits provide the fastest proof.
Performing regular audits of configuration settings on your assets may be mandated in your
organization. Whether you work for a United States government agency, a company that does
business with the federal government, or a company with strict security rules, you may need to
verify that your assets meet a specific set of configuration standards. For example, your company
may require that all of your workstations lock out users after a given number of incorrect logon
attempts.
Like vulnerability scans, policy scans are useful for gauging your security posture. They help to
verify that your IT department is following secure configuration practices. Using the application,
you can scan your assets as part of a configuration assessment audit. A license-enabled feature
named Policy Manager provides compliance checks for several configuration standards:
USGCB policies
USGCB 2.0 is not an “update” of 1.0. The two versions are considered separate entities. For that
reason, the application includes USGCB 1.0 checks in addition to those of the later version. For
more information, go to usgcb.nist.gov.
FDCC policies
The Federal Desktop Core Configuration (FDCC) preceded USGCB as the U.S. government-
mandated set of configuration standards. For more information, go to fdcc.nist.gov.
CIS benchmarks
Configure a site with a scan template that includes Policy Manager checks. Depending on your
license, the application provides built-in USGCB, FDCC, and CIS templates. These templates do
not include vulnerability checks. If you prefer to run a combined vulnerability/policy scan, you
can configure a custom scan template that includes vulnerability checks and Policy Manager
policies or benchmarks. See the following sections for more information:
To verify that your license enables Policy Manager and includes the specific checks that you want
to run, go to the Licensing page on the Security Console Configuration panel. See Viewing,
activating, renewing, or changing your license in the administrator's guide.
For a complete list of platforms that are covered by Policy Manager checks, go to the
Rapid7 Community at https://community.rapid7.com/docs/DOC-2061.
Go to the Policies page, where you can view results of policy scans, including those of individual
rules that make up policies. You can also override rule results. See Working with Policy Manager
results on page 287.
You can customize policy checks based on Policy Manager checks. See Creating a custom
policy on page 589.
This appendix lists all built-in scan templates available in Nexpose. It provides a description for
each template and suggestions for when to use it.
CIS
This template incorporates the Policy Manager scanning feature for verifying compliance with
Center for Internet Security (CIS) benchmarks. The scan runs application-layer audits. Policy
checks require authentication with administrative credentials on targets. Vulnerability checks are
not included.
DISA
This scan template performs Defense Information Systems Agency (DISA) policy compliance
tests with application-layer auditing on supported DISA-benchmarked systems. Policy checks
require authentication with administrative credentials on targets. Vulnerability checks are not
included. Only default ports are scanned.
Denial of service
This basic audit of all network assets uses both safe and unsafe (denial-of-service) checks. This
scan does not include in-depth patch/hotfix checking, policy compliance checking, or application-
layer auditing. You can run a denial of service scan in a preproduction environment to test the
resistance of assets to denial-of-service conditions.
Discovery scan
This scan locates live assets on the network and identifies their host names and operating
systems. This template does not include enumeration, policy, or vulnerability scanning.
You can run a discovery scan to compile a complete list of all network assets. Afterward, you can
target subsets of these assets for intensive vulnerability scans, such as with the Exhaustive scan
template.
Discovery scan (aggressive)
This fast, cursory scan locates live assets on high-speed networks and identifies their host names
and operating systems. The system sends packets at a very high rate, which may trigger IPS/IDS
sensors and SYN flood protection and may exhaust states on stateful firewalls. This template does not
perform enumeration, policy, or vulnerability scanning.
This template is identical in scope to the discovery scan, except that it uses more threads and is,
therefore, much faster. The trade-off is that scans run with this template may not be as thorough
as with the Discovery scan template.
Exhaustive
This thorough network scan of all systems and services uses only safe checks, including
patch/hotfix inspections, policy compliance assessments, and application-layer auditing. This
scan could take several hours, or even days, to complete, depending on the number of target
assets.
Scans run with this template are thorough, but slow. Use this template to run intensive scans
targeting a low number of assets.
FDCC
This template incorporates the Policy Manager scanning feature for verifying compliance with all
Federal Desktop Core Configuration (FDCC) policies. The scan runs application-layer audits on
all Windows XP and Windows Vista systems. Policy checks require authentication with
administrative credentials on targets. Vulnerability checks are not included. Only default ports are
scanned.
If you work for a U.S. government organization or a vendor that serves the government, use this
template to verify that your Windows Vista and XP systems comply with FDCC policies.
Full audit
This full network audit of all systems uses only safe checks, including network-based
vulnerabilities, patch/hotfix checking, and application-layer auditing. The system scans only
default ports and disables policy checking, which makes scans faster than with the Exhaustive
scan. Also, this template does not check for potential vulnerabilities.
Full audit without Web Spider
This full network audit uses only safe checks, including network-based vulnerabilities,
patch/hotfix checking, and application-layer auditing. The system scans only default ports and
disables policy checking, which makes scans faster than with the Exhaustive scan. It also does
not include the Web spider, which makes it faster than the full audit that does include it. Also, this
template does not check for potential vulnerabilities.
This is the default scan template. Use it to run a fast vulnerability scan right “out of the box.”
HIPAA compliance
This template uses safe checks in this audit of compliance with HIPAA section 164.312
(“Technical Safeguards”). The scan will flag any conditions resulting in inadequate access
control, inadequate auditing, loss of integrity, inadequate authentication, or inadequate
transmission security (encryption).
Internet DMZ audit
This penetration test covers all common Internet services, such as Web, FTP, mail
(SMTP/POP/IMAP/Lotus Notes), DNS, database, Telnet, SSH, and VPN. This template does
not include in-depth patch/hotfix checking and policy compliance audits.
Linux RPMs
This scan verifies proper installation of RPM patches on Linux systems. For best results, use
administrative credentials.
Use this template to scan assets running the Linux operating system.
Microsoft hotfix
This scan verifies proper installation of hotfixes and service packs on Microsoft Windows
systems. For optimum success, use administrative credentials.
Use this template to verify that assets running Windows have hotfix patches installed on them.
Payment Card Industry (PCI) audit
This audit of Payment Card Industry (PCI) compliance uses only safe checks, including network-
based vulnerabilities, patch /hotfix verification, and application-layer testing. All TCP ports and
well-known UDP ports are scanned. Policy checks are not included.
This template should be used by an Approved Scanning Vendor (ASV) to scan assets as part of
a PCI compliance program. For your internal PCI discovery scans, use the PCI Internal audit
template.
PCI internal audit
This template is intended for discovering vulnerabilities in accordance with the Payment Card
Industry (PCI) Data Security Standard (DSS) requirements. It includes all network-based
vulnerabilities and web application scanning. It specifically excludes potential vulnerabilities as
well as vulnerabilities specific to the external perimeter.
This template is intended for your organization's internal scans for PCI compliance purposes.
This in-depth scan of all systems uses only safe checks. Host-discovery and network penetration
features allow the system to dynamically detect assets that might not otherwise be detected. This
template does not include in-depth patch/hotfix checking, policy compliance checking, or
application-layer auditing.
With this template, you may discover assets that are out of your initial scan scope. Also, running a
scan with this template is helpful as a precursor to conducting formal penetration test procedures.
This non-intrusive scan of all network assets uses only safe checks. This template does not
include in-depth patch/hotfix checking, policy compliance checking, or application-layer auditing.
This is a safe-check Sarbanes-Oxley (SOX) audit of all systems. It detects threats to digital data
integrity, data access auditing, accountability, and availability, as mandated in Section 302
(“Corporate Responsibility for Fiscal Reports”), Section 404 (“Management Assessment of
Internal Controls”), and Section 409 (“Real Time Issuer Disclosures”) respectively.
SCADA audit
This is a “polite,” or less aggressive, network audit of sensitive Supervisory Control And Data
Acquisition (SCADA) systems, using only safe checks. Packet block delays have been
increased; time between sent packets has been increased; protocol handshaking has been
disabled; and simultaneous network access to assets has been restricted.
USGCB
This template incorporates the Policy Manager scanning feature for verifying compliance with all
United States Government Configuration Baseline (USGCB) policies. The scan runs application-
layer audits on all Windows 7 systems. Policy checks require authentication with administrative
credentials on targets. Vulnerability checks are not included. Only default ports are scanned.
If you work for a U.S. government organization or a vendor that serves the government, use this
template to verify that your Windows 7 systems comply with USGCB policies.
This audit of all Web servers and Web applications is suitable for public-facing and internal assets,
including application servers, ASPs, and CGI scripts. The template does not include patch
checking or policy compliance audits. Nor does it scan FTP servers, mail servers, or database
servers, as is the case with the DMZ Audit scan template.
Use this appendix to help you select the right built-in report template for your needs. You can
also learn about the individual sections or data fields that make up report templates, which is
helpful for creating custom templates.
Creating custom document templates enables you to include as much, or as little, information in
your reports as your needs dictate. For example, if you want a report that only lists all assets
organized by risk level, a custom report might be the best solution. This template would include
only that section. Or, if you want a report that only lists vulnerabilities, create a template with only
that section.
The Asset Report Format (ARF) XML template organizes data for submission of policy and
benchmark scan results to the U.S. Government for SCAP 1.2 compliance.
Of all the built-in templates, the Audit Report is the most comprehensive in scope. You can use it to
provide a detailed look at the state of security in your environment.
The Audit Report template provides a great deal of granular information about discovered
assets:
- host names and IP addresses
- discovered services, including ports, protocols, and general security issues
- risk scores, depending on the scoring algorithm selected by the administrator
- users and asset groups associated with the assets
- discovered databases*
- discovered files and directories*
- results of policy evaluations performed*
- spidered Web sites*
- affected assets
- vulnerability descriptions
- severity levels
- references and links to important information sources, such as security advisories
- general solution information
Additionally, the Audit Report template includes charts with general statistics on discovered
vulnerabilities and severity levels.
* To gather this “deep” information, the application must have logon credentials for the target
assets. An Audit Report based on a non-credentialed scan will not include this information. Also,
the application must have policy testing enabled in the scan template configuration.
Note that the Audit Report template is different from the PCI Audit template. See PCI Audit
(legacy) on page 650.
- Cover Page
- Discovered Databases
- Discovered Files and Directories
- Discovered Services
- Discovered System Information
- Discovered Users and Groups
- Discovered Vulnerabilities
- Executive Summary
- Policy Evaluation
- Spidered Web Site Structure
- Vulnerability Report Card by Node
Baseline Comparison
You can use the Baseline Comparison template to observe security-related trends or to assess the results
of a scan as compared with the results of a previous scan that you are using as a baseline, as in
the following examples.
l You may use the first scan that you performed on a site as a baseline. Being the first scan, it
may have revealed a high number of vulnerabilities that you subsequently remediated.
Comparing current scan results to those of the first scan will help you determine how effective
your remediation work has been.
l You may use a scan that revealed an especially low number of vulnerabilities as a benchmark
of good security “health”.
l You may use the last scan preceding the current one to verify whether a certain patch
removed a vulnerability in that scan.
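A baseline comparison amounts to a set difference between the findings of two scans. The following sketch illustrates the idea using simple sets of vulnerability identifiers; the data shapes and function name are hypothetical, not the product's internal model:

```python
def compare_scans(baseline, current):
    """Classify vulnerability findings as new, remediated, or unchanged
    by comparing two sets of identifiers (illustrative sketch only)."""
    return {
        "new": sorted(current - baseline),
        "remediated": sorted(baseline - current),
        "unchanged": sorted(baseline & current),
    }

result = compare_scans({"CVE-2012-1823", "CVE-2014-0160"},
                       {"CVE-2014-0160", "CVE-2014-6271"})
print(result["new"])         # ['CVE-2014-6271']
print(result["remediated"])  # ['CVE-2012-1823']
```

Findings present only in the current scan are "new"; findings present only in the baseline are candidates for having been remediated.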
Trending information indicates changes discovered during the scan.
The Baseline Comparison report template includes the following sections:
- Cover Page
- Executive Summary
Executive Overview
You can use the Executive Overview template to provide a high-level snapshot of security data. It
includes general summaries and charts of statistical data related to discovered vulnerabilities and
assets.
Note that the Executive Overview template is different from the PCI Executive Overview. See
PCI Executive Overview (legacy) on page 650.
- Baseline Comparison
- Cover Page
- Executive Summary
- Risk Trends
The Highest Risk Vulnerabilities template lists the top 10 discovered vulnerabilities according to
risk level. This template is useful for targeting the biggest threats to security as priorities for
remediation.
Each vulnerability is listed with risk and CVSS scores, as well as references and links to important
information sources.
- Cover Page
- Highest Risk Vulnerability Details
- Table of Contents
With this template you can view assets that were discovered in scans within a specified time
period. It is useful for tracking changes to your asset inventory. In addition to general information
about each asset, the report lists risk scores and indicates whether assets have vulnerabilities
with associated exploits or malware kits.
This is one of three PCI-mandated report templates to be used by ASVs for PCI scans as of
September 1, 2010.
The PCI Attestation of Compliance is a single page that serves as a cover sheet for the
completed PCI report set.
In the top left area of the page is a form for entering the customer’s contact information. If the
ASV added scan customer organization information in the site configuration on which the scan
data is based, the form will be auto-populated with that information. See Including organization
information in a site in the user's guide or Help. In the top right area is a form with auto-populated
fields for the ASV’s information.
The Scan Status section lists a high-level summary of the scan, including whether the overall
result is a Pass or Fail, some statistics about what the scan found, the date the scan was
completed, and scan expiration date, which is the date after which the results are no longer valid.
In this section, the ASV must note the number of components left out of the scope of the scan.
Two separate statements appear at the bottom. The first is for the customer to attest that the
scan was properly scoped and that the scan result applies only to the external vulnerability scan
requirement of the PCI Data Security Standard (DSS). It includes the attestation date and an
indicated area to fill in the customer’s name.
To support auto-population of these fields, you must create appropriate settings in the
oem.xml configuration file. See the ASV guide, which you can request from Technical
Support.
This is one of two reports no longer used by ASVs in PCI scans as of September 1, 2010. It
provides detailed scan results, ranking each discovered vulnerability according to its Common
Vulnerability Scoring System (CVSS) ranking.
Note that the PCI Audit template is different from the Audit Report template. See Audit Report
on page 646.
The PCI Audit (Legacy) report template includes the following sections:
- Cover Page
- Payment Card Industry (PCI) Scanned Hosts/Networks
- Payment Card Industry (PCI) Vulnerability Details
- Payment Card Industry (PCI) Vulnerability Synopsis
- Table of Contents
- Vulnerability Exceptions
This is one of two reports no longer used by ASVs in PCI scans as of September 1, 2010. It
provides high-level scan information.
Note that the PCI Executive Overview template is different from the template PCI Executive
Summary. See PCI Executive Summary on page 651.
- Cover Page
- Payment Card Industry (PCI) Executive Summary
- Table of Contents
This is one of three PCI-mandated report templates to be used by ASVs for PCI scans as of
September 1, 2010.
The PCI Executive Summary begins with a Scan Information section, which lists the dates that
the scan was completed and on which it expires. This section includes the auto-populated ASV
name and an area to fill in the customer’s company name. If the ASV added scan customer
organization information in the site configuration on which the scan data is based, the customer’s
company name will be auto-populated. See Getting started: Info & Security on page 58.
The Component Compliance Summary section lists each scanned IP address with a Pass or Fail
result.
The Asset and Vulnerabilities Compliance Overview section includes charts that provide
compliance statistics at a glance.
The Vulnerabilities Noted for each IP Address section includes a table listing each discovered
vulnerability with a set of attributes including PCI severity, CVSS score, and whether the
vulnerability passes or fails the scan. The assets are sorted by IP address. If the ASV marked a
vulnerability for exception in the application, the exception is indicated here. The column labeled
Exceptions, False Positives, or Compensating Controls in the PCI Executive Summary
report is auto-populated with the user name of the individual who excluded a given vulnerability.
In the concluding section, Special Notes, ASVs must disclose the presence of any software that
may pose a risk due to insecure implementation, rather than an exploitable vulnerability. The
notes should include the following information:
Any instance of remote access software or directory browsing is automatically noted. ASVs
must add any information pertaining to point-of-sale terminals and absence of
The PCI Executive Summary report template includes the following sections:
This template provides detailed, sorted scan information about each asset, or host, covered in a
PCI scan. This perspective allows a scanned merchant to consume, understand, and address all
the PCI-related issues on an asset-by-asset basis. For example, it may be helpful to note that a
non-PCI-compliant asset may have a number of vulnerabilities specifically related to its operating
system or a particular network communication service running on it.
The PCI Host Details report template includes the following sections:
This is one of three PCI-mandated report templates to be used by ASVs for PCI scans as of
September 1, 2010.
The PCI Vulnerability Details report begins with a Scan Information section, which lists the dates
that the scan was completed and on which it expires. This section includes the auto-populated
ASV name and an area to fill in the customer's company name.
Note: The PCI Vulnerability Details report takes into account approved vulnerability exceptions
to determine compliance status for each vulnerability instance.
The Vulnerability Details section includes statistics and descriptions for each discovered
vulnerability, including affected IP address, Common Vulnerability Enumeration (CVE) identifier,
CVSS score, PCI severity, and whether the vulnerability passes or fails the scan. Vulnerabilities
are grouped by severity level, and within each grouping vulnerabilities are listed according to
CVSS score.
The PCI Vulnerability Details report template includes the following sections:
Policy Evaluation
The Policy Evaluation displays the results of policy evaluations performed during scans.
The application must have proper logon credentials in the site configuration and policy testing
enabled in the scan template configuration. See Establishing scan credentials and Modifying and
creating scan templates in the administrator's guide.
Note that this template provides a subset of the information in the Audit Report template.
- Cover Page
- Policy Evaluation
Remediation Plan
The Remediation Plan template provides detailed remediation instructions for each discovered
vulnerability. Note that the report may provide solutions for a number of scenarios in addition to
the one that specifically applies to the affected target asset.
- Cover Page
- Discovered System Information
- Remediation Plan
- Risk Assessment
The Report Card template is useful for finding out whether, and how, vulnerabilities have been
verified. The template lists information about the test that Nexpose performed for each
vulnerability on each asset. Possible test results include the following:
- not vulnerable
- not vulnerable version
- exploited
For any vulnerability that has been excluded from reports, the test result will be the reason for the
exclusion, such as acceptable risk.
- Cover Page
- Index of Vulnerabilities
- Vulnerability Report Card by Node
Note: The Top 10 Assets by Vulnerability Risk and Top 10 Assets by Vulnerabilities report
templates do not contain individual sections that can be applied to custom report templates.
The Top 10 Assets by Vulnerability Risk report lists the 10 assets with the highest risk scores. For more
information about ranking, see Viewing active vulnerabilities on page 259.
This report is useful for prioritizing your remediation efforts by providing your remediation team
with an overview of the assets in your environment that pose the greatest risk.
The Top 10 Assets by Vulnerabilities report lists the 10 assets in your organization that have the
most vulnerabilities. This report does not account for cumulative risk.
You can use this report to view the most vulnerable services to determine if services should be
turned off to reduce risk. This report is also useful for prioritizing remediation efforts by listing the
assets that have the most vulnerable services.
The Top Remediations template provides high-level information for assessing the highest impact
remediation solutions. The template includes the percentage of total vulnerabilities resolved, the
percentage of vulnerabilities with malware kits, the percentage of vulnerabilities with known
exploits, and the number of assets affected when the top remediation solutions are applied.
- the number of vulnerabilities that will be remediated, including vulnerabilities with no exploits
or malware that will be remediated
- vulnerabilities and total risk score associated with the solution
- the number of targeted vulnerabilities that have known exploits associated with them
- the number of targeted vulnerabilities with available malware kits
- the number of assets to be addressed by remediation
- the amount of risk that will be reduced by the remediations
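The percentages the Top Remediations template describes can be illustrated with a small sketch. The function name, inputs, and output shape below are hypothetical, not product code:

```python
def remediation_impact(total_vulns, remediated, with_exploits, with_malware):
    """Compute the percentages the Top Remediations report describes:
    vulnerabilities resolved, with known exploits, and with malware kits
    (illustrative sketch; all names are assumptions)."""
    pct = lambda n: round(100.0 * n / total_vulns, 1)
    return {
        "resolved_pct": pct(remediated),    # % of total vulnerabilities resolved
        "exploit_pct": pct(with_exploits),  # % with known exploits
        "malware_pct": pct(with_malware),   # % with available malware kits
    }

print(remediation_impact(200, 80, 30, 10))
# {'resolved_pct': 40.0, 'exploit_pct': 15.0, 'malware_pct': 5.0}
```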
The Top Remediations with Details template provides expanded information for assessing
remediation solutions and implementation steps. The template includes the percentage of total
vulnerabilities resolved and the number of assets affected when remediation solutions are
applied.
The Top Remediations with Details includes the information from the Top Remediations
template with additional information in the following areas:
Vulnerability Trends
The Vulnerability Trends template provides information about how vulnerabilities in your
environment have changed, if your remediation efforts have succeeded, how assets have
changed over time, how asset groups have been affected when compared to other asset groups,
and how effective your asset scanning process is. To manage the readability and size of the
report, there is a limit of 15 data points that can be included on a chart when you configure the
date range. For example, you can set your date range to a weekly interval for a two-month period.
Note: Ensure you schedule adequate time to run this report template because of the large
amount of data that it aggregates. Each data point is the equivalent of a complete report, so the
report may take a long time to complete.
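The relationship between date range, interval, and the 15-data-point chart limit can be sketched as follows; this is a hypothetical helper for checking a configuration before running the report, not product code:

```python
from datetime import date

def chart_points(start, end, interval_days, limit=15):
    """Count the data points a date range and interval would produce,
    and check it against the chart's 15-point limit (illustrative
    sketch; names and logic are assumptions)."""
    points = (end - start).days // interval_days + 1
    return points, points <= limit

# A weekly interval over roughly two months stays under the limit.
print(chart_points(date(2015, 1, 1), date(2015, 2, 26), 7))  # (9, True)
```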
The Vulnerability Trends template provides charts and details in the following areas:
The Vulnerability Trends template helps you improve your remediation efforts by providing
information about the number of assets included in a scan and whether any have been excluded,
whether vulnerability exceptions have been applied or expired, and whether new vulnerability
definitions have been added to the application. The Vulnerability Trends template differs from the
vulnerability trend section in the Baseline report by providing more in-depth analysis of your
security posture and remediation efforts.
Some of the following document report sections can have vulnerability filters applied to them.
This means that specific vulnerabilities can be included or excluded in these sections based on
the report Scope configuration. When the report is generated, sections with filtered vulnerabilities
will be identified as such. Document report templates that do not contain any of these sections do
not contain filtered vulnerability data. The document report sections are listed below:
Payment Card Industry (PCI) Vulnerabilities Noted for each IP Address on page 661
Baseline Comparison
This section appears when you select the Baseline Report template. It provides a comparison of
data between the most recent scan and the baseline, enumerating the following changes:
Additionally, this section provides suggestions as to why changes in data may have occurred
between the two scans. For example, newly discovered vulnerabilities may be attributable to the
installation of vulnerable software that occurred after the baseline scan.
In generated reports, this section appears with the heading Trend Analysis.
Cover Page
The Cover Page includes the name of the site, the date of the scan, and the date that the report
was generated. Other display options include a customized title and company logo.
Discovered Databases
This section lists all databases discovered through a scan of database servers on the network.
For information to appear in this section, the scan on which the report is based must meet the
following conditions:
For information to appear in this section, the scan on which the report is based must meet the
following conditions:
See Configuring scan credentials on page 87 for information on configuring these settings.
Discovered Services
This section lists all services running on the network, the IP addresses of the assets running each
service, and the number of vulnerabilities discovered on each asset.
This section lists the IP addresses, alias names, operating systems, and risk scores for scanned
assets.
This section provides information about all users and groups discovered on each node during the
scan.
Note: In generated reports, the Discovered Vulnerabilities section appears with the heading
Discovered and Potential Vulnerabilities.
Discovered Vulnerabilities
This section lists all vulnerabilities discovered during the scan and identifies the affected assets
and ports. It also lists the Common Vulnerabilities and Exposures (CVE) identifier for each
vulnerability that has an available CVE identifier. Each vulnerability is classified by severity.
If you selected a Medium technical detail level for your report template, the application provides a
basic description of each vulnerability and a list of related reference documentation. If you
selected a High level of technical detail, it adds a narrative of how it found the vulnerability to the
description, as well as remediation options. Use this section to help you understand and fix
vulnerabilities.
This section does not distinguish between potential and confirmed vulnerabilities.
Executive Summary
This section provides statistics and a high-level summation of the scan data, including numbers
and types of network vulnerabilities.
This section lists the highest risk vulnerabilities and includes their categories, risk scores, and their
Common Vulnerability Scoring System (CVSS) Version 2 scores. The section also provides
references for obtaining more information about each vulnerability.
Index of Vulnerabilities
This section includes the following information about each discovered vulnerability:
- severity level
- Common Vulnerability Scoring System (CVSS) Version 2 rating
- category
- URLs for reference
- description
- solution steps
In generated reports, this section appears with the heading Vulnerability Details.
This section lists each scanned IP address with a Pass or Fail result.
This section includes a statement as to whether a set of assets collectively passes or fails to
comply with PCI security standards. It also lists each scanned asset and indicates whether that
asset passes or fails to comply with the standards.
This section lists information about each scanned asset, including its hosted operating system,
names, PCI compliance status, and granular vulnerability information tailored for PCI scans.
This section includes name fields for the scan customer and approved scan vendor (ASV). The
customer's name must be entered manually. If the ASV has configured the oem.xml file to auto-
populate the name field, it will contain the ASV’s name. Otherwise, the ASV’s name must be
entered manually as well. For more information, see the ASV guide, which you can request from
Technical Support.
This section also includes the date the scan was completed and the scan expiration date, which is
the last day that the scan results are valid from a PCI perspective.
Note: Any instance of remote access software or directory browsing is automatically noted.
In this PCI report section, ASVs manually enter the notes about any scanned software that may
pose a risk due to insecure implementation, rather than an exploitable vulnerability. The notes
should include the following information:
This section includes a table listing each discovered vulnerability with a set of attributes including
PCI severity, CVSS score, and whether the vulnerability passes or fails the scan. The assets are
sorted by IP address. If the ASV marked a vulnerability for exception, the exception is indicated
here. The column labeled Exceptions, False Positives, or Compensating Controls in the PCI
Executive Summary report is auto-populated with the user name of the individual who excluded a
given vulnerability.
Note: The PCI Vulnerability Details report takes into account approved vulnerability exceptions
to determine compliance status for each vulnerability instance.
This section contains in-depth information about each vulnerability included in a PCI Audit report.
It quantifies the vulnerability according to its severity level and its Common Vulnerability Scoring
System (CVSS) Version 2 rating.
This latter number is used to determine whether the vulnerable assets in question comply with
PCI security standards, according to the CVSS v2 metrics. Possible scores range from 1.0 to
10.0. A score of 4.0 or higher indicates failure to comply, with some exceptions. For more
information about CVSS scoring, go to the FIRST Web site at http://www.first.org/cvss/cvss-
guide.html.
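The pass/fail rule described above reduces to a simple threshold check. This is an illustrative sketch of that rule only; the function name is an assumption, and the real determination allows documented exceptions that are not modeled here:

```python
def pci_result(cvss_score):
    """Apply the general PCI rule described above: a CVSS v2 score of
    4.0 or higher indicates failure to comply (exceptions exist and
    are not modeled in this sketch)."""
    return "Fail" if cvss_score >= 4.0 else "Pass"

print(pci_result(3.9))  # Pass
print(pci_result(4.0))  # Fail
```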
This section lists vulnerabilities by categories, such as types of client applications and server-side
software.
Policy Evaluation
This section lists the results of any policy evaluations, such as whether Microsoft security
templates are in effect on scanned systems. Section contents include system settings, registry
settings, registry ACLs, file ACLs, group membership, and account privileges.
Remediation Plan
This section consolidates information about all vulnerabilities and provides a plan for remediation.
The database of vulnerabilities feeds the Remediation Plan section with information about
patches and fixes, including Web links for downloading them. For each remediation, the
database provides a time estimate. Use this section to research fixes, patches, work-arounds,
and other remediation measures.
Risk Assessment
This section ranks each node (asset) by its risk index score, which indicates the risk that asset
poses to network security. An asset’s confirmed and unconfirmed vulnerabilities affect its risk
score.
Risk Trend
This section enables you to create graphs illustrating risk trends in the Executive Summary of
your reports. The graphs can include your five highest-risk sites, asset groups, or assets, or you
can select all assets in your report scope.
This section lists the assets that were scanned. If the IP addresses are consecutive, the console
displays the list as a range.
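Collapsing consecutive IP addresses into ranges, as the console does when displaying scanned hosts, can be sketched as below. This is a hypothetical helper written for illustration, not product code:

```python
import ipaddress

def collapse_to_ranges(addrs):
    """Group consecutive IPv4 addresses into display ranges, mirroring
    how the console lists scanned hosts (illustrative sketch only)."""
    ips = sorted(int(ipaddress.IPv4Address(a)) for a in addrs)
    ranges = []
    start = prev = ips[0]
    for ip in ips[1:]:
        if ip == prev + 1:       # still consecutive: extend current range
            prev = ip
        else:                    # gap: close the range and start a new one
            ranges.append((start, prev))
            start = prev = ip
    ranges.append((start, prev))
    name = lambda n: str(ipaddress.IPv4Address(n))
    return [name(a) if a == b else f"{name(a)} - {name(b)}" for a, b in ranges]

print(collapse_to_ranges(["10.0.0.7", "10.0.0.1", "10.0.0.2", "10.0.0.3"]))
# ['10.0.0.1 - 10.0.0.3', '10.0.0.7']
```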
Table of Contents
Trend Analysis
This section appears when you select the Baseline report template. It compares the
vulnerabilities discovered in a scan against those discovered in a baseline scan. Use this section
to gauge progress in reducing vulnerabilities and improving your network's security.
This section, which appears in PCI Audit reports, lists each vulnerability, indicating whether it has
passed or failed in terms of meeting PCI compliance criteria. The section also includes
remediation information.
Vulnerability Details
The Vulnerability Details section includes statistics and descriptions for each discovered
vulnerability, including affected IP address, Common Vulnerability Enumeration (CVE) identifier,
CVSS score, PCI severity, and whether the vulnerability passes or fails the scan. Vulnerabilities
are grouped by severity level, and within each grouping vulnerabilities are listed according to CVSS
score.
Use this template to view all vulnerability exceptions that were applied or requested within a
specified time period. The report includes information about each exception or exception request,
including the parties involved, statuses, and the reasons for the exceptions. This information is
useful for examining your organization's vulnerability management practices.
Vulnerability Exceptions
This section lists each vulnerability that has been excluded from the report and the reason for each
exclusion. You may not wish to see certain vulnerabilities listed with others, such as those to be
targeted for remediation; but business policies may dictate that you list excluded vulnerabilities if
only to indicate that they were excluded. A typical example is the PCI Audit report. Vulnerabilities
of a certain severity level may result in an audit failure. They may be excluded for certain reasons,
but the exclusions must be noted.
This section lists the results of vulnerability tests for each node (asset) in the network. Use this
section to assess the vulnerability of each asset.
This section lists all tested vulnerabilities, and indicates how each node (asset) in the network
responded when the application attempted to confirm a vulnerability on it. Use this section as an
overview of the network's susceptibility to each vulnerability.
This section displays vulnerabilities that were not confirmed due to unexpected failures. Use this
section to anticipate or prevent system errors and to validate that scan parameters are set
properly.
When creating a custom export template, you can select from a full set of vulnerability data
attributes. The following table lists the name and description of each attribute that you can
include.
Attribute name                    Description
Asset Alternate IPv4 Addresses    This is the set of alternate IPv4 addresses of the scanned asset.
Asset Alternate IPv6 Addresses    This is the set of alternate IPv6 addresses of the scanned asset.
An API is a function that a developer can integrate with another software application by using
program calls. The term API also refers to one of two sets of XML APIs, each with its own
included operations: API v1.1 and Extended API v1.2. To learn about each API, see the API
documentation, which you can download from the Support page in Help.
Appliance
Asset
An asset is a single device on a network that the application discovers during a scan. In the Web
interface and API, an asset may also be referred to as a device. See Managed asset on page
675 and Unmanaged asset on page 683. An asset’s data has been integrated into the scan
database, so it can be listed in sites and asset groups. In this regard, it differs from a node. See
Node on page 676.
Asset group
An asset group is a logical collection of managed assets to which specific members have access
for creating or viewing reports or tracking remediation tickets. An asset group may contain assets
that belong to multiple sites or other asset groups. An asset group is either static or dynamic. An
asset group is not a site. See Site on page 681, Dynamic asset group on page 673, and Static
asset group on page 681.
Asset Owner
Asset Owner is one of the preset roles. A user with this role can view data about discovered
assets, run manual scans, and create and run reports in accessible sites and asset groups.
Asset Report Format (ARF)
The Asset Report Format is an XML-based report template that provides asset information
based on connection type, host name, and IP address. This template is required for submitting
reports of policy scan results to the U.S. government for SCAP certification.
Asset search filter
An asset search filter is a set of criteria with which a user can refine a search for assets to include
in a dynamic asset group. An asset search filter is different from a Dynamic Discovery filter on
page 673.
Authentication
Authentication is the process of a security application verifying the logon credentials of a client or
user that is attempting to gain access. By default the application authenticates users with an
internal process, but you can configure it to authenticate users with an external LDAP or
Kerberos source.
Average risk
Average risk is a setting in risk trend report configuration. It is based on a calculation of your risk
scores on assets over a report date range. Some assets have higher risk scores than others.
Calculating the average score provides a high-level view of how vulnerable your assets might be
to exploits.
Benchmark
In the context of scanning for FDCC policy compliance, a benchmark is a combination of policies
that share the same source data. Each policy in the Policy Manager contains some or all of the
rules that are contained within its respective benchmark. See Federal Desktop Core
Configuration (FDCC) on page 674 and United States Government Configuration Baseline
(USGCB) on page 682.
Breadth
Breadth refers to the total number of assets within the scope of a scan.
Category
In the context of scanning for FDCC policy compliance, a category is a grouping of policies in the
Policy Manager configuration for a scan template. A policy’s category is based on its source,
purpose, and other criteria. See Policy Manager on page 677, Federal Desktop Core
Configuration (FDCC) on page 674, and United States Government Configuration Baseline
(USGCB) on page 682.
Check type
A check type is a specific kind of check to be run during a scan. Examples: The Unsafe check type
includes aggressive vulnerability testing methods that could result in Denial of Service on target
assets; the Policy check type is used for verifying compliance with policies. The check type setting
is used in scan template configurations to refine the scope of a scan.
Center for Internet Security (CIS)
Center for Internet Security (CIS) is a not-for-profit organization that improves global security
posture by providing a valued and trusted environment for bridging the public and private sectors.
CIS serves a leadership role in the shaping of key security policies and decisions at the national
and international levels. The Policy Manager provides checks for compliance with CIS
benchmarks including technical control rules and values for hardening network devices,
operating systems, and middleware and software applications. Performing these checks requires
a license that enables the Policy Manager feature and CIS scanning. See Policy Manager on
page 677.
Command console
The command console is a page in the Security Console Web interface for entering commands to
run certain operations. When you use this tool, you can see real-time diagnostics and a behind-
the-scenes view of Security Console activity. To access the command console page, click the
Run Security Console commands link next to the Troubleshooting item on the Administration
page.
Common Configuration Enumeration (CCE)
Common Configuration Enumeration (CCE) is a standard for assigning unique identifiers, known
as CCEs, to configuration controls to allow consistent identification of these controls in different
environments. The application implements CCE as part of its compliance with SCAP criteria for an
Unauthenticated Scanner product.
Common Platform Enumeration (CPE)
Common Platform Enumeration (CPE) is a method for identifying operating systems and
software applications. Its naming scheme is based on the generic syntax for Uniform Resource
Identifiers (URI). The application implements CPE as part of its compliance with SCAP criteria for
an Unauthenticated Scanner product.
Common Vulnerabilities and Exposures (CVE)
The Common Vulnerabilities and Exposures (CVE) standard prescribes how the application
should identify vulnerabilities, making it easier for security products to exchange vulnerability
data. The application implements CVE as part of its compliance with SCAP criteria for an
Unauthenticated Scanner product.
Common Vulnerability Scoring System (CVSS)
Common Vulnerability Scoring System (CVSS) is an open framework for calculating vulnerability
risk scores. The application implements CVSS as part of its compliance with SCAP criteria for an
Unauthenticated Scanner product.
Compliance
Continuous scan
A continuous scan starts over from the beginning if it completes its coverage of site assets within
its scheduled window. This is a site configuration setting.
Coverage
Coverage indicates the scope of vulnerability checks. A coverage improvement listed on the
News page for a release indicates that vulnerability checks have been added or existing checks
have been improved for accuracy or other criteria.
Criticality
Criticality is a value that you can apply to an asset with a RealContext tag to indicate its
importance to your business. Criticality levels range from Very Low to Very High. You can use
applied criticality levels to alter asset risk scores. See Criticality-adjusted risk.
Criticality-adjusted risk
or
Context-driven risk
Criticality-adjusted risk is calculated by assigning numbers to criticality levels and using those
numbers to multiply risk scores.
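The calculation described above can be sketched as follows. The multiplier values here are assumptions chosen for illustration only, not the product's actual mapping of criticality levels to numbers:

```python
# Illustrative sketch: each criticality level maps to a multiplier that
# scales the asset's base risk score. These multiplier values are
# assumptions for illustration, not the vendor's actual mapping.
CRITICALITY_MULTIPLIERS = {
    "Very Low": 0.75,
    "Low": 0.9,
    "Medium": 1.0,
    "High": 1.5,
    "Very High": 2.0,
}

def adjusted_risk(base_risk_score: float, criticality: str) -> float:
    """Scale a base risk score by the asset's applied criticality level."""
    return base_risk_score * CRITICALITY_MULTIPLIERS[criticality]

print(adjusted_risk(500.0, "Very High"))  # 1000.0
```

An asset tagged Medium keeps its base score unchanged, while a Very High tag doubles it under these sample multipliers.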
Custom tag
With a custom tag you can identify assets according to any criteria that might be meaningful to
your business.
Depth
Depth indicates how thorough or comprehensive a scan will be. Depth refers to the level to which
the application will probe an individual asset for system information and vulnerabilities.
Discovery (scan phase)
Discovery is the first phase of a scan, in which the application finds potential scan targets on a
network. Discovery as a scan phase is different from Dynamic Discovery on page 673.
Document report template
Document templates are designed for human-readable reports that contain asset and
vulnerability information. Some of the formats available for this template type—Text, PDF, RTF,
and HTML—are convenient for sharing information to be read by stakeholders in your
organization, such as executives or security team members tasked with performing remediation.
Dynamic asset group
A dynamic asset group contains scanned assets that meet a specific set of search criteria. You
define these criteria with asset search filters, such as IP address range or operating systems. The
list of assets in a dynamic group is subject to change with every scan or when vulnerability
exceptions are created. In this regard, a dynamic asset group differs from a static asset group.
See Asset group on page 669 and Static asset group on page 681.
Dynamic Discovery
Dynamic Discovery is a process by which the application automatically discovers assets through
a connection with a server that manages these assets. You can refine or limit asset discovery
with criteria filters. Dynamic discovery is different from Discovery (scan phase) on page 673.
Dynamic Discovery filter
A Dynamic Discovery filter is a set of criteria refining or limiting Dynamic Discovery results. This
type of filter is different from an Asset search filter on page 670.
Dynamic Scan Pool
The Dynamic Scan Pool feature allows you to use Scan Engine pools to enhance the consistency
of your scan coverage. A Scan Engine pool is a group of shared Scan Engines that can be bound
to a site so that the load is distributed evenly across the shared Scan Engines. You can configure
scan pools using the Extended API v1.2.
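The load-distribution idea behind a Scan Engine pool can be sketched as a simple round-robin assignment. The engine names, target addresses, and the round-robin policy itself are assumptions for illustration; the product's actual balancing logic is internal:

```python
from itertools import cycle

# Illustrative sketch of how a Scan Engine pool spreads load: scan
# targets are assigned round-robin across the shared engines. The
# names and policy below are assumptions for illustration only.
engines = ["engine-a", "engine-b", "engine-c"]
targets = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

# cycle() repeats the engine list indefinitely, so the fourth target
# wraps around to the first engine.
assignment = dict(zip(targets, cycle(engines)))
print(assignment["10.0.0.4"])  # engine-a
```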
Dynamic site
A dynamic site is a collection of assets that are targeted for scanning and that have been
discovered through vAsset discovery. Asset membership in a dynamic site is subject to change if
the discovery connection changes or if filter criteria for asset discovery change. See Static site on
page 682, Site on page 681, and Dynamic Discovery on page 673.
Exploit
Export report template
Export templates are designed for integrating scan information into external systems. The
formats available for this type include various XML formats, Database Export, and CSV.
Exposure
An exposure is a vulnerability, especially one that makes an asset susceptible to attack via
malware or a known exploit.
False positive
A false positive is an instance in which the application flags a vulnerability that doesn’t exist. A
false negative is an instance in which the application fails to flag a vulnerability that does exist.
Federal Desktop Core Configuration (FDCC)
The Federal Desktop Core Configuration (FDCC) is a grouping of configuration security settings
recommended by the National Institute of Standards and Technology (NIST) for computers that
are connected directly to the network of a United States government agency. The Policy
Manager provides checks for compliance with these policies in scan templates. Performing these
checks requires a license that enables the Policy Manager feature and FDCC scanning.
Fingerprinting
Global Administrator
Global Administrator is one of the preset roles. A user with this role can perform all operations
that are available in the application and has access to all sites and asset groups.
Host
A host is a physical or virtual server that provides computing resources to a guest virtual machine.
In a high-availability virtual environment, a host may also be referred to as a node. The term node
has a different context in the application. See Node on page 676.
Latency
Latency is the delay interval between the time when a computer sends data over a network and
another computer receives it. Low latency means short delays.
Locations tag
With a Locations tag you can identify assets by their physical or geographic locations.
Malware
Malware kit
Also known as an exploit kit, a malware kit is a software bundle that makes it easy for malicious
parties to write and deploy code for attacking target systems through vulnerabilities.
Managed asset
A managed asset is a network device that has been discovered during a scan and added to a
site’s target list, either automatically or manually. Only managed assets can be checked for
vulnerabilities and tracked over time. Once an asset becomes a managed asset, it counts against
the maximum number of assets that can be scanned, according to your license.
Manual scan
A manual scan is one that you start at any time, even if it is scheduled to run automatically at other
times. Synonyms include ad-hoc scan and unscheduled scan.
Metasploit
Metasploit is a product that performs benign exploits to verify vulnerabilities. See Exploit on page
674.
MITRE
The MITRE Corporation is a body that defines standards for enumerating security-related
concepts and languages for security development initiatives. Examples of MITRE-defined
enumerations include Common Configuration Enumeration (CCE) and Common Vulnerability
Enumeration (CVE). Examples of MITRE-defined languages include Open Vulnerability and
Assessment Language (OVAL). A number of MITRE standards are implemented, especially in
verification of FDCC compliance.
National Institute of Standards and Technology (NIST)
National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within
the U.S. Department of Commerce. The agency mandates and manages a number of security
initiatives, including Security Content Automation Protocol (SCAP). See Security Content
Automation Protocol (SCAP) on page 680.
Node
A node is a device on a network that the application discovers during a scan. After the application
integrates its data into the scan database, the device is regarded as an asset that can be listed in
sites and asset groups. See Asset on page 669.
Open Vulnerability and Assessment Language (OVAL)
Open Vulnerability and Assessment Language (OVAL) is a development standard for gathering
and sharing security-related data, such as FDCC policy checks. In compliance with an FDCC
requirement, each OVAL file that the application imports during configuration policy checks is
available for download from the SCAP page in the Security Console Web interface.
Override
An override is a change made by a user to the result of a check for compliance with a
configuration policy rule. For example, a user may override a Fail result with a Pass result.
Payment Card Industry (PCI)
The Payment Card Industry (PCI) is a council that manages and enforces the PCI Data Security
Standard for all merchants who perform credit card transactions. The application includes a scan
template and report templates that are used by Approved Scanning Vendors (ASVs) in official
merchant audits for PCI compliance.
Permission
A permission is the ability to perform one or more specific operations. Some permissions only
apply to sites or asset groups to which an assigned user has access. Others are not subject to this
kind of access.
Policy
Policy Manager
Policy Manager is a license-enabled scanning feature that performs checks for compliance with
Federal Desktop Core Configuration (FDCC), United States Government Configuration
Baseline (USGCB), and other configuration policies. Policy Manager results appear on the
Policies page, which you can access by clicking the Policies icon in the Web interface. They also
appear in the Policy Listing table for any asset that was scanned with Policy Manager checks.
Policy Manager policies are different from standard policies, which can be scanned with a basic
license. See Policy on page 677 and Standard policy on page 681.
Policy Result
In the context of FDCC policy scanning, a result is a state of compliance or non-compliance with a
rule or policy. Possible results include Pass, Fail, or Not Applicable.
Policy Rule
A rule is one of a set of specific guidelines that make up an FDCC configuration policy. See
Federal Desktop Core Configuration (FDCC) on page 674, United States Government
Configuration Baseline (USGCB) on page 682, and Policy on page 677.
Potential vulnerability
A potential vulnerability is one of three positive vulnerability check result types. The application
reports a potential vulnerability during a scan under two conditions: First, potential vulnerability
checks are enabled in the template for the scan. Second, the application determines that a target
is running a vulnerable software version but it is unable to verify that a patch or other type of
remediation has been applied. For example, an asset is running version 1.1.1 of a database. The
vendor publishes a security advisory indicating that version 1.1.1 is vulnerable. Although a patch
is installed on the asset, the version remains 1.1.1. In this case, if the application is running
checks for potential vulnerabilities, it can only flag the host asset as being potentially vulnerable.
The code for a potential vulnerability in XML and CSV reports is vp (vulnerable, potential). For
other positive result types, see Vulnerability check on page 684.
Published exploit
In the context of the application, a published exploit is one that has been developed in Metasploit
or listed in the Exploit Database. See Exploit on page 674.
RealContext
RealContext is a feature that enables you to tag assets according to how they affect your
business. You can use tags to specify the criticality, location, or ownership of assets. You can also
use custom tags to identify assets according to any criteria that are meaningful to your organization.
Real Risk strategy
Real Risk is one of the built-in strategies for assessing and analyzing risk. It is also the
recommended strategy because it applies unique exploit and malware exposure metrics for each
vulnerability to Common Vulnerability Scoring System (CVSS) base metrics for likelihood
(access vector, access complexity, and authentication requirements) and impact to affected
assets (confidentiality, integrity, and availability). See Risk strategy on page 679.
Report template
Each report is based on a template, whether it is one of the templates that is included with the
product or a customized template created for your organization. See Document report template
on page 673 and Export report template on page 674.
Risk
In the context of vulnerability assessment, risk reflects the likelihood that a network or computer
environment will be compromised, and it characterizes the anticipated consequences of the
compromise, including theft or corruption of data and disruption to service. Implicitly, risk also
reflects the potential damage to a compromised entity’s financial well-being and reputation.
Risk score
A risk score is a rating that the application calculates for every asset and vulnerability. The score
indicates the potential danger posed to network and business security in the event of a malicious
exploit. You can configure the application to rate risk according to one of several built-in risk
strategies, or you can create custom risk strategies.
Risk strategy
A risk strategy is a method for calculating vulnerability risk scores. Each strategy emphasizes
certain risk factors and perspectives. Four built-in strategies are available: Real Risk strategy on
page 678, TemporalPlus risk strategy on page 682, Temporal risk strategy on page 682, and
Weighted risk strategy on page 684. You can also create custom risk strategies.
Risk trend
A risk trend graph illustrates how the probability and potential impact of compromise for your
assets may change over time. Risk trends can be based on average or total risk scores. The
highest-risk graphs in your report show the biggest contributors to your risk at the site, group, or
asset level. Tracking risk trends helps you assess threats to your organization's security standing
and determine whether your vulnerability management efforts are satisfactorily maintaining risk
at acceptable levels or reducing risk over time. See Average risk on page 670 and Total risk on
page 682.
Role
A role is a set of permissions. Five preset roles are available. You also can create custom roles by
manually selecting permissions. See Asset Owner on page 669, Security Manager on page 681,
Global Administrator on page 675, Site Owner on page 681, and User on page 683.
Scan
A scan is a process by which the application discovers network assets and checks them for
vulnerabilities. See Exploit on page 674 and Vulnerability check on page 684.
Scan credentials
Scan credentials are the user name and password that the application submits to target assets
for authentication to gain access and perform deep checks. Many different authentication
mechanisms are supported for a wide variety of platforms. See Shared scan credentials on page
681 and Site-specific scan credentials on page 681.
Scan Engine
The Scan Engine is one of two major application components. It performs asset discovery and
vulnerability detection operations. Scan engines can be distributed within or outside a firewall for
varied coverage. Each installation of the Security Console also includes a local engine, which can
be used for scans within the console’s network perimeter.
Scan template
A scan template is a set of parameters for defining how assets are scanned. Various preset scan
templates are available for different scanning scenarios. You also can create custom scan
templates.
Scheduled scan
A scheduled scan starts automatically at predetermined points in time. The scheduling of a scan
is an optional setting in site configuration. It is also possible to start any scan manually at any time.
Security Console
The Security Console is one of two major application components. It controls Scan Engines and
retrieves scan data from them. It also controls all operations and provides a Web-based user
interface.
Security Content Automation Protocol (SCAP)
Security Content Automation Protocol (SCAP) is a collection of standards for expressing and
manipulating security data. It is mandated by the U.S. government and maintained by the
National Institute of Standards and Technology (NIST). The application complies with SCAP
criteria for an Unauthenticated Scanner product.
Security Manager
Security Manager is one of the preset roles. A user with this role can configure and run scans,
create reports, and view asset data in accessible sites and asset groups.
Shared scan credentials
One of two types of credentials that can be used for authenticating scans, shared scan
credentials are created by Global Administrators or users with the Manage Site permission.
Shared credentials can be applied to multiple assets in any number of sites. See Site-specific
scan credentials on page 681.
Site
A site is a collection of assets that are targeted for a scan. Each site is associated with a list of
target assets, a scan template, one or more Scan Engines, and other scan-related settings. See
Dynamic site on page 674 and Static site on page 682. A site is not an asset group. See Asset
group on page 669.
Site-specific scan credentials
One of two types of credentials that can be used for authenticating scans, a set of single-instance
credentials is created for an individual site configuration and can only be used in that site. See
Scan credentials on page 680 and Shared scan credentials on page 681.
Site Owner
Site Owner is one of the preset roles. A user with this role can configure and run scans, create
reports, and view asset data in accessible sites.
Standard policy
A standard policy is one that the application can scan with a basic license, unlike a Policy
Manager policy. Standard policy scanning is available to verify certain configuration
settings on Oracle, Lotus Domino, AS/400, Unix, and Windows systems. Standard policies are
displayed in scan templates when you include policies in the scope of a scan. Standard policy
scan results appear in the Advanced Policy Listing table for any asset that was scanned for
compliance with these policies. See Policy on page 677.
Static asset group
A static asset group contains assets that meet a set of criteria that you define according to your
organization's needs. Unlike with a dynamic asset group, the list of assets in a static group does
not change unless you alter it manually. See Dynamic asset group on page 673.
Static site
A static site is a collection of assets that are targeted for scanning and that have been manually
selected. Asset membership in a static site does not change unless a user changes the asset list
in the site configuration. For more information, see Dynamic site on page 674 and Site on page
681.
Temporal risk strategy
One of the built-in risk strategies, Temporal indicates how time continuously increases likelihood
of compromise. The calculation applies the age of each vulnerability, based on its date of public
disclosure, as a multiplier of CVSS base metrics for likelihood (access vector, access complexity,
and authentication requirements) and asset impact (confidentiality, integrity, and availability).
Temporal risk scores will be lower than TemporalPlus scores because Temporal limits the risk
contribution of partial impact vectors. See Risk strategy on page 679.
TemporalPlus risk strategy
One of the built-in risk strategies, TemporalPlus provides a more granular analysis of vulnerability
impact, while indicating how time continuously increases likelihood of compromise. It applies a
vulnerability's age as a multiplier of CVSS base metrics for likelihood (access vector, access
complexity, and authentication requirements) and asset impact (confidentiality, integrity, and
availability). TemporalPlus risk scores will be higher than Temporal scores because
TemporalPlus expands the risk contribution of partial impact vectors. See Risk strategy on page
679.
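The time component shared by the Temporal and TemporalPlus strategies can be illustrated with a rough sketch: a CVSS-derived base score is multiplied by a factor that grows with the vulnerability's age since public disclosure. The square-root growth function and scaling constant below are assumptions for illustration, not the product's actual formula:

```python
from datetime import date
import math

def temporal_risk(base_score: float, disclosed: date, today: date) -> float:
    """Illustrative only: risk grows with vulnerability age, measured
    from the public disclosure date. The square-root growth and the
    /100 scaling constant are assumptions, not the vendor's formula."""
    age_days = max((today - disclosed).days, 0)
    age_multiplier = 1.0 + math.sqrt(age_days) / 100.0
    return base_score * age_multiplier

# Two vulnerabilities with the same base score: the older disclosure
# yields the higher risk score.
older = temporal_risk(7.5, date(2010, 1, 1), date(2015, 1, 1))
newer = temporal_risk(7.5, date(2014, 1, 1), date(2015, 1, 1))
assert older > newer > 7.5
```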
Total risk
Total risk is a setting in risk trend report configuration. It is an aggregated score of vulnerabilities
on assets over a specified period.
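The distinction between the two risk trend settings can be made concrete with a minimal sketch; the asset scores below are hypothetical:

```python
# Hypothetical risk scores for four assets on one report date.
asset_scores = [350.0, 900.0, 120.0, 430.0]

# Total risk: an aggregated score across all assets.
total_risk = sum(asset_scores)                  # 1800.0

# Average risk: a high-level view per asset.
average_risk = total_risk / len(asset_scores)   # 450.0
```

A trend graph based on total risk will rise simply because assets are added, while an average-based trend reflects how vulnerable a typical asset is.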
Unmanaged asset
An unmanaged asset is a device that has been discovered during a scan but not correlated
against a managed asset or added to a site’s target list. The application is designed to provide
sufficient information about unmanaged assets so that you can decide whether to manage them.
An unmanaged asset does not count against the maximum number of assets that can be
scanned according to your license.
Unsafe check
An unsafe check is a test for a vulnerability that can cause a denial of service on a target system.
Be aware that the check itself can cause a denial of service, as well. It is recommended that you
only perform unsafe checks on test systems that are not in production.
Update
An update is a released set of changes to the application. By default, two types of updates are
automatically downloaded and applied:
Content updates include new checks for vulnerabilities, patch verification, and security policy
compliance. Content updates always occur automatically when they are available.
Product updates include performance improvements, bug fixes, and new product features.
Unlike content updates, it is possible to disable automatic product updates and update the
product manually.
User
User is one of the preset roles. An individual with this role can view asset data and run reports in
accessible sites and asset groups.
Validated vulnerability
A validated vulnerability is a vulnerability that has had its existence proven by an integrated
Metasploit exploit. See Exploit on page 674.
Vulnerable version
Vulnerable version is one of three positive vulnerability check result types. The application reports
a vulnerable version during a scan if it determines that a target is running a vulnerable software
version and it can verify that a patch or other type of remediation has not been applied. The code
for a vulnerable version in XML and CSV reports is vv (vulnerable, version check). For other
positive result types, see Vulnerability check on page 684.
Vulnerability
Vulnerability category
A vulnerability category is a set of vulnerability checks with shared criteria. For example, the
Adobe category includes checks for vulnerabilities that affect Adobe applications. There are also
categories for specific Adobe products, such as Air, Flash, and Acrobat/Reader. Vulnerability
check categories are used to refine scope in scan templates. Vulnerability check results can also
be filtered according to category to refine the scope of reports. Categories that are named for
manufacturers, such as Microsoft, can serve as supersets of categories that are named for their
products. For example, if you filter by the Microsoft category, you inherently include all Microsoft
product categories, such as Microsoft Patch and Microsoft Windows. This applies to other
“company” categories, such as Adobe, Apple, and Mozilla.
Vulnerability check
A vulnerability check is a series of operations that are performed to determine whether a security
flaw exists on a target asset. Check results are either negative (no vulnerability found) or positive.
A positive result is qualified in one of three ways: see Vulnerability found on page 684, Vulnerable
version on page 683, and Potential vulnerability on page 678. You can see positive check result
types in XML or CSV export reports. Also, in a site configuration, you can set up alerts for when a
scan reports different positive result types.
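Because CSV export reports carry the positive result codes described in these entries (ve, vv, vp), report rows can be filtered by result type. The sketch below assumes a hypothetical column name, `result-code`; check your export's actual header before relying on it:

```python
import csv
import io

# Hypothetical CSV export fragment. The "result-code" column name is
# an assumption for illustration, not the product's documented header;
# the codes ve / vv / vp are the ones described in this glossary.
export = """asset,vulnerability,result-code
10.0.0.5,CVE-2014-0160,ve
10.0.0.7,CVE-2013-2465,vv
10.0.0.9,CVE-2012-1823,vp
"""

rows = list(csv.DictReader(io.StringIO(export)))

# Keep only verified findings (ve = vulnerable, exploited).
verified = [r["asset"] for r in rows if r["result-code"] == "ve"]
print(verified)  # ['10.0.0.5']
```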
Vulnerability exception
A vulnerability exception is the removal of a vulnerability from a report and from any asset listing
table. Excluded vulnerabilities also are not considered in the computation of risk scores.
Vulnerability found
Vulnerability found is one of three positive vulnerability check result types. The application reports
a vulnerability found during a scan if it verified the flaw with asset-specific vulnerability tests, such
as an exploit. The code for a vulnerability found in XML and CSV reports is ve (vulnerable,
exploited). For other positive result types, see Vulnerability check on page 684.
Weighted risk strategy
One of the built-in risk strategies, Weighted is based primarily on asset data and vulnerability
types, and it takes into account the level of importance, or weight, that you assign to a site when
you configure it. See Risk strategy on page 679.