NorthStar Controller User Guide
Release 5.0.0
Modified: 2019-08-08
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States
and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective
owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
https://support.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
Figure 169: Event View Sorting and Column Display Options (page 249)
Figure 170: Event View Bar Chart Settings (page 249)
Figure 171: Event View Time Span Options (page 250)
Figure 172: Event View Timeline Partial Selection (page 250)
Figure 173: Event View (page 251)
Figure 174: Event View Sorting and Column Display Options (page 251)
Figure 175: Event View Bar Chart Settings (page 252)
Figure 176: Event View Time Span Options (page 252)
Figure 177: Event View Timeline Partial Selection (page 252)
Figure 178: Create New Task Window (page 253)
Figure 179: Create New Cleanup Task Options (page 254)
Figure 180: Cleanup Notifications in the Timeline (page 255)
Figure 181: Reports Menu (page 259)
Figure 182: Web User Interface Nodes View (page 260)
Chapter 10: Data Collection and Analytics (page 261)
Figure 183: Task List Showing System Tasks (page 263)
Figure 184: Device Profile Window (page 265)
Figure 185: Sorting, Column Selection, and Filter Options (page 266)
Figure 186: Profile Connectivity Window (page 268)
Figure 187: Test Connectivity Options Window (page 268)
Figure 188: Connectivity Test Results (page 269)
Figure 189: Add New Device Window (page 270)
Figure 190: Delete Device Confirmation Window (page 274)
Figure 191: Device List Displayed by Group (page 275)
Figure 192: Manage Device Groups Window (page 275)
Figure 193: Manage Device Groups Window (page 276)
Figure 194: Logical and Physical Topologies Example (page 278)
Figure 195: Create New Task Window (page 280)
Figure 196: Example Task Scheduling Window (page 283)
Figure 197: Create New Task Window (page 285)
Figure 198: Device Collection Task, All Devices (page 286)
Figure 199: Device Collection Task, Selective Devices (page 287)
Figure 200: Device Collection Task, Groups (page 288)
Figure 201: Device Collection Task, Collection Options (page 289)
Figure 202: Device Collection Task, Scheduling (page 291)
Figure 203: Device Collection Results, Summary Tab (page 292)
Figure 204: Device Collection Results, Status Tab (page 292)
Figure 205: Analytics Widget Examples (page 293)
Figure 206: Link Label Settings: Interface Util A::Z (page 294)
Figure 207: Traffic View (page 295)
Figure 208: Graphical LSP Delay View (page 296)
Figure 209: Performance-Over-Time Slide Bar (page 297)
Figure 210: Performance Settings (page 297)
Figure 211: Analytics in Nodes View (page 298)
Figure 212: Accessing Top Traffic (page 299)
Figure 213: Top Traffic Example (page 300)
Figure 214: Top Traffic With Mouseover Information (page 301)
Figure 215: More Options Menu (page 302)
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at https://www.juniper.net/books.
Documentation Conventions
Caution: Indicates a situation that might result in loss of data or hardware damage.
Laser warning: Alerts you to the risk of personal injury from a laser.
Table 2 on page xxiv defines the text and syntax conventions used in this guide.
Bold text like this: Represents text that you type. For example, to enter configuration mode, type the configure command:
user@host> configure
Fixed-width text like this: Represents output that appears on the terminal screen. For example:
user@host> show chassis alarms
No alarms currently active
Italic text like this: Introduces or emphasizes important new terms, identifies guide names, and identifies RFC and Internet draft titles. For example:
• A policy term is a named structure that defines match conditions and actions.
• Junos OS CLI User Guide
• RFC 1997, BGP Communities Attribute
Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. For example, configure the machine’s domain name:
[edit]
root@# set system domain-name domain-name
Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. For example:
• To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level.
• The console port is labeled CONSOLE.
< > (angle brackets): Encloses optional keywords or variables. For example:
stub <default-metric metric>;
# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. For example:
rsvp { # Required for dynamic MPLS only
[ ] (square brackets): Encloses a variable for which you can substitute one or more values. For example:
community name members [ community-ids ]
GUI Conventions
Bold text like this: Represents graphical user interface (GUI) items you click or select. For example:
• In the Logical Interfaces box, select All Interfaces.
• To cancel the configuration, click Cancel.
> (bold right angle bracket): Separates levels in a hierarchy of menu selections. For example, in the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback so that we can improve our documentation. You
can use either of the following methods:
• Online feedback system—Click TechLibrary Feedback on the lower right of any page
on the Juniper Networks TechLibrary site, and do one of the following:
• Click the thumbs-up icon if the information on the page was helpful to you.
• Click the thumbs-down icon if the information on the page was not helpful to you
or if you have suggestions for improvement, and use the pop-up form to provide
feedback.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active Juniper Care or Partner Support
Services support contract, or are covered under warranty, and need post-sales technical
support, you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
• Visit https://myjuniper.juniper.net.
The Juniper Networks NorthStar Controller is an SDN controller that enables granular
visibility and control of IP/MPLS tunnels in large service provider and enterprise networks.
Network operators can use the NorthStar Controller to optimize their network
infrastructure through proactive monitoring, planning, and explicit routing of large traffic
loads dynamically based on user-defined constraints.
The NorthStar Controller provides network managers with a powerful and flexible traffic
engineering solution that includes the following important features:
• Specific ordering and synchronization of paths signaled across routed network elements
• Global view of the network state for monitoring, management, and proactive planning
• Ability to receive an abstracted view of an underlying transport network and utilize the
information to expand its packet-centric applications
The NorthStar Controller relies on PCEP to instantiate a path between the PCC routers.
The path setup itself is performed through RSVP-TE signaling, which is enabled in the
network and allows labels to be assigned from an ingress router to the egress router.
Signaling is triggered by the ingress routers in the core of the network. The PCE client
runs on routers that use a version of the Junos operating system (Junos OS) that
supports PCEP.
The NorthStar Controller provisions PCEP in all PE devices (PCCs) and uses PCEP to
retrieve the current status of the existing tunnels (LSPs) that run in the network. By
providing a view of the global network state and bandwidth demand in the network, the
NorthStar Controller is able to compute optimal paths and provide the attributes that
the PCC uses to signal the LSP.
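The router-side configuration that establishes this PCEP session is typically minimal. The following is a sketch of the PCC-side Junos OS statements involved; the PCE name (northstar) and the controller address are examples only, and you should confirm statement support against your Junos OS release:

[edit]
protocols {
    pcep {
        pce northstar {                          # local name for the NorthStar PCE
            destination-ipv4-address 10.0.1.29;  # NorthStar Controller address (example)
            destination-port 4189;               # standard PCEP TCP port
            pce-type active stateful;            # allow the PCE to update delegated LSPs
            lsp-provisioning;                    # allow PCE-initiated LSPs
        }
    }
    mpls {
        lsp-external-controller pccd;            # enable external (PCE) control of LSPs
    }
}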
NOTE: NorthStar supports functions related to LSPs and links for both
physical and logical systems. However, for logical systems, real-time updates
to the topology are not possible because there is no PCEP for logical systems.
Instead, you can perform periodic Netconf collection for updated logical
topology information.
The following sections describe the architecture, components, and functionality of the
NorthStar Controller:
The stateful PCE implementation in the NorthStar Controller provides the following
functions:
• Modifies other LSP attributes on the router, such as explicit route object (ERO), setup
priority, and hold priority
A TCP-based PCEP session connects a PCC to an external PCE. The PCC initiates the
PCEP session and stays connected to the PCE for the duration of the PCEP session.
During the PCEP session, the PCC requests LSP parameters from the stateful PCE. When
receiving one or more LSP parameters from the PCE, the PCC resignals the TE LSP. When
the PCEP session is terminated, the underlying TCP connection is closed immediately,
and the PCC attempts to reestablish the PCEP session.
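On routers running Junos OS, you can typically verify the PCEP session and the LSPs reported or delegated to the PCE from the PCC side with operational commands such as the following (command availability and output vary by release):

user@pcc> show path-computation-client status
user@pcc> show path-computation-client lsp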
• LSP tunnel state synchronization between a PCC and a stateful PCE— When an active
stateful PCE connection is detected, a PCC synchronizes an LSP state with the PCE.
PCEP enables a fast and timely synchronization of the LSP state to the PCE.
• Delegation of control over LSP tunnels to a stateful PCE—An active stateful PCE
controls one or more LSP attributes for computing paths, such as bandwidth, path
(ERO), and priority (setup and hold). PCEP enables such delegation of LSPs.
• Stateful PCE control of timing and sequence of path computations within and across
PCEP sessions—An active stateful PCE modifies one or more LSP attributes, such as
bandwidth, path (ERO), and priority (setup and hold). PCEP communicates these new
LSP attributes from the PCE to the PCC, after which the PCC resignals the LSP in the
specified path.
The PCCD is stateless, so it does not keep any state other than current outstanding
requests, and it does not remember any state for established LSPs. The PCCD requests
the state after the response comes back from the PCE and then forwards the response
to the RPD. Because the PCCD is stateless, the RPD only needs to communicate with
the PCCD when the LSP is first created. After the RPD receives the results from the PCCD,
the results are stored (even across RPD restarts), and the RPD does not need to
communicate with the PCCD again until the LSP is rerouted (when the LSP configuration
is changed or the LSP fails).
NOTE: Planner functionality is not available through the web UI. To perform
simulations without affecting the live network, you must use the NorthStar
Planner Java client UI.
• Label-switched path (LSP) reporting—Label edge routers (LERs) use PCEP reports to
report all types of LSPs (PCC_controlled, PCC_delegated, and PCE_initiated) to the
NorthStar Controller.
• LSP provisioning—Create LSPs from the NorthStar Controller or update LSPs that have
been delegated to the NorthStar Controller. You can also create multiple LSPs at one
time.
• Symmetric pair groups—Design a pair of LSPs so that the LSP from the ingress LER to
the egress LER follows the same path as the LSP from the egress LER to the ingress
LER. You can access this feature in the web UI by navigating to Applications > Provision
LSP, and clicking on the Advanced tab.
• Diverse LSPs—From the NorthStar Controller UI, design two LSPs so that the paths
are node, link, or SRLG diverse from each other.
• Standby and secondary LSPs—Provide an alternate route in the event the primary
route fails. The tunnel ID, from node, to node, and IP address of a secondary or standby
LSP are identical to that of the primary LSP. However, secondary and standby LSPs
have the following differences:
When bandwidth threshold triggers are reached on the PCC, a PCRpt message is
sent to the PCE. The PCRpt message includes the vendor TLV specifying the new
requested bandwidth. The following conditions apply:
• If a new path is not found, the process described above is repeated whenever the
adjust interval timer is triggered.
• LSP optimization—Analyze and optimize LSPs that have been delegated to the
NorthStar Controller. You can use the Analyze Now feature to run a path optimization
analysis and create an optimization report to help you determine whether optimization
should be done. You can also use the Optimize Now feature to automatically optimize
paths, with or without a user-defined timer. A report is not created when you use
Optimize Now, and the optimization is based on the current network conditions, not
on the conditions in effect the last time the analysis was done.
• Schedule maintenance events—Select nodes and links for maintenance. When you
schedule a maintenance event on nodes or links, the NorthStar Controller routes
delegated LSPs around those nodes and links that are scheduled for maintenance.
After completion of the maintenance event, delegated LSPs are reverted back to
optimal paths.
events at the time the simulation is initiated. Simulation does not simulate the
maintenance event for a future network state or simulate elements from other
concurrent maintenance events. You can run network simulations based on selected
elements for maintenance or extended failure simulations, with the option to include
exhaustive failures.
• TE++ LSPs—A TE++ LSP includes a set of paths that are configured as a specific
container statement and individual LSP statements, called sub-LSPs, which all have
equal bandwidth.
For TE++ LSPs, a normalization process occurs that resizes the LSP when either of the
following two triggers initiates the normalization process:
• A periodic timer
When either of the preceding triggers is fired, one of the following events can occur:
• No change is required.
• LSP splitting—Add another LSP and distribute bandwidth across all the LSPs.
• LSP merging—Delete an LSP and distribute bandwidth across all the LSPs.
For a TE++ LSP, the NorthStar Controller displays a single LSP with a set of paths, and
the LSP name is based on the matching prefix name of all members. The correlation
between TE-LSPs is based on association, and the LSP is deleted when there is no
remaining TE LSP.
• User authentication with an external LDAP server—You can specify that users are to
be authenticated using an external LDAP server rather than the default local
authentication. This enables in-house authentication. The client sends an authentication
request to the NorthStar Controller, which forwards it to the external LDAP server.
Once the LDAP server accepts the request, NorthStar queries the user profile for
authorization and sends the response to the client. The NorthStar web UI facilitates
LDAP authentication configuration with an admin-only window available from the
Administration menu.
• P2MP support—The NorthStar Controller receives the P2MP names used to group
sub-LSPs together from the PCC/PCE, by way of autodiscovery. In the NorthStar
Controller web UI, a new P2MP window is now available that displays the P2MP LSPs
and their sub-LSPs. Detailed information about the sub-LSPs is also available in the
Tunnel tab of the network information table. From the P2MP window, right-clicking a
P2MP name displays a graphical tree view of the group.
• Admin groups—Admin groups, also known as link coloring or resource class assignment,
are manually assigned attributes that describe the “color” of links, such that links with
the same color conceptually belong to the same class. You can use admin groups to
implement a variety of policy-based LSP setups. Admin group values for PCE-initiated
LSPs created in the controller are carried by PCEP.
The NorthStar Controller web UI also supports setting admin group attributes for LSPs
in the Advanced tab of the Provision LSP and Modify LSP windows. The admin group
for PCC-delegated and locally controlled LSPs can be viewed in the web UI as well.
For PCC-delegated LSPs, existing attributes can be modified in the web UI. A router-side
configuration sketch for admin groups follows this feature list.
• Analytics—Streams data from the network devices, via data collectors, to the NorthStar
Controller where it is processed, stored, and made available for viewing in the web UI.
The NorthStar Controller periodically connects to the network in order to obtain the
configuration of the network devices. It uses this information to correlate IP addresses,
interfaces, and devices. The collection schedule is user-configured. Junos Telemetry
Interface (JTI) sensors generate data from the PFE (LSP traffic data, logical and physical
interface traffic data), and send probes through the data-plane. In addition to
connecting the routing engine to the management network, a data port must be
connected to the collector on one of your devices. The rest of the devices in the network
can use that interface to reach the collector. Views and workflows in the web UI support
visualization of collected data so it can be interpreted. A sketch of the router-side
telemetry sensor configuration appears after this feature list.
• Netconf Persistence—Allows you to create a collection task for netconf and display
the results of the collection. Netconf collection is used by the Analytics feature to
obtain the network device configuration information needed to organize and display
collected data in a meaningful way in the web UI.
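As a point of reference for the admin groups and Analytics features described earlier in this list, the following sketches show how the corresponding router-side configuration typically looks on a Junos OS PCC. Group names, interface names, addresses, and ports are illustrative only; confirm statement support against your Junos OS release.

Link coloring (admin groups) and a delegated LSP that uses them:

[edit]
protocols {
    mpls {
        admin-groups {
            gold 1;                            # map a color name to a group value
            silver 2;
        }
        interface ge-0/0/0.0 {
            admin-group gold;                  # color this link "gold"
        }
        label-switched-path to-pe2 {
            to 10.0.0.2;
            admin-group include-any gold;      # constrain the path to gold links
            lsp-external-controller pccd;      # delegate this LSP to the controller
        }
    }
}

Admin groups defined this way on the routers correspond to the admin group attributes you can set in the Advanced tab of the Provision LSP and Modify LSP windows.

Native JTI sensors streaming interface and LSP statistics from the PFE to a data collector:

[edit]
services {
    analytics {
        streaming-server ns-collector {
            remote-address 10.0.2.10;          # data collector address (example)
            remote-port 3000;                  # collector listening port (example)
        }
        export-profile ns-export {
            local-address 10.0.0.1;            # router data-port address (example)
            reporting-rate 30;                 # seconds between reports
            format gpb;
            transport udp;
        }
        sensor interface-traffic {
            server-name ns-collector;
            export-name ns-export;
            resource /junos/system/linecard/interface/;
        }
        sensor lsp-traffic {
            server-name ns-collector;
            export-name ns-export;
            resource /junos/services/label-switched-path/usage/;
        }
    }
}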
UI Comparison
Table 3 on page 13 summarizes the major use cases for the NorthStar Controller and
NorthStar Planner.
NOTE: All user administration (adding, modifying, and deleting users) must
be done from the NorthStar Controller web UI.
NorthStar Controller: Manage, monitor, and provision a live network in real time.
NorthStar Planner: Design, simulate, and analyze a network offline.

NorthStar Controller: Live network topology map shows node status, link utilization, and LSP paths.
NorthStar Planner: Network topology map shows simulated or imported data for nodes, links, and LSP paths.

NorthStar Controller: Network information table shows live status of nodes, links, and LSPs.
NorthStar Planner: Network information table shows simulated or imported data for nodes, links, and LSPs.

NorthStar Controller: Discover nodes, links, and LSPs from the live network using PCEP or NETCONF.
NorthStar Planner: Import or add nodes, links, and LSPs for network modeling.

NorthStar Controller: Provision LSPs directly to the network.
NorthStar Planner: Add and stage LSPs for provisioning to the network.

NorthStar Controller: Create or schedule maintenance events to re-route LSPs around the impacted nodes and links.
NorthStar Planner: Create or schedule simulation events to analyze the network model under failure scenarios.

NorthStar Controller: Dashboard reports show current status and KPIs of the live network.
NorthStar Planner: Report manager provides extensive reports for simulation and planning.

NorthStar Controller: Analytics collects real-time interface traffic or delay statistics and stores the data for querying and chart displays.
NorthStar Planner: Import interface data or aggregate archived data to generate historical statistics for querying and chart displays.
Browser Compatibility
For accessing the NorthStar Controller web UI, we recommend Google Chrome and
Mozilla Firefox browsers for Windows and Mac OS. We also recommend that you keep
your browser updated to a recent version.
Your external IP address is provided to you when you install the NorthStar application.
In the address bar of your browser window, type that secure host external IP address,
followed by a colon and port number 8443 (for example, https://10.0.1.29:8443). The
NorthStar login window is displayed, as shown in Figure 2 on page 15. This same login
window grants access to the NorthStar Controller (Operator) and both versions of the
NorthStar Planner (Planner for web UI, Planner Desktop for desktop application). Make
your selection from the Access Portal drop-down menu. For Operator and Planner, enter
your username and password, and click Sign In.
If you select NorthStar Planner Desktop from the drop-down menu, the window changes
as shown in Figure 3 on page 16.
Click Download. Depending on the browser you are using when you initiate the download
and launch the NorthStar Planner desktop application, a dialog box might be displayed,
asking if you want to open or save the .jnlp file, accept downloading of the application,
and agree to run the application. Once you respond to all browser requests, a dialog box
is displayed in which you enter your user ID and password. Click Login.
You can also launch the NorthStar Planner desktop application from within the NorthStar
Controller by navigating to NorthStar Planner from the NorthStar Controller More Options
menu as shown in Figure 4 on page 16:
NOTE: If you attempt to reach the login window, but instead, are routed to
a message window that says, “Please enter your confirmation code to
complete setup,” you must go to your license file and obtain the confirmation
code as directed. Enter the confirmation code along with your administrator
password to be routed to the web UI login window. The requirement to enter
the confirmation code only occurs when the installation process was not
completed correctly and the NorthStar application needs to confirm that you
have the authorization to continue.
• Dashboard
• Topology
• Nodes
• Analytics
• Work Orders
Figure 5 on page 17 shows the buttons for selecting a view. They are located in the top
menu bar.
The Dashboard view presents a variety of status and statistics information related to the
network, in the form of widgets. Figure 6 on page 18 shows a sample of the available
widgets.
The Topology view is displayed by default when you first log in to the web UI.
Figure 7 on page 19 shows the Topology view.
The Topology view is the main work area for the live network you load into the system.
The Layout and Applications drop-down menus in the top menu bar are only available
in Topology view.
The Nodes view, shown in Figure 8 on page 19, displays detailed information about the
nodes in the network. With this view, you can see node details, tunnel and interface
summaries, groupings, and geographic placement (if enabled), all in one place.
The Analytics view, shown in Figure 9 on page 20, provides a collection of quick-reference
widgets related to analytics.
The Work Orders view, shown in Figure 10 on page 20, presents a table listing all scheduled
work orders. Clicking on a line item in the table displays detailed information about the
work order in a second table.
Functions accessible from the right side of the top menu bar have to do with user and
administrative management. Figure 11 on page 20 shows that portion of the top menu
bar. These functions are accessible whether you are in the Dashboard, Topology, Nodes,
Analytics, or Work Orders view.
• Account Settings
• Log Out
• Active Users
• Administration (the options available to any particular user depend on user group
permissions)
NOTE: The “Admin only” functions can only be accessed by the Admin.
• System Health
• Analytics
• Device Profile
• Task Scheduler
• Logs
• Transport Controller
• Planner Desktop (launches the NorthStar Planner Java client UI, without closing your
NorthStar Controller web UI)
User Management
In the NorthStar Controller application, a user has access to both the NorthStar Controller
web UI and the NorthStar Planner. Users and user groups that are created in either
Controller or Planner are carried over into the other. Because the available group
permissions are different in the Controller versus the Planner, you can adjust them in
either application.
• If you are installing the NorthStar Controller application for the first time (fresh install),
one user group is automatically created–Administrators. The Administrators user
group, by default, has full permissions in the work order management system–to create,
approve or reject, and activate work orders. See “Work Order Management” on page 30
for more information about the Work Order management system.
In a fresh install, the only user pre-added to this group is the Admin. The Admin is a
special user who can access all features and functionality within NorthStar, including
those related to system settings, license management, authentication method control,
and user management. Being assigned to the Administrators user group does not make
a user an Admin. But the Admin is assigned to the Administrators user group.
• If you are upgrading from a NorthStar release older than Release 4.1.0, two user groups
are automatically created–Administrators and Viewers.
IMPORTANT: All existing full-access users from the older release are pre-added to
the Administrators user group during the upgrade process. All view-only users from
the older release are pre-added to the Viewers user group. We recommend that the
Admin immediately access the User Management system (Administration > Users)
to create additional user groups, assign them appropriate permissions for handling
work orders, and assign each existing user to the appropriate user group based on
those permissions. The Admin is the only user who can access the User Management
system.
There is a relationship between the permissions users have and the functions in the
Administration menu that they can access (More Options in the upper right corner of the
NorthStar Controller window), as follows:
• All users (including users with Activate Work Orders, Approve Work Orders, or even no
permissions at all) can access:
• System Health
• Device Profile
• Task Scheduler
• Logs
• Users with Create Work Orders or Auto-Approve Work Orders can additionally access:
• Analytics
• Transport Controller
• Authentication
• License
• Subscribers
• System Settings
• Users
There is also a relationship between user permissions and functions available in the
Applications menu, as follows:
• Users with Create or Auto-Approve permission have access to the following functions:
• Provision LSP
• Device Configuration
• Path Optimization
• Bandwidth Calendar
• Event View
• Reports
• Top Traffic
NOTE: Add, Modify, and Delete buttons are available in the Network
Information table.
• Users with any other permission(s) have access only to the following functions:
• Bandwidth Calendar
• Event View
• Reports
• Top Traffic
NOTE: Add, Modify, and Delete buttons are not available in the Network
Information table for these users.
1. Click Manage User Groups in the upper right corner of the User Management window.
The Manage User Groups window appears as shown in Figure 13 on page 24.
2. Click Add Group in the lower left corner. You are prompted to enter the name of the
new group. Click OK. The new group is added to the list of groups in the Manage User
Groups window.
3. Select the new group in the list. On the right side of the window, click in the check
boxes for the permissions you want to assign to this group. A group can have any
combination of the available permissions selected, except that the first two
(Auto-Approve Work Orders and Create Work Orders) are mutually exclusive because
Auto-Approve permission includes Create permission. By default, none of the
permissions are checked as shown in Figure 14 on page 25.
See “Work Order Management” on page 30 for more information about the available
permissions and how the work order management system functions.
Once the groups are created, you can create new users and assign each to a group. When
you create a new user, you must assign them a username, a password, and a group. To
create a new user:
1. Click Add in the User Management window. The Add User window is displayed as
shown in Figure 15 on page 26.
2. Complete the Username, Password (this is the initial password that the user can later
change), and Confirm Password fields. Click the down arrow beside the Group field
to select a group for this user from the list of existing groups. Profile Name, Email, and
Phone are optional fields.
To modify an existing user, either select the username from the User Management window
and click Modify, or just double click the username. Both actions display the Modify User
window where you can modify the values you previously assigned.
To delete an existing user, select the username in the User Management window and
click Delete.
NOTE: There is no warning that you are about to delete the user, so be sure
of your intention before you click Delete.
To modify the permissions assigned to a user group, click Manage User Groups in the
upper right corner of the User Management window to display the Manage User Groups
window. Select the group to be modified in the left side of the window and revise the
permissions in the right side of the window.
NOTE: When you change the permissions of a group, all the members of that
group are affected.
Before you can delete a group, you must delete the users assigned to it, or reassign users
in that group to another group. To delete an empty group, select the group name in the
Manage User Groups window and click Delete.
NOTE: There is no warning that you are about to delete the group, so be sure
of your intention before you click Delete.
Active Users
The Active Users window shows who is currently logged in to the system, when they
logged in, how long they have been logged in, their user group, and whether they are
logged in to the web UI or the NorthStar Planner. This window is available to all users,
but is a particularly good user management tool for the Admin.
Access the Active Users window from the Menu icon (horizontal bars) in the upper right
corner of the web UI.
Figure 16 on page 27 shows the Active Users window, including the sorting and column
selection options that are available when you hover over a column heading and click on
the down arrow that appears.
The Force Log Out button is available only to the Admin, for the purpose of selectively
disconnecting NorthStar Controller (as opposed to Planner) user sessions. To disconnect
a user session, select the user name to disconnect and click Force Log Out.
The Account Settings window allows you to change your password, create or change a
profile name (like a nickname) for yourself, enter your contact information (e-mail address
and telephone number), and set up date/time and time zone preferences for your web
UI display. You cannot change your username. Click Update to save your changes, or
Cancel to discard them.
Work order management provides authorization and tracking for two kinds of change
requests:
Change requests (additions, deletions, and modifications) are captured as work orders
and must be approved and activated (provisioned) before they can take effect and be
seen in the network information table and in the topology (in the case of LSPs), or in the
router configurations (in the case of device configuration updates). Users can perform
the various functions within the work order management system based on their assigned
user group.
1. Created/submitted
2. Approved or rejected
3. Activated (if approved) - this step actually provisions the LSP(s) or pushes the
requested configuration change to the router(s)
4. Closed
All users can monitor the status of work orders using the Work Orders window accessible
from the top menu bar in the web UI.
Work orders are stored in the Cassandra database, each with a number of attributes
such as:
• Provisioning status
The Cassandra database is queried to populate the Work Orders window. Changes in
the Work Orders window are immediately saved back to the Cassandra database and
broadcast to all users in real time, so everyone has the most current information.
Create Work Orders permission: The user can access the web UI window appropriate for
the desired request, such as Provision LSP, Modify LSP, Provision Multiple LSPs, Device
Configuration, and so on. Once the user clicks Submit (or Provision), a work order is created.
Approve Work Orders permission: The user can approve or reject work orders created by
anyone, including work orders the user created (if the user also has Create Work Orders
permission).
Auto-Approve Work Orders permission: The user can create work orders that are
automatically approved and activated. Create and Auto-Approve are mutually exclusive
because Auto-Approve includes Create. Auto-Approve permission does not enable a user
to approve work orders submitted by other users. Auto-Approve permission also applies
to the REST API, making automated northbound integration possible with third-party
systems or scripts.
A user with none of these permissions can view the status of work orders, but cannot
alter them in any way.
See “User Management” on page 21 for information about creating user groups and
assigning permissions to them.
The new work order appears in the Work Orders window, accessible from the top Menu
Bar in the web UI. The Status column lists the work order as Submitted. The Submitter
Figure 19 on page 32 shows the Work Orders window with work orders listed in the top
portion. The bottom portion of the window (Details) shows detailed information for the
highlighted work order, an LSP provisioning work order in this example.
Figure 20 on page 32 and Figure 21 on page 33 show the Details section for an example
device configuration work order. There are two tabs: Details Status and Configuration.
The Configuration tab lists the CLI being pushed to the device(s).
Figure 20: Details for Device Configuration Work Order, Details Status Tab
Figure 21: Details for Device Configuration Work Order, Configuration Tab
The Details part of the window for a Modify work order shows both the old and new
values.
To approve a work order, highlight the row in the Work Orders window and click Workflow
in the upper right corner of the window. Select Approve or Reject from the drop-down
window. Optionally, add a comment when prompted. The status for the work order is
updated accordingly.
A user with Activate permission must then activate the approved work order for it to
actually take effect. To activate a work order, highlight the row in the Work Orders window
and click Workflow in the upper right corner. Select Activate from the drop-down menu
to display the Schedule Work Order window. The Schedule Work Order window is different,
depending on whether the work order is related to LSP provisioning or to device
configuration.
NOTE: The Schedule Work Order window is not presented when work orders
are auto-approved. Such work orders are approved and activated immediately
upon submission.
Figure 22 on page 34 shows the Schedule Work Order window for an LSP provisioning
work order. The calendar is displayed when you click the calendar icon.
Figure 22: Schedule Work Order Window for an LSP Provisioning Work Order
Figure 23 on page 35 shows the Schedule Work Order window for a device configuration
work order. In addition to being able to schedule the work order to take effect at a future
day and time, you can also opt to run device collection immediately afterwards, to update
the NorthStar topology.
Figure 23: Schedule Work Order Window for a Device Configuration Work Order
You can opt to provision the work order immediately or at a future date and time.
Optionally, you can add a comment when prompted. Once activated, NorthStar attempts
to provision the LSP (for LSP work orders), and the LSP appears in the network information
table (Tunnel tab) and in the topology. When device configuration work orders are
activated, the configuration statements are pushed to the network devices according to
the instructions in the work order. Verify the provisioning is successful. The Work Orders
window includes a column for Provisioning Status.
Best Practices
The following best practices help to keep the Work Orders window current and meaningful
over time:
• Submitters: close your work orders when they are no longer needed.
Work orders are considered open until they are manually closed; only open work orders
are displayed in the Work Orders window. We recommend that you keep this display
as streamlined as possible by closing activated or rejected work orders when they are
no longer needed, thereby removing them from the Work Orders window. Close a work
order by highlighting the row in the work orders table and clicking Workflow in the
upper right corner of the window. Select Close.
NOTE: Only the user who submitted a work order can close it. Not even
the Admin can close a work order submitted by another user. A work order
can be closed by the user who submitted it as long as the status is
Submitted, Rejected, or Activated.
• Approvers and Activators: Monitor the Work Orders window regularly and advance
work orders promptly to keep them moving through the work order management
system.
The submitter, approver, and activator comments are retained and displayed as part
of the work order record to help clarify what is happening with the work order at each
step in the process. The submitter comment is populated automatically and can be
changed. The approver and activator comments are completely optional, but potentially
valuable.
When you first log in to the web user interface, the initial window displays the Topology
view by default, as shown in Figure 24 on page 40.
The Topology view is the main work area for the live network you load into the system,
and has the following panes (numbers correspond to the callouts in Figure 24 on page 40):
2. Interactive graphical topology map pane—Use the topology map to access element
information and further customize the map display. The color legend at the bottom
is configurable and is tied to the Performance selection from the drop-down menu in
the Left Pane.
Many familiar navigation functions are supported in the Topology window, and are
summarized in Table 4 on page 41.
Drag and drop: Left-click an element, hold while repositioning the cursor, then release.
Select multiple elements: Hold down the Shift key and left mouse button while dragging the mouse to create a rectangular selection box (all elements within the box are selected), or hold down the Shift key and click multiple items, one at a time.
Filter the network information table to display an element: Double-click a link or node to display only that element in the network information table.
Zoom to fit: Click the circular button that looks like a bull’s eye in the upper right corner of the window to size and center the topology map to fit the window.
Right-click to access functions: Right-click a blank part of the topology map or a map element to access context-relevant functions.
Hover: Hover over some network elements in the topology map to display the element name or ID.
Collapse/expand pane: When a left, right, up, or down arrow appears at the margin of a pane, click it to collapse or expand the pane.
Resize panes: Click and drag many of the pane margins to resize the panes in a display.
The topology map is interactive, meaning that you can use features within the map itself
to customize the map and the network information table. The map uses a geographic
coordinate reference system. Some features enabled by that system include:
• World wrapping/map wrapping: Scrolling the map in one direction is like spinning a
globe. This enables representation of links across an ocean, for example.
Right-Click Functions
Right-click a node, selected nodes, or node group on the topology map to execute
node-specific filtering as shown in Figure 25 on page 42 and described in
Table 5 on page 43.
Filter in Node Table: Filters the nodes displayed in the network information table to display only the selected node(s) or node group(s).
Node SIDs from selected node: Labels the nodes in the topology with the node SIDs from the perspective of the node on which you right-clicked.
Show Config: Opens the Configuration Viewer, displaying the configuration of the node on which you right-clicked. See “Configuration Viewer” on page 54 for prerequisites for the configuration to be available.
Show Neighbors: Opens a new window displaying the neighbors of the node on which you right-clicked.
Tunnels On or Thru Node: Filters the tunnels displayed in the network information table to include only those that meet the On or Thru Node criteria.
Tunnels Starting at Node: Filters the tunnels displayed in the network information table to include only those that meet the Starting at Node criteria.
Tunnels Ending at Node: Filters the tunnels displayed in the network information table to include only those that meet the Ending at Node criteria.
Group selected nodes: Prompts you to give the group of nodes a name, after which the group can be expanded or collapsed on the topology map. This is a shortcut to the Layout > Group selected nodes function.
Ungroup selected nodes: Ungroups the nodes in the selected group. This is a shortcut to the Layout > Ungroup selected nodes function.
Ungroup All: Ungroups the nodes in all groups. This is a shortcut to the Layout > Ungroup All function.
Circle selected nodes: Arranges the selected nodes in a roughly circular pattern with the nodes and links separated as much as possible. This is a shortcut to the Layout > Circle selected nodes function.
Distribute selected nodes: Forces the selected elements away from each other and minimizes overlap. This is a shortcut to the Layout > Distribute selected nodes function.
Straighten selected nodes: Aligns the selected nodes in a linear pattern. This is a shortcut to the Layout > Straighten selected nodes function.
Filter in Link Table: Filters the links displayed in the network information table to display only the selected link.
Tunnels On or Thru Link: Filters the tunnels displayed in the network information table to include only those that meet the On or Thru Link criteria.
View Link Events: Opens a new window in which you select the time range for the events you wish to view. Click Submit to open the Events window.
View Interface Traffic: Opens a new tab in the network information table at the bottom of the window, displaying the interface traffic.
View Interface Delay: Opens a new tab in the network information table at the bottom of the window, displaying interface delay over time.
View Packet Loss: Opens a new tab in the network information table at the bottom of the window, displaying packet loss statistics.
NOTE: To clear the tunnel filter so that all tunnels are again displayed, click
a different tab (Node, for example), and then click the Tunnel tab again.
Right-click blank space in the topology map pane to access the whole-map functions
shown in Figure 27 on page 45 and described in Table 7 on page 45.
Distribute All Nodes: Distributes all the nodes in the map, pushing elements away from each other and minimizing overlap. This is a shortcut to selecting all nodes and navigating to Layout>Distribute selected nodes.
Save Default Map Layout: Saves the current layout as your default. The default layout is displayed when you first log in to NorthStar Controller. If you already have a default layout, this function overrides the existing default. You can also designate a default layout by navigating to Layout>Manage Layouts.
Select All Nodes: Selects all nodes on the topology map. This is a shortcut to using shift-left-click to create a selection box around all nodes or individually shift-clicking on all nodes.
Refresh Utilization: Refreshes the display of link colors based on RSVP utilization.
The Topology Settings window contains many topology display settings, all in one place.
Figure 30 on page 47 shows the Topology Settings window with the two tabs that group
related settings.
On the Elements tab, you can select as many settings as you like by clicking the associated
check boxes. When you select to Show Label for nodes or links, you can select only one
label from the corresponding drop-down menu.
NOTE: NorthStar does not display node or link labels over a certain quantity,
even if the Topology Settings call for labels to be displayed. This improves
performance when redrawing a large number of graphic elements.
NOTE: Drawing down links as a solid, rather than dashed, line can improve
performance when redrawing the topology.
Removes from the display any links for which both end nodes are not within the field
of view. This is useful for focusing on a subset of a large network.
Distinguishes links that would have to wrap around the world map. An example is
shown in Figure 31 on page 48.
The two options available in this section are mutually exclusive; select one radio button
or the other. Clusters and Bundles is useful where the display of a large number of nodes
and links obscures visualization of the network as a whole. Clusters (of nodes) and
bundles (of links) simplify visualization by representing groups of nodes that are close
together as single, color-coded circles (clusters). Bundles (of links) are derived from the
links between nodes and clusters. Figure 33 on page 50 shows an example of how a
portion of a large network looks when represented as clusters and bundles.
The number in each circle indicates the number of nodes in the cluster. The color coding
of the clusters corresponds to the number of nodes in the cluster. You can customize the
ranges by clicking on the color legend in the lower left corner of the map window as
shown in Figure 34 on page 50.
NOTE: When you select Clusters and Bundles, node and link labels are not
displayed.
The Light and Dark options available in this section are mutually exclusive; select one
radio button or the other. Figure 35 on page 51 shows an example of the light and dark
map styles.
If you select to Show World Map, you can opt to display graticules (a grid of lines parallel
to meridians of longitude and parallels of latitude) and labeling of major populated
places (both shown in Figure 35 on page 51).
NOTE: Even if you deselect Show World Map, the topology still behaves
according to geographical coordinates in terms of displaying the topology
within the field of view.
General section
Select the check boxes for as many of the options in this group as you like:
• Show Tooltips: Displays additional information about a node or link in the bottom right
corner of the map pane when you mouse over a network element.
• Show Maintenance Marker: Displays a red M over any link currently part of a
maintenance event.
• Zoom to Selected Node from Table: With this option enabled, when you click on a
node entry in the network information table (Node tab), the topology automatically
centers the view on that selected node.
Use the Label Size drop-down menu to select a font size for node and link labels.
The Layout drop-down menu in the top menu bar includes a number of options for
arranging elements on the topology map. Figure 36 on page 52 shows the Layout
drop-down menu options.
From the Layout menu, you can group and ungroup nodes, distribute nodes using different
models, reset the topology map according to geographical coordinates, save layouts,
and manage saved layouts.
• Import a layout from a GeoJSON file. JSON format is stricter than CSV, requiring
key-value pairs.
• Export a layout to a CSV file, which has headers only for hostname, longitude, latitude,
and group (less information than the GeoJSON file has).
• Export a layout to a GeoJSON file which you could then use in various mapping
applications that support GeoJSON format.
Manage Layouts
To save a layout so you can quickly load it into the topology map pane at any time,
navigate to Layout>Manage Layouts. The Map View window is displayed as shown in
Figure 37 on page 53.
Click Save. The Save Map window is displayed as shown in Figure 38 on page 53.
Enter a name and description for the current layout and specify whether the saved layout
is to be shared by all operators (shared) or is to be available only to you (private). Click
Submit.
From the Map View window, where all your saved layouts are listed, you can click the
check box beside the layout you want as your default. The default layout is displayed
initially whenever you log in to NorthStar Controller.
NOTE: You can also right-click a blank part of the topology map pane and
select Save Default Map Layout to save the current layout as your default.
This action saves the current layout as your default, but does not change the
name of the default in the Manage Layouts window.
Select a layout and use the buttons at the bottom of the window to perform the functions
listed in Table 8 on page 54.
Button Function
NOTE: If you select an existing layout and click Save, the existing layout is replaced
by the new layout, without changing the name of the layout in the Manage Layouts
window.
Configuration Viewer
You can view (view-only) the configuration of a router in the network using the
Configuration Viewer. You must set up the Device Profile (Administration > Device
Profile) and Device Collection (Administration > Task Scheduler) to retrieve the
configuration files before they are available in the Configuration Viewer.
To access the viewer for a node in the topology, right-click a node in the topology map
and select Show Config.
The left pane displays the router configuration file. The right pane displays an outline
view that groups the configuration by statement blocks in which you can drill down. When
you click a specific statement in the right pane, it is displayed in context in the left pane.
The colored text in the configuration file in the left pane highlights nested levels, version,
password, and comment statements.
Clicking the triangle icon in the upper right corner of the viewer window opens the search
field at the bottom of the window. Enter your search text and click Find or Find Prev to
move forward or backward through the search results.
You can also access the Configuration Viewer from the Integrity Checks report. After you
perform device collection, the router configuration files are scanned and the NorthStar
Controller flags anything suspicious. The resulting report provides hints as to what might
need attention.
To inspect the router configuration file from this report, right-click a line item in the report
and select Show Config to open the Configuration Viewer. If the report line item is for an
LSP, the configuration viewer opens a separate tab for each end of the tunnel so you can
see both relevant configuration files.
From the Applications menu in the top menu bar, you can perform some of the functions
also available in the network information table including provisioning LSPs, diverse LSPs,
and multiple LSPs. You can also configure LSP delegation, set up optimization, and
access reports.
The Top Traffic option displays a pane on the right side of the Topology window that
lists the computed Top N Traffic over X period of time by Node, Interface, LSP, or Interface
Delay. Select N and X by clicking on the currently selected settings in the lower right
corner of the display.
Two utilities that open in separate browser windows or tabs are also launched from this
menu:
NOTE: The bandwidth calendar timeline is empty until you schedule LSPs.
• Event View—Displays events coming in from the topology server. You have a number
of options for how this information is organized and displayed.
You can represent a collection of nodes on the topology map as a single entity by first
selecting the nodes, and then navigating to Layout>Group selected nodes where you
are prompted to give the group a name. To ungroup the nodes in a group, select the group
on the map and then navigate to Layout>Ungroup selected nodes.
Using the Groups list in the left pane, you can control how the group is displayed in the
topology map—as a single group entity or as individual member nodes. When you expand
a group in the Groups list using the plus (+) sign next to the group name, all the member
nodes are listed in the left pane and are displayed in the map. When you collapse a group
in the Groups list using the minus sign (-), only the group name appears in the left pane,
and the group is represented by a single icon in the map. Figure 41 on page 58 shows a
collapsed group in the Groups list in the left pane and the resulting representation of the
group in the topology map.
As shown in Figure 42 on page 58, when the group is expanded in the Groups list, the
individual nodes are displayed in the map instead of a single group icon.
Auto Grouping
You can auto group nodes by navigating to Layout > Auto Grouping.
The Auto Grouping option allows you to use multiple rules in sequence to group nodes,
using rule set builder functionality. Figure 43 on page 59 shows the AutoGroup Window
with two levels of grouping configured. In this example, nodes are to be grouped first by
ISIS area and then by site.
When you click the Add button (+) to add a new rule, you then specify rule type as either
City, Country, Continent, AS, ISIS Area, OSPF Area, Site, or Regular Expression. You can
change the order of the rules by clicking on a rule and using the up and down arrows to
reposition the rule in the list. You can also select to apply auto-grouping to all nodes or
just to the nodes that you have selected on the topology map. To delete a rule, select it
and click the Delete button (trash can). The Edit function (pencil icon) is only available
for Regular Expression rules.
When you select Regular Expression as the rule type, the Regular Expression Rule window
is displayed as shown in Figure 44 on page 60.
Use the drop-down menu to select Hostname, Name, IP Address, or Type. Then enter
the text in the Find the first match for field. Click the check box if you want the match
to be case sensitive. For example, entering NYC with Hostname selected matches nodes
whose hostnames contain NYC (and nyc as well, unless the match is case sensitive).
Distribute Nodes
From the Layouts menu, you can select multiple nodes and redistribute them to improve
visual clarity or for personal preference. You can select all the nodes in the topology to
apply a distribution model, or you can select a subset such as edge devices or core devices.
Model Description
Circle Arranges the selected nodes in a roughly circular pattern with the nodes and links separated
as much as possible.
Distribute Forces the selected elements away from each other and minimizes overlap.
You can reset the distribution of nodes on the topology map according to geographical
coordinates if you have set the latitude and longitude values of the nodes. It can be useful
to have the country map backdrop displayed when you use this distribution model.
To configure latitude and longitude for a node, select the node in the network information
table at the bottom of the Topology view, and click Modify in the bottom tool bar. In the
Modify Node window, click the Location tab. Figure 45 on page 61 shows the Location
tab of the Modify Node window.
Click the Location tab and enter latitude and longitude values using signed degrees
format (DDD.dddd):
• Positive values of latitude are north of the equator; negative values (precede with a
minus sign) are south of the equator.
• Positive longitudes are east of the Prime Meridian; negative values (precede with a
minus sign) are west of the Prime Meridian.
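For example, using illustrative values, a node located in Sydney, Australia (south of the
equator and east of the Prime Meridian) would be entered as:
Latitude: -33.8688
Longitude: 151.2093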
NOTE: You can either enter the values directly or you can use the up and
down arrows to increment and decrement.
Click Submit.
To redistribute the nodes in the topology map according to the latitude and longitude
values of the nodes, navigate to Layout>Reset by Coordinates.
Turning on the World Map also triggers a reset by latitude and longitude. To turn on the
World Map in the topology window, click the Tools icon (gear) on the right side of the
topology window and select the Options tab. Click the check box for Show World Map.
You can also set node latitude and longitude coordinates in the NorthStar Planner client,
and copy those values to the nodes in the Live Network model. Any existing coordinate
values in the Live Network model are overwritten by this action, an important consideration
since the Live Network model is shared by all users.
The left pane drop-down menu offers several ways to filter the data that is displayed in
the NorthStar Controller topology map pane, as well as several views related to status
and network properties. When you first log in to the web user interface, the initial view
shows Network Status. Table 10 on page 62 summarizes the left pane drop-down menu
choices.
Option Description
Timeline Displays a list of timestamped network events. You can use filtering to
narrow the display to specific types of event. This information can be
useful for debugging purposes.
Types Lists node types you can opt to display or hide on the topology map.
Nodes/Groups Displays user-created groups with or without listing the member nodes.
Expanded groups are represented on the topology map by individual
node icons. Collapsed groups are represented on the topology map by
group icons, and the individual member nodes are not displayed. All nodes
start out as ungrouped.
Protocols Selects protocols to include in the topology map. Nodes configured with
selected protocols are displayed. The default option includes all protocols.
Network Status
Figure 46 on page 64 shows an example of the Network Status display in the left side
pane of the Topology view. Network Status is the view that is displayed in the left pane
when you first launch the NorthStar Controller application.
The panel displays the percentage and count of the network’s active paths, active links,
and active PCCs that are in an UP state. The display is updated every one to two minutes,
depending on the frequency of incoming events. The busier the network, the more frequent
the update.
The number of paths detoured and LSPs in the process of being provisioned are also
noted. Detoured paths are those using a bypass LSP.
These numbers could differ from what is reported in the network information table:
• Active Paths: by design, the Active Paths reported in the Network Status display is not
the same as what is reported in the Tunnel tab of the network information table because
the Tunnel tab includes secondary paths and the Active Paths display does not. If you
have a secondary path for any LSPs, the Active Paths display and the Tunnel tab in
the network information table do not match.
• Active Links: should always match the Link tab in the network information table if the
internal model is in sync with the live network. If they don’t match, it can be a symptom
that the internal model has become out of sync with the live network. On a regular
basis, when the internal model is updated, it is with changes to the live network topology,
not with a rebuilding of the entire topology. So over time, the model and the live network
can become out of sync. To correct this problem, replenish the internal model with the
entire live network information using Sync Network Model under Administration >
System Settings.
• Active PCC: by design, the Active PCC reported in the Network Status display is not
the same as what is reported in the Node tab of the network information table because
the Node tab includes pseudo nodes and the Active PCC display does not. The Active
PCC display only includes nodes that are routers; it does not include pseudo nodes
such as Ethernet nodes or AS nodes. If you have pseudo nodes in the network, the
Active PCC display and the Node tab in the network information table do not match.
Timeline
Figure 47 on page 66 shows an example of the Timeline display in the left side pane of
the Topology view.
The timeline lists activities and status checkpoints with the most recent notations first.
You can use the Timeline to track chronological events as they occur in the network, in
order to take appropriate action in real time. You can also use the scroll bar to view past
network activities, going back as far as needed.
You can use the filtering box at the bottom of the pane to narrow the display to specific
types of event, or to events associated with a specific day or time.
When the timeline is not current, a message is displayed at the top of the Timeline pane
inviting you to “click here” to update the display.
You can assess the stability of the MPLS network by tracking changes in the number of
LSP Up and Down events over time. You can then analyze whether the occurrence of
specific other events affects the number of LSP Up and Down events.
The timeline includes events related to nodes, events related to links (for example, a
link going up), and events related to LSPs.
Types
The Types list in the left pane of the Topology view includes categories of nodes and
links found in the network. Figure 48 on page 67 shows a sample Types list.
Different types are associated with different icons, which are reflected in the topology
map. The example shown in Figure 48 on page 67 includes transport and interlayer link
types associated with the Coriant transport controller vendor.
You can select or deselect a type by checking or clearing the check box beside it. Only
selected options are displayed in the topology map. Click Check All to select all check
boxes; click Clear All to clear all check boxes.
You can right-click on a node type and select Properties to choose the icon that will
represent that node type in the topology map. You can also upload your own icon from
there.
Nodes/Groups
You can create groups of nodes using the topology map and the Layout menu. Once you
have groups in your topology, the Groups list in the left pane of the Topology view shows
all your node groups, and lists all nodes not included in any group under the heading
UNGROUPED.
When you expand a group listing using the plus (+) sign next to the group name, all the
member nodes are listed. When you collapse a group listing using the minus sign (-), only
the group name appears. In Figure 50 on page 69, Group1 and UNGROUPED are expanded,
and Group 2 is collapsed.
The topology map reflects the expansion and collapse of the groups in the groups list.
For an expanded group, all individual nodes are displayed in the topology map, without
indication of which group they belong to. For a collapsed group, the individual node icons
are collectively represented by a group icon. Hover over or click on the group icon in the
map to display the group name. If you collapse UNGROUPED in the Groups list, the nodes
disappear from the topology map. Figure 51 on page 69 shows the arrangement from
Figure 50 on page 69 along with the corresponding topology map.
Performance
Under Performance, you have the option to display on the topology map current (live
network) or historical (analytic traffic collection) data as shown in Figure 52 on page 70.
Click the radio button for the option you want displayed on the topology map. You can
only have one option selected at a time. The color legend at the bottom of the topology
map changes to correspond with your selection. See “Topology Map Color Legend” on
page 170 for information about customizing the legend.
For the historical options, there is a slide bar in the upper left corner of the map, visible
in Figure 52 on page 70. See “Viewing Analytics Data in the Web UI” on page 293 for more
information about how to use this feature to help visualize and interpret analytics data.
Click Settings at the bottom of the Performance options window to select the amount
of historical data to load.
Protocols
The Protocols list includes all protocols present in the current topology.
Figure 53 on page 71 shows an example.
NOTE: Select Default to display all protocols on the topology map. If you do
not want elements supporting all protocols to be displayed on the topology
map, be sure to clear the Default check box.
Click Check All to select all check boxes; click Clear All to clear all check boxes.
AS
The autonomous systems (AS) list assigns a color, for purposes of representation on the
topology map, for each AS number configured in the network. In Figure 54 on page 72,
routers configured with AS 11 appear on the topology map as red dots. NONE shows the
color assigned to routers with no AS configured.
Click Check All to select all check boxes; click Clear All to clear all check boxes.
ISIS Areas
The ISIS Areas list assigns a color, for purposes of representation on the topology map,
for each IS-IS area identifier configured in the network. The area identifier is the first three
bytes of the ISO network entity title (NET) address. In Figure 55 on page 73, routers whose
NET addresses include area identifier 11.0007 appear on the topology map as red dots.
Those with area identifier 49.0011 appear as green dots. NONE shows the color assigned
to routers with no NET address configured.
Click Check All to select all check boxes; click Clear All to clear all check boxes.
OSPF Areas
The OSPF Areas list assigns a color, for purposes of representation on the topology map,
for each OSPF area configured in the network. NONE shows the color assigned to routers
with no OSPF area configured.
In Figure 56 on page 74, routers with OSPF area 0 configured appear on the topology
map as red dots. Those with OSPF area 1 appear as green dots. NONE shows the color
assigned to routers with no OSPF area configured.
Select or deselect OSPF areas by selecting or clearing the corresponding check boxes.
Only selected areas are displayed in the topology map.
Click Check All to select all check boxes; click Clear All to clear all check boxes.
Path Optimization Status
Displays path optimization statistics and information, such as the number of paths that
were last optimized, the percentage of bandwidth savings achieved, the percentage of
hop count savings, and the time and date of the next optimization if one is scheduled.
Link Coloring
This option offers bit-level link coloring as shown in Figure 58 on page 76.
Layers
The Layers list gives you the option to exclude or include individual layer information in
the topology map.
Figure 59 on page 77 shows an example of the Layers list with IP and transport layer
options.
Use the Layers list to select the layers (IP or Transport or both) that you want to display.
If you are not using the Multilayer feature, the Layers list contains only IP and is not an
applicable filter.
Click Check All to select all check boxes; click Clear All to clear all check boxes.
Figure 60 on page 78 shows an example of a topology map that includes both IP Layer
and Transport Layer elements. The dotted link lines indicate interlayer links.
Network information is displayed in the pane at the bottom of the Topology view, below
the topology map. An example of the table is shown in Figure 61 on page 79.
Tabs appear across the top of the network information table. The columns of information
change according to the tab you select (Node, Link, Tunnel, Demand, Interface,
Maintenance, P2MP Group, SRLG). Within the tables, each row represents an element.
The element information can be rearranged and, in some cases, added to, filtered,
modified, or deleted. When you select an element in the network information table, the
corresponding element is selected in the topology map.
On any element, you can right-click for options relevant to that element. For example, if
you right-click a tunnel, you have the options shown in Figure 62 on page 79.
If you select View Events, for example, you are first prompted to select a time range and
click Submit, after which a window similar to the example shown in Figure 63 on page 80
is displayed.
NOTE: The events included in the View Events window are restricted to
external communication to and from NorthStar. Most of the communications
internal to NorthStar are captured only in the log files. This allows you to
focus on the information most likely to be useful to you as a NorthStar
operator.
On any element, you can double click for detailed information about that specific element.
For example, if you double click a node, you see information similar to that shown in
Figure 64 on page 80.
The teardrop-shaped icon in the upper right corner of the details window controls the
pin behavior described in Table 11 on page 81.
Unpinned  When unpinned, double clicking a second element in the network information table replaces
the contents of the first details window with the details of the second element. In this scenario, there
is only one details window open at a time.
Pinned  When pinned, double clicking a second element in the network information table opens a new
details window, leaving the first window intact.
TIP: If you double click a second element, but you still only see one details window, try moving the
window to the side by clicking-and-dragging the window heading. The windows might be stacked.
The Node, Link, and Tunnel tabs are always displayed. The other tabs are optionally
displayed. Click the + sign in the tabs heading bar to add a tab as shown in
Figure 65 on page 81.
Click the X beside any optionally displayed tab heading to remove the tab from the
display.
Related Documentation
• Sorting and Filtering Options in the Network Information Table on page 81
• Network Information Table Bottom Tool Bar on page 83
For many of the columns in the network information table, sorting and filtering options
become available when you hover over the column heading and click the down arrow
that appears.
Table 12 on page 82 describes the sorting and filtering options that could be available,
depending on the data column.
Option Description
Columns Click the check boxes to add or remove columns in the network information table.
Filters For some columns, the Filters option provides a search box. For other columns, the Filters
option allows you to enter values in greater than (>), less than (<), or equal to (=) fields. To
remove a filter, clear the check box next to the Filters option.
NOTE: In some topologies, the list of network elements can include multiple
pages of data. NorthStar only offers sorting capabilities on the active page.
In that case, try filtering to narrow down the number of rows displayed.
Using the Filters option, you can filter the devices that are included in the display by
activating a filter on any column. For example, if you want to display only the tunnels
that have 103 in their configured IP Z address, hover over the IP Z column heading, click
the down arrow that appears, and enter 103 in the filter box. The Filters check box is
automatically selected, and the display is filtered accordingly. The IP Z column heading
appears as italicized to indicate an active filter on the column. Figure 66 on page 82
illustrates this example.
To remove a filter, clear the Filters check box. You do not need to remove the filter text,
allowing you to toggle the filter on and off without reentering the text.
The bottom tool bar in the network information table has tools for navigating through
the network element data, as well as Add, Modify, and Delete buttons for performing
actions on elements.
The Add, Modify, and Delete buttons behave differently, depending on which type of
element you are working with; these functions are not always allowed. When they are
not allowed, the buttons are grayed out. The Modify and Delete buttons become enabled
when an individual element row is selected, as long as the action is allowed on that
element.
The topology server (Toposerver) requires that certain conditions be met before it will
allow you to delete a link or node.
• To delete a link:
• The link’s operational status must be down. The operational status is changed to
down when Toposerver receives the first LINK WITHDRAW message from NTAD.
• The link cannot have active IS-IS or OSPF adjacencies. IS-IS and OSPF adjacencies
are dropped when Toposerver receives the second LINK WITHDRAW message from
NTAD.
• To delete a node:
• The node must be isolated, meaning that all links associated with the node have
been deleted (after the link deletion conditions have been met).
• The node cannot have IS-IS, OSPF, or PCEP connections. IS-IS and OSPF adjacencies
are cleared when Toposerver receives a NODE WITHDRAW message from NTAD
and the PCEP session has been terminated. This workflow ensures that TED and
Toposerver are synchronized.
For some elements, you can modify or delete multiple items at once (bulk modify) by
Ctrl-clicking or Shift-clicking multiple line items in the table. For example, if you select
multiple items in the Tunnel tab and click Modify, the Modify LSP (X LSPs) window is
displayed as shown in Figure 67 on page 84.
The window supports deleting the contents of a field, leaving the contents unchanged,
or changing the contents to a specific value. Depending on the type of data the field
contains, you can click to toggle, use the up and down arrows to select a value, or
double-click to set a value. For fields where a blank value is not allowed (required fields),
the option to delete is not available.
Navigation Tools
The tools in the network information table bottom tool bar are available to help you
navigate through rows of data, refresh the display, and change the number of rows per
loaded page. These tools are especially useful for large models with many elements.
Table 13 on page 84 describes the tools in the bottom tool bar. Not all of the tools are
available for all element types (node, link, interface, and so on).
Table 13: Navigation Tools in the Network Information Bottom Tool Bar
Page __ of <total pages> Displays the specific page of data you enter.
Causes the web UI client to retrieve the latest data from the NorthStar server. This
button turns orange to prompt you to refresh when the display is out of sync.
Opens a search criteria field. Enter the search criteria and click the Filter button on the
far right of the field. The table and the topology display only the results of the search.
Click the down arrow to specify a grouping for the table contents.
Figure 68 on page 86 shows the Properties tab of the Modify Node window. All of the
fields on this tab can be modified.
Figure 45 on page 61 shows the Location tab of the Modify Node window. NorthStar
Controller uses latitude and longitude settings to position nodes on the country map,
and also to calculate distances when performing routing by distance.
Enter latitude and longitude values using signed degrees format (DDD.dddd):
• Positive values of latitude are north of the equator; negative values (precede with a
minus sign) are south of the equator.
• Positive longitudes are east of the Prime Meridian; negative values (precede with a
minus sign) are west of the Prime Meridian.
Figure 70 on page 87 shows the Addresses tab of the Modify Node window.
The NorthStar Controller supports using a secondary loopback address as the MPLS-TE
destination address. In the Addresses tab of the Modify Node window, you have the
option to add destination IP addresses in addition to the default IPv4 router ID address,
and assign a descriptive tag to each. You can then specify a tag as the destination IP
address when provisioning an LSP.
NOTE: A secondary IP address must be configured on the router for the LSP
to be provisioned correctly.
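For reference, a secondary loopback address on a Junos OS router is an additional address
configured under the lo0 interface. The following is a minimal sketch only; the addresses
are placeholders:
interfaces {
    lo0 {
        unit 0 {
            family inet {
                /* default IPv4 router ID */
                address 10.0.0.104/32;
                /* secondary loopback address used as the MPLS-TE destination */
                address 198.51.100.4/32;
            }
        }
    }
}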
Click Add to create a new line where you can enter the IP address and the tag.
NOTE: You can also reach the Provision LSP window from the Applications
menu in the top menu bar by navigating to Applications>Provision LSP. See
“Provision LSPs” on page 104 for descriptions of the data entry fields in this
window.
The Modify LSP window has the same data entry fields as the Provision LSP window
(not all of which can be modified).
NorthStar learns SRLG information from the following sources:
• BGP-LS
• Transport controller
The information from these sources is merged and presented in the web UI. You can also
Add, Modify, and Delete user-defined SRLGs.
The Modify Maintenance Event window contains the same fields as the Add Maintenance
Event window.
See “Provision and Manage P2MP Groups” on page 151 for descriptions of the data entry
fields in the Add P2MP Group window.
The Demand tab can include the following:
• LDP Forwarding Equivalent Class (FEC) data compiled as a result of LDP collection
tasks. These demands can be added, modified, or deleted from the network information
table. Demands are never automatically deleted. See “LDP Traffic Collection” on
page 320 for information about this data.
• Demands resulting from the Netflow Collector, which you can add, modify, or delete.
Demands are never automatically deleted. See “Netflow Collector” on page 333 for
more information about Netflow Collector data.
Using the Device Configuration tool, together with the Work Order Management tool,
you can push configuration statements to Juniper devices in the network, without leaving
the NorthStar application. Users with the necessary permission can create templates
(called “configlets”), where you specify which routers should receive the configuration
and the specific Junos OS configuration statements to include. Once a template is
provisioned, the request enters the Work Order Management system. Logical systems
and a view-only mode are supported.
• Overview on page 91
• Creating a Configuration Template on page 91
• Role of the Work Order Management System on page 96
• Modifying or Deleting Configlets on page 97
• More About View Mode on page 97
Overview
The Device Configuration tool in NorthStar uses configuration templates called “configlets”
to push Junos OS configuration statements to Junos devices in the network. Each configlet
specifies the configuration statements to include and the routers that are to receive the
configuration. Before actually pushing the configuration, you have the option to verify
the statements in the context of Junos syntax, leveraging the Junos commit check
function.
Only users with Create or Auto-Approve permission can create, modify, or delete
templates. These users can also tag templates as being available in View Mode, where
all users can see them. Untagged templates are not available in view mode. This tagging
method can be used to keep works in progress from being viewed by all users, or to
separate what different teams have access to.
See “User Management” on page 21 for information about how permissions are assigned
to groups, and therefore, to users.
Click Add in the upper right corner of the window to display the Add Configlet window
as shown in Figure 74 on page 92.
• If you want the configlet to be visible in View Mode, click the View Mode check box.
Otherwise, leave it blank.
• All of the eligible Junos devices in the network are listed under Applies To. Click the
check box for each one that is to receive the configuration. If you want all the listed
devices to receive the configuration, click the check box beside ID.
NOTE: Logical systems are supported. Not all networks have logical
devices, but for every physical device that has a corresponding logical
device, there is an information icon beside the physical device in the list
of devices. Click the information icon to see the logical device. An
example is shown in Figure 75 on page 94.
• Enter the configuration statements, one statement per line. This is the configuration
that is to be pushed to the routers. (An example appears after this list.)
• To verify the statements in the context of Junos syntax, leveraging the Junos commit
check function, click Validate in the lower left corner of the window. This button is
also available on the Properties tab. A Validate CLI Commands feedback window
lets you know if the validation was successful. Performing this check does not submit
the work order or push the configuration to the routers.
Figure 77 on page 96 shows examples of the feedback for both unsuccessful and
successful validation.
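For illustration only, a simple configlet body might consist of set-style statements such
as the following (the values are placeholders):
set snmp location "POP-3, rack 12"
set snmp contact "[email protected]"
Clicking Validate runs statements such as these through the Junos commit check before
any work order is created or any configuration is pushed.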
• A user with Create Work Orders permission can create, modify, and delete configlets
and submit them to the work order management system.
• A user with Approve (or Reject) Work Orders permission can approve or reject device
configuration work orders created by anyone, including work orders they themselves
created (if they also have Create Work Orders permission).
• A user with Auto-Approve Work Orders permission can create device configuration
work orders that are automatically approved and activated. The Create and
Auto-Approve permissions are mutually exclusive.
• A user with Activate Work Orders permission can activate (provision) approved device
configuration work orders created by anyone.
2. The user clicks Provision in the lower left corner of the window. This creates the work
order. If the submitter has Auto-Approve permission, the work order is automatically
approved and activated. Otherwise, a user with Approve permission takes the next
step.
3. A user with Approve permission approves (or rejects) the device configuration.
4. A user with Activate permission activates the approved work order. Once activated,
the configuration is pushed to the specified devices.
Figure 79 on page 98 shows what the Device Configuration window looks like in View
Mode.
Only configlets that were tagged View Mode are visible. Select a configlet and click View
in the upper right corner of the window to see details of the configlet. No changes can
be made in View Mode.
LSP Management
The NorthStar Controller uses PCEP or Netconf to learn about LSPs in the discovered
network topology, and all LSPs and their attributes can be viewed from the NorthStar
Controller user interface. However, the LSP type determines whether the Path
Computation Client (PCC) or NorthStar Controller maintains the operational and
configuration states.
• PCC-controlled LSP: The LSP is configured locally on the router, and the router
maintains both the operational state and configuration state of the LSP. The NorthStar
Controller learns these LSPs for the purpose of visualization and comprehensive path
computation. Using Netconf, these LSPs can be created or modified in NorthStar.
• PCC-delegated LSP: The LSP is provisioned on the PCC (router) and has been delegated
to the NorthStar Controller for subsequent management. The operational state and
configuration state of the LSP is stored in the PCC. For delegated LSPs, the ERO,
bandwidth, LSP metric, and priority fields can be changed from the NorthStar Controller
user interface. However, the NorthStar Controller can return delegation back to the
PCC, in which case, the LSP is reclassified as PCC-controlled.
• PCE-initiated LSP: The LSP is provisioned from the NorthStar Controller UI. For these
LSPs, only the operational state is maintained in the router, and only NorthStar can
update the LSP attributes.
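PCC-delegated and PCE-initiated LSPs both depend on a PCEP session from the PCC to
the NorthStar Controller. The following is a minimal Junos OS sketch of such a session;
the PCE name and addresses are placeholders, and the exact statements can vary by
Junos OS release:
protocols {
    pcep {
        pce northstar {
            local-address 10.0.0.104;
            destination-ipv4-address 10.0.0.100;
            destination-port 4189;
            pce-type active stateful;
            lsp-provisioning;
        }
    }
}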
The NorthStar Controller supports the discovery, control, and creation of protection LSPs
(standby and secondary LSPs). For protection LSPs, the primary, secondary, and standby
LSP must be of the same type (PCC-controlled, PCC-delegated, or PCE-initiated). Each
LSP can have its own specific bandwidth, setup priority, and hold priority or can use the
values of the primary LSP (the default). A primary LSP must always be present for
controller-initiated LSPs.
Provisioning Method
NorthStar Controller supports two methods for provisioning and managing LSPs: PCEP
and Netconf. When you provision an LSP using PCEP, the LSP is added as a PCE-initiated
LSP. When you provision using Netconf, the LSP is added as a PCC-controlled LSP.
Table 14 on page 100 summarizes the provisioning actions available for each type of LSP
in the NorthStar Controller.
In NorthStar, both PCEP and Netconf device collection discover the same LSP attributes
(in other words, there are no additional LSP attributes discovered only by device
collection).
The following actions are performed or available when LSP provisioning is done via PCEP,
but not when done via Netconf:
• LSP rerouting: When receiving an LSP down event from the network, NorthStar does
not automatically recompute and reprovision a new path for PCC-controlled LSPs.
• Path Optimization: When you run path optimization, PCC-controlled LSPs are not
optimized.
• Other routing methods (default, delay, and so on)— When a PCC-controlled LSP has
a routing method that is not routeByDevice, the NorthStar Controller computes and
provisions the path as a strict explicit route when provisioning the LSP. The LSP’s
existing explicit route might be modified to a NorthStar-computed strict explicit route.
For example, a loose explicit route specified by the user or learned from the router
would be modified to a strict explicit route.
NOTE: NorthStar saves the computed strict explicit route with Preferred
path selection. This allows NorthStar, when it needs to re-compute the
LSP path, to try to follow the strict explicit path, while still enabling it to
compute an alternate path if the strict explicit path is no longer valid.
When an LSP is externally controlled, the controller manages the following LSP attributes:
• Bandwidth
• LSP metric
• ERO
Any configuration changes to the preceding attributes performed from the router are
overridden by the values configured from the controller. Changes made to these attributes
from the PCC do not take effect as long as the LSP is externally controlled. Any
configuration changes made from the PCC take effect only when the LSP becomes locally
or router controlled.
In both standalone and high availability (HA) cluster configurations, whenever a PCEP
session goes down on a PCC, all the LSPs that originated from that PCC are removed
from NorthStar except those with design parameters saved in NorthStar Controller.
Examples of LSPs with design parameters include:
• PCE-initiated LSPs
• PCC-delegated LSPs with LSP attributes such as path, that have been modified by
NorthStar
• Behavior of Delegated LSPs That Are Returned to Local PCC Control on page 103
• Modifying Attributes of Delegated LSPs on the NorthStar Controller on page 104
admin-group Results in an MBB. The new LSP is reported; the old LSP is reported with the R-bit set.
auto-bandwidth PCC automatically adjusts bandwidth based on the traffic on the tunnel. Supported on Juniper
Networks routers only.
bandwidth Results in an MBB. The new LSP is reported; the old LSP is reported with the R-bit set.
bandwidth ct0 Results in an MBB. The new LSP is reported; the old LSP is reported with the R-bit set.
disable LSP is deleted on the router. The PCRpt message is sent with R-bit.
fast-reroute Results in detour path setup; the detours are not reported to the controller.
from LSP name change results in a new LSP being signaled, and the old LSP is deleted. The new
LSP is reported through PCRpt message with D-bit. The old LSP is removed.
install The prefix is applied locally and is not reflected to the PCE.
metric Results in an MBB. The new LSP is reported, and the old LSP is reported with the R-bit set.
name LSP name change results in a new LSP being signaled, and the old LSP is deleted. The new
LSP is reported through PCRpt message with D-bit. The old LSP is removed.
node-link-protection No change is reported from PCE. The LSP is brought down and then brought back up again.
This sequence does not use an MBB.
priority Results in an MBB. The new LSP is reported; the old LSP is reported with the R-bit set.
standby Implementation of stateful path protection draft along with association object; see section
5.2.
to LSP name change results in a new LSP being signaled, and the old LSP is deleted.
• ERO—Modifying this attribute results in an MBB operation. The new LSP state is
reported, and the old state is deleted.
Provision LSPs
LSPs can be provisioned using either PCEP or NETCONF. Whether provisioned using
PCEP or NETCONF, LSPs can be learned via PCEP or by way of device collection. If learned
by way of device collection, then the NorthStar Controller requires periodic device
collection to learn about LSPs and other updates to the network. See “Scheduling Device
Collection for Analytics” on page 285 for more information. Once you have created device
collection tasks, NorthStar Controller should be able to discover LSPs provisioned via
NETCONF. Unlike PCEP, the NorthStar Controller with NETCONF supports logical systems.
For more information about managing logical nodes, see Considerations When Using
Logical Nodes later in this topic.
Provisioning LSPs
To provision an LSP, navigate to Applications>Provision LSP. The Provision LSP window
is displayed as shown in Figure 80 on page 105.
NOTE: For IOS-XR devices, before provisioning LSPs via NETCONF, you must
first run device collection. See “Scheduling Device Collection for Analytics”
on page 285 for instructions.
NOTE: You can also reach the Provision LSP window from the Tunnel tab of
the network information table by clicking Add at the bottom of the pane.
As shown in Figure 80 on page 105, the Provision LSP window has several tabs:
• Properties
• Path
• Advanced
• Design
• Scheduling
• User Properties
From any tab, you can click Preview Path at the bottom of the window to see the path
drawn on the topology map, and click Submit to complete the LSP provisioning. These
buttons become available as soon as Name, Node A, and Node Z have been specified.
Table 16 on page 106 describes the data entry fields in the Properties tab of the Provision
LSP window.
Field Description
Provisioning Method Use the drop-down menu to select PCEP or NETCONF. The default is NETCONF.
See “Templates for Netconf Provisioning” on page 144 for information about using customized
provisioning templates to support non-Juniper devices.
NOTE: For IOS-XR routers, NorthStar LSP NETCONF-based provisioning has the same capabilities
as NorthStar PCEP-based provisioning.
Name A user-defined name for the tunnel. Only alphanumeric characters, hyphens, and underscores are
allowed. Other special characters and spaces are not allowed. Required for primary LSPs, but not
available for secondary or standby LSPs.
If you are creating multiple parallel LSPs that will share the same Design parameters, the Name you
specify here is used as the base for the automatic naming of those LSPs. See the Count and Delimiter
fields on the Advanced tab for more information.
Node A Required. The name or IP address of the ingress node. Select from the drop-down list. You can start
typing in the field to narrow the selection to nodes that begin with the text you typed.
Node Z Required. The name or IP address of the egress node. Select from the drop-down list. You can start
typing in the field to narrow the selection to nodes that begin with the text you typed.
IP Z IP address of Node Z.
Provisioning Type Use the drop-down menu to select RSVP or SR (segment routing).
Path Type Use the drop-down menu to select primary, secondary, or standby as the path type.
LSP Name Required and only available if the Path Type is set to secondary or standby. Identifies
the LSP for which the current LSP is the secondary (or standby).
Path Name Name for the path. Required and only available for primary LSPs if the provisioning type is set to
RSVP, and for all secondary and standby LSPs.
Planned Bandwidth Required. Bandwidth immediately followed by units (no space in between). Valid units are:
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
Setup Required. RSVP setup priority for the tunnel traffic. Priority levels range from 0 (highest priority)
through 7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Hold Required. RSVP hold priority for the tunnel traffic. Priority levels range from 0 (highest priority)
through 7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Planned Metric Static tunnel metric. Type a value or use the up and down arrows to increment or decrement by 10.
The Path tab includes the fields shown in Figure 81 on page 107 and described in
Table 17 on page 107.
Field Description
Hop 1 Only available if your initial selection is either required or preferred. Enter the first hop and specify
whether it is strict or loose. To add an additional hop, click the + button.
The Advanced tab includes the fields shown in Figure 82 on page 108 and described in
Table 18 on page 109.
Field Description
Count Enables creation of multiple parallel LSPs between two endpoints. These LSPs share the same
design parameters as specified in the Provision LSP window Design tab.
Use the up and down arrows to select the number of parallel LSPs to be created.
NOTE: Creating parallel LSPs in this manner is different from using Provision Multiple LSPs where
the Design parameters are configured separately for each LSP created.
Delimiter Used in the automatic naming of parallel LSPs that share the same design parameters. NorthStar
names the LSPs using the Name you enter in the Properties tab and appends the delimiter value
plus a unique numerical value beginning with 1 (myLSP_1, myLSP_2, for example).
This field is only available when the Count value is greater than 1.
Bandwidth Sizing If set to yes, the LSP is included in periodic re-computation of planned bandwidth based on
aggregated LSP traffic statistics.
Adjustment Threshold (%) This setting controls the sensitivity of the automatic bandwidth adjustment. The new planned
bandwidth is only considered if it differs from the existing bandwidth by the value of this setting
or more. For example, with a threshold of 10% and a planned bandwidth of 100M, a newly
computed value is considered only if it differs from 100M by 10M or more.
Only available (and then required) if Bandwidth Sizing is set to yes. The default value is 10%.
NOTE: Bandwidth sizing is supported only for PCE-initiated and PCC-delegated LSPs. Although
nothing will prevent you from applying this attribute to a PCC-controlled LSP, it would have no
effect.
Minimum Bandwidth Minimum planned bandwidth immediately followed by units (no space in between). Valid units
are:
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
This value is only available (and then required) if Bandwidth Sizing is set to yes. The default value
is 0.
NOTE: Bandwidth sizing is supported only for PCE-initiated and PCC-delegated LSPs.
Maximum Bandwidth Maximum planned bandwidth immediately followed by units (no space in between). Bandwidth
sizing can be done up to this maximum.
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
This value is only available if Bandwidth Sizing is set to yes. There is no default value.
NOTE: Bandwidth sizing is supported only for PCE-initiated and PCC-delegated LSPs. Although
nothing will prevent you from applying this attribute to a PCC-controlled LSP, it would have no
effect.
Min Variation Threshold Modifies the sensitivity of the automatic bandwidth adjustment.
This value is only available (and then required) if Bandwidth Sizing is set to yes. The default value
is zero.
Coloring Include All Double click in this field to display the Modify Coloring Include All window. Select the appropriate
check boxes. Click OK when finished.
Coloring Include Any Double click in this field to display the Modify Coloring Include Any window. Select the appropriate
check boxes. Click OK when finished.
Coloring Exclude Double click in this field to display the Modify Coloring Exclude window. Select the appropriate
check boxes. Click OK when finished.
Symmetric Pair Group When there are two tunnels with the same end nodes but in opposite directions, the path routing
uses the same set of links. For example, suppose Tunnel1 source to destination is NodeA to NodeZ,
and Tunnel2 source to destination is NodeZ to NodeA. Selecting Tunnel1-Tunnel2 as a symmetric
pair group places both tunnels along the same set of links. Tunnels in the same group are paired
based on the source and destination node.
Create Symmetric Pair Select the check box to create a symmetric pair.
Diversity Group Name of a group of tunnels to which this tunnel belongs, and for which diverse paths are desired.
Diversity Level Use the drop-down menu to select the level of diversity as default, site, link, or SRLG.
Route on Protected IP Link Select the check box if you want the route to use protected IP links as much as possible.
Binding SID Only available if the Provisioning Method is set to NETCONF and the Provisioning Type is set to
SR. Numerical binding SID label value. See “Segment Routing” on page 172 for more information.
Color Community Color assignment for the SR LSP. Only available if the Provisioning Method is set to NETCONF
and the Provisioning Type is set to SR.
Use Penultimate Hop as Signaling Address For All Traffic/For Color Community X When selected, the
PCS uses the penultimate hop as the signaling address for EPE. Only available if the Provisioning Type
is set to SR. If no color community is specified, the setting applies to all traffic. If a color community
is specified, the setting applies to traffic in that color community.
The Design tab includes the fields shown in Figure 83 on page 111 and described in
Table 19 on page 111.
Field Description
Routing Method Use the drop-down menu to select a routing method. Available options include default
(NorthStar computes the path), adminWeight, delay, constant, distance, ISIS, OSPF, and
routeByDevice (router computes part of the path).
Max Delay Type a value or use the up and down arrows to increment or decrement by 100.
Max Hop Type a value or use the up and down arrows to increment or decrement by 1.
Max Cost Type a value or use the up and down arrows to increment or decrement by 100.
High Delay Threshold Type a value or use the up and down arrows to increment or decrement by 100.
Low Delay Threshold Type a value or use the up and down arrows to increment or decrement by 100.
High Delay Metric Type a value or use the up and down arrows to increment or decrement by 100.
Low Delay Metric Type a value or use the up and down arrows to increment or decrement by 100.
When provisioning via PCEP, the NorthStar Controller’s default behavior is to compute
the path to be used when provisioning the LSP. Alternatively, you can select the
routeByDevice routing method in the Design tab, in which the router controls part of the
routing. This alternate routing method is meaningful only for certain types of LSP. To
provision an LSP using routeByDevice:
1. On the Design tab, select routeByDevice from the Routing Method drop-down menu.
2. On the Path tab, select dynamic from the Selection drop-down menu.
The LSP is then set up to be provisioned with the specified attributes, and no explicit
path.
The Scheduling tab relates to bandwidth calendaring. By default, tunnel creation is not
scheduled, which means that tunnels are provisioned immediately upon submission.
Click the Scheduling tab in the Provision LSP window to access the fields for setting up
the date/time interval. Figure 84 on page 113 shows the Scheduling tab of the Provision
LSP window.
Select Once to select start and end parameters for a single event. Select Daily to select
start and end parameters for a recurring daily event. Click the calendar icon beside the
fields to select the start and end dates, and beginning and ending times.
In the User Properties tab shown in Figure 85 on page 114, you can add provisioning
properties not directly supported by the NorthStar UI. For example, you cannot specify
a hop-limit in the Properties tab when you provision an LSP. However, you can add
hop-limit as a user property in the User Properties tab.
The following steps describe how to utilize User Properties for LSP provisioning:
1. Access the NETCONF template file that is used for adding new LSPs
(lsp-add-junos.hjson), located in the /opt/northstar/netconfd/templates/ directory.
2. At the edit > protocols > mpls > label-switched-path hierarchy level, add the
statements needed to provision with the property you are adding. For example, to
provision with a hop-limit of 7, you would add the hop-limit conditional lines shown below:
protocols {
mpls {
label-switched-path {{ request.name }} {
to {{ request.to }};
{{ macros.ifexists('from', request.from) -}}
{% if request['user-properties'] %}
{% if request['user-properties']['hop-limit'] %}
hop-limit {{ request['user-properties']['hop-limit'] }};
{% endif %}
{% endif %}
{{ macros.ifexistandnotzero('metric', request.metric) -}}
{{ macros.ifexists('p2mp', request['p2mp-name']) -}}
{% if request['lsp-path-name'] %}
.
.
.
With these statements added, if a hop-limit value is defined in the user properties of
the LSP, the hop-limit provisioning statement is included in the generated configuration.
You could also edit the template used for modifying LSPs (lsp-modify-junos.hjson) in
the same way.
4. Add the user property and corresponding value in the User Properties tab of the
Provision LSP window (see Figure 85 on page 114).
The resulting configuration for the LSP then includes the hop-limit statement:
label-switched-path test-user {
from 10.0.0.101;
to 10.0.0.104;
hop-limit 7;
primary test-user.p0 {
bandwidth 0;
priority 7 7;
}
}
Click Submit when you have finished populating fields in all of the tabs of the Provision
LSP window. The LSP is entered into the work order management process.
To modify an existing LSP, select the tunnel on the Tunnels tab in the network information
table and click Modify at the bottom of the table. The Modify LSP window is displayed,
which is very similar to the Provision LSP window.
If you modify an existing LSP via NETCONF, NorthStar Controller only generates the
configuration statements necessary to make the change, as opposed to re-generating
all the statements in the full LSP configuration as is required for PCEP.
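As a hypothetical illustration, changing only the planned bandwidth of an existing
NETCONF-provisioned LSP results in an incremental change similar to the following
(the LSP name and values are placeholders), rather than a regeneration of the full LSP
configuration:
[edit protocols mpls label-switched-path test-user]
-  bandwidth 10m;
+  bandwidth 20m;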
NOTE: After provisioning LSPs, if there is a PCEP flap, the UI display for RSVP
utilization and RSVP live utilization might be out of sync. You can display
those utilization metrics by navigating to Performance in the left pane of the
UI. This is a UI display issue only. The next live update from the network or
the next manual sync using Sync Network Model (Administration > System
Settings > Advanced Settings) corrects the UI display. In the System Settings
window, you toggle between General and Advanced Settings using the button
in the upper right corner of the window.
2. Click the Sync with Live Network button to create (or update) the physical and logical
devices list. The NorthStar BGP-LS session toward the Junos VM automatically
discovers both the physical and logical devices in the topology. However, there is no
automatic correlation between the two.
In the Topology view, navigate to the Node tab of the network information table to
confirm that the PCEP Status is UP for all the physical nodes as shown in
Figure 86 on page 116. Logical nodes are blank in the PCEP Status column because
there is no PCEP for logical nodes.
Figure 86: PCEP Status Column Showing Physical and Logical Nodes
3. In the Device Profile window, enable NETCONF for the physical devices (if not already
done).
Select one or more devices and click Modify to display the Modify Device window. On
the Access tab, click the check box for Enable Netconf. Click Modify in the lower right
corner of the window to complete the modification.
4. Select one or more devices in the device list and click Test Connectivity. In the Profile
Connectivity window, click Start. The test is complete when the green (pass) or red
(fail) status icons are displayed. Figure 87 on page 117 shows an example.
5. In Topology view, check the Node tab of the network information table to ensure that
the NETCONF status column now reports UP for physical devices.
6. Navigate to Administration > Task Scheduler and click Add to display the Create
New Task window. If you use the Selective Devices option, select only the physical
devices. For complete information about the Create new Task windows, see
“Scheduling Device Collection for Analytics” on page 285.
When this device collection task runs, it uses the Junos OS show configuration
command on each physical router to obtain both physical and logical node information
and reports it to NorthStar. This step allows NorthStar to correlate
each logical node to its corresponding physical node, which you can confirm by
examining the network information table, Node tab.
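For reference, logical systems appear under the logical-systems hierarchy of the
configuration retrieved by device collection. You can check for them directly on a router
(the hostname is illustrative):
user@pe1> show configuration logical-systems
If this command returns no output, no logical systems are configured on that device.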
NOTE: When you first install NorthStar, the device profile page is empty.
Use the Sync with Live Network button to update and synchronize with
the live network devices, and update the Node tab in the network
information table. The device collection task correlates the logical system
with its physical system and also updates LSP information for the logical
system since the logical system does not have a PCEP session to report
its LSP status.
The Node tab includes the following columns:
• Physical Hostname
• Physical Host IP
For a logical node, the hostname and IP address in those columns tell you which
physical node correlates to the logical node.
7. Provision LSPs.
Now that the logical nodes are in the NorthStar device list and they are correlated to
the correct physical nodes, you can create LSPs that incorporate logical nodes. You
do this using the same procedure as for LSPs using only physical nodes except that
the provisioning method MUST be specified as Netconf as shown in
Figure 89 on page 119.
8. Run your device collection task periodically to keep the logical node information
updated. There are no real time updates for logical devices.
When creating a route between two sites, you might not want to rely on a single LSP to
send traffic from one site to another. By creating a second LSP routing path between the
two sites, you can protect against failures and balance the network load.
On the Properties and Advanced tabs, the data entry fields specific to setting up diverse
LSPs are described in Table 20 on page 121. The remaining fields are the same as for
provisioning individual LSPs.
Field Description
Diversity Level Use the drop-down menu to select the level of diversity as default, site, link, or
SRLG.
Diversity Group Name of a group of tunnels to which this tunnel belongs, and for which diverse
paths are desired.
Symmetric Pair Group When there are two tunnels with the same end nodes but in opposite directions,
the path routing uses the same set of links. For example, suppose Tunnel1 source
to destination is NodeA to NodeZ, and Tunnel2 source to destination is NodeZ
to NodeA. Selecting Tunnel1-Tunnel2 as a symmetric pair group places both
tunnels along the same set of links. Tunnels in the same group are paired based
on the source and destination node.
Create Symmetric Pair Select the check box to create a symmetric pair.
By default, the tunnel creation is not scheduled, which means the tunnels are provisioned
immediately upon submission. Click the Scheduling tab to access scheduling options.
Select Once to enable the scheduler options for a single event. Select Daily to enable
the scheduler options for a recurring daily event. Click the calendar icon beside the fields
to select the start and end dates, and the beginning and ending times.
Click Preview Paths at the bottom of the window to see the paths drawn on the topology
map. Click Submit to complete the diverse LSP provisioning.
• If NorthStar Controller is not able to achieve the diversity level you request, it still
creates the diverse tunnel pair, using a diversity level as close as possible to the level
you requested.
• NorthStar Controller does not, by default, reroute a diverse LSP pair when there is a
network outage. Instead, use the Path Optimization feature (Applications > Path
Optimization). One option is to schedule path optimization to occur at regular intervals.
• When provisioning diverse LSPs, NorthStar might return an error if the value you entered
in the Modify Node window’s Site field contains special characters, depending on the
version of Node.js in use. We recommend using alphanumeric characters only. See
“Network Information Table Bottom Tool Bar” on page 83 for the location of the Site
field in the Modify Node window.
To provision multiple LSPs, navigate to Applications > Provision Multiple LSPs. The
Properties tab of the Provision Multiple LSPs window is displayed as shown in
Figure 92 on page 123.
Table 21 on page 123 describes the fields available in the Properties tab.
Field Description
ID Prefix You can enter a prefix to be applied to all of the tunnel names that are created. If left blank, this field
defaults to “PCE”.
Provisioning Method Required. Use the drop-down menu to select PCEP or NETCONF. The default is NETCONF.
See “Templates for Netconf Provisioning” on page 144 for information about using customized
provisioning templates to support non-Juniper devices.
NOTE: For IOS-XR routers, NorthStar LSP NETCONF-based provisioning has the same capabilities
as NorthStar PCEP-based provisioning.
Planned Bandwidth Required. Bandwidth immediately followed by units (no space in between). Valid units are:
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
Setup Required. RSVP setup priority for the tunnel traffic. Priority levels range from 0 (highest priority)
through 7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Count Required. Number of copies of the tunnels to create. The default is 1. For example, if you specify a
count of 2, two copies of each tunnel are created.
Provisioning Type Required. Use the drop-down menu to select RSVP or SR (segment routing).
Delimiter Required. Delimiter character used in the automatic naming of the LSPs.
Hold Required. RSVP hold priority for the tunnel traffic. Priority levels range from 0 (highest priority) through
7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Node A column Select the Node A nodes. If you select the same nodes for Node A and Node Z, a full mesh of tunnels
is created. See Table 22 on page 124 for selection method options.
Node Z column Select the Node Z nodes. If you select the same nodes for Node Z and Node A, a full mesh of tunnels
is created. See Table 22 on page 124 for selection method options.
Node Z Tag Select a tag from the drop down menu. Tags are set up in the Modify Node window, Addresses tab.
In the Addresses tab of the Modify Node window, you have the option to add destination IP addresses
in addition to the default IPv4 router ID address, and assign a descriptive tag to each. You can then
specify a tag as the destination IP address when provisioning an LSP.
Under the Node A and Node Z columns are several buttons to aid in selecting the tunnel
endpoints. Table 22 on page 124 describes how to use these buttons.
Button Function
(world) Select one or more nodes on the topology map, then click the globe button to add them to the
Node column.
(plus) Click the plus button to add all of the nodes in the topology map to the Node column.
(minus) Select a node in the Node column and click the minus button to remove it from the Node column.
Ctrl-click to select multiple nodes.
(copies) Click the right-arrow button on the Node Z side to add all of the nodes in the Node A column to
the Node Z column.
Table 23: Provision Multiple LSPs Window, Advanced Tab Fields
Field Description
Bandwidth Sizing If set to yes, the LSP is included in periodic re-computation of planned bandwidth based on
aggregated LSP traffic statistics.
NOTE: Bandwidth sizing is supported only for PCE-initiated and PCC-delegated LSPs. Although
nothing will prevent you from applying this attribute to a PCC-controlled LSP, it would have
no effect.
Coloring Include All Double click in this field to display the Modify Coloring Include All window. Select the
appropriate check boxes. Click OK when finished.
Coloring Include Any Double click in this field to display the Modify Coloring Include Any window. Select the
appropriate check boxes. Click OK when finished.
Coloring Exclude Double click in this field to display the Modify Coloring Exclude window. Select the appropriate
check boxes. Click OK when finished.
Diversity Group Name of a group of tunnels to which this tunnel belongs, and for which diverse paths are desired.
Diversity Level Use the drop-down menu to select the level of diversity as default, site, link, or SRLG.
The Design tab, shown in Figure 94 on page 126, allows you to use a drop-down menu to
select a routing method. Available options include default (NorthStar computes the
path), adminWeight, delay, constant, distance, ISIS, OSPF, and routeByDevice (router
computes part of the path).
In the Scheduling tab, select Once to select start and end parameters for a single event. Select Daily to select
start and end parameters for a recurring daily event. Click the calendar icon beside the
fields to select the start and end dates, and beginning and ending times.
In the User Properties tab, you can add provisioning properties not directly supported by
the NorthStar UI. For example, you cannot specify a hop-limit in the Properties tab when
you provision an LSP. However, you can add hop-limit as a user property in the User
Properties tab. This works the same way as it does when provisioning single LSPs.
Navigate to Applications > Configure LSP Delegation to reach the Configure LSP
Delegation window where you can select LSPs to either delegate to NorthStar Controller
or remove from delegation.
Click the check boxes for the desired LSPs on either the Add Delegation or Remove
Delegation tab. You can also Check All or Uncheck All. Then click Submit at the bottom
of the window.
When you add or remove delegation to/from the NorthStar Controller using this operation,
the delegation statement block is added or removed from the router configuration.
NOTE: This is not the same as the temporary removal you achieve when you
right-click a tunnel in the network information table and select Return
Delegation to PCC. In that case, control is temporarily returned back to the
PCC for a period of time based on the router’s timer statement.
Bandwidth Management
There are two methods for enabling NorthStar to control RSVP bandwidth reservations
without the support of proprietary PCEP extensions on the PCC. Using these methods,
NorthStar, not the PCC, makes bandwidth reservation decisions based on actual traffic.
These methods are possible because NorthStar analytics gathers (via periodic SNMP
polling or JTI telemetry streams) the traffic statistics necessary for NorthStar to make
path-related decisions. Both methods are vendor-agnostic.
NOTE: NorthStar does not support collection of SR-TE LSP statistics via
SNMP, and therefore cannot support automatic bandwidth sizing on SR-TE
LSPs where statistics are collected via SNMP.
Bandwidth Sizing
The following sections describe bandwidth sizing and how to use it:
At the end of each bandwidth adjustment period, NorthStar computes a new planned bandwidth for each bandwidth sizing-enabled LSP and determines, based on the new bandwidth requirements and the LSP bandwidth sizing parameters, whether it needs to provision the new planned bandwidth or not.
NorthStar supports bandwidth sizing for all PCE-initiated and PCC-delegated LSPs
for which it can obtain LSP statistics, either via Juniper Telemetry Interface (JTI), or
SNMP collection (scheduled via the Task Scheduler). This means that you must
enable/use NorthStar analytics, and confirm that NorthStar is receiving traffic from
the LSPs.
• Create and schedule a bandwidth sizing task in the Task Scheduler, as described later
in this topic.
The following comparison summarizes the differences between router-based auto-bandwidth and NorthStar bandwidth sizing:
Where configured: Auto-bandwidth is configured on the router (PCC) via a template; bandwidth sizing is configured on NorthStar (PCS) via the web UI or REST API.
Supported LSP types: Auto-bandwidth applies to PCC-delegated and PCC-controlled LSPs; bandwidth sizing applies to PCE-initiated and PCC-delegated LSPs.
Bandwidth computations and bandwidth change decisions: Done by the router (PCC) for auto-bandwidth; done by NorthStar (PCS) for bandwidth sizing.
Behavior if both are configured: Auto-bandwidth overwrites bandwidth sizing and vice versa.
For this reason, you should not have auto-bandwidth enabled for bandwidth sizing-enabled
LSPs.
NOTE: For PCE-initiated LSPs, this means you must ensure that the name of the LSP does
not match any configured label-switched path template that includes the auto-bandwidth
parameter.
For PCC-delegated LSPs, this means you must ensure that the auto-bandwidth parameter
is not configured on the router.
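One way to check this on the router is to look for the auto-bandwidth statement in the LSP configuration. The following is a sketch only; lsp-name is a placeholder for the LSP in question:
show configuration protocols mpls label-switched-path lsp-name | display set | match auto-bandwidth
If the command returns no output, auto-bandwidth is not configured for that LSP.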
Only bandwidth sizing-enabled LSPs are included in the re-computation of new planned
bandwidths. When you add or modify an LSP, you must set the Bandwidth Sizing (yes/no)
setting to yes to enable sizing. At the same time, you also set values for the following
parameters:
Adjustment Threshold: This setting controls the sensitivity of the automatic bandwidth adjustment. The new planned bandwidth is only considered if it differs from the existing bandwidth by the value of this setting (a percentage) or more.
Minimum and Maximum Planned Bandwidth: These settings bound the bandwidth that NorthStar signals:
• If the new planned bandwidth is greater than the maximum setting, NorthStar signals the LSP with the maximum bandwidth.
• If the new planned bandwidth is less than the minimum setting, NorthStar signals the LSP with the minimum bandwidth.
• If the new planned bandwidth falls in between the maximum and minimum settings, NorthStar signals the LSP with the new planned bandwidth.
Minimum Variation Threshold: This setting specifies the sensitivity of the automatic bandwidth adjustment when the new planned bandwidth is compared to the current planned bandwidth. The new planned bandwidth is only considered if the difference is greater than or equal to the value of this setting. Because this threshold is an absolute bandwidth value rather than a percentage, it can be used to prevent small fluctuations from triggering unnecessary bandwidth changes.
If both the adjustment threshold and the minimum variation threshold are greater than
zero, both settings are taken into consideration. In that case, the new planned bandwidth
is considered if:
• The percentage difference is greater than or equal to the adjustment threshold, and
• The absolute difference is greater than or equal to the minimum variation threshold.
For example, with an adjustment threshold of 10 percent and a minimum variation threshold of 5M, a change from 100M to 108M (8 percent, 8M) is ignored, while a change from 100M to 112M (12 percent, 12M) triggers an adjustment.
NOTE: These parameters are also described in the context of the Provision
LSP window.
The bandwidth sizing task periodically sends a new planned bandwidth for bandwidth
sizing-enabled LSPs to the NorthStar PCS. The PCS determines whether it needs to
provision the new planned bandwidth with a path that satisfies the new bandwidth
requirement.
To schedule a bandwidth sizing task, navigate to Administration > Task Scheduler from
the More Options menu.
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 96 on page 132.
Enter a name for the task, select Bandwidth Sizing from the Task Type drop-down
menu, and click Next.
2. Select an aggregation statistic. The aggregation statistic works together with the task execution recurrence interval
(the period of bandwidth adjustment) that you set up in the scheduling window.
NorthStar aggregates the LSP traffic for the interval based on the aggregation statistic
you select, and uses that information to calculate the new planned bandwidth. The
options in the Aggregation Statistic drop-down menu are described in
Table 25 on page 133.
80th, 90th, 95th, 99th Percentile Aggregation is based on the selected percentile.
Average For each interval, the samples within that interval are averaged. If there are N samples
for a particular interval, the result is the sum of all the sample values divided by N.
Max For each interval, the maximum of the sample values within that interval is used.
3. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 98 on page 134. You must schedule the task
to repeat at a specific interval from a minimum of 15 minutes to a maximum of one
day. The default interval is one hour.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History.
NOTE: You can have only one bandwidth sizing task per NorthStar server.
If you attempt to add a second, the system will prompt you to approve
overwriting the first one.
In the network information table (Tunnel tab), you can add optional columns related to
bandwidth sizing by hovering over any column heading and clicking the down arrow that
appears. Select Columns and click the check boxes to add columns for bandwidth sizing
parameters as shown in Figure 99 on page 135.
Once added, these columns display in the network information table the values of the
parameters you configured for the bandwidth sizing-enabled LSPs.
You can view an LSP’s statistics and bandwidth in graphical form by right-clicking an
LSP on the Tunnel tab of the network information table and selecting View LSP Traffic.
An example of the display is shown in Figure 100 on page 135.
This example shows the actual LSP traffic (blue line) as well as the signaled (configured)
bandwidth (green line). The hide bandwidth/show bandwidth button allows you to
toggle back and forth between including and not including the bandwidth in the display.
The following log files contain information related to bandwidth sizing:
• bandwidth_sizing.log
• pcs.log
Zero Bandwidth Signaling
Zero bandwidth signaling is an option related to bandwidth sizing, and it affects all PCE-initiated and PCC-delegated LSPs, regardless
of whether they are bandwidth sizing-enabled or not.
When zero bandwidth signaling is enabled and NorthStar is receiving traffic statistics for
bandwidth sizing-enabled LSPs, NorthStar does the following at the end of the bandwidth
adjustment period:
• Updates the RSVP link utilization based on the new planned bandwidth and the new
path.
• Provisions the new path with zero bandwidth as opposed to provisioning with the new
planned bandwidth.
Container LSPs
The following sections describe container LSPs and how to use them:
A container LSP is a logical grouping of sub-LSPs that share the properties defined in the
container. Container LSPs provide automatic adding or removing of sub-LSPs based on
traffic statistics. This mitigates the difficulty of finding a single path large enough to
accommodate a large bandwidth reservation. Using container LSPs involves:
• Creating a container LSP from the network information table (Container LSP tab).
• Creating a container normalization task using the Task Scheduler. During normalization,
NorthStar calculates the number of sub-LSPs needed and if possible, provisions them.
• Viewing container LSPs, as well as their sub-LSPs and traffic in the network information
table.
Container LSPs are different from TE++ LSPs in ways that are important to understand.
TE++ can only be configured on the router. NorthStar supports TE++ by responding to
instructions from the router regarding the creation and deletion of sub-LSPs and the
associated redistribution of bandwidth across the sub-LSPs. With container LSPs,
NorthStar is doing the bandwidth computations and decision-making. Table 26 on page 137
summarizes the differences between TE++ and container LSPs.
Where configured: TE++ is configured on the router (PCC) via a template; container LSPs are configured on NorthStar (PCS) via the web UI or REST API.
Supported LSP types: TE++ applies to PCC-controlled LSPs; container LSPs apply to PCC-delegated LSPs.
Triggers for normalization to occur: For TE++, normalization is triggered on a per-LSP basis by either a periodic timer or bandwidth thresholds being reached; for container LSPs, one centralized normalization schedule applies to all container LSPs.
Bandwidth computations and bandwidth change decisions: Done by the router (PCC) for TE++; done by NorthStar (PCS) for container LSPs.
Can both be configured simultaneously? We do not recommend allowing both the PCC and NorthStar to attempt normalization at the same time.
See “NorthStar Controller Features Overview” on page 6 for more information about
TE++ LSPs.
To create a container LSP, start in the network information table. On the tabs bar, click
the plus sign (+) and select Container LSP from the drop-down menu as shown in
Figure 101 on page 138.
NOTE: When you launch the web UI, only the Node, Link, and Tunnel tabs
are displayed by default; Container LSP is one of the tabs you can optionally
display.
Click Add at the bottom of the table to open the Add Container window.
The fields specific to container LSPs are described in Table 27 on page 139. The remaining
fields are the same as for creating regular LSPs.
Field Description
Name The name you assign to the container LSP is used as the base
for automatic naming of the sub-LSPs that are created.
Sub-LSP Count (Minimum-Maximum) Required. Minimum and maximum number of sub-LSPs that
can be created in the container LSP. The default is 1-6.
Sub-LSP Bandwidth (Minimum-Maximum) Minimum and Maximum bandwidth that can be signaled for
the sub-LSPs during normalization or initialization, immediately
followed by units (no space in between). Valid units are:
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
NOTE: On the Advanced tab, you can opt to enable bandwidth sizing for a
container LSP by selecting Bandwidth Sizing = yes and supplying values for
the bandwidth sizing parameters. During normalization, NorthStar signals
the sub-LSPs with equally divided container LSP aggregated bandwidth.
However, the PCC might not forward traffic equally among the sub-LSPs. By
also enabling bandwidth sizing for the container LSP, the sub-LSPs can be
individually adjusted based on the actual traffic going over them.
Use the Task Scheduler to enable periodic container LSP normalization. The container
normalization task computes aggregated bandwidth for each container LSP and sends
it to the NorthStar PCS. The PCS determines whether it needs to add or remove sub-LSPs
belonging to the container LSP, based on the container’s new aggregated bandwidth.
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 103 on page 140.
Enter a name for the task, select Container Normalization from the Task Type
drop-down menu, and click Next.
2. Select an aggregation statistic. The aggregation statistic works together with the task execution recurrence interval
that you will set up in the scheduling window, the same as it does for bandwidth sizing.
3. Click Next to proceed to the scheduling parameters which work just the same as for
bandwidth sizing.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History.
NOTE: You can have only one container normalization task per NorthStar
server. If you attempt to add a second, the system will prompt you to
approve overwriting the first one.
The Container LSP tab is shown in Figure 105 on page 141. You can add columns and filter
the display in the usual ways. See “Sorting and Filtering Options in the Network Information
Table” on page 81 for more information.
Right-click a row in the Container LSP tab to select either View Sub LSPs or View Traffic.
Each of these options opens a new tab in the network information table displaying the
requested information. Figure 106 on page 142 shows the right-click options in the Container
LSP tab.
When you select View Sub LSPs, a new tab in the network information table opens
displaying the sub-LSPs and their parameters. In the list of sub-LSPs, you have all the
display options normally available on the Tunnel tab. See “Network Information Table
Overview” on page 79 for more information. Figure 107 on page 142 shows an example of
a sub-LSPs tab in the network information table.
NOTE: The sub-LSP tab in the network information table is for display
purposes only; you cannot perform Add, Modify, or Delete functions from
there.
The sub-LSPs are also displayed in the Tunnel tab. The Container column (optionally
displayed) identifies them as belonging to a container LSP. Figure 108 on page 142 shows
sub-LSPs in the Tunnel tab.
When you right-click a row in the Container LSP tab and select View Traffic, a new tab
opens in the network information table displaying the traffic for the container
LSP. Figure 109 on page 143 shows an example of the View Traffic tab.
The following log files contain information related to container LSPs:
• container_lsp.log
• pcs.log
• LSPs provisioned via NETCONF that are not delegated to the controller require a config
commit to modify LSP attributes. Currently, NorthStar doesn’t perform such changes
without user approval and, therefore, managing these kinds of LSPs is not supported.
Whenever NorthStar adds support for automatic modification of
NETCONF/PCC-controlled LSPs, this feature will be re-qualified for that scenario.
For more information about configuring the router for data collection, see Configuring
Routers to Send JTI Telemetry Data and RPM Statistics to the Data Collectors in the
NorthStar Controller Getting Started Guide.
NorthStar Controller supports NETCONF provisioning for Juniper devices and Cisco
IOS-XR devices. You can customize provisioning templates by modifying the templates
provided in the /opt/northstar/netconfd/templates/ directory on the NorthStar server,
or by creating new, customized templates.
The syntax and semantics used in the template attributes are based on Jinja Templates,
a template engine for Python. Help/support for using Jinja Templates is readily available
online.
• LSP Provisioning: make use of provisioning properties not directly supported by the
NorthStar UI.
For example, you cannot specify a hop-limit in the Properties tab in the Provision LSP
window. However, you can add hop-limit in the User Properties tab of the Provision
LSP or Modify LSP window and then modify the appropriate provisioning template
accordingly.
When an LSP is created, it can be tagged with user properties that, when also defined
in the Jinja template, cause the corresponding service mapping statement to be
generated in the router configuration.
NOTE: The CCC service must already exist in the network before you
perform this type of service mapping.
NOTE: An MVPN routing instance must already exist before you perform
this type of service mapping.
4. Provision or modify the LSP using the web UI, and include the user properties and their
values in the User Properties tab of the Provision LSP or Modify LSP window.
• Encoding templates are for internal use only and should never be modified or deleted.
All of these templates have “encoding” in their names (lsp-modify-encoding.hjson,
for example).
• Configuration templates are for transforming JSON document keys into device
configuration statements. These templates are available for modification and to use
as models for creating new templates. Currently, these templates all have “junos” in
their names (lsp-modify-junos.hjson, for example), although, as long as you use the
.hjson suffix, you can name new templates according to your preference.
Template Requirements
Keep in mind the following template requirements:
• If you create a new template, be sure the PCS user has Unix file permission to read it (see the example following this list).
• Template files are hjson documents, so their file names must have the .hjson suffix.
• After you create or modify a template, restart the netconfd process (for example, using supervisorctl restart netconf:netconfd) so that the change takes effect. The restart output looks like this:
netconf:netconfd: stopped
netconf:netconfd: started
• Text format is supported for device configuration statements. XML format is supported
for modifying Cisco IOS XR devices.
• When you upgrade a NorthStar build, the templates provided in the new build replace
the ones that were provided with the original build. You can prevent loss of your
template changes by backing up your templates to a different directory on the server
before upgrading NorthStar, or by saving your modified files with different file names.
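For example, a new custom template might be created on the NorthStar server by copying one of the provided junos templates and making sure the file is readable. This is a sketch only; the custom file name lsp-modify-custom.hjson is hypothetical:
[root@ns]# cd /opt/northstar/netconfd/templates/
[root@ns]# cp lsp-modify-junos.hjson lsp-modify-custom.hjson
[root@ns]# chmod 644 lsp-modify-custom.hjson
The copied file can then be edited to match the configuration syntax of the target device.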
Template Structure
Each template has the following types of attributes:
• Routing-key attributes which describe the type of provisioning for which the template
should be used. The value of routing-key is not fixed in NETCONFD, but the following
keys are currently agreed upon between NETCONFD and ConfigServer for LSP
provisioning:
• rest_eventd_request_key
• rest_eventd_update_key
• rest_eventd_delete_key
• Device profile attributes that define the device to be provisioned when using the
template.
You can use any device profile attributes (Administration > Device Profile) such as
routerType (Vendor field in Device Profile), model, and so on. NETCONFD tries to match
the attributes in the template with the attributes in the device profiles of the targeted
devices.
• User properties attributes that define such things as service mapping attributes.
User properties is a generic mechanism that allows you to “tag” LSPs with additional
properties. One use of user properties is to tag an LSP with the vpn-name, source-ip,
and group-ip that are related to the associated MVPN (for service mapping).
In the Jinja template, when those user properties are defined, a corresponding set of
statements (related to service mapping) are also generated. The support in the REST
body and the web UI is the same. In the REST body, you include the user properties
under “userParameters”, while in the web UI, you include them in the User Properties
tab of the Provision (or Modify) LSP window.
Table 28 on page 147, Table 29 on page 148, and Table 30 on page 148 detail the supported
JSON document keys for adding LSPs, modifying LSPs, deleting LSPs, and link
modification.
NOTE: Keys that do not “always exist” only exist conditionally. For example, keys marked “no for modifying” in these tables are not always present when modifying an LSP.
Template Macros
Jinja Templates support macros for defining reusable functions. The NorthStar template
directory includes the macros listed in Table 31 on page 149.
Macro Function
ifexist Generates a Junos configuration statement if the evaluated key in the JSON document exists.
ifnotzero Generates a Junos configuration statement if the evaluated key in the JSON document has a
value that is not equal to zero.
ifnotnone Generates a Junos configuration statement if the evaluated key in the JSON document has
any value.
In the following Jinja template snippet, the statement related to service mapping of the
LSP to the CCC-VPN is provisioned with the LSP if the LSP has associated with it the
“ccc-vpn-name” user property.
The following snippet shows how a template generates segment routing (SPRING) configuration statements when the LSP path-setup-type is segment:
{% if request['path-setup-type'] == "segment" %}
protocols {
    source-packet-routing {
        delete: segment-list {{request.name}};
        delete: source-routing-path {{request.name}}/{{request.name}};
        segment-list {{request.name}} {
            {% for segment in request['path-attributes']['sr-ero'] %}
            {% if segment['remote-ipv4-address'] %}
            segment{{loop.index}} label {{segment.sid}} ip-address {{segment['remote-ipv4-address']}};
            {% else %}
            segment{{loop.index}} label {{segment.sid}};
            {% endif %}
            {% endfor %}
        }
        source-routing-path {{request.name}}/{{request.name}} {
            to {{request.to}};
            {{ macros.ifexistandnotzero('metric', request.metric) -}}
            {{ macros.ifexistandnotzero('binding-sid', request['path-attributes']['binding-sid']) -}}
            {{ request.type }} {
                {{request.name}};
            }
        }
    }
}
{% endif %}
In the NorthStar Controller, you can provision P2MP groups; view and modify group
attributes; and view, add, modify, or delete sub-LSPs. This is a separate workflow from
provisioning P2P LSPs, and is initiated from the P2MP Group tab in the network information
table.
NorthStar supports two provisioning methods for P2MP groups: NETCONF and PCEP.
PCEP provisioning offers the advantage of real-time reporting. Functionality and support
for the two provisioning methods are not identical; differences are noted in this
documentation. IMPORTANT: See the release notes for Junos OS release requirements
related to PCEP provisioning.
NOTE: In Junos OS Release 15.1F6 and later, you can enable the router to
send P2MP LSP information to a controller (like the NorthStar Controller) in
real time, automatically. Without that configuration, you must run device
collection for NorthStar to learn about newly provisioned P2MP LSPs.
In the Junos OS, the configuration is done in the [set protocols pcep] hierarchy
for PCEs and for PCE groups. The following configuration statement allows
PCEP to report the status of P2MP trees in real time, whether provisioned by
NETCONF or by PCEP:
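The exact statement can vary by Junos OS release; as a sketch, it typically takes a form like the following, where northstar-pce is a placeholder for the configured PCE name (verify the statement name and hierarchy against the documentation for your release):
set protocols pcep pce northstar-pce p2mp-lsp-report-capability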
NOTE: After provisioning P2MP LSPs, if there is a PCEP flap, the UI display
for RSVP utilization and RSVP live utilization might be out of sync. This is
also true for P2P LSPs. You can display utilization metrics by navigating to
Performance in the left pane of the UI. This is a UI display issue only. The next
live update from the network or the next manual sync using Sync Network
Model (Administration > System Settings > Advanced Settings) corrects the
UI display. In the System Settings window, you toggle between General and
Advanced Settings using the button in the upper right corner of the window.
The following sections describe viewing, provisioning, and managing P2MP groups in the
NorthStar Controller.
1. On the tabs bar of the network information table, click the plus sign (+) and select
P2MP Group from the drop-down menu as shown in Figure 110 on page 152.
NOTE: When you launch the web UI, only the Node, Link, and Tunnel tabs
are displayed by default; P2MP Group is one of the tabs you can optionally
display.
2. The P2MP Group tab is added to the tab bar and the contents are displayed as shown
in Figure 111 on page 153.
Columns for group attributes are shown across the top. You can add columns and
filter the display in the usual ways. See “Sorting and Filtering Options in the Network
Information Table” on page 81 for more information.
3. Click a row in the table to highlight the path in the topology map.
4. Right-click a row in the table to display either a graphical tree view of the group, or a
list of the sub-LSPs that make up the group. Figure 112 on page 153 shows these options.
The tree diagram opens as a separate pop-up as shown in Figure 113 on page 154.
When you select to view sub-LSPs, the sub-LSPs that make up the group are displayed
in a new tab in the network information table. On the list of sub-LSPs, you have all
the display options normally available on the Tunnel tab. See “Network Information
Table Overview” on page 79 for more information.
NOTE: The sub-LSP tab in the network information table is for display
purposes only; you cannot perform Add, Modify, or Delete functions from
there. But the sub-LSPs are also displayed in the Tunnel tab, where you
can perform those actions.
In the P2MP Group tab of the network information table, the Control Type column displays
Device Controlled for NETCONF-provisioned groups and PCEInitiated for
PCEP-provisioned groups.
Table 32 on page 155 describes the data entry fields in the Properties tab of the Add P2MP
Group window.
Field Description
P2MP Name Required. A user-defined name for the P2MP group. Only alphanumeric characters, hyphens, and
underscores are allowed. Other special characters and spaces are not allowed.
ID Prefix You can enter a prefix to be applied to all of the tunnel names that are created.
Bandwidth Required. Planned bandwidth immediately followed by units (no space in between). Valid units are:
• B or b (bps)
• M or m (Mbps)
• K or k (Kbps)
• G or g (Gbps)
Provisioning Type The default is RSVP, which is the only option supported for P2MP groups. Even if you select SR, RSVP
is used.
Setup Required. RSVP setup priority for the tunnel traffic. Priority levels range from 0 (highest priority)
through 7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Hold Required. RSVP hold priority for the tunnel traffic. Priority levels range from 0 (highest priority) through
7 (lowest priority). The default is 7, which is the standard MPLS LSP definition in Junos OS.
Provisioning Method Use the drop-down menu to select PCEP or NETCONF. The default is NETCONF.
Node A Required. The name or IP address of the source node. Select from the drop-down list.
Node Z At least one is required. The names or IP addresses of the destination nodes. To select nodes from
the topology map, Shift-click the nodes on the map and then click the world button at the bottom
of the Node Z field. To add all nodes in the network, click the plus (+) button. To remove a node,
highlight it in the Node Z field and click the minus (-) button.
The Advanced tab includes the fields shown in Figure 115 on page 157 and described in
Table 33 on page 157.
Field Description
Bandwidth Sizing Controls whether bandwidth sizing is enabled for the P2MP group. Use the drop-down menu
to select yes or no. The default is no.
Coloring Include All Double click in this field to display the Modify Coloring Include All window. Select the
appropriate bits. Click OK when finished.
Coloring Include Any Double click in this field to display the Modify Coloring Include Any window. Select the
appropriate bits. Click OK when finished.
Coloring Exclude Double click in this field to display the Modify Coloring Exclude window. Select the
appropriate bits. Click OK when finished.
Diversity Group/Level Diverse P2MP is currently not supported via the web UI, so these fields are not used. Diverse
P2MP computation via REST API is currently available for NETCONF P2MP groups, but not
for PCEP P2MP groups.
The Design tab includes the Routing Method options shown in Figure 116 on page 158.
The Scheduling tab is identical to the one you use to provision P2P LSPs.
For P2MP, the User Properties tab is used for P2MP tree to MVPN service mapping (not
supported for PCEP-provisioned P2MP groups). See “Templates for Netconf Provisioning”
on page 144 for more information.
Once you are finished defining the group, click Submit. The group is added to the network
information table, on the P2MP Group tab.
NOTE:
• Naming of the sub-LSPs is automatic, based on the ID Prefix if provided,
and the A and Z node names.
To modify a P2MP group, select the group in the P2MP Group tab of the network
information table, and click Modify at the bottom of the table. The Modify P2MP Group
window is displayed as shown in Figure 117 on page 159.
Using the tabs on the Modify P2MP Group window, you can change the value of attributes
(affects all sub-LSPs in the group), add or remove destination nodes (which adds or
removes sub-LSPs), and set up or change scheduling for the group.
NOTE: There are actually two ways you can remove sub-LSPs from a group:
• In the Properties tab of the Modify P2MP Group window, select the
destination node(s) in the Node Z field and click the minus sign (-).
• In the Tunnel tab of the network information table, select the sub-LSP to
be removed and click Delete at the bottom of the table.
NOTE: The following six attributes must be the same for all sub-LSPs in a
P2MP group, and can therefore only be modified at the group level, using the
Modify P2MP Group window:
• Bandwidth
• Setup
• Hold
You can modify other attributes on the individual sub-LSP level (path or Max Hop, for
example). To modify sub-LSP attributes, select the tunnel in the Tunnel tab of the network
information table and click Modify at the bottom of the table. If you attempt to modify
one of the six group-level-only attributes at the sub-LSP level, an error message is
displayed when you click Submit and the change is not made.
NOTE: If the sub-LSPs tab in the network information table fails to update
after modifying group or sub-LSP attributes, you can close the sub-LSPs tab
and reopen it to refresh the display. There is also a refresh button at the
bottom of the table that turns orange when prompting you for a refresh.
When you click the refresh button, the web UI client retrieves the latest P2MP
sub-LSP status from the NorthStar server.
To delete a P2MP group, select the group on the P2MP Group tab of the network
information table and click Delete at the bottom of the table. Respond to the confirmation
message to complete the deletion.
Alternatively, you can use the Tunnel tab of the network information table to delete all
the sub-LSPs in the P2MP group, which also deletes the group itself.
Bandwidth Calendar
The Bandwidth Calendar opens in a new browser window or tab when you navigate to
Applications > Bandwidth Calendar. The calendar displays all scheduled LSPs on a
timeline, along with their properties, so you can see the total bandwidth requirements
for any given time. Figure 118 on page 161 shows an example bandwidth calendar.
NOTE: The bandwidth calendar timeline is empty until you schedule LSPs.
On the timeline, a red vertical line represents the current date and time, so you can easily
distinguish between past and future events. Zoom functions at the top of the window
allow you to select from the following:
1d—LSPs scheduled from the current date and time, plus 24 hours
7d—LSPs scheduled from the current date and time, plus 7 days
1m—LSPs scheduled from the current date and time, plus 1 month
3m—LSPs scheduled from the current date and time, plus 3 months
6m—LSPs scheduled from the current date and time, plus 6 months
1y—LSPs scheduled from the current date and time, plus 1 year
From a PCC router’s CLI, you can create LSP templates to define a set of LSP attributes
to apply to PCE-initiated LSPs. Any PCE-initiated LSPs that provide a name match with
the regular expression (regex) name specified in the template automatically inherit the
LSP attributes that are defined in the template. By associating LSPs (through regex name
matching) with a specific user-defined LSP template, you can automatically turn on (or
turn off) LSP attributes across all LSPs that provide a name match with the regex name
specified in the template.
The following configuration example shows how to define the regex-based LSP name
for a set of LSP “container” templates that you can deploy to apply specific attributes
to any LSPs on the network that provide a matching LSP name.
a. To specify that any PCE-initiated LSP that provides a name match with the prefix
PCE-LP-* will inherit the LSP link-protection attributes defined in the
LINK-PROTECT-TEMPLATE template, configure the corresponding statement from
the PCC router CLI (see the sketch following this procedure).
b. To specify that any PCE-initiated LSP that provides a name match with the prefix
PCE-AUTOBW-* will inherit the LSP auto-bandwidth attributes defined in the
AUTO-BW-TEMPLATE template, configure the corresponding statement from the
PCC router CLI.
2. Create the templates that define the attributes you want to apply to all PCE-initiated
LSPs that provide a name match.
4. Create LSPs in NorthStar by specifying LSP names based on the regex-based name
defined in Step 1 above.
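The following set commands are a sketch of what the statements referenced in this procedure can look like on a Junos OS PCC. The template and prefix names come from the steps above; the exact hierarchy should be verified against your Junos OS release, and the auto-bandwidth attribute shown (adjust-interval 300) is only a placeholder example:
set protocols mpls label-switched-path LINK-PROTECT-TEMPLATE template
set protocols mpls label-switched-path LINK-PROTECT-TEMPLATE link-protection
set protocols mpls lsp-external-controller pccd pce-controlled-lsp "PCE-LP-*" label-switched-path-template LINK-PROTECT-TEMPLATE
set protocols mpls label-switched-path AUTO-BW-TEMPLATE template
set protocols mpls label-switched-path AUTO-BW-TEMPLATE auto-bandwidth adjust-interval 300
set protocols mpls lsp-external-controller pccd pce-controlled-lsp "PCE-AUTOBW-*" label-switched-path-template AUTO-BW-TEMPLATE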
From the Path Computation Client (PCC) router’s command line interface, you can use
the Junos OS groups statement with label-switched path (LSP) templates to define a
set of LSP attributes to apply to PCE-initiated LSPs. Any PCE-initiated LSP that provides
a name match with the regular expression (regex) name that is specified in the template
automatically inherits the LSP attributes that are specified in the template. Thus, by
associating PCE-initiated LSPs with a user-defined LSP template, you can automatically
turn on (or turn off) LSP attributes across all LSPs that provide a name match with the
regex name that is specified in the template.
The following example show how you can use templates to apply auto-bandwidth and
link-protection attributes to LSPs. For example, when auto-bandwidth is enabled, LSP
auto-bandwidth parameters must be configured from the router, even when the LSP has
been delegated. Under no circumstances can the NorthStar Controller modify the
bandwidth of an externally controlled LSP when auto-bandwidth is enabled. A PCC
enforces this behavior by returning an error if it receives an LSP update for an LSP that
has auto-bandwidth enabled. Currently, there is no way to signal through PCEP when
auto-bandwidth is enabled, so the NorthStar Controller cannot know in advance that
the LSP has auto-bandwidth enabled. However, if auto-bandwidth is enabled by way of
a template, the NorthStar Controller knows that the LSP has auto-bandwidth enabled
and disallows modification of bandwidth.
To configure and apply groups to assign auto-bandwidth and link protection attributes
to label-switched paths:
1. From the PCC router CLI, configure groups to specify that any PCE-initiated LSP that
provides a name match with the specified prefix will inherit the LSP attributes defined
in the template (a configuration sketch follows this procedure):
a. Configure a group to specify that an LSP that provides a name match with the
prefix AUTO-BW-* will inherit the LSP auto-bandwidth attributes defined in the
AUTO-BW-TEMPLATE template.
b. Configure a group to specify that any LSP that provides a name match with the
prefix LINK-PROTECT-* will inherit the LSP link-protection attributes defined in
the LINK-PROTECT-TEMPLATE template.
2. Configure the templates to apply the attributes defined for the two groups in the
previous step.
4. Create LSPs from the NorthStar Controller by specifying LSP names based on the
regex-based name defined in Step 1.
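As a sketch of steps 1 and 2 combined, the configuration can look like the following. Here the groups themselves serve as the templates named in the procedure, the angle brackets provide Junos OS wildcard matching within a group, and the auto-bandwidth value is a placeholder only:
set groups AUTO-BW-TEMPLATE protocols mpls label-switched-path <AUTO-BW-*> auto-bandwidth adjust-interval 300
set groups LINK-PROTECT-TEMPLATE protocols mpls label-switched-path <LINK-PROTECT-*> link-protection
set protocols mpls apply-groups AUTO-BW-TEMPLATE
set protocols mpls apply-groups LINK-PROTECT-TEMPLATE
With a configuration along these lines, any PCE-initiated LSP whose name begins with AUTO-BW- or LINK-PROTECT- inherits the corresponding attributes once the group is applied.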
Path Optimization
For many large networks, when a tunnel is rerouted due to a network failure, the new
path remains in use even when the network failure is resolved. Over time, a suboptimal
set of paths might evolve in the network. The path analysis and optimization feature
re-establishes an optimal set of paths for a network by finding the optimal placement
of tunnels using the current set of nodes and links in the network. You can request path
analysis on demand, and path optimization either on demand or according to a schedule
that you define.
Analyze Now Analyzes the network for optimization opportunities, and generates a results report. Reviewing the
report gives you the opportunity to consider the effects of optimization before you actually execute
it.
NOTE: The path analysis and optimization reports do not contain any information about
PCC-controlled LSPs because NorthStar does not attempt to optimize them.
NOTE: The optimization is based on the current network, not on the most recent Analyze Now
report.
Settings Enables or disables an optimization schedule. For example, in Figure 120 on page 169, path
optimization would occur every 60 minutes.
In the lower left corner of the topology map pane, there is a color legend for the links
displayed in the map. The title of the legend and the units it represents (percent,
milliseconds, megabytes) correspond to the display option you select in the Performance
window in the left pane, shown in Figure 121 on page 170.
Click the legend to enlarge it and enable configuration as shown in Figure 122 on page 170.
Click the triangle icon in the upper right corner to open the color palette where you can
choose a color scheme. The color scheme options are designed to support any network
visualization goals, including a create-your-own-palette option (Custom).
Figure 123 on page 171 shows the color palette options.
Double click in one segment on the Custom palette to open the custom color window
where you can select a color for that segment. Figure 124 on page 171 shows the custom
color window.
Click OK to add the color to the palette. Double click another segment, and so on until
you have selected all five colors for the Custom palette. If you save a layout, the active
palette is saved with the layout, even if it is a custom palette.
The ranges represented in the color legend are configurable. Click and drag the slider
buttons between colors on the legend to change the ranges. The links in the topology
map change color accordingly. The max value option (gear icon) appears in the upper
right corner of the legend when your Performance selection (left pane) calls for units
other than a percentage. Click the gear icon to set the maximum value for the legend.
Sometimes links display as half one color and half another color. The presence of two
different colors indicates that the utilization in one direction (A to Z) is different from the
utilization in the other direction (Z to A). The half of the link originating from a certain
node is colored according to the link utilization in the direction from that node to the
other node. Figure 125 on page 172 shows two colors in one of the links between vmx104-11
and vmx105-11-p107.
Segment Routing
Junos OS Release 17.2R1 or later is required to utilize NorthStar Controller SPRING features.
However, NorthStar Controller does not report the correct record route object (RRO) in
the web UI and via the REST API when routers are configured with Junos OS Release
17.2R1. Instead of showing a list of link adjacency SIDs, the web UI and REST API report
a list of “zero” labels. This issue has been fixed in Junos OS Releases 17.2R1-S1 and 17.2R2,
and later releases.
• NorthStar diverse LSP and multiple LSP provisioning support segment routing. Select
SR from the Provisioning Type drop-down menu on the Provision Diverse LSP or
Provision Multiple LSPs window.
• SR LSPs can be configured via NorthStar using either PCEP (real-time push model)
or NETCONF (non-real-time pull model–LSP information is collected via periodic
NETCONF device collection).
See “Provision LSPs” on page 104 for full documentation of the Provision LSP window
tabs. The following sections describe provisioning SR LSPs using NorthStar and viewing
the SR LSP information in the NorthStar web UI.
Segment ID Labels
Adjacency segment ID (SID) labels (associated with links) and node SID labels
(associated with nodes) can be displayed on the topological map.
NOTE: You can use either BGP-LS peering or IGP adjacency from the JunosVM
to the network to acquire network topology. However, for SPRING information
to be properly learned by NorthStar when using BGP-LS, the network should
have RSVP enabled on the links and the TED database available in the
network.
You can display adjacency SID labels on the map. On the right side of the topology window
is a menu bar offering various topology settings. Click the Tools (gear-shaped) icon and
select the Elements tab. Under Links, click the check box for Show Label and select SID
A::Z from the corresponding drop-down menu. An example topology map showing
adjacency SID labels is shown in Figure 126 on page 174.
To view adjacency SID labels in the network information table, click the down arrow
beside any column heading under the Link tab, and click Columns to display the full list
of available columns. Click the check boxes beside SID A and SID Z.
When you display the detailed information for a specific link (by double clicking the link
in the map or in the network information table), you see an attribute folder for both endA
and endZ called SR. You can drill down to display attributes for each SID as shown in
Figure 127 on page 174. At present, only IPv4 SIDs are supported, and only one per interface.
Node SID labels are displayed a little differently because the value of the label depends
on the perspective of the node assigning it. A node might be given different node SID
labels based on the perspective of the assigning node. To display node SID labels on the
topology map, specify the perspective by right-clicking on a node and selecting Node
SIDs from selected node. The node SID labels are then assigned from the perspective
of that selected node.
For example, Figure 128 on page 175 shows a topology displaying the SID node labels from
the perspective of node vmx101. Note that the node SID label for node vmx106 is 1106.
If you right-click on node vmx104 and select Node SIDs from selected node, the node
SID labels on the topology change to reflect the perspective of node vmx104 as shown
in Figure 129 on page 176. Note that the node SID label for node vmx106 is now 4106.
The selected node does not display a node SID label for itself. Any other nodes in the
topology map that do not display a node SID label do not have the segment routing
protocol configured.
NOTE: Node SID information is not available in the network information table.
SR LSPs
SR LSPs can be created using both adjacency SID and node SID labels. An SR LSP is a
label stack that consists of a list of adjacency SID labels, node SID labels, or a mix of
both. To create an SR LSP:
1. Navigate to the Tunnel tab in the network information table and click Add at the
bottom of the table to display the Provision LSP window, Properties tab.
2. From the Provisioning Method drop-down menu, select either PCEP or NETCONF.
When SR LSPs are provisioned via NETCONF, they can be learned via either PCEP or NETCONF. In
Junos OS Release 18.2R1, PCEP reporting is limited. The alternative is to learn about
the details of the NETCONF-provisioned SR LSPs by way of Device Collection
configuration parsing in NorthStar. If you opt to use this method for SR LSP
provisioning, be aware that because the primary path details come from device
collection configuration parsing, updates are not provided to NorthStar in real time,
and NorthStar reports the operation status for these LSPs as Unknown.
5. For NETCONF SR LSP provisioning (not applicable to PCEP), you can also specify a
binding SID label value in the Binding SID field on the Advanced tab. See the Binding
SID section for more information.
6. On the Design tab, select the routing method from the drop-down menu, typically
either routeByDevice (router computes some of the path) or default (NorthStar
computes the path).
7. On the Path tab, you can specify any specific hops you want in the path, including
private forwarding adjacency links generated by the provisioning of binding SID SR
LSP pairs. See the Binding SID section for more information.
8. Click Submit. The provisioning request then enters the Work Order Management
process.
• For both PCEP and NETCONF provisioned SR LSPs, once the work order is activated,
the new path is highlighted in the topology map.
• For NETCONF provisioned SR LSPs, once the work order is activated, the
corresponding configuration statements appear in the router configuration file.
• The IP address and the SID are the two parts of the explicit route. The IP address part
is displayed in the ERO column in the network information table, Tunnel tab. The SID
part is displayed in the Record Route column.
• Double-click on the tunnel row in the network information table and drill down into
the liveProperties to see the details of the ERO.
On the router, you can use the following show commands:
• show spring-traffic-engineering lsp name lsp-name detail to display the LSP status
and SID labels.
• show route table inet.3 to display the mapping of traffic destinations with SPRING
LSPs.
Binding SID
When you provision a pair of binding SID SR LSPs (one going from A to Z and one for the
return path from Z to A), a private forwarding adjacency is automatically generated.
These adjacencies are named with a specific format, with three sections, separated by
colons. For example, binding:0110.0000.0105:privatefa57.
• The first section is the fixed string binding, followed by a colon.
• The center section is the name of the originating node, followed by a colon
(0110.0000.0105: in this example).
• The last section is the name you specified for the binding SID SR LSP in the Name field
on the Properties tab of the Provision LSP window (privatefa57 in this example). This
name must be the same for the binding SID SR LSPs in both directions, to ensure they
can be properly matched, creating the corresponding private forwarding adjacency
link.
In the topology map, you can opt to display private forwarding adjacency links or not. In
the left pane drop-down menu, select Types and then select or deselect the check box
for privateForwardingAdjacency under Link Types as shown in Figure 131 on page 179.
When selected, the adjacencies display as dotted lines on the topology map as shown
in Figure 132 on page 180.
You can tunnel a non-binding SID SR LSP over a binding SID SR LSP, thereby reducing
the number of labels in the label stack (private forwarding adjacency labels can represent
multiple hops in the path). An example is shown in Figure 133 on page 181.
NOTE: Tunneling a binding SID SR LSP over another binding SID SR LSP is
not supported.
In this display, you can see the logical path (traced in amber) of the SR LSP as it goes
from vmx101 to vmx105, to vmx107 by way of a private forwarding adjacency link, and
finally to vmx103. You can also see (traced in pink) the path of the private forwarding
adjacency link of the binding SID SR LSP. The Record Route column in the network
information table shows a label stack with three labels. The second label of the three
is the private forwarding adjacency link. Without that adjacency link, the label stack would
have required six labels to define the same path.
NOTE: Path highlighting for an SR LSP in a network that has two adjacency
SIDs per interface is not supported.
To provision a pair of binding SID SR LSPs, use the procedure for NETCONF SR LSP
provisioning, plus:
1. On the Provision LSP window Advanced tab, populate the Binding SID field with a
numerical binding SID label value of your choice from the static label range of 1000000
to 1048575. This value then becomes the label that represents the path defined by
the hops you specify on the Path tab (the hops that make up the private forwarding
adjacency link).
NOTE: At this time, NorthStar does not support binding SID label allocation
or collision detection. Note that Junos OS has built-in collision detection,
so that if the binding SID label specified is outside the allowed range of
1000000 to 1048575, the router does not allow the configuration to
commit. Correspondingly, the Controller Status in the Tunnel tab of the
network information table shows the usual indication of
FAILED(NS_ERR_INVALID_CONFIG).
2. On the Design tab, select the routing method, default for example.
4. Provision a second binding SID SR LSP in the opposite direction, using the same LSP
name as the first LSP in the pair. The binding SID label value can also be the same as
in the first LSP in the pair, but it is not required that it be the same.
When the binding SID SR LSP pair is provisioned, the private forwarding adjacency link
is automatically created, and can then be selected as a destination when you designate
hops for a non-binding SID SR LSP. Use show commands on the router to confirm that
the LSP pair has been pushed to the router configuration.
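For example, commands along these lines can be used for this confirmation, where lsp-name is a placeholder for the name you assigned to the binding SID pair:
show configuration protocols source-packet-routing
show spring-traffic-engineering lsp name lsp-name detail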
Provisioning of an SR LSP can include hop information that somewhat influences the
routing. In the Provision LSP window, select the Path tab. There, you can select hops up
to the MSD hop limitation that is imposed on the ingress router, and specify Strict or
Loose adherence.
• The provisioning type is SR, designated in the Properties tab of the Provision LSP
window.
• The routing method is routeByDevice, designated in the Design tab of the Provision
LSP window. The highlighting of the equal cost paths can only be viewed in the topology
if the routing is being done by the PCC.
The mandatory transit router can be part of the generated ERO using the adjacency SID
passing through that transit router. However, specifying a mandatory transit router usually
increases the label stack depth, violating the MSD. In that case, you can try using the
routeByDevice method. To specify a mandatory transit router using Node SID, select the
routing method as routeByDevice (Design tab), and specify the loopback of the mandatory
transit router as loose hop (Path tab).
A possible downside to using routeByDevice is that other constraints you impose on the
LSP links (bandwidth, coloring, and so on) cannot be guaranteed. The NorthStar Controller
does not provision the LSP if it sees that the constraints cannot be met. But if the
information available indicates that the constraints can be met, the NorthStar Controller
provisions the LSP even though those constraints are not guaranteed. Turning on the
path optimization timer enables NorthStar to periodically check the constraints.
If the NorthStar Controller later learns (during the execution of an optimization request,
for example) that the constraints are no longer being met, it will try to reroute the tunnel
by changing the first hop outgoing interface if a specific one was not configured. If that
is not possible, the LSP remains in the network, even though constraints have been
violated.
NOTE: When you create your NETCONF device collection tasks, be sure you
select the check box to collect configuration data. This is necessary for
NorthStar to collect and parse the statements in the router configuration file,
including those related to SR LSPs. See Figure 136 on page 185.
NorthStar uses the SID labels it learns from the network to determine the operational status. If the labels change or disappear from the network, the NorthStar
Controller tries to reroute and re-provision the LSPs that are in a non-operational state.
If NorthStar is not able to find an alternative routing path that complies with the
constraints, the LSP is deleted from the network. These LSPs are not, however, deleted
from the data model (they are deleted from the network, and persist in the data storage
mechanism). The goal is to minimize traffic loss from non-viable SR LSPs (black holes)
by deleting them from the network. Op Status is listed as Unknown when an SR LSP is
deleted, and the Controller Status is listed as No path found or Reschedule in x minutes.
You can mitigate the risk of traffic loss by creating a secondary path for the LSP with
fewer or more relaxed constraints. If the NorthStar Controller learns that the original
constraints are not being met, it first tries to reroute using the secondary path.
Egress Peer Engineering (EPE) allows users to steer egress traffic to peers external to
the local network, by way of egress ASBRs. NorthStar Controller uses BGP-LS and the
SIDs to the external EPE peers to learn the topology. Segment Routing is used for the
transport LSPs.
In this release, only manual steering of traffic is supported. NorthStar uses netflowd to
create the per-prefix aggregation of traffic demands. Netflowd processes the traffic data
and periodically identifies the Top N demands which, based on congestion, are the best
candidates for steering. These demands are displayed in the network information table,
Demand tab.
Traffic steering involves creating a colored SRTE LSP and then mapping that LSP to
traffic demands via PRPD.
The following must be in place before you can steer traffic:
• Netflow must be configured on the router. See “Netflow Collector” on page 333 for
instructions.
• NETCONF
• PRPD client (see the Enable PRPD section later in this topic)
Topology Setup
Figure 137 on page 187 shows a sample EPE topology which we can use to visualize what
NorthStar EPE does.
NorthStar has no information about the traffic past the ASBRs in this example, because
the nodes are external to the local network; they belong to a service provider. So it is also
not possible for NorthStar to display congestion on the links past the ASBRs. The goal
is to be able to reroute traffic among external destinations that all advertise the same
prefix (source). One of the paths is designated as “preferred”. Rerouting the traffic changes
the preferred path. Use Junos OS show route commands to view the preferred path and
the advertised prefixes. Use NorthStar to reroute the traffic.
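For example, commands along these lines display the currently preferred path toward a destination and the prefixes advertised by a peer; the prefix shown is a placeholder, and 10.0.0.21 is one of the peer addresses from this sample topology:
show route 10.100.1.0/24 active-path
show route receive-protocol bgp 10.0.0.21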
The following sections describe enabling, configuring, and viewing information related
to setting up and using NorthStar EPE.
Enable PRPD
PRPD enables NorthStar to push the mapping using the PRPD client at the local ASBR.
PRPD must be enabled, both in NorthStar (Device Profile), and in the router configuration.
1. Navigate to Administration > Device Profile to display the device list.
2. In the device list, click on a device that will be used for EPE and select Modify.
3. In the General tab of the Modify Device window, the login and password credentials
must be correct for NorthStar to access the router.
4. In the Access tab of the Modify Device window, check Enable PRPD, and enter the
port on the router that NorthStar will use to establish the PRPD session. Port 50051
is the default, but you can modify it. If you leave the PRPD IP field empty, the router
ID (router’s loopback address) is used. The Access tab is shown in
Figure 138 on page 188.
To enable the PRPD service on the router, use the following procedure:
1. Add the configuration statements that enable the PRPD service to the router
configuration (a sketch of these statements appears after this procedure). The values used are examples only.
The IP address is typically the loopback address of the router. The port number must
match the one you entered in the device profile in NorthStar. The max-connections
value is the total number of connections the router can receive from other clients.
NorthStar will use one of those connections.
2. Make sure you have the BGP protocol enabled on the router.
3. For NorthStar to learn and display the BGP routes associated with each router,
configure a policy with these statements (example policy is called “monitor”):
If configured successfully, you should be able to right-click on a node in the Node tab
of the network information table and select View Routes to see the routing table for
that node. Figure 139 on page 189 shows an example. Only routing tables for nodes
where PRPD is Up can be viewed in this way.
You can view the PRPD Status in the network information table (Node tab) as either Up
or Down. If the PRPD Status is unexpectedly Down, check the device profile in NorthStar,
and the router configuration, including whether BGP protocol is enabled.
To reroute the traffic from ASBR11 to destination node 10.0.0.22 (instead of 10.0.0.21),
you first provision a colored SR LSP through ASBR11 and then map the traffic demand to
it, as described below.
From the network information table, Tunnel tab, click Add at the bottom of the table to
display the Provision LSP window. For this example, we provision an SR LSP using
NETCONF from PE1 to 10.0.0.22. The provisioning method must be NETCONF and the
provisioning type must be SR. On the path tab, select “required” and specify that the
traffic is to go through ASBR11.
In the Advanced tab, specify the Color Community and check Use Penultimate Hop as
Signaling Address for Color Community. In our example, the penultimate hop is ASBR11.
Figure 140 on page 191 shows the Advanced tab of the Provision LSP window.
On the Design tab, select “default” so NorthStar will calculate the ERO.
Because the LSP is provisioned using NETCONF, NETCONF pushes the configuration to
the router. The LSP entry in the Tunnel tab of the network information table shows the
new destination address. NorthStar pushes the hop-by-hop route in the form of segment
(SID) labels.
On the router, you can use the show configuration protocols source-packet-routing
command from the source node (node A) to see the segment list. You can use the show
spring-traffic-engineering lsp command from the source node to see the final destination
with the color designation, the state (up/down), and the LSP name. The show
configuration protocols source-packet-routing command also displays this information.
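For example, on the ingress router (PE1 in the example above), you would run:
user@PE1> show configuration protocols source-packet-routing
user@PE1> show spring-traffic-engineering lsp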
The following sections describe creating the demands and mapping them to SRTE colored
LSPs.
The netflowd process analyzes traffic from the router and displays it in the Demands tab
in the network information table. By default, Netflow aggregates traffic by PE, but for
EPE, you want the traffic aggregated by prefix. To configure this, use a text editing tool
such as vi to modify the northstar.cfg file, setting the netflow_aggregate_by_prefix
parameter to “always”:
[root@ns]# vi /opt/northstar/data/northstar.cfg
.
.
.
# netflowd settings
.
.
.
netflow_aggregate_by_prefix=always
Restart the netflowd process so that the change takes effect. You can use supervisorctl status to check that the process comes back up.
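For example (a sketch that assumes netflowd runs under the northstar group in supervisord; use supervisorctl status to confirm the exact process name on your installation):
[root@ns]# supervisorctl restart northstar:netflowd
[root@ns]# supervisorctl status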
To map a demand, select it in the network information table and click Modify to display
the Modify Demand window. Select the LSP Mapping tab as shown in
Figure 141 on page 193.
Click the check box beside the LSP to which you want the demand routed. In this release,
you can only select one LSP. In our example, this would be the new SR LSP we created.
Click Submit. NorthStar pushes the mapping via the PRPD client.
You can use the show route command to confirm that the preferred path has changed
as you specified.
To reverse the mapping, you can access the Modify Demand window again and deselect
the check box for the LSP in the LSP Mapping tab. You can also delete the demand.
You can change the IGP metric from within the NorthStar Controller web UI, without
having to configure anything on the router. Modifying metrics is one way to cause the
path selection process to favor one path over the other available paths.
NOTE: Interface data must have been collected using a Netconf device
collection task as described in “Scheduling Device Collection for Analytics”
on page 285 before you can modify IGP metrics.
To modify IGP metrics from within the web UI, perform the following steps:
1. In the Link tab of the network information table, highlight the link to be modified. Click
Modify at the bottom of the table to display the Modify Link window.
2. Click the new Configuration tab where you can change the ISIS Level1, ISIS Level2, or
OSPF metric for either side of the link, or for both sides.
NOTE: If the Configuration tab is not available, device collection has not
been run.
3. Click the Properties tab and add a description of the change you are making in the
Comment field. This is optional, but we recommend it because it serves as a reference
if you want to revert to the original metric.
If your system uses BGP-LS for topology acquisition, only the TE metric can be
immediately updated in the web UI. To retrieve and display other updated metrics
(ISIS1, ISIS2, OSPF), right-click the link in the network information table and select
Run Device Collection.
If your system is configured to use IGP adjacency for topology acquisition, this step is
not necessary because all metrics are immediately updated.
Manual switching allows you to select which LSP path is to be active for PCC-controlled
LSPs where the path name is not empty. One use case for this feature is to proactively
switch the active path in preparation for a maintenance event that would make the
currently active path unavailable.
To manually select the active path:
1. In the Tunnel tab of the network information table, right-click the LSP.
2. Select Set Preferred Path to display the Select Preferred Path window.
NOTE: This menu option is grayed out if the LSP is not a PCC-controlled
LSP for which the path name is not empty.
3. In the list of available paths, click the radio button for the path you want to make
active. When you click a radio button, you can see the corresponding path highlighted
in the topology map.
NOTE: The list of paths comes from the router’s configuration under the
path statement blocks (see the configuration sketch after these steps). If
the network does not run PCEP, you must first run a Netconf device
collection task to populate the list of paths.
4. Click Submit. The Op Status of the paths is updated in the network information table.
In the Configured Preferred Path column, the manually-selected path is designated
as Manual Preferred.
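As an illustrative sketch (the LSP and path names here are hypothetical), named paths configured on the router for a PCC-controlled LSP might look like the following; these path names are what appear in the Select Preferred Path window:
set protocols mpls path via-east 10.0.1.2 loose
set protocols mpls path via-west 10.0.2.2 loose
set protocols mpls label-switched-path lsp-to-pe2 to 10.0.0.22
set protocols mpls label-switched-path lsp-to-pe2 primary via-east
set protocols mpls label-switched-path lsp-to-pe2 secondary via-west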
To remove the manual path selection:
1. In the Tunnel tab of the network information table, right-click the LSP.
2. Select Set Preferred Path to display the Select Preferred Path window.
3. In the list of available paths, click the radio button next to None.
Maintenance Events
Use the Maintenance option to schedule maintenance events for network elements, so
you can perform updates or other configuration tasks. Maintenance events are planned
failures at specific future dates and times. During a scheduled maintenance event, the
selected elements are considered logically down, and the system reroutes the LSPs
around those elements during the maintenance period. After the maintenance event is
completed, the default behavior is that all LSPs that were affected by the event are
reoptimized. There is an option that allows you to disable that reoptimization if you want
to complete the maintenance event, but keep the paths in their rerouted condition.
NOTE: Maintenance events can also be created by NorthStar when the link
packet loss threshold has been exceeded, triggering LSP rerouting. See “LSP
Routing Behavior” on page 349 for more information about LSP rerouting.
Table 35 on page 196 describes the columns displayed in the Maintenance tab.
Name: Name assigned to the scheduled maintenance event. The name specified for the maintenance event is also used to name the subfolder for reports in the Report Manager.
NOTE: The names of triggered maintenance events (created by NorthStar) indicate they were triggered by packet loss.
Estimated Duration: Estimated duration for the maintenance event, which is calculated as the duration between the Start Time and End Time in the Maintenance Scheduler window.
Auto Complete: When selected, NorthStar automatically sets the event’s Operation Status to Completed at the specified End Time.
NOTE: For NorthStar-created maintenance events, this option is not available. NorthStar-created events require manual completion via the Modify Maintenance Event window.
No LSP Reoptimization: When selected, NorthStar does not automatically reoptimize LSPs when the event is completed.
Table 36 on page 198 describes the data entry fields available in the Properties tab. A red
asterisk denotes a required field.
Owner: This field auto-populates with the user that is scheduling the maintenance event.
Starts: Required. Click the calendar icon to display a monthly calendar from which you can select the year, month, day, and time.
Ends: Required. Click the calendar icon to display a monthly calendar from which you can select the year, month, day, and time.
Auto Complete at End Time: Select the Auto Complete at End Time check box to automatically complete the maintenance event (bring the elements back up) at the specified end time. If the check box is not selected, you must manually complete the maintenance event after it finishes.
NOTE: To manually complete an event, select it in the network information table, click Modify, and use the drop-down menu in the Status field to select Completed.
When a maintenance event is completed, it triggers NorthStar to bring the maintenance elements back to an Up state, ready for path reoptimization. The affected LSPs are then rerouted to optimal paths unless you selected No LSP Reoptimization Upon Completion.
No LSP Reoptimization Upon Completion: The default behavior is for the system to reoptimize the LSPs that were affected by the maintenance event when the maintenance event is completed. When you check the No LSP Reoptimization Upon Completion option, that behavior is disabled. This allows you to use a maintenance event to temporarily disable a link in NorthStar.
You can reoptimize all LSPs by navigating to Applications > Path Optimization. You can reoptimize specific LSPs by selecting them in the Tunnel tab of the network information table, right-clicking, and selecting Trigger LSP Optimization from the drop-down menu. You can also right-click on links in the Link tab to reoptimize LSPs on those links.
Use the Nodes, Links, and SRLG tabs to select the elements that are to be included in
the maintenance event. All three of these tabs are structured in the same way.
Figure 143 on page 200 shows an example.
Select elements in the Available column and click the right arrow to move them to the
Selected column. Click the left arrow to deselect elements. Click Submit when finished.
The new maintenance event appears in the network information table at the bottom of
the Topology view.
These events start immediately when the link packet loss threshold is exceeded, and the
end time is set for one hour later. Because this type of maintenance event requires manual
completion, the end time is not significant.
These events do not automatically complete because there is no way for NorthStar to
know when troubleshooting efforts have been successful and the link has been restored
to stability. Therefore, you must manually complete these events using the Modify
Maintenance Event window.
See Table 36 on page 198 and Table 35 on page 196 for descriptions of these fields and
possible values.
When you are finished updating the fields, click OK. The updates you made are reflected
in the network information table.
NOTE: You cannot delete a maintenance event that is in progress. You can,
however, cancel one.
To cancel a maintenance event, select the event row in the Maintenance tab of the
network information table and click Modify at the bottom of the table. Use the drop-down
menu in the Status field to select Cancelled.
To delete a maintenance event, you can select the event row and click Delete at the
bottom of the table. Alternatively, you can select the event row and click Modify at the
bottom of the table. Use the drop-down menu in the Status field to select Deleted. With
either method, the row is removed from the table.
Creating Maintenance Events for Devices with the Overload Bit Set
When a device has the overload bit set, it might be at risk of going down. Putting such
devices under maintenance and routing traffic around them until the issue is resolved is
a preventative measure. Rather than monitoring for the overload bit manually, NorthStar
supports automatically creating and completing maintenance events for devices that
have the overload bit set. NorthStar discovers the overload bit setting via either NTAD
or BMP.
NOTE: Not all Junos OS releases set the overload bit properly when sending
node advertisement to NorthStar. For example, the Junos VM bundled with
NorthStar Release 5.0 does not support setting the overload bit. If you want
to use this feature with NorthStar Release 5.0 and the bundled JunosVM, you
can use BMP instead of NTAD.
To set up automatic creation and completion of an overload bit maintenance event, you
create a Network Maintenance task in the Task Scheduler (Administration > Task
Scheduler), and schedule it to recur at regular intervals.
1. In the Task Scheduler, click Add to bring up the Create New Task window. Enter a
name for the task and use the Task Type drop-down menu to select Network
Maintenance. Click Next to proceed to the options and conditions window shown in
Figure 146 on page 204.
2. On the Task Options tab, Event Name Prefix is a required field. NorthStar uses the
prefix in the naming of the maintenance event created by the task. The prefix is
followed by a timestamp to ensure the uniqueness of the event name. You can either
enter a prefix or you can select to use the name of the task as the prefix.
Click the No LSP Optimization Upon Completion check box if you don’t want NorthStar
to automatically reoptimize LSPs when the event is completed.
3. The Event Create Conditions and Event Complete Conditions tabs are for specifying
what should trigger the creation and completion of the maintenance event.
In the Event Create Conditions tab, highlight elements in the Available column and
click the right arrow to move them into the Selected column. As of NorthStar Controller
Release 5.0, the only available create condition is Node.
Once Node has been moved to the Selected column, the Attributes table displays in
the lower part of the window. Click the plus sign (+) to add a property row and then
click in the property row Name field to display the drop-down menu arrow. From the
drop-down menu, select the create condition. As of NorthStar Release 5.0, the only
available create condition is overloadBit. In the Value column, use the drop-down
menu to select the value of True for the overloadBit create condition.
NOTE: For other create conditions available in future releases, False might
be the appropriate selection.
Figure 147 on page 205 shows the Event Create Conditions tab with the Attributes table
displayed.
There are sorting and column selection tools available in the Attributes table headings.
These will be more useful later, when additional create conditions are implemented.
4. The Event Complete Conditions tab fields work the same way as the Event Create
Conditions tab fields. Select Node and move it from Available to Selected. Click the
plus sign (+) beside the Attributes table, click in the Name field of the new row, and
use the drop-down menu to select overloadBit. In the Value field, select False. Click
Next to proceed to the scheduling window.
5. In the scheduling window, specify when the task should start and how often it should
repeat. Click Submit. The task appears in the list of Task Scheduler tasks. See
“Introduction to the Task Scheduler” on page 280 for information about monitoring the
progress of scheduled tasks.
Every time the task runs, it first checks the complete condition for the maintenance event
created by the task. If all the elements included in the maintenance task satisfy the
complete condition (overloadBit = false, for example), it completes the maintenance
event. Next, it looks for elements that match the create conditions (overloadBit = true,
for example). If it finds some, it creates a new maintenance event that includes those
elements.
Just as for other maintenance events, the “M” symbol marks the affected nodes on the
topology map. In the Maintenance tab of the network information table, the maintenance
event displays the comment “created by maintenance task” in the Comment column.
NOTE: This type of maintenance event completes when the included nodes
no longer have the overload bit set, but the event will not automatically be
deleted. You can manually delete the completed event from the Maintenance
tab of the network information table.
To access the maintenance event simulation function, right-click the maintenance event
row in the network information table and select Simulate.
The Maintenance Event Simulation window, as shown in Figure 148 on page 207, displays
the nodes, links, and SRLGs you selected to include in the event.
The Exhaustive Failure Simulation section at the bottom of the window is optional. It
provides check boxes for selecting the element types you want to include in an exhaustive
failure simulation. If you do not perform an exhaustive failure simulation (all check boxes
under Exhaustive Failure Simulation are cleared), all the nodes, links, and SRLGs selected
for the maintenance event fail concurrently. In Figure 148 on page 207, for example, node
0110.0000.0199, link L11.106.107.1_11.106.107.2, and SRLG 100 would all fail at the same
time.
Using this same example, but with Nodes selected under Exhaustive Failure Simulation,
the simulation still fails all the maintenance event elements concurrently, but
simultaneously fails each of the other nodes in the topology, one at a time. If you select
multiple element types for exhaustive failure simulation, all possible combinations
involving those elements are tested. The subsequent report reflects peak values based
on the worst performing combination.
Whether or not you select exhaustive failure, click Simulate to perform the simulation
and generate reports.
The following reports are available for each maintenance event simulation:
• RSVP Link Utilization Changes: Shows changes to the tunnel paths, number of hops,
path cost, and delay.
• Peak Simulation Stat Summary: Shows the summary view of the count, bandwidth,
and hops of the impacted and failed tunnels.
• Peak Simulation Tunnel Failure Info: Lists the tunnels that were unable to reroute and
the causing events during exhaustive failure simulation.
• LSP Path Changes: Shows changes to the tunnel paths, number of hops, path cost,
and delay.
• Link Peak Utilization: For each link, this report shows the peak utilization encountered
from one or more elements that failed.
• Link Oversubscription Stat Summary: Lists the links that reached over 100% utilization
during exhaustive failure simulation.
• Physical Interface Peak Utilization Report: Physical interfaces report with normal
utilization, the worst utilization, and the causing events during exhaustive failure
simulation.
• Maintenance Event Simulation Report: Link utilization and LSP routing changes during
failure simulation caused by maintenance events.
• Path Delay Information Report: Shows the worst path delay and distance experienced
by each tunnel and the associated failure event that caused the worst-case scenario.
The following sections describe how multilayer support is integrated into the NorthStar
Controller:
The NorthStar user interface for configuring and working with transport domain data and
the work flow are the same, whether the interface is Open ROADM or TE. There are,
however, a few differences in terms of supported features, and those are noted in the
documentation.
• You can configure multiple devices associated with a single transport controller, and
at least one device is required. If multiple devices are configured, NorthStar Controller
attempts connection to them in round-robin fashion.
• The transport controller should provide the NorthStar Controller with the local and
remote identifier information for each interlayer link. If the transport controller is not
able to provide the interlayer link identifiers on the packet domain side, it provides open
ended interlayer links that you can complete using the NorthStar Controller Web UI.
• Juniper Networks provides an open source script to be used optionally for configuring
interlayer links.
• Transport link failures can be reported by transport controllers and are displayed in
the NorthStar Controller UI as failed transport links. Only failures reported in the traffic
engineering database (TED) are taken into account for rerouting. IP links associated
with transport link failures reported by a transport controller are not considered down
by NorthStar Controller unless reported down in the TED.
• Transport controller profile configuration can be done in the NorthStar Controller Web
UI or directly via the NorthStar Controller’s northbound REST API. You can view and
manage transport layer elements in both the web UI and the NorthStar Planner.
• The web UI and the northbound REST API offer premium delay-related path design
options for transport links. In the web UI, navigate to Applications>Provision LSP, and
click the Design tab. These options are also available in the NorthStar Planner.
When the transport domain is known, the delay information does not need to be
populated manually or imported from a static file because the information is learned
dynamically by NorthStar Controller.
• Once the interlayer links mapping is completed, the data used by the path design
options (delay, SRLGs, Protected) is populated automatically and updated dynamically
through communication between the transport and NorthStar controllers.
SRLGs
NorthStar Controller considers transport shared risk link group (SRLG) information
whenever a path optimization occurs or whenever some event triggers rerouting.
NorthStar Controller Web UI allows for the specification of an additional TSRLG prefix
(a prefix extension) for each transport controller to prevent unintentional overlap.
Preventing unintentional SRLG range overlap requires particular vigilance when you have
transport controller ranges and you also manually assign SRLGs to IP links in NorthStar
Controller.
Maintenance Events
Maintenance events that include transport layer elements can be scheduled in the
NorthStar Controller UI because transport SRLGs are automatically discovered by
NorthStar Controller. You can select any transport layer elements or combination of
transport and packet layer elements to be included in a maintenance event. Of the
transport layer elements only the transport SRLGs can trigger the rerouting of packet
layer LSPs.
Both the NorthStar Controller and NorthStar Planner support creation of maintenance
events that include transport layer elements. The transport controller is not made aware
of these maintenance events as they exist only in the scope of NorthStar.
Latency
NorthStar Controller can dynamically learn latency information for transport links and
interlayer links, to support latency-based routing constraints for packet LSPs. There are
three possible sources for latency values. All of the values are collected and saved, but
when multiple values are present for the same object, the NorthStar Controller can only
accept one. The NorthStar Controller resolves conflicts by accepting latency values
according to their source in the following order of preference:
• Transport controller
NOTE: You can also access the Provision LSP window from the network
information table. From the Tunnel tab, click Add at the bottom of the table.
This section describes transport controller configuration tasks in the web UI.
NOTE: Transport layer elements can be viewed in both the web UI and
NorthStar Planner.
The Transport Controller window consists of the following panes (numbers correspond
to the numbers in Figure 149 on page 212):
1. Transport Controllers (upper left pane): Lists the configured transport controller
profiles, and used to add, save, and delete profiles.
2. Configuration (upper right pane): Displays the configuration fields for the transport
controller profile selected in the Transport Controllers pane.
3. Profile Groups (lower left pane): Lists configured profile groups, and used to reload,
add, modify, and delete profile groups.
4. Device List (lower right pane): Lists the devices that are part of the profile group
selected in the Profile Groups pane, and used to add, modify, delete, and copy devices.
1. Create a profile group in the Profile Groups pane (described later in this topic).
2. Select the group in the Profile Groups pane. In the Device List pane, create at least
one device for the group. A group can have multiple devices.
3. Select (or create and select) the transport controller in the Transport Controllers
pane.
4. In the Configuration pane for the selected transport controller, enter the requested
information, including selecting the Group Name from the drop-down menu. The
devices in the group are then associated with the transport controller.
The Reload button at the top of the Profile Groups pane reloads the selected profile
group. Use it to update the device list in the UI when devices have been added using
the REST API.
1. In the Profile Groups pane (lower left pane), click the Add (+) button to display the
Create New Group window. Figure 150 on page 214 shows the Create New Group
window that is displayed.
To delete a selected group, click the Delete button, and respond to the request for
confirmation.
Adding Devices
The buttons across the top of the Device List pane perform the functions described in
Table 38 on page 214.
To create the devices that are part of the new profile group, perform the following steps:
1. In the Device List pane (lower right pane), click the Add (+) button to display the Add
New Device window as shown in Figure 151 on page 215.
2. Enter the requested information. Some fields are populated with default values, but
you can change them. Table 39 on page 215 describes the fields in the Add New Device
window.
Device Name: Name of the device for display and reporting purposes.
Device IP (required): The IP address used to connect to the HTTP server on the device. This address is typically provided by the vendor.
Login (required unless the authentication method is NOAUTH): Username for authentication. The username must match the username configured on the server running the device being configured.
Password (required unless the authentication method is NOAUTH): Password for authentication. The password must match the password configured on the server running the device being configured.
Access Method: Use the drop-down menu to select either HTTP or HTTPS. The default is HTTP.
HTTP Port: The HTTP port on the device. The default is 5000.
Connection Timeout: Number of seconds before a connection request to the device times out. The default is 300. Use the up and down arrows to increment or decrement this value, or type a different value in the field.
Heartbeat Failure Limit: Number of connection retries before the device is considered down. The default is 3.
Authentication Method: Use the drop-down menu to select BASIC, NOAUTH, or BEARER. The default is BASIC.
Authorization URL: Used when the Authentication Method is BEARER. The server URL used to generate the bearer token based on the username and password.
Token Expiration Time: Used when the Authentication Method is BEARER. Number of seconds the bearer token is valid.
Table 40 on page 216 shows the fields that require specific values for particular
transport controller vendors. Fields not listed are not typically vendor-specific. Confirm
all values with the vendor before using them.
3. Click Submit.
4. Repeat the procedure to add all the devices for the profile group.
You can drag and drop device rows to change the order in the Device list. Changing the
order in the list changes the order in which connection to the devices is attempted.
1. In the Transport Controllers pane (upper left pane), click the Add (+) button. The
default name newController is added to the Transport Controllers pane in red text
(because it has not yet been saved), and is selected so you can populate the properties
in the Configuration pane (upper right pane).
2. In the Configuration pane, enter the requested information. Table 42 on page 217
describes the transport controller profile configuration fields and identifies the ones
that are required.
Name (required): Name of the transport controller profile. The default name for a new profile is newController. We recommend you use the name of the vendor (ADVA, for example) as the name of the transport controller profile, so NorthStar Controller can use corresponding icons in the UI. Otherwise, it uses generic icons.
Group Name (required): Use the drop-down menu to select a group name from those configured in the Profile Groups pane.
Interface Type (required): Use the drop-down menu to select REST or RESTCONF. The default is REST.
Notify URL (required): REST or RESTCONF URL on the transport controller that publishes topology change notifications.
Poll URL: The server URL used to poll server liveness. If the interface type is RESTCONF and no value is entered, NorthStar Controller uses /.well-known/host-meta by default. If the interface type is REST, you must enter a value which you obtain from the vendor.
SRLG Prefix: Enables separation of shared risk link group (SRLG) spaces when multiple controllers are configured.
Topology to use: Specifies the topology to use in the event that a controller returns multiple topologies. This is your choice from the topologies provided, but there are typical topologies for each vendor. The field can be left empty, in which case all topologies are imported. If the value does not match a topology exported by the transport controller, no topology is shown.
Topology URL (required): URL on the transport controller that provides the abstract topology.
Service URL: Used when the Model is OpenROADM-2.0. IP layer link that fetches services information.
Reconnect Interval: Number of seconds between reconnection attempts to the devices included in the profile group. The default is 300.
Table 43 on page 218, Table 44 on page 219, and Table 45 on page 219 show the fields
that require specific values for particular vendors. Confirm all values with the vendor
before using them.
Table 43: proNX Optical Director: Typical Transport Controller Field Values
Name: JuniperPOD
Model: OpenROADM-2.0
Table 44: ADVA: Typical Transport Controller Field Values
Name: ADVA
Model: ietf-te-topology-01
Service URL: NA
Table 45: Coriant: Typical Transport Controller Field Values
Name: Coriant
Model: ietf-te-topology-01
Service URL: NA
3. Click the Save button in the Transport Controllers pane to save the transport controller
profile. The profile name turns from red to black if saved successfully. If it does not
become black when you attempt to save it, double check the data in the Configuration
pane.
Sometimes, when interlayer links are initially loaded into the model, only the source is
known. To complete the linking of the transport layer to the IP layer, you must supply the
missing remote node (node Z) information in one of the ways described in the following
sections:
1. Select the Link tab in the network information table of the Web UI topology window.
Highlight the link to be updated.
2. Click Modify in the bottom tool bar to display the Modify Link window shown in
Figure 152 on page 220.
3. In the Node Z field, use the drop-down menu to select the remote node.
4. In the IP Z field, enter the IP address for the corresponding IP link on the remote node.
5. Click Submit.
The script requires an input file that identifies at least one side of each IP link. It is not
necessary to include both sides of the IP links because the missing side can be determined
from the transport circuits provided by the transport controller.
The text file must include just one mapping per interlayer link and must be formatted
with just one mapping per line. If you are providing both sides of an IP link, use two lines,
one per side.
transport-node-name|transport-link-ID IP-address
For example:
Transport:0.1.0.5|1008001 11.112.122.2
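To supply both sides of the same IP link (the second transport node name and the addresses below are hypothetical), use two lines:
Transport:0.1.0.5|1008001 11.112.122.2
Transport:0.1.0.6|1008002 11.112.122.1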
The script is installed at the following location on the NorthStar Controller server:
/opt/northstar/mlAdapter/tools/configureAccessLinks.py
Run the script from the CLI using your username (full-access user group required),
password, and input file:
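A hypothetical invocation is shown below; the option names are illustrative, so check the script itself (or its help output) for the exact arguments it accepts:
[root@ns]# python /opt/northstar/mlAdapter/tools/configureAccessLinks.py -u admin -p password -f /tmp/interlayer_links.txt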
Layers, types, transport circuits, transport SRLGs, and latency values can all be displayed
in the web UI and the NorthStar Planner. The REST API offers the option to use protected
links. This topic focuses on navigating to the display options you have in each case.
Displaying Layers
In the left pane of the topology window, select Layers from the drop-down menu to
display the Layers list. The Layers list gives you the option to exclude or include individual
layer information in the topology map.
The colors indicated in the Layers list are reflected in the topology map so you can
distinguish the nodes belonging to the different layers. Figure 153 on page 223 shows an
example of a topology map that includes both IP Layer and Transport Layer elements.
The dotted link lines are interlayer links.
In the left pane of the topology map window, access advanced filters by selecting
Filters>Advanced.
From the Advanced filters window you have the option to hide various elements on the
topology map including IP layer, transport layer, and interlayer links. To hide an element,
select the corresponding check box. To display an element, clear the corresponding check
box.
In the left pane of the Topology window, select Types from the drop-down menu to
display the Types list. The list includes categories of nodes and links found in the network.
Different types are associated with different icons, which are reflected in the topology
map.
You can select or deselect a type by checking or clearing the corresponding check box.
Only selected options are displayed in the topology map. Figure 154 on page 224 shows
a Types list and topology map for a network that includes a Coriant transport layer.
The network information table below the topology map includes a Layer column that is
available on the Node tab. If you do not see the column, hover over any column heading
and click the down arrow that appears. A column selection window is displayed. Select
the Layer check box to include that column in the table.
In the Left pane of the Topology Map window, select Filters>Types to display categories
of nodes and links that you can opt to display or hide on the topology map.
You can select or deselect a type (Transport, for example) by checking or clearing the
corresponding check box. Only selected options are displayed in the topology map. You
can also change the line color and style for a link type by clicking the line indicator next
to the check box.
The Network Info table below the topology map includes tabs for L1 Links, L1 Nodes, and
Interlayer Links.
If you do not see a column, click the plus sign (+) at the end of the row of column headings
to display available columns. Click the column you want to display.
In the web UI, the paths are added to the network information table in the Tunnel tab. In
the Layer column, they are identified as Transport. The names are the same as the
corresponding IP link names.
If a selected IP link in the Link tab of the network information table has an associated
transport circuit, it is automatically highlighted.
In the NorthStar Planner, the paths are added to the network information table in the
Tunnels tab together with normal packet tunnels. The names are the same as the
corresponding IP link names. In the Type column, they are identified as L1Circuit.
Right-click an IP link in the Network Info table Tunnels tab or on the topology map to
access the option to display the L1 circuit path if there is an associated transport circuit.
Displaying Latency
Using the topology settings window, you can opt to display latency on the topology map.
Perform the following steps:
1. Access the Topology Settings window by clicking on the settings icon (gear) in the
upper right corner of the topology window. Figure 155 on page 225 shows the settings
icon.
2. In the Elements tab, shown in Figure 156 on page 226, click the check box for Show
Label in the Links section (the middle section) and select Delay A::Z from the
corresponding drop-down menu.
The topology map displays the latency values for each link in the form delayA::delayZ
(252::252, for example), in milliseconds. In the Link tab of the network information table,
the Delay A and Delay Z columns also display these latency values.
Through the Link Labels window, you can opt to display latency on the topology map.
Perform the following steps:
1. Right-click in the topology map window and navigate to Labels>Link Labels. The Link
Labels window is displayed as shown in Figure 157 on page 227.
The topology map displays the latency values for each link in the form delayA-delayZ
(252-252, for example).
When you select an SRLG, all links in all layers in the group are highlighted in the topology
map.
In the web UI, you can also use the Link Label settings window shown in
Figure 156 on page 226 to specify that SRLGs are to be displayed on the topology map as
link labels.
In the network information table, you can display a column that shows the protection
status of transport and IP layer links. Perform the following steps:
1. Select the Link tab in the network information table.
2. Click the down arrow in any column heading, and select Columns.
3. Select the Protected check box to add the Protected column to the table.
4. You can then manually change the protection status of any link by selecting the link
and clicking Modify at the bottom of the table. Click in the Protected check box
(Properties tab) to select or deselect protected status. Protected links are highlighted
in the topology map.
In the NorthStar Planner network information table, you can view the protection status
of transport as well as IP layer links. Perform the following steps:
2. Right-click in any column heading and select Table Options to display the Table
Options window shown in Figure 158 on page 228.
3. On the left side, select CanFail and click Add to add the column to the display.
4. By default, links are set to CanFail=yes, and the corresponding check boxes are
selected. If the transport controller indicates that a link is protected, NorthStar clears
the check box for that link, making it protected.
NOTE: The NorthStar REST API offers the ability to use a protected link,
which suspends the link’s protected status.
High Availability
When a failover occurs, all processes are started on the new active node, and the node configures the virtual IP
address based on the user configuration (via net_setup.py). The virtual IP can be used
for client-facing interfaces as well as for PCEP sessions.
Failure Scenarios
NorthStar Controller HA protects the network from the following failure scenarios:
• Operating system failures (server operating system reboot, server operating system
not responding)
• Software failures (failure of any process running on the active server when it is unable
to recover locally)
NOTE: If the server has only one interface or if you only want to use one
interface, the network-facing interface is then also the client-facing interface.
The Web UI also loses connectivity upon failover, requiring you to log in again.
The ha_agent sends probes using ICMP packets (ping) to remote cluster endpoints
(including the Zookeeper interface) to monitor the connectivity of the interfaces. If the
packet is not received within the timeout period, the neighbor is declared unreachable.
The ha_agent updates Zookeeper on any interface status changes and propagates that
information across the cluster. You can configure the interval and timeout values for the
cluster in the HA setup script. Default values are 10 seconds and 30 seconds, respectively.
NOTE: Only PCC-initiated and PCC-delegated LSPs are included in the report.
Access the report by navigating to Applications > Reports. Figure 159 on page 233 shows
a list of available reports, including the LSP Discrepancy report.
Cluster Configuration
The NorthStar implementation of HA requires that the cluster have a quorum, or majority,
of voters. This is to prevent “split brain” when the nodes are partitioned due to failure. In
a five-node cluster, HA can tolerate two node failures because the remaining three nodes
can still form a simple majority. The minimum number of nodes in a cluster is three.
There is an option within the NorthStar Controller setup utility for configuring an HA
cluster. First, configure the standalone servers; then configure the cluster. The HA
installation script provides an option to automate the deployment of NorthStar servers
in remote data centers such as those located in different countries.
See Configuring a NorthStar Cluster for High Availability in the NorthStar Controller Getting
Started Guide for step-by-step cluster configuration instructions.
Related • Configuring a NorthStar Cluster for High Availability (NorthStar Controller Getting Started
Documentation Guide)
System Monitoring
Dashboard Overview
The Dashboard view is shown in Figure 160 on page 236. The Dashboard presents a variety
of status and statistics information related to the network, in a collection of widgets that
you can arrange according to your preference. The information displayed is read-only.
Figure 160: Dashboard Widgets, Not All Showing the Same Network
Network Elements: Summation of the elements (nodes, links, LSPs, SRLGs) in the model, computed from the client. If the values differ from the information reported in the Network Status (left pane) or in the network information table, it is because they have different sources of data for the calculations and different rates of synchronizing to the client.
Network Model Audit: Periodically polls for status. This is a troubleshooting tool.
LSP Bandwidth: Pie chart showing the percentage of the total LSP bandwidth that is accounted for by each LSP type (PCE-initiated, PCC-delegated, PCC-controlled).
Hop Count Statistics: Aggregates the number of LSPs by hop count, per LSP type (PCE-initiated, PCC-delegated, PCC-controlled). In other words, it shows the number of LSPs of each type with three hops, with two hops, and so on. The LSP types are color coded according to the key at the bottom. Click an LSP type in the key to toggle between hiding and unhiding the LSP type. Mouse over the color bar to see the count.
Top 10 LSP Source: Top 10 routers that have LSPs originating there, and the number of originating LSPs. Click the button in the lower right corner to toggle between table, bar chart, and pie chart representation.
Top 10 LSP Destination: Top 10 routers that have LSPs terminating there, and the number of terminating LSPs. Click the button in the lower right corner to toggle between table, bar chart, and pie chart representation.
LSP Summary: Number of active, standby, and secondary LSPs that are Up and Down.
The dashboard offers the following options for customizing the arrangement of widgets:
• The Settings drop-down menu in the upper right corner of the Dashboard view allows
you to change the number of widget columns.
As shown in Figure 161 on page 237, you can select either Two columns or Three columns.
• Minimize a widget by clicking on the up arrow in the upper right corner of the widget.
• Close a widget by clicking on the X in the upper right corner of the widget.
• From the Settings drop-down menu in the upper right corner of the dashboard, select
Restore defaults to return all the widgets to the original display arrangement.
Logs
Navigate to Administration>Logs to view a list of the available NorthStar logs. Click any
log name to display the contents of the log itself.
Hover over any column heading and click the down arrow that appears to view sorting
and column selection options. Figure 163 on page 238 shows an example of sorting and
column selection options.
Click View Raw Log in the upper left corner to view the log in a new browser window or
tab. This enables you to keep the log viewable while you perform other actions in NorthStar
Controller.
Logs are typically used by system administrators and for troubleshooting purposes.
You can access Subscribers and System Settings by selecting Administration from the
More Options menu in the upper right corner of the NorthStar Controller UI. These options
are visible to and accessible by the Admin user only.
Subscribers
The Admin can assign users to receive system messages by navigating to Administration
> Subscribers. Click Add in the upper right corner of the Subscriber Management window
to display the Add Subscriber window as shown in Figure 165 on page 240.
Enter the email address of the user to be subscribed (under Profile) and select the type
of system messages to be received (under Subscriptions). Only disk space notifications
are available at this time. Click Submit to complete the subscription. See “General System
Settings” on page 242 for information about customizing disk space notifications.
Once subscribed, the user receives system messages and can then take the appropriate
action.
You can modify or delete existing subscribers by clicking Modify or Delete in the upper
right corner of the Subscriber Management window.
System Settings
Navigate to Administration>System Settings from the More Options menu to access
the general system settings shown in Figure 166 on page 241:
In the upper right corner of the General Settings window is an Advanced Settings button.
This button allows you to toggle back and forth between general and advanced system
settings. The advanced system settings are shown in Figure 167 on page 241.
User Inactivity Timer: When enabled, users are automatically logged out of the NorthStar Controller after the specified period of inactivity. The timer is disabled by default. To enable it, select Enable and enter the time in minutes.
Link Flap Behavior: Link flap can be enabled or disabled, and is enabled by default. Adjust the seconds and count settings as appropriate for your network.
Provisioning: Provisioning can be globally enabled or disabled for all users, and is enabled by default. Disabling provisioning does not prevent users from accessing and using the provisioning functions in the UI, but it does prevent those actions from taking effect in the network. This allows you to respond to periods of network instability by preventing the additional strain on the system that might result from provisioning going on at the same time.
Zero Bandwidth Signaling: When set to On, NorthStar can optimize resource utilization more effectively and more aggressively. When set to Off, some LSPs may not be routed due to bandwidth overbooking when a Make Before Break (MBB) operation is performed.
SMTP Mail Server: The SMTP mail server must be enabled for subscribers to receive system messages.
Disk Space Notification Thresholds: For each partition, you can set the disk usage threshold that triggers a system message to be sent out to subscribers as configured in Administration > Subscribers. Click on the slider and drag to adjust the threshold.
In the Advanced System Settings window, there are two operations available to the
administrator that help keep NorthStar’s view of the network (the network model)
synchronized with the live network:
The Sync Network Model operation refreshes the synchronization of the network model
and is appropriate to use if, for example, the network model audit has unresolved
discrepancies.
When you sync the network model, this is what happens behind the scenes:
1. Information associated with the network model (nodes, links, LSPs, interfaces,
SRLGs, and user-defined parameters) remains intact. Nothing is purged from the
database.
2. NorthStar processes, including the topology server and path computation server
processes, are restarted.
3. The network model is repopulated with live data learned from topology acquisition.
The Reset Network Model operation should not be undertaken lightly, but there are
two circumstances under which you must reset the network model in order to keep
the model in sync with the actual network:
• The node ISO network entity title (NET) address changes. This can happen when
configuration changes are made to support IS-IS.
• The routing device’s IP address (router ID) changes. The router ID is used by BGP
and OSPF to identify the routing device from which a packet originated. The router
ID is usually the IP address of the local routing device. If a router ID has not been
configured, the IP address of the first interface to come online is used, usually the
loopback interface. Otherwise, the first hardware interface with an IP address is used.
If either of these addresses changes, and you do not perform the Reset Network Model
operation, the network model in the NorthStar Controller database becomes out of
sync with the live network.
When you reset the network model, this is what happens behind the scenes:
1. Information associated with the network model (nodes, links, LSPs, interfaces,
SRLGs, and user-defined parameters) is purged from the database (so you would
not want to do this unless you have to).
2. NorthStar processes, including the topology server and path computation server
processes, are restarted.
3. The network model is repopulated with live data learned from topology acquisition.
Table 48 on page 244 describes the effects on various elements in the network when you
reset or synchronize the model.
Network Monitoring
System Health
NorthStar System Health enhances health monitoring functionality in the areas of process,
server, connectivity (topology and PCEP), license monitoring, and the monitoring of
distributed analytics collectors in an HA environment.
• You can display cluster, data collector, and connectivity status information by navigating
to Administration > System Health. For HA cluster environments, you can view the
process status of all processes in all cluster members. Both BGP-LS and ISIS/OSPF
peering statuses are also available.
NOTE: Hover over any column heading and click the down arrow that
appears to view sorting and column selection options.
• Critical health monitoring information is pushed to a web UI banner that appears above
the Juniper Networks logo. Conditions that are considered critical include an expiring
license, disk utilization exceeding the threshold, and a server time difference of more
than 60 seconds between application servers in an HA cluster.
NOTE: The health monitor does not enable NorthStar Controller to take
any corrective action regarding these notices. Its responsibility is to monitor
and report so the user can respond as appropriate.
Event View
The Event View opens in a new browser window or tab when you navigate to
Applications>Event View. Figure 168 on page 248 shows the Event View.
The event data displayed in the Event View is stored in the database. The number of
events depends on the NorthStar configuration. By default, NorthStar keeps event history
for 35 days. To customize the number of days event data is retained:
1. Modify the event retention setting in the /opt/northstar/data/northstar.cfg file.
2. Restart the pruneDB process using the supervisorctl restart infra:prunedb command.
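For example, from the NorthStar server CLI:
[root@ns]# supervisorctl restart infra:prunedb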
NOTE: One event typically requires about 300 bytes of memory. See NorthStar
Controller System Requirements in the NorthStar Controller Getting Started
Guide for server sizing guidance.
In the upper right pane of the view is a table of events, listed in chronologically descending
order by default. You can change the order by using the sort options available when you
hover over any column heading and click in the down arrow that is displayed. You can
sort by any column, in ascending or descending order. You can also select the columns
you want to display. Figure 169 on page 249 shows the options displayed when you hover
over a column heading and click the down arrow.
In the upper left pane is a grouping bar chart. By clicking on the Settings menu in the
lower right corner of the pane, you can select the groupings you want to include. Click
and drag groupings to reorder them as shown in Figure 170 on page 249.
On the bar chart, any blue bar can be broken down further until you drill down to the
lowest level, which is portrayed by a gray bar. Click a blue bar to drill down to the next
level. To go back to a previous level, click empty space below the bar chart.
For example, if the Settings menu has Source, Type, and Name selected, in that order,
the first bar chart display has events grouped by Source. If you click the bar representing
the events for one source, the display refreshes to show all the events for that source
grouped by Type, which is the next grouping in the menu. If you then click the bar
representing the events for one type, the display refreshes again, showing all the events
for that source and type, grouped by name, and those bars are gray.
Each time the bar chart refreshes, the table of events refreshes accordingly.
In the pane at the bottom of the view is a timeline that shows the number of events on
the vertical axis and time on the horizontal axis. You can select the time span displayed
by opening the drop-down menu in the upper right corner of the pane as shown in
Figure 171 on page 250.
You can also left-click and drag in the timeline to highlight a discrete period of time. The
event table and bar chart panes refresh to display only the events included in the time
frame you selected. Figure 172 on page 250 shows a selected period of time in the timeline.
To identify the root cause of frequent LSP changes or flaps, you can view changes to the
link that the LSP traverses that occurred during the time period of the LSP changes. The
NorthStar Controller records all the link events and allows you to query on those link
changes (such as operational status and bandwidth) over any specified time period.
All link events are stored in the database. However, to display all raw events would result
in an excess of unnecessary information for NorthStar Controller users. To avoid this
situation, the Path Computation Server (PCS) processes the link events and displays
only the events that trigger actual link changes. You can view these link change entries
in the Event View that opens as a separate browser window or tab.
The Event View opens in a new browser window or tab when you navigate to
Applications>Event View. Figure 173 on page 251 shows the Event View.
The event data displayed in the Event View is stored in the database. The number of
events depends on the NorthStar configuration.
In the upper right pane of the view is a table of events, listed in chronologically descending
order by default. You can change the order by using the sort options available when you
hover over any column heading and click in the down arrow that is displayed. You can
sort by any column, in ascending or descending order. You can also select the columns
you want to display. Figure 174 on page 251 shows the options displayed when you hover
over a column heading and click the down arrow.
In the upper left pane is a grouping bar chart. By clicking on the Settings menu in the
lower right corner of the pane, you can select the groupings you want to include. Click
and drag groupings to reorder them as shown in Figure 175 on page 252.
On the bar chart, any blue bar can be broken down further until you drill down to the
lowest level, which is portrayed by a gray bar. Click a blue bar to drill down to the next
level. To go back to a previous level, click empty space below the bar chart.
For example, if the Settings menu has Source, Type, and Name selected, in that order,
the first bar chart display has events grouped by Source. If you click the bar representing
the events for one source, the display refreshes to show all the events for that source
grouped by Type, which is the next grouping in the menu. If you then click the bar
representing the events for one type, the display refreshes again, showing all the events
for that source and type, grouped by name, and those bars are gray.
Each time the bar chart refreshes, the table of events refreshes accordingly.
In the pane at the bottom of the view is a timeline that shows the number of events on
the vertical axis and time on the horizontal axis. You can select the time span displayed
by opening the drop-down menu in the upper right corner of the pane as shown in
Figure 176 on page 252.
You can also left-click and drag in the timeline to highlight a discrete period of time. The
event table and bar chart panes refresh to display only the events included in the time
frame you selected. Figure 177 on page 252 shows a selected period of time in the timeline.
You can run a task from the Task Scheduler (Administration > Task Scheduler) to clean
up the network. Automating this process by scheduling the cleanup task to run periodically
can be especially time-saving in large networks. The following options are available:
• Purge links with user attributes that are down (having user attributes would otherwise
protect them from removal)
1. In the Task Scheduler, click Add to bring up the Create New Task Window, and select
Network Cleanup from the Task Type drop-down menu as shown in
Figure 178 on page 253.
2. As shown in Figure 179 on page 254, all the available options are selected by default
except to force the removal of links with user attributes.
If you opt to generate purge reports, a report is generated every time the task executes.
The report details the actions taken as a result of the cleanup. Purge reports, identified
with a timestamp, are stored in
/opt/northstar/data/.network_plan/Report/purge_reports/.
If you opt to add notifications to the timeline, you can see notifications relevant to the
execution of the task in the Timeline view. To get there, click Topology in the top
navigation bar and then Timeline in the left panel drop-down menu. An example is
shown in Figure 180 on page 255.
In the Create New Cleanup Task options window, select or deselect the options you
want. Click Next to proceed to the scheduling window.
3. Like other tasks in the Task Scheduler, you can schedule the cleanup task for periodic
execution, automating the cleanup effort. As an alternative to scheduling recurrence,
you can select to have the cleanup task “chained” after an already-recurring task of
another type so that it executes as soon as the other task completes. See “Introduction
to the Task Scheduler” on page 280 for information about scheduling and chaining.
4. To ensure you see the post-cleanup topology in the UI, click Topology in the top
navigation bar to display the topology map and network information table. Right-click
in a blank spot on the topology map and select Reload Network. The updated network
is displayed.
• Node (nodeEvent)
• Link (linkEvent)
• LSP (lspEvent)
• P2MP (p2mpEvent)
• Facility (facilityEvent)
• HA (haEvent)
Table 49 on page 256 lists the schema for each of these event notification types.
Examples
To ensure secure access, a third party application must be authenticated before it can
receive NorthStar event notifications. Use the NorthStar OAuth2 authentication API to
obtain a token for authentication purposes. The token allows subscription to the socket.io
channel. The following example shows connecting to NorthStar and requesting a token.
#!/usr/bin/env python
import sys
import requests

serverURL = 'https://northstar.example.net'
username = 'user'
password = 'password'

# Use the NorthStar OAuth2 authentication API to get a token
payload = {'grant_type': 'password', 'username': username, 'password': password}
# verify=False skips TLS certificate verification (self-signed certificates)
r = requests.post(serverURL + ':8443/oauth2/token',
                  data=payload, verify=False, auth=(username, password))
data = r.json()
if "token_type" not in data or "access_token" not in data:
    print("Error: Invalid credentials")
    sys.exit(1)
# The following header must be passed on all subsequent REST and
# notification requests
auth_headers = {'Authorization': "{token_type} {access_token}".format(**data)}
The following example retrieves the NorthStar topology nodes and links.
#!/usr/bin/env python
import requests

serverURL = 'https://northstar.example.net'
# auth_headers: see the authentication token retrieval example above
data = requests.get(serverURL + ':8443/NorthStar/API/v2/tenant/1/topology/1/',
                    verify=False, headers=auth_headers)
topology = data.json()
The following example subscribes to the NorthStar REST API push notification service.
#!/usr/bin/env python
import json

from socketIO_client import SocketIO, BaseNamespace

serverURL = 'https://northstar.example.net'

class NSNotificationNamespace(BaseNamespace):
    def on_connect(self):
        print('Connected to %s:8443/restNotifications-v2' % serverURL)

    def on_event(self, name, *args):
        print('NorthStar Event: %r, data: %r' % (name, json.dumps(args)))

# auth_headers: see the authentication token retrieval example above
socketIO = SocketIO(serverURL, 8443, verify=False, headers=auth_headers)
ns = socketIO.define(NSNotificationNamespace, '/restNotifications-v2')
socketIO.wait()
Reports Overview
NOTE: Click the Help icon (question mark) in the upper right corner of the
NorthStar window to display more information about the selected report.
Report Source
Demand Reports Generated when you run a Demand Reports Collection task. You select the specific reports you
want to generate when you schedule the collection task.
Integrity Checks Generated when you run the Device Collection task and select configuration data as a collection
option.
NOTE: You must run a collection to generate a network archive for this report to be available.
Inventory Generated when you run the Device Collection task and select equipment CLI data as a collection
option.
NOTE: You must run a collection to generate a network archive for this report to be available.
LSP Discrepancy During an HA switchover, the PCS server performs LSP reconciliation and produces the LSP
discrepancy report. This report identifies LSPs that the PCS server has discovered might require
re-provisioning.
Maintenance Generated when you use the Simulate Maintenance Event function.
Network Summary Updated summary of network elements. One report is currently available in this category, called
Nodes. It displays counts of LSPs that start, end, or transit through each node in the topology.
Path Analysis and Optimization Generated when you use the Analyze Now function for path optimization.
NOTE: PCC-controlled LSPs are not included in the reports because NorthStar does not attempt
to optimize PCC-Controlled LSPs.
• Path Analysis Optimization Report: lists LSPs that are currently not in an optimized path, suggests
what the optimized paths should be, and provides data about what could be gained (in terms of
delay, metric, distance, and so on) if the LSP were to be optimized.
• LSP Path Changes: lists changes to PCE-initiated and PCC-delegated LSPs as a result of analysis.
• RSVP Link Utilization Changes: lists the changes in Link RSVP bandwidth reservation if all LSPs
were to be routed over their optimized paths instead of their current paths.
Report details are displayed in a pane to the right of the menu when you click an individual
report in the menu. Click the Help icon (question mark) in the upper right corner of the
report details pane to display a description of the report.
In the Integrity Check report, you can right-click a line in the report and select Show Config
to bring up the Configuration Viewer.
At the bottom of the Reports window, click the export icon to export the report to a CSV
file.
The Nodes view displays detailed information about the nodes in the network. With this
view, you can see node details, tunnel and interface summaries, and groupings, all in one
place.
• Nodes list on the far left—Lists all nodes in the topology, including any node groups.
Click a node to select it. Click the plus (+) or minus (-) sign next to a group to expand
or collapse the list of nodes within the group.
• Detailed node information to the right of the Nodes list—Shows detailed information
for the node selected in the Nodes list.
• Tunnels and Interfaces tables on the bottom of the display—Lists all the tunnels and
interfaces that start at the selected node, along with their properties. Mouse over any
column heading and click the down arrow to select or deselect columns. Sorting and
filtering options are also available.
Raw data logs are retained in Elasticsearch for a user-configurable number of days. Data
is also rolled up (aggregated) every hour and retained for a user-configurable number
of days. The purpose of aggregation is to make longer retention of data more feasible
given limited disk space. When you modify these retention parameters, keep in mind that
there is an impact on your storage resources.
The parameters described in Table 51 on page 262 work together to control data retention
and aggregation behaviors. The parameters are located in
/opt/northstar/data/northstar.cfg, and you can modify their values there.
Parameter Description
es_log_retention_days Defines what is considered an “old” log of raw data. The default
is 90 days, meaning that raw data logs are retained in
Elasticsearch for 90 days. This can be expressed only in days,
so no unit designation is required. To disable the retention of
raw data logs, set the value to 0.
es_data_rollup_interval Controls how often the ESRollup system task is run. This task
executes the esrollup.py script to aggregate the previous
interval’s data. The default is 1 hour (1h).
The ESRollup task is called from the NorthStar server. You can
view (but not modify) the rollup task by navigating to
Administration > Task Scheduler.
The NorthStar REST API supports telemetry data aggregation with the additional
parameters described in Table 52 on page 263. See the NorthStar REST API documentation
for more information.
Parameter Description
rollup_query_enabled A value of 1 indicates that rollup query functionality is enabled. A value of 0 indicates it
is disabled.
es_rollup_cutoff_days If rollup_query_enabled is set to 1 (enabled) and the time range requested through the
stats REST API extends further back than es_rollup_cutoff_days from now, the query uses
the roll-up index to search the data.
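For example, assuming both parameters are set in the same northstar.cfg file as the
retention parameters (the values shown are illustrative, not defaults):
rollup_query_enabled=1
es_rollup_cutoff_days=7
With these settings, a stats REST API query covering the last 30 days would be answered
from the hourly roll-up index, while a query covering only the last 2 days would be
answered from the raw data logs.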
To modify retention or aggregation parameters, use a text editing tool such as vi and
modify the value of the parameters in the northstar.cfg file. For example:
vi /opt/northstar/data/northstar.cfg
.
.
.
collection_cleanup_task_interval=7d
es_log_retention_days=30
es_log_rollups_retention_days=800
In this example, raw data logs older than 30 days and hourly aggregated data logs older
than 800 days are set to be purged every seven days.
The data included in the rollup tasks (aggregation types, fields, and counters) is defined
in the view-only esrollup_config.json file located in the /opt/northstar/utils directory.
To view the system tasks that launch the esrollup.py and collector-utils.py scripts, navigate
to Administration > Task Scheduler in the NorthStar web UI. In the Task list, the Name
column indicates CollectionCleanup or ESRollup Task. In the Type column, they are
designated as ExecuteScript. An example is shown in Figure 183 on page 263.
There is an optional column in the task list that indicates whether each task is a system
task. Hover over any column heading, click the down arrow that appears, and highlight
Columns to display a list of available columns. Click the check box for System Task to
select the System Task column (true/false) for inclusion in the display.
When you select a system task, Summary, Status, and History tabs are available at the
bottom of the window.
• Set up or modify the device list. Initially, the device list contains all the devices
discovered from the traffic engineering database (TED). The device IP address (if not
already discovered) and the PCEP IP address for each device are required. The PCEP
IP address is the local address of the PCC located in the PCE statement stanza block.
• Supply a hostname for each router for OSPF networks. This is necessary because the
TED does not contain hostnames for OSPF networks.
• Specify an MD5 key to secure PCEP communication between the NorthStar Controller
and the PCC.
Figure 184 on page 265 shows the Device Profile window, including the device list in the
upper pane and details about the highlighted device in the lower pane.
You can filter the devices that are included in the display by activating a filter on any
column. See “Sorting and Filtering Options in the Network Information Table” on page 81
for a description of the column filtering functionality, along with an example.
The buttons across the top and bottom of the Device List pane perform the functions
described in Table 53 on page 266. Button labels are displayed when you hover over icon
buttons.
Button Function
Save Changes Saves the device profile changes. The button becomes active when modifications or edits have
been made to entries or fields in the device list. When the button is active, you must click it to
finalize your changes.
Sync with Live Network Synchronizes devices with the live network. This function does not delete devices from the
selected profile that are absent from the live network, but it does add live network devices
that are missing from the profile, and it synchronizes all devices that have a corresponding
live network device.
When you click Sync with Live Network, this is what happens behind the scenes:
• The latest network topology is retrieved using NorthStar REST API calls.
• The Device Profile is updated with changes and additions, though deletions are ignored –
entries in the Device Profile that correspond to nodes deleted from the live network are not
removed.
Filter Filters the list of devices according to the text you enter.
Reload Reloads the device profiles. This is useful when you are modifying a device entry and then
realize that you don't want to save it. Reload returns the device list to the last saved state.
(Device Grouping) Displays the device grouping options, Toggle Device Grouping and Manage Device Grouping,
which are described later in this topic.
Export Device Profiles Exports device profiles to a comma separated values (CSV) file named DeviceProfiles.csv.
Import Device Profiles Imports devices from a CSV file. This is particularly useful when there are a large number of
devices to add. Clicking the button opens the Import Devices from CSV window where you browse
to the CSV file and specify the appropriate delimiter. A preview of the data appears in the Data
Preview box.
You can perform many of these functions on multiple devices simultaneously. To select
multiple devices, Ctrl-click or Shift-click the device rows and then click the button for
the function you wish to perform.
Test Connectivity
The Test Connectivity button opens the Profile Connectivity window shown in
Figure 186 on page 268.
Click the Use Management IP check box if the devices to be tested have management
IP addresses specified for out-of-band use. Click Options to open the Test Connectivity
Options window shown in Figure 187 on page 268.
• Specify which test methods you want to use (Ping, SSH, SNMP, NETCONF). Multiple
methods are allowed (by default, all methods are tested). To select or deselect
methods, click the corresponding check boxes.
In the SNMP tab, you can add optional SNMP get community string(s), one per line. If an
SNMP connectivity check fails with the community string specified in the device profile
(SNMP Parameters tab), these additional community strings are tried until one succeeds.
In the Login/Password tab, you can enter alternate login credentials to be used in case
of login/password failure.
Click OK to submit your selections and close the Test Connectivity Options window.
In the Profile Connectivity window, click Start to begin the connectivity test. You can click
Stop if the test fails to complete quickly. The test is complete when the green (pass) or
red (fail) status icons are displayed. Figure 188 on page 269 shows an example.
In SNMP connectivity testing, the host name and device type (vendor) are polled and
are auto-populated in the test results if the information was previously missing or incorrect
in the device profile. A red triangle in the upper left corner of a field in the test results
indicates that a change was automatically made. You can see an example in the Device
column in Figure 188 on page 269. To propagate those changes to the device profile, click
Profile Fix at the bottom of the Connectivity Test Results window.
To display the detailed test results for an individual device in the lower part of the window,
click the device row in the upper portion of the window, even if you only tested connectivity
for a single device.
NOTE: The Start button remains unavailable after test completion until you
close the window and reopen it to begin a new connectivity test.
Add Device
The Add button opens the Add New Device window shown in Figure 189 on page 270.
Table 54 on page 270 describes the data entry fields under the General tab.
Field Description
Device Name Name of the network device, which should be identical to the
hostname. During configuration collection, the software uses
this name as part of the name of the collected configuration
file. The configuration filename uses the format ip.name.cfg. If
the device name is left blank, the configuration filename uses
the format ip.cfg.
PCEP IP The local address of the PCC located in the PCE statement
stanza block.
Vendor (Type) Select the device vendor from the drop-down menu. The default
is GENERIC. The vendor is displayed in the Device List under
the column heading Type.
PCEP Version The PCEP version to use for the device:
• Non-RFC. Select this version to run in non-RFC 8231/8281 compliance
mode. This is the default.
• RFC Compliant. Select this version to run in RFC 8231/8281 compliance
mode. This is supported in Junos OS 19.x and later (Junos OS releases
that are RFC 8231/8281 compliant).
Device Group Device group name you assign to the device, such as a regional
group.
NOTE: We recommend you do not use the credentials of Junos OS root users
when running device collection. NorthStar Controller will not raise a warning
when such credentials are used, even if the task fails.
Table 55 on page 272 describes the data entry fields under the Access tab.
Field Description
SSH Timeout Number of milliseconds after which a connection attempt times out. The default is 300. To enter
a different value, type the number of milliseconds in the field or use the up and down arrows to
increment or decrement the displayed value.
SSH Retry Number of times a connection to the device is attempted. The default is 3. To enter a different
value, type the number of retries in the field.
SSH Command Command to use for SSH connection. The default is ssh. To enter a different value, type the
command in the field. Include the full path of the command and options used for ssh, such as
/usr/bin/ssh -1 -p 8888.
Enable Netconf Select this checkbox to enable Netconf communication to the device.
Enable Bulk Commit Select this checkbox to allow NorthStar to do a single commit instead of multiple commits when
you provision multiple LSPs on the same router.
Netconf Retry Enter the number of times a Netconf connection is to be attempted. The default is three.
NOTE: A value of 0 means an unlimited number of retries - connection attempts never stop.
PCEP MD5 String Message Digest 5 Algorithm (MD5) key string, also configured on the router. “Configuring MD5” on
page 279 provides information on configuring MD5 authentication.
NOTE: All the routers in the network must have their PCEP IP addresses in the profile. This is
especially important if any router in the network is configured with an MD5 authentication key.
Enable PRPD Click the check box to enable programmable routing protocol process (PRPD) on the device. This
is required for EPE.
PRPD IP IP address for PRPD on the device. The default is the router ID (router's loopback address). If you
leave the field empty, the default is used.
PRPD Port Port on the router that NorthStar can use to establish a PRPD session. The default is 50051, but
you can modify it.
The fields on the SNMP Parameters tab are required to set up SNMP collection. The
SNMP parameters are described in Table 56 on page 273.
Version Use the drop-down menu to select SNMPv1, SNMPv2c, or SNMPv3. The
default is SNMPv2c.
Port SNMP port. The default is 161. Must match the port configured on the router.
Get Community SNMP get community string as configured on the router. The default is
“public” if you leave it blank.
Timeout Number of seconds after which connection attempts will stop. The default
is 3.
NOTE: Additional fields become available if you select SNMPv3 as the version.
In the User Defined Properties tab, you can add properties not directly supported by the
NorthStar UI.
Click Submit to complete the device addition. The new device appears in the device list.
Modify Device
The Modify button opens the Modify Device(s) window, which has the same fields as
the Add New Device window. Edit the fields you want to change and click Submit. Click
Save Changes to complete the modification. You can wait until you have completed all
your device modifications before clicking Save Changes; the button becomes active to
flag that there are unsaved changes.
To modify one or more fields in the same way for multiple devices, Ctrl-click or Shift-click
to select the devices in the device list and click Modify. On the resulting Modify Device(s)
window, you can make changes that affect all the selected devices.
Delete Device
To delete a device, select the device row in the Device List and click Delete. A confirmation
window is displayed as shown in Figure 190 on page 274.
NOTE: If you delete a device from the liveNetwork profile, you are not deleting
it from the live network itself. You can restore the device to the profile using
the Sync with Live Network button.
With device grouping, you can group devices in ways that are independent of topological
groups. Since Netconf task collection supports collection by device profile group, one
way to use this functionality is to manage Netconf sub-collection tasks by group.
When you click the down arrow beside the Device Grouping icon, two options are
displayed: Toggle Device Grouping and Manage Device Grouping.
Select Toggle Device Grouping to switch between displaying the devices in the Device
List grouped by their assigned groups and displaying them ungrouped. Figure 191 on
page 275 shows an example of a device list in which device grouping is toggled on.
To return to the ungrouped device list, select Disable Grouping. To display just the group
names without displaying the group members, select Collapse All. To return to the
grouped display in which the group members are also shown, select Expand All.
Select Manage Device Grouping to open the Manage Device Groups window as shown
in Figure 192 on page 275.
Existing groups are listed on the left side. Click the name of an existing group to display
its members in the “Devices in the group” list on the right. All other devices are listed in
the “Select device(s) from” list where you can select devices to add.
To delete a group, click the name of an existing group on the left and click Delete Group(s)
at the bottom. This action removes the group assignment from the member devices.
Groups with no members are automatically deleted.
To create a new group and add devices to it, type the group name at the top and click
the New Group check box. All devices are then listed in the “Select device(s) from” list
so you can choose the group members. Figure 193 on page 276 shows an example. If you
add devices that are already assigned to a group, the new assignment removes the
previous assignment.
You can also assign a group to a device profile in the Add New Device or Modify Device(s)
window (General tab). The Manage Device Groups window is particularly useful for
making changes to multiple devices at once.
• Click the down arrow at the top center of the pane. Click the up arrow to maximize the
pane.
• Click the down arrow in the top right corner of the pane. Click the up arrow to maximize
the pane.
Click and drag the top margin of the pane to resize the pane.
If you select RFC Compliant for the PCEP version in the device profile, you are indicating
that you want to use IANA code points for Association, S2LS Objects, and
P2MP-IPv4-Lsp-Identifier TLV. This also makes the system compliant with RFC 8231/8281.
NOTE: You must be using Junos OS Release 19.x or later to run NorthStar in
RFC 8231/8281 compliant mode.
The following example indicates that PCEP version 2 (RFC compliant mode) is configured
for the three listed devices:
NOTE: The IP address should be the PCC IP used to establish the PCEP
session. This is the IP address the PCC uses as the local IP address and is the
same as appears in the PCC_IP field in the web UI device profile for the device.
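For reference, a minimal sketch of the PCE statement stanza on a Junos OS PCC, in which
the local-address statement supplies this IP address (the PCE name and the addresses
are examples only):
[edit protocols pcep]
pce northstar {
    local-address 10.0.0.1;
    destination-ipv4-address 10.0.0.100;
    destination-port 4189;
    pce-type active stateful;
}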
If you select Non-RFC for the PCEP version in the device profile, you are indicating that
you do not want to use RFC 8231/8281 compliance and IANA code points for Association,
S2LS Objects, and P2MP-IPv4-Lsp-Identifier TLV. This selection sets the pcc_version to
0 in the pcc_version.config file, and is the default setting. This setting is appropriate for:
• Any device that is not RFC 8231/8281 compliant, such as devices running a release of
Junos OS older than Release 19.x.
• Any RFC 8231/8281 compliant device that you do not want running in RFC compliant
mode. This is referred to as running in compatibility mode. On these routers, you must
also configure the following statements:
Whenever a device profile is updated in the web UI, the pcc_version.config file is also
updated and reloaded, so there is no need to manually restart the PCE server to capture
the updates.
Logical Systems
Some networks include both a physical topology and a logical topology. An example of
how that could look in the NorthStar UI topology view is shown in Figure 194 on page 278.
In this example, the physical and logical layers are not connected, but they could be,
depending on your network.
Logical nodes (and LSPs that incorporate logical nodes) are fully supported by NorthStar,
but somewhat differently from physical nodes:
• LSPs originating from a logical system cannot be discovered directly by PCEP. Instead,
you run device collection for physical devices and any corresponding LSPs originating
from logical devices are imported into the network information table, under the tunnel
tab. The correlation between the physical and logical systems is established via
device collection.
• In the network information table in NorthStar, display the optional columns Physical
Hostname and Physical Host IP so you can confirm that NorthStar successfully
correlated the physical and logical nodes when it performed device collection.
• Because PCEP is not supported for logical devices, it is not possible for NorthStar to
obtain real time topology updates for logical devices. We recommend periodic device
collection to compensate for this limitation.
• Device collection must be run before you attempt to create LSPs that incorporate
logical nodes because otherwise, the logical nodes are not available as selections for
Nodes A and Z in the Create LSP window. In the Create LSP window, you must specify
Netconf as the Provisioning Method (not PCEP) when the LSP incorporates logical
nodes.
For more information about logical nodes and provisioning LSPs that incorporate them,
see “Provision LSPs” on page 104.
Configuring MD5
MD5 can be used to secure PCEP sessions as described in RFC 5440, Path Computation
Element (PCE) Communication Protocol (PCEP). MD5 authentication must be configured
on both the NorthStar Controller (in the Device Profile window) and on the router (using
the Junos OS CLI). The authentication key must be the same in both configurations. The
device profile acts as a “white list” when MD5 is configured. The NorthStar Controller
does not report LSPs or provision LSPs for the routers not included in the device profile.
NOTE: The first time MD5 is enabled on the router, all PCEP sessions to
routers are reset to apply MD5 at the system level. Whenever the MD5 enabled
status on a router or the MD5 key changes, that router resets the PCEP
connection to the NorthStar Controller.
The first four steps are done in the NorthStar Controller Device Profile window, to configure
MD5 for the PCEP session to a router.
3. In the MD5 String field (Access tab), enter the MD5 key string. Click Modify.
4. Click Save Changes to save your changes. The PCEP MD5 Configured field for the
router changes from no to yes.
NOTE: All the routers in the network must have their PCEP IP addresses
in the profile. When you save your changes, you might receive a warning,
reminding you of this.
5. The final step is done in the Junos OS CLI on the router, to configure MD5 for the PCEP
session to the NorthStar Controller.
Use the set authentication-key command at the [edit protocols pcep pce] hierarchy
level to configure the MD5 authentication key.
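For example (the PCE name northstar is a placeholder; use the PCE name already defined
in your PCEP configuration, and a key string that matches the PCEP MD5 String entered
in the device profile):
set protocols pcep pce northstar authentication-key "your-md5-key"
Commit the configuration for the change to take effect; as noted above, the router resets
the PCEP connection to the NorthStar Controller whenever the MD5 status or key changes.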
In the NorthStar Controller UI, navigate to Administration > Task Scheduler to manage
the NorthStar task types. The Task List at the top of the window shows the already
scheduled and completed tasks. In the Task List, sorting and column selection options
become available when you hover over a column heading and click the down arrow that
appears. To display optional columns, hover over any column heading, click the down
arrow that appears, and highlight Columns to display a list of available columns. Click
the check box for any columns you want to add to the display. You can also rearrange
the columns that are displayed in the list by clicking and dragging a column heading.
Click Add to begin creating a new task. Using the Task Group drop-down menu, you can
either display the task type options alphabetically (select All Tasks) or by group. Then
use the Task Type drop-down menu to select a particular task to add.
Figure 195 on page 280 shows the Create New Task window with the Task Group menu
expanded.
The task types are described in Table 57 on page 280, organized by group. For most task
types, links to additional information are provided.
In addition to the tasks you can create, there are system tasks launched by NorthStar to
run scripts. You cannot add or modify these tasks, but you might see them in the Task
List. In the Type column, they are listed as ExecuteScript. In the optional System Task
column, they are listed as true.
See “NorthStar Analytics Raw and Aggregated Data Retention” on page 261 for more
information about these system tasks.
You can schedule tasks to recur periodically using the scheduling window that is part of
the Create New Task process. Figure 196 on page 283 shows an example of the Create
New Task - Schedule window. You can execute a task only once, or repeat it at
configurable intervals.
Instead of scheduling recurrence, you can, for most task types, select to chain the task
after an already-scheduled recurring task, so it launches as soon as the other task
completes. When you select the “Chain after another task” radio button, a drop-down
list of recurring tasks is displayed from which you can select.
The NorthStar Controller Analytics features require that the Controller periodically connect
to the network in order to obtain the configuration of network devices. It uses this
information to correlate IP addresses, interfaces, and devices.
NOTE: For topologies that include logical nodes, periodic device collection
is necessary because there are no real time PCEP-based updates for logical
devices.
To schedule a new device collection task, navigate to Administration > Task Scheduler.
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 197 on page 285.
2. Enter a name for the task and use the drop-down menu to select the task type Device
Collection. Click Next to display the first Create New Task – Device Collection window
as shown in Figure 198 on page 286.
On the Task Options tab, you can choose All devices, Selective devices, or Groups as
a method for specifying the devices to be included in the collection task. For all three
of those choices, the following fields are available:
• Parsing reads the content of the files and updates the network model accordingly.
If parsing is not selected, the configuration files are collected on the server, but not
used in the model.
• Archive raw data (the default is yes). Raw data is archived in Elasticsearch.
If you select “Selective devices”, you are presented with a list of all the devices available
to be included in the collection task. Figure 199 on page 287 shows an example.
Click the check boxes corresponding to the devices you want to include.
If you opt for Groups, you are presented with a list of the device groups that have been
configured in Administration > Device Profile, as shown in Figure 200 on page 288.
Click the check boxes corresponding to the groups you want to include.
On the Collection Options tab, you can select the types of data to be collected or
processed as shown in Figure 201 on page 289.
Click the appropriate check boxes to select or deselect options. You can also Select
All or Deselect All. By default, the first four options listed are collected.
Equipment CLI data is collected in device collection tasks that include the Equipment
CLI option. The Process Equipment CLI option in Network Archive collection parses
the Equipment CLI data collected in device collection and generates the Inventory
Report available in both the NorthStar Controller and the NorthStar Planner.
To view Hardware Inventory in the NorthStar Planner, you must run device collection
with the Equipment CLI collection option (collects the inventory data) and you must
run Network Archive collection with the Process Equipment CLI option (processes
the inventory data).
Each of the options results in the collection task capturing the results of various show
commands. Table 58 on page 290 lists the show command output captured for each
option.
Interface
• Juniper Networks devices:
show configuration system host-name | display inheritance brief
show interfaces | no-more
• Cisco devices:
show running | include hostname
show interfaces
show ipv4 interface
Tunnel Path
• Juniper Networks devices:
show configuration system host-name | display inheritance brief
show mpls lsp statistics ingress extensive logical-router all | no-more
• Cisco devices:
show running | include hostname
show mpls traffic-eng tunnels detail role head
Transit Tunnel
• Juniper Networks devices:
show configuration system host-name | display inheritance brief
show rsvp session ingress detail logical-router all | no-more
• Cisco devices:
show running | include hostname
show mpls traffic-eng tunnels backup
Switch CLI
• Juniper Networks devices:
show configuration system host-name | display inheritance brief
show lldp neighbor | no-more
• Cisco devices:
show running | include hostname
show cdp neighbor detail
Equipment CLI
• Juniper Networks devices:
show configuration system host-name | display inheritance brief
show version | no-more
show chassis hardware | no-more
show chassis fpc | no-more
show chassis hardware models | no-more
• Cisco devices:
show version
show diag
show env all admin
show inventory
show inventory raw
3. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 202 on page 291. You can opt to run the
collection only once, or to repeat it at configurable intervals. The default interval is 15
minutes.
Instead of scheduling recurrence, you can select to chain the task after an
already-scheduled recurring task, so it launches as soon as the other task completes.
When you select the “Chain after another task” radio button, a drop-down list of
recurring tasks is displayed from which to select.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History. Figure 203 on page 292 shows an example of the Summary tab.
Figure 204 on page 292 shows an example of the Status tab.
The device collection data is sent to the PCS server for routing and is reflected in the
Topology view. See “Viewing Analytics Data in the Web UI” on page 293 for more
information.
There are views and work flows in the web UI that support visualization of collected data
so it can be interpreted and acted upon.
Data collectors must be installed and devices must be configured to push the data to
the data collectors. The health monitoring feature also uses information from the data
collectors.
To view information about installed data collectors, navigate to Administration > System
Health.
NOTE: Interface Utilization, RSVP Live Utilization, and RSVP Utilization are
mutually exclusive. You can display only one of those three in the topology
at a time.
In the Topology Settings menu bar on the right side of the window, click the Tools icon
and select the Link Label tab. You will see link label settings that pertain to interface
utilization, as shown in Figure 206 on page 294. The topology then displays the percentage
utilization of the links in the format percentage AZ::percentage ZA. Additional labels are
also available to display information that is collected through a Netconf collection task,
and is used by the analytics feature. Interface names, interface bandwidth values, and
shape bandwidth values are some examples.
Reaching the Traffic Chart from the Topology or the Network Information Table
You can right-click a link in the topology and select View Interface Traffic to see traffic
statistics over time for the link. In this chart, you can select to display one or both
interfaces, adjust the time range, and select the units as bps or % (of the link bandwidth).
You can also view LSP events on the right side of the chart. Double click an event to see
event details. A bell icon in the chart indicates that one or more events took place. Click
a bell to filter the list of events on the right to include only those that occurred at that
timestamp. Figure 207 on page 295 shows the traffic view chart.
NOTE: The events displayed are only those pertaining to the LSPs currently
routed through the link being viewed, as opposed to all events for all LSPs in
the network.
You can also reach this traffic-over-time view by right-clicking a link in the network
information table (Link tab) and selecting View Interface Traffic. To see LSP traffic over
time, click the Tunnel tab in the network information table. Right-click on an LSP and
select View LSP Traffic. You can choose multiple objects at a time if you want to compare
them. The top portion of the chart shows traffic over time. The bottom portion shows
packets over time.
Also available by right-clicking a link in either the topology or the network information
table are the options to View Link Events and View Interface Delay.
NOTE: Interface delay information is only available if the devices have been
prepared:
• The rpm-log.slax script has been loaded, to send the results of the probes
to the data collectors.
At any given time, the NorthStar Controller is aware of the paths of all LSPs in the network.
Periodically, the controller uses the reported link delays to compute the end-to-end LSP
delay as the simple sum of all link delays in the LSP path.
pcs_lsp_latency_interval_sec=seconds
The seconds variable is the interval at which you want PCViewer to update the LSP
delay metric.
2. Restart PCViewer:
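The restart mechanism depends on how the NorthStar processes are supervised on your
installation; the following is a sketch that assumes PCViewer runs under supervisord (the
group and process name shown are assumptions, so confirm them first):
# List the running processes to find the exact PCViewer process name
supervisorctl status | grep -i pcviewer
# Restart it; northstar_pcs:PCViewer is an example name only
supervisorctl restart northstar_pcs:PCViewer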
Once the functionality is enabled, you can right-click a tunnel in the network information
table in Topology view and select View Delay. The data is also available in the Tunnels
view. Figure 208 on page 296 shows the LSP delay view, using data for the Silver-102-104
LSP as an example.
Performance View
The Performance View shows you how utilization has changed over time. In the left pane
of the topology view, select Performance from the drop-down menu. If you click the
Interface Utilization check box, for example, and then move the slide bar in the upper left
corner of the topology map, you see the link colors change to reflect the utilization at the
time. Interface utilization is calculated using Layer 3 bandwidth (interface utilization =
Layer 3 traffic divided by Layer 3 bandwidth). This is different from RSVP bandwidth
which is initialized via BGP-LS and automatically adjusted. The two bandwidth values
(RSVP and Layer 3) can be the same, but in some networks, they are not.
Figure 209 on page 297 shows the location of the slide bar.
Node Ingress Traffic, Node Egress Traffic, and Interface Delay are also available, in addition
to Interface Utilization. In the case of Node Ingress and Node Egress Traffic, the size of
the node on the map is proportional to the amount of traffic being handled by the node.
Ingress and egress traffic for a node are not always equal. Generally, most traffic is simply
forwarded by a router (as opposed to being generated or consumed), so it might seem
reasonable to expect that the sum of all ingress traffic would be roughly equal to the
sum of all egress traffic. But in practice, nodes can replicate traffic, as is commonly the
case for multicast traffic or unknown unicast traffic when doing L2 Ethernet forwarding.
In such cases, the total egress traffic can (and should) exceed the total ingress traffic.
For all four options (Node Ingress Traffic, Node Egress Traffic, Interface Delay, Interface
Utilization), the Settings button at the bottom of the left pane allows you to select how
far back you want the data to show, with options up to 30 days back. Figure 210 on page 297
shows these options.
Nodes View
Two columns of data in the Nodes View reflect a snapshot of traffic in bps and pps over
the last hour. This is for quick reference in case there are conditions that require attention.
You can see this snapshot for both Interfaces and Tunnels. Figure 211 on page 298 shows
these two columns.
Top traffic is the computed top N traffic over X period of time by Node, Interface Traffic,
or Interface Delay. You can select N and X by clicking on the currently selected values in
the lower right corner of the display. In the resulting Top Traffic Settings
window, you can select the number of top elements you want to see, and the period of
time they cover. Figure 213 on page 300 shows Top Interface Traffic with the top 10
elements over the past hour displayed. To modify the settings in this example, you would
click on Top 10, Past Hour at the bottom of the display, which would bring up the Top
Traffic Settings window where you could make different setting selections.
You can select any or all of the top traffic options (Node, Interface Traffic, Interface
Delay) to be included in the display. Multiple selections appear as tabs that you can
toggle between. There is interactivity between the topology map and the top traffic
charts: you can select a line item on the chart and it will highlight the corresponding object
on the topology map. You can also mouse over a line item on the chart to display details
about the object as shown in Figure 214 on page 301.
Netconf Persistence
Netconf Persistence allows you to create collection tasks to discover information from
device configurations (such as hostname and interface name), and from operational
commands (such as LSP on non-PCEP enabled devices). The Analytics features rely on
the results of Netconf collection to associate statistics with the correct network elements.
As an alternative to provisioning LSPs (P2P or P2MP) using PCEP (the default), you can
also provision LSPs using Netconf.
1. Ensure that port 830 is allowed by any external firewall being used. Port 830 enables
Netconf communication between the NorthStar Controller and other devices.
2. Populate the Device Profile (only the Admin user can perform this step). From the
More Options menu in the upper right corner of the NorthStar Controller web UI,
navigate to Administration > Device Profile. Figure 215 on page 302 shows the More
Options menu.
3. Highlight a device in the Device List and click Modify. The Modify Device(s) window
is displayed.
NOTE: If these fields are not populated, the Netconf connection will fail.
• Management IP: The IP address NorthStar Controller can use to establish Netconf
sessions.
• Vendor: Use the drop-down menu to select the vendor for the device (Juniper, Cisco,
and so on).
• Login and Password: Enter the credentials that allow the NorthStar Controller to
authenticate with the router.
5. Enable NorthStar Controller to use Netconf by clicking the check box beside Enable
Netconf in the Netconf section of the Access tab.
7. Click Save Changes. The button is red to signal that there are unsaved changes and
turns black once the save operation is complete.
8. In the Topology view, verify that the NorthStar Controller can establish a Netconf
session. On the Node tab in the network information table, look for the NETCONF
Status column. You can select that column for display if it is not already selected by
clicking the down arrow next to any column heading, and selecting Columns. The
Netconf status should be reported as Up.
NOTE: In Junos OS Release 15.1F6 and later, you can enable the router to
send P2MP LSP information to a controller (like the NorthStar Controller)
in real time, automatically. Without that configuration, you must run live
network collection tasks for NorthStar to learn about newly provisioned
P2MP LSPs.
In the Junos OS, the configuration is done in the [set protocols pcep]
hierarchy for PCEs and for PCE groups:
Data collection via SNMP is a useful alternative for collecting network statistics in systems
where Juniper Telemetry Interface (JTI) is not available or in multi-vendor systems. Data
collection via SNMP enables the following performance management features:
• Collection of interface statistics using SNMP collection tasks that poll the SNMP MIB
(Juniper Networks and Cisco devices).
• Collection of LSP statistics using SNMP collection tasks that poll the SNMP MIB (Juniper
Networks and Cisco devices).
Cisco LSP statistics can also be collected by polling the interface MIB because in Cisco
devices, an LSP tunnel is a special interface entry.
• Collection of P2MP LSP statistics by polling the Juniper LSP MIB for Juniper Networks
devices, or by polling the standard IFMIB for Cisco devices. Even older Juniper devices
are supported.
• Collection of class of service (CoS) statistics. To collect this data for Juniper Networks
devices, the SNMP collector polls the JUNIPER-COS-MIB.
• The specific OIDs that are collected in SNMP collection tasks are described in
Table 59 on page 303, Table 60 on page 304, and Table 61 on page 304.
1.3.6.1.4.1.2636.3.15.4.1.5 jnxCosQstatQedBytes
1.3.6.1.4.1.2636.3.15.4.1.9 jnxCosQstatTxedBytes
1.3.6.1.4.1.2636.3.15.4.1.23 jnxCosQstatTotalRedDropBytes
1.3.6.1.4.1.2636.3.15.7.1.5 jnxCosIngressQstatQedBytes
1.3.6.1.4.1.2636.3.15.7.1.9 jnxCosIngressQstatTxedBytes
1.3.6.1.4.1.2636.3.15.7.1.23 jnxCosIngressQstatTotalRedDropBytes
1.3.6.1.4.1.9.9.166.1.1.1 CISCO-CLASS-BASED-QOS-MIB::cbQosServicePolicyTable
1.3.6.1.4.1.9.9.166.1.6.1 CISCO-CLASS-BASED-QOS-MIB::cbQosPolicyMapCfgTable
1.3.6.1.4.1.9.9.166.1.5.1 CISCO-CLASS-BASED-QOS-MIB::cbQosObjectsTable
1.3.6.1.4.1.9.9.166.1.7.1 CISCO-CLASS-BASED-QOS-MIB::cbQosCMCfgTable
1.3.6.1.4.1.9.9.166.1.15.1.1.10 CISCO-CLASS-BASED-QOS-MIB::cbQosClassMapStats.cbQosCMPostPolicyByte64
NOTE: NorthStar does not support collection of SR-TE LSP statistics via
SNMP.
Installation of Collectors
The collectors are installed on the same machine as the NorthStar Controller application
server (single-server deployment) by the install.sh script when you install the controller
itself. Once installed, you can see the collector group of processes:
See “Device Profile and Connectivity Testing” on page 264 for detailed instructions on
setting up devices with SNMP parameters, and also on testing SNMP connectivity to
those devices.
To schedule a new SNMP collection task, navigate to Administration > Task Scheduler
from the More Options menu.
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 197 on page 285.
2. Enter a name for the task and use the drop-down menu to select the task type as
SNMP Traffic Collection. Click Next.
The next window displayed offers you the opportunity to collect SNMP traffic for all
devices, select devices, or groups. Figure 217 on page 308 shows this window.
NOTE: You would deselect devices for which you are using Cisco MDT.
3. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 218 on page 309. At least two collections are
necessary for the calculation of statistics. We recommend setting up automatic
recurrence of the task every 10 to 20 minutes.
Instead of scheduling recurrence, you can select to chain the task after an
already-scheduled recurring task, so it launches as soon as the other task completes.
When you select the “Chain after another task” radio button, a drop-down list of
recurring tasks is displayed from which to select.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History. An example of the Summary tab is shown in Figure 219 on page 310. An example
of the Status tab is shown in Figure 220 on page 310.
Figure 219: Collection Results for SNMP Traffic Collection Task, Summary Tab
Figure 220: Collection Results for SNMP Traffic Task, Status Tab
NOTE: You can have only one SNMP traffic collection task per NorthStar
server. If you attempt to add a second, the system will prompt you to
approve overwriting the first one.
By default, NorthStar only collects statistics from the following interfaces when running
SNMP traffic collection:
The interface types that can be discovered on devices and that should be used by traffic
collection can be modified by editing the /opt/northstar/data/northstar.cfg file. Use a
text editing tool such as vi, and use a comma as a separator. For example:
• physical: Physical interfaces, expressed as the interface name without a dot (.) in it.
These supported interface types are also commented in the northstar.cfg file.
NOTE: You should not have both SNMP collection and MDT enabled for the
same devices.
How it Works
The MDT Collector is provided as part of the NorthStar Analytics RPM bundle and resides
on the Analytics node. Supervisord manages the MDT Collector process as part of the
Analytics Supervisord group.
Figure 221 on page 313 illustrates the general data flow when using MDT.
• The scope and schedule of the streams is in accordance with the configuration on the
devices.
• NorthStar MDT supports UDP and TCP transport protocols. For encoding, it supports
GPB, self-describing GPB (KV-GPB), and JSON.
• When the pipeline receives the telemetry data via UDP or TCP, it decodes the data and
pushes it to the NorthStar output plugin for processing. This happens inside the MDT
Collector.
• The NorthStar plugin converts the data into JTI format, encodes it as a JSON document
and pushes it out of the MDT Collector to Logstash via UDP.
• Logstash processes the JSON document and then pushes the information to
Elasticsearch and RabbitMQ for use by NorthStar Controller.
• The NorthStar components retrieve the traffic data by leveraging the NorthStar REST
API.
[root@ns]# vi /opt/northstar/data/northstar.cfg
.
.
.
#MDT Collector Logging level info | debug
mdt_log_level = debug
telemetry model-driven
destination-group Northstar
sensor-group mdt
sensor-path
Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface/latest/generic-counters
sensor-path
Cisco-IOS-XR-mpls-te-oper:mpls-te/signalling-counters/head-signalling-counters/head-signalling-counter
subscription mdt
destination-id Northstar
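A fuller sketch of the destination-group and subscription stanzas, with placeholder values
(replace <collector-address> and <port> with the analytics node address and the port
configured in pipeline.yml, pick the encoding and protocol that match your pipeline
configuration as described below, and set the sample interval, in milliseconds, to suit
your deployment):
telemetry model-driven
 destination-group Northstar
  address-family ipv4 <collector-address> port <port>
   encoding self-describing-gpb
   protocol udp
 subscription mdt
  sensor-group-id mdt sample-interval 30000
  destination-id Northstar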
• The collector-address variable refers to the system (analytics node) where the MDT
collector is running.
• The encoding choice (gpb or self-describing-gpb) does not affect the “encap” setting
within the tcp_northstar or udp_northstar section.
• If you configure TCP as the protocol, the port value in the IOS-XR MDT configuration
must match the port setting in the pipeline configuration. Look for the listen parameter
in the tcp_northstar section in /opt/northstar/data/pipeline/config/pipeline.yml. If you
configure UDP as the protocol, the port value must match that in the udp_northstar
section.
• Using the sensor-path configuration, you can filter based on specified criteria. For
example, to report the statistics for tunnel-te interfaces (created for LSPs):
sensor-path
Cisco-IOS-XR-infra-statsd-oper:infra-statistics/interfaces/interface
[interface-name='tunnel-te*']/latest/generic-counters
You can collect link delay statistics using Link Latency collection tasks that use a ping
operation (Juniper Networks and Cisco devices).
When a link latency collection task is run, the collector issues a ping from one device to
the endZ address of all links to gather round trip time (RTT) statistics. The RTT is the
amount of time in milliseconds from when the ping packet is sent to the time a reply is
received. The minimum, maximum, and average RTT is calculated based on multiple
pings.
You must run device collection before attempting to run link latency collection. This is
necessary to establish the baseline network information including the interfaces and
LSPs. Once device collection has been run, link latency collection tasks have the
information they need.
To schedule a new link latency collection task, navigate to Administration > Task
Scheduler from the More Options menu.
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 222 on page 316.
2. Enter a name for the task and use the drop-down menu to select the task type as Link
Latency. Click Next.
In the next window, enter the number of times you would like the ping operation to
repeat. Figure 223 on page 317 shows this window.
Figure 223: Device Collection Task, Step 2 for Link Latency Collection
3. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 98 on page 134. You can opt to run the collection
only once, or to repeat it at configurable intervals. The default interval is 15 minutes.
Instead of scheduling recurrence, you can select to chain the task after an
already-scheduled recurring task, so it launches as soon as the other task completes.
When you select the “Chain after another task” radio button, a drop-down list of
recurring tasks is displayed from which to select.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History. An example of the Summary tab is shown in Figure 225 on page 319. An example
of the Status tab is shown in Figure 226 on page 319.
Figure 225: Collection Results for Link Latency Collection Task, Summary Tab
Figure 226: Collection Results for Link Latency Task, Status Tab
NOTE: You can have only one link latency traffic collection task per
NorthStar server. If you attempt to add a second, the system will prompt
you to approve overwriting the first one.
LDP traffic statistics track the volume of traffic passing through forwarding equivalence
classes. In addition to monitoring the LDP traffic statistics in the NorthStar Controller,
the data can also be imported into the NorthStar Planner for capacity planning and failure
simulation studies.
NOTE: You must run device collection before attempting to run LDP traffic
collection so NorthStar (Toposerver) can discover LDP-enabled links. Learning
which links are LDP-enabled allows NorthStar to compute LDP equal cost
paths between sources and destinations.
NOTE: Currently, the LDP traffic collection task only supports Juniper
Networks Junos OS devices. Even if you specify other devices in the task
setup, this task will only run against Junos OS devices.
The device collection task extracts LDP-enabled interfaces from the Junos OS
configuration at the [protocols ldp] and [protocols mpls] hierarchy levels. ConfigServer
correlates these interfaces with the links discovered by Toposerver.
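For example, LDP-enabled interfaces on a Junos OS device are typically configured with
statements such as the following (the interface name is an example only):
set protocols mpls interface ge-0/0/1.0
set protocols ldp interface ge-0/0/1.0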
To schedule a new LDP traffic collection task, navigate to Administration > Task
Scheduler from the More Options menu.
1. Enter a name for the task and use the drop-down menu to select the task type LDP
Traffic Collection. Click Next to display the first Create New Task – LDP Traffic
Collection window as shown in Figure 227 on page 321.
Under Select Device(s) to be collected, you can choose All devices, Selective devices,
or Groups as a method for specifying the devices to be included in the collection task.
For all three of those choices, you can select to use ECMP (the default is yes, with a
value of 6).
If you select “Selective devices”, you are presented with a list of all the devices available
to be included in the collection task. Figure 228 on page 321 shows an example.
Click the check boxes corresponding to the devices you want to include.
If you opt for Groups, you are presented with a list of the device groups that have been
configured in Administration > Device Profile, as shown in Figure 229 on page 323.
Click the check boxes corresponding to the groups you want to include.
2. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 230 on page 324. At least two collections are
necessary for the calculation of demand statistics. We recommend setting up
automatic recurrence of the task every 10 to 20 minutes.
The option to chain the task after an already-scheduled recurring task is available,
but we do not recommend it for LDP collection. LDP collection is better handled as a
recurring, independent task.
3. Click Submit to complete the addition of the new collection task and add it to the
Task List. The LDP traffic collection task executes show ldp traffic-statistics at
configured intervals for the selected devices. Elasticsearch stores and indexes the
collected data for further query.
Click a completed task in the task list to display the results in the lower portion of
the window. There are three tabs in the results window: Summary, Status, and History.
An example of the Summary tab is shown in Figure 231 on page 325. An example of the
Status tab is shown in Figure 232 on page 325.
Figure 231: Example Collection Results for LDP Traffic Collection Task, Summary Tab
Figure 232: Example Collection Results for LDP Traffic Collection Task, Status Tab
NOTE: You can have only one LDP traffic collection task per NorthStar
server. If you attempt to add a second, the system will prompt you to
approve overwriting the first one.
4. Once the traffic collection task has completed, view the collected data in the Demand
tab of the network information table. The Node, Link, and Tunnel tabs are always
displayed. The other tabs are optionally displayed. Click the plus sign (+) in the tabs
heading bar to add a tab as shown in Figure 233 on page 326.
The Demand tab lists the LDP Forwarding Equivalent Class (FEC) data, including
Node A, Node Z, IP A, IP Z, and Bandwidth. NorthStar creates the FEC names using
the source name and the destination IP address. Figure 234 on page 326 shows an
example of the Demand tab.
5. To view LDP-enabled links in the topology map, navigate to Protocols in the left pane
and check LDP as shown in Figure 235 on page 327.
In the Task Scheduler window, you can launch a collection task that creates a network
model in a database for use in the NorthStar Planner. You also have the option to archive
the network model.
Tunnel design attributes that are configured in the web UI are inherited by the NorthStar
Planner, even though they are never pushed to the router. When you run Network Archive
device collection, the tunnel information in the Planner (which came from the router) is
merged with the tunnel information in the Controller (which includes design attributes
that are not pushed to the router). The merged version is then available in the Planner.
The following design attributes that are configured in the Advanced, Design, and
Scheduling tabs of the Provision LSP window in the web UI are inherited by the Planner
via network archive collection:
• Design tab: Routing Method, Max Delay, Max Hop, Max Cost
1. Click Add in the upper right corner. The Create New Task window is displayed as shown
in Figure 197 on page 285.
2. Enter a name for the task and use the drop-down menu to select the task type Network
Archive. Click Next to display the first Create New Task – Network Archive window
as shown in Figure 237 on page 329.
Click the check boxes beside the options in this window to select or deselect them:
Equipment CLI data is collected in Netconf collection tasks that include the
Equipment CLI option. The Process Equipment CLI option in Network Archive
collection parses the Equipment CLI data collected in Netconf collection and
generates the Inventory Report available in both the NorthStar Controller and the
NorthStar Planner.
To view Hardware Inventory in the NorthStar Planner, you must run Netconf
collection with the Equipment CLI collection option (collects the inventory data)
and you must run Network Archive collection with the Process Equipment CLI option
(processes the inventory data).
This option makes the created model available in the NorthStar Planner under the
Archives tab in the Network Browser window. Otherwise, the result of the Network
Archive collection task is reflected in the new spec file for the Latest Network Archive
in the NorthStar Planner, but it is overwritten by the next Latest Network Archive.
This option loads the aggregated results of LDP traffic collection into the network
model created by the Network Archive task. The LDP traffic is loaded as demand
with 24 periods of statistics. You can choose up to 60 days’ worth of LDP traffic to
be aggregated, using the specified aggregation statistic, into 24 data points that
represent hours of the day. The options in the Aggregation Statistic drop-down
menu are described in Table 62 on page 330.
NOTE: This option is only applicable if you have scheduled LDP traffic
collection.
Max: For each of the 24 hours, the maximum of the sample values within that hour is used.

Average: For each of the 24 hours, the samples within that hour are averaged. If there are N samples for a particular hour, the result is the sum of all the sample values divided by N.

80th, 90th, 95th, 99th Percentile (X percentile): For each of the 24 hours, the X percentile value of the samples within that hour is used. The X percentile is computed from an equation that takes into consideration the average for the hour and the standard deviation. The result is that X percent of the sample values lie at or below the calculated value.
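As an illustration of the percentile statistics (using the common normal-approximation form of the equation; this guide does not spell out the exact formula NorthStar uses), suppose the samples within a given hour have an average of 800 Mbps and a standard deviation of 100 Mbps. A 95th percentile value computed as average + 1.645 × standard deviation would be approximately 965 Mbps, meaning roughly 95 percent of the sample values lie at or below that figure.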
Selecting the Include LDP Traffic data option is required for full utilization and
manipulation of traffic load data in the Network Planner.
3. Click Next to proceed to the scheduling parameters. The Create New Task - Schedule
window is displayed as shown in Figure 202 on page 291. You can opt to run the
collection only once, or to repeat it at configurable intervals. The default interval is 15
minutes.
Instead of scheduling recurrence, you can select to chain the task after an
already-scheduled recurring task, so it launches as soon as the other task completes.
When you select the “Chain after another task” radio button, a drop-down list of
recurring tasks is displayed from which to select.
4. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History. Figure 239 on page 332 shows an example of the Status tab for a complete
Network Archive collection task.
The network archive files are stored in the Cassandra database and can be accessed
from there through the NorthStar Planner. See Network Browser Window and Network
Browser Recently Opened and Archived Networks in the NorthStar Planner User Guide.
Netflow Collector
• Configuring Flow Aggregation to Use IPFIX Flow Templates on MX, vMX and T Series
Routers, EX Series Switches and NFX250
• Configuring Flow Aggregation to Use IPFIX Flow Templates on PTX Series Routers
The Junos OS on the routers samples the traffic, builds a flow table, and sends the details
of the flow table to NorthStar periodically.
NorthStar (Netflow daemon) receives the data from the routers, decodes the records,
performs additional aggregation of the data and creates the demands, stores the data
in the NorthStar database, and shares the information with the PCS. The data is then
available for report creation in the NorthStar Controller and for report creation, planning,
and modeling in the NorthStar Planner.
NorthStar monitors AS and VPN traffic, and supports both IPv4 and IPv6.
• Initial and periodic device collection to create and maintain an accurate VPN model
in NorthStar. We recommend you execute device collection at least daily.
The following sections describe using Netflow Collector in the NorthStar Controller:
Netflow Collector on the NorthStar Controller requires that the network routers be
configured for flow monitoring (Netflow v9 or v10) according to the router operating
system documentation.
NOTE: At present, Juniper devices and Cisco IOS-XR devices are supported,
with both Netflow v9 and v10.
At the forwarding-options hierarchy level:

forwarding-options {
    sampling {
        nfv9-ipv4 {
            input {
                rate 1;
                run-length 0;
            }
            family inet {
                output {
                    flow-inactive-timeout 15;
                    flow-active-timeout 60;
                    flow-server 172.16.18.1 {
                        port 9000;
                        version9 {
                            template {
                                nfv9-ipv4;
                            }
                        }
                    }
                    inline-jflow {
                        source-address 10.1.0.104;
                    }
                }
            }
        }
    }
}
At the forwarding-options hierarchy level:

forwarding-options {
    sampling {
        instance {
            nfv10-ipv4 {
                input {
                    rate 1;
                    run-length 0;
                }
                family inet {
                    output {
                        flow-inactive-timeout 15;
                        flow-active-timeout 60;
                        flow-server 172.16.18.1 {
                            port 9000;
                            version-ipfix {
                                template {
                                    nfv10-ipv4;
                                }
                            }
                        }
                        inline-jflow {
                            source-address 10.1.0.104;
                        }
                    }
                }
            }
        }
    }
}
Netflow Collector is installed as part of the Analytics package with NorthStar Controller.
See Installing Data Collectors for Analytics in the NorthStar Controller Getting Started
Guide.
Sampling is configured on the ingress interface. Flows enter the ingress PE which sends
netflow records to netflowd. The netflow records include the information that determines
the flow’s destination, or “prefix”.
On the NorthStar server where you installed the NorthStar analytics package, there are
some settings in the /opt/northstar/data/northstar.cfg file that can be customized for
Netflow, all of which begin with “netflow_”, as described in Table 63 on page 337.
Setting Notes
netflow_collector_address The IP address of the server on which the NorthStar analytics package was installed (which
might or might not be the same server on which the NorthStar application was installed).
SSL enabled = 1
netflow_log_level The level of information that is captured in the log file at /opt/northstar/logs/netflowd.msg.
The default level is “info”. If more information is required, you can set the level to “debug”, and
the log will include all the flows received from each device, identified by source IP address. You
can also see, for each flow, all the fields that netflowd processes and parses.
netflow_sampling_interval The default SAMPLING-INTERVAL, if the router does not provide the SAMPLING-INTERVAL in
the Template FlowSet.
NOTE: If you are using Netflow v10 (IPFIX) in the network, you must manually configure
netflow_sampling_interval in /opt/northstar/data/northstar.cfg. NorthStar does not support
automatic extraction of the IPFIX sampling interval.
netflow_publish_interval Publishing interval to both Elasticsearch and the PCS. Traffic is aggregated per publishing
interval. The default interval is 60 seconds. This value must be equal to or greater than the
reporting time configured in the router (flow-active-timeout value) to ensure that for every
publishing interval, all active flows are reported.
netflow_workers See Slave Collector Installation for Distributed Data Collection in the NorthStar Controller Getting
Started Guide for more information about workers.
If enabled, netflowd sends one final update after a flow is no longer active, reporting the
bandwidth as 0. If disabled, the bandwidth value is not reported once a flow has become
inactive, so the last reported active value is the last value displayed.
netflow_stats_interval Interval at which statistics are printed to the log file. The default is -1 (never).
netflow_as_demands Netflowd does not generate AS demands by default. Unless you specify otherwise, AS demands
do not appear through the REST API or through Demand Reports in the UI, even if valid netflow
records are being exported.
If the setting is missing from the northstar.cfg file altogether, AS demand generation is disabled.
NOTE: If you make changes to these settings, you must restart the netflowd
process for the changes to take effect.
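For reference, the following is a minimal sketch of how a few of these settings might appear in /opt/northstar/data/northstar.cfg. The values shown are illustrative assumptions only, not recommended defaults; confirm the correct values for your deployment before editing the file.

netflow_collector_address=10.0.0.5
netflow_log_level=info
netflow_sampling_interval=1000
netflow_publish_interval=60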
• When the key is present, it is the VRF name for which the ingress interface is
configured.
• This key is absent if there is no VPN associated with the demand. In this case, the
ingress interface is configured in the default routing table.
• This key displays as “NONE” if netflowd is not able to determine whether the ingress
interface is configured on the default routing table or on a VRF. That would happen,
for example, if NorthStar was not able to collect the snmp-indexes for the interfaces.
The values of the keys are reflected in the names of the demands in the table. Some
examples:
• vmx102_10.1.0.10/32_vpn100_IP
Selecting a demand in the table highlights the corresponding routing path in the topology
map.
NOTE: Currently, the ability to preview the path on the topology map is limited
to RSVP-based LSPs (not segment routing). A future release will enhance
this feature.
From the network information table, you can delete demands, but you cannot add or
modify them. Demands are never automatically deleted.
1. The Demand tab is not displayed by default. Click the plus (+) sign in the network
information table header and select Demand from the drop-down menu as shown
in Figure 240 on page 340.
Figure 240: Adding the Demand Tab to the Network Information Table
Figure 234 on page 326 shows an example of the Demand tab data.
For each demand, the Demand tab lists the demand properties. Whether the demand
is associated with a VPN or not is shown in the Owner field. If there is no VPN
associated with the demand, the Owner field is blank. The Most Recent Update column
is updated at every publishing interval. If it is not updated, the flow is no longer active.
2. Right-click a demand in the table and select View Demand Traffic. This opens a new
tab in the network information table, displaying a chart with demand traffic over time.
You can adjust the time period in the upper left corner of the chart display, to show
the past hour, day, seven days, or a custom time period.
3. The Service tab in the network information table displays information about VPNs in
the network which might be associated with some of the flows. The Service tab is not
displayed by default. Click the plus sign (+) on the network information table header
and select Service to open the Service tab. The table includes one row per VPN.
Figure 242 on page 341 shows an example of the Service tab data.
The Nodes column indicates how many PE routers are associated with the VPN, and
the Node List column lists them. You can right-click a VPN row and select Show
Detail to see information about each interface on each node. From the detail window,
you can right-click on an interface and select Show Demand Traffic to see the demand
traffic chart for the specific interface. You can adjust the time period in the upper left
corner of the chart display, to show the past hour, day, seven days, or a custom time
period.
You can also select Show Demand Traffic at the VPN level in the Service tab by right-clicking
the VPN row. The resulting chart displays the total traffic for the VPN.
Right-click a VPN on the Service tab and select Enable Animated Selection to see
an animated VPN service view in the topology map window. This provides a view of
the network in the context of the VPNs, indicating which parts of the network the
VPNs service. To leave the animated view and return the topology map to the original
layout, right-click again on the VPN and select Disable Animated Selection.
4. You can create a Demand Aging task in the Task Scheduler (Administration > Task
Scheduler) to regularly remove inactive demands from the UI.
For example, if you create a Demand Aging task with a maximum age of ten minutes,
the task deletes all demands that have been inactive for ten minutes or more.
To create a Demand Aging task, click Add in the Task Scheduler. Enter a name for
the task and select Demand Aging from the drop-down menu in the Task Type field.
Click Next to proceed to the maximum age window.
• Use the drop-down menu in the Units field to select seconds, minutes, hours, or
days.
Click Next to proceed to the scheduling window. Like many other task types, you can
schedule this task to recur automatically on a regular basis.
For more information about the Task Scheduler, see “Introduction to the Task
Scheduler” on page 280.
1. Click Add to begin creating a new task. Figure 243 on page 342 shows the Create New
Task window. Give the new task a name in the Name field. Use the Task Type
drop-down menu to select Demand Reports.
2. The report types are shown in Figure 245 on page 344. In the Report Types tab, select
which reports you want to generate. If you select Include AS Demands, you have the
additional option of choosing from a number of AS reports.
• Range for the last 24 hours (gives you data for the last 24 hours)
If you want a report that includes data for specific hours, you would select the date
range option, and specify the hours you want included as shown in
Figure 246 on page 345.
The traffic is loaded as demand with a configurable number of statistical periods. The
options in the Aggregation Statistic drop-down menu are described in
Table 64 on page 345.
Average: For each interval, the samples within that interval are averaged. If there are N samples for a particular interval, the result is the sum of all the sample values divided by N.

Max: For each interval, the maximum of the sample values within that interval is used.

Min: For each interval, the minimum of the sample values within that interval is used.

80th, 90th, 95th, 99th Percentile (X percentile): For each interval, the X percentile value of the samples within that interval is used. The X percentile is computed from an equation that takes into consideration the average for the interval and the standard deviation. The result is that X percent of the sample values lie at or below the calculated value.

fullrange: The whole range is one interval. Produces one aggregated data point for the entire range.

daily: Each day is one interval. Produces one aggregated data point per day.

hourly: Each hour is one interval. Produces one aggregated data point per hour.
Also in this window, you have the opportunity to specify that you want to group data
in the reports according to the groups captured in your saved topology layouts. You
can select all layouts or specific ones. If you select more than one layout, reports are
generated for each.
Figure 247 on page 346 shows the Create New Task – Demand Reports window in
which two saved layouts are selected for data grouping.
Figure 247: Demand Reports Task, Select Saved Layouts for Grouping
See “Group and Ungroup Selected Nodes” on page 57 for information about creating
groups and using the auto-group function, and “Manage Layouts” on page 53 for
information about saving layouts.
5. Click Submit to complete the addition of the new collection task and add it to the
Task List. Click a completed task in the list to display the results in the lower portion
of the window. There are three tabs in the results window: Summary, Status, and
History. Figure 249 on page 348 shows an example of the Status tab for a completed
Demand Reports collection task. The status notes indicate the locations of the reports
that were generated.
The reports are also available by navigating to Applications > Reports. An example
list of reports is shown in Figure 250 on page 348.
You can configure NorthStar Controller to automatically reroute LSPs based on interface
traffic or link delay conditions. The parameters that trigger rerouting can be configured
on a global level (applied to all links in the network, in both directions), and you can
override global thresholds with link-specific thresholds.
Link Utilization Threshold (%): User-defined, global parameter applied to all links for Layer 3 link utilization violation scenarios. When this threshold is exceeded, the controller starts moving LSPs away from the congested links. It is a mandatory parameter to enable this controller behavior when Layer 3 link utilization violations occur. Once the link utilization crosses the defined threshold and no previous rerouting processes have occurred within the defined Reroute Interval, the rerouting process is triggered. Where configured: Administration > Analytics.

Packet Loss Threshold (%): When packet loss on a link exceeds this threshold, the link is considered unstable and rerouting of traffic to avoid the link is triggered. To achieve this, NorthStar creates a maintenance event for each link, temporarily making the link unavailable for traffic. The event name reflects that it was triggered by packet loss. The event start time is immediate (the link displays a red M indicating it is in maintenance mode) and the end time is set for one hour later. Because this type of maintenance event requires manual completion, the end time is not significant. Where configured: Administration > Analytics.

Link Utilization Threshold, Packet Loss Threshold, and Link Delay Increase: User-defined, per-link parameters. Link Utilization Threshold and Packet Loss Threshold work like the global parameters except they are applied to individual links as configured. Where configured: Modify an existing link from the network information table (Link tab) by selecting the row and clicking Modify at the bottom of the window.

Max Delay: User-defined, local parameter applied to each LSP. It is a mandatory parameter to trigger any LSP delay violation rerouting process. When an LSP is configured with a Max Delay, and there is also a global link delay threshold value, the controller checks the LSP upon LSP delay violations. Where configured: Applications > Provision LSP (Design tab), or modify an existing tunnel from the network information table by selecting the tunnel row and clicking Modify at the bottom of the window. The REST API can also be used.
For LSP rerouting based on link utilization (bandwidth), you can specify a reroute interval
(in minutes) and a link utilization threshold (%). The reroute interval is used to pace
back-to-back rerouting events. LSPs are rerouted when both of the following conditions
are true:

• The link utilization has crossed the configured link utilization threshold.

• No rerouting process has occurred within the configured reroute interval.
To avoid unnecessary network churn, NorthStar only considers rerouting an LSP with
traffic or a bandwidth reservation when the link utilization threshold has been crossed.
When a threshold has been crossed, LSPs with a lower priority setting and higher traffic
are the first to be rerouted, before LSPs with a higher priority setting and lower traffic. If
LSP traffic data is available, NorthStar uses it over bandwidth reservation for determining
whether an LSP should be rerouted. If LSP traffic data is not available, NorthStar
considers LSP bandwidth reservation to make the determination.
When utilization for a link crosses a configured threshold, it appears in the Timeline as
an event, as does any subsequent rerouting.
Figure 251 on page 352 shows the Provision LSP Design tab. The thresholds in this window
use the delay information to derive the metrics of the LSPs, which are, in turn, used by
the devices when choosing which LSPs to use to forward traffic to a given destination.
Max Delay is used by the NorthStar Path Computation Server (PCS) to constrain the
routing path of an LSP. If this constraint is not met, the LSP is not routed by PCS. Max
Delay is also used by the NorthStar Telemetry module to trigger LSP rerouting.
High Delay Threshold is used to penalize the LSP so that it is not used by the data plane as
long as there are other parallel LSPs with lower metrics. The availability of the LSP is not
restored as soon as the delay drops back below the High Delay Threshold; it is restored only
when the LSP delay reaches the Low Delay Threshold. This prevents excess impact on the
network. When the LSP delay drops below the Low Delay Threshold, its metric is set to Low Delay.
For LSP rerouting to work, you must select Reroute: Enabled in this window, which causes
the additional fields to be displayed. Click Save to configure the global settings.
Link level thresholds are set in the Link tab of the network information table. Select a link
and click Modify at the bottom of the table. The Modify Link window is displayed as
shown in Figure 253 on page 354.
In the Analytics tab, you can set any or all of the three thresholds on a per-direction basis
(A-to-Z, Z-to-A) for that specific link.
NOTE: Interface A and Interface Z fields must be populated in a link for the
Analytics tab to be available in the Modify Link window. This information
comes from Netconf collection, so you can either wait for the next scheduled
Netconf collection task to run, or you can create a collection task that runs
immediately.
In the topology map, you can choose to display interface utilization, measured delay, or
packet loss labels for the links. Click the Settings icon on the right side of the topology
view to open the Topology Settings window where you can control link labels and other
display options.
• Configuring Routers to Send JTI Telemetry Data and RPM Statistics to the Data Collectors
(NorthStar Controller Getting Started Guide)
Troubleshooting Strategies
In the Web UI, the Dashboard View and Event View (Applications>Event View) provide
information that can help with troubleshooting.
For additional information to help identify and troubleshoot issues with the Path
Computation Server (PCS) or NorthStar Controller application, you can access the log
files.
NOTE: If you are unable to resolve a problem with the NorthStar Controller,
we recommend that you forward the debug files generated by the NorthStar
Controller debugging utility to JTAC for evaluation. Currently all debug files
are located in subdirectories under the u/wandl/tmp directory.
To collect debug files, log in to the NorthStar Controller CLI, and execute the
command u/wandl/bin/system-diagnostic.sh filename.
The output is generated and available from the /tmp directory in the
filename.tbz2 debug file.
Table 67 on page 359 lists the NorthStar Controller log files most commonly used to
identify and troubleshoot issues with the PCS and PCE. All log files are located under the
/opt/northstar/logs directory, with one exception. The pcep_server.log file is located in
/var/log/jnc.
configServer.msg Log files related to maintaining LSP configuration states in NorthStar Controller. LSP
configuration states are updated by collecting show commands and NETCONF provisioning.
netconfd.msg Log files related to communication between NorthStar Controller and devices via NETCONF
sessions.
pcep_server.log Located in /var/log/jnc. Log files related to communication between the PCC and the
PCE in both directions.
pcs.log Log files related to the PCS, which includes any event received by PCS from Toposerver
and any event from Toposerver to PCS including provisioning orders. This log also contains
any communication errors as well as any issues that prevent the PCS from starting up
properly.
toposerver.log Contains the record of the events between the PCS and topology server, the topology
server and NTAD, and the topology server and the PCE server.
NOTE: Any message forwarded to the pcshandler.log file is also forwarded to the pcs.log
file.
This document includes strategies for identifying whether an apparent problem stems
from the NorthStar Controller or from the router, and provides troubleshooting techniques
for those problems that are identified as stemming from the NorthStar Controller.
Before you begin any troubleshooting investigation, confirm that all system processes
are up and running. A sample list of processes is shown below. Your actual list of processes
could be different.
NOTE: To stop, start, or restart all processes, use the service northstar stop,
service northstar start, and service northstar restart commands.
To access system process status information from the NorthStar Controller Web UI,
navigate to More Options>Administration and select System Health.
The current CPU %, memory usage, virtual memory usage, and other statistics for each
system process are displayed. Figure 255 on page 361 shows an example.
NOTE: Only processes that are running are included in this display.
Table 68 on page 361 describes each field displayed in the Process Status table.
Field Description
Process The name of the NorthStar Controller process.
PID The Process ID number.
User The NorthStar Controller user permissions required to access information about this process.
Group NorthStar Controller user group permissions required to access information about this
process.
CPU Time The amount of time the CPU was used for processing instructions for the process.
CMD Displays the specific command options for the system process.
A list of NorthStar system log and message files is displayed, a truncated example of
which is shown in Figure 256 on page 363.
3. Click the log file or message file that you want to view.
4. To open the file in a separate browser window or tab, click View Raw Log in the pop-up
window.
5. To close the pop-up window and return to the list of log and message files, click X in
the upper right corner of the pop-up window.
Table 67 on page 359 lists the NorthStar Controller log files most commonly used to
identify and troubleshoot issues with the PCS and PCE.
pcep_server.log: Log entries related to the PCEP server. The PCEP server maintains the PCEP session. The log contains information about communication between the PCC and the PCE in both directions. Location: /var/log/jnc.

pcs.log: Log entries related to the PCS. The PCS is responsible for path computation. This log includes events received by the PCS from the Toposerver, including provisioning orders. It also contains notification of communication errors and issues that prevent the PCS from starting up properly. Location: /opt/northstar/logs.

toposerver.log: Log entries related to the topology server. The topology server is responsible for maintaining the topology. These logs contain the record of the events between the PCS and the Toposerver, the Toposerver and NTAD, and the Toposerver and the PCE server. Location: /opt/northstar/logs.
Table 70 on page 364 lists additional log files that can also be helpful for troubleshooting.
All of the log files in Table 70 on page 364 are located under the /opt/northstar/logs
directory.
pcep_server.log Log files related to communication between the PCC and the PCE in both directions.
pcs.log Log files related to the PCS, which includes any event received by PCS from Toposerver
and any event from Toposerver to PCS including provisioning orders. This log also contains
any communication errors as well as any issues that prevent the PCS from starting up
properly.
toposerver.log Contains the record of the events between the PCS and topology server, the topology
server and NTAD, and the topology server and the PCE server.
NOTE: Any message forwarded to the pcshandler.log file is also forwarded to the
pcs.log file.
To see logs related to the Junos VM, you must establish a telnet session to the router.
The default IP address for the Junos VM is 172.16.16.2. The Junos VM is responsible for
maintaining the necessary BGP, ISIS, or OSPF sessions.
Empty Topology
Figure 257 on page 365 illustrates the flow of information from the router to the Toposerver
that results in the topology display in the NorthStar Controller UI. When the topology
display is empty, it is likely this flow has been interrupted. Finding out where the flow was
interrupted can guide your problem resolution process.
The topology originates at the routers. For NorthStar Controller to receive the topology,
there must be a BGP-LS, ISIS, or OSPF session from one of the routers in the network to
the Junos VM. There must also be an established Network Topology Abstractor Daemon
(NTAD) session between the Junos VM and the Toposerver.
1. Using the NorthStar Controller CLI, verify that the NTAD connection between the
Toposerver and the Junos VM was successfully established as shown in this example:
NOTE: Port 450 is the port used for Junos VM to Toposerver connections.
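For example, you might check from the NorthStar server for an established session on TCP port 450. This is only a sketch; the local address, port numbers, and output format depend on your installation:

[root@northstar ~]# netstat -na | grep ":450 "
tcp        0      0 172.16.16.1:58321      172.16.16.2:450        ESTABLISHED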
In the following example, the NTAD connection has not been established:
ntad_host=172.16.16.2
Trying 172.16.16.2...
Connected to 172.16.16.2.
Escape character is '^]'.
northstar_junosvm (ttyp0)
login: northstar
Password:
If the topology-export statement is missing, the Junos VM cannot export data to the
Toposerver.
3. Use Junos OS show commands to confirm whether the BGP, ISIS, or OSPF relationship
between the Junos VM and the router is ACTIVE. If the session is not ACTIVE, the
topology information cannot be sent to the Junos VM.
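For example, depending on which protocol you are using for topology acquisition, you might run one of the following standard Junos OS commands on the router (output not shown here):

user@router> show bgp summary
user@router> show isis adjacency
user@router> show ospf neighbor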
4. On the Junos VM, verify whether the lsdist.0 routing table has any entries:
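For example, using the standard Junos OS command (the entries shown in your output will differ):

northstar@northstar_junosvm> show route table lsdist.0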
If you see only zeros in the lsdist.0 routing table, there is no topology that can be sent.
Review the NorthStar Controller Getting Started Guide sections on configuring topology
acquisition.
5. Ensure that there is at least one link in the lsdist.0 routing table. The Toposerver can
only generate an initial topology if it receives at least one NTAD link event. A network
that consists of a single node with no IGP adjacency with other nodes (as is possible
in a lab environment, for example), will not enable the Toposerver to generate a
topology. Figure 258 on page 367 illustrates the Toposerver’s logic process for creating
the initial topology.
If an initial topology cannot be created for this reason, the toposerver.log generates
an entry similar to the following example:
Incorrect Topology
One important function of the Toposerver is to correlate the unidirectional link (interface)
information from the routers into bidirectional links by matching source and destination
IPv4 Link_Identifiers from NTAD link events. When the topology displayed in the NorthStar
UI does not appear to be correct, it can be helpful to understand how the Toposerver
handles the generation and maintenance of the bidirectional links.
Generation and maintenance of bidirectional links is a complex process, but here are
some key points:
• For the two nodes constituting each bidirectional link, the Node ID that was assigned
first (and therefore has the lower Node ID number) is given the Node A designation,
and the other node is given the Node Z designation.
NOTE: The Node ID is assigned when the Toposerver first receives the
Node event from NTAD.
• The Toposerver receives a Link Update message when a link in the network is added
or modified.
• The Toposerver receives a Link Withdraw message when a link is removed from the
network.
• The Link Update and Link Withdraw messages affect the operational status of the
nodes.
• The node operational status, together with the protocol (IGP versus IGP plus MPLS)
determine whether a link can be used to route LSPs. For a link to be used to route LSPs,
it must have both an operational status of UP and the MPLS protocol active.
Missing LSPs
When your topology is displaying correctly, but you have missing LSPs, take a look at the
flow of information from the PCC to the Toposerver that results in tunnels being added
to the NorthStar Controller UI, as illustrated in Figure 259 on page 369. The flow begins
with the configuration at the PCC, from which an LSP Update message is passed to the
PCEP server by way of a PCEP session and then to the Toposerver by way of an Advanced
Message Queuing Protocol (AMQP) connection.
1. Look at the toposerver.log. The log prints a message every 15 seconds when it detects
that its connection with the PCEP server has been lost or was never successfully
established. Note that in the following example, the connection between the
Toposerver and the PCEP server is marked as down.
Toposerver log:
Apr 22 16:21:35.016721 user-PCS TopoServer Warning, did not receive the PCE
beacon within 15 seconds, marking it as down. Last up: Fri Apr 22 16:21:05
2016
Apr 22 16:21:35.016901 user-PCS TopoServer [->PCS] PCE Down: Warning, did not
receive the PCE beacon within 15 seconds, marking it as down. Last up: Fri
Apr 22 16:21:05 2016
Apr 22 16:21:50.030592 user-PCS TopoServer Warning, did not receive the PCE
beacon within 15 seconds, marking it as down. Last up: Fri Apr 22 16:21:05
2016
Apr 22 16:21:50.031268 user-PCS TopoServer [->PCS] PCE Down: Warning, did not
receive the PCE beacon within 15 seconds, marking it as down. Last up: Fri
Apr 22 16:21:05 2016
2. Using the NorthStar Controller CLI, verify that the PCEP session between the PCC and
the PCEP server was successfully established as shown in this example:
NOTE: Port 4189 is the port used for PCC to PCEP server connections.
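For example, you might check on the NorthStar server for established sessions on TCP port 4189. This is only a sketch; the addresses shown are illustrative:

[root@northstar ~]# netstat -na | grep ":4189"
tcp        0      0 10.0.0.5:4189          10.0.1.101:54312       ESTABLISHED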
Knowing that the session has been established is useful, but it does not necessarily
mean that any data was transferred.
3. Verify whether the PCEP server learned about any LSPs from the PCC.
In the far right column of the output, you see the number of LSPs that were learned.
If this number is 0, no LSP information was sent to the PCEP server. In that case, check
the configuration on the PCC side, as described in the NorthStar Controller Getting
Started Guide.
showing both the PCC_SYNC_COMPLETE message and the PCEP IP address that
NorthStar might or might not recognize:
• Manually input the unrecognized IP address in the device profile in the NorthStar Web
UI by navigating to More Options > Administration > Device Profile.
• Ensure there is at least one LSP originating on the router, which will allow Toposerver
to associate the PCEP session with the node in the TED database.
Once the IP address problem is resolved, and the Toposerver is able to successfully
associate the PCEP session with the node in the topology, it adds the PCEP IP address
to the node attributes as can be seen in the PCS log:
When an LSP is being provisioned, the PCS server computes a path that satisfies all the
requirements for the LSP, and then sends a provisioning order to the PCEP server. Log
messages similar to the following example appear in the PCS log while this process is
taking place:
"include-any":0},"setup-priority":7,"reservation-priority":7,"ero":[{"ipv4-address":"11.102.105.2"},{"ipv4-address":"11.105.107.2"},
{"ipv4-address":"11.114.117.1"}]}}#012]#012}
Apr 25 10:06:44.802500 user-PCS PCServer provisioning order sent, status = SUCCESS
Apr 25 10:06:44.802519 user-PCS PCServer [->TopoServer] Save LSP action,
id=928380025 event=Provisioning Order(ADD) sent request_id=928380025
The LSP controller status is PENDING at this point, meaning that the provisioning order
has been sent to the PCEP server, but an acknowledgement has not yet been received.
If an LSP is stuck at PENDING, it suggests that the problem lies with the PCEP server.
You can log into the PCEP server and configure verbose log messages which can provide
additional information of possible troubleshooting value:
pcep_cli
set log-level all
There are also a variety of show commands on the PCEP server that can display useful
information. Just as with Junos OS syntax, you can enter show ? to see the show
command options.
If the PCEP server successfully receives the provisioning order, it performs two actions:
The PCEP server log would show an entry similar to the following example:
The LSP controller status changes to PCC_PENDING, indicating that the PCEP server
received the provisioning order and forwarded it on to the PCC, but the PCC has not yet
responded. If an LSP is stuck at PCC_PENDING, it suggests that the problem lies with the
PCC.
If the PCC receives the provisioning order successfully, it sends a response to the PCEP
server, which in turn, forwards the response to the PCS. When the PCS receives this
response, it clears the LSP controller status completely, indicating that the LSP is fully
provisioned and is not waiting for action from the PCEP server or PCC. The operational
status (Op Status column) then becomes the indicator for the condition of the tunnel.
The PCS log would show an entry similar to the following example:
When the operational status of the LSP is DOWN, the PCC cannot signal the LSP. This section
explores some of the possible reasons for the LSP operational status to be DOWN.
Utilization is a key concept related to LSPs that are stuck in DOWN. There are two types
of utilization, and they can be different from each other at any specific time:
• Live utilization—This type is used by the routers in the network to signal an LSP path.
This type of utilization is learned from the TED by way of NTAD. You might see PCS
log entries such as those in the following example. In particular, note the reservable
bandwidth (reservable_bw) entries that advertise the RSVP utilization on the link:
• Planned utilization—This type is used within NorthStar Controller for path computation.
This utilization is learned from PCEP when the router advertises the LSP and
communicates to NorthStar the LSP bandwidth and the path the LSP is to use. You
might see PCS log entries such as those in the following example. In particular, note
the bandwidth (bw) and record route object (RRO) entries that advertise the RSVP
utilization on the link:
It is possible for the two utilizations to be different enough from each other that it causes
interference with successful computation or signalling of the path. For example, if the
planned utilization is higher than the live utilization, a path computation issue could arise
in which the PCS cannot compute the path because it thinks there is no room for it. But
because the planned utilization is higher than the actual live utilization, there may very
well be room.
It’s also possible for the planned utilization to be lower than the live utilization. In that
case, the PCC does not signal the path because it thinks there is no room for it.
To view utilization in the Web UI topology map, navigate to Options in the left pane of
the Topology view. If you select RSVP Live Utilization, the topology map reflects the live
utilization that comes from the routers. If you select RSVP Utilization, the topology map
reflects the planned utilization which is computed by the NorthStar Controller based on
planned properties.
A better troubleshooting tool in the Web UI is the Network Model Audit widget in the
Dashboard view. The Link RSVP Utilization line item reflects whether there are any
mismatches between the live and the planned utilizations. If there are, you can try
executing Sync Network Model from the Web UI by navigating to Administration >
System Settings, and then clicking Advanced Settings in the upper right corner of the
resulting window.
NOTE: The upper right corner button toggles between General Settings and
Advanced Settings.
Disappearing Changes
Two options are available in the Web UI for synchronizing the topology with the live
network. These options are only available to the system administrator, and can be
accessed by first navigating to Administration > System Settings, and then clicking
Advanced Settings in the upper right corner of the resulting window.
NOTE: The upper right corner button toggles between General Settings and
Advanced Settings.
Figure 260 on page 375 shows the two options that are displayed.
It is important to be aware that if you execute Reset Network Model in the Web UI, you
will lose changes that you’ve made to the database. In a multi-user environment, one
user might reset the network model without the knowledge of the other users. When a
reset is requested, the request goes from the PCS server to the Toposerver, and the PCS
log reflects:
The Toposerver log then reflects that database elements are being removed:
The Toposerver then requests a synchronization with both the Junos VM to retrieve the
topology nodes and links, and with the PCEP server to retrieve the LSPs. In this way, the
Toposerver relearns the topology, but any user updates are missing. Figure 261 on page 376
illustrates the flow from the topology reset request to the request for synchronization
with the Junos VM and the PCEP Server.
Upon receipt of the synchronization requests, Junos VM and the PCEP server return
topology updates that reflect the current live network. The PCS log shows this information
being added to the database:
Figure 262 on page 377 illustrates the return of topology updates from the Junos VM and
the PCEP Server to the Toposerver and the PCS.
You should use the Reset Network Model when you want to start over from scratch with
your topology, but if you don’t want to lose user planning data when synchronizing with
the live network, execute the Sync Network Model operation instead. With this operation,
the PCS still requests a topology synchronization, but the Toposerver does not delete
the existing elements. Figure 263 on page 377 illustrates the flow from the PCS to the
Junos VM and PCEP server, and the updates coming back to the Toposerver.
Figure 263: Synchronization Request and Model Updates Using Sync Network Model
To enable this debug flag, modify the URL you use to launch the Web UI as follows:
https://server_address:8443/client/app.html?debug=true
NOTE: If you are already in the Web UI, it is not necessary to log out; simply
add ?debug=true to the URL and press Enter. The UI reloads.
Figure 264 on page 379 shows an example of the web browser console with
detailed debugging messages.
Accessing the console varies by browser. Figure 265 on page 379 shows an
example of accessing the console on Google Chrome.
To collect debug files, log in to the NorthStar Controller CLI, and execute the command
u/wandl/bin/system-diagnostic.sh filename.
The output is generated and is available from the /tmp directory in the filename.tbz2
debug file.
The following frequently asked questions (FAQs) are provided to help answer questions
you might have about troubleshooting NorthStar Controller features, functionality, and
behavior.
• Should I use an "in-band" or "out-of-band" management interface for the PCEP session?
NOTE: We also recommend that you use the router loopback IP address
as the PCEP local address with the assumption that the loopback IP address
is also the TE router ID.
• What is an “ethernet” node and why is an “ethernet” node shown even though there are only
two routers on that link?
An “ethernet” node represents a switch or hub in a broadcast environment. Unless explicitly
configured otherwise, OSPF and IS-IS form adjacencies in broadcast mode. Displaying
this “ethernet” node in the network topology makes it possible to detect which parts of the
network have a non-explicit point-to-point Interior Gateway Protocol (IGP) configuration.
• The OSPF Broadcast link doesn't sync up, and the NorthStar Controller UI displays an
isolated router and an isolated Ethernet node. What is the problem here?
Verify that each router's interface that is connected to the isolated subnet is configured
with the family mpls enable statement (for routers running Junos OS).
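For example, on a router running Junos OS (the interface name here is illustrative):

[edit]
user@PE1# set interfaces ge-0/0/1 unit 0 family mpls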
• The PCEP session between the PCC and PCE stays in the "connecting" state. Why isn't
the connection established?
Verify that the PE router has been correctly configured as a PCC, for example:
• Enable external control of LSPs from the PCC router to the NorthStar Controller:
[edit protocols]
user@PE1# set mpls lsp-external-controller pccd
• Specify the NorthStar Controller (northstar1) as the PCE that the PCC connects to,
and specify the NorthStar Controller host external IP address as the destination
address:
[edit protocols]
user@PE1# set pcep pce northstar1 destination-ipv4-address <IP-address>
• Configure the destination port for the PCC router that connects to the NorthStar
Controller (PCE server) using the TCP-based PCEP:
[edit protocols]
user@PE1# set pcep pce northstar1 destination-port 4189
• You must also make sure no firewall (or anything else) is blocking the traffic.
• Does the NorthStar Controller UI show the LSP and topology events in real time?
In most cases, the LSP and topology events are displayed in real time. However, the
PCS can perform some event aggregation to reduce protocol communication between
the server and client if the PCS receives too many events from the network.
• The /var/log/jnc/pcep_server.log file does not contain any information. How can I get
more verbose PCEP logging?
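You can log in to the PCEP server CLI and raise the log level, as shown earlier in this chapter (a sketch; adjust as needed for your environment):

pcep_cli
set log-level all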
http://cassandra.apache.org/doc/latest/
https://wiki.apache.org/cassandra/ArticlesAndPresentations
• DataStax Enterprise
https://docs.datastax.com/en/dse-trblshoot/doc/index.html
In the case of simple loss of connectivity to the Cassandra database, the NorthStar
processes are actually still running, and there is no service disruption for LSPs controlled
by NorthStar or for newly delegated LSPs created on the routers. However, when you
attempt to access the NorthStar web UI, you see an error message such as:
When this error is detected by the web server (nodejs), it switches to fail-safe mode so
users can have view-only access.
In this case, the web server and Toposerver switch to fail-safe mode, providing view-only
access. Toposerver loads the network topology from the latest network snapshot saved
in the file system.
• The PCEP server and Path Computation Server (PCS) remain running. The web server
(nodejs), Toposerver, and task_scheduler remain running, but in fail-safe mode.
• Even if the Cassandra database has been corrupted, fail-safe mode works.
• Even if only one server in a NorthStar cluster is up and running, fail-safe mode works.
• A fail-safe mode landing page is provided in the NorthStar web UI. Admin user login is
required to access the landing page. Figure 266 on page 386 shows the fail-safe mode
landing page. Note the change in color of the top menu bar and the notation, (Safe
Mode), in the upper right corner.
• In fail-safe mode, existing delegated or PCE-initiated LSPs can be rerouted by the PCS
in the event of network outages.
• Toposerver does not use the Cassandra database to load the network model. Instead,
it loads the network model based on the latest network snapshot collected by the
NorthStar file system. During normal NorthStar operation, the file system collects and
stores network snapshots hourly (by default).
• While in fail-safe mode, the status of the NorthStar cluster is displayed for all users
via a banner in the web UI. The NorthStar health reporting function also reports the
status of nodes, even when they are down.
• Once you have restored the cluster to normal operation, you must manually exit fail-safe
mode by restarting nodejs (infra:web), Toposerver, and task_scheduler:
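For example, if the NorthStar processes are managed by supervisord, as is typical, the restart might look like the following sketch. The process names shown here are assumptions and can differ by release; verify them with supervisorctl status before running the commands:

supervisorctl restart infra:web
supervisorctl restart northstar:toposerver
supervisorctl restart northstar:task_scheduler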
Managing the Path Computation Server and Path Computation Element Services on
the NorthStar Controller
To perform administrative tasks, you can run commands from the NorthStar Controller
CLI to stop, start, or restart Path Computation Server (PCS) or Path Computation Element
(PCE) services that run on the NorthStar Controller.
We recommend that you run the PCS restart command when encountering either of the
following scenarios:
• If you suspect that the network model is out-of-sync—for example, when LSPs are still
displayed from the UI but the LSPs are no longer on the router.
• If the admin status of LSPs appears to be stuck in “PENDING” when you attempt to
provision LSPs—from the NorthStar Controller UI, the LSPs are displayed as PENDING
and are not provisioned to router.
1. From the CLI, log in to the NorthStar Controller PCS, for example:
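For example, you might use SSH (the user name and address here are placeholders):

[user@host ~]$ ssh root@<PCS-server-IP-address>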