Nso Development-5.3 PDF
3 Development
First Published: May 17, 2010
Last Modified: November 29, 2019
Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
Tel: 800 553-NETS (6387)
Fax: 408 527-0883
THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS,
INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS.
THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED
WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED
WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of
the UNIX operating system. All rights reserved. Copyright © 1981, Regents of the University of California.
NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL
FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE
OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE
PRACTICE.
IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT
LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS
HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output,
network topology diagrams, and other figures included in the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative content is
unintentional and coincidental.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: http://
www.cisco.com/go/trademarks. Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship
between Cisco and any other company. (1110R)
This product includes software developed by the NetBSD Foundation, Inc. and its contributors.
This product includes cryptographic software written by Eric Young ([email protected]).
This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit http://www.openssl.org/.
This product includes software written by Tim Hudson ([email protected]).
U.S. Pat. No. 8,533,303 and 8,913,519
Copyright © 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019 Cisco Systems, Inc. All rights reserved.
CONTENTS
Hooks
NED API
NAVU API
ALARM API
NOTIF API
HA API
Java API Conf Package
Namespace classes and the loaded schema
Export
NCS Project man pages
NCS Project YANG model
Introduction
Cisco Network Service Orchestrator (NSO) version 5.3 is an evolution of the Tail-f Network Control
System (NCS). Tail-f was acquired by Cisco in 2014. The product has been enhanced and forms the
base for Cisco NSO. Note that the terms 'ncs' and 'tail-f' are used extensively in file names, command-
line command names, YANG models, application programming interfaces (API), etc. Throughout this
document we will use NSO to mean the product, which consists of a number of tools and executables.
These executable components will be referred to by their command line name, e.g. ncs, ncs-netsim,
ncs_cli, etc.
This book describes how to develop applications using NSO as a platform. The target audience is Java
developers or system integrators who need to add service packages or add custom functions to NSO.
This book assumes that the reader has a good understanding of the use of NSO as described in the User
Guide.
Architecture Overview
The NSO architecture is illustrated in Figure 1, “NSO Architecture”.
Figure 1. NSO Architecture
NSO has two main layers: the Device Manager and the Service Manager. They serve different purposes
but are tightly integrated into one transactional engine and database.
The purpose of the Device Manager is to manage different devices in a uniform way. The Device Manager
supports functions like device templates, device groups, configuration changes, configuration view and
audit. The Device Manager does all this in a generic way. The NSO user or developer does not have to
understand the underlying device CLI or similar.
The Device Manager sits north of the Network Element Drivers, NEDs. NEDs abstract away the different
protocols and APIs towards the devices. The NEDs are separate packages. Every individual NED package
is for a specific device OS. NEDs are released and delivered separately from the NSO platform itself.
The NED contains a YANG data-model for the supported device. In this way the Device Manager can deal
with the devices in a uniform YANG way.
The Service Manager lets you use and define service-aware applications, such as IPTV or MPLS VPNs,
that configure devices. Each service type is a package of its own and is loaded into NSO. Service packages are
defined by skilled NSO users, integrators, or Cisco. The important thing is that service packages are
defined exactly according to the specific requirements. They can be modified and re-loaded into a running
system. This gives flexibility in the service portfolio. The main parts of a service package are a YANG
service model and a mapping definition towards the desired configurations. The Service Manager supports
the full life-cycle of a service. Any possible change can be made to a service instance, for example:
creating a VPN, deleting a VPN, adding a VPN leg, changing the bandwidth, migrating a VPN from
one vendor to another, etc. This is enabled by FASTMAP, a patented built-in algorithm that is capable of
deriving the minimum set of device configuration changes from one single mapping definition.
NSO uses a dedicated built-in storage (CDB) for all configuration data. There are a number of advantages
to CDB compared to using some external storage for configuration data. CDB has:
• A solid model on how to handle configuration data in network devices, including a good update
subscription mechanism.
• A TCP based API whereby it is possible to read and subscribe to changes to the network remotely.
• Fast lightweight database access. CDB by default keeps the entire configuration in RAM as well as on
disk.
• Ease of use. CDB is already integrated into NSO, the database is lightweight and has no maintenance
needs. Writing instrumentation functions to access data is easy.
• Automatic support for upgrade and downgrade of configuration data. This is a key feature, which
is useful not only when performing actual up/downgrades on the device. It also greatly simplifies
the development process by allowing individual developers to add/delete items in the configuration
without any impact whatsoever on other developers.
Service applications and NEDs are examples of packages that NSO loads. A package consists of YANG
modules, code, and/or templates. NSO loads these at startup. A package is a controlled way of managing
loading and versioning of custom applications. Packages can be added and upgraded at run-time. See
Chapter 9, NSO Packages in NSO 5.3 User Guide for more information on packages.
Note When running NSO against real devices (not just the NSO network simulator for educational purposes),
make sure you have the correct NED package version from the delivery repository.
The Java code of packages is loaded into the NSO Java VM. The Java VM is a controlled Java
environment that manages NSO Java API applications.
NSO can manage alarms from devices, services and self-management. The alarm life-cycle is handled by
the alarm manager. It also supports operator actions like acknowledgement. See Chapter 6, The Alarm
Manager in NSO 5.3 User Guide for more information on alarms.
The NSO Core engine handles fundamental functions like transactions, HA replication, upgrades/
downgrades, etc. All NSO operations are handled within the transaction manager. The transaction
manager handles nested transactions between services and devices, and is capable of rolling back service
configurations across multiple devices.
All NSO operations pass through a centralised AAA engine. Access control can be defined at any
granularity level. All activities are logged in an audit log. Authentication can be connected to centralised
authentication mechanisms via PAM.
YANG modeling is at the core of NSO. Devices and services are modeled in YANG, and as a result NSO
can render Web UI, CLI, and APIs. NSO is truly a model-driven application. The YANG compiler ncsc
is used to check and compile the YANG files, and it generates a compiled schema into .fxs files. The
compilation also generates the corresponding Web, CLI, and Java interfaces.
Further, when browsing $NCS_DIR/src/ncs/yang you will find models for all aspects of NSO
functionality, for example:
tailf-ncs-alarms.yang: This model defines how NSO manages alarms. The source of
an alarm can be anything, such as an NSO state change, or an SNMP or
NETCONF notification.
tailf-ncs-snmp.yang: This model defines how to configure the NSO northbound SNMP
agent.
tailf-ncs-config.yang: This model describes the layout of the NSO config file, usually
called ncs.conf.
tailf-ncs-packages.yang: This model describes the layout of the file package-meta-
data.xml. All user code, data models, MIBs, and Java code is always
contained in an NSO package. The package-meta-data.xml
file must always exist in a package and describes the package.
These models will be illustrated and briefly explained below. Note that the figures only contain some
relevant aspects of the model and are far from complete. The details of the model are explained in the
respective sections.
A good way to learn the model is to start the NSO CLI and use tab completion to navigate the model. Note
that depending on whether you are in operational mode or configuration mode, different parts of the model
will show up. Also try using TAB to get a list of actions at the level you want, for example devices TAB.
Another way to learn and explore the NSO model is to use the yanger tool to render a tree output from the
NSO model: yanger -f tree --tree-depth=3 tailf-ncs.yang. This will show a tree for the complete model.
Below is a truncated example:
Example 2. Using yanger
$ yanger -f tree --tree-depth=3 tailf-ncs.yang
module: tailf-ncs
+--rw ssh
| +--rw host-key-verification? ssh-host-key-verification-level
| +--rw private-key* [name]
| +--rw name string
| +--rw key-data ssh-private-key
| +--rw passphrase? tailf:aes-256-cfb-128-encrypted-string
+--rw cluster
| +--rw remote-node* [name]
| | +--rw name node-name
| | +--rw address? inet:host
| | +--rw port? inet:port-number
| | +--rw ssh
| | +--rw authgroup -> /cluster/authgroup/name
| | +--rw trace? trace-flag
| | +--rw username? string
| | +--rw notifications
Process Architecture
NSO actually consists of only two processes: the NSO daemon, ncs, and a Java VM, as illustrated in the
following figure.
NSO contains the NSO core with its service and transaction managers. These managers are built on top of the
configuration database, CDB. The FASTMAP algorithm ensures that any changes to the managed devices
are performed efficiently. Southbound interfaces such as NETCONF, CLI and SNMP connect to the
devices. The northbound APIs include WebUI, NETCONF and CLI.
The NSO Java VM runs all Java application components implementing the callbacks. It also runs the Cisco
CLI NED, Generic NED Java code, and the SNMP Notification Receiver. Which Java components run
in a specific NSO installation is defined by the NSO packages that are available for the installation.
By default NSO is configured to start and monitor the Java VM, but this can be reconfigured to be another
program or not controlled by the NSO daemon at all. All communication between the NSO daemon and
the Java VM is performed over TCP sockets; for example, a Java application may use the Java CDB API to
subscribe to configuration data changes.
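As an illustration of this socket-based split, the following self-contained sketch models a command/acknowledge exchange over a local TCP connection. The line-based framing, the INIT_JVM command string, and the _DONE reply are invented for the example; this is not NSO's actual wire protocol.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Toy model of a daemon<->VM control dialogue over a localhost TCP socket.
public class ControlSocketDemo {

    // The "daemon" side sends one command and waits for the acknowledgement.
    public static String handshake() {
        try (ServerSocket daemon = new ServerSocket(0)) {
            Thread vm = new Thread(() -> {
                // The "Java VM" side connects, reads a command, does its work,
                // and acknowledges when done.
                try (Socket s = new Socket("127.0.0.1", daemon.getLocalPort());
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String cmd = in.readLine();      // e.g. "INIT_JVM"
                    out.println(cmd + "_DONE");      // respond when work is complete
                } catch (IOException ignored) { }
            });
            vm.start();
            try (Socket s = daemon.accept();
                 PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()))) {
                out.println("INIT_JVM");
                return in.readLine();                // "INIT_JVM_DONE"
            }
        } catch (IOException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(handshake()); // INIT_JVM_DONE
    }
}
```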
Overview
The NSO Java VM is the execution container for all Java classes supplied by deployed NSO packages.
The classes, and other resources, are structured in jar files, and the specific use of these classes is
described in the component tag in the respective package-meta-data.xml file. Also, as a framework,
it starts and controls other utilities for the use of these components. To accomplish this, a main class,
com.tailf.ncs.NcsMain, implementing the Runnable interface, is started as a thread. This thread
can be the main thread (running in a Java main()) or be embedded into another Java program.
When the NcsMain thread starts it establishes a socket connection towards NSO. This is called the NSO
Java VM control socket. It is the responsibility of NcsMain to respond to command requests from NSO
and pass these commands as events to the underlying finite state machine (FSM). The NcsMain FSM will
execute all actions as requested by NSO. This includes class loading and instantiation as well as registration
and starting of services, NEDs, etc.
When NSO detects the control socket connection from the NSO Java VM, it starts an initialization process:
1 First NSO sends an INIT_JVM request to the NSO Java VM. At this point the NSO Java VM will
load schemas, i.e. retrieve all known YANG module definitions. The NSO Java VM responds when all
modules are loaded.
2 Then NSO sends a LOAD_SHARED_JARS request for each deployed NSO package. This request
contains the URLs for the jars situated in the shared-jar directory in the respective NSO package. The
classes and resources in these jars will be globally accessible to all deployed NSO packages.
3 The next step is to send a LOAD_PACKAGE request for each deployed NSO package. This request contains
the URLs for the jars situated in the private-jar directory in the respective NSO package. These
classes and resources will be private to the respective NSO package. In addition, classes that are referenced
in a component tag in the respective NSO package's package-meta-data.xml file will be instantiated.
4 NSO will send an INSTANTIATE_COMPONENT request for each component in each deployed
NSO package. At this point the NSO Java VM will register a start method for the respective
component. NSO will send these requests in proper start-phase order. This implies that the
INSTANTIATE_COMPONENT requests can be sent in an order that mixes components from different
NSO packages.
5 Last, NSO sends a DONE_LOADING request which indicates that the initialization process is finished.
After this the NSO Java VM is up and running.
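Under the assumption that only the ordering matters (payloads such as jar URLs are omitted), the five steps above can be sketched as a request stream. The Request names come from the steps in the text; the forPackages helper is invented for the example.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the initialization request ordering only; the real exchange
// happens over the control socket and carries jar URLs, component names, etc.
public class InitSequence {
    public enum Request {
        INIT_JVM, LOAD_SHARED_JARS, LOAD_PACKAGE, INSTANTIATE_COMPONENT, DONE_LOADING
    }

    // The request stream NSO sends for n packages with k components each
    public static List<Request> forPackages(int packages, int componentsPerPackage) {
        List<Request> seq = new ArrayList<>();
        seq.add(Request.INIT_JVM);                                 // 1: schemas are loaded
        for (int p = 0; p < packages; p++)
            seq.add(Request.LOAD_SHARED_JARS);                     // 2: shared-jar dirs
        for (int p = 0; p < packages; p++)
            seq.add(Request.LOAD_PACKAGE);                         // 3: private-jar dirs
        for (int c = 0; c < packages * componentsPerPackage; c++)
            seq.add(Request.INSTANTIATE_COMPONENT);                // 4: start-phase order,
                                                                   //    may interleave packages
        seq.add(Request.DONE_LOADING);                             // 5: initialization finished
        return seq;
    }

    public static void main(String[] args) {
        System.out.println(forPackages(2, 1));
    }
}
```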
See the section called “Debugging Startup” for tips on customizing startup behavior and debugging
problems when the Java VM fails to start.
YANG model
The file tailf-ncs-java-vm.yang defines the java-vm container which, along with ncs.conf, is
the entry point for controlling the NSO Java VM functionality. Study the content of the YANG model in
Example 4, “The Java VM YANG model”. For a full explanation of all the configuration data, look at the
YANG file and man ncs.conf.
Many of the nodes beneath java-vm are by default invisible due to a hidden attribute. In order to make
everything beneath java-vm visible in the CLI, two steps are required. First, the following XML snippet
must be added to ncs.conf:
<hide-group>
<name>debug</name>
</hide-group>
Second, the hidden nodes must be made visible in the CLI session using the unhide debug command.
Note The jars in the shared-jar and private-jar directories should NOT be part of the Java classpath.
The purpose of this is, first, to keep integrity between packages, which should not have access to each
other's classes other than the ones contained in the shared jars. Secondly, this way it is possible to hot
redeploy the private jars and classes of a specific package while keeping other packages in a run state.
Should this class loading scheme not be desired, it is possible to suppress it by starting the NSO Java VM
with the system property TAILF_CLASSLOADER set to false.
This will force the NSO Java VM to use the standard Java system classloader. For this to work, all jars from all
deployed NSO packages need to be part of the classpath. The drawback of this is that all classes will be
globally accessible and hot redeploy will have no effect.
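The isolation scheme can be pictured with plain JDK classloaders: one loader for the shared jars, and one child loader per package for its private jars. The sketch below illustrates the delegation pattern only; it is not NSO's actual classloader implementation, and the empty jar URL arrays are placeholders.

```java
import java.net.URL;
import java.net.URLClassLoader;

// Per-package class loading sketch: shared jars in a parent loader,
// private jars in one child loader per package.
public class PackageLoaders {

    // Loader for all shared-jar contents, visible to every package
    public static URLClassLoader sharedLoader(URL[] sharedJars) {
        return new URLClassLoader(sharedJars, PackageLoaders.class.getClassLoader());
    }

    // Loader for one package's private-jar contents; delegates to the shared loader
    public static URLClassLoader privateLoader(URL[] privateJars, ClassLoader shared) {
        return new URLClassLoader(privateJars, shared);
    }

    // Classes reached via the shared parent are one and the same in every package
    public static boolean sharedClassesAreIdentical() {
        try (URLClassLoader shared = sharedLoader(new URL[0]);
             URLClassLoader pkgA = privateLoader(new URL[0], shared);
             URLClassLoader pkgB = privateLoader(new URL[0], shared)) {
            return pkgA.loadClass("java.util.ArrayList")
                == pkgB.loadClass("java.util.ArrayList");
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(sharedClassesAreIdentical()); // true
        // Hot redeploy of one package amounts to discarding that package's
        // private loader and creating a fresh one; the shared loader survives.
    }
}
```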
There are four types of components that the NSO Java VM can handle:
• The ned type. The NSO Java VM will handle NEDs of subtype cli and generic, which are the ones
that have a Java implementation.
• The callback type. These are any forms of callbacks that are defined by the Dp API.
• The application type. These are user-defined daemons that implement a specific
ApplicationComponent Java interface.
• The upgrade type. This component type is activated when deploying a new version of an NSO package
and the NSO automatic CDB data upgrade is not sufficient. See the section called “Writing an
Upgrade Package Component” for more information.
In some situations several NSO packages are expected to use the same code base, e.g. when third-party
libraries are used or the code is structured with some common parts. Instead of duplicating jars in several
NSO packages, it is possible to create a new NSO package, add these jars to the shared-jar directory,
and let the package-meta-data.xml file contain no component definitions at all. The NSO Java VM
will load these shared jars, and they will be accessible from all other NSO packages.
Inside the NSO Java VM each component type has a specific Component Manager. The responsibility
of these managers is to manage a set of component classes for each NSO package. The Component
Manager acts as an FSM that controls when a component should be registered, started, stopped, etc.
Figure 5. Component Managers
For instance, the DpMuxManager controls all callback implementations (services, actions, data providers,
etc.). It can load, register, start, and stop such callback implementations.
The NSO Java VM will start each class in a separate thread. The init() method is called before the thread
is started. The run() method runs in a thread, similar to the run() method in the standard Java Runnable
interface. The finish() method is called when the NSO Java VM wants the application thread to
stop. It is the responsibility of the programmer to stop the application thread, i.e. stop the execution in the
run() method, when finish() is called. Note that making the thread stop when finish() is called
is important so that the NSO Java VM will not hang at a STOP_VM request.
Example 6. ApplicationComponent Interface
package com.tailf.ncs;
/**
* User defined Applications should implement this interface that
* extends Runnable, hence also the run() method has to be implemented.
* These applications are registered as components of type
* "application" in a Ncs packages.
*
* Ncs Java VM will start this application in a separate thread.
* The init() method is called before the thread is started.
* The finish() method is expected to stop the thread. Hence stopping
* the thread is user responsibility
*
*/
public interface ApplicationComponent extends Runnable {
/**
* This method is called by the Ncs Java vm before the
* thread is started.
*/
public void init();
/**
* This method is called by the Ncs Java vm when the thread
* should be stopped. Stopping the thread is the responsibility of
* this method.
*/
public void finish();
}
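A minimal sketch of a component honoring this lifecycle contract. A real component would implement com.tailf.ncs.ApplicationComponent; here a plain Runnable stand-in is used so the stop logic can run without the NSO jars, and the class name PollerApp is invented.

```java
// Lifecycle sketch: init() before the thread starts, run() loops until
// finish() tells it to stop, so a STOP_VM request never hangs.
public class PollerApp implements Runnable {
    private volatile boolean running;

    public void init() { running = true; }       // called before the thread is started

    @Override
    public void run() {
        while (running) {                         // periodic work until finish() is called
            try { Thread.sleep(5); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
        }
    }

    public void finish() { running = false; }     // makes run() return promptly

    // Drives one full lifecycle: init -> start -> finish -> thread exits
    public static boolean lifecycle() {
        try {
            PollerApp app = new PollerApp();
            app.init();
            Thread t = new Thread(app);
            t.start();
            app.finish();
            t.join(2000);
            return !t.isAlive();
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(lifecycle()); // true
    }
}
```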
Example 7. Resource injection
@Resource(type=ResourceType.MAAPI, scope=Scope.INSTANCE)
public Maapi m;
This way the NSO Java VM and the Resource manager can keep control over used resources and also have
the possibility to intervene e.g. close sockets at forced shutdowns.
The Resource manager can handle two types of resources, Maapi and Cdb.
Example 8. Resource types
package com.tailf.ncs.annotations;
/**
* ResourceType set by the Ncs ResourceManager
*/
public enum ResourceType {
MAAPI(1),
CDB(2);
}
For both the Maapi and Cdb resource types, a socket connection is opened towards NSO by the Resource
manager. At a stop, the Resource manager will disconnect these sockets before ending the program.
User programs can also tell the resource manager when its resources are no longer needed with a call to
ResourceManager.unregisterResources().
Example 9. Resource Annotation
package com.tailf.ncs.annotations;
/**
 * Annotation for fields injected by the Resource Manager.
 * Attributes are type, scope and qualifier.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface Resource {
Example 10. Scopes
package com.tailf.ncs.annotations;
/**
* Scope for resources managed by the Resource Manager
*/
public enum Scope {
/**
* Context scope implies that the resource is
* shared for all fields having the same qualifier in any class.
* The resource is shared also between components in the package.
* However, sharing scope is confined to the package, i.e. sharing cannot
* be extended between packages.
* If the qualifier is not given, it becomes "DEFAULT".
*/
CONTEXT(1),
/**
* Instance scope implies that all instances will
* get new resource instances. If the instance needs
* several resources of the same type they need to have
* separate qualifiers.
*/
INSTANCE(2);
}
When the NSO Java VM starts it will receive component classes to load from NSO. Note, that the
component classes are the classes that are referred to in the package-meta-data.xml file. For each
component class the Resource Manager will scan for annotations and inject resources as specified.
However, the package jars can contain many classes in addition to the component classes. These will be
loaded at runtime and will be unknown to the NSO Java VM, and therefore not handled automatically
by the Resource Manager. These classes can also use resource injection, but they need a specific call to the
Resource Manager for the mechanism to take effect. Before the resources are used for the
first time, a call to ResourceManager.registerResources(...) will force
injection of the resources. If the same class is registered several times, the Resource manager will detect
this and avoid multiple resource injections.
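The scan-and-inject mechanism can be pictured with plain reflection. The sketch below defines its own simplified Resource annotation (a stand-in, not com.tailf.ncs.annotations.Resource) and injects a string instead of a real Maapi connection; all names here are invented for the illustration.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Reflection-based injection sketch, mimicking what a resource manager does
// when it scans component classes for annotated fields.
public class InjectionSketch {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Resource { String qualifier() default "DEFAULT"; }

    static class Component {
        @Resource(qualifier = "main")
        public String maapi;   // real code would hold a com.tailf.maapi.Maapi here
    }

    // Scan declared fields and inject a value into each annotated one
    public static void inject(Object target) throws IllegalAccessException {
        for (Field f : target.getClass().getDeclaredFields()) {
            Resource r = f.getAnnotation(Resource.class);
            if (r != null) {
                f.setAccessible(true);
                f.set(target, "resource:" + r.qualifier());  // stand-in for a connection
            }
        }
    }

    public static String demo() {
        try {
            Component c = new Component();
            inject(c);
            return c.maapi;
        } catch (IllegalAccessException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(demo()); // resource:main
    }
}
```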
By default, NSO starts Java with NcsJVMLauncher.main(). This is the recommended way of launching the NSO
Java VM and how it is set up in a default installation.
If there is a need to run the NSO Java VM as an embedded thread inside another program, this can be
done simply by instantiating the class NcsMain and starting this instance in a new thread.
Example 12. Starting NcsMain
NcsMain ncsMain = NcsMain.getInstance(host);
Thread ncsThread = new Thread(ncsMain);
ncsThread.start();
However, with the embedding of the NSO Java VM comes the responsibility to manage the life-cycle of
the NSO Java VM thread. This thread cannot be started before NSO has started and is running or else the
NSO Java VM control socket connection will fail. Also, running NSO without the NSO Java VM being
launched will render runtime errors as soon as NSO needs NSO Java VM functionality.
To be able to control an embedded NSO Java VM from another supervising java thread or program an
optional JMX interface is provided. The main functionality in this interface is listing, starting and stopping
the NSO Java VM and its Component Managers.
JMX interface
Normal control of the NSO Java engine is performed from NSO, e.g. using the CLI. However, the NcsMain
class and all component managers implement JMX interfaces to make it possible to control the NSO Java
VM also using standard Java tools like JVisualVM and JConsole.
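The mechanics are standard JMX. The sketch below registers a toy MBean in the platform MBeanServer and invokes an operation on it in-process, the same machinery a remote JConsole session uses. The Demo/DemoMBean names and the returned component name are invented for the example; they are not NSO's actual MBeans.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Registering and invoking a standard MBean, in the style of the NSO Java VM
// JMX interfaces (listing components, etc.).
public class JmxSketch {

    public interface DemoMBean {
        String[] listPackageComponents();
    }

    public static class Demo implements DemoMBean {
        public String[] listPackageComponents() {
            return new String[] { "l3vpn" };   // placeholder component name
        }
    }

    // A JMX client resolves the MBean by ObjectName and invokes operations on it
    public static String first() {
        try {
            MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("demo:type=JavaVm");
            if (!mbs.isRegistered(name)) {
                mbs.registerMBean(new Demo(), name);
            }
            String[] comps = (String[]) mbs.invoke(name, "listPackageComponents", null, null);
            return comps[0];
        } catch (Exception e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(first()); // l3vpn
    }
}
```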
The JMX interface is configured via the java-vm YANG model (see $NCS_DIR/src/ncs/yang/
tailf-ncs-java-vm.yang) in the NSO configuration. For JMX connection purposes there are four
attributes to configure:
The JMX connection server uses two sockets for communication with a JMX client. The first socket is
the JNDI RMI registry where the JMX Mbean objects are looked up. The second socket is the JMX RMI
server from which the JMX connection objects are exported. For all practical purposes the host/ip for both
sockets are the same and only the ports differ.
In addition to the JMX URL, the JMX user needs to authenticate using a legitimate user/password from
the AAA configuration, for example when connecting with the standard JConsole tool.
/**
* JMX interface - shutdown Ncs java vm main thread
*/
public void shutdown();
/**
* JMX interface - hot redeploy all packages
*/
public void redeployAll();
/**
* JMX interface - list shared jars
*/
public String[] listSharedJars();
}
/**
* JMX interface - list all Application components
*/
public String[] listPackageComponents();
}
/**
* JMX interface - list all callback components
*/
public String[] listPackageComponents();
}
Logging
NSO has extensive logging functionality. Log settings are typically very different for a production
system compared to a development system. Furthermore, the logging of the NSO daemon and the NSO
Java VM is controlled by different mechanisms. During development, we typically want to turn on the
developer-log. The sample ncs.conf that comes with the NSO release has log settings suitable for
development, while the ncs.conf created by a "system install" is suitable for production deployment.
The NSO Java VM uses Log4j for logging and will read its default log settings from a provided
log4j2.xml file in the ncs.jar. Following that, NSO itself has java-vm log settings that are directly
controllable from the NSO CLI. We can do:
admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-trace
admin@ncs(config-logger-com.tailf.maapi)# commit
Commit complete.
This will dynamically reconfigure the log level for the package com.tailf.maapi to trace. Where
the Java logs actually end up is controlled by the log4j2.xml file. By default the NSO Java VM writes to
stdout. If the NSO Java VM is started by NSO, as controlled by the ncs.conf parameter /java-vm/
auto-start, NSO will pick up the stdout of the NSO Java VM and write it to:
admin@ncs(config)# show full-configuration java-vm stdout-capture
java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
• new-session-timeout controls how long NSO will wait for the NSO Java VM to respond to a new
session.
• query-timeout controls how long NSO will wait for the NSO Java VM to respond to a request to get
data.
• connect-timeout controls how long NSO will wait for the NSO Java VM to initialize a Dp connection
after the initial socket connect.
Whenever any of these timeouts trigger, NSO will close the sockets from NSO to the NSO Java VM.
The NSO Java VM will detect the socket close and exit. If NSO is configured to start (and restart) the
NSO Java VM, the NSO Java VM will be automatically restarted. If the NSO Java VM is started by some
external entity, e.g. if it runs within an application server, it is up to that entity to restart the NSO Java VM.
Debugging Startup
When using the auto-start feature (the default), NSO will start the NSO Java VM as outlined in
the section called “Overview”. There are a number of different settings in the java-vm YANG model
(see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) that control what happens when
something goes wrong during the startup.
The two timeout configurations connect-time and initialization-time are most relevant
during startup. If the Java VM fails during the initial stages (during INIT_JVM, LOAD_SHARED_JARS,
or LOAD_PACKAGE) either because of a timeout or because of a crash, NSO will log "The NCS Java VM
synchronization failed" in ncs.log.
Note The synchronization error message in the log will also have a hint as to what happened: "closed" usually
means that the Java VM crashed (and closed the socket connected to NSO), "timeout" means that it failed
to start (or respond) within the time limit. For example if the Java VM runs out of memory and crashes,
this will be logged as "closed".
After logging, NSO will take action based on the synchronization-timeout-action setting:
• log: NSO will log the failure, and if auto-restart is set to true NSO will try to restart the Java VM.
• log-stop (default): NSO will log the failure, and if the Java VM has not stopped already NSO will also try to stop it. No restart action is taken.
• exit: NSO will log the failure, and then stop NSO itself.
If you have problems with the Java VM crashing during startup, a common pitfall is running out of
memory (either total memory on the machine, or heap in the JVM). If you have a lot of Java code (or a
loaded system), perhaps the Java VM did not start in time. Try to determine the root cause: check ncs.log
and ncs-java-vm.log, and if needed increase the timeout.
For complex problems, for example with the class loader, try logging the internals of the startup:
admin@ncs(config)# java-vm java-logging logger com.tailf.ncs level level-all
admin@ncs(config-logger-com.tailf.ncs)# commit
Commit complete.
Setting this will result in a lot more detailed information in ncs-java-vm.log during startup.
When the auto-restart setting is true (the default) it means that NSO will try to restart the Java
VM when it fails (at any point in time, not just during startup). NSO will at most try three restarts within
30 seconds, i.e. if the Java VM crashes more than three times within 30 seconds NSO gives up. You can
check the status of the Java VM using the java-vm YANG model. For example in the CLI:
admin@ncs# show java-vm
java-vm start-status started
java-vm status running
Introduction
NSO is capable of starting one or several Python VMs where Python code in user provided packages can
run.
An NSO package containing a python directory will be considered a Python package. By default,
a Python VM will be started for each Python package that has a python-class-name defined in its
package-meta-data.xml file. In this Python VM the PYTHONPATH environment variable will
point to the python directory in the package.
Several Python packages can be started in the same Python VM if their corresponding package-meta-
data.xml files contain the same python-package/vm-name.
A Python package skeleton can be created by making use of the ncs-make-package command:
YANG model
The tailf-ncs-python-vm.yang defines the python-vm container which, along with ncs.conf, is
the entry point for controlling the NSO Python VM functionality. Study the content of the YANG model
in Example 20, “The Python VM YANG model”. For a full explanation of all the configuration data, look
at the YANG file and man ncs.conf. A description of the most important configuration parameters
follows.
Note that some of the nodes beneath python-vm are by default invisible due to a hidden attribute.
In order to make everything beneath python-vm visible in the CLI, two steps are required. First the
following XML snippet must be added to ncs.conf:
<hide-group>
<name>debug</name>
</hide-group>
Second, the hidden nodes must be made visible in the CLI session using the unhide debug command.
With logging/level, the amount of logged information can be controlled. This is a global setting applied
to all started Python VMs unless explicitly set for a particular VM; see the section called “Debugging of
Python packages”. The levels correspond to the pre-defined Python levels in the Python logging module,
ranging from level-critical to level-debug.
Note Refer to the official Python documentation for the logging module for more information about the log
levels.
The logging/log-file-prefix setting defines the prefix part of the log file path used for the Python VMs. This
prefix is appended with a VM-specific suffix based on the Python package name or the
python-package/vm-name from the package-meta-data.xml file. The default prefix is logs/ncs-python-vm,
so if, for example, a Python package named l3vpn is started, a log file named logs/ncs-python-vm-l3vpn.log
will be created.
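The naming scheme just described amounts to joining the prefix and the VM name with a hyphen. A minimal sketch of that computation, assuming the default prefix:

```python
def python_vm_logfile(prefix, vm_name):
    # NSO appends a suffix based on the package name (or the
    # python-package/vm-name from package-meta-data.xml) to the prefix.
    return '%s-%s.log' % (prefix, vm_name)

print(python_vm_logfile('logs/ncs-python-vm', 'l3vpn'))
# logs/ncs-python-vm-l3vpn.log
```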
The status/start and status/current nodes contain operational data. The status/start command shows
which Python classes, as declared in the package-meta-data.xml file, were
started and whether the outcome was successful. The status/current command shows which
Python classes are currently running in a separate thread. The latter assumes that the user-provided code
cooperates by informing NSO about any threads started by the user code, see the section called “Structure
of the User provided code”.
The start and stop actions make it possible to start and stop a particular Python VM.
Example 21. package-meta-data.xml excerpt
<component>
<name>L3VPN Service</name>
<application>
<python-class-name>l3vpn.service.Service</python-class-name>
</application>
</component>
<component>
<name>L3VPN Service model upgrade</name>
<upgrade>
<python-class-name>l3vpn.upgrade.Upgrade</python-class-name>
</upgrade>
</component>
The component name (L3VPN Service in the example) is a human-readable name of this application
component. It is shown when doing show python-vm in the CLI. The python-class-name should
specify the Python class that implements the application entry point. Note that it must be specified
using Python's dot-notation and be fully qualified (given that PYTHONPATH points to
the package python directory).
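Resolving such a fully qualified name amounts to importing the module part and fetching the class attribute. The helper below is a sketch of how a launcher might interpret the dotted name, demonstrated on a standard library class since an actual package class is not available here:

```python
import importlib

def load_class(dotted_name):
    # Split 'package.module.Class' at the last dot, import the module
    # and fetch the class attribute from it.
    module_name, _, class_name = dotted_name.rpartition('.')
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Demonstrated with a stdlib class; in a package it would be e.g.
# 'l3vpn.service.Service', importable because PYTHONPATH points to
# the package python directory.
cls = load_class('collections.OrderedDict')
print(cls.__name__)  # OrderedDict
```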
Study the excerpt of the directory listing from a package named l3vpn below.
| +-- __init__.py
| +-- l3vpn_ns.py
+-- src
+-- Makefile
+-- yang/
+-- l3vpn.yang
Look closely at the python directory above. Directly under this directory is another directory
named after the package (l3vpn) that contains the user code. This is an important structural choice which
eliminates the risk of code clashes between dependent packages (provided, of course, that all dependent
packages use this pattern).
As you can see, service.py is located according to the description above. There is also an
(empty) __init__.py, which makes Python treat the l3vpn directory as a module.
Note the _namespaces/l3vpn_ns.py file. It is generated from the l3vpn.yang model using the
ncsc --emit-python command and contains constants representing the namespace and the various
components of the YANG model, which the user code can import and make use of.
The service.py file should include a class definition named Service which acts as the component's
entry point. See the section called “The application component” for details.
Notice that there is also a file named upgrade.py, which holds the implementation of the upgrade
component specified in the package-meta-data.xml excerpt above. See the section called “The
upgrade component” for details regarding upgrade components.
class Service(ncs.application.Application):
def setup(self):
# The application class sets up logging for us. It is accessible
# through 'self.log' and is a ncs.log.Log instance.
self.log.info('Service RUNNING')
def teardown(self):
# When the application is finished (which would happen if NCS went
# down, packages were reloaded or some error occurred) this teardown
# method will be called.
self.log.info('Service FINISHED')
The Service class will be instantiated by NSO when started or whenever packages are reloaded. Custom
initialization such as registering service- and action callbacks should be done in the setup() method. If any
cleanup is needed when NSO finishes or when packages are reloaded it should be placed in the teardown()
method.
The existing log functions are named after the standard Python log levels; thus in the example above the
self.log object provides the functions debug, info, warning, error and critical. Where to log, and at what
level, can be controlled from NSO.
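The lifecycle described above can be modelled without the ncs library. The stand-in class below is an illustration only (the names DemoApplication and lifecycle are invented for this sketch); it shows the order in which the framework drives setup() and teardown():

```python
import logging

class DemoApplication:
    """Stand-in for ncs.application.Application (illustration only)."""
    def __init__(self):
        self.log = logging.getLogger(type(self).__name__)
        self.events = []

    def setup(self):
        # Register service and action callbacks here.
        self.events.append('setup')

    def teardown(self):
        # Clean up when NSO stops or packages are reloaded.
        self.events.append('teardown')

def lifecycle(app_class):
    # The framework instantiates the class, calls setup() at start or
    # package reload, and teardown() when the application is finished.
    app = app_class()
    app.setup()
    app.teardown()
    return app.events

print(lifecycle(DemoApplication))  # ['setup', 'teardown']
```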
class Upgrade(ncs.upgrade.Upgrade):
"""An upgrade 'class' that will be instantiated by NSO.
This class can be named anything as long as NSO can find it using the
information specified in <python-class-name> for the <upgrade>
component in package-meta-data.xml.
Arguments:
cdbsock -- a connected CDB data socket for reading current (old) data.
trans -- a ncs.maapi.Transaction instance connected to the init
transaction for writing (new) data.
The method should return True (or None, which means that a return
statement is not needed) if everything was OK.
If something went wrong the method should return False or throw an
error. The northbound client initiating the upgrade will be alerted
Normally the logging objects provided by the Python APIs are used. They are based on the standard Python
logging module. This makes it possible to adjust the logging if needed, e.g. getting a module-local
logger to increase logging granularity.
The default logging level is set to info. For debugging purposes it is very useful to increase the logging
level:
$ ncs_cli -u admin
admin@ncs> config
admin@ncs% set python-vm logging level level-debug
admin@ncs% commit
This sets the global logging level and will affect all started Python VMs. It is also possible to set the
logging level for a single package (or multiple packages running in the same VM), which will take
precedence over the global setting:
$ ncs_cli -u admin
admin@ncs> config
admin@ncs% set python-vm logging vm-levels pkg_name level level-debug
admin@ncs% commit
The debugging output is printed to a separate file for each package, with log files named ncs-
python-vm-pkg_name.log
As seen above, the command used to start Python is simply python. Here we describe a couple of options for
deciding which Python NSO should start.
The new start-command will take effect upon the next restart or configuration reload.
Introduction
NSO is capable of starting user provided Erlang applications embedded in the same Erlang VM as NSO.
The Erlang code is packaged into applications which are automatically started and stopped by NSO
if they are located at the proper place. NSO will search all packages for top level directories called
erlang-lib. The structure of such a directory is the same as a standard lib directory in Erlang. The
directory may contain multiple Erlang applications. Each one must have a valid .app file. See the Erlang
documentation of application and app for more info.
An Erlang package skeleton can be created by making use of the ncs-make-package command:
Erlang API
The Erlang API to NSO is implemented as an Erlang/OTP application called econfd. This application
comes in two flavours. One is built into NSO, in order to support applications running in the same Erlang
VM as NSO. The other is a separate library which is included in source form in the NSO release, in the
$NCS_DIR/erlang directory. Building econfd as described in the $NCS_DIR/erlang/econfd/
README file will compile the Erlang code and generate the documentation.
This API can be used by applications written in Erlang in much the same way as the C and Java APIs are
used, i.e. code running in an Erlang VM can use the econfd API functions to make socket connections
to NSO for data provider, MAAPI, CDB, etc access. However the API is also available internally in NSO,
which makes it possible to run Erlang application code inside the NSO daemon, without the overhead
imposed by the socket communication.
When the application is started, one of its processes should make initial connections to the NSO
subsystems, register callbacks etc. This is typically done in the init/1 function of a gen_server
or similar. While the internal connections are made using the exact same API functions (e.g.
econfd_maapi:connect/2) as for an application running in an external Erlang VM, any Address
and Port arguments are ignored, and instead standard Erlang inter-process communication is used.
There is little or no support for testing and debugging Erlang code executing internally in NSO, since
NSO provides a very limited runtime environment for Erlang in order to minimize disk and memory
footprints. Thus the recommended method is to develop Erlang code targeted for this by using econfd in
a separate Erlang VM, where an interactive Erlang shell and all the other development support included in
the standard Erlang/OTP releases are available. When development and testing is completed, the code can
be deployed to run internally in NSO without changes.
For information about the Erlang programming language and development tools, please refer to
www.erlang.org and the available books about Erlang (some are referenced on the web site).
The --printlog option to ncs, which prints the contents of the NSO error log, is normally only
useful for Cisco support and developers, but it may also be relevant for debugging problems with
application code running inside NSO. The error log collects the events sent to the OTP error_logger,
e.g. crash reports as well as info generated by calls to functions in the error_logger(3) module. Another
possibility for primitive debugging is to run ncs with the --foreground option, where calls to
io:format/2 etc will print to standard output. Printouts may also be directed to the developer log by
using econfd:log/3.
While Erlang application code running in an external Erlang VM can use basically any version of Erlang/
OTP, this is not the case for code running inside NSO, since the Erlang VM is evolving and provides
limited backward/forward compatibility. To avoid incompatibility issues when loading the beam files, the
Erlang compiler erlc should be of the same version as was used to build the NSO distribution.
NSO provides the VM, erlc and the kernel, stdlib, and crypto OTP applications.
Note Obviously application code running internally in the NSO daemon can have an impact on the execution
of the standard NSO code. Thus it is critically important that the application code is thoroughly tested and
verified before being deployed for production in a system using NSO.
Applications may have dependencies on other applications. These dependencies affect the start order. If
the dependent application resides in another package, this should be expressed by using required-package
in the package-meta-data.xml file. Application dependencies within the same package should be
expressed in the .app file. See below.
The following config settings in the .app file are explicitly treated by NSO:
Example
The examples.ncs/getting-started/developing-with-ncs/18-simple-service-
erlang example in the bundled collection shows how to create a service written in Erlang and execute
it internally in NSO. This Erlang example is a subset of the Java example examples.ncs/getting-
started/developing-with-ncs/4-rfs-service.
Introduction
This chapter gives a general overview of the features and use of CDB, the built-in configuration
database used by NSO.
CDB is a hierarchical database, with the structure closely reflecting the YANG model it stores. The entire
configuration is kept in RAM at all times. Persistence is handled by keeping journal files on disk.
Important For reliable storage of the configuration on disk, CDB requires that the file system correctly implements
the standard primitives for file synchronization and truncation. For this reason (as well as for performance),
NFS or other network file systems are unsuitable for use with CDB - they may be acceptable for
development, but using them in production is unsupported and strongly discouraged.
Note Since CDB is a RAM database, the amount of RAM needed is proportional to the number of devices and
services managed. When NSO is used to manage a large network the amount of RAM needed can be quite
large.
When using CDB to store the configuration data, the applications need to be able to:
Figure 25, “NSO CDB Architecture Scenario” illustrates the architecture when CDB is used. The
application components read configuration data and subscribe to changes in the database using a simple
RPC-based API. The API is part of the Java library and is fully documented in the Javadoc for CDB.
While CDB is the default data store for configuration data in NSO, it is possible to use an external
database, if needed. See the example examples.ncs/getting-started/developing-with-
ncs/6-extern-db for details.
In the following, we will use the files in examples.ncs/service-provider/mpls-vpn as a
source for our examples. Refer to README in that directory for additional details.
namespace "http://com/example/l3vpn";
prefix l3vpn;
...
container topology {
list role {
key "role";
tailf:cli-compact-syntax;
leaf role {
type enumeration {
enum ce;
enum pe;
enum p;
}
}
leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
}
list connection {
key "name";
leaf name {
type string;
}
container endpoint-1 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
container endpoint-2 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
leaf link-vlan {
type uint32;
}
}
}
We can also refer to list instances using the values of the keys of the list. In a YANG model you
specify which leafs (there can be several) are to be used for keys by using the key <name> statement
at the beginning of the list. In our case a connection has the name leaf as key. So the path /
l3vpn:topology/connection{c1}/endpoint-2 refers to the endpoint-2 leaf of the
connection whose name is “c1”.
A YANG list may have more than one key. The syntax for the keys is a space separated list of key values
enclosed within curly brackets: {Key1 Key2 ...}
Which version of list element referencing to use depends on the situation. Indexing with an
integer is convenient when looping through all elements. As a convenience all methods expecting
keypaths accept formatting characters and accompanying data items. For example you can use
CdbSession.getElem("server[%d]/ifc{%s}/mtu", 2, "eth0") to fetch the MTU of
the third server instance's interface named "eth0". Using relative paths and CdbSession.pushd() it is
possible to write code that can be re-used for common sub-trees.
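In Python, the same formatting-character convenience maps directly onto printf-style substitution. The helper below is an illustrative sketch (not the actual CDB API): an integer fills a positional index and a string fills a key value, producing a fully instantiated keypath:

```python
def format_keypath(template, *args):
    # %d and %s behave like printf placeholders: an integer selects a
    # list entry by position ([%d]), a string by key value ({%s}).
    return template % args

path = format_keypath("server[%d]/ifc{%s}/mtu", 2, "eth0")
print(path)  # server[2]/ifc{eth0}/mtu
```

Index 2 selects the third server instance, since positional indexing starts at zero.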
The current position also includes the namespace. To read elements from a different namespace use the
prefix qualified tag for that element like in l3vpn:topology.
Subscriptions
The CDB subscription mechanism allows an external program to be notified when some part of the
configuration changes. When receiving a notification it is also possible to iterate through the changes
written to CDB. Subscriptions are always towards the running data-store (it is not possible to subscribe
to changes to the startup data-store). Subscriptions towards operational data (see the section called
“Operational Data in CDB”) kept in CDB are also possible, but the mechanism is slightly different.
The first thing to do is to inform CDB which paths we want to subscribe to. Registering a path
returns a subscription point identifier. This is done by acquiring a subscriber instance through
the Cdb.newSubscription() method, which returns a CdbSubscription. For the subscriber (or
CdbSubscription instance), paths are registered with the CdbSubscription.subscribe() method,
which returns the actual subscription point identifier. A subscriber can have multiple subscription points,
and there can be many different subscribers. Every point is defined through a path - similar to the paths we
use for read operations, with the exception that instead of fully instantiated paths to list instances we can
selectively use tagpaths.
When a client is done defining subscriptions it should inform NSO that it is ready to receive notifications
by calling CdbSubscription.subscribeDone(), after which the subscription socket is ready to be
polled.
We can subscribe either to specific leaves, or entire subtrees. Explaining this by example we get:
/ncs:devices/global-settings/trace
Subscription to a leaf. Only changes to this leaf will generate a notification.
/ncs:devices
Subscription to the subtree rooted at /ncs:devices. Any changes to this subtree will generate a
notification. This includes additions or removals of device instances, as well as changes to already
existing device instances.
/ncs:devices/device{"ex0"}/address
Subscription to a specific element in a list. A notification will be generated when the device “ex0”
changes its ip address.
/ncs:devices/device/address
Subscription to a leaf in a list. A notification will be generated whenever the address leaf is changed
in any device instance.
When adding a subscription point the client must also provide a priority, which is an integer (a smaller
number means higher priority). When data in CDB is changed, this change is part of a transaction. A
transaction can be initiated by a commit operation from the CLI or an edit-config operation in NETCONF
resulting in the running database being modified. As the last part of the transaction CDB will generate
notifications in lock-step priority order. First all subscribers at the lowest numbered priority are handled,
once they all have replied and synchronized by calling CdbSubscription.sync() the next set - at
the next priority level - is handled by CDB. Not until all subscription points have been acknowledged is
the transaction complete. This implies that if the initiator of the transaction was for example a commit
command in the CLI, the command will hang until notifications have been acknowledged.
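The lock-step delivery order can be sketched as grouping subscribers by priority and completing each level before the next starts. This is an illustration of the ordering rule only (the subscriber names and the function are invented for the sketch), not the CDB implementation:

```python
from collections import defaultdict

def delivery_batches(subscribers):
    """Group (priority, name) pairs into delivery batches: lower
    numbers first, and a whole level must acknowledge (sync) before
    CDB moves on to the next level."""
    by_prio = defaultdict(list)
    for prio, name in subscribers:
        by_prio[prio].append(name)
    # Each inner list is one lock-step batch, delivered in priority order.
    return [by_prio[p] for p in sorted(by_prio)]

print(delivery_batches([(10, 'audit'), (5, 'cache'), (10, 'dns')]))
# [['cache'], ['audit', 'dns']]
```

Only when the last batch has acknowledged is the transaction complete, which is why a slow subscriber can make a CLI commit hang.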
Note that even though the notifications are delivered within the transaction it is not possible for a
subscriber to reject the changes (since this would break the two-phase commit protocol used by the NSO
backplane towards all data-providers).
As a subscriber has read its subscription notifications using CdbSubscription.read()
it can iterate through the changes that caused the particular subscription notification using the
CdbSubscription.diffIterate() method. It is also possible to start a new read-session to the
CdbDBType.CDB_PRE_COMMIT_RUNNING database to read the running database as it was before the
pending transaction.
To view registered subscribers use the ncs --status command.
Sessions
It is important to note that CDB is locked for writing during a read session using the Java API. A
session starts with a call to Cdb.startSession(), which returns a CdbSession, and the lock is not
released until the CdbSession.endSession() (or Cdb.close()) call. CDB will also automatically
release the lock if the socket is closed for some other reason, such as program termination.
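The hold-the-lock-for-the-whole-session rule can be modelled with a context manager. This is a pure-Python illustration using a plain threading.Lock as a stand-in for CDB's write lock, not the real API:

```python
import threading

_write_lock = threading.Lock()  # stand-in for CDB's write lock

class ReadSession:
    """Models the rule above: the lock is taken at startSession() and
    held until endSession()/close() releases it."""
    def __enter__(self):
        _write_lock.acquire()      # Cdb.startSession(): writers blocked
        return self

    def __exit__(self, *exc):
        _write_lock.release()      # CdbSession.endSession()/Cdb.close()

with ReadSession():
    blocked_during = _write_lock.locked()
blocked_after = _write_lock.locked()
print(blocked_during, blocked_after)  # True False
```

A context manager also mirrors the automatic-release behaviour: the lock is freed even if the body raises, just as CDB releases the lock when the socket closes unexpectedly.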
<device>ce0</device>
<interface>GigabitEthernet0/8</interface>
<ip-address>192.168.1.1/30</ip-address>
</endpoint-1>
<endpoint-2>
<device>pe0</device>
<interface>GigabitEthernet0/0/0/3</interface>
<ip-address>192.168.1.2/30</ip-address>
</endpoint-2>
<link-vlan>88</link-vlan>
</connection>
<connection>
<name>c1</name>
...
Another example of using this feature is when initializing the AAA database. This is described in
Chapter 9, The AAA infrastructure in NSO 5.3 Administration Guide.
All files ending in .xml will be loaded (in an undefined order) and committed in a single transaction
when CDB enters start phase 1 (see the section called “Starting NSO” in NSO 5.3 Administration Guide
for more details on start phases). The format of the init files is rather lax in that it is not required that
a complete instance document following the data-model is present, much like the NETCONF edit-
config operation. It is also possible to wrap multiple top-level tags in the file with a surrounding config
tag, as shown in Example 29, “Wrapper for Multiple Top Level Tags”:
Example 29. Wrapper for Multiple Top Level Tags
<config xmlns="http://tail-f.com/ns/config/1.0">
...
</config>
Note The actual names of the XML files do not matter, i.e., they do not need to correspond to the part of the
YANG model being initialized.
Subscriptions
Subscriptions towards the operational data in CDB are similar to the above, but due to the fact that the
operational data store is designed for light-weight access, and does not have transactions and normally
avoids the use of any locks, there are several differences - in particular:
• Subscription notifications are only generated if the writer obtains the “subscription lock”, by using the
Cdb.startSession() method with the CdbLockType.LOCK_REQUEST flag.
• Subscriptions are registered with the CdbSubscription.subscribe()
method with the flag CdbSubscriptionType.SUB_OPERATIONAL rather than
CdbSubscriptionType.SUB_RUNNING.
• No priorities are used.
• Neither the writer that generated the subscription notifications nor other writes to the same data are
blocked while notifications are being delivered. However, the subscription lock remains in effect until
notification delivery is complete.
• The previous value for a modified leaf is not available when using the
CdbSubscription.diffIterate() method.
Essentially a write operation towards the operational data store, combined with the subscription lock,
takes on the role of a transaction for configuration data as far as subscription notifications are concerned.
This means that if operational data updates are done with many single-element write operations, this can
potentially result in a lot of subscription notifications. Thus it is a good idea to use the multi-element
CdbSession.setObject() etc methods for updating operational data that applications subscribe to.
Since write operations that do not attempt to obtain the subscription lock are allowed to proceed even
during notification delivery, it is the responsibility of the applications using the operational data store to
obtain the lock as needed when writing. If subscribers should be able to reliably read the exact data that
resulted from the write that triggered their subscription, the subscription lock must always be obtained
when writing that particular set of data elements. One possibility is of course to obtain the lock for all
writes to operational data, but this may have an unacceptable performance impact.
Example
We will take a first look at the examples.ncs/getting-started/developing-with-
ncs/1-cdb example. This example is a NSO project with two packages: cdb and router.
Example packages
router A NED package with a simple but still realistic model of a network device. The only
component in this package is the NED component that uses NETCONF to communicate
with the device. This package is used in many NSO examples including examples.ncs/
getting-started/developing-with-ncs/0-router-network which is an
introduction to NSO device manager, NSO netsim and this router package.
cdb This package has an even simpler YANG model to illustrate some aspects of CDB data
retrieval. The package consists of 5 application components:
• Plain CDB Subscriber - This CDB subscriber subscribes to changes under the path /
devices/device{ex0}/config. Whenever a change occurs there, the code iterates
through the change and prints the values.
• CdbCfgSubscriber - A more advanced CDB subscriber that subscribes to changes under
the path /devices/device/config/sys/interfaces/interface.
• OperSubscriber - An operational data subscriber that subscribes to changes under the path
/t:test/stats-item.
The cdb package includes the YANG shown in Example 30, “1-cdb Simple Config Data”.
Example 30. 1-cdb Simple Config Data
module test {
namespace "http://example.com/test";
prefix t;
import tailf-common {
prefix tailf;
}
revision 2012-06-26 {
description "Initial revision.";
}
container test {
list config-item {
key ckey;
leaf ckey {
type string;
}
leaf i {
type int32;
}
}
list stats-item {
config false;
tailf:cdb-oper;
key skey;
leaf skey {
type string;
}
leaf i {
type int32;
}
container inner {
leaf l {
type string;
}
}
}
}
}
Let us now populate the database and look at the "Plain CDB Subscriber" and how it can use the Java
API to react to changes to the data. This component subscribes to changes under the path /devices/
device{ex0}/config, i.e. configuration changes for the device named “ex0”, which is a device
connected to NSO via the router NED.
Being an application component in the cdb package implies that this component is realized by a Java class
that implements the com.tailf.ncs.ApplicationComponent Java interface. This interface
inherits the Java standard Runnable interface, which requires the run() method to be implemented. In
addition to this method there are init() and finish() methods that have to be implemented. When
the NSO Java-VM starts, this class will be started in a separate thread with an initial call to init() before
thread start. When the package is requested to stop execution, a call to finish() is performed and this
method is expected to end thread execution.
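The same lifecycle can be sketched in Python with a thread subclass. This is a stand-in for illustration (the class name is invented), not the real com.tailf.ncs.ApplicationComponent:

```python
import threading

class DemoComponent(threading.Thread):
    """Sketch of the lifecycle described above: init() is called
    before the thread starts, run() is the thread body, and finish()
    requests an orderly stop."""
    def __init__(self):
        super().__init__()
        self._stop_req = threading.Event()
        self.calls = []

    def init(self):
        self.calls.append('init')      # e.g. set up subscriptions here

    def run(self):
        self.calls.append('run')       # the main loop lives here
        self._stop_req.wait()          # block until asked to stop

    def finish(self):
        self._stop_req.set()           # request an orderly stop
        self.join()                    # wait for run() to return
        self.calls.append('finish')

comp = DemoComponent()
comp.init()    # the framework calls init() before starting the thread
comp.start()
comp.finish()  # called when the package is requested to stop
print(comp.calls)  # ['init', 'run', 'finish']
```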
public PlainCdbSub() {
}
Object state) {
try {
String kpString = Conf.kpToString(kp);
LOGGER.info("diffIterate: kp= "+kpString+", OP="+op+
", old_value="+old_value+", new_value="+
new_value);
return DiffIterateResultFlag.ITER_RECURSE;
} catch (Exception e) {
return DiffIterateResultFlag.ITER_CONTINUE;
}
}
}
}
We will walk through the code and highlight different aspects. We start with how the Cdb instance is
retrieved in this example. It is always possible to open a socket to NSO and create the Cdb instance
with this socket. But with this comes the responsibility to manage that socket. In NSO there is a
ResourceManager that can take over this responsibility. In the code, the field that should contain the
Cdb instance is simply annotated with a @Resource annotation. The ResourceManager will find this
annotation and create the Cdb instance as specified. In this example (Example 32, “Resource Annotation”)
Scope.INSTANCE implies that new instances of this example class should have unique Cdb instances
(see more on this in the section called “The Resource Manager”).
Example 32. Resource Annotation
@Resource(type=ResourceType.CDB, scope=Scope.INSTANCE, qualifier="plain")
private Cdb cdb;
The init() method (shown in Example 33, “Plain Subscriber Init”) is called before this application
component thread is started. For this subscriber, this is the place to set up the subscription. First a
CdbSubscription instance is created, and in this instance the subscription points are registered (one in
this case). When all subscription points are registered, a call to CdbSubscription.subscribeDone()
indicates that the registration is finished and the subscriber is ready to start.
The run() method comes from the standard Java Runnable interface and is executed when the
application component thread is started. For this subscriber (Example 34, “Plain CDB Subscriber”)
a loop over the CdbSubscription.read() method drives the subscription. This call blocks
until data has changed for some of the registered subscription points, and the ids for these
subscription points are then returned. Since we only have one subscription point in our example, we
know that this id is the one stored as subId. This subscriber chooses to find the changes by calling the
CdbSubscription.diffIterate() method. It is important to acknowledge the subscription by
calling CdbSubscription.sync(), or else this subscription will block the ongoing transaction.
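The shape of that read/iterate/sync loop can be sketched language-independently. The pure-Python version below uses a queue as a stand-in for the subscription socket (the function and names are invented for the illustration, not the Java API):

```python
import queue

def subscription_loop(events, handle):
    """Block in read(), walk the diff, then acknowledge with sync()."""
    acked = 0
    while True:
        change = events.get()      # CdbSubscription.read(): blocks here
        if change is None:         # our stand-in for "stop requested"
            break
        handle(change)             # CdbSubscription.diffIterate()
        acked += 1                 # CdbSubscription.sync(): without this
                                   # the ongoing transaction would hang
    return acked

q = queue.Queue()
for change in ('created', 'modified'):
    q.put(change)
q.put(None)

seen = []
count = subscription_loop(q, seen.append)
print(count, seen)  # 2 ['created', 'modified']
```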
The finish() method (Example 36, “Plain Subscriber finish”) is called when the NSO
Java-VM wants the application component thread to stop execution. An orderly stop of the
thread is expected. Here the subscription stops when the subscription socket and the underlying Cdb
instance are closed. This is done by the ResourceManager when we tell it that the resources
retrieved for this Java object instance can be unregistered and closed, via a call to the
ResourceManager.unregisterResources() method.
We will now compile and start the 1-cdb example, populate some config data and look at the result.
Example 37, “Plain Subscriber Startup” shows how to do this.
$ ncs
By far the easiest way to populate the database with some actual data is to run the CLI (Example 38,
“Populate Data using CLI”).
admin@ncs(config)# devices device ex0 config r:sys syslog server 4.5.6.7 enabled
admin@ncs(config-server-4.5.6.7)# commit
Commit complete.
admin@ncs(config-server-4.5.6.7)# top
admin@ncs(config)# exit
admin@ncs# show devices device ex0 config r:sys syslog
NAME
----------
4.5.6.7
10.3.4.5
We have now added a server to the syslog. What remains is to check what our "Plain CDB Subscriber"
ApplicationComponent got as a result of this update. In the logs directory of the 1-cdb example there
is a file named PlainCdbSub.out which contains the log data from this application component. At the
beginning of this file a lot of logging is performed, which emanates from the sync-from of the device. At
the end of the file we find the three log rows that come from our update. See the extract in Example 39,
“Plain Subscriber Output” (with each row split over several lines to fit on the page).
We now turn to another subscriber, which has a more elaborate diff iteration method. In our example
cdb package we have an application component named "CdbCfgSubscriber". This component consists of
a subscriber for the subscription point /ncs:devices/device/config/r:sys/interfaces/
interface. The iterate() method is here implemented as an inner class called DiffIterateImpl.
The code for this subscriber is left out here but can be found in the file ConfigCdbSub.java.
Example 40, “Run CdbCfgSubscriber Example” shows how to build and run the example.
$ ncs
$ ncs_cli -u admin
admin@ncs# devices sync-from suppress-positive-result
admin@ncs# config
admin@ncs(config)# no devices device ex* config r:sys interfaces
admin@ncs(config)# devices device ex0 config r:sys interfaces \
> interface en0 mac 3c:07:54:71:13:09 mtu 1500 duplex half unit 0 family inet \
> address 192.168.1.115 broadcast 192.168.1.255 prefix-length 32
admin@ncs(config-address-192.168.1.115)# commit
Commit complete.
admin@ncs(config-address-192.168.1.115)# top
admin@ncs(config)# exit
If we look in the file logs/ConfigCdbSub.out we will find log records from the subscriber
(Example 41, “Subscriber Output”). At the end of this file the last DUMP DB shows only one remaining
interface.
Example 41. Subscriber Output
...
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex0}
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - INTERFACE
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - name: {en0}
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - description:null
<INFO> 05-Feb-2015::16:10:23,350 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - speed:null
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - duplex:half
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - mtu:1500
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - mac:<<60,7,84,113,19,9>>
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - UNIT
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - name: {0}
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - descripton: null
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - vlan-id:null
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - ADDRESS-FAMILY
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - key: {192.168.1.115}
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - prefixLength: 32
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - broadCast:192.168.1.255
<INFO> 05-Feb-2015::16:10:23,356 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex1}
<INFO> 05-Feb-2015::16:10:23,356 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex2}
Operational Data
We will look once again at the YANG model for the cdb package in the
examples.ncs/getting-started/developing-with-ncs/1-cdb example. Inside the test.yang YANG
model there is a test container. As a child of this container there is a list stats-item (see Example 42,
“1-cdb Simple Operational Data”).
type int32;
}
container inner {
leaf l {
type string;
}
}
}
Note that the list stats-item has the substatement config false; and below it we find a
tailf:cdb-oper; statement. A standard way to implement operational data is to define a callpoint
in the YANG model and write instrumentation callback methods for retrieval of the operational data
(data callbacks are described elsewhere in this guide). Here, on the other hand, we use the
tailf:cdb-oper; statement, which implies that these instrumentation callbacks are automatically
provided internally by NSO. The downside is that we must populate this operational data in CDB from the outside.
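In outline, such a combination of statements looks like this (a sketch; the key and any leaves other than stats-item are illustrative):

```yang
container test {
  list stats-item {
    config false;     // operational (non-config) data
    tailf:cdb-oper;   // let CDB store it, no callpoint code needed
    key name;
    leaf name {
      type string;
    }
  }
}
```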
An example of Java code that creates operational data using the NAVU API is shown in Example 43,
“Creating Operational Data using Navu API”.
An example of Java code that deletes operational data using the CDB API is shown in Example 44,
“Deleting Operational Data using CDB API”.
In the 1-cdb example in the cdb package there is also an application component with an operational data
subscriber that subscribes to data under the path "/t:test/stats-item" (see Example 45, “CDB
Operational Subscriber Java code”).
public OperCdbSub() {
}
} finally {
cdbSubscription.sync(
CdbSubscriptionSyncType.DONE_OPERATIONAL);
}
}
} catch (Exception e) {
LOGGER.error("Fail in run shouldrun", e);
}
requestStop = false;
}
@Override
public DiffIterateResultFlag iterate(ConfObject[] kp,
DiffIterateOperFlag op,
ConfObject oldValue,
ConfObject newValue,
Object initstate) {
LOGGER.info(op + " " + Arrays.toString(kp) +" value: " + newValue);
switch (op){
case MOP_DELETED:
break;
case MOP_CREATED:
case MOP_MODIFIED: {
break;
}
}
return DiffIterateResultFlag.ITER_RECURSE;
}
}
If we look at the output from the CDB Operational Subscriber, found in logs/
OperCdbSub.out, we will see output similar to Example 47, “Operational subscription Output”.
Every time NSO starts, CDB will compare the current contents of the .fxs files with its own copy of the
schema files. If CDB detects any changes in the schema it initiates an upgrade transaction. In the simplest
case CDB automatically resolves the changes and commits the new data before NSO reaches start-phase
one.
The CDB upgrade can be followed by checking devel.log. The development log is meant to be
used as support while the application is developed. It is enabled in ncs.conf as shown in Example 48,
“Enabling Developer Logging”.
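The relevant ncs.conf settings look like this (a sketch of a typical configuration; the log file path will differ per installation):

```xml
<logs>
  <developer-log>
    <enabled>true</enabled>
    <file>
      <name>./logs/devel.log</name>
      <enabled>true</enabled>
    </file>
  </developer-log>
  <developer-log-level>trace</developer-log-level>
</logs>
```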
Deleted elements
When an element is deleted from the schema, CDB simply deletes it (and any children) from the
database.
Added elements
If a new element is added to the schema it needs to either be optional, dynamic, or have a default
value. New elements with a default are added set to their default value. New dynamic or optional
elements are simply noted as a schema change.
Re-ordering elements
An element with the same name, but in a different position on the same level, is considered to be the
same element. If its type hasn't changed it will retain its value, but if the type has changed it will be
upgraded as described below.
Type changes
If a leaf is still present but its type has changed, automatic coercions are performed, so for example
integers may be transformed to their string representation if the type changed from e.g. int32 to string.
Automatic type conversion succeeds as long as the string representation of the current value can be
parsed into the new type. (This also implies that a change from a smaller integer type, e.g.
int8, to a larger type, e.g. int32, succeeds for any value, while the opposite direction only succeeds
if the current value fits in the smaller type.)
If the coercion fails, any supplied default value will be used. If no default value is present in the new
schema the automatic upgrade will fail.
Type changes when user-defined types are used are also handled automatically, provided that some
straightforward rules are followed for the type definitions. Read more about user-defined types in the
confd_types(3) manual page, which also describes these rules.
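The coercion rule can be illustrated in plain Java (a conceptual mimic only, not CDB's actual implementation):

```java
public class Coerce {
    // Mimics CDB's coercion: keep the old value if its string
    // representation parses as the new type, otherwise fall back to
    // the default (the upgrade fails if no default exists).
    static String coerceToInt8(String oldValue, String defaultValue) {
        try {
            Byte.parseByte(oldValue); // int8 range: -128..127
            return oldValue;
        } catch (NumberFormatException e) {
            return defaultValue;
        }
    }

    public static void main(String[] args) {
        System.out.println(coerceToInt8("100", "0")); // fits in int8, value kept
        System.out.println(coerceToInt8("300", "0")); // out of range, default used
    }
}
```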
Hash changes
When the hash value of a particular element has changed (due to an addition of, or a change to, a
tailf:id-value statement) CDB will update that element.
Key changes
When a key of a list is modified, CDB tries to upgrade the key using the same rules as explained
above for adding, deleting, re-ordering, change of type, and change of hash value. If automatic
upgrade of a key fails the entire list instance will be deleted.
Default values
If a leaf has a default value, which has not been changed from its default, then the automatic upgrade
will use the new default value (if any). If the leaf value has been changed from the old default, then
that value will be kept.
Adding / Removing namespaces
If a namespace no longer is present after an upgrade, CDB removes all data in that namespace. When
CDB detects a new namespace, it is initialized with default values.
Changing to/from operational
Elements that previously had config false set and are changed into database elements will be
treated as added elements. In the opposite case, where data elements in the new data model are
tagged with config false, the elements will be deleted from the database.
Callpoint changes
CDB only considers the part of the data model in YANG modules that do not have external data
callpoints. But while upgrading, CDB does handle moving subtrees into CDB from a callpoint and
vice versa. CDB simply considers these as added and deleted schema elements.
Thus an application can be developed using CDB in the first development cycle. When the external
database component is ready it can easily replace CDB without changing the schema.
Should the automatic upgrade fail, exit codes and log-entries will indicate the reason (see the section called
“Disaster management” in NSO 5.3 Administration Guide ).
• Only data for elements that are new as of the upgrade, i.e., elements that did not exist in the previous
schema, will be considered.
• The data will only be loaded if all old, i.e., previously existing, optional/dynamic parent elements and
instances exist in the current configuration.
To clarify this, let's consider the following example. A ServerManager package was developed
and delivered. It was then realized that the data model had a serious shortcoming: there was no way to
specify the protocol to use, TCP or UDP. To fix this, in a new version of the package, another leaf was
added to the /servers/server list, and the new YANG module can be seen in Example 49, “New
YANG module for the ServerManager Package”.
import ietf-inet-types {
prefix inet;
}
revision "2007-06-01" {
description "added protocol.";
}
revision "2006-09-01" {
description "Initial servers data model";
}
type inet:ip-address;
mandatory true;
}
leaf port {
type inet:port-number;
mandatory true;
}
leaf protocol {
type enumeration {
enum tcp;
enum udp;
}
mandatory true;
}
}
}
}
The differences from the earlier version of the YANG module can be seen in Example 50, “Difference
between YANG Modules”.
Example 50. Difference between YANG Modules
diff ../servers1.5.yang ../servers1.4.yang
9,12d8
< revision "2007-06-01" {
< description "added protocol.";
< }
<
31,37d26
< mandatory true;
< }
< leaf protocol {
< type enumeration {
< enum tcp;
< enum udp;
< }
Since it was considered important that the user explicitly specify the protocol, the new leaf was made
mandatory. The XML init file must include this leaf, and the result can be seen in Example 51, “Protocol
Upgrade Init File”.
Example 51. Protocol Upgrade Init File
<servers:servers xmlns:servers="http://example.com/ns/servers">
<servers:server>
<servers:name>www</servers:name>
<servers:ip>192.168.3.4</servers:ip>
<servers:port>88</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>www2</servers:name>
<servers:ip>192.168.3.5</servers:ip>
<servers:port>80</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>smtp</servers:name>
<servers:ip>192.168.3.4</servers:ip>
<servers:port>25</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>dns</servers:name>
<servers:ip>192.168.3.5</servers:ip>
<servers:port>53</servers:port>
<servers:protocol>udp</servers:protocol>
</servers:server>
</servers:servers>
We can then just use this new init file for the upgrade, and the existing server instances in the user's
configuration will get the new /servers/server/protocol leaf filled in as expected. However,
some users may have deleted some of the original servers from their configuration, and in those cases
we obviously do not want those servers to get re-created during the upgrade just because they are present
in the XML file; the above restrictions make sure that this does not happen. Example 52, “Configuration
after Upgrade” shows what the configuration looks like after the upgrade if the "smtp" server had been
deleted before the upgrade:
Example 52. Configuration after Upgrade
<servers xmlns="http://example.com/ns/servers">
<server>
<name>dns</name>
<ip>192.168.3.5</ip>
<port>53</port>
<protocol>udp</protocol>
</server>
<server>
<name>www</name>
<ip>192.168.3.4</ip>
<port>88</port>
<protocol>tcp</protocol>
</server>
<server>
<name>www2</name>
<ip>192.168.3.5</ip>
<port>80</port>
<protocol>tcp</protocol>
</server>
</servers>
This example also implicitly shows a limitation of this method. If the user has created additional servers,
the new XML file will not specify what protocol to use for those servers, and the upgrade cannot succeed
unless the package upgrade component method is used, see below. However, the example is a bit contrived;
in practice this limitation is rarely a problem. It does not occur for new lists or optional elements, nor for
new mandatory elements that are not children of old lists. In fact, correctly adding this "protocol" leaf
for user-created servers would require user input; it cannot be done by any fully automated procedure.
Note Since CDB will attempt to load all *.xml files in the CDB directory at the time of upgrade, it is important
not to leave XML init files from a previous version that are no longer valid there.
It is always possible to write a package-specific upgrade component to change the data belonging to a
package before the upgrade transaction is committed. This will be explained in the following section.
If a YANG model is changed and new mandatory leaves are introduced that need calculations to set their
values, then a programmatic upgrade is needed. This is when the upgrade component of a package comes
into play.
An upgrade component is a Java class with a standard main() method that becomes a standalone program
that is run as part of the package reload action.
As with all package component types, upgrade components have to be defined in the
package-meta-data.xml file for the package (Example 53, “Upgrade Package Components”).
<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
....
<component>
<name>do-upgrade</name>
<upgrade>
<java-class-name>com.example.DoUpgrade</java-class-name>
</upgrade>
</component>
</ncs-package>
Let's recapitulate how packages are loaded and reloaded. NSO searches the /ncs-config/load-path
for packages to run and copies these to a private directory tree under /ncs-config/state-dir
with the root directory packages-in-use.cur. However, NSO will only do this search when
packages-in-use.cur is empty or when a reload is requested. This scheme makes package upgrades
controlled and predictable; for more on this see the section called “Loading Packages”.
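A reload is requested from the NSO CLI (a sketch; shown in J-style, the C-style equivalent is packages reload):

```
admin@ncs> request packages reload
```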
So in preparation for a package upgrade, the new packages replace the old ones in the load path. In our
scenario the YANG model changes are such that the automatic schema upgrade that CDB performs is not
sufficient, and therefore the new packages also contain upgrade components. At this point NSO is still running
with the old package definitions.
When the package reload is requested, the packages in the load path are copied to the state directory. The
old state directory is scratched, so that packages that no longer exist in the load path are removed and new
packages are added. Unchanged packages remain unchanged. Automatic CDB schema upgrades
will be performed, and afterwards, for all packages that have an upgrade component and for which at
least one YANG model was changed, this upgrade component will be executed. The upgrade component is
also executed for added packages that have one. Hence the upgrade component needs to
be programmed in such a way that it handles both the new-package and the upgraded-package scenario.
So how should an upgrade component be implemented? In the previous section we described how CDB
can perform an automatic upgrade. Does this mean that CDB has already deleted all values that are no longer
part of the schema? Not quite yet. At an initial phase of the NSO startup procedure (called start-phase0)
it is possible to use all the CDB Java API calls to access the data using the schema from the database as it
looked before the automatic upgrade. That is, the complete database as it stood before the upgrade is still
available to the application. It is under this condition that the upgrade components are executed, which is the
reason why they are standalone programs and not executed by the NSO Java VM as all other Java code for
components is.
So the CDB Java API can be used to read data defined by the old YANG models. To write new config data,
Maapi has a specific method, Maapi.attachInit(). This method attaches a Maapi instance to the
upgrade transaction (or init transaction) during phase0. This special upgrade transaction is only available
during phase0. NSO will commit this transaction when phase0 ends, so the user should only write
config data (and not attempt to commit, etc.).
We take a look at the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service
to see how an upgrade component can be implemented. Here the
vlan package has an original version which is replaced with a version vlan_v2. See the README and play
with the example to get acquainted.
Note The 14-upgrade-service example is a service package, but the upgrade components described here work
equally well, and in the same way, for any package type. The only requirement is that the package contains
at least one YANG model; otherwise the upgrade component has no meaning and will never be executed.
The complete YANG model for the version 2 of the VLAN service looks as follows:
import tailf-common {
prefix tailf;
}
import tailf-ncs {
prefix ncs;
}
description
"This service creates a vlan iface/unit on all routers in our network. ";
revision 2013-08-30 {
description
"Added mandatory leaf global-id.";
}
revision 2013-01-08 {
description
"Initial revision.";
}
augment /ncs:services {
list vlan {
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint vlanspnt_v2;
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint vlanselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
}
leaf global-id {
type string;
mandatory true;
}
leaf iface {
type string;
mandatory true;
}
leaf unit {
type int32;
mandatory true;
}
leaf vid {
type uint16;
mandatory true;
}
leaf description {
type string;
mandatory true;
}
}
}
}
If we diff the changes between the two YANG models for the service we see that in version 2 a new
mandatory leaf has been added (see Example 55, “YANG Service diff”).
We need to create a Java class with a main() method that connects to CDB and MAAPI. This main
will be executed as a separate program, and all private and shared jars defined by the package will be on the
classpath. To upgrade the vlan service the following Java code is needed:
public UpgradeService() {
}
int no = cdbsess.getNumberOfInstances("/services/vlan");
for(int i = 0; i < no; i++) {
Integer offset = Integer.valueOf(i);
ConfBuf name = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
offset);
ConfBuf iface = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/iface",
offset);
ConfInt32 unit =
(ConfInt32)cdbsess.getElem("/services/vlan[%d]/unit",
offset);
ConfUInt16 vid =
(ConfUInt16)cdbsess.getElem("/services/vlan[%d]/vid",
offset);
s1.close();
s2.close();
}
}
Let's go through the code and point out the different aspects of writing an upgrade component. First (see
Example 57, “Upgrade Init”) we open a socket and connect to NSO. We pass this socket to a Java API
Cdb instance and call Cdb.setUseForCdbUpgrade(). This method prepares CDB sessions for
reading old data from the CDB database, and it should only be called in this context. At the end of this first
code fragment we start the CDB upgrade session:
Example 57. Upgrade Init
Socket s1 = new Socket("localhost", Conf.NCS_PORT);
Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
cdb.setUseForCdbUpgrade();
CdbUpgradeSession cdbsess =
cdb.startUpgradeSession(
CdbDBType.CDB_RUNNING,
EnumSet.of(CdbLockType.LOCK_SESSION,
CdbLockType.LOCK_WAIT));
We then open and connect a second socket to NSO and pass this to a Java API Maapi instance. We call the
Maapi.attachInit() method to get the init transaction (Example 58, “Upgrade Get Transaction”).
Using the CdbSession instance we read the number of service instances that exist in the CDB database.
We will work on all these instances. If the number of instances is zero the loop will not be entered. This is
a simple way to prevent the upgrade component from doing any harm in the case where this is a
new package added to NSO for the first time:
int no = cdbsess.getNumberOfInstances("/services/vlan");
for(int i = 0; i < no; i++) {
The value for the new leaf introduced in the new version of the YANG model is calculated, and the value is set
using Maapi and the init transaction:
At the end of the program the sockets are closed. It is important to note that no commits or other handling of
the init transaction is done; this is NSO's responsibility:
s1.close();
s2.close();
More complicated service package upgrade scenarios occur when a YANG model containing a service
point is renamed, or moved and augmented to a new place in the NSO model. This is because not only
does the complete config data set need to be recreated at the new position, but a service also has hidden
private data that is part of the FASTMAP algorithm and necessary for the service to be valid. For this
reason a specific MAAPI method, Maapi.ncsMovePrivateData(), exists that takes both the old
and the new position for the service point and moves the service data between these positions.
import tailf-common {
prefix tailf;
}
import tailf-ncs {
prefix ncs;
}
description
"This service creates a tunnel assembly on all routers in our network. ";
revision 2013-01-08 {
description
"Initial revision.";
}
augment /ncs:services {
list tunnel {
key tunnel-name;
leaf tunnel-name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint tunnelspnt;
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint tunnelselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
}
leaf gid {
type string;
mandatory true;
}
leaf interface {
type string;
mandatory true;
}
leaf assembly {
type int32;
mandatory true;
}
leaf tunnel-id {
type uint16;
mandatory true;
}
leaf descr {
type string;
mandatory true;
}
}
}
}
To upgrade from the vlan_v2 to the tunnel package, a new upgrade component for the tunnel package has
to be implemented:
Example 60. Tunnel Service Upgrade Java class
public class UpgradeService {
public UpgradeService() {
}
cdb.startUpgradeSession(
CdbDBType.CDB_RUNNING,
EnumSet.of(CdbLockType.LOCK_SESSION,
CdbLockType.LOCK_WAIT));
int no = cdbsess.getNumberOfInstances("/services/vlan");
for(int i = 0; i < no; i++) {
ConfBuf name =(ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
Integer.valueOf(i));
String nameStr = name.toString();
System.out.println("SERVICENAME = " + nameStr);
ConfCdbUpgradePath oldPath =
new ConfCdbUpgradePath("/ncs:services/vl:vlan{%s}",
name.toString());
ConfPath newPath = new ConfPath("/services/tunnel{%x}", name);
maapi.create(th, newPath);
s1.close();
s2.close();
}
}
We will walk through this code as well and point out the aspects that differ from the earlier, simpler
scenario. First we want to create the Cdb instance and get the CdbSession. However, in this scenario the
old namespace is removed and the Java API cannot retrieve it from NSO. To be able to use CDB to read
and interpret the old YANG model, the old generated (and since removed) Java namespace classes have to be
temporarily reinstalled. This is solved by adding a jar (Java archive) containing these removed namespaces
to the private-jar directory of the tunnel package. The removed namespace can then be instantiated
and passed to Cdb via an overloaded version of the Cdb.setUseForCdbUpgrade() method:
ArrayList<ConfNamespace> nsList = new ArrayList<ConfNamespace>();
nsList.add(new vlanService());
Socket s1 = new Socket("localhost", Conf.NCS_PORT);
Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
cdb.setUseForCdbUpgrade(nsList);
CdbUpgradeSession cdbsess =
cdb.startUpgradeSession(
CdbDBType.CDB_RUNNING,
EnumSet.of(CdbLockType.LOCK_SESSION,
CdbLockType.LOCK_WAIT));
As an alternative to including the old namespace file in the package, a ConfNamespaceStub can be
constructed for each old model that is to be accessed:
nsList.add(new ConfNamespaceStub(500805321,
"http://example.com/vlan-service",
"http://example.com/vlan-service",
"vl"));
Since the old YANG model with the service point is removed, the new service container with the new
service has to be created before any config data can be written to this position:
ConfPath newPath = new ConfPath("/services/tunnel{%x}", name);
maapi.create(th, newPath);
The complete config for the old service is read via the CdbUpgradeSession. Note in particular that the
path oldPath is constructed as a ConfCdbUpgradePath. These are paths that allow access to nodes
that are not available in the current schema (i.e., nodes in deleted models).
ConfXMLParam[] oldparams = new ConfXMLParam[] {
new ConfXMLParamLeaf("vl", "global-id"),
new ConfXMLParamLeaf("vl", "iface"),
new ConfXMLParamLeaf("vl", "unit"),
new ConfXMLParamLeaf("vl", "vid"),
new ConfXMLParamLeaf("vl", "description"),
};
ConfXMLParam[] data =
cdbsess.getValues(oldparams, oldPath);
The new data structure with the service data is created and written to NSO via Maapi and the init
transaction:
ConfXMLParam[] newparams = new ConfXMLParam[] {
new ConfXMLParamValue("tl", "gid", data[0].getValue()),
new ConfXMLParamValue("tl", "interface", data[1].getValue()),
new ConfXMLParamValue("tl", "assembly", data[2].getValue()),
new ConfXMLParamValue("tl", "tunnel-id", data[3].getValue()),
new ConfXMLParamValue("tl", "descr", data[4].getValue()),
};
maapi.setValues(th, newparams, newPath);
Finally, the service private data is moved from the old position to the new position via the method
Maapi.ncsMovePrivateData():
maapi.ncsMovePrivateData(th, oldPath, newPath);
are preserved and no configuration data needs to be moved. Hence no upgrade component has to be written
and the private FASTMAP service data will be handled automatically by NCS.
Here follows a step-by-step description of the service porting work that has to be performed
prior to starting an NCS 3.0 server for an NCS project with existing service instances under the
/ncs:services/service list.
1 The old service needs to have a service-point defined. Also, the service container needs to be a
presence container. For instance, if the old service module is similar to this:
augment /ncs:services/ncs:service/ncs:type/ncs:service-type-choice {
case web-site-case {
container web-site {
leaf description {
description "Short description of the site";
type string;
}
2 The package-meta-data.xml definition no longer supports the 'service' component type. This has to be
changed to the 'callback' component type. Where the old file specified:
<component>
<name>web-site-service</name>
<service>
<type>web-site</type>
<java-class-name>
com.example.websiteservice.websiteservice.WebSiteServiceRFS
</java-class-name>
</service>
</component>
This should be changed to:
<component>
<name>web-site-service</name>
<callback>
<java-class-name>
com.example.websiteservice.websiteservice.WebSiteServiceRFS
</java-class-name>
</callback>
</component>
If the old specification referenced a trigger-java-class-name, just remove this reference. How triggers are
handled is described later.
3 Include the following imports in the service Java classes:
import java.util.Properties;
import com.tailf.dp.DpCallbackException;
import com.tailf.dp.services.ServiceContext;
import com.tailf.dp.annotations.ServiceCallback;
import com.tailf.dp.proto.ServiceCBType;
4 Note that the opaque object has changed from java.lang.Object to java.util.Properties. The
earlier recommendation was to use a Properties object, but nothing prevented the programmer from
using other classes. The conversion of the opaque type is automatic in NCS 3.0 and the following holds
for the conversion:
• If the old opaque was a java.util.Properties object it will remain the same Properties object. The
only difference is that no cast from Object to Properties is necessary.
• If the old opaque was a String object, the new opaque will be a Properties object with one element.
In this case the string has to be decoded so that the original Java object is retrieved. This is done
using the class com.tailf.util.Base64, which can be used as follows:
throws DpCallbackException {
if (opaque != null) {
opaque.list(System.out);
String serial = opaque.getProperty("BASE64_ENCODED_OPAQUE");
if (serial != null) {
DataContainer obj = (DataContainer)
Base64.decodeToObject(serial,
this.getClass().getClassLoader());
opaque.remove("BASE64_ENCODED_OPAQUE");
opaque.setProperty("strval", obj.strval);
opaque.setProperty("floatval", String.valueOf(obj.floatval));
opaque.setProperty("intval", String.valueOf(obj.intval));
opaque.setProperty("longval", String.valueOf(obj.longval));
opaque.list(System.out);
}
}
....
In this example the serialized object 'DataContainer' is decoded and its values are put directly into
the Properties opaque. This will be performed once, and after this the Properties opaque is used
without conversions. This type of runtime migration is recommended in the scenario where general
Java objects were used as opaque.
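The decode-once pattern can be sketched with plain JDK classes (a self-contained illustration: java.util.Base64 and standard Java serialization stand in for com.tailf.util.Base64, and the DataContainer fields are invented for the example):

```java
import java.io.*;
import java.util.Base64;
import java.util.Properties;

public class OpaqueMigration {

    // Stand-in for the legacy object that was serialized into the opaque.
    static class DataContainer implements Serializable {
        String strval = "hello";
        int intval = 42;
    }

    // Serialize an object and Base64-encode it, as a legacy opaque would be.
    static String encode(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(o);
        oos.flush();
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    static Object decode(String s) throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(s);
        return new ObjectInputStream(new ByteArrayInputStream(raw)).readObject();
    }

    // Performed once: replace the serialized legacy object with
    // plain string-valued properties.
    static Properties migrate(Properties opaque) throws Exception {
        String serial = opaque.getProperty("BASE64_ENCODED_OPAQUE");
        if (serial != null) {
            DataContainer obj = (DataContainer) decode(serial);
            opaque.remove("BASE64_ENCODED_OPAQUE");
            opaque.setProperty("strval", obj.strval);
            opaque.setProperty("intval", String.valueOf(obj.intval));
        }
        return opaque;
    }

    public static void main(String[] args) throws Exception {
        Properties opaque = new Properties();
        opaque.setProperty("BASE64_ENCODED_OPAQUE", encode(new DataContainer()));
        migrate(opaque);
        System.out.println(opaque.getProperty("strval") + " " + opaque.getProperty("intval"));
    }
}
```

After the first call, the opaque carries only plain string properties and no further conversion is needed on subsequent invocations.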
5 The service implementation should no longer implement the old interfaces, which implies that
declarations like:
should be changed to the following:
@ServiceCallback(servicePoint="websiteservice",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode root,
Properties opaque)
throws DpCallbackException {
The new service NavuNode points directly at the service element, so where the service code previously
referenced the service via its parent node, e.g. change:
to:
list web-site {
uses ncs:service-data;
ncs:servicepoint websiteservice {
ncs:service-interface fastmap;
}
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint websiteselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
}
@ActionCallback(callPoint="websiteselftest",
callType=ActionCBType.INIT)
public void init(DpActionTrans trans) throws DpCallbackException {
}
@ActionCallback(callPoint="websiteselftest",
callType=ActionCBType.ACTION)
public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
ConfObject[] kp,
ConfXMLParam[] params)
throws DpCallbackException {
8 Note that in earlier service implementations it was possible, and common practice, to divide the trigger()
and create() methods into separate classes. This is no longer the case. As with all callbacks, all methods
for a specific servicepoint (or any callpoint) need to reside in one class, since only one registration
on the servicepoint is allowed. This is a constraint only for the callback methods defined by the
DpServiceCallback interface; the code used by these callback methods can still be structured into many
classes.
The triggers have been changed into the pre/post modification methods. If the old trigger has been
implemented it is possible to create wrapper code in the Java service class that calls the old trigger.
Note that the new modification methods need to be placed in the same Java class as the create method, since
the service registration only allows for one class to be registered at the servicepoint. To wrap the old
trigger in the new modification methods, the following needs to be performed. First change the trigger
to handle a Properties opaque, both as argument and return value, i.e. change:
into:
@ServiceCallback(servicePoint = "project0service",
callType = ServiceCBType.PRE_MODIFICATION)
public Properties preModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException {
TriggerState state = TriggerState.PRE_CREATE;
if (operation == ServiceOperationType.UPDATE) {
state = TriggerState.PRE_UPDATE;
} else if (operation == ServiceOperationType.DELETE) {
state = TriggerState.PRE_DELETE;
}
try {
NavuContainer root = (NavuContainer) context.getRootNode();
Map<ConfKey, NavuChange> changes =
root.findChanges(new Integer[] {Ncs._services,
Ncs._service});
ConfKey key = (ConfKey) path.getKP()[2];
return trigger(state, root, changes.get(key), opaque);
} catch (Exception e) {
throw new DpCallbackException("pre trigger failed", e);
}
}
@ServiceCallback(servicePoint = "project0service",
callType = ServiceCBType.POST_MODIFICATION)
public Properties postModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException {
TriggerState state = TriggerState.POST_CREATE;
if (operation == ServiceOperationType.UPDATE) {
state = TriggerState.POST_UPDATE;
} else if (operation == ServiceOperationType.DELETE) {
state = TriggerState.POST_DELETE;
}
try {
NavuContainer root = (NavuContainer) context.getRootNode();
Map<ConfKey, NavuChange> changes =
root.findChanges(new Integer[] {Ncs._services,
Ncs._service});
ConfKey key = (ConfKey) path.getKP()[2];
return trigger(state, root, changes.get(key), opaque);
} catch (Exception e) {
throw new DpCallbackException("post trigger failed", e);
}
}
Introduction
The NSO Java library contains a variety of APIs for different purposes. In this chapter we introduce these
and explain their usage. The Java library deliverables are found as two jar files (ncs.jar and
conf-api.jar).
The NSO Java library requires Java SE 1.6 or higher. NSO relies on log4j (http://logging.apache.org)
for logging. No other dependencies exist for the NSO Java library. For convenience, the Java build tool
Apache Ant (http://ant.apache.org) is used to run all of the examples. However, this tool is not a requirement
for NSO.
Common to all APIs is that they communicate with NSO using TCP sockets. This makes it possible to use
all APIs from a remote location. The following APIs are included in the library:
In addition the Conf API framework contains utility classes for data types, keypaths, etc.
MAAPI
The Management Agent API (MAAPI) provides an interface to the Transaction engine in NSO. As such it
is very versatile. Here are some examples of how the MAAPI interface can be used.
The first step of a typical sequence of MAAPI API calls when writing a management application would
be to create a user session. To create a user session is the equivalent of establishing an SSH connection
from an NETCONF manager. It is up to the MAAPI application to authenticate users. The TCP connection
between MAAPI and NSO is neither encrypted, nor authenticated. The Maapi Java package does however
include an authenticate() method that can be used by the application to hook into the AAA
framework of NSO and let NSO authenticate the user.
When a Maapi socket has been created the next step is to create a user session and supply the relevant
information about the user for authentication.
maapi.startUserSession("admin",
InetAddress.getByName("localhost"),
"maapi",
new String[] {"admin"},
MaapiUserSessionFlag.PROTO_TCP);
When the user has been authenticated and a user session has been created, the Maapi reference is
ready to establish a new transaction towards a data store. The following code snippet starts a read/write
transaction towards the running data store.
int th = maapi.startTrans(Conf.DB_RUNNING,
Conf.MODE_READ_WRITE);
The startTrans(int db, int mode) method of the Maapi class returns an integer that
represents a transaction handle. This transaction handle is used when invoking the various Maapi
methods.
Example 64. Maapi.getElem()
The first parameter of getElem(int th, String fmt, Object ... arguments) is the
transaction handle, i.e. the integer that was returned by the startTrans() method. The fmt is a
path that leads to a leaf in the data model. The path is expressed as a format string that contains fixed text
with zero or more embedded format specifiers. For each specifier, one argument in the variable argument
list is expected.
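Conceptually, the expansion of such a format string works like the sketch below. This is a plain-Java illustration of the idea, not the NSO implementation; the class and method names are made up, and in the real API the %x specifier formats a ConfValue argument rather than calling toString().

```java
// Sketch: how a keypath format string with specifiers expands into a
// concrete path, mimicking the idea behind Maapi.getElem(th, fmt, args...).
// Illustration only -- NOT the NSO implementation.
public class KeypathFormat {

    // Expand %s/%x (key values) and %d (integer position) in argument order.
    public static String expand(String fmt, Object... args) {
        StringBuilder out = new StringBuilder();
        int arg = 0;
        for (int i = 0; i < fmt.length(); i++) {
            char c = fmt.charAt(i);
            if (c == '%' && i + 1 < fmt.length()) {
                char spec = fmt.charAt(++i);
                switch (spec) {
                case 's':
                case 'x': // in NSO, %x formats a ConfValue; here we just use toString()
                    out.append(args[arg++].toString());
                    break;
                case 'd':
                    out.append(((Number) args[arg++]).longValue());
                    break;
                default:
                    out.append('%').append(spec);
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] a) {
        // expands to /hosts/host{host1}/interfaces{eth0}/ip/mtu
        System.out.println(expand("/hosts/host{%x}/interfaces{%x}/ip/mtu",
                                  "host1", "eth0"));
    }
}
```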
The return value val contains a reference to a ConfValue, which is a superclass of all the ConfValue
classes that map to specific YANG data types. If the YANG data type of ip in the YANG model is
ietf-inet-types:ipv4-address, we can narrow it to the corresponding subclass com.tailf.conf.ConfIPv4.
The opposite operation of getElem() is the setElem() method, which sets a leaf to a specific
value.
maapi.setElem(th,
new ConfUInt16(1500),
"/hosts/host{%x}/interfaces{%x}/ip/mtu",
new ConfBuf("host1"),
new ConfBuf("eth0"));
We have not yet committed the transaction so no modification is permanent. The data is only visible inside
the current transaction. To commit the transaction we call:
maapi.applyTrans(th);
The method applyTrans() commits the current transaction to the running datastore.
Example 65. Commit a transaction
int th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ_WRITE);
try {
maapi.lock(Conf.DB_RUNNING);
/// make modifications to th
maapi.setElem(th, .....);
maapi.applyTrans(th);
maapi.finishTrans(th);
} catch(Exception e) {
maapi.finishTrans(th);
} finally {
maapi.unLock(Conf.DB_RUNNING);
}
MAAPI is also intended for attaching to an already existing NSO transaction to inspect not yet committed
data, for example if we want to implement validation logic in Java. See Example 72, “Attach Maapi to the
current transaction”.
CDB API
This API provides an interface to the CDB Configuration database which stores all configuration data.
With this API the user can:
• Subscribe to changes in CDB - The subscription functionality makes it possible to receive events/
notifications when changes occur in CDB.
CDB can also be used to store operational data, i.e. data which is designated with a "config false"
statement in the YANG data model. Operational data is read/write through the CDB API. NETCONF and
the other northbound agents can only read operational data.
The Java CDB API is intended to be fast and lightweight, and CDB read sessions are expected to be
short-lived and fast. The NSO transaction manager is bypassed by CDB, and therefore write operations on
configuration data are prohibited. If operational data is stored in CDB, both read and write operations on
this data are allowed.
CDB is always locked for the duration of the session. It is therefore the responsibility of the programmer
to keep CDB interactions short in time and to ensure that all CDB sessions are closed when the interaction
has finished.
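One way to keep the close-the-session discipline mechanical is to model the session as an AutoCloseable and use try-with-resources, so the lock is released even on exceptions. The classes below are made-up stand-ins (the real session class is com.tailf.cdb.CdbSession); the sketch only illustrates the pattern.

```java
// Sketch: guarantee that a CDB-style session is always closed (and its lock
// released) via try-with-resources. FakeLock/FakeSession are hypothetical
// stand-ins, not NSO classes.
public class SessionSketch {

    public static class FakeLock {
        public boolean held;
    }

    public static class FakeSession implements AutoCloseable {
        private final FakeLock lock;
        public FakeSession(FakeLock lock) { this.lock = lock; lock.held = true; }
        public String getElem(String path) { return "value-at:" + path; }
        @Override public void close() { lock.held = false; } // always release
    }

    public static final FakeLock LOCK = new FakeLock();

    public static String readOnce(String path) {
        // the session is closed even if getElem() throws
        try (FakeSession s = new FakeSession(LOCK)) {
            return s.getElem(path);
        }
    }

    public static void main(String[] a) {
        System.out.println(readOnce("/servers/server{www}/ip"));
        System.out.println("lock held after session: " + LOCK.held);
    }
}
```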
To initialize the CDB API a CDB socket has to be created and passed into the API base class
com.tailf.cdb.Cdb:
After the CDB socket has been established, a user can either start a CDB session or start a subscription
to changes in CDB:
/*
* Retrieve the number of children in the list and
* loop over these children
*/
for(int i = 0; i < session.numInstances("/servers/server"); i++) {
ConfBuf name =
(ConfBuf) session.getElem("/servers/server[%d]/hostname", i);
ConfIPv4 ip =
(ConfIPv4) session.getElem("/servers/server[%d]/ip", i);
}
We can refer to an element in a model with an expression like "/servers/server". This type of string
reference to an element is called a keypath, or just path. To refer to an element underneath a list, we need
to identify which instance of the list elements is of interest.
This can be done by pinpointing the sequence number in the ordered list, starting from 0. For
instance the path /servers/server[2]/port refers to the "port" leaf of the third server in the
configuration. This numbering is only valid during the current CDB session. Note that the database is
locked during this session.
We can also refer to list instances using the key values for the list. Remember that we specify in the data
model which leaf or leafs in the list constitute the key. In our case a server has the "name" leaf as key. The
syntax for keys is a space-separated list of key values enclosed within curly brackets: {Key1 Key2 ...}. So
/servers/server{www}/ip refers to the ip leaf of the server whose name is "www".
A YANG list may have more than one key. For example, the keypath /dhcp/subNets/subNet{192.168.128.0
255.255.255.0}/routers refers to the routers list of the subNet which has the keys "192.168.128.0" and
"255.255.255.0".
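The curly-bracket key syntax can be produced mechanically from the key values; the helper below is purely illustrative and not part of the NSO Java API.

```java
// Sketch: building the "{Key1 Key2 ...}" list-key syntax used in keypaths.
// Illustration only; hypothetical helper, not an NSO class.
public class KeySyntax {

    // Join key values with spaces and wrap them in curly brackets.
    public static String keys(String... keyValues) {
        return "{" + String.join(" ", keyValues) + "}";
    }

    public static void main(String[] a) {
        // prints /dhcp/subNets/subNet{192.168.128.0 255.255.255.0}/routers
        System.out.println("/dhcp/subNets/subNet"
                + keys("192.168.128.0", "255.255.255.0") + "/routers");
    }
}
```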
The keypath syntax allows for formatting characters and accompanying substitution arguments. For
example, getElem("server[%d]/ifc{%s}/mtu", 2, "eth0") uses a keypath with a mix of
sequence number and key values with formatting characters and arguments. Expressed in text, the path
references the MTU of the third server instance's interface named "eth0".
The CdbSession Java class has a number of methods to control the current position in the model.
Using relative paths and e.g. CdbSession.pushd() it is possible to write code that can be re-used for
common sub-trees.
The current position also includes the namespace. If an element of another namespace should be read, then
the prefix of that namespace should be set in the first tag of the keypath, like: /smp:servers/server
where "smp" is the prefix of the namespace. It is also possible to set the default namespace for the CDB
session with the method CdbSession.setNamespace(ConfNamespace).
The CDB subscription mechanism allows an external Java program to be notified when different parts of
the configuration change. For such a notification it is also possible to iterate through the change set in
CDB for that notification.
Subscriptions are primarily to the running data store. Subscriptions towards the operational data store in
CDB are possible, but the mechanism is slightly different, see below.
The first thing to do is to register in CDB which paths should be subscribed to. This is accomplished with
the CdbSubscription.subscribe(...) method. Each registered path returns a subscription
point identifier. Each subscriber can have multiple subscription points, and there can be many different
subscribers.
Every point is defined through a path - similar to the paths we use for read operations, with the difference
that instead of fully instantiated paths to list instances we can choose to use tag paths, i.e. leave out the key
value parts, to be able to subscribe to all instances. We can subscribe either to specific leaves or to entire
subtrees. Assume a YANG data model of the form:
container servers {
list server {
key name;
leaf name { type string;}
leaf ip { type inet:ip-address; }
leaf port { type inet:port-number; }
.....
/servers/server/port
/servers
Means that we subscribe to any changes in the sub tree rooted at /servers. This includes additions or
removals of server instances, as well as changes to already existing server instances.
/servers/server{www}/ip
Means that we only want to be notified when the server "www" changes its ip address.
/servers/server/ip
Means we want to be notified when the leaf ip is changed in any server instance.
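The matching rule behind these examples - a tag path without keys matches every instance, while a path with keys matches only that instance, and a path matches everything beneath it - can be sketched in plain Java. CDB implements this matching internally; the code below is only an illustration with hypothetical names.

```java
// Sketch: how a subscription path (possibly a "tag path" without keys)
// matches instantiated keypaths. Illustration only, not the CDB algorithm.
public class TagPathMatch {

    // Strip a "{...}" key part from one path element: "server{www}" -> "server".
    private static String tag(String elem) {
        int i = elem.indexOf('{');
        return i < 0 ? elem : elem.substring(0, i);
    }

    // True if 'sub' matches a prefix of 'instantiated'. An element without
    // keys matches any instance of that list; an element with keys must
    // match that exact instance.
    public static boolean matches(String sub, String instantiated) {
        String[] s = sub.split("/");
        String[] p = instantiated.split("/");
        if (s.length > p.length) return false;
        for (int i = 0; i < s.length; i++) {
            if (!tag(s[i]).equals(tag(p[i]))) return false;
            if (s[i].contains("{") && !s[i].equals(p[i])) return false;
        }
        return true;
    }

    public static void main(String[] a) {
        System.out.println(matches("/servers/server/ip", "/servers/server{www}/ip"));
        System.out.println(matches("/servers/server{www}/ip", "/servers/server{mail}/ip"));
    }
}
```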
When adding a subscription point the client must also provide a priority, which is an integer. As CDB is
changed, the change is part of a transaction. For example, the transaction is initiated by a commit operation
from the CLI or an edit-config operation in NETCONF, resulting in the running database being modified.
As the last part of the transaction CDB will generate notifications in lock-step priority order. First all
subscribers at the lowest numbered priority are handled; once they all have replied and synchronized by
calling sync(CdbSubscriptionSyncType synctype), the next set - at the next priority level - is
handled by CDB. Not until all subscription points have been acknowledged is the transaction complete.
This implies that if the initiator of the transaction was, for example, a commit command in the CLI, the
command will hang until the notifications have been acknowledged.
Note that even though the notifications are delivered within the transaction, it is not possible for a
subscriber to reject the changes (since this would break the two-phase commit protocol used by the NSO
backplane towards all data providers).
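The lock-step priority ordering described above can be sketched as follows: subscribers are grouped by priority, and each group is notified only after every subscriber in the previous group has acknowledged. This is a plain-Java illustration of the ordering, not the CDB implementation; all names are hypothetical.

```java
import java.util.*;

// Sketch: CDB delivers subscription notifications in lock-step priority
// order -- all subscribers at the lowest numbered priority first, then the
// next level, and so on. Illustration only.
public class PriorityDelivery {

    // Given subscriber name -> priority, return the delivery order.
    public static List<String> deliveryOrder(Map<String, Integer> subscribers) {
        // group subscriber names by priority, lowest first
        TreeMap<Integer, List<String>> byPrio = new TreeMap<>();
        subscribers.forEach((name, prio) ->
                byPrio.computeIfAbsent(prio, p -> new ArrayList<>()).add(name));
        List<String> order = new ArrayList<>();
        for (List<String> group : byPrio.values()) {
            Collections.sort(group); // deterministic within a level
            order.addAll(group);     // next level starts only after all acks
        }
        return order;
    }

    public static void main(String[] a) {
        Map<String, Integer> subs = new HashMap<>();
        subs.put("audit", 100);
        subs.put("alarms", 50);
        subs.put("dns", 100);
        System.out.println(deliveryOrder(subs)); // [alarms, audit, dns]
    }
}
```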
When a client is done subscribing it needs to inform NSO it is ready to receive notifications. This is done
by first calling subscribeDone(), after which the subscription socket is ready to be polled.
As a subscriber has read its subscription notifications using read() it can iterate through the changes that
caused the particular subscription notification using the diffIterate() method.
It is also possible to start a new read-session to the CDB_PRE_COMMIT_RUNNING database to read the
running database as it was before the pending transaction.
Subscriptions towards the operational data in CDB are similar to the above, but due to the fact that the
operational data store is designed for light-weight access, and thus does not have transactions and normally
avoids the use of any locks, there are several differences - in particular:
• Subscription notifications are only generated if the writer obtains the "subscription lock", by using
startSession() with CdbLockType.LOCK_REQUEST. In addition, when starting a session
towards the operational data we need to pass CdbDBType.CDB_OPERATIONAL when starting a
CDB session:
CdbSession sess =
cdb.startSession(CdbDBType.CDB_OPERATIONAL,
EnumSet.of(CdbLockType.LOCK_REQUEST));
• No priorities are used.
• Neither the writer that generated the subscription notifications nor other writers to the same data are
blocked while notifications are being delivered. However the subscription lock remains in effect until
notification delivery is complete.
• The previous value for a modified leaf is not available when using the diffIterate() method.
Essentially a write operation towards the operational data store, combined with the subscription lock,
takes on the role of a transaction for configuration data as far as subscription notifications are concerned.
This means that if operational data updates are done with many single-element write operations, this can
potentially result in a lot of subscription notifications. Thus it is a good idea to use the multi-element
setObject(), which takes an array of ConfValues and sets a complete container, or setValues(),
which takes an array of ConfXMLParam and is capable of setting an arbitrary part of the model. This
keeps down the number of notifications to subscribers when updating operational data.
Write operations that do not attempt to obtain the subscription lock are allowed to proceed even during
notification delivery. Therefore it is the responsibility of the programmer to obtain the lock as needed
when writing to the operational data store. E.g. if subscribers should be able to reliably read the exact data
that resulted from the write that triggered their subscription, the subscription lock must always be obtained
when writing that particular set of data elements. One possibility is of course to obtain the lock for all
writes to operational data, but this may have an unacceptable performance impact.
To view registered subscribers use the ncs --status command. For details on how to use the different
subscription functions see the javadoc for NSO Java API.
• A simple Cdb config subscriber that utilizes the low level Cdb API directly to subscribe to changes in
subtree of the configuration.
• Two Navu Cdb subscribers, one subscribing to configuration changes, and one subscribing to changes
in operational data.
DP API
The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As
the name of the API indicates it is possible to write data provider callbacks that provide data to NSO that
is stored externally. However this is only one of several callback types provided by this API. There exist
callback interfaces for the following types:
• Service Callbacks - invoked for service callpoints in the YANG model. Implements service to
device information mappings. See for example ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
• Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint
directive.
• Authentication Callbacks - invoked for external authentication functions.
• Authorization Callbacks - invoked for external authorization of operations and data. Note, avoid this
callback if possible since performance will otherwise be affected.
• Data Callbacks - invoked for data provision and manipulation for certain data elements in the YANG
model which is defined with a callpoint directive.
• DB Callbacks - invoked for external database stores.
• Range Action Callbacks - A variant of action callback where ranges are defined for the key values.
• Range Data Callbacks - A variant of data callback where ranges are defined for the data values.
• Snmp Inform Response Callbacks - invoked for response on Snmp inform requests on a certain
element in the Yang model which is defined by a callpoint directive.
• Transaction Callbacks - invoked for external participants in the two-phase commit protocol.
• Transaction Validation Callbacks - invoked for external transaction validation in the validation phase
of a two phase commit.
• Validation Callbacks - invoked for validation of certain elements in the YANG model which are
defined with a callpoint directive.
The callbacks are methods in ordinary Java POJOs. These methods are adorned with a specific Java
annotation syntax for that callback type. The annotation makes it possible to add metadata information
to NSO about the supplied method. The annotation includes information about which callType and, when
necessary, which callpoint the method should be invoked for.
Note Only one Java object can be registered on one and the same callpoint. Therefore, when a new Java
object registers on a callpoint which has already been registered, the earlier registration (and Java
object) will be silently removed.
The basic idea for a data provider, is that it participates entirely in each NSO transaction, and it is also
responsible for reading and writing all data in the configuration tree below the callpoint. Before explaining
how to write a data provider and what the responsibilities of a data provider are, we must explain how the
NSO transaction manager drives all participants in a lock step manner through the phases of a transaction.
A transaction has a number of phases, and the external data provider gets called in all the different phases.
This is done by implementing a Transaction callback class and then registering that class. We have the
following distinct phases of an NSO transaction:
• init() In this phase the Transaction callback class's init() method gets invoked. We use an
annotation on the method to indicate that it is the init() method, as in:
public class MyTransCb {
@TransCallback(callType=TransCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException {
return;
}
}
Each different callback method we wish to register must be annotated with an annotation from
TransCBType.
The callback is invoked when a transaction starts, but NSO delays the actual invocation as an
optimization. For a data provider providing configuration data, init() is invoked just before
the first data-reading callback, or just before the transLock() callback (see below), whichever
comes first. When a transaction has started, it is in a state we refer to as READ. NSO will, while
the transaction is in the READ state, execute a series of read operations towards (possibly) different
callpoints in the data provider.
Any write operations performed by the management station are accumulated by NSO and the data
provider doesn't see them while in the READ state.
• transLock() - This callback gets invoked by NSO at the end of the transaction. NSO has
accumulated a number of write operations and will now initiate the final write phases. Once the
transLock() callback has returned, the transaction is in the VALIDATE state. In the VALIDATE
state, NSO will (possibly) execute a number of read operations in order to validate the new
configuration. Following the read operations for validation comes the invocation of either the
writeStart() or the transUnlock() callback.
• transUnlock() - This callback gets invoked by NSO if the validation failed or if the validation
was done separate from the commit (e.g. by giving a validate command in the CLI). Depending
on where the transaction originated, the behavior after a call to transUnlock() differs. If the
transaction originated from the CLI, the CLI reports to the user that the configuration is invalid and
the transaction remains in the READ state whereas if the transaction originated from a NETCONF
client, the NETCONF operation fails and a NETCONF rpc error is reported to the NETCONF client/
manager.
• writeStart() - If the validation succeeded, the writeStart() callback will be called and
the transaction enters the WRITE state. While in WRITE state, a number of calls to the write data
callbacks setElem(), create() and remove() will be performed.
If the underlying database supports real atomic transactions, this is a good place to start such a
transaction.
The application should not modify the real running data here. If, later, the abort() callback is
called, all write operations performed in this state must be undone.
• prepare() - Once all write operations are executed, the prepare() callback is executed. This
callback ensures that all participants have succeeded in writing all elements. The purpose of the
callback is merely to indicate to NSO that the data provider is ok, and has not yet encountered any
errors.
• abort() - If any of the participants die or fail to reply in the prepare() callback, the remaining
participants all get invoked in the abort() callback. All data written so far in this transaction should
be disposed of.
• commit() - If all participants successfully replied in their respective prepare() callbacks, all
participants get invoked in their respective commit() callbacks. This is the place to make all data
written by the write callbacks in WRITE state permanent.
• finish() - And finally, the finish() callback gets invoked at the end. This is a good place to
deallocate any local resources for the transaction.
The finish() callback can be called from several different states.
The following picture illustrates the conceptual state machine an NSO transaction goes through.
All callback methods are optional. If a callback method is not implemented, it is the same as having an
empty callback which simply returns.
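The phase sequence described above can be sketched as a small transition table. The state names mirror the callbacks in the text (transLock() moves READ to VALIDATE, writeStart() or transUnlock() leaves VALIDATE, and so on); the table itself is an illustration of the prose, not an NSO API.

```java
import java.util.*;

// Sketch of the NSO transaction phase sequence as a tiny state machine.
// Illustration only; states and transitions paraphrase the text above.
public class TransStateMachine {

    public enum State { INIT, READ, VALIDATE, WRITE, PREPARED, COMMITTED, ABORTED, FINISHED }

    static final Map<State, EnumSet<State>> NEXT = new EnumMap<>(State.class);
    static {
        NEXT.put(State.INIT,      EnumSet.of(State.READ));
        NEXT.put(State.READ,      EnumSet.of(State.VALIDATE, State.FINISHED)); // transLock()
        NEXT.put(State.VALIDATE,  EnumSet.of(State.WRITE, State.READ));        // writeStart() / transUnlock()
        NEXT.put(State.WRITE,     EnumSet.of(State.PREPARED, State.ABORTED));  // prepare() / abort()
        NEXT.put(State.PREPARED,  EnumSet.of(State.COMMITTED, State.ABORTED)); // commit() / abort()
        NEXT.put(State.COMMITTED, EnumSet.of(State.FINISHED));                 // finish()
        NEXT.put(State.ABORTED,   EnumSet.of(State.FINISHED));                 // finish()
    }

    public static boolean legal(State from, State to) {
        return NEXT.getOrDefault(from, EnumSet.noneOf(State.class)).contains(to);
    }

    public static void main(String[] a) {
        System.out.println(legal(State.WRITE, State.PREPARED)); // normal path
        System.out.println(legal(State.READ, State.WRITE));     // must validate first
    }
}
```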
Similar to how we have to register Transaction callbacks, we must also register data callbacks. The
transaction callbacks cover the life span of the transaction, and the data callbacks are used to read and
write data inside a transaction. The data callbacks have access to what is referred to as the transaction
context in the form of a DpTrans object.
• getElem() This callback is invoked by NSO when NSO needs to read the actual value of a leaf
element. We must also implement the getElem() callback for the keys. NSO invokes getElem()
on a key as an existence test.
We define the getElem callback inside a class as:
public static class DataCb {
@DataCallback(callPoint="foo", callType=DataCBType.GET_ELEM)
public ConfValue getElem(DpTrans trans, ConfObject[] kp)
throws DpCallbackException {
.....
• existsOptional() This callback is called for all typeless and optional elements, i.e. presence
containers and leafs of type empty. If we have presence containers or leafs of type empty, we cannot
use the getElem() callback to read the value of such a node, since it does not have a type. An example
of a data model could be:
container bs {
presence "";
tailf:callpoint bcp;
list b {
key name;
max-elements 64;
leaf name {
type string;
}
container opt {
presence "";
leaf ii {
type int32;
}
}
leaf foo {
type empty;
}
}
}
The above YANG fragment has 3 nodes that may or may not exist and that do not have a type. If we
do not have any such elements, nor any operational data lists without keys (see below), we do not
need to implement the existsOptional() callback.
If we have the above data model, we must implement the existsOptional() callback, and our
implementation must be prepared to reply to calls of the function for the paths /bs, /bs/b/opt, and /bs/b/foo.
The leaf /bs/b/opt/ii is not mandatory, but it does have a type, namely int32, and thus the existence of that
leaf will be determined through a call to the getElem() callback.
The existsOptional() callback may also be invoked by NSO as "existence test" for an entry in an
operational data list without keys. Normally this existence test is done with a getElem() request for
the first key, but since there are no keys, this callback is used instead. Thus if we have such lists, we
must also implement this callback, and handle a request where the keypath identifies a list entry.
• iterator() and getKey() This pair of callbacks is used when NSO wants to traverse a YANG
list. The job of the iterator() callback is to return an Iterator object that is invoked by the
library. For each Object returned by the iterator, the NSO library will invoke the getKey()
callback on the returned object. The getKey() callback shall return a ConfKey value.
An alternative to the getKey() callback is to register the optional getObject() callback whose
job it is to return not just the key, but the entire YANG list entry. It is possible to register both
getKey() and getObject() or either. If the getObject() is registered, NSO will attempt to
use it only when bulk retrieval is executed.
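The division of labor between iterator() and getKey() can be sketched in plain Java as follows, with simple stand-ins for the NSO types (the real callbacks receive DpTrans and ConfObject[] arguments and return a ConfKey); the names here are hypothetical.

```java
import java.util.*;

// Sketch: NSO drives list traversal through an iterator() callback and a
// getKey() callback invoked once per returned object. Plain-Java stand-ins
// for the NSO signatures; illustration only.
public class ListTraversal {

    public static class Item {
        public final int key; public final String title;
        public Item(int key, String title) { this.key = key; this.title = title; }
    }

    // stand-in for the iterator() callback
    static Iterator<Item> iterator(List<Item> db) { return db.iterator(); }

    // stand-in for the getKey() callback: extract the key from one object
    static int getKey(Item i) { return i.key; }

    // how the library conceptually combines the two callbacks
    public static List<Integer> traverse(List<Item> db) {
        List<Integer> keys = new ArrayList<>();
        for (Iterator<Item> it = iterator(db); it.hasNext(); ) {
            keys.add(getKey(it.next())); // getKey() called per element
        }
        return keys;
    }

    public static void main(String[] a) {
        List<Item> db = Arrays.asList(new Item(1, "a"), new Item(7, "b"));
        System.out.println(traverse(db)); // [1, 7]
    }
}
```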
We also have two additional optional callbacks that may be implemented for efficiency reasons.
• getObject() If this optional callback is implemented, the work of the callback is to return an
entire object, i.e. a list instance. This is not the same getObject() as the one that is used in
combination with the iterator() callback.
• numInstances() When NSO needs to figure out how many instances we have of a certain
element, by default NSO will repeatedly invoke the iterator() callback. If this callback is
installed, it will be called instead.
The following example illustrates an external data provider. The example can be run from the
examples collection. It resides under ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/6-extern-db.
The example comes with a tailor-made database - MyDb. That source code is provided with the example
but not shown here. However, the functionality will be obvious from method names like newItem(),
lock(), save() etc.
Two classes are implemented, one for the Transaction callbacks and another for the Data callbacks.
The data model we wish to incorporate into NSO is a trivial list of work items. It looks like:
Example 69. work.yang
module work {
namespace "http://example.com/work";
prefix w;
import ietf-yang-types {
prefix yang;
}
import tailf-common {
prefix tailf;
}
description "This model is used as a simple example model
illustrating how to have NCS configuration data
that is stored outside of NCS - i.e not in CDB";
revision 2010-04-26 {
description "Initial revision.";
}
container work {
tailf:callpoint workPoint;
list item {
key key;
leaf key {
type int32;
}
leaf title {
type string;
}
leaf responsible {
type string;
}
leaf comment {
type string;
}
}
}
}
Note the callpoint directive in the model; it indicates that an external Java callback must register itself
using that name. That callback will be responsible for all data below the callpoint.
To compile the work.yang data model, and to generate Java code for the data model, we invoke
make all in the example package src directory. The Makefile will compile the YANG files in the package,
generate Java code for those data models and then also invoke ant in the Java src directory.
The Data callback class looks as follows:
Example 70. DataCb class
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.ITERATOR)
public Iterator<Object> iterator(DpTrans trans,
ConfObject[] keyPath)
throws DpCallbackException {
return MyDb.iterator();
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_NEXT)
public ConfKey getKey(DpTrans trans, ConfObject[] keyPath,
Object obj)
throws DpCallbackException {
Item i = (Item) obj;
return new ConfKey( new ConfObject[] { new ConfInt32(i.key) });
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_ELEM)
public ConfValue getElem(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.SET_ELEM)
public int setElem(DpTrans trans, ConfObject[] keyPath,
ConfValue newval)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.CREATE)
public int create(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.REMOVE)
public int remove(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.NUM_INSTANCES)
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_OBJECT)
public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
ConfInt32 kv = (ConfInt32) ((ConfKey) keyPath[0]).elementAt(0);
Item i = MyDb.findItem( kv.intValue() );
if (i == null) return null; // not found
return getObject(trans, keyPath, i);
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_NEXT_OBJECT)
public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath,
Object obj)
throws DpCallbackException {
Item i = (Item) obj;
return new ConfValue[] {
new ConfInt32(i.key),
new ConfBuf(i.title),
new ConfBuf(i.responsible),
new ConfBuf(i.comment)
};
}
First we see how the Java annotations are used to declare the type of callback for each method. Secondly,
we see how the getElem() callback inspects the keyPath parameter passed to it to figure out exactly
which element NSO wants to read. The keyPath is an array of ConfObject values. Keypaths
are central to the understanding of the NSO Java library since they are used to denote objects in the
configuration. A keypath uniquely identifies an element in the instantiated configuration tree.
Furthermore, the getElem() callback switches on the tag keyPath[0], which is a ConfTag, using
symbolic constants from the class "work". The "work" class was generated through the call to ncsc
--emit-java ....
The three write callbacks, setElem(), create() and remove(), all return the value
Conf.REPLY_ACCUMULATE. If our backend database has real support for aborting transactions, it is a
good idea to initiate a new backend database transaction in the Transaction callback init() (more on
that later), whereas if our backend database doesn't support proper transactions, we can fake real
transactions by returning Conf.REPLY_ACCUMULATE instead of actually writing the data. Since the
final verdict of the NSO transaction as a whole may very well be to abort the transaction, we must be
prepared to undo all write operations. The Conf.REPLY_ACCUMULATE return value means that we ask
the library to cache the write for us.
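The accumulate idea - cache writes during the WRITE state, apply them only in prepare()/commit(), and discard the cache in abort() - can be sketched as follows. This is a plain-Java illustration of the pattern with hypothetical names, not the NSO library's implementation.

```java
import java.util.*;

// Sketch of the REPLY_ACCUMULATE pattern: writes are cached, not applied,
// until the transaction's final verdict is known. Illustration only.
public class AccumulateSketch {

    private final Map<String, String> store = new HashMap<>();    // the "database"
    private final List<String[]> accumulated = new ArrayList<>(); // cached writes

    // WRITE state: cache the write instead of applying it
    public void setElem(String path, String value) {
        accumulated.add(new String[] { path, value });
    }

    // prepare()/commit(): apply all cached writes to the database
    public void prepareAndCommit() {
        for (String[] w : accumulated) store.put(w[0], w[1]);
        accumulated.clear();
    }

    // abort(): nothing was written, so there is nothing to undo
    public void abort() {
        accumulated.clear();
    }

    public String get(String path) { return store.get(path); }

    public static void main(String[] a) {
        AccumulateSketch db = new AccumulateSketch();
        db.setElem("/work/item{1}/title", "fix bug");
        System.out.println(db.get("/work/item{1}/title")); // null: not applied yet
        db.prepareAndCommit();
        System.out.println(db.get("/work/item{1}/title")); // fix bug
    }
}
```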
The Transaction callback class, looks like:
Example 71. TransCb class
@TransCallback(callType=TransCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException {
return;
}
@TransCallback(callType=TransCBType.TRANS_LOCK)
public void transLock(DpTrans trans) throws DpCallbackException {
MyDb.lock();
}
@TransCallback(callType=TransCBType.TRANS_UNLOCK)
public void transUnlock(DpTrans trans) throws DpCallbackException {
MyDb.unlock();
}
@TransCallback(callType=TransCBType.PREPARE)
public void prepare(DpTrans trans) throws DpCallbackException {
Item i;
ConfInt32 kv;
for (Iterator<DpAccumulate> it = trans.accumulated();
it.hasNext(); ) {
DpAccumulate ack= it.next();
// check op
switch (ack.getOperation()) {
case DpAccumulate.SET_ELEM:
kv = (ConfInt32) ((ConfKey) ack.getKP()[1]).elementAt(0);
if ((i = MyDb.findItem( kv.intValue())) == null)
break;
// check leaf tag
ConfTag leaf = (ConfTag) ack.getKP()[0];
switch (leaf.getTagHash()) {
case work._title:
i.title = ack.getValue().toString();
break;
case work._responsible:
i.responsible = ack.getValue().toString();
break;
case work._comment:
i.comment = ack.getValue().toString();
break;
}
break;
case DpAccumulate.CREATE:
kv = (ConfInt32) ((ConfKey) ack.getKP()[0]).elementAt(0);
MyDb.newItem(new Item(kv.intValue()));
break;
case DpAccumulate.REMOVE:
kv = (ConfInt32) ((ConfKey) ack.getKP()[0]).elementAt(0);
MyDb.removeItem(kv.intValue());
break;
}
}
try {
MyDb.save("running.prep");
} catch (Exception e) {
throw
new DpCallbackException("failed to save file: running.prep",
e);
}
}
@TransCallback(callType=TransCBType.ABORT)
public void abort(DpTrans trans) throws DpCallbackException {
MyDb.restore("running.DB");
MyDb.unlink("running.prep");
}
@TransCallback(callType=TransCBType.COMMIT)
public void commit(DpTrans trans) throws DpCallbackException {
try {
MyDb.rename("running.prep","running.DB");
} catch (DpCallbackException e) {
throw new DpCallbackException("commit failed");
}
}
@TransCallback(callType=TransCBType.FINISH)
public void finish(DpTrans trans) throws DpCallbackException {
;
}
}
We can see how the prepare() callback goes through all write operations and actually executes them
towards our database MyDb.
• init() Similar to the transaction init() callback. However note that unlike the case with
transaction and data callbacks, both init() and action() are registered for each actionpoint
(i.e. different action points can have different init() callbacks), and there is no finish()
callback - the action is completed when the action() callback returns.
• action() This callback is invoked to actually execute the rpc or action. It receives the input
parameters (if any) and returns the output parameters (if any).
Beneath the service callback definition we add an action callback definition, so the resulting YANG looks
like the following:
uses ncs:service-data;
ncs:servicepoint vlanspnt;
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint vlanselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
}
/**
* Selftest action implementation for service
*/
@ActionCallback(callPoint="l3vpn-self-test", callType=ActionCBType.ACTION)
public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
try {
// Refer to the service yang model prefix
String nsPrefix = "l3vpn";
// Get the service instance key
String str = ((ConfKey)kp[0]).toString();
Validation Callbacks
In the VALIDATE state of a transaction, NSO will validate the new configuration. This consists of
verifying that specific YANG constraints, such as min-elements and unique, as well as arbitrary
constraints specified by must expressions, are satisfied. The use of must expressions is the recommended
way to specify constraints on relations between different parts of the configuration, both due to its
declarative and concise form and due to performance considerations, since the expressions are evaluated
internally by the NSO transaction engine.
In some cases it may still be warranted to implement validation logic via callbacks in code. The YANG
model will then specify a validation point by means of a tailf:validate statement. By default,
the callback registered for a validation point will be invoked whenever a configuration is validated,
since the callback logic will typically depend on data in other parts of the configuration, and these
dependencies are not known to NSO. Thus it is important, from a performance point of view, to specify the
actual dependencies by means of tailf:dependency substatements to the validate statement.
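As a hypothetical illustration (the leaf names and validation point below are invented for this sketch, not taken from an NSO example), a validation point with an explicit dependency could look like:

```yang
leaf max-sessions {
  type uint32;
  tailf:validate check-max-sessions {
    // only changes under ../sessions (or to this leaf) trigger
    // re-invocation of the registered validation callback
    tailf:dependency "../sessions";
  }
}
```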
Validation callbacks use the MAAPI API to attach to the current transaction. This makes it possible to read
the configuration data that is to be validated, even though the transaction is not committed yet. The view of
the data is effectively the pre-existing configuration "shadowed" by the changes in the transaction, and thus
exactly what the new configuration will look like if it is committed.
Similar to the case of transaction and data callbacks, there are transaction validation callbacks that are
invoked when the validation phase starts and stops, and validation callbacks that are invoked for the
specific validation points in the YANG model.
The transaction validation callbacks are:
• init() This callback is invoked when the validation phase starts. It will typically attach to the
current transaction:
Example 72. Attach Maapi to the current transaction
public class SimpleValidator implements DpTransValidateCallback{
...
@TransValidateCallback(callType=TransValidateCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException{
try {
th = trans.thandle;
maapi.attach(th, new MyNamespace().hash(), trans.uinfo.usid);
..
} catch(Exception e) {
throw new DpCallbackException("failed to attach via maapi: "+
e.getMessage());
}
}
}
• stop() This callback is invoked when the validation phase ends. If init() attached to the
transaction, stop() should detach from it.
Transforms
Transforms implement a mapping between one part of the data model - the front-end of the transform - and
another part - the back-end of the transform. Typically the front-end is visible to northbound interfaces,
while the back-end is not, but for operational data (config false in the data model), a transform
may implement a different view (e.g. aggregation) of data that is also visible without going through the
transform.
The implementation of a transform uses techniques already described in this section: Transaction and
data callbacks are registered and invoked when the front-end data is accessed, and the transform uses the
MAAPI API to attach to the current transaction, and accesses the back-end data within the transaction.
To specify that the front-end data is provided by a transform, the data model uses the
tailf:callpoint statement with a tailf:transform true substatement. Since transforms do
not participate in the two-phase commit protocol, they only need to register the init() and finish()
transaction callbacks. The init() callback attaches to the transaction, and finish() detaches from
it. Also, a transform for operational data only needs to register the data callbacks that read data, i.e.
getElem(), existsOptional(), etc.
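In the data model, such a transform front-end could be declared as follows (the container and callpoint names here are hypothetical):

```yang
container front-end-view {
  // data under this container is produced by the transform code
  tailf:callpoint view-transform {
    tailf:transform true;
  }
  .....
}
```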
Hooks
Hooks make it possible to have changes to the configuration trigger additional changes. In general this
should only be done when the data that is written by the hook is not visible to northbound interfaces,
since otherwise the additional changes will make it difficult for e.g. EMS or NMS systems to manage the
configuration - the complete configuration resulting from a given change cannot be predicted. However,
one use case in NSO for hooks that trigger visible changes is precisely to model managed devices that
have this behavior: hooks in the device model can emulate what the device does on certain configuration
changes, and thus the device configuration in NSO remains in sync with the actual device configuration.
The implementation technique for a hook is very similar to that for a transform. Transaction and data
callbacks are registered, and the MAAPI API is used to attach to the current transaction and write the
additional changes into the transaction. As for transforms, only the init() and finish() transaction
callbacks need to be registered, to do the MAAPI attach and detach. However, only data callbacks that
write data, i.e. setElem(), create(), etc., need to be registered, and depending on which changes
should trigger the hook invocation, it is possible to register only a subset of those. For example, if the
hook is registered for a leaf in the data model, and only changes to the value of that leaf should trigger
invocation of the hook, it is sufficient to register setElem().
To specify that changes to some part of the configuration should trigger a hook invocation, the data model
uses the tailf:callpoint statement with a tailf:set-hook or tailf:transaction-hook
substatement. A set-hook is invoked immediately when a northbound agent requests a write
operation on the data, while a transaction-hook is invoked when the transaction is committed. For the
NSO-specific use case mentioned above, a set-hook should be used. The tailf:set-hook and
tailf:transaction-hook statements take an argument specifying the extent of the data model the
hook applies to.
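A hypothetical set-hook declaration could look as follows (the leaf and callpoint names are invented for this sketch):

```yang
leaf host-name {
  type string;
  tailf:callpoint host-name-hook {
    // invoked immediately when a northbound agent writes this leaf
    tailf:set-hook node;
  }
}
```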
NED API
NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic
like with NETCONF or SNMP, and depending on the type of interface the device has for configuration,
this may involve some programming. Devices with a Cisco-style CLI can, however, be managed by writing
YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the
communication with the devices. Refer to Chapter 2, Network Element Drivers (NEDs) in NSO 5.3 NED
Development for more information.
NAVU API
The NAVU API provides a DOM-driven approach to navigating the NSO service and device models. The
main features of the NAVU API are dynamic schema loading at start up and lazy loading of instance data.
The navigation model is based on the YANG language structure. In addition to navigation and reading
of values, NAVU also provides methods to modify the data model. Furthermore, it supports execution of
actions modelled in the service model.
By using NAVU it is easy to drill down through tree structures with minimal effort, using the node-by-node
navigation primitives. Alternatively, we can use the NAVU search feature. This feature is especially useful
when we need to find information deep down in the model structures.
NAVU requires all models, i.e. the complete NSO service model with all its augmented sub-models. These
are loaded at runtime from NSO, which has in turn acquired them from the loaded .fxs files. The .fxs files
are a product of the ncsc tool, which compiles them from the .yang files.
The ncsc tool can also generate Java classes from the .yang files. These classes, extending the
ConfNamespace base class, are the Java representation of the models and contain all defined nametags and
their corresponding hash values. These Java classes can, optionally, be used as help classes in the service
applications to make NAVU navigation type safe, e.g. eliminating errors from misspelled model container
names.
The service models are loaded at start up and are always the latest version. The models are always
traversed in a lazy fashion, i.e. data is only loaded when it is needed, in order to minimize the
amount of data transferred between NSO and the service applications.
The most important classes of NAVU are the classes implementing the YANG node types. These are used
to navigate the DOM. These classes are as follows.
• NavuContainer - the NavuContainer is a container representing either the root of the model, a YANG
module root, or a YANG container.
• NavuList - the NavuList represents a YANG list node.
• NavuListEntry - list node entry.
• NavuLeaf - the NavuLeaf represents a YANG leaf node.
The remaining part of this section will guide us through the most useful features of NAVU. Should
further information be required, please refer to the corresponding Javadoc pages.
NAVU relies on MAAPI as the underlying interface to access NSO. The starting point in NAVU
configuration is to create a NavuContext instance using the NavuContext(Maapi maapi)
constructor. To read and/or write data, a transaction has to be started in Maapi. There are methods in the
NavuContext class to start and handle this transaction.
If data is to be written, the NAVU transaction has to be started differently depending on whether the
data is configuration or operational data. Such a transaction is started by the method
NavuContext.startRunningTrans() or NavuContext.startOperationalTrans(),
respectively. The Javadoc describes this in more detail.
When navigating using NAVU we always start by creating a NavuContainer and passing in the
NavuContext instance; this is a base container from which navigation can be started. Furthermore, we
need to create a root NavuContainer, which is the top of the YANG module in which to navigate.
This is done by using the NavuContainer.container(int hash) method, where the argument is
the hash value for the module's namespace.
Example 75. NSO Module
module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
}
NAVU maps the YANG node types container, list, leaf and leaf-list into its own structure. As mentioned
previously, NavuContainer is used to represent both the module and the container node type. The
NavuListEntry is used to represent a list node instance (NavuListEntry actually extends
NavuContainer), i.e. an element of a list node.
list device {
    key name;
    leaf name {
        type string;
    }
    ....
}
If the purpose is to directly access a list node, we would typically navigate directly to the list element
using the NAVU primitives.
Alternatively, if the purpose is to drill down deep into a structure, we should use select(). The
select() method offers a wildcard-based search, using a recursive regexp match against the children of
a node. The search is relative and can be performed from any node in the structure.
All of the above are valid ways of traversing the lists, depending on the purpose. If we know what we want,
we use direct access. If we want to apply something to a large number of nodes, we use select().
An alternative method is xPathSelect(), where an XPath query can be issued instead.
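The per-segment regular-expression matching that select() performs can be illustrated with a standalone toy tree (a Python sketch of the idea only, not the actual NAVU implementation):

```python
import re

class Node:
    """Toy stand-in for a NAVU node: a name plus named children."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def select(self, pattern):
        # Split the pattern into path segments and match each segment as
        # a regexp against the children at that level, level by level.
        return self._select(pattern.split("/"))

    def _select(self, segments):
        if not segments:
            return [self]
        head, rest = segments[0], segments[1:]
        hits = []
        for child in self.children:
            if re.fullmatch(head, child.name):
                hits.extend(child._select(rest))
        return hits

# A miniature devices tree: two "ce" devices and one "pe" device.
root = Node("devices", [
    Node("ce0", [Node("config")]),
    Node("ce1", [Node("config")]),
    Node("pe0", [Node("config")]),
])

ce_configs = root.select("ce.*/config")  # config nodes of ce0 and ce1
```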
.....
context.finishClearTrans();
NavuContainer and NavuList are structural nodes in NAVU, i.e. they have no values. Values
are always kept by NavuLeaf. A NavuLeaf represents the YANG node type leaf, and can
be both read and set. NavuLeafList represents the YANG node type leaf-list and has some features in
common with both NavuLeaf (which it inherits from) and NavuList.
Example 82. NSO Leaf
module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
container ncs {
.....
list service {
key object-id;
leaf object-id {
type string;
}
....
leaf reference {
type string;
}
....
}
}
.......
}
}
To read a leaf we simply navigate to it and request the value, and in the same manner we
can update the value.
In addition to the standard YANG node types, NAVU also supports the Tail-f proprietary node type
action. An action is represented as a NavuAction. It differs from an ordinary container in that it
can be executed using the call() primitive. Input and output parameters are represented as ordinary
nodes. The action extension of YANG allows an arbitrary structure to be defined for both input and output
parameters.
Consider the excerpt below. It represents a module on a managed device. When connected and
synchronized to the NSO, the module will appear in the /devices/device/config container.
Example 84. YANG Action
module interfaces {
namespace "http://router.com/interfaces";
prefix i;
.....
list interface {
key name;
max-elements 64;
tailf:action ping-test {
description "ping a machine ";
tailf:exec "/tmp/mpls-ping-test.sh" {
tailf:args "-c $(context) -p $(path)";
}
input {
leaf ttl {
type int8;
}
}
output {
container rcon {
leaf result {
type string;
}
leaf ip {
type inet:ipv4-address;
}
leaf ival {
type int8;
}
}
}
}
.....
.....
}
To execute this action we need to access a device with this module loaded. This is done in a similar
way as for non-action nodes.
/*
* Execute ping on all devices with the interface module.
*/
for (NavuNode node: ncs.container(Ncs._devices_).
         select("device/.*/config/interface/.*")) {
    NavuContainer iface = (NavuContainer) node;
    // "ping" refers to the ping-test NavuAction obtained from iface
    // (its retrieval is elided in this excerpt).
    /*
     * Execute action.
     */
    ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
        new ConfXMLParamValue(new interfaces().hash(),
                              interfaces._ttl,
                              new ConfInt64(64))});
    // Alternative invocation with the parameters as an XML string:
    result = ping.call("<if:ttl>64</if:ttl>");
    /*
     * Output the result of the action.
     */
    System.out.println("result_ip: " +
        ((ConfXMLParamValue) result[1]).getValue().toString());
    System.out.println("result_ival: " +
        ((ConfXMLParamValue) result[2]).getValue().toString());
}
.....
context.finishClearTrans();
/*
* Execute ping on all devices with the interface module.
*/
for (NavuNode node: ncs.container(Ncs._devices_).
         xPathSelect("device/config/interface")) {
    NavuContainer iface = (NavuContainer) node;
    // "ping" refers to the ping-test NavuAction obtained from iface
    // (its retrieval is elided in this excerpt).
    /*
     * Execute action.
     */
    ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
        new ConfXMLParamValue(new interfaces().hash(),
                              interfaces._ttl,
                              new ConfInt64(64))});
    // Alternative invocation with the parameters as an XML string:
    result = ping.call("<if:ttl>64</if:ttl>");
    /*
     * Output the result of the action.
     */
    System.out.println("result_ip: " +
        ((ConfXMLParamValue) result[1]).getValue().toString());
    System.out.println("result_ival: " +
        ((ConfXMLParamValue) result[2]).getValue().toString());
}
.....
context.finishClearTrans();
The examples above have described how to attach to the NSO module and navigate through the data model
using the NAVU primitives. When using NAVU in the scope of the NSO Service Manager, we normally
do not have to worry about attaching the NavuContainer to the NSO data model; NSO does this for us,
providing NavuContainer nodes pointing at the nodes of interest.
ALARM API
Since this API is capable of both producing and consuming alarms, it can be used
both northbound and eastbound. It adheres to the NSO alarm model.
For more information see Chapter 6, The Alarm Manager in NSO 5.3 User Guide.
For situations where alarm subscription outside of the NSO java-vm is desired, the
AlarmSourceCentral is started by opening a Cdb socket, passing this Cdb to the
AlarmSourceCentral class, and then calling the start() method.
// Set up a CDB socket
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Cdb cdb = new Cdb("my-alarm-source-socket", socket);
// Get and start alarm source - this must only be done once per JVM
AlarmSourceCentral source =
AlarmSourceCentral.getAlarmSource(10000, cdb);
source.start();
To retrieve alarms from the AlarmSource listener, an initial startListening() call is required. Then
either a blocking takeAlarm() or a timeout-based pollAlarm() can be used to retrieve the alarms.
The first method will wait indefinitely for new alarms to arrive, while the second will time out if an alarm
has not arrived in the stipulated time. When a listener is no longer needed, a stopListening()
call should be issued to deactivate it.
mySource.startListening();
try {
    while (true) {
        alarm = mySource.takeAlarm();
        .....
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    mySource.stopListening();
}
Both the takeAlarm() and the pollAlarm() methods return an Alarm object from which all alarm
information can be retrieved.
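The semantic difference between the two retrieval methods can be mimicked with Python's standard queue module (a stand-in sketch, not the ALARM API itself):

```python
import queue

class ToyAlarmSource:
    """Stand-in illustrating takeAlarm() vs pollAlarm() semantics."""

    def __init__(self):
        self._q = queue.Queue()

    def raise_alarm(self, alarm):
        self._q.put(alarm)

    def take_alarm(self):
        # Blocks indefinitely until an alarm arrives, like takeAlarm().
        return self._q.get()

    def poll_alarm(self, timeout_s):
        # Waits at most timeout_s seconds, like pollAlarm(); returns
        # None on timeout instead of blocking forever.
        try:
            return self._q.get(timeout=timeout_s)
        except queue.Empty:
            return None

src = ToyAlarmSource()
src.raise_alarm("link-down")
first = src.take_alarm()       # returns immediately: one alarm is queued
missed = src.poll_alarm(0.05)  # times out: the queue is now empty
```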
To directly store alarms an AlarmSink instance is created using the AlarmSink(Maapi maapi)
constructor.
//
// Maapi socket used to write alarms directly.
//
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("system", InetAddress.getByName(host),
"system", new String[] {},
MaapiUserSessionFlag.PROTO_TCP);
On the other hand, if the alarms are to be stored using the AlarmSinkServer, then the AlarmSink()
constructor without arguments is used.
AlarmSink sink = new AlarmSink();
However, this case requires that the AlarmSinkServer is started prior to the instantiation of the
AlarmSink. The NSO java-vm will take care of starting this server, so any use of the ALARM API inside
the java-vm can expect this server to be running. If it is desired to store alarms in an application outside of
the NSO java-vm, the AlarmSinkServer needs to be started as in the following example:
//
// You will need a Maapi socket to write you alarms.
//
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("system", InetAddress.getByName(host),
"system", new String[] {},
MaapiUserSessionFlag.PROTO_TCP);
To store an alarm using the AlarmSink, an Alarm instance must be created. This alarm instance is
then stored by a call to the submitAlarm() method.
ArrayList<AlarmId> idList = new ArrayList<AlarmId>();
ConfIdentityRef alarmType =
new ConfIdentityRef(NcsAlarms.hash,
NcsAlarms._ncs_dev_manager_alarm);
ManagedObject managedObject1 =
new ManagedObject("/ncs:devices/device{device0}/config/root1");
ManagedObject managedObject2 =
new ManagedObject("/ncs:devices/device{device0}/config/root2");
ManagedObject managedObject3 =
new ManagedObject("/ncs:devices/device{device0}/config/root3");
Alarm myAlarm =
new Alarm(new ManagedDevice("device0"),
managedObject3,
alarmType,
PerceivedSeverity.WARNING,
false,
"This is a warning",
null,
idList,
null,
ConfDatetime.getConfDatetime(),
new AlarmAttribute(myAlarm.hash,
myAlarm._custom_alarm_attribute_,
new ConfBuf("An alarm attribute")),
new AlarmAttribute(myAlarm.hash,
myAlarm._custom_status_change_,
new ConfBuf("A status change")));
sink.submitAlarm(myAlarm);
NOTIF API
Applications can subscribe to certain events generated by NSO. The event types are defined by the
com.tailf.notif.NotificationType enumeration. The following notifications can be subscribed
to:
• NotificationType.NOTIF_AUDIT - all audit log events are sent from NSO on the event notification
socket.
• NotificationType.NOTIF_COMMIT_SIMPLE - an event indicating that a user has somehow
modified the configuration.
• NotificationType.NOTIF_COMMIT_DIFF - an event indicating that a user has somehow
modified the configuration. The main difference between this event and the above mentioned
NOTIF_COMMIT_SIMPLE is that this event is synchronous, i.e. the entire transaction hangs until
we have explicitly called Notif.diffNotificationDone(). The purpose of this event is to
give the applications a chance to read the configuration diffs from the transaction before it commits.
A user subscribing to this event can use Maapi.attach() to attach to the running transaction and
Maapi.diffIterate() to iterate through the diff.
• NotificationType.NOTIF_COMMIT_FAILED - This event is generated when a data provider fails in
its commit callback. NSO executes a two-phase commit procedure towards all data providers when
committing transactions. When a provider fails in commit, the system is in an unknown state. If the
provider is "external", the name of the failing daemon is provided. If the provider is another NETCONF
agent, the IP address and port of that agent are provided.
• NotificationType.NOTIF_COMMIT_PROGRESS - This event provides progress information about
the commit of a transaction, i.e. the same information that is reported when the commit | details CLI
command is used.
• NotificationType.NOTIF_CONFIRMED_COMMIT - This event is generated when a user has started
a confirmed commit, when a confirming commit is issued, or when a confirmed commit is aborted;
represented by ConfirmNotification.confirm_type. For a confirmed commit, the timeout
value is also present in the notification.
• NotificationType.NOTIF_FORWARD_INFO - This event is generated whenever the server forwards
(proxies) a northbound agent.
• NotificationType.NOTIF_HA_INFO - an event related to NSOs perception of the current cluster
configuration.
• NotificationType.NOTIF_HEARTBEAT - This event can be used by applications that wish to
monitor the health and liveness of the server itself. It needs to be requested through a Notif instance
which has been constructed with a heartbeat_interval. The server will continuously generate heartbeat
events on the notification socket; if the server fails to do so, the server is hung. The timeout interval
is measured in milliseconds. The recommended value is 10000 milliseconds, to cater for truly high-load
situations. Values less than 1000 are changed to 1000.
• NotificationType.NOTIF_SNMPA - This event is generated whenever an SNMP pdu is processed by
the server. The application receives an SnmpaNotification with a list of all varbinds in the pdu. Each
varbind contains subclasses that are internal to the SnmpaNotification.
• NotificationType.NOTIF_SUBAGENT_INFO - only sent if NSO runs as a master agent with
subagents enabled. This event is sent when the subagent connection is lost or reestablished. There are
two event types, defined in SubagentNotification.subagent_info_type: "subagent
up" and "subagent down".
• NotificationType.NOTIF_DAEMON - all log events that also go to the /NCSConf/logs/NSCLog
log are sent from NSO on the event notification socket.
• NotificationType.NOTIF_NETCONF - all log events that also go to the /NCSConf/logs/netconfLog
log are sent from NSO on the event notification socket.
• NotificationType.NOTIF_DEVEL - all log events that also go to the /NCSConf/logs/develLog log
are sent from NSO on the event notification socket.
• NotificationType.NOTIF_TAKEOVER_SYSLOG - If this flag is present, NSO will stop syslogging.
The idea behind the flag is that we want to configure syslogging for NSO in order to let NSO log its
startup sequence. Once NSO is started, we wish to take over the syslogging done by NSO. Typical
applications that use this flag want to pick up all log messages, reformat them and use some local
logging method. Once all subscriber sockets with this flag set are closed, NSO will resume syslogging.
• NotificationType.NOTIF_UPGRADE_EVENT - This event is generated for the different phases of an
in-service upgrade, i.e. when the data model is upgraded while the server is running. The application
receives an UpgradeNotification where the UpgradeNotification.event_type gives
the specific upgrade event. The events correspond to the invocation of the Maapi functions that drive
the upgrade.
To receive events from NSO, the application opens a socket and passes it to the notification base class
com.tailf.notif.Notif together with an EnumSet of NotificationType for all types of notifications
that should be received. Looping over the Notif.read() method will read and deliver notifications,
which are all subclasses of the com.tailf.notif.Notification base class.
Socket sock = new Socket("localhost", Conf.NCS_PORT);
EnumSet notifSet = EnumSet.of(NotificationType.NOTIF_COMMIT_SIMPLE,
NotificationType.NOTIF_AUDIT);
Notif notif = new Notif(sock, notifSet);
while (true) {
Notification n = notif.read();
if (n instanceof CommitNotification) {
// handle NOTIF_COMMIT_SIMPLE case
.....
} else if (n instanceof AuditNotification) {
// handle NOTIF_AUDIT case
.....
}
}
HA API
The HA API is used to set up and control High Availability cluster nodes. This package is used to connect
to the High Availability (HA) subsystem. Configuration data can then be replicated on several nodes in a
cluster. (Chapter 8, High Availability in NSO 5.3 Administration Guide)
The following example configures three nodes in an HA cluster. One is set as master and the other two as
slaves.
ConfHaNode master =
new ConfHaNode(new ConfBuf("node0"),
new ConfIPv4(InetAddress.getByName("localhost")));
ha0.beMaster(master.nodeid);
....
• Value - concrete value classes which inherit ConfValue, which in turn is a subclass of
ConfObject.
• TypeDescriptor - a class representing the type of a ConfValue. A type descriptor is represented as
an instance of ConfTypeDescriptor. Its primary usage is to be able to map a ConfValue to its
internal integer value representation or vice versa.
• Tag - a tag is a representation of an element in the YANG model. A tag is represented as an instance
of com.tailf.conf.Tag. The primary usage of tags is in the representation of keypaths.
• Key - a key is a representation of the instance key for an element instance. A key is represented as
an instance of com.tailf.conf.ConfKey. A ConfKey is constructed from an array of values
(ConfValue[]). The primary usage of keys is in the representation of keypaths.
• XMLParam - subclasses of ConfXMLParam which are used to represent a, possibly instantiated,
subtree of a YANG model. Useful in several APIs where multiple values can be set or retrieved in one
function call.
The class ConfObject defines public int constants for the different value types. Each value type
is mapped to a specific YANG type and is also represented by a specific subtype of ConfValue.
Given a ConfValue instance, it is possible to retrieve its integer representation by use of the static
method getConfTypeDescriptor() in class ConfTypeDescriptor. This function returns a
ConfTypeDescriptor instance representing the value, from which the integer representation can be
retrieved. The values represented as integers are:
Table 88. ConfValue types
An important class in the com.tailf.conf package, not inheriting ConfObject, is ConfPath. ConfPath
is used to represent a keypath which can point to any element in an instantiated model. As such it is
constructed from an array of ConfObject instances, where each element is expected to be either a
ConfTag or a ConfKey.
As an example, take the keypath /ncs:devices/device{d1}/iosxr:interface/
Loopback{lo0}. The following code snippet shows the instantiation of a ConfPath object representing
this keypath:
ConfPath keyPath = new ConfPath(new ConfObject[] {
new ConfTag("ncs","devices"),
new ConfTag("ncs","device"),
new ConfKey(new ConfObject[] {
new ConfBuf("d1")}),
new ConfTag("iosxr","interface"),
new ConfTag("iosxr","Loopback"),
new ConfKey(new ConfObject[] {
new ConfBuf("lo0")})
});
Another, more commonly used option is the format string + arguments constructor of ConfPath,
where ConfPath parses the string representation and creates the ConfTag/ConfKey representation
instead.
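The tokenization performed by the format-string constructor can be sketched in a few lines of standalone Python (illustrative only; the real ConfPath parser additionally resolves namespaces and value types):

```python
import re

def parse_keypath(path):
    """Split a keypath such as /ncs:devices/device{d1} into
    ('tag', prefix, name) and ('key', values) tokens, a toy analogue
    of the ConfTag/ConfKey array that ConfPath builds."""
    tokens = []
    for part in path.strip("/").split("/"):
        m = re.fullmatch(r"(?:([\w-]+):)?([\w-]+)(?:\{([^}]*)\})?", part)
        prefix, name, keys = m.groups()
        tokens.append(("tag", prefix, name))
        if keys is not None:
            # A key instance such as {d1}; multiple key values are
            # space-separated inside the braces.
            tokens.append(("key", tuple(keys.split())))
    return tokens

tokens = parse_keypath("/ncs:devices/device{d1}/iosxr:interface/Loopback{lo0}")
```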
Each element in the array is associated with a node in the data model. The array corresponding to
/servers/server{www} is a representation of the instance XML document:
<servers>
<server>
<name>www</name>
</server>
</servers>
A namespace class is a subclass of ConfNamespace and comes in one of two shapes: either created at
compile time using the ncsc compiler, or created at runtime with the use of Maapi.loadSchemas(). These
two types also indicate the two main usages of namespace classes. The first is in programming, where the
symbol names are used, e.g. in NAVU navigation; this is where the compiled namespaces are used. The
other is for internal mapping between symbol names and hash values; this is where the runtime type is
normally used, although compiled namespace classes can be used for these mappings too.
The compiled namespace classes are generated from compiled .fxs files through ncsc (ncsc --emit-java).
ncsc --java-disable-prefix --java-package \
com.example.app.namespaces \
--emit-java \
java/src/com/example/app/namespaces/foo.java \
foo.fxs
Runtime namespace classes are created by calling Maapi.loadSchemas(); the rest is dynamic. All
namespaces known by NSO are downloaded and runtime namespace classes are created. These can be
retrieved by calling Maapi.getAutoNsList().
Socket s = new Socket("localhost", Conf.NCS_PORT);
Maapi maapi = new Maapi(s);
maapi.loadSchemas();
The schema information is loaded automatically at the first connect to the NSO server, so no manual
call to Maapi.loadSchemas() is needed.
With all schemas loaded, the Java engine can make mappings between hash codes and symbol names on
the fly. Also, the ConfPath class can find and add namespace information when parsing keypaths, provided
that the namespace prefixes are added in the start element for each namespace.
ConfPath key1 = new ConfPath("/ncs:devices/device{d1}/iosxr:interface");
As an option, several APIs, e.g. MAAPI, have the possibility to set a default namespace, which will be the
expected namespace for paths without prefixes. For example, if the namespace class smp is generated with
the legal path "/smp:servers/server", an option in MAAPI could be the following:
Socket s = new Socket("localhost", Conf.NCS_PORT);
Maapi maapi = new Maapi(s);
int th = maapi.startTrans(Conf.DB_CANDIDATE,
Conf.MODE_READ_WRITE);
Introduction
The NSO Python library contains a variety of APIs for different purposes. In this chapter we introduce
these and explain their usage. The NSO Python modules are delivered in two variants: the low-level APIs
and the high-level APIs.
The low-level APIs are a direct mapping of the NSO C APIs, CDB and MAAPI. These will follow the
evolution of the C APIs. See man confd_lib_lib for further information.
The high-level APIs are an abstraction layer on top of the low-level APIs, meant to make them easier
to use, improve code readability, and increase the development rate for common use cases, e.g. service
and action callbacks and common scripting towards NSO.
Python versions 3.4 or higher are supported. There are no dependencies on external modules.
Python scripting
Scripting in Python is a very easy and powerful way of accessing NSO. This document has several
examples of scripts showing various ways of accessing data and requesting actions in NSO.
The examples are directly executable with the python interpreter after sourcing the ncsrc file in the NSO
installation directory. This sets up the PYTHONPATH environment variable, which enables access to the
NSO Python modules.
Edit a file and execute it directly on the command line like this:
$ python script.py
The simplest way to access NSO is to use the single_transaction helper. It creates a MAAPI
context and a transaction in one step.
This example shows its usage, connecting as user 'admin' and 'python' as AAA context:
A common use case is to create a MAAPI context and re-use it for several transactions. This reduces the
latency and increases the transaction throughput, especially for back-end applications. For scripting the
lifetime is shorter and there is no need to keep the MAAPI contexts alive.
This example shows how to keep a MAAPI connection alive between transactions:
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
Maagic API
Maagic is a module provided as part of the NSO Python APIs. It reduces the complexity of programming
towards NSO, is used on top of the MAAPI high-level API, and addresses areas which otherwise require
more programming. First, it helps with navigating the model, using standard Python object dot notation,
giving very clear and easily read code. The context handlers remove the need to close sockets, user
sessions and transactions, and the problems that arise when they are forgotten and left open. Finally, it
removes the need to know the data types of the leafs, helping you to focus on the data to be set.
When using Maagic you still do the same procedure of starting a transaction.
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
with m.start_write_trans() as t:
# Read/write/request ...
To use the Maagic functionality, you get access to a Maagic object, for example one pointing to the root of the CDB:
root = ncs.maagic.get_root(t)
Namespaces
Maagic handles namespaces by a prefix on the names of the elements. The prefix is optional, but recommended
to avoid future side effects.
The syntax is to prefix the names with the namespace name followed by two underscores, e.g.
ns_name__name.
Examples of how to use namespaces:
# The examples are equal unless there is a namespace collision.
# For the ncs namespace it would look like this:
root.ncs__devices.ncs__device['ce0'].ncs__address
# equals
root.devices.device['ce0'].address
In cases where there is a name collision, the namespace prefix is required to access an entity from a
module, except for the module that was first loaded. Namespace is always required for root entities when
there is a collision. The module load order is found in the ncs log file: logs/ncs.log.
# This example has three namespaces referring to a leaf, value, with the same
# name and this load order: /ex/a:value=11, /ex/b:value=22 and /ex/c:value=33
root.ex.value # returns 11
root.ex.a__value # returns 11
root.ex.b__value # returns 22
root.ex.c__value # returns 33
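To make the prefix convention concrete, here is a plain-Python sketch (not the NSO implementation; the module names and values mirror the collision example above) of how a prefix__name attribute can be resolved against a module load order:

```python
# Illustrative stand-ins for modules a, b and c, in load order,
# each defining a leaf called 'value'.
LOAD_ORDER = ['a', 'b', 'c']
VALUES = {('a', 'value'): 11, ('b', 'value'): 22, ('c', 'value'): 33}

def resolve(attr):
    """Resolve a Maagic-style attribute name to a value."""
    if '__' in attr:
        # Explicit prefix: 'b__value' -> module 'b', leaf 'value'.
        prefix, name = attr.split('__', 1)
        return VALUES[(prefix, name)]
    # No prefix: on a name collision the module loaded first wins.
    for module in LOAD_ORDER:
        if (module, attr) in VALUES:
            return VALUES[(module, attr)]
    raise AttributeError(attr)

print(resolve('value'))     # 11 - module 'a' was loaded first
print(resolve('c__value'))  # 33
```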
Reading data
Reading data using Maagic is straightforward. You just specify the leaf you are interested in and the
data is retrieved. The data is returned in the nearest available Python data type.
For non-existing leafs, None is returned.
dev_name = root.devices.device['ce0'].name # 'ce0'
dev_address = root.devices.device['ce0'].address # '127.0.0.1'
dev_port = root.devices.device['ce0'].port # 10022
Writing data
Writing data using Maagic is straightforward. You just specify the leaf you are interested in and assign
a value. Any data type can be sent as input, as the str function is called on it, converting it to a string.
The accepted format depends on the data type. If the type validation fails, an Error exception is thrown.
root.devices.device['ce0'].name = 'ce0'
root.devices.device['ce0'].address = '127.0.0.1'
root.devices.device['ce0'].port = 10022
root.devices.device['ce0'].port = '10022' # Also valid
Deleting data
Data is deleted the Python way, using the del statement:
del root.devices.device['ce0'] # List element
del root.devices.device['ce0'].name # Leaf
del root.devices.device['ce0'].device_type.cli # Presence container
Some entities have a delete method; this is explained under the corresponding type.
Object deletion: The delete mechanism in Maagic is implemented using the __delattr__ method on the
Node class. This means that executing del on a local or global variable, e.g. del obj, will only delete
the object from the Python local or global namespace.
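The mechanism can be illustrated with a plain-Python sketch; this is not the NSO implementation, and the backing dict merely stands in for the transaction data:

```python
class SketchNode:
    """Minimal illustration of Maagic-style attribute interception."""

    def __init__(self):
        # The dict stands in for data stored in the transaction/CDB.
        object.__setattr__(self, '_store', {'name': 'ce0'})

    def __getattr__(self, key):
        # Called only when normal attribute lookup fails.
        return self._store.get(key)

    def __setattr__(self, key, value):
        self._store[key] = value

    def __delattr__(self, key):
        # 'del node.name' ends up here and deletes from the store.
        del self._store[key]

node = SketchNode()
del node.name       # removes the data from the backing store
print(node.name)    # None

other = SketchNode()
del other           # only unbinds the variable 'other'; the backing
                    # store of that object is untouched
```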
Containers
Containers are addressed using standard Python dot notation: root.container1.container2
Presence containers
A presence container is created using the create method:
pc = root.container.presence_container.create()
Choices
The case of a choice is checked by addressing the name of the choice in the model:
ne_type = root.devices.device['ce0'].device_type.ne_type
if ne_type == 'cli':
    pass  # Handle CLI
elif ne_type == 'netconf':
    pass  # Handle NETCONF
elif ne_type == 'generic':
    pass  # Handle generic
else:
    pass  # Don't handle
The objects ce5 and o above are of type ListElement, which is actually an ordinary Container object with a
different name.
Existence of a specific list element is checked with the Python in operator:
'ce0' in root.devices.device # Returns True or False
To delete the whole list, use the Python del statement or the delete() method on the list.
# use Python's del statement
del root.devices.device
Unions
Unions are not handled in any specific way: you just read or write to the leaf and the data is validated
according to the model.
Enumeration
Enumerations are returned as an Enum object, giving access to both the integer and string values.
str(root.devices.device['ce0'].state.admin_state) # May return 'unlocked'
root.devices.device['ce0'].state.admin_state.string # May return 'unlocked'
root.devices.device['ce0'].state.admin_state.value # May return 1
Leafref
Leafrefs are read as regular leafs and the returned data type corresponds to the referred leaf.
# /model/device is a leafref to /devices/device/name
root.model.device # May return 'ce0'
Leafrefs are set like the leaf they refer to. The data type is validated as it is set. The reference is
validated when the transaction is committed.
# /model/device is a leafref to /devices/device/name
root.model.device = 'ce0'
Identityref
Identityrefs are read and written as string values. Writing an identityref without a prefix is possible, but
doing so is error-prone and may stop working if another model is added which also has an identity with the
same name. The recommendation is to always use a prefix when writing identityrefs. Reading an identityref
will always return a prefixed string value.
# Read
root.devices.device['ce0'].device_type.cli.ned_id # May return 'ios-id:cisco-ios'
# Write with prefix (recommended)
root.devices.device['ce0'].device_type.cli.ned_id = 'ios-id:cisco-ios'
# Write when identity cisco-ios is unique throughout the system (not recommended)
root.devices.device['ce0'].device_type.cli.ned_id = 'cisco-ios'
Instance-identifier
Instance-identifiers are read as XPath-formatted string values.
# /model/iref is an instance-identifier
root.model.iref # May return "/ncs:devices/ncs:device[ncs:name='ce0']"
Instance-identifiers are set as XPath-formatted strings. The string is validated as it is set. The reference is
validated when the transaction is committed.
# /model/iref is an instance-identifier
root.model.iref = "/ncs:devices/ncs:device[ncs:name='ce0']"
Leaf-list
A leaf-list is represented by a LeafList object. This object behaves very much like a Python list. You may
iterate over it, check for existence of a specific element using in, and remove specific items using the del
statement. See the examples below.
N.B. From NSO version 4.5 and onwards a yang leaf-list is represented differently than before. Reading a
leaf-list using Maagic used to result in an ordinary Python list (or None if the leaf-list was non-existent).
Now, reading a leaf-list will give back a LeafList object whether it exists or not. The LeafList object may
be iterated like a Python list and you may check for existence using the exists() method or the bool()
operator. A Maagic leaf-list node may be assigned using a Python list, just like before, and you may
convert it to a Python list using the as_list() method or by doing list(my_leaf_list_node).
You should update your code to cope with the new behaviour. If you for any reason are unable to
do so you can instruct Maagic to behave as in previous versions by setting the environment variable
DEPRECATED_MAAGIC_WANT_LEAF_LIST_AS_LEAF to 'true', 'yes' or '1' before starting your Python
process (or NSO).
Please note that this environment variable is deprecated and will go away in the future.
# /model/ll is a leaf-list with the type string
# iteration
for item in root.model.ll:
    do_stuff(item)
# existence
'foo' in root.model.ll
# create (append) an item
root.model.ll.create('foo')
# delete a specific item
del root.model.ll['foo']
# length
len(root.model.ll)
Binary
Binary values are read and written as byte strings.
# Read
root.model.bin # May return b'\x00foo\x01bar'
# Write
root.model.bin = b'\x00foo\x01bar'
Bits
Reading a bits leaf will give a Bits object back (or None if the bits leaf is non-existent). To get some useful
information out of the Bits object you can either use the bytearray() method to get a Python bytearray
object in return or the Python str() operator to get a space separated string containing the bit names.
# get a bytearray
root.model.bits.bytearray()
There are four ways of setting a bits leaf: using a string with space-separated bit names; using a
bytearray; using a Python binary string; or using a Bits object. Note that updating a Bits object
does not change anything in the database - for that to happen you need to assign it to the Maagic node.
# set a bits leaf using a string of space separated bit names
root.model.bits = 'turboMode enableEncryption'
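The relationship between the space-separated name string and the bytearray can be illustrated with a plain-Python sketch; this is not the NSO Bits class, and the bit positions for 'turboMode' and 'enableEncryption' are illustrative:

```python
# Illustrative bit positions; in NSO they come from the YANG bits type.
POSITIONS = {'turboMode': 0, 'enableEncryption': 1}

def names_to_bytearray(names):
    """Pack a space-separated bit-name string into a one-byte bytearray."""
    value = 0
    for name in names.split():
        value |= 1 << POSITIONS[name]
    return bytearray([value])

def bytearray_to_names(ba):
    """Unpack a bytearray into a space-separated bit-name string."""
    names = [name for name, pos in sorted(POSITIONS.items(),
                                          key=lambda item: item[1])
             if ba[0] & (1 << pos)]
    return ' '.join(names)

ba = names_to_bytearray('turboMode enableEncryption')
print(ba)                      # bytearray(b'\x03')
print(bytearray_to_names(ba))  # turboMode enableEncryption
```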
Empty leaf
An empty leaf is created using the create method:
root.container.empty_leaf.create()
Maagic examples
Action requests
Requesting an action does not always require an ongoing transaction. This example shows how to use Maapi as
a transactionless back-end for Maagic.
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
root = ncs.maagic.get_root(m)
output = root.devices.check_sync()
This example shows how to request an action that requires an ongoing transaction. It is also valid to request
an action that does not require an ongoing transaction this way.
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
with m.start_read_trans() as t:
root = ncs.maagic.get_root(t)
output = root.devices.check_sync()
Providing parameters to an action with Maagic is very easy. You request an input object with
get_input from the Maagic action object and set the desired (or required) parameters as defined in the
model specification.
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
root = ncs.maagic.get_root(m)
input = root.action.double.get_input()
input.number = 21
output = root.action.double(input)
print(output.result)
This example requests an action that takes a leaf-list as input:
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
root = ncs.maagic.get_root(m)
input = root.leaf_list_action.llist.get_input()
input.args = ['testing action']
output = root.leaf_list_action.llist(input)
print(output.result)
A common use case is to script creation of devices. With the Python APIs this is easily done without the
need to generate set commands and execute them in the CLI.
import argparse
import ncs
def parseArgs():
parser = argparse.ArgumentParser()
parser.add_argument('--name', help="device name", required=True)
parser.add_argument('--address', help="device address", required=True)
parser.add_argument('--port', help="device port", type=int, default=22)
parser.add_argument('--desc', help="device description",
default="Device created by maagic_create_device.py")
parser.add_argument('--auth', help="device authgroup", default="default")
return parser.parse_args()
def main(args):
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
with m.start_write_trans() as t:
root = ncs.maagic.get_root(t)
device_list = root.devices.device
device = device_list.create(args.name)
device.address = args.address
device.port = args.port
device.description = args.desc
device.authgroup = args.auth
dev_type = device.device_type.cli
dev_type.ned_id = 'cisco-ios-cli-3.0'
device.state.admin_state = 'unlocked'
t.apply()
#
# fetch-host-keys and sync-from do not require a transaction;
# continue using the Maapi object
#
root = ncs.maagic.get_root(m)
device = root.devices.device[args.name]
print("Syncing configuration...")
output = device.sync_from()
print("Result: %s" % output.result)
if not output.result:
print("Error: %s" % output.info)
if __name__ == '__main__':
main(parseArgs())
PlanComponent
This class is a helper to support service progress reporting using plan-data as part of a Reactive FASTMAP
service. More info about plan-data is found in the section called “Progress reporting using plan-data”.
The interface of the PlanComponent is identical to the corresponding Java class and supports the setup
of plans and setting the transition states.
See pydoc ncs.application.PlanComponent for further information about the Python class.
The pattern is to add an overall plan (self) for the service and separate plans for each component that builds
the service.
self_plan = PlanComponent(service, 'self', 'ncs:self')
self_plan.append_state('ncs:init')
self_plan.append_state('ncs:ready')
self_plan.set_reached('ncs:init')
# A separate plan for a worker component of the service
# (component name and type here are illustrative):
route_plan = PlanComponent(service, 'route', 'myserv:route')
route_plan.append_state('ncs:init')
route_plan.append_state('ncs:ready')
route_plan.set_reached('ncs:init')
When appending a new state to a plan, the initial state is set to ncs:not-reached. At completion of a
plan, the state is set to ncs:ready. In this case, when the service is completely set up:
self_plan.set_reached('ncs:ready')
Python packages
Action handler
The Python high-level API provides an easy way to implement an action handler for your modeled actions.
The easiest way to create a handler is to use the ncs-make-package command. It creates some ready-to-use
skeleton code.
$ cd packages
$ ncs-make-package --service-skeleton python pyaction \
      --component-class action.Action --action-example
This example action handler takes a number as input, doubles it, and returns the result.
When debugging Python packages refer to the section called “Debugging of Python packages”.
# ---------------
# ACTIONS EXAMPLE
# ---------------
from ncs.application import Application
from ncs.dp import Action

class DoubleAction(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output):
self.log.info('action name: ', name)
self.log.info('action input.number: ', input.number)
output.result = input.number * 2
class LeafListAction(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output):
self.log.info('action name: ', name)
self.log.info('action input.args: ', input.args)
output.result = [ w.upper() for w in input.args]
# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS.
# ---------------------------------------------
class Action(Application):
def setup(self):
self.log.info('Worker RUNNING')
self.register_action('action-action', DoubleAction)
self.register_action('llist-action', LeafListAction)
def teardown(self):
self.log.info('Worker FINISHED')
The input and output parameters are the most commonly used parameters of the action callback method.
They provide the access objects to the data provided to the action request and the returning result.
They are maagic.Node objects, which provide easy access to the modeled parameters.
Service handler
The Python high-level API provides an easy way to implement a service handler for your modeled
services. The easiest way to create a handler is to use the ncs-make-package command. It creates some
skeleton code.
$ cd packages
$ ncs-make-package --service-skeleton python pyservice \
--component-class service.Service
pyservice/
+-- README
+-- doc/
+-- load-dir/
+-- package-meta-data.xml
+-- python/
| +-- pyservice/
| +-- __init__.py
| +-- service.py
+-- src/
| +-- Makefile
| +-- yang/
| +-- service.yang
+-- templates/
This example has some code added for the service logic, including a service template.
When debugging Python packages refer to the section called “Debugging of Python packages”.
# ------------------------
# SERVICE CALLBACK EXAMPLE
# ------------------------
from ncs.application import Application, Service

class ServiceCallbacks(Service):
@Service.create
def cb_create(self, tctx, root, service, proplist):
self.log.info('Service create(service=', service._path, ')')
self.log.info('Template is applied')
dev = root.devices.device[service.device]
dev.description = "This device was modified by %s" % service._path
# <<<<<<<<< service logic
@Service.pre_lock_create
def cb_pre_lock_create(self, tctx, root, service, proplist):
self.log.info('Service plcreate(service=', service._path, ')')
@Service.pre_modification
def cb_pre_modification(self, tctx, op, kp, root, proplist):
self.log.info('Service premod(service=', kp, ')')
@Service.post_modification
def cb_post_modification(self, tctx, op, kp, root, proplist):
self.log.info('Service postmod(service=', kp, ')')
# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS.
# ---------------------------------------------
class Service(Application):
def setup(self):
self.log.info('Worker RUNNING')
self.register_service('service-servicepoint', ServiceCallbacks)
def teardown(self):
self.log.info('Worker FINISHED')
Low-level APIs
The Python low-level APIs are a direct mapping of the C APIs. A C call has a corresponding Python
function entry. From a programmer's point of view, it wraps the C data structures into Python objects and
handles the related memory management when requested by the Python garbage collector. Any errors are
reported as error.Error.
The low-level APIs will not be described in detail in this document, but you will find a few examples
showing its usage in the coming sections.
See pydoc _ncs and man confd_lib_lib for further information.
import socket
import _ncs
from _ncs import maapi

sock_maapi = socket.socket()
maapi.connect(
sock_maapi,
ip='127.0.0.1',
port=_ncs.NCS_PORT)
maapi.load_schemas(sock_maapi)
maapi.start_user_session(
sock_maapi,
'admin',
'python',
[],
'127.0.0.1',
_ncs.PROTO_TCP)
maapi.install_crypto_keys(sock_maapi)
th = maapi.start_trans(sock_maapi, _ncs.RUNNING, _ncs.READ)
path = "/devices/authgroups/group{default}/umap{admin}/remote-password"
encrypted_password = maapi.get_elem(sock_maapi, th, path)
decrypted_password = _ncs.decrypt(str(encrypted_password))
maapi.finish_trans(sock_maapi, th)
maapi.end_user_session(sock_maapi)
sock_maapi.close()
This example is a script to do a check-sync action request using the low-level MAAPI API.
Example 102. Action request
import socket
import _ncs
from _ncs import maapi
sock_maapi = socket.socket()
maapi.connect(
sock_maapi,
ip='127.0.0.1',
port=_ncs.NCS_PORT)
maapi.load_schemas(sock_maapi)
_ncs.maapi.start_user_session(
sock_maapi,
'admin',
'python',
[],
'127.0.0.1',
_ncs.PROTO_TCP)
ns_hash = _ncs.str2hash("http://tail-f.com/ns/ncs")
# Request the check-sync action for all devices (no input parameters).
values = maapi.request_action(sock_maapi, [], ns_hash, "/devices/check-sync")
maapi.end_user_session(sock_maapi)
sock_maapi.close()
Setting operational data has historically been done using one of the CDB APIs (Python, Java, C). This
example shows how to set a value and trigger subscribers for operational data using the Python low-level
API.
import socket
import _ncs
from _ncs import cdb

sock_cdb = socket.socket()
cdb.connect(
    sock_cdb,
    type=cdb.DATA_SOCKET,
    ip='127.0.0.1',
    port=_ncs.NCS_PORT)
cdb.start_session2(sock_cdb, cdb.OPERATIONAL, cdb.LOCK_REQUEST | cdb.LOCK_WAIT)
path = "/operdata/value"
cdb.set_elem(sock_cdb, _ncs.Value(42, _ncs.C_UINT32), path)
cdb.end_session(sock_cdb)
sock_cdb.close()
Package Overview
All user code that needs to run in NSO must be part of a package. A package is basically a directory of
files with a fixed file structure. A package consists of code, YANG modules, custom Web UI widgets, etc.,
that are needed in order to add an application or function to NSO. Packages are a controlled way to manage
loading and versions of custom applications.
A package is a directory where the package name is the same as the directory name. At the top level of this
directory a file called package-meta-data.xml must exist. The structure of that file is defined by the
YANG model $NCS_DIR/src/ncs/yang/tailf-ncs-packages.yang. A package may also be
a tar archive with the same directory layout. The tar archive can be either uncompressed with suffix .tar,
or gzip-compressed with suffix .tar.gz or .tgz.
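As a sketch of the archive form, the following shell commands create a minimal package directory and pack it as a loadable tar archive; the package name and the reduced meta-data content are illustrative:

```shell
# Create a minimal package directory with the mandatory meta-data file.
mkdir -p mypkg/load-dir
cat > mypkg/package-meta-data.xml <<'EOF'
<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
  <name>mypkg</name>
  <package-version>1.0</package-version>
</ncs-package>
EOF
# Either suffix works: .tar (uncompressed) or .tar.gz/.tgz (gzip).
tar czf mypkg.tar.gz mypkg
tar tzf mypkg.tar.gz   # lists mypkg/package-meta-data.xml among others
```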
Figure 104. Package Model
Packages are composed of components. The following types of components are defined: NED,
callback, application, and upgrade.
<package-name>/package-meta-data.xml
load-dir/
shared-jar/
private-jar/
webui/
templates/
src/
doc/
netsim/
The package-meta-data.xml defines several important aspects of the package, such as the name,
dependencies on other packages, the package's components etc. This will be thoroughly described later in
this chapter.
When NSO starts, it needs to search for packages to load. The ncs.conf parameter /ncs-config/
load-path defines a list of directories. At initial startup, NSO searches these directories for packages,
copies the packages to a private directory tree in the directory defined by the /ncs-config/state-dir
parameter in ncs.conf, and loads and starts all the packages found. All .fxs (compiled YANG files)
and .ccl (compiled CLI spec files) files found in the directory load-dir in a package are loaded. On
subsequent startups, NSO will by default only load and start the copied packages - see the section called
“Loading Packages” for different ways to get NSO to search the load path for changed or added packages.
A package usually contains Java code. This Java code is loaded by a class loader in the NSO Java VM.
A package that contains Java code must compile the Java code so that the compilation results are divided
into jar files where code that is supposed to be shared among multiple packages is compiled into one set of
jar files, and code that is private to the package itself is compiled into another set of jar files. The shared
and the private jar files shall go into the shared-jar directory and the private-jar directory,
respectively. By putting for example the code for a specific service in a private jar, NSO can dynamically
upgrade the service without affecting any other service.
The optional webui directory contains webui customization files.
An Example Package
The NSO example collection for developers contains a number of small self-contained examples. The
collection resides at $NCS_DIR/examples.ncs/getting-started/developing-with-ncs.
Each of these examples defines a package. Let's take a look at some of these packages. The example
3-aggregated-stats has a package ./packages/stats. The package-meta-data.xml file
for that package looks like:
Example 105. An Example Package
<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
<name>stats</name>
<package-version>1.0</package-version>
<description>Aggregating statistics from the network</description>
<ncs-min-version>3.0</ncs-min-version>
<required-package>
<name>router-nc-1.0</name>
</required-package>
<component>
<name>stats</name>
<callback>
<java-class-name>com.example.stats.Stats</java-class-name>
</callback>
</component>
</ncs-package>
The package-meta-data.xml file defines the name of the package as well as one component. Let's
go through the different parts of the meta data file:
• name - the name of the package. All packages in the system must have unique names.
• package-version - The version of the package. This is for administrative purposes only, NSO
cannot simultaneously handle two versions of the same package.
• ncs-min-version - which is the oldest known NSO version where this package works.
• required-package - a list of names of other packages that are required for this package to work.
• component - Each package defines zero or more components.
Components
Each component in a package has a name. The names of all the components must be unique within the
package. The YANG model for packages contains:
....
list component {
key name;
leaf name {
type string;
}
...
choice type {
mandatory true;
case ned {
...
}
case callback {
...
}
case application {
...
}
case upgrade {
...
}
....
}
....
Lots of additional information can be found in the YANG module itself. The mandatory choice that defines
a component must be one of ned, callback, application or upgrade. We have:
Component types
ned A CLI NED and a generic NED component must also come with additional user-written
Java code, whereas a NETCONF NED and an SNMP NED have no Java code.
callback This defines a component with one or more Java classes that implement callbacks using
the Java callback annotations.
If we look at the component in the stats package above we have:
<component>
<name>stats</name>
<callback>
<java-class-name>
com.example.stats.Stats
</java-class-name>
</callback>
</component>
The Stats class here implements a read-only data provider. See the section called “DP
API”.
The callback type of component is used for a wide range of callback-type Java
applications, where one of the most important is the Service Callbacks. The following
list of Java callback annotations applies to callback components.
application Used to cover Java applications that do not fit into the callback type. Typically this is
functionality that should run in separate threads and work autonomously.
The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/1-cdb
contains three components that are of type application. These components must also
contain a java-class-name element. For application components, that Java class must
implement the ApplicationComponent Java interface.
upgrade Used to migrate data for packages where the YANG model has changed and the automatic
CDB upgrade is not sufficient. The upgrade component consists of a Java class with a
main method that is expected to run once only.
The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service
illustrates user CDB upgrades using upgrade components.
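For comparison with the callback example above, application and upgrade components follow the same shape in package-meta-data.xml. This is a sketch with illustrative names; consult tailf-ncs-packages.yang for the exact structure:

```xml
<component>
  <name>my-app</name>
  <application>
    <java-class-name>com.example.app.MyApp</java-class-name>
  </application>
</component>
<component>
  <name>my-upgrade</name>
  <upgrade>
    <java-class-name>com.example.upgrade.MyUpgrade</java-class-name>
  </upgrade>
</component>
```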
Creating Packages
NSO ships with a tool ncs-make-package that can be used to create packages. Chapter 9, Package
Development discusses in depth how to develop a package.
Running ncs-make-package creates a package, here called acme, in ./acme. The acme package can be used
for two things: managing real acme routers, and as input to the ncs-netsim tool to simulate a network
of acme routers.
In the first case, managing real acme routers, all we really need to do is to put the newly generated package
in the load-path of NSO, start NSO with package reload (see the section called “Loading Packages”), and
then add one or more acme routers as managed devices to NSO. The ncs-setup tool can be used to do this:
$ ncs-setup --ned-package ./acme --dest ./ncs-project
The above command generates a directory ./ncs-project which is suitable for running NSO.
Assume we have an existing router at IP address 10.2.3.4 and that we can log into that router over the
NETCONF interface using user name bob, and password secret. The following session shows how to
setup NSO to manage this router:
$ cd ./ncs-project
$ ncs
$ ncs_cli -u admin
> configure
> set devices authgroups group southbound-bob umap admin \
remote-name bob remote-password secret
> set devices device acme1 authgroup southbound-bob address 10.2.3.4
> set devices device acme1 device-type netconf
> commit
We can also use the newly generated acme package to simulate a network of acme routers. During
development this is especially useful; the ncs-netsim tool can create such a simulated network of acme
routers.
And finally, ncs-setup can be used to initialize an environment where NSO is used to manage all devices in
an ncs-netsim network:
$ ncs-setup --netsim-dir ./netsim --dest ncs-project
As explained in Chapter 8, NSO Packages, NSO runs all user Java code and also loads all data models
through an NSO package. Thus a development project is the same as developing a package. Testing and
running the package is done by putting the package in the NSO load-path and running NSO.
There are different kinds of packages; NED packages, service packages etc. Regardless of package type,
the structure of the package as well as the deployment of the package into NSO is the same. The script
ncs-make-package creates the following for us:
• A Makefile to build the source code of the package. The package contains source code and needs to
be built.
• If it's a NED package, a netsim directory which is used by the ncs-netsim tool to simulate a
network of devices.
• If it is a service package, skeleton YANG and Java files that can be modified are generated.
In this chapter we are going to develop an MPLS service for a network of provider edge routers (PE) and
customer equipment routers (CE). The assumption is that the routers speak NETCONF and that we have
proper YANG modules for the two types of routers. The techniques described here work equally well for
devices that speak other protocols than NETCONF, such as Cisco CLI or SNMP.
The first thing we want to do is to create a simulation environment where ConfD is used as NETCONF
server to simulate the routers in our network. We plan to create a network that looks like:
Figure 106. MPLS network
In order to create the simulation network, the first thing we need to do is to create NSO packages for the
two router models. The packages are also exactly what NSO needs in order to manage the routers.
Assume that the yang files for the PE routers reside in ./pe-yang-files and the YANG files for the
CE routers reside in ./ce-yang-files. The ncs-make-package tool is used to create two device
packages, one called pe and the other ce.
$ ncs-make-package --netconf-ned ./pe-yang-files pe
$ ncs-make-package --netconf-ned ./ce-yang-files ce
$ (cd pe/src; make)
$ (cd ce/src; make)
At this point, we can use the ncs-netsim tool to create a simulation network. ncs-netsim will use
the Tail-f ConfD daemon as a NETCONF server to simulate the managed devices, all running on localhost.
$ ncs-netsim create-network ./ce 5 ce create-network ./pe 3 pe
The above command creates a network with 8 routers, 5 running the YANG models for a CE router and 3
running a YANG model for the PE routers. ncs-netsim can be used to stop, start and manipulate this
network. For example:
$ ncs-netsim start
DEVICE ce0 OK STARTED
DEVICE ce1 OK STARTED
DEVICE ce2 OK STARTED
DEVICE ce3 OK STARTED
DEVICE ce4 OK STARTED
DEVICE pe0 OK STARTED
DEVICE pe1 OK STARTED
DEVICE pe2 OK STARTED
ncs-setup
In the previous section, we described how to use ncs-make-package and ncs-netsim to setup a
simulation network. Now, we want to use NSO to control and manage precisely the simulated network. We
can use the ncs-setup tool to set up a directory suitable for this. ncs-setup has a flag to create NSO
initialization files so that all devices in an ncs-netsim network are added as managed devices to NSO. If
we do:
$ ncs-setup --netsim-dir ./netsim --dest NCS;
$ cd NCS
$ cat README.ncs
.......
$ ncs
The above commands create the db, log, etc. directories and also create an NSO XML initialization file in
./NCS/ncs-cdb/netsim_devices_init.xml. The init file is important: it is created from the content of
the netsim directory and it contains the IP address, port, auth credentials and NED type for all the devices
in the netsim environment. There is a dependency order between ncs-setup and ncs-netsim, since ncs-setup
creates the XML init file based on the contents of the netsim environment; therefore we must run the ncs-
netsim create-network command before we execute the ncs-setup command. Once ncs-setup has been run
and the init XML file has been generated, it is possible to manually edit that file.
If we start the NSO CLI, we have for example:
$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on zoe
admin@zoe> show configuration devices device ce0
address 127.0.0.1;
port 12022;
authgroup default;
device-type {
netconf;
}
state {
admin-state unlocked;
}
It is a NED package, and it has a directory called netsim at the top. This indicates to the ncs-netsim
tool that ncs-netsim can create simulation networks that contains devices running the YANG models
from this package. This section describes the netsim directory and how to modify it. ncs-netsim uses
ConfD to simulate network elements, and in order to fully understand how to modify a generated netsim
directory, some knowledge of how ConfD operates may be required.
The netsim directory contains three files:
• confd.conf.netsim is a configuration file for the ConfD instances. The file is run through /bin/sed,
where the following list of variables is substituted with the actual value for that ConfD
instance:
1 %IPC_PORT% - for /confdConfig/confdIpcAddress/port
2 %NETCONF_SSH_PORT% - for /confdConfig/netconf/transport/ssh/port
3 %NETCONF_TCP_PORT% - for /confdConfig/netconf/transport/tcp/port
4 %CLI_SSH_PORT% - for /confdConfig/cli/ssh/port
5 %SNMP_PORT% - for /confdConfig/snmpAgent/port
6 %NAME% - for the name of the ConfD instance
7 %COUNTER% - for the number of the ConfD instance
• The Makefile should compile the YANG files so that ConfD can run them. The Makefile
should also have an install target that installs all files required for ConfD to run one instance of a
simulated network element. Typically this is all the .fxs files.
• An optional start.sh file where additional programs can be started. A good example of a package
where the netsim component contains some additional C programs is the webserver package in the
NSO website example $NCS_DIR/web-server-farm.
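The substitution step for confd.conf.netsim can be sketched as follows; the placeholder values are illustrative, and ncs-netsim performs the real substitution for each instance:

```shell
# A template fragment using two of the placeholders listed above.
cat > confd.conf.netsim <<'EOF'
<port>%NETCONF_SSH_PORT%</port>
<name>%NAME%</name>
EOF
# Substitute per-instance values, as ncs-netsim does for each ConfD.
sed -e 's/%NETCONF_SSH_PORT%/12022/' \
    -e 's/%NAME%/ce0/' confd.conf.netsim > confd.conf
cat confd.conf
```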
Remember the picture of the network we wish to work with: there, the routers, PE and CE, have IP addresses
and some additional data. So far we have generated a simulated network with YANG models. The
routers in our simulated network have no data in them; we can log in to one of the routers to verify that:
$ ncs-netsim cli pe0
admin connected from 127.0.0.1 using console on zoe
admin@zoe> show configuration interface
No entries found.
[ok][2012-08-21 16:52:19]
admin@zoe> exit
The ConfD devices in our simulated network all have a Juniper CLI engine; thus we can log in to an
individual router using the command ncs-netsim cli [devicename].
To achieve this, we need some additional XML initialization files for the ConfD instances.
It is the responsibility of the install target in the netsim Makefile to ensure that each ConfD instance
gets initialized with the proper init data. In the NSO example collection, the example $NCS_DIR/
examples.ncs/mpls contains precisely the two above-mentioned PE and CE packages, but modified
so that the network elements in the simulated network get initialized properly.
mask 255.255.255.252;
}
interface eth3 {
ip 10.0.17.13;
mask 255.255.255.252;
}
interface lo {
ip 10.10.10.1;
mask 255.255.0.0;
}
A full simulated router network loaded into NSO, with ConfD simulating the 7 routers.
Plug-and-play scripting
With the scripting mechanism, an end-user can add new functionality to NSO in a plug-and-play-like
manner. See Chapter 7, Plug-and-play scripting in NSO 5.3 User Guide about the scripting
concept in general. It is also possible for a developer of an NSO package to enclose scripts in the package.
Scripts defined in an NSO package work much like system-level scripts configured with the
/ncs-config/scripts/dir configuration parameter. The difference is that the location of the scripts is
predefined: the scripts directory must be named scripts and must be located in the top directory of the
package.
make sure that the package is part of the load-path, and we can then create test service instances - that do
nothing.
admin@zoe> show status packages package myrfs
package-version 1.0;
description "Skeleton for a resource facing service - RFS";
ncs-min-version 2.0;
component RFSSkeleton {
callback {
java-class-name [ com.example.myrfs.myrfs ];
}
}
oper-status {
up;
}
[ok][2012-08-22 15:30:13]
admin@zoe> configure
[edit]
admin@zoe% set services myrfs s1 dummy 3.4.5.6
[ok][2012-08-22 15:32:56]
ncs-make-package will generate skeleton files for our service models and for our service logic. The
package is fully buildable and runnable even though the service models are empty; both the CLI and the
Web UI can be run. In addition, we also have a simulated environment with ConfD devices configured
with the YANG modules.
Calling ncs-make-package with the arguments above will create a service skeleton that is placed at
the root of the generated service model. However, services can be augmented anywhere, or can be located
in any YANG module. This can be controlled by giving the argument --augment NAME, where NAME is
the path to where the service should be augmented; or, to put the service as a root container in the service
YANG, by giving the argument --root-container NAME.
Services created using ncs-make-package will be of type list. It is however possible to have
services that are of type container instead. A container service needs to be specified as a presence
container.
In the component section for the package, we declare that we have a callback component. In the
package-meta-data.xml for the generated package, we have:
<component>
<name>RFSSkeleton</name>
<callback>
<java-class-name>com.example.myrfs.myrfs</java-class-name>
</callback>
</component>
When the package is loaded, the NSO Java VM will load the jar files for the package, and register the
defined class as a callback class. When the user creates a service of this type, the create() method will
be called.
If we take a look at the Java code in the service generated by ncs-make-package, first we have the
create() method, which takes four parameters. The ServiceContext instance is a container for the current
service transaction; through it, for example, the transaction timeout can be controlled. The service parameter is a
NavuContainer holding a read/write reference to the path in the instance tree that contains the current service
instance. From this point you can access all nodes contained within the created service. The root
parameter is a NavuContainer holding a reference to the NSO root; from here you can access the whole
data model of NSO. The opaque parameter contains a java.util.Properties object instance.
This object may be used to transfer additional information between consecutive calls to the create callback.
It is always null the first time a service is created. The Properties object can be
updated (or created if null) but should always be returned.
The opaque object is extremely useful to pass information between different invocations of the create()
method. The returned Properties object instance is stored persistently. If the create method computes
something on its first invocation, it can return that computation in order to have it passed in as a parameter
on the second invocation.
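The round-trip of the opaque object can be illustrated with plain java.util.Properties, outside of NSO. The property name allocated-id and its value are invented for illustration; only the check-for-null-then-always-return pattern reflects the contract described above:

```java
import java.util.Properties;

public class OpaqueDemo {
    // Stand-in for a create() callback. On the first invocation the opaque
    // is null; we create it and store a computed value. On later invocations
    // we simply reuse what was stored, and always return it.
    static Properties create(Properties opaque) {
        if (opaque == null) {
            opaque = new Properties();
            opaque.setProperty("allocated-id", "42"); // pretend this was expensive
        }
        // ... build device configuration using opaque.getProperty("allocated-id") ...
        return opaque; // NSO persists the returned object for the next call
    }

    public static void main(String[] args) {
        Properties first = create(null);   // service created
        Properties second = create(first); // service modified later
        System.out.println(second.getProperty("allocated-id")); // 42
    }
}
```

The second invocation sees the value computed by the first, so the expensive computation happens only once.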
This is crucial to understand: the FASTMAP mapping logic relies on the fact that a modification of
an existing service instance can be realized as a full deletion of what the service instance created when
it was first created, followed by yet another create, this time with slightly different
parameters. The NSO transaction engine will then compute the minimal difference and send it southbound
to all involved managed devices. Thus a good service create() method will, when the service is
modified, recreate exactly the same structures it created the first time.
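A minimal sketch of this idea, using plain Java maps as a stand-in for the configuration trees (all paths and values are invented): when create() is a pure function of its parameters, the diff between the old and new results collapses to the changed leaves only.

```java
import java.util.Map;
import java.util.TreeMap;

public class FastmapSketch {
    // A well-behaved create(): a pure function of its input parameters,
    // always producing the same structures for the same input.
    static Map<String, String> create(String device, String iface, String ip) {
        Map<String, String> cfg = new TreeMap<>();
        cfg.put(device + "/interface/" + iface + "/ip", ip);
        cfg.put(device + "/interface/" + iface + "/enabled", "true");
        return cfg;
    }

    public static void main(String[] args) {
        // FASTMAP view of a modification: delete the old result, re-run
        // create() with the new parameters, then diff the two results.
        Map<String, String> before = create("ce0", "eth1", "10.0.0.1");
        Map<String, String> after = create("ce0", "eth1", "10.0.0.2");
        after.forEach((path, value) -> {
            if (!value.equals(before.get(path))) {
                System.out.println("set " + path + " " + value); // the minimal diff
            }
        });
    }
}
```

Only the changed ip leaf is printed; the unchanged enabled leaf produces no southbound operation.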
The best way to debug this, and to ensure that a modification of a service instance really only sends
the minimal NETCONF diff to the southbound managed devices, is to turn on NETCONF trace in
NSO, modify a service instance, and inspect the XML sent to the managed devices. A badly behaving
create() method will incur large reconfigurations of the managed devices, possibly leading to traffic
interruptions.
The selftest() implementation is expected to do some diagnosis of the service. This can possibly
include use of testing equipment or probes.
Example 109. Selftest action
/**
* Init method for selftest action
*/
@ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.INIT)
public void init(DpActionTrans trans) throws DpCallbackException {
}
/**
* Selftest action implementation for service
*/
@ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.ACTION)
public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
try {
// Refer to the service yang model prefix
String nsPrefix = "myrfs";
// Get the service instance key
String str = ((ConfKey)kp[0]).toString();
return new ConfXMLParam[] {
new ConfXMLParamValue(nsPrefix, "success", new ConfBool(true)),
new ConfXMLParamValue(nsPrefix, "message", new ConfBuf(str))};
} catch (Exception e) {
throw new DpCallbackException("selftest failed", e);
}
}
When we start the NSO Java VM, in main() the log4j2.xml file is parsed by the LOG4J
framework, which applies the static settings to the NSO Java VM environment. The file is searched for in
the Java CLASSPATH.
The NSO Java VM starts a number of internal processes or threads; one of these threads executes a service
called NcsLogger, which handles the dynamic configuration of the logging framework. When NcsLogger
starts, it initially reads all the configuration from /java-vm/java-logging and applies it, thus
overwriting settings that were previously parsed by the LOG4J framework.
After it has applied the changes from the configuration, it starts to listen for changes made under /
java-vm/java-logging.
Example 110. tailf-ncs-java-vm.yang
typedef log-level-type {
type enumeration {
enum level-all {
value 1;
}
enum level-debug {
value 2;
}
enum level-error {
value 3;
}
enum level-fatal {
value 4;
}
enum level-info {
value 5;
}
enum level-off {
value 6;
}
enum level-trace {
value 7;
}
enum level-warn {
value 8;
}
}
description
"Levels of logging for Java packages in log4j.";
}
....
container java-vm {
....
container java-logging {
tailf:info "Configure Java Logging";
list logger {
tailf:info "List of loggers";
key "logger-name";
description
"Each entry in this list holds one representation of a logger with
a specific level defined by log-level-type. The logger-name
is the name of a Java package. logger-name can thus be for
example com.tailf.maapi, or com.tailf etc.";
leaf logger-name {
tailf:info "The name of the Java package";
type string;
mandatory true;
description
"The name of the Java package for which this logger
entry applies.";
}
leaf level {
tailf:info "Log-level for this logger";
type log-level-type;
mandatory true;
description
"Corresponding log-level for a specific logger.";
}
}
}
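Mirrored in plain Java, the log-level-type enumeration above might look like the following (the class name is ours for illustration; NSO generates no such class):

```java
// Plain-Java mirror of the YANG log-level-type enumeration shown above.
public enum LogLevelType {
    LEVEL_ALL(1), LEVEL_DEBUG(2), LEVEL_ERROR(3), LEVEL_FATAL(4),
    LEVEL_INFO(5), LEVEL_OFF(6), LEVEL_TRACE(7), LEVEL_WARN(8);

    public final int value; // the YANG enum value from tailf-ncs-java-vm.yang

    LogLevelType(int value) {
        this.value = value;
    }
}
```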
To change a verbosity level one needs to create a logger. A logger is something that controls the logging of
certain parts of the NSO Java API.
The loggers in the system are hierarchically structured, which means that there is one root logger that
always exists. All descendants of the root logger inherit their settings from the root logger unless a
descendant logger overrides its settings explicitly.
The LOG4J loggers are mapped to the package level in the NSO Java API, so the root logger has a
direct descendant for the package com, which in turn has a descendant com.tailf.
The com.tailf logger has a direct descendant corresponding to every package in the system, for
example com.tailf.cdb, com.tailf.maapi, etc.
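The inheritance rule can be sketched in plain Java, with logger names as dot-separated strings. This illustrates only the lookup rule described above, not the actual LOG4J implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class LoggerHierarchy {
    // Explicitly configured levels, keyed by logger name ("" = root logger).
    static final Map<String, String> levels = new HashMap<>();

    // Walk from the full logger name towards the root until an explicitly
    // configured level is found; descendants inherit that level.
    static String effectiveLevel(String name) {
        String n = name;
        while (true) {
            if (levels.containsKey(n)) {
                return levels.get(n);
            }
            int dot = n.lastIndexOf('.');
            if (dot < 0) {
                return levels.getOrDefault("", "OFF"); // fall back to the root
            }
            n = n.substring(0, dot);
        }
    }

    public static void main(String[] args) {
        levels.put("", "ERROR");               // root logger
        levels.put("com.tailf.maapi", "INFO"); // an explicitly created logger
        System.out.println(effectiveLevel("com.tailf.maapi.MaapiSchemas")); // INFO
        System.out.println(effectiveLevel("com.tailf.cdb"));                // ERROR
    }
}
```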
One could, as in the default case, configure a logger in the static settings, that is, in a
log4j2.properties file; this would mean that we need to explicitly restart the NSO Java VM.
Alternatively, one could configure a logger dynamically if an NSO restart is not desired.
Recall that if a logger is not configured explicitly it will inherit its settings from its predecessors. To
override a logger setting we create a logger in NSO.
For example, say that one uses the Maapi API to read and write configuration changes in NSO, and we
want to show all traces, including INFO level traces. To enable INFO traces for the Maapi classes
(located in the package com.tailf.maapi) at runtime, we start a CLI session and create a
logger called com.tailf.maapi:
ncs@admin% set java-vm java-logging logger com.tailf.maapi level level-info
[ok][2010-11-05 15:11:47]
ncs@admin% commit
Commit complete.
When we commit our changes to CDB, NcsLogger will notice that a change has been made under
/java-vm/java-logging, and it will then apply the logging settings to the logger com.tailf.maapi
that we just created. We explicitly set the INFO level on that logger. All descendants of
com.tailf.maapi will automatically inherit its settings from that logger.
In NSO all stdout/stderr goes first through the service manager. The service manager has configuration
under /java-vm/stdout-capture that controls where the stdout/stderr will end up.
Example 111. stdout capture
container stdout-capture {
tailf:info "Capture stdout and stderr";
description
"Capture stdout and stderr from the Java VM.
It is important to consider that when creating a logger (in this case com.tailf.maapi), the name of the logger
has to be an existing package known by the NSO class loader.
One could also create a logger named com.tailf with some desired level. This would set all packages
(com.tailf.*) to the same level. A common usage is to set com.tailf to level INFO, which sets all
traces, including INFO, from all packages to level INFO.
If one would like to turn off all available traces in the system (quiet mode), then configure com.tailf
(or com) to level OFF.
There are INFO level messages in all parts of the NSO Java API, ERROR level messages when exceptions
occur, and some WARN level messages in some places in packages.
There are also protocol traces between the Java API and NSO, which can be enabled by creating a logger
com.tailf.conf with DEBUG trace level.
In debugging and error reporting, these root-cause messages can be valuable for understanding what actually
happens in the Java code. On the other hand, in normal operations, just a top-level message
without too much detail is preferred. The exceptions are also always logged in the java-vm log, but if
this log is large it can be troublesome to correlate a certain exception to a specific action in NSO. For this
reason it is possible to configure the level of detail shown by NSO for a java-vm exception. The leaf /
ncs:java-vm/exception-error-message/verbosity takes one of three values:
• standard - Show the message from the top exception. This is the default
• verbose - Show all messages for the chain of cause exceptions, if any
• trace - Show messages for the chain of cause exceptions with exception class and the trace for the
bottom root cause
Here is an example of how this can be used. In the web-site-service example we try to create a service
without the necessary preparations:
[edit]
admin@ncs% commit
Aborted: Service create failed
[error][2013-03-25 10:46:48]
This is a very generic error message which does not describe what really happens in the Java code. Here
the java-vm log has to be analyzed to find the problem. However, with this CLI session open, we can
from another CLI set the error reporting level to trace:
$ ncs_cli -u admin
admin@ncs> configure
admin@ncs% set java-vm exception-error-message verbosity trace
admin@ncs% commit
If we now issue the commit again in the original CLI session, we get the following error message, which
pinpoints the problem in the code:
admin@ncs% commit
Aborted: [com.tailf.dp.DpCallbackException] Service create failed
Trace : [java.lang.NullPointerException]
com.tailf.conf.ConfKey.hashCode(ConfKey.java:145)
java.util.HashMap.getEntry(HashMap.java:361)
java.util.HashMap.containsKey(HashMap.java:352)
com.tailf.navu.NavuList.refreshElem(NavuList.java:1007)
com.tailf.navu.NavuList.elem(NavuList.java:831)
com.example.websiteservice.websiteservice.WebSiteServiceRFS.crea...
com.tailf.nsmux.NcsRfsDispatcher.applyStandardChange(NcsRfsDispa...
com.tailf.nsmux.NcsRfsDispatcher.dispatch(NcsRfsDispatcher.java:...
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessor...
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethod...
java.lang.reflect.Method.invoke(Method.java:616)
com.tailf.dp.annotations.DataCallbackProxy.writeAll(DataCallback...
com.tailf.dp.DpTrans.protoCallback(DpTrans.java:1357)
com.tailf.dp.DpTrans.read(DpTrans.java:571)
com.tailf.dp.DpTrans.run(DpTrans.java:369)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExec...
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExe...
java.lang.Thread.run(Thread.java:679)
com.tailf.dp.DpThread.run(DpThread.java:44)
[error][2013-03-25 10:47:09]
Loading Packages
At its first start, NSO will take the packages found in the load path and copy them into a directory under
NSO supervision, located at ./state/packages-in-use. Later starts of NSO will not take any new
copies from the packages load-path, so changes will not take effect by default. The reason for this is that, in
normal operation, changing package definitions as a side effect of a restart is unwanted behavior. Instead,
these types of changes are part of an NSO installation upgrade.
During package development, as opposed to operations, it is usually desirable that all changes to package
definitions in the package load-path take effect immediately. There are two ways to make this happen.
Either start ncs with the --with-reload-packages directive:
$ ncs --with-reload-packages
or set the environment variable NCS_RELOAD_PACKAGES before starting ncs:
$ export NCS_RELOAD_PACKAGES=true
This request can only be performed in operational mode, and the effect is that all packages will be updated
and any change in YANG models or code will take effect. If any YANG models are changed, an
automatic CDB data upgrade will be executed. If manual (user code) data upgrades are necessary, the
package should contain an upgrade component. This upgrade component will be executed as part of the
package reload. See the section called “Writing an Upgrade Package Component” for information on how
to develop an upgrade component.
If the change in a package does not affect the data model or shared Java code, there is another command.
This will redeploy the private JARs in the Java VM for a Java package, restart the Python VM for a Python
package, and reload the templates associated with the package. However, this command is not sensitive
to changes in the YANG models or shared JARs for the Java package.
The class NcsJVMLauncher contains the main() method. The started Java VM will automatically
retrieve and deploy all Java code for the packages defined in the load-path in the ncs.conf file. No
specification other than the package-meta-data.xml for each package is needed.
In the NSO CLI, there exist a number of settings and actions for the NSO Java VM. If we do:
$ ncs_cli -u admin
we see some of the settings that are used to control how the NSO Java VM runs. In particular, here we are
interested in /java-vm/stdout-capture/file.
The NSO daemon will, when it starts, also start the NSO Java VM, and it will capture the stdout output
from the NSO Java VM and send it to the file ./logs/ncs-java-vm.log. For more detail on the Java VM
settings see Chapter 2, The NSO Java VM.
Thus, if we tail -f that file, we get all the output from the Java code. That leads us to the first and
simplest way of developing the Java code. If we now:
Or instructing NSO to just redeploy the package we're currently working on.
We can then do tail -f logs/ncs-java-vm.log to check for printouts and log
messages. Typically there is quite a lot of data in the NSO Java VM log, and it can sometimes be hard to
find our own printouts and log messages. Therefore it can be convenient to use the command:
admin@iron% set java-vm exception-error-message verbosity trace
which will make the relevant exception stack traces visible in the CLI.
It is also possible to control dynamically, from the CLI, the level of logging as well as which Java packages
shall log. Say that we are interested in Maapi calls but do not want the log cluttered with what are really
NSO Java library internal calls. We can then do:
admin@iron% set java-vm java-logging logger com.tailf.ncs level level-error
[ok][2012-07-12 11:10:50]
admin@iron% set java-vm java-logging logger com.tailf.conf level level-error
[ok][2012-07-12 11:11:15]
admin@iron% commit
Commit complete.
Now considerably less log data will come. If we want these settings to persist, even if we restart
NSO from scratch with an empty database (no .cdb files in ./ncs-cdb), we can save these settings as XML
and put that XML inside the ncs-cdb directory; that way ncs will use this data as initialization data on a
fresh restart. We do:
$ ncs_load -F p -p /ncs:java-vm/java-logging > ./ncs-cdb/loglevels.xml
$ ncs-setup --reset
$ ncs
The ncs-setup --reset command stops the NSO daemon and resets NSO back to "factory
defaults". A restart of NSO will reinitialize NSO from all XML files found in the CDB directory.
we will see that the oper-status of the packages is java-uninitialized. We can also do
admin@iron> show status java-vm
start-status auto-start-not-enabled;
status not-connected;
[ok][2012-07-12 11:27:28]
And this is expected, since we have told NSO not to start the NSO Java VM. Now we can do that manually,
at the UNIX shell prompt:
$ ncs-start-java-vm
.....
.. all stdout from the NSO Java VM
So, now we are in a position where we can manually stop the NSO Java VM, recompile the Java code, and
restart the NSO Java VM. This development cycle works fine. However, even though we are running the
NSO Java VM standalone, we can still redeploy packages from the NSO CLI:
admin@iron% request packages package stats redeploy
result true
[ok][2012-07-12 10:59:01]
to reload and restart just our Java code, with no need to restart the NSO Java VM.
This will generate two files, .classpath and .project. If we add this directory to Eclipse as a "File-
>New->Java Project", uncheck "Use the default location", and enter the directory where the .classpath
and .project files have been generated, we are immediately ready to run this code in Eclipse. All we need
to do is to choose the main() routine in the NcsJVMLauncher class.
The Eclipse debugger now works as usual, and we can start and stop the Java code at will. One caveat
worth mentioning is that there are a few timeouts between NSO and the Java code that will trigger
while we sit in the debugger. When developing with the Eclipse debugger and breakpoints, we typically
want to disable all these timeouts.
First we have three timeouts in ncs.conf that matter. Copy the system ncs.conf and set the following
three values to a large value:
/ncs-config/japi/new-session-timeout
/ncs-config/japi/query-timeout
/ncs-config/japi/connect-timeout
See the man page ncs.conf(5) for a detailed description of what these values are. If these
timeouts are triggered, NSO will close all sockets to the Java VM and all bets are off.
$ cp $NCS_DIR/etc/ncs/ncs.conf .
Edit the file and enter the following XML entry just after the Webui entry.
<japi>
<new-session-timeout>PT1000S</new-session-timeout>
<query-timeout>PT1000S</query-timeout>
<connect-timeout>PT1000S</connect-timeout>
</japi>
[ok][2012-07-12 12:54:31]
[edit]
admin@iron% set devices global-settings read-timeout 1000
[ok][2012-07-12 12:54:39]
[edit]
admin@iron% set devices global-settings write-timeout 1000
[ok][2012-07-12 12:54:44]
[edit]
admin@iron% commit
Commit complete.
and then, to save these settings so that ncs will have them again on a clean restart (no CDB files):
$ ncs_load -F p -p /ncs:devices/global-settings > ./ncs-cdb/global-settings.xml
This is what is needed to be able to connect remotely to the NSO Java VM; in the ncs.conf file:
<java-vm>
<start-command>ncs-start-java-vm -d</start-command>
</java-vm>
Now if we in Eclipse, add a "Debug Configuration" and connect to port 9000 on localhost, we can attach
the Eclipse debugger to an already running system and debug it remotely.
By using the ncs-project commands, the project can be populated with the necessary packages and
kept updated. This can be used for encapsulating NCS demos or even a full-blown turn-key system.
get a default ncs.conf configuration file that can be edited to better match different scenarios. All files
and directories should be put into a version control system, such as git.
A directory called test_project is created, containing the files and directories of an NCS project as
shown below:
test_project/
|-- init_data
|-- logs
|-- Makefile
|-- ncs-cdb
|-- ncs.conf
|-- packages
|-- project-meta-data.xml
|-- README.ncs
|-- scripts
|-- |-- command
|-- |-- post-commit
|-- setup.mk
|-- state
|-- test
|-- |-- internal
|-- |-- |-- lux
|-- |-- |-- basic
|-- |-- |-- |-- Makefile
|-- |-- |-- |-- run.lux
|-- |-- |-- Makefile
|-- |-- Makefile
|-- Makefile
|-- pkgtest.env
The Makefile contains targets for building, starting, stopping and cleaning the system. It also contains
targets for entering the CLI as well as some useful targets for dealing with any git packages. Study the
Makefile to learn more.
Any initial CDB data can be put in the init_data directory. The Makefile will copy any files in this
directory into ncs-cdb before starting NCS.
There is also a test directory created with a directory structure used for automatic tests. These tests are
dependent on the test tool Lux (https://github.com/hawk/lux.git).
Project setup
To fill this project with anything meaningful, the project-meta-data.xml file needs to be edited.
The project version number is configurable; the version we get from the create command is 1.0. The
description should also be changed to a short text explaining what the project is intended for. Our initial
content of the project-meta-data.xml may now look like this:
<project-version>1.0</project-version>
<description>Skeleton for a NCS project</description>
</project-meta-data>
<package>
<name>cisco-ios</name>
<url>file:///tmp/ncs-4.1.2-cisco-ios-4.1.5.tar.gz</url>
</package>
<package>
<name>foo</name>
<git>
<repo>ssh://[email protected]/foo.git</repo>
<branch>master</branch>
</git>
</package>
<package>
<name>mypack</name>
<local/>
</package>
By specifying netsim devices in the project-meta-data.xml file, the necessary commands for
creating the netsim configuration will be generated in the setup.mk file that ncs-project update
creates. The setup.mk file is included in the top Makefile, and provides some useful make targets for
creating and deleting our netsim setup.
When done editing the project-meta-data.xml, run the command ncs-project update. Add
the -v switch to see what the command does.
Answer yes when asked to overwrite the setup.mk. After this a new runtime directory is created with ncs
and simulated devices configured. You are now ready to compile your system with: make all.
If you have a lot of packages, all located in the same git repository, it is convenient to specify the
repository just once. This can be done by adding a packages-store section to the
project-meta-data.xml file. This means that if a package does not have a git repository defined, the
repository and branch in the packages-store are used.
Note If a package has specified in its package-meta-data.xml file that it depends on other
packages, ncs-project update will try to clone those packages from any of the specified packages-
stores. To override this behavior, specify all packages explicitly in your project-meta-data.xml
file.
Export
When the development is done, the project can be bundled together and distributed further. ncs-project
comes with a command, export, for this purpose. The export command creates a tarball of the
required files and any extra files specified in the project-meta-data.xml file.
Note Developers are encouraged to distribute the project, either via some source code management system,
like git, or by exporting bundles using the export command.
When using export, a subset of the packages should be configured for exporting. The reason for not
exporting all packages in a project is that some of the packages may be used solely for testing or similar.
When configuring the bundle, the packages included in the bundle are leafrefs to the packages defined at
the root of the model; see Example 122, “The NCS Project YANG model”. We can also define a specific
tag, commit, or branch, or even a different location for the packages, different from the one used while
developing. For example, we might develop against an experimental branch of a repository, but bundle
with a specific release of that same repository.
Note Bundled packages specified as type file:// or url:// will not be built; they will simply be
included as-is by the export command.
The bundle also has a name and a list of included files. Unless another name is specified on the
command line, the final compressed file will be named using the configured bundle name and project
version.
We create the tar-ball by using the export command:
In the first scenario, it is possible to create an NCS project, populated with the packages from the bundle,
to create a ready-to-run NCS system. The optional init_data part makes it possible to prepare CDB with
configuration prior to starting the system the very first time. The project-meta-data.xml file will
specify all the packages as local to avoid any dangling pointers to non-accessible git repositories.
The second scenario is intended for the case when you want to install the packages manually, or via a
custom process, into your running NCS systems.
The --snapshot switch will add a timestamp to the name of the created bundle file, to make it
clear that it is not a proper version-numbered release.
To import our exported project, we do an ncs-project create and point out where the bundle
is located.
$ ncs-project --help
module: tailf-ncs-project
+--rw project-meta-data
+--rw name string
+--rw project-version? version
+--rw description? string
+--rw packages-store
| +--rw directory* [name]
| | +--rw name string
| +--rw git* [repo]
| +--rw repo string
| +--rw (git-type)?
| +--:(branch)
| | +--rw branch? string
| +--:(tag)
| | +--rw tag? string
| +--:(commit)
| +--rw commit? string
+--rw netsim
| +--rw device* [name]
| +--rw name -> /project-meta-data/package/name
| +--rw prefix string
| +--rw num-devices int32
+--rw bundle!
| +--rw name? string
| +--rw includes
| | +--rw file* [path]
| | +--rw path string
| +--rw package* [name]
| +--rw name -> ../../../package/name
| +--rw (package-location)?
| +--:(local)
| | +--rw local? empty
| +--:(url)
| | +--rw url? string
| +--:(git)
| +--rw git
| +--rw repo? string
| +--rw (git-type)?
| +--:(branch)
| | +--rw branch? string
| +--:(tag)
| | +--rw tag? string
| +--:(commit)
| +--rw commit? string
+--rw package* [name]
+--rw name string
+--rw (package-location)?
+--:(local)
| +--rw local? empty
+--:(url)
| +--rw url? string
+--:(git)
+--rw git
+--rw repo? string
+--rw (git-type)?
+--:(branch)
| +--rw branch? string
+--:(tag)
| +--rw tag? string
+--:(commit)
+--rw commit? string
Introduction
This section describes how to develop a service application. A service application maps the input parameters
that create, modify, and delete a service instance into the resulting native commands to devices in the
network. The input parameters are given by a northbound system, such as a self-service portal via API
calls to NSO, or by a network engineer using any of the NSO user interfaces, such as the NSO CLI.
The service application has a single task: from a given set of input parameters for a service instance
modification, calculate the minimal set of device interface operations to achieve the desired service
change.
It is very important that the service application supports any change, i.e., full create, delete, and update of
any service parameter.
Definitions
Below follows a set of definitions that are used throughout this section:
Service type: A specific type of service, like "L2 VPN", "L3 VPN", "VLAN", or "Firewall Rule set".
Service instance: A specific instance of a service type, such as "ACME L3 VPN".
Service model: The schema definition for a service type. In NSO, YANG is used as the schema language
to define service types. Service models are used in different contexts/systems and therefore have slightly
different meanings. In the context of NSO, a service model is a black-box specification of the attributes
required to instantiate the service. This is different from service models in ITIL-based CMDBs or OSS
inventory systems, where a service model is more of a white-box model that describes the complete
structure.
Service application: The code that implements a service, i.e., maps the parameters for a service instance
to device configuration.
Device configuration: Network devices are configured to perform network functions. Every service
instance results in corresponding device configuration changes. The dominant way to represent and
change device configurations in current networks is CLI representations and sequences. NETCONF
represents the configuration as XML instance documents corresponding to the YANG schema.
The Fundamentals
Mapping
Developing a service application that transforms a service request to corresponding device configurations
is done differently in NSO than in other tools on the market. It is therefore important to understand the
underlying fundamental concepts and how they differ from what you might assume.
As a developer you need to express the mapping from a YANG service model to the corresponding device
YANG model. This is a declarative mapping in the sense that no sequencing is defined.
Note well that irrespective of the underlying device type and corresponding native device interface, the
mapping is towards a YANG device model, not the native CLI for example. This means that as you write
the service mapping, you do not have to worry about the syntax of different devices' CLI commands or in
which order these commands are sent to the devices. This is all taken care of by the NSO device manager.
The above means that implementing a service in NSO is reduced to transforming the input data structure
(described in YANG) to device data structures (also described in YANG).
Who writes the models?
• Developing the service model is part of developing the service application and is covered later in this
chapter.
• Every device NED comes with a corresponding device YANG model. This model has been designed
by the NED developer to capture the configuration data that is supported by the device.
This means that a service application has two primary artifacts: a YANG service model and a mapping
definition to the device YANG as illustrated below.
The possible run-time changes for an existing service instance are numerous. If a developer has to define
a back-end for every possible change, like a script or a workflow, the task is daunting, error-prone, and
never-ending.
NSO reduces this problem to a single data-mapping definition for the "create" scenario. At run-time
NSO will render the minimum change for any possible modification, like the ones mentioned below. This is
managed by the FASTMAP algorithm, explained later in this section.
Another challenge in traditional systems is that a lot of code goes into managing error scenarios. The NSO
built-in transaction manager takes that away from the developer of the Service Application.
• VPN name
• AS Number
• End-point CE device and interface
• End-point PE device and interface
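As a mental model only, the FASTMAP idea, deriving modify and delete from a single "create" mapping by diffing what a service instance rendered, can be sketched in Python. This is a toy illustration, not the actual NSO implementation, which operates on CDB transactions:

```python
def create(service):
    """User-supplied mapping: service inputs -> desired device config."""
    return {
        (service["device"], "vlan"): service["vlan_id"],
        (service["device"], "interface"): service["interface"],
    }

class FastMapEngine:
    """Toy engine: derives modify and delete from the create mapping alone."""
    def __init__(self):
        self.device_config = {}   # stands in for the network
        self.contributions = {}   # what each service instance last rendered

    def commit(self, name, service):
        old = self.contributions.pop(name, {})
        new = create(service) if service is not None else {}
        for key in old.keys() - new.keys():   # stale config: remove
            del self.device_config[key]
        for key, value in new.items():        # new or changed config: write
            self.device_config[key] = value
        if new:
            self.contributions[name] = new

engine = FastMapEngine()
engine.commit("net-0", {"device": "c0", "vlan_id": 1234, "interface": "1/0"})
engine.commit("net-0", {"device": "c0", "vlan_id": 1222, "interface": "1/0"})
assert engine.device_config[("c0", "vlan")] == 1222   # minimal change applied
engine.commit("net-0", None)                          # service deleted
assert engine.device_config == {}                     # its config is removed
```

The developer only wrote `create`; change and delete fell out of the diff, which is the point of the pattern.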
Based on this example device configuration for a service instance, note which parts of the device
configuration are variables resulting from the service configuration.
The figure below illustrates an example VPN configuration. Configuration items in bold are variables that
are mapped from the service input.
Now look at the attributes of the service model and make sure you have a clear picture of how the values are
mapped into the corresponding device configuration.
Mapping Iterations
During the above exercises you might come into a situation where the input parameters for a service are
not sufficient to render the device configuration.
Examples:
• Assume the northbound system only provides the CE device and wants NSO to pick the right PE.
• Assume the northbound system wants NSO to pick an IP address and does not pass that as an input
parameter.
This is part of the service design iteration. If the input parameters are not sufficient to define the
corresponding device configuration you either add more attributes to the service model so that the device
configuration data can be defined as a pure data model mapping or you assume the mapping can fetch the
missing pieces.
In the latter case there are several alternatives. All of these will be explained in detail later. Typical
patterns are listed below:
• If the mapping needs pre-configured data, you can define a YANG data model for this data. For
example, in the VPN case NSO could have a list of CE-PE links loaded into NSO and the mapping
then uses this list to find the PE for a CE and the PE therefore does not need to be part of the service
model.
• If the mapping needs to request data from an external system, for example query an IP address
manager for the IP addresses, you can use the Reactive FASTMAP pattern.
• Use NSO to handle allocation of resources like VLAN IDs, etc. A package can be defined to manage
VLAN pools within NSO; the mapping then requests a new VLAN from the VLAN pool, so the VLAN ID
does not need to be passed as input. The Reactive FASTMAP pattern is used in this case as
well.
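The VLAN-pool idea in the last bullet can be sketched like this. This is a toy allocator for illustration; real deployments typically use a resource-management package with its own YANG model and APIs:

```python
class VlanPool:
    """Toy VLAN pool: hands out the lowest free ID and takes IDs back."""
    def __init__(self, first=100, last=199):
        self.free = set(range(first, last + 1))
        self.allocations = {}          # service name -> allocated VLAN id

    def request(self, service_name):
        if service_name in self.allocations:   # idempotent re-request
            return self.allocations[service_name]
        vlan_id = min(self.free)
        self.free.remove(vlan_id)
        self.allocations[service_name] = vlan_id
        return vlan_id

    def release(self, service_name):
        self.free.add(self.allocations.pop(service_name))

pool = VlanPool()
a = pool.request("acme-vpn")      # lowest free ID: 100
b = pool.request("beta-vpn")      # next free ID: 101
pool.release("acme-vpn")          # 100 returns to the pool
c = pool.request("gamma-vpn")     # reuses 100
```

With such a pool loaded into NSO, the service model no longer needs a vlan-id input parameter; the mapping asks the pool instead.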
Service templates If the mapping is a pure data model mapping without any complex
calculations, algorithms, external call-outs, resource management or
Reactive FASTMAP patterns, the mapping can be defined as service
templates. Service templates require no programming skills and are
derived from example device configurations. They are therefore well
suited for network engineers. See examples.ncs/service-
provider/simple-mpls-vpn for an example.
Java and configuration templates This is the most common technique for real-life deployments. A thin
layer of Java implements the device-type-independent algorithms
and passes variables to templates that map them into device-specific
configurations across vendors. The templates are often defined
"per feature". This means that the Java code calculates a number of
variables that are device independent. The Java code then applies
templates with these variables as inputs, and the templates map
them to the various device types. All device specifics are done in the
templates, thus keeping the Java code clean. See examples.ncs/
service-provider/mpls-vpn for an example.
Java only There are no real benefits to this approach compared to the above
combination of Java and templates; it depends more on the skills
of the developer. Programmers with less networking experience might
prefer this approach. Abstracting away different device vendors is
often more cumbersome than in the Java-and-templates approach.
See examples.ncs/datacenter/datacenter for an
example.
All of the steps below assume you have an NSO local installation (see Chapter 2, NSO Local Install in NSO
Installation Guide) and have created an NSO instance with ncs-setup (see NSO 5.3 Manual Pages). This
command creates the NSO instance in a directory, called the NSO runtime directory, which is specified on
the command line:
$ ncs-setup --dest ./ncs-run
Step 1 Generate a service package in the packages directory in the runtime directory. In this example, the package name is
vlan, and it is a service package with Java code and templates:
$ cd ncs-run/packages
$ ncs-make-package --service-skeleton java-and-template vlan
Step 2 Edit the skeleton YANG service model in the generated package. The YANG file resides in PACKAGE-NAME/src/
yang
Preparation
In order to reuse an existing environment for NSO and netsim, the examples.ncs/getting-
started/using-ncs/1-simulated-cisco-ios/ example is used. Make sure you have stopped
any running NSO and netsim.
Step 1 The first step is to generate a skeleton package for a service. (For details on packages see the section called
“Packages” in NSO 5.3 Getting Started Guide). The package is called vlan:
$ cd packages
$ ncs-make-package --service-skeleton template vlan
import ietf-inet-types {
prefix inet;
}
import tailf-ncs {
prefix ncs;
}
augment /ncs:services {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan";
leaf name {
type string;
}
}
}
If this is your first exposure to YANG you can see that the modeling language is very straightforward and easy to
understand. See RFC 6020 for more details and examples for YANG.
The concepts you should understand in the above generated skeleton are:
1 The vlan service list is augmented into the services tree in NSO. This specifies the path used to reach vlans in the CLI,
REST, etc. There is no requirement on where the service is added in NSO; if you want vlans to be at the
top level, just remove the augment statement.
2 The two lines uses ncs:service-data and ncs:servicepoint "vlan" tell NSO that this is a
service.
Step 2 The next step is to modify the skeleton service YANG model and add the real parameters.
So, if a user wants to create a new VLAN in the network what should the parameters be? A very simple service model
could look like below (modify the src/yang/vlan.yang file):
augment /ncs:services {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan";
leaf name {
type string;
}
leaf vlan-id {
type uint32 {
range "1..4096";
}
}
list device-if {
key "device-name";
leaf device-name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface-type {
type enumeration {
enum FastEthernet;
enum GigabitEthernet;
enum TenGigabitEthernet;
}
}
leaf interface {
type string;
}
}
}
}
3 The VLAN is attached to a list of devices and interfaces. To keep this example as simple as possible, the
interface is referenced by picking the interface type and then giving the interface name as a plain string.
Step 3 The next step is to build the data model:
$ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/packages/vlan/src
$ make
.../ncsc `ls vlan-ann.yang > /dev/null 2>&1 && echo "-a vlan-ann.yang"` \
-c -o ../load-dir/vlan.fxs yang/vlan.yang
A nice property of NSO is that, already at this point, you can load the service model into NSO and try it out
in the CLI, etc. Nothing will happen to the devices since the mapping is not defined yet. This is normally the way to
iterate a model: load it into NSO, test the CLI with the network engineers, make changes, reload it into NSO, etc.
Step 4 Go to the root directory of the simulated-ios example:
$ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios
Step 5 Start netsim and NSO:
$ ncs-netsim start
DEVICE c0 OK STARTED
DEVICE c1 OK STARTED
DEVICE c2 OK STARTED
$ ncs --with-package-reload
When NSO was started above, you gave NSO a parameter to reload all packages so that the newly added vlan
package is included. Without this parameter, NSO starts with the same packages as last time. Packages can also be
reloaded without starting and stopping NSO.
Step 6 Start the NSO CLI:
$ ncs_cli -C -u admin
Step 7 Since this is the first time NSO is started with these devices, make sure NSO synchronizes its database
with the devices, then create the first service instance:
admin@ncs# devices sync-from
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# services vlan net-0 vlan-id 1234 \
device-if c0 interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1234
device-if c0
interface-type FastEthernet
interface 1/0
!
!
admin@ncs(config)# services vlan net-0 vlan-id 1234 \
device-if c1 interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c1)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1234
device-if c0
interface-type FastEthernet
interface 1/0
!
device-if c1
interface-type FastEthernet
interface 1/0
!
!
admin@ncs(config)# commit dry-run outformat native
admin@ncs(config)# commit
Commit complete.
Committing service changes at this point has no effect on the devices since there is no mapping defined. This is why
the command commit dry-run outformat native doesn't show any output. The service instance data
will just be stored in the database in NSO.
Note that you get tab completion on the devices since they are references to device names in CDB. You also get tab
completion for interface types since the types are enumerated in the model. However, the interface name is just a
string, and you have to type the correct interface name. For service models where there is only one device type, like
in this simple example, a reference to the IOS interface name according to the IOS model could be used. However, that
makes the service model dependent on the underlying device types: if another type is added, the service model
needs to be updated, which is most often not desired. There are techniques to get tab completion even when the data
type is a string, but they are omitted here for simplicity.
Make sure you delete the vlan service instance before moving on with the example:
admin@ncs(config)# no services vlan
admin@ncs(config)# commit
Commit complete.
Step 1 Now it is time to define the mapping from service configuration to actual device configuration. The first step is
to understand the actual device configuration. In this example, this is done by manually configuring one vlan on a
device. This concrete device configuration is a starting point for the mapping; it shows the expected result of applying
the service.
admin@ncs(config)# devices device c0 config ios:vlan 1234
admin@ncs(config-vlan)# top
admin@ncs(config)# devices device c0 config ios:interface \
FastEthernet 10/10 switchport trunk allowed vlan 1234
admin@ncs(config-if)# top
admin@ncs(config)# show configuration
devices device c0
config
ios:vlan 1234
!
ios:interface FastEthernet10/10
Step 2 Display the same configuration in XML format, which is what the template will be based on (for
example using the | display xml pipe flag):
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c0</name>
<config>
<vlan xmlns="urn:ios">
<vlan-list>
<id>1234</id>
</vlan-list>
</vlan>
</config>
</device>
</devices>
</config>
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c0</name>
<config>
<interface xmlns="urn:ios">
<FastEthernet>
<name>10/10</name>
<switchport>
<trunk>
<allowed>
<vlan>
<vlans>1234</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
</interface>
</config>
</device>
</devices>
</config>
Step 3 Now, we shall build that template. When the package was created a skeleton XML file was created in packages/
vlan/templates/vlan.xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<!--
Select the devices from some data structure in the service
model. In this skeleton the devices are specified in a leaf-list.
Select all devices in that leaf-list:
-->
<name>{/device}</name>
<config>
<!--
Add device-specific parameters here.
In this skeleton the service has a leaf "dummy"; use that
to set something on the device e.g.:
<ip-address-on-device>{/dummy}</ip-address-on-device>
-->
</config>
</device>
</devices>
</config-template>
We need to specify the right path to the devices. In our case the devices are identified by /device-if/device-
name (see the YANG service model).
For each of those devices we need to add the VLAN and change the specified interface configuration. Copy the XML
config from the CLI and replace with variables:
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device-if/device-name}</name>
<config>
<vlan xmlns="urn:ios">
<vlan-list tags="merge">
<id>{../vlan-id}</id>
</vlan-list>
</vlan>
<interface xmlns="urn:ios">
<?if {interface-type='FastEthernet'}?>
<FastEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
<?end?>
<?if {interface-type='GigabitEthernet'}?>
<GigabitEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</GigabitEthernet>
<?end?>
<?if {interface-type='TenGigabitEthernet'}?>
<TenGigabitEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</TenGigabitEthernet>
<?end?>
</interface>
</config>
</device>
</devices>
</config-template>
Walking through the template can give a better idea of how it works. For every /device-if/device-name from
the service instance do the following:
1 Add the vlan to the vlan-list, the tag "merge" tells the template to merge the data into an existing list (default is to
replace).
2 For every interface within that device, add the vlan to the allowed vlans and set mode to trunk. The tag "nocreate"
tells the template to not create the named interface if it does not exist.
Tip While experimenting with the template it can be helpful to remove the nocreate tag. In that way you will always
create configuration from the template even if the interface does not exist.
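The effect of the merge tag compared to the default (replace) behaviour noted in the walk-through can be illustrated with plain Python dicts standing in for configuration lists. This is a simplification for illustration, not how NSO stores configuration:

```python
existing = {"vlan-list": {100: {}, 200: {}}}

def apply_default_replace(config, ids):
    # Without a tag, the template content replaces the list.
    config["vlan-list"] = {i: {} for i in ids}

def apply_merge(config, ids):
    # With tags="merge", new entries are merged into the existing list.
    for i in ids:
        config["vlan-list"].setdefault(i, {})

apply_merge(existing, [1234])
assert sorted(existing["vlan-list"]) == [100, 200, 1234]   # 100, 200 kept

apply_default_replace(existing, [1234])
assert sorted(existing["vlan-list"]) == [1234]             # 100, 200 gone
```

This is why the vlan-list in the template carries tags="merge": the service must add its VLAN without wiping out VLANs configured by other means.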
It is important to understand that every path in the template above refers to paths from the service model in
vlan.yang.
For details on the template syntax, see the section called “Service Templates”.
Step 4 Throw away the uncommitted changes to the device, and request NSO to reload the packages:
admin@ncs(config)# revert no-confirm
admin@ncs(config)# end
admin@ncs# packages reload
Previously we started NSO with a reload-packages option; the above shows how to do the same without stopping and
starting NSO.
Step 5 We can now create services that will make things happen in the network. Create a VLAN service:
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 \
Note that the commit command stored the service data in NSO, and at the same time pushed the changes to the two
devices affected by the service.
Step 6 The VLAN service instance can now be changed, for example by giving it a new VLAN id:
admin@ncs(config)# services vlan net-0 vlan-id 1222
admin@ncs(config-vlan-net-0)# top
admin@ncs(config)# commit dry-run outformat native
native {
    device {
        name c1
        data no vlan 1234
             vlan 1222
             !
             interface FastEthernet1/0
              switchport trunk allowed vlan 1222
             exit
    }
}
admin@ncs(config)# commit
Commit complete.
It is important to understand what happens above. When the VLAN id is changed, NSO is able to calculate
the minimal required changes to the configuration. The same holds true for changing elements in the
configuration, or even parameters of those elements. In this way NSO does not need any explicit mapping for a
VLAN change or deletion. NSO does not write a new configuration over the old configuration. Adding an interface
to the same service works the same way:
admin@ncs(config)# commit
Commit complete.
Step 8 To make the VLAN service package complete edit the vlan/package-meta-data.xml to reflect the service
model purpose.
This example showed how to use template-based mapping. NSO also allows for programmatic mapping,
as well as a combination of the two approaches. The latter is very flexible: logic that needs to be attached
to the service provisioning is expressed in code, and the logic applies device-agnostic templates.
Note well that the examples in this section are extremely simplified from a networking perspective in order
to illustrate the concepts.
Step 1 Prepare a simulated environment of Cisco IOS devices: in this example we start from scratch in order to illustrate the
complete development process. We will not reuse any existing NSO examples.
Step 2 Generate a service skeleton package: use the NSO tools to generate a Java-based service skeleton package.
Step 3 Write and test the VLAN Service Model.
Step 4 Analyze the VLAN service mapping to IOS configuration.
The above steps are no different from defining services using templates. Next is to start playing with the
Java Environment:
$ mkdir ~/vlan-service
$ cd ~/vlan-service
Now let's create a simulated environment with three IOS devices and an NSO instance that is ready to run with this
simulated network:
ios:ip routing
...
Finally, set VLAN information manually on a device to prepare for the mapping later.
admin@ncs(config)# commit
$ ls -F1
README.ncs
README.netsim
logs/
ncs-cdb/
ncs.conf
netsim/
packages/
scripts/
state/
$ cd packages
$ ls -l
total 8
cisco-ios -> .../packages/neds/cisco-ios
Currently there is only one package, the Cisco IOS NED. We will now create a new package that will
contain the VLAN service.
Figure 126. Package Structure
During the rest of this section we will work with the vlan/src/yang/vlan.yang and vlan/src/
java/src/com/example/vlan/vlanRFS.java files.
augment /ncs:services {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan-servicepoint";
leaf name {
type string;
}
leaf vlan-id {
type uint32 {
range "1..4096";
}
}
list device-if {
key "device-name";
leaf device-name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
type string;
}
}
}
}
3 The VLAN is attached to a list of devices and interfaces. In order to make this example as simple as
possible, the interface name is just a string. A more correct and useful example would make this a
reference to an interface on the device, but for now it is better to keep the example simple.
The first line expands to a YANG structure that is shared amongst all services. The second line connects
the service to the Java callback.
To build this service model cd to packages/vlan/src and type make (assuming you have the make
build system installed).
$ cd packages/vlan/src/
$ make
We can now test the service model by requesting NSO to reload all packages:
$ ncs_cli -C -u admin
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
result Done
You can also stop and start NSO, but then you have to pass the option --with-package-reload when
starting NSO. This is important: by default, NSO does not take any changes in packages into account when
restarting. When packages are reloaded, the state/packages-in-use directory is updated.
Now, create a VLAN service (nothing will happen since we have not defined any mapping).
admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# commit
OK, that worked. Let us move on and connect that to some device configuration using Java mapping. Note
well that Java mapping is not required; templates are more straightforward and recommended, but we use
this as a "Hello World" introduction to Java service programming in NSO. At the end we will also show
how to combine Java and templates: templates are used to define a vendor-independent way of mapping
service attributes to device configuration, and Java is used as a thin layer before the templates to do logic,
call-outs to external systems, etc.
By default, NSO will start the Java VM by invoking the command $NCS_DIR/bin/ncs-start-java-vm. That
script will invoke
$ java com.tailf.ncs.NcsJVMLauncher
The class NcsJVMLauncher contains the main() method. The started Java VM will automatically
retrieve and deploy all Java code for the packages defined in the load path of the ncs.conf file. No other
specification than the package-meta-data.xml for each package is needed.
For more detail on the Java VM settings see Chapter 2, The NSO Java VM.
The corresponding generated Java skeleton (with one "Hello World!" print statement added):
Modify the generated code to include the print "Hello World!" statement in the same way. Re-build the
package:
$ cd packages/vlan/src/
$ make
Whenever a package has changed we need to tell NSO to reload the package. There are three ways:
1 Just reload the implementation of a specific package, without loading any model changes: admin@ncs#
packages package vlan redeploy
2 Reload all packages, including any model changes: admin@ncs# packages reload
3 Restart NSO with the reload option: $ ncs --with-package-reload
When that is done we can create a service (or modify an existing one) and the callback will be triggered:
$ tail ncs-java-vm.log
...
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- REDEPLOY PACKAGE COLLECTION --> OK
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- REDEPLOY ["vlan"] --> DONE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- DONE COMMAND --> REDEPLOY_PACKAGE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- READ SOCKET =>
Hello World!
Tailing the ncs-java-vm.log is one way of developing. You can also start and stop the Java VM
explicitly and see the trace in the shell. First of all tell NSO not to start the VM by adding the following
snippet to ncs.conf:
<java-vm>
<auto-start>false</auto-start>
</java-vm>
Then, after restarting NSO or reloading the configuration, from the shell prompt:
$ ncs-start-java-vm
.....
.. all stdout from JVM
So modifying or creating a VLAN service will now have the "Hello World!" string show up in the shell.
You can modify the package and reload/redeploy and see the output.
Using Eclipse
First of all generate environment for Eclipse:
$ ncs-setup --eclipse-setup
This will generate two files, .classpath and .project. Add this directory to Eclipse as a new Java project
(File > New > Java Project), uncheck "Use the default location", and enter the directory where the .classpath
and .project files were generated. You are then immediately ready to run this code in Eclipse.
All you need to do is choose the main() routine in the NcsJVMLauncher class. The Eclipse debugger
now works as usual, and you can start and stop the Java code at will.
One caveat worth mentioning is that there are a few timeouts between NSO and the Java
code that will trigger while you sit in the debugger. While developing with the Eclipse debugger and
breakpoints, you typically want to disable all of these timeouts. There are three timeouts in ncs.conf that
matter: set /ncs-config/japi/new-session-timeout, /ncs-config/japi/query-timeout and
/ncs-config/japi/connect-timeout to a large value. See the man page ncs.conf(5) for a detailed description of
these values. If these timeouts are triggered, NSO will close all sockets to the Java VM and all
bets are off.
$ cp $NCS_DIR/etc/ncs/ncs.conf .
Edit the file and enter the following XML entry just after the Webui entry.
<japi>
<new-session-timeout>PT1000S</new-session-timeout>
<query-timeout>PT1000S</query-timeout>
<connect-timeout>PT1000S</connect-timeout>
</japi>
$ ncs -c ./ncs.conf
You can verify that the Java VM is not running by checking the package status:
package-version 1.0
description "Skeleton for a resource facing service - RFS"
ncs-min-version 3.0
directory ./state/packages-in-use/1/vlan
component RFSSkeleton
callback java-class-name [ com.example.vlan.vlanRFS ]
oper-status java-uninitialized
You can start and stop the Java VM from Eclipse. Note well that this is not needed, since the change cycle
is: modify the Java code, run make in the src directory, and then reload the package, all while NSO and the
JVM are running. Change the VLAN service and see the console output in Eclipse:
Another option is to have Eclipse connect to the running VM. Start the VM manually with the -d option.
$ ncs-start-java-vm -d
Listening for transport dt_socket at address: 9000
NCS JVM STARTING
...
Then you can setup Eclipse to connect to the NSO Java VM:
In order for Eclipse to show the NSO code when debugging, add the NSO source JARs (Add External JARs in
Eclipse):
Navigate to the create method for the VLAN service and add a breakpoint:
Commit a change of a VLAN service instance and Eclipse will stop at the breakpoint:
The NAVU API lets the Java programmer navigate the service model and the device models as a DOM
tree. Have a look at the create signature:
@ServiceCallback(servicePoint="vlan-servicepoint",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws DpCallbackException {
Two NAVU nodes are passed: the actual service instance (service) and the NCS root (ncsRoot).
We can have a first look at NAVU by analyzing the first try statement:
try {
    // check if it is reasonable to assume that devices
    // initially has been sync-from:ed
    NavuList managedDevices =
        ncsRoot.container("devices").list("device");
    for (NavuContainer device : managedDevices) {
        if (device.list("capability").isEmpty()) {
            String mess = "Device %1$s has no known capabilities, " +
                          "has sync-from been performed?";
            String key = device.getKey().elementAt(0).toString();
            throw new DpCallbackException(String.format(mess, key));
        }
    }
}
NAVU is a lazily evaluated DOM tree that represents the instantiated YANG model. So, knowing the NSO
model, where devices is a container and device is a list, the device list can be retrieved by
ncsRoot.container("devices").list("device").
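As a mental model, NAVU can be pictured as a tree navigated with container() and list() calls. The Python toy below is invented purely for illustration; the real API is the Java com.tailf.navu package:

```python
class Node:
    """Toy stand-in for a NAVU node over instantiated YANG data."""
    def __init__(self, data):
        self._data = data

    def container(self, name):
        return Node(self._data[name])       # descend into a container

    def list(self, name):
        return [Node(e) for e in self._data[name]]   # iterate a list

    def leaf(self, name):
        return self._data[name]             # read a leaf value

ncs_root = Node({"devices": {"device": [
    {"name": "c0", "capability": ["ios"]},
    {"name": "c1", "capability": []},
]}})

def devices_without_capabilities(root):
    # Mirrors the Java loop above: flag devices with no known capabilities.
    return [dev.leaf("name")
            for dev in root.container("devices").list("device")
            if not dev.leaf("capability")]

print(devices_without_capabilities(ncs_root))   # ['c1']
```

The chained calls mirror the Java `ncsRoot.container("devices").list("device")` navigation style.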
The service node can be used to fetch the values of the VLAN service instance:
• vlan/name
• vlan/vlan-id
• vlan/device-if/device and vlan/device-if/interface
A first snippet that iterates the service model and prints to the console looks like below:
The com.tailf.conf package contains Java classes representing the YANG types, like
ConfUInt32.
Try it out by the following sequence:
Remember the service attribute is passed as a parameter to the create method. As a starting point, look
at the first three lines:
1 To reach a specific leaf in the model use the NAVU leaf method with the name of the leaf as
parameter. This leaf then has various methods like getting the value as a string.
The next step is to iterate over the devices and interfaces. The NAVU method elements() returns the elements of a
NAVU list.
In order to write the mapping code, make sure you have an understanding of the device model. One good
way of doing that is to create a corresponding configuration on one device and then display that with pipe
target "display xpath". Below is a CLI output that shows the model paths for "FastEthernet 1/0":
admin@ncs% show devices device c0 config ios:interface
FastEthernet 1/0 | display xpath
/devices/device[name='c0']/config/ios:interface/
FastEthernet[name='1/0']/switchport/mode/trunk
/devices/device[name='c0']/config/ios:interface/
FastEthernet[name='1/0']/switchport/trunk/allowed/vlan/vlans [ 111 ]
This can then be opened in a Web browser and model paths are shown to the right:
Now we replace the print statements with code that sets real configuration on the devices.
Let us walk through the above code line by line. The device-name is a leafref. The deref method
returns the object that the leafref refers to. The getParent() call might surprise the reader. Look at the
path for the leafref: /device/name/config/ios:interface/name. The name leaf is the key
that identifies a specific interface. deref returns that key, while we want a reference to the
interface itself (/device/name/config/ios:interface); that is the reason for the getParent() call.
The next line sets the vlan-list on the device. Note well that this follows the paths displayed earlier using
the NSO CLI. The sharedCreate() call is important: it creates device configuration based on this service,
and it says that other services might also create the same value ("shared"). Shared create maintains reference
counters for the created configuration so that service deletion removes the configuration only
when the last service referring to it is deleted. Finally, the interface name is used as a key to check whether the
interface exists, using containsNode().
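The reference-counting behaviour behind sharedCreate() can be sketched as follows. This is a toy model for illustration only; the real bookkeeping is internal to NSO:

```python
class SharedConfig:
    """Toy reference-counted store illustrating the sharedCreate idea."""
    def __init__(self):
        self.refcount = {}   # config path -> number of referring services

    def shared_create(self, path):
        self.refcount[path] = self.refcount.get(path, 0) + 1

    def shared_delete(self, path):
        self.refcount[path] -= 1
        if self.refcount[path] == 0:
            del self.refcount[path]   # last service gone: remove the config

store = SharedConfig()
store.shared_create("ios:vlan/vlan-list{1234}")   # created by service A
store.shared_create("ios:vlan/vlan-list{1234}")   # service B shares it
store.shared_delete("ios:vlan/vlan-list{1234}")   # deleting A keeps it
assert "ios:vlan/vlan-list{1234}" in store.refcount
store.shared_delete("ios:vlan/vlan-list{1234}")   # deleting B removes it
assert store.refcount == {}
```

This is why two VLAN services referring to the same vlan-list entry do not clobber each other on deletion.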
The last step is to update the VLAN list for each interface. The code below adds an element to the VLAN
leaf-list.
// The interface
NavuNode theIf = feIntfList.elem(feIntfName);
theIf.container("switchport").
sharedCreate().
container("mode").
container("trunk").
sharedCreate();
// Create the VLAN leaf-list element
theIf.container("switchport").
container("trunk").
container("allowed").
container("vlan").
leafList("vlans").
sharedCreate(vlanID16);
The above create method is all that is needed for create, read, update, and delete. NSO will automatically
handle any changes, like changing the VLAN ID, adding an interface to the VLAN service, or deleting
the service. Play with the CLI, modify and delete VLAN services, and make sure you see this happening. This
is handled by the FASTMAP engine; it renders any change based on the single definition of the create
method.
• Managing different device vendors. If we introduced more vendors in the network, this would
need to be handled by the Java code. Of course this can be factored into separate classes in order to
keep the general logic clean, passing the device details to vendor-specific classes, but this gets
complex and will always require Java programmers when new device types are introduced.
• No clear separation of concerns, domain expertise. The general business logic for a service is one
thing; detailed configuration knowledge of device types is something else. The latter requires network
engineers, while the former is normally handled by a separate team that deals with OSS
integration.
In this model the Java layer focuses on the required logic, but it never touches concrete device models from
various vendors. The vendor-specific details are abstracted away using feature templates. The templates
take variables as input from the service logic, and the templates in turn transform these into concrete
device configuration. Introducing a new device type does not affect the Java mapping.
Note that the logic layer does not understand the device types; the templates dynamically apply the
correct leg of the template depending on which device is touched.
• VLAN id
• Device and interface
So the mapping logic can just pass these variables to the feature template, which applies them across a multi-
vendor network.
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{$DEVICE}</name>
<config>
<vlan xmlns="urn:ios" tags="merge">
<vlan-list>
<id>{$VLAN_ID}</id>
</vlan-list>
</vlan>
<interface xmlns="urn:ios" tags="merge">
<FastEthernet tags="nocreate">
<name>{$INTF_NAME}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{$VLAN_ID}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
</interface>
</config>
</device>
</devices>
</config-template>
This template only maps to Cisco IOS devices (the xmlns="urn:ios" namespace), but you can add "legs"
for other device types at any point in time and reload the package.
Note that the Java code has no knowledge of the underlying device type; it just passes the feature variables
to the template. At run-time you can update the template with mappings to other device types. The Java
code stays untouched: if you modify an existing VLAN service instance to refer to the new device type, the
commit will generate the corresponding configuration for that device.
The perceptive reader will object: why have the Java layer at all, when this could have been done as
a pure template solution? That is true, but this simple Java layer leaves room for arbitrarily complex
service logic before applying the template. The mapping logic may need to reach out to other data in
order to generate the device configuration. This is common in the following scenarios:
• Policies: it might make sense to define policies that can be shared between service instances. The
policies, for example QoS, have data models of their own (not service models) and the mapping code
reads from that.
• Topology information: the service mapping might need to know connected devices, like which PE the
CE is connected to.
• Resources like VLAN IDs, IP addresses: these might not be given as input parameters. This can be
modeled separately in NSO or fetched from an external system.
It is important to design the service model with the above examples in mind: what is input, and what is
available from other sources? This example illustrates how to define QoS policies "on the side": a reference
to an existing QoS policy is passed as input. This is a much better principle than giving all QoS parameters
to every service instance. Note well that if you modify the QoS definitions that services refer to, existing
services will not change. To have a service read the changed policies, you need to perform a re-deploy on
the service.
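For example (a sketch; the service path vpn l3vpn volvo is an assumption borrowed from the L3VPN example later in this chapter, and dry-run support for re-deploy depends on the NSO version):

```
admin@ncs# vpn l3vpn volvo re-deploy dry-run
admin@ncs# vpn l3vpn volvo re-deploy
```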
This example also uses a list that maps every CE to a PE. This list needs to be populated before any service
is created. The service model only has the CE as an input parameter, and the service mapping code performs
a lookup in this list to get the PE. If the underlying topology changes, a service re-deploy will adapt the
service to the changed CE-PE links. See more on topology below.
NSO has a package to manage resources such as VLAN IDs and IP addresses as pools within NSO. In this
way the resources are managed within the transaction. The mapping code could also reach out externally to
get resources; the Reactive FASTMAP pattern is recommended for this.
Topology
Using topology information in the instantiation of an NSO service is a common approach, but also an area
with many misconceptions. Just as a service in NSO takes a black-box view of the configuration needed
for that service in the network, NSO treats topologies the same way. It is of course common to reference
topology information in the service, but it is highly desirable to have a decoupled and self-sufficient
service that only uses the part of the topology that is relevant to that specific service.
Other parts of the topology could either be handled by other services or left for the network state to sort
out; they do not necessarily relate to configuring the network. A routing protocol will, for example, handle
the IP path through the network.
It is highly desirable not to introduce unneeded dependencies on network topologies in your service.
To illustrate this, let's look at a Layer 3 MPLS VPN service. A logical overview of an MPLS VPN with
three endpoints could look something like this: CE routers connecting to PE routers, which are connected
to an MPLS core network containing a number of P routers.
In the service model you only want to configure the CE devices to use as endpoints. In this case topology
information could be used to sort out which PE router each CE router is connected to. But what type of
topology do you need? Let's look at a more detailed picture of what the L1 and L2 topology could look like
for one side of the picture above.
Figure 145. L1-L2 Topology
In pretty much all networks there is an access network between the CE and PE routers. In the picture above
the CE routers are connected to local Ethernet switches that are part of a local Ethernet access network,
connected through optical equipment. The local Ethernet access network is connected to a regional
Ethernet access network, which in turn connects to the PE router. The physical connections between
the devices in this picture have most likely been simplified; in the real world, redundant cabling would be
used. This is of course only one example of what an access network could look like, and it is very
likely that a service provider has several access technologies, for example Ethernet, ATM, or DSL-based
access networks.
Depending on how you design the L3VPN service, the physical cabling or the exact traffic path taken in
the Layer 2 Ethernet access network might not be that interesting, just as we make no assumptions about
how traffic is transported over the MPLS core network. In both cases we trust the underlying protocols
handling state in the network: spanning tree in the Ethernet access network, and routing protocols such
as BGP in the MPLS cloud. In this case it could instead make more sense to have a separate NSO service
for the access network, both so it can be reused by, for example, L3VPNs and L2VPNs, and to avoid
tightly coupling the L3VPN service to the access network, since the access technology can differ
(Ethernet, ATM, etc.).
Looking at the topology again from the L3VPN service perspective, if the service assumes that the access
network is already provisioned or taken care of by another service, it could look like this.
Figure 146. Black-box topology
The information needed to sort out which PE router a CE router is connected to, as well as to configure
both CE and PE routers, is:
• The interface on the CE router that is connected to the PE router, and the IP address of that interface.
• The interface on the PE router that is connected to the CE router, and the IP address of that interface.
Step 1 Identify the resulting device configurations for a deployed service instance.
Step 2 Identify which parameters from the device configurations are common and should be put in the service model.
Step 3 Ensure that the scope of the service and the structure of the model works with the NSO architecture and service
mapping concepts. For example, avoid unnecessary complexities in the code to work with the service parameters.
Step 4 Ensure that the model is structured in a way so that integration with other systems north of NSO works well. For
example, ensure that the parameters in the service model map to the needed parameters from an ordering system.
Deploying an MPLS VPN in the network results in the following basic CE and PE configurations. The
snippets below only include the Cisco IOS and Cisco IOS XR configurations; in a real process, all
applicable device vendor configurations should be analyzed.
interface GigabitEthernet0/1.77
description Link to PE / pe0 - GigabitEthernet0/0/0/3
encapsulation dot1Q 77
ip address 192.168.1.5 255.255.255.252
service-policy output volvo
!
policy-map volvo
class class-default
shape average 6000000
!
!
interface GigabitEthernet0/11
description volvo local network
ip address 10.7.7.1 255.255.255.0
exit
router bgp 65101
neighbor 192.168.1.6 remote-as 100
neighbor 192.168.1.6 activate
network 10.7.7.0
!
vrf volvo
address-family ipv4 unicast
import route-target
65101:1
exit
export route-target
65101:1
exit
exit
exit
policy-map volvo-ce1
class class-default
shape average 6000000 bps
!
end-policy-map
!
interface GigabitEthernet 0/0/0/3.77
description Link to CE / ce1 - GigabitEthernet0/1
ipv4 address 192.168.1.6 255.255.255.252
service-policy output volvo-ce1
vrf volvo
encapsulation dot1q 77
exit
router bgp 100
vrf volvo
rd 65101:1
address-family ipv4 unicast
exit
neighbor 192.168.1.5
remote-as 65101
address-family ipv4 unicast
as-override
exit
exit
exit
exit
The device configuration parameters that need to be uniquely configured for each VPN are, in the snippets
above, the VPN name (volvo), the VLAN ID (77), the link and LAN interfaces and IP addresses, the BGP
AS numbers, the route distinguisher and route targets (65101:1), and the bandwidth (shape average).
When configuring a new MPLS L3VPN in the network, we have to configure all CE routers that should
be interconnected by the VPN, as well as the PE routers they connect to.
However, when creating a new L3VPN service instance in NSO, it would be ideal if only the endpoints (CE
routers) were needed as parameters, to avoid requiring knowledge about PE routers in a northbound order
management system. This means a way to use topology information is needed to derive or compute which
PE router a CE router is connected to. This makes the input parameters for a new service instance very
simple. It also makes the entire service very flexible, since we can move CE and PE routers around
without modifying the service configuration.
container vpn {
list l3vpn {
tailf:info "Layer3 VPN";
uses ncs:service-data;
ncs:servicepoint l3vpn-servicepoint;
key name;
leaf name {
tailf:info "Unique service id";
type string;
}
leaf as-number {
tailf:info "MPLS VPN AS number.";
mandatory true;
type uint32;
}
list endpoint {
key id;
leaf id {
tailf:info "Endpoint identifier";
type string;
}
leaf ce-device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf ce-interface {
mandatory true;
type string;
}
leaf ip-network {
tailf:info "private IP network";
mandatory true;
type inet:ip-prefix;
}
leaf bandwidth {
tailf:info "Bandwidth in bps";
mandatory true;
type uint32;
}
}
}
}
The snippet above contains the l3vpn service model. The structure of the model is very simple: every
VPN has a name, an as-number, and a list of all the endpoints in the VPN. Each endpoint has:
• A unique ID.
• A reference to a device (a CE router in our case).
• A pointer to the LAN-local interface on the CE router. This is kept as a string, since we want it to
work in a multi-vendor environment.
• The private IP network on the LAN.
• The bandwidth of the VPN connection.
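With this model, creating a service instance could look like the following CLI session (a sketch; the instance name volvo, the endpoint name main-office, and the parameter values are assumptions chosen to be consistent with the device configurations shown earlier):

```
admin@ncs(config)# vpn l3vpn volvo as-number 65101
admin@ncs(config-l3vpn-volvo)# endpoint main-office ce-device ce1 \
  ce-interface GigabitEthernet0/11 ip-network 10.7.7.0/24 bandwidth 6000000
admin@ncs(config-endpoint-main-office)# commit
```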
To be able to derive the CE to PE connections we use a very simple topology model. Notice that this
YANG snippet does not contain any servicepoint, which means that this is not a service model but rather
just a YANG schema letting us store information in CDB.
container topology {
list connection {
key name;
leaf name {
type string;
}
container endpoint-1 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
container endpoint-2 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
leaf link-vlan {
type uint32;
}
}
}
grouping connection-grouping {
leaf device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
type string;
}
leaf ip-address {
type tailf:ipv4-address-and-prefix-length;
}
}
The model basically contains a list of connections, where each connection points out the device, interface,
and IP address at each end of the connection.
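Populating the topology could then look like this (a sketch; the connection name and the device and interface names are assumptions consistent with the CE and PE configurations shown earlier):

```
admin@ncs(config)# topology connection c1 \
  endpoint-1 device ce1 interface GigabitEthernet0/1 ip-address 192.168.1.5/30 \
  endpoint-2 device pe0 interface GigabitEthernet0/0/0/3 ip-address 192.168.1.6/30 \
  link-vlan 77
admin@ncs(config-connection-c1)# commit
```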
The Java logic lets you set a list of parameters that can be consumed by the configuration templates. One
huge benefit of this approach is that all the parameters set in the Java code are completely vendor-agnostic.
When writing the code, there is no need to know which kinds of devices or vendors exist in the network,
which creates an abstraction over vendor-specific configuration. It also means that to create the
configuration template, there is no need to understand the service logic in the Java code. The configuration
template can instead be created and maintained by subject matter experts: the network engineers.
With this service mapping approach it makes sense to modularize the service mapping by creating
configuration templates at a per-feature level, creating an abstraction for a feature in the network. In this
example, this means we will create the following templates:
• CE router
• PE router
This is both to make services easier to create and maintain, and to produce components that are reusable
from different services. This can of course be made even more fine-grained, with separate templates for,
for example, BGP or interface configuration if needed.
Since the configuration templates are decoupled from the service logic, it is also possible to create and add
additional templates in a running NSO system. For example, you can add a CE router from a new vendor to
the Layer 3 VPN service by only creating a new configuration template, using the set of parameters from the
service logic, without changing anything in the other logical layers.
This section goes through the relevant parts of the Java code outlined by the pseudo code above. The code
starts by defining the configuration templates and reading the list of endpoints configured in the service,
as well as the topology. The Navu API is used for navigating the data models.
Template peTemplate = new Template(context, "l3vpn-pe");
The next step is iterating over the VPN endpoints configured in the service, finding the connected PE
router using small helper methods that navigate the configured topology.
for(NavuContainer endpoint : endpoints.elements()) {
try {
String ceName = endpoint.leaf("ce-device").valueAsString();
// Get the PE connection for this endpoint router
NavuContainer conn =
getConnection(topology,
endpoint.leaf("ce-device").valueAsString());
NavuContainer peEndpoint = getConnectedEndpoint(
conn,ceName);
NavuContainer ceEndpoint = getMyEndpoint(
conn,ceName);
The parameter dictionary is created from the TemplateVariables class and is populated with appropriate
parameters.
TemplateVariables vpnVar = new TemplateVariables();
vpnVar.putQuoted("PE",peEndpoint.leaf("device").valueAsString());
vpnVar.putQuoted("CE",endpoint.leaf("ce-device").valueAsString());
vpnVar.putQuoted("VLAN_ID", vlan.valueAsString());
vpnVar.putQuoted("LINK_PE_ADR",
getIPAddress(peEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_CE_ADR",
getIPAddress(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_MASK",
getNetMask(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_PREFIX",
getIPPrefix(ceEndpoint.leaf("ip-address").valueAsString()));
The last step after all parameters have been set is applying the templates for the CE and PE routers for this
VPN endpoint.
peTemplate.apply(service, vpnVar);
ceTemplate.apply(service, vpnVar);
Configuration Templates
The configuration templates are XML templates based on the structure of the device YANG models. There
is a very easy way to create the configuration templates for the service mapping if NSO is connected to a
device with the appropriate configuration on it: display the device configuration in XML format and use it
as a starting point for the template.
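One way to do this (a sketch; the exact CLI syntax may vary between NSO versions) is to display the relevant part of the device configuration as XML:

```
admin@ncs# show running-config devices device ce1 config ios:router bgp | display xml
```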
The commands in NSO give the following output. To make the example simpler, only the BGP part of the
configuration is used:
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce1</name>
<config>
<router xmlns="urn:ios">
<bgp>
<as-no>65101</as-no>
<neighbor>
<id>192.168.1.6</id>
<remote-as>100</remote-as>
<activate/>
</neighbor>
<network>
<number>10.7.7.0</number>
</network>
</bgp>
</router>
</config>
</device>
</devices>
</config>
The final configuration template with the substituted parameters is shown below. If a parameter starts
with a $-sign, its value is taken from the Java parameter dictionary; otherwise it is a direct XPath
reference to a value in the service instance.
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device tags="nocreate">
<name>{$CE}</name>
<config>
<router xmlns="urn:ios" tags="merge">
<bgp>
<as-no>{/as-number}</as-no>
<neighbor>
<id>{$LINK_PE_ADR}</id>
<remote-as>100</remote-as>
<activate/>
</neighbor>
<network>
<number>{$LOCAL_CE_NET}</number>
</network>
</bgp>
</router>
</config>
</device>
</devices>
</config-template>
FASTMAP Description
FASTMAP covers the complete service life cycle: creating, changing, and deleting the service. The
solution requires a minimum amount of code for mapping from a service model to a device model.
FASTMAP is based on generating changes from an initial create. When the service instance is created,
the reverse of the resulting device configuration is stored together with the service instance. If an NSO
user later changes the service instance, NSO first applies (in a transaction) the reverse diff of the service,
effectively undoing the previous results of the service creation code. Then it runs the logic to create the
service again, and finally computes a diff against the current configuration. This diff is then sent to the devices.
Note This means that it is very important that the service create code produces the same device changes for
a given set of input parameters every time it is executed. See the section called "Persistent FASTMAP
Properties" for techniques to achieve this.
If the service instance is deleted, NSO applies the reverse diff of the service, effectively removing all
configuration changes the service did from the devices.
Assume we have a service model that defines a service with attributes X, Y, and Z. The mapping logic
calculates that attributes A, B, and C shall be created on the devices. When the service is instantiated, the
inverse of the corresponding device attributes A, B, and C are stored with the service instance in the NSO
data-store CDB. This inverse answers the question: what should be done to the network to bring it back to
the state before the service was instantiated.
Now let us see what happens if one service attribute is changed. In the scenario below the service attribute
Z is changed. NSO will execute this as if the service was created from scratch. The resulting device
configurations are then compared with the actual configuration and the minimum diff is sent to the devices.
Note that this is managed automatically, there is no code to handle "change Z".
When a user deletes a service instance, NSO picks up the stored reverse device configuration and applies
it, removing from the devices everything the service created.
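The FASTMAP cycle described above can be illustrated with a toy, self-contained Java sketch. This is not NSO code: the class name, the map-based "device configuration", and the create mapping are all invented for illustration. The point is the mechanism: the service stores a reverse diff at create time, undoes it on change or delete, and re-runs the single create mapping.

```java
import java.util.*;

/** Toy illustration of the FASTMAP idea (not NSO code). */
public class FastMapSketch {
    // current "device configuration": attribute -> value
    static Map<String, String> device = new HashMap<>();
    // per-service reverse diff: attribute -> previous value (null = was absent)
    static Map<String, Map<String, String>> reverse = new HashMap<>();

    // the single create mapping: service input -> device attributes
    static Map<String, String> create(String vlanId) {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("vlan-id", vlanId);
        cfg.put("iface-desc", "Link for vlan " + vlanId);
        return cfg;
    }

    static void deploy(String svc, String vlanId) {
        undo(svc);                               // apply reverse diff, if any
        Map<String, String> rev = new HashMap<>();
        create(vlanId).forEach((k, v) -> {       // re-run create, record reverse
            rev.put(k, device.get(k));
            device.put(k, v);
        });
        reverse.put(svc, rev);
    }

    static void delete(String svc) {
        undo(svc);                               // reverse diff removes everything
    }

    private static void undo(String svc) {
        Map<String, String> rev = reverse.remove(svc);
        if (rev != null)
            rev.forEach((k, v) -> {
                if (v == null) device.remove(k); else device.put(k, v);
            });
    }
}
```

A change to the service (a new VLAN ID) is handled with no dedicated "change" code: deploy undoes the old result and re-creates, exactly as the text describes.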
Reactive FASTMAP
A FASTMAP service is not allowed to perform explicit function calls that have side effects. The only
action a service is allowed to take is to modify the configuration of the current transaction. For example,
a service may not invoke an RPC to allocate a resource or start a virtual machine. All such actions must
take place before the service is created and be provided as input parameters to the service. The reason for
this restriction is that the FASTMAP code may be executed as part of a commit dry-run, or the commit
may fail, in which case the side effects would have to be undone.
Reactive FASTMAP is a design pattern that provides a side-effect free solution to invoking RPCs from a
service. In the services discussed previously in this chapter, the service was modeled in such a way that all
required parameters were given to the service instance. The mapping logic code could immediately do its
work.
Sometimes this is not possible. Two examples where Reactive FASTMAP is the solution are:
1 A resource is allocated from an external system, such as an IP address or a VLAN ID. It is not possible
to do this allocation from within the normal FASTMAP create() code, since there is no way to
deallocate the resource on commit abort or failure, or when the service is deleted. Furthermore, the
create() code runs within the transaction lock, and the time spent in create() should be as short
as possible.
2 The service requires the start of one or more virtual machines (Virtual Network Functions). The VMs
do not yet exist, and the create() code needs to trigger something that starts the VMs and then later,
when the VMs are operational, configure them.
The basic idea is to let the create() code not only write data in the /ncs:devices tree, but also
write data to some auxiliary data structure. A CDB subscriber subscribes to that auxiliary data structure
and performs the actual side effect, for example a resource allocation. The response is written to CDB as
operational data, where the service can read it during subsequent invocations.
The pseudo code for a Reactive FASTMAP service that allocates an id from an id pool may look like this:
create(serv) {
    /* request resource allocation */
    ResourceAllocator.requestId(serv, idPool, allocId);

    /* return if the allocation response is not yet ready */
    if (!ResourceAllocator.responseReady(idPool, allocId))
        return;

    /* read the allocated id and use it in the configuration */
    id = ResourceAllocator.idRead(idPool, allocId);
    ...
}
The actual deployment of a Reactive FASTMAP service will involve multiple executions of the
create() code.
1 In the first run the code requests an ID by writing an allocation request to the resource manager tree.
It then checks whether the response is ready, which it will not be, and returns.
2 The resource manager subscribes to changes in the resource manager tree, looking for allocation
requests being created and deleted. In this case a new allocation request is created. The resource manager
allocates the resource, writes the response to a CDB operational leaf, and finally triggers the
service's reactive-re-deploy action.
3 The create() code is run a second time. The code creates the allocation request, just as it did the
first time, then checks whether the response is ready. This time it is, and the code can proceed to
read the allocated ID and use it in its configuration.
Let us make a small digression on the reactive-re-deploy action mentioned above. Every service
exposes both a re-deploy and a reactive-re-deploy action. The actions are similar in that
both activate the FASTMAP algorithm and invoke the service create() logic. However, while
re-deploy is user-facing and has, for example, dry-run functionality, reactive-re-deploy is
specifically tailored for the Reactive FASTMAP pattern. It takes no arguments and has no extra
functionality; instead it performs the re-deploy as the same user and with the same commit parameters
as the original service commit. Also, reactive-re-deploy makes a "shallow" re-deploy in the
sense that underlying stacked services are not re-deployed. This "shallow" behavior is important when
stacked services are used for performance optimization. In the rest of this chapter, when service
re-deploy is mentioned, it is implied that this is performed using the reactive-re-deploy action.
In the ResourceAllocator example above, when the service is deleted, we want the allocated ID to be
returned to the resource manager and become available for others to allocate. This is achieved as follows:
1 The service is deleted, with the consequence that all configuration the service created during its
deployment is removed; in particular, the ID allocation request is removed.
2 Since the resource manager subscribes to changes in the resource manager tree, it is notified
that an allocation request has been deleted. It can then release the resource allocated for this specific
request.
Other side effects can be handled in similar ways, for example starting virtual machines or updating
external servers. The resource-manager-example and id-allocator-example packages can be found in
examples.ncs//service-provider/virtual-mpls-vpn
Note All packages and NEDs used in the examples are example packages/NEDs only; they are in no way
production-ready, nor are they supported. There are official Function Packs (collections of packages)
and NEDs that resemble the packages used in the examples, but they are not the same. Never consider
packages and NEDs found in the example collection to be officially supported packages.
1 The YANG model, which is very similar to the VLAN package previously described in this chapter. The
difference is that two parameters are missing: the unit and the vlan-id.
Another difference is that a parallel list structure to the services is maintained. The list entries contain
help data, and eventually the operational data holding the missing parameters ends up there.
2 The create() method. This code drives the Reactive FASTMAP loop forward. The YANG model
for the service has this structure:
module: alloc-vlan-service
+--rw alloc-vlan* [name]
+--rw name string
+--rw iface string
+--rw description string
+--rw arp* enumeration
When the create() method is called, the code creates an allocation request by writing configuration
data into the buddy list entry. It then checks its "buddy" list entry to see whether the unit and the vlan-id
are there. If they are, the FASTMAP code starts to write into the /ncs:devices tree; if not, it returns.
3 A CDB subscriber that subscribes to the /alloc-vlan-data tree, where the normal FASTMAP
create() code writes. The CDB subscriber picks up, in this example, the "CREATE" of
/alloc-vlan-data[name="KEY"]/request-allocate-unit, allocates a unit number,
writes that number as operational data in the /alloc-vlan-data tree, and finally redeploys the
service, triggering the call of create() again. This loop of create(), CDB subscriber, and
redeploy continues until create() decides that it has all required data to enter the normal
FASTMAP phase, where the code writes to the /ncs:devices tree.
There are many variations on this same pattern that can be applied. The common theme is that the
create() code relies on auxiliary operational data to be filled in. This data contains the missing
parameters.
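The create/subscriber/redeploy loop can be simulated with a purely illustrative, self-contained Java sketch. This is not NSO code: the class name and the maps (standing in for configuration data, the allocation request tree, and CDB operational data) are invented for illustration.

```java
import java.util.*;

/** Toy simulation of the Reactive FASTMAP loop (not NSO code). */
public class ReactiveLoopSketch {
    static Set<String> requests = new HashSet<>();        // "allocation request" tree
    static Map<String, Integer> operData = new HashMap<>(); // auxiliary CDB oper data
    static List<String> log = new ArrayList<>();
    static int nextId = 100;

    /** The idempotent, side-effect-free create() logic. Returns true when
     *  the normal FASTMAP phase (device configuration) was reached. */
    static boolean create(String svc) {
        requests.add(svc);               // (re)write the allocation request
        Integer id = operData.get(svc);  // check whether the response is ready
        if (id == null) {
            log.add(svc + ": waiting");
            return false;                // not ready yet: just return
        }
        log.add(svc + ": configured vlan " + id); // would write /ncs:devices here
        return true;
    }

    /** The "CDB subscriber": fulfills new requests, then re-deploys. */
    static void subscriberRun() {
        for (String svc : new ArrayList<>(requests))
            if (!operData.containsKey(svc)) {
                operData.put(svc, nextId++); // perform the side effect
                create(svc);                 // simulate reactive-re-deploy
            }
    }
}
```

The first create() run only writes the request; the subscriber performs the allocation and re-deploys, after which create() reaches the normal FASTMAP phase.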
import ietf-yang-types {
prefix yang;
}
import tailf-common {
prefix tailf;
}
include tailf-ncs-common;
include tailf-ncs-services;
include tailf-ncs-devices;
include tailf-ncs-log;
description
"This submodule contains a collection of YANG definitions for
configuring plans in NCS.";
revision 2019-11-28 {
description
"Released as part of NCS-5.3.";
}
revision 2019-04-09 {
description
"Released as part of NCS-5.1.";
}
revision 2018-11-12 {
description
"Released as part of NCS-4.7.2.";
}
revision 2018-06-21 {
description
"Released as part of NCS-4.7.";
}
revision 2017-03-16 {
description
"Released as part of NCS-4.4.";
}
revision 2016-11-24 {
description
"Released as part of NCS-4.3.";
}
revision 2016-05-26 {
description
"Initial revision";
}
typedef plan-xpath {
type yang:xpath1.0;
}
/*
* Plan Component Types
*/
typedef plan-component-type-t {
description
"This is a base type from which all service specific plan components
can be derived.";
type identityref {
base plan-component-type;
}
}
identity plan-component-type {
description
"A service plan consists of several different plan components.
Each plan component moves forward in the plan as the service
comes closer to fulfillment.";
}
identity self {
description
"A service should when it constructs it's plan, include a column
of type 'self', this column can be used by upper layer software to
determine which state the service is in as a whole.";
base plan-component-type;
}
/*
* Plan States
*/
typedef plan-state-name-t {
description
"This is a base type from which all plan component specific states can
be derived.";
type identityref {
base plan-state;
}
}
typedef plan-state-operation-t {
type enumeration {
enum created {
tailf:code-name "plan_state_created";
}
enum modified {
tailf:code-name "plan_state_modified";
}
enum deleted {
tailf:code-name "plan_state_deleted";
}
}
}
typedef plan-state-status-t {
type enumeration {
enum not-reached;
enum reached;
enum failed {
tailf:code-name "plan_failed";
}
}
}
typedef plan-state-action-status-t {
type enumeration {
enum not-reached;
enum create-reached;
enum delete-reached;
enum failed {
tailf:code-name "plan_action_failed";
}
enum create-init;
enum delete-init;
}
}
identity plan-state {
description
"This is the base identity for plan states. A plan component in a
plan goes through certain states, some, such as 'init' and
'ready', are specified here, and the application augments these
with app specific states.";
}
identity init {
description
"The init state in all plan state lists, primarily used as a
place holder with a time stamp.";
base plan-state;
}
identity ready {
description
"The final state in a 'state list' in the plan";
base plan-state;
}
/*
* Plan Notifications
*/
augment "/ncs:services" {
container plan-notifications {
description
"Configuration to send plan-state-change notifications for
plan state transitions. A notification can be configured to
be sent when a specified service's plan component enters a
given state.";
list subscription {
key name;
leaf name {
type string;
description
"A unique identifier for this subscription.";
}
leaf service-type {
type tailf:node-instance-identifier;
tailf:cli-completion-actionpoint "servicepoints-with-plan";
description
"The type of service. If not set, all service types are
subscribed.";
}
leaf component-type {
type plan-component-type-t;
description
"The type of component in the service's plan. If not set,
all component types of the specified service types are
subscribed.";
}
leaf state {
type plan-state-name-t;
description
"The name of the state for the component in the service's plan.
If not set, all states of the specified service types and
plan components are subscribed.";
}
leaf operation {
type plan-state-operation-t;
description
"The type of operation performed on the state(s) in the
component(s). If not set, all operations are subscribed.";
}
}
}
}
notification plan-state-change {
description
"This notification indicates that the specified service's
plan component has entered the given state.";
}
/*
* Groupings
*/
grouping plan-data {
description
"This grouping contains the plan data that can show the
progress of a Reactive FASTMAP service. This grouping is optional
and should only be used by services i.e lists or presence containers
that uses the ncs:servicepoint callback";
container plan {
config false;
tailf:cdb-oper {
tailf:persistent true;
}
uses plan-components;
container commit-queue {
leaf time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
}
uses plan-components;
}
}
}
grouping plan-components {
description
"This grouping contains a list of components that reflects the
different steps or stages that a Reactive FASTMAP service comprises.";
list component {
ordered-by user;
key name;
description
"A component has a type and a list of states.
It is required that the first plan component is of type ncs:self.
It is also required that the first state of a component is ncs:init
and the last state is ncs:ready.
A service can in addition to the 'self' component have any number of
components. These additional components will have types that are
defined by user specified YANG identities.";
uses plan-component-body {
refine "state/status" {
mandatory true;
}
}
}
}
grouping plan-component-body {
leaf name {
type string;
}
leaf type {
description
"The plan component type is defined by an YANG identity.
It is used to identify the characteristics of a certain component.
Therefore, if two components in the same service are of the same
type they should be identical with respect to number, type and order
of their contained states.";
type plan-component-type-t;
mandatory true;
}
list state {
description
"A plan state represents a certain step or stage that a service needs
to execute and/or reach. It is identified as an YANG identity.
There are two predefined states ncs:init and ncs:ready which is the
first respectively last state of a plan component.";
ordered-by user;
key name;
leaf name {
tailf:alt-name state;
type plan-state-name-t;
}
leaf status {
description
"A plan state always has one of three statuses: 'not-reached' when
the state has not been executed, 'reached' when the state has been
executed, and 'failed' if the state execution failed.";
type plan-state-status-t;
}
leaf when {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
when '../status != "not-reached"';
description
"The time this state was successfully reached or failed.";
}
leaf service-reference {
description
"If this component reflects the state of some other data, e.g.,
an instantiated RFS, an instantiated CFS or something else, this
optional field can be set to point to that instance.";
type instance-identifier {
require-instance false;
}
tailf:display-column-name "ref";
}
}
}
/*
* Nano-service related definitions
*/
grouping post-action-input-params {
description
"A Nano service post-action can choose to implement this grouping
as its input parameters. If so, the action will be invoked with:
* opaque-props - The list of name-value pairs in the service opaque.
* component-props - The list of component properties for
the invoking plan component state.";
}
grouping nano-plan-data {
description
"This grouping is required for nano services. It replaces the
plan-data grouping. This grouping contains an executable plan
that has additional state data which is internally used to
control service execution.";
uses nano-plan;
}
grouping nano-plan {
container plan {
config false;
tailf:cdb-oper {
tailf:persistent true;
}
uses nano-plan-components {
augment "component" {
tailf:action force-back-track {
tailf:info "Force a component to back-track";
description
"Forces an existing component to start back-tracking";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
leaf no-networking {
type empty;
description
"Do not send any data to the devices. Even if the transaction
manipulates data below /devices/device/config, nothing will
be sent to the managed devices. This is a way to manipulate
CDB in NCS without generating any southbound traffic.";
}
leaf back-tracking-goal {
type leafref {
path "../../state/name";
}
description
"Target state for back-track.";
}
}
output {
leaf result {
type boolean;
}
leaf info {
type string;
}
}
}
}
}
container commit-queue {
presence "The service is being committed through the commit queue.";
list queue-item {
key id;
leaf id {
type uint64;
description
"If the queue item in the commit queue refers to this service
this is the queue number.";
}
}
}
leaf failed {
type empty;
}
container error-info {
presence "Additional info if plan has failed";
leaf message {
type string;
description
"An explanatory message for the failing plan.";
}
leaf log-entry {
type instance-identifier {
require-instance false;
}
description
"Reference to a service log entry with additional information.";
}
}
leaf deleting {
tailf:hidden fastmap-private;
type empty;
}
leaf service-location {
tailf:hidden fastmap-private;
type instance-identifier {
require-instance false;
}
}
}
}
grouping nano-plan-components {
description
"This grouping contains a list of components that reflects the
different steps or stages that a nano service comprises.";
list component {
ordered-by user;
key "type name";
description
"A component has a type and a list of states. It is required
that the first plan component is of type ncs:self. It is
also required that the first state of a component is ncs:init
and the last state is ncs:ready. A service can in addition
to the 'self' component have any number of components. These
additional components will have types that are defined by
user specified YANG identities.";
uses plan-component-body {
augment "state" {
leaf create-cb {
tailf:hidden full;
description
"Indicates whether a create callback should be registered
for this state.";
type boolean;
}
leaf delete-cb {
tailf:hidden full;
description
"Indicates whether a delete callback should be registered
for this state.";
type boolean;
}
container pre-conditions {
tailf:display-groups "summary";
description
"Pre-conditions for a state control whether or not a
state should be executed. There are separate conditions
for the 'create' and 'delete' case. In the 'create' case the
create conditions are checked and, if possible, executed with
the ultimate goal of the state having status
'reached'. In the 'delete' case the delete conditions
control whether the state changes should be deleted,
with the ultimate goal of the state having status
'not-reached'.";
}
container post-actions {
tailf:display-groups "summary";
description
"Post-actions are called after successful execution of a
state. These are optional, and there are separate
actions that can be set for the 'create' and 'delete'
cases respectively.";
leaf create-action-node {
description
"This leaf identifies the node on which a specified
action resides. This action is called after this state
has got a 'reached' status.";
type yang:xpath1.0;
}
leaf delete-action-node {
description
"This leaf identifies the node on which a specified
action resides. This action is called after this state
has got a 'not-reached' status.";
type yang:xpath1.0;
}
leaf delete-action-name {
description
"The name of the action.";
type string;
}
leaf delete-action-result-expr {
description
"An action responds with a structured result. A certain
value could indicate an error or a successful result, e.g.,
'result true'.";
type yang:xpath1.0;
}
}
leaf post-action-status {
when '../post-actions';
type plan-state-action-status-t;
description
"This leaf is initially set to 'not-reached'.";
}
container modified {
tailf:display-groups "summary";
config false;
tailf:callpoint ncs {
tailf:internal;
}
description
"Devices and other services this service has modified directly or
indirectly (through another service).";
tailf:info
"Devices and other services this service has modified directly or
indirectly (through another service).";
}
container directly-modified {
tailf:display-groups "summary";
config false;
tailf:callpoint ncs {
tailf:internal;
}
description
"Devices and other services this service has explicitly
modified.";
tailf:info
"Devices and other services this service has explicitly
modified.";
leaf-list devices {
tailf:info
"Devices this service has explicitly modified.";
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf-list services {
tailf:info
"Services this service has explicitly modified.";
type instance-identifier {
require-instance false;
}
}
leaf-list lsa-services {
tailf:info
"Services residing on remote LSA nodes this service
has explicitly modified.";
type instance-identifier {
require-instance false;
}
}
}
uses service-get-modifications;
container private {
description
"NCS service related internal data stored here.";
tailf:hidden fastmap-private;
leaf diff-set {
description
"Internal node used by the NCS service manager to remember
the reverse diff for a service instance. This is the
data that is used by FASTMAP.";
tailf:hidden full;
type binary;
}
leaf forward-diff-set {
description
"Internal node used by the NCS service manager to remember
the forward diff for a service instance. This data is
used to produce the proper 'get-modifications' output.";
tailf:hidden full;
type binary;
}
leaf-list device-list {
description
"A list of managed devices this state has manipulated.";
tailf:hidden full;
type string;
}
leaf-list ned-id-list {
description
"A list of NED identities this service instance has
manipulated.";
tailf:hidden full;
type string;
}
leaf-list service-list {
description
"A list of services this state has manipulated.";
tailf:hidden full;
type instance-identifier {
require-instance false;
}
}
leaf-list lsa-service-list {
description
"A list of LSA services this service instance has manipulated.";
tailf:hidden full;
type instance-identifier {
require-instance false;
}
}
}
}
}
container private {
description
"NCS service related internal data stored here.";
tailf:hidden fastmap-private;
container property-list {
description
"FASTMAP service component instance data used by the
service implementation.";
list property {
key name;
leaf name {
type string;
}
leaf value {
type string;
}
}
}
}
leaf back-track {
type boolean;
default false;
}
leaf back-track-goal {
tailf:alt-name goal;
type plan-state-name-t;
}
}
}
grouping nano-plan-history {
container plan-history {
config false;
tailf:cdb-oper {
tailf:persistent true;
}
list plan {
key time;
description
"Every time the plan changes its structure, i.e., a
plan component is added or deleted, or a state is added or
deleted in a plan component, a copy of the old plan is stored
in the plan history list.";
leaf time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
}
uses nano-plan-components;
}
}
}
/*
* Internal structures
*/
container side-effect-queue {
tailf:hidden debug;
config false;
tailf:cdb-oper {
tailf:persistent true;
}
list side-effect {
key id;
leaf id {
type uint64;
}
}
tailf:action invoke {
tailf:info "Invoke queued side-effects asynchronously";
description
"Invokes all not already executing/executed side-effects in the
side effect queue.";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
}
output {
leaf num-invoked {
type uint32;
}
}
}
}
container zombies {
config false;
tailf:cdb-oper {
tailf:persistent true;
}
description
"Container for deleted Nano Services that still perform staged deletes.";
list service {
key service-path;
leaf service-path {
description
"The path to where the service resided that has been deleted
and become a zombie.";
type string;
}
leaf delete-path {
description
"The path to the node nearest to the top that was deleted and resulted
in this service becoming a zombie.";
type string;
}
leaf diffset {
tailf:hidden full;
type binary;
}
leaf latest-u-info {
tailf:hidden full;
type binary;
description
"The latest transaction's user info is stored here; it is
used in reactive-re-deploy actions, which must be performed by
a user with the same user info.";
}
container plan {
uses nano-plan-components {
augment "component" {
tailf:action force-back-track {
tailf:info "Force a component to back-track";
description
"Forces an existing component to start back-tracking";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
leaf no-networking {
type empty;
description
"Do not send any data to the devices.
Even if the transaction manipulates data below
/devices/device/config, nothing will
be sent to the managed devices. This is a way to manipulate
CDB in NCS without generating any southbound traffic.";
}
leaf back-tracking-goal {
type leafref {
path "../../state/name";
}
description
"Target state for back-track.";
}
}
output {
leaf result {
type boolean;
}
leaf info {
type string;
}
}
}
}
}
leaf failed {
tailf:code-name "failedx";
type empty;
}
container error-info {
presence "Additional info if plan has failed";
leaf message {
type string;
description
"An explanatory message for the failing plan.";
}
leaf log-entry {
type instance-identifier {
require-instance false;
}
description
"Reference to a service log entry with additional information.";
}
}
leaf deleting {
tailf:hidden fastmap-private;
type empty;
}
}
tailf:action re-deploy {
tailf:info "Revive the zombie and re-deploy it.";
description
"The nano service became a zombie since it was deleted but not
all delete pre-conditions were fulfilled. This action revives the
zombie service, re-deploys it, and stores it back as a zombie if
necessary. This will be performed as the user who requested the
action.";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
}
output {
uses ncs-commit-result;
}
}
tailf:action reactive-re-deploy {
tailf:info "Revive the zombie and reactive-re-deploy it.";
description
"The nano service became a zombie since it was deleted but not
all delete pre-conditions were fulfilled. This action revives the
zombie service, re-deploys it, and stores it back as a zombie if
necessary. This will be performed as the same user as the original
commit.";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
}
output {
uses ncs-commit-result;
}
}
tailf:action resurrect {
tailf:info "Load the zombie back as a service in its current state.";
description
"The zombie resurrection is used to stop the progress of a staged
nano service delete and restore current state as is.";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
container dry-run {
presence "";
leaf outformat {
type outformat3;
}
}
}
output {
leaf result {
type string;
}
choice outformat {
case case-xml {
uses dry-run-xml;
}
case case-cli {
uses dry-run-cli;
}
case case-native {
uses dry-run-native;
}
}
}
}
uses log-data;
}
}
/*
* Plan Extension Statements
*/
extension plan-outline {
argument id {
tailf:arg-type {
type tailf:identifier;
}
}
tailf:occurence "*";
tailf:use-in "module";
tailf:use-in "submodule";
tailf:substatement "description";
tailf:substatement "ncs:component-type" {
tailf:occurence "+";
}
description
"This statement is optionally used in a node that defines a
service to document its plan. It is required for a nano
service.";
}
extension component-type {
argument name {
tailf:arg-type {
type tailf:identifier-ref;
}
}
tailf:substatement "description";
tailf:substatement "ncs:state" {
tailf:occurence "*";
}
description
"This statement identifies the component type, which is a
reference to a YANG identity.";
}
extension state {
argument name {
tailf:arg-type {
type tailf:identifier-ref;
}
}
tailf:substatement "description";
tailf:substatement "ncs:create" {
tailf:occurence "?";
}
tailf:substatement "ncs:delete" {
tailf:occurence "?";
}
description
"This statement identifies the state, which is a reference to a
YANG identity.";
}
extension create {
tailf:substatement "description";
tailf:substatement "ncs:nano-callback" {
tailf:occurence "?";
}
tailf:substatement "ncs:pre-condition" {
tailf:occurence "?";
}
tailf:substatement "ncs:post-action-node" {
tailf:occurence "?";
}
description
"This statement defines nano service state characteristics for
entering this state.";
}
extension delete {
tailf:substatement "description";
tailf:substatement "ncs:nano-callback" {
tailf:occurence "?";
}
tailf:substatement "ncs:pre-condition" {
tailf:occurence "?";
}
tailf:substatement "ncs:post-action-node" {
tailf:occurence "?";
}
description
"This statement defines nano service state characteristics for
leaving this state.";
}
extension nano-callback {
description
"This statement indicates that a callback function (or a
template) is defined for this state and operation.";
}
extension post-action-node {
argument xpath {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
tailf:substatement "ncs:action-name" {
tailf:occurence "1";
}
tailf:substatement "ncs:result-expr" {
tailf:occurence "?";
}
description
"This statement defines an action side-effect to be executed
after the state has been successfully executed.";
}
extension action-name {
argument name {
tailf:arg-type {
type string;
}
}
tailf:substatement "description";
description
"The name of the action.";
}
extension result-expr {
argument xpath {
tailf:arg-type {
type yang:xpath1.0;
}
}
tailf:substatement "description";
description
"An action responds with a structured result. A certain value
can indicate an error or a successful result, e.g.,
'result true'.";
}
/*
* Behavior tree extensions for nano services
*/
extension service-behavior-tree {
argument servicepoint {
tailf:arg-type {
type tailf:identifier;
}
}
tailf:occurence "*";
tailf:use-in "module";
tailf:use-in "submodule";
tailf:substatement "description";
tailf:substatement "ncs:plan-outline-ref" {
tailf:occurence "1";
}
tailf:substatement "ncs:plan-location" {
tailf:occurence "?";
}
tailf:substatement "ncs:selector" {
tailf:occurence "*";
}
tailf:substatement "ncs:multiplier" {
tailf:occurence "*";
}
description
"This statement is used to define the behavior tree for a nano
service.
There are two types of control flow nodes, defined with the
'ncs:selector' and 'ncs:multiplier' statements.";
}
extension plan-outline-ref {
argument id {
tailf:arg-type {
type tailf:identifier-ref;
}
}
description
"The name of the plan outline that the behavior tree will use
to synthesize a service instance's plan.";
}
extension plan-location {
argument path {
tailf:arg-type {
type string;
}
}
description
"This optional statement specifies an alternative location for
the service instance's plan data.";
}
extension selector {
tailf:substatement "description";
tailf:substatement "ncs:pre-condition" {
tailf:occurence "?";
}
tailf:substatement "ncs:observe" {
tailf:occurence "?";
}
tailf:substatement "ncs:variable" {
tailf:occurence "*";
}
tailf:substatement "ncs:selector" {
tailf:occurence "*";
}
tailf:substatement "ncs:multiplier" {
tailf:occurence "*";
}
tailf:substatement "ncs:create-component" {
tailf:occurence "*";
}
description
"This control flow node synthesizes its children
that have their pre-conditions met.";
}
extension multiplier {
tailf:substatement "description";
tailf:substatement "ncs:pre-condition" {
tailf:occurence "?";
}
tailf:substatement "ncs:observe" {
tailf:occurence "?";
}
tailf:substatement "ncs:foreach" {
tailf:occurence "1";
}
description
"This control flow node synthesizes zero or more copies of
its children.";
}
extension foreach {
argument xpath {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
tailf:substatement "ncs:when" {
tailf:occurence "?";
}
tailf:substatement "ncs:variable" {
tailf:occurence "*";
}
tailf:substatement "ncs:selector" {
tailf:occurence "*";
}
tailf:substatement "ncs:multiplier" {
tailf:occurence "*";
}
tailf:substatement "ncs:create-component" {
tailf:occurence "*";
}
description
"This statement's argument is an XPath expression for the node set
that is the basis for a multiplier selection. For each node in
the resulting node set the children will be evaluated.";
}
extension when {
argument xpath {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
description
"This optional statement describes an XPath expression that is
used to further filter the selection of nodes from the
node set.";
}
/* Execution nodes */
extension create-component {
argument name {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
tailf:substatement "ncs:component-type-ref" {
tailf:occurence "1";
}
tailf:substatement "ncs:pre-condition" {
tailf:occurence "?";
}
tailf:substatement "ncs:observe" {
tailf:occurence "?";
}
description
"When this execution node is evaluated, it instantiates a component
in the service's plan.";
}
extension component-type-ref {
argument name {
tailf:arg-type {
type tailf:identifier-ref;
}
}
description
"This statement identifies the component type for the component.
It must refer to a component-type defined in the plan-outline
for the service.";
}
/* Common substatements */
extension pre-condition {
tailf:substatement "description";
tailf:substatement "ncs:monitor" {
tailf:occurence "1";
}
description
"This statement defines a pre-condition that must hold for
further evaluation/execution to proceed.";
}
extension observe {
tailf:substatement "description";
tailf:substatement "ncs:monitor" {
tailf:occurence "1";
}
description
"If a control flow node has been successfully evaluated, this
statement's 'monitor' will be installed as a kicker, which will
re-deploy the service if the monitor's trigger conditions are met.";
}
extension monitor {
argument node {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
tailf:substatement "ncs:trigger-expr" {
tailf:occurence "?";
}
description
"If a node that matches the value of this statement exists and the
'trigger' expression evaluates to true, this condition is
satisfied.";
}
extension trigger-expr {
argument xpath {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
description
"This optional statement is used to further filter nodes
in a given node set.";
}
extension variable {
argument name {
tailf:arg-type {
type string;
}
}
tailf:substatement "description";
tailf:substatement "ncs:value-expr" {
tailf:occurence "?";
}
description
"This statement defines an XPath variable with a name and a
value. The value is evaluated as an XPath expression.";
}
extension value-expr {
argument xpath {
tailf:arg-type {
type plan-xpath;
}
}
tailf:substatement "description";
description
"This statement defines an XPath expression that, when evaluated,
constitutes the value for a variable.";
}
The ncs:plan-data grouping is defined as operational data that is supposed to be added to the
Reactive FASTMAP service YANG with a uses ncs:plan-data statement.
A plan consists of one or many component entries. Each component has a name and a type. The type is
an identityref and the service must therefore define identities for the types of components it uses. There is
one predefined component type named self and a service with a plan is expected to have at least the self
component defined.
Each component consists of two or more state entries where the state name is an identityref. The service
must define identities for the states it wants to use. There are two predefined states init and ready and each
plan component is expected to have init as its first state and ready as its last.
A state has a status leaf which can take one of the values not-reached, reached or failed.
The purpose of the self component is to show the overall progress of the Reactive FASTMAP service
and the self component ready state should have status reached if and only if the service has completed
successfully. All other components and states are optional and should be used to show the progress in more
detail if necessary.
The plan should be defined and the statuses written inside the service create() method. Hence the
same FASTMAP logic applies to the plan as to any other configuration data. This implies that the plan
has to be defined completely at create() as if this was the first definition. If a service modification
or reactive-re-deploy leaves out a state or component that has been defined earlier, that state or
component will be removed.
When the status leaf in a component state changes value, NSO will log the time of the status change in the
when leaf. Furthermore, when there is a structural change of the plan, i.e., added or removed components or
states, NSO will log this in the plan-history list. The Reactive FASTMAP service need not, and should not,
attempt to do this logging inside the create method.
A plan also defines an empty leaf failed. NSO will set this leaf when there exist states in the plan with
status failed. As such, this is an aggregation that makes it easy to verify whether an RFM service is
progressing without problems.
In the Java API there exists a utility class to help write plan data in the service create method. This class is
called PlanComponent and has the following methods:
public class PlanComponent {
/**
* Creation of a plan component.
*/
public PlanComponent(NavuNode service, String name, String type)
throws NavuException;
/**
* This method supplies a state to the specific component.
* The initial status for this state can be ncs:reached or ncs:not-reached
* and is indicated by setting the reached boolean to true or false
* respectively
*
* @param stateName
* @param reached
* @return
* @throws NavuException
*/
public PlanComponent append(String stateName) throws NavuException;
/**
* Setting status to ncs:not-reached for a specific state in the
* plan component
*
* @param stateName
* @return
* @throws NavuException
*/
public PlanComponent setNotReached(String stateName) throws NavuException;
/**
* Setting status to ncs:reached for a specific state in the plan component
*
* @param stateName
* @return
* @throws NavuException
*/
public PlanComponent setReached(String stateName) throws NavuException;
/**
* Setting status to ncs:failed for a specific state in the plan component
*
* @param stateName
* @return
* @throws NavuException
*/
public PlanComponent setFailed(String stateName) throws NavuException;
}
The constructor for the PlanComponent takes the service NavuNode from the create() method together
with the component name and type. The type is either ncs:self or any other type defined as an identity in
the service YANG module. The PlanComponent instance has an append() method to add new states,
which are either ncs:init, ncs:ready, or any other state defined as an identity in the service YANG module.
The setNotReached(), setReached(), and setFailed() methods are used to set the current
status of a given state.
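The call pattern of this class can be illustrated with a toy stand-in that records states in an in-memory map instead of writing to CDB through NAVU. The class name ToyPlanComponent and the map-based storage are purely illustrative; the real PlanComponent takes the service NavuNode and persists the data:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for the NSO PlanComponent utility class: states are kept
// in an insertion-ordered map instead of being written to CDB via NAVU.
public class ToyPlanComponent {
    private final String name;
    private final String type;
    private final Map<String, String> states = new LinkedHashMap<>();

    public ToyPlanComponent(String name, String type) {
        this.name = name;
        this.type = type;
    }

    // Append a state with the initial status "not-reached".
    public ToyPlanComponent append(String stateName) {
        states.put(stateName, "not-reached");
        return this;
    }

    public ToyPlanComponent setReached(String stateName) {
        states.put(stateName, "reached");
        return this;
    }

    public ToyPlanComponent setNotReached(String stateName) {
        states.put(stateName, "not-reached");
        return this;
    }

    public ToyPlanComponent setFailed(String stateName) {
        states.put(stateName, "failed");
        return this;
    }

    public String status(String stateName) {
        return states.get(stateName);
    }

    public static void main(String[] args) {
        // Mirror the documented usage: first state init, last state ready.
        ToyPlanComponent self = new ToyPlanComponent("self", "ncs:self");
        self.append("ncs:init").append("ncs:ready");
        self.setReached("ncs:init");
        System.out.println(self.status("ncs:init"));   // reached
        System.out.println(self.status("ncs:ready"));  // not-reached
    }
}
```

Note how the fluent append()/set*() calls mirror the real API: states are declared in order (init first, ready last) and then flipped to reached as the service progresses.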
identity l3vpn {
base ncs:component-type;
}
identity pe-created {
base ncs:plan-state;
}
identity ce-vpe-topo-added {
base ncs:plan-state;
}
identity vpe-p0-topo-added {
base ncs:plan-state;
}
identity qos-configured {
base ncs:plan-state;
}
container vpn {
list l3vpn {
description "Layer3 VPN";
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:plan-data;
uses ncs:service-data;
ncs:servicepoint l3vpn-servicepoint;
....
In the service list definition the plan data is introduced using the uses ncs:plan-data statement.
In the service create() method we introduce a Java Properties instance where we temporarily store
data for the Reactive FASTMAP steps that are currently completed. We create a private method
writePlanData() that can write the plan with this Properties instance as input. Before we return from
the create() method we call the writePlanData() method. The following code snippets from the
class l3vpnRFS.java illustrate this design:
Initially we create a Properties instance called rfmProgress:
@ServiceCallback(servicePoint = "l3vpn-servicepoint",
callType = ServiceCBType.CREATE)
public Properties create(ServiceContext context,
                         NavuNode service,
                         NavuNode ncsRoot,
                         Properties opaque)
        throws DpCallbackException {
    Properties rfmProgress = new Properties();
For each Reactive FastMap step that we reach we store some relevant data in the rfmProgress Properties
instance:
String virtualPEName =
Helper.makeDevName(tenant, deploymentName, "CSR", "esc0");
if (CEonVPE.contains(ceName)) {
rfmProgress.setProperty(endpointId + ".ONVPE", "true");
}
Before we return from the create() method we call the writePlanData() method passing in the
rfmProgress instance:
writePlanData(service, rfmProgress);
return opaque;
}
The writePlanData() method first creates all components and sets the default values for all statuses.
Then we read the rfmProgress instance and change the states for all the Reactive FASTMAP steps that
we have reached. In the end we check whether the self component ready state has been reached. The reason
for initially writing the complete plan with default values for the statuses is not to miss a component
that has not made any progress yet; remember, this is FASTMAP: components and states that were
written in an earlier reactive-re-deploy but are not written now will be deleted by NSO. The
writePlanData() method has the following design:
// ... components created and checks for the earlier steps elided ...
if (rfmProgress.containsKey(p + ".qos-configured")) {
pcomp.setReached("l3vpn:qos-configured");
} else {
p_ready = false;
}
if (p_ready) {
pcomp.setReached("ncs:ready");
} else {
ready = false;
}
}
if (ready) {
self.setReached("ncs:ready");
}
} catch (Exception e) {
throw new NavuException("could not update plan.", e);
}
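The ready-aggregation logic in the snippet above can be sketched self-contained, with a plain java.util.Properties carrying the progress flags and the per-component results collected in a map. The class and method names here (PlanAggregation, aggregate) are illustrative stand-ins; the real writePlanData() writes through the NAVU service node instead of returning a map:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

public class PlanAggregation {
    // For each endpoint component, ncs:ready is reached only if every step
    // for that endpoint is recorded in the progress Properties; the overall
    // 'self' component is ready only if every endpoint component is ready.
    public static Map<String, Boolean> aggregate(List<String> endpoints,
                                                 List<String> steps,
                                                 Properties progress) {
        Map<String, Boolean> ready = new LinkedHashMap<>();
        boolean allReady = true;
        for (String p : endpoints) {
            boolean pReady = true;
            for (String step : steps) {
                if (!progress.containsKey(p + "." + step)) {
                    pReady = false;
                }
            }
            ready.put(p, pReady);
            if (!pReady) {
                allReady = false;
            }
        }
        ready.put("self", allReady);
        return ready;
    }

    public static void main(String[] args) {
        Properties progress = new Properties();
        progress.setProperty("endpoint-branch-office.qos-configured", "true");
        System.out.println(aggregate(
            List.of("endpoint-branch-office", "endpoint-head-office"),
            List.of("qos-configured"), progress));
    }
}
```

Writing the full plan with default not-reached statuses first, and only then flipping the reached states, matches the FASTMAP requirement that every create() invocation produces the complete plan from scratch.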
Running the example and showing the plan while the chain of reactive-re-deploy actions is still
executing could look something like the following:
ncs# show vpn l3vpn volvo plan
NAME TYPE STATE STATUS WHEN
------------------------------------------------------------------------------------
self self init reached 2016-04-08T09:22:40
ready not-reached -
endpoint-branch-office l3vpn init reached 2016-04-08T09:22:40
qos-configured reached 2016-04-08T09:22:40
ready reached 2016-04-08T09:22:40
endpoint-head-office l3vpn init reached 2016-04-08T09:22:40
pe-created not-reached -
ce-vpe-topo-added not-reached -
vpe-p0-topo-added not-reached -
qos-configured not-reached -
ready not-reached -
import ietf-yang-types {
prefix yang;
}
import tailf-common {
prefix tailf;
}
include tailf-ncs-plan;
description
"This submodule contains a collection of YANG definitions for
Service Progress Monitoring (SPM) in NCS.";
revision 2018-06-01 {
description
"Initial revision";
}
/*
* Plan Component State
*/
identity any-state {
description
"Can be used in SPM and plan trigger policies to denote any plan state.";
base ncs:plan-state;
}
/*
* Plan Component Types
*/
identity any {
description
"Can be used in SPM and plan triggers to denote any component type.";
base ncs:plan-component-type;
}
/*
* Groupings
*/
typedef spm-trigger-status {
type enumeration {
enum passed {
tailf:code-name spm-passed;
}
enum failed {
tailf:code-name spm-failed;
}
}
}
grouping service-progress-monitoring-trigger-action {
tailf:action timeout {
description
"This action should be used by a custom model that is separate
from the service (which may be made by someone else).";
input {
leaf service {
description
"The path to the service.";
type instance-identifier;
mandatory true;
}
leaf trigger {
description "The name of the trigger that fired.";
type leafref {
path "/ncs:service-progress-monitoring/ncs:trigger/ncs:name";
}
mandatory true;
}
leaf policy {
description "The name of the policy that fired.";
type leafref {
path "/ncs:service-progress-monitoring/ncs:policy/ncs:name";
}
mandatory true;
}
leaf timeout {
description "What timeout has triggered.";
type enumeration {
enum violation {tailf:code-name spm-violation-timeout;}
enum jeopardy {tailf:code-name spm-jeopardy-timeout;}
enum success {tailf:code-name spm-success-timeout;}
}
mandatory true;
}
leaf status {
description "SPM passed or failed.";
type spm-trigger-status;
mandatory true;
}
}
}
}
grouping service-progress-monitoring-data {
container service-progress-monitoring {
config false;
description
"Service Progress Monitoring triggers.
A service may have multiple SPMs.
For example, if a CPE is added at a later stage it would have
its own SPM defined, separate from the main SPM of the service.
However, in many cases there will be just one SPM per service.
The success-time leaf will be set when the policy evaluates to
true, i.e., when that part of the product is considered to be
delivered by the policy expression. Note that this is
operational data.";
list trigger-status {
description
"The operation status of the trigger.";
key name;
leaf name {
type string;
description
"The trigger name.";
}
leaf policy {
type string;
description
"Name of policy.";
}
leaf start-time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
description
"Time when the triggers started ticking.";
}
leaf jeopardy-time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
description
"Time when the conditions are evaluated for a jeopardy trigger.";
}
leaf jeopardy-result {
type spm-trigger-status;
description
"The result will be 'passed' if no jeopardy was detected at
jeopardy-time, 'failed' if it was detected. It is not set until
it has been evaluated. It will be set to 'passed' if the
condition is satisfied prior to the timeout expiring as well.";
}
leaf violation-time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
description
"Time when the conditions are evaluated for a violation trigger.";
}
leaf violation-result {
type spm-trigger-status;
description
"The result will be 'passed' if no violation was detected at
violation-time, 'failed' if it was detected. It is not set until
it has been evaluated. It will be set to 'passed' if the
condition is satisfied prior to the timeout expiring as well.";
}
leaf status {
type enumeration {
enum running {
tailf:code-name spm-running;
description
"Service Progress Monitoring has been started but
not yet triggered";
}
enum jeopardized {
tailf:code-name spm-jeopardized;
description
"The jeopardy timer has triggered and the policy has evaluated
to false.";
}
enum violated {
tailf:code-name spm-violated;
description
"The violation timer has triggered and the policy has evaluated
to false.";
}
enum successful {
tailf:code-name spm-successful;
description
"One of the timers has triggered and the policy has evaluated
to true.";
}
}
}
leaf success-time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
description
"Time when the conditions were evaluated to true,
i.e., the SPM was successful.";
}
}
}
}
container service-progress-monitoring {
tailf:info "Service Progress Monitoring policies";
list policy {
tailf:info "Policy definitions for Service Progress Monitoring";
description
"A list of all the policies.";
key name;
leaf name {
type string;
description
"The name of the policy.";
}
leaf violation-timeout {
tailf:info "Violation timeout in seconds";
mandatory true;
type uint32;
units "seconds";
description
"The timeout in seconds for a policy to be violated.";
}
leaf jeopardy-timeout {
tailf:info "Jeopardy timeout in seconds";
mandatory true;
type uint32;
units "seconds";
description
"The timeout in seconds for a policy to be in jeopardy.";
}
list condition {
min-elements 1;
description
"A list of the conditions that decide whether a policy is
fulfilled or not.";
key name;
leaf name {
type string;
description
"Name of the condition.";
}
list component-type {
min-elements 1;
description
"Each condition can specify what state must be reached for
a portion of the components to not trigger the action below.";
key type;
leaf type {
description
"We can either specify a particular component name
(trigger/component) or a component-type (which may
exist in several instances).";
type union {
type ncs:plan-component-type-t;
type enumeration {
enum "component-name" {
tailf:code-name spm-component-name;
}
}
}
}
leaf what {
description
"Condition put on the component with respect to the
../plan-state and ../status.
So, either: all components, at least one component, or a
given percentage (0..100) of the components must reach
the state.";
mandatory true;
type union {
type uint32 {
range "0..100";
}
type enumeration {
enum all{
tailf:code-name spm-what-all;
}
enum at-least-one {
tailf:code-name spm-what-at-least-one;
}
}
}
}
leaf plan-state {
mandatory true;
type ncs:plan-state-name-t;
description
"The plan's state: init, ready, or any state specific to the
component.";
}
leaf status {
type ncs:plan-state-status-t;
default "reached";
description
"Status of the new state for the component in the service's plan:
reached, not-reached or failed.";
}
}
}
container action {
leaf action-path {
type instance-identifier {
require-instance false;
}
}
leaf always-call {
type boolean;
default "false";
description
"If set to true, the action will be invoked also when
the condition is evaluated to 'passed'.";
}
}
}
list trigger {
description
"A list of all the triggers. A trigger is used to apply an SPM policy
to a service.";
key name;
leaf name {
type string;
description
"Name of the trigger.";
}
leaf description {
type string;
description
"Service Progress Monitoring trigger description.";
}
leaf policy {
tailf:info "Service Progress Monitoring Policy";
mandatory true;
description
"A reference to a policy that should be used with this trigger.";
type leafref {
path "/ncs:service-progress-monitoring/policy/name";
}
}
leaf start-time {
type yang:date-and-time;
tailf:cli-value-display-template "$(.|datetime)";
description
"Optionally provide a start-time.
If this is unset the SPM server will set the start-time to
the commit time of the trigger.";
}
leaf component {
type string;
description
"If the policy contains a condition with the key component-name,
this is the component to apply the condition to.";
}
leaf target {
mandatory true;
description
"Instance identifier to whichever service the SPM policy should
be applied. Typically this is the creator of the trigger instance.";
type instance-identifier {
require-instance true;
}
}
}
}
}
Performance Considerations
When using the Reactive FASTMAP technique the service tends to be re-deployed multiple times for
the service to be fully deployed; i.e., the create() function is executed more frequently. This makes it
desirable to reduce the execution time of the create() function as much as possible.
Normal code performance optimization methods should be used, but there are a couple of techniques that
can be used that are specific to the Reactive FASTMAP pattern.
1 Stacked services (see the section called “Stacked Services and Shared Structures”) can be a very
efficient technique to reduce both the size of the service diff-set and the execution time.
For example, if a service applies a template to configure a device, then all changes resulting from this
will be stored in the diff-set of the service. During a re-deploy all changes will first be undone to later
be restored when the template is applied.
A more efficient solution is to use a stacked service to apply the template. The input parameters to the
stacked service will be the variables that would go into the template. The stacked service would pick
them up and apply the original template. As a consequence the diff-set resulting from applying the
template ends up in the stacked service, and as long as there are no changes in the input parameter to
the stacked service its create() code will not have to run. Instead of applying the same template
multiple times the template will only be applied once.
2 CDB subscriber refactoring. Stacked services can be used when no response is required from the
factored out code. However, if the create() code contains a CPU intensive computation that takes
a number of input parameters and produces some result, then it is desirable to also minimize the
number of times this computation is performed, and to perform it outside the database lock.
This can be done by treating the problem similarly to resource allocation above - create a configuration
tree where computation requests can be written. A CDB subscriber is registered to subscribe to this
tree. Whenever a new request is committed it performs the computation and writes the result into a
CDB operational data leaf, and re-deploys the service that requested the computation.
As a consequence of this the computation will take place outside the lock, and the computation will
only be performed once for each set of input parameters. The cost of this technique is that an extra re-
deploy will be performed. The service pseudo-code looks like this:
create(serv) {
    /* request computation */
    create("/compute-something{id}");
    setElem("/compute-something{id}/param1", value1);
    setElem("/compute-something{id}/param2", value2);
    /* read result; not yet available the first time create() runs */
    res = getElem("/compute-something{id}/response");
    if (res != null) {
        /* use the computed result in the mapping */
    }
}
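This request/response pattern, including the subscriber side, can be sketched in plain Python; the names and data structures below are illustrative stand-ins, not the NSO API:

```python
# Conceptual sketch of the CDB-subscriber refactoring: requests are written
# to a "configuration tree" (a dict here); a subscriber computes each result
# once, outside the transaction lock, and re-deploys the requesting service.
requests = {}        # /compute-something{id} -> {"param1": ..., "param2": ...}
responses = {}       # operational data: /compute-something{id}/response
computations = 0

def expensive_computation(p1, p2):
    global computations
    computations += 1
    return p1 + p2

def subscriber_run(redeploy):
    # invoked after a request commit; performs each computation exactly once
    for rid, params in requests.items():
        if rid not in responses:
            responses[rid] = expensive_computation(params["param1"],
                                                   params["param2"])
            redeploy(rid)

def service_create(rid, v1, v2):
    requests[rid] = {"param1": v1, "param2": v2}   # request computation
    return responses.get(rid)                      # None on the first run

assert service_create("s1", 2, 3) is None          # first run: no result yet
subscriber_run(lambda rid: None)                   # subscriber computes once
assert service_create("s1", 2, 3) == 5             # re-deploy sees the result
assert computations == 1
```

The key property is that the expensive computation runs once per unique request and outside the lock, at the cost of one extra re-deploy.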
1 Instructing a VIM or VNF-M to start the virtual device with some input parameters (which image, cpu
settings, day0 configuration etc).
2 Waiting for the virtual device to be started, the VIM/VNF-M may signal this through some event, or
polling of some state might be necessary.
There are several ways to achieve the above process with Reactive FASTMAP. One solution is
implemented in the vm-manager and vm-manager-esc packages found in the example examples.ncs/
service-provider/virtual-mpls-vpn.
Using these packages the service does not directly talk to the VIM/VNF-M but instead registers a vm-
manager/start request using the vm-manager API. This is done by adding a list instance in the /vm-
manager/start list.
The contract with the vm-manager is that it is responsible for starting the virtual device, adding it
to the /devices/device tree, performing sync-from, setting the /devices/device/vmm:ready
leaf to true, and finally re-deploying the service that made the start request. This greatly simplifies the
implementation of the service, which would otherwise have to perform all those operations itself.
The vm-manager package is only an interface package. It must be combined with a package that actually
talks to the VIM/VNF-M. In the virtual-mpls-vpn example this is done through a package called
vm-manager-esc that interfaces with a VNF-M called ESC. The vm-manager-esc package subscribes
to changes in the /vm-manager/start configuration tree provided by the vm-manager package.
Whenever a new request is created in that tree it attempts to start the corresponding VM on the indicated
ESC device.
When the vm-manager-esc package receives a CREATE event in the /vm-manager/start list
it initiates starting the VM. This involves a number of steps and components. In addition to the CDB
subscriber for the /vm-manager/start tree it also has the following parts.
1 A CDB subscriber (notif subscriber) that subscribes to NETCONF notifications from the ESC device.
NETCONF notifications are used to communicate the state of the virtual machine. Events are sent
when a new VM is registered, when it is started, when it has become alive, when it stops etc. The vm-
manager-esc package needs to react differently to the different events, and ignore some of them.
2 A local service (vm-manager/esc) for starting the VM on the ESC. The CDB subscriber that subscribes
to the /vm-manager/start list will create new instances of this service whenever a new vm-
manager/start request is received, and delete the corresponding service when a vm-manager/start
entry is deleted. The reason the CDB subscriber doesn't configure the ESC directly is that it would
then have to keep track of what to delete when the vm-manager/start entry is deleted, but perhaps
more importantly, if resources should be allocated, for example a management IP, then this can be done
conveniently from inside a service using the resource manager package.
The vm-manager/esc service writes configuration to the ESC device to start a new VM, to monitor it,
and to send NETCONF notifications on state changes. It may also perform resource allocation and other
activities.
When the notif subscriber receives a VM ALIVE event it mounts the device in the device tree, performs
fetch ssh keys and sync-from, sets the ready leaf to true, and re-deploys the service that requested the VM.
The use of the ready leaf is critical. The original service cannot just inspect the devices tree to see if the
device is there. The device being in the devices tree is no guarantee that it is ready to be configured.
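A minimal Python sketch of why the service must gate on the ready leaf rather than on the device's mere presence (illustrative names, not the NSO API):

```python
# Conceptual sketch: the service gates on a 'ready' leaf, because the device
# may be mounted in the device tree before it is actually usable.
devices = {}   # device name -> {"ready": bool}

def on_vm_alive(name, redeploy_service):
    # notif subscriber: mount the device, then fetch ssh keys / sync-from,
    # then set ready to true, then re-deploy the requesting service
    devices[name] = {"ready": False}
    # ... fetch-ssh-keys and sync-from would happen here ...
    devices[name]["ready"] = True
    redeploy_service()

def service_create(name):
    dev = devices.get(name)
    if dev is None or not dev["ready"]:
        return "waiting"          # do not configure the device yet
    return "configured"

assert service_create("vm0") == "waiting"
on_vm_alive("vm0", lambda: None)
assert service_create("vm0") == "configured"
```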
not start the virtual device). When the service is deleted the configuration that the service has created will
be deleted, including both the configuration to start the VM and the configuration on the VM.
If the service configured the VIM/VNF-M directly the result would be that the VIM/VNF-M would be
told to stop the VM at the same time as NSO is trying to change the configuration on the VM (deleting the
configuration that the service created). This results in a race condition that frequently results in an error
(the VM is spun down while NSO is talking to it, trying to delete its configuration).
This problem is handled by using the vm-manager package between the service and the VIM/VNF-M.
When a service is deleted the vm-manager/start configuration is deleted. This in turn will trigger the CDB
subscriber to stop the service, but this will be done after the service delete transaction has been completed,
and consequently after NSO has removed the configuration that the service created on the device. The race
condition is avoided.
Another problem is how to remove the device from the NSO device tree. A service that directly configures
the VIM/VNF-M would have to use some trick to deal with this. The vm-manager-esc package can handle
this directly in the CDB vm-manager/start subscriber. When it registers a delete of a vm-manager/start
instance it deletes the corresponding vm-service, but also the devices that have been mounted. If scaling is
supported by the VIM/VNF-M there might be multiple entries in the NSO device tree that must be deleted.
The vm-manager YANG model contains a list of names of all devices mounted as a response to a
vm-manager/start request. This list can be read both by the initiating service and by the vm-manager-esc
CDB subscriber to know which devices to delete.
The real problem is usually to de-register a license. When a VM is stopped it is desirable to release the
license if a license server is used. This process typically consists of deleting some configuration on the
device and then waiting for the device to talk to the license server.
This complicates the device delete process a bit. Not only should the device be stopped, it must be
a staged process where the device config is first removed, then the license released, and then, when the
device has actually released the license, the VIM/VNF-M is instructed to stop the device.
There are at least two solutions to this problem, with slightly different trade-offs.
1 The device NED is modified to deal with license release such that when it receives a license delete
command, it detects this and waits until the license has actually been released before returning. This
assumes that the license was applied as part of the device configuration that the initial service applied.
The drawback of this approach is that the commit may be slow since it will delay until the license has
been released. The advantage is that it is easy to implement.
2 The specific vm-manager package, vm-manager-esc in our example, could be modified to release
the license before instructing the VIM/VNF-M to stop the VM. This is more efficient, but also a bit
more complicated. The CDB subscriber that listens to vm-manager/start modifications would detect
a DELETE operation and before removing the device from the NSO device tree it would invoke a
license release action on the device. The NED implementing this action (as a NED command) would
release the license and then wait until the device has actually released the license before returning. The
CDB subscriber would then proceed to delete the device from the NSO device tree, and the vm-service
instance. This whole procedure could be spawned off in a separate thread to avoid blocking other vm-
manager/start operations.
NavuContainer iface =
ifs.list("interface").sharedCreate(
vlan.leaf("iface").value());
iface.leaf("enabled").sharedCreate();
unit.leaf("vlan-id").sharedSet(vlan.leaf("vid").value());
unit.leaf("enabled").sharedSet(new ConfBool(true));
unit.leaf("description").sharedSet(
vlan.leaf("description").value());
for (ConfValue arpValue : vlan.leafList("arp")) {
unit.leafList("arp").sharedCreate(arpValue);
}
}
Build the example and create two services on the same interface:
$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
$ make clean all
$ ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
admin@ncs# configure
admin@ncs(config)# devices sync-from
admin@ncs(config)# services vlan s1 iface ethX unit 1 vid 1 description descr1
admin@ncs(config-vlan-s1)# commit
admin@ncs(config-vlan-s1)# top
admin@ncs(config)# services vlan s2 iface ethX unit 2 vid 2 description descr2
admin@ncs(config-vlan-s2)# commit
admin@ncs(config-vlan-s2)# top
We can now look at the device data for one of the relevant devices. We are especially interested in the
Refcount and the Backpointer attributes that are used by the NSO FASTMAP algorithm to deduce when
the data is eligible for deletion:
admin@ncs(config)# show full-configuration devices device ex0 \
config r:sys interfaces interface | display service-meta-data
...
/* Refcount: 2 */
/* Backpointer: [ /ncs:services/vl:vlan[vl:name='s1'] /ncs:services/vl:vlan[vl:name='s2'] ] */
r:sys interfaces interface ethX
...
If we now delete the first service instance, the device interface still exists, but with a decremented
reference counter:
admin@ncs(config)# no services vlan s1
admin@ncs(config)# commit
admin@ncs(config)# show full-configuration devices device ex0 \
config r:sys interfaces interface | display service-meta-data
...
/* Refcount: 1 */
/* Backpointer: [ /ncs:services/vl:vlan[vl:name='s2'] ] */
r:sys interfaces interface ethX
...
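The deletion semantics of these attributes can be modeled in a few lines of Python; this is a conceptual sketch of the bookkeeping, not the actual FASTMAP implementation:

```python
# Conceptual model of sharedCreate meta-data: each node carries a reference
# count and back-pointers to the services sharing it; the node becomes
# eligible for deletion only when the last referring service is deleted.
nodes = {}   # path -> {"refcount": int, "backpointers": set of services}

def shared_create(path, service):
    node = nodes.setdefault(path, {"refcount": 0, "backpointers": set()})
    if service not in node["backpointers"]:
        node["backpointers"].add(service)
        node["refcount"] += 1

def delete_service(service):
    for path in list(nodes):
        node = nodes[path]
        if service in node["backpointers"]:
            node["backpointers"].discard(service)
            node["refcount"] -= 1
            if node["refcount"] == 0:
                del nodes[path]    # last reference gone: data is removed

shared_create("interface ethX", "s1")
shared_create("interface ethX", "s2")
assert nodes["interface ethX"]["refcount"] == 2
delete_service("s1")
assert nodes["interface ethX"]["refcount"] == 1   # kept: s2 still refers
delete_service("s2")
assert "interface ethX" not in nodes              # now eligible for deletion
```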
The opaque properties object is made available as an argument to the service create() method. When
a service instance is first created this object is null. The code can add properties to it, and it returns the
possibly updated opaque object, which NSO stores with the service instance. Later when the service
instance is updated NSO will pass the stored opaque to create().
Note It is vital that create() returns the opaque object that was passed to it, even if the method itself
does not use it. The reason for this is that, as we will see in the section called “Pre and post hooks”, the
create() method is not the only callback that uses this opaque object. The opaque object can actually be
chained through several different callbacks. Having a return null; in the create() method is not good
practice.
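The chaining can be illustrated with a small Python sketch, using a dict as a stand-in for the Java Properties object; every callback returns the (possibly updated) opaque:

```python
# Conceptual sketch of the opaque object chained through the callbacks:
# each callback must return the opaque, never None, or later callbacks
# (and the stored state) lose what earlier callbacks recorded.
def pre_modification(opaque):
    if opaque is None:            # first deploy: nothing stored yet
        opaque = {}
    opaque.setdefault("PRE_SEEN", "true")
    return opaque

def create(opaque):
    if opaque is None:
        opaque = {}
    if "ALLOCATED_IP" not in opaque:     # first deploy: run the allocation
        opaque["ALLOCATED_IP"] = "10.0.0.1"
    return opaque                        # returning None would drop PRE_SEEN

def post_modification(opaque):
    return opaque

opaque = None                            # nothing stored before first create
for callback in (pre_modification, create, post_modification):
    opaque = callback(opaque)
assert opaque == {"PRE_SEEN": "true", "ALLOCATED_IP": "10.0.0.1"}
```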
A pseudo code implementation of our IP allocation scenario could then look something like the following:
@ServiceCallback(servicePoint="my-service",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode root,
Properties opaque)
throws DpCallbackException {
String allocIP = null;
if (opaque == null) {
    opaque = new Properties();
} else {
    allocIP = opaque.getProperty("ALLOCATED_IP");
}
if (allocIP == null) {
    // This implies that the service instance is created for the first
    // time and the allocation algorithm should execute
    ...
    ...
    opaque.setProperty("ALLOCATED_IP", allocIP);
}
@ServiceCallback(servicePoint = "",
callType = ServiceCBType.PRE_MODIFICATION)
public Properties preModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException;
@ServiceCallback(servicePoint = "",
callType = ServiceCBType.POST_MODIFICATION)
public Properties postModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException;
The pre/postModification methods have a context argument of type ServiceContext which contains
methods to retrieve NavuNodes pointing to the service instance and the NCS model root node. Data that
is modified using these NavuNodes will be handled outside the scope of the FASTMAP algorithm and
therefore untouched by changes of the service instance (if not changed in another pre/postModification
callback):
public interface ServiceContext {
    ...
    public NavuNode getServiceNode() throws ConfException;
    public NavuNode getRootNode() throws ConfException;
    ...
}
In addition to the above arguments the pre/postModification methods also have a path argument that points
to the current service instance, and the opaque Properties object corresponding to this service instance.
Hence the opaque can first be created in a preModification method, passed to, and modified in, the
FASTMAP create() method, and in the end also handled in a postModification method before it is
stored with the service instance.
The examples.ncs/getting-started/developing-with-ncs/15-pre-modification
example shows how a preModification method can be used to permanently set a DNS server in the device
configuration. This DNS server is thought of as a prerequisite for the service instances and should always
be set for the devices. Instead of having to fail in the FASTMAP service when the prerequisite is not
fulfilled, the preModification callback can check and set the configuration. We have the following
preModification code:
@ServiceCallback(servicePoint = "vpnep-servicepoint",
                 callType = ServiceCBType.PRE_MODIFICATION)
public Properties preModification(ServiceContext context,
                                  ServiceOperationType operation,
                                  ConfPath path,
                                  Properties opaque)
        throws DpCallbackException {
    try {
        if (ServiceOperationType.DELETE.equals(operation)) {
            return opaque;
        }
        NavuNode service = context.getRootNode().container(Ncs._services_).
            namespace(vpnep.id).list(vpnep._vpn_endpoint_).
            elem((ConfKey) path.getKP()[0]);
        List<NavuNode> changedNodes = service.getChanges(true);
        for (NavuNode n : changedNodes) {
            if (n.getName().equals(vpnep._router_)) {
                NavuLeaf routerName = (NavuLeaf) n;
                NavuNode deviceNameNode = routerName.deref().get(0);
                NavuContainer device =
                    (NavuContainer) deviceNameNode.getParent();
                String routerNs = "http://example.com/router";
                NavuContainer sys = device.container(Ncs._config_).
                    namespace(routerNs).container("sys");
                /* the example router model keeps the DNS servers in a
                   list under sys */
                NavuList serverList = sys.container("dns").list("server");
                if (!serverList.containsNode("10.10.10.1")) {
                    serverList.create("10.10.10.1");
                }
                break;
            }
        }
    } catch (Exception e) {
        throw new DpCallbackException("Pre modification failed", e);
    }
    return opaque;
}
We walk through this example code and explain what it does. The first part is a check of which operation
is being performed. If the operation is a delete we can return. We always return the opaque passed to us as
an argument: even though this is a delete, it is not necessarily the last callback in the callback chain, and if
we returned null we would impose a null opaque on later callbacks.
if (ServiceOperationType.DELETE.equals(operation)) {
return opaque;
}
Next we need to check if the router leaf of the service has changed in the transaction. This leaf is mandatory,
but if the operation is an UPDATE then this leaf is not necessarily changed. The following code snippet
navigates to the relevant service instance NavuNode and gets the list of all changed NavuNodes in this
transaction and for this service instance:
NavuNode service = context.getRootNode().container(Ncs._services_).
namespace(vpnep.id).list(vpnep._vpn_endpoint_).
elem((ConfKey) path.getKP()[0]);
List<NavuNode> changedNodes = service.getChanges(true);
We check if any of the changed NavuNodes is the router leaf, which is of type leafref to a device
name under the /ncs:devices/device tree:
for (NavuNode n : changedNodes) {
if (n.getName().equals(vpnep._router_)) {
NavuLeaf routerName = (NavuLeaf) n;
If the router leaf has changed, since it is a leafref to another leaf we can deref it and get the device name
leaf in the /ncs:devices/device tree. Note that in the general case a deref will not necessarily return
a single NavuNode, but in this case it will, and therefore we can just call get(0) on the deref list of
NavuNodes. We want the device container NavuNode, and we can retrieve it as the parent node of
the device name leaf.
NavuNode deviceNameNode = routerName.deref().get(0);
NavuContainer device =
(NavuContainer) deviceNameNode.getParent();
We now know that the router leaf has changed, we have the device container NavuNode for this device and
we can check the device configuration for the dns servers. If the IP address 10.10.10.1 does not appear in
the list we add it.
String routerNs = "http://example.com/router";
NavuContainer sys = device.container(Ncs._config_).
namespace(routerNs).container("sys");
if (!serverList.containsNode("10.10.10.1")) {
serverList.create("10.10.10.1");
}
We have here used the preModification callback to hardwire an enabling configuration for a service. This
setting will stay on the device independently of the lifecycle changes of the service instance which
created it.
Note The serialization of service transactions described above is part of the NSO service manager's
FASTMAP algorithm. It should NOT be mistaken for the NSO device manager's propagation of data to
the relevant devices, which is performed at a later stage of the transaction commit. The latter is performed
in a fully concurrent manner.
The reasons for the serialization of FASTMAP transactions are transaction consistency and making it
simpler to write create() functions, since they do not need to be thread-safe.
However, in certain scenarios with services where the create() function requires heavy computation and
at the same time has no overlap in written data, this serialization is not necessary and prevents higher
throughput. For this reason a preLockCreate() function has been introduced. This function serves
exactly the same purpose as the create() function but is called before the common transaction lock is
acquired.
The guidelines for using a preLockCreate() function instead of the ordinary create() are:
• The service creation is computationally heavy, i.e., consumes substantial CPU time.
• The service creation can be coded in a thread-safe fashion.
• Different service instances have no config data overlap, or the probability of config data overlap is
low.
The preLockCreate FASTMAP algorithm has internal detection of conflicting concurrent transaction data
updates. This implies that there is no risk of persistent data inconsistencies, but a conflicting
transaction might instead fail in commit.
For services that also use the preModification() function, this function will also be called before
the transaction lock if preLockCreate() is used.
If a stacked service (see the section called “Stacked Services and Shared Structures”) has a
preLockCreate(), and the stacked service is created by another service's create() function, then
the stacked service's preLockCreate() will be called inside the lock.
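The conflict detection described above amounts to optimistic concurrency control; a conceptual Python sketch (not the actual NSO implementation) could look like this:

```python
# Conceptual sketch of optimistic concurrency behind preLockCreate(): the
# heavy create work runs outside the lock against a snapshot; at commit,
# overlap with data committed since the snapshot aborts the transaction,
# so no persistent inconsistency can result.
committed = {}        # the database: path -> (value, version)
version = 0

def pre_lock_create(writes):
    # heavy computation happens here, outside the lock; remember which
    # database version the result was based on
    return {"writes": writes, "based_on": version}

def commit(txn):
    global version
    for path, (_, v) in committed.items():
        # conflict: an overlapping path was committed after our snapshot
        if path in txn["writes"] and v > txn["based_on"]:
            raise RuntimeError("conflict, transaction aborted")
    version += 1
    for path, value in txn["writes"].items():
        committed[path] = (value, version)

t1 = pre_lock_create({"a": 1})          # both computed outside the lock
t2 = pre_lock_create({"a": 2})          # overlapping write to "a"
commit(t1)
try:
    commit(t2)
    conflict = False
except RuntimeError:
    conflict = True
assert conflict                          # t2 fails in commit, data stays sane
```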
Service Caveats
Under some circumstances the mapping logic of a service needs special consideration. Services can either
map to disjunctive data sets or shared data sets.
If the services map to disjunctive data sets, which means no other service will manipulate the same data,
there are no known caveats.
If on the other hand several services manipulate the same data there are some things to consider. All these
special cases are discussed below.
Finding Caveats
A useful tool for finding potential problems with overlapping data is the CLI debug service flag.
Example:
admin@ncs(config)# commit dry-run | debug service
The debug service flag will display the net effect of the service create code as well as issue warnings
about potentially problematic usage. Note that these warnings are only for situations where services have
overlapping shared data.
In all examples below the WARNING message is the result of using the flag debug service.
delete
A general rule of thumb is to never use delete in service create code.
If a delete is used in service create code the following warning is displayed:
*** WARNING ***: delete in service create code is unsafe if data is
shared by other services
The deleted elements will be restored when the service instance which did the delete is deleted. Other
services which relied on the same configuration will be out of sync.
The explicit delete is easy to detect in the XML of a template or in the Java source code. Less easy to
detect are the when and choice statements in the YANG data model.
If a when statement is evaluated to false the configuration tree below that node will be deleted.
If a case is set in a choice statement the previously set case will be deleted.
Both the above when and case scenarios will behave the same as an explicit delete.
One working design pattern for these use cases is to let one special init service be responsible for the
deletion and initialization. This init service should be a singleton and be shared created by other services
depending on the specific delete and initialization.
By using this stacked service design the other services just share create that init service. When the last one
of the other services is deleted the init service is also deleted as it is reference counted.
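The reference counting behind this singleton init-service pattern can be sketched as follows (illustrative Python, not the NSO API):

```python
# Conceptual sketch of the init-service pattern: dependent services
# shared-create one singleton "init" service; it is reference counted and
# disappears only when the last dependent service is deleted.
init_refcount = 0
init_deployed = False

def shared_create_init():
    global init_refcount, init_deployed
    if init_refcount == 0:
        init_deployed = True      # init service does the delete/initialization
    init_refcount += 1

def delete_dependent():
    global init_refcount, init_deployed
    init_refcount -= 1
    if init_refcount == 0:
        init_deployed = False     # last dependent gone: init service removed

shared_create_init()              # service A shared-creates the init service
shared_create_init()              # service B shares the same instance
delete_dependent()                # deleting A leaves the init service alone
assert init_deployed
delete_dependent()                # deleting B removes the init service too
assert not init_deployed
```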
Another design pattern is to have such delete and initialization code in the pre- and post-modification code
of a service. This is possible but generally results in more complex code than the stacked service approach
above.
set
If a set operation instead of a shared set operation is used in service create code the following warning is
displayed:
*** WARNING ***: set in service create code is unsafe if data is
shared by other services
The set operation does not add the service meta-data reference count to the element. If the first service,
which set the element, is deleted, the original value will be restored and other services will be out of sync.
create
If a create operation instead of a shared create operation is used in service create code the following
warning is displayed:
*** WARNING ***: create in service create code is unsafe if data is
shared by other services
The create operation does not add the service meta-data back-pointer and reference count to the element. If
the first service, which created the element, is deleted the created item is deleted and other services will be
out of sync.
move
If items in an ordered by user list are moved and these items were created by another service the following
warning is displayed:
*** WARNING ***: due to the move the following services will be
out of sync:
Service Discovery
Discovery basics
A very common situation when NSO is deployed in an existing network is that the network already has
services implemented. These services may have been deployed manually or through an older provisioning
system. The task is to introduce NSO and import the existing services into NSO. The goal is to use NSO to
manage existing services, and to add additional instances of the same service type using NSO.
The whole process of identifying services and importing them into NSO is called Service Discovery.
Some steps in the process can be automated; others are highly manual. The amount of work differs a lot
depending on how structured and consistent the original deployment is.
The process can be broken down in a number of steps:
Figure 154. Service Discovery
[The figure shows an iterative cycle: (1) sync-from the network (inventory),
(2) write the YANG model and mapping logic, (3) commit the service list,
(4) re-deploy reconcile; then change the YANG/mapping logic and re-deploy
until the reconcile output is clean.]
One of the prerequisites for this to work is that it is possible to construct a list of the already existing
services. Maybe such a list exists in an inventory system, an external database, or maybe just an Excel
spreadsheet. It must also be possible to:
2 Write the YANG data model for the service and the mapping logic.
3 Write a program, using Python/Maapi or Java/Maapi which traverses the entire network configuration
and computes the services list.
4 Verify the mapping logic is correct.
The last step, verifying the mapping logic, is an iterative process. The goal is to ensure all relevant device
configuration is covered by the mapping logic.
Verifying the mapping logic is achieved by using the action re-deploy reconcile { } dry-run of NSO.
When the output is empty the data is covered.
NSO uses special attributes on instance data to indicate the data used by a service. Two attributes are used
for this: Refcount and Backpointer.
By using the flag display service-meta-data to show full-configuration these attributes can be inspected.
Even if all data is covered in the mapping there might still be manually configured data below service data.
If this is not desired, use the action re-deploy reconcile { discard-non-service-config } dry-run to find such
configuration.
Below the steps to reconcile a service are shown, first in a visual form and later as commands in one of the
examples.
[Figure: services v1 and v2 drawn on top of the original data O; v1 solely
owns α and v2 solely owns β, while δ (overlap of v1 and O) and ε (overlap
of v2 and O) are shared with the original data.]
The services v1 and v2 have been created on top of the existing original data.
The service v1 has sole control of the instance data in α, which is not part of δ, and service v2 has sole
control of the instance data β, which is not part of ε.
The data solely owned by service v1 and v2 has a reference count of one.
The data in δ and in ε is both part of the original data and part of service data. The reference counter in
these areas is two.
If the service v1 were to be deleted, the data with a reference count of one would be removed. The data in
δ would be kept, but its reference count would be decremented.
After thorough inspection of the service and the affected data, the service can become the sole owner of
the data which is part of the original data.
Check the effect of the reconciliation by use of the dry-run option to re-deploy:
admin@ncs(config)# services vlan v1 re-deploy reconcile { } dry-run
The output of the dry-run will only display configuration changes, not changes in service-meta-data like
reference count and back-pointers.
Figure 156. Reconcile Service v1
[Figure: after the command services vlan v1 re-deploy reconcile, v1 solely
owns α; ω is the remaining original data and ε is still shared between O
and v2.]
After reconciliation of v1 the service is the sole owner of the data α. All the data in α now has the
reference count set to one after the operation.
Figure 157. Reconcile Service v2
[Figure: after reconciling both services, v1 solely owns α and v2 solely
owns β; only ψ remains as original data outside any service.]
All data in α and β now has a reference count of one and will thus be removed when services v1 and v2 are
removed or un-deployed.
If at a later stage it turns out that parts of ψ should belong to a service, just change the mapping logic of
the service and execute the action again:
admin@ncs(config)# services vlan v1 re-deploy reconcile
admin@ncs(config)# services vlan v2 re-deploy reconcile
If the service mapping logic is changed so that services start to overlap each other and control more of
the original data, as in the following figure:
Figure 158. Overlapping services
[Figure: with the changed mapping logic the services overlap each other and
more of the original data; ζ and η mark the overlapping regions, and both
services vlan v1 and v2 are re-deploy reconciled again.]
just reconcile the services again. After reconciliation α and β have a reference count of one and the
reference count of ζ is two.
The command re-deploy reconcile can be executed over and over again, if the service is already
reconciled nothing will happen.
The data ψ is outside any service and is kept after the services are gone. If the services v1 and v2 had been
deleted ψ would still look the same.
Now after the visualization try this by hand in one of the examples: examples.ncs/getting-
started/developing-with-ncs/4-rfs-service
First we create two service instances:
$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
$ make clean all
$ ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices sync-from
admin@ncs(config)# services vlan v1 description v1-vlan iface eth1 unit 1 vid 111
admin@ncs(config-vlan-v1)# top
admin@ncs(config)# services vlan v2 description v2-vlan iface eth2 unit 2 vid 222
admin@ncs(config-vlan-v2)# top
admin@ncs(config)# commit
That created two services in the network. Now let's destroy that.
admin@ncs(config)# devices device * delete-config
admin@ncs(config)# no services
admin@ncs(config)# commit no-networking
We now have a situation with two services deployed in the network, but neither the services nor any device configuration in NSO.
This is the case when NSO is first set up in a network. Start by reading all the device data into the database:
admin@ncs(config)# devices sync-from
This resembles the point where a brownfield deployment starts. Let's introduce the two service instances in NSO:
admin@ncs(config)# services vlan v1 description v1-vlan iface eth1 unit 1 vid 111
admin@ncs(config-vlan-v1)# top
admin@ncs(config)# services vlan v2 description v2-vlan iface eth2 unit 2 vid 222
admin@ncs(config-vlan-v2)# top
admin@ncs(config)# commit no-networking
We're almost there now. If we take a look at the deployed configuration in NSO, we see for example:
admin@ncs(config)# show full-configuration devices device ex0 \
config r:sys interfaces | display service-meta-data
...
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
r:sys interfaces interface eth1
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
enabled
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
unit 1
! Refcount: 2
! Originalvalue: true
enabled
! Refcount: 2
! Originalvalue: v1-vlan
description v1-vlan
! Refcount: 2
! Originalvalue: 111
vlan-id 111
!
!
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
r:sys interfaces interface eth2
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
enabled
! Refcount: 2
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
unit 2
! Refcount: 2
! Originalvalue: true
enabled
! Refcount: 2
! Originalvalue: v2-vlan
description v2-vlan
! Refcount: 2
! Originalvalue: 222
vlan-id 222
!
When we commit a service to the network, the FASTMAP code creates the Refcount and Backpointer attributes. These attributes connect the device config to services, and are also used by FASTMAP when service instances are changed or deleted. In the configuration snippet above you can see that the interfaces "eth1" and "eth2" have a refcount of 2 but only one backpointer each, pointing back to the services. This is the state when the data is not owned by the service but is still part of the original data.
admin@ncs(config)# services vlan v1 re-deploy reconcile
admin@ncs(config)# services vlan v2 re-deploy reconcile
Now the services v1 and v2 are in the same state as shown in Figure 157, “Reconcile Service v2” above.
admin@ncs(config)# show full-configuration devices device ex0 \
config r:sys interfaces | display service-meta-data
...
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
r:sys interfaces interface eth1
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
enabled
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v1'] ]
unit 1
! Refcount: 1
enabled
! Refcount: 1
description v1-vlan
! Refcount: 1
vlan-id 111
!
!
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
r:sys interfaces interface eth2
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
enabled
! Refcount: 1
! Backpointer: [ /ncs:services/vl:vlan[vl:name='v2'] ]
unit 2
! Refcount: 1
enabled
! Refcount: 1
description v2-vlan
! Refcount: 1
vlan-id 222
!
!
The two services v1 and v2 have been reconciled. The reference counters as well as the backpointers are correct and indicate that the data is owned by the services.
Reconciliation caveats
How well this scheme works depends on the service type. If a service deletes data on the managed devices, expecting FASTMAP to recreate that data when the service is removed, this technique does not work.
Also, if the service instances have allocated data, this scheme has to be modified to take that allocation into
account.
A reconcile exercise is also a cleanup exercise, and every reconciliation exercise will be different.
Reconciling in bulk
Once we have convinced ourselves that the reconciliation process works, we probably want to reconcile all
services in bulk. One way to do that would be to write a shell script to do it. The script needs input; assume
we have a file vpn.txt that contains all the already existing VPNs in the network as a CSV file.
$ cat vpn.txt
volvo,volvo VLAN,eth4,1,444
saab,saab VLAN,eth4,2,445
astra,astra VLAN,eth4,3,446
A small shell script to generate input to the CLI could look like:
#!/bin/sh
# Usage: reconcile.sh <csv-file>; writes the CLI commands to out.cli
infile=$1
IFS=,
echo "config" > out.cli
# First pass: create the service instances in NSO without touching the network
while read id desc iface unit vid; do
    echo "services vlan $id description \"$desc\" iface $iface unit $unit vid $vid" >> out.cli
done < $infile
echo "commit no-networking" >> out.cli
# Second pass: reconcile each service
while read id desc iface unit vid; do
    echo "Reconcile of '$id'"
    echo "services vlan $id re-deploy reconcile" >> out.cli
done < $infile
Partial sync-from
In some cases a service may need to rely on the actual device configurations to compute its change-set. It is often a requirement to pull the current configurations from the network before executing such a service. Doing a full sync-from on a number of devices is an expensive task, especially if it needs to be performed often, so the suggested approach in this case is to use partial-sync-from.
The partial-sync-from action makes it possible to pull only certain portions of the devices' configuration from the network rather than the full config. This is a more efficient operation on NETCONF devices and on NEDs that support the partial-show feature; NEDs that do not support partial-show fall back to pulling the whole configuration.
Pulling the configuration from the network needs to be initiated outside the service code. At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence it is good practice for such a service to implement a wrapper action that invokes the generic /devices/partial-sync-from action with the correct list of paths. The user or application that manages the service then only needs to invoke the wrapper action, without needing to know which parts of the configuration the service is interested in.
The snippet in Example 160, “Example of running partial-sync-from action via Java API” gives an example of running the partial-sync-from action via Java, using the "router" device from examples.ncs/getting-started/developing-with-ncs/0-router-network.
Even though it allows pulling only a part of a device's configuration, partial-sync-from is not allowed to break the consistency of the configuration in CDB as defined by the YANG model. Hence extra consideration needs to be given to dependencies inside the device model. If some configuration item A depends on configuration item B in the device's configuration, pulling only A may fail due to the unsatisfied dependency on B. In this case both A and B need to be pulled, even if the service is only interested in the value of A.
It is important to note that partial-sync-from does not update the transaction ID for the device unless the whole configuration has been selected (e.g. /ncs:devices/ncs:device[ncs:name='ex0']/ncs:config).
Introduction
Templates are a flexible and powerful mechanism in NSO that simplifies how changes can be made across the configuration data, for example across devices of different types. They also provide a declarative way to describe such manipulations.
Two types of templates exist: device-templates and config-templates. The former is invoked as an action, whereas the latter is invoked either because of changes to some service data or through a programmatic API (e.g. Java). For more information about device-templates, refer to the section called “Device Templates” in NSO 5.3 User Guide. The rest of this chapter mainly describes config-templates.
Note config-templates are often called “Service Templates”, but this is only partly true. A config-template
invoked through an API does not have to deal with NSO services.
When a template is used as part of a service implementation, NSO, thanks to FASTMAP, remembers the configuration changes made towards the devices, so the template changes can, for example, be reverted.
There is also an API for applying config-templates, so they can replace large portions of boilerplate code when configuration data needs to be manipulated from a programming language.
config-templates are loaded as part of packages. They are stored in XML files in the templates subdirectory of a package, and can be used to update any part of the configuration.
Config templates
Config template terminology:
• Config template: The use of templates in a programmatic way. Based on the same basic principles as the device-template, with the distinction that a config-template is part of the implementation (device-templates, on the other hand, are dynamically created by the operator as needed, for example in the CLI, and stored in the configuration). config-templates are loaded when NSO starts: if an NSO package has a templates subdirectory, it is scanned for files with the suffix '.xml' and those files are loaded. With config-templates one can implement full services or abstract functionality. config-templates also make it possible to divide the work when implementing a service: one group may focus on the logic and the programming, and another on the networking details, declaring the modifications of devices in a template.
• Service template: A config-template connected to a service-point. These are services which are
entirely implemented with a config-template. A service template is useful in cases when only a
mapping of service instance data to device data is needed to implement the service, and no logic is
involved. See the section called “Service Templates”.
• Feature Template: A config-template used to configure a specific feature (for example an ACL) on a set of different devices. It is a convenient way of updating structured configuration data and can save a lot of boilerplate code. Feature templates are normally applied from service code. See $NCS_DIR/examples.ncs/service-provider/mpls-vpn/packages/l3vpn/templates/l3vpn-acl.xml, which is used by the l3vpn service in the mpls-vpn example.
The typical way to design a template is to start with direct manipulation of a device until the desired result is reached, or to write a device-template, apply it, inspect the changes, and correct it until the result is right. This process can be performed in the NSO CLI or in the Web UI. Below we show how to do this in the CLI.
The process of defining a config-template is illustrated in two ways; both methods result in the same config-template, Example 161, “Created Config-template l3vpn-ce.xml”, being constructed.
Note The name of the config-template is the name of the file without the extension .xml. All template names reside in the same namespace. A good practice is to name the file <package name>-<feature>.xml; that way the name of the template will always be unique.
Apply the device-template to the ce0 device and check the changes by doing a dry-run using the native outformat of the device:
admin@ncs(config)# devices device ce0 apply-template template-name l3vpn-ce
admin@ncs(config)# commit dry-run outformat native
native {
device {
name ce0
data interface GigabitEthernet0/1
description Link to PE
ip address 10.1.1.1 255.255.255.252
exit
}
}
If you spot an error, revert the changes done by the template, add the missing configuration to the template
and commit.
When you are satisfied with the result, save the device-template to a file:
admin@ncs(config)#
show full-configuration devices template l3vpn-ce | display xml | save l3vpn-ce.xml
Open the file l3vpn-ce.xml in any editor. Change the beginning of the file from:
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<template>
<name>l3vpn-ce</name>
to:
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce0</name>
Note the name of the device; it just serves as an example. The device name can be a static value, an XPath variable, or the result of an XPath expression.
At the end, the XML elements need to be closed; change from:
</template>
</devices>
</config>
to:
</device>
</devices>
</config-template>
The final result should look like Example 161, “Created Config-template l3vpn-ce.xml”.
Check that the modifications of the network are the desired ones. Here is what is actually sent to the device, using the native format:
admin@ncs(config)# commit dry-run outformat native
native {
device {
name ce0
The same changes in XML format serve as a good skeleton for a template. Save the modifications to a file:
admin@ncs(config)# commit dry-run outformat xml | save l3vpn-ce.xml
Open the file l3vpn-ce.xml in any text editor. Remove the leaf name result-xml. Insert the template-specific tag at the beginning of the file:
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
The final result should look like Example 161, “Created Config-template l3vpn-ce.xml”.
The skeleton above is now ready to be edited; introduce expressions instead of static values to make it more useful. Look at $NCS_DIR/examples.ncs/service-provider/mpls-vpn/packages/l3vpn/templates/l3vpn-ce.xml to see how the skeleton could be extended.
The section called “Service Templates” shows how a config-template is used as a service template, and the section called “Templates applied from an API” shows how a more complex config-template is used from the Java API.
Basic principles
A template is declared in accordance with the YANG data model. You just set and create node elements to
build the desired structure. Or paraphrased, you simply declare the result.
A single template can address devices from different vendors. Furthermore, templates also allow for defining different behavior when applying the template. This is accomplished by setting tags such as merge, replace, delete, create or nocreate on relevant nodes in the template.
Values in a template
Note The variables $DEVICE and $TEMPLATE_NAME are set internally by NSO. $DEVICE is set to the name of the current device, and $TEMPLATE_NAME to the name of the current template. Neither of these variables can be set by a user; they can, however, be used in a template like any other variable.
Each value in a template is stored as a string. This string value is converted to the actual value type of the
YANG model when the template is applied.
If the value contains a pair of {...} the string between the curly braces is treated as an XPath 1.0
expression. The simplest form of an XPath expression is a plain XPath variable:
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{$CE}</name>
...
</device>
</devices>
</config-template>
Any value assigned to the variable CE, which can be done via the Java API, will be used when the
template is applied:
...
TemplateVariables var = new TemplateVariables();
var.putQuoted("CE","ce0");
...
A value can contain any number of {...} expressions and strings; the total result is the concatenation of all the strings and all the XPath expression results.
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{$CE}</name>
...
<config>
<interface xmlns="urn:ios">
<GigabitEthernet>
...
<description>Link to PE / {$PE} - {$PE_INT_NAME}</description>
</device>
</devices>
</config-template>
When applied, this would, for example, evaluate the XPath expressions {$PE} and {$PE_INT_NAME} to "pe0" and "GigabitEthernet0/0/0/3", giving the total result of:
<description>Link to PE / pe0 - GigabitEthernet0/0/0/3</description>
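The concatenation can be mimicked in plain Python (an illustration only, not NSO code): each XPath result is converted to a string and joined with the literal parts of the value:

```python
# Template value "Link to PE / {$PE} - {$PE_INT_NAME}" with example bindings.
variables = {"PE": "pe0", "PE_INT_NAME": "GigabitEthernet0/0/0/3"}
value = "Link to PE / %s - %s" % (variables["PE"], variables["PE_INT_NAME"])
print(value)  # Link to PE / pe0 - GigabitEthernet0/0/0/3
```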
As the text between the {...} is an XPath 1.0 expression, we can select any reachable node:
/endpoint/ce/device
This selects a leaf node, device. When the template is applied, the value of the selected leaf is used as the value at the point where the expression occurs.
The result of a selection is internally converted to a string representation. All concatenations are done on strings. When the string value is assigned to a leaf element, a string-to-value conversion takes place in the context of the target node.
Processing instructions
The NSO template engine supports a number of processing instructions that allow for more dynamic templates.
The following processing instructions are available:
• <?set variable = value?> — Assigns a new variable or manipulates the value of an existing one. If used to create a new variable, the scope of visibility of this variable is limited to the parent tag of the processing instruction or to the current processing instruction block. Specifically, if a new variable is defined inside a loop, it is discarded at the end of each iteration.
• <?if {expression}?> ... <?else?> ... <?end?> — A processing instruction block that allows conditional execution based on the boolean result of the expression. For a detailed description see the section called “Conditional Statements”.
• <?foreach {expression}?> ... <?end?> — The expression must evaluate to a (possibly empty) XPath node-set. The template engine then iterates over each node in the node-set by changing the XPath current context node to that node and evaluating all children tags within this context. For a detailed description see the section called “Loop Statements”.
• <?for [variable = initial value]; {progress condition}; [variable = next value]?> ... <?end?> — Iterates over the same set of template tags by changing a variable value. The variable visibility scope obeys the same rules as for the set processing instruction, except that the variable value is carried over to the next iteration instead of being discarded at the end of each iteration. The square brackets indicate optional clauses, so only the condition expression is mandatory. For a detailed description see the section called “Loop Statements”.
• <?copy-tree {source}?> — Analogous to the copy_tree function available in the MAAPI API. The parameter is an XPath expression which must evaluate to exactly one node in the data tree and indicates the source path to copy from. The target path is defined by the position of the copy-tree instruction in the template within the current context.
• <?set-context-node {expression}?> — Manipulates the current context node used to evaluate XPath expressions in the template. The expression is evaluated within the current XPath context and must evaluate to exactly one node in the data tree.
• <?set-root-node {expression}?> — Manipulates the root node of the XPath accessible tree. This expression is evaluated in an XPath context where the accessible tree is the entire datastore, which means that it is possible to select a root node outside the current accessible tree. The current context node remains unchanged. Just like with the set-context-node instruction, the expression must evaluate to exactly one node in the data tree.
• <?save-context name?> — Stores both the current context node and the root node of the XPath accessible tree, with name being the key to access them later. It is possible to switch to this context later using switch-context with the name. Multiple contexts can be stored simultaneously under different names. Using save-context with the same name multiple times results in the stored context being overwritten.
• <?switch-context name?> — Switches to a context stored using save-context with the specified name. This means that both the current context node and the root node of the XPath accessible tree are changed to the stored values. switch-context does not remove the context from the storage and can be used as many times as needed; however, using it with a name that does not exist in the storage causes an error.
The variable value in both set and for processing instructions is evaluated in the same way as the values
within XML tags in a template (see the section called “Values in a template”). So, it can be a mix of literal
values and XPath expressions surrounded by {...}.
The variable value is always stored as a string, so any XPath expression result is converted to a literal using the XPath string() function. Namely, if the expression results in an integer or a boolean, the resulting value is the string representation of that integer or boolean. If the expression results in a node-set, the value of the variable is a concatenated string of the values of the nodes in this node-set.
It is important to keep in mind that while in some cases XPath converts the literal to another type implicitly
(for example, in an expression {$x < 3} a possible value x='1' would be converted to integer 1
implicitly), in other cases an explicit conversion is needed. For example, in the following expression
{$x > $y} if x='9' and y='11', then the result of the expression is true due to alphabetic order. In
order to compare the values as numbers an explicit conversion of at least one argument is required:
{number($x) > $y}.
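The same pitfall can be reproduced in plain Python (an illustration only, not NSO code), since Python compares strings lexicographically just as XPath does:

```python
# XPath-style comparison pitfall: lexicographic vs numeric ordering.
x, y = "9", "11"

# Comparing the raw strings: "9" sorts after "11", so this is True.
lexicographic = x > y

# Converting one operand first (the number() analogue): 9 > 11 is False.
numeric = float(x) > float(y)

print(lexicographic, numeric)  # True False
```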
Note When a template is applied from an API, the context node can be specified. See the documentation for the API of your choice.
The root context node can also be changed from within a template with the help of the set-root-node processing instruction. It takes an XPath expression as a parameter, and this expression is evaluated in a special context where the root node is the root of the datastore. This makes it possible to change to a node outside the current evaluation context. For example, <?set-root-node {/}?> changes the accessible tree to the whole datastore. As with other processing instructions, the effect of set-root-node only applies until the closing parent tag.
Under certain criteria the evaluation context node is changed to make it easier to work with lists in the
templates. To understand the evaluation of templates it is important to understand how and when the
context node is changed.
The evaluation context will change if the value being set in the template is the key of a list and the XPath expression evaluates to a node-set. However, if the expression evaluates to a value, the context will not change. To explain this algorithm, let's look at the following example.
The device YANG model has the following snippet.
container vrf {
list definition {
key "name";
leaf name {
type string;
}
}
}
The XPath function string() is used within the first expression. This evaluates to a value which means
the context will remain unchanged. This also means that the second template parameter {as-number}
will still be relative to what the context was before, i.e. it remains unchanged.
Another way to write the latter example would be:
<vrf xmlns="urn:ios" tags="merge">
<?set-root-node {.}?>
<definition>
<name>{/vpn-service[name='volvo']/leg[node='branch']/name}</name>
<rd>{/as-number}:1</rd>
</definition>
</vrf>
This example temporarily changes the accessible tree to the subtree under the current context node which
makes it easier to write paths under the current node. The accessible tree is restored after the closing </vrf>
tag.
It is also possible to change the current context node using the set-context-node processing instruction. For example, <?set-context-node {..}?> changes the context node to the parent of the current context node.
Conditional Statements
Sometimes it is necessary to control which parts of a template should be evaluated. The if processing instruction makes it possible to set a conditional statement that controls whether a sub-tree should be evaluated or not.
<policy-map xmlns="urn:ios" tags="merge">
<name>{$POLICY_NAME}</name>
<class>
<name>{$CLASS_NAME}</name>
<?if {qos-class/priority}?>
<priority>
<percent>{$CLASS_BW}</percent>
</priority>
<?else?>
<bandwidth>
<percent>{$CLASS_BW}</percent>
</bandwidth>
<?end?>
<set>
<ip>
<dscp>{$CLASS_DSCP}</dscp>
</ip>
</set>
</class>
</policy-map>
The template example above shows the use of the if and else processing instructions to select between the tags priority and bandwidth. The sub-tree containing the priority tag will only be evaluated if the XPath statement {qos-class/priority} in the if processing instruction evaluates to true. In this case it is true as long as it does not point to an empty node-set, i.e. as long as the leaf qos-class/priority exists. The sub-tree under the else processing instruction is evaluated when the preceding if expression evaluated to false, i.e. if the leaf qos-class/priority does not exist.
The evaluation of the XPath statements used in the if processing instruction follows the XPath standard for computing boolean values. In summary, the conditional expression evaluates to false when the expression results in an empty node-set, an empty string, the number zero (or NaN), or the boolean value false.
Loop Statements
Sometimes the statements in a sub-tree need to be applied several times. The foreach and for processing instructions can be used to accomplish this iteration.
<ip xmlns="urn:ios">
<route>
<vrf>
<name>VPN{/vpn-number}</name>
<?foreach {tunnel}?>
<ip-route-forwarding-list>
<prefix>{network}</prefix>
<mask>{netmask}</mask>
<forwarding-address>10.255.254.{(tunnel-number-1)*4+2}</forwarding-address>
</ip-route-forwarding-list>
<?end?>
</vrf>
</route>
</ip>
The template example above shows the use of the foreach processing instruction to populate the list ip-route-forwarding-list. If the result of the XPath expression tunnel is a non-empty node-set, the sub-tree containing the ip-route-forwarding-list tag is evaluated once for every node in that node-set.
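The forwarding-address arithmetic above can be checked by hand; the following plain-Python rendering of the expression (an illustration only; the /30-per-tunnel numbering is an assumption about the example's addressing plan) shows the addresses it generates:

```python
# Mirror of the XPath expression (tunnel-number - 1) * 4 + 2: each tunnel is
# assumed to get its own /30 out of 10.255.254.0, and the expression picks
# the second address of that /30.
def forwarding_address(tunnel_number):
    return "10.255.254.%d" % ((tunnel_number - 1) * 4 + 2)

for n in (1, 2, 3):
    print(n, forwarding_address(n))
# 1 10.255.254.2
# 2 10.255.254.6
# 3 10.255.254.10
```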
For each iteration the initial context will be set to the node in that iteration. The XPath function
current() can be used to retrieve that initial context.
There can be only one XPath statement in the foreach processing instruction, and the result needs to be a node-set, not a simple value. It is however possible to use the XPath union operator to join multiple node-sets in a single expression: {some-list-1 | some-leaf-list-2}
for is a processing instruction that controls flow by changing a variable rather than the context node. For example, the following template snippet could be used to disable a range of interfaces:
<interface xmlns="urn:ios">
<?for i=0; {$i < 4}; i={$i + 1}?>
<FastEthernet>
<name>0/{$i}</name>
<shutdown/>
</FastEthernet>
<?end?>
</interface>
In this example the for keyword is followed by three semicolon-separated clauses with the following
meaning:
• The first clause is the initial step, executed once before the loop is entered. Its format is a variable name followed by an equals sign and an expression that combines literal strings and XPath expressions surrounded by {}. This expression is evaluated in the same way as the XML tag contents in templates. This clause is optional.
• The second clause is the progress condition. The loop will execute as long as this condition evaluates
to true, using the same rules as the if processing instruction. The format of this clause is an XPath
expression surrounded by {}. This clause is mandatory.
• The third clause is executed after each iteration. It has the same format as the first clause (variable
assignment) and is optional.
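The three clauses map directly onto an ordinary while loop; here is a plain-Python rendering of the FastEthernet example above (an illustration only, not NSO code):

```python
# The <?for i=0; {$i < 4}; i={$i + 1}?> loop, clause by clause.
names = []
i = 0                    # first clause: initial step, runs once
while i < 4:             # second clause: progress condition
    names.append("0/%d" % i)   # the <name>0/{$i}</name> value
    i = i + 1            # third clause: runs after each iteration
print(names)  # ['0/0', '0/1', '0/2', '0/3']
```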
• merge: Merge with a node if it exists, otherwise create the node. This is the default operation if no
operation is explicitly set.
...
<config tags="merge">
<interface xmlns="urn:ios">
...
• replace: Replace a node if it exists, otherwise create the node.
...
<GigabitEthernet tags="replace">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
• create: Creates a node. The node can not already exist. An error is raised if the node exists.
...
<GigabitEthernet tags="create">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
• nocreate: Merge with a node if it exists. If it does not exist, it will not be created.
...
<GigabitEthernet tags="nocreate">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
• delete: Delete the node.
...
<GigabitEthernet tags="delete">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
It is not uncommon for multiple services to manage the same ordered-by user list or leaf-list. The relative order of the elements inserted by these services might not matter, but there are constraints on element positions that need to be fulfilled. Continuing the list-of-rules example, suppose that initially the list contains only the "deny-all" rule:
<rule>
<ip>0.0.0.0</ip>
<mask>0.0.0.0</mask>
<action>deny</action>
</rule>
There are services that prepend permit rules to the beginning of the list using the insert="first" operation. If there are two services creating one entry each, 10.0.0.0/8 and 192.168.0.0/24 respectively, then the resulting configuration looks like:
<rule>
<ip>192.168.0.0</ip>
<mask>255.255.255.0</mask>
<action>permit</action>
</rule>
<rule>
<ip>10.0.0.0</ip>
<mask>255.0.0.0</mask>
<action>permit</action>
</rule>
<rule>
<ip>0.0.0.0</ip>
<mask>0.0.0.0</mask>
<action>deny</action>
</rule>
If we now check-sync the first service (10.0.0.0/8), it will report out-of-sync, and re-deploying it would move the 10.0.0.0/8 rule first. But all we want is for the deny-all rule to be last. This is when the guard attribute comes in handy. If both the insert and guard attributes are specified on a list entry in a template, the template engine first checks whether the list entry already exists in the resulting configuration between the target position (as indicated by the insert attribute) and the position of the element indicated by the guard.
• If the element exists and fulfills this constraint, its position will be preserved. If a template list entry results in multiple configuration list entries, all of them need to exist in the configuration in the same order as calculated by the template, and all of them need to fulfill the guard constraint in order for their positions to be preserved.
• If the list entry/entries do not exist, are not in the same order or do not fulfill the constraint, then the
list is reordered as instructed by the insert statement.
enum permit;
enum deny;
}
mandatory true;
}
...
}
...
...
<rule insert="first" guard="0.0.0.0 0.0.0.0">
<ip>{$IP}</ip>
<mask>{$MASK}</mask>
<action>permit</action>
</rule>
...
A guard can be specified literally (e.g. guard="0.0.0.0 0.0.0.0" if "ip mask" is the key of the
list) or using an XPath expression (e.g. guard="{$GUARDIP} {$GUARDMASK}"). If the guard
evaluates to a node-set consisting of multiple elements, then only the first element in this node-set is
considered as the guard. The constraint defined by the guard is evaluated as follows:
• If insert="first", then the constraint is fulfilled if the element exists in the configuration before
the element indicated by the guard.
• If insert="last", then the constraint is fulfilled if the element exists in the configuration after
the element indicated by the guard.
• If insert="after", then the constraint is fulfilled if the element exists in the configuration before
the element indicated by the guard, but after the element indicated by the key or value statement.
• If insert="before", then the constraint is fulfilled if the element exists in the configuration after
the element indicated by the guard, but before the element indicated by the key or value statement.
• If the guard evaluates to an empty node-set (i.e. the node indicated by the guard does not exist in the
target configuration), then the constraint is not fulfilled.
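For the single-entry case, the insert="first" decision described above can be sketched as a small plain-Python simulation (not NSO code; the rule strings and the helper name are made up for illustration):

```python
# Simulates the insert="first" + guard check for one list entry: keep the
# entry where it is if it already sits before the guard entry; otherwise the
# list would be reordered as instructed by insert="first".
def position_preserved(rules, entry, guard):
    if guard not in rules or entry not in rules:
        return False  # guard evaluates to an empty node-set, or entry missing
    return rules.index(entry) < rules.index(guard)

rules = ["192.168.0.0 255.255.255.0 permit",
         "10.0.0.0 255.0.0.0 permit",
         "0.0.0.0 0.0.0.0 deny"]

# 10.0.0.0/8 is already before the deny-all guard: no reordering needed.
print(position_preserved(rules, "10.0.0.0 255.0.0.0 permit",
                         "0.0.0.0 0.0.0.0 deny"))  # True
```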
Debugging templates
Sometimes extra information is needed when applying templates in order to understand what is going
on. When applying or committing a template, the CLI pipe command debug enables
debug information:
debug template will output the XPath expression results from the template, the context under which each
expression is evaluated, which operation is used and how it affects the configuration, for all templates invoked. It can be
narrowed down to only show debugging information for a specific template:
debug xpath will output all XPath evaluations for the transaction, and is not limited to the XPath
expressions inside templates.
Note The example snippets in this section are based on the l3vpn service template, which is described in
detail in the section called “Service Templates”. The template itself can be found under $NCS_DIR/
examples.ncs/service-provider/simple-mpls-vpn. To try it out yourself, create the service as described in the
README in the example directory.
Using the CLI pipe flag debug template when committing a service which uses one or more templates will give
you detailed information on what the template will do. Below are selected snippets, with explanations, of the
output when committing the l3vpn service template with the debug flag. Some lines have been shortened to
fit the page.
admin@ncs(config)# commit dry-run | debug template
Evaluating "/endpoint/ce/device" (from file "l3vpn.xml", line 5)
Context node: /vpn/l3vpn[name='volvo']
Result:
For /vpn/l3vpn[name='volvo']/endpoint[id='c1']/ce, it evaluates to "ce0"
For /vpn/l3vpn[name='volvo']/endpoint[id='c2']/ce, it evaluates to "ce2"
Operation 'nocreate' on node /devices/device[name='ce0'] (from file "l3vpn.xml", line 5)
Node exists, continuing...
...
The output shows that a config-template is in play (the origin of the template is an XML file,
l3vpn.xml). It shows that /endpoint/ce/device from line 5 is evaluated under the context /
vpn/l3vpn[name='volvo'], and that it evaluates to two values, ce0 and ce2, as /endpoint/ce/
device is a node set of two instances, c1 and c2. Next, it shows that it will start with the first node in the
set and perform a 'nocreate' operation, and that the node exists. Lines 4-6 in l3vpn.xml:
4 <device tags="nocreate" >
5 <name>{/endpoint/ce/device}</name>
6 <config tags="merge">
Above it shows that the node description, which did not exist beforehand, will be set to the value
Link to PE. Line 11 in the l3vpn.xml:
11 <description tags="merge">Link to PE</description>
Further down in the output it is shown that the device pe2 does not support configuration with namespace
urn:ios and therefore that part of the template, and its siblings, will be skipped. Line 69 in the
l3vpn.xml:
69 <interface xmlns="urn:ios" tags="nocreate">
These were just selected extracts of the output; the real output shows each and every effect the template will
have. As seen, the debug command is very useful for understanding how a template is interpreted by
the system.
XPath help
In addition to the debug command, there are some other options available which can be used in other
scenarios. To get the XPath selections right, use the NSO CLI show command with the xpath display
flag to find out the correct path to an instance node. This shows the names of the key elements and also the
namespace changes.
% show full-configuration devices device c0 config ios:interface | display xpath
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/0']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/1']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/2']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/1']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/2']
When using more complex expressions, the ncs_cmd utility can be used to experiment with and debug
expressions. ncs_cmd is used from a command shell. The command does not print the result as XPath
selections but is still of great use when debugging XPath expressions. The example below selects
FastEthernet interface names on device c0:
$ ncs_cmd -c \
"x /devices/device[name='c0']/config/ios:interface/FastEthernet/name"
/devices/device{c0}/config/interface/FastEthernet{1/0}/name [1/0]
/devices/device{c0}/config/interface/FastEthernet{1/1}/name [1/1]
/devices/device{c0}/config/interface/FastEthernet{1/2}/name [1/2]
/devices/device{c0}/config/interface/FastEthernet{2/1}/name [2/1]
/devices/device{c0}/config/interface/FastEthernet{2/2}/name [2/2]
Service Templates
The core function of NSO is the capability to map service models to device models. When the mapping
is a pure function from service parameters to device parameters, the template mechanism can be used to
define this mapping in a declarative way. When the mapping includes calls to external systems, complex
algorithms, etc., it can be expressed in programmatic mapping logic instead. NSO supports Java for
defining mapping logic.
Templates can express the mapping in many cases and have the benefit of being expressed in a way
network engineers think. Based on the concepts presented above you can look at using templates to specify
service models and how the service models can be transformed to device configurations. This would be
of limited use without FASTMAP. The template mechanism declaratively maps the service configuration
to device configurations, but it is the NSO FASTMAP algorithm that enables NSO users to modify
and delete service instances: FASTMAP in combination with the template definition will
calculate and apply the minimum diff needed to apply the changes to the network.
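The FASTMAP idea can be illustrated with a toy Python sketch (this is an illustration of the concept, not NSO internals): on any service change, the create mapping is simply re-run from scratch, and the minimum diff is derived by comparing the new output with the previously stored output.

```python
def fastmap_modify(old_output, create, new_input):
    """old_output: device config previously produced for this service
    instance; create: the pure service-to-device mapping; new_input: the
    modified service parameters. Returns the minimum (add, delete) diff."""
    new_output = create(new_input)  # re-run the mapping from scratch
    # keys whose value is new or changed must be (re)written
    to_add = {k: v for k, v in new_output.items() if old_output.get(k) != v}
    # keys no longer produced by the mapping must be removed
    to_delete = {k: v for k, v in old_output.items() if k not in new_output}
    return to_add, to_delete
```

The service author only writes the create mapping; modification and deletion come for free from the diff computation.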
The example will illustrate this by setting up Layer 3 VPNs in a service provider MPLS network. The
example consists of Cisco ASR 9k core routers (P and PE) and Cisco IOS based CE routers, as shown in
Figure 163, “The example network”. All the code for the example can be found as a running example in:
$NCS_DIR/examples.ncs/service-provider/simple-mpls-vpn
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint "l3vpn-template";
list endpoint {
key "id";
leaf id {
tailf:info "Endpoint identifier";
type string;
}
leaf as-number {
description "AS used within all VRF of the VPN";
tailf:info "MPLS VPN AS number.";
mandatory true;
type uint32;
}
container ce {
leaf device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
container local {
uses endpoint-grouping;
}
container link {
uses endpoint-grouping;
}
}
container pe {
leaf device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
container link {
uses endpoint-grouping;
}
}
}
grouping endpoint-grouping {
leaf interface-name {
tailf:info "Interface name. For example FastEthernet.";
type string;
}
leaf interface-number {
tailf:info "Interface number. For example 0 or 1/0";
type string;
}
leaf ip-address {
tailf:info "Local interface address.";
type inet:ipv4-address;
}
}
The corresponding template that maps the service data to the device data is shown in Example 165,
“Service to Device Model mapping using Templates”.
Note In order to save space, only some parts of the template will be shown here. Refer to the full example for the
complete template.
In several places the template uses data from the service data model; note the expressions enclosed in curly
brackets in the listing below:
...
T 181: </devices>
T 182:</config-template>}
The template will have the same name as the file it was loaded from, minus the .xml suffix. Note the
attribute at line 'T 2': this template will be applied when changes are made at servicepoint l3vpn-template.
Or paraphrased: this template implements the service at servicepoint l3vpn-template.
The service data used in this example is:
S 3: as-number 65001
S 4: ce device ce0
S 5: ce local interface-name GigabitEthernet
S 6: ce local interface-number 0/9
S 7: ce local ip-address 192.168.0.1
S 8: ce link interface-name GigabitEthernet
S 9: ce link interface-number 0/2
S 10: ce link ip-address 10.1.1.1
S 11: pe device pe2
S 12: pe link interface-name GigabitEthernet
S 13: pe link interface-number 0/0/0/1
S 14: pe link ip-address 10.1.1.2
S 15: !
S 16: endpoint c2
S 17: as-number 65001
S 18: ce device ce2
S 19: ce local interface-name GigabitEthernet
S 20: ce local interface-number 0/3
S 21: ce local ip-address 192.168.1.1
S 22: ce link interface-name GigabitEthernet
S 23: ce link interface-number 0/1
S 24: ce link ip-address 10.2.1.1
S 25: pe device pe2
S 26: pe link interface-name GigabitEthernet
S 27: pe link interface-number 0/0/0/2
S 28: pe link ip-address 10.2.1.2
S 29:}
• Line 'T 4': This annotates the node with the tag nocreate. No device nodes will be created; this
tag is in effect until a new tag is introduced. No sub-nodes of the device node will be created either.
Changes introduced by the template will only be applied to existing nodes.
Further, the XPath expression within curly brackets, /endpoint/ce, is an absolute path. The root
of all paths is the service data at 'S 1': vpn l3vpn volvo. The XPath evaluation context is set
to this root. Hence, 'T 4' will result in a node set consisting of the nodes at: vpn l3vpn volvo
endpoint ce.
The processing instruction foreach causes the template to be applied once for every node in the
resulting node-set of the XPath expression endpoint/ce. For every iteration the initial context will
be set to each node in turn.
• Line 'T 5': The XPath expression within curly brackets, device, is a relative path. This path is relative
to the initial context.
In this example, this node set will contain ce0 (line 'S 4') and ce2 (line 'S 18'), which means that
the lines of the template, 'T 4' to 'T 49', will be applied first to the device ce0 and then to the
device ce2.
As a side effect of the XPath expression at 'T 5', the evaluation context will be changed to its parent
node (ce). This makes it more convenient to use relative XPath expressions in the rest of the
template. Read more about the XPath evaluation context in the section called “XPath Context in
config-templates”.
• Line 'T 6': Change the way the template is applied by annotating the node config with the tag
merge. Existing nodes will be changed, and if a node does not exist it will be created.
• Line 'T 8': Note the namespace attribute value "urn:ios" of the node interface. This will
make sure that only devices modeled with this particular namespace are affected by this part of
the template. Hence, it is possible for one template to handle a multitude of devices from various
manufacturers.
• Line 'T 9': Iterate over the nodes in the node-set resulting from the relative XPath expression link.
Note how you can use a relative XPath expression here since the evaluation context was changed at
line 'T 5'.
• Line 'T 10': Select the data at the relative XPath interface-number. This selects the value 0/2
at 'S 9' for the ce0 node, and the value 0/1 at 'S 23' for the ce2 node.
Note how you can use a relative XPath expression here since the evaluation context was changed at
line 'T 5'.
When you have entered the service configuration, you can check what will be committed by doing a
commit dry-run. Only some pieces of the output are shown below:
Feature Template
A feature template is a config-template used to configure a specific feature. By using feature templates, the
service code can be completely device-vendor agnostic. Feature templates are a nice way to decompose a
problem into smaller, reusable chunks.
In the example $NCS_DIR/examples.ncs/service-provider/mpls-vpn, an l3vpn-acl template
is used to create ACL entries for all service-specific match rules, and is applied from the Java service
code. Below you will find the l3vpn-acl.xml as well as parts of the Java logic, which show how the
template variables are set and then how the template is applied.
The setting of the variables and applying of the template from Java code:
private TemplateVariables setAclVars(NavuContainer match,
String namePrefix)
throws NavuException, UnknownHostException {
TemplateVariables aclVar = new TemplateVariables();
prefixToWildcardMask(getIPPrefix(
match.leaf("destination-ip").
valueAsString())));
}
aclVar.putQuoted("PORT_START", match.leaf("port-start").
valueAsString());
aclVar.putQuoted("PORT_END", match.leaf("port-end").
valueAsString());
return aclVar;
}
...
TemplateVariables aclVar = setAclVars(match,namePrefix);
aclTemplate.apply(service, aclVar);
...
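The prefixToWildcardMask helper called in the snippet above is not shown in this excerpt. As an illustration only (a hedged Python sketch, with an assumed behavior of converting a prefix length to a Cisco-style inverse/wildcard mask; the name and signature are not taken from the example source), such a conversion could look like:

```python
def prefix_to_wildcard_mask(prefix_len: int) -> str:
    """Convert a prefix length (0..32) to a dotted-quad wildcard mask,
    i.e. the bitwise inverse of the corresponding network mask."""
    host_bits = 32 - prefix_len
    wc = (1 << host_bits) - 1           # all-ones in the host portion
    return ".".join(str((wc >> s) & 0xFF) for s in (24, 16, 8, 0))
```

For example, a /24 prefix would yield the wildcard mask 0.0.0.255, which is the form Cisco IOS ACL entries expect.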
One way to handle this is to use smaller templates with variables and invoke them repeatedly, passing
the parameters from the list one by one, or pair by pair in the case of a map. However, there are certain
disadvantages with this approach. One of them is performance: every invocation of the template from
the API requires a context switch between the user application process and the NSO core process, which can
be costly. Another disadvantage is that the logic of the service is split between Java/Python code and the
template, which makes it harder to write and understand such services.
The approach suggested in this section involves modelling the auxiliary data used by the service as
operational data and populating it from the service code in Java or Python. After that, the service callback
passes control to the template that handles the main service logic. Such auxiliary data is then available
for selection by means of XPath, just like any other service input data.
There could be different approaches to modelling the auxiliary data. It can reside in the service tree, as it
is private to the service instance: either integrated in the existing data tree, or as a separate subtree under
the service instance. It can also be located outside of the service instance; however, it is important to keep
in mind that operational data cannot be shared by multiple services because there are no refcounters or
backpointers stored on operational data.
After the service is deployed, the auxiliary leafs remain in the database, which facilitates debugging
because they can be seen via all northbound interfaces. If this is not the intention, they can be hidden
with the help of the tailf:hidden statement. Because operational data is also part of the FASTMAP diff,
these values will be deleted when the service is deleted and recomputed when the service is re-
deployed. This also means that in most cases there should be no need to write any additional code to clean
up this data.
One example of a task that is hard to solve in the template itself using native XPath functions is converting
a network prefix into a network mask or vice versa. Below is a snippet of data model that is part of a
service input data and contains a list of interfaces along with IP addresses to be configured on those
interfaces. If the format of the IP address on the input is an IP address with prefix but the device accepts an
IP address with network mask instead, then we can use an auxiliary leaf mask to be able to convert prefix
to the suitable format before the service template takes control.
list interface {
key name;
leaf name {
type string;
}
leaf address {
type tailf:ipv4-address-and-prefix-length;
description
"IP address with prefix in the following format, e.g.: 10.2.3.4/24";
}
leaf mask {
config false;
type inet:ipv4-address;
description
"Auxiliary data populated by service code, represents network mask
corresponding to the prefix in the address field, e.g.: 255.255.255.0";
}
}
The service code needs to populate the mask. This example uses the NAVU API to do that, but if the
list is known to be large, it might be useful to use Maapi.setValues() instead to set the leafs in one go.
for (NavuListEntry interface : interfaces) {
String address = interface.leaf("address").valueAsString();
String prefix = address.split("/")[1];
String mask = prefixToNetMask(prefix);
// sharedSet() does not take effect on operational data
// using set() instead
interface.leaf("mask").set(mask);
}
...
// TemplateVariables don't need to contain mask
// it is passed via database
TemplateVariables tv = new TemplateVariables();
template.apply(service, tv);
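The prefixToNetMask helper used in the loop above is not shown. A minimal Python sketch of the equivalent conversion (assuming, as in the split above, that the input is the prefix length as a string) might be:

```python
def prefix_to_netmask(prefix_len: str) -> str:
    """Convert a prefix length such as "24" to a dotted-quad network mask,
    e.g. "255.255.255.0", matching the description of the mask leaf."""
    n = int(prefix_len)
    mask = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF  # n leading one-bits
    return ".".join(str((mask >> s) & 0xFF) for s in (24, 16, 8, 0))

# Mirroring the Java loop: split "10.2.3.4/24" and compute the mask.
address = "10.2.3.4/24"
mask = prefix_to_netmask(address.split("/")[1])
```

With the mask leaf populated this way, the template can select it with a plain XPath expression instead of receiving it as a template variable.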
Introduction
This chapter focuses on how to manipulate the NSO alarm table using the dedicated Alarm APIs. Make sure
that the concepts in the section called “Alarm Manager Introduction” in NSO 5.3 User Guide are well
understood before reading this section.
The Alarm API provides a simplified way of managing your alarms for the most common alarm
management use cases. The API is divided into a producer and a consumer part.
The producer part provides an alarm sink. Using an alarm sink you can submit your alarms into the system.
The alarms are then queued and fed into the NSO alarm list. You can have multiple alarm sinks active at
any time.
The consumer part provides an alarm source. The alarm source lets you listen for new alarms and alarm
changes. As with the producer side, you can have multiple alarm sources listening for new and changed
alarms in parallel.
The diagram below shows a high-level view of the flow of alarms in and out of the system. Alarms are
received, e.g. as SNMP notifications, and fed into the NSO alarm list. At the other end you subscribe to
the alarm changes.
• Centralized Mode: This is the preferred mode for NSO. In centralized mode we submit alarms
towards a central alarm writer that optimizes the number of sessions towards the CDB. The NSO Java
VM will set up the centralized alarm sink at start-up, and it will be available for all Java components
run by the NSO Java VM.
• Local Mode: In local mode we submit alarms directly into the CDB. In this case each alarm sink
keeps its own CDB session. This mode is recommended for applications run outside of the
NSO Java VM, or for Java components that have a specific need to control the CDB session.
The difference between the two modes is manifested by the way you retrieve the AlarmSink instance
to use for alarm submission. For submitting an alarm in centralized mode, a prerequisite is that a central
alarm sink has been set up within your JVM. For components in the NSO Java VM this is done for you.
For applications outside of the NSO Java VM which want to utilize the centralized mode, you need to get
an AlarmSinkCentral instance. This instance has to be started, and the central will then execute in a
separate thread. The application needs to maintain this instance and stop it when the application finishes.
The centralized alarm sink can then be retrieved using the default constructor of the AlarmSink class.
When submitting an alarm using the local mode, you need a CDB socket and a Cdb instance. The local
mode alarm sink needs the Cdb instance in order to write alarm info to CDB. The local alarm sink is
retrieved using a constructor with a Cdb instance as an argument.
The sink.submitAlarm(...) method provided by the AlarmSink instance can be used in both
centralized and local mode to submit an alarm.
Example 172. Alarm submit
package com.tailf.ncs.alarmman.producer;
...
/**
* Submits the specified <code>Alarm</code> into the alarm list.
* If the alarms key
* "managedDevice, managedObject, alarmType, specificProblem" already
* exists, the existing alarm will be updated with a
* new status change entry.
*
* Alarm identity:
*
* @param managedDevice the managed device which emits the alarm.
*
* @param managedObject the managed object emitting the alarm.
*
* @param alarmtype the alarm type of the alarm.
*
* @param specificProblem is used when the alarmtype cannot uniquely
* identify the alarm type. Normally, this is not the case,
* and this leaf is the empty string.
*
* Status change within the alarm:
* @param severity the severity of the alarm.
* @param alarmText the alarm text
* @param impactedObjects Objects that might be affected by this alarm
* @param relatedAlarms Alarms related to this alarm
* @param rootCauseObjects Objects that are candidates for causing the
* alarm.
* @param timeStamp The time the status of the alarm changed,
* as reported by the device
* @param customAttributes Custom attributes
*
* @return whether submitting the specified
* alarm was successful
*
* @throws IOException
* @throws ConfException
* @throws NavuException
*/
public synchronized boolean
submitAlarm(ManagedDevice managedDevice,
ManagedObject managedObject,
ConfIdentityRef alarmtype,
ConfBuf specificProblem,
PerceivedSeverity severity,
ConfBuf alarmText,
List<ManagedObject> impactedObjects,
List<AlarmId> relatedAlarms,
List<ManagedObject> rootCauseObjects,
ConfDatetime timeStamp,
Attribute ... customAttributes)
...
}
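The alarm-identity semantics described in the javadoc above can be illustrated with a small Python sketch (an illustration of the behaviour, not the actual API): alarms are keyed by managed device, managed object, alarm type and specific problem, and submitting an alarm with an existing key appends a new status-change entry to the existing alarm rather than creating a duplicate.

```python
class AlarmList:
    """Toy model of the NSO alarm list keyed as described above."""

    def __init__(self):
        self.alarms = {}  # key -> list of status-change entries

    def submit(self, device, obj, alarm_type, specific_problem,
               severity, text, timestamp):
        key = (device, obj, alarm_type, specific_problem)
        # a repeated submit on the same key updates the existing alarm
        # with a new status-change entry
        self.alarms.setdefault(key, []).append((timestamp, severity, text))
        return True
```

Two submits with the same identity therefore yield one alarm carrying two status changes.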
Below follows an example showing how to submit alarms using the centralized mode, which is the normal
scenario for components running inside the NSO Java VM. In the example we create an alarm sink and
submit an alarm.
Example 173. Submitting an alarm in a centralized environment
...
AlarmSink sink = new AlarmSink();
...
sink.submitAlarm(new ManagedDevice("device0"),
new ManagedObject("/ncs:devices/device{device0}"),
new ConfIdentityRef(new MyAlarms().hash(),
MyAlarms._device_on_fire),
PerceivedSeverity.INDETERMINATE,
"Indeterminate Alarm",
null,
null,
null,
ConfDatetime.getConfDatetime(),
new AlarmAttribute(new myAlarm(), // A custom alarm attribute
myAlarm._custom_alarm_attribute_,
new ConfBuf("this is an alarm attribute")),
new StatusChangeAttribute(new myAlarm(), // A custom status change attribute
myAlarm._custom_status_change_attribute_,
new ConfBuf("this is a status change attribute")));
...
The central alarm source subscribes to changes in the alarm list and forwards them to the instantiated alarm
sources. The alarms are broadcast to the alarm sources, which means that each alarm source receives its
own copy of the alarm.
The alarm source provides two ways of receiving alarms:
/**
* Waits indefinitely for a new alarm or until the
* queue is interrupted.
*
* @return a new alarm.
* @throws InterruptedException
*/
public Alarm takeAlarm() throws InterruptedException{
...
}
...
/**
* Waits until the next alarm comes or until the time has expired.
*
* @param time time to wait.
* @param unit
* @return a new alarm, or null if the timeout expired.
* @throws InterruptedException
*/
public Alarm pollAlarm(int time, TimeUnit unit)
throws InterruptedException{
...
}
As soon as you create an alarm source object, it will start receiving alarms. If you do
not poll or take any alarms from the alarm source object, the queue will fill up until it reaches the maximum
number of queued alarms, as specified by the alarm source central. The alarm source central will then start
to drop the oldest alarms until the alarm source resumes retrieval. This only affects the alarm source that
is lagging behind; any other alarm sources that are active at the same time will receive alarms without
interruption.
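The drop-oldest queueing behaviour described above can be sketched with Python's collections.deque (an illustration of the semantics only, not the actual implementation):

```python
from collections import deque

class AlarmSourceQueue:
    """Toy model of one alarm source's bounded queue: when full, the
    oldest queued alarm is discarded, as the alarm source central does."""

    def __init__(self, max_queued):
        # appending to a full deque with maxlen drops from the other end,
        # i.e. the oldest entry
        self.q = deque(maxlen=max_queued)

    def broadcast(self, alarm):
        self.q.append(alarm)

    def poll(self):
        return self.q.popleft() if self.q else None
```

A lagging consumer with max_queued=2 that receives alarms 1, 2, 3 before polling will only see 2 and 3; alarm 1 was dropped, while other alarm sources keep their own unaffected queues.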
Example 176. Consuming alarms
AlarmSource source = new AlarmSource();
Example 177. Extending alarm-type
module my-alarms {
namespace "http://examples.com/ma";
prefix ma;
....
import tailf-ncs-alarms {
prefix al;
}
import tailf-common {
prefix tailf;
}
identity website-alarm {
base al:alarm-type;
}
identity webserver-on-fire {
base website-alarm;
}
The identity statement in the YANG language is used for this type of construct. To complete our
alarm type extension, we also need to populate configuration data related to the new alarm type. A good
way to do that is to provide XML data in a CDB initialization file and place this file in the ncs-cdb
directory:
Example 178. my-alarms.xml
<alarms xmlns="http://tail-f.com/ns/ncs-alarms">
<alarm-model>
<alarm-type>
<type
xmlns:ma="http://examples.com/ma">ma:webserver-on-fire</type>
<event-type>equipmentAlarm</event-type>
<has-clear>true</has-clear>
<kind-of-alarm>root-cause</kind-of-alarm>
<probable-cause>957</probable-cause>
</alarm-type>
</alarm-model>
</alarms>
Another possibility for extension is to add fields to the existing NSO alarms. This can be useful if you want
to add extra fields for attributes not directly supported by the NSO alarm list.
Below follows an example showing how to extend the alarm and the alarm status.
....
augment /al:alarms/al:alarm-list/al:alarm {
leaf custom-alarm-attribute {
type string;
}
}
augment /al:alarms/al:alarm-list/al:alarm/al:status-change {
leaf custom-status-change-attribute {
type string;
}
}
}
int th = -1;
try {
//A helper object that can return the topmost tag (not key)
//and that can reduce the path by one tag at a time (parent)
ExtConfPath p = new ExtConfPath(path);
while(!(p.topTag().equals("config")
|| p.topTag().equals("ncs:config"))){
for(ConfAttributeValue v : vals){
ConfList refs = (ConfList)v.getAttributeValue();
for (ConfObject co : refs.elements()){
ManagedObject mo = new ManagedObject((ConfObjectRef)co);
objs.add(mo);
}
}
// advance one level up the path before the next while iteration
p = p.parent();
}
}
catch (IOException ioe){
LOGGER.warn("Could not access Maapi, "
+" aborting mapping attempt of impacted objects");
}
catch (ConfException ce){
ce.printStackTrace();
LOGGER.warn("Failed to retrieve Attributes via Maapi");
}
finally {
maapi.finishTrans(th);
}
return objs;
}
Introduction
NSO can act as an SNMP notification receiver (v1, v2c, v3) for its managed devices. The application can
register notification handlers and react on the notifications, for example by mapping SNMP notifications to
NSO alarms.
The notification reception can be turned on and off using the enabled leaf. NSO will listen for notifications
at the end-points configured in listen. There is no need to manually configure the NSO engine-id;
NSO will derive it automatically using the algorithm described in RFC 3411. However, an
engine-id can be assigned manually by setting this leaf.
The managed devices must also be configured to send notifications to the NSO addresses.
NSO silently ignores any notification received from unknown devices. By default, NSO uses the /
devices/device/address leaf, but this can be overridden by setting /devices/device/snmp-
notification-address.
+--rw device [name]
| +--rw name string
| +--rw address inet:host
| +--rw snmp-notification-address? inet:host
Built-in Filters
There are some standard built-in filters for the SNMP notification receiver which perform standard tasks.
One standard filter suppresses received SNMP events which are not of type TRAP, NOTIFICATION or
INFORM.
Another standard filter suppresses notifications emanating from IP addresses outside a defined set of
addresses. This filter determines the source IP address first from the snmpTrapAddress 1.3.6.1.6.3.18.1.3
varbind, if this is set in the PDU, or otherwise from the emanating peer IP address. If the resulting IP
address does not match either the snmp-notification-address or the address leaf of any device
in the device model, the notification is discarded.
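The source-address check just described can be sketched as follows (plain Python with invented names; devices are represented here as dicts holding the two relevant leafs):

```python
# OID of the snmpTrapAddress varbind, as cited in the text above
SNMP_TRAP_ADDRESS_OID = "1.3.6.1.6.3.18.1.3"

def accept_notification(varbinds, peer_addr, devices):
    """varbinds: OID -> value map from the PDU; peer_addr: emanating peer
    IP; devices: list of device entries. Returns True if the notification
    should be kept, False if it should be discarded."""
    # prefer the snmpTrapAddress varbind, fall back to the peer address
    src = varbinds.get(SNMP_TRAP_ADDRESS_OID, peer_addr)
    # keep the notification if any device matches on either leaf
    return any(src in (d.get("snmp-notification-address"), d["address"])
               for d in devices)
```

Notifications whose resolved source address matches no known device are silently dropped, matching the behaviour described for unknown devices earlier in this section.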
Notification Handlers
NSO uses the Java package SNMP4J to parse the SNMP PDUs.
Notification handlers are user-supplied Java classes that implement the
com.tailf.snmp.snmp4j.NotificationHandler interface. The processPDU method is
expected to react to the SNMP4J event, e.g. by mapping the PDU to an NSO alarm. The handlers are
registered in the NotificationReceiver. The NotificationReceiver is the main class that, in
addition to maintaining the handlers, is also responsible for reading the NSO SNMP notification configuration
and setting up SNMP4J listeners accordingly.
An example of a notification handler can be found at $NCS_DIR/examples.ncs/snmp-
notification-receiver. This example handler receives notifications and sets an alarm text if the
notification is an IF-MIB::linkDown trap.
Example 181.
public class ExampleHandler implements NotificationHandler {
/**
* This callback method is called when a notification is received from
* Snmp4j.
*
* @param event
* a CommandResponderEvent, see Snmp4j javadoc for details
* @param opaque
* any object passed in register()
*/
public HandlerResponse
processPdu(EventContext context,
CommandResponderEvent event,
Object opaque)
throws Exception {
// NOTE: the surrounding lines were elided in this excerpt; the loop
// below iterates over the varbinds of the received PDU (see the full
// example for the complete method body).
String alarmText = null;
for (VariableBinding vb : event.getPDU().getVariableBindings()) {
if (vb.getOid().toString().equals("1.3.6.1.6.3.1.1.4.1.0")) {
String linkStatus = vb.getVariable().toString();
if ("1.3.6.1.6.3.1.1.5.3".equals(linkStatus)) {
alarmText = "IF-MIB::linkDown";
}
}
}
return HandlerResponse.CONTINUE;
}
}
Example 182.
/**
* This class starts the Snmp-notification-receiver.
*/
public class App implements ApplicationComponent {
static {
LogFactory.setLogFactory(new Log4jLogFactory());
}
notifRec =
NotificationReceiver.getNotificationReceiver();
// register example filter
notifRec.register(handl, null);
}
}
Introduction
This document describes a basic embedded web server that can deliver static content and dynamic
Common Gateway Interface (CGI) content to a web client, commonly a browser. Due to the limitations of this web
server, and/or of its configuration capabilities, a proxy server such as Nginx is recommended to address
special requirements.
Here is a brief overview of what you can configure on the web server:
• "custom headers": set custom headers across all static and dynamic content, including requests to "/
jsonrpc".
In addition to what is configurable, the web server also GZip-compresses responses automatically if the
browser accepts such responses, either by compressing the response on the fly or, if a static
file such as "/bigfile.txt" is requested, by responding with the contents of "/bigfile.txt.gz", if such a file exists.
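The precompressed-file rule can be sketched like this (an illustration of the selection logic, not the server's actual code):

```python
import os

def choose_file(path, accept_encoding):
    """Return (file to serve, Content-Encoding or None): if the client
    accepts gzip and a precompressed sibling "<path>.gz" exists, serve
    that; otherwise serve the file itself (compressing on the fly or not
    is then a separate decision)."""
    if "gzip" in accept_encoding and os.path.exists(path + ".gz"):
        return path + ".gz", "gzip"
    return path, None
```

Serving a precompressed file avoids recompressing a large static asset on every request.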
CGI support
The web server includes CGI functionality, disabled by default. Once you enable it in ncs.conf - see the
manual pages of the section called “CONFIGURATION PARAMETERS” in NSO 5.3 Manual Pages -
you can write CGI scripts that will be called with the following NSO environment variables, prefixed with
NCS_, when a user has logged in via JSON-RPC:
• "SERVER_SOFTWARE":
• "SERVER_NAME":
• "GATEWAY_INTERFACE":
• "SERVER_PROTOCOL":
• "SERVER_PORT":
• "REQUEST_METHOD":
• "REQUEST_URI":
• "DOCUMENT_ROOT":
• "DOCUMENT_ROOT_MOUNT":
• "SCRIPT_FILENAME":
• "SCRIPT_TRANSLATED":
• "PATH_INTO":
• "PATH_TRANSLATED":
• "SCRIPT_NAME":
• "REMOTE_ADDR":
• "REMOTE_HOST":
• "SERVER_ADDR":
• "LOCAL_ADDR":
• "QUERY_STRING":
• "CONTENT_TYPE":
• "CONTENT_LENGTH":
• "HTTP_*": HTTP headers e.g. "Accept" value is exported as HTTP_ACCEPT
<webui>
<transport>
<ssl>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>8889</port>
<read-from-db>true</read-from-db>
</ssl>
</transport>
</webui>
Note that the options key-file, cert-file, and ca-cert-file are ignored when read-from-db is set to true. See
the ncs.conf.5 man page for more details.
The database is populated with TLS data by configuring /tailf-tls:tls/private-key, /tailf-tls:tls/certificate,
and, optionally, /tailf-tls:tls/ca-certificates. It is possible to use password-protected private keys; in that
case the passphrase leaf in the private-key container needs to be set to the password of the encrypted private key.
Unencrypted private key data can be supplied in both PKCS#8 and PKCS#1 format, while encrypted
private key data needs to be supplied in PKCS#1 format.
In the following example a password protected private key, the passphrase, a public key certificate, and
two CA certificates are configured with the CLI.
admin@io> configure
Entering configuration mode private
[ok][2019-06-10 19:54:21]
[edit]
admin@io% set tls certificate cert-data
(<unknown>):
[Multiline mode, exit with ctrl-D.]
> -----BEGIN CERTIFICATE-----
> MIICrzCCAZcCFBh0ETLcNAFCCEcjSrrd5U4/a6vuMA0GCSqGSIb3DQEBCwUAMBQx
> ...
> -----END CERTIFICATE-----
>
[ok][2019-06-10 19:59:36]
[edit]
admin@confd% set tls private-key key-data
(<unknown>):
[Multiline mode, exit with ctrl-D.]
> -----BEGIN RSA PRIVATE KEY-----
> Proc-Type: 4,ENCRYPTED
> DEK-Info: AES-128-CBC,6E816829A93AAD3E0C283A6C8550B255
> ...
> -----END RSA PRIVATE KEY-----
[ok][2019-06-10 20:00:27]
[edit]
admin@confd% set tls private-key passphrase
(<AES encrypted string>): ********
[ok][2019-06-10 20:00:39]
[edit]
admin@confd% set tls ca-certificates ca-cert-1 cert-data
(<unknown>):
[Multiline mode, exit with ctrl-D.]
> -----BEGIN CERTIFICATE-----
> MIIDCTCCAfGgAwIBAgIUbzrNvBdM7p2rxwDBaqF5xN1gfmEwDQYJKoZIhvcNAQEL
> ...
> -----END CERTIFICATE-----
[ok][2019-06-10 20:02:22]
[edit]
admin@confd% set tls ca-certificates ca-cert-2 cert-data
(<unknown>):
[Multiline mode, exit with ctrl-D.]
> -----BEGIN CERTIFICATE-----
> MIIDCTCCAfGgAwIBAgIUZ2GcDzHg44c2g7Q0Xlu3H8/4wnwwDQYJKoZIhvcNAQEL
> ...
> -----END CERTIFICATE-----
[ok][2019-06-10 20:03:07]
[edit]
admin@confd% commit
Commit complete.
[ok][2019-06-10 20:03:11]
[edit]
The SHA256 fingerprints of the public key certificate and the CA certificates can be accessed as
operational data. The fingerprint is shown as a hex string. The first octet identifies the hashing algorithm
used (04 is SHA256), and the following octets are the actual fingerprint.
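Assuming the fingerprint is computed over the DER-encoded certificate, the format described above can be reproduced in a few lines (a sketch; cert_fingerprint is our own name):

```python
import hashlib

def cert_fingerprint(der_bytes: bytes) -> str:
    # Leading octet 04 identifies SHA256; the remaining 32 octets
    # (64 hex characters) are the digest itself.
    return "04" + hashlib.sha256(der_bytes).hexdigest()

fp = cert_fingerprint(b"example-der-data")
print(fp[:2])   # 04
print(len(fp))  # 66 hex characters: 1 algorithm octet + 32 digest octets
```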
$ ncs --reload
After configuring NSO, populating the database, and reloading, the TLS transport is usable.
Introduction
Kickers constitute a declarative notification mechanism for triggering actions on certain stimuli, such as a
database change or a received notification. These different stimuli and their kickers are defined separately
as data-kicker and notification-kicker respectively.
Common to all types of kickers is that they are declarative. Kickers are modeled in YANG, and kicker
instances are stored as configuration data in CDB.
A kicker becomes active immediately after the transaction that defines it is committed; the same holds for
removal. This also implies that the programming effort for a kicker is a matter of implementing the action
to be invoked.
The data-kicker replicates much of the functionality otherwise attained with a CDB subscriber, but without
the extra registration and runtime-daemon code that a CDB subscriber requires. The data-kicker also works
for all data providers.
The notification-kicker reacts to notifications received by NSO through a defined notification subscription
under /ncs:devices/device/netconf-notifications/subscription. This simplifies the
handling of southbound-emitted NETCONF notifications. Traditionally these were stored in
CDB as operational data, and a separate CDB subscriber was used to act on the received notifications. With
the use of the notification-kicker, the CDB subscriber can be removed and there is no longer any need to store
the received notifications in CDB.
The kicker mechanism first checks whether the input parameters of the action match those in the kicker:action-
input-params YANG grouping defined in the tailf-kicker.yang file. If so, the action is
invoked with those input parameters.
The "synthetic" transaction implies that this is a copy of the original transaction that led to the kicker
triggering. It only contains the data tree under the monitor. The original transaction is already committed,
and this data might no longer reflect the "running" datastore. It is useful in that the action implementation
can attach to and diff-iterate over this transaction and retrieve the specific changes that led to the kicker
invocation.
If the kicker mechanism finds an action that does not match the above input parameters, it invokes
the action with an empty parameter list. This implies that a kicker action must either match the above
kicker:action-input-params grouping precisely or accept an empty incoming parameter list.
Otherwise the action invocation will fail.
• Kickers are triggered by changes in the sub-tree indicated by the monitor.
• Actions are invoked during the commit phase. Hence an aborted transaction never triggers kickers.
• No distinction is made between configuration and operational data.
• No distinction is made between CRUD types, i.e. create, delete, update. All changes potentially
trigger kickers.
• Kickers may have constraints that suppress invocations. Changes in the sub-tree indicated by the
monitor are a necessary but perhaps not sufficient condition for the action to be invoked.
Generalized Monitors
For a Data Kicker, the monitor specifies the subtree under which a change should invoke the
kicker. The monitor leaf is of type node-instance-identifier, which means that predicates for
keys are optional; keys may be omitted and then represent all instances for that key.
The resulting evaluation of the monitor defines a node-set. Each node in this node-set becomes the root
context for any further XPath evaluations necessary before invoking the kicker action.
The following example shows the strength of using XPath to define kickers. Say that we have a
situation described by the following YANG model snippet:
module example {
namespace "http://tail-f.com/ns/test/example";
prefix example;
...
container sys {
list ifc {
key name;
max-elements 64;
leaf name {
type interfaceName;
}
leaf description {
type string;
}
leaf enabled {
type boolean;
default true;
}
container hw {
leaf speed {
type interfaceSpeed;
}
leaf duplex {
type interfaceDuplex;
}
leaf mtu {
type mtuSize;
}
leaf mac {
type string;
}
}
list ip {
key address;
max-elements 1024;
leaf address {
type inet:ipv4-address;
}
leaf prefix-length {
type prefixLengthIPv4;
mandatory true;
}
leaf broadcast {
type inet:ipv4-address;
}
}
tailf:action local_me {
tailf:actionpoint kick-me-point;
input {
}
output {
}
}
}
tailf:action kick_me {
tailf:actionpoint kick-me-point;
input {
}
output {
}
}
tailf:action iter_me {
tailf:actionpoint kick-me-point;
input {
uses kicker:action-input-params;
}
output {
}
}
}
Then we can define a kicker for monitoring a specific element in the list and calling the correlated
local_me action:
admin(config-data-kicker-e1)# commit
Commit complete
admin(config-data-kicker-e1)# top
admin@ncs(config)# show full-configuration kickers
kickers data-kicker e1
monitor /sys/ifc[name='port-0']
kick-node /sys/ifc[name='port-0']
action-name local_me
!
On the other hand, we can define a kicker that monitors all elements of the list and calls the correlated
local_me action for each element:
admin(config-data-kicker-e2)# commit
Commit complete
admin(config-data-kicker-e2)# top
admin@ncs(config)# show full-configuration kickers
kickers data-kicker e2
monitor /sys/ifc
kick-node .
action-name local_me
!
Here the "." in the kick-node refers to the current node in the node-set defined by the monitor.
Kicker Constraints/Filters
A Data Kicker may be constrained by adding conditions that suppress invocations. The leaf
trigger-expression contains a boolean XPath expression that is evaluated twice: before and after the change-set
of the commit has been applied to the database(s).
The XPath expression has to be evaluated twice in order to detect the change caused by the transaction.
The two boolean results, together with the leaf trigger-type, control whether the Kicker should be triggered
or not:
• enter-and-leave: false -> true (i.e. positive flank) or true -> false (negative flank)
• enter: false -> true
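The flank logic above can be summarized as a small decision function (a sketch of the semantics, not NSO code; should_trigger is our own name):

```python
def should_trigger(trigger_type: str, before: bool, after: bool) -> bool:
    """Decide whether a kicker fires from the two evaluations of
    trigger-expression: before and after the change-set is applied."""
    if trigger_type == "enter":
        return not before and after   # positive flank only
    if trigger_type == "enter-and-leave":
        return before != after        # positive or negative flank
    raise ValueError("unknown trigger-type: " + trigger_type)

print(should_trigger("enter", False, True))            # True
print(should_trigger("enter-and-leave", True, False))  # True
print(should_trigger("enter", True, True))             # False
```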
Since the trigger-expression evaluates to false, the kicker is not triggered. Let's try again:
Variable Bindings
A Data Kicker may be provided with a list of variables (named values). Each variable binding consists of a
name and an XPath expression. The XPath expressions are evaluated on demand, i.e. when used in either
the monitor or the trigger-expression nodes.
In the example above, PATH is defined and referred to by the monitor expression by using the expression
$PATH.
Note A monitor expression is not evaluated by the XPath engine. Hence no trace of the evaluation can be found
in the XPath log.
Monitor expressions are expanded and installed in an internal data structure at kicker creation/compile
time. XPath may be used while defining kickers by referring to a named XPath expression (see ???).
In this example, the diffcheck action is used. This action is defined in the website-service package. The
following is the YANG snippet for the action definition from the website.yang file:
module web-site {
namespace "http://examples.com/web-site";
prefix wse;
...
augment /ncs:services {
...
container actions {
tailf:action diffcheck {
tailf:actionpoint diffcheck;
input {
uses kicker:action-input-params;
}
output {
}
}
}
}
The implementation of the action can be found in the WebSiteServiceRFS.java class file. Since
it takes the kicker:action-input-params as input, the "Tid" for the synthetic transaction is available. This
transaction is attached and diff-iterated. The result of the diff-iteration is printed in the ncs-java-vm.log:
class WebSiteServiceRFS {
....
@ActionCallback(callPoint="diffcheck", callType=ActionCBType.ACTION)
public ConfXMLParam[] diffcheck(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
try {
System.out.println("-------------------");
System.out.println(params[0]);
System.out.println(params[1]);
System.out.println(params[2]);
// ... attach maapi3 to the synthetic transaction (tid) and
// diff-iterate over it, printing each change (elided) ...
maapi3.detach(tid);
s3.close();
} catch (Exception e) {
throw new DpCallbackException("diffcheck failed", e);
}
}
}
We are now ready to start the website-service example and define our data-kicker. Do the following:
$ make all
$ ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
The kickers are defined under the hide-group "debug". To be able to show and declare kickers, we first
need to unhide this hide-group:
admin@ncs# config
admin@ncs(config)# unhide debug
We now define a data-kicker for the "profile" list under the service-augmented container "/services/
properties/wsp:web-site":
admin@ncs(config)# kickers data-kicker a1 \
> monitor /services/properties/wsp:web-site/profile \
> kick-node /services/wse:actions action-name diffcheck
admin@ncs(config-data-kicker-a1)# commit
admin@ncs(config-data-kicker-a1)# top
admin@ncs(config)# show full-configuration kickers data-kicker a1
kickers data-kicker a1
monitor /services/properties/wsp:web-site/profile
kick-node /services/wse:actions
action-name diffcheck
!
We now commit a change in the profile list, and we use the "debug kicker" pipe option to be able to follow
the kicker invocation:
admin@ncs(config)# services properties web-site profile lean lb lb0
admin@ncs(config-profile-lean)# commit | debug kicker
2017-02-15T16:35:36.039 kicker: a1 at /ncs:services/ncs:properties/wsp:web-site/wsp:profile[wsp:
Commit complete.
admin@ncs(config-profile-lean)# top
admin@ncs(config)# exit
We can also check the result of the action by looking into the ncs-java-vm.log:
admin@ncs# file show logs/ncs-java-vm.log
In the end we will find the following printout from the diffcheck action:
-------------------
{[669406386|id], a1}
{[669406386|monitor], /ncs:services/properties/web-site/profile{lean}}
{[669406386|tid], 168}
path = /ncs:services/properties/wsp:web-site/profile{lean}
op = MOP_CREATED
newValue = null
path = /ncs:services/properties/wsp:web-site/profile{lean}/name
op = MOP_VALUE_SET
newValue = lean
path = /ncs:services/properties/wsp:web-site/profile{lean}/lb
op = MOP_VALUE_SET
newValue = lb0
[ok][2017-02-15 17:11:59]
• Notification Kickers are triggered by the arrival of notifications from any device subscription.
These subscriptions are defined under the /devices/device/netconf-notifications/
subscription path.
• Storing the received notifications in CDB is optional and not part of the notification kicker
functionality.
• The kicker invocations are serialized under a certain subscription, i.e. kickers are invoked in the same
sequence as notifications are received for that subscription. This means that invocations are
queued up and executed as quickly as the action permits.
SUBSCRIPTION_NAME: The name of the subscription from which the notification that triggered the
kicker was received.
NOTIFICATION_NAME: The name of the current notification.
NOTIFICATION_NS: The namespace of the current notification.
The selector-expr technique for defining the notification kickers is very flexible. For instance a
kicker can be defined:
In addition to this usage of the predefined variable bindings it is possible to further drill down into the
specific notification to trigger on certain leafs in the notification.
Variable Bindings
In addition to the four variable bindings mentioned above, a Notification Kicker may also be provided with
a list of variables (named values). Each variable binding consists of a name and an XPath expression. The
XPath expression is evaluated when the selector-expr is run.
admin@ncs(config)# set kickers notification-kicker k4
selector-expr "$NOTIFICATION_NAME=linkUp and address[ip=$IP]"
kick-node /x/y[id='n1']
action-name kick-me
variable IP value '192.168.128.55'
admin@ncs(config)#
In the example above, IP is defined and referred to by the selector expression by using the expression
$IP.
Note A selector expression is not evaluated by the XPath engine. Hence no trace of the evaluation can be found
in the XPath log.
Selector expressions are expanded and installed in an internal data structure at kicker creation/compile
time. XPath may be used while defining kickers by referring to a named XPath expression (see ???).
admin@ncs(config-notification-kicker-n1)# commit
admin@ncs(config-notification-kicker-n1)# top
Now we define the "mysub" subscription on a device "www0" and refer to the notification stream
"interface". As soon as this definition is committed the kicker will start triggering:
admin@ncs(config)# devices device www0 netconf-notifications subscription mysub \
> local-user admin stream interface
admin@ncs(config-subscription-mysub)# commit
admin@ncs(config-profile-lean)# top
admin@ncs(config)# exit
If we now inspect the ncs-java-vm.log we will see a number of notifications being received. We also
see that the transaction that is diff-iterated contains the notification as data under the path /devices/device/
netconf-notifications/received-notifications/notification/data. This is an operational data list. However, this
transaction is synthetic and will not be committed. Whether the notification is stored in CDB is optional
and does not depend on the notification kicker functionality:
admin@ncs# file show logs/ncs-java-vm.log
-------------------
{[669406386|id], n1}
{[669406386|monitor], /ncs:devices/device{www0}/netconf.../data/linkUp}
{[669406386|tid], 758}
path = /ncs:devices/device{www0}
op = MOP_MODIFIED
newValue = null
path = /ncs:devices/device{www0}/netconf...
op = MOP_CREATED
newValue = null
path = /ncs:devices/device{www0}/netconf.../event-time
op = MOP_VALUE_SET
newValue = 2017-02-15T16:35:36.039204+00:00
path = /ncs:devices/device{www0}/netconf.../sequence-no
op = MOP_VALUE_SET
newValue = 0
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp
op = MOP_CREATED
newValue = null
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/address{192.168.128.55}
op = MOP_CREATED
newValue = null
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/address{192.168.128.55}/ip
op = MOP_VALUE_SET
newValue = 192.168.128.55
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/address{192.168.128.55}/mask
op = MOP_VALUE_SET
newValue = 255.255.255.0
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/ifName
op = MOP_VALUE_SET
newValue = eth2
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/linkProperty{0}
op = MOP_CREATED
newValue = null
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/linkProperty{0}/extensions{0}
op = MOP_CREATED
newValue = 4668
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/linkProperty{0}/extensions{1}/na
op = MOP_VALUE_SET
newValue = 2
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/linkProperty{0}/flags
op = MOP_VALUE_SET
newValue = 42
path = /ncs:devices/device{www0}/netconf.../data/notif:linkUp/linkProperty{0}/newlyAdded
op = MOP_CREATED
newValue = null
Use Cases
Kickers are very often used as an implementation technique for Reactive FastMap services. Assume an
NFV/ESC based application which:
Now, with kickers, we can simplify such code by having a CDB subscriber that simply
The RFM service code is then responsible for setting up a kicker with the monitor pointing to the field
written by the CDB subscriber, thus effectively decoupling the CDB subscriber code from the RFM
service code and making them independent of each other. Another advantage is that the error-handling
code for a failed redeploy is unified inside the NSO kicker implementation.
RFM Example
The example can be found in examples.ncs/getting-started/developing-with-ncs/21-kicker
and uses two NSO packages: the router package introduced in ../0-router-network
and a package called 'ppp-accounting', which is described in this section.
The example is a bit contrived; since we want to exemplify the usage of kickers, it is kept simple and
artificial.
$ ls ./packages
router ppp-accounting
To start the ncs-netsim network, follow the instructions in ../0-router-network; it is the same
network here.
$ ncs-netsim start
DEVICE ex0 OK STARTED
DEVICE ex1 OK STARTED
DEVICE ex2 OK STARTED
This will start NCS, and NCS will load the two packages, load the data models defined by the two
packages and start the Java code defined by the packages.
The service data model we have here looks like:
list ppp-accounting {
uses ncs:service-data;
ncs:servicepoint kickerspnt;
key "interface";
leaf interface {
type string;
}
}
list ppp-accounting-data {
description "This is helper data, created by the service code for
/ppp-accounting";
key "interface";
leaf interface {
type string;
}
leaf accounting {
description "populated externally";
type string;
}
}
The purpose of the service /ppp-accounting is to set the accounting field of the provided ppp
interface on all routers in our example network. The catch here is that the value of the 'accounting' field is
not provided as an input parameter to the service; instead it is populated externally and read and used by the
service code.
The FastMap code tries to read the field /ppp-accounting-data[interface=<if>]/accounting,
and if it doesn't exist, the code creates a kicker on that field and returns. If the 'accounting'
field exists, it is used and data is written into the /devices tree for our routers.
[edit]
admin@ncs% commit
Commit complete.
[ok][2016-12-13 16:20:04]
[edit]
admin@ncs% request ppp-accounting ppp0 get-modifications
cli {
local-node {
data
}
}
We created the service and verified that it didn't do anything. Looking at the code in packages/ppp-accounting/src/java/src/com/example/kicker/KickerServiceRFS.java, we can see,
though, that the code created a kicker.
Let's take a look at that:
admin@ncs% show kickers
---------------------------------^
syntax error: element does not exist
[error][2016-12-13 16:22:53]
The kicker data is hidden, and we cannot directly view it in the CLI. The src/ncs/yang/tailf-
kicker.yang file says:
container kickers {
tailf:info "Kicker specific configuration";
tailf:hidden debug;
list data-kicker {
key id;
......
[edit]
admin@ncs% commit
Commit complete.
[ok][2016-12-13 16:26:46]
[edit]
+ accounting radius;
..........
Note Looking at the RFM Java code, we see that the /ppp-accounting-data help entry is created by a so-called
PRE_MODIFICATION hook. This is a common trick in RFM applications. We don't want that data
to be part of the FastMap diff-set. Usually the help entry is also used to contain various 'config false' fields
pertaining to the service instance. If that data were part of the FastMap diff-set, the data would disappear with
every redeploy turn; thus we use the PRE_MODIFICATION trick.
Debugging kickers
Kicker CLI Debug target
In order to find out why a Kicker kicked when it shouldn't, or, more commonly and more annoyingly, why it
didn't kick when it should, use the CLI pipe command debug kicker.
Evaluation of potential Kicker invocations is reported in the CLI together with XPath evaluation results:
admin@ncs(config)# set sys ifc port-0 hw mtu 8000
admin@ncs(config)# commit | debug kicker
2017-02-15T16:35:36.039 kicker: k1 at /kicker_example:sys/kicker_example:ifc[kicker_example:n
not invoking 'kick-me' trigger-expr false -> false
Commit complete.
admin@ncs(config)#
Unhide Kickers
The top level container kickers is by default invisible due to a hidden attribute. In order to make
kickers visible in the CLI, two steps are required. First, the following XML snippet must be added to
ncs.conf:
<hide-group>
<name>debug</name>
</hide-group>
Second, the hide-group must be unhidden in the CLI session with the unhide debug command.
XPath log
Detailed information from the XPath evaluator can be enabled and made available in the xpath log. Add
the following snippet to ncs.conf.
<xpathTraceLog>
<enabled>true</enabled>
<filename>./xpath.trace</filename>
</xpathTraceLog>
Devel Log
Error information is written to the development log. The development log is meant to be used as support
while developing the application. It is enabled in ncs.conf:
Introduction
NSO includes a native time-based job scheduler suitable for scheduling background work. Tasks can be
scheduled to run at particular times or periodically at fixed times, dates, or intervals. It can typically be
used to automate system maintenance or administration tasks.
If the action node instance is given as an XPath 1.0 expression, the expression is evaluated with the root as
the context node, and the expression must return a node set. The action is then invoked on each node in this
node set.
Optionally action parameters can be configured in XML format to be passed to the action during
invocation.
admin(config-task-sync)# action-params "<device>ce0</device><device>ce1</device>"
admin(config)# commit
Once the task has been configured, you can view the next run times of the task:
admin(config)# scheduler task sync get-next-run-times display 3
next-run-time [ "2017-11-01 02:00:00" "2017-12-01 02:00:00" "2018-01-01 02:00:00" ]
Schedule Expression
A standard Vixie Cron expression is a string comprising five fields separated by white space that
represents a set of times. The following rules can be used to create an expression.
Table 187. Expression rules
The following list describes the legal special characters and how you can use them in a Cron expression.
• Star ( * ). Selects all values within a field. For example, * in the minute field selects every minute.
• Comma ( , ). Commas are used to specify additional values. For example, using MON,WED,FRI in
the day of week field.
• Hyphen ( - ). Hyphens define ranges. For example 1-5 in the day of week field indicates every day
between Monday and Friday, inclusive.
• Forward slash ( / ). Slashes can be combined with ranges to specify increments. For example, */5 in
the minutes field indicates every 5 minutes.
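The expansion rules above can be sketched for a single cron field as follows (our own helper for illustration; symbolic names such as MON or JAN are not handled here):

```python
def expand_field(field: str, lo: int, hi: int) -> list:
    """Expand one cron field ("*", "a,b", "a-b", "*/n", "a-b/n")
    into the sorted list of matching values in [lo, hi]."""
    values = set()
    for part in field.split(","):
        part, _, step = part.partition("/")
        step = int(step) if step else 1         # "/n" means an increment
        if part == "*":
            start, end = lo, hi                 # star selects all values
        elif "-" in part:
            start, end = (int(x) for x in part.split("-"))
        else:
            start = end = int(part)             # a single value
        values.update(range(start, end + 1, step))
    return sorted(values)

print(expand_field("*/5", 0, 59))  # every 5 minutes: [0, 5, 10, ..., 55]
print(expand_field("1-5", 0, 6))   # Monday through Friday: [1, 2, 3, 4, 5]
```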
A non-recurring task will by default be removed when it has finished executing. It is up to the action
to raise an alarm if an error occurs. The task can also be kept in the task list by setting the keep leaf.
Scheduling in an HA Cluster
In an HA cluster a scheduled task will by default be run on the master HA node. By configuring the
ha-mode leaf, a task can be scheduled to run on nodes with a particular HA mode, for example a read-only
action on the slave nodes. More specifically, a task can be configured with the ha-node-id to only run
on a certain node. These settings have no effect on a standalone node.
Note Non-recurring work can only be scheduled to run on the master HA node.
Troubleshooting
History log
In order to find out whether a scheduled task has run successfully or not, the easiest way is to view the
history log of the scheduler. It will display the last five runs of the scheduled task.
admin# show scheduler task sync history | notab
history 2017-11-01T02:00:00.55003+00:00
duration "0.15 sec"
succeeded true
history 2017-12-01T02:00:00.549939+00:00
duration "0.09 sec"
succeeded true
history 2017-01-01T02:00:00.550128+00:00
duration "0.01 sec"
succeeded false
info "Resource device ce0 doesn't exist"
XPath log
Detailed information from the XPath evaluator can be enabled and made available in the xpath log. Add
the following snippet to ncs.conf.
<xpathTraceLog>
<enabled>true</enabled>
<filename>./xpath.trace</filename>
</xpathTraceLog>
Devel Log
Error information is written to the development log. The development log is meant to be used as support
while developing the application. It is enabled in ncs.conf:
<developer-log>
<enabled>true</enabled>
<file>
<name>./logs/devel.log</name>
<enabled>true</enabled>
</file>
</developer-log>
<developer-log-level>trace</developer-log-level>
Introduction
Progress tracing in NSO provides developers with useful information for debugging, diagnostics and
profiling. This information can be used both during development cycles and after release of the software.
When a transaction or action is applied, NSO emits progress events. These events can be displayed and
recorded in a number of different ways. The easiest way is to pipe an action to details in the CLI.
As seen by the details output, all events are recorded with a timestamp and in some cases with the duration.
All phases of the transaction, service and device communication are printed.
Some actions (usually those involving device communication) also produce progress data.
<hide-group>
<name>debug</name>
</hide-group>
Log to File
Progress data can be written to a given file. This is useful when the data is to be analyzed in some third-party
software, like a spreadsheet application.
admin(config)# set progress trace test destination file event.csv format csv
The file can be formatted as a comma-separated values (CSV) file as defined by RFC 4180, or as a
pretty-printed log file with each event on a single line.
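A CSV trace file can then be post-processed with any CSV-aware tool. A sketch in Python, using an invented two-row sample since the exact column set depends on the NSO version:

```python
import csv
import io

# Synthetic sample; the real column names are assumptions, not NSO's.
sample = """EVENT TYPE,TIMESTAMP,DURATION,MESSAGE
start,2019-06-10T20:00:00,,applying transaction
stop,2019-06-10T20:00:01,1.000,applying transaction
"""

with io.StringIO(sample) as f:
    rows = list(csv.DictReader(f))

# Sum the durations of all completed ("stop") events.
durations = [float(r["DURATION"]) for r in rows if r["DURATION"]]
print(sum(durations))  # 1.0
```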
Verbosity
The verbosity parameter is used to control the level of output. The following levels are available:
• normal - Tracing will occur for the various phases and steps of a transaction/action. This is the default
and the lowest verbosity level.
• verbose - The various service and device phases and their duration will be traced. This is useful to get
an overview of where time is spent in a transaction/action.
• very-verbose - Detailed information about the service, device and internal operations will be traced.
• debug - This is the highest verbosity level. Detailed information from the XPath
evaluator will be traced. This is mostly useful during the development phase, e.g. to profile a service
with performance degradation. Setting this level could result in a large number of events being
generated, so it is not recommended in a production system, at least not without a filter.
Progress events up to the level verbose can be subscribed to with the Notifications API.
Using Filters
By default all transaction and action events with the given verbosity level will be logged. To get a more
selective choice of events, filters can be used.
The context filter can be used to only log events that originate through a specific northbound interface. The
context is either one of netconf, cli, webui, snmp, rest, system or it can be any other context string defined
through the use of MAAPI.
Note Remember when choosing the verbosity level of the progress event that only events up to the level verbose
can be subscribed to with the Notifications API.
Introduction
This chapter describes a new technique for developing Reactive FASTMAP (RFM) services.
For RFM services, the recommendation is to use the ncs:plan-data grouping to report the progress of the
service under the resulting plan container. In this case, the service create() callback creates the plan
components and their states, and sets the status of the individual states as the service creation progresses.
Each state represents a step in the RFM loop.
[Figure: an example plan-data plan with components self, component1, and component2, and the state
legend not-reached, reached, and failed]
However, this is just progress reporting or logging of the service. It does not support the developer in
implementing the RFM service with full life-cycle support, i.e., correct handling of the RFM loop when
creating, modifying and deleting the service. There is no guarantee that what the service code writes in the
plan is actually correct.
Deleting an RFM service is known to be hard, since it often involves removing the device modifications in
steps. One solution that is often used is to have configuration data outside of the service with a correlated
CDB subscriber. The service writes this configuration data as part of its mapping code and the CDB
subscriber can then handle the life-cycle changes on behalf of the service.
This pattern is well proven, but the issue is that the code for the RFM service is distributed between the
service create() callback and the correlated CDB subscriber. When also taking into account that the
plan data has to be updated correctly by the service, this all becomes several sources of potential errors
that can be complicated to track down.
The purpose of nano services is to break down an RFM service into its isolated steps. The RFM nano
service code is able to support the full life-cycle of the service without any use of CDB subscribers or other
mechanisms outside of the service code itself. Nano services introduce the grouping nano-plan-data, which
can be seen as an executable plan. Nano services also make use of kickers to autonomously drive the RFM
loop forward. In the rest of this chapter this will be described in detail. Knowledge of plan-data and kickers
are prerequisites.
Principles
In order to understand how nano services work we need to first recapitulate the internals of normal
services.
Normal service
A normal service has a YANG definition of the service parameters, a service point name, and, in the case of
an RFM service, an optional YANG definition of a plan.
When a normal service is committed, the FASTMAP algorithm removes all previous data changes
internally and presents the service data to the create() callback as if this was the initial create. When
the create() callback returns, the FASTMAP algorithm compares the result and calculates a reverse
diff-set from the data changes. This reverse diff-set contains the operations that are needed to restore the
configuration data to the state it was in before the service was created. This reverse diff-set is applied, for
instance, if the service is deleted.
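The idea of a reverse diff-set can be sketched with plain dictionaries: compare the configuration before and after create(), and record the operations that would restore the original state (our own simplification of FASTMAP, not NSO code):

```python
def reverse_diff(before: dict, after: dict) -> dict:
    """Compute a reverse diff-set: per-key operations needed to
    restore `before` after `after` has been applied."""
    ops = {}
    for key in set(before) | set(after):
        if key in before and before.get(key) != after.get(key):
            ops[key] = ("set", before[key])   # restore the old value
        elif key not in before:
            ops[key] = ("delete", None)       # the service created it
    return ops

before = {"mtu": 1500}
after = {"mtu": 8000, "description": "uplink"}
print(sorted(reverse_diff(before, after).items()))
# [('description', ('delete', None)), ('mtu', ('set', 1500))]
```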
[Figure: a normal service: the service config data, the private reverse diff-set, and the FASTMAP
algorithm invoking the create() callback]
This fundamental principle is what makes the implementation of services and the create() callback
simple. A lot of NSO functionality also relies on this principle.
However, in the RFM pattern the create() callback is re-entered several times while the RFM loop is
active, using subsequent reactive-re-deploy calls. Storing all changes in a single reverse diff-set
then becomes an impediment. For instance, if a staged delete is necessary, there is no way to single out
which changes each RFM step has performed. Nano services are designed to overcome this impediment
and support the full service life-cycle.
Nano service
A nano service is implemented with the following:
• A YANG definition of the service input parameters, with a service point name and the nano-plan-data
grouping.
• A YANG definition of the plan component types and their states in a plan outline.
• A YANG definition of a behavior tree for the service. The behavior tree defines how and when to
instantiate components in the plan.
• Code or templates for individual state transfers in the plan.
A nano service abandons the single reverse diff-set by introducing nano-plan-data and a new
NanoCreate() callback. The nano-plan-data YANG grouping is based on the plan-data
grouping but has additional storage for a reverse diff-set and pre-conditions per state in the plan components.
[Figure: a nano service: the service config data, the nano-plan with per-component states and Create()
callbacks, and a kicker on each pre-condition driving reactive-re-deploy]
The service get-modifications action can still be used to visualize all data changes performed by
the service as an aggregation of all entered states' individual data changes. In addition each state also has
its own get-modifications action that visualizes the data-changes for that particular state.
These principles are described in more detail in the rest of this chapter.
Next, the set of components for the new plan is compared with the previous plan (before the commit). If
there are new components, they will advance state by state as explained below. If a component was present
in the previous plan but is no longer present, it will go into back-tracking mode, during which the goal is to
remove all reached states and eventually remove the component from the plan.
Each component in the plan is executed state by state, in order. Before entering a new state, the
create pre-condition for the state is evaluated, if it exists. If a create pre-condition exists and is not
satisfied, the RFM loop stops progressing this component and jumps to the next component in order. A
kicker is defined for the pre-condition that was not satisfied; when this kicker triggers and the
pre-condition is satisfied, it performs a reactive-re-deploy and the kicker is removed. This
kicker mechanism forms a self-sustained RFM loop.
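The per-component state walk and the kicker hand-off described above can be sketched as plain Python. This is an illustrative simulation only; the component dictionary, the `advance` function and the `kickers` map are invented for the example and are not NSO data structures or API.

```python
# Hypothetical sketch of the per-component state walk: advance until a
# create pre-condition fails, leave a kicker, and re-run on re-deploy.

kickers = {}  # condition name -> component name; stands in for NSO's kickers

def advance(component, conditions):
    """Walk a component's states in order until a create pre-condition fails."""
    for state in component["states"]:
        if state in component["reached"]:
            continue                         # already reached on an earlier pass
        cond = component["pre"].get(state)
        if cond is not None and not conditions.get(cond, False):
            # Not satisfied: stop here and leave a kicker watching the condition.
            kickers[cond] = component["name"]
            return state                     # first not-reached state
        component["reached"].append(state)
    return None                              # all states reached

comp = {"name": "router",
        "states": ["init", "ntp-initialized", "ready"],
        "pre": {"ntp-initialized": "ntp-enabled"},
        "reached": []}

conds = {"ntp-enabled": False}
stopped_at = advance(comp, conds)            # stops at "ntp-initialized"

# The monitored data later changes; the kicker fires a reactive-re-deploy,
# which simply re-runs the same walk, and the kicker is removed.
conds["ntp-enabled"] = True
del kickers["ntp-enabled"]
advance(comp, conds)                          # now reaches "ready"
```

The second `advance` call models the reactive-re-deploy: it is the same walk, resumed where the pre-condition previously stopped it.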
If a state's pre-conditions are met, the callback function or template associated with the state is invoked, if
it exists. If the callback is successful, the state is marked as reached, and the next state is executed.
Each state also has a delete pre-condition, which is used if the changes of a state should be removed.
Removing state data changes for a plan component must be performed in strict reverse order, beginning
with the last reached state. This mechanism is fundamental to handling the full RFM life-cycle and is
called back-tracking. A plan component performing back-tracking is said to be in back-tracking mode.
Back-tracking
NSO uses plan-component back-tracking to solve the issues with RFM life-cycle handling. This
mechanism is completely automatic and occurs in the following scenarios:
State pre-condition not satisfied A reached state's pre-condition is no longer satisfied, and there are
subsequent states that are reached and contain reverse diff-sets.
Plan component is removed A plan component is removed and it has reached states that
contain reverse diff-sets.
Service is deleted When a service is deleted, NSO sets all plan components to back-
tracking mode before deleting the service.
We will go through these different scenarios since they are important to understand. This is how NSO
can handle any life-cycle change.
[Figure: A nano service nano-plan with components self, component1 and component2]
While traversing the states, a create pre-condition that was previously satisfied may become non-satisfied.
If there are subsequent reached states that contain reverse diff-sets, the component must be set to
back-tracking mode. The goal of back-tracking mode is to revert all changes up to the state that
originally failed to satisfy its create pre-condition. While back-tracking, the delete pre-condition for each
state is evaluated, if it exists. If the delete pre-condition is satisfied, the state's reverse diff-set is applied,
and the next state is considered. If the delete pre-condition is not satisfied, a kicker is created to
monitor it. When the kicker triggers, a reactive-re-deploy is called and the back-tracking
continues until the goal is reached.
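Back-tracking toward a goal state can be sketched as follows. The function, the `config` map and the per-state reverse diff-sets are simplified stand-ins invented for this example; real reverse diff-sets are CDB change sets, not dictionary keys.

```python
# Illustrative back-tracking: revert reached states after the goal state,
# last reached state first, each gated by its delete pre-condition.

def backtrack(reached, reverse_diffs, delete_pre, goal, config):
    """Revert states in strict reverse order until `goal` is reached."""
    for state in reversed(reached[:]):
        if state == goal:
            break                              # goal reached: back to normal mode
        if not delete_pre.get(state, True):    # default: no delete pre-condition
            return state                       # would create a kicker and wait here
        for key in reverse_diffs[state]:       # apply the state's reverse diff-set
            config.pop(key, None)
        reached.remove(state)
    return None

config = {"syslog": "1.2.3.4", "ntp": "1.2.3.4", "dns": "1.2.3.4"}
reached = ["init", "syslog-initialized", "ntp-initialized", "dns-initialized"]
diffs = {"init": [], "syslog-initialized": ["syslog"],
         "ntp-initialized": ["ntp"], "dns-initialized": ["dns"]}

backtrack(reached, diffs, {}, goal="syslog-initialized", config=config)
# The states after the goal were reverted; only the syslog change remains.
```

An unsatisfied delete pre-condition would stop the loop mid-way, which is where the kicker-driven reactive-re-deploy resumes it.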
[Figure: component1 in back-tracking mode with goal=state2, while self and component2 remain in normal mode]
When the back-tracking plan component has reached its goal state, the component is set to normal mode
again. The state's create pre-condition is evaluated; if it is satisfied the state is entered, otherwise a
kicker is created as described above.
[Figures: the nano-plan with components self, component1 and component2, before and after the resumed state progression]
If the removed component contains reached states with reverse diff-sets, the deletion of the component is
deferred and the component is set to back-tracking mode.
[Figure: component1 set to back-tracking mode after being removed from the plan, alongside self and component2]
In this case there is no specified goal state for the back-tracking. This means that when all states have been
reverted, the component is automatically deleted.
[Figure: the nano-plan after component1 has been deleted, leaving self and component2]
Service is deleted
If a service is deleted, all components are set to back-tracking mode. If all delete pre-conditions are
satisfied, all reverse diff-sets are applied and the service is deleted. If not, the service becomes a zombie, i.e.,
it is stored away with its plan states so that the service configuration can be removed. More on zombies
below; here we focus on the back-tracking functionality.
[Figure: all components — self, component1 and component2 — in back-tracking mode after the service delete]
[Figure: the nano-plan with the remaining components self and component1 still in back-tracking mode]
When all components in the plan are deleted, the service is removed.
Zombie services
When a nano service is deleted and the delete cannot be performed in the current transaction (i.e., not all
delete pre-conditions are currently satisfied), the service will become a zombie (mutate). The reason for
this is that the original transaction containing the delete must be committed. Zombie services are stored
with their service data, their plan states and diff-sets in a separate list /ncs:zombies/services.
When a nano service zombie exists for a specific service instance, NSO will not allow creation of this
same service instance in the configuration tree. The zombie must be removed before the service can be
recreated.
When a service mutates to a zombie, all service pre-condition kickers are rewritten to activate the zombie
service instead. This zombie activation is performed with either the re-deploy or the reactive-re-
deploy action. The difference between the two actions is which user the action uses: re-deploy
uses the current user that initiated the action, whilst reactive-re-deploy keeps using the
same user that last modified the zombie service. The action performs the following steps:
1 Start a temporary transaction in which the service is reinstated (created). The service plan will have the
same status as it had when it mutated.
2 Back-track plan components in the normal fashion, i.e., removing device changes for states for which the
delete pre-conditions are satisfied.
3 If all components are completely back-tracked, the zombie is removed from the zombie list. Otherwise
the service and the current plan states are stored back into the zombie list, with new kickers waiting to
activate the zombie when some delete pre-condition is satisfied.
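The three steps can be sketched as a small simulation. The `zombies` map, the `zombie_redeploy` function and the `delete_pre_ok` predicate are invented stand-ins for this example; they model only the decision of step 3, not NSO's actual zombie storage.

```python
# Hedged sketch of the zombie re-deploy steps listed above.

def zombie_redeploy(zombies, path, delete_pre_ok):
    plan = zombies[path]                       # 1. reinstate the service with its saved plan
    plan["components"] = [c for c in plan["components"]
                          if not delete_pre_ok(c)]   # 2. back-track what we can
    if not plan["components"]:                 # 3a. fully back-tracked: drop the zombie
        del zombies[path]
        return "removed"
    return "stored-with-kickers"               # 3b. store back, kickers keep waiting

zombies = {"/top/myserv[name='m1']": {"components": ["self", "router"]}}

# First pass: only "self" satisfies its delete pre-conditions.
first = zombie_redeploy(zombies, "/top/myserv[name='m1']",
                        lambda c: c == "self")
# Later pass (kicker fired): all delete pre-conditions satisfied.
second = zombie_redeploy(zombies, "/top/myserv[name='m1']",
                         lambda c: True)
```

The zombie thus survives exactly as long as some component still has un-reverted states.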
The following example shows how three different nano services under a common container are deleted and
mutated in a transaction:
Example 188. Services mutation
ncs(config)# exit
ncs#
ncs# show zombies service delete-path
SERVICE PATH DELETE PATH
--------------------------------------------------------------------------------
/top/upper[id='u1']/dynserv[name='d1'] /top/upper[id='u1']/dynserv[name='d1']
/top/upper[id='u1']/inserv[name='i1'] /top/upper[id='u1']/inserv[name='i1']
/top/upper[id='u1']/myserv[name='m1'] /top/upper[id='u1']/myserv[name='m1']
For zombies the plan is still retrievable. This is important since changes in a zombie plan can be used as
a pre-condition for another service's plan state changes.
Example 190. Zombie actions
ncs# zombies service /top/upper[id='u1']/myserv[name='m1']
Possible completions:
resurrect Load the zombie back as service in current state.
re-deploy Revive the zombie and re-deploy it.
The re-deploy and the reactive-re-deploy mentioned earlier are the zombie-specific re-deploys.
The resurrect action will reinstate the zombie back in the configuration tree as a real service with the
current plan status, but with all plan components reverted from back-tracking to normal mode. It constitutes a
way of undoing a nano service delete up to the current status.
Behavior tree
A nano service behavior tree is a data structure defined for each service type. Without a behavior tree
defined for the service point, the nano service cannot execute. It is the behavior tree that defines the
currently executing nano-plan with its components.
Note This is in stark contrast to plan-data used for logging purposes, where the programmer needs to write the
plan and its components in the create() callback. For nano services the nano-plan must not be defined in
any other way than by a behavior tree.
The purpose of a behavior tree is to have a declarative way to specify how the service's input parameters
are mapped to a set of component instances.
A behavior tree is a directed tree in which the nodes are classified as control flow nodes and execution
nodes. For each pair of connected nodes, the outgoing node is called the parent and the incoming node is
called the child. A control flow node has zero or one parent and at least one child, and an execution node
has one parent and no children.
There is exactly one special control flow node called the root, which is the only control flow node without
a parent.
This definition implies that all interior nodes are control flow nodes and all leaves are execution nodes.
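The structural rule just stated (interior nodes are control flow nodes, leaves are execution nodes) can be checked with a small model. The nested-tuple tree representation and the `classify` function are invented for this example; they are not how NSO represents behavior trees internally.

```python
# Minimal model of the tree definition above: any node with children is a
# control flow node, any leaf is an execution node, and only the root has
# no parent.

def classify(tree):
    """Return {'control': [...], 'execution': [...]} from a (name, children) tree."""
    control, execution = [], []
    def walk(node):
        name, children = node
        if children:                  # at least one child: control flow node
            control.append(name)
            for child in children:
                walk(child)
        else:                         # leaf: execution node
            execution.append(name)
    walk(tree)
    return {"control": control, "execution": execution}

tree = ("selector", [("create-component self", []),
                     ("multiplier", [("create-component $VALUE", [])])])
kinds = classify(tree)
```

Here the selector and the multiplier come out as control flow nodes, and the two create-component leaves as execution nodes.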
When creating, modifying or deleting a nano service, NSO evaluates the behavior tree to render the current
nano-plan for the service. This process is called synthesizing the plan.
The control flow nodes have different behaviors, but they will all in the end choose to synthesize their children
in zero or more instances. When a control flow node is asked to synthesize itself, it executes its rules for
synthesizing its children. Synthesizing an execution node adds the corresponding plan component instance
to the nano service's plan.
All control flow and execution nodes may define pre-conditions, which must be satisfied in order to
synthesize the node. If a pre-condition is not satisfied, a kicker is started to monitor the pre-condition.
All control flow and execution nodes may define an observe monitor which will result in a kicker being
started for the monitor when the node is synthesized.
If an invocation of an RFM loop (e.g., a re-deploy) synthesizes the behavior tree and a pre-condition for a
child is no longer satisfied, the sub-tree with its plan-components is removed (i.e., the plan-components
are set to back-tracking mode).
Selector A selector node has a set of children which are synthesized as described above.
Multiplier A multiplier has a foreach mechanism that produces a list of elements. For each resulting
element, the children are synthesized as described above. This can be used, for example, to
create several plan-components of the same type.
Create component The create-component execution node creates an instance of the component type
that it refers to in the plan.
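The three node kinds can be combined into a single synthesis pass, sketched below. The dictionary tree shape, the `synthesize` function and the lambda pre-conditions are all invented for this illustration and do not mirror NSO's internal representation.

```python
# Illustrative synthesis of a behavior tree into a list of plan components.

def synthesize(node, params, plan):
    kind = node["kind"]
    if kind == "selector":
        for child in node["children"]:
            pre = child.get("pre")
            if pre is None or pre(params):     # unsatisfied pre -> kicker instead
                synthesize(child, params, plan)
    elif kind == "multiplier":
        for value in node["foreach"](params):  # one pass per produced element
            synthesize(node["child"], dict(params, VALUE=value), plan)
    elif kind == "create-component":
        # Execution node: add a plan component instance with a resolved name.
        plan.append(node["name"].replace("$VALUE", params.get("VALUE", "")))

tree = {"kind": "selector", "children": [
    {"kind": "create-component", "name": "self"},
    {"kind": "multiplier",
     "foreach": lambda p: p["endpoints"],
     "child": {"kind": "create-component", "name": "$VALUE"}}]}

plan = []
synthesize(tree, {"endpoints": ["ce0-pe0"]}, plan)
```

Re-running the pass with a changed `endpoints` list would yield a differently named component, which is exactly what sends the old one into back-tracking mode.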
It is recommended to keep the behavior tree as flat as possible. The most trivial case is when the behavior
tree creates a static nano-plan, i.e., all plan-components are defined and never removed. The following is
an example of such a behavior tree:
[Figure: behavior tree for servicepoint myserv-servicepoint — a selector creating plan-components self (init --> ready), router (init --> myserv:syslog-initialized* --> myserv:ntp-initialized* --> myserv:dns-initialized* --> ready) and extended (init --> myserv:interfaces-initialized* --> ready)]
Having a selector at the root implies that all plan-components are created that either have no
pre-conditions or whose pre-conditions are satisfied.
[Figure: behavior tree for servicepoint inserv-servicepoint — a root selector creating plan-component self (init --> ready) and, via a nested selector with a pre-condition, plan-components single-syslog (init --> syslog-initialized* --> ready), single-ntp (init --> ntp-initialized* --> ready), single-dns (init --> dns-initialized* --> ready) and single-interface (init --> interfaces-initialized* --> ready)]
This behavior tree has a selector node as root. It will always synthesize the "self" plan component and then
evaluate the pre-condition for the selector child. If that pre-condition is satisfied, it will create the four
other plan-components.
The multiplier control flow node is used when a plan component of a certain type should be cloned into
several copies depending on some service input parameters. For this reason the multiplier node defines
a foreach, a when and a variable. The foreach is evaluated, and for each node in the node-set
that satisfies the when, the variable is evaluated as the outcome. The value is used for parameter
substitution to form a unique name for each duplicated plan component.
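The foreach/when/variable evaluation can be sketched directly. The endpoint records and the three lambdas stand in for the node-set, the `when` XPath filter, and the `value-expr`; they are invented for this example.

```python
# Sketch of multiplier evaluation: foreach selects a node-set, when filters
# it, and the variable expression names each cloned plan component.

def multiply(endpoints, when, value_expr):
    names = []
    for ep in endpoints:                       # foreach over the node-set
        if when(ep):                           # when filter
            names.append(value_expr(ep))       # variable, e.g. $VALUE
    return names

endpoints = [{"id": "ep1", "vpe": False}, {"id": "ep2", "vpe": True}]

# Two multipliers over the same node-set with complementary when filters,
# as in the l3vpn behavior tree described above.
plain = multiply(endpoints, lambda e: not e["vpe"], lambda e: e["id"])
vpe = multiply(endpoints, lambda e: e["vpe"], lambda e: e["id"])
```

Each returned name becomes one plan component, so complementary `when` filters route an endpoint to exactly one component type.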
[Figure: behavior tree for servicepoint l3vpn-servicepoint — a selector creating plan-component self (init --> ready) and two multipliers, each with foreach /endpoint/id and creating plan-components named "$VALUE" (= current()): one when count(/l3vpn:onvpe[. = current()]) = 0 with states init* --> dev-setup --> qos-configured --> ready, and one when the count is non-zero with states init* --> pe-created --> ce-vpe-topo-added* --> vpe-p0-topo-added --> dev-setup --> qos-configured --> ready]
The value is also added to the nano service opaque, which enables the individual state nano service
create() callbacks to retrieve the value. The nano service opaque mechanism is important and is
explained in detail below.
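The opaque hand-over between state callbacks can be sketched as follows. The callback names and the seeded `DUMMY` value are illustrative; only the shape of the opaque — a list of name-value pairs returned by one callback and passed to the next — follows the text above.

```python
# Sketch of the opaque mechanism: each state callback receives the list of
# name-value pairs returned by the previously executed callback.

def self_init_create(opaque):
    # An early state seeds a parameter for later states.
    return opaque + [("DUMMY", "10.0.0.1")]

def router_ntp_create(opaque):
    # A later state reads the seeded parameter and adds its own.
    params = dict(opaque)
    return opaque + [("ntp-server", params["DUMMY"])]

opaque = []
opaque = self_init_create(opaque)
opaque = router_ntp_create(opaque)
```

Because every callback returns the (possibly extended) opaque, a value set in the "self" component's first state is visible to every state executed after it.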
list myserv {
  key name;
  leaf name {
    tailf:info "Unique service id";
    tailf:cli-allow-range;
    type string;
  }
  ncs:servicepoint myserv-servicepoint;
  uses ncs:service-data;
  uses ncs:nano-plan-data;
  leaf-list device {
    type leafref {
      path "/ncs:devices/ncs:device/ncs:name";
    }
  }
  leaf dummy {
    type inet:ipv4-address;
  }
  leaf extended {
    type boolean;
    default false;
  }
  leaf fail-state {
    type string;
  }
}
.
.
.
Next there needs to be a formal declaration of the plan and the behavior tree for this nano service. This is
done using YANG extensions. These declarations use YANG identities to denote component types and
plan states. Using identities ensures uniqueness and rules out syntactic errors like misspellings at compile
time.
In our example we use one specialized component type named router.
Finally, we have three specialized states: syslog-initialized, ntp-initialized and
dns-initialized. We use the word specialized here to distinguish these component types and states from
the standard predefined component type ncs:self and states ncs:init and ncs:ready.
.
.
.
identity router {
  base ncs:plan-component-type;
}
identity syslog-initialized {
  base ncs:plan-state;
}
identity ntp-initialized {
  base ncs:plan-state;
}
identity dns-initialized {
  base ncs:plan-state;
}
.
.
.
The ncs:plan-outline declares a plan with a given name. It contains a list of ncs:component-
type, which are the allowed component types for an instance of this plan.
Note Note that a nano service instance has a plan consisting of components, where each component is based on
a component type. The ncs:plan-outline YANG declaration does not state how many components
of a certain type a specific nano service instance will have, or even if all declared component types will be
used. It just spans all possible component types available for a specific nano service.
A nano service is expected to have an ncs:self component type; all other component types are optional.
Any component type, including ncs:self, is expected to have ncs:init as its first state and
ncs:ready as its last state. Any component type, including ncs:self, can have any number of
specific states between ncs:init and ncs:ready.
In our example ncs:plan-outline declaration we have two component types, ncs:self
and myserv:router. The ncs:self component type is chosen to contain just the required ncs:init and
ncs:ready states. The myserv:router component type, on the other hand, additionally defines
the myserv:syslog-initialized, myserv:ntp-initialized and myserv:dns-initialized
states.
ncs:plan-outline myserv-plan {
  description
    "My first plan";
  ncs:component-type "ncs:self" {
    ncs:state "ncs:init" {
      ncs:create {
        ncs:nano-callback;
      }
      ncs:delete {
        ncs:nano-callback;
      }
    }
    ncs:state "ncs:ready" {
      ncs:create {
        ncs:nano-callback;
      }
      ncs:delete {
        ncs:nano-callback;
      }
    }
  }
  ncs:component-type "myserv:router" {
    ncs:state "ncs:init";
    ncs:state "myserv:syslog-initialized" {
      ncs:create {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "syslog = 'true'";
        }
      }
      ncs:delete {
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "syslog = 'false'";
        }
      }
    }
    ncs:state "myserv:ntp-initialized" {
      ncs:create {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "ntp = 'true'";
        }
        ncs:post-action-node "/ncs:devices" {
          ncs:action-name "sync-from";
          ncs:result-expr "count(result) = 0";
        }
      }
      ncs:delete {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "ntp = 'false'";
        }
        ncs:post-action-node "/ncs:devices" {
          ncs:action-name "sync-from";
          ncs:result-expr "count(result) = 0";
        }
      }
    }
    ncs:state "myserv:dns-initialized" {
      ncs:create {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "dns = 'true'";
        }
      }
      ncs:delete {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "dns = 'false'";
        }
      }
    }
    ncs:state "ncs:ready";
  }
  ncs:component-type "myserv:extended" {
    ncs:state "ncs:init";
    ncs:state "myserv:interfaces-initialized" {
      ncs:create {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "interfaces = 'true'";
        }
      }
      ncs:delete {
        ncs:nano-callback;
        ncs:monitor "$SERVICE" {
          ncs:trigger-expr "interfaces = 'false'";
        }
      }
    }
    ncs:state "ncs:ready";
  }
}
.
.
The ncs:create and/or the ncs:delete declaration contains the specifics for a create or delete
(back-tracking) operation in this plan state. The ncs:nano-callback statement implies that a nano service
callback of this type is expected for the state. The ncs:monitor declarations specify the state's create
or delete pre-condition, respectively. The ncs:post-action-node declaration is the action to be
invoked after successful completion of the state. See the section called “Side effects” for more details on
post actions.
The nano service also requires a behavior tree, which is also declared using YANG extensions.
Our example behavior tree consists only of a single ncs:selector control flow node with two
ncs:create-component execution nodes. This implies that all service instances of this service type
will have a static plan with two components named "self" and "router".
.
.
.
ncs:service-behavior-tree myserv-servicepoint {
  description
    "My first behavior tree YANG extension test";
  ncs:plan-outline-ref "myserv:myserv-plan";
  ncs:selector {
    ncs:create-component "'self'" {
      ncs:component-type-ref "ncs:self";
    }
    ncs:create-component "'router'" {
      ncs:component-type-ref "myserv:router";
    }
  }
}
From a user perspective, a nano service and an ordinary RFM service with ncs:plan-data are very
similar. From a service developer perspective they are very different. The nano service has a defined
behavior tree that controls the plan data instantiation, and no explicit writing of plan data is allowed.
The nano service allows for registration of callbacks on any plan state in the plan. In addition to the
create() callback there is an optional delete() callback that is invoked when the corresponding plan
component is in back-tracking mode.
The create() nano service callback has the same purpose as, and is very similar to, the create()
callback of an ordinary service. The main difference is that it will not only set device configuration data
but also has the possibility to veto the resulting status for the respective plan state.
The delete() callback is not allowed to write configuration data since it is called during the removal
phase of a plan component. It exists for cases where it is necessary to calculate and veto
the resulting status for a back-tracking state. Remember that the delete pre-condition is already satisfied
and the default resulting state will be not-reached. This implies that registering a delete() callback is
seldom necessary if a delete pre-condition has been defined for the corresponding plan state.
The nano service needs a number of callback registrations, one for each plan component state. Note,
however, that not all states need callbacks at all. In many cases a state acts as a checkpoint that
some condition is satisfied. This is what the state pre-conditions do. A component's ncs:ready state is a
good example of this.
There is actually a complication with this flexible callback registration: there must be a way for the NSO
Service Manager to know whether all expected nano service callbacks have been registered. For this reason,
all nano service plan component states that require callbacks are marked with this information. When the plan
is executed, a mismatch between the callback markings in the plan and the actual registrations results in an
error.
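The consistency check described above can be sketched as a simple set comparison. The data structures and the `check_registrations` function are invented for illustration; NSO performs the equivalent check internally against the ncs:nano-callback markings.

```python
# Sketch: states marked with ncs:nano-callback must each have a matching
# registration, otherwise executing the plan fails with an error.

def check_registrations(marked, registered):
    missing = [cb for cb in marked if cb not in registered]
    if missing:
        raise RuntimeError("missing nano service callbacks: %s" % missing)

marked = {("self", "init"), ("router", "ntp-initialized")}

# Registering only one of the two marked callbacks triggers the error.
ok = True
try:
    check_registrations(marked, {("self", "init")})
except RuntimeError:
    ok = False

# With all marked callbacks registered the check passes silently.
check_registrations(marked, marked)
```

Note that the registrations may come from several daemons; only the union of all registrations has to cover the marked states.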
All callback registrations in NSO require a daemon to be instantiated. Normally the environment takes
care of this, e.g., the ncs-java-vm or the ncs-python-vm. For nano services it is allowed to have many
daemons, where each daemon is responsible for a subset of the plan state callback registrations. The neat
thing here is that it becomes possible to mix different callback types (Template/Python/Java) for different
plan states.
[Figure: mixed callback registrations against the nano-plan-data of a nano service — a template (servicepoint="myserv", component="comp1", state="state2"); a Python daemon calling register_nano_service_cb("myserv","comp1","state1") with cb_nano_create(self, tctx, root, service, component, state, proplist) and cb_nano_delete(self, tctx, root, service, component, state, proplist); and a Java daemon using registerNanoService("myserv","comp1","state1") with @NanoServiceCallback(servicePoint="myserv", component="comp1", state="state3", callType=NanoServiceCBType.CREATE) public Properties create(NanoServiceContext context, NavuNode service, NavuNode ncsRoot, Properties opaque)]
An important use-case for the mixed callback feature is the case where most of the callbacks are templates,
and only some are Java or Python. The reason for this is that a nano service, when applying a template,
tries to resolve the template parameters using the nano service opaque. This is a unique functionality for nano
services that makes Java or Python apply-template callbacks unnecessary. More on this below.
The state pre-conditions can have parameters. These parameters are substituted using the opaque.
Templates with parameters are also substituted using the opaque. Using this scheme, an early state, e.g., the
"self" component's "ncs:init" state, can have a callback that sets name-value pairs for all other states
that have templates, or other states that have specific pre-conditions that need service-instance-unique
definitions.
In the template above we find the DUMMY parameter. This parameter is expected to be defined with a value
in the nano service opaque; otherwise applying the template will fail.
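The substitution rule can be sketched with Python's `string.Template` standing in for the XML template engine; the `<server>` fragment and the `apply_template` helper are invented for the example, while the DUMMY parameter mirrors the one discussed above.

```python
# Sketch: template parameters are resolved from the opaque's name-value
# pairs; an undefined parameter makes applying the template fail.

from string import Template

def apply_template(text, opaque):
    try:
        return Template(text).substitute(dict(opaque))
    except KeyError as missing:
        raise RuntimeError("unresolved template parameter: %s" % missing)

# DUMMY is present in the opaque: the template renders.
rendered = apply_template("<server>$DUMMY</server>", [("DUMMY", "10.0.0.1")])

# DUMMY missing from the opaque: applying the template fails.
failed = False
try:
    apply_template("<server>$DUMMY</server>", [])
except RuntimeError:
    failed = True
```

This is why an early callback that seeds the opaque (as in the self/init examples below) is usually all the Java or Python code a mostly-template nano service needs.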
A Python create() callback is very similar to its ordinary service counterpart. The difference is that it
has additional arguments, component and state, which specify the component and state for which it is
invoked. The proplist argument is the nano service opaque (same naming as for ordinary services):
class NanoServiceCallbacks(ncs.application.NanoService):
    @ncs.application.NanoService.create
    def cb_nano_create(self, tctx, root, service, plan, component, state,
                       proplist, component_proplist):
        self.log.debug("NanoService create ", state)
        key = service.name
        fail_state = service.fail_state or ''
        myoper = ncs.maagic.cd(service, '/myserv:myoper')
        devroot = root.ncs__devices.device
        ip = service.dummy
        devices = service.device
        currstate = service.plan.component[component].state[state]
        pl = proplist  # default: pass the opaque through unchanged
        if fail_state == 'ntp':
            currstate.status = 'failed'
        else:
            self.log.debug("Not failed ", state)
            for q in devices:
                self.log.debug("handle device ", q)
                devroot[q].config.r__sys.ntp.server.create(ip)
            pl = proplist + [('ntp', 'INIT')]
            self.log.debug("Component = ", component)
            self.log.debug("state = ", str(state))
            currstate.status = 'reached'
        return pl

    @ncs.application.NanoService.delete
    def cb_nano_delete(self, tctx, root, service, plan, component, state,
                       proplist, component_proplist):
        service.plan.component[component].state[state].status = 'not-reached'
        pl = []
        for elem in proplist:
            if elem[0] != 'ntp':
                pl = pl + [(elem[0], elem[1])]
        return pl


# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS
# ---------------------------------------------
class NtpNano(ncs.application.Application):
    def setup(self):
        self.log.info('Worker RUNNING')
        self.register_nano_service('myserv-servicepoint',
                                   'myserv:router',
                                   'myserv:ntp-initialized',
                                   NanoServiceCallbacks)

    def teardown(self):
        self.log.info('Worker FINISHED')
The Python nano service callback needs a registration call for the specific servicepoint/componentType/
state for which it should be invoked.
For Java, annotations are used to define the callbacks for the component states. The registration of these
callbacks is performed by the ncs-java-vm. The NanoServiceContext argument contains methods for
retrieving the component and state for the invoked callback, as well as methods for setting the resulting
plan state status. Several componentType/state callbacks can be defined in the same Java class and will
then be registered by the same daemon.
public class myservRFS {

    @NanoServiceCallback(servicePoint="myserv-servicepoint",
                         componentType="ncs:self", state="ncs:init",
                         callType=NanoServiceCBType.CREATE)
    public Properties selfInitCreate(NanoServiceContext context,
                                     NavuNode service,
                                     NavuNode ncsRoot,
                                     Properties opaque,
                                     Properties componentProperties)
        throws DpCallbackException {
        System.out.println("reached create " + context.getComponent() +
                           ":" + context.getState());
        String ipval = "";
        try {
            ipval = service.leaf("dummy").valueAsString();
        } catch (NavuException e) {
            throw new DpCallbackException(e);
        }
        if (opaque == null) {
            opaque = new Properties();
            opaque.setProperty("DUMMY", ipval);
        }
        context.setReached();
        return opaque;
    }

    @NanoServiceCallback(servicePoint="myserv-servicepoint",
                         componentType="ncs:self", state="ncs:init",
                         callType=NanoServiceCBType.DELETE)
    public Properties selfInitDelete(NanoServiceContext context,
                                     NavuNode service,
                                     NavuNode ncsRoot,
                                     Properties opaque,
                                     Properties componentProperties)
        throws DpCallbackException {
        System.out.println("reached delete " + context.getComponent() +
                           ":" + context.getState());
        try {
            String sysVal = getRunningSyslogStatus(service);
            ConfBool extended = (ConfBool) service.leaf("extended").value();
            if (sysVal.equals("reached") ||
                (extended.booleanValue() &&
                 opaque.containsKey("interfaces"))) {
                context.setReached();
            } else {
                context.setNotReached();
            }
        } catch (Exception e) {
            throw new DpCallbackException("Exception in self:init", e);
        }
        return opaque;
    }

    @NanoServiceCallback(servicePoint="myserv-servicepoint",
                         componentType="ncs:self", state="ncs:ready",
                         callType=NanoServiceCBType.CREATE)
    public Properties selfReadyCreate(NanoServiceContext context,
                                      NavuNode service,
                                      NavuNode ncsRoot,
                                      Properties opaque,
                                      Properties componentProperties)
        throws DpCallbackException {
        System.out.println("reached create " + context.getComponent() +
                           ":" + context.getState());
        try {
            String sysVal = getRunningSyslogStatus(service);
            ConfBool extended = (ConfBool) service.leaf("extended").value();
            if (opaque.containsKey("dns") &&
                opaque.containsKey("ntp") &&
                sysVal.equals("reached") &&
                (!extended.booleanValue() ||
                 opaque.containsKey("interfaces"))) {
                context.setReached();
            } else {
                context.setNotReached();
            }
        } catch (Exception e) {
            throw new DpCallbackException("Exception in self:ready", e);
        }
        return opaque;
    }

    @NanoServiceCallback(servicePoint="myserv-servicepoint",
                         componentType="ncs:self", state="ncs:ready",
                         callType=NanoServiceCBType.DELETE)
    public Properties selfReadyDelete(NanoServiceContext context,
                                      NavuNode service,
                                      NavuNode ncsRoot,
                                      Properties opaque,
                                      Properties componentProperties)
        throws DpCallbackException {
        System.out.println("reached delete " + context.getComponent() +
                           ":" + context.getState());
        try {
            String sysVal = getRunningSyslogStatus(service);
            ConfBool extended = (ConfBool) service.leaf("extended").value();
            if (opaque.containsKey("dns") &&
                opaque.containsKey("ntp") &&
                sysVal.equals("reached") &&
                (!extended.booleanValue() ||
                 opaque.containsKey("interfaces"))) {
                context.setReached();
            } else {
                context.setNotReached();
            }
        } catch (Exception e) {
            throw new DpCallbackException("Exception in self:ready", e);
        }
        return opaque;
    }
}

@NanoServiceCallback(servicePoint="myserv-servicepoint",
                     componentType="*", state="myserv:ntp-initialized",
                     callType=NanoServiceCBType.CREATE)
public Properties genericNtpCreate(NanoServiceContext context,
                                   NavuNode service,
                                   NavuNode ncsRoot,
                                   Properties opaque,
                                   Properties componentProperties)
    throws DpCallbackException {
    ...
}
The generic callback can then act for a certain state in any component type.
Registration of the ordinary create() or pre_lock_create() callbacks will not fail for a nano
service, but they will never be invoked.
Side effects
Ordinary service callbacks as well as nano service callbacks must be side-effect free. Since the callbacks are
invoked during the transaction commit, no RPCs or other access outside of the NSO datastore are allowed.
The reasons for this are many: e.g., the callback can be invoked many times, for instance at dry-run, where
side effects are not expected.
On the other hand, calling RPCs or performing other side effects is common in Reactive FASTMAP services.
This has usually been handled by some CDB subscriber that acts on behalf of the service.
For nano services it is possible to define a post-action, which is a definition of an action that should be
asynchronously invoked if the state has been reached and after the commit has been performed. The state
definition also has a post-action-status leaf, which will be set to not-reached, create-reached,
delete-reached or failed depending on the case and result.
For this purpose there exists a side-effect-queue and a corresponding kicker which invokes the actions on
behalf of the nano services and reports the result in the respective state/post-action-status.
[Figure: the asynchronous side-effect-queue invoking the create action for /myserv/plan/comp1/state2 on behalf of the nano service's nano-plan-data]
Using this mechanism it is possible to define side effects for a nano service in a safe way. The result of a
post-action can be read from the invoking state in the plan. The next state in the chain can then have a pre-
condition that validates the previous state's post-action-status and proceeds if this condition is satisfied.
A post-action is only executed one time, i.e., if the post-action-status is already create-reached in the
create case, or delete-reached in the delete case, then new calls of the post-action are suppressed. In
dry-run operations, post-actions are never called.
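The queue semantics just described can be sketched as follows. The queue entries, the plan records and the `run_post_actions` function are simplified stand-ins invented for this example; NSO's real side-effect-queue is driven by a kicker after commit.

```python
# Sketch: post-actions run after commit, record their result in the state's
# post-action-status, run at most once, and are skipped entirely on dry-run.

def run_post_actions(queue, plan, actions, dry_run=False):
    executed = []
    if dry_run:
        return executed                          # never called during dry-run
    for path, kind in queue:                     # kind is "create" or "delete"
        status = plan[path]["post-action-status"]
        if status == kind + "-reached":
            continue                             # already done once: suppressed
        try:
            actions[path]()                      # invoke the configured action
            plan[path]["post-action-status"] = kind + "-reached"
        except Exception:
            plan[path]["post-action-status"] = "failed"
        executed.append(path)
    return executed

path = "/myserv/plan/comp1/state2"
plan = {path: {"post-action-status": "not-reached"}}
queue = [(path, "create")]

ran = run_post_actions(queue, plan, {path: lambda: None})
again = run_post_actions(queue, plan, {path: lambda: None})  # suppressed
```

A subsequent state's pre-condition can then simply test `post-action-status = 'create-reached'` on this path.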
identity dev-setup {
  base ncs:plan-state;
}

ncs:plan-outline link:link-plan {
  description
    "Make before break vlan plan";
  ncs:component-type "ncs:self" {
    ncs:state "ncs:init";
    ncs:state "ncs:ready";
  }
  ncs:component-type "link:vlan-link" {
    ncs:state "ncs:init";
    ncs:state "link:dev-setup" {
      ncs:create {
        ncs:nano-callback;
      }
    }
    ncs:state "ncs:ready" {
      ncs:create {
        ncs:monitor "$SERVICE/endpoints" {
          ncs:trigger-expr "test-passed = 'true'";
        }
      }
      ncs:delete {
        ncs:monitor "$SERVICE/plan" {
          ncs:trigger-expr
            "component[name != 'self'][./back-track = 'false']"
          + "/state[name='ncs:ready'][./status = 'reached']"
          + " or not(current()/component[back-track = 'false'])";
        }
      }
    }
  }
}
The following should be noted in the plan definition. There is only one nano service callback registered for
the service: on the "link:dev-setup" state in the "link:vlan-link" component type. This is represented
in the plan as:
ncs:state "link:dev-setup" {
  ncs:create {
    ncs:nano-callback;
  }
}
For the state "ncs:ready" in the "link:vlan-link" component type there are both a create and a delete pre-
condition. The create pre-condition for this state is:
ncs:create {
  ncs:monitor "$SERVICE/endpoints" {
    ncs:trigger-expr "test-passed = 'true'";
  }
}
This pre-condition implies that a component based on this component type will not be regarded as
finished until the test-passed leaf is set to "true". This illustrates a scenario where, after the initial setup of a
link has been configured by the "link:dev-setup" state, a manual test and setting of the test-passed leaf
are necessary before the link is regarded as finished.
The delete pre-condition implies that before the deletion ("back-tracking") of an old component starts, all new
components must have reached the "ncs:ready" state, i.e., have been tested. This illustrates a "create-before-
break" scenario where the new link is created first, and only when it is set up is the old one removed.
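The semantics of the delete pre-condition's XPath expression can be sketched in plain Python (the function and field names are illustrative, not an NSO API):

```python
def may_backtrack(components):
    """Illustrative evaluation of the delete pre-condition: an old
    component may be back-tracked once some non-self, non-back-tracking
    component has reached its ncs:ready state, or when no
    non-back-tracking components remain at all."""
    ready = any(c['name'] != 'self' and not c['back_track']
                and c['ready'] == 'reached'
                for c in components)
    none_active = not any(not c['back_track'] for c in components)
    return ready or none_active

# A plan with a new, not yet tested, replacement component:
plan = [
    {'name': 'self', 'back_track': False, 'ready': 'reached'},
    {'name': 'ex1-eth0-ex2-eth1', 'back_track': False,
     'ready': 'not-reached'},
]
```

With the data above the old component must wait; once the new component's ready state is reached, back-tracking may proceed.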
ncs:service-behavior-tree link-servicepoint {
  ncs:plan-outline-ref "link:link-plan";
  ncs:selector {
    ncs:create-component "'self'" {
      ncs:component-type-ref "ncs:self";
    }
    ncs:multiplier {
      ncs:foreach "endpoints" {
        ncs:variable "VALUE" {
          ncs:value-expr "concat(a-device, '-', a-interface,
                                 '-', b-device, '-', b-interface)";
        }
      }
      ncs:create-component "$VALUE" {
        ncs:component-type-ref "link:vlan-link";
      }
    }
  }
}
The ncs:service-behavior-tree is registered on the servicepoint "link-servicepoint" defined by the nano
service. It refers to the plan definition named "link:link-plan". The behavior tree has a selector on top,
which chooses to synthesize its children depending on their pre-conditions. In this tree there are no
pre-conditions, so all children will be synthesized.
First, there is a component "self", based on the "ncs:self" component type in the plan, which is always
synthesized.
Second, there is a "multiplier" control node that selects a node-set. A variable named VALUE is created
with a unique value for each node in that node-set, and a component of type "link:vlan-link" is created for
each node in the selected node-set. The name of each individual component is the value of the variable
VALUE. Since the selected node-set is the "endpoints" list, which can contain at most one element, this will
produce only one component. However, if the link in the service is changed, i.e., the old list entry is deleted
and a new one is created, the multiplier will create a component with a new name. This forces the old
component (which is no longer synthesized) to be back-tracked, and the plan definition above handles
the "create-before-break" behavior of this back-tracking.
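The naming scheme of the multiplier can be illustrated with a small Python sketch (the helper name and dictionary keys are assumptions, mirroring the concat() expression in the behavior tree):

```python
def component_name(endpoint):
    """Mirrors the ncs:value-expr concat(...) above: each
    link:vlan-link component is named after the fields of its
    endpoints list entry, joined by '-'."""
    return '-'.join((endpoint['a-device'], endpoint['a-interface'],
                     endpoint['b-device'], endpoint['b-interface']))

name = component_name({'a-device': 'ex1', 'a-interface': 'eth0',
                       'b-device': 'ex2', 'b-interface': 'eth0'})
# name is 'ex1-eth0-ex2-eth0'
```

Changing any endpoint field yields a new component name, which is exactly what triggers back-tracking of the old component.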
Now we create a service that sets up a link between the devices ex1 and ex2. It completes immediately,
since the test-passed leaf is set to true.
admin@ncs(config)# link t2 unit 17 vlan-id 1
admin@ncs(config-link-t2)# link t2 endpoints ex1 eth0 ex2 eth0 test-passed true
admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# commit
admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# top
The service sets up the link between the devices. We inspect the plan:
admin@ncs# show link t2 plan component * state * status
NAME STATE STATUS
---------------------------------------
self init reached
ready reached
ex1-eth0-ex2-eth0 init reached
dev-setup reached
ready reached
If we now modify the service, changing the link to use interface eth1 on device ex2, the commit will at
this point just add the new interface and not remove anything. This is because the test-passed leaf is
not yet set to "true" for the new component. We commit this change and inspect the plan:
admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit
admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# top
admin@ncs(config)# exit
admin@ncs# show link t2 plan
...
BACK ...
NAME TYPE TRACK GOAL STATE STATUS ...
-------------------------------------------------------------------...
self self false - init reached ...
ready reached ...
ex1-eth0-ex2-eth1 vlan-link false - init reached ...
dev-setup reached ...
ready not-reached ...
ex1-eth0-ex2-eth0 vlan-link true - init reached ...
dev-setup reached ...
ready reached ...
We see that the new component 'ex1-eth0-ex2-eth1' has not reached its ready state yet. Therefore the old
component 'ex1-eth0-ex2-eth0' still exists in back-track mode, waiting for the new component to
finish. If we check what the service has configured at this point, we get the following:
admin@ncs# link t2 get-modifications
cli devices {
     device ex1 {
       config {
         r:sys {
           interfaces {
             interface eth0 {
+              unit 17 {
+                vlan-id 1;
+              }
             }
           }
         }
       }
     }
     device ex2 {
       config {
         r:sys {
           interfaces {
             interface eth0 {
+              unit 17 {
+                vlan-id 1;
+              }
             }
+            interface eth1 {
+              unit 17 {
+                vlan-id 1;
+              }
+            }
           }
         }
       }
     }
 }
So both the old and the new link exist at this point. Now we set the test-passed leaf to true to force the
new component to reach its ready state. Once it does, the old component is completely back-tracked and
removed, because the new component is finished. We also check the service modifications and see that
the old link endpoint is removed:
admin@ncs# link t2 get-modifications
cli devices {
     device ex1 {
       config {
         r:sys {
           interfaces {
             interface eth0 {
+              unit 17 {
+                vlan-id 1;
+              }
             }
           }
         }
       }
     }
     device ex2 {
       config {
         r:sys {
           interfaces {
+            interface eth1 {
+              unit 17 {
+                vlan-id 1;
+              }
+            }
           }
         }
       }
     }
 }
Introduction
By using the tailf:des3-cbc-encrypted-string, tailf:aes-cfb-128-encrypted-string,
or tailf:aes-256-cfb-128-encrypted-string built-in types, it is possible to
store encrypted values in NSO. The keys used to encrypt these values are configured in ncs.conf and
are by default stored in ncs.crypto_keys.
Below follows a complete example of an application, written in Python, that provides encryption keys from a
plain-text file. The application is included in the example crypto/external_keys:
#!/usr/bin/env python
import os
import sys


def main():
    key_file = os.getenv('NCS_EXTERNAL_KEYS_ARGUMENT', None)
    if key_file is None:
        error('NCS_EXTERNAL_KEYS_ARGUMENT environment not set')
    if len(key_file) == 0:
        error('NCS_EXTERNAL_KEYS_ARGUMENT is empty')

    try:
        with open(key_file, 'r') as f_obj:
            keys = f_obj.read()
        sys.stdout.write(keys)
    except Exception as ex:
        error('unable to open/read {}: {}'.format(key_file, ex))


def error(msg):
    print('ERROR={}'.format(msg))
    sys.exit(1)


if __name__ == '__main__':
    main()
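To let NSO invoke such a command, it is referenced from ncs.conf. A sketch of the relevant fragment is shown below; the paths are illustrative, and the elements assume the /ncs-config/encrypted-strings/external-keys settings, where the command-argument value is passed to the command in the NCS_EXTERNAL_KEYS_ARGUMENT environment variable:

```xml
<encrypted-strings>
  <external-keys>
    <command>/path/to/external_keys.py</command>
    <command-argument>/etc/ncs/keys.txt</command-argument>
  </external-keys>
</encrypted-strings>
```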
Introduction
As a development feature, NSO supports sending log data as-is to an external command that reads it on
standard input. Since this is a development feature there are a few limitations; for example, data sent to the
external command is not guaranteed to be processed before the external application is shut down.
To support debugging of the external log command behavior, a separate log file is used. This debugging log
is configured under /ncs-config/logs/ext-log. The example below shows configuration of
./logs/external.log with the highest log level set:
<ext-log>
<enabled>true</enabled>
<filename>./logs/external.log</filename>
<level>7</level>
</ext-log>
By default, NED trace output is written to file, preserving backwards compatibility. To write NED trace
logs to file for all devices except the device example, which will use external log processing, the following
configuration can be entered in the CLI:
# devices global-settings trace-output file
# devices device example trace-output external
If both the external and file bits are set without setting /ncs-config/logs/external to
true, a warning message is logged to ext-log. If only the external bit is set, no logging
is done.
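An external log command simply reads the raw log data from standard input. The core loop of such a command might look like the following Python sketch (the function name and buffer size are illustrative; a real deployment would pass sys.stdin and a log file):

```python
import io


def forward_log(stream, sink, bufsize=4096):
    """Sketch of an external log command's core loop: read raw log
    data from the given stream (standard input in a real deployment)
    and forward it unchanged to the sink."""
    for chunk in iter(lambda: stream.read(bufsize), ''):
        sink.write(chunk)
    sink.flush()


# Example: feed some trace data through the forwarder in memory.
src = io.StringIO('NED trace line 1\nNED trace line 2\n')
dst = io.StringIO()
forward_log(src, dst)
```

Note that, per the limitation above, a command like this should tolerate being shut down before all pending data has been delivered.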