NETAVIS User Manual
English
NETAVIS Observer 4.8 User Manual (July 2018)
Copyright
Copyright © 2003-2018 NETAVIS Software GmbH. All rights reserved.
NETAVIS and Observer are trademarks of NETAVIS Software GmbH. All other trademarks are
trademarks of their respective holders.
NETAVIS Software GmbH
Lerchenfelder Gürtel 43
A-1160 Vienna
Austria
Tel +43 (1) 503 1722
Fax +43 (1) 503 1722 360
[email protected]
www.netavis.net
Contents
1 Introduction ...................................................................................................... 6
1.1 The Observer documentation set ...........................................................................................................6
1.2 What is new with this release...................................................................................................................6
1.3 Observer data security ..............................................................................................................................7
1.4 Video streaming methods and compression .......................................................................................7
2 Observer clients on multiple platforms ............................................................... 11
2.1 Introduction to Observer clients .......................................................................................................... 11
2.2 Starting the Observer client from a desktop web browser............................................................. 12
2.3 Working with the installed Observer client ........................................................................................ 16
2.4 Client multi-window and multi-screen operation............................................................................ 20
2.5 Client preferences ................................................................................................................................... 22
2.6 Mobile Observer clients ......................................................................................................................... 24
2.7 Observer Transcoding™ for low-bandwidth client-server connections (ABS) ........................... 29
2.8 Exiting the client ...................................................................................................................................... 30
3 Guidelines for setting up a new system with Observer .......................................... 31
3.1 Guidelines for setting up cameras ....................................................................................................... 31
3.2 Guidelines for setting up users ............................................................................................................. 32
3.3 Guidelines for setting up views ............................................................................................................ 32
4 Setting up cameras .......................................................................................... 33
4.1 Preparations............................................................................................................................................. 33
4.2 Adding a new camera and setting basic properties ........................................................................ 33
4.3 Setting up the camera recording archive........................................................................................... 42
4.4 Checking the Camera status ................................................................................................................. 42
4.5 Optional: Configuring video analytics (iCAT)..................................................................................... 43
4.6 Defining brightness, contrast, and saturation .................................................................................. 43
4.7 Working with camera groups................................................................................................................ 44
4.8 Changing the port mapping of analog cameras ............................................................................... 45
5 Managing users ................................................................................................ 46
5.1 Creating a new user account ................................................................................................................ 46
5.2 Setting general user privileges ............................................................................................................. 48
5.3 Setting camera access rights ................................................................................................................ 52
5.4 Working with user groups...................................................................................................................... 54
5.5 Defining Online Monitor views for a new user ................................................................................... 54
5.6 Information about logged-in users ..................................................................................................... 54
5.7 Changing the password ......................................................................................................................... 54
5.8 Working with Active Directory and LDAP users ................................................................................. 55
6 Using the Online Monitor .................................................................................. 56
6.1 Creating a new view ................................................................................................................................ 56
6.2 Selecting cameras ................................................................................................................................... 57
6.3 Navigating in the Online Monitor ......................................................................................................... 58
6.4 Modifying view port settings ................................................................................................................. 59
6.5 Zooming in a view port and in archive recordings ........................................................................... 62
1 Introduction
Thank you for choosing NETAVIS Observer 4.8 as the management software for your video monitoring
system. As you use it, you will find that Observer not only enables you to view live images and record
them, but also provides a full-scale platform for the intelligent utilization of your video data.
This User Manual guides you through the functionality of NETAVIS Observer 4.8.
If you have questions that are not answered in the Observer documentation set, please contact your
NETAVIS partner or get in touch with us directly.
We wish you a great experience with NETAVIS Observer 4.8.
Your NETAVIS Team.
Please note: Encryption and decryption can mean higher CPU overhead at the server and/or the
client. Any processing of the contents of an encrypted stream requires decryption and therefore
higher CPU overhead. For example, running video analytics on an encrypted video stream means
that it has to be decrypted at the server before processing. Likewise, storing an unencrypted
stream in an encrypted video database means that it has to be encrypted at the server before storing,
and displaying an encrypted video stream at the client means more CPU overhead at the client
because it has to be decrypted there.
On the other hand, simply storing an already encrypted video stream coming from the camera does
not need more CPU at the server than storing an unencrypted video stream.
The choice of video streaming method and compression influences:
• the bandwidth needed for transmission between cameras and servers as well as between servers
and clients,
• the CPU load at the server and the client induced by compression and decompression, and
• the storage requirements for recording.
For low-bandwidth client-server connections Observer offers the unique Transcoding™ feature (see 2.7
Observer Transcoding™ for low-bandwidth client-server connections (ABS) on page 29).
1.4.1 Multi streaming (multiple parallel video streams from the camera)
Some cameras are capable of providing multiple parallel video streams to Observer. This can be
helpful, for example, when online viewing and recording is to be done in different formats (e.g.
different sizes and frame rates) or for optimizing iCAT video analytics performance (see 15.2.1
Considerations for setting up a system with iCAT on page 148).
Usually, MJPEG cameras can deliver several MJPEG streams, while MPEG cameras (MPEG-4, H.264, and
MxPEG) can typically deliver only 1 or 2 MPEG streams; some camera types can deliver several
MJPEG streams in addition to the MPEG stream(s).
However, there are a few important restrictions with multi streaming:
• Some cameras have performance limitations in providing multiple streams, depending on the
streaming format, resolution, and frame rate. Some cameras simply stop streaming when their
streaming processors get overloaded by certain resolution and frame rate settings. Please refer
to the camera data sheet and documentation.
• In the current version Observer supports 1 format setting for MPEG streams (MPEG-4, H.264, and
MxPEG) and multiple format settings for MJPEG streams.
Please note: Please refer to the camera data sheet and documentation for camera limitations. The
document NETAVIS Observer Supported Video Sources may also provide further details on camera
restrictions.
At the cost of higher complexity, applying MPEG video compression means that the amount of
data transmitted across the network is less than with Motion JPEG. This is illustrated below, where
only information about the differences in the second and third frames is transmitted.
H.264 and MxPEG both work very similarly to MPEG-4; H.264, for example, needs only about 60%
of the bandwidth of MPEG-4 for roughly the same video quality. However, this efficiency does not
come for free: encoding and decoding H.264 needs more CPU power than MPEG-4. A general rule is
that the higher the compression factor, the heavier the CPU burden (in the server and in the clients).
Therefore there is always a tradeoff between bandwidth utilization and the CPU power needed.
1.4.4 Advantages and disadvantages of Motion JPEG and MPEG (MPEG-4, H.264, and MxPEG)
Due to its simplicity, Motion JPEG (MJPEG) is a good choice for many applications. JPEG is a
widely available standard, supported by many systems often by default. It is a simple
compression/decompression technique, which means the cost, in both system time and money, for
encoding and decoding is kept low. The time aspect means that there is limited delay between image
capture in the camera, encoding, transfer over the network, decoding, and finally display at the
viewing station. In other words, MJPEG provides low latency due to its simplicity (image compression
and complete individual images), and for this reason it is also well suited for situations where image
processing is to be performed, for example video motion detection or object tracking.
MJPEG gives a guaranteed image quality regardless of movement or complexity of the image scenes. It
still offers the flexibility to select either high image quality (low compression) or lower image quality
(high compression) with the benefit of lower image file sizes, thus lower bit-rate and bandwidth usage.
At the same time the frame rate can be easily controlled, providing a means to limit bandwidth usage
by reducing the frame rate, but still with a guaranteed image quality.
Since MJPEG does not make use of inter-frame video compression, it generates a relatively large
amount of image data that is sent across the network. For this reason, at a given image compression
level (defining the image quality of the I-frame and the JPEG image, respectively), the network
bandwidth used is lower for MPEG than for MJPEG, except at very low frame rates.
Another difference is that most MJPEG IP cameras can produce multiple simultaneous streams in
different qualities (image sizes and compression quality), while most MPEG cameras can produce only
one stream in one quality. Therefore the same MPEG stream will be used for live viewing and recording.
This summarizes the benefit of MPEG: the ability to give a relatively high image quality at a lower bit-
rate (bandwidth usage). This can be especially important if the available network bandwidth is limited,
or if video is to be stored (recorded) at a high frame rate and there are storage space constraints. The
lower bandwidth demands come at the cost of higher complexity in encoding and decoding, which in
turn contributes to a higher latency when compared to MJPEG.
The graph below shows in principle how bandwidth use compares between MJPEG and MPEG for a
given image scene with motion. As can be seen, at very low frame rates, where MPEG compression
cannot make much use of similarities between neighboring frames, and due to the overhead
generated by the MPEG streaming format, the bandwidth consumption of MPEG is actually higher
than that of MJPEG.
1.4.5 JPEG image sizes and storage requirements
Image size (typical resolutions): image quality and approximate JPEG size per image
• Small (176x144 QCIF PAL, 176x120 QCIF NTSC, 160x120 QQVGA): Low 3 KB, Medium 5 KB, High 8 KB
• Medium (352x288 CIF PAL, 352x240 CIF NTSC, 320x240 QVGA): Low 8 KB, Medium 13 KB, High 20 KB
• Large* (704x576 4CIF PAL, 704x480 4CIF NTSC, 640x480 VGA): Low 20 KB, Medium 34 KB, High 52 KB
* For mega-pixel cameras the image size will be much bigger than shown in the table.
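Since an MJPEG stream is simply a sequence of such JPEG images, bandwidth and storage needs can be estimated as frame size times frame rate. The following minimal sketch (not part of Observer) illustrates this arithmetic with values from the table above; the chosen frame size (13 KB, Medium size at Medium quality), the frame rate, and the recording duration are assumptions for illustration only.

```python
# Rough MJPEG bandwidth/storage estimate from an average JPEG frame size.
# Frame size, frame rate, and duration below are illustrative assumptions.

def mjpeg_estimate(frame_size_kb: float, fps: float, hours: float = 24.0):
    """Return (bandwidth in kbit/s, storage in GB) for an MJPEG stream."""
    bandwidth_kbit_s = frame_size_kb * 8 * fps              # 1 KB ~ 8 kbit
    storage_gb = frame_size_kb * fps * 3600 * hours / 1024 / 1024
    return bandwidth_kbit_s, storage_gb

# Example: Medium size, Medium quality (~13 KB per image) at 5 fps, recorded for 24 h
bw, gb = mjpeg_estimate(frame_size_kb=13, fps=5)
print(f"Bandwidth: ~{bw:.0f} kbit/s, storage per day: ~{gb:.1f} GB")
# -> Bandwidth: ~520 kbit/s, storage per day: ~5.4 GB
```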
2 Observer clients on multiple platforms
2.1 Introduction to Observer clients
You can choose among these Observer clients, operating systems and platforms:
• Desktop web browser: MS Windows XP, Vista, 7, 8, 10 (no support for joysticks for PTZ control)
• Client for iPad: Apple iPad with iOS 6 or later (live viewing and archive access, with MJPEG streams)
• Client for Smartphone & Tablet: many smartphone platforms and OSes (live viewing, with MJPEG streams)
Note: The Mac OS X operating system is no longer supported as of Observer 4.0.
Please be aware that some functions like Layout Navigation and running SAFE export files are only
available on the MS Windows platform.
This chapter describes how to start the Observer client on a desktop PC. If you want to run Observer on
a mobile device please refer to 2.6 Mobile Observer clients on page 24.
The minimum screen resolution for running the Observer client is 1024x768 pixels.
Licensing issues
The available functionality of your Observer installation is defined by the license file. The document
NETAVIS Observer Server Installation and Administration describes how licensing works and how to
obtain a license string. If you have a temporary demo license, a License dialog appears at every login
indicating that there is no permanent license. At this dialog just press the Continue button to operate
Observer in the demo mode.
See 11.1 Server system information and restarting on page 112 for how to display the current license of
your server.
• Lazy-loading client technology: Observer clients (both browser-based and locally installed)
load application components from the server on demand, only when they are needed. This saves
time at startup and also bandwidth, and it eases the management and upgrading of clients.
Libraries for one version are downloaded only once and are then stored locally on the client
machine. The path is <user’s home directory>\netavisLibs\<version> (e.g.
C:\Documents and Settings\user\netavisLibs\4.4.5.158.634).
• Automatic client upgrading: Whenever the Observer server is upgraded to a new version, the
clients are automatically upgraded too. This happens transparently to the user. The same
client will still be able to work with older server versions (see next point). Since release 1.9 the
client application has to be installed only once, and every further Observer version will be
seamlessly accessible without having to manually upgrade the client.
• Different versions between servers and clients: Since release 1.9 Observer clients can
connect to servers running different versions (release 1.9 or newer) without the need to install
clients matching the servers’ versions.
As a summary, Observer clients...
Now a page appears that checks the availability of Java on your computer. This page should
disappear in a few seconds.
If this page stays, the Java plug-in is not installed in your browser. First install Java by
visiting www.java.com/download to get the latest Java package and then restart the browser.
Now the screen should disappear and you can continue as shown below.
2. Next a startup screen is displayed that lets you choose the language that you want to use. Click
on your language of choice and push Start. This takes you to the start page of Observer:
3. Click on Start Observer client (from the browser using Web Start). Depending on the
browser you are using, you might be asked whether you want to execute the Java JNLP link. Click
on OK (to make the browser start it automatically, see 2.2.1 Optimizing the Web Start behavior of
your browser on page 15). Now you will be advised that the program is being loaded. How long
loading takes depends on your network.
On completion of loading, you will be notified that Observer is initializing. Then user data are
loaded.
Note: By clicking on Install the Observer client on your PC you can also install the Observer
client on your machine locally (see 2.3 Working with the installed Observer client on page 16).
4. Before starting the Observer client you will be asked whether you allow the downloaded
trusted applet to be executed:
Select the checkbox …always trust… and click on Yes to allow the download of the trusted
applet.
When you start the client for the first time after a new installation, some additional client application
components need to be installed (this is needed only once per client). You will be asked:
Normally you will want to choose to install the program components from Server over network.
However, if you have a very slow network connection between the client and the server, you
might want to choose installation from Local media. When you choose this option you will be
asked to locate the directory ClientInstaller of the Observer installation CD. Once you choose
the correct location and push OK, the components will be installed.
5. Next you either come to the login panel or to the license dialog.
If the license dialog appears, you still have to obtain a license for using Observer. Please consult
the manual NETAVIS Observer Server Installation and Administration for information on how to do
that.
At the login panel enter your Login name and Password and click OK.
If you do not yet have a permanent license for Observer, a License dialog appears. To continue
without a permanent license just press Continue. In this case the full functionality may not be
available. See also 2.1 Introduction to Observer clients on page 11 for further details about
licensing.
Please note: The authentication data transferred between client and server are encrypted with MD5
strong encryption. The administration user admin has the default password admin. For security
reasons you should change this password (please see 5 Managing users on page 46)!
A guest login is possible only if the guest has been defined on your server (which is the factory setting).
For further details contact your Observer administrator.
If you have forgotten your password, you can select the Forgot my password checkbox, answer the
question that is asked, and click OK. For more information, ask your Observer administrator.
The Starter allows you to manage sessions. In a session you can define the server address, which
Observer application to start and the user/password details. Thus you can store and run different
sessions easily.
The Session editor is opened when you press Add new or Modify and allows you to define the
session details:
Element Description
Session name Name under which the session is stored.
Reconnect count Here you can set how many times the client will try to reconnect to a server to which it has lost its connection.
Pressing Start in the Starter opens the client window and connects you to the Observer server.
When the client is started, the Starter window is hidden and you will see its icon in the task bar.
By clicking this icon you can open the Starter dialog again and also force an exit of the client
application.
When you start the client for the first time after a new installation, some additional client application
components need to be installed (this is needed only once per client). You will be asked:
Normally you will want to choose to install the program components from Server over network. However,
if you have a very slow network connection between the client and the server, you might want to
choose installation from Local media. When you choose this option you will be asked to locate the
directory ClientInstaller of the Observer installation CD. Once you choose the correct location and
push OK, the components will be installed.
Note: If you enable the Download client without asking option, you won't be presented with this
dialog.
If you do not yet have a permanent license for Observer, a License dialog appears. To continue
without a permanent license just press Continue. In this case the full functionality may not be
available. See also 2.1 Introduction to Observer clients on page 11 for further details about licensing.
It is also possible to start multiple sessions of the Observer client with a single shortcut. To do so you
need to right-click on an existing shortcut to the Observer client and open its Properties. In the
Target field you can then add the names of the saved sessions which you would like to be started
upon clicking on the shortcut (e.g. "C:\Program Files\NETAVIS Observer\na.bat"
"4.5.2 NCS" "4.5.2 NUS" will open the two saved sessions called "4.5.2 NCS" and "4.5.2
NUS"):
By default, the downloaded client application components will be installed in the directory
%USERPROFILE%\netavisLibs (e.g. C:\Documents and
Settings\user\netavisLibs). You can change this directory by setting the environment
variable NETAVIS_DIRECTORY. The directory must exist prior to starting the Observer client. If the
variable does not exist, the client components will be installed in the standard directory (see the
sketch after the steps below for a quick way to check the setting).
Here is how you can set an environment variable in Windows 7:
1. Right-click on the Computer icon in your Explorer or on your Desktop and choose Properties.
2. In the System window click on Advanced system settings in the left pane.
3. In the System Properties window select the Advanced tab and click on the button Environment
Variables at the bottom of the dialog.
4. In the Environment Variables window you will notice two tables: User variables for the current
user and System variables for all users.
5. To add a new User variable click on the New… button. In the New User Variable dialog box enter
the variable name NETAVIS_DIRECTORY and the location of the directory and then click OK.
The default location would be %USERPROFILE%\netavisLibs.
6. Click OK in the Environment Variables dialog window and close the other dialogs as well.
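As a quick check, the following minimal Python sketch (not part of Observer) verifies whether NETAVIS_DIRECTORY is set and points to an existing directory, mirroring the behavior described above: if the variable is missing, the standard directory under %USERPROFILE%\netavisLibs would be used.

```python
import os

# Minimal check of the NETAVIS_DIRECTORY environment variable (illustrative only).
custom_dir = os.environ.get("NETAVIS_DIRECTORY")
default_dir = os.path.join(os.environ.get("USERPROFILE", ""), "netavisLibs")

if custom_dir is None:
    print(f"NETAVIS_DIRECTORY is not set; client components go to {default_dir}")
elif os.path.isdir(custom_dir):
    print(f"NETAVIS_DIRECTORY is set and exists: {custom_dir}")
else:
    # The directory must exist before the Observer client is started.
    print(f"NETAVIS_DIRECTORY is set to {custom_dir}, but this directory does not exist yet")
```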
The Starter also allows you to manage sessions with Observer servers running releases older
than 4.0. Just select the checkbox Show settings for pre-4.0 releases to show the additional
settings.
Description of elements:
Element Description
Window width, height Define the size of the client window in pixels.
Upper left corner X, Y Define the location of the upper left corner of the client window in
pixels. This setting can be changed, e.g., for multi-screen setups.
Initial monitor view name Is optional and defines the initial Online monitor view.
Window decoration visible Defines whether the windows decoration border is visible.
Tool control bar visible Defines whether the tool control bar at the right side of the window
is visible. This bar allows switching between Online Monitor, Archive
Player, Event Management and Administration.
Event bar visible Defines whether the event bar at the bottom of the window is
visible.
Online monitor control bar visible Defines whether the menu and the history buttons for Online
monitor control are visible.
Overlay painting enabled When this is selected and the hardware supports it, the Online
monitor uses the hardware overlay technique for displaying flicker-free MPEG streams. This can
also boost the client performance and relieve the main CPU.
In multi-screen operation, overlay painting of MPEG video streams may result in pink colored view
ports if the client is not running on the primary screen of Windows. In that case you should turn
off this feature.
The window contents and positions are automatically stored per user and between sessions. This
means when a user has multiple windows positioned on 3 screens and he exits and then logs in again,
all the windows on the 3 screens will automatically be restored.
Please note: There can only be one Archive player and this is located in the main window.
• Where they are located inside the window: Just click on the title bar of the Event list or the
Camera tree and drag it to a different location. For example, when the Online monitor and the
Event list are enabled, then you can drag the Event list from the default position at the right
border of the window to the bottom of the window.
• Whether they are floating (opened when needed or when there are new entries, and collapsed or
hidden otherwise) or pinned (always shown): You can change the state of the Event list or the Camera
tree by pinning or unpinning it in the title bar.
• Whether the Event list or the Camera tree should fill the whole window: You can maximize the
component by pushing the maximize button. When a component is maximized, it will
occupy the whole space of the window and will automatically resize when you resize the
window. Thus you can create a window that contains only an Event list.
After you are done with the modifications deselect Windows->Enable layout customization.
The default location for the Event list is the right window border and for the Camera tree (that usually
only pops up when needed) it is the left window border.
Note: Only the Event list and the Camera tree can be dragged. The other components in the window
will then change their size accordingly.
Note: See 2.5 Client preferences on page 22 for information on how to enable full screen mode for a
window.
All the window states will be remembered between sessions. This happens on a per-user basis, which
means when the user logs in on a different client workstation the same window setup as on the first
client workstation will be shown.
Note: Depending on the authorization that your Observer administrator has assigned to you, some of
the client components could be disabled (menu shown in grey color). If you need more authorization,
please contact your local Observer administrator.
Note: Normally it is not necessary to always display the camera tree since it is displayed on
demand. Also note that as of NETAVIS Observer 4.6.2 the state of the camera tree (which groups
are and are not expanded) is stored per user.
Note: These debug options are only needed for advanced error diagnosis.
Note: These mobile clients do not support the full functionality of desktop clients (see 2.1 Introduction
to Observer clients on page 11 for more details). When using the Mobile Client, the video streams of
cameras that do not provide an MJPEG stream to the NETAVIS Observer server have to be transcoded
on the server, which requires additional CPU power.
with an empty view. Click on the menu button and select View Config to create a view.
4. By clicking on the menu button again you can choose between different options:
• Leave view config: Leave the view configuration
• Save: Saves the current view configuration.
• Create new view: Allows you to create a new view and configure it with a Name (with up
to 20 characters), the desired number of view ports (max. 3x2) and the default fit mode
(Letterbox, Stretch or Crop).
• View properties: Allows you to change the currently selected view.
• Delete current view: Deletes the current view.
• Logout: Logs out of the Mobile Client.
5. To add a camera to a view port select it in the camera tree on the left-hand side and drag it into
the desired view port. You can also search for cameras using the search box at the top.
Note: A preview of the camera image is only visible in the view configuration if the camera is
configured to provide an MJPEG stream to the NETAVIS Observer server. In the actual view, however,
the camera's stream will be shown regardless of its codec.
6. Once you have added a camera you can change its view configuration by double-tapping on it.
Changing its size allows you to have a camera fill multiple view ports. It is also possible to
overwrite the view's fit mode with a camera specific one.
7. To remove a camera from a view port you can either select and drag it back to the camera tree or
select Remove camera from the camera configuration.
8. After selecting Leave view config you will see the configured view:
9. To switch between different views, you can swipe left / right on touch-enabled devices, use the
View menu in the top-right corner, or move between views with the Backward and Forward
buttons.
10. By tapping on a camera in a view it will be opened in a large view.
11. In a camera's large view you can select to play back archive recordings by tapping on the Archive
button:
12. The playback can be started and stopped by clicking on the video and the usual playback buttons
are available at the bottom. Additionally, there is a shortcut menu on the right which allows you to
select the playback of different intervals, including a custom one where you can enter a specific
timeframe for playback.
13. You can return to the large live view or to the view by clicking on the Back button:
14. On PCs the Mobile Client also offers keyboard shortcuts for often used functions:
• Application-wide:
• ENTER: Confirm action or dialogue
• ESC: Dismiss popup
• q: logout
• Main view:
• e: jump to view configuration
• left-arrow: jump to previous view
• right-arrow: jump to next view
• Live/Archive view:
• left-arrow: jump to live view
• right-arrow: jump to archive view
• View configuration:
• p: Show view properties
• n: Create new view
• DELETE: Delete view
• l: Select letterbox in view / camera properties
• s: Select stretch in view / camera properties
• c: Select crop in view / camera properties
Else you first have to tap Add layout, select the desired layout and then add cameras to the
individual view ports. It is possible to add multiple layouts and switch between them using the
arrow buttons or dropdown menu in the middle of the navigation bar:
4. When you are finished using it, it is recommended to log out of the NETAVIS Client for iPad via the
Logout button on the overview screen (although there is an automatic logout after 10 seconds).
4. You can choose whether to watch the live-stream in a Small, Medium or Large resolution and
which Letterbox mode should be used.
Please note: Changing the resolution only works for cameras where multiple MJPEG streams are
supported. Please see the NETAVIS Observer Supported Video Sources document for further
information.
5. When you are finished using it, it is recommended to log out of the Client for Smartphone & Tablet
via the Logout button (although there is an automatic logout after 10 seconds).
operated over very low bandwidth connections that would normally prevent their operation. The
technology works for all cameras, streaming resolutions, and formats (including MPEG-4, H.264, and
MxPEG).
Additional CPU resources are needed at the server and at the client for transcoding streams.
Transcoding™ can be set up at the server level by limiting the bandwidth for live video and recording
playback streams as well as for recording exports (see 11.2 Setting Observer server parameters on page
114).
Important: Although transcoding works with all streaming formats, the best results and least CPU
overhead are possible with MJPEG streaming. Also the transcoding bandwidth limit must be chosen
carefully.
Therefore we suggest:
• Use the MJPEG streaming format.
• Limit the transcoding bandwidth to approx. 70% of the available server-client connection
bandwidth.
For connections, like Internet connections, with heavily varying bandwidth it is much better to use a
lower limit than a higher limit. With low limits of 256 kbit/s or 128 kbit/s very good results are possible.
Some customers have even used 56 kbit/s or 30 kbit/s with transcoding. As mentioned above the
limits are defined in 11.2 Setting Observer server parameters on page 114.
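As a rough illustration of the 70% rule above, the following minimal sketch (not part of Observer; the example connection bandwidths are assumptions for illustration) computes a suggested transcoding limit from the available server-client bandwidth:

```python
# Suggest a transcoding bandwidth limit as ~70% of the measured
# server-client connection bandwidth (illustrative helper, not an Observer API).

def suggested_transcoding_limit(available_kbit_s: float, factor: float = 0.7) -> float:
    return available_kbit_s * factor

for available in (384, 256, 128):   # example connection bandwidths in kbit/s
    limit = suggested_transcoding_limit(available)
    print(f"{available} kbit/s available -> limit ~{limit:.0f} kbit/s")
# 384 kbit/s available -> limit ~269 kbit/s
# 256 kbit/s available -> limit ~179 kbit/s
# 128 kbit/s available -> limit ~90 kbit/s
```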
3 Guidelines for setting up a new system with Observer
Please note: All of these steps should be taken by the administrator user (usually login admin).
Here we describe the steps for one server (for the setup of connected servers please see 12 Working
with interconnected Observer servers on page 121):
Note: NETAVIS Observer 4.7 introduced the Camera Import/Export Wizard which speeds up and
facilitates the initial camera configuration (see 4 Setting up cameras on page 33 for details).
4 Setting up cameras
NETAVIS Observer allows any authorized user to set up cameras in the system.
4.1 Preparations
Before you begin to set up a new camera in Observer, be sure to have the following information
available:
• use the Camera Import/Export Wizard to run an automatic camera discovery on your LAN or
add cameras from an Excel sheet
• add a new camera from scratch, or
• duplicate an existing camera and just modify some parameters (see 4.2.3 Duplicating an existing
camera on page 41). Duplicating an existing camera creates an exact duplicate with all settings
copied. This is useful when you have more than one camera of the same type or with
equal/similar settings (like recording or video analytics settings).
To modify a setting before importing the camera to the system select the desired camera(s) (or use the
Select all button) and then click on the column header (e.g. Comments) you want to edit. This will
open a separate window where the desired values can be entered or selected.
Hint: A common issue is that no camera group has been set for the discovered cameras.
By clicking Finish the cameras will be imported into the system and added to the corresponding
camera groups.
Camera name Enter the name that you want to give your camera. This is the name
by which you will select or display this camera (e.g., camera 2).
Comment Here you can add text that describes your camera (up to 255
characters).
Time zone Select the time zone of your camera's location (e.g., CET for a
Camera type Specify the type of camera by selecting it from the camera pop-up
menu.
Important: If you want to connect an analog camera via a video
server then select the type of the video server from the menu. If you
connect an analog camera to a video capture card directly in the
Observer server, then select NDS (Observer Digitizer Server) as your
camera type.
Name of camera admin If the camera needs authentication for administrating, then enter
the user name of the camera administrator here.
Pwd of camera admin Enter the password of the administrator account of the camera
(only if used).
Camera IP address Specify the IP address or network name of your camera or video
server. If you supply a network name, you must have access to a
domain name server (DNS) that resolves the name to an IP address.
Even dynamic DNS names (like dyndns) can be used. This field is
not needed for analog cameras connected directly via a video
capture card (NDS).
Camera server port If you are adding an analog camera via a video capture card (NDS)
or via a video server, specify the port of the capture card or video
server to which the camera is connected.
Aspect ratio This setting is only enabled for certain IP cameras that are shipped
with different aspect ratios (like PAL or NTSC). For cameras that are
delivered in one standard only, the correct value is set
automatically and cannot be changed. Please select the correct
value for your camera. If you select a value that does not fit your
camera, then the image might be distorted. Please refer also to 1.4.5
JPEG image sizes and storage requirements on page 10.
Streaming mode (Mobotix only) This option is only relevant if you use a Mobotix camera. Set it when
you would like to operate the camera in streaming mode. If this
option is not activated, the camera operates in single picture mode.
In streaming mode (MJPEG format) the camera delivers higher
frame rates than in single picture mode. If you activate the
streaming mode you also must set the corresponding option in the
camera with the Admin tools of the camera.
Use HTTPS encryption If the camera supports encrypted streaming using HTTPS this
setting will be enabled. It defines whether streaming from the
JPEG RTSP URL If you want to use the generic RTSP driver to get a JPEG stream from
a camera - which requires selecting "Generic RTSP" as the Camera
Type - then you have to enter the RTSP port of the camera into the
first field (if it is left empty then the default port 554 is used) and the
specific URL into the second field. The IP address and (if applicable)
username and password of the camera admin are used from the
corresponding configuration fields above. The stream configuration
(stream resolution, fps, etc.) has to be done in the camera itself.
Note: Ensure that on the Basic video and audio settings page
(see the next section for details) you only enable the JPEG/MPEG-
4/H.264 checkboxes if you have entered a corresponding RTSP URL
here.
H.264/MPEG-4 RTSP URL The configuration of an H.264/MPEG-4 stream via RTSP works just
like the JPEG stream configuration (see above for details).
Multi stream allowed Some cameras are capable of providing multiple video streams in
parallel. This can be helpful, for example, when online viewing and
recording are to be done in different formats or for optimizing iCAT
video analytics performance (see 1.4 Video streaming methods and
compression on page 7 for a general discussion of multi
streaming).
Usually MJPEG cameras can deliver several MJPEG streams while
MPEG (MPEG-4, H.264, and MxPEG) cameras deliver only 1 MPEG
stream (some camera types can deliver several MJPEG streams in
addition to the MPEG stream).
If this option is not selected, then only 1 video stream will be pulled
from the camera regardless of how many different formats would
be needed.
If it is selected then multiple streams will be pulled.
Allow JPEG streaming If your camera supports MJPEG video streaming, you can allow
using this mode by selecting this checkbox.
In one special case, Observer will try to pull an MJPEG stream even
if this checkbox is deselected. This is the case when iCAT video
analytics is working on MPEG streams (e.g. for motion detection)
and Multi stream allowed is selected. This special additional
MJPEG stream in QVGA is used only for iCAT and helps to minimize
server CPU load for iCAT processing (please refer also to 15.2.1
Considerations for setting up a system with iCAT on page 148).
Allow MPEG-4 streaming If your camera supports MPEG-4 video streaming, you can allow
using this mode in the Online Monitor and the archive by marking
this checkbox.
Please note that platform restrictions may apply for this streaming
mode (please refer to 2.1 Introduction to Observer clients on page
11).
Allow H.264 streaming If your camera supports H.264 video streaming, you can allow using
this mode in the Online Monitor and the archive by marking this
checkbox.
Please note that platform restrictions may apply for this streaming
mode (please refer to 2.1 Introduction to Observer clients on page
11).
Allow MxPEG streaming If your camera supports MxPEG video streaming, you can allow
using this mode in the Online Monitor and the archive by marking
this checkbox.
Please note that platform restrictions may apply for this streaming
mode (please refer to 2.1 Introduction to Observer clients on page
11).
Stream MPEG-4 via Multicast This option should only be switched on in very special situations.
When marked, the MPEG-4 stream from the camera is received
via "multicast"; when disabled, via "RTSP over HTTP". Multicast is a
one-to-many connection, while RTSP is a one-to-one connection.
Mark this checkbox only if you want to have multicast MPEG-4
streaming of the camera. In most cases you want to leave this
Allow Audio to camera When marked, Observer will allow Audio to the camera and will
offer a button for that in the Online Monitor. Of course this feature
only works if you have a working microphone connected to the PC
on which you run your client and your camera has a loudspeaker
function.
Allow Audio from camera When marked, Observer will allow Audio from the camera to your
client (in the Online Monitor and for recordings). Of course this
feature only works if you have a working loudspeaker connected to
the PC on which you run your client. Please note that platform
restrictions may apply for this streaming mode (please refer to 2.1
Introduction to Observer clients on page 11).
Audio from and to share single button If this checkbox is marked, then there is only one button in the
Online Monitor that switches MPEG and Audio on and off. If this
checkbox is not marked, then you will find 3 buttons for the three
functions. See also 6.9 Working with MPEG cameras and audio on
page 67.
Anonymize (distort) audio In some cases laws do not allow the transmission or recording of
people's voices. The criterion often is whether one can recognize a
person by listening to their voice. Therefore Observer allows
anonymizing the audio stream by distorting it. If this checkbox is
marked then the live and recorded audio will be anonymized
(distorted).
The Default settings for MPEG-4, H.264, MxPEG and Default settings for single-stream
JPEG cameras define the streaming format for the various streaming types. These settings will
be used in the Online Monitor and also for recordings. For multi-stream JPEG cameras, the
Default settings for single-stream JPEG cameras are of no importance, because separate
formats can be specified for each view port in the Online Monitor and also for recording.
Please note: In the current version, Observer only supports 1 streaming format for MPEG
cameras for live viewing and recording.
Use these settings Usually this checkbox must be set. It defines whether the default
camera settings are set via Observer or in the camera directly (via its
own setting utilities). When it is switched on, the settings in the
camera are overwritten by the values of this dialog. When it is
Image quality Defines the image quality of the stream. The possible values are
High, Medium, and Low quality. The higher the quality, the more
bandwidth will be used for transmission and the bigger the space
requirements for recording will be (see also 1.4 Video streaming
methods and compression on page 7).
For multi-stream JPEG cameras, this value cannot be set here, since
separate image qualities can be set for recording and in each view
port in the Online Monitor.
Image size Defines the image size of the stream. Possible values depend on the
camera model. The bigger the image size, the more bandwidth will
be used for transmission and the more space for recording will be
needed.
For multi-stream JPEG cameras, this value is of no importance,
since separate image sizes can be set for recording and in each view
port in the Online Monitor.
Frame rate Defines the frame rate of the stream. Possible values depend on the
camera model. The bigger the frame rate, the more bandwidth will
be used for transmission and the more space for recording will be
needed.
For single-stream JPEG cameras, this value defines the maximum
possible frame rate. In the Online monitor and for recording lower
frame rates can be selected.
GOP size For MPEG streams this defines how many frames are sent and
stored in a GOP (group of pictures). One GOP is an integral data
packet that is transmitted and recorded. Our default value is 10,
which means that there is 1 reference frame (I-frame) and 9
difference frames (P-frames). A bigger GOP size means a higher
compression rate but also a somewhat lower quality and a bigger
delay between a real scene and its viewed images (which is
relevant, e.g., for live viewing in the Online Monitor). We think that a
GOP size of 10 is optimal for most cases; depending on the camera
model, it covers a time between 0.5 and 1 sec.
Bandwidth limit (Kbps) This setting limits the maximum bandwidth in kilobits per second
for the transmission of MPEG streams between the camera and the
server. As a consequence, the bandwidth between the server and
the clients is also limited, and the required storage in the archive is
influenced (limited) as well. If this value is zero, then the
bandwidth is not limited.
This actually is a setting in the camera. The camera always
optimizes for the desired image quality and will sacrifice frame rate
in favor of quality in case the bandwidth would exceed the
supplied limit.
Fields for In-camera motion detection (please refer to 8.3 In-camera motion detection on page
94 for further details):
Receive event images via FTP If this checkbox is marked, you enable the In-camera motion
detection and tell Observer to receive event images via FTP. Please
be aware that if you select this checkbox, the server-based motion
detection must be disabled.
Receive event images via HTTP The same as above, just that the images are received via the HTTP
protocol (some cameras support only HTTP).
Post recording length (sec) When Observer receives an in-camera event, it can start a
parallel server-side recording in addition to the event images it
receives from the camera. The event images received from the
camera via HTTP (FTP is not supported with this option) are merged
with this server-side recording. This field defines how long this
parallel post-event recording is.
If it is set to 0, then Observer does not start its own server-side
recording of images and just stores the event images it receives
from the camera via FTP/HTTP.
Frame rate This field defines the frame rate of the above-mentioned parallel
server-side post-event recording.
5. Press the Next button at the bottom of the dialog. This invokes the Scheduling dialog. Please
refer to 7.1 Programming archive recordings on page 75 on how to set up the camera archive and
scheduling.
6. Optional: Press the Next button again at the bottom of the dialog. This invokes the I/O Control
dialog that lets you define the handling of optional I/O contacts of your camera. If you do not
want to use I/O contacts you can just jump to the next step.
Input handling enabled Select this check box if you want to process the state of the digital
input-contact port of the camera.
Input poll interval (msec) The poll interval defines how often the state of the input port is
checked. Minimum time is 500 milliseconds.
Output handling enabled Select this check box if you want to enable switching the digital
output port of the camera from within the view port menu.
Note: When changing the driver of an already configured camera make sure that the
configuration on the Default settings dialog is appropriate for the new camera driver (e.g. only
stream types supported by the camera are selected).
Note: Only local cameras and remote camera groups can be duplicated. It is not possible to duplicate
individually mounted remote cameras.
Note: The camera status overview is only updated for cameras which are being actively used by
Observer for recordings, video analysis or live monitoring in the Online Monitor. For all other cameras
the status is not updated.
Once the option has been enabled a status icon is added next to each camera and camera group in
the camera tree within the Camera admin:
More detailed information about each camera is available in an overlay which appears after hovering
over a camera entry:
• Status: One of the first three error states described above or "Normal".
• Last error information: Includes the exact timestamp and a short description of the last error.
• Continuous recording scheduled: If a recording is scheduled to run on the camera at this point
in time.
• Video analysis (iCAT) scheduled: If an iCAT definition is scheduled to run on the camera at this
point in time.
3. Press the Next button 4 times (starting from basic properties to scheduling to I/O control to
Image settings). Now you should see the Video parameters dialog:
4. In the menu select Modify selected camera or group and then modify the Brightness, Contrast
and Saturation values according to your needs. Please be aware that light conditions may
change during the day.
5. To store the settings press Save.
5 Managing users
Observer requires users to log in in order to work with the system. This chapter describes how to create
and manage users in Observer.
Generally, you can either
• use Observer to administer users (see 5.1 Creating a new user account on page 46), or
• use Active directory or LDAP to manage users (see 5.8 Working with Active Directory and LDAP
users on page 55).
Please note: At initial product installation a set of predefined user accounts and groups is created.
These users and groups model typical permissions of users in various roles. Instead of creating a new
user account you can take one of these predefined users and modify the settings accordingly.
Login name This is a short name that the user will use to log in.
Password Enter a password for the user (with up to 32 characters). The user
can change this later on (see 5.7 Changing the password on page
54).
Forgot question Formulate a question that (only) the user can answer if he has
forgotten his password.
Enable web customizer login Defines if the user has access to the Customizer on the server's web
page where system backups, configuration files, custom event
handlers, and other advanced administration tools are available.
Enable download of exported files Defines if the user has access to the automatically exported files on
the server's web page (see 19 Automatic Export on page 186 for
details).
SMS number If the user is to receive an SMS in the event of an alarm or failure,
provide his cell phone number. Please insert a full international
number starting with a ‘+’. Example: +43 123 456 7890.
Please note that an SMS sending device has to be connected to the
Observer server for this feature to work. Refer to the manual
NETAVIS Observer Server Installation and Administration for
information about supported devices and how to connect and set
them up.
E-mail address If the user is to receive an e-mail in the event of an alarm or failure,
provide his e-mail address.
PTZ priority (1=lowest, 10=highest) Defines the relative PTZ priority between users. A user with higher
priority can take away PTZ control from a user with lower priority.
Please note: The automated PTZ actions started by the event
manager and scheduled routes have priority 4. Therefore users with
priority 1 to 3 will be overridden by automatic PTZ actions, whereas
users with priority 5 to 10 can override automatic PTZ actions but
will not be interrupted by them.
Max. PTZ use time (sec) Maximum allocation time, after which a PTZ camera is
automatically released. Zero means no limit.
PTZ inactivity timeout (sec) When a user has taken PTZ control and is inactive for a certain
amount of time, the PTZ camera is freed automatically after this
timeout. Zero means no timeout.
5. Click on Next to go to the Privileges dialog. Here you can set the privileges for the new user (see
5.2 Setting general user privileges on page 48 for details).
6. Click on Next to go to the Camera Access Rights dialog. Here you can set the camera access
rights for the new user (see 5.3 Setting camera access rights on page 52 for details).
7. Click on Save to create the new user account with the settings you entered.
Hint: In order to use the four-eyes principle you first create a user with the desired privileges and
camera access rights. You then create a new user, set a secondary password for that second user,
grant the same privileges and camera access rights as the first user, and then add the desired
additional rights (e.g. Archive access) compared to the first user.
Online monitor: Access to Online monitor Defines if the user/group has access to the Online monitor
Online monitor: Add cameras to views in Online monitor Defines if the user/group can add cameras to existing views in the Online monitor
Online monitor: Remove cameras from views in Online monitor Defines if the user/group can remove cameras from existing views in the Online monitor
Online monitor: Create and delete views in Online monitor Defines if the user/group can create and delete views in the Online monitor
Online monitor: Save view layouts in Online monitor Defines if the user/group can save different view layouts in the Online monitor
Recordings: Access to recording archive player Defines if the user/group can access the recording archive
Recordings: View external archive recordings (NEA) Defines if the user/group can view NEA recordings
Recordings: Manage external archive devices (NEA) Defines if the user/group can manage external storage devices for NEA
Events: Access to Event list and database Defines if the user/group can access the Event list and database
Events: May acknowledge a system event Defines if the user/group can acknowledge system events
Events: Notification in user interface about system malfunction events Defines if the user/group receives notifications about system malfunction events within the client
Events: Sending email about system malfunction events Defines if the user/group receives notifications about system malfunction events via email
Events: Sending SMS about system malfunction events Defines if the user/group receives notifications about system malfunction events via SMS
Events: Notification in client user interface about system information messages Defines if the user/group receives notifications about system information messages within the client
Events: Sending email about system information messages Defines if the user/group receives notifications about system information messages via email
Events: Sending SMS about system information messages Defines if the user/group receives notifications about system information messages via SMS
User admin: Access to User administration - Defines if the user/group has access to the User admin configuration
User admin: Manipulate user data - Defines if the user/group can change the User admin configuration
User admin: Access to information about logged-in users - Defines if the user/group can access the Users tab to see which other users are logged into the system
Camera admin: Access to camera admin - Defines if the user/group has access to the Camera admin configuration
Camera admin: Manipulate camera configuration data - Defines if the user/group can change the Camera admin configuration
iCAT: Reset heat map values manually - Defines if the user/group can reset heat map values manually
I/O device admin: Access to I/O device admin - Defines if the user/group has access to the I/O device admin
I/O device admin: Manipulate I/O device configuration data - Defines if the user/group can change the I/O device configuration
Rule Admin: Access to rule administration - Defines if the user/group has access to the Rule administration
Rule Admin: Manipulate rules - Defines if the user/group can change the Rule configuration
Automatic Export: Access to automatic exports - Defines if the user/group has access to the Automatic export administration
Automatic Export: Manipulate automatic exports - Defines if the user/group can change the Automatic export configuration
External device admin: Access to device admin - Defines if the user/group has access to the External device administration
External device admin: Manipulate device configuration data - Defines if the user/group can change the External device configuration
Host admin: Access to Host administration and System information - Defines if the user/group has access to the Host admin configuration and System information
Client: Allow GUI layout customization - Defines if the user/group can customize the layout of the Client
Client: Allow window management - Defines if the user/group can create and delete windows
Client: Manage number plate lists - Defines if the user/group can manage number plate lists
Client: Modify client preferences - Defines if the user/group has access to the client preferences
Client: Allow saving logfiles - Defines if the user/group can download the logfiles from the server
To set privileges for all users in the system you can select and modify the root group (but please
be aware that at each group or user level privileges can be overridden).
6. Click Save to save your changes.
Please note: The camera access rights can be set for regular individual users and Active
Directory groups but not for regular user groups or individual Active Directory users.
3. Click the Next button twice to get to the Camera Access Rights dialog.
As with the general user privileges, the camera access rights are initially inherited from the upper
group level. Camera access rights inherited from the group level to which the user or group belongs
are displayed in normal font, while values that you define at the current user or group level are
displayed in bold.
4. Select the camera or camera group you want to look at or modify.
5. Choose Modify selected user or group from the menu.
6. Set the rights according to your needs. To change a camera access right click on its button and
select the status from the pop-up menu (either Inherited, Enabled, or Disabled):
Live viewing: View live streams in the Online monitor - Defines if the user/group can view live streams in the Online monitor.
Recordings: Access camera recording archive - Defines if the user/group can view the camera recording archive.
Recordings: May export camera archive recordings - Defines if the user/group can export archive recordings.
Recordings: May protect archive recordings - Defines if the user/group can protect archive recordings (so that they are not deleted automatically).
Recordings: Ask user for reason of accessing the archive recordings - Defines if the user/group is required to enter a reason for accessing an archive recording which is stored on the system.
Recordings: Manual recording control in Online monitor - Defines if the user/group is able to manually start and stop recordings in the Online monitor.
PTZ control (pan, tilt, zoom) and I/O port control - Defines if the user/group is able to manually control PTZ cameras and set I/O ports.
Events: Notification in client user interface about camera malfunction events - Defines if the user/group is notified about camera malfunction events in the client.
Events: Sending email about camera malfunction events - Defines if the user/group is notified about camera malfunction events via email.
Events: Sending SMS about camera malfunction events - Defines if the user/group is notified about camera malfunction events via SMS.
Events: Notification in client user interface about in-picture events (e.g. motion detection, video analysis) - Defines if the user/group is notified about in-picture events in the client.
Events: Sending email about in-picture events (e.g. motion detection, video analysis) - Defines if the user/group is notified about in-picture events via email.
Events: Sending SMS about in-picture events (e.g. motion detection, video analysis) - Defines if the user/group is notified about in-picture events via SMS.
To set access rights for the selected user for all cameras in the system you can select and modify
the root camera group.
7. Click Save to save your changes.
• Log in as the new user and manually create new views as described under 6.1 Creating a new
view on page 56, or
• copy existing views from another administration user as described under 6.8 Copying views
between users on page 65.
Please note: The optional secondary password can only be changed by an Observer administrator
with the appropriate privileges.
Important: If you are using a low-bandwidth connection between client and server (e.g. via wide area
networks) it can easily happen that the video streams cannot pass through the connection fast
enough, which results in low frame rates and slow responsiveness of the client. In such a case you
should use the Transcoding™ feature of Observer. See 2.7 Observer Transcoding™ for low-bandwidth
client-server connections (ABS) on page 29 for further details.
Please select an aspect ratio that fits most of the cameras you plan to show in the view.
Selecting an improper aspect ratio can lead to unused space on the screen.
Press OK to create a new view with these settings.
3. In accordance with the selection above, Observer creates a new view that might look as follows
(your view may have a different number of view ports depending on what you selected):
The view ports fill the central part of the window, and each view port has its own view port
controls.
At the right side of the window is the Event list, which is explained in 9 Handling events on page
97.
4. Choose Save all view settings from the menu in order to save all settings for your current user.
The next time you log on to Observer, all views will then be available again. If you neglect to save,
all changes are lost when you exit the application.
5. You can change the name of the current view with Rename view in the Control menu.
Please note: All view settings will be stored, including image quality and frame rate settings of view ports.
New views will be stored automatically without the need to save manually.
5. In the camera selection menu of a view port you can also select the option Shuffle. Then
Observer cyclically switches through all the cameras assigned to the view port.
6. Please select Save all view settings from the view Control menu in order to save all settings.
The next time you log in to Observer, all views and view ports will be available again.
• Double click in a view port to get a big view of the currently displayed camera in the view port. If
there is a view defined containing a big view port with the camera, then this view will be shown.
Otherwise a new temporary view with the camera will be created. You can then go back to the
previous view with the Previous view button or the corresponding option in the context
menu. Please see below for a few hints on how to manage big views of cameras.
6.3.1 Optimizing big views of cameras (views opened after a double click)
When you double click on a view port with a camera then Observer does the following:
1. It first tries to open up an already existing big view that contains the camera, or
2. If such a view does not exist, a new temporary view will be created. For the new view some
parameters such as the frame rate will be taken from the corresponding camera's default
configuration whereas the following parameters are inherited from the view port it is derived from:
• Video stream type (MJPEG, MPEG-4, H.264 or MxPEG)
• Aspect ratio, incl. different custom aspect ratios
• Crop or stretch parameters
• Current crop position
• iCAT view options
Note: The new big view only inherits these parameters upon its initial creation. Subsequent
changes made in the original view port (e.g. video stream type or iCAT view options) will not affect
the big view.
To create a permanent big view for a particular camera: Create a view with this camera and adapt
the settings (see 6.1 Creating a new view on page 56 for details). Then a double click will open that
view. You can group those big camera views into view groups by supplying view group names such as
"Big:cam1", "Big:cam2" and so on.
Set streaming format - Defines the format in which the camera images should be streamed. The available options depend on the camera and its configuration in Observer (see 4.2 Adding a new camera and setting basic properties on page 33). Please also refer to 1.4 Video streaming methods and compression on page 7 for further details on streaming formats.
Set image quality - Sets the quality of the streamed images by modifying the compression rate. The options are High, Medium and Low. Refer to 1.4.5 JPEG image sizes and storage requirements on page 10 for details on these values.
Note: Setting the image quality is only possible for MJPEG streams of cameras for which Observer supports multiple MJPEG streams.
Set image size - Sets the image size (resolution) of the streamed images. The available size options are camera-specific, so please refer to your camera's manual.
Note: Setting the image size is only possible for MJPEG streams of cameras for which Observer supports multiple MJPEG streams. You cannot change the image size for MPEG-4, MxPEG and H.264 streams.
Set frame rate - Sets the frame rate of the video stream. The options are Max fps, various fps and fpm (frames per minute) values, and Stop.
Note: For cameras which provide a single MJPEG stream you can only select frame rate values which are lower than the default MJPEG frame rate set in the Camera settings.
Set camera name appearance - Defines the position where the camera name is displayed in the view port. The options are Show at the top, Show at the bottom and Do not show.
Hint: As of NETAVIS Observer 4.5 it is also possible to change the size (CTRL key + mouse wheel up or down) and contrast (CTRL + Shift keys + mouse wheel up or down) of the camera name and hide/show the label with the stream type and fps information (Shift key + mouse wheel up or down). For all of these commands the mouse pointer has to be over the camera name.
Rendering preference - Defines which scaling algorithm will be used when images need to be scaled to fit the available view port space. Optimized for quality means that the scaling is done with a more CPU-intensive anti-aliasing algorithm which makes straight lines appear smoother. Optimized for speed means that the scaling is done with a faster algorithm that may render lines less smoothly.
These settings can be modified all at once for all view ports in the current view via Set parameters of
all view ports from the view's Control menu. Holding down the CTRL key while selecting any of the
Set parameters of all view ports commands will modify all view ports of all views (not just the
current view).
Note: When modifying all view ports at once the selected options are only set for the view ports with
cameras which support the desired options.
With the Set image size option there are 5 categories of image sizes for modifying all view ports and
the closest possible match supported by each camera will be used:
• Very small (QCIF, QQVGA, QCGA,...)
• Small (CIF, QVGA, CGA,...)
• Medium (VGA, 4CIF, NTSC, D1,...)
• Large (HD-720, SVGA, XGA, SXGA,...)
• Very large (HD-1080, SXGA+, UXGA,...)
Note: Setting the image size and image quality is only possible for MJPEG streams of cameras for
which Observer supports multiple MJPEG streams. Please refer to the NETAVIS Observer Supported
Video Sources document for information about which cameras support this functionality.
Please note: When PTZ control is enabled, any mouse actions are taken for PTZ control. In order to
control the view port zooming, hold down the CTRL key while using the mouse.
Here is how to set the camera access right for manual recording (see 5.3 Setting camera access rights
on page 52 for a general description on how to set camera access rights):
1. Log in as an administrator user (or another user with the right to modify user data).
2. Choose User admin from the System administration menu. This opens the User admin
dialog.
3. Select the user or group for which you want to enable manual recording control.
4. Click the Next button twice to get to the Camera Access Rights dialog.
5. Select the camera or camera group for which you want to enable manual recording control.
6. Choose Modify selected user or group from the menu.
7. Enable the right Manual recording control from Online monitor by choosing Enabled from
the pop-up menu.
8. Push Save to save your changes.
In order to use manual recording control for a camera you have to define a continuous recording
schedule (please refer to 7.1.1 Programming continuous timed recordings on page 75 for a general
description of setting up schedules for continuous recordings):
1. Choose Camera admin from the System administration menu. This opens the Camera
admin dialog.
2. In the camera tree select the camera for which you want to set up the recording schedule.
3. At the bottom of the screen, click on the Next button twice. This takes you to the Scheduling
dialog. If you are setting up a new camera, this dialog will be quite empty.
4. In the menu select Modify selected camera or group.
5. Click the Add button to add a schedule slot to the Time Intervals list.
6. Click the Change button and choose Continuous recording.
Now the Scheduling dialog shows the settings for configuring continuous recording.
7. Now you can define the days and times for the recordings. You can activate individual days or,
with the All button, the whole week at once. Select hours and minutes from the popup matrix.
If you want recording to be started only manually, make sure that the Enable
interval checkbox is disabled.
Note: When a user pushes the manual recording button in the Online monitor, the
Enable interval checkbox is actually toggled. This causes the recording to either start or stop.
8. Set all the recording options as described in 7.1.1 Programming continuous timed recordings on
page 75.
9. Click on Save to save your settings.
You can switch recording on and off by pushing the manual recording button. The recording state is
shown by the color (a strong red dot means recording is on, otherwise recording is off).
Please note: For times not covered by a continuous recording schedule no manual recording button
is shown.
Please note: You can have the same view several times at different locations in a round tour.
You can select multiple views in the Views list at once by holding the CTRL key while selecting
the views.
5. For each view you can set a Shuffle duration that determines how long (in seconds) this view is
shown before Observer automatically changes to the next view in the tour. By pushing Set for all
you can set the same duration for all views in the round tour.
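For example (illustrative values, not taken from this manual): a round tour containing 4 views, each with a Shuffle duration of 30 seconds, cycles through all of its views in about 4 x 30 = 120 seconds, so each view reappears roughly every 2 minutes.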
6. Push Save to save your changes and OK to leave the dialog.
Later on you can modify an existing round tour by opening the Round tours dialog, choosing the
round tour from the Round tours list, and pushing Modify round tour.
A tour can be deleted by opening the Round tours dialog, choosing the round tour from the
Round tours list, and pushing Delete round tour. You then have to enter your password to
confirm the deletion.
5. Select the user you want to copy the current view to; note that views can only be copied to
logged-out users and your own account. You can then choose one of the user's windows or
select Create new window as a target for the current view. After you have finished the selection
press the Copy button. Now the selected user receives a copy of the current view in the
designated window.
Hint: Multiple users can be selected by holding down the CTRL key while clicking on user names.
When selecting multiple users it is only possible to select Create new window as a target for
the current view. So if you want to copy one or all views to specific windows (as opposed to a
newly created window) of multiple users you have to copy them to each user individually.
6. The user who received the views has to log in again in order for the changes to take effect.
You can also copy all views of the current user by choosing Copy all views to other users… from the
view's Control menu.
Please note: If the users already have views with the same name as the copied views, then the
existing views will be overwritten by the newly copied ones!
6.9 Working with MPEG cameras and audio (MPEG-4, H.264, and MxPEG)
Note: In the current version of Observer, MPEG and bidirectional audio are only supported by clients
running on MS Windows. For further details please refer to 2.1 Introduction to Observer clients on page
11.
Observer also supports MPEG cameras (MPEG-4, H.264, and MxPEG) with audio streaming (see also 4.2
Adding a new camera and setting basic properties on page 33).
If the camera allows MPEG streaming, you can select the MPEG streaming format from the view port
control menu. Once MPEG streaming is activated, additional MPEG controls appear on the view
port(s):
Hint: Aside from controlling views with iCAT and motion detection triggers, it is also possible to control
them with the Matrix view function (see 22.4 Matrix View function of the Online Monitor (VIP control) on
page 199 for more details).
Additionally you can define in which Online Monitor Window the large view should be
shown. For configurations which concern multiple users the largest common subset of
available windows will be shown. With the Current camera mode only Window(1) is
supported.
• Show live streams in view can be configured with Additional cameras, a selection of
Users, Windows, and Views (please note that view ports used for dynamic view
configurations should not be modified manually, e.g. by adding new cameras to them).
Multiple cameras and users can be selected (and deselected) by keeping the CTRL button
pressed. For single user configurations it is also possible to limit the feature to clients
connected from a certain IP address. Show live streams in view offers three different
modes:
o Replace oldest view port (first in, first out): The newest iCAT events are always
displayed in the "oldest" view ports (the view ports that stood there the longest
without a camera change). Thus it is possible to create a view where the cameras
with the newest events are always displayed.
o Shift older view ports (from top left to bottom right): The newest iCAT events
are always displayed in the top left view port with all other camera views shifting
towards the bottom right. Cameras in the lower right view port are removed from
the view. Thus it is possible to create a view where cameras with the most recent
events are always displayed at the top left position.
o Insert into empty view ports: The newest iCAT events are always displayed in
empty view ports and each view port has a close button . When the user clicks
the close button, the cameras are removed from the view, freeing the view port for
another camera to be shown. Thus it is possible to create a view where cameras
with the most recent events are always displayed and stay there until they are
manually closed by the user.
Please note: When all view ports of the view are filled, no new cameras will be
shown until a view port is freed.
For the Show camera's live stream in large view and Show live streams in view options
there are also Picture in Picture features which can be enabled:
• Show event in view port's event line: This option enables a list which keeps track of
the events which occurred in the view port. To navigate between the events turn the
mouse wheel up and down while hovering over the list at the bottom of the view port. By
double clicking on an event in the list the Event details dialog is shown, whereby the
Acknowledge and next, Previous, and Next buttons are disabled.
Please note: The background color of the event line is the Highlight color which has
been set for that iCAT or motion detection definition. As of NETAVIS Observer 4.5 and
similarly to the camera name label it is also possible to change the size (CTRL key + mouse
wheel up or down) and contrast (CTRL + Shift keys + mouse wheel up or down) of the
event list.
• Show archive event replay in view port: This option enables Archive Access from
within the Online Monitor view port which contains the camera which triggered the iCAT
definition. It can only be used if the Show event in view port's event line option is
enabled.
To watch the event you can either drag the timeslider to the corresponding position or
click on the preview window to watch 10 second segments of the recording.
The Archive Access window can be moved by pressing the CTRL key and dragging the
window with the mouse and it can be resized by pressing the CTRL key and turning the
mouse wheel.
The event line and archive replay window are automatically removed when all the
cameras in the corresponding view port are removed.
Please note: Until all the cameras are removed, the event line and archive playback
window configuration in the corresponding view port stays the same as configured in the
definition of the iCAT or motion detection event which first triggered a video stream to be
shown in it.
5. Alternatively it is also possible - but not generally recommended - to use the Comment field to
add a dynamic view command to an iCAT definition. The command has the following syntax:
pop: <mode>,<view-name>,<window-ID>,<viewport-eventline>,<viewport-archive-replay>; <additional-camera-IDs>; <user-names>; <IP-addresses>
where:
<mode> is one of:
• 1: shows the camera in a large view.
• 2: shows the camera and optional <additional-camera-IDs> in the view named <view-
name> (must be supplied). The cameras are placed in the "oldest" view ports (view ports
that stood there the longest without a camera change). Thus you can create a view where
the cameras with the newest events are always displayed.
• 3: shows the camera and optional <additional-camera-IDs> in the view named <view-
name> (must be supplied). The cameras are placed row-wise starting at the top left view
port. The cameras that were in these view ports before are shifted to the right. Cameras at
the lower right corner of the view are therefore removed from the view. Thus you can
create a view where cameras with the most recent events are always displayed at the top
left position.
• 4: shows the camera and optional <additional-camera-IDs> in the view named <view-
name> (must be supplied). The cameras are placed in empty view ports. They have a
close button associated with them. When the user clicks the close button, the cameras are
removed from the view, freeing the view port for another camera to be shown.
Thus you can create a view where cameras with the most recent events are always
displayed and stay there until they are removed by the user. Please note: When all view
ports of the view are filled, no new cameras will be shown until a view port is freed.
• 5: similar to mode 1, it shows the camera in a large view. The optional <additional-
camera-IDs> are paired one-by-one with the <user-names>. This causes these
additional cameras to be shown in large view at the clients where the supplied users are
logged in.
• 6: similar to mode 1, it shows the camera in a large view. The optional <additional-
camera-IDs> are paired one-by-one with the <IP-addresses>. This causes these
additional cameras to be shown in large view at the supplied client workstations.
<window-ID> defines the window which will be used
<viewport-eventline> defines whether the events will be shown in the view port's event line
whereby 0 disables this functionality and 1 enables it.
<viewport-archive-replay> defines whether the archive event replay is shown in the view port
whereby 0 disables this functionality and 1 enables it. This functionality can only be used if the
<viewport-eventline> option is enabled.
<additional-camera-IDs> is an optional comma-separated list of camera IDs which should be
shown in addition to the camera that triggered the event.
<user-names> is an optional comma-separated list of user names to notify. If not defined, then
all users are going to be notified.
<IP-addresses> is an optional comma-separated list of client workstation IP addresses to which
the notification is sent. If not defined then all connected workstations are going to be notified.
pop:1;;;
shows a live view of the camera which triggered the event in all connected client sessions.
pop:2,my-view;3,4;;
shows a live view of the camera which triggered the event and the cameras with IDs 3 and 4 in the view
named "my-view" in all connected client sessions.
pop:3,my_fifo;3,4;christoph;192.168.7.12
shows a live view of the camera which triggered the event and the cameras with IDs 3 and 4 in the
view named "my_fifo" where the IP address of the client workstation is 192.168.7.12 and user
'christoph' is logged in.
pop:4,my_dynamic;3,4;;
shows a live view of the camera which triggered the event and the cameras with IDs 3 and 4 in the view
named "my_dynamic" in all connected client sessions.
6. Press the Save button.
Please note: In order to work with archive recordings you need to have the appropriate user
privileges and camera access rights (for further details see 5.2 Setting general user privileges on page
48 and 5.3 Setting camera access rights on page 52).
Note: Some fields and buttons are deactivated until you select Modify selected camera or
group in the menu and then they become modifiable. Also, the Time Intervals field is still
empty when you begin. Later it will contain one or more program slots for the selected camera.
5. Click the Add button to add a programming slot to the Time Intervals list.
7. Now you can define the days and times for archive recordings. You can activate individual days
or, with the All button, the whole week at once. Select hours and minutes from the popup matrix.
Please make sure that the Enable interval checkbox is marked, because only then are the
settings enabled and recording is started.
Note: If you want to record at different times on different days, you can create multiple recording
intervals. For each recording interval proceed as described here.
8. In addition to defining the weekdays and times, you need to specify the Recording format. If
your camera supports multiple formats, you have several choices (for details on streaming
formats please refer to 1.4 Video streaming methods and compression on page 7):
• JPEG stream is also known as MJPEG, where the server stores sequences of JPEG
images.
• MPEG-4 video for MPEG-4 video streams.
• H.264 video for H.264 video streams.
• MxPEG video for MxPEG video streams.
For the MPEG video formats you can additionally select Save audio.
When you choose any of the MPEG video formats then the video settings that are defined in the
camera’s Default settings will be taken for recording (4.2 Adding a new camera and setting basic
properties on page 33).
If you choose JPEG stream you can additionally set the Image quality, the Frame rate and
the Image size for the archive recording:
Set the values according to your needs. See 1.4.5 JPEG image sizes and storage requirements on
page 10 for details on image sizes, quality, and storage requirements.
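As a rough, illustrative calculation (the numbers are assumptions for illustration only and not taken from this manual): a JPEG stream recorded at 5 fps with an average image size of about 50 KB produces roughly 5 x 50 KB = 250 KB per second, which is about 0.9 GB per hour or around 21 GB per day of continuous recording. Use the actual figures from 1.4.5 JPEG image sizes and storage requirements on page 10 for real estimates.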
Note: Some cameras can only deliver a single picture stream, which limits the possibility of
recording in multiple formats and of using different video stream settings for the
Online Monitor and the recording. Please refer to the document NETAVIS Observer Supported
Video Sources for information about supported cameras and their streaming capabilities.
9. Fill in the Recording period for this camera. This value defines how long Observer will keep
recordings. Recordings that are older than Recording period will be automatically deleted by
Observer (see also 7.1.4 Operation of the Observer dynamic storage management on page 78).
You can also select Priority over other cameras (if storage space is short) to give this
camera priority over other cameras if the available storage space is too small for all requested
recordings of all cameras (for further details see 7.1.4 Operation of the Observer dynamic storage
management on page 78).
You can also define what the requested recording period refers to: either Recording period is
measured from now or Recording period is measured from youngest recording. There
can be quite a difference between these two choices in the following case: Assume a motion
detection-based recording that only triggers recording once every few weeks. If the period is
measured from now, those sparse recordings are deleted as soon as they are older than the
recording period, whereas if it is measured from the youngest recording, they are kept until
newer recordings push them beyond the period.
Additionally, you can see the Actual recording period (days/hours), the Storage used by
this camera (MB), the Total storage space (MB) which shows the overall storage space of the
server, and the Free storage space (MB) which is the available space for new recordings on this
server.
10. Click on Save to save your settings. As soon as the scheduled time is reached recording is started
with these settings.
Caution: Please be careful when you set the Priority over other cameras flag since, if storage space
is short, Observer truncates the archives of all other cameras in favor of this camera. If available
storage space is much too short relative to the requested storage periods of all cameras, this can lead
to strongly truncated archives.
Either the Camera tree appears immediately or you have to push the Select camera button:
2. Select the camera from the camera tree and push Select (you can also double click a camera or
drag it to the calendar view). An overview for the selected camera on the current day is displayed:
This one-day overview shows green blocks indicating minutes for which recordings have been
archived. A red corner in a block indicates that an event was triggered and an event recording
was started in that minute.
When you move the mouse pointer over a green block, then the first recorded frame of this
minute is displayed in the lower right corner of the window.
You can also switch to a monthly or an annual overview via the View button and its popup
menu.
3. With the mouse select the time span of the archive that you want to play back. To do this press
the left mouse button at the start of the time range, then move the mouse pointer to the end of
the time range and then release the mouse button. The color of the selected time range changes
to dark green.
4. Also select the playback acceleration by moving the Playback acceleration slider (default
value is 1). At the slider you can see in parentheses how long the selected time will need for
playback at the selected playback acceleration.
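For example (assumed values): if you select a time span of 60 minutes and a Playback acceleration of 10, the playback will take roughly 60 / 10 = 6 minutes, which corresponds to the value shown in parentheses at the slider.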
5. Now click the Playback button . This brings you to the Player view and the images for the
selected time span are loaded from the server and then will be replayed with the specified
acceleration (you can cancel the loading process by pushing the Cancel button).
Automatic playback: By clicking the forward or backward Playback buttons you can let the recording
play automatically forwards or backwards. (Due to the archive storage mechanism, backwards playback
might be a little jumpy with MPEG-4 and H.264 streams.) With the Playback Speed slider at the left of
the window you can vary the playback speed. Press the Stop button to end playback.
Manual playback: By clicking and moving the green Playback marker , you can control the
playback of images manually.
You can select the size of the playback by clicking the Original size button at the lower left of the
Player dialog. Here you have the possibility to choose from various sizes.
1. Just move the mouse pointer over a view port and turn the mouse wheel. The view port will zoom
accordingly.
2. You can move the zoomed area in a view port by dragging it with the mouse.
3. Use the mouse wheel again to zoom out.
Please note: When PTZ control is enabled, any mouse actions are taken for PTZ control. In order to
control the view port zooming, hold down the CTRL key while using the mouse.
1. The red and blue markers let you select a smaller time interval for detailed playback.
Alternatively, you can click the Set Blue Marker button or Set Red Marker button to set
the respective marker at the current position of the green Playback marker.
2. Click the Zoom in button to load and replay the time interval between the blue and red
markers.
Note: Synchronous playback always loads one frame or GOP for each selected camera, one after the
other.
Caution: Please be careful when using this feature. Unless you set the aforementioned Day limit of
remove protected archive option, the protected recording periods will never be deleted
automatically by Observer. This means that the space will stay locked as long as you keep them
protected.
Note: To play back exported video sequences with Microsoft Media Player, you need the DivX codec.
You can download this for free from www.divx.com/divx.
Caution: Do not forget this password because otherwise you cannot open the exported file.
9. When you click OK, a file dialog will be opened asking where on your client computer you want
to save the file. Once you select the location and confirm, a File download progress
dialog will show the state of the export. You can cancel the export anytime by clicking Cancel.
10. The exported file is an executable for MS Windows 8/7/Vista/XP. Below you can find a screenshot
of the running SAFE Player:
You can zoom into parts of the camera view by drawing a rectangle and then clicking in the rectangle.
You can drag a zoomed view with the mouse. Alternatively you can show the zoomed view in a
separate window by selecting View > Separate zoom window. You can also use the mouse wheel to
zoom in and out. See also 7.2.3 Zooming in a view port and in archive recordings on page 81.
The iCAT info display menu (accessible via a right mouse button click) offers various options for
displaying iCAT information. Please refer to 15.4 Working with iCAT on page 171 for further information.
Controlling playback
There are several ways to control the playback of the recording. Besides the Play backward and Play
forward buttons it is possible to go to the Previous Frame / Next Frame and Jump to start / Jump
to end when the playback is paused. The Speed of the playback can be controlled with a slider and
ranges from 1/8th to 1024x.
Please note: In this version, Observer supports archive motion detection only for MJPEG recordings
and not for recordings of other streaming formats (like MPEG-4, MxPEG or H.264).
1. Go to the main window and choose the Archive player (if the Archive player is not available,
then perhaps it is disabled in the client preferences; see 2.4 Client multi-window and multi-screen
operation on page 20).
2. In the Calendar view of the Player select the camera and the timeframe for the motion
detection.
3. Push on the Playback button . This will load the images of the timeframe and replay the
recorded images.
4. In the upper left corner of the Player view select Archive motion detection from the Control
menu:
Now the motion detection pane is opened on the left side in the Player view:
5. From the Detection field pop-up menu choose the motion detection field definition you want
to use for this motion detection. If you do not yet have a detection field defined or want to
change an existing definition, then you can push the Manage detection fields button to jump
to the Detection fields view in Administration. Please refer to 8.1.2 Basic configuration of server-
based motion detection on page 91 for details on how to setup detection fields.
Continuing here, we assume you have a correctly set up detection field definition.
6. To start the server-based motion detection, push the Start button.
You can see the progress of the motion detection at the progress bar. You can always stop a
running motion detection by pushing the Stop button .
Caution: Since the motion detection actually runs on the Observer server and can potentially
use up a lot of CPU resources, please be careful selecting the detection fields and also the time
period. Especially if you have selected a long time period in the Calendar the motion detection
can take a lot of time. As mentioned above, you can always stop a running archive motion
detection.
7. While the motion detection is running, the detected motions are displayed in the hit list sorted by
the time in which the motion occurred. In our example, we have 2 hits:
8. You can replay the events by just selecting the event with the mouse. The playback time before
and after the event can be defined by the Pre/Post event (sec) fields. You can change the
values according to your needs. Please note that there must be archived pre- and post-event
recordings available in order to be replayed.
9. You can step through the events by pushing the Previous and Next buttons.
Please note: The NEA recording on the external device is exactly the same as the standard recording
on the main storage (at least as long as there is enough space on the device). The same recording
algorithm is used for NEA as for the main storage. This means that when the NEA device is running out
of space, the oldest recordings will be overwritten. For a detailed description of how Observer manages
recordings please refer to 7.1.4 Operation of the Observer dynamic storage management on page 78.
Apart from the NEA management functions in the client, NEA storage devices can also be configured
and managed via the admin command line interface of Observer (see NETAVIS Observer Server
Installation and Administration). Some more advanced functions like the setup of swap management
for storage devices are accessible only in this interface.
• Initializing of storage devices and starting of NEA recordings as well as stopping of NEA
recordings and ejecting devices.
All of these functions are available in the NEA storage management dialog, which is accessible via
the System administration menu of the main window.
The user privilege "Recordings: Manage external archive devices (NEA)" is needed for accessing
the dialog. See 5.2 Setting general user privileges on page 48 on how to modify user privileges.
• Replay NEA external archives in the Archive player.
The user privilege "Recordings: View external archive recordings (NEA)" is needed.
For an explanation and setup of automatic swap management of NEA devices please refer to the
document NETAVIS Observer Server Installation and Administration.
When a NEA storage device is connected to the server, you can view its status in the NEA storage
management dialog accessible from the System administration menu of the main window. The
dialog shows all connected NEA storage devices, their sizes and their Status:
When a new storage device is connected to the server via an eSATA connector, then the device is
shown in the NEA storage management dialog accessible from the System administration menu
of the main window.
You can start NEA recording on the device by pushing the Initialize and start recording button.
A new device will automatically be initialized (formatted) for NEA before the recording starts. This
initialization is only executed once per device and it may take a while before it is finished. When
initialization is done, recording will start automatically. Next time, recording will start immediately
when you push the button.
Please note: All data on a storage device will be deleted when it is initialized for NEA. Also note that
the minimum size for a NEA archive is 5GB.
In the NEA storage management dialog accessible from the System administration menu of the
main window you can stop a NEA recording on the device by pushing the Stop/eject button.
When you push the button, all NEA recordings on the device will be stopped and the device will be
unmounted/ejected. A dialog will open and ask you to unplug the device before pushing OK.
If you leave the device connected when you push OK, the archive of the device will be shown again in
the camera tree of Archive player of users who have the appropriate privileges (see above). In such a
case you have to execute the command Stop/eject again before unplugging the device.
Please note: There are some limitations on external NEA archives; for example, Archive motion
detection is not possible.
Please note: Since release R1.12 Observer offers powerful video analysis functions with iCAT.
This chapter describes the simple motion detection features that were present before R1.12 and that
are now a part of the iCAT toolbox. Simple motion detection is based on a detection of changed pixels
between video frames while iCAT offers intelligent object detection and tracking. For more details on
iCAT refer to 15 Video analytics with iCAT on page 146.
• Observer’s own server-based motion detection: The images are analyzed by the Observer
server. The advantage of this method is that it works with any camera, even old cameras. The
disadvantage might be that if there are many cameras that transfer their images to the server
for analysis, the bandwidth of the network could be burdened and also the server could be
overloaded.
• In-camera motion detection: The images are analyzed in the camera and only when
detection occurs, an event and image data are sent to the Observer server that then stores the
event and the images in its archive. The advantage of this method is that the network and the
server are not burdened. The disadvantage is that it works only with special cameras that offer
this feature, and that the motion detection settings have to be programmed directly in the
camera.
8.1.1 Preparation
Before you begin to configure a motion detection definition, be sure that you have the necessary
authorization to make settings. If you are not sure, please ask your Observer administrator.
Please note: Motion detection for PTZ cameras is problematic since normal movement of the camera
will trigger a motion detection event.
2. In the camera tree at the left, select the camera for which you want to create a motion detection
definition.
3. In the menu choose Add new definition.
4. From the Type menu choose Simple motion detection.
5. Select whether you want to create a Rectangle or a Polygon.
6. Use the mouse to draw a detection field in the preview area.
7. Mark the Enabled checkbox, otherwise the detection field is inactive and no detection can occur.
Note that a detection field must also be assigned to an active motion detection schedule (see
below).
8. Assign a name for the field in the Identifier text box, e.g. “Movement”.
9. Optionally, you can enter a Comment which is shown in the event details.
Hint: You can also use the Comment field to configure certain views to be shown to one or
multiple users when a motion is detected. See 6.10 Dynamic View Control in Online Monitor on
page 68 for more information.
10. Usually the Sensitivity should be left at Normal. It defines how sensitive (or tolerant) the
detection algorithm is when detecting the change of pixels. Modify the setting only when you
want the algorithm to be more or less sensitive.
11. In the Sample frequency (fps) field you can specify how often the image is to be checked for
changes.
12. In the Time between events (sec) field you can enter the minimum time that must elapse
before a following event is triggered. This helps to filter out repetitive events.
Minimum time (sec) = 3 means that, after one motion detection event has been triggered, at least
3 seconds must pass before another Motion Detection event can be generated.
13. In the Pixel change threshold (%) field you can define how many pixels in % must change so
that a detection event is triggered.
Pixel change (%) = 20 means that 20% of the pixels relative to the previous frame (detection
cycle) must change for a detection to be triggered.
14. Optionally you can define a specific event icon and sound for this definition: Click on the Icon
button to assign a symbol. This icon is displayed in the Event list to notify an operator when an
event related to this definition occurs.
Click on the Sound button to assign a specific sound to the definition. This sound is played when
a user notification event related to the definition occurs.
15. After you have entered all parameters for your definition, save it by clicking on the Save button.
16. If you have not yet scheduled a detection-based recording for this camera, you will be asked
whether you want to edit the scheduling now.
Click on the Yes button if you want to configure the scheduling now. Please refer to section 8.1.3
Scheduling motion detection on page 93.
Click No if you want to do that later.
Please note:
If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually.
If a definition is not assigned to an active schedule then it will not be activated (no archive
recordings will be made and no events will be generated) although it has the Active option set.
Note: Since the camera carries out the detection, the actual configuration and programming must be
done directly in the camera via the setup interface (e.g. parameters like detection settings, image
quality, speed, etc.). This can usually be done by connecting with a web browser to the camera (by
entering its IP address). Please refer to the latest NETAVIS Observer Supported Video Sources and the
camera’s user manual for further details.
• Receive pre- and/or post-event images from the camera (pushed by the camera) and
record these images in the standard Observer camera archive.
• Optionally, on receiving an in-camera detection event, start a server-based post-event
recording of images and merge this recording with the in-camera event images pushed by
the camera. This server-based recording can have a much higher frame rate than the pictures
pushed by the camera, which allows a much better documentation of in-camera events.
Note: In-camera motion detection cannot be used on a camera in parallel with other types of
recordings!
checkbox. For further details about these recording parameters please refer to sections 7.1.1
Programming continuous timed recordings on page 75 and 7.1.4 Operation of the Observer
dynamic storage management on page 78.
9. Push the Save button to store your changes.
10. Now you have to configure your camera for pushing the in-camera events and images to the
Observer server. Since the steps for doing that are very camera-specific, you must consult the
document NETAVIS Observer Supported Video Sources. To program the camera's detection
algorithm please consult its user manual.
Note: After setting up the camera, do not forget to check and set the date and time of the camera to
reflect your current time.
9 Handling events
Observer can record and display events of various types. These can be camera-related events like
video analytics events, archive access by a user, or camera failures, but also system-related events like
user logon and logoff.
Events are displayed in Event lists of the client and are stored in the central event database of the
server where they can also be queried. They have several properties that depend on the type of the
event.
Event priorities
All events also have an event priority that defines the relative importance of events. The default event
priority is 100. Informational events have a lower priority of 50, system and camera malfunction events
have a higher priority of 300. For events generated by video analytics triggers, you can specify
individual event priorities. Event lists can also be sorted according to priority.
In the User admin under the System administration menu you can define for each user some basic
event handling privileges, for example whether the user can access events at all or what kinds of
system events are displayed in the user's event list (see 5.2 Setting general user privileges on page 48).
The number of events stored in the database can be defined in the server parameters (see 11.2 Setting
Observer server parameters on page 114).
Each client can have multiple event lists, for example, one that is sorted chronologically and another
one that is sorted according to event priorities. In the Client preferences you can configure for each
window's event list what columns are to be displayed, whether acknowledged events should stay in
the list or should disappear automatically and other options (see 2.5 Client preferences on page 22 for
more details).
The position of the event lists inside client windows and further layout options can be defined as
described in 2.4 Client multi-window and multi-screen operation on page 20.
You can sort the event list according to column values by just clicking on the column title. Little arrows
after the title indicate the sorting order (ascending or descending). Clicking a column title again will
change the sorting order. For example, you can sort the list according to event priority by clicking on
the Priority column heading.
In an event list columns can be rearranged by clicking on a column title and dragging it with the
mouse to the new location.
Unseen events have a light red background color. As soon as an event has been opened in the Event
details dialog its background color changes to light grey. When an event is acknowledged, it usually
disappears from the event list. However, in the client preferences you can define whether
acknowledged events should be displayed in the event list. If so, they appear with a white background
color.
The first line contains the event text. Further details of the event are listed at the left side of the dialog.
The exact contents of the event details depend on the event type, but you will at least find the exact
time when the event occurred; if the event is camera-related, the camera ID and name are also shown.
When you move the mouse over the camera preview area you will also see the video analytics object
markers and annotations related to the event trigger (as seen in the screenshot above).
The following options are available for users:
• Start live stream: Starts live monitoring of the camera that triggered the event.
• Short event replay: Starts a playback of the event recording that triggered the event (a replay
is only available if the camera has been configured to record the event).
• Go to Archive player: Opens the Archive player and starts a playback of the event recording.
• Classify event as: The event can be classified as Unclassified, Irrelevant, Valid (true), and
Invalid (false).
• Acknowledge comment: This field can be used to add an optional comment related to the
acknowledgement of the event.
• Acknowledge: Acknowledges the event and closes the dialog.
• Acknowledge and next: Acknowledges the event and loads the next event.
• Previous: Loads the previous event.
• Next: Loads the next event.
• Close: Closes the dialog.
In each of the dialogs it is possible to make multiple selections. For example to search for the
event types In-camera motion detection, Simple motion detection, and iCAT object
tracking, hold down the CTRL key and select these event types with left mouse clicks:
You can also search for events which have been acknowledged or not acknowledged by
selecting the appropriate check box.
4. With the Number of records options you can also select how many search results should be
displayed.
5. After setting different search criteria and filters press the Search button to display the results.
By clicking the Next button you can step forward in the result list and with Previous button you
can step backwards.
2. Select what file type you want to export the events to whereby the following types are available:
• HTML (.html)
• Excel Workbook (.xlsx)
• Excel 97-2003 Workbook (.xls)
• JSON (.json)
• Comma separated values (.csv)
3. Select whether you want to export All events according to filter criteria or Only events
displayed on search result list and enter a file name.
4. Press Export. Now the corresponding file is created and you can open and use it in a program of
your choice.
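As an illustration only (the actual columns depend on the configured event list columns and are not specified in this section), an exported CSV file could look roughly like this, with one row per event:
Time,Priority,Camera,Event text,Acknowledged
2018-07-12 14:03:21,100,Entrance camera,Simple motion detection,no
2018-07-12 14:05:02,300,Entrance camera,Camera failure,yes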
2. Select for which Start time, Time period and time Resolution you want to create the report.
Also you can filter according to Event type, Camera and User (multiple selection by holding
down the CTRL key while clicking is supported for all filters). If you do not select any values for
Event filter, then a summary statistic report will be created.
3. Press OK to start creating the report file. You will be prompted for a file name where the report
should be stored.
4. Open the created file in MS Excel. With the data, you can also create graphics like the following
one (motion detection statistics for one day):
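Hint: If you prefer scripting over Excel, the same statistics can be plotted with a few lines of
Python once the data is available in a machine-readable form (for example after saving the report
as CSV from Excel). The file name and column names used below ("report.csv", "Hour", "Events")
are assumptions and have to be adapted to the actual report layout.

import csv
import matplotlib.pyplot as plt

# Plot hourly motion detection counts from a report saved as CSV.
# Assumption: "report.csv" has columns "Hour" and "Events"; adjust as needed.
hours, counts = [], []
with open("report.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        hours.append(row["Hour"])
        counts.append(int(row["Events"]))

plt.bar(hours, counts)
plt.xlabel("Hour of day")
plt.ylabel("Motion detection events")
plt.title("Motion detection statistics for one day")
plt.show()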
Please note: In order to receive emails, the email address must be correctly defined in the user
properties. Also, the server must be correctly configured and must have access to an email router
for sending emails. For receiving SMS, an SMS sending device must be configured at the server. See
NETAVIS Observer Server Installation and Administration for details.
Note: To configure and work with PTZ cameras, you need the corresponding user privileges and
camera access rights (see 5.2 Setting general user privileges on page 48 and 5.3 Setting camera access
rights on page 52). If you find that you do not have the authorizations you need, please contact your
Observer administrator.
6. In the Name field enter a designation for the route. Optionally you can enter a description.
7. Click on Next to move to the Route details dialog:
8. Click the New button to create a new entry in the Position list of the route and select a position
from the list of predefined positions via the Position popup menu.
9. In the Time at position field, enter the duration in seconds that the camera is to spend at the
position. Note that this time includes the positioning time of the camera.
10. Repeat Steps 8 and 9 for each entry in the position list.
11. You can change the sequence within the position list by selecting a position and then clicking the
Up or Down button. You can remove an entry from the Positions list by selecting it and then
pressing the Delete button.
12. Click the Save button to save your route or Cancel to discard changes.
You can select the saved route in the Online Monitor via the PTZ menu of the view port control menu
of the PTZ camera. Proceed similarly to modify a route.
Note: For each user a PTZ priority and allocation timeouts can be defined (see 5.1 Creating a new
user account on page 46). If another user with equal or higher priority has already assumed PTZ
control over the camera, you will be denied control and notified in a window. Then you need to
wait until this user surrenders control or reaches his timeout.
A user with a higher PTZ priority can take away PTZ control from a user with lower PTZ priority.
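The allocation rule described in the note above can be summarized in a few lines. The following
sketch is only an illustration of the priority logic; the function name and example values are made
up and it is not Observer code.

from typing import Optional

def may_take_ptz_control(requesting_priority: int, holding_priority: Optional[int]) -> bool:
    # Nobody currently controls the camera: control can be assumed immediately.
    if holding_priority is None:
        return True
    # Otherwise only a strictly higher PTZ priority can take over;
    # equal or lower priority has to wait for release or timeout.
    return requesting_priority > holding_priority

print(may_take_ptz_control(50, 50))   # False - equal priority has to wait
print(may_take_ptz_control(80, 50))   # True  - higher priority takes over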
You have several options for directly controlling PTZ cams: The various modes can be selected
via PTZ->PTZ control mode in the view port control menu. The default setting is Continuous
mode and Center on click. As an alternative, operation via a Crosshair is also available.
Continuous mode (available for joystick and mouse operation): Click the mouse in the view port
and drag it in the direction where you want the PTZ cam to move. A red dot marks the origin of
the movement and a red line indicates how fast the movement is. When you release the mouse,
the movement will stop (though depending on the camera and network it is possible that there is
a small delay).
Center on Click: A single mouse click somewhere in the view port commands the camera to
center on the mouse click location.
Zooming: Zooming is available via the mouse wheel: zoom-in (forward scroll) and zoom-out
(backward scroll). The amount of the zoom will be indicated by 3 red dots after the mouse is
released:
Please note: When PTZ control is enabled, any mouse actions are taken for PTZ control. In
order to control the client-based view port zooming (see 7.2.3 Zooming in a view port and in
archive recordings on page 81), hold down the CTRL key while using the mouse.
Crosshair: When this mode is enabled, a crosshair for relative control mode is shown in the view
port:
Please note: In order for the joystick support to work the Java version (32-bit or 64-bit) installed
on the client PC has to match the system's processor architecture.
1. To send a camera to a predefined PTZ position, activate the PTZ control, right-click to open the
context menu and select a position from the list at the top of the menu (see 10.2 Defining fixed
PTZ positions on page 105).
2. To automatically follow a predefined PTZ route choose it from PTZ->Select PTZ route in the
view port control menu of the PTZ camera.
3. In the view port control menu of the PTZ camera, click PTZ->Select PTZ route->Stop route to
stop control of the PTZ camera.
You can stop PTZ control by pushing the button or by deselecting PTZ->Activate PTZ control in the
view port control menu.
Note: Please do not forget to stop PTZ control; otherwise other users cannot assume control. When
you end the Observer client, all PTZ control is automatically released.
For further details on user-specific PTZ priorities and timeouts please refer to 5.1 Creating a new user
account on page 46.
Note: Before you can schedule a route you first must define the route (please refer to 10.3 Defining PTZ
routes on page 105).
1. Choose Camera admin from the System administration menu. This opens the Camera
admin dialog.
2. Select a PTZ camera.
3. Press the Next button to go to the Scheduling dialog.
4. In the menu select Modify selected camera or group.
5. Click the Add button to add a programming slot to the Time Intervals field.
6. Click the Change button (now enabled) for a popup menu; select PTZ route.
This will show the PTZ scheduling settings (see above).
7. Set the time settings according to your needs.
8. Select the route from the Route Name pop-up menu (in the example above we selected Route
1).
9. Click on Save to store the scheduling settings.
Note: You can add several PTZ route schedules for a single camera. This means that you can follow
different routes at different times.
Archive shows the sum of fps and Kbytes per second of archiving on that host.
Monitor shows the sum of fps and Kbytes per second of all logged in users.
Please note that you usually only see the status of your server. If you want to monitor the status
of other Observer servers as well, you have to make other servers known to your server (please
refer to 12 Working with interconnected Observer servers on page 121).
2. To show the details of a server, double click an entry or select a server in the table and then
choose Details from the System menu. Now you see detailed information for the selected
server with hard disk partitions and their state.
Partition ctrl: shows the number of the hard disk controller; dev: shows the hard disk number
on the controller. On the screenshot we have 5 disks on 3 controllers.
Type indicates the type of the partition: DB is a database holding configuration and event data
and I is an image partition that holds the actual video image data.
Status indicates whether there is a failure on the partition. Such a failure needs to be reported to
your Observer Administrator.
Capacity indicates the size of the partition in MB.
Used % shows how much space of the partition is used.
3. Click on the Back button to leave the server details dialog and go back to the list of servers.
Please refer to 12 Working with interconnected Observer servers on page 121 for a general explanation
of synchronization groups.
Please note: Modification of configuration data is only possible at the master of the
synchronization group and not at the slaves (as they will be updated by the master
automatically). As long as there is no master in the sync group (which may happen during setup
of the sync group) no modification is possible at all.
4. As of Observer 4.7 a direct interface to NETAVIS sMart Data Warehouse is available. It can be
enabled by checking the Allow sMart Data Warehouse connection checkbox and then setting
up (and confirming) a password for the interface. This password then needs to be entered in the
corresponding NETAVIS sMart Data Warehouse configuration.
5. Click on the Next button. Now you see the Server Parameters dialog. The following settings are
available:
Event storage period (days): Defines how many days events are stored. Any events that are older
than this will be deleted from the event database.
Max number of events stored on server: Defines how many events can be stored in the event
database. For each event beyond the defined maximum number of events the oldest event will be
deleted. Please note that both settings, the storage period and the max. number of events, together
constrain the event database.
Server timezone abbreviation: Defines the server timezone. Available values are:
Timeout for IP cameras (sec): Defines the time the server waits to receive a response from a camera
before it displays the "Camera not responding" message in the Online Monitor. Note that other error
conditions can also trigger the "Camera not responding" message, so this message might be shown
sooner than the configured timeout period (e.g. when there is no route to the host).
Retry count for IP cameras, after which an event is generated: Defines the number of connection
retries after which the server raises a "Camera not reachable" event.
Maximum login time for the "guest" user (sec): Defines the time after which the guest user is forced
to log out. If the value is 0, then the guest is never forced to log out.
Timeout for server-server communication (sec): Defines the time after which an event "Connection
lost to server ..." is generated and the cameras mounted from that server disappear from the camera
tree (only the root element of the mounted camera tree remains visible, painted in red).
Server network address for camera access: This setting is only important for IP cameras that must
actively access the server (e.g. for FTP upload with in-camera motion detection) and only when the
server is not accessible by the cameras at its set IP address but at a different address (e.g. due to
address mapping). By default this address reflects the IP address of the server. You can enter an IP
address or a network name.
List of IP addresses from which URL control is enabled: URL control is one way to enable third-party
applications to start actions via URL-encoded strings (sent as HTTP GET requests to an Observer
server). Upon receiving these special URL requests the server executes the actions as if they had been
generated internally. URL control is only enabled for computers whose IP addresses are entered in this
field (comma-separated list). All other requests are blocked. Please refer to the Release Notes for
further details on URL control.
ABS transcoding bandwidth limit for live view video streams (kbit/s): Bandwidth limit per session
for transcoded outgoing live view streams (ABS). Zero means no transcoding and no limit for live view
video streams. Please read 2.7 Observer Transcoding™ for low-bandwidth client-server connections
(ABS) on page 29 for choosing the best limit values.
ABS transcoding bandwidth limit for archive playback video streams (kbit/s): Bandwidth limit per
session for transcoded outgoing archive playback streams (ABS). Zero means no transcoding and no
limit for archive playback streams. Please read 2.7 Observer Transcoding™ for low-bandwidth
client-server connections (ABS) on page 29 for choosing the best limit values.
Note: See the note for "ABS transcoding bandwidth limit for live view video streams (kbit/s)".
ABS transcoding bandwidth limit for archive export streams (kbit/s): Bandwidth limit per session
for transcoded outgoing archive export streams (ABS). Zero means no transcoding and no limit for
archive export. Please read 2.7 Observer Transcoding™ for low-bandwidth client-server connections
(ABS) on page 29 for choosing the best limit values.
Note: See the note for "ABS transcoding bandwidth limit for live view video streams (kbit/s)".
Total outgoing bandwidth limit (kbit/s): Total bandwidth limit for all outgoing connections of the
specified network interface (NIC).
RTSP streaming port: Port number generally used for RTSP communication (e.g. for some MPEG
cameras). Please note that modifying the value here does not change the RTSP port setting in the
cameras. The cameras need to be configured separately for the RTSP port.
Length of alarm recording image database in days: In case continuous recording is run in parallel to
an iCAT or event-triggered recording, this option defines the minimum number of days the iCAT or
event-triggered recordings are stored (regardless of how long the continuous recordings are stored).
Limit for manually protected recordings in days: Any delete-protected recordings older than this
time limit (number of days) will be deleted automatically.
Name of logout PTZ position: Name of a PTZ position; all PTZ cameras that have a position
configured with that name will be moved to it when a user logs out. If left empty the feature is not
activated.
Heat map export period: Interval in which heat map data is exported. Possible options are:
• hourly
• daily
Heat map data reset method: By default iCAT heat map data is never reset. However, since NETAVIS
Observer 4.6 it is possible to automatically reset all heat map data:
• daily reset: day:hour (a concrete example is day:07 for daily resets at 7 AM)
• weekly reset: week:week_day:hour (a concrete example is week:2:07 for weekly resets on
Mondays at 7 AM, whereby week_day is a number where Sunday = 1 and Saturday = 7)
• monthly reset: month:day_of_month:hour (a concrete example is month:01:07 for monthly
resets on the first of the month at 7 AM)
Note: This setting only affects the current host. This means that in a system with multiple Observer
servers this option has to be set on each server separately.
Max password age (days): Defines the maximum age of users' passwords (in days). If set to 0 the
feature is not activated.
6. To modify any of these values, select Modify host in the menu, set the values and then press
Save.
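The reset strings for the Heat map data reset method setting follow a simple pattern. The following
sketch only illustrates how such strings are composed and checked (assuming two-digit hour values
as in the examples above); it is not part of Observer.

import re

# Patterns for the three reset formats described above:
#   daily:   day:hour                 e.g. day:07
#   weekly:  week:week_day:hour       e.g. week:2:07  (Sunday = 1 ... Saturday = 7)
#   monthly: month:day_of_month:hour  e.g. month:01:07
PATTERNS = {
    "daily reset": re.compile(r"^day:(\d{2})$"),
    "weekly reset": re.compile(r"^week:([1-7]):(\d{2})$"),
    "monthly reset": re.compile(r"^month:(\d{2}):(\d{2})$"),
}

def describe_reset_method(value: str) -> str:
    for kind, pattern in PATTERNS.items():
        if pattern.match(value):
            return kind
    return "not a recognized reset string"

print(describe_reset_method("week:2:07"))    # weekly reset (Mondays at 7 AM)
print(describe_reset_method("month:01:07"))  # monthly reset (1st of the month at 7 AM)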
Note: In order for interconnected Observer servers to work together all of them have to run exactly the
same version.
b. No NETAVIS Observer server may import cameras from and simultaneously export cameras to the
same server, i.e. there should be no loops within the architecture.
Hostname: This is the name you give to the remote Observer server. It does not necessarily have to
match the actual name of the remote server.
Network name or IP address: This is the network name or IP address of the remote Observer server.
Observer Host ID: This is an internal ID that uniquely identifies the remote server. The ID of an
Observer server can be obtained by reading its own entry under the Host Admin tab.
2. Now you can define what you want to do with the remote Observer server. Set options according
to your requirements:
Send local events to remote server: Check this option if you want your local events to be sent to the
remote host.
Import of cameras from remote server enabled: Check this option if you want to import cameras
from the remote server.
Export local cameras to remote server: Check this option if you want to export cameras of your
server to the remote server. If this option is not enabled the remote server cannot access your
cameras.
Monitor remote server status: Check this option if you want to monitor the status of the remote
server in the System information dialog.
Upgrade software on remote server: Check this option if you want to allow automatic distributed
software upgrades of the remote server (=slave) initiated by your current server (=master).
Accept software upgrades from remote server (=upgrade master): Check this option if you want to
accept automatic distributed software upgrades of the current server (=slave) initiated by the remote
server (=master). In such a case your current local server is the upgrade slave and the remote host the
upgrade master. For details about distributed upgrades please refer to the manual NETAVIS Observer
Server Installation and Administration.
Request license from remote server (=license master): Check this option if your current Observer
server should check out licenses from the remote server (=license master). For details about floating
network licenses please refer to the manual NETAVIS Observer Server Installation and Administration.
Use secure connection (HTTPS): Select this checkbox if you want to use an encrypted connection
between the two servers (TLS HTTPS).
3. Click on Save to save your settings. Now you are ready to import cameras or camera groups from
a remote Observer server.
Note: See 12.2 Adding and defining a remote Observer server on page 122 for more details on how to
configure these prerequisites.
Note: For remote cameras it is recommended to mount them per group rather than individually.
Now the camera tree shows this imported camera or group in bold with the name of the remote
server in brackets (e.g. 209 [grey]):
If the remote server is not reachable, the mounted cameras are not shown in the tree; only the
top point of the mounted camera tree appears in red color.
5. After you mount remote cameras you can work with them as if they were connected to this
server. You can access live streaming via the Online Monitor or you can access the archive. If you
have the appropriate administrator rights you can also modify camera settings like PTZ,
scheduling or motion detection.
Note: Only local cameras and remote camera groups can be duplicated. It is not possible to
duplicate individually mounted remote cameras.
Note: Servers included in a synchronization group must not have any locally added cameras and
I/O devices!
Note: When a server is added as a slave to a synchronization group, any configuration data that does
not exist on the master server will be deleted and overwritten by the data of the master server. So
please be careful when you work with synchronization groups.
Also: When users connect to slave servers they cannot modify their user settings; this can only be
done when they are connected to the master user server. Thus layout navigation tool (LNT) project
creation and icon uploads should always be done at the master user server.
Hint: Observer can currently only import Active Directory users that exist directly under the target
group defined by the "AD group name" field. Users from other parallel OUs or groups that are not
located there directly but are only defined as members of the target group cannot be imported!
b. Enter the Server address (IP address) and Port (the standard port is 389) of the Active
Directory server.
c. Enter the User name (including the name of the Active Directory server, e.g.
NETAVIS\Administrator), Password and confirm the Password.
d. You can test the entered configuration with the Test connection button.
7. Configure the Directory parameters:
a. Enter the organizational unit (OU) and domain as the Search base (e.g.
OU=Users,DC=netavis,DC=net).
b. Enter the name of the previously configured Active Directory group name as the AD group
name and don't forget to include the corresponding organizational unit the AD group is
located in (e.g. CN=Observer4,OU=Users,DC=netavis,DC=net).
c. Enter the name of the previously configured Observer user group to which the Active Directory
users will be imported as the Observer group name (e.g. AD-Users).
d. Select a previously defined Attribute mapping schema or create a new one by choosing
Edit... and then clicking on New. These are the attributes which will be imported from Active
Directory to Observer:
• Login name tag (mandatory): Set it to cn (users log in with their common name, e.g.
John Doe) or sAMAccountName (users log in with their account name, e.g. john).
• Name tag (mandatory): Set it to displayName.
• SMS number tag (optional): Set it to telephoneNumber.
• Email tag (optional): Set it to mail.
e. Enter the domain of the server as its Directory address (e.g. for netavis.net it would be
DC=netavis,DC=net).
Note: The directory parameters (OU, DC, CN) have to be written in capital letters!
Note: The Login name tag option set here has to match the option set earlier in the
Attribute mapping schema!
9. Press Save.
10. After Observer has finished synchronization with the Active Directory server (which may take up to
two minutes) the users configured there will appear under the previously set Observer group
name in the User admin.
Note: The camera access rights can be set for regular individual users and Active Directory groups
but not for regular user groups or individual Active Directory users.
Note: By adding extra servers on the Active Directory configuration page it is possible to integrate
multiple Active Directory groups.
Hint: If the configuration does not work, please make sure that the configuration options are
correct. For example, no leading or trailing spaces may be included in the configuration (e.g.
"DC=netavis,DC=net " with a trailing space does not work if the directory address is netavis.net).
Note: Since Layout Navigation is licensed separately from Observer you need a valid license in order to
work with these features. See also 11.1 Server system information and restarting on page 112 for details
about what license you have.
Note: Layout Navigation only works on Microsoft Windows. See 2.1 Introduction to Observer clients on
page 11 for details on the functionality available on various platforms.
Please note: Currently LNT can only display MJPEG camera streams (MPEG camera streams are not
yet supported).
Note: Layout Navigation only works on Microsoft Windows and requires Windows .NET 2.0 or later.
• in the locally installed client (please refer to 2.3.2 Starting the installed Observer client on page
16 for more details)
• in a desktop web browser (with the Start the Layout Navigation Tool (from the browser
using Web Start) option on the start page of Observer)
• in a desktop web browser with a One-Click Single-Sign-On simultaneously with the Observer
client (with the Start the NETAVIS client and the Layout Navigation Tool (from the
browser using Web Start) option on the start page of Observer). When using this option
entering wrong Login credentials will result in two separate error messages.
Hint: The size and position of the Layout Navigation window is saved and restored upon the next start
of the application.
14.4.3 Creating layouts and mapping cameras, I/O contacts, and zones
After you have planned your layout hierarchy you have to select appropriate images for the various layouts.
The layout navigation tool supports popular image graphics file formats like GIF, JPEG, PNG, etc.
Depending on your needs and the available screen resolution for the layout navigation you have to
choose the size (in pixels) for your images. Please keep in mind that the LNT also supports image
scaling to fit the available space.
Please note: Each image's file size is limited to 2MB. For performance reasons it is recommended to
use JPEG and PNG files and keep their sizes small.
Mapping cameras
1. From the list of controls at the right side drag a camera control icon with the mouse onto the
layout and drop it there. A camera selection dialog opens automatically. Choose the camera and
press OK. Now you have mapped the camera to your camera icon.
Repeat the above step for other cameras.
You can delete a camera by selecting it with the mouse and choosing Delete from the right mouse
button pop-up menu.
LNT allows you to work with I/O devices, i.e. you can display the state of input contacts and you can
switch output contacts of I/O devices (for configuring I/O devices, please refer to 22.7 Configuring
I/O devices on page 203). You can place such I/O controls onto layouts like camera controls.
Here are some icons representing the various states of input and output contacts:
Output contacts:
Input contacts:
Impulse button:
1. From the list of controls at the right side drag an I/O control icon with the mouse onto the layout
and drop it there. An I/O contact selection dialog opens automatically. In the list you only see I/O
devices that are configured with Observer. Choose the device and the I/O contact and press OK
(depending on whether the I/O control represents an input, an output or an impulse button, you
get only input or output contacts of the selected device). Now you have mapped the I/O device to
the icon.
Repeat the above step for other I/O contacts.
You can delete an I/O contact control by selecting it with the mouse and choosing Delete from the
right mouse button pop-up menu.
Defining zones
LNT allows you to group several cameras on a layout into so-called zones that can have arbitrary
polygon shapes.
Cameras belong to a zone as long as they are positioned inside the boundaries of the zone. You can
place an arbitrary number of zones onto a layout.
Here is how you can create a zone:
1. Select a layout from the layout hierarchy.
2. Drag the Zone field with the mouse from the controls list at the right side and drop it onto your
layout. Where you drop the zone field control with the mouse will be the first corner of the
polygon shape and you can now define the zone shape by clicking at further corners. To finish
the zone definition, close the polygon shape. Alternatively you can create a rectangular zone by
pressing the CTRL key while moving the mouse.
3. Per default the zones will be named “Zone-1”, “Zone-2” and so on. You can rename and delete a
zone via the right mouse button pop-up menu. Zones can be moved by dragging them with the
mouse.
Please note: A zone can trigger certain actions, e.g. when you click with the mouse on it, a zone can
show all cameras belonging to the zone in the Observer client. This, for example, will either create a
new view in the Observer client or map it to an existing view depending on the names of the views and
zones. Therefore the name of a zone can be of importance.
Here is how the mapping works for the LNT action Show zone live in Observer Client:
If there is an Observer view that has the same name as the LNT zone and that also contains all the
cameras of the LNT zone, then this view is exposed in the Observer client. Otherwise a new view is
created with the name of the zone. Details for zone actions can be found under 14.5.3 Modifying zone
control appearance and behavior on page 139.
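Here is a small sketch of that mapping decision in Python. It is only an illustration of the rule
described above; the function, data structure, and example names are made up and not part of
Observer.

def view_for_zone(zone_name, zone_cameras, observer_views):
    # observer_views: mapping of view name -> set of camera names in that view.
    existing = observer_views.get(zone_name)
    if existing is not None and set(zone_cameras) <= existing:
        # A view with the zone's name exists and contains all cameras of the
        # zone: this view is exposed in the Observer client.
        return zone_name
    # Otherwise a new view with the name of the zone would be created.
    observer_views[zone_name] = set(zone_cameras)
    return zone_name

views = {"Lobby": {"Cam 1", "Cam 2"}}
print(view_for_zone("Lobby", ["Cam 1", "Cam 2"], views))  # existing view is reused
print(view_for_zone("Yard", ["Cam 3"], views))            # a new view is created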
LNT offers link fields to easily navigate between layouts. Link fields can be placed on layouts and are
resizable grey areas. Clicking on a link field in Navigation mode will open the “linked” layout.
Here is how you can link layouts via link fields:
1. Drag the Link field from the controls list at the right side and drop it onto your layout. Where
you drop the link field control with the mouse will be the first corner of the polygon shape and
you can now define its shape by clicking at further corners. To finish the link field definition, close
the polygon shape. Alternatively you can create a rectangular link field by pressing the CTRL key
while moving the mouse.
After you finished defining the shape of the link field, a dialog is opened offering the available
layouts for this link field. Choose a layout by double clicking or by pushing the Select button.
This defines which layout is to be opened when the link field is double-clicked in Navigation mode.
2. Choose Save from the Project menu to save your changes.
Now you can repeat the steps above to create your overall layout hierarchy. You can switch back to
Navigation mode by clicking the navigation mode icon at the right side of the tool bar.
To define a default project to be loaded when LNT is started, follow these steps:
1. In the Project menu choose Set default project… which opens a dialog listing all available
projects. Choose a project and push Select. To not load a default project, choose <No default
project> from the list.
Now this project will be loaded automatically at startup.
A home layout can be defined which is automatically shown when the project is loaded. You can set
the home layout by following these steps:
1. In the layout hierarchy select the layout that you want to set as home layout.
2. In the Layout menu choose Set as home.
Now this layout will be opened automatically when the project is loaded.
In this dialog you can define the control name and image and which actions are to be performed
on certain mouse operations and events.
Actions
• Show live in Observer Client exposes a large live view of the respective camera in the
Observer client that runs on the same machine. If no client is running on the same
machine, then nothing happens.
• Show archive calendar in Observer Client exposes the archive calendar view of the
respective camera in the Observer client that runs on the same machine. If no client is
running on the same machine, then nothing happens.
• Show live stream in LNT opens a window showing a live stream of the respective camera in
LNT. If this action is bound to Mouse over then the window will be automatically closed
when the mouse is moved away from the camera icon. If this action is bound to Single
click or Double click, then the window stays until it is closed manually. To close all such
windows, you can select Close all live streams in the View menu.
• Start/Stop continuous recording starts or stops continuous recording of the camera in
Observer. It actually sets or deletes the Enable interval checkbox in the camera admin’s
Scheduling dialog. Please be aware that there must be at least one continuous recording
interval for the camera for this to work (refer to 7.1.1 Programming continuous timed
recordings on page 75 for details).
• Start/Stop motion detection enables or disables motion detection of the camera in
Observer. It actually marks or unmarks the Enabled checkbox in the camera admin’s
Motion detection dialog. Please be aware that there must be at least one detection field
definition for the camera for this to work (refer to 8.1.2 Basic configuration of server-based
motion detection on page 91 for details).
• Start/Stop analog video decode allows the control of special devices that decode IP-
based network video signals to analog video signals. This is useful for example for security
center video walls that are driven by analog video signals. The configuration of these
special devices must be done in configuration files (please refer to the Release Notes or to
the customization documentation).
Events
You can also modify the behavior of control icons for certain events, like Connection to camera
lost or Motion detected.
3. Click OK to save changes.
2. In the control icon list click on a zone icon with the right mouse button and choose Modify…
from the pop-up menu. This opens the Modify Zone dialog:
In this dialog you can define the color of the zone and which actions are to be performed on
certain mouse operations and events.
Actions
The possible Actions are basically the same as for the camera control (see 14.5.1 Modifying
camera control appearance and behavior on page 138) with one addition:
• Show zone live in Observer Client exposes the corresponding view of the zone in the
Observer client that runs on the same machine. If there is an Observer view that has the
same name as the LNT zone and that also contains all the cameras of the zone, then this
view is exposed in the Observer client. Otherwise a new view is created with the name of
the zone.
Events
You can also modify the behavior of zones of this type for certain events, like Connection to
camera lost or Motion detected.
The event View selected in Observer Client works this way: If there is an LNT zone with the
same name as the view in Observer then the corresponding action is triggered.
3. Click OK to save changes.
In this dialog you can define the name of the control and the two images for activated and
deactivated states. For the Impulse Button you can also set the Impulse Duration (ms) and
whether 0 or 1 should be set. The control type (Input, Output, Impulse Button) cannot be
modified.
3. Click OK to save changes.
2. In the control icon list click the right mouse button and choose Add I/O contact… from the
pop-up menu. This opens the Modify I/O Contact Control dialog (for a screenshot see 14.5.5
Modifying I/O contact control appearance and behavior on page 141).
In this dialog you can define the name of the control and the two images for activated and
deactivated states. Also set the Control type: An Input contact displays the state of the input
contact of an I/O device. An Output contact allows you to switch an output contact of an I/O
device. An Impulse button allows you to manually switch an output contact for a predefined
period of time.
For example, for an input contact that shows the state of a gate barrier you can use images that
reflect the open and closed state of the barrier.
3. Click OK to create the new I/O contact control icon.
Note: An Observer client must be running under the same user on the same client workstation so that
it can be controlled by Layout Navigation.
As soon as you move the mouse pointer away from the icon, the stream disappears. This behavior can
be changed with the Modify control icon dialog (see 14.5.1 Modifying camera control appearance and
behavior on page 138).
Note: An Observer client has to be running on the same machine being connected to the same server
in order for the feature to work (the client will not be started automatically by LNT).
Note: An Observer client has to be running on the same machine being connected to the same server
in order for the feature to work (the client will not be started automatically by LNT).
Please note: Event handling will only work if the user has the rights to work with events.
When a new event occurs in Observer, then it is displayed in the Event list at the right side of the LNT
window (the Event list can be switched on and off in editing mode by choosing View > Event list).
The Project settings define what happens when a new event is coming in (see 14.6.7 Project settings on
page 144): The layout that contains the primary control related to the event can be exposed
automatically and also the control that is related to the event can be highlighted (blinking rectangle).
Additionally a longer description is displayed in the Event description field at the bottom of the
window (which also can be switched on and off in Editing mode by choosing View > Event
description).
You can acknowledge an event by pushing Acknowledge in the Event description field.
Acknowledged events will be removed from the Event list. Pressing Cancel sets the state of the event
to seen (visited) but does not acknowledge it.
Please note that for each camera, a primary control can be defined that is exposed when a new event
is generated. You can set the primary flag for a camera icon by right-clicking on the icon in editing
mode and choosing Primary from the pop-up menu.
Generally, events in LNT can have the following states (indicated by different colors of the event entry):
• New (unseen) events are shown as grey (if it is selected then it is shown in green).
• Seen (visited) events are shown in blue.
• Acknowledged events are removed from the list.
When there are several new events, LNT lets you see (visit) them one by one. The exact
behavior of the visiting and how the event state can be set to seen (visited) can be defined in the
Project settings (see 14.6.7 Project settings on page 144). You can, for example, mark a new event as
seen and jump to the next event by moving the mouse over the blinking control.
Show home layout on load: Defines whether the Home layout is shown when the tool is started. In
order for that to work you must have a home layout defined.
Highlight zone under mouse cursor: Defines whether zones will be highlighted when you move the
mouse over them. This can be useful for distinguishing when the mouse cursor is over the zone or
over the camera control icon on top of the zone.
Automatically jump to layout on event: In case of an event this setting defines whether the layout
that contains the primary control related to the event should be exposed.
Visiting order of events: Defines in which order new events are to be visited.
Set event state to seen (visited) by: When a new event comes in or an existing event is selected, the
related control blinks or is highlighted. This setting defines with what mouse operation the state of
the event can be set to seen (visited) (blinking is stopped).
Only suggest events of mapped images for visiting: If this checkbox is marked then only events of
mapped cameras will be suggested for automatic visiting. If it is unchecked then all events will be
suggested.
Standard view size for layouts: Defines the default image size, either Fit image or Full size.
Event list insertion mode: Defines whether new events in the Event list are inserted from the Top or
from the Bottom.
Please note: Since iCAT and some functions are licensed separately from Observer you need a valid
license in order to work with these features. See also 11.1 Server system information and restarting on
page 112 for details about what license you have.
• The iCAT Traffic module enables applications for roads and highways: Traffic Monitoring,
Stopped Vehicle Detection, Wrong Way Detection.
• The Face Detection module automatically detects human faces in video streams and
estimates the person’s age group and gender.
• The Smart Tripwire™ function for people and object counting prevents wrong and double
counting and works even with the most difficult entrance situations.
• The Smart Tripwire™ also allows detecting wrong direction movements of people and
objects.
• Powerful and robust object tracking and event triggering can be constrained by object
sizes, speeds, and other properties.
• Heat maps allow you to view various object statistics in an intuitive way.
• Event statistics can be manually exported to Excel or automatically into .csv files for further
processing.
• All iCAT detection annotations are available for live streams and in archived recordings.
• Seamless integration with the Observer event management system EMS and other Observer
functions. Additionally iCAT offers camera sabotage detection and lighting change
detection.
• iCAT algorithms have been tuned for the highest performance and least burden on the
server.
1. An object detection and tracking engine analyzes the video stream and tracks detected objects.
Please be aware that an object has to show consistent motion first in order to be detected and
tracked.
2. An event logic engine with configurable event triggers decides when a tracked object triggers an
event.
3. A real-time statistics module stores statistical information about various aspects of objects like
object sizes and speeds.
For setting up a camera with iCAT you essentially configure the following things:
Object tracking region: The object tracking region defines the part of the camera view in which iCAT
is detecting and tracking objects. For each camera you can define one tracking region that is either the
full camera view or a part of it in the form of a polygon or rectangle. No object will be detected or
tracked outside of this tracking region. Since the CPU overhead caused by iCAT is directly proportional
to the size (area) of all the active tracking regions of all active cameras of a server, optimizing the
tracking regions will save CPU power. Region definitions can also be used to mask out problematic
areas in the scene as well (e.g. swaying trees).
For each tracking region you can also define what object statistics should be measured by iCAT. Such
statistics can then be visualized.
Event triggers: Each camera can have several event triggers that define under what conditions an
event is generated by the detected objects. Event triggers only work inside of the object tracking
region. Examples of event triggers are people or object counters and detectors of stopped or started
objects. The CPU load caused by event triggers compared to the tracking region is negligible.
Scheduling: The standard Observer scheduling mechanisms are also used for scheduling (activating)
various iCAT setups. For example, it is possible to have different iCAT settings for weekdays and
weekends.
Though iCAT is able to analyze video captured by any type of supported camera, using a device with
higher image quality will result in better detection and tracking.
Generally the iCAT algorithms work with indoor and outdoor cameras as well as with different perspectives.
The configuration of the algorithms in Observer is pretty simple, as you will see below.
For people and object counting, the best results are possible if the camera is mounted overhead
downward looking.
Please note: Setting up iCAT definitions for PTZ cameras is problematic since most of the iCAT
functions require a fixed camera position.
iCAT works with any video camera. If the camera can provide an MJPEG stream, iCAT uses this format
because it is the most efficient for video analytics. If the camera provides only MPEG formats (MPEG-4,
H.264, and MxPEG) then iCAT can also work on these streaming formats. However, please be aware
that video analytics in MPEG streams requires a lot more CPU power than in MJPEG streams since the
decoding is much more complex (for multi-stream operation please see below). Also video analytics in
MPEG streams causes additional delays because it works on groups of pictures or frames (so called
GOPs). As a rule of thumb, iCAT adds a delay of approximately 1 GOP duration. Depending on the
actual MPEG camera model, a GOP duration is between 0.5 and 1 sec (see also 4.2 Adding a new
camera and setting basic properties on page 33).
Analog cameras that are connected via a video server are also supported, of course.
iCAT generally works on CIF (or QVGA or nearest) resolution. This is a good balance between accuracy
and CPU overhead. If there is a continuous recording enabled for the camera, iCAT uses this stream for
its algorithms and does not cause additional bandwidth. If the size of the stream is bigger than CIF,
iCAT downscales it to CIF (or QVGA or nearest) resolution.
Any pixel measures that are available in iCAT are relative to this resolution.
iCAT runs on the server and works very efficiently. The CPU overhead caused by iCAT is directly
proportional to the following aspects (see also 15.2 Basic iCAT concepts on page 147):
• The size (area) of all the active tracking regions of all cameras of a server. This means that
optimizing the tracking regions will save CPU power. The number and shape of event triggers is
negligible.
• The video processing speed (in fps) of iCAT.
• The streaming format (see above)
An Observer server running with iCAT on standard (not high-end) desktop server hardware can easily
handle approximately 10 iCAT-enabled cameras with standard settings.
Additional CPU power (like quad core), enhanced RAM speed, and bigger L2 caches help to boost the
iCAT performance.
As indicated above iCAT normally needs much more CPU power for processing MPEG streams (MPEG-
4, H.264, and MxPEG) than for processing MJPEG streams. Therefore Observer can pull two parallel
streams from the camera if the camera supports that: one MPEG stream for live viewing and recording
and one additional MJPEG stream for iCAT operation.
In the Default settings dialog in Camera Admin the checkbox Multi-stream allowed enables or
disable this dual-stream iCAT processing (see also 4.2 Adding a new camera and setting basic
properties on page 33).
If this checkbox is selected and live viewing or recording is active with an MPEG stream with a frame
rate of more than 5 fps or a resolution bigger than VGA 640x480 pixels, then Observer will try to pull a
second MJPEG stream from the camera for iCAT processing (please note that this stream will be pulled
even if the checkbox Allow JPEG streaming is deselected). For Face Detection, Traffic Monitoring,
Stopped Vehicle Detection, and Wrong Way Detection the resolution of this second stream will be
approximately VGA size (640x480 pixels) and for other iCAT definitions it will be approximately QVGA
(320x240 pixels). The frame rate of this second stream depends on the iCAT function.
Note: Dual-streaming iCAT processing will not be activated automatically after selecting the Multi-
stream checkboxes. Please restart the server or stop and start (disable/enable) all iCAT functions of the
camera in order to activate dual-streaming iCAT after changing the Multi-stream selections.
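The conditions under which the second MJPEG stream is pulled can be summarized in a few lines.
The following sketch only restates the rules from the paragraph above for illustration; the function
and parameter names are made up and it is not Observer code.

def pulls_second_mjpeg_stream(multi_stream_allowed, live_or_recording_active,
                              mpeg_fps, width, height, face_or_traffic_function):
    # No dual streaming without the Multi-stream option and an active MPEG stream.
    if not (multi_stream_allowed and live_or_recording_active):
        return None
    if mpeg_fps > 5 or (width * height) > (640 * 480):
        # A second MJPEG stream is pulled; its resolution depends on the iCAT
        # function (approx. VGA for Face Detection/Traffic, approx. QVGA otherwise).
        return "approx. 640x480" if face_or_traffic_function else "approx. 320x240"
    return None

# Example: H.264 live view at 12 fps in Full HD with a people counter (not face/traffic).
print(pulls_second_mjpeg_stream(True, True, 12, 1920, 1080, False))  # approx. 320x240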
Object detection
Depending on the sensitivity and other settings iCAT currently detects objects of 8x8 pixels or bigger in
size (in a QVGA image). Only moving objects are detected. New objects are detected by iCAT after a few
video frames. How quickly objects are detected is also influenced by the sensitivity setting.
If the objects you want to track move very quickly across your camera view you will require a higher
video processing speed of iCAT than if they move slowly across your camera view. As a rule of thumb
the optimal frame rate for object detection and tracking is 8-10 fps.
Note: It is not the absolute speed of the objects that determines what processing speed you need, but
the relative speed that these objects have in your camera view. This relative speed is influenced by the
camera perspective and distance from objects.
Example: Cars on a highway are moving very fast. However, if you look at them with a camera from a
larger distance and from a perspective with a flat angle, the cars are actually moving pretty slowly
in your camera's view. Therefore you can choose a slower video processing speed even for such fast
objects as cars on a highway.
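Hint: A quick way to reason about a suitable Video processing (fps) value is to estimate how many
analysed frames an object spends in the relevant part of the view. The following back-of-the-envelope
sketch assumes measures in pixels of the analysed (CIF/QVGA) image; the numbers are examples only.

def frames_in_region(region_width_px, object_speed_px_per_sec, processing_fps):
    # Roughly how many analysed frames an object is visible while crossing a
    # region of the given width, at the given relative speed and processing rate.
    time_in_region = region_width_px / object_speed_px_per_sec
    return time_in_region * processing_fps

# Example: an object moving at 200 px/s through an 80 px wide area, analysed at
# 10 fps, is seen in about 4 frames - enough headroom for iCAT, which needs a
# few frames of consistent motion before it detects a new object.
print(frames_in_region(80, 200, 10))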
Hint: You can also use the Comment field to configure certain views to be shown to one or
multiple users when an iCAT event occurs. See 6.10 Dynamic View Control in Online Monitor on
page 68 for more information.
2. You can choose a specific Icon, Sound, and Highlight color for the event when it is shown in
the Event list.
3. You can also define a special event priority for events generated by the iCAT definition by setting
the value Priority of generated event. Please note that the event priority is a relative priority
whereby 100 is the default priority (see also 9 Handling events on page 97).
4. For more information on the Dynamic view action configuration please see 6.10 Dynamic View
Control in Online Monitor on page 68
5. The check boxes Save event in event list and Do not save event in event list allow you to
selectively override the general setting for the camera schedule which is defined in 15.3.17
Scheduling iCAT operation and recording on page 168).
Please note: As indicated in 15.2 Basic iCAT concepts on page 147 and 15.2.1 Considerations for
setting up a system with iCAT on page 148 objects will only be detected and tracked inside a
tracking region. Event triggers will only work inside the boundaries of tracking regions. On the
other hand, making the object tracking region as small as possible helps you to save CPU power
of your Observer server.
Also be aware that the tracking region should be at least approximately twice the size of the
biggest objects you want to track.
7. Now you have to set the configuration parameters of the tracking region:
Indoor camera: Enable this setting if the camera is an indoor camera. The indoor setting usually is
best for rooms not bigger than 10x10 m and objects not farther away than 15 m. Disable this check
box for outdoor cameras.
Overhead downward looking: This setting only takes effect if Indoor camera is enabled. Enable the
setting if the camera is overhead mounted and downward looking. This will improve object
separation and the accuracy of object counting. In our example above, the camera is an indoor
camera and mounted overhead downward looking.
Sensitivity: Usually this setting should be left at Normal. Only if you are not satisfied with the object
detection quality or behavior should you try to modify this setting. If you want sharper object
detection and separation, you can set the Sensitivity to High or Very high. Also, for example, if you
want to detect very small objects, you can increase the sensitivity. The Sensitivity also influences how
fast new objects are detected: higher Sensitivity means quicker object detection, lower means slower
detection. For environments that are visually very noisy and cause too many objects to be detected,
the Sensitivity can be set to Low or Very low.
Max object lifetime (sec): Defines how long a detected object is tracked before it is dismissed by
iCAT (i.e. no longer treated as an object but essentially becoming background). After an object has
been dismissed by iCAT, if it starts moving again, it will be detected as a new object. The setting is
useful to lower the probability of falsely tracked objects which remain in the scene for too long. If you
experience that objects are no longer tracked though they are visible and moving, this value might
have to be increased.
Max stopped object lifetime (sec): Defines how long a detected object that has stopped is tracked
before it is dismissed by iCAT (i.e. no longer treated as an object but essentially becoming
background). After an object has been dismissed by iCAT, if it starts moving again, it will be detected
as a new object. The setting is useful for removing false detections (usually caused by environmental
changes) which often remain still for a longer period. If objects in the scene usually stop longer than
this time limit, then set it higher.
Video processing (fps): This defines at what frame rate the iCAT algorithms operate. If the objects
you want to track move very quickly across your camera view you want to increase the speed. If they
move slowly across the camera view you can decrease the speed. See also 15.2.1 Considerations for
setting up a system with iCAT on page 148.
Tolerance radius for stopped object (%): Defines how much a stopped object may move away from
its stopping position before iCAT detects it as moving again. There is a virtual circle centered at the
object's center point; in this field you can define its radius proportional to the size of the object. If the
object's center remains inside the circle it is detected as stopped. When this value is set close to
100%, slowly moving or loitering objects will be detected as stopped. It also influences the statistics
of stopped objects.
Reinitialize on light change: Defines whether the iCAT object tracking region will be re-initialized if
there are significant light changes (e.g. a flashlight that is turned on in a dark environment).
Slow adaptation: This option lengthens the learning period to distinguish between the static
background and moving objects. It can be useful for particularly crowded scenes in people counting
applications.
8. Press Next to get to the Heat map data collection settings for the object tracking region. Here
you can define what heat map data iCAT should collect. Later on the resulting heat maps can be
shown in the Online monitor and Archive player (see 15.4.2 Displaying heat maps on page
173). Currently, the following data can be collected:
• Object count
• Object speed
• Stopped object count
• Object stopping time
NETAVIS Observer 4.6 introduced two new features for heat maps:
a. Normalizing the heat maps of all cameras (see 11.2 Setting Observer server parameters on
page 114).
b. Resetting the heat maps (applied to all cameras on this server!):
• Manually: Select the corresponding Object Tracking Region definition, right-click on it,
and select Reset all heat map values on this host.
• Automatically: Configure the Heat map data reset method option in the Host
Admin (see 11.2 Setting Observer server parameters on page 114).
9. Press Save to create the tracking region. Later on you can modify the tracking region settings.
10. If you did not yet define the scheduling for iCAT activities, then after you save the first iCAT
definition for a camera, you will be prompted for whether you want to edit the scheduling now.
Click on the Yes button if you want to configure the scheduling now (refer to section 15.3.17
Scheduling iCAT operation and recording on page 168 for further information).
Click on the No button if you do not wish to schedule the recording or if you want to do that later.
Please note:
- If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually (see 15.3.17
Scheduling iCAT operation and recording on page 168).
- If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
15.3.4 Defining an event trigger for people and object counting (Smart Tripwire)
Once you have created a tracking region, you can create an arbitrary number of event triggers inside
this tracking region. Event triggers define under what conditions an Observer event is generated by
iCAT. Such events can trigger automatic recording and are stored in the normal Observer event
database that can be queried and exported.
Currently iCAT supports the following event triggers:
• A Smart Tripwire™ for directional people or object counting. This tripwire is directional, so if you
want to count objects in two directions you would create two tripwires.
• A polygon or rectangle that creates an event when an object either crosses the field, stops
inside the field or starts moving inside the field (e.g. for perimeter protection).
Please note: Before you can define an object trigger you must first define an object tracking region
(see 15.3.3 Defining an object tracking region on page 151).
This tripwire triggers a counting event whenever an object moves from the green area across the
red tripwire. The tripwire is “smart” as it only counts objects that have first been detected in the
green area and then move across. It would not count an object that was first detected in the non-
green area, then moved across the line into the green area, and then crossed the tripwire from the
green to the non-green area. It also would not double-count an object that has moved across the
line twice.
Hints: To count incoming and outgoing people or objects you would create two different
tripwires with opposite green areas which would both trigger events.
You can also use the tripwire to detect objects moving in the wrong direction.
Please note: As indicated in 15.2.1 Considerations for setting up a system with iCAT on page 148
objects will only be detected after a few frames. Therefore an object can move a bit before it is
actually detected as object by iCAT. Therefore it makes sense to have the green area big enough
to allow iCAT time for the object detection. If that is not the case it might be possible that quickly
moving objects are not detected before they cross the tripwire and therefore would not be
counted. The ideal settings depend on viewed (relative) object speed and iCAT video processing
frame rate.
Do not place the tripwire too close to areas where objects exit the scene (e.g. image borders,
doors), because they might disappear before crossing the tripwire. It is a good practice to draw
the tripwire about half the average object size away from such areas.
7. You can also define when an object is counted by either selecting Object center point, Any
point of object, or Whole object. The most appropriate choice in most cases is Object center
point because of its robustness.
8. When you push the Next button you can define additional constraints for the event creation. You
can limit the counting only to certain object sizes, certain aspect ratios, and a certain speed.
Currently those measures are definable in pixels (please keep in mind that the resolution iCAT
works on is either CIF or QVGA depending on the aspect ratio of the camera). Future releases of
iCAT will allow for real world measures.
Zero values in these fields mean that there is no constraint.
Hint for constraining the object size or speed: The size is the area of the object in pixels and
the speed is also measured in pixels per second. To know what object sizes or speeds you want
to filter it is helpful to watch a few objects passing the triggers and switch on the object markers.
These markers show the size and speed of the object in pixels. These are exactly the same
measures that you can use for the trigger. Here is an example of an object marker (Object ID is
[10], object size is 9110 pixels, speed is 208 pixels/sec):
The section 15.4.1 Displaying iCAT information in the Online Monitor and when playing recordings
on page 171 shows you how to view object markers.
9. In the Identifier text field enter a name for this event trigger. An example name for a people
counter would be "Entrance 1 incoming".
10. Press Save to save your definition.
Please note:
• If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of
the camera. If you do not want that, you have to remove the assignment manually (see
15.3.17 Scheduling iCAT operation and recording on page 168).
• If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
Hint: For more details on using NETAVIS Observer for people counting please refer to the People
Counting with iCAT White Paper available in the documentation section of our website.
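As a rough illustration of how such pixel-based constraints act as filters (the limits and function below are hypothetical and not the actual iCAT settings), a value of zero simply disables the corresponding check, mirroring the behavior described above:

# Hypothetical illustration of pixel-based event constraints; a limit of 0
# means "no constraint", as described in the steps above.
def passes_constraints(size_px, aspect_ratio, speed_px_s,
                       min_size=0, max_size=0,
                       min_ratio=0.0, max_ratio=0.0,
                       min_speed=0, max_speed=0):
    def within(value, low, high):
        if low and value < low:
            return False
        if high and value > high:
            return False
        return True
    return (within(size_px, min_size, max_size)
            and within(aspect_ratio, min_ratio, max_ratio)
            and within(speed_px_s, min_speed, max_speed))

# The object marker example from above: size 9110 pixels, speed 208 pixels/sec.
print(passes_constraints(9110, 1.4, 208, min_size=5000, max_speed=300))  # True
print(passes_constraints(9110, 1.4, 208, max_speed=150))                 # False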
8. In the text field Minimum time for staying inside field (sec) (for Object is crossing field
event triggers) or Min. time for stopping/staying inside (sec) (for Object stops in field
triggers) you can enter a minimum time required for an object that either stops or stays inside a
field before an event is triggered.
9. When you press the Next button you can define additional constraints for the event trigger. You
can limit the counting only to certain object sizes, certain aspect ratios, and a certain speed.
Currently those measures are definable in pixels (please keep in mind that the resolution iCAT
works on is either CIF or QVGA depending on the aspect ratio of the camera). Zero values in these
fields mean that there is no constraint.
Hint for constraining the object size or speed: The size is the area of the object in pixels and the speed is also measured in pixels per second. To find out which object sizes or speeds you want to filter, it is helpful to watch a few objects passing the triggers with the object markers switched on. These markers show the size and speed of the object in pixels, which are exactly the same measures that you can use for the trigger. Here is an example of an object marker (Object ID is [10], object size is 9110 pixels, speed is 208 pixels/sec):
The section 15.4.1 Displaying iCAT information in the Online Monitor and when playing recordings
on page 171 shows you how to view object markers.
10. In the Identifier text field enter a name for this event trigger.
11. Push Save to save your definition.
Please note:
- If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually (see 15.3.17
Scheduling iCAT operation and recording on page 168).
- If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
• Camera defocused
• Camera covered
• Camera moved
Please note: For sabotage detection an object tracking region is NOT needed.
When initializing the camera for sabotage detection please make sure that the camera has the correct focus setting and that the scenery and brightness are stable.
1. Choose Video analysis (iCAT) from the System administration menu. This opens the Video
analysis (iCAT) dialog.
2. Choose a camera and in the menu select Add new definition.
3. In the Type pop-up menu choose Sabotage detection, which will expose the configuration
settings.
4. Select any of the three sabotage detection types.
5. In the Identifier text field enter a name for this sabotage detection.
6. Push Save to save your definition.
Please note:
- If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually (see 15.3.17
Scheduling iCAT operation and recording on page 168).
- If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
Sabotage detection uses three detector algorithms to generate events for camera moved, defocused
and covered.
The camera movement detector tries to locate a number of strong (i.e. high-contrast) points across the entire picture. It then searches for them in each of the following frames, while continuously creating new points to keep adapting to new scenery. A "camera moved" event occurs when a given number of these points are lost for a while.
The focus change detector acts like the auto focus algorithms in digital cameras. It estimates the average sharpness of the picture and produces an event if this sharpness changes abruptly. A "focus lost" event is produced if the sharpness of the picture decreases below a threshold, and a "focus gained" event if the sharpness increases above a threshold. Both thresholds are based on average sharpness values of previous frames.
The camera covered detector uses a brightness analyzer that calculates the average brightness of the picture for each frame and, if something unusual happens, tries to find out what has happened (light switched off, light switched on, or just a person in dark clothes passing by). It does so by analyzing a sample of average brightness values collected in previous frames.
The results of these three detectors are combined to give the final alarm event (camera moved, camera covered, focus lost/gained, brightness change).
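To illustrate the principle of such a focus change detector, the following Python sketch estimates image sharpness as the variance of the Laplacian and raises an alarm when it drops well below the recent average. This is only a generic illustration assuming OpenCV and NumPy are available; it is not iCAT's actual algorithm, and the threshold factor is an arbitrary example value:

# Generic sketch of a sharpness-drop detector (NOT iCAT's actual algorithm).
import cv2
import numpy as np
from collections import deque

history = deque(maxlen=50)   # sharpness values of recent frames

def check_focus(frame_bgr, drop_factor=0.4):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()   # higher = sharper
    # Alarm if sharpness falls far below the average of previous frames.
    alarm = bool(history) and sharpness < drop_factor * np.mean(history)
    history.append(sharpness)
    return sharpness, alarm

# Example usage with a camera or video file:
# cap = cv2.VideoCapture("rtsp://camera/stream")
# ok, frame = cap.read()
# if ok:
#     value, focus_lost = check_focus(frame)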
As mentioned above, the camera movement detector works with high-contrast points in the picture and the focus change detector checks the sharpness of the picture (by measuring the sharpness of edges). Logos or date and time text fields generated and placed on the picture by the camera can decrease the accuracy of these detectors or even prevent detection altogether. This is because such overlay fields are always stable, have a high contrast and sharpness, and can therefore outweigh real picture changes, so that the overall change is too small to be detected.
The solution is to disable the text overlay on the camera's own web page.
Hint: As the three sabotage types have many properties in common, switch on all sabotage categories to get the best detection rate. For example, a camera covered event might be categorized as focus lost, because the image sharpness then drops dramatically. If this category is also switched on, the sabotage will still be detected.
Please note: For lighting change detection an object tracking region is NOT needed.
1. Choose Video analysis (iCAT) from the System administration menu. This opens the Video
analysis (iCAT) dialog.
2. Choose a camera and in the menu select Add new definition.
3. In the Type pop-up menu choose Lighting change detection, which will expose the
configuration settings.
4. Select the checkboxes for Light switched on and Light switched off to detect abrupt lighting
changes like when somebody switches the light on or off.
You can also enter values in the Brightness high limit (%) and Brightness low limit (%)
fields to detect slower lighting changes like during sunrise and sundown. If you leave these
values empty, then slower lighting change detection will be disabled.
5. In the Identifier text field enter a name for this definition.
6. Push Save to save your definition.
Please note:
- If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually (see 15.3.17
Scheduling iCAT operation and recording on page 168).
- If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
Note: Since NETAVIS Observer 4.7 the default age groups are 0 to 24, 25 to 55, and above 56.
8. Check or uncheck the Display age groups option to select whether the exact age estimation or
the corresponding age group should be shown in the iCAT info display.
9. Since NETAVIS Observer 4.6.3 there is also an option to define a Minimal face size. Clicking the button opens a separate window with a snapshot of the camera's video stream and you can draw the minimal face size with the mouse on that snapshot:
Faces which are smaller than this rectangle will not be detected and will not create an event. For
increased accuracy you can resize this snapshot window by clicking and dragging on its corners
or edges.
Hint: For more details on using NETAVIS Observer for face detection, including information
about how to configure the desired age groups, please refer to the Face Detection with iCAT White
Paper available in the documentation section of our website.
Please note:
- No object tracking region is needed for a dynamic privacy mask.
- Currently, privacy masks are not shown in Client for Smartphone & Tablet, Layout
Navigation, Video4Web, and Video Wall Control.
1. Choose Video analysis (iCAT) from the System administration menu. This opens the Video
analysis (iCAT) dialog.
2. Choose a camera and in the menu select Add new definition.
3. In the Type pop-up menu choose Privacy mask, which will expose the configuration settings.
4. Under the camera preview choose the Rectangle or Polygon check box.
5. Now you can draw the privacy field with the mouse in the preview pane. For a polygon you just
click with the mouse to define the corners of the polygon. You close the polygon by crossing an
existing edge or by double clicking with the mouse.
6. In the Identifier text field enter a name for this privacy mask.
7. Push Save to save your definition.
Please note:
- If you add a new iCAT definition, it will automatically be assigned to all iCAT schedules of the
camera. If you do not want that, you have to remove the assignment manually (see 15.3.17
Scheduling iCAT operation and recording on page 168).
- If a definition is not assigned to a schedule then it will not be activated and no archive
recordings will be made and no events will be generated.
6. Adjust the Road length and Road width (both in meters) of the road section which is covered
by the region configured above.
Please note: The included section should be around 70-100 meters long.
7. Define the Measurement time unit (in minutes) which defines how long a certain traffic state
has to last before a corresponding event is triggered.
8. Now for each Traffic state except Normal traffic the lower and upper thresholds for the Speed
(in km/h) and Traffic Density (in %) have to be defined. Additionally a Highlight color, Icon,
Sound, and Dynamic View Action can be defined for each Traffic state.
Please note: The Speed and Density ranges from the different traffic states should not overlap.
6. Adapt the Alarm time limit (sec) option which defines after how many seconds the corresponding event is triggered.
Hint: It is possible to set up more than one Stopped Vehicle Detection per camera, e.g. to generate
different events for vehicles stopped on the main road or the emergency lane.
6. Next the detector learns the typical direction of the traffic inside the previously defined region.
Depending on the amount of traffic the duration for this learning process can range from a
couple of hours to a day. When the learning process is completed every vehicle going into a
direction other than the usual one will be detected as a wrong way driver and an event will be
generated.
If the typical traffic direction changes temporarily, e.g. because of construction work, the
detector can be reset: Select the Wrong Way Detection definition, right-click on it, and select
Reset traffic direction learning.
Please note: The Wrong Way Detection region should always include road parts where the
typical traffic direction is observable. For example, if the scenario is to detect drivers traveling the
wrong way on the emergency lane it is not a good practice to draw the detection region tightly
around the emergency lane. Rather the solution is to extend the detection region to include the
lane next to the emergency lane, so the detector can compare a vehicle’s route to the direction
learned on the adjacent lane.
Hint: It is possible to set up more than one Wrong Way Detection per camera, e.g. to generate different
events for vehicles going the wrong way on the main road or an exit lane.
Note: iCAT Number Plate Recognition is a separate module which needs to be enabled with an
appropriate license key and - depending on the specific configuration - a USB hardware dongle.
The basic steps for setting up Number Plate Recognition in Observer are:
1. Configure an iCAT Number Plate Recognition definition (which is covered in this section).
2. (Optional) Configure the desired NPR lists (please see 18 NPR List Management on page 184 for
more details).
3. (Optional) Configure desired actions for number plates included or excluded in NPR lists (please
see 17 Rule Administration on page 179 for more details).
Here are the steps for configuring iCAT Number Plate Recognition:
1. Choose Video analysis (iCAT) from the System administration menu. This opens the Video
analysis (iCAT) dialog.
2. Select a camera in the camera tree and in the menu select Add new definition.
3. In the Type pop-up menu choose Number Plate Recognition which exposes the configuration
settings.
4. In the Identifier text field enter a name for this iCAT Number Plate Recognition definition.
5. Draw the region where the number plates appear in the preview pane. Just click with the mouse
to define the corners of the polygon and close it by crossing an existing edge or by double
clicking with the mouse:
6. Check or uncheck the Lower edge parallel with plate checkbox depending on whether the
lower edge of the region defined above is parallel to the number plates (only available for
Region: Hungary). If it is enabled then the initial learning period of the number plate
recognition module is shortened.
7. Select in which Region the system is used (please refer to the iCAT NPR datasheet available on
our website for the complete list of countries whose number plates are supported):
• Europe and Russia
• Arabian Peninsula
• Hungary
• North Africa
• Southeast Asia
• Pakistan
• Central Asia
If possible iCAT NPR also detects which specific country a number plate is from. The detected
country is then shown in the corresponding event details and can also be used in event searches.
Note: This option has to be configured in accordance with the license running on your system.
8. Set the Event suppression time (sec) which filters out repetitive iCAT NPR events, e.g. when
cars are idling in front of a camera.
The default event suppression time of 10 seconds means that after an iCAT NPR event for a number plate has been generated, a second event for the same car will only be generated if it leaves the detection region for 10 seconds and then moves back into it (a simplified sketch of this suppression logic follows after these steps).
9. Select the Minimum plate length, whereby you can choose values between 1 and 5. Recognized number plates shorter than this minimum length will be discarded and not stored in the event database. Please note that whitespaces are excluded from the minimum plate length.
10. Select the Number Plate Recognition Scenario:
• High speed is used for free flow scenarios
• Slow speed is used for vehicle entry, parking, and similar scenarios
Note: This option has to be configured in accordance with the license running on your system.
11. Check or uncheck the Motion detection trigger checkbox (only available for Scenario: Slow
Speed). If this option is enabled number plate recognition is only attempted after a motion is
detected in the image whereby the threshold is a 5% pixel change in the previously selected
detection region.
12. Check or uncheck the Fast detection checkbox. If this option is enabled then the first number plate recognition of a given plate will create a corresponding event; otherwise three consecutive recognitions are needed for the event to be created.
13. Check or uncheck the Disable learning checkbox. If this option is checked then the initial
learning period is disabled.
14. Check or uncheck the Get color information checkbox. It is only available for some regions
(e.g. Arabian Peninsula) where the color of the plates has a significance. The detected color is
then shown in the corresponding event details and can also be used in event searches.
15. Check or uncheck the Country/region only checkbox. If this option is checked then only the
country/region of a number plate rather than the full plate is read by the system.
16. Next the detector learns the typical location of the number plates inside the previously defined
region. The duration of this learning process depends on the number of vehicles passing through
the region and the previously selected Lower edge parallel with plate and Disable learning
options. During this learning period a corresponding message appears in the camera ports of the
Online Monitor where the camera is shown, no number plates will be recognized, and no Number
Plate Recognition events will be generated.
Note: Only one Number Plate Recognition iCAT definition can be created per camera.
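The following Python sketch illustrates the event suppression behavior described in step 8 above. It is a simplified model under the assumption that every detection of a plate refreshes its "last seen" time; it is not the actual Observer implementation:

# Simplified model of NPR event suppression: a new event for a plate is only
# generated if the plate has not been detected for at least `suppression_s`
# seconds (i.e. it left the detection region and came back).
import time

last_seen = {}   # plate text -> timestamp of the most recent detection

def should_generate_event(plate, suppression_s=10, now=None):
    now = time.time() if now is None else now
    previous = last_seen.get(plate)
    last_seen[plate] = now
    return previous is None or (now - previous) >= suppression_s

print(should_generate_event("W-123AB", now=0))    # True  (first detection)
print(should_generate_event("W-123AB", now=5))    # False (still suppressed)
print(should_generate_event("W-123AB", now=20))   # True  (absent long enough)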
Please note:
- No object tracking region is needed for a dynamic privacy mask.
- Currently, dynamic privacy masks are not shown in Client for Smartphone & Tablet, Client for iPad,
Mobile Client, Layout Navigation, Video4Web, and Video Wall Control.
7. Set the Tolerance time (milliseconds) which allows cars to "slip through" after the traffic light
has switched to red.
8. Next the system learns the on/off state of the traffic light lamp inside the previously defined lamp region. The duration of this learning process depends on the specific scene but should not take more than 5 minutes. During this learning period a corresponding message appears in the camera ports of the Online Monitor where the camera is shown, no Red Light Violations will be recognized, and no Red Light Violation events will be generated.
2. In the camera tree select the camera that you want to schedule. Go to the Scheduling dialog by
clicking on Next at the bottom.
3. In the menu select Modify selected camera or group.
4. Press the Add button below the Time intervals list and choose Video analysis (iCAT) from the
type button labeled Change.
5. Now define the days and times when iCAT should be enabled for this camera. You can activate
individual days or, with the All button, the whole week at once. Select hours and minutes from
the time popup.
Please make sure that the Enable interval checkbox is marked, because only then the settings
are enabled.
Note: You can create multiple iCAT intervals for different setups at different times. For each
interval proceed as described here.
6. Check the iCAT definitions assigned to this interval via the Assigned iCAT definitions popup menu at the right side of the dialog. By default all available iCAT definitions for this camera are assigned. If you do not want that, you can now deselect the definitions that you do not want in this interval.
7. You can also define whether something should be recorded when an event is triggered by one of the assigned iCAT definitions. For that, a full set of options is available, as described in 7.1.1 Programming continuous timed recordings on page 75.
In addition to the parameters for continuous timed recordings you can specify a Pre-event
frame rate (fps) which can differ from that after the event defined by Frame rate (fps). With
Pre/Post-event recording (sec) you can specify how long before and after the event you want
to record.
Please note: If there is an active continuous recording in MPEG format, it does not make sense
to have any Pre/post-event recording (sec) set for the event-based recording. This is because
MPEG recording is only done in one quality. See also below for further considerations on pre- and
post-event recording.
8. iCAT events for this camera are only stored in the Event database and only appear in the Event
list if the flag Save event in Event list is switched on. Otherwise only the recording will start
but no event will be generated.
You can override this setting for individual iCAT events selectively in the corresponding iCAT
definition (e.g. 15.3.4 Defining an event trigger for people and object counting (Smart Tripwire) on
page 154).
9. For all other recording settings please refer to 7.1.1 Programming continuous timed recordings on
page 75.
10. Press Save.
Note: iCAT-triggered event generation and recording is only active if there is an enabled iCAT
interval and there is at least one enabled iCAT definition assigned. Outside of this time interval
there is no recording or event generation. Furthermore, recording is started only if either of the
fields for Pre/post-event recording (sec) is bigger than zero.
Observer allows you to define the frame rate and quality of event-based recordings. If you want to save
video streams for events generated by the video analytics toolkit iCAT please keep in mind that
Observer needs to analyze the video stream that it later stores.
Some cameras have limitations when providing multiple video streams at different qualities and frame
rates. Therefore Observer tries to retrieve only 1 video stream with 1 quality and frame rate setting
whenever possible. This also helps to keep the CPU load for the server and the camera at a minimum.
Here is some information about how event-based recording is done depending on the video format and the pre/post-event frame rate settings. This can help you tune your system to better fit your needs while reducing the load on the server and the camera. For further information on which video
format is best for iCAT please refer also to 15.2.1 Considerations for setting up a system with iCAT on
page 148.
Please note: If the recording event is not generated by iCAT, then the recording behavior is the same as described here, except that iCAT does not analyze the video stream. Note that simple motion detection does involve iCAT. If both iCAT-based and other event-triggered recording are active at the same time, the iCAT recording parameters have priority for obtaining pre-alarm streams.
Event-based recording for iCAT in MPEG formats (MPEG-4, H.264, and MxPEG)
The majority of MPEG cameras cannot deliver multiple MPEG streams with different formats. Only 1
stream is delivered from the camera. Therefore you need to set the default frame rate to at least the
detection frame rate you need for iCAT (see Default settings in 4.2 Adding a new camera and setting
basic properties on page 33).
• There is no pre-event recording (recording time is 0)
Observer obtains a QVGA MJPEG stream from the camera and runs iCAT analyses on it. In the
case of an event, the streaming format is switched to MPEG and recording is started.
Advantage: minimal overhead on server CPU.
Disadvantage: Depending on the camera there can be a small delay caused by the camera
needed to switch from the pre-event streaming format MJPEG to the post-event MPEG format.
If the continuous recording has the same frame rate and quality as the post-event recording, then only continuous recording is done and the recording calendar is marked with the events.
If continuous recording is done at a lower frame rate or different quality and both streams use the
MJPEG format, then the iCAT recording settings will only be used for the post alarm period and the
continuous recording settings for other times. In all other cases, e.g. when one of the recordings uses
MPEG-4 / H.264 / MxPEG, the continuous recording settings will be used to record the event.
iCAT processing will be done with continuous recording frames (scaled down in size and/or frame rate
if necessary). Pre-alarm setting has no relevance in this case.
• Watching iCAT information live in the Online monitor and also when replaying recordings.
• Displaying Visual Statistics™ in the Online monitor.
• Generating reports on iCAT events like people counting and stopped objects.
15.4.1 Displaying iCAT information in the Online Monitor and when playing recordings
For each camera that has active iCAT schedules enabled you can display additional iCAT information
like object markers, bounding boxes, and event trigger fields in the Online monitor and also when
playing back recordings in the Archive player or an exported SAFE Player.
Here is an example of additional iCAT information displayed:
Object markers: Defines whether object markers should be displayed. Object markers show the object ID and information about the state of the object (MOV = moving, STP = stopped, LOUNGE = moves just a little bit), as well as its size and speed. In this example the object ID is [10], the object size is 9110 pixels, and the object is moving.
Event count fields: Defines whether the event count fields are to be displayed in the lower left corner of the view port. When enabled, the count information will be displayed for each event trigger separately in the form of Q for the last quarter of an hour, H for the last hour, D for the day, and T for the total since setup. Here is an example: there are two event triggers, a field for counting stopped objects and a people counting tripwire (counter 1).
Object bounding boxes: Defines whether object bounding boxes should be displayed. Here is an example of an object bounding box displayed. Please note that the object marker and the event triggers are also displayed.
• an object tracking region with enabled heat map data collection (see 15.3.3 Defining an object
tracking region on page 151 for details)
• an enabled and currently running iCAT schedule for that object tracking region
Below you can find an example of an object count heat map:
In the Online monitor you can display these heat maps by choosing the desired heat map type from
the view port's right-click menu - iCAT heat maps - Type:
• Object count
• Object speed
• Stopped object count
• Object stopping time
Next, in the iCAT heat maps - Show menu you can choose when to show the heat maps:
• Show always
• Show on mouse over
• Do not show
For showing the heat maps for all view ports in a view the same menu is also available in the view's
Control Menu - Set parameters of all view ports.
In the Archive player's Control Menu you can select the heat map type to be shown via iCAT - iCAT
heat maps.
In the screenshot above you see an example of the object count heat map in an office situation. Cold
colors (such as blue) mean few object counts and hot colors (such as red) mean high object counts.
When you move the mouse over the view port you can see the raw heat map data for the
corresponding heat map type:
• Object count: accumulated object count
• Object speed: average speed of objects
• Stopped object count: accumulated stopped object count
• Object stopping time: accumulated stopped object time
In the screenshot above the count in the middle of the screen would be 3199 objects.
If normalization is enabled for this heat map type (see 11.2 Setting Observer server parameters on page 114 for further information), then the maximum value set there is displayed in brackets (3500 in the screenshot above).
As a comparison, here you see the heat map of the stopped object counts of the same camera:
Notice the difference in coloring. You can see that people only very seldom stop on the right side of the office but stop very often in the middle near the chair.
1. Choose Video analysis (iCAT) from the System administration menu. This opens the Video
analysis (iCAT) dialog.
2. Choose the camera and then the corresponding object tracking region.
3. In the menu select Modify selected definition.
4. Press Next to get to the Heat map data collection settings.
5. Disable the checkboxes of those statistics you want to reset.
6. Press the Save button to save the changes. Now the heat map data of the disabled types is reset.
7. Now enable the heat map data collection again by repeating the above steps accordingly. Do not
forget to press Save again.
Note: I/O devices (and their corresponding actions) configured in versions prior to NETAVIS Observer
4.6 via the XML configuration file will continue to work in Observer 4.6. However if you want to use
I/O devices with new features introduced in Observer 4.6 (e.g. Rule Administration) then you will need
to remove the previous XML configuration and re-add the I/O device in the I/O Device Administration.
2. To add a new I/O device right-click anywhere in the window and select Add new device.
3. In the Name text field enter a name for this I/O device.
4. In the Type pop-up menu choose the type of I/O device you want to add to the system.
Hint: Device-type specific Configuration hints are shown to the right of the text boxes.
9. You can edit the Port name for each I/O port and relay by double-clicking on it.
10. For some devices (e.g. many AXIS cameras, AXIS P8221) you can also configure the Port type by
double-clicking on it and choosing the desired type from the pop-up menu.
Note: It is important that the port type configuration made here matches the one on the
corresponding I/O device's webpage!
11. When you press Save Observer will try to connect to the I/O device with the given configuration
and a corresponding "I/O device was added" event is generated:
If the connection is not successful or lost at any time a "Connection lost to I/O device" event is
generated:
12. For any changes of input ports and relays of I/O devices a corresponding event will be generated:
13. Finally, you can also Modify and Delete selected I/O devices via the right-click mouse menu.
Corresponding events are again generated when an I/O device is modified or deleted.
17 Rule Administration
NETAVIS Observer 4.6 introduced Rule Administration which enables the simple configuration of a
range of actions which are triggered by specific events. For example the permanent recording of
cameras can be started once an alarm system is activated or a barrier can be opened upon the
detection of certain number plates. The rule administration is an extensible system which will
continue to cover more scenarios in the future.
Note: Actions triggered by I/O devices configured in versions prior to NETAVIS Observer 4.6 via the XML
configuration file will continue to work in NETAVIS Observer 4.6. However if you want to use Rule
Administration you will need to remove the previous XML configuration, reconfigure the I/O device in
the I/O Device Administration, and then add the corresponding actions via Rule Administration
(described in this chapter).
2. To add or modify rules right-click anywhere in the window and select Modify rules.
3. To add a new rule click on the Add rule... button which opens the Rule Editor:
4. Enter a name for this rule in the Rule Name text field.
5. Next you choose and configure a Trigger type whereby NETAVIS Observer 4.6 currently supports
the following triggers:
a. Motion detected: In its basic configuration the trigger is any motion detection (option: Any
definition) on one or multiple cameras and/or camera groups which can be selected.
Additionally it is also possible to limit the trigger to only occur when any of the selected
iCAT Motion Detection definitions occurs (option: Any selected definition). When selecting
more than one camera then only the subset of identically named iCAT Motion Detection
definitions available on all cameras is shown here.
b. Connection to camera lost: You can select one or multiple cameras and/or camera
groups. The trigger is if NETAVIS Observer loses the connection to one of them.
c. Connection to camera restored: You can select one or multiple cameras and/or camera
groups. The trigger is if NETAVIS Observer restores the connection to one of them.
d. I/O device port value changed: You can select any I/O device which was previously added
via the I/O Device Administration (see 16 I/O Device Administration on page 177). Then you can choose any of its input ports or relays and define
whether the port value changing from 0->1, 1->0 or Any value change is used as the trigger.
e. Number plate detected: In its basic configuration the trigger is simply a number plate being
detected on one or multiple cameras and/or camera groups which can be selected.
Additionally it is also possible to limit the trigger to only occur when a number plate is in or is
not in one or multiple NPR lists (see 18 NPR List Management on page 184).
f. Object crossed field or tripwire: In its basic configuration the trigger is any object is
crossing field and object crosses tripwire iCAT detection (option: Any definition) on one or
multiple cameras and/or camera groups which can be selected. Additionally it is also possible
to limit the trigger to only occur when any of the selected iCAT Object is Crossing Field and
Object Crosses Tripwire events occurs (option: Any selected definition). When selecting
more than one camera then only the subset of identically named iCAT Object is Crossing Field
and Object Crosses Tripwire definitions available on all cameras is shown here.
g. Object started moving in field: In its basic configuration the trigger is any object starts
moving in field iCAT detection (option: Any definition) on one or multiple cameras and/or
camera groups which can be selected. Additionally it is also possible to limit the trigger to only
occur when any of the selected iCAT Object Starts Moving in Field events occurs (option: Any
selected definition). When selecting more than one camera then only the subset of
identically named iCAT Object Starts Moving in Field definitions available on all cameras is
shown here.
h. Object stopped in field: In its basic configuration the trigger is any object stops in field
iCAT detection (option: Any definition) on one or multiple cameras and/or camera groups
which can be selected. Additionally it is also possible to limit the trigger to only occur when
any of the selected iCAT Object Stops in Field events occurs (option: Any selected
definition). When selecting more than one camera then only the subset of identically named
iCAT Object Stops in Field definitions available on all cameras is shown here.
6. By clicking on the Add action button you can then add one or multiple actions which will be
executed when the trigger configured above occurs. NETAVIS Observer 4.6 currently supports the
following actions:
a. Enable current continuous recording scheduling: You can select for which Camera(s)
the current continuous recording scheduling will be enabled: either the camera from the
event which triggered the rule or any other camera(s).
Note: This action does not automatically start a continuous recording on the corresponding
camera(s)! Rather it simply enables a previously configured continuous recording schedule
(see 7.1 Programming archive recordings on page 75) which is scheduled to run at the current
time. Changes of the scheduling can take up to 2 seconds to be executed and no other
changes should be made during that time.
b. Disable current continuous recording scheduling: You can select for which Camera(s)
the current continuous recording scheduling will be disabled: either the camera from the
event which triggered the rule or any other camera(s).
Note: This action does not automatically stop a continuous recording on the corresponding
camera(s)! Rather it simply disables a previously configured continuous recording schedule
(see 7.1 Programming archive recordings on page 75) which is scheduled to run at the current
time. Changes of the scheduling can take up to 2 seconds to be executed and no other
changes should be made during that time.
c. Enable current iCAT scheduling: You can select for which Camera(s) the current iCAT
scheduling will be enabled: either the camera from the event which triggered the rule or
any other camera(s).
Note: This action does not automatically start a Video Analysis (iCAT) recording on the
corresponding camera(s)! Rather it simply enables previously configured Video Analysis (iCAT)
schedules (see 7.1 Programming archive recordings on page 75) which are scheduled to run at
the current time. Changes of the scheduling can take up to 2 seconds to be executed and no
other changes should be made during that time.
d. Disable current iCAT scheduling: You can select for which Camera(s) the current iCAT
scheduling will be disabled: either the camera from the event which triggered the rule or
any other camera(s).
Note: This action does not automatically stop a Video Analysis (iCAT) recording on the
corresponding camera(s)! Rather it simply disables previously configured Video Analysis (iCAT)
schedules (see 7.1 Programming archive recordings on page 75) which are scheduled to run at
the current time. Changes of the scheduling can take up to 2 seconds to be executed and no
other changes should be made during that time.
e. Show camera in large view: You can select which Camera to show in the large view: either
the camera from the event which triggered the rule or any other camera. Additionally you
need to configure the User(s) and Window where the camera will be shown. When multiple
users are selected only the subset of commonly available windows are shown here.
f. Set I/O device port value: You can select any I/O device which was previously added via the
I/O Device Administration (see 16 I/O Device Administration on page 177). Then you can choose one of its output Ports or relays and define whether the Port value should be set to 0 or 1. Optionally you can define whether to Switch the value back to the opposite value after a certain amount of time (in milliseconds).
g. Set PTZ position: You can select a previously configured PTZ position of a Camera and
define a Minimal stay time for which the camera will remain at that position.
Note: This action does not work if the camera currently follows an automatic PTZ route!
h. Play sound: You can select one of the default sounds from the list and choose for which
user(s) the sound should be played. Additionally it's also possible to upload a custom sound
by clicking on the ... button (whereby .au and .wav files are supported).
i. Send email: You can select NETAVIS Observer users as Recipient users and CC users of e-mail notifications (note that these users need to have an e-mail address entered in the User admin). Additionally, arbitrary e-mail addresses not associated with any Observer user can be added in the Recipient addresses and CC addresses fields, whereby multiple addresses have to be separated by a comma (,).
For the Subject of the corresponding e-mail it is possible to either choose Use event text as subject (e.g. Object stopped in field "Stopped person" of camera "Main Entrance") or Specify subject and enter a custom text.
Similarly, for the Body of the e-mail it is possible to either choose Use event details as message (which adds all the information also visible in the Event details dialog of the trigger event) or Specify message and enter a custom text. Additionally, the Include trigger image option can be checked so that, if the trigger event contains an image, it will be attached to the e-mail.
Note: Your NETAVIS Observer server needs to have a corresponding SMTP mail server and
networking setup configured in order to be able to send e-mail notifications (please see the
document NETAVIS Observer Server Installation and Administration and specifically the "Edit
network settings" section for further information). Also note that NETAVIS Observer has a
10MB limit for e-mail attachments. So if the Include trigger image option is checked and the
associated image is larger than 10MB then the e-mail notification will not be sent.
j. Generate custom event: Using the Edit custom events button you can enter the Custom
event editor which allows you to add fully customizable event types. You can Add, Modify or
Delete these custom events which are defined by their Event type, up to 5 Event
parameters, and a corresponding Event text. These custom events can then be generated
as an action within a rule whereby the custom event parameters can either be copied from the
triggering event or entered manually:
Note: The length of all event parameter names combined cannot exceed 200 characters!
7. You can also delete actions by clicking on the delete button next to each action.
8. Press OK to create the rule. If you forgot to specify any mandatory options then an icon will
pop up next to it.
9. Press Save to store the changes. For any change a corresponding event will also be generated:
1. Existing rules can be enabled and disabled via the Active checkbox next to each rule. They can
also be modified by double-clicking on one or selecting it and pressing the Modify rule... button.
2. Finally, you can also Duplicate and Delete selected rules via the corresponding buttons.
Note: iCAT Number Plate Recognition is a separate module which needs to be enabled with an
appropriate license key and - depending on the specific configuration - a USB hardware dongle.
The basic steps for setting up Number Plate Recognition in Observer are:
1. Configure an iCAT Number Plate Recognition definition (please see 15.3.14 Defining Number Plate
Recognition on page 164 for more details).
2. (Optional) Configure the desired NPR lists (which is covered in this section).
3. (Optional) Configure desired actions for number plates included or excluded in NPR lists (please
see Using Rule Administration for more details).
2. To add a new number plate list right-click anywhere on the window and select Add new list.
3. In the Name text field enter a name for this NPR list.
4. You can choose a specific Icon, Sound, and Highlight color for the corresponding event when it
is shown in the Event list.
5. The Ignore separator characters checkbox defines whether separator characters in the number
plate (such as spaces) should be considered when matching detected number plates against the
selected NPR list.
6. Select the Tolerance (in number of characters), whereby you can choose 0, 1 or 2 (a simplified sketch of this matching logic follows after these steps).
Selecting 0 means that a detected number plate has to exactly match a number plate in the NPR list in order to be associated with it.
Selecting a tolerance of 1 character means that the detected number plate and a plate in the selected NPR list have the same length but at most one character is different, OR the detected number plate has one character less or more than a plate in the selected NPR list.
Selecting a tolerance of 2 characters means that the detected number plate and a plate in the selected NPR list have the same length but at most two characters are different, OR the detected number plate has one character less or more and at most one character is different from a plate in the selected NPR list.
7. There are two options to add number plates to this list:
a. You can manually add number plates by clicking on the Add button at the bottom:
To add multiple number plates you can either separate them with a comma or add them in
separate lines by pressing ENTER after each number plate.
b. You can import number plates stored in text-, CSV- and Excel-files by clicking on the Import
button at the bottom and selecting the corresponding file from your hard drive.
Note: When importing number plates with non-standard characters (e.g. German umlauts)
ensure that the files to be imported are encoded with UTF-8. Please also note that for example
Microsoft Excel does not provide support for saving CSV-files with UTF-8 encoding!
8. Once added you can also filter the number plates by entering characters into the Filter textbox:
9. Press Save to store the changes. For any change a corresponding event will also be generated:
10. You can also Modify, Delete and Export (as text-, CSV- and Excel-files) selected lists of number
plates. Corresponding events are again generated when an NPR list is modified or deleted.
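The tolerance rules from step 6 can be pictured with the following Python sketch. It is only an approximation of the described matching, not the actual Observer implementation; in particular, which characters count as separators (here only spaces) is an assumption:

# Illustrative sketch of the tolerance matching described in step 6 above
# (not the actual Observer implementation).
def substitutions(a, b):
    # Number of differing characters for two strings of equal length.
    return sum(1 for x, y in zip(a, b) if x != y)

def one_char_longer_matches(shorter, longer, max_subs):
    # True if removing one character from `longer` leaves at most
    # `max_subs` differing characters compared to `shorter`.
    return any(substitutions(longer[:i] + longer[i + 1:], shorter) <= max_subs
               for i in range(len(longer)))

def plates_match(detected, listed, tolerance=0, ignore_separators=True):
    if ignore_separators:                       # assumption: spaces are separators
        detected = detected.replace(" ", "")
        listed = listed.replace(" ", "")
    if len(detected) == len(listed):
        return substitutions(detected, listed) <= tolerance
    if abs(len(detected) - len(listed)) == 1 and tolerance >= 1:
        shorter, longer = sorted((detected, listed), key=len)
        return one_char_longer_matches(shorter, longer, tolerance - 1)
    return False

print(plates_match("W123AB", "W 123 AB", tolerance=0))   # True (separators ignored)
print(plates_match("W123AB", "W123AC", tolerance=1))     # True (one character differs)
print(plates_match("W123AB", "W123ABC", tolerance=1))    # True (one character more)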
19 Automatic Export
NETAVIS Observer 4.7 introduced an Automatic Export tab for configuring the automatic export and
upload of event statistics, events including their parameters, and heat maps to NETAVIS sMart Data
Warehouse and other 3rd party systems.
2. To add or modify automatic exports right-click anywhere on the window and select Modify
exports.
3. To add a new automatic export click on the Add button.
4. Per default the Enabled checkbox is checked which means that the automatic export is activated.
5. In the Name text field enter a name for this automatic export.
6. Select the Type of the automatic export where you are presented with three options:
• Event statistics are CSV files containing aggregated statistical values, equivalent to what
can be manually obtained with the event statistics export menu (e.g. 3 motion detections
occurred in the past hour).
• Events are CSV, HTML, JSON or XLS files containing the full event details, equivalent to
what can be manually obtained by exporting the result of an event search (e.g. vehicle with
number plate W-123AB was detected at 11:03 on July 25)
• Heatmaps are CSV files containing the relative heatmaps data (e.g. of stopped object
time).
7. For event statistics and events the default Export period which defines how often the automatic
export is uploaded is 1 day, other options are: 15 minutes, 1 hour, 1 week, and 1 month. For heat
maps the default value is 1 hour, the alternative is 1 day, and the period has to be configured via
the "Heat map export period" option in the Host Admin.
8. Destination(s) are the servers such as NETAVIS sMart Data Warehouse and other 3rd party
systems to which the automatically generated export files are uploaded. To add a new destination
click on Edit destination(s) which opens the Upload destination editor:
10. The Export delay (min) defines how many minutes an export is delayed which can be useful in
systems with unreliable connections between NCS and NUS servers.
11. The Retention time (days) defines how long the automatically generated export files are stored
on the NETAVIS Observer server. The default value is 7 days and it can be changed by entering a
different value in the text box.
12. The Export format version defines the naming schema of the exported files (a small sketch of parsing such file names follows after these steps):
• Event statistics:
o v1: Original naming schema (e.g. event-
statistic_v1_h2663615583332447065_20180313T180000+0100_20180313T181459+0100.
csv)
• Events:
o v1: Original naming schema (e.g.
event_v1_h2663615583332447065_20180313T181500+0100_20180313T182959+0100.cs
v)
• Heatmap:
o v1: Original naming schema (e.g. heatmap_v1_h2663615583332447065_cam3-
1_oc_20180313T181500+0100_20180313T182959+0100.csv)
o v2: Additional naming schema which also contains the name of the corresponding
iCAT definition (e.g. heatmap_v2_h2663615583332447065_cam3-
1_zone1_oc_20180313T181500+0100_20180313T182959+0100.csv)
Note: sMart Data Warehouse up to 3.1 only supports heatmap naming v1!
16. Users with the "Enable download of exported files" permission (see 5.2 Setting general user
privileges on page 48 for details) can download these automatically created files from the server's web page by clicking on the Download exported files link:
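For post-processing the uploaded files it can be handy to split the file names into their components. The following Python sketch parses a heatmap v2 file name based purely on the example naming schema shown in step 12 above; the meaning of the individual components is an assumption, and camera or definition names containing underscores would break this simple split:

# Rough sketch of splitting a heatmap v2 export file name into its parts,
# based on the example naming schema shown above (not an official format
# specification).
def parse_heatmap_v2(filename):
    stem = filename.rsplit(".", 1)[0]          # drop the ".csv" extension
    export_type, version, host, camera, definition, map_type, start, end = \
        stem.split("_")
    return {"type": export_type, "version": version, "host": host,
            "camera": camera, "definition": definition,
            "heatmap_type": map_type, "start": start, "end": end}

name = ("heatmap_v2_h2663615583332447065_cam3-1_zone1_oc_"
        "20180313T181500+0100_20180313T182959+0100.csv")
print(parse_heatmap_v2(name)["definition"])    # prints: zone1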
Note: The configuration of counting lines and zones needs to be done on the external device's web
interface!
2. To add a new external device right-click anywhere in the window and select Add new device.
3. In the Name text field enter a name for this external device.
4. In the Type pop-up menu choose the type of external device you want to add to the system.
Hint: Device-type specific Configuration hints are shown to the right of the text boxes.
Hint: For Hella 3D devices the username is "user-role-edit", the default password is "admin", and
the default port is: 8091. The password and (REST) port can be changed on the sensor's web
interface.
8. Optionally it's also possible to configure thresholds for zone level alerts and dwell time alerts.
• Select the zone level alerts checkbox to enable the corresponding zone alert events when
the level is above or below a certain threshold.
• Select the dwell time alerts checkbox to enable the corresponding zone alert events when the dwell time is above or below a certain threshold (in seconds).
Note: For Hella 3D devices the sensor's "max. dwell time" value is used for the dwell time
alerts!
Hint: In zone monitoring and queue length detection applications these alerts can be used as a
trigger for speaker announcements, informing staff to open an additional counter, and many other
processes.
9. When you press Save Observer will try to connect to the external device with the given
configuration and a corresponding "External device was added" event is generated.
If the connection is not successful or lost at any time a "Connection lost to external device" event
is generated:
10. Once the connection to an external device has been established the system will start generating
regular counting and zone events with the latest data from the sensor as well as corresponding
alert events when the configured thresholds are crossed. Zone events will only be generated when
the level and dwell time are not 0:
11. Finally, you can also Modify, Delete, and Clone selected external devices via the right-click
mouse menu. Corresponding events are again generated when an external device is modified or
deleted.
The video wall control application itself runs in a web browser on a PC that serves as the video wall
control center. It connects to one of the Observer servers in the network that then in turn controls the
Observer video wall clients that directly connect to the monitors of the video wall.
Any of the Observer servers in the network can in principle be used for that purpose. However, all cameras that you want to display on the video wall must be accessible by this server (i.e. must be configured at this server or mounted from other servers).
It is also possible to use more than one server for providing the streams for the video wall (this depends on how you set up your server network topology). In such a case the VWCA and also the video wall
clients (that directly connect to the monitors) must connect to these servers. For simplicity and easier
setup and management of the video wall, we suggest using one Observer server for driving the video
wall.
Each video wall client workstation/PC can drive several monitors of the video wall (how many depends
on your hardware setup, i.e. graphics card). Each such video wall monitor has to map to one window
in the Observer client.
You will need a few users to manage video wall functions appropriately. Generally these users must
have access rights to all cameras that are to be displayed on the video wall:
• 1 video wall management user (with which you log in to the VWCA).
• For each video wall client workstation/PC you need 1 user.
You can create a user group that has privileges and access rights to all relevant cameras and then
create the above users belonging to this group (automatically inheriting the privileges and rights).
Please note these special issues:
• A camera cannot be displayed on a video wall monitor if it is not available on the server that the video wall client connects to. Depending on your setup, it is therefore possible that you cannot display every camera on every video wall monitor.
• The video wall control application itself does not display live video streams of the cameras but
snapshot images for better orientation instead. The real video streams appear in the video wall
monitors.
6. In the Observer application, position each of the created windows on the corresponding video wall monitor and put it in full screen mode. You can do that by dragging the window with the mouse and then maximizing it.
7. When you are done setting up all the video wall client PCs and windows (monitors) you can
continue to configure and use the video wall control application (see 21.2 Controlling the video
wall with the control application on page 194).
Supported browsers:
Please note: VWCA stores its configuration information (like monitor settings, linked servers, etc.) on the server from which it was started. If you want to control more than one Observer server with your VWCA, you always have to start it from the same server.
1. Select video wall matrix by clicking on the default tab or the background area between monitors.
You will see the Matrix settings in the Properties pane.
2. Define the number of rows and columns of your video wall (this must obviously match your
physical setup).
3. Select the appropriate aspect ratio of the monitors
4. The zoom factor defines how many monitor columns of your video wall you want to see in your
VWCA without scrolling.
Additionally, you can also define some colors.
Before you can operate the video wall with VWCA you have to define the linking (mapping) between the physical monitors of the video wall and the monitors inside VWCA. The linking is done by selecting the Observer server that serves the video wall client workstations (which actually connect to the monitors) and by choosing the appropriate Observer user login for which the windows for the monitors have been defined.
Here are the steps for linking VWCA to the video wall servers:
1. Select one of the monitors by clicking on it. You will see Server link in the Properties pane.
2. Enter the server address, Login name and Password of the Observer server that serves the
video wall control workstations/PCs (for a general description on which servers and users to
choose please see 21.1 Setting up a video wall with Observer on page 192). Please note that you
have to select the correct user login for the monitor: You have to take the user login of the video
wall client workstation/PC where the monitor is connected to.
3. Push Validate. If you entered the correct server and login combination, you will be offered the available windows of that user to choose from.
4. Choose the window that is associated with the monitor (please remember that each monitor has
an associated Observer window; for a basic explanation please refer to 21.1 Setting up a video wall
with Observer on page 192). You will notice that a view of the window has been selected by default.
5. Repeat the above steps for all monitors of the video wall.
21.2.3 Operating the video wall with the video wall control application (VWCA)
You have several possibilities for controlling the contents of the video wall monitors. When you select a monitor by clicking on the monitor title bar inside VWCA you can:
Select a view from the View list. Please note that each window in Observer can have several views.
Drag/drop a camera from the camera tree to a view port in a monitor. Please note that you can only drag a camera to the view port of a monitor that has the appropriate server link. Invalid camera drops are rejected and indicated with a red cross.
Push the Back and Next buttons at the monitor title bars to move in history. You can also specifically
select a point in the history by selecting it from the History tab in the Properties pane.
Please note that the status bar shows the camera name when you move the mouse pointer over
monitor view ports.
22 Special functions
This chapter describes some special functions of Observer.
To see how to integrate Video4Web into your web pages please refer to the examples and full
documentation available on the Observer server's web page (http://<your-server-IP>) under
Documentation - Video4Web.
22.3 Controlling Observer with HTTP commands from external sources (URL
control)
URL control is one way to enable third-party applications to start actions via URL-encoded strings (sent as HTTP GET requests to an Observer server). Upon receiving these special URL requests, the server executes the actions as if they had been generated internally.
For testing purposes, you can execute URL control by entering an HTTP command in a standard web
browser.
http://<your-server>/arms/servlet/BrowserServlet?cmd=clientcontrol&...
be shown there, or 2) if the view can not handle any more view-ports, a new view will be
created where all listed cameras are then placed.
• box.x0= upper left x coordinate of box (valid between 0 and 1000). The values are in 1/10th
percentages of the displayed image. Box parameters should be supplied only when the action is
draw_bounding_box.
• box.y0= upper left y coordinate of box (valid between 0 and 1000)
• box.x1= lower right x coordinate of box (valid between 0 and 1000)
• box.y1= lower right y coordinate of box (valid between 0 and 1000)
• box.linewidth= line width used when drawing the box
• box.color= color of the box (possible values: black, blue, cyan, darkgray, gray, green, lightgray,
magenta, orange, pink, red, white, yellow)
• box.text= text which is written into the box
• box.timeout= seconds after which the box disappears automatically (0 means the box has to be clicked to disappear)
Please note: URL control is only enabled for allowed computers whose IP addresses are known to
the server (see 11.2 Setting Observer server parameters on page 114 for details). All other requests
are blocked.
Please refer to the Release Notes for further details or updates on URL control.
Show the live stream of camera ID 12 as a large view in the Online Monitor of the "admin" user:
http://192.168.7.2/arms/servlet/BrowserServlet?cmd=clientcontrol&selector.user=admin
&selector.tool=online_monitor&action.action=show_live_stream
&action.mode=show_as_large&action.cameraid=12
Create an action which draws a red box (for three seconds) onto the frame of camera ID 12 in the Online Monitor of the "admin" user:
http://192.168.7.2/arms/servlet/BrowserServlet?cmd=clientcontrol&selector.user=admin
&selector.tool=online_monitor
&action.action=draw_bounding_box&box.x0=100&box.y0=100&box.x1=500&box.y1=500
&box.linewidth=2&box.color=red&box.text=MD&box.timeout=3&action.cameraid=12
Each server offers a simple test page for URL control at the following address:
http://<your-server>/URLtest.jsp.
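For third-party integrations you would typically send these requests from a script rather than a browser. The following Python sketch wraps the documented clientcontrol GET interface using the requests library; the server address, user, and camera ID are placeholders, and remember that the calling machine's IP address must be allowed on the server:

# Minimal sketch of sending a URL control command from a third-party script.
import requests

def show_camera_large(server, user, camera_id):
    params = {
        "cmd": "clientcontrol",
        "selector.user": user,
        "selector.tool": "online_monitor",
        "action.action": "show_live_stream",
        "action.mode": "show_as_large",
        "action.cameraid": camera_id,
    }
    url = "http://{}/arms/servlet/BrowserServlet".format(server)
    response = requests.get(url, params=params, timeout=5)
    response.raise_for_status()     # raise an error for HTTP failure codes

show_camera_large("192.168.7.2", "admin", 12)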
Now assume that in the Observer Online monitor you want to show adjacent cameras of this virtual matrix in a 3x3 view. Whenever you double-click any of the 50 cameras in any of the views in Observer, this camera is positioned as the center of the 3x3 view and the adjacent cameras of the big matrix are automatically positioned around it. In the example above you see how the 3x3 view would look if you double-clicked Cam24 and Cam18.
The Matrix view function is defined by XML files stored on the server. The names of the XML files follow
the form server.utils.CameraMatrixMapping.<action>.xml. You start by downloading and
editing the file server.utils.CameraMatrixMapping.sample.xml. When you are done editing, you
upload the file under a specific name to the server.
In addition to the above you can also control on which Observer clients the Matrix view function
should be triggered. You can do that by downloading and editing the file
server.utils.TargetActionMapping.sample.xml.
Note: Automatic exports configured in versions prior to NETAVIS Observer 4.7 via the XML
configuration file (as described below) will continue to work in Observer 4.7. However, if you also want
to automatically upload these exports, you will need to remove the previous XML configuration
and re-configure the setup in the Automatic Export tab (see 19 Automatic Export on page 186 for
details). From NETAVIS Observer 4.7 onwards, the Automatic Export tab is the recommended way to
automatically export and upload event statistics, events including their parameters, and heat maps to
NETAVIS sMart Data Warehouse and other 3rd party systems. The description below is only included
for legacy reasons.
Observer allows you to automatically export event statistics data as a CSV file (comma-separated text
file) that can easily be read by programs like MS Excel. The details of the statistics, such as resolution,
duration, and filters for event types and cameras, can be flexibly defined.
The exported file is stored in the file system of the server. This means you must have file system
access to the server in order to obtain the file. You can do that, for example, via FTP with the admin
user name and password, or you can mount a network drive (via Samba or NFS) and place the file there.
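As an illustration of how such an export might be processed further, the following Python sketch
aggregates event counts from an exported CSV file. The file name and the column headers used here
("camera", "event_type", "count") are hypothetical; the actual layout depends on your export
configuration.

import csv

# Illustrative sketch: read an exported statistics CSV, e.g. fetched from the
# server via FTP or a mounted network share. File name and column names are hypothetical.
with open("observer_event_statistics.csv", newline="", encoding="utf-8") as f:
    reader = csv.DictReader(f)
    totals = {}
    for row in reader:
        key = (row.get("camera", ""), row.get("event_type", ""))
        totals[key] = totals.get(key, 0) + int(row.get("count", 0) or 0)

# Print the aggregated event counts per camera and event type
for (camera, event_type), count in sorted(totals.items()):
    print(f"{camera} / {event_type}: {count}")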
The statistics export function is defined by an XML file stored on the server. You must download, edit,
and then upload the file in order to configure and activate the function.
Note: I/O devices (and their corresponding actions) configured in versions prior to NETAVIS Observer
4.6 via the XML configuration file (as described below) will continue to work in Observer 4.6. However, if
you want to use I/O devices with new features introduced in Observer 4.6 (e.g. Rule Administration or
iCAT Number Plate Recognition), you will need to remove the previous XML configuration and re-add
the I/O device in the I/O Device Administration (see 16 I/O Device Administration on page 177 for
details). From NETAVIS Observer 4.6 onwards, using I/O Device Administration and Rule Administration
is the recommended way to work with I/O devices; the description below is only included for
legacy reasons.
NETAVIS Observer allows you to work with I/O contacts of cameras and special I/O devices.
The I/O device configuration is done via XML files stored on the server. You must download, edit, and
then upload the files in order to configure and activate the function.
Hint: For more details on configuring I/O devices please refer to the I/O Contacts White Paper available
in the documentation section of our website.
Note: Joystick support is only available in the locally installed Observer client on Microsoft Windows
platforms!
22.8.1 Installation
Follow these steps to enable AXIS T8310 Control Board support on a client:
• Connect the AXIS T8310 Control Board to the system and wait for Windows to detect and install
it
• Install the latest Observer client via the corresponding option from the Observer Start page
• Restart the client computer to complete the installation process
22.8.2 Use
The following two figures show the controls and buttons of the AXIS T8312 keypad and AXIS T8313 jog
dial. For simplicity we have numbered the buttons for later reference.
AXIS T8312 keypad:
The functions assigned to the different buttons of the keypad depend on whether the Online
Monitor or the Archive is in focus. To show the current focus, press the Alt (13) + View (3) button
combination. The focus between windows and between different tabs within windows can be
moved forward with the Tab (1) button and backward with the Alt (13) + Tab (1) button combination.
Below you will find a list of the button assignments in the Online Monitor and Archive:
• Online Monitor:
AXIS T8312 keypad
1 - Tab: Move focus forward, or combined with Alt (13) backward
2 - Camera: After entering numbers and pressing this button, the camera whose name ends with
an underscore (_) character followed by the entered number (e.g. Camera_212) is activated. If
no such camera exists, an "Invalid camera: XXX" message is shown (the naming convention is
illustrated in the sketch after the examples below)
3 - View: After entering numbers and pressing this button, the view whose name starts with the
entered number followed by an underscore (_) character (e.g. 123_MainView) is shown. If
no such view exists, an "Invalid view: XXX" message is shown
4 - Activate view: Activates the first view which contains the previously assigned camera
5 - Activate large view: The previously assigned camera is shown in a large view
6 - Show previous view
7 - Show next view
8 - Not used
9 - Activate PTZ: If the last assigned camera has PTZ capabilities, this button activates the
PTZ function and the AXIS T8311 joystick will be assigned to this camera
10 - Administration: Displays the administration view of the previously assigned camera
11 - Archive: Displays the archive view of the previously assigned camera
12 - Numbers: For entering digits in camera names or view names
AXIS T8313 jog dial
1-6: Not used
7 - Previous/Next view: Changes to the previous or next view
8 - Select view: The names of defined views appear on the screen upon turning the jog wheel.
To change to the given view press the View (3) button within 3 seconds.
• Archive player:
AXIS T8312 keypad
Not used
AXIS T8313 jog dial
1 - Not used
2 - Marker button: Valid only when followed by pressing the Left (1) or Right (6) buttons. When
you press the Left (1) button then the start (left) marker of the interval is positioned under the
current position of the playback marker. Pressing the Right (6) button will modify the end time
stamp. When the Play (4) button is pressed the video sequence for the new time interval will be
downloaded and played back.
4 - Play/Stop
3,5 - Jump to the Beginning or End of the selected time interval
7 - Not used
8 - Move playback marker back and forth in the sample line
• Examples:
00123 (2) - Select the camera whose name ends with "_00123"
11 (3) - Select the view whose name starts with "11_"
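To illustrate the naming convention used by the Camera (2) and View (3) buttons, the following sketch
(not part of the product) expresses the matching rules as regular expressions; the camera and view
names used here are made up.

import re

# Hypothetical name lists for illustration only
cameras = ["Entrance_Camera_00123", "Lobby_Cam_7", "Parking_212"]
views = ["11_Overview", "123_MainView", "Backyard"]

def find_camera(entered: str):
    # Camera button (2): the name must END with "_<entered number>"
    pattern = re.compile(r"_" + re.escape(entered) + r"$")
    return next((c for c in cameras if pattern.search(c)), None)

def find_view(entered: str):
    # View button (3): the name must START with "<entered number>_"
    pattern = re.compile(r"^" + re.escape(entered) + r"_")
    return next((v for v in views if pattern.search(v)), None)

print(find_camera("00123"))  # -> "Entrance_Camera_00123"
print(find_view("11"))       # -> "11_Overview"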
malfunctions can be detected and addressed even faster. Also, events such as iCAT detections can be
forwarded accordingly.
Note: SNMP Support is a separate module which needs to be enabled with an appropriate license key.
22.9.1 Configuration
1. At the server's web page click on Customizer login to log into the Customizer area with the
administration user admin.
2. After login click on Download configuration files.
3. Download the file server.site.snmp.SNMPMappings.sample.xml.
Note: Here you can also download the file NETAVIS-MIB.txt which contains the
NETAVIS MIB (Management Information Base).
22.9.2 Activation
Once the configuration file has been adapted, rename it to
server.site.snmp.SNMPMappings.default.xml and upload it via the Customizer login. Afterwards the
Observer server or at least the NETAVIS services have to be restarted for the SNMP agent to be
activated.
To receive the SNMP data sent by Observer a so-called SNMP management station software is needed.
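Any SNMP manager can be used, for example Net-SNMP's snmptrapd or a commercial network
management system. Purely as an illustration, the following Python sketch receives SNMP notifications
with the pysnmp library (v4-style API); the listening address, port 162, SNMPv2c and the community
string "public" are assumptions that must match your SNMP mapping configuration and network setup.

from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

snmpEngine = engine.SnmpEngine()

# Listen for incoming notifications on UDP port 162 (a privileged port may require root rights)
config.addTransport(
    snmpEngine,
    udp.domainName + (1,),
    udp.UdpTransport().openServerMode(('0.0.0.0', 162))
)

# SNMPv1/v2c community string - must match what the SNMP agent is configured to use
config.addV1System(snmpEngine, 'my-area', 'public')

def trap_callback(snmpEngine, stateReference, contextEngineId, contextName, varBinds, cbCtx):
    # Print every variable binding of the received notification
    print('Received notification:')
    for oid, value in varBinds:
        print('  %s = %s' % (oid.prettyPrint(), value.prettyPrint()))

ntfrcv.NotificationReceiver(snmpEngine, trap_callback)

snmpEngine.transportDispatcher.jobStarted(1)
try:
    snmpEngine.transportDispatcher.runDispatcher()
finally:
    snmpEngine.transportDispatcher.closeDispatcher()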
23 Index
A
Audio
  playback in archive 83
  settings 36
  working with cams 67
AVI
  exporting from archive 85
B
Bandwidth
  limiting overall outgoing bandwidth of server 118
  support for low bandwidth via Transcoding 29
C
Client
  installation directory for client components 19
  introduction 11
  languages 13
  locally installed 16
  multi-window/multi-screen operation 20
  overview of components 24
  preferences 22, 24
  starting 16
  support for low-bandwidth connections 29
  web browser 12
Clone camera 41
Contrast 43
E
Email
  receiving on events and alarms 103
Encryption
  AES encryption of video recordings 118
  general description 7
  HTTPS for camera connections 35
  HTTPS for client connections 12, 16
  HTTPS for server-server connections 123
Event triggers (iCAT) 148, 154
Events 97
  acknowledging 98
  automatic export of statistics data 202
  details 98
  Event list 97
F
Floating window components 21
Four-eyes principle 46
Four-eyes-principle 48
Frame rate
  changing in view port 60
  maximizing 60
G
Getting started 31
Google Chrome
  optimizing settings for browser client Web Start (JNLP files) 15
I
I/O devices
  configuring 203
iCAT 146
  considerations for setting up 148
  CPU load 149
  dynamic privacy mask 166
  event triggers 148, 154
  event-based recording 168
  events 176
  Heat maps 173
  object bounding boxes 173
  object counting 154
  object markers 172
  object tracking region 148, 151
  people counting 154
  privacy mask 161
  recording based on 168
  sabotage detection 157
  SAFE export 86
L
Languages 13
Layout navigation 132
  editing mode 133
  I/O contacts 135
  installation 132
  navigation and operation 142
  zones 136
Layout of windows
  modifying 21
LDAP 55, 127
License
  displaying current license 113
  license string 12
Login
  secondary password 46
S
SMS
  receiving on events and alarms 103
T
Time zone of server 115
Time zooming in archive 82
Transcoding for low-bandwidth connections 29
  setting up 117
Tripwire for object counting (iCAT) 154
U
Upgrading
  remote servers 126
URL control
  allowed IP addresses 117
  control from external applications 198
User server (NUS) 121
Users
  Active Directory/LDAP 55, 127
  adding 46
  camera access rights 52
  copying view between 65
  Four-eyes-principle 48
  groups 54
  info about logged in 54
V
View ports
  creating 56
  define crop view 62
  dynamic event-based control 68
  quality settings 60
  zooming camera views 62, 81
Views
  copying between users 65
  dynamic event-based control 68
  Online monitor 56
  optimizing big views after double click 59
  round tours 65
  settings 59
VIP control (matrix view function) 199
VWCA (video wall control application) 194
W
Web pages
  embedding live video streams 197
Windows
  create a separate Event list window 21
  deleting 22
  modifying layouts 21