DMS-100 Family
DMS-100 International
Technical Specification
Information is subject to change without notice. Northern Telecom reserves the right to make changes in design or components
as progress in engineering and manufacturing may warrant.
DMS, DMS SuperNode, MAP, and NT are trademarks of Northern Telecom.
Publication history
April 1997
BCS40.1i and up Standard 02.03
• Minor editorial corrections.
April 1997
BCS40.1i and up Standard 02.02
• Updated to include additional information on ESA.
October 1995
BCS40i and up Preliminary 02.01
• Updated to include information on third universal tone receiver (UTR)
card in international digital trunk controller (IDTC).
October 1994
BCS37i and up Preliminary 01.02
• Revised to include technical changes from internal review of the draft document and to add functionalities available with BCS37i.
Contents
About this document iii
When to use this document iii
How to identify the software in your office iii
Where to find information iv
Features 4–1
Features common to residence and business 4–1
Ringing 4–29
Individual line ringing 4–29
Audible ringing tone 4–29
IXPM features 4–29
IXPM warm SWACT 4–29
IXPM performance monitoring 4–30
Interoffice features 4–31
Interoffice address signaling 4–31
Call state supervisory signaling 4–31
Intraoffice connecting arrangements 4–31
International killer trunks 4–31
IRLCM features 4–32
Intraswitching on the IRLCM 4–32
IRLCM intraswitching for subscriber features 4–34
ESA for IRLCM – basic 4–35
IRLCM ESA entry and exit 4–37
Downloadable tones 4–40
XPM static data audit on downloadable tones 4–40
Second dial tone over trunks 4–41
Call processing features 4–42
Overload control and protection of essential services 4–42
Call processing 4–42
System maintenance features 4–43
Hardware redundancy 4–43
Trouble detection 4–43
Audit programs 4–43
Call processing data base trouble detection 4–44
Periodic automatic tests 4–44
Service recovery and protection measures 4–44
Trouble verification 4–47
Recovery from faulty equipment 4–48
User interfaces 4–49
Maintenance and administration position (MAP) 4–49
System maintenance input/output interface 4–50
Trunk, line and service circuit test features 4–51
Trunk maintenance 4–51
Transmission measurements 4–52
Local office test lines 4–52
Administrative features 4–53
Data base management-memory alteration 4–53
Data base integrity and security 4–54
Monitoring of recent change area 4–54
Teletypewriter input/output 4–54
Automatic traffic measurements 4–55
Measurement applications 4–55
Service measurements 4–56
Network administration center I/O channel 4–56
Data verification capabilities 4–56
Network management (NM) 4–57
Network management surveillance data 4–57
Miscellaneous features 4–58
Administration 7–1
Data recording 7–1
Magnetic tape 7–3
Disk 7–10
OFZ2 7–54
OTS 7–54
PCMCARR 7–58
PM 7–60
PMTYP 7–62
PM1 7–64
PM2 7–64
RADR 7–65
RCVR 7–65
SOTS 7–65
SPC 7–67
STN 7–68
SVCT 7–68
TFCANA 7–68
TM 7–69
TONES 7–69
TRK 7–69
TRMTCM 7–71
TRMTCU 7–72
TRMTCU2 7–72
TRMTER 7–72
TRMTFR 7–73
TROUBLEQ 7–73
TS 7–74
UTR 7–74
OMs For International subscriber features 7–75
Network management (NWM) 7–77
Network management (VDU) 7–77
Status board lamp display 7–78
Network management controls 7–78
Network management displays 7–85
Administration of manual controls 7–90
Database management 7–96
Memory alteration 7–96
Teletypewriter (TTY) input/output 7–98
Automatic traffic and engineering measurements 7–98
Memory verification 7–98
Routing of output messages 7–98
Trunks out-of-service for data changes 7–99
Database facilities and structures 7–99
Table editor (TE) 7–99
Dump/restore 7–111
Pending order file (POF) 7–111
Journal file (JF) 7–112
Service analysis (SA) (observing) 7–114
DMS-100 and DMS-200 service analysis 7–114
Call progress data – automatically detected 7–114
Call progress data – analyst detected 7–115
Service analysis increased sample rate 7–115
DMS-300 service analysis 7–115
Multi-unit message rate services (MUMR) 7–116
Billing 8–1
DMS switch billing features 8–1
ITOPS billing 8–1
International Call Recording 8–1
Inter Administration Accounting 8–1
International Centralized Automatic Message Accounting 8–1
DMS-100 meter billing 8–1
Meter billing system description 8–2
Software meters 8–2
Determining call charges 8–3
Tariffs 8–4
Determining tariff rates 8–5
The time-of-day system 8–7
Meter pulse tandeming 8–7
Line and trunk metering 8–9
Metering for subscriber features 8–10
Meter billing files 8–11
DIRP interface 8–11
Billing file content 8–12
Billing records 8–12
Viewing the billing file 8–14
Meter billing file format 8–15
Accessing the meter billing system 8–18
Line data changes 8–18
Trunk group data changes 8–19
Meter audits 8–19
THQ audit 8–19
Billing recovery process 8–20
Meter backup utility 8–20
System restarts 8–21
Logs 8–21
Control of log messages 8–21
LOGUTIL 8–21
DMS-100 billing logs 8–21
Priority logs 8–25
Log thresholds and log suppression 8–25
Where to find log information 8–25
Operational measurements 8–26
DMS-100 billing OM groups 8–26
DMS-100i billing priority OM registers 8–26
OM thresholding 8–28
Switch Performance Monitoring System 8–28
DMS-100 billing-specific SPMS index 8–29
Where to find OM information 8–29
Alarms 8–29
Commands 8–30
Menu commands 8–31
Non-menu commands 8–42
ITOPS 9–1
ITOPS description 9–1
ITOPS services and capabilities 9–2
ITOPS rating system 9–3
ITOPS billing plan 9–5
Delay call database feature 9–7
Automatic number identification 9–7
Flexible subscriber number formatting 9–8
Toll break-in 9–8
Speech path splitting 9–8
Maintenance 10–1
Maintenance and administration position 10–2
MAP provisioning 10–2
MAP components 10–3
MAP interface to the DMS-100 Family system 10–6
MAP access and security features 10–6
Automatic dial-back 10–7
Command screening 10–7
Password control 10–7
Access control 10–7
Audit trail 10–8
Automatic logout of dial-up lines 10–8
System maintenance using the MAP 10–8
MTC level system status information 10–9
MTC level MAP commands 10–11
MTC sublevels 10–11
Alarms 10–12
Isolation of faults 10–13
Log report system 10–13
DMS-100 Family system maintenance 10–14
Transmission 11–1
Transmission level 11–1
Analog connection transmission specifications 11–1
Measurement points 11–1
Transmission specifications 11–1
Digital milliwatt (digital test sequence) 11–2
Equipment transmission levels 11–2
Characteristics of subscriber line interfaces (NT6X93, NT6X94) 11–3
VF parameters for subscriber line-to-line connections 11–5
Digital connection transmission specifications 11–9
International digital trunk controller (IDTC) 11–9
Digital trunk to digital trunk echo path delay 11–11
Transmission pads 11–11
Clock synchronization 11–12
NT40 clocking 11–13
DMS SuperNode clocking 11–17
Slip rate 11–18
Frequency capture width 11–19
Error rate 11–19
Compression law 11–19
CODEC transfer characteristics 11–20
Decision levels 11–20
Equipment 12–1
Physical 12–1
Equipment frames 12–8
Equipment frame dimensions 12–8
Equipment frame lineups 12–9
Equipment frame loading and support 12–9
Equipment frame earthquake resistance 12–10
Equipment frame floor plans 12–12
Host office floor plans 12–12
Documentation 14–1
Documentation media 14–1
Documentation ordering 14–1
Documentation catalogs 14–1
Documentation structure 14–1
Modular documentation system 14–2
Characteristics of MDS 14–2
MDS document identifiers 14–4
Northern Telecom publications 14–5
NTP index 14–6
Job specific documentation 14–7
Document index 14–7
Office-inventory record 14–7
Office feature record 14–7
Central office job specifications 14–7
Central office job drawings 14–7
Common systems drawings (as required) 14–8
Non-proprietary hardware documentation 14–8
Module structure (MS) 14–8
Assembly drawings (AD) 14–8
Interconnect schematics (IS) or functional schematics (FS) 14–8
Cabling assignments (CA) 14–8
Systems documentation 14–9
Northern Telecom publications 14–9
Peripheral module software release document 14–10
BCS preparation guide 14–10
Optional documentation 14–10
Feature description manual (FDM) 14–10
Peripheral module software release document 14–10
List of figures
Figure 1–1 DMS distributed architecture 1–2
Figure 1–2 DPCC cabinet 1–4
Figure 1–3 Upgrading the NT40 platform to DMS SuperNode 1–6
Figure 1–4 Main components of the DMS-Core 1–7
Figure 1–5 DMS-bus components 1–14
Figure 1–6 Inter-MS links configuration 1–16
Figure 1–7 Memory modules, pages and segments 1–20
Figure 1–8 Layout of a DSNE frame 1–21
Figure 1–9 Junctored network port configuration 1–23
Figure 1–10 Duplicated network module controller 1–24
Figure 1–11 Intra-network connection (JNET) 1–26
Figure 1–12 Inter-network connection (JNET) 1–27
Figure 1–13 64K single-cabinet ENET layout 1–29
Figure 1–14 128K dual-cabinet ENET layout 1–30
Figure 1–15 Example of an ENET connected to a Series II PM 1–32
Figure 1–16 Input/output device configuration 1–36
Figure 1–17 Input/output equipment frame layout 1–37
Figure 1–18 ILCM shelf organization 1–41
Figure 1–19 ILCM frame packaging 1–42
Figure 1–20 Duplication within the ILCM 1–43
Figure 1–21 Network – ILGC – ILCM Connections 1–46
Figure 1–22 Organization within the ILGC 1–48
Figure 1–23 CM and MS signaling paths 1–54
Figure 1–24 LPP physical configuration 1–57
Figure 1–25 Organization of a two-slot LIU7 1–59
Figure 1–26 Non-channelized access link configuration 1–60
Figure 1–27 Channelized access external interface configuration 1–61
Figure 1–28 CCS7 channelized access system overview 1–62
Figure 1–29 Two-slot link interface shelf with an NIU 1–63
List of tables
Table 1–1 SuperNode processor variations 1–10
Table 3–1 Real-time allocations/Grades-of-service 3–3
Table 3–2 Capacity increases – DMS SuperNode processors 3–3
Table 3–3 Real-time allocation of processor classes 3–4
Table 3–4 Processor engineering factors 3–9
Table 3–5 DMS SuperNode AWT factor percents 3–10
Table 3–6 NT40 AWT factor percents 3–10
Table 3–7 Japan call timings 3–13
Table 3–8 Turkey, Belize call timings 3–14
System architecture
This chapter describes the functional architecture of the DMS-100
International switching system. Provisioning information provided in this
chapter is for explanatory purposes only, to describe the relationships of the
various subsystems and components. For more detailed information on
hardware and software functionalities and provisioning rules, consult the
following Northern Telecom publications (NTP):
• DMS-100 Family Provisioning Manual, 297-1001-450
• Hardware Description Manual, 297-1001-805
• DMS-100 International Feature Description Manual, 291-1001-801i
[Figure 1–1: DMS distributed architecture, showing the central processing subsystem, central messaging subsystem, input/output devices, and peripheral modules serving subscriber lines and trunks]
[Figure 1–2: DPCC cabinet, housing the frame supervisory panel, the DMS-bus message switches MS 0 and MS 1, the DMS-core computing module shelf (CPU 0, CPU 1) with SLM 0 and SLM 1, and the core cooling unit]
[Figure 1–3: Upgrading the NT40 platform to DMS SuperNode, mapping the NT40 central message controller, switching network, and I/O controller to the DMS SuperNode DMS-core, DMS-bus, switching network, and IOC]
The DMS-Core is also configured with two system load modules (SLMs) for
storage of software loads, office images and PM loads. Each SLM consists
of one cartridge tape drive and one disk drive unit (DDU).
Computing module
The fully duplicated and synchronized CM is based on a 32-bit
microprocessor with a built-in instruction cache facility and an on-board
high-speed data cache.
The CM has the following features:
• 32-bit CPU based on a Motorola microprocessor
• integrated program and data store with single-bit error correction
• high-speed duplicated message controllers (MC)
• distributed control reset system
• direct access to SLM disk or tape
• both planes on the same CM shelf (duplex shelf packaging)
• integrated inventory management for online identification of product
type and vintage for individual cards and paddle boards
Functional subsystems of the computing module
The CM is a duplicated, synchronized processor with up to 240 Mbyte of
memory for each plane. A single shelf holds both planes.
The CM shelf is equipped with cards on the front of the shelf and
corresponding paddle boards on the rear. The cards share a common bus
with the paddle boards. The CM contains the following functional
subsystems:
• processor (NT9X10, NT9X13)
• memory (NT9X14)
• reset control (NT9X26)
• clock (NT9X22)
• bus termination (NT9X21)
• bus extension (NT9X27)
• power supply (NT9X30, NT9X31)
• interfaces (NT9X12, NT9X20)
[Table 1–1: SuperNode processor variations, listing card, PEC, processor series, and POTS BHCA]
Clock (NT9X22)
Each CM contains a subsystem clock that provides link synchronization to
the MS. The accuracy of the clock is determined by the office clock located
in the MS. The basic time reference is obtained through the serial links.
Bus termination (NT9X21)
The bus termination consists of the CM bus terminator paddle board. It
provides resistive termination for the system bus in both the CM processor
shelf and the SLM shelf. In addition, it provides the circuitry required for
buffering the CM activity signal and extracting component identification
from the power converters.
Bus extension (NT9X27)
The CM bus extender paddle board (NT9X27AA) extends the peripheral bus (P-bus) from the CM processor shelf to the SLM shelf.
Power supply (NT9X30, NT9X31)
The power supply provides power for the CM shelf. It consists of two
+5V 86-A power converters (NT9X30) and two –5V 20-A power converters
(NT9X31).
The DMS SuperNode power system is protected by interlock. This feature
prevents the system from being powered off inadvertently while in service.
Interfaces (NT9X12, NT9X20)
The transmission subsystem controls in-band data communication with the
MSs and provides the crossover for links. It consists of a CPU port card
(NT9X12) and DS512 interface paddle boards (NT9X20).
The CPU port card provides serial message communications between the CM and the MS. The DS512 paddle board provides the transmission interface for a single bidirectional two-fiber DS512 link.
System load module shelf
The SLM shelf houses two provisionable SLMs (NT9X44) that connect
directly to the CM system bus, their power supplies, and interface circuitry.
The shelf is configured as an extension of the DMS-core.
The SLM shelf is also equipped with cards on the front and corresponding
paddle boards on the rear. The cards share a common bus with the paddle
boards. The SLM shelf contains the following functional subsystems:
• SLMs (NT9X44)
• interfaces (NT9X12, NT9X46)
• clock (NT9X22)
• bus termination (NT9X21)
• bus extension (NT9X27)
Either CPU of the CM can be loaded from either SLM through the crossover
bus; however, if power is lost to one SLM, or if the NT9X12 card is faulty,
the SLM can load only the CPU on the same side of the switch.
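The load-path rule above can be sketched as a small function; this is an illustrative model only, with invented function and parameter names, not NT software:

```python
def loadable_cpus(slm_side, slm_powered=True, nt9x12_ok=True):
    """Which CM CPUs can be loaded from the SLM on the given side (0 or 1).

    With the crossover bus available, either SLM can load either CPU.
    If power is lost to the SLM, or its NT9X12 card is faulty, only the
    same-side CPU can be loaded. Parameter names are illustrative.
    """
    if slm_powered and nt9x12_ok:
        return {0, 1}       # crossover bus available: either CPU
    return {slm_side}       # degraded: same-side CPU only
```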
Interfaces (NT9X12, NT9X46)
The transmission system controls in-band data communication with the CM
and MS, and provides the crossover for links. It consists of a CPU port card
(NT9X12) and two parallel port interface paddle boards (NT9X46).
The CPU port card provides serial message communications between the
SLM and the CM. The parallel port interface paddle boards connect the
synchronous bus on the CPU card to the SLM with the aid of an interconnect
cable.
Clock (NT9X22)
The subsystem clock provides link synchronization to the MS. The accuracy
of the clock is determined by the office clock located in the MS. The basic
time reference is obtained through the serial links.
Bus termination (NT9X21)
Bus termination is provided by the CM bus terminator paddle board. It
provides resistive termination for the system bus in both the SLM shelf and
the CM processor shelf.
Bus extension (NT9X27)
The CM bus extension paddle board (NT9X27BA) extends the P-bus to the
SLM shelf from the CM processor shelf.
Power supply (NT9X30, NT9X47)
The power supply provides power for the SLM shelf. It consists of two
+5V 86-A power converters (NT9X30) and two +12V power converters
(NT9X47).
Messaging component
The messaging component routes messages within the DMS-100 system,
including messages from the central control component (core processor) to
subtending nodes such as the switching network. The messaging component
in NT40-based systems is the central message controller (CMC). In
SuperNode-based systems, the messaging component is the DMS-Bus.
DMS-Bus
The DMS-bus is the messaging component of the DMS SuperNode system.
For reliability, the DMS-bus consists of two MSs. Under normal operating
conditions, the MSs share the load, although each MS can support the entire
system load if necessary. Figure 1–5 illustrates the main components of the
DMS-bus.
[Figure 1–5: DMS-bus components, showing the DMS-core port interface, the DMS-bus transaction bus (T-bus), and the processor bus]
The clock interface paddle board provides the direct analog interface
between the DMS-bus and external clock sources (Stratum 1), or remote
clock sources (Stratum 2 or 2.5). The DMS-bus system clock card provides
the internal Stratum 3 clock. Each office has two sets of clock cards, one in
each MS, that operate as internal master and slave sources.
Message switch
The MS is a hub for communication among DMS-100 components. The MS
concentrates and distributes messages in the DMS SuperNode system, and
allows other components to communicate directly with each other.
The MS provides the following capabilities:
• port-to-port message switching
• 240 000 messages per second with 64-byte message length
• independence among ports
• self-maintaining and self-diagnosing processor
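At the rated figures above, the aggregate message throughput works out as follows; this is a simple arithmetic check, not a measured value:

```python
MSGS_PER_SEC = 240_000   # rated port-to-port switching rate
MSG_BYTES = 64           # message length used for that rating

bytes_per_sec = MSGS_PER_SEC * MSG_BYTES        # 15 360 000 bytes/s
mbit_per_sec = bytes_per_sec * 8 / 1_000_000    # about 122.9 Mbit/s
```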
[Figure 1–6: Inter-MS links configuration between MS 0 and MS 1]
also monitors processor signals and relays them to the remote terminal or
remote scanning system.
Additional information on DMS SuperNode central processing units and
memory is provided on page 1–18.
System clock (NT9X53, NT9X54)
The system clock provides the clock source for the DMS SuperNode switch.
It consists of the MS system clock card (NT9X53) and the MS subsystem
clock paddle board (NT9X54).
The MS system clock card provides the internal Stratum 3 clock. The clock
interface paddle board provides the direct analog interface between the
DMS-bus and external clock sources (Stratum 1), or remote clock sources
(Stratum 2 or 2.5). Each office has two sets of clock cards, one in each MS,
that operate as internal master and slave sources.
For additional information on DMS SuperNode clocking configurations,
refer to the Transmission chapter of this document.
Mapper (NT9X15)
The mapper (NT9X15) performs logical-to-physical address translation for
messages routed between ports.
Port interface (NT9X17, NT9X20, NT9X23, NT9X62, NT9X69)
The port interface consists of the four-port DS30 paddle boards (NT9X23),
16-port DS30 paddle boards (NT9X69), the DS512 paddle boards
(NT9X20), subrate DS512 paddle boards (NT9X62BA), and the MS port
cards (NT9X17).
The four-port and 16-port DS30 paddle boards provide interfaces between
the MS and the junctored network, input output controller (IOC), and the
link peripheral processor (LPP). DS512 paddle boards use optical fiber to
link to ENET and the single shelf LPP (SSLPP). The DS512 paddle board is
the interface for a single DS512 link to the CM.
Bus termination (NT9X49)
The bus termination provides passive terminations to back panel signals. It consists of the MS P-bus terminator card (NT9X49CA or NT9X49CB) that is used with ENET.
Power supply (NT9X30, NT9X31)
The power supply provides power for the MS shelf. The power is provided
by two NT9X30 +5V 80-A power converters, and two NT9X31 –5V 20-A
power converters. One of each type of power converter is located at each
end of the shelf, and provides power for one-half of the shelf.
The instruction cache and data cache allow the CPU to maximize throughput.
The CM and the MS use different versions of the NT9X13 CPU circuit card; the versions differ only in the amount and type of memory provided. Both the CM and the MS run the Support Operating System (SOS) together with additional software: some software is common to both, while some is unique to one or the other. Each, however, maintains its own copy of the SOS, the common software, and its own unique software, and each has its own clock for program-instruction sequencing and timing functions.
DMS SuperNode memory
DMS SuperNode memory consists of integrated program store and data
store on the same bus. There are separate data and address buses, each 32
bits wide. Memory is byte addressable; thus, the logical address range is 4
Gbytes.
Memory access protocol
Memory is partitioned into 64-kbyte pages. The function of each page of
memory is defined by memory protection attributes in the MAU. Four types
of memory protection attributes are provided in the DMS SuperNode:
• write protection
• program only
• data only
• supervisor mode
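The 64-kbyte paging and the protection attributes above can be sketched as follows; this is a conceptual model only, not the MAU's actual interface, and the attribute strings are illustrative:

```python
PAGE_SIZE = 64 * 1024    # 64-kbyte pages
ADDRESS_BITS = 32        # byte-addressable 32-bit logical address

# 4-Gbyte logical range divided into 64-kbyte pages
NUM_PAGES = (1 << ADDRESS_BITS) // PAGE_SIZE    # 65 536 pages

def page_of(addr):
    """Page number containing a given byte address."""
    return addr // PAGE_SIZE

def may_write(page_attrs):
    """A page marked write-protected or program-only rejects stores.

    page_attrs is a set drawn from the four attributes listed above.
    """
    return not (page_attrs & {"write protection", "program only"})
```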
[Figure 1–7: Memory modules, pages and segments; each memory module shown holds 2 Mbytes]
Switching network
The switching network is a digital-switching matrix that interconnects the
peripheral modules, using time-division multiplexing. The switching
network planes are duplicated for reliability. On NT40-based systems, the
switching network used is the junctored network (JNET). The JNET is
made up of microprocessor-controlled, digital-switching network modules
(NM).
SuperNode-based systems can also be provisioned with JNET but are
typically equipped with the newer Enhanced Network (ENET). Both
[Figure 1–8: Layout of a DSNE frame, with network 00 and network 01 shelves, filler panels, a frame supervisory panel, and a cooling unit]
[Figure 1–9: Junctored network port configuration; PMs connect to ports 0 through 63 on NM 0 through NM 31]
[Figure 1–10: Duplicated network module controller, showing junctor-side and peripheral-side network ports on planes 0 and 1, each NMC with Side A and Side B, and message links from the CMC or MS ports]
Messages sent to PMs are sent simplex; each message traverses the
messaging component-to-NM links and the NM-to-PM message link, as
specified in the routing information carried in the message. If a message
link fails, the system automatically removes it from service, and reroutes
messaging through the remaining associated link.
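The link-failure behaviour described above amounts to simple failover between associated message links; a minimal sketch, with link names invented:

```python
def route_message(links_in_service):
    """Select a message link for a simplex message.

    links_in_service maps link name -> bool. A failed link is skipped,
    so traffic moves to the remaining associated link automatically.
    """
    for link, in_service in links_in_service.items():
        if in_service:
            return link
    raise RuntimeError("no message link in service")
```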
Since an NM path is unidirectional, two paths are required to establish a bidirectional channel. Thus, each plane has two “sides,” designated Side A and Side B. Side A switches PCM signals from the peripheral side to the junctor side, while Side B switches PCM signals from the junctor side to the peripheral side.
The junctors are DS-30 links providing NM interconnection and
intra-connection. Junctors carry 31 channels of speech, with one channel
(channel 0) used for switching network DS-30 synchronization.
The designated grade-of-service (probability of blocking) of the switching
network is achieved with traffic offered by 30 channels of DS-30 at the
speech links. Thus, two channels of a speech link (channels 0 and 16) are
available for other purposes. Channel 0 is made available for messaging and
channel 16 for maintenance features. Channel 16 is also used by
inter-peripheral message links (IPML).
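The channel plan above can be summarized in a short sketch (illustrative only):

```python
def ds30_channel_use(channel):
    """Classify one of the 32 DS-30 speech-link channels (0-31)."""
    if not 0 <= channel <= 31:
        raise ValueError("DS-30 carries 32 channels, numbered 0-31")
    if channel == 0:
        return "messaging"
    if channel == 16:
        return "maintenance/IPML"
    return "speech"

# 30 channels remain for offered speech traffic
speech_channels = sum(ds30_channel_use(c) == "speech" for c in range(32))
```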
Intra-network connection
Two PMs on the same NM are intra-connected by a single junctor. The PMs
have one (duplicated) link connected to both planes of the NM. In this case,
the link carries both speech and messaging. As shown in figure 1–11, the
two PMs are connected via the Switching Network. Note that the illustrated
connection represents both space and time since a link/channel on one PM is
connected to a different link/channel on the other. In this case, only one
Physical junctor is necessary since it can carry up to 31 channels. PCM
signals from each PM are carried on the single junctor, but in different
time-slots (channels).
Inter-network connection
The inter-network configuration is illustrated in figure 1–12 on page 1–27.
[Figure 1–11: Intra-network connection (JNET), showing PMX and PMY connected through Side A and Side B of the NMC on planes 0 and 1 over a single junctor]
[Figure 1–12: Inter-network connection (JNET), showing PMX on NM 0 connected to PMY on NM 1 across the junctor side, through Side A and Side B of each NMC]
Enhanced network
ENET is a matrixed timeswitch that provides pulse-code modulated voice
and data connections between PMs, and message paths to the DMS-bus.
ENET is available in either a single-cabinet 64K configuration or a
dual-cabinet 128K configuration. Both configurations use the same
hardware components, and the 64K ENET can be upgraded to 128K
channels.
ENET provides the following capabilities:
• nonblocking single-stage switching
• nailed-up connections with no adverse effect on traffic
• compatibility with A-law and µ-law companding
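A-law and µ-law are the two companding laws of ITU-T G.711. As a sketch of the companding idea only (the real codecs quantize to 8-bit codewords; this continuous form is illustrative):

```python
import math

MU = 255  # mu-law constant; A-law (A = 87.6) is the European analogue

def mu_law_compress(x):
    """Continuous mu-law compression of a sample x in [-1.0, 1.0].

    Small amplitudes are boosted relative to large ones, which is the
    point of companding before 8-bit quantization.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
```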
Figure 1–13 on page 1–29 illustrates the basic 64K single-cabinet ENET
layout, with two shelves for each plane.
Figure 1–14 on page 1–30 illustrates a typical 128K dual-cabinet ENET
layout, with one fully equipped cabinet serving as plane 0 and one fully
equipped cabinet serving as plane 1. Up to 60 speech link interface cards
are provided for each plane.
[Figure 1–13: 64K single-cabinet ENET layout, with the FSP and shelves 0 and 1 for each of plane 0 and plane 1]
[Figure 1–14: 128K dual-cabinet ENET layout; each cabinet has an FSP, shelves 1 through 3, and a cabinet cooling unit]
Note: For ENET applications with more than two shelves in each ENET
cabinet, the use of ISN cabling structure (ICS) cabinets (NT0X35BB) is
recommended. Three ICS cabinets (not shown) are required for the
example shown in figure 1–14: one on either side of the ENET cabinets
and one between them.
Series II PMs connect to ENET using DS512 fiber links; retrofit kits are required to allow Series II PMs to make this connection. The Series II PMs message switch and buffer 6 (MSB6), MSB7, digital trunk controller 6 (DTC6), and Subscriber Carrier Module–100S (SMS) use copper speech links to connect to ENET.
Figure 1–15 on page 1–32 shows an example of how an ENET can be
connected to a Series II PM using fiber links.
[Figure 1–15: Example of an ENET connected to a Series II PM (for example, a DTC) through NT6X40 paddle boards]
Input/output controller
The input/output controller (IOC) provides an interface between the
messaging component and input/output devices such as magnetic-tape
drives, disk drives, data links, MAPs, and printers. The IOC is used on
NT40-based and SuperNode-based platforms. Up to 36 devices can be
connected to an IOC shelf. A DMS system can support up to 12 IOC
shelves.
The IOC, together with the I/O device controller that connects the particular device, performs the necessary conversion to DS-30 format for communication with the control component by way of the message links. Two DS-30 links connect to the IOC, one from each messaging component.
Figure 1–16 on page 1–36 illustrates the IOC and the input devices which
connect to it. The IOC resides in a single or dual input output equipment
(IOE) frame. The single IOE frame is illustrated in figure 1–17 on page
1–37.
Input output devices
The following types of input output devices connect to the DMS system via
the input output controller.
Magnetic tape drives
Magnetic tape drives (MTDs) are devices that allow transfer of system data
to permanent memory tape that is external and transportable. The MTD is
used for storage and retrieval of DMS data as follows:
• office image data (backup)
• automatic message accounting (AMA) data (record of billing)
• journal file (JF) data (record of data modification)
• operational measurements (OM) data (traffic reports)
[Figure 1–16: Input/output device configuration, showing the computing module and message switch connected through the input/output controller to magnetic tape drive, disk drive, modem, printer, and VDU devices]
[Figure 1–17: Input/output equipment frame layout, with a magnetic tape drive, frame supervisory panel, two input/output controllers, and a disk drive unit]
Peripheral modules
Peripheral modules (PM) provide an interface between the DMS internal
switching network and the telephony network (lines and trunks).
A digital connection can be established among PMs under the direction of
the control component. Once connected, the PMs can pass voice/data and
pulse code modulation (PCM) signals among themselves.
Hardware and functional modularity are most evident at the PM level of the
system. The PMs connect both analog lines and digital voice and signaling
transmission systems. PMs provide the signal processing required to
convert data to a common digital format for transmission to another PM
connected via the switching network. The destination PM reconverts the
common format to the one required by the facility with which it connects.
The conversions involved are:
• analog-to-digital (A/D), for conversion of voice-frequency analog
signals to internal digital signals.
• digital-to-analog (D/A), for conversion of internal digital signals to
analog voice frequency signals.
• digital-to-digital (D/D), for conversion from internal to external formats
on digital facilities (for example, DS30 to PCM30)
Digital trunk facilities link DMS-100 International systems to other
switching systems in the telephone network. The system, through its digital
switching network, connects lines and incoming digital trunks to lines and
outgoing digital trunks located on the peripheral modules and utilizes
conventional signaling formats including multifrequency (MF),
multifrequency compelled (MFC), and dual tone multifrequency (DTMF).
Channel banks can be used to interface DMS-100 International switches to
the network via analog trunks.
In addition to signal processing, the PMs also perform many of the common, real-time-intensive functions associated with supervision and control of external interfaces. For example, line modules perform line supervision, digit reception, and ringing functions. As a result, the control component needs only to determine the call destination when all digits are received and to establish a call connection through the switching network.
Peripheral modules (PM) perform the following tasks:
• connect analog and digital facilities for conversion of data and signaling to and from internal DS30 format (see Note)
• transmit DS30 pulse code modulation (PCM) signals and data to other PMs via the switching network
• provide additional signal processing through the use of service circuits such as dual-tone multifrequency (DTMF) receivers
• provide maintenance and performance analysis circuitry for frequency and level measurements
• provide facilities for subscriber and terminal signaling, such as busy tone, dial tone, multifrequency digit outpulsing tones, and ringing current
The connections through the switching network are duplicated. At least one
(duplicated) DS30 link is provided.
Note: A duplicated link means that there are two links, one to each
switching network plane. Speech and signaling information is sent over
both links, but only one link is active. If a fault occurs on the active link,
the other link takes over.
• ringing
• digit collection
• line metering (with type B line card)
A maximum of 640 lines can be connected to each ILCM. Each ILCM consists of two shelves (line concentrating arrays). Each shelf consists of a power converter, a controller, and a maximum of five line drawers. Each line drawer contains up to 64 line cards. Up to 320 lines can be connected to each shelf. The ILCM configuration is shown in figure 1–18 on page 1–41 and figure 1–19 on page 1–42.
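The capacity figures above are consistent, as a quick arithmetic check shows:

```python
SHELVES_PER_ILCM = 2
DRAWERS_PER_SHELF = 5     # maximum of five line drawers per shelf
LINES_PER_DRAWER = 64     # up to 64 line cards per drawer

lines_per_shelf = DRAWERS_PER_SHELF * LINES_PER_DRAWER   # 320 lines
lines_per_ilcm = SHELVES_PER_ILCM * lines_per_shelf      # 640 lines
```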
[Figure 1–18: ILCM shelf organization; each of shelf 0 and shelf 1 has a power converter, a controller, and up to five line drawers]
Figure 1–19
ILCM frame packaging
FW-31145
(Ringing generators 0 and 1; two shelves, each with a power converter, a controller, DS30A links, and line drawers serving up to 320 lines; 640 lines per ILCM)
For reliability, each shelf is capable of taking over the lines of the mate
shelf, as shown in figure 1–20 on page 1–43.
Figure 1–20
Duplication within the ILCM
FW-31148
(Controllers 0 and 1, each with up to 3 DS30A links and up to 5 line drawers serving up to 320 lines; the controllers are joined by a serial RS-232 link and 1 or 2 DS30A speech links)
The controllers are connected by a serial link which allows one shelf to
check its data with the mate controller. The data for each call in progress is
sent to the mate controller over this link. If a fault occurs in one controller,
the mate controller can take over the calls in progress. Each line drawer has
duplicated links connecting it to both controllers, and each DS30A link is
connected to both controllers as well. Therefore, if a controller fails, the
hardware signals the mate controller, which then takes over the remaining
320 lines. There is no reduction in the number of links between the line
drawers and the controller, or between the ILGC and the ILCM, because the
controller has access to the failed controller’s line drawers and DS30A links.
Between the two controllers, there are two speech links (DS30A). If all
DS30A links on a controller are busy, but the mate controller has a free
DS30A link, a call originating on the busy controller may be routed over
these intercontroller speech links to a free DS30A channel on the mate
controller. For traffic engineering purposes, this capability gives all lines
access to all six links.
Each ILCM has its own ringing generator. In the event of a failure, the mate
ILCM will supply the ringing voltage for both ILCMs.
In the event of a power failure, the converter on the mate shelf takes over
and provides power for both shelves.
A group of ILCMs connects to an ILGC, which then connects to the
switching network.
There is at least one DS30A link per shelf between the ILCM and the ILGC.
Each ILCM has at least two DS30A links to the ILGC controlling it. The
maximum number of DS30A links per shelf is three (that is, a maximum of
six DS30A links per ILCM).
The following circuit cards are provisioned on an ILCM.
Power converters (NT6X53)
The ILCM power converters supply all the required LCA shelf voltages and
operate using a nominal –48V dc battery source.
ILCM processor card (NT6X51AB)
This card provides:
• interface to digroup control card
• activity and sanity circuitry
• digit collection and messaging for up to 640 lines (spared mode)
• DMS-X message protocol to the ILGC
• monitoring of power and ringing functions
• clock recovery and generation
• 64K RAM
any mix of the available types of line cards, and allows additional plug-in
units to be added to a partially-equipped module without disruption of
service.
International line group controller (ILGC)
The ILGC performs high-level functions, such as:
• call coordination
• provision of the different tones required
• ILCM interface.
(Figure: ILGC connecting 3 to 16 DS30 links to the network, with up to 640 lines per subtending ILCM)
The ILGC functional diagram is shown in figure 1–22 on page 1–48. The
link between an ILCM and its host ILGC is DS30A. The link between each
ILGC and a network module is a DS30 link. DS30A is similar to DS30.
Some hardware economies are achieved, since DS30A is intended to work
over shorter distances (Line Concentrating Modules are adjacent to their
Line Group Controller).
Figure 1–22
Organization within the ILGC
FW-31144
For reliability, one controller is active and the other controller is in standby
mode. There is a 19.2 kbits/sec link between the two units. This link is used
to transfer information from the active unit to the standby unit. This transfer
ensures that the standby unit has enough data to continue operation if the
active unit fails. The active unit checkpoints data from each call to the
standby unit. If a hardware or software fault occurs, the active
controller becomes inactive and the standby controller assumes control of
the ILGC.
Calls in talking state are preserved during both takeover and return to
service. Calls in the set-up mode are lost.
A switch-over occurs if any of the following fail:
• power converter
• control complex (MP, SP, FP, and memory)
• time switch
• CSM card
• formatter
• message interface
• A/B interface
The switch-over takes place if the fault occurs on the active controller.
These faults are detected by the routine use of the channel loop-around to
the above cards.
In the event of a power converter failure, the converter on the mate shelf
takes over and supplies power to the non-duplicated PCM30+2 interfaces on
both shelves.
The ILGC is the basic peripheral group controller. The International Digital
Trunk Controller (IDTC) is a derivative of the ILGC.
The following circuit cards are used in the ILGC.
Power converter (NT2X70AD)
The power converter provides all the required shelf voltages and operates on
a nominal –48V dc battery. The converter is capable of supplying power for
the non-duplicated DS-1 interface of the mate shelf in the event of a mate
converter failure. This power converter is also used in the IDTC.
Universal tone receiver (NT6X92)
The universal tone receiver (UTR) is a 32-channel receiver which can detect
a variety of tones, such as DTMF, MF, and MFC. Up to 128 frequencies can
be detected by programming two EPROMs on the card.
Tone samples are switched onto the parallel speech bus by the time switch
(NT6X44AA) and are collected by the UTR at the appropriate time slots.
The UTR analyzes the samples and identifies the tones. The results are
returned to the signaling processor (SP).
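The specification does not describe the UTR's internal detection method. As an illustration only, the sketch below detects a DTMF digit in 8000-sample/s PCM using the Goertzel algorithm, a common software technique for measuring energy at selected frequencies; the function names and sample count are assumptions, not DMS internals.

```python
import math

# Standard DTMF frequency grid: row tone + column tone -> keypad digit
ROWS = [697, 770, 852, 941]
COLS = [1209, 1336, 1477, 1633]
KEYS = ["123A", "456B", "789C", "*0#D"]

def goertzel_power(samples, freq, fs=8000):
    """Relative energy of one target frequency (Goertzel algorithm)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def detect_dtmf(samples, fs=8000):
    """Pick the strongest row and column tone and map them to a digit."""
    row = max(ROWS, key=lambda f: goertzel_power(samples, f, fs))
    col = max(COLS, key=lambda f: goertzel_power(samples, f, fs))
    return KEYS[ROWS.index(row)][COLS.index(col)]

# Synthesize digit '5' (770 Hz + 1336 Hz) at 8000 samples per second
fs = 8000
tone = [math.sin(2 * math.pi * 770 * n / fs) +
        math.sin(2 * math.pi * 1336 * n / fs) for n in range(205)]
assert detect_dtmf(tone, fs) == "5"
```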
DS30 interface (NT6X40AB)
Each DS30 interface card contains the interface logic for eight DS30 ports
(single plane). This card also contains the network side loop-around logic
used for maintenance purposes. This card is also used in the IDTC.
Speech bus formatter (NT6X41AA)
The speech bus formatter card contains the logic required to combine both
network planes. The necessary control memory for the network side loop
around is located on this card as well as the logic associated with the ILGC
system clock. A per-channel channel supervision message (CSM)
loop-around facility is also supported on this card. This card also converts
the 512 channel speech bus to 16 32-channel serial ports and vice-versa.
This circuit card is also used in the IDTC.
Channel supervision message (NT6X42AA)
The channel supervision message card contains the required logic for
channel supervision message (CSM) extraction and insertion, parity and
integrity. This circuit card is also used in the IDTC.
Time switch (NT6X44AA)
The time switch card contains the ILGC switching logic, peripheral side
serial to parallel and parallel to serial converters, and peripheral side
loop-around logic. The time switch enables any network side channel to
connect to any peripheral side channel. This circuit card is also used in the
IDTC.
Message interface (NT6X69)
The message interface card contains the interface to both network and
peripheral side message channels. The card also contains the tone
generation logic.
Processor (NT6X45)
Two versions of IDTC processors are available. The newer version, the
NTMX77 unified processor card, is described on page 1–51. In the older
processor configuration, each ILGC unit contains two NT6X45 processor
cards, based on Motorola 68000 microprocessors. One card is configured as
the master processor (MP) and the other is configured as the signaling
processor (SP).
The MP performs the following functions:
• call control
• data management
• interpretation of central control messages
• channel assignment.
(Figure: duplicated MCs joined by the port crossover bus, with CMIC links to MS0 and MS1 on the DMS-Bus)
This technique not only results in true redundancy of these two entities but
also makes possible the sharing of connections between them. Each
CPU has a serial fiber optic connection (CMIC link) to each MS. The two
CM planes are connected by the mate exchange bus, which in turn makes
possible a secondary connection between MCs using the port crossover bus.
This ensures complete redundancy for the CM connections.
Communication between switching core and subtending subsystems
The PMs and the control component communicate with one another by
means of messages sent via DS-30, DS-1 and DS-512 links. The messages
can be PCM signals, or they can be control and information messages
exchanged by the central control element, the switching network and the
PMs.
System nodes
Any unit that can accept messages or originate them, or both, is termed a
“node.” Therefore, the central control and central messaging elements, the
NMs, the PMs and the IOC are all nodes. In NT40-based systems, the side
of a node (or link) “facing” toward the central control element is referred to
as the “C-side;” the side of a node (or link) facing toward the PMs is
referred to as the “P-side.” In SuperNode-based systems, the side of a node
(or link) facing toward the DMS-Bus is referred to as the C-side; the side
a node (or link) facing away from the DMS-Bus is referred to as the P-side.
In both switching systems, messages going from the C-side to the P-side are
called outgoing messages. Those coming from the P-side to the C-side are
called incoming messages.
When installed, each NM, IOC and PM is assigned a unique node number
by the system. Each terminal controlled by the node is assigned a unique
number on the node. Therefore, every terminal (for example, line and trunk)
on the system is identified internally by a unique “terminal identifier” made
up of the node number and the terminal number on the node. This identifier
may be thought of as the address of a terminal. The central control element
software controls terminals by sending messages to the nodes on which they
are located. These messages include the terminal identifier, as well as the
data necessary for the particular action being performed.
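The terminal identifier described above pairs a node number with a terminal number on that node. The document does not give the encoding, so the field widths below are purely illustrative; the sketch only shows the pack/unpack idea.

```python
# Hypothetical field widths - the actual DMS encoding is not stated in this
# document; 12 bits for each field are assumed for illustration only.
NODE_BITS = 12
TERMINAL_BITS = 12

def make_tid(node, terminal):
    """Combine a node number and a terminal number into one identifier."""
    assert 0 <= node < (1 << NODE_BITS)
    assert 0 <= terminal < (1 << TERMINAL_BITS)
    return (node << TERMINAL_BITS) | terminal

def split_tid(tid):
    """Recover the (node, terminal) pair from an identifier."""
    return tid >> TERMINAL_BITS, tid & ((1 << TERMINAL_BITS) - 1)

tid = make_tid(41, 305)           # terminal 305 on node 41
assert split_tid(tid) == (41, 305)
```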
Message link control is handled by the input/output (I/O) system. This
function is distributed among the nodes since each node must receive and
send messages over its C-side and P-side message links. The I/O system
uses routing and error control information in each message to ensure successful
message transmission in the presence of link noise or in the event of
transient or permanent hardware faults.
DS-30 interface
The common interface between components of DMS-100 system is a serial
data link with a bit-stream format. This format is termed “DS-30” and is
composed of 32 10-bit channels within a 125 microsecond frame. The
channels are numbered 0 to 31; channel 0 comes first in time.
Note: Framing is required so that both sending and receiving equipment
can agree on channel and bit number.
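The frame format above fixes the DS-30 line rate; the arithmetic (stated explicitly later in this document as 32 channels × 10 bits × 8000 samples per second = 2.56 Mbit/s) can be checked as follows:

```python
# Derivation of the DS-30 line rate from the frame format described above.
CHANNELS = 32              # channels 0 to 31 per frame
BITS_PER_CHANNEL = 10      # 10-bit channels
FRAMES_PER_SECOND = 8000   # one frame every 125 microseconds

bit_rate = CHANNELS * BITS_PER_CHANNEL * FRAMES_PER_SECOND
frame_time_us = 1e6 / FRAMES_PER_SECOND

assert bit_rate == 2_560_000   # 2.56 Mbit/s
assert frame_time_us == 125.0  # microseconds per frame
```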
DS-512
The DS-512 format is the standard for internal optical fiber links on DMS
SuperNode. A DS-512 link is equivalent to 16 DS-30 links, multiplexed
onto a single optical fiber. One channel is used for link synchronization.
DS-512 provides more bandwidth than DS-30, and allows greater packaging
density.
The DS-512 format is used on the link between DMS-Core and DMS-Bus. This
link is a 32-Mb/s message channel. The distance specification of a DS-512
link is 250 meters. The DS-512 link is based on short-wavelength optical
technology, using LEDs and multi-mode fiber.
LPP. These buses, illustrated in figure 1–25 on page 1–59, are composed of
cards, paddle boards, and interconnecting cables. There are two F-bus
repeater cards (one for F-bus 0, one for F-bus 1) and two F-bus extender
paddle boards (F-bus 0 and F-bus 1) associated with each LIS. There are
also two F-bus terminator cards for each LIS, one for each F-bus.
(Figure 1–25: LPP cabinet with frame supervisory panel, link interface shelves 1 to 3, and cooling unit)
Each F-bus is an eight-bit bus that forms the data communication path
between a local message switch and the LISs. The F-buses are dedicated to
each LIM as follows:
• F-bus 0 is dedicated to LIM 0
• F-bus 1 is dedicated to LIM 1
The IPF card provides message processing for its associated SL, including
GTT for DMS-STP applications. It also provides one interface or tap to
each of the two F-buses. Each F-bus tap is fully independent of the other
tap. This ensures that if one F-bus fails, the IPF can still access the other
F-bus.
(Figure: LIU7 and EIU, each with an integrated processor and F-bus interface, connected to the F-buses and to the P-bus)
Figure 1–26
Non-channelized access link configuration
FW-31193
(Outside world connected via PCM30 through a channel bank and modem)
Figure 1–27
Channelized access external interface configuration
FW-30327
(Outside world connected via PCM30 through a PDTC to network planes 0 and 1 and C-buses 0 and 1)
Figure 1–28
CCS7 channelized access system overview
FW-31136
(DMS-core and DMS-bus, DS30 interface to the network, and LPP with LIM units 0 and 1)
Figure 1–29
Two-slot link interface shelf with an NIU
FW-31190
(Shelf layout: slots 21 and 19 hold an NTEX25 NIU CBC at the front and an NTEX28 NIU DS30 link I/F at the rear; slots 20 and 18 hold an NTEX22 IPF at the front and an NT9X19 filler faceplate at the rear; slot 08 holds an NTEX20 intrashelf termination; slot 07 holds an NT9X74 F-bus repeater card and an NT9X79 F-bus extension paddle board; slot 04 holds an NT9X30 +5V 86-A power converter)
Figure 1–30
NIU bus configuration
FW-30328
(F-bus 0, F-bus 1, C-bus 0, and C-bus 1)
Figure 1–31
SSLPP F-bus and MS interconnections
FW-30350
(Inter-MS link shown)
Figure 1–32
An example of an EMC cabinet with SSLPPs
FW-31191
(Cabinet, top to bottom: FSP; SSLPP 1 (NT9X72); SSLPP 2 (NT9X72); filler faceplates; core cooling unit)
Figure 1–33
SSLPP card arrangement
FW-31192
(Front cards and rear paddle boards; NT9X30 +5V 86-A power converter in slot 04F)
(Figure: ESA equipment in the IRLCM – ESA tone and clock card, PCM30 interface card to the host ILGC, remote maintenance module, inter-unit link, ESA processor, and line control cards with DS30A links to the line concentrating arrays)
Figure 1–35
IRLCM Frame Layout – Front View
FW-31143
(Frame, top to bottom: ringing generators RG 0 and RG 1; FSP; RMM; HIE; cooling unit; LCA 1; LCA 0; grill)
Legend:
FSP Frame supervisory panel
HIE Host interface equipment
LCA Line concentrating array
LCM Line concentrating module
RG Ringing generators
RMM Remote maintenance module
Front view
NT6X60AA IRLCM RG 1
NT6X27AAB PCM30
NT6X27AAB PCM30
NT6X73BA LCC 1
NT6X73BA LCC 0
NT6X75 ETC*
(slots 01 to 27)
Legend:
ETC ESA tone and clock card
LCC Link control card
Notes: *Cards in slots 14–16 are provisioned for ESA option only. Otherwise, the slots are
closed by filler panels (NT0X50AA).
**If additional PCM30 links are required (maximum of six links), the filler panel is
replaced by NT6X27AB.
Figure 1–37
LCA shelf layout
FW-30248
Front view (slots 01 to 21): NT2X59 Codec and tone card
DMS SuperNode SE
DMS SuperNode SE architecture is essentially the same as that of the
full-sized DMS SuperNode system. The SuperNode SE platform supports
all subtending equipment supported by SuperNode, including ENET, JNET,
LPP, the IOC, the MAP, remotes, and PMs. The SuperNode SE platform
combines the switching core, ENET and CCS7 link functionalities in the
SuperNode combined core (SCC) cabinet, as shown in Figure 1–39.
Figure 1–39
DMS SuperNode SE SuperNode combined core cabinet
FW-30317
(Cabinet, top to bottom: frame supervisory panel; DMS-bus with MS 0 and MS 1 (16K); ENET 0 and ENET 1; DMS-core with SLM 0, SLM 1, CPU 0, and CPU 1; cabinet cooling unit)
Figure 1–40
DMS SuperNode SE functional block diagram
FW-30432
(DMS-core: CPU 0 and CPU 1, each with an SLM; DMS-bus: MS 0 and MS 1; connections to the LIS, NET 0 and NET 1, and PMs)
DMS-core
The DMS-core provides the processing resources and performs system
management in DMS SuperNode SE applications. The DMS-core also
handles system integrity, maintenance, and the loading and downloading of
software. Figure 1–41 illustrates the main components of the DMS-core.
Figure 1–41
Main components of the DMS-core – SuperNode SE
FW-30116
(DMS-core on the processor bus, with links to the DMS-bus, network, and IOC)
The DMS-core consists of the CM with two synchronized CPUs, each with
an SLM. The CPUs are connected by the mate exchange bus, which allows
the processors in the modules to compare computations, thus ensuring
system integrity between the active and inactive planes.
The two SLMs are used for storage of software loads, office images, and PM
loads. Each SLM consists of one cartridge tape drive and a disk drive unit
(DDU). In the DMS SuperNode SE cabinet, the DMS-core occupies a
single shelf; the DMS SuperNode switch requires two shelves for the same
components. This space saving is achieved through the use of high-density
memory cards and disk technology.
SuperNode SE computing module
The SuperNode SE CM is fully duplicated and synchronized. Its CPUs are
based on a 32-bit Motorola microprocessor, with a built-in instruction cache
facility and an onboard high-speed data cache. Normally the duplicated
CPUs run in synchronized mode, with the controlling CPU known as the
active processor and the mate as the inactive processor.
Other features of the computing module include
• integrated program and data store with single-bit error detection
Figure 1–42
DMS SuperNode SE CMIC link configuration
FW-30319
(CMIC links run from CPU 0 (9X86, slot 18) and CPU 1 (9X86, slot 21), through MC0 and MC1 (9X62AA), to port 0 of card 4 (9X17AD) in MS 0 (slot 16) and MS 1 (slot 23))
local terminal can be up to 15 m (50 ft) from the RTIF. With a 20-mA
current loop interface, the local terminal can be located up to 457 m (1500
ft) from the RTIF.
The remote terminal port can be configured as either RS-232 or E2A to
handle baud transmission rates from 110 to 9600.
A VT420 MAP terminal that can operate in dual terminal mode can be used
to provide a virtual RTIF terminal and eliminate the need for duplicated
terminals.
Clock (NT9X22)
Each CM contains a subsystem clock that provides link synchronization to
the MS. The accuracy of the clock is determined by the office clock signal
distributed through the MS. The basic time reference is obtained through
the serial links.
Bus termination (NT9X21)
The bus termination consists of the CM bus terminator paddle board. It
provides resistive termination for the system bus in both the CM processor
and the SLM. In addition, it provides the circuitry required for buffering the
CM activity signal and extracting component identification from the power
converters.
Power supply (NT9X91, NTDX15)
The power supply provides power for all the components housed in the
DMS-core shelf. It consists of two ±5V power converters (NTDX15) and
two +5V/+12V power converters for the SLM (NT9X91).
The DMS SuperNode SE power interlock facility provides a safeguard to
prevent the system from being powered off accidentally while in service.
Under all other conditions, power interlock is disabled.
System load modules
The DMS-core shelf also houses two SLMs, their power supplies, and
interface circuitry. Each SLM consists of a 3.5-in. (8.9-cm) 340-Mbyte hard
disk drive (300 Mbyte when formatted) and a 150-Mbyte or 250-Mbyte
streaming tape drive with a removable cartridge.
An SLM is used to
• bootload the CM and the MS from disk or tape
• load an office image into the inactive CM
• dump an image to disk
• perform offline transfers from tape to disk or from disk to tape
The SLMs connect directly to the CM system bus. The active CM can
communicate directly with either SLM.
— write only
— read/write
• dynamic numbers of files
• efficient support for very large files
• support for cached and uncached files
• hierarchical directories
File security
Access to any file can be restricted using the file security system. Access is
determined by the security attributes of both the file and the user class. File
attributes are read, write, or execute. User class attributes are file owner, file
group, or everyone.
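The check described above can be sketched as a small function. The document lists only the attribute sets; the matching rule used here (pick the most specific user class that applies) and all names are assumptions for illustration.

```python
# File attributes: read/write/execute; user classes: owner, group, everyone.
# The "most specific class wins" rule is an assumption, not a stated DMS rule.

def allowed(user, file_meta, op):
    """Return True if 'user' may perform operation 'op' on the file."""
    if user == file_meta["owner"]:
        cls = "owner"
    elif user in file_meta["group"]:
        cls = "group"
    else:
        cls = "everyone"
    return op in file_meta["perms"][cls]

report = {
    "owner": "maint1",
    "group": {"maint1", "maint2"},
    "perms": {"owner": {"read", "write"},
              "group": {"read"},
              "everyone": set()},
}
assert allowed("maint1", report, "write")       # owner may write
assert allowed("maint2", report, "read")        # group member may read
assert not allowed("operator", report, "read")  # everyone: no access
```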
It is recommended that copies are maintained of all files held on system
disks, including the SLMs, in case of subsystem failure. Parallel recording
capabilities are provided for billing files.
DMS-bus
The DMS-bus consists of two duplicated MSs. Under normal operating
conditions, the MSs share the load, although each MS can support the entire
system load if necessary. Figure 1–43 illustrates the main components of the
DMS-bus.
Figure 1–43
DMS-bus components
FW-30113
(MS components: port interface, DMS-bus processor, and T-bus interface, joined by the transaction bus and processor bus; links to the DMS-core)
The MS shelf is equipped with cards at the front and corresponding paddle
boards at the rear. The cards share a common bus with the paddle boards.
Inter-MS links provide additional reliability within the frame transport
system (FTS). If the route calculated by the FTS is unavailable—for
example, because one of the nodes selected is out of service—inter-MS links
direct the message by an alternative route, if one exists. Without inter-MS
links, the message would be lost.
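The fallback behavior described above can be modeled as a toy routing function: if the MS chosen for a message is out of service, the mate MS forwards the message over an inter-MS link instead of discarding it. The topology and all names here are illustrative only, not the actual FTS algorithm.

```python
# Toy sketch of FTS fallback routing with inter-MS links (illustrative only).

def deliver(dest, chosen_ms, mate_ms, in_service):
    """Return the hop list used to reach dest, or None if undeliverable."""
    if in_service[chosen_ms]:
        return [chosen_ms, dest]                  # calculated route is usable
    if in_service[mate_ms]:
        return [mate_ms, "inter-MS link", dest]   # reroute via the mate MS
    return None  # without inter-MS links the message would be lost

status = {"MS 0": False, "MS 1": True}
assert deliver("LIM 0", "MS 0", "MS 1", status) == ["MS 1", "inter-MS link", "LIM 0"]
assert deliver("LIM 0", "MS 0", "MS 1", {"MS 0": True, "MS 1": True}) == ["MS 0", "LIM 0"]
```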
The DMS-bus is configured with two inter-MS links. Figure 1–44 shows a
typical configuration using inter-MS links.
Figure 1–44
Inter-MS links configuration – SuperNode SE
FW-30325
(MS 0 and MS 1 joined by inter-MS links)
Processor (NT9X13)
The MS processor system consists of the CPU card (NT9X13), which is
equipped with 16 Mbyte of on-board memory for storage.
System clock (NT9X53, NT9X54)
The system clock provides the clock source for the DMS SuperNode SE
switch. It consists of the system clock card (NT9X53) and the clock
interface paddle board (NT9X54).
The system clock card provides the internal Stratum 3 clock. Each office
has two sets of clock cards, one in each MS, that operate as internal master
and slave sources. The clock interface paddle board provides the direct
analog interface between DMS-bus and external clock sources (Stratum 1)
or remote clock sources (Stratum 2 or 2.5).
Mapper (NT9X15)
The mapper (NT9X15) performs logical-to-physical address translation for
messages routed between ports on the MS.
Port interface (NT9X17, NT9X23, NT9X62, NT9X69)
The port interface consists of optional combinations of the DS30 four-link
paddle board (NT9X23), the four or two-link subrate fiber paddle board
(NT9X62), the 16-link DS30 paddle board (NT9X69), and a range of MS
port cards (NT9X17). The subrate optical fiber links connect various
applications of the DMS SuperNode SE system, such as ENET. The DS30
paddle boards connect the MS to existing DMS-100 Family components,
such as the IOC.
Reset control (NT9X26)
The RTIF paddle board (NT9X26) monitors and decodes commands, and
passes them to the CPU in the form of control signals. It also monitors
processor signals, and relays them to the remote terminal or remote scanning
system.
• SMS
• SMR
Figure 1–45 shows an example of an ENET connected to a Series II PM
using fiber links.
Power supply (NT9X30, NT9X31)
The power supply is provided by two +5V power converters (NT9X30) and
two –5V power converters (NT9X31). One of each type of power converter
is located at each end of the shelf and provides power for half the shelf.
Figure 1–45
Example ENET connection to a Series II PM
FW-30146
(Fiber links from ENET paddle boards to NT6X40 paddle boards in the PM (for example, a DTC))
The bit rate is 2.56 Mbit/s, which is dictated by the need to transmit speech
using 8-bit pulse code at 8000 samples per second. The bit rate is derived
from the following formula:
32 channels × 10 bits × 8000 samples per second = 2.56 Mbit/s.
DS512 protocol
The DS512 protocol is the standard for internal optical fiber links in the
DMS SuperNode SE system. It has 512 channels, one of which is used for
link synchronization. DS512 provides more bandwidth and greater
packaging density than DS30; one DS512 link is equivalent to 16 DS30
links.
Subrate DS512 interfaces are used in various applications in the DMS
SuperNode SE system, as follows:
• to provide the CMIC links
• to link the MS shelf to the ENET shelf
• to link the MS to application and file processors
Subrate links can operate at the following rates: SR128 (128 channels),
SR256 (256 channels), and full DS512 (512 channels), depending on the
application.
The DS512 protocol is used on links between the DMS-core and the
DMS-bus. The maximum length of a DS512 link is 250 m (273 yd). The
DS512 link is based on short-wavelength optical technology using
light-emitting diodes (LED) and multimode fiber.
Software engineering
This chapter provides an overview of the modular software structure
employed on DMS-100 International switching systems. The software
packaging, development and delivery processes are also described.
Distributed processing
The DMS-100 Family of switches employs a distributed processing
architecture that uses several different processors:
• The DMS SuperNode computing module (CM) and message switch
(MS) are programmed in PROTEL, a high-level language developed by
Northern Telecom.
• The NT40 central control (CC) central processing unit uses a
microprogrammable processor that is programmed in PROTEL.
• The peripheral modules use a processor that is programmed in an 8085
assembly-level language.
• The XPMs use a processor that is programmed in PASCAL. XPM is a
term applied to the PMs based on the Extended Multiprocessor System
(XMS) also developed by Northern Telecom.
Although each has a different programming environment, these units
communicate by means of a simple interface. Messages are sent via serial
data links using DS-30, DS-512 and DS-1 protocols.
Under normal circumstances, the two central processing units (CPUs)
operate in sync (that is, both are simultaneously executing the same
instruction with the same data). Each has access to certain state information
in the mate central processing unit; therefore, fault detection (for example,
matching for loss of sync) can be carried out. In addition, this access
provides inter-processor communication for system maintenance software.
Software packaging
From the operating company perspective, an installed DMS-100
International software load operates and appears as a seamless system. Prior
to installation, the software load is customized and compiled specifically for
the switching system on which it will be installed, providing all services and
Software delivery
Approximately twice a year, Northern Telecom issues a Batch Change
Supplement (BCS) containing new DMS-100 features. Each BCS release
offers hundreds of software features, developed by over 3,000 designers,
testers, and support staff at four development sites. BCS releases are
constructed for each purchaser and delivered to DMS switching sites
throughout the world. New BCS releases are installed quickly and
cost-effectively while the system is fully operational, with minimal service
disruption.
One-night process
Northern Telecom’s BCS release is a “one-night” delivery process that
greatly simplifies and reduces the time required for software delivery. The
delivery process begins with a BCS package assembled specifically for the
DMS switch on which it will be installed. The software is then shipped to
the switch site and loaded into the inactive side of the fully-duplicated
DMS-100 processor. All customer-specific office data is retained and
transferred to the new software environment.
Figure 2–1
Internal structure of a module
(Interface section and implementation section)
facilities to make use of the more primitive. For example, once the ability to
communicate with a terminal has been implemented in the I/O system, the
file system procedures for reading or writing data can use this facility as a
primitive without being involved in the actual implementation of that
function.
Figure 2–2
Support operating system (SOS)
(Layers, top to bottom: application software; command interpreter; program loader; file system; I/O system; nucleus)
Directory system
The directory system provides a mapping between character strings and
unique integers in a certain range.
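A minimal sketch of such a mapping follows; the range size and the first-free allocation policy are assumptions for illustration, not the DMS implementation.

```python
# Directory system sketch: map character strings to unique integers in a
# fixed range, allocating the next free integer on first use.

class Directory:
    def __init__(self, max_ids):
        self.max_ids = max_ids
        self.ids = {}

    def lookup(self, name):
        """Return the unique integer for name, allocating one if needed."""
        if name not in self.ids:
            if len(self.ids) >= self.max_ids:
                raise RuntimeError("directory range exhausted")
            self.ids[name] = len(self.ids)
        return self.ids[name]

d = Directory(max_ids=1024)
a = d.lookup("DIALTONE")
b = d.lookup("RINGING")
assert a != b                      # distinct strings, distinct integers
assert d.lookup("DIALTONE") == a   # same string, same integer
```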
Pool allocator
The pool allocator provides facilities for allocating and de-allocating pools
of storage from which items of a single, specific type may be allocated.
The use of a pool allocator reduces the memory overhead as compared to
allocating individual items using the storage allocator. To minimize the
blocking of processes requesting items from the same pool, the size of the
pool is determined (by the user module, and supplied to the pool allocator as
a parameter) on the basis of the type and usage of the particular resource to
be allocated from the pool. If blocking still occurs, it is resolved by
another facility in SOS, called the flag system, which is described below.
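The idea can be illustrated with a minimal free-list pool; the class and its interface are assumptions for illustration, not SOS code.

```python
# Pool allocator sketch: a pool holds pre-created items of one type, so
# allocation and de-allocation just move items on and off a free list,
# which is cheaper than general-purpose storage allocation.

class Pool:
    def __init__(self, factory, size):
        # size is chosen by the user module based on expected usage
        self.free = [factory() for _ in range(size)]

    def alloc(self):
        if not self.free:
            return None  # caller would block / wait on the flag system
        return self.free.pop()

    def release(self, item):
        self.free.append(item)

pool = Pool(factory=dict, size=3)
items = [pool.alloc() for _ in range(3)]
assert pool.alloc() is None        # pool exhausted
pool.release(items[0])
assert pool.alloc() is not None    # item reusable after release
```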
Scheduler
The scheduler is responsible for sharing the CPU among all the processes in
the system. This allocation is done on the basis of the priority assigned to
each process, as well as the availability of processes to run as they wait for
certain events and as these events take place.
Timing facilities
Timing facilities track the progress of real-time and are used to time running
processes as well as processes waiting for timed events.
Message system
The message system is used for inter-process communication. Information
is transferred by the sending process posting the information (called a
letter) to a mailbox, and the receiving process retrieving the information
from the mailbox. The receiver is delayed appropriately should it attempt
to retrieve a letter before it has been posted.
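The mailbox behavior above maps naturally onto a blocking queue; this sketch uses Python threads purely as an analogy for SOS processes.

```python
# Mailbox sketch: the sender posts a letter; a receiver that arrives first
# is delayed (blocks) until the letter has been posted.

import queue
import threading

mailbox = queue.Queue()
received = []

def receiver():
    received.append(mailbox.get())  # blocks until a letter is posted

t = threading.Thread(target=receiver)
t.start()                                  # receiver waits on the mailbox
mailbox.put("letter: connect channel 7")   # sender posts the letter
t.join()
assert received == ["letter: connect channel 7"]
```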
Synchronization primitives
Synchronization primitives regulate access to shared data by several
processes to prevent the data from getting scrambled due to unregulated read
and write attempts. They are also used to control the allocation and release
of finite resources, by tracking the number of available resource units.
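The resource-counting use of these primitives corresponds to a counting semaphore; Python's `threading` module is used here only as an analogy for the SOS primitives.

```python
# Counting-semaphore sketch: the counter tracks available resource units,
# so allocation fails (or the caller waits) when no unit is free, and a
# release makes a unit available again.

import threading

sem = threading.BoundedSemaphore(2)     # two units of a finite resource
assert sem.acquire(blocking=False)      # unit 1 allocated
assert sem.acquire(blocking=False)      # unit 2 allocated
assert not sem.acquire(blocking=False)  # no units left: caller would wait
sem.release()                           # one unit returned
assert sem.acquire(blocking=False)      # a unit is available again
```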
Log system
When a software subsystem detects an event that it wants to report to the
outside world, it may compose a report describing the event and pass it to
the log system, which stores it in memory for later retrieval or outputs it
immediately to one or more output devices.
CC to peripheral messages
As required, CC software sends an outgoing message to a device; for example:
• a call process telling a network module to establish a connection
• a command interpreter (CI) process sending a line of output to a terminal
(via a device controller)
• a maintenance subsystem requesting a trunk module to perform a
self-diagnostic test.
The affected software composes the appropriate message and invokes the
I/O system to transmit that message to a particular peripheral device,
referred to internally as a node. The I/O system selects a route to that node
from its route-table, and then passes the message to the CMC, instructing it
to send the message via the selected route.
If both CMCs are unavailable (when they are busy sending other messages),
the output request is placed in a queue and the I/O system sends the message
when a CMC becomes available.
The transmission across the CPU-CMC nodes is interrupt driven, that is,
when the outgoing message buffer of a CMC is empty, it posts an interrupt
for the CPU by setting an outgoing message buffer empty (OMBE) bit in a
four-bit interrupt register. The interrupt is handled by the CMC interrupt
handler in the I/O system. The CMC interrupt handler then releases
the next queued message into the outgoing message buffer of the CMC
posting the interrupt.
From the CMC outwards, the transmission of messages between any two
nodes is regulated by an internal messaging protocol. The sending node first
transmits a control byte “may I send” (MIS) to the receiving node. The
receiving node responds by returning a control byte “send”. If the “send”
byte is received in time, the sender transmits the message, followed by a
checksum over all the bytes sent. The checksum is recalculated by the
receiver and compared with the checksum received.
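The handshake just described can be sketched as follows. The control-byte values and the one-byte modular checksum are assumptions; the document names the control bytes but does not specify their encodings or the checksum algorithm.

```python
# Sketch of the internal messaging handshake: "may I send" (MIS), "send",
# then the message followed by a checksum over the bytes sent.

MIS, SEND = 0x01, 0x02  # illustrative control-byte values (not documented)

def checksum(data):
    """One-byte modular checksum - an assumed algorithm for illustration."""
    return sum(data) & 0xFF

def transmit(link, payload):
    link.append(MIS)                # sender asks: "may I send"
    link.append(SEND)               # receiver grants: "send"
    link.extend(payload)            # message bytes
    link.append(checksum(payload))  # checksum over the bytes sent

def receive(link):
    assert link[0] == MIS and link[1] == SEND
    payload, cksum = bytes(link[2:-1]), link[-1]
    # receiver recalculates the checksum and compares it
    return payload if checksum(payload) == cksum else None

link = []
transmit(link, b"\x10\x20\x30")
assert receive(link) == b"\x10\x20\x30"
```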
The file system maintains information about all devices that are supported as
well as all files in the system. For each device, the location of the physical
file system procedure supporting it is kept. Each file has specific
information in a file control block:
• device type
• file name
• file ID
• file attributes
• access attributes (specify whether access is sequential or random, and
whether it is for reading or writing).
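As a rough sketch, the file control block could be modelled as a record holding the fields listed above; the names, types, and defaults here are illustrative assumptions, not the actual PROTEL declarations.

```python
from dataclasses import dataclass, field

@dataclass
class FileControlBlock:
    device_type: str                 # kind of device holding the file
    file_name: str
    file_id: int
    file_attributes: set = field(default_factory=set)
    sequential_access: bool = True   # access attribute: sequential or random
    write_access: bool = False       # access attribute: reading or writing

# Hypothetical example entry for a file on a tape device:
fcb = FileControlBlock("tape", "OFCIMAGE", 7)
```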
In addition to providing the ability to perform a file operation on any one of
the devices supported, the structure of the file system allows for the addition
of a new physical device to the system without having to make any changes
to the logical file system. The device dependent module for the new
device is designed as an agency, compiled, linked and loaded into the switch.
As soon as the new module is run, it calls on a special procedure in the gate
(the logical file system), and passes it the addresses of the procedures for
performing file operations on the new device. The logical file system
records the new addresses in its table, and thereby enables itself to perform
file operations on the newly added device.
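The gate mechanism described above amounts to a run-time dispatch table. The sketch below is a minimal model under assumed names; it is not the actual logical file system interface.

```python
class LogicalFileSystem:
    """Gate module: dispatches file operations to device-dependent modules."""

    def __init__(self):
        self._ops = {}  # device type -> {operation name: procedure}

    def register_device(self, device_type, procedures):
        """The special procedure a newly loaded module calls, passing the
        addresses of its procedures for file operations on the new device."""
        self._ops[device_type] = procedures

    def perform(self, device_type, operation, *args):
        """Perform a file operation on any supported device via its table."""
        return self._ops[device_type][operation](*args)

lfs = LogicalFileSystem()
# A new tape module, once loaded and run, registers its own procedures:
lfs.register_device("tape", {"read": lambda n: b"\x00" * n,
                             "write": lambda data: len(data)})
```

Once registered, the gate can perform file operations on the new device with no change to its own code.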
The structure of the DMS-100 Family file system is shown in figure 2–3.
As shown, the file system is structured as a logical file system module that
supports procedures for all the operations described above and which acts as
a gate module to device dependent modules called the physical file system.
The device dependent modules interact with the devices to perform the
desired function.
Figure 2–3
Physical devices supported by the file system
Command interpreter
The command interpreter (CI) of the operating system performs the input
function for the man-machine interface to the switch (MAP). It provides a
facility for reading and interpreting user commands and taking appropriate
actions. It reads lines typed at a MAP, analyzes them, invokes command
programs as needed, and evaluates the parameters required by these
commands.
Program loader
The program loader is used to load, modify, or unload programs and
program increments.
Flag system
In DMS-100 Family systems, the use of exhaustible system resources, such
as mailboxes, processes, or MF receivers, is controlled through the use of
flags. A flag associated with a resource indicates the number of items of
that resource that are in use or in demand. Three facilities are provided by the
flag system:
• time-outs to prevent blocking or deadlock
• non-busy waiting
• first-come-first-served service of waiting processes.
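A toy model of a flag, showing the three facilities listed above as tracked state (a real flag would actually block and wake processes). The interface and all names are assumptions for illustration.

```python
from collections import deque

class Flag:
    def __init__(self, total):
        self.free = total        # items of the resource not yet in use
        self.waiters = deque()   # FIFO queue of waiting process ids

    def request(self, pid):
        """Grant a resource item if one is free, else queue the process."""
        if self.free > 0:
            self.free -= 1
            return True
        self.waiters.append(pid)  # first come, first served
        return False

    def release(self):
        """Return an item; hand it straight to the longest-waiting process."""
        if self.waiters:
            return self.waiters.popleft()  # woken in FIFO order
        self.free += 1
        return None

    def timeout(self, pid):
        """Time-out facility: remove a waiter so it cannot block forever."""
        self.waiters.remove(pid)
```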
Figure 2–4
Logical and physical tables
(Diagram: a logical table with key k and fields A, B, C, and D holding values
a, b, c, and d is realized by physical tables that store fields A, B, and C
directly, while the value d of field D is calculated by a procedure F(key).)
The logical schema provides a higher-level view of data. The data at this
level is presented to the users in a canonical form regardless of the actual
internal storage mechanism used. The data is once again seen as residing in
a number of rows, referred to as logical tuples, and a number of columns
identifying fields of the tuples. The first field of the tuple is a key which
uniquely identifies that tuple.
The external or customer schema provides the end-user view of data. The
data is represented in the character form, rather than the binary form used at
the logical schema level. Users at this level are provided with a number of
commands for making various queries as well as manipulating data.
Figure 2–5 on page 2–17 shows the relationship of various software modules
which together constitute the DMS-100 Family database system. As shown,
these utilities build a layered structure whereby each level uses the facilities
provided by the lower level.
Figure 2–5
DMS-100 database system
(Diagram: at the customer schema level, end users work through the table
editor and formatter; at the logical schema level, data-administration users
work through the logical database utility and key-mapping facility; the
internal and physical schema levels lie below.)
Starting at the bottom of the figure, the time critical agencies access the data
directly at the physical schema level. These agencies have complete
knowledge of the actual structure of data; in fact, these agencies have defined
this set of data using the most efficient storage mechanisms they deemed fit
in each case.
At the next higher level, an internal database utility is provided to serve as a
procedural interface for user modules at this level. This module performs
the mapping of data from the physical view of data to the internal view of
the same. Any agency wishing to make its data known to the users at the
internal level defines the internal view, provides the read, write and
write-nil-tuple procedures for mapping, and binds the same to the
internal-database gate module.
At the logical schema level, a logical database utility performs mapping
between the logical view of data and the internal view of data. At this level,
ordering information about the tuples is retained (key-mapping facility).
Therefore, requests such as “get the first tuple” or “delete the next tuple” can
be serviced at this level.
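The key-mapping facility can be pictured as an ordered list of keys kept beside the unordered internal view, so that requests such as “get the first tuple” or “delete the next tuple” can be serviced. The structure below is an illustrative assumption, not the actual utility.

```python
class LogicalTable:
    def __init__(self):
        self._keys = []     # ordering information about the tuples
        self._tuples = {}   # key -> tuple: the (unordered) internal view

    def insert(self, key, tup):
        self._keys.append(key)
        self._tuples[key] = tup

    def first(self):
        """Service a 'get the first tuple' request."""
        return self._tuples[self._keys[0]]

    def delete_next(self, key):
        """Service a 'delete the next tuple' request."""
        nxt = self._keys[self._keys.index(key) + 1]
        self._keys.remove(nxt)
        return self._tuples.pop(nxt)
```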
At the highest level, the mapping between the external and the logical
schemas is provided by the table editor. A very powerful set of table
manipulation commands is also implemented within the table editor. The
formatter provides mapping between the external representation and the
internal (or PROTEL) representation of data. Finally, there is the data
dictionary module, which supports a data dictionary embodying information
about all the data types in the system.
Call processing applications software
Call processing applications software handles the functions that are specific
to each type of call processing agent or to each type of call.
Most call processing applications are implemented in standard call
processing applications architecture. Custom calling features that cannot be
implemented easily in standard architecture are implemented in the Feature
Processing Environment.
The standard call processing applications software has a hierarchical,
layered structure. The elements of the structure are classes of procedures.
Within a class, all of the procedures are functionally similar. Each
procedure within a class is tailored to the logical and physical characteristics
of a particular agent or type of call. All the procedures required to support
call processing functions for a type of agent or type of call can be
considered an aspect.
The general nature of the structure is the same for all calls, but the details of
the structure depend on the agents involved and the type of call. At the top
level of the hierarchy, the starter procedure controls the flow of the call,
based on the message being processed, the call state and the type of agent or
type of call.
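A starter can be pictured as a dispatch on the combination of message, call state, and agent or call type. The table contents below are invented placeholders for illustration, not actual DMS processors.

```python
def starter(message, call_state, agent_type, processors):
    """Control the flow of the call by dispatching to the processor
    registered for this (message, call state, agent type) combination."""
    return processors[(message, call_state, agent_type)]()

# Hypothetical processor table; real selections are by thread/crossthread.
processors = {
    ("origination", "idle", "line"): lambda: "setup",
    ("on-hook", "talking", "line"): lambda: "disconnect",
}
```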
The second level of the hierarchy consists of a set of processor procedures.
Each processor controls an individual transaction or a set of closely related
transactions. Processors invoke functions, which reside at the third level.
There are several classes of processor procedures:
Setup processors coordinate the setup phase of the call. The setup phase
of the call includes all the transactions from origination through to signaling
the terminating agent. In most cases, the setup processor calculates the
cross-thread of the call from the matrix-to-crossthread table.
When the originating agent is a trunk, part of the usual function of the setup
processor is performed by the originating allocator. The trunk originating
allocator coordinates origination, digit collection, and translation; the trunk
setup processor coordinates the selection and signaling of the terminating
agent. Setup processors are selected by the thread of the originating agent
and are invoked by starters.
Cross processors establish the connection between the originating and
terminating agents. If a receiver is involved in the call, the cross processor
releases it. If a DTMF sender is required for outpulsing, the cross processor
obtains a DTMF sender and sends messages to the Network Modules (NM)
and Peripheral Modules (PM), directing them to connect a speech path
between the originator and the DTMF sender through the NMs. Cross
processors are selected by cross thread and are invoked by setup processors.
Cross processors control only a portion of a transaction.
Recall processors process answer and flash messages. They record
information for billing and route flash messages to custom calling features
(for example, three-way calling, call waiting, etc). Recall processors are
invoked by starters. They are selected by thread if only one agent is
involved in the call, or by crossthread if two agents are involved.
Disconnect processors take down connections and complete the
recording of billing information. Disconnect processors are invoked by
starters. They are selected by thread if only one agent is involved in the call,
or by crossthread if two agents are involved.
Error processors take down calls and idle the agents involved when
errors occur. Error processors are invoked by starters. They are selected by
thread if only one agent is involved in the call, or by cross thread if two
agents are involved.
Processors continue to execute until control is transferred to another
processor, or until the call is completed or condensed. Processors invoke
functions and supervisors to perform their call processing tasks.
Functions
Functions usually reside at the third or lower levels of the call processing
applications software hierarchy. Functions are invoked by processors to
perform a variety of call processing tasks. Functions are gated through a
table of procedure variables.
Supervisors
Supervisors are invoked by cross processors to perform some of the
functions associated with establishing the telephony connection and
signaling the terminating agent. Supervisors compose and send messages,
directing PMs to perform the following operations:
• Give audible ringback tone to the originator
• If the terminator is a line, apply physical ringing to the terminator
• Transmit and detect integrity for the duration of a call
• Report on-hook or flash signals from the agents involved in a call;
report answer signals also, if required.
Supervisors are gated through a table of procedure variables and are selected
by crossthread. Supervisors continue to execute until they return control to
the invoking processor.
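Gating through a table of procedure variables, selected by crossthread, can be sketched as below; the crossthread names and supervisor bodies are invented for illustration.

```python
def line_to_line_supervisor():
    # Would direct PMs to give ringback and apply physical ringing.
    return ["ringback to originator", "physical ringing to terminator"]

def line_to_trunk_supervisor():
    # No physical ringing when the terminator is a trunk.
    return ["ringback to originator"]

# The gate: a table of procedure variables indexed by crossthread.
supervisors = {
    "line-line": line_to_line_supervisor,
    "line-trunk": line_to_trunk_supervisor,
}

def invoke_supervisor(crossthread):
    """Select the supervisor by crossthread and invoke it through the gate."""
    return supervisors[crossthread]()
```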
Fault detection and system recovery
In order to carry the levels of traffic demanded of modern switching
systems, the DMS-100 Family system employs an architecture that
distributes the control functions for call processing to several control
centers. To provide the reliability demanded of today’s switching systems
(working non-stop 24 hours a day), these control centers are made of the
most reliable components and arranged in a hierarchy with clearly defined
responsibilities for fault detection and system recovery. Component
redundancy has been applied to this structure in a selective, cost effective
manner, both to the control centers themselves and the communication links
between them, permitting faults in the system to be readily diagnosed and
service to be maintained in the event of a fault in one of them. Faults in the
network switching matrix
are detected by peripheral modules. The network message controllers are
checked by both the peripheral modules and the CC.
Figure 2–6
DMS-100 family (NT40) – major subsystems
(Diagram: the peripheral subsystem (ILCM, ILGC, and IDTC modules)
connects through duplicated network planes 0 and 1, each containing network
modules 0–31, to the central control with its I/O controllers, MAP positions,
and tape drives.)
The active CPU is also capable of resetting its inactive mate when required. The activity
state is determined by a single flip-flop that is cross-coupled between the
CPUs; it is designed with a minimum of circuitry in order to reduce its own
probability of failure. To facilitate system maintenance, a processor can
manually be forced into the inactive state, provided that its mate is fault-free.
To ensure integrity of the communication between the CC and the operator
environment, the I/O controllers use two fault detection mechanisms. One is
the message system protocol, and the other is a system of self checks
incorporated in the I/O and device controller firmware. It is possible that
I/O devices of a similar type may be assigned different functions. Should
one fail, its function may be reassigned to another similar device, either
automatically or manually, depending on the importance of the function. For
example, toll billing tapes may be reassigned by software, whereas certain
display terminals may not.
The interdependence of network and peripheral fault detection mechanisms
is shown in figure 2–7, which identifies the modules associated with every
call. The fault detection mechanism between modules is included in the data
format of the speech links. Two additional bits appended to each 8-bit
speech sample are assigned to channel supervision messages and parity.
Although they provide fault coverage of link and network speech paths, they
are generated and checked in the peripheral modules.
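The 10-bit channel word implied above (an 8-bit speech sample with supervision and parity bits appended) can be modelled as follows; the bit layout and the even-parity convention are assumptions for illustration.

```python
def encode(sample: int, supervision: int) -> int:
    """Append a supervision bit and an even-parity bit to an 8-bit sample."""
    assert 0 <= sample < 256 and supervision in (0, 1)
    word = (sample << 1) | supervision
    parity = bin(word).count("1") % 2   # even parity over the 9 data bits
    return (word << 1) | parity

def check(word: int):
    """Receiver side: verify parity, then recover sample and supervision."""
    if bin(word).count("1") % 2 != 0:   # total number of ones must be even
        return None                     # parity failure: report a fault
    return (word >> 2) & 0xFF, (word >> 1) & 1
```

A single corrupted bit anywhere in the word makes the parity check fail, which is how link and network speech-path faults are exposed.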
As shown in figure 2–7, the network planes are duplicated to facilitate fault
recovery; outputs from both planes appear at each peripheral. Conversely,
each peripheral feeds both planes simultaneously with the same signal. In
the event of a fault in one of the network planes, the peripheral module that
detects the fault simply elects to receive the call from the corresponding
network module in the other plane.
Providing network plane selection on a per-call basis offers a significant
increase in the resilience of the system to multiple faults. Distributing the
fault detection mechanisms to all the peripheral modules provides smooth
recovery from simple but far-reaching faults such as network power failures.
A network module can support approximately 1,900 calls at any one time,
but if that module fails, each of the several peripheral modules connected to
it is required to recover only its own calls. Operating independently, these
peripheral modules reconfigure the system to use the remaining good
network plane for the calls that require it. (It is the simultaneous
transmission of speech and supervision signals on both planes that allows
this reconfiguration of each call path.) Once the reconfiguration has been
accomplished and service restored, the peripheral modules then report the
problem to the CC for maintenance.
If any call encounters a failure in the second network plane, the peripheral
that detected the failure would not switch back to the first plane. Instead, it
would inform the CC call processing software that the path initially
specified for the call could not be sustained.
In addition to detecting network faults, the parity and supervision
mechanisms are also used to detect faults in links and in the peripheral
modules themselves. In these instances it is the other peripheral associated
with a call that reports a difficulty to the CC, which then takes appropriate
action.
Figure 2–7
Network and peripheral fault detection
(Diagram: each peripheral transmits supervision and parity toward the line
or trunk circuit on both planes; a supervision receive and parity check stage
feeds a plane selector in the peripheral, which chooses speech from the
network module in plane 0 or plane 1.)
As well as relying on other modules for fault detection, every network and
peripheral module includes basic fault detection and location mechanisms
unique to itself. For example, all modules contain sanity timers, which are
continually reset by correct cyclical operation of the controller. Should the
controller software enter a tight loop for any reason, the timer expires and
causes the controller to enter a reset state. This state is detected by the next
module upstream towards the CC, which informs the central control of the
reset condition. These sanity timers ensure that faulty peripheral and
network modules can never become blind to messages arriving from the CC.
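A sanity timer can be modelled as a counter that correct cyclical operation of the controller keeps resetting; if the controller stops kicking it (a tight loop), the timer expires and forces a reset state. The tick-based interface below is an illustrative assumption.

```python
class SanityTimer:
    def __init__(self, limit):
        self.limit = limit            # ticks allowed between resets
        self.count = 0
        self.controller_reset = False

    def kick(self):
        """Called by correct cyclical operation of the controller."""
        self.count = 0

    def tick(self):
        """Hardware clock tick; expiry forces the controller into reset,
        which the next module upstream detects and reports to the CC."""
        self.count += 1
        if self.count > self.limit:
            self.controller_reset = True
```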
The fact that the peripheral and network controllers reside in a small number
of circuit packs inherently simplifies the location of faults. In the network
subsystem up to eight circuit packs may be involved in the speech path of a
call, four in each module as shown in figure 2–8. Faults detected during a
call by the parity or supervision mechanisms can be pinpointed by a
system-generated test code. This code is automatically inserted into the
speech path and monitored at certain strategic points. An absent or incorrect
code at any point identifies the fault location.
Figure 2–8
Network fault detection
(Diagram: within the network module, a hardware test facility inserts a test
code into the switching matrix; a multiplexer and detector monitor the code
and pass a failure indication to the control.)
Figure 2–9
Example of system recovery (NT40)
(Diagram: central processing units 0 and 1, central message controllers 0
and 1, the I/O controller, network planes 0 and 1, and peripheral modules 1
and 2, with faults marked at CPU 1, the link between CMC 0 and the I/O
controller, and network plane 1.)
The diagram above depicts a DMS switch in routine operation. CPU 1 directs the activities of
the switch, sending messages through both CMCs to the subsystems below. Call connections
are maintained on both network planes. When CPU 1 sustains a failure, CPU 0 takes over
control of the activities of the switch, and processing continues. A fault in the link between
CMC 0 and the I/O controller requires that messages between the CPU and the controller be
routed exclusively through CMC 1. When network plane 1 also breaks down, the DMS
switch continues to operate, with network plane 0 handling all call connections.
The first fault considered is a failure in the arithmetic unit of CPU 1, the
active processor in this example. The fault is detected by the matching logic
between the two processors and this generates a mismatch interrupt when
the faulty hardware is used. The two CPUs then automatically run through a
maze sequence located in microstore. CPU 0 successfully completes the
maze, but CPU 1 fails and enters a firmware loop.
After completing the maze, CPU 0 attempts to communicate with CPU 1 but
fails because CPU 1 is held in its loop. The activity switch timer that started
when the mismatch occurred will time-out and force CPU 0 into the active
state. Information for fault diagnosis is stored for subsequent action and
switching control is returned to the interrupted software routine now running
only in CPU 0.
Diagnosis is performed by a craftsperson using the Maintenance and
Administration Position (MAP) to generate a circuit pack replacement list.
Using this list, the faulty CPU is repaired and returned to service by MAP
commands that re-synchronize the CC.
Link faults manifest themselves in various ways such as message time-outs,
invalid control bytes and bad checksums. For this example (see figure 2–9)
a time-out waiting for a message handshake occurs on an incoming message.
The I/O controller closes the link so that messages are no longer routed over
it and then sets an error indication in the message that it sends over the
alternate link.
When the message arrives at the CC, the appropriate maintenance software
is informed of the link failure. It removes the link from service and informs
the I/O routing software that this link is no longer usable. In addition, the
CMC is told to stop scanning the link for incoming messages.
Communication with the I/O controller and devices must now use the
alternate link.
Immediately, and periodically thereafter, the CC tests the out-of-service link
to see whether it can be returned to service. The CC begins this test by
placing the link in a restricted maintenance state that allows only
maintenance traffic to pass over the link. The test is then carried out by
priming the I/O controller and CMC link control functions to accept such
maintenance messages. Proper setting of the route bits then ensures that the
message travels over the link and is looped back by the receiving controller.
The normal routing algorithms and automatic rerouting features of the
system are therefore bypassed in this test. If such a test passes, the CMC,
I/O controller, and operating system are informed that the link is again
available for normal message traffic. In this fashion the system recovers
from transient fault conditions.
If the fault is hard, the system will not be able to return the link to service,
and maintenance personnel are informed of the fault via the MAP. Using MAP
facilities, they identify and replace the faulty component and return it to
service. Normal operations of the switch are unaffected by either the failure
or the repair action.
With both a CPU and a CMC-1 I/O controller link out of service as shown in
figure 2–9, let us assume a connection memory failure in a network module
occurs. As noted earlier, network connections are established in both planes
but the peripheral modules select only one plane from which to receive the
speech samples. Typically half the calls in progress are arbitrarily
completed through the even plane and the remainder through the odd plane.
When the connection memory failure occurs, the peripheral module detects
the loss of the speech path by the message interruption on the supervision
channel associated with the call. The peripheral immediately selects the
other plane to receive speech for that call and generates a maintenance
message to the CC, indicating the channel and link that are experiencing the
problem. The call is sustained in the opposite plane and the subscriber
remains unaware of the fault. When the fault message arrives in the CC,
information on the bad connection is frozen in software, so that even if the
call is disconnected, the connection can be diagnosed. The diagnosis is
performed, using the test code mechanism previously described, to pin down
the fault to a subsection of the network planes or interconnections.
Upon confirmation of the memory fault, the network module must be
removed from service. All new calls are connected through the good plane
and the message system is reconfigured accordingly.
The CC accomplishes this by updating the routing information, closing the
CMC links to the affected network module and closing the peripheral links
originating in that network module. At this point the detection and
reconfiguration process is complete. The repair and return to service steps
are again accomplished using the MAP. The same test procedures used in
diagnosing the original problem are used to verify the repair. These
examples demonstrate the ability of the DMS-100 Family switch to remain
in operation during a period of major component faults.
System recovery controller software
Automatic recovery capabilities on SuperNode-based DMS-100 Family
switches include the system recovery controller (SRC), a dedicated software
utility which optimizes recovery of system peripherals. When the SRC
detects loss of service on one or more peripherals, it automatically initiates
the appropriate recovery sequence, including reinitialization, software
reload, and return to service.
The SRC will make several attempts to recover a PM if required. With each
subsequent recovery attempt, the SRC performs a more detailed analysis.
Automatic software reload only occurs in the first recovery pass if the
system detects loss or corruption of load.
In addition, in cases where a large group of peripherals require recovery at
the same time (for example, after loss of power), the SRC broadcast-loads
groups of peripheral modules of the same type (see note). The SRC also
ensures that, in the event of a large outage, the affected system elements are
automatically recovered in the most expedient manner, removing the need
for decision-making and manual action from operating company personnel.
Note: Broadcast loading capability is supported on series 2 PMs, such as the
International Digital Trunk Controller (IDTC), International Line Group
Controller (ILGC) and the International Line Concentrating Module (ILCM).
These PMs must be equipped with NT6X45BA (or newer) processor cards to
enable broadcast loading. Older types of peripherals, such as the maintenance
trunk module (MTM) are single-loaded. For detailed information on specific
SRC recovery activities supported for each type of PM, refer to Lines, Trunks
and Peripherals Recovery Procedures, 297-1001-587.
SRC functions
The SRC coordinates the recovery activities of various subsystems outside
of the DMS-Core. These subsystems include the series I and II PMs.
Figure 2–10 shows how the SRC interfaces with the DMS-Core and with the
subsystems.
The SRC performs the following functions:
• Its dependency manager enforces inter-subsystem dependencies. Before
the SRC recovers a PM, the subsystems on which the PM depends must
be operating.
• The group manager groups subsystems together for broadcast-loading.
Common commands are sent to a group of PMs at the same time, instead
of one after another.
• The concurrent activity manager balances the amount of recovery work
against other activities occurring on the switch. The SRC attempts
recovery of as many critical subsystems as the DMS-core operating
system will allow.
• The SRC initiates recovery applications and monitors each step in the
application to ensure that the application completes as quickly as
possible.
Two separate activities are coordinated by the SRC for series II XMS-based
PMs (XPM) and line concentrating modules (LCM):
• system recovery of PM nodes following core restart or core switch of
activity, using the dependency manager
• loading of PM units after a loss of load has been detected by system
maintenance, using the group manager
The only connection between the two activities is that maintenance on a PM
initiated through the dependency manager can lead to the loading of one or
more PM units.
For example, after a total office power outage, the dependency manager
begins to return a PM to service after completion of the reload restart. The
system maintenance task that is performing the return-to-service detects the
loss of load and initiates the reload request for the PM units to the SRC.
Figure 2–10
System recovery controller
(Diagram: the SRC resides in the DMS-core alongside the switch operating
system and its database, and interfaces with the subsystems outside the core.)
SRC conditions
The following prerequisite conditions must be met for SRC-coordinated
recovery of PMs:
• all equipment must have power
• NT6X45BA or newer processor cards must be installed in series II
XPMs to allow automatic broadcast-loading
• all PM load names (including series I PM loadnames) must be datafilled
in table PMLOADS
Series II XPMs with pre-NT6X45BA control cards are single-loaded rather
than grouped for broadcast-loading.
SRC triggers
The following events trigger the SRC to begin recovery of subsystems if
necessary:
• warm restart of the core
• cold restart of the core
• reload-restart of the core
• loss of load in a PM
• manual RESTART SWACT, ABORT SWACT, or NORESTART
SWACT of the core
Additional SRC triggers to reload series II XMS-based PMs
There are four additional triggers for the SRC to reload series II XPMs:
• the XPM reports a memory parity error during a periodic audit by the
switch operating system
• the ROM/RAM query step in the series II XPM return-to-service task
detects a loss of load
• the failure two times in a row to initialize the series II XPM during a
return-to-service task, indicating that something is wrong with the
software load
• the ROM/RAM query step in the series II XPM system-busy task detects
a loss of load
Core restarts
During a restart, the switch operating system reinitializes itself.
Reinitialization restores both the operating system software and the
subsystems outside the DMS-core to a known, stable state.
A restart of the system includes initialization of the modules in the
DMS-core, initialization of the PMs, and restoration of services. The period
of a restart is the time taken to recover the entire system to the point that all
services are available again. A flashing A1 appears on the reset terminal
interface (RTIF) when initialization of the software on the DMS-core is
complete. The recovery of the entire system continues after the flashing A1
appears.
The following list describes what happens to each PM during each type of
restart:
• A warm restart of the core is the least severe of restarts. XPMs are
audited and generally stay in service during a warm restart. During this
type of restart, calls in progress that have reached the talking state
continue. Calls that have not yet reached the talking state are
disconnected. Any calls that disconnect during the restart are
disconnected after the restart is complete and the billing data is recorded.
• A cold restart of the core is more severe than a warm restart. XPMs are
audited and generally stay in service during a cold restart. During this
restart, calls in progress that have reached the talking state retain their
connections during the restarts, but they may be disconnected if their
connections are reused by new calls after the restart. There is no record
made of calls in progress during a cold restart and no billing data is
recorded for these calls.
• A reload-restart of the core is the most severe restart. All PMs are
reinitialized during a reload-restart. All calls in progress are dropped,
and billing data for the dropped calls is lost.
Loss of load in a PM
Normally a loss of load occurs when a card (loaded with software) is
removed or the power to a card is interrupted. A PM becomes system busy
when a loss of load occurs. The SRC begins recovery when system
maintenance detects a loss of load.
Manual commands
The SRC reinitializes PMs if any of the manual commands RESTART
SWACT, ABORT SWACT, or NORESTART SWACT are used during an
upgrade of BCS software.
SRC dependency manager
Some recovery actions on objects depend on other objects being in a
particular state to support the action. The dependency manager of the SRC
manages object dependencies using the applicable set of dependencies for
the type of restart. Thus, the SRC dependency manager prevents failure due
to premature starts, and reduces recovery times.
Objects
An object is any entity in the DMS switch. An object can be
• physical, such as an ENET plane, an XPM, an IPML, or a set of lines
• a service, such as line trunk server (LTS) call processing
• software, such as entry code
• an event, such as the initialization of core software
Managing dependencies
The action on the dependent object must not proceed until the object
depended upon is in the required state. The dependency manager ensures
that the dependencies for an action on an object are satisfied before the
action is allowed to proceed.
Dependencies are specified for each action for each object. Examples of
dependencies in DMS include
• one part of the software that must initialize before another
For example, two XPMs that have the same load file name and that have
NT6X45BA controller cards, but have different CMR file names are put into
different groups.
XPM units that cannot be grouped with other XPM units for
broadcast-loading are single-loaded. This can happen if the XPM units do
not have the hardware to support broadcast-loading or if they cannot be
grouped with other units during dynamic grouping. Grouping occurs only
for XPMs that have NT6X45BA or higher controller cards. XPMs that do
not have NT6X45BA or higher controller cards are not grouped with other
XPMs even if the other XPMs use the same load files. The SRC still
coordinates single-loading for purposes of concurrency management.
Static and dynamic groups
PMs that can be grouped together are identified from datafill, which
specifies their load file names and their hardware configurations. These
groups (called static groups) are maintained automatically over time as the
datafill changes. During recovery, the SRC forms dynamic groups from the
subgroups based on which elements require recovery and availability of
resources to perform the recovery.
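Static and dynamic grouping can be sketched as follows: PMs sharing a load file name and broadcast-capable hardware form a static group, and the dynamic group is the subset that actually requires recovery. The datafill field names are assumptions for illustration.

```python
def static_groups(pms):
    """Group PMs by (load file name, broadcast-capable hardware)."""
    groups = {}
    for pm in pms:
        key = (pm["load"], pm["broadcast_capable"])
        groups.setdefault(key, []).append(pm["name"])
    return groups

def dynamic_group(group, needs_recovery):
    """During recovery, keep only the members that require loading."""
    return [name for name in group if name in needs_recovery]
```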
Automatic broadcast-loading
Automatic broadcast-loading sends a request to load software to several PMs
simultaneously.
After receiving a request to load a member of a static group, the SRC builds
a dynamic group, using a combination of two methods:
• querying the group members for loss of load using the ROM/RAM query
message (only on XPMs equipped with NT6X45BA or higher controller
cards)
• waiting for autoload requests from the group members over a short
period of time (the autoload requests are submitted after failure to
return-to-service, where the failure is suspected to be due to loss of load
or load corruption)
When a system-busy unit is identified as needing loading, the SRC is
notified. The SRC group manager creates a group of PMs that can be
broadcast-loaded.
When a group is formed, the SRC coordinates the broadcast-loading. If a
PM has only one unit requiring loading, then that PM is dropped from the
group and a regular load request is submitted for the unit. System resources
are saved by using broadcast-loading even when the group consists of only
one PM, because if both units need loading, unit 0 sends the load messages
to unit 1.
Automatic broadcast-loading is reattempted once if a group of PMs is not
recovered. If the second attempt also fails, the SRC attempts to recover the
PMs individually.
Note: Series I PMs do not support broadcast-loading.
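The retry behavior described above can be sketched as follows. This is a minimal illustration only; the function names (`recover_group`, `broadcast_load`, `load_single`) are hypothetical and are not actual SRC interfaces:

```python
def recover_group(group, broadcast_load, load_single):
    """Sketch of the SRC fallback policy: broadcast-load the group,
    retry once on failure, then recover each PM individually."""
    for _ in range(2):              # initial attempt plus one retry
        if broadcast_load(group):   # True when the whole group recovers
            return True
    # Both broadcast attempts failed: load the PMs one at a time.
    return all(load_single(pm) for pm in group)
```

With a broadcast loader that fails both times, the sketch falls back to single-loading each member of the group, matching the behavior described in the text.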
checking for software error conditions (such as ignoring a return code after
sending a message to a process), and because of unforeseen or subtle hardware
failure modes. In general, these errors can cause the gradual deterioration,
or even the loss, of some of these software data structures and this will
eventually affect the performance of the switching system. A software data
structure that has deteriorated may become unable to represent the state of a
hardware resource; in such a situation it could, for example, show a network
speech channel as in use when in fact it is not. Independent software data
structures, such as mailboxes, may simply become unavailable for use.
In conventional computer systems this situation is normally dealt with by a
periodic restart or by reloading the computer, usually at least once a day. In
telephony, however, this is not acceptable, and the DMS-100 Family audit
software is used to make periodic checks on the data structures for integrity,
reasonableness and, where appropriate, to see that they match the actual
hardware states that they are meant to represent. Usually any discrepancy is
logged and where possible, corrected. Where there are very serious,
unrecoverable discrepancies the office alarm may be sounded. In extreme
situations maintenance may cause a system restart and may eventually lead
to reload of the system from tape.
There are four levels of audits:
• audits of telephony equipment such as trunks, lines, DTMF receivers and
MF receivers
• audits of call processing software resources: data structures that chart the
progress of individual calls
• audits of operating system software resources: mailboxes, letters, and
memory
• call completion audits.
In addition to the test applied to individual receivers, there are also tests on
groups of receivers. The queues of idle receivers are represented by linked
lists. These lists are checked for integrity by following the whole queue to
look for loops or breaks, and then checking to see whether each member in
the list should really be in the list. If a defect in the queue is found, then the
audit takes recovery action by rebuilding the whole queue.
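The queue audit described above amounts to a loop-and-membership check over a linked list, with a full rebuild as the recovery action. The following is a simplified sketch; the data model is hypothetical (the queue is represented as a dict of next-pointers, and a missing entry is treated as the tail):

```python
def audit_idle_queue(head, next_of, is_idle, all_receivers):
    """Walk the idle-receiver queue looking for loops or bad members;
    rebuild the whole queue if a defect is found."""
    seen, node, defect = [], head, False
    while node is not None:
        if node in seen or not is_idle(node):
            defect = True           # loop detected, or member should not be queued
            break
        seen.append(node)
        node = next_of.get(node)    # missing entry is treated as the tail here
    if defect:
        # Recovery action: rebuild the queue from all receivers that are idle.
        return [r for r in all_receivers if is_idle(r)], True
    return seen, False
```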
In spite of the many levels of checking in individual parts of the system,
there may still be undetected faults that prevent the correct operation of call
processing. Because of this the call completion audit was designed. For this
audit a crude “goodness” measure is constructed, based on the ratio of the
number of calls successfully completed to the number of calls originated.
The numerical value of this ratio (maximum 100%), which is recomputed
every few minutes, is tracked over several cycles. If it falls below a certain
threshold an office minor alarm is sounded and a message is sent to the log.
If it falls below a lower (critical) threshold an office major alarm is sounded.
If this happens on several consecutive cycles a critical alarm is sounded and
the system is automatically restarted.
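The escalation logic of the call completion audit can be sketched as follows. The threshold values and cycle count here are illustrative only, since the document does not state them:

```python
def completion_audit(history, minor_thresh=0.90, critical_thresh=0.80,
                     restart_cycles=3):
    """history: list of (completed, originated) pairs, one per audit
    cycle, most recent last. Returns the action for the latest cycle."""
    ratios = [c / o if o else 1.0 for c, o in history]
    latest = ratios[-1]
    if latest >= minor_thresh:
        return "none"
    if latest >= critical_thresh:
        return "minor-alarm"       # below the first threshold: log and alarm
    recent = ratios[-restart_cycles:]
    if len(recent) == restart_cycles and all(r < critical_thresh for r in recent):
        return "critical-restart"  # persistent failure: restart the system
    return "major-alarm"
```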
DMS-100 Family peripheral software
The DMS-100 Family System has been designed as a distributed system
where the various peripherals perform the time-consuming, repetitive tasks
(scanning and supervision of trunks, etc.) under the control of the main
CPU. To provide a uniform vehicle for control of activities within
peripheral modules, each peripheral module contains a simulated computer,
the Telephony Peripheral Virtual Machine (TPVM). The architecture of this
simulated computer and its instruction set provide a flexible, high level
mode of control of telephone calls and other tasks carried out in peripheral
modules.
Central control communicates with peripheral modules through messages.
The messages from central control contain programs written in the TPVM
language and identify the terminal on which the program is to be executed. The
incoming messages to central control, on the other hand, are generally
reports of events (for example, trunk seizure, digits dialed, integrity failure).
A number of advantages are derived from the TPVM approach to DMS-100
Family software. A fairly important one is the containment of the effects of
change on software. A new feature will usually require only additions to
software resident in central control; redesign of a peripheral module or the
interface to take advantage of new technology will affect only software
resident in the peripheral module.
The stack instructions provide a means for performing logical and arithmetic
operations on data in the stack and for moving data from a stack to other
data areas of TPVM.
The communication instructions are used to compose messages and dispatch
them to central control. These instructions also control the generation of
messages to other peripheral modules over the supervision message channel,
the reception of messages from other peripheral modules, and actions
initiated upon reception of such messages.
Terminal control instructions control the hardware that connects the
telephone lines and trunks to the switch. A high level mode of control is
employed, which means that central control does not have to be concerned
with the details of terminal control.
The call control instructions generally initiate control of a phase of a
telephone call, for example, digit reception, or supervision of a talking
connection. The maintenance instructions provide the means for
maintenance of trunk interfaces, peripheral module hardware, and its
interfaces to the DMS-100 Family network.
The “execs” are short TPVM programs stored directly in the peripheral
module memory and can be called up as needed. For example, they can be
invoked directly by central control messages or called up after the
occurrence of certain call events in terminals to initiate needed action. The
TPVM contains instructions to define and invoke these execs.
The TPVM instruction set is designed to optimize some of the conflicting
tradeoffs in the design of peripheral modules, for instance, the need to
minimize peripheral module memory while carrying a substantial part of call
processing load, or the need to be flexible while attaining a high level of
abstraction (that is, protecting central control from the need to concern itself
with call details).
The MP implements the TPVM. In the TPVM, programs sent from the CC
are interpreted and terminal processes executed. In addition, audits of
terminal states and data are performed.
The total real time usage of call processing class, high priority call
processing class, deferrable call processing class, and I/O interrupts are
referred to as call processing occupancy (CPOCC). Within CPOCC, the real
time available to deferred call processing depends on how much real time is
left after the work for the other classes in CPOCC is performed.
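The relationship between CPOCC and deferrable call processing described above is simple arithmetic; the following sketch uses illustrative figures:

```python
def defcp_available(cpocc_budget, hpcp_used, cp_used, io_used):
    """Deferrable call processing receives whatever real time remains in
    the CPOCC budget after HPCP, CP, and I/O interrupts are served.
    All values are percentages of processor real time."""
    return max(0.0, cpocc_budget - (hpcp_used + cp_used + io_used))
```

For example, with an 86% CPOCC budget, 30% HPCP, 35% CP, and 11% I/O interrupts, 10% of processor real time remains for deferrable call processing.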
With the work for the processor divided into classes, allocation of processor
real time must be set for each class. This document defines how time is
allocated to the different classes and how time allocations fluctuate due to
changing variables. Some of these variables are call type mixes, call
processing occupancies, and engineering factors.
The engineering of real time utilized in the overhead classes of the processor
is usually evaluated at three different grade-of-service levels at high day
busy hour. These three levels are defined as follows:
• 20% of attempts experience dial tone delay (DTD) or incoming start to
dial delay (ISDD) greater than 3 s.
• 8% of attempts experience DTD or ISDD > 3 s.
• 1.5% of attempts experience DTD or ISDD > 3 s.
Table 3–1
Real-time allocations/Grades-of-service
% DTD/ISDD NT40 (40 MHz) DMS SuperNode
overhead/CP overhead/CP
Table 3–2
Capacity increases – DMS SuperNode processors
Processor Gain factor used in the real time tool
SN20 base
SN30 1.5 @ 86% call processing occupancy (CPOCC)
SN40 1.7 @ 86% call processing occupancy (CPOCC)
SN50 3.0 (SN50 @ 75%, SN20 @ 75%)
Real time allocations for work on the different types of processors are
provided in this section. The SN20 and SN30 processor allocations are also
Table 3–3
Real-time allocation of processor classes
Class name Typical real time Real time allocations
usage at capacity at capacity
Table 3–3 shows that 86% of processor real time is available for call
processing classes, whereas 14% of real time is given to overhead classes in
a typical office. These allocations of processor real time to overhead classes
versus call processing classes vary due to switch conditions and scheduler
demands. As the switch conditions cause overhead requirements to increase,
real time available for call processing decreases. Therefore, if overhead
requirements are higher than 14% of processor real time, the maximum call
processing allocation is less than 86%. This is because the overheads can
use all their real time allocations if they need the real time to process the
work.
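The allocation rule in the preceding paragraph can be expressed as a one-line calculation. This is a sketch only; the 14% figure is the typical overhead allocation discussed above:

```python
def max_call_processing(overhead_pct, typical_overhead=14.0):
    """Overheads use what they need (at least the typical 14%);
    call processing classes get the rest of processor real time."""
    return 100.0 - max(overhead_pct, typical_overhead)
```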
Classes do not always use their full allocation of the fixed component, but if
required, it is available to them. When extra fixed time allotments are
available, they can be utilized by other classes.
Within the assignable component, the proportion of processor real time that
the scheduler allocates to scheduler classes depends on several factors
including the following:
• the presence of certain applications programs, for example, NOSOFT
• the amount of processor real time required by the switch for maintenance
(as demand increases, the scheduler allocates more time for this class)
• the amount of available processor real time that the scheduler class can
assign to extra work for the processor to perform
• the amount of time assigned to the call processing classes less the actual
usage (if the assignable component is not fully used, extra time is
available to all other classes in whatever amounts they require)
Overhead classes
Overhead tasks have a complex interrelationship with each other and are
application dependent. Real time allocation algorithm controls are
Idle class
Idle class is provided time only if all the other classes have nothing to do.
Therefore, at full load Idle gets 0%.
The idle process and call processing resource audit run in this class.
Network operating software file transfer (NOSFT) class
NOSFT class, which is used by the processes communicating with a DNC, is
limited to 3% of NT40 (3% of SN) processor real time (assignable time) at
capacity. If a switch is not connected to a DNC then the 3% is provided to
the call processing classes first.
Currently the processes for file transfer run in maintenance class instead of
NOSFT class. This means that NOSFT class will always be 0% and
maintenance class allotments will increase when a DNC is connected to the
switch.
Call processing classes
Call processing processes are divided into four categories: high priority call
processing (HPCP), call processing (CP), deferrable call processing
(DEFCP), and call processing-I/O interrupts. They comprise a total
maximum CPU call processing allocation (assignable component) of 83%
NT40 (86% SN).
Call processing-I/O interrupts (I/O)
I/O interrupts handle interrupts from the peripheral modules, typically using
11% of NT40 (11% of SN) processor real time. The interrupts involve
on-hook, off-hook, digits, flash, and so on.
High priority call processing (HPCP) class, call processing (CP)
class, deferrable call processing (DEFCP) class
The scheduler will give these classes up to 72% of NT40 (75% of SN)
processor real time at full load in an ideal POTS office with no engineering
factors.
Functions within these classes are: call processing, AMA, call setup,
translations, network connections, terminations, feature activations, AMA
disk and I/O queue handling.
Table 3–4
Processor engineering factors
Factor NT40 DMS SuperNode
Table 3–5
DMS SuperNode AWT factor percents
AWT 1.5% DTD 8.0% DTD 20.0% DTD
Table 3–6
NT40 AWT factor percents
AWT 1.5% DTD 8.0% DTD 20.0% DTD
Maintenance factor
In a well-maintained office, the maintenance class should use only 2% of
NT40 (1% of SN) processor real time during high traffic periods. If
maintenance requirements tend to go above the 2% NT40 (1% SN), this
extra real time demand must be accounted for in the maintenance factor.
EADAS-DC and EADAS-NM factors
If the office is equipped with EADAS-DC (Engineering Administration Data
Acquisition System—Data Collection) and/or EADAS-NM (Engineering
Administration Data Acquisition System—Network Management), then 2%
of NT40 (1% of SN) processor real time is required.
If these systems are not provisioned, 0% of processor real time is used for
the engineering factor.
SES (service evaluation system) factor
SES checks completion of a line/trunk call. It uses 0.5% of NT40 (0.3% of
SN) processor real time when turned on. SES uses 0% of processor real
time when inactive.
DNC 9600 baud and DNC 19200 baud factors
The DNC (Dynamic Network Control), depending on the specific
application, has the ability to gather and store large quantities of data from
several DMS switches simultaneously. The demand for real time depends
on the rate of data transfer. Both engineering factors use 0% of processor
real time when inactive.
Further information can be found in SEB 89-04-001, Business Network
Management (BNM) Impact on the DMS switches.
Engineerable background factor
With engineerable background, the telephone operating company has the
ability to expand the maximum processor allocation for priority devices
above 2% NT40 (2% SN) minimum allocation. This option allows the
amount of processor allocation for priority devices to increase at the expense
of call processing classes, if required.
The basic minimum 2% allocation for guaranteed background class allows
for the following:
• NT40—One priority device with 100% duty cycle, or two priority
devices with 50% duty cycle.
• DMS SuperNode—Allow 1% per priority device with 100% duty cycle,
or 1% per two priority devices with 50% duty cycle.
Any terminal requirements greater than above during periods of high call
processing must be accounted for in engineerable background factor. As a
guideline for additional terminal requirements the following applies:
• NT40—Allow 2% per priority device with 100% duty cycle, or 2% per
two priority devices with 50% duty cycle.
• DMS SuperNode—Allow 1% per priority device with 100% duty cycle,
or 1% per two priority devices with 50% duty cycle.
If the number of priority device requirements is below the basic minimum,
then the engineering factor is 0% of processor real time.
Peaking factor
Traffic peaking refers to a sudden increase in the amount of offered traffic
versus the average traffic over a given time period. The grade-of-service
level can be affected during periods of traffic peaks; for this reason, traffic
peaking should be considered when engineering the load level of the
processor. Adding 2–4% NT40 (2.5% SN) for peaking is a safety margin to
ensure desired grade-of-service is maintained at traffic peaks.
A study of over 50 field offices was conducted to determine the effects of
peak traffic on grade-of-service. The conclusions of the study indicate that
95% of the offices had an average fifteen-minute peaking over the office
busy hour, requiring less than 2–4% of NT40 (2.5% of SN) real time in
maximum high day loading to achieve the average 20% DTD > 3 second
grade-of-service criterion. Therefore, the NT40 2–4% (POTS–2%,
MIC–3%, TOPS–4%, ACCESS TANDEM–4%) and SN 2.5% (all office
types) compensation factors are considered to be conservative. If peaking is
not required then the engineering factor is 0% of processor real time.
CPUSTAT factor
The operational measurement CPUSTAT outputs processor occupancy
information in much the same way as the activity tool. With CPUSTAT
active during periods of maximum call processing, the NT40 CPUSTAT
factor is 1% (0.5% of SN).
Further information on CPUSTAT can be found in SEB 88-04-002,
Enhanced CC Real Time Indicator.
SYNC factor
In an access tandem office a call processing phenomenon referred to as
harmonic effect can take place when a processor reaches 85%–90% of
capacity. A feature called Bleed (0–3) can be used to lessen the effect of this
phenomenon. When Bleed is set to maximum there is only a 1% NT40 (1%
SN) impact on processor real time.
Further information on harmonic effect can be found in SEB 88-06-004,
Access Tandem Harmonic Effect.
SMDI factor
Simplified Message Desk Interface messaging real time requirements must
be calculated for each individual application. SEB 88-06-002, Simplified
Message Desk Interface Messaging Capacity, can be used as a reference
when performing these calculations.
AABS factor
The Automated Alternate Billing Service (AABS) message handling
process, called MPCFASTO, operates in the maintenance class. All other
AABS functionality operates in the call processing classes. The impact of
AABS on real time can be calculated using SEB 89-07-001, Automated Alternate
Billing Service Performance Engineering Guidelines.
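The engineering factors above derate the maximum call processing allocation. The following sketch shows the bookkeeping for a DMS SuperNode, using the SN percentages quoted in this section; which factors are active in a given office is an engineering input, and the dictionary layout is purely illustrative:

```python
# SN engineering factor percentages quoted in this section.
SN_FACTORS = {
    "maintenance_extra": 0.0,  # demand above the 1% SN baseline, if any
    "eadas": 1.0,              # EADAS-DC and/or EADAS-NM provisioned
    "ses": 0.3,                # service evaluation system turned on
    "peaking": 2.5,            # traffic peaking safety margin
    "cpustat": 0.5,            # CPUSTAT active during maximum call processing
    "sync": 1.0,               # Bleed at maximum in an access tandem office
}

def derated_cp_allocation(active, ceiling=86.0, factors=SN_FACTORS):
    """Subtract each active engineering factor from the maximum call
    processing allocation (86% of SN processor real time)."""
    return ceiling - sum(factors[name] for name in active)
```

For example, an office with EADAS provisioned and the standard peaking margin has a derated call processing ceiling of 86 − 1.0 − 2.5 = 82.5% of processor real time.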
Call timings
DMS-100 International call timings are published in systems engineering
bulletin SEB 89-10-001, which is updated on a per-BCS basis. The current
release for BCS36 is issue 8.
The following tables give the current computing module call timings for the
following markets: Japan, Turkey, Belize, China, and Australia. The call
timings in this section are always reported in milliseconds and are
considered to be within ±5% accuracy unless otherwise noted.
Table 3–7
Japan call timings
Call type Call description ms per C/A ms per C/A
SN20 SN30
Table 3–8
Turkey, Belize call timings
Call type Call description ms per C/A ms per C/A
SN20 NT40/40
Table 3–9
China call timings
Call type Call description ms per C/A ms per C/A
SN20 NT40/40
End
Table 3–10
Australia call timings
Call type Call description ms per C/A SN20
Overhead occupancy
(non-call processing)
0 14%
Operating guidelines
The operating guidelines for establishing the loading levels for specific
applications, at both the end and the beginning of the engineering design
period, include a series of planning factors that the telephone operating
company should take into consideration in the engineering process. The
current maximum load recommendation for the DMS-100 Family is 86% of
the total CPU time for local/toll/MDC offices and 86% of the total CPU time
for TOPS offices, including a minimum of 14% for basic non-call
processing overhead. Refer to figure 3–1 for the distribution of total CPU
occupancy. Once the office is placed in service, the office call attempt busy
hour can be monitored and the planned load can be increased according to
the operating plan.
Engineering considerations – load level
There are many considerations and decisions to be made when planning load
levels for a processor. Some of the decisions are based on clear-cut rules,
but others are based on sound engineering judgment using guidelines.
The following are some of the key considerations that must be evaluated by
the engineer before decisions can be made:
1 Determine the point on the load service curve where the processor will
fall during both HDBH and ABS. Having the grade-of-service level during
ABS too close to the exponential part of the load service curve may
cause grade-of-service to be greatly reduced during HDBH. The higher
the ratio of HDBH to ABS, the larger the buffer needed from the
exponential part of the curve.
2 The High Day call mix versus ABS call mix must be considered.
Typical offices serving large business applications usually see very little
variation in call mix from day to day, whereas the call mixes for POTS
offices tend to change with traffic levels.
3 The engineering factors by percents used in deriving a processor’s
loading are considered conservative values because all the major features
rarely work concurrently or actively produce outputs every minute of the
busy hour. Also priority devices would rarely have 100% duty cycles
during busy hour.
4 In an initial office the data used to define office characteristics are
predicted values, whereas in an in-service office data are measured
values. Good engineering judgment must be used in determining the
predicted values in order to ensure the switch meets the desired
grade-of-service level during cutover. Once an office is placed in
service, the office characteristics are measured and compared to
predicted characteristics. If any of the prediction numbers differ from
the measured, then the new verified numbers can be used for switch load
planning.
MEMCALC requires as inputs all software packages that will reside in the
office’s in-service BCS load as well as data based on end of design (EOD)
quantities, or actual switch parameters. This tool is an integral part of
memory provisioning and is available to customers as well as Northern
Telecom. MEMCALC produces the memory required to satisfy the switch
application only. It includes required memory allocator rounding rules as
part of its output. An administrative spare, which includes a spare to
account for MEMCALC program accuracy, BCS dump and restore tools and
normal day-to-day service order activity, is also required. Administrative
spare is also added to the MEMCALC output and is included in the
description of provisioning policies.
Memory
The basic memory provisioning policy applies to both NT40 and SuperNode
in the following procedures:
1 To calculate memory requirements, use the wired capacities of all lines,
trunks and input/output (I/O) ports. Note that I/O ports on equipped cards
should be considered as wired. This includes all read only and keyboard
send receive printers and visual display units.
2 When processing extensions and/or authorized software updates, all
existing software packages from the previous office image must be
applied to the new load. Discrepancies between the previous office
image and the current Job Feature Data Base (JFDB) should be resolved
by the job engineer and the Northern Telecom marketing representative.
DO NOT remove any feature Package(s) from the office image or JFDB
without prior written authorization from the marketing representative
and the customer. After all discrepancies have been resolved, all records
should be updated as required.
3 The DMS SuperNode Major Group Software Package, NTX960AA/AB,
must be present in the job’s JFDB listing in order to access SuperNode
MEMCALC. Conversely, NTX960AA/AB must be absent from the
job’s JFDB listing in order to access NT40 MEMCALC.
DMS SuperNode memory
The DMS SuperNode core can be provisioned with up to ten memory cards.
There are three memory cards available, as follows:
• 6-Mbyte NT9X14BB
• 24-Mbyte NT9X14DB
• 96-Mbyte NT9X14EA
Table 3–11 lists the maximum addressable memory for each of the available
SuperNode processor options.
Table 3–11
Maximum addressable memory values
SuperNode CPU Maximum addressable memory
this information may also be accessed via the MAP utility, using the
BCSMON command—BCSMON DUMP COUNT.
Performance standards
Performance standards establish the criteria used to engineer a DMS-100
Family switching system, including the determination of the quantity of
various circuits required. The DMS-100 Family parameters involved are
documented, and sensitivities are discussed in the following subsections.
The busy hour ratios of 10HDBH to ABSBH and HDBH to ABSBH for
calls or traffic usage are specified by the operating company during the
provisioning of engineering facilities.
Provisioning methodology
DMS-100 Family overall and system component provisioning is based on
three factors:
• termination—lines and trunks
• traffic criteria—usage and call attempts
• real time—call attempts and call processing time
Termination
By this method, provisioning is performed on a termination appearance
basis. The number of components to be provisioned depends on the number
of terminations to be connected to these components and on the number of
terminations that these components can allocate per module.
Traffic criteria
Using traffic criteria, provisioning is performed on a traffic capacity basis
(usage and call attempts). The number of system components to be
provisioned depends on the traffic offered by the terminations to be
connected to these system components and on the traffic capacity that these
system components can carry per module. This is a function of the method
of operation based on blocking or delay criteria and the service standard
required.
Real time
Using real time, provisioning is performed on a total millisecond per hour
capacity basis (call attempts and timings). The number of system
components to be provisioned depends on the mix of traffic originating from
or terminating to these system components and the millisecond capacity per
hour that these system components can provide.
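Real-time provisioning as described above reduces to milliseconds of work per hour measured against the processor's millisecond capacity. The following is a sketch; the 86% usable-capacity assumption follows the loading guidelines earlier in this chapter, and the ms-per-attempt figures would come from the per-market call timing tables:

```python
def processor_load_pct(call_mix, usable_fraction=0.86):
    """call_mix maps call type -> (attempts per hour, ms per attempt).
    Returns the implied load as a percentage of usable capacity, where
    one processor-hour is 3 600 000 ms."""
    capacity_ms = 3_600_000 * usable_fraction
    used_ms = sum(attempts * ms for attempts, ms in call_mix.values())
    return 100.0 * used_ms / capacity_ms
```

For example, 100 000 attempts per hour at 10 ms per attempt consume 1 000 000 ms of the 3 096 000 ms usable per hour, or about 32% of usable capacity.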
Service standards
Matching loss grade of service
DMS-100 International grade of service specifications comply with CCITT
recommendation Q.543. Inadequately handled call attempts are attempts
which are blocked (as defined in E.600) or are excessively delayed within the
exchange. Excessive delays are those that are greater than three times the
“0.95 probability of not exceeding” values recommended in tables 3 through
6 of Q.543. Table 3–12 lists the probability of inadequately handled call
attempts occurring, as specified in Q.543 table 2.
Table 3–12
Probability of inadequately handled call attempts occurring
Table 3–13
Service circuits blocking and delay criteria grade of service
Service circuit Criteria ABSBH HDBH
Receivers
Service circuits in the DMS-100 Family system are common equipment
units which include DTMF, MF receivers, and universal tone receivers
(UTR). The DTMF receiver is used to convert dual-tone multifrequency
address signals from the customer to machine readable codes. The DTMF
receiver connection is from a customer line (through an LCM/LGC) through
the network, to a DTMF receiver on a TM or MTM.
The MF receiver is used to convert multi-frequency signaling over a trunk to
a machine-readable code. The MF receiver connection is from an incoming
trunk (through a digroup or a TM) through the network, to an MF receiver
on a TM or MTM.
The UTR is used to collect and decode both DTMF and MF address signals
and to report the decoded address digits to the Central Control by means of
the peripheral signaling processor, and it eliminates the need to establish a
network path to a DTMF/MF receiver in an MTM. The UTR is located in
the LGC, LTC, DTC, or RSC peripherals.
If blocking occurs, two connection attempts are made on randomly chosen
DTMF/MF receivers (MTM mounted). The length of time taken to establish
a connection to a receiver determines its delay criteria. To minimize
blocking probability, the receivers are spread over the available TMs and
MTMs.
Recorded announcement circuits
The Digital Recorded Announcement Machine (DRAM) provides recorded
announcement capabilities that can be engineered on a per-office basis.
DRAMs can be used on a standalone basis, or in conjunction with other
recorded announcement equipment.
Types of overload
Overload may be in any of four main areas:
• shortage of tone receivers
• shortage of speech paths
• shortage of processing capacity in one or more peripherals
• limits on global system capacity
For each operation time, the definition is given in terms of the starting and
ending points followed by the requirements. The value given for the mean is
interpreted as the maximum allowable mean from the starting point to the
ending point for that operation time. The operation time may not exceed the
95% level more than 5% of the time for two conditions:
1 Receipt of a PTS supervisory signal is said to occur when the state
transition, which begins the signal, occurs at the E lead. The E and M
lead trunks make use of separate leads for signaling. Ground and open
states are used for off-hook and on-hook, respectively, on the E lead for
signaling from the trunk facility to a trunk circuit. Battery and ground or
battery and open states are used for off-hook and on-hook, respectively,
on the M lead for signaling from a trunk circuit to the trunk facility.
2 Transmittal of a PTS supervisory signal is said to occur when the state
transition, which begins the signal, occurs at the M lead.
Operation times applicable to PTS – SuperNode/access tandem
Operation times will be met at all engineered traffic loads.
Address time (cross-office time)
• Starting Point: End of address digit reception
• Ending Point: Transmittal of connect signal
• Mean: 200 ms
• 95% Level: 360 ms
Answer time
• Starting Point: Receipt of answer signal
• Ending Point: Transmittal of answer signal
• Mean: 22 ms *
50 ms * *
• 95% Level: 38 ms *
95 ms * *
• Maximum: 50 ms *
Not Specified * *
Re-answer time
• Starting Point: Receipt of re-answer signal
• Ending Point: Transmittal of re-answer signal
• Mean: 100 ms
• 95% Level: 180 ms
• Maximum: 250 ms
Clear-forward time
• Starting Point: Receipt of clear-forward (disconnect) signal
• Ending Point: Transmittal of clear-forward (disconnect) signal
• Mean: 100 ms (plus incoming trunk disconnect timing)
• 95% Level: 180 ms (plus incoming trunk disconnect timing)
• Maximum: 250 ms (plus incoming trunk disconnect timing)
Clear-back time
• Starting Point: Receipt of clear-back (hang-up) signal
• Ending Point: Transmittal of clear-back (hang-up) signal
• Mean: 100 ms
• 95% Level: 180 ms
• Maximum: 250 ms
Release-guard time
• Starting Point: Timeout of guard timing
• Ending Point: Idling of outgoing trunk
• Mean: 100 ms
• 95% Level: 180 ms
• Maximum: 250 ms
Forward-transfer time
• Starting Point: Receipt of forward-transfer (ring-forward) signal
• Ending Point: Transmittal of forward-transfer (ring-forward) signal
• Mean: 100 ms (plus incoming trunk ring-forward timing)
• 95% Level: 200 ms (plus incoming trunk ring-forward timing)
Seizure time
• Starting Point: Receipt of connect signal
Response time
• Starting Point: Receipt of start-dial or end of wink-start signal.
(Transmittal of connect signal in immediate dialing case)
• Ending Point: Beginning of outpulsing
• Mean: 100 ms (plus required delay)
• 95% Level: 180 ms (plus required delay)
• Maximum: 250 ms (plus required delay)
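A measured set of operation-time samples can be checked against the requirement pattern defined above: a maximum allowable mean, a 95% level that may be exceeded no more than 5% of the time, and an absolute maximum where specified. A sketch:

```python
def meets_operation_time(samples_ms, mean_req, p95_req, max_req=None):
    """True if the samples satisfy the mean, 95% level, and (optional)
    maximum requirements for one operation time."""
    mean_ok = sum(samples_ms) / len(samples_ms) <= mean_req
    over_p95 = sum(1 for s in samples_ms if s > p95_req)
    p95_ok = over_p95 <= 0.05 * len(samples_ms)
    max_ok = max_req is None or max(samples_ms) <= max_req
    return mean_ok and p95_ok and max_ok
```

For example, clear-back time would be checked with mean_req=100, p95_req=180, and max_req=250 (all in milliseconds).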
Mishandled calls
The call completion rate for a DMS-100 Family System is equal to or
greater than 99.99% (no more than one call in 10 000 mishandled). A
mishandled call is a call attempt that arrives at an incoming port of the
switching system but is mishandled due to a hardware and/or software
error. Mishandling takes one of the following forms:
• Mis-routing
• Premature release by the switching system
• Switching system transmission failure as detected by a continuous parity
check.
Calls that cannot be completed due to the unavailability of engineered
equipment are not included in this definition unless the congestion is caused
by a system or subsystem fault or error.
Reliability
To ensure high reliability, all critical subsystems are duplicated. If a fault is
detected, the system software automatically re-configures the hardware to
prevent or minimize the effect on service.
The coverage of a duplicated subsystem is the proportion of simplex faults
from which the subsystem can recover through detection of the fault and
subsequent re-configuration to the standby module. The
long-term attainable coverage for the DMS-100 Family and the DMS
SuperNode duplicate modules is 99.9% for the core equipment and 99% for
peripherals.
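The coverage definition above reduces to a simple ratio; the sketch below is illustrative, with only the 99.9% and 99% targets taken from the text (the fault counts are hypothetical):

```python
# Sketch of the coverage definition: the fraction of simplex faults from
# which a duplicated subsystem recovers by detecting the fault and
# switching to the standby module. Fault counts here are hypothetical.

def coverage(simplex_faults: int, recovered: int) -> float:
    """Coverage = recovered simplex faults / all simplex faults."""
    if simplex_faults == 0:
        return 1.0
    return recovered / simplex_faults

# Long-term attainable targets quoted in the text:
CORE_TARGET = 0.999        # 99.9% for core equipment
PERIPHERAL_TARGET = 0.99   # 99% for peripherals

print(coverage(1000, 999) >= CORE_TARGET)        # True
print(coverage(1000, 989) >= PERIPHERAL_TARGET)  # False (98.9% < 99%)
```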
Hardware failures
Table 3–14 on page 3–39 lists the predicted rates of failure for DMS-100
Family circuit cards. Table 3–15 lists the predictions for hardware reliability
performance.
System downtime
The following estimated times for restarts and reloads assume a fault-free
DMS-100 Family System, both hardware and software. In addition, it is
assumed that the peripheral subsystem has not experienced a power outage
prior to the restart.
Unscheduled system downtime – SuperNode
The total DMS-100 Family System unscheduled downtime, due to hardware,
software and procedural failure modes, is expected to be no more than two
hours in 40 years.
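The two-hours-in-40-years figure can be translated into an availability percentage with some quick arithmetic (a sketch; 8760 hours per year, ignoring leap years):

```python
# Quick arithmetic check on the "two hours of unscheduled downtime in
# 40 years" figure. This is an illustration, not part of the specification.

HOURS_PER_YEAR = 8760

def availability(downtime_hours: float, years: float) -> float:
    """Fraction of time the system is up over the stated period."""
    total_hours = years * HOURS_PER_YEAR
    return 1 - downtime_hours / total_hours

a = availability(2, 40)
print(round(a * 100, 4))      # 99.9994 (percent availability)
print(round(2 / 40 * 60, 1))  # 3.0 (minutes of downtime per year)
```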
For BCS36, the following restart timings (minutes:seconds) have been
recorded for a typical DMS SuperNode equipped with an SR40 processor:
• SOS: 0:34
• Logs start: 1:13
• A1 flashing: 4:08
• Perminct: 3:46
• 1st login: 4:15
• 1st DDU in service: 3:13
• AMA active: 4:15
Boot timings by device (BCS30 RTM):
• DDU (8 in., 203 mm): 4:36
• DDU (14 in., 356 mm): 6:33
• MTD: 15:09
Scheduled system downtime
Reloads from the system magnetic tape via the Batch Change Supplement
(BCS) process are used to update the DMS-100 Family software. The new
software is loaded in the inactive CPU, while the active CPU is still
processing calls. The manual activity switch and the restart procedure are
initiated. In a typical DMS-100 Family System, the system downtime (the
period of time from the commencement of the activity switch until dial tone
is returned on all line peripherals) is estimated to be less than 10 minutes.
Table 3–14
DMS SuperNode and DMS-100 family PCP failure rates
(Columns: Product equipment code, Card location, Card description,
Predicted failure rate per million hours)
Table 3–15
DMS SuperNode reliability performance (hardware failures only)
(Columns: DMS-100 parameter, Predictions)
Table 3–16
Assumptions
Features
This chapter summarizes the generic features and capabilities of the
DMS-100 International switching system. The specific features and
capabilities provisioned on a DMS-100 International switching system vary
depending on the market in which the system is provisioned, the application
of the system, and the hardware and software provisioned. For more
detailed information on DMS-100 International features and provisioning
rules, consult the following Northern Telecom Practices (NTPs):
• DMS-100 Family Provisioning Manual, 297-1001-450
• DMS-100 International Feature Description Manual, 291-1001-801i
Multi-line hunting
A pilot DN is associated with the hunt group. To access the group, the pilot
DN is dialed. Hunting starts with the pilot DN and ends at the last line, in
sequential order.
Distributed line hunting (DLH)
DLH is assigned to large hunt groups which require equal distribution of
calls. A pilot DN is associated with the hunt group. To access the group, the
pilot DN is dialed. Hunting starts on the line following the one that was
last selected.
If the line at which hunting starts is not idle due to an origination, the next
line is checked. This continues until the hunting starting point is reached.
At this point, busy tone is returned.
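The distributed hunting rule described above can be sketched as a circular scan; the list-of-booleans line-state model below is an illustration, not the switch's internal data structure:

```python
# Sketch of distributed (circular) line hunting: start at the line after
# the one last selected, wrap around the group, and stop with "busy" if
# the scan returns to its starting point without finding an idle line.

def distributed_hunt(idle: list, last_selected: int):
    """Return the index of the first idle line, or None (busy tone)."""
    n = len(idle)
    start = (last_selected + 1) % n
    for offset in range(n):
        line = (start + offset) % n
        if idle[line]:
            return line
    return None  # hunting returned to its starting point: busy tone

# Last call went to line 1, so hunting starts at line 2.
print(distributed_hunt([False, False, True, False], last_selected=1))   # 2
print(distributed_hunt([False, False, False, False], last_selected=1))  # None
```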
Additional hunting options available
Hunt options OFR (line overflow increments hardware register) and OFS
(line overflow increments software register) are available on the DMS-100
International switch.
Hunting options not available
The following hunt group features are not available:
• line overflow to route (LDR)
• line overflow to DN (LOD)
When the BNN is dialed, the associated line rings if it is idle. If the line is
busy, no hunting takes place, and busy tone is returned to the calling
subscriber. If the line belongs to a BNN hunt group, hunting occurs within
the BNN. Hunting is either sequential or circular, depending on which
option is assigned to the BNN group.
The maximum number of members for a BNN hunt group is 210. BNNs are
assigned in data tables Hunt Group and Hunt Group Member.
Dial pulse dialing
Dial pulse dialing permits a customer to send pulsed dc address signals to
the switching system. After line seizure, the system alerts the customer, by
dial tone, of its readiness to receive the dialed address information. The
system interprets the dialed digits according to the call processing feature
arrangements for the central office. The system ignores dual tone
multifrequency address signals from lines equipped only with dial pulse
dialing.
Dual tone multifrequency (DTMF) dialing
DTMF dialing permits a customer to send dual tone multifrequency address
signals to the switching system. After line seizure, the system alerts the
customer, by dial tone, of its readiness to receive the dialed address
information. The system interprets the dialed digits according to the call
processing feature arrangements for the central office. Lines equipped with
the DTMF dialing feature are also permitted to use dial pulse dialing.
Malicious call trace
Malicious call trace (MCT) is controlled by operating company
administration, and is provided subject to the legal requirements of the
country in which the DMS-100 International switch resides. This feature
allows the subscriber to request call trace on an in-progress call by entering
a designated signal. When the system receives the signal, it stores the
calling number and the time and date of the call in an MCT log report. An
alarm is generated at the originating and terminating offices.
The subscriber receiving the malicious call, having activated the feature, is
free to originate another call after going on hook. The originating
subscriber's connection, however, is held until maintenance personnel
force-release the circuits once operating company administration legal
procedures are complete.
For CEPT activation, the called subscriber flashes the switch-hook of the
telephone set, where a flash is defined as an on-hook interval of 200 ms
to 1.2 s. This method accommodates both DTMF and dial pulse phones.
Subscriber services that use the register recall signal during an off hook A-B
connection cannot be active on the subscriber’s line during call trace
activation.
Emergency cut-off
This feature provides operating company administration with a mechanism
to prevent non-essential subscriber originating calls. Operating company
administration can designate subscriber lines as essential or non-essential.
The feature is activated by entering a command at the Maintenance and
Administration Position (MAP).
Subscriber services
The following subscriber services are available on the DMS-100
International switch:
• abbreviated dialing
• call diversion
• call waiting
• cancel call waiting
• hot line
• warm line
• subscriber activated outgoing restrictions
• subscriber features denied
• no double connect
• call completion to a busy line
• selective call recording
• call transfer
• three and six way calling
• wake-up call
Interrogation:
LH DT * # FC * AN * TN # IND RH
Usage:
LH DT ** AN RT A-B RH
Withdrawal: via service order by operating company administration.
Call diversion
Call diversion allows a subscriber to request rerouting of all calls to an
announcement, another subscriber, or an operator. The following types of
call diversion can be activated:
• call diversion to operator (CDO). CDO intercepts calls to the designated
line and reroutes them to an operator.
• call diversion to announcement (CDA). CDA intercepts calls to the
designated line and reroutes them to an announcement.
• call diversion on busy (CDB). CDB reroutes calls to the designated line
to a second subscriber line, if the designated subscriber line is busy.
• call diversion to subscriber (CDS). CDS reroutes calls to the designated
line to a second subscriber line. The subscriber programs the number of
the second line.
• call diversion fixed (CDF). CDF reroutes calls to the designated line to a
second subscriber line. The operating company programs the number of
the second line.
• do not disturb (IDND). IDND reroutes all calls to the designated
subscriber to treatment.
Call diversion assignment and activation
Call diversion services are assigned by operating company administration
using service orders. The features can be assigned with a state of active or
inactive. If the status is active, it is the administration’s responsibility to
ensure that valid routing information is provided for the diversion.
A subscriber can program CDA, CDS or CDB on activation. For CDB or
CDS, the subscriber enters the target number. For CDA, the subscriber
enters a 1 or 2 digit announcement code.
Subscriber programming of CDO, CDF and IDND is not allowed.
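The programming rules above can be summarized in a small sketch; the dictionary layout and value descriptions are illustrative, with only the feature names (CDA, CDB, CDS, CDO, CDF, IDND) taken from the text:

```python
# Sketch of call diversion programmability: which diversion types a
# subscriber may program on activation, and what value they supply.
# The dict layout is an illustration, not switch datafill.

SUBSCRIBER_PROGRAMMABLE = {
    "CDA": "announcement code (1 or 2 digits)",
    "CDB": "target number",
    "CDS": "target number",
}
ADMIN_ONLY = {"CDO", "CDF", "IDND"}  # programmed by the operating company

def can_program(diversion: str) -> bool:
    """True if the subscriber may program this diversion type."""
    return diversion in SUBSCRIBER_PROGRAMMABLE

print(can_program("CDS"))  # True: subscriber enters the target number
print(can_program("CDF"))  # False: the operating company programs CDF
```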
• R SDT 1
The original call is disconnected and the waiting call connected.
• R SDT 2
The active call is put on hold and the waiting or held call connected.
This can be repeated to switch between the two calls.
• R SDT 3
If the subscriber has three way calling service, a three way call is set up.
• R SDT 6
If the subscriber has six way calling service, a six way call is set up
using a six port conference bridge.
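The flash-code handling above can be sketched as a simple dispatch table; the action strings are descriptive labels, not switch-internal identifiers:

```python
# Sketch mapping the "R SDT n" codes above to call waiting actions.

CWT_FLASH_ACTIONS = {
    1: "disconnect original call, connect waiting call",
    2: "hold active call, connect waiting or held call",
    3: "set up three way call (requires 3WC service)",
    6: "set up six way call (requires 6WC service)",
}

def flash_action(digit: int) -> str:
    """Return the action for a flash-plus-digit sequence."""
    return CWT_FLASH_ACTIONS.get(digit, "invalid code")

print(flash_action(2))  # hold active call, connect waiting or held call
print(flash_action(5))  # invalid code
```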
Cancel call waiting
Cancel call waiting allows a subscriber to temporarily deactivate the CWT
feature, so that incoming calls do not interrupt the current call. If an
incoming call attempts to terminate on the line, CWT tone is prevented and
the call is immediately denied. When the current call is terminated, the
CCW feature is automatically deactivated and CWT is restored. CCW can
be activated immediately prior to a call, or during the call by dialing an
access code.
CCW is available to all subscribers that have CWT assigned. CCW is not
assigned to specific subscribers but is potentially available to all subscribers.
Operating company administration can query the status of this feature on a
subscriber line.
CCW does not provide the capability to suspend interruptions on the other
subscriber. It only affects the subscriber which has activated the feature.
The subscriber service code (SC) for this feature is controlled by operating
company administration. The specific SC can be datafilled in table
ACCODE. The dialing sequence used to initiate subscriber activation can
also be datafilled; however, it is recommended that a * be used to precede
the SC.
The subscriber activation code sequence is * SC #. A subscriber can
activate CCW via two methods: prior activation and flashing:
— prior activation: LH DT *SC# SDT TN RT...(talking)...RH
— flashing: (talking)...RR SDT *SC# ACK...(talking)
CEPT hot line
Hot line is assigned and programmed by operating company administration.
When the subscriber lifts the handset, the switch immediately sets up the
path to the predetermined terminating target number (TN).
Digipulse lines include push button digipulse phones and rotary dial phones.
Warm line (WLN)
Warm line is assigned by operating company administration. The subscriber
and the operating company can activate, deactivate, and program the feature
with a destination TN. When the subscriber lifts the handset, the switch
starts a timer. If the subscriber does not start dialing before the timer
expires, the switch sets up a call to the predetermined destination directory
number (DN). If the subscriber starts dialing before the timeout, the timer is
cancelled and the call is processed as a regular call.
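The warm line decision can be sketched as follows; the timeout value below is a hypothetical placeholder, since the text does not state the timer duration:

```python
# Sketch of the warm line (WLN) decision: if the subscriber dials before
# the timer expires the call proceeds normally; otherwise the switch sets
# up a call to the programmed destination DN.

WARM_LINE_TIMEOUT_S = 5.0  # illustrative value only; set by the operator

def warm_line_route(seconds_to_first_digit, destination_dn: str) -> str:
    """Return where the call proceeds after off-hook."""
    if (seconds_to_first_digit is not None
            and seconds_to_first_digit < WARM_LINE_TIMEOUT_S):
        return "subscriber-dialed number"  # timer cancelled, normal call
    return destination_dn  # timeout: call set up to the programmed DN

print(warm_line_route(2.0, "5551234"))   # subscriber-dialed number
print(warm_line_route(None, "5551234"))  # 5551234
```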
The following sequences are used to control WLN:
Activation:
LH DT * FC * TN # IND RH
To activate the WLN feature with the previous TN, the subscriber must
execute the following sequence:
LH DT * FC * # IND RH
Interrogation:
To verify if warm line is active regardless of telephone number:
LH DT * # FC # IND RH
Data check (telephone number) interrogation:
LH DT * # FC * TN # IND RH
Deactivation:
WLN can be deactivated by the subscriber or by operating company
administration.
LH DT # FC # ACK RH
Subscriber activated outgoing restrictions
Subscriber activation of International line restrictions (ILR) allows a
subscriber to activate or deactivate one of the call restrictions on their own
line. The subscriber enters a password to activate and deactivate ILR. The
subscriber can also query the restriction status of the line.
Operating company administration retains the capability to assign, activate
or deactivate restrictions on a line.
The classes of restrictions for an individual subscriber are:
• DOR: denied origination (all outgoing calls barred). This restriction can
only be assigned by operating company administration.
• DNI: deny national and all International calls. All calls except local
calls are barred.
• DAI: deny all International calls. All calls except local and national
calls are barred.
• DIDD: deny International direct dial. All International calls, except
those to the International operator, are barred.
• DNID: deny national and International direct dial. All calls except local
calls, and calls to the International operator, are barred.
• DABE: deny all but emergency. All calls are barred, except for calls
that are of emergency class.
• NIL: No calls barred. ILR is not assigned to the line, or ILR is
assigned, but not activated.
Activation:
The subscriber can activate any of the five call restrictions by using the
following dialing sequence, in either idle or talking mode:
(idle) LH DT * SC * PW * CR # IND RH
(talking) RR SDT * SC * PW * CR # IND...(talking)...RH
where: PW is a 4-digit password which the subscriber selects
CR is the chosen call restriction
1 = DABE (deny all but emergency)
2 = DNID (deny national and International direct dial calls)
3 = DIDD (deny International direct dial calls)
4 = DNI (deny national and all International calls)
5 = DAI (deny all International calls)
Deactivation:
The subscriber can deactivate the restriction which he has applied to his line
in either idle or talking mode using the following dialing sequence:
(idle) LH DT # SC * PW # IND RH
(talking) RR SDT # SC * PW # IND...(talking)...RH
Interrogation:
The subscriber can interrogate the line restrictions using either of the
following sequences:
(idle) LH DT *# SC # IND RH
(talking) RR SDT *# SC # IND (talking) RH
Specifying the restriction class:
The subscriber can specify the restriction class on the line using either of the
following methods:
(idle) LH DT *# SC * CR # IND RH
(talking) RR SDT *# SC * CR # IND...(talking)...RH
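The restriction classes and CR digits above can be modeled as a small lookup; the call-class names below are illustrative labels for the categories the text describes, not switch translation data:

```python
# Sketch of the ILR restriction classes: the CR digit dialed at
# activation, and which call classes each restriction bars.

BARRED = {
    "DABE": {"local", "national", "intl_direct", "intl_operator"},
    "DNID": {"national", "intl_direct"},
    "DIDD": {"intl_direct"},
    "DNI":  {"national", "intl_direct", "intl_operator"},
    "DAI":  {"intl_direct", "intl_operator"},
}
CR_DIGIT = {1: "DABE", 2: "DNID", 3: "DIDD", 4: "DNI", 5: "DAI"}

def call_allowed(cr_digit: int, call_class: str) -> bool:
    """True if a call of this class is permitted under the restriction."""
    if call_class == "emergency":
        return True  # emergency calls are never barred
    return call_class not in BARRED[CR_DIGIT[cr_digit]]

print(call_allowed(3, "intl_operator"))  # True: DIDD permits the operator
print(call_allowed(4, "national"))       # False: DNI bars all but local
```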
Subscriber features denied
Subscriber features denied (FDN) is a line option which can temporarily
deny a subscriber the use of current features, without deleting the features
from datafill.
If FDN is assigned, all other features do not function, and the subscriber
cannot activate or deactivate any features. Operating company
administration can alter datafill for existing features while FDN is active;
however, new line options cannot be added, with the exception of malicious
call trace.
FDN can only be activated by operating company administration by service
order or datafill. It is not a subscriber chargeable feature.
No double connect
No double connect (INDC) allows the subscriber to prevent incoming calls
from interrupting a call in progress, allowing the line to be used for
applications which require no interruptions, such as data transfer. INDC
prevents operator interruptions from toll break in (TBI), and also prevents
incoming call waiting calls.
Subscriber activation:
Idle: LH DT *SC# IND RH
Flashing: (talking): RR SDT *SC# IND...(talking)
Subscriber deactivation:
Idle: LH DT #SC# IND RH
Flashing: (talking): RR SDT #SC# IND...(talking)
Subscriber interrogation:
Idle: LH DT *#SC# IND RH
Flashing: (talking): RR SDT *#SC# IND...(talking)
Ring again
Ring again (RAG) allows a subscriber to request notification, by a special
ringing signal, that a busy line has become idle. The subscriber can then
reinitiate the call by lifting the handset or going off hook. Only
intra-exchange calls can use this feature. RAG is assigned to a line by
operating company administration using service orders.
Activation:
BT RR SDT * SC # IND
Deactivation:
LH DT # SC # IND RH
Interrogation:
LH DT * # SC # IND RH
Selective charge recording (SCR)
Selective charge recording (SCR) allows subscribers to have the charges for
the current call quoted back to them when the call is completed.
Subscribers with SCR assigned can activate it using a DTMF set when a
local or toll call is direct dialed. After call disconnect, an SCR100 log is
generated, which records the metered pulses for the call. Operating
company administration can use this information to determine the call
charges, then contact the subscriber.
SCR is assigned to a line by operating company administration using service
orders.
Activation:
To activate SCR, the subscriber uses the following dialing sequence when
placing a call:
LH DT * SC # SDT TN IND RT
If the feature has not been assigned to the subscriber line, a NACK is
provided and the subscriber is unable to continue the dialing sequence.
The following information is recorded in the SCR log associated with the
call upon end of metering:
• start date (6 digits): the date that the call originated (YYMMDD).
• answer time (6 digits): the time that answer was received (HHMMSS).
• calling number (maximum 10 digits): the directory number of the
originating party.
• called number (maximum 18 digits): the directory number of the
terminating (dialed) party.
• call duration (8 digits): the duration of the call (minutes) between answer
and disconnect.
• pulse count (8 digits): the number of pulses recorded for the call.
• meter name: the meter name used for the call.
Operating company administration uses the calling number to ring back the
subscriber after computing the call charges from the pulse count and the
meter name indicated in the log.
Call transfer
Call transfer can be assigned to a subscriber either as a stand-alone feature,
or combined with 3WC/6WC. Call transfer is assigned to a line by
operating company administration.
The subscriber assigned this feature can transfer either an incoming call, or
an originated call, to a third party. Restrictions on the types of call that can
be set up for the second leg can be established on a per-office basis using
translations tables. Call transfer to treatment is allowed. This feature cannot
be used while in 6WC conference mode.
Call charges are assigned for the first leg to the originator of that leg, and for
the second leg to the subscriber setting up the transfer.
The following dialing sequences control this service:
• R SDT TN
Party B is held and party A rings party C.
• R SDT 0
Party A remains connected to the active party. A disconnect tone is sent
to the held party.
• R SDT 1
Party A is reconnected to the held party.
• R SDT 2
Held and active calls are switched.
• R SDT 4
Held and active party are connected and controlling party disconnected.
Three and six way calling
This feature allows a subscriber to place an existing two-party call on hold
and set up an inquiry call to another subscriber. The subscriber then has the
following options.
In the 3-way calling scenario, the subscriber initiating the 3WC can:
• switch speech paths between the held party and talking party
• connect all parties into a 3-port conference
• remove the conference and reconnect to a single party.
• DT: digitone
DTSR OM counts show delays for the following reasons:
• CC Call Processing realtime delays. Calls can be held up in the
origination or process queues.
• Resource failures in the CC.
• Delays caused by CC overload controls.
The DTSR counts do not show delays of originations that have gone through
guaranteed dial tone (GDT). GDT ensures that a subscriber will eventually
receive dial tone without having to re-originate. With GDT, all originations
are placed on a queue. When a terminal has been queued for more than
three seconds, the PORGDENY register of the OM group PMOVLD is
pegged. These counts are separate from the DTSR pegs and should be
consulted along with the OM group DTSR to obtain full details of dial tone
delay.
Data collection
The CC starts this feature in the IXPM by downloading the appropriate static
data when it returns the IXPM to service. The craftsperson can turn DTSR
on or off at the DTSR level of MAP CI, using the BIND and UNBIND
commands. The LCMs collect DTSR statistics for each call. At the end of a
reporting interval, the DTSR statistics for all LCMs are reported to the CC.
Figure 4–1 shows the distribution of data collection functions. The IXPM
polls the LCMs and accumulates the results over the units of the LCMs.
Each IXPM then packages the results for its LCMs and sends an IXPM
DTSR report to the CC. The CC stores the data on a per-LCM basis.
The three types of processors are responsible for the following functions:
• CC – Store DTSR statistics
• IXPM:
— Poll the units of the LCMs for DTSR statistics.
— Accumulate DTSR statistics per LCM.
— Send IXPM DTSR statistics report to CC.
• LCM:
— Collect DTSR statistics.
— Send DTSR statistics report to IXPM.
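The three-level collection scheme can be sketched as follows; the dictionary shapes and delay values are illustrative, not the switch's message formats:

```python
# Sketch of DTSR collection: LCMs collect per-call statistics, each IXPM
# accumulates them per LCM and reports to the CC, and the CC stores the
# data on a per-LCM basis across reporting intervals.

def ixpm_report(lcm_stats: dict) -> dict:
    """Accumulate per-call dial tone delay counts into one total per LCM."""
    return {lcm: sum(delays) for lcm, delays in lcm_stats.items()}

def cc_store(store: dict, report: dict) -> None:
    """CC keeps DTSR data on a per-LCM basis."""
    for lcm, total in report.items():
        store[lcm] = store.get(lcm, 0) + total

cc_data = {}
cc_store(cc_data, ixpm_report({"LCM0": [1, 2, 3], "LCM1": [4]}))
print(cc_data)  # {'LCM0': 6, 'LCM1': 4}
```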
Figure 4–1 DTSR collection (FW-31151): the central control sends a DTSR
statistics request to the IXPM; the LCM sends a DTSR statistics report to
the IXPM; the IXPM sends an IXPM DTSR statistics report to the central
control.
Figure 4–1 shows only the continual data flows between processors. The
CC also sends a message to the IXPM on return to service and reset to
update the static data that tells the IXPM how often to poll the LCMs and
whether to collect DTSR data.
Faultsman’s ring-back
This feature allows a craftsperson at the subscriber premise to test continuity
on the line and verify ringing without additional assistance. The
craftsperson takes the phone off-hook, dials a special access code upon
hearing dial tone, and goes on-hook. The switch sends ringing to the
subscriber line, and the craftsperson answers (goes off-hook).
By going on-hook within ten seconds, the craftsperson acknowledges to the
switch that the line is functioning correctly, and the line is set to normal
operating state (idle). If the craftsperson does not go on-hook within the
timeout period, the switch assumes the line is faulty. The line is locked out
and a log is generated.
This feature applies to DP and DTMF phones, and all types of residential,
coin, and PBX lines. The ringback time ranges from 60 to 180 seconds and
is controlled by office parameter FRB_RINGING_TIME in table OFCOPT.
The default value is 60 seconds.
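The acknowledgment logic can be sketched as follows; the ten-second window and the FRB_RINGING_TIME range are taken from the text, while the return labels are descriptive:

```python
# Sketch of faultsman's ring-back: going on-hook within ten seconds of
# answering marks the line good; otherwise the line is assumed faulty,
# locked out, and a log is generated.

ACK_WINDOW_S = 10  # craftsperson must go on-hook within this window
RINGING_MIN, RINGING_MAX, RINGING_DEFAULT = 60, 180, 60  # FRB_RINGING_TIME

def ringback_outcome(on_hook_after_s) -> str:
    """Return the resulting line state after the ring-back test."""
    if on_hook_after_s is not None and on_hook_after_s <= ACK_WINDOW_S:
        return "idle"       # line verified, returned to normal state
    return "locked out"     # assumed faulty; a log is generated

print(ringback_outcome(4.0))   # idle
print(ringback_outcome(None))  # locked out
```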
Faultsman’s digit test
This feature provides the Faultsman’s Digit Tests (FDT) for the faultsman
(field engineer) to test a telephone from the subscriber’s premises. It
complements the existing support for direct maintenance (no operator
intervention) provided by faultsman’s ring-back.
The test operates in two stages:
• digit reception test (DRT): the integrity of digit reception is tested by
dialing all digits on a push button telephone, or zero on a rotary dial
telephone. On a DP set, this ensures that the make/break mechanism is
working. On DTMF sets, this verifies that each digit can break dial tone.
Jamming of push buttons is also detected by the DRT.
Semi-permanent connections
A semi-permanent connection (SPC) is a connection which can be set up or
taken down by operating company personnel. The subscriber has no
signaling control of the connection, but can use the speech/data path for the
duration of the connection. The SPC is a standard voice band audio channel
which supports inband audio transmission only. Supervision signals such as
on-hook and off-hook are not supported on the SPC.
The following types of SPC connections are supported:
• line to line
• line to trunk
• trunk to line
• IRLCM line to line
• IRLCM line to IRLCM line
• line to IRLCM line
• trunk to IRLCM line
• IRLCM line to trunk
Operator tones
Insert tone on coin phone terminations
This feature causes pay phone recognition tone to be given to the operator
when a call terminating on a coin phone is answered.
When a caller asks the operator for a collect call to a number, the call is
charged to that called number. If the called number is a coin phone, an
indication must be given to the operator, who can prevent fraud by refusing
to connect the call.
Certain types of coin phones provide a tone on answer. In those cases, both
tones will be audible to the operator.
Toll break in – background tone
This feature allows the DMS-100 International system to apply the
appropriate tone sequence when an operator offering a toll call to a
subscriber finds the subscriber busy on another call and has to barge-in on
the call.
TBI MAP signaling command
This feature allows testing of the toll break in (TBI) signal, by simulation
from the trunk test position (TTP) level of the maintenance and
administration position (MAP).
The service hall can be configured as any type of hunt group (DLH, DNH,
MLH, BNN), with the attendant’s line external to the group.
Two types of calls can be placed from a service hall:
• direct dial call (automatic)
• operator assisted
For each direct dialed call originating at the service hall, a billing record is
printed. Operator assisted calls are manually ticketed. Because call charges
are a function of meter pulses, the billing record shows zero charges for
toll calls, which are usually not metered. If metering information is
present for a toll call, the billing record indicates the toll charges.
The following terminology is used to describe the service hall feature:
• service hall: a public site where telephone calls can be placed.
• APS (attendant pay station): the line option assigned to each telephone
in the service hall.
• local call: calls tariffed in the local switch are classified as local calls.
• toll call: calls that are routed over CAMA trunks (usually not metered)
are referred to as toll calls.
• billing record: logs generated for direct dialed calls are called billing
records.
Service hall operation
The sequence of events for a typical call from the service hall is as follows:
• Customer checks in with the attendant by providing call details and a
deposit (if required). The customer then joins the queue.
• If the call is a direct dial call, the attendant assigns a telephone to the
customer.
• If the call is a booked call, the attendant requests the booking operator
for the call. When the call is ready, the operator dials the service hall.
At this point, the attendant can direct the customer to the appropriate
telephone.
• On call completion, a billing record is printed for direct dialed calls.
Other types of calls are manually ticketed.
• The customer settles his or her account with the attendant.
Announcements
Recorded announcement service (local)
Recorded announcement service provides a verbal announcement to
originating lines which have voluntarily requested access to a recorded
announcement (time, weather, etc.) or have been routed to an intercept
announcement for calls that cannot be completed as dialed.
Calls routed to an announcement, in response to a voluntary customer
request, receive charge treatment. Calls routed to an intercept
announcement are not charged to a customer. Therefore, answer supervision
is not returned to the originating office for interoffice calls routed to an
intercept announcement.
Digital recorded announcement system (DRAM)
The DRAM provides recorded announcements, stored in digital format,
which are accessible on either a “barge-in” or “non barge-in” basis. A set of
standard announcements are available on programmable read only memory
(PROM) cards, while others can be generated on site by the operating
company using RAM or electronically erasable PROM (EEPROM) cards.
The DRAM can be used for local recorded announcements, special
information tones, or custom applications such as calling number
IXPM features
IXPM warm SWACT
Warm switch of activity (SWACT) capability between a running IXPM unit
and its dedicated standby unit is supported on the DMS-100 International
switch. A warm SWACT maintains all calls that are in session and in a
stable state during the transfer of activity from the active unit to the
inactive unit of an IXPM. Activity changes can occur by manual or system
request.
Calls are considered stable and in session if they have received answer and
no events have occurred after answer. Calls that have not received answer
before the SWACT (calls in set up), or that have received a flash after
answer (feature activation, and so on), are dropped during the SWACT.
Reasons for SWACT
SWACTing calls from one unit to another can occur for a variety of reasons,
including:
• maintenance requirements
— hardware
– repairs or upgrading
– out-of-service testing
– exercising the SWACT mechanism
— software
– updating code
– updating data
• fault recovery
— hardware
– unrecoverable faults
– device degradation
— software
– non recoverable processor trap
– an audit failure
SWACTing for maintenance reasons is generally considered a controlled
SWACT, as the transfer of activity is done by request. SWACT for fault
recovery reasons is generally considered to be uncontrolled, in the sense that
the time at which the activity transfer occurs cannot usually be chosen or
deferred. Uncontrolled SWACT generally represents the worst case
situation with respect to the impact on subscribers, especially if it occurs
during heavy traffic periods. If possible, a warm SWACT should be
performed during light traffic periods.
IXPM performance monitoring
IXPM performance monitoring provides easily accessible information to the
craftsperson to indicate a particular peripheral’s performance. This is
accomplished using the MAP level PERFORM, a sub-level of the MTC PM
level.
PM activity (PMACT)
The PMAct sub-level, accessed by entering PMACT at the Perform level,
displays the percentage of the realtime used in the MP, SP and FP. PMAct
divides the realtime being used into three categories: call processing
occupancy, high priority background occupancy and low priority
background occupancy. The combination of the high priority and the call
processing occupancies indicates the realtime used to actually provide
service, while the low priority background occupancy indicates the realtime
spent in running audits and diagnostics.
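The PMAct occupancy categories above can be illustrated with a minimal calculation. The function and field names are assumptions for illustration; they are not PMACT output formats.

```python
def occupancy_summary(call_proc: float, high_bg: float, low_bg: float) -> dict:
    """Summarize realtime occupancy percentages; the remainder is idle.

    call_proc: call processing occupancy (%)
    high_bg:   high priority background occupancy (%)
    low_bg:    low priority background occupancy (%)
    """
    service = call_proc + high_bg  # realtime actually providing service
    idle = 100.0 - (call_proc + high_bg + low_bg)
    return {"service": service, "audits_and_diags": low_bg, "idle": idle}

summary = occupancy_summary(call_proc=40.0, high_bg=10.0, low_bg=5.0)
```

Here 40% call processing plus 10% high priority background yields 50% service occupancy, with 5% spent on audits and diagnostics.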
MAP level details
All Perform sub-levels provide the same commands: Strt, Stop, Strtlog and
Stoplog. The measurements begin when the Strt (start) command is entered.
This command can take one parameter, which specifies how long the tool
will run. The default is set at 15 minutes. The Stop command is used to
stop measurements.
The Strtlog (start logs) and Stoplog (stop logs) commands provide the
capability to output the results displayed on the screen as logs. Once started,
logs are printed every 15 minutes.
Interoffice features
Interoffice address signaling
Refer to chapter 6, “Signaling and Interfaces” for information on interoffice
address signaling.
Call state supervisory signaling
Refer to chapter 6, “Signaling and Interfaces” for information on
supervisory signaling.
Intraoffice connecting arrangements
An intraoffice connecting arrangement is provided to establish a connection
between two customers served by the same switching system.
International killer trunks
The killer trunk feature is a software replacement for the typical external
individual circuit usage and peg count (ICUP) equipment. It attempts to
detect faulty trunk circuits or facilities that are not detectable by normal call
testing. Trunk members to be detected will have at least one of the
following properties:
• killer trunk – a trunk member which is repeatedly seized but due to a
malfunction is not held for an appreciable length of time. For example,
bad transmission will cause the subscriber to drop the connection and
re-attempt the call. Within a group these trunks will have a higher than
average attempt rate.
• slow release trunk – a trunk member that has a low attempt rate coupled
with a fairly high usage. Malfunctioning supervisory equipment is
typically the cause of this.
• always busy trunk – a trunk member which has zero call attempts, and is
busy for the entire report interval. Causes include under-engineering of
the group, normal high usage, and equipment malfunctions.
• always idle trunk – a trunk member which has a usage of 0 and zero
attempts. Improper network management controls, over-engineering and
equipment malfunction are among the causes.
A maximum of 2048 trunk members can be assigned for killer trunk
detection. If a trunk is detected to be both a killer trunk and a slow release
trunk, it is flagged as a killer trunk.
By gathering usage and peg statistics on a per-trunk basis, over some
specified interval, trunk members with the above properties are identified.
This is accomplished through the use of registers assignable on a trunk
group basis which accumulate the trunk attempts and connect duration. The
peg statistics used for killer trunks are not related to OM pegs. The Killer
Trunk process periodically runs through the registers and computes the
average call holding time for each trunk member instrumented. A report is
generated identifying killer, slow release, always busy, and always idle
trunks.
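The classification rules above can be sketched as follows. The thresholds and parameter names are illustrative assumptions only; the real feature works from datafilled registers and intervals, and, as noted, a trunk that qualifies as both killer and slow release is flagged as a killer trunk.

```python
def classify_trunk(attempts: int, usage: float, interval: float,
                   avg_attempts: float) -> str:
    """Classify one trunk member over a report interval.

    usage and interval are in the same unit (e.g., seconds);
    avg_attempts is the group's average attempt count (assumed input).
    """
    if attempts == 0:
        # busy for the whole interval vs. completely unused
        return "always busy" if usage >= interval else "always idle"
    holding_time = usage / attempts
    is_killer = attempts > 2 * avg_attempts and holding_time < 5.0
    is_slow_release = attempts < 0.5 * avg_attempts and usage > 0.8 * interval
    if is_killer:              # killer takes precedence over slow release
        return "killer"
    if is_slow_release:
        return "slow release"
    return "normal"
```

A trunk seized 50 times in an hour but held only seconds per call would be reported as a killer trunk; one seized twice but held nearly the whole hour would be reported as slow release.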
IRLCM features
Intraswitching on the IRLCM
The IRLCM consists of two line concentrating arrays (LCAs) within and
between which connections are established for intraswitched calls. Within
the IRLCM, intraswitching is broken down into two types.
• Intra-LCA – subscribers located on the same LCA are connected via an
intra-channel.
• Inter-LCA – one subscriber on one LCA and the other on the mate LCA
are connected via an inter-channel.
Figure 4–2
Intra/inter LCA connections
FW-31152
[Figure not reproducible in text: it shows the IRLCM's two line
concentrating arrays (LCA 0 and LCA 1), intraconnections within each LCA,
an interconnection between the two LCAs, and the host channel from the
IRLCM to the ILGC.]
Legend:
ILGC International line group controller
IRLCM International remote line concentrating module
LCA Line concentrating array
In ESA mode, reorder tone would be given immediately after dialing the 9.
This is because no translation data about routing to trunks would have been
downloaded, and treatment would be given for the first digit.
Functionality of ESA
The functionality of ESA is divided between the CC, the ESA processor and
the IRLCM and is classified as follows:
• entry and exit
• call processing (and support)
• translations
• static data collection and downloading
• maintenance and diagnostics.
If any of these indicate a failure, a decision is made (based also on the status
of the mate link) to enter, or not to enter, ESA.
ESA is requested if the unit’s own link is bad and if one of the following
conditions exist:
• The mate unit also has a failed link
• The mate is inactive
• The inter–unit communication (IUC) link has failed
If both IRLCM units are active, then both must request ESA in order for
the request to be accepted.
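The ESA entry conditions above can be sketched as a pair of predicates. This is a hedged illustration of the documented rules; all names are assumptions, not DMS software.

```python
def unit_requests_esa(own_link_ok: bool, mate_link_ok: bool,
                      mate_active: bool, iuc_ok: bool) -> bool:
    """A unit requests ESA only if its own link is bad and the mate
    cannot cover: mate link failed, mate inactive, or IUC link failed."""
    if own_link_ok:
        return False
    return (not mate_link_ok) or (not mate_active) or (not iuc_ok)

def esa_entry_accepted(unit_requests, unit_active) -> bool:
    """If both units are active, both must request ESA for entry."""
    active_requests = [r for r, a in zip(unit_requests, unit_active) if a]
    return bool(active_requests) and all(active_requests)
```

For example, with both units active, ESA entry is accepted only when both units request it; if one unit is inactive, the remaining active unit's request suffices.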
ESA exit
While in ESA mode the message link from the IRLCM goes to the ESA
processor. The ESA processor exercises control over all calls. When the
ESA processor recovers messaging capability with the ILGC and is ready to
surrender control, it sends a command to both units of the IRLCM
requesting an ESA exit. All further messages from the IRLCM are ignored
once this request has been sent.
The IRLCMs instruct the LCC cards to restore host communication and set a
timer to prevent oscillating into and out of ESA.
The only type of exit supported by international ESA is the type known as
“cold”. With “cold” exit, all calls are dropped when CC control is regained.
Call processing
The ESA processor supports calls made between lines (POTS or coin) while
in ESA mode. Since communication with the CC is lost, access to the
resources of the CC is also lost. As a result, no CC–based functions are
provided.
While in ESA mode the following functions are not provided:
• Call processing features
• Logs
• Metering
• Facility maintenance
• Service analysis
• Operational measurements
Note the absence of the ambiguous and feature tables from the above list.
These are not supported in ESA and are not downloaded. Note also the
absence of tables FAHEAD and FACODE. These tables are not supported in
ESA, as they are normally used on calls to foreign areas. Since ESA is
designed for local area calls only these tables are not downloaded to save
memory for other, more relevant, tables.
In addition to the above tables, terminal information is downloaded,
including some fields from table LINEATTR. Data is collected only for
those lines not in an offline, manual busy or unequipped state. Lines in any
of the above three states are unable to process calls in ESA mode.
Data selection
In some tables, not all of the contents are required for simple line–to–line
translations. Only the relevant tuples are collected and sent to the peripheral
module. In this context a relevant tuple means any tuple that could be
required for translating digits from any of the terminals on the IRLCM.
Nonrelevant tuples are not downloaded and ESA translations continue as if
the tuples were never present.
The basis of this information is the translations start point in table
LINEATTR, field IXLNAME. Since most subscriber features are not
supported in ESA mode, and the only treatment available is reorder tone,
many tuples are not required.
Data is collected whenever ESA static data is downloaded. Changes by
maintenance personnel are not necessarily propagated to the ESA processor
immediately.
Data restrictions
The data store available for static data within the ESA processor is limited. It
is possible that the complete set of relevant tuples from the CC translation
tables will not fit into the ESA processor. In this case the extra tuples are
discarded during data collection and the static data loading is still indicated
as successful. At least one ESA log will indicate to maintenance personnel
that this has occurred.
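The data selection and data restriction behaviour above can be sketched together: only relevant tuples are collected, and collection stops silently at the ESA processor's capacity (with an ESA log flagging the overflow). The function shape and names are illustrative assumptions.

```python
def collect_static_data(tuples, relevant_keys, capacity):
    """Collect relevant tuples up to a fixed capacity.

    tuples: iterable of (key, data) pairs from a CC translation table
    relevant_keys: keys reachable from any terminal on the IRLCM
    Returns (collected, overflowed); overflowed=True means extra tuples
    were discarded, which in the real system is reported by an ESA log.
    """
    collected, overflowed = [], False
    for key, data in tuples:
        if key not in relevant_keys:
            continue                  # nonrelevant tuple: never downloaded
        if len(collected) >= capacity:
            overflowed = True         # discarded; load still reported good
            continue
        collected.append((key, data))
    return collected, overflowed
```

Note that, as in the documented behaviour, an overflow does not fail the static data load; it is only flagged for maintenance personnel.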
ESA options
ESA for the IRLCM is an optional package based on the DMS-100
International product. The ESA processor is not required for normal
operation of the lines on the IRLCM and can be out of service without
affecting subscribers.
Downloadable tones
This feature allows XMS-based peripheral modules (XPMs) to support
market-specific service tones using common hardware. The tone parameters
for each market are stored in software modules and are downloaded to the
XPM as part of the static data.
The numbers and specifications of the service tones required depend on the
application. The group of parameters (for example, frequency, level, and
tone-id) that define the service tones used in each particular application are
stored in the CC as tone sets. The tone set needed in each XPM is identified
in datafill, and the tone parameters are downloaded as part of the PM’s
static data at return to service (RTS). The XPMs use these parameters to
generate the service tones.
XPM static data audit on downloadable tones
This feature audits the peripheral static data for corruption by checking for
data mismatch between the CC and the XPM. On subsequent return to
service actions, based on the data checksum from the audit, the CC
determines if it is necessary to download the data to the XPM.
Static data download
On RTS, the CC queries the checksum of the XPM’s static data. If this
checksum does not match that of the CC, the static data is sent to the XPM.
The CC also sends data for the XPM tone generation facility.
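The audit's download decision reduces to a checksum comparison, as sketched below. The specification does not state the checksum algorithm; `zlib.crc32` is used here purely as a stand-in.

```python
import zlib

def needs_download(cc_static_data: bytes, xpm_checksum: int) -> bool:
    """Download static data to the XPM only when the checksum of the CC's
    copy disagrees with the checksum reported by the XPM."""
    return zlib.crc32(cc_static_data) != xpm_checksum

data = b"tone set 12: 425 Hz, -10 dBm0"  # hypothetical static data
matched = needs_download(data, zlib.crc32(data))   # no download needed
corrupt = needs_download(data, 0)                  # mismatch: resend data
```

This mirrors the documented behaviour: a matching checksum lets the RTS action skip the download entirely.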
Second dial tone over trunks
Second Dial Tone Over Trunks is used in situations when the local exchange
does not have enough registers to collect dialing digits. In these cases, when
the caller dials an access code (typically an International access code), the
local exchange cuts the subscriber through to its outgoing (OG) trunk. Upon
detection of a trunk seizure in the far end office, the corresponding incoming
(IC) trunk applies a second dial tone alerting the user to continue dialing
digits. The second dial tone ceases when the first dialed digit is received, or
the call is routed to treatment.
The Second Dial Tone feature can be assigned on a trunk group basis to DP
trunks by datafilling field DIALTONE in table DGHEAD. The datafill
options for applying second dial tone towards the originator after trunk
seizure are as follows:
• NONE This is the default value. No second dial tone is used.
• NORM A standard second dial tone specified in the tone set is
used.
• SPEC A non-periodic second dial tone specified in the tone set
is used.
• SPEC2 A periodic second dial tone specified in the tone set is
used.
The tone set for second dial tone is country-specific. Specifications for the
tone sets are provided on a per-country basis in table 4–1 on page 4–42.
Table 4–1
Second dial tone definitions in International markets

Market | Toneset in table LTCINV | NORM (Hz) | SPEC (Hz) | SPEC2 (Hz)

(NORM, SPEC, and SPEC2 are values of the DIALTONE field in table DGHEAD.)
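The DIALTONE options can be summarized as a simple lookup, sketched below. The descriptive strings and function name are illustrative assumptions, not DMS datafill or tone-set entries.

```python
# Mapping of DIALTONE datafill values to the tone applied toward the
# originator after incoming-trunk seizure (descriptions paraphrase the
# option list above).
SECOND_DIAL_TONE = {
    "NONE":  None,  # default: no second dial tone
    "NORM":  "standard second dial tone from the tone set",
    "SPEC":  "non-periodic second dial tone from the tone set",
    "SPEC2": "periodic second dial tone from the tone set",
}

def tone_on_seizure(dialtone_option: str = "NONE"):
    """Return the tone description for a trunk group's DIALTONE option."""
    return SECOND_DIAL_TONE[dialtone_option]
```

With the default value NONE, no second dial tone is applied; the other options select a tone from the country-specific tone set.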
Audible alarms
The system provides the software and hardware for generating audible
alarms. Unique alarms are used to distinguish different alarm levels.
Visual displays
Visual displays are provided to indicate the various types of alarms locally
and at locations other than the maintenance control center in accordance
with local practices.
Output messages
In addition to audible and visual alarms, output messages report alarmed
maintenance information to the craftsperson. Reference to paper based
documents is not normally necessary to isolate the trouble and perform
further tests.
Trouble status indicators
Trouble status indicators in the office include (1) system status indicators,
which display the status of the major units of equipment of the system;
(2) state-of-health indicators, which do not necessarily denote trouble but
report such conditions as system load; and (3) other visual indicators, such
as aisle pilots and exit alarm panels.
Office alarm subsystem
The DMS-100 International alarm system generates audible and visual
indications of trouble conditions detected within the switch or in associated
equipment.
Alarm inhibit
Craftspersons can selectively inhibit certain non-critical alarms in special
situations where repeated alarms would mask valid alarms.
Trouble verification
The DMS-100 International switch takes actions to verify that a trouble
exists prior to providing a trouble indication. Additional testing can be
invoked prior to initiation of repair to determine the continued existence of
the problem and to verify that the trouble has been cleared. Thresholding is
available for certain classes of troubles in order to avoid premature service
recovery action and trouble notification.
Trouble sectionalization and isolation
Test hardware, software, and procedures are provided to locate a defective
unit with no disruption of service once a trouble condition has been detected
and verified.
Trouble location procedure
In order to facilitate the isolation of faults in DMS-100 International
systems, the maintenance and administration position is provided with a
User interfaces
For additional information on user interface features, refer to chapter 10,
“Maintenance”.
Maintenance and administration position (MAP)
The MAP provides an interface to assist in the fault recovery, isolation, and
recovery processes. The assignment of various maintenance and
administrative functions to specific MAP units is specified by the operating
company. A specific MAP unit can be assigned the capability to perform all
functions if desired. Alternatively, specific MAP units can be assigned
subsets of the total capability, or single functions.
Switching system control and display interface
The MAP VDU is the primary man-machine interface between maintenance
personnel and the DMS-100 International system. The VDU screen area
available for system output is 80 characters wide by 24 lines long. In the
maintenance mode, the screen is divided into a number of areas which
display the following types of information:
• System status area (three lines by 80 characters) indicates the alarm
and/or operational status of the system, with immediate automatic
updating of the current display.
• Work area (variable number of lines by 68 characters wide) provides:
— Descending levels of subsystem status
— Display of working data (posted trunk numbers and voltage and
frequency levels applied and measured).
• Command menu display area (20 lines by 12 characters wide) defines
the function which can be performed at the position at any given time.
• Command interpreter output area (variable number of lines by 58
characters wide) provides:
— Output of system reports (including error, action taken, and
diagnostic messages) upon operator request
— Output defining specific components of the system.
• Input echo area (one line by 68 characters) provides an echoed statement
of the most recent operator input command string.
• User identification and time (two lines by 12 characters) provides the
identification of the user logged into the MAP and the time of day.
Alarm release
Audible alarms in the DMS-100 International switch can be cancelled by the
removal of the alarm condition or, if the alarm is a result of a
system-detected fault, by the operation of a key on the Alarm Control and
Display Unit (ACD) located near the MAP. Subsequent audible alarms are
not inhibited.
Remote maintenance
This feature provides interfaces permitting full remote maintenance
operation excluding repair requiring physical action on equipment. The
feature allows, at the remote maintenance location, all MAP capabilities
available at the switching system location. These capabilities include I/O
messages; visual and audible status indications and alarm conditions;
equipment control; unit isolation; system reconfiguration and initialization;
office data access and change; and control of AC, DC, and transmission tests
on trunks. These capabilities allow craftspersons at the remote maintenance
location to operate, administer and maintain the switching system with the
same effectiveness as at the switching system site.
transmission level and noise measurements. Line and trunk access are
provided to the 100-type test line.
101-type test line
A 101-type test line provides a communication line to a craft position.
102-type test line
A 102-type test line applies a precise tone for approximately nine seconds
on, one second off, in a repetitive sequence.
Faultsman’s ring-back
Faultsman’s Ring-back is a maintenance feature which is used by a
faultsman/linesman (field engineer) to test continuity of a line while on the
subscriber’s premises, without the help of a counterpart in the exchange. It
can also be used by the faultsman to obtain physical ringing on the
subscriber’s telephone to enable adjustment to be made to the ringer.
Administrative features
Refer to chapter 7, “Administration”, for additional information on
administrative features.
Data base management-memory alteration
Assignments
A means of specifying line/telephone numbers, trunks, routing, and charging
assignments is available. A means of specifying the miscellaneous
equipment assignments is also available.
Pending order file (POF)
With this feature, DMOs can be activated immediately upon entry or placed
in the POF in system memory for activation at a later time. These DMOs
are input using File Editor and Table Editor commands. The file editor is
used to create the POF; the table editor DMO commands are stored within
the newly created file. While in the POF, DMOs can be queried, changed, or
deleted. They can also be activated individually or all at once with a single
command entry.
Initialization and growth
A means is provided to initialize the data model and the contents of the data
base, and for the data base to grow as additional equipment is added in the
office.
Office records verification and statistics
A means is provided to obtain office records and statistical summaries of the
data base for many operating company functions, perform ad hoc requests
for data in the data base, and audit a data base external to the central office
with the data base contained in the central office.
the use of switched link chains, which has the effect of providing wide
access using limited servers. The network measures are event measures (for
example, terminating calls) or usage measures (for example, service circuits
usage) that relate to the performance of the network.
Customer measurements
Service offerings for individual customers that result in the definition, within
the system, of a functional (from a call processing point of view) and private
(from a tariff point of view) grouping of lines or stations can have associated
measurements on a group basis.
Validity measurements
Information is provided with operational measurements that allows the data
user to judge the reliability of the data.
Service measurements
Customer access service measurements
See section “Automatic traffic measurements” in this chapter.
Maintenance service measurements
Maintenance measurements provide data which can be used to evaluate
equipment performance and the impact of troubles on customer service, and
also to calculate an operating company defined performance index.
Call processing
This chapter summarizes the manner in which call processing occurs on
DMS-100 International switches. The explanations in this chapter are
general in nature and apply to generic DMS-100 International capabilities,
which may be modified for individual market requirements. The specific
handling of call processing for DMS-100 International switching systems
varies depending on the market in which the system is deployed, the
application of the system, and the hardware and software provisioned.
Call processing software in DMS-100 International switches is distributed
between the central control element (the Central Control on NT40-based
systems, or the DMS-Core on SuperNode-based systems) and the peripheral
modules (PMs). The following paragraphs and illustrations describe the
sequence of events for line and trunk originated calls, followed by a
structural description of DMS-100 International call processing software in
general.
Line originated calls
The following is the sequence of events for line originated calls:
1 A seizure is detected from an idle line.
2 If system resources (for example, universal tone receiver for MF lines, or
ILGC-to-network DS30 channel) are available to handle the call
origination request, the system performs step 4.
Otherwise, the system performs step 3.
3 No treatment is applied to the originator until the required system
resources are available (the subscriber does not receive any tone).
— If the originator releases before system resources are available, the
line returns to idle state.
— If the originator remains off-hook until the system resources are
available, the system performs step 4.
4 Dial tone is applied to the line and a timer is started to record the timeout
for the reception of the first digit.
— If the first digit is received before this timeout, dial tone is removed
from the line and the system performs step 6.
— Otherwise, the system performs step 5.
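Steps 1 through 4 above can be sketched as a tiny state walk. This is an illustration of the documented sequence only; it is not DMS call processing software.

```python
def line_origination(resources_available: bool,
                     releases_while_waiting: bool) -> str:
    """Walk steps 1-4 of line origination for one seizure.

    Step 1 (seizure detected on an idle line) is the implicit entry point.
    """
    if not resources_available:          # step 2 fails, go to step 3
        if releases_while_waiting:
            return "idle"                # originator gave up; no tone applied
        # otherwise the originator stays off-hook until resources free up
    # step 4: apply dial tone and start the first-digit timeout
    return "dial tone; first-digit timer started"
```

If resources are available immediately, or the originator waits off-hook until they are, dial tone is applied; if the originator releases first, the line simply returns to idle.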
Figure 5–2
Key for line and trunk originated call flow illustrations
[Flowchart figures not reproducible in text. Figure 5–2 defines the step,
decision, and state symbols used in the call flow illustrations. The
accompanying multi-page flowchart traces the line originated call flow
described above: idle, seizure, wait for system resources, dial tone,
digit collection with timeout and partial-dial (PDIL) treatments and
lockout, translation and routing, outgoing trunk selection (BLDN, VACT,
and GNCT treatments, with "no free trunk available within route list"
routing to the appropriate treatment), line termination with hunting and
ringer checks (busy, NBLN, and NBLH treatments), ringing with ringback
tone, answer wait with answer timeout, and release, with the network
connection and system resources freed and metering stopped.]
Treatments
Following is a list of the treatments set by DMS-100 International Family
Systems. Other treatments can also be datafilled to an office.
• BLDN – blank directory number. This treatment is required for routing
of unassigned directory numbers.
• BUSY – busy line. The treatment to which a line or trunk is routed
when the terminating line is not idle. Some examples are:
— when a line dials its own directory number.
— when a line or trunk dials a number which is busy.
— when the called line has been seized for testing.
— when the PM for the called line is busied for maintenance.
If option CIR (circular hunting) is assigned to the group, all lines in the
group are hunted, regardless of the start point of hunting. If CIR is not
assigned, the default is sequential hunting (sometimes called linear hunting).
Sequential hunting starts at the number dialed and ends at the last number in
the hunt group. Therefore, if the pilot DN is not dialed, not all lines are
hunted.
Multi-line hunting (MLH)
There is only a pilot DN associated with the hunt group. To access the
group, the pilot DN is dialed. Hunting starts with the pilot DN and ends at
the last line, in sequential order.
Distributed line hunting (DLH)
Hunting always starts at the line in the group following the one that was
most recently selected. DLH is assigned to large hunt groups which
require equal distribution of calls.
During hunting, lines which are busy, have a termination restriction, or are
marked as failure suspects will be skipped. Lines which cannot be
connected due to ILGC or network blocking due to traffic conditions are
also skipped. If the call cannot be terminated at any line at the end of
hunting, the hunting algorithm is repeated, this time also trying the failure
suspected lines. If this also ends with no successful termination, then busy
treatment is applied to the originator.
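The three hunting disciplines above can be sketched as follows. The busy, restriction, and failure-suspect filtering described in the text is reduced here to a single set of idle lines; names and signatures are illustrative assumptions.

```python
def sequential_hunt(lines, start_dn, idle):
    """Default (linear) hunting: start at the dialed DN and stop at the
    last line in the group, so earlier lines are never hunted."""
    i = lines.index(start_dn)
    return next((dn for dn in lines[i:] if dn in idle), None)

def circular_hunt(lines, start_dn, idle):
    """Option CIR: hunt every line in the group regardless of start point."""
    i = lines.index(start_dn)
    rotated = lines[i:] + lines[:i]
    return next((dn for dn in rotated if dn in idle), None)

def distributed_hunt(lines, last_selected, idle):
    """DLH: start at the line following the one most recently selected,
    so calls are distributed evenly across the group."""
    i = (lines.index(last_selected) + 1) % len(lines)
    rotated = lines[i:] + lines[:i]
    return next((dn for dn in rotated if dn in idle), None)
```

For example, with lines 2001-2004 and only 2001 idle, a sequential hunt starting at 2003 finds nothing, while a circular hunt starting at the same point wraps around and terminates on 2001.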
Trunk originated calls
DMS-100 International systems handle trunk originated calls as follows:
1 An incoming trunk sends an origination message to the CC.
2 If system resources (for example, universal tone receiver) are available
to handle this call origination request, the system performs step 4.
Otherwise, the system performs step 3.
3 No treatment is applied to the originating trunk until the required system
resources are available.
— If the originator releases before the system resources are available, it
is returned to idle state.
— If the system resources are available before the originator releases,
the trunk becomes call processing busy (CPB) and looks for digits. It then sends PTS back.
— Meanwhile, if the originator releases, the trunk is marked as idle.
4 The originating office is signaled to proceed to send (PTS) the digits.
Digits sent by the originating office are collected by the incoming trunk
and sent to the CC upon completion of this process.
5 Translation and routing capabilities of the system attempt to identify the
terminating agent as one of the following and act accordingly:
17 The outgoing trunk is seized and digits to identify the terminating agent
are sent to the targeted office. After completion of digit sending:
a. If an answer signal is received from the terminating agent:
– answer is propagated to the originating agent. The two parties
are now in trunk-trunk talking state.
– the system performs step 18.
b. If the originating agent releases (that is, a CLF is received):
– clear forward is propagated to the terminating office via the
outgoing trunk.
– both agents are marked idle.
– the network connection is freed.
– system resources allocated for the call (for example, call
condense block) are freed.
18 After a trunk-to-trunk call is established between the two agents:
a. If the originating party releases first (that is, a CLF is received):
– clear forward is propagated to the terminating office via the
outgoing trunk.
– both agents are marked idle.
– the network connection is freed.
– system resources allocated for the call (for example, call
condense block) are freed.
b. If the terminating party releases first (that is, a CLB is received):
– clear back is propagated to the originating office via the
incoming trunk (unless no answer was sent to the originating
office).
– the system performs step 19.
19 With the terminating party on hook:
a. If the terminating party re-answers before a CLF is received from the
originator:
– answer is propagated to the originating office (unless the
terminating line is a free number). The two parties are back in
the trunk-trunk talking state, and the system performs step 18.
b. When the originating party releases (that is, a CLF is received):
– Both agents are marked idle.
– the network connection is freed.
– system resources allocated for the call (for example, call
condense block) are freed.
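The supervision handling in steps 18 and 19 above can be sketched as a small dispatcher: a clear forward (CLF) from the originator always tears the call down, while a clear back (CLB) from the terminating party leaves the connection up pending re-answer or a CLF. This is an illustration of the documented behaviour, not DMS software.

```python
def on_supervision(signal: str, answer_sent: bool) -> str:
    """React to a supervision signal on an established trunk call.

    answer_sent: whether answer was ever sent to the originating office.
    """
    if signal == "CLF":      # originating party releases
        return "idle both agents; free network connection and resources"
    if signal == "CLB":      # terminating party releases
        # clear back is propagated only if answer was sent to the originator
        return ("propagate clear back; await re-answer or CLF"
                if answer_sent
                else "await CLF (no clear back propagated)")
    if signal == "ANSWER":   # terminating party (re-)answers
        return "trunk-trunk talking"
    raise ValueError(f"unknown supervision signal: {signal}")
```

Note how re-answer after a CLB returns the parties to the trunk-trunk talking state, exactly as in step 19a.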
Figure 5–4
Processing of trunk originated calls
FW-31170
[Flowchart figure not reproducible in text. It traces the trunk originated
call flow described above: origination, wait for system resources, proceed
to send, digit collection, translation and routing, outgoing trunk
selection or line termination with the associated treatments (BLDN, VACT,
GNCT, busy, NBLN, and NBLH), ringing with ringback tone, answer wait,
clear forward and clear back handling with re-answer wait, lockout, and
release, with both agents marked idle and the network connection and
system resources freed. The talking state is trunk-to-line for a
trunk-to-line call and trunk-to-trunk for a trunk-to-trunk call.]
Software description
Call processing functional concepts
The function of DMS-100 International call processing software is to
establish connections among telephony agents. The connections thus
established may be used to transmit voice or data. A telephony agent is
defined as any kind of line or trunk, or any special service circuit that
performs a telephony function.
The following table lists examples of telephony agents and briefly specifies
the function of each:

Agent               Function
line                subscriber access
trunk               calls between switches
echo suppressor     reducing echo on trunks
announcement        recorded announcements
conference circuit  toll break-in
Messages exchanged between the CC and the PMs for metering purposes are
not included in this diagram for the sake of simplicity.
Figure 5–6
Messages exchanged by CC, NMs and PMs for a basic voice call
FW-31169
[Message-flow figure not reproducible in text. It shows digits messages
from the PM to the CC, "collect more digits" responses from the CC, the
last digits message, and "connect network path" messages from the CC to
the network module and the PMs.]
If the originating agent is a line, the message from the CC directs the
PM to give dial tone to the originator.
2 Receive the digits:
The peripheral knows the action to be taken to receive the digit
information from the originating agent and proceeds accordingly.
For DP lines and trunks, no receiver is required.
For multifrequency (MF) trunks, dual-tone multifrequency (DTMF)
trunks, or DTMF lines, the PM connects a universal tone receiver
(UTR) to the originating agent.
The manner in which digits are collected and sent to the CC depends on
the numbering plan used. In some cases, the PM contains the required
decision mechanism to complete the digit collection by itself and sends
only one “digits” message to the CC. In other cases, CC processing may
be necessary to determine the number of digits to collect. In such cases,
the CC begins translating the digits and sends messages to the PM telling
it how many more digits to collect. Several digits messages may be
required before all digits are received.
While waiting for digits, no processing by the CC is needed until another
message is received, so the CC condenses the call. To condense the call:
— The CC saves all information necessary to resume call processing
when a message arrives.
— The CC releases all software resources which are not required while
the call is not active. These resources can then be used by another
call.
When the CC receives a message for a condensed call, call processing
resumes. The CC decides what action to take, based on the following
factors:
— the message received.
— the information saved when the call was condensed.
3 Translate the digits:
a. The CC analyzes the digit translation data to determine the call
destination, based on the digits received and the originator’s
attributes.
b. The result of translation is a list of all the possible routes for the call.
c. Depending on the numbering plan used, translation may begin before
all the digits are received, with intermediate results saved when the
call is condensed. The CC may send messages to the PM telling it
how many more digits to collect, based on the intermediate results of
translation.
4 Select a terminating agent. For each route in the route list, the CC does
the following:
a. If the route is a trunk group:
i. selects an idle trunk from a queue of idle trunks in the trunk
group. The order in which idle trunks are selected depends on
the trunk group type.
ii. if there are no idle trunks in the group, proceeds to the next route
in the list. This is called “alternate routing”.
If the end of the route list is reached before an available route is
found, the call cannot be completed. Calls that cannot be
completed are routed to treatments, which may consist of tones or
announcements indicating the reason that the call could not be
completed. If no routes are available, the originating agent
receives “no circuit treatment”. This treatment is specified in the
office datafill, and usually consists of reorder tone followed by
lockout.
If a route is found, the CC proceeds to step 5 (establish the
telephony connection).
b. If the route is a line (call terminating in the same office):
i. examines the dynamic data on the state of the line to determine if
it is idle.
ii. if the line is idle, proceeds to step 5 (establish the telephony
connection).
iii. if the line is busy, sends a message to the originating PM
directing it to give busy tone to the originating agent.
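The selection logic of step 4 can be sketched as follows, with hypothetical route and trunk-group structures standing in for the actual route list and dynamic line data:

```python
def select_terminating_agent(route_list, is_idle_line, idle_trunks):
    """Sketch of route selection with alternate routing (illustrative).

    route_list: ordered routes, each ("trunk", group) or ("line", line).
    Returns the selected agent, or a treatment when none is available.
    """
    for kind, target in route_list:
        if kind == "trunk":
            queue = idle_trunks.get(target, [])
            if queue:
                return ("trunk", queue.pop(0))   # first idle trunk in group
            # no idle trunk: fall through to the next route (alternate routing)
        elif kind == "line":
            if is_idle_line(target):
                return ("line", target)
            return ("busy_tone", target)         # terminating line is busy
    return ("no_circuit_treatment", None)        # end of route list reached

routes = [("trunk", "GRP_A"), ("trunk", "GRP_B")]
print(select_terminating_agent(routes, lambda l: False,
                               {"GRP_A": [], "GRP_B": ["B3", "B7"]}))
# ('trunk', 'B3')
```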
5 Establish the telephony connection. A telephony connection is
established through the NMs as follows:
a. The CC selects a network path for the call.
b. The CC sends messages to the NMs and PMs, directing them to
connect the network path.
c. The messages sent by the CC to the PMs also inform them of the
integrity value to be transmitted and received in the CSM, and direct
them to look for integrity.
d. After integrity is found, the PMs connect both agents to the speech
path, if the terminator is not a line. If the terminator is a line, both
agents are connected to the speech path after the terminator answers.
The messages sent to the PMs also direct them to supervise the
connection after integrity is found. The PMs supervise the
connection by:
– continuously checking integrity.
– scanning for an answer signal from the terminator.
Interfaces
Interconnections with the outside world on DMS-100 International switches
are made using analog, digital, or a combination of analog and digital
interfaces.
• Analog interfaces (Z interfaces) are used to interface subscriber
telephone equipment. These are the line circuits and reside in the
International line concentrating module (ILCM) shelves at the host
office.
• Digital interfaces (A interfaces) provide direct access to the digital
system or equipment at the PCM-30 rate (2.048 Mb/s). These interfaces
reside in the International digital trunk controller (IDTC) shelves.
Line and trunk interfaces communicate with the DMS-100 switching
network using multiplex loops operating at 2.56 Mb/s.
Note: The DMS-100 International switch supports only direct PCM-30
trunk connections. Channel banks must be used to connect analog trunks.
Figure 6–1
Surge waveshape
[Figure: surge pulse with 1000 V crest amplitude and 500 V half-crest level. Rise time X: 10 – 100 µs; decay time to half value Y: 100 – 1000 µs.]
Surge waveshape is defined as rise time X by decay time Y to half crest value,
for example 10 x 1000 µs.
Dielectric strength
1000 V DC, applied for 1 minute to tip (A-lead), ring (B-lead), and ground
terminals, in any combination, shall not result in damage.
The typical outside plant protection used with DMS-100 International
switches is 3-Mil carbon block (350 V) for over-voltage and heat coils (150
mA, must not operate within 2 minutes at 250 mA, and must operate within
10 seconds at 350 mA). These devices are provided by the operating
company.
Overvoltage protection
Overvoltage conditions are detected by the World Line Cards (type A –
NT6X17BA and type B – NT6X18BA) when any of the following
conditions are met:
• the voltage at TIP (A-lead) equals or exceeds +75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at TIP (A-lead) is equal to or less than –75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at RING (B-lead) equals or exceeds +75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at RING (B-lead) is equal to or less than –75.0 volts ± 4.0
volts with respect to ground at any time during a 30 ms interval.
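The four conditions reduce to a single magnitude test on either lead. A sketch, with sampled voltages standing in for the hardware detector (the tolerance band is noted but not modeled):

```python
# Overvoltage is declared when any sample on TIP or RING reaches or
# exceeds +/-75 V (+/- 4 V hardware tolerance) during the 30 ms interval.
THRESHOLD = 75.0   # volts

def overvoltage(tip_samples, ring_samples):
    """True if any sample in the interval is at or beyond +/-75 V."""
    return any(abs(v) >= THRESHOLD
               for v in list(tip_samples) + list(ring_samples))

print(overvoltage([0.0, -48.0], [12.0, 80.5]))   # True: RING exceeds +75 V
print(overvoltage([-48.0], [10.0]))              # False: normal battery levels
```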
World line card characteristics are described in further detail on page 6–74.
Trunk interfaces
DMS-100 International switches can be directly connected to digital trunks,
using a PCM-30 digital carrier system (CCITT type A interface). Analog
trunks can also be connected via an appropriate channel bank.
International digital trunk controller
The International digital trunk controller (IDTC) interfaces sixteen
32-channel 2.048 Mb/s PCM-30 carrier systems with sixteen 32-channel (30
voice, 2 signaling and control), 2.56 Mb/s duplicated speech links (ports) to
the DMS-100 International digital network. An IDTC is a dual-shelf
peripheral with each shelf consisting of a maximum of 20 printed circuit
packs.
Packaging and power
A fully equipped IDTC consists of two shelves with common equipment that
provide for hot standby operation in the event of failure of common control
of one of the shelves. Each shelf contains 15 common control and switching
network interface circuit cards, one power converter card, and four PCM-30
interface cards. Each PCM-30 interface card accommodates two PCM-30
spans. In order to interface to the DMS-100 International switching
network, each shelf contains two interface cards, each of which provides
eight ports to the network. Each port is duplicated into planes 0 and 1 of the
switching network. Each incoming PCM-30 channel is mapped to a network
port.
In addition, a third universal tone receiver (UTR) card – NT6X92CB – may
be added on an IDTC+ frame to provide an additional 30 receiver channels.
PCM-30 interface specifications
Output specifications (see also CCITT Yellow Book Rec. G.703, section 6.2)
• line rate: 2.048 Mb/s phase-locked to office clock
• line code: high density bipolar (HDB3) code
• line impedance: 75 ohm resistive coaxial or 120 ohm resistive
symmetrical pair (switchable)
• pulse characteristics:
— termination:
75 ohm resistive (coaxial),
120 ohm resistive (symmetrical pair)
Delay: The transmission delay caused by the buffering of bytes in the IDTC
will not be more than three frames (375 microseconds). The average delay
will not be more than two frames (250 microseconds).
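Since one PCM frame lasts 125 µs, the quoted figures follow directly:

```python
FRAME_US = 125            # one PCM frame lasts 125 microseconds
print(3 * FRAME_US)       # 375 -> maximum IDTC buffering delay, in microseconds
print(2 * FRAME_US)       # 250 -> average IDTC buffering delay, in microseconds
```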
Register signals convey the called number digits, the calling number digits,
the type of call, and information about the status of the terminating
subscriber.
Trunk signaling timing parameters
Signaling timing parameters are specified as part of the line and register
signaling system definitions in tables LNSIGSYS, RGSIGSYS, DGHEAD
and DGCODE. Each parameter consists of a default value, a minimum and
maximum limit, and an incremental value. The default values will satisfy
most requirements. These can, however, be changed by operating company
administration, within prescribed limits. Refer to DMS-100 International
Customer Data Schema, 297-1001–451i, for detailed specifications of
variable timing parameters.
Table 6–1
Register signaling types
Abbreviation Register signaling type
End
Table 6–2
Generic R2 register signaling – numeric address information activities
Activity Meaning Explanation
DIGIT_3 digit 3
DIGIT_4 digit 4
DIGIT_5 digit 5
DIGIT_6 digit 6
DIGIT_7 digit 7
DIGIT_8 digit 8
DIGIT_9 digit 9
DIGIT_B digit 11 These activities represent overdecadic
DIGIT_C digit 12 digits (digits 11 through 15) during
DIGIT_D digit 13 transmission and reception of the
DIGIT_E digit 14 called and calling number.
DIGIT_F digit 15
End
Table 6–3
Generic R2 register signaling – address control activities
Activity Meaning Explanation
End
Table 6–4
Generic R2 register signaling – calling party category activities
Activity Meaning Explanation
End
Table 6–5
Generic R2 register signaling – ANI ID phase category activities
Activity Meaning Explanation
As a tandem or terminating
exchange, DMS does not receive
this signal.
NO_CALL_TRANS no call transfer see activity NO_CALL_TRANS on
page 6–23.
End
Table 6–6
Generic R2 register signaling – called subscriber status activities
Activity Meaning Explanation
SUB_OUT_ORD subscriber line out of order This activity indicates that the
called subscriber line is out of service or faulty.
End
Table 6–7
Generic R2 register signaling – special activities
Activity Meaning Explanation
Table 6–8
Generic multifrequency compelled (MFC) signaling codes

Frequencies (Hz)   f0     f1     f2     f4     f7     f11
Backward           1380   1500   1620   1740   1860   1980
Forward            1140   1020   900    780    660    540

Code   Combination
1      f0 + f1
2      f0 + f2
3      f1 + f2
4      f0 + f4
5      f1 + f4
6      f2 + f4
7      f0 + f7
8      f1 + f7
9      f2 + f7
10     f4 + f7
11     f0 + f11
12     f1 + f11
13     f2 + f11
14     f4 + f11
15     f7 + f11
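The code combinations can be expressed as a lookup from code number to frequency pair, as a sketch; the function name and data layout are illustrative:

```python
# Frequencies from table 6-8, in Hz.
BACKWARD = {"f0": 1380, "f1": 1500, "f2": 1620, "f4": 1740, "f7": 1860, "f11": 1980}
FORWARD  = {"f0": 1140, "f1": 1020, "f2": 900,  "f4": 780,  "f7": 660,  "f11": 540}

# Each MFC code is one of the 15 two-frequency combinations.
COMBINATIONS = {
    1: ("f0", "f1"),   2: ("f0", "f2"),   3: ("f1", "f2"),
    4: ("f0", "f4"),   5: ("f1", "f4"),   6: ("f2", "f4"),
    7: ("f0", "f7"),   8: ("f1", "f7"),   9: ("f2", "f7"),
    10: ("f4", "f7"),  11: ("f0", "f11"), 12: ("f1", "f11"),
    13: ("f2", "f11"), 14: ("f4", "f11"), 15: ("f7", "f11"),
}

def mfc_frequencies(code, direction):
    """Return the two frequencies (Hz) for an MFC code, per table 6-8."""
    table = FORWARD if direction == "forward" else BACKWARD
    a, b = COMBINATIONS[code]
    return table[a], table[b]

print(mfc_frequencies(5, "forward"))    # (1020, 780)  -> f1 + f4
print(mfc_frequencies(5, "backward"))   # (1500, 1740) -> f1 + f4
```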
R2 signaling timing
R2 signaling timing is summarized in table 6–9. Timers T1, T4, T5 and T6
are used for compelled and pulsed timing by incoming trunks, and timers
T2, T3, T7 and T8 are used by outgoing trunks.
Table 6–9
R2 signaling timing
Timer RGSIGSYS Parm Description
T1 ICMXISIG maximum inter-signal time
T2 OGMXTON maximum forward tone-on time
T3 OGMXTOFF maximum forward tone-off time
T4 ICPREPLS pre-pulse pause
T5 ICPLSTIM pulse duration
T6 ICPSTPLS post-pulse pause
T7 OGRMXPLS maximum received pulse length
T8 OGPSTPLS forward post-pulse pause
Table 6–10
Line signaling systems
Signaling type Signaling system characteristics
WINGSIG Wink-start
NTLS01 Delay-dial
NTLS03 Used in one-way outgoing applications such as
terminating trunks from a DMS-200 switch to an
operator switchboard. Call processing operation is
seize only, without outpulsing. This line signaling
system uses A-bit signaling for all standard telephony
signals and includes B-bit blocking and unblocking
signals.
NTLS04 Used in one-way incoming applications such as junctor
trunking from an operator switchboard to a DMS-200
switch. The NTLS04 protocol includes a single A-bit state
transition followed by a restoration to the original state
when the stop transmission (ST) signal is received at
the end of the digit string. This system uses only A-bit
signaling for all standard telephony signals.
NTLS05 Used in one-way incoming applications such as
specialized junctor trunking from an operator
switchboard to a DMS-200 switch. The operation
involves the same signaling as NTLS04, but also
includes end-of-selection (EOS), start ringing
(SRG), and break-in-request (BIR) signals. This
system uses the A-bit for all standard telephony
signals except for the SRG and BIR signals, which use
the B-bit. The B-bit is also used for blocking and
unblocking signals.
End
Call processing
Refer to chapter 5, “Call Processing” for additional information on call
processing signaling and treatments.
The following functionalities are supported on the DMS-100 International
switch:
• link-by-link connections
• end-to-end connections
• local or remote application of treatments.
End-to-end signaling
End-to-end signaling is supported on the DMS-100 International switch. If a
call connection involves incompatible trunk signaling systems, or digit
processing is required, link-by-link signaling must be used.
R2 trunk classes
For some R2 protocols, trunk classes must be defined. These classes are
then used to impose restrictions or imply functionality based on the type of
traffic which flows over a trunk. R2 trunks are categorized in the following
classes.
• CAMA: Trunks are assigned the CAMA class if ANI information must
be sent to the next office, or collected from a previous office.
• None: If ANI information is not required, the trunk is assigned the
traffic class NONE.
Call control
Calls over R2 trunks normally use calling subscriber control. With calling
subscriber control, if the calling subscriber goes on-hook first, the
connection is released immediately. If the called subscriber remains
off-hook for a period of 10 (typical) seconds, the line receives disconnect
treatment. If the called subscriber goes on-hook within 10 seconds (typical)
of the calling subscriber going on-hook, the line is set to idle and no
treatment is applied.
With calling subscriber control, if the called subscriber goes on-hook first,
calling subscriber re-answer timing is started. Metering continues during
this re-answer timing. If the re-answer time expires and the called
subscriber is still on-hook, the connection is released. If the calling
subscriber remains off-hook for another 10 seconds (its lockout time), the
calling subscriber receives disconnect treatment.
If the called subscriber goes back off-hook during the re-answer period, the
re-answer timing is stopped, and the call continues (since the connection was
never released).
If the calling subscriber goes on-hook during the re-answer period, the
connection is released, re-answer timing is stopped, and both subscribers are
marked as idle. If the calling subscriber goes on-hook during its lockout
timing, no treatment is applied.
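The release logic above can be sketched as an event walk-through for the called-party-first case. The 10-second figure is the typical value quoted; all names here are illustrative:

```python
def handle_called_onhook(events):
    """Walk the events after the called party goes on-hook first.

    events: list of (seconds_after_called_onhook, event) pairs, in order.
    Returns the final disposition of the connection.
    """
    REANSWER_TIME = 10.0   # typical value; datafillable in practice
    for t, event in events:
        if t <= REANSWER_TIME:
            if event == "called_offhook":
                return "call continues"          # re-answer stops the timer
            if event == "calling_onhook":
                return "released, both idle"     # no treatment applied
    # Re-answer timer expired with the called party still on-hook:
    return "released"

print(handle_called_onhook([(4.0, "called_offhook")]))   # call continues
print(handle_called_onhook([(12.0, "calling_onhook")]))  # released
```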
If the alarm is continuous for a defined period of time, the trunks on the
IDTC are taken out of service and the channel bank or digital office at the
end of the PCM-30 line is informed. At the end of the alarm state, the trunks
are returned to service and the remote alarm is removed.
If the alarm is not continuous, the event is recorded as a “hit”. If the number
of hits recorded in a given period exceeds the maintenance limit, the alarm is
activated. If the number of events exceeds the out-of-service limit, the
trunks on the IDTC are taken out of service and the channel bank or digital
office at the end of the PCM-30 line is informed. At the end of the alarm
state, (that is, no hits within a defined period), the trunks are returned to
service and the remote alarm is removed.
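The hit-counting escalation can be sketched using the default limits from table 6–11 (4 hits per minute raises the alarm; 20 hits per 5 minutes takes the trunks out of service). The function and its windowing are illustrative simplifications:

```python
def classify(hit_times, maint_limit=4, maint_window=60,
             oos_limit=20, oos_window=300):
    """Given hit timestamps (seconds), return the most severe condition."""
    def hits_within(window):
        latest = max(hit_times)
        return sum(1 for t in hit_times if latest - t <= window)
    if hits_within(oos_window) > oos_limit:
        return "out of service"        # trunks removed, far end informed
    if hits_within(maint_window) > maint_limit:
        return "alarm"                 # maintenance limit exceeded
    return "hits recorded"

print(classify([0, 10, 20, 30, 40]))   # 5 hits in 1 min -> alarm
print(classify([0, 100, 200]))         # hits recorded
```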
Refer to table 6–11 on page 6–61 for detailed timing specifications on the
alarm types discussed in the following sections.
Table 6–11
Carrier maintenance alarms

Alarm type                  Out-of-service  Return-to-      Maintenance  Out-of-service
                            time            service time    limit        limit

Local loss of      Range    0 – 25.5 s      0 – 25.5 s      0 – 255      0 – 255
frame                       in 0.1 s steps  in 0.1 s steps  events       events
synchronization    Default  3 s             3 s             4 events     20 events
                                                            in 1 min     in 5 min

Local loss of      Range    0 – 25.5 s      0 – 25.5 s      0 – 255      0 – 255
multiframe                  in 0.1 s steps  in 0.1 s steps  events       events
synchronization    Default  3 s             3 s             4 events     20 events
                                                            in 1 min     in 5 min

Remote loss of     Range    0 – 25.5 s      0 – 25.5 s      0 – 255      0 – 255
frame                       in 0.1 s steps  in 0.1 s steps  events       events
synchronization    Default  3 s             3 s             4 events     20 events
                                                            in 1 min     in 5 min

Remote loss of     Range    0 – 25.5 s      0 – 25.5 s      0 – 255      0 – 255
multiframe                  in 0.1 s steps  in 0.1 s steps  events       events
synchronization    Default  3 s             3 s             4 events     20 events
                                                            in 1 min     in 5 min

Alarm indication   Range    0 – 25.5 s      0 – 25.5 s      0 – 255      0 – 255
signal                      in 0.1 s steps  in 0.1 s steps  events       events
                   Default  3 s             3 s             4 events     20 events
                                                            in 1 min     in 5 min

Bit error rate     Range    –               –               0 – 255      0 – 255
                                                            events       events
                   Default  –               –               4 events     20 events
                                                            in 1 min     in 5 min

Frame slip         Range    –               –               0 – 255      0 – 255
occurrences                                                 events       events
                   Default  –               –               4 events in  20 events in
                                                            1 min or 24 h  5 min or 24 h

Note: Measurement period adjustable to 24 hours for use in synchronous networks.
Connection-oriented signaling
Connection-oriented signaling, also referred to as trunk signaling, is used to
set up, monitor, and take down a telephone user part (TUP) call.
[Figure: connection-oriented signaling – originating and terminating SPs connected by a TUP trunk, with signaling links passing through an STP.]
Connectionless signaling
Connectionless signaling, also referred to as transaction services, is not
associated with the setup and takedown of a TUP call. For example,
connectionless signaling is used to access a database for 800 number
translations.
The request for 800 number translation is passed through signaling messages
from a per trunk signaling (PTS) end office (EO) to a DMS signaling
point/service switching point (SP/SSP).
The DMS SP/SSP routes the request through the signaling network to a
DMS service control point II (DMS-SCPII), where the transaction is
processed.
The DMS-SCPII returns the translated number back through the signaling
network to the DMS SP/SSP, which sends the translated number to the EO
that made the original transaction request. Processing then continues as for
a connection-oriented call.
Figure 6–3
Connectionless signaling (transaction services)
FW-30070
[Figure: transaction path from the EO through the signaling network to the SCPII database.]
Modes of operation
The term common channel signaling mode refers to the relationship between
the signaling component and the voice and data component of a call. CCS7
uses two signaling modes: associated signaling and quasi-associated
signaling.
Associated signaling
In associated signaling, the signaling links (SL) follow the same route as the
TUP trunk for a call. TUP trunks are interoffice circuits that carry the voice
and data traffic between originating and terminating SPs or SSPs.
Associated signaling does not require a DMS-STP and can be used to initiate
low-volume applications. Figure 6–4 illustrates a low-cost, simple
configuration of two connecting SSPs.
Figure 6–4
Associated signaling in a simple configuration
FW-31259
[Figure: originating and terminating service switching points connected by TUP trunks, with the signaling link following the same route.]
Quasi-associated signaling
In quasi-associated signaling, the signaling information is routed along links
that do not follow the same route as the TUP trunk for a call. Instead,
signaling is carried through the signaling network along indirect routes on
two or more SLs.
[Figure: quasi-associated signaling – originating and terminating signaling points connected by TUP trunks, with signaling carried indirectly through a signaling transfer point; a network view shows SSPs, STPs, and SCPII databases.]
Example A in figure 6–7 shows a route that consists of three linksets. The
route originates from the SSP and terminates at the SCP. Example A also
illustrates that a linkset consists of a set of links.
Example B in figure 6–7 shows two routes. Both routes originate from the
same node and terminate at the same destination node in the network.
Together, route A and route B form a routeset that originates from the same
SSP and terminates at the same SCP.
Figure 6–7
CCS7 network communications
FW30059
[Figure: example A shows a route of three linksets from the SSP through two STPs to the SCP database, each linkset consisting of a set of links; example B shows routes A and B from the same SSP through different STP pairs to the same SCP, together forming a routeset.]
Figure 6–8
CCS7 message routing label
FW30055
[Figure: the signaling information field (SIF) carries user-specific information and the routing label. The routing label consists of the signaling link selector, the originating point code, and the destination point code; a point code is made up of network identification, network cluster, and network cluster member fields.]
Message discrimination
Message discrimination is the process that determines whether an MSU has
been delivered to its intended destination point. This decision is based on an
analysis of the DPC that is in the routing label of the MSU. If the SP to
which it is delivered is the destination point, then the MSU is delivered to
the message distribution function of the SP. If this SP is not the intended
destination point, the MSU is delivered to the routing function of the SP for
further transfer on an SL.
Message distribution
Message distribution is the process of analyzing the source indicator in the
MSU when it arrives at the destination point. The service information octet
(SIO) determines to which user part the MSU is to be delivered: either TUP
or signaling connection control part (SCCP).
Message routing
Message routing involves the selection of an appropriate SL for each MSU.
The route an MSU takes is determined through a combined analysis of
information that is contained in the routing label of the MSU and routing
data that is provided at the SP.
Although there are advantages to using standard routes for MSUs that
belong to different user parts, the service indicator that is included in each
message provides the potential for using different routing plans for different
user parts.
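Discrimination and distribution together can be sketched as one decision on the MSU fields; the field names and the SIO test are simplified assumptions for illustration:

```python
def handle_msu(msu, own_point_code):
    """Sketch of MSU discrimination and distribution (names illustrative).

    msu: dict with routing-label field "dpc" (destination point code)
    and "sio" (service information octet selecting the user part).
    """
    if msu["dpc"] != own_point_code:
        # Not the intended destination: hand to the routing function,
        # which selects an outgoing SL toward the DPC.
        return ("route", msu["dpc"])
    # This SP is the destination: distribute by user part.
    user_part = "TUP" if msu["sio"] == 4 else "SCCP"   # SIO coding assumed
    return ("deliver", user_part)

print(handle_msu({"dpc": 201, "sio": 4}, own_point_code=100))  # ('route', 201)
print(handle_msu({"dpc": 100, "sio": 4}, own_point_code=100))  # ('deliver', 'TUP')
```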
message which, for tandem traffic, indicates the outgoing SL. The message
is then queued for transport along the F-bus to the link interface module
(LIM).
When intra-LPP traffic reaches the LIM unit, it is not routed via the
DMS-bus. It is transmitted across the LIM and is queued for transport to the
appropriate SL through the F-bus, IPF, and ST.
When inter-LPP traffic reaches the LIM, it is transmitted over DS30 links to
the DMS-bus. When the message reaches the DMS-bus, it is queued for
transport to the appropriate LPP. The message travels to the LIM and is
queued for transport to the appropriate SL through the F-bus, IPF, and ST.
CCS7 traffic between the CM and the LIU7s includes such things as
network management messages. CM to LIU7 traffic crosses the CM
inter-communications (CMIC) links to the DMS-bus. From there it travels
across the DMS-bus to the LIM in the appropriate LPP. The traffic crosses
the LIM to the F-bus and is then transported down to the LIU7. It is then
transported into the CCS7 network on an SL. The LIU7 to CM traffic
follows the reverse route.
Figure 6–9
DMS-STP message processing
FW30163
[Figure: DMS-core and DMS-bus above two LPPs (A and B), each with an F-bus and signaling links; messages routed between LPPs cross the DMS-bus, while messages routed within one LPP stay on the LIM; discriminator, router, and SCCP processor functions are indicated.]
Subscriber interfaces
Line circuits
The line circuit (LC) card is the final interface between the subscriber’s loop
and the digital circuitry in the switching network and central control.
Line circuits provide the following functions:
• office battery to the loop
• transmission of ringing and tones
• transmission of 12 or 16 kHz pulses
• transmission of battery reversal
• speech coding and decoding
• test access
• loop loss control
• loop balance
• 50 Hz or 60 Hz filtering
• loop supervision
The tonesets in older vintages of the NT6X69 card are also country-specific,
but reside in fixed PROM on the circuit cards. In either case, the line
signaling toneset for a particular country is specified in field TONESET of
data table LTCINV, as specified in DMS-100 International Customer Data
Schema, 297-1001-451i. Tonesets are defined in software on a
country-specific basis by Northern Telecom, based on specifications
supplied by the operating company.
Coin telephone interface
Coin telephone interfaces are provided via the NT6X94 line card.
Private branch exchange (PBX) interface
PBX lines requiring loop start signaling use the NT6X93 line card. The
NT6X94 line card provides the interface for PBX lines requiring
battery-reversal signaling, or 12 or 16 kHz control pulse signaling.
Subscriber premise meter control
12 or 16 kHz control pulses are generated in the line shelf and applied to the
subscriber loop through the line circuit.
Speech coding and decoding
Speech is converted from analog form to digital form and vice versa by
the CODEC circuit on the line card. A filter limits the bandwidth to
3400 Hz. Speech sampling is performed at an 8 kHz rate.
Test access
A relay on the line card allows testing to be performed on the loop as well as
towards the central office equipment side.
Loop loss control
To meet the requirements of the appropriate Telephone Administration
Transmission Plan Objective, pads under software control can be switched
in. This permits up to 7dB of loss in 1dB steps to be inserted in the outgoing
speech path (digital to analog).
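The pad selection reduces to clamping a requested loss to the available range; a sketch, with the `pad_setting` helper being purely hypothetical:

```python
# Up to 7 dB of loss can be inserted into the digital-to-analog path,
# in whole-dB steps, to meet the transmission plan objective.
def pad_setting(required_loss_db):
    """Clamp the requested loss to the 0-7 dB range, whole-dB steps."""
    return min(7, max(0, round(required_loss_db)))

print(pad_setting(3.4))   # 3
print(pad_setting(9.0))   # 7 -> maximum available loss
```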
50 Hz or 60 Hz filtering
The transmit side (analog to digital) of the line circuit provides for 50 Hz AC
induction filtering. Longitudinal induction of up to 20 mA per conductor can
be tolerated by the line circuit. In a line connection, the response at 50 Hz is
at least 20 dB down from the reference frequency level.
Loop supervision
The loop supervision circuitry monitors the line for on-hook/off-hook status.
This information is passed to the appropriate peripheral processor and action
is taken accordingly.
This circuitry is also used to detect the changes in state of the loop that occur
during DP dialing.
Dialing
A station can have the dialing option of dial pulse (DP) or DTMF. For
DTMF, the tones are received by the line circuit and transmitted to the
DTMF receiver for digit detection and recognition.
Figure 6–10
International line circuit – Type A
FW-31149
[Figure: Type A line circuit schematic showing the cut-over (CO), test access (TA), and ringing (RG) relays, the cut-over hold line shared with other line circuits in the line drawer, the flux balance winding, –48 V battery feed, ring trip and supervision circuits, the supervision data network, the ring mux on the bus interface card, and the test access bus.]
Figure 6–11
International line circuit – Type B
FW-31150
[Figure: Type B line circuit schematic, similar to Type A, with the addition of a metering pulse generator fed from the meter bus and tone bus.]
Table 6–12
DTMF dialing toneset

Nominal low group   Nominal high group frequencies
frequencies         1209 Hz   1336 Hz   1477 Hz   1633 Hz
697 Hz              1         2         3         A
770 Hz              4         5         6         B
852 Hz              7         8         9         C
941 Hz              *         0         #         D
Table 6–13
DTMF reception parameters

Parameter                Accepts digits                   Rejects digits

Frequency tolerance      greater than 1.5%                no total rejection
                         (each frequency)                 (each frequency)

Signal duration          greater than or equal to         less than or equal to
                         44 ms ON                         22 ms ON

Input level (900 ohm     0 to –27 dBm                     less than or equal to
termination)             (per frequency)                  –29 dBm (per frequency)

Twist (high to low)      –10 to +4 dB                     greater than or equal to
                                                          16 dB (high to low)
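The "accepts" column can be applied as independent tests on a received tone pair; a sketch with illustrative parameter names (the frequency-tolerance test is omitted, as its limits are stated per frequency rather than per digit):

```python
def dtmf_accepts(duration_ms, level_dbm, twist_db):
    """Sketch of the digit-acceptance tests from table 6-13."""
    ok_duration = duration_ms >= 44          # >= 44 ms ON
    ok_level = -27 <= level_dbm <= 0         # 0 to -27 dBm per frequency
    ok_twist = -10 <= twist_db <= 4          # -10 to +4 dB, high to low
    return ok_duration and ok_level and ok_twist

print(dtmf_accepts(50, -12, 2))    # True: within all acceptance limits
print(dtmf_accepts(20, -12, 2))    # False: tone too short
```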
Tolerance to echo
The DTMF receiver will operate accurately in the presence of signal
echoes that are delayed by up to 20 ms and reduced in level by at least 10 dB
with respect to the incident signal.
Ringing
Ringing source
The ringing source consists of an AC ringing generator supply that
effectively produces AC superimposed on DC. The ringing source removes
the ringing voltage applied to the line at the AC zero-crossing point.
During the silent period, the office battery supply (–52 V nominal) is applied
on the ring side of the line.
The AC ringing voltage is superimposed on the –52 V battery, which is not
the talk battery.
The ringing waveform is sinusoidal with peak-to-rms voltage ratio between
1.35 and 1.45.
The ring-trip function is performed for external loops up to 1900 ohms.
The ringing range, including the telephone set, is as follows:
• 3 bridged ringers – 1900 ohms
• 5 bridged ringers – 1300 ohms
Audible ring tone
Audible ring tone is sent to the calling party when the called party is alerted
by means of ringing voltage being applied to the line. The tone has a duty
cycle similar to that of the ringing voltage and is not synchronized to the
duty cycle of the called party ringing voltage.
Ring trip
When the called party goes off-hook, the DMS-100 International switch
recognizes the change-of-state and removes ringing voltage from the line. It
also removes audible ring tone from the calling party line.
cut-over circuit in the line drawer (LD). Once operated, sufficient current
flows through the CO relay from the cut-over hold line so that all operated
CO relays in the line cards remain operated. This opens the CO contacts,
thus isolating the tip and ring leads of the addressed line circuit from their
connections to the main distribution frame (MDF). At cut-over time, the
ground on the cut-over hold line is removed, opening the hold current path
for all CO relays in the line drawer. Any CO relay which has previously
been set to the cut-off state will then be released. This simultaneously
connects the tip and ring leads of any selected number of LC cards to their
associated MDF connections.
Analog loop-around
With relay CO only operated, the T and R leads are isolated and the VF
transformer is unbalanced enabling VF signals on the “receive” path to
appear on the “transmit” path with almost zero loss. The analog
loop-around consists of a VF signal in digital form from a test signal
generator. After conversion to analog, the test signal crosses to the
“transmit” path, is converted back to digital, and returns to the test
equipment. By comparison of the input and output signals, the condition of
the “transmit” and “receive” circuits can be tested.
Digital loop-around
A special eight-bit test pattern is transmitted and received through an
established digital path to check the integrity of the digital connection in the
line circuit.
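The digital loop-around check reduces to comparing the sent and received eight-bit patterns. A minimal sketch, with the digital path modeled as a callable (an assumption for illustration only):

```python
def digital_loopback_ok(pattern, channel):
    """Send an eight-bit test pattern through a (simulated) digital path
    and verify the received pattern matches bit for bit.
    'channel' is any callable mapping the sent byte to the received byte."""
    assert 0 <= pattern <= 0xFF, "test pattern must be eight bits"
    received = channel(pattern)
    return received == pattern

# A healthy path returns the byte unchanged; a stuck-bit fault does not.
healthy = lambda b: b
stuck_bit = lambda b: b | 0x01        # bit 0 stuck at 1
```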
Metallic test access (MTA)
The metallic test access unit (NT2X46AB) provides a metallic DC
connection between test circuits and line circuits in the ILCMs. The MTA
unit is frame mounted and is usually located on a miscellaneous equipment
frame. The MTA unit functions in conjunction with the NT3X50 driver card
as part of the line maintenance facility for DMS-100.
The MTA unit provides access for up to 10 ILCMs, up to 16 test circuits,
and a two-wire switched path. It interfaces with the MDF, ILCM, and the
driver card located in a maintenance trunk module (MTM). The test circuits
are connected to the MTA appearances at the MDF. The ILCM and the
driver card are connected directly to the MTA unit.
The MTA unit provides a 16 x 20 two-wire switching matrix, which can be
expanded by connecting additional MTA units. Each MTA unit has 20
two-wire appearances on the vertical side of the matrix, and can serve up to
10 ILCMs. Rules for provisioning additional MTA units are described in
DMS-100 Family Provisioning, 297–1001–450.
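The 16 x 20 matrix can be pictured as a small crosspoint map. The sketch below is an illustrative model only (class and method names are assumptions, not the NT2X46AB design); it captures the idea that a metallic path to a given appearance is exclusive:

```python
class MTAMatrix:
    """Minimal sketch of a 16 x 20 two-wire metallic test access matrix:
    16 test-circuit (horizontal) inputs by 20 vertical appearances.
    Names and behavior are illustrative assumptions, not the NT2X46AB design."""
    def __init__(self, test_circuits=16, verticals=20):
        self.test_circuits = test_circuits
        self.verticals = verticals
        self.crosspoints = {}        # test_circuit -> vertical appearance

    def connect(self, test_circuit, vertical):
        if not (0 <= test_circuit < self.test_circuits
                and 0 <= vertical < self.verticals):
            raise ValueError("coordinate outside the 16 x 20 matrix")
        if vertical in self.crosspoints.values():
            raise RuntimeError("vertical already in use (metallic path is exclusive)")
        self.crosspoints[test_circuit] = vertical

    def release(self, test_circuit):
        self.crosspoints.pop(test_circuit, None)
```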
Administration
This section describes the administrative and operational aspects of
DMS-100 International switches:
• data recording
• International Centralized Automatic Message Accounting (ICAMA)
system
• operational measurements (OM)
• network management
• database management
• database facilities and structures
• service analysis
• fraud prevention features
The DIRPPOOL data table defines the groups of recording devices, the types
of devices (tape or disk), and the recording volume identifications.
The system performs three scheduled software audits. The device audit
checks the physical recording device every five minutes to verify that it is
ready to receive data. The subsystem audit verifies the integrity of the
device files and their configurations every 60 minutes. With disk operation,
the disk daily audit performs a detailed audit of all disk files, verifying
their status and updating them as required. The tape daily audit performs
tasks such as checking for free tapes and rewinding parallel tape files. Both
daily audits run at a preset time each morning. For more information see
Device Independent Recording Package Product Guide, 297-1001-013.
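The three audit schedules can be sketched as a simple due-list computation. The 5- and 60-minute intervals come from the text above; the daily-audit time and all names below are assumptions for illustration:

```python
# Illustrative scheduling of the three DIRP audits described above.
AUDITS = {
    "device":    5,        # check recording device readiness, every 5 minutes
    "subsystem": 60,       # verify file integrity/configuration, every 60 minutes
}
DAILY_AUDIT_MINUTE = 3 * 60    # assumed preset time: 03:00 each morning

def audits_due(minute_of_day):
    """Return the audits scheduled to run at a given minute of the day."""
    due = [name for name, period in AUDITS.items() if minute_of_day % period == 0]
    if minute_of_day == DAILY_AUDIT_MINUTE:
        due.append("daily")    # detailed disk (or tape) daily audit
    return due
```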
Under normal operation, the subsystem files are maintained in the “open”
state, ready to accept data. If two or more files are assigned to a specific
subsystem, then active and standby status is assigned to the files, and
periodic rotation of recording duty occurs. A maximum of three standby
files can be specified for any subsystem. In addition to the active and
standby files, a parallel file can be set up as a backup to record all data
output by a subsystem. For more information see Device Independent
Recording Package User Guide, 297-1001-312 and figure 7–1.
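The active/standby rotation described above can be sketched as follows; the rotation mechanics shown are an assumption for illustration:

```python
from collections import deque

class SubsystemFiles:
    """Sketch of DIRP file rotation: one active file, up to three standby
    files, with periodic rotation of recording duty (assumed mechanics)."""
    MAX_STANDBY = 3

    def __init__(self, active, standby):
        if len(standby) > self.MAX_STANDBY:
            raise ValueError("at most three standby files per subsystem")
        self.active = active
        self.standby = deque(standby)

    def rotate(self):
        """Move recording duty to the first standby file; the previously
        active file goes to the back of the standby list."""
        if self.standby:
            self.standby.append(self.active)
            self.active = self.standby.popleft()
        return self.active
```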
Figure 7–1
Data recording
(Figure shows the DIRP data recording paths: ICAMA data recorded to disk
and magnetic tape; JF – journal file.)
Magnetic tape
The characteristics of the recording format for magnetic tapes are covered
by the Magnetic Tape Users Guide, 297-1001-118, and are outlined in this
section. Magnetic tapes can be used by DMS-100 Family switching systems
to store data for seven applications:
• Automatic Message Accounting (AMA)
• Operational Measurements (OM)
• Office Image (for system backup)
• Trouble Diagnostic Data
• Call Detail Recording (CDR) records for local calls
• Customer Data Modification (CDM)
• Office Data Modification.
Figure 7–2
Magnetic tape recording format
(Figure shows the layout of a 9-track magnetic tape: BOT marker,
identification burst, volume label and header labels 1 and 2 (80 characters
each), tape marks (40 characters), data blocks framed by preambles and
postambles (41 characters each), interblock gaps (IBG, 0.6 in.), an initial
gap, and a density identification gap.)
As shown in figures 7–3 and 7–4, these labels are referred to collectively as
the header label group or the trailer label group.
The tape mark, which acts as a delimiter or separator, follows both label
groups (header and trailer) and each data set. Two tape marks follow the
trailer label group to indicate that the end of the last data set on the volume
has been reached.
Figure 7–3
Label organization for a single data set on a single volume

Beginning of tape
Header label group: HDR1, HDR2, UTL1 ... UTL8
Tapemark
Data set A
Tapemark
Trailer label group: EOF1, EOF2, UTL1 ... UTL8
Tapemark
Tapemark
End of tape

Note: The UTL labels are optional.
Figure 7–4
Label organization for multiple data sets on a single volume

Beginning of tape
Header label group: HDR1, HDR2, UTL1 ... UTL8
Tapemark
Data set A
Tapemark
Trailer label group: EOF1, EOF2, UTL1 ... UTL8
Tapemark
Header label group: HDR1, HDR2, UTL1 ... UTL8
Tapemark
Data set B
Tapemark
Trailer label group: EOF1, EOF2, UTL1 ... UTL8
Tapemark
Tapemark
End of tape

Note: The UTL labels are optional.
End-of-file label (EOF1) The End-of-File Label 1 follows the data set
indicating that the end of the data set has been reached. It is identified by
EOF1 and contains identical information to the HDR1 label, except for the
label identifier and the block count.
End-of-file label 2 (EOF2) The End-of-File Label 2 immediately follows
EOF1 and is identified by EOF2. It contains identical information to the
HDR2 label, except for the label identifier.
User trailer labels (Bellcore AMA format) (UTL) Optionally, a
maximum of eight user trailer labels can appear immediately following
EOF2, and are identified by UTL1 to UTL8. These labels contain
user-assigned information about the data set.
Data set logical record formats
A data set is composed of a number of logical records which are organized
into fixed- or variable-length blocks, the former being the most commonly
used for DMS-100 Family System data sets.
Blocking is the process of grouping a number of logical records into a
physical block. A block is made up of the data records between the
interblock gaps and may be 18 to 2048 tape characters (bytes) in length.
Blocking allows efficient utilization of storage space by reducing the
number of interblock gaps in the data set. Blocking also reduces processing
time because fewer input/output operations are required to process entire
blocks of records, particularly when the fixed-length format applies.
When a variable-length record does not fit into the space remaining in the
current physical block, it is written into the next block.
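The blocking rule above can be sketched as follows; the function name and the error behavior for oversized records are assumptions:

```python
def block_records(records, block_size=2048):
    """Group logical records into physical blocks of at most 'block_size'
    tape characters (bytes), starting a new block when a variable-length
    record would not fit - a sketch of the blocking rule described above."""
    blocks, current, used = [], [], 0
    for rec in records:
        if len(rec) > block_size:
            raise ValueError("record larger than the physical block size")
        if used + len(rec) > block_size:   # record will not fit:
            blocks.append(current)         # close this block and
            current, used = [], 0          # write into the next one
        current.append(rec)
        used += len(rec)
    if current:
        blocks.append(current)
    return blocks
```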
The OM and LOG datasets are encoded in EBCDIC (with the character set
as defined by the IBM standard PN print train for 1403 and 1404 printers).
The NT standard AMA and Bellcore AMA data sets are encoded in Binary
Coded Decimal (BCD) four-bit code, the details of which are found in
Automatic Message Accounting–Northern Telecom Format, 297-1001-119.
In the Bellcore format, the AMA data is recorded in signed packed decimal
with hexadecimal identifiers.
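As an illustration of the packed four-bit coding mentioned above (the exact NT and Bellcore field layouts are defined in the cited practices, not here):

```python
def pack_bcd(digits):
    """Pack a decimal digit string into four-bit BCD, two digits per byte,
    as an illustration of the packed-decimal idea; zero-pads on the left
    when the digit count is odd. (Sketch only; not the NT/Bellcore layout.)"""
    if len(digits) % 2:
        digits = "0" + digits            # pad to an even digit count
    out = bytearray()
    for hi, lo in zip(digits[::2], digits[1::2]):
        out.append((int(hi) << 4) | int(lo))
    return bytes(out)
```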
Disk
Disk Drive Units (DDUs) for DMS-100 NT40 offices are 14-inch
Winchester type drives which have capacities of 300 Mbytes. The DDU
consists of a disk drive and a NT1X78 power converter card installed on a
dedicated shelf on an I/O Equipment (IOE) frame. Associated with each
DDU is a NT1X55 disk drive controller card which occupies one card slot of
the IOC shelf, and which interfaces with the DDU and IOC. In NT40-based
offices, or in SuperNode-based offices with minimal SLM disk storage
space, the DDU serves as the primary mass storage device. A minimum of
Disk files Storage space on a disk is allotted on the basis of volumes and
files. Once the volumes have been defined they are considered as separate
entities by the system. After files are created within the volumes, the data is
stored wherever space is available within the confines of the volume. This is
a system task and is transparent to the user.
Each data file is a collection of segments within a volume, with each
segment representing 512 blocks of 1 Kbyte each (equivalently, 256 2-Kbyte
DIRP blocks per segment). The block is the smallest DIRP-addressable unit
and constitutes a disk sector.
AMA and OM recording The storage of AMA and OM data on disk can
be very cost effective. The primary advantage of disk over tape is the
reduction of the number of recording devices, for example, from four to six
MTUs per office, to the one MTU-two disk combination. This advantage is
gained through the random access characteristics of disk which enables the
storing and accessing of many separate files simultaneously on one drive
(see Device Independent Recording Package User Guide, 297-1001-312.)
Depending on the size of the AMA and OM files (and the drives and volumes
allotted to DIRP), it is possible to store many days of AMA data and OM
data, the journal file, and the office image on one disk, while the other
disk can be the duplicate backup. On a daily basis, the most recent AMA
file can then be image-copied to tape and transported to the processing
center. The maximum DIRP backup is 24 volumes of 64 Mbyte each, for a
total of 1536 Mbyte per stream.
Security features provided primarily for AMA files include dual recording
of data, sanity checks of file deletion commands, and activation of alarms if
a disk space minimum threshold is reached.
Office image – backup The disk can also be used for office image
storage and bootstrap loading. With the recommended dual disk
configuration, one or more office images can be stored on both disks. Since
data transfer speed from disk is much faster than from tape, loading and
dumping of the office image will be faster. Peripheral Module loads can
also be stored on the same disk.
Normally the office images are set up on a particular disk volume as an
archive, or set of images, with the most recent being available for backup
(the current image file). Multiple images may be stored on disk and/or tape,
with a route list defining the order in which the images are accessed.
Journal file (JF) This file contains records of all DMOs entered, allowing
automatic re-activation if data tables are inadvertently destroyed. Even
though this file is not large, it is normally active, therefore requiring a
dedicated tape drive. Disk residence will free up the tape drive, and allow
instant activation in an emergency.
Pending order file (POF) The POF is used to store service orders, rating
changes and office configuration changes for later activation. Although this
file is normally small, occasional block-cut change procedures may require a
much increased storage capability. A disk-resident POF allows larger volumes
of DMOs to be entered further in advance of their scheduled
implementation.
Non-resident programs and user files As well as storing non-resident
maintenance programs, the disk can be utilized to store other programs and
data of a non time-critical nature. This further facilitates remote
maintenance and administration of offices, and reduces the requirements for
memory cards.
Data polling
Remote data polling of OM and/or AMA information
The remote data polling system permits the telephone company to transfer
OM, AMA, and JF DIRP data of a DMS-100 Family office to a data
processing center. This data is stored on a disk or a magnetic tape and,
through the Device Independent Recording Package (DIRP), the data is
made available to the remote data polling system which transmits it to the
data center. The data is transferred using a version of the CCITT X.25
data communication protocol.
Dedicated cards can be established separately or jointly for OM and AMA
polling. The DMS-100 interface consists of an NT1X89 Multi-Protocol
control card and an EIA RS-232-C interface to a modem.
When setting up the connection via the data packet-switching network, the
data network address (DNA) of the host collector is verified by comparing it
to the authorized list of users in data table XFERADDR (see Customer Data
Schema, 297-1001-451, “Section 045 Data Transferal System”). If the
address matches, access is granted.
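The address check can be sketched as a simple membership test; the table contents, addresses, and function name below are illustrative assumptions:

```python
# Sketch of the access check described above: the host collector's data
# network address (DNA) is compared against an authorized list (modeled on
# table XFERADDR). Addresses and names here are illustrative assumptions.
XFERADDR = {"302061234567", "302069876543"}   # assumed authorized DNAs

def polling_access_allowed(calling_dna):
    """Grant polling access only if the collector's DNA is authorized."""
    return calling_dna in XFERADDR
```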
The inventory of data record files to which the host collector has access is
listed in data table DIRPHOLD. Here the files are listed by type (such as
originating subsystem AMA or OM), file name, and location (volume serial
number). While file management is normally automatic, manual override is
available. Files that have been requested and processed are denoted “aging”
and await automatic expiration and erasure (see Remote Data Polling System
Description and User Interface, 297-1001-524.)
the system. The measurement update and print schedules are defined by the
operating company, providing the flexibility to satisfy a wide range of
operating company requirements.
Operational measurements system components and organization
The OM system monitors certain events in DMS-100 Family Systems and
enters the results into registers in the data store. Events are entered either
individually every time an event occurs, or scanned (sampled) at regular
intervals regardless of the time of occurrence of an event. Events measured
individually are referred to as peg counts, while sampled measurements
(states) used to determine the degree of usage of the system hardware and
software resources are called “usage counts.” The low (slow) scan is based
on 100 seconds per period and high (fast) scan is based on 10 seconds per
period.
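The distinction between peg counts and usage counts can be sketched as follows (an illustrative model, not DMS code):

```python
class OMRegister:
    """Sketch of the two OM measurement kinds described above: peg counts
    increment once per event; usage counts sample a resource's busy/idle
    state every scan period (100 s slow scan, 10 s fast scan)."""
    def __init__(self):
        self.peg = 0          # event count
        self.usage = 0        # accumulated busy samples

    def event(self):
        self.peg += 1         # pegged individually, at occurrence time

    def scan(self, busy_now):
        if busy_now:          # sampled regardless of when the state changed
            self.usage += 1
```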
A table of active measurements, with its associated holding and
accumulating register, is called a measurement group. All measurements in
the group have the same table name. Each table name is assigned a class
definition, that is, ACTIVE or HOLDING, indicating its status in the total
OM system. The group and class names are used to identify the OM in the
header sections of OM output reports and as command parameters.
Data collection
The OM data in the active registers is useful only if related to the specific
period of time during which it was collected. OM data cannot be directly
copied from the active tables to an output process (tape, printer,
accumulation) because of the likelihood that another count may occur during
the copying process, thus introducing an unknown time period and causing
an inaccurate output.
Data is transferred (copied) from the active registers to a duplicate set of
registers classed as “holding” registers. Data copying from active to holding
registers normally occurs at 15- or 30-minute periods (or optionally at 5-,
10-, 15-, 20-, or 30-minute periods), depending on the type of OM counts.
The active registers are then cleared to begin new counts for the next
period.
The last three 5-minute period counts are stored in three sets of holding
registers and the last four 15-minute period counts are stored in four sets of
holding registers.
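The copy-then-clear transfer into a rotating set of holding registers can be sketched as follows (an illustrative model; the history depth of four corresponds to the 15-minute case above):

```python
class OMGroup:
    """Sketch of the active/holding register transfer described above.
    At the end of each period the active counts are copied to holding
    registers and the active registers are cleared; output processes read
    only the holding copy, so active counting is never disturbed."""
    def __init__(self, names, history=4):     # e.g. last four 15-minute periods
        self.active = {n: 0 for n in names}
        self.holding = []                     # most recent period first
        self.history = history

    def peg(self, name):
        self.active[name] += 1

    def transfer(self):
        self.holding.insert(0, dict(self.active))   # copy, then clear
        del self.holding[self.history:]             # keep a bounded history
        for n in self.active:
            self.active[n] = 0
```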
Copying to output processes is performed from the holding registers, with a
controlled time period on which to base subsequent data manipulation. The
holding registers isolate the output processes from the active OM data
inputs, lessening the urgency and priority of the output copying processes.
The contents of the holding registers are available for display or distribution
via the file system. The file system allocates the OM data output to the
Table 7–1
Structure of typical OM data table
Trk (table name)
PM2 (DTC)
CLASS: ACTIVE
START: 1986/08/07 09:30:00 THUR; STOP: 1986/08/07 09:39:47 THUR;
SLOWSAMPLES: 5 ; FASTSAMPLES 57 ;
KEY (PM2_OMTYPE)
INFO (PM2_OMINFO)
PM2ERR PM2FLT PM2INITS PM2LOAD
PM2USBU PM2UMBU PM2MSBU PM2MMBU
PM2CXFR PM2ECXFR PM2CCTSB PM2CCTMB
PM2CCTFL PM2CCTER
TM
CLASS: ACTIVE
Figure 7–6
AMA demand report
AMA
START: 1986/08/07 09:00:00 THUR; STOP 1986/08/07 09:10:23 THUR;
SLOW SAMPLES 6; FAST SAMPLES 61;
Measurement blocks
The OM system in DMS-100 Family systems provides the operating
company with performance data for the switch. The general requirements of
the operational measurements are divided into categories which include
accuracy, integrity, security, documentation, report format, and report input
and output capabilities.
Measurement accuracy
The design objective of the DMS-100 Family operational measurements
system is to register peg counts for 100 percent of all detected events.
Measurements that overflow are detected by the system and pegged by the
“overflow” registers provided.
Measurement integrity
Measurements stored within the switching system are protected from
inappropriate modifications by the switching system. The operational
measurements are not reset by warm or cold system restarts.
Measurement security
The design of the DMS-100 Family operational measurements system
ensures that the measurements stored within the switching system are
protected from manual modification.
Measurement documentation
A partial list of operational measurement tables provided in DMS-100
International switches is outlined in table 7–2.
Measurement format
Measurement data is output to VDUs and printers in decimal digits.
Measurement report input and output capabilities
The measurement reports can be available at the local or remote locations.
Reports in DMS-100 Family systems are provided on a scheduled or
demand basis.
International OM overview
The following OM groups are defined in DMS-100 International offices. A
summary of each OM group follows.
Table 7–2
DMS-100 International OM groups
AMA
This group is currently not used by International.
ANN
This OM group measures the usage of recorded announcements. Each
announcement (for example, BLKDN and PSPD) has its own set of pegs.
The key to the group is the announcement CLLI. ANN has one INFO field
and 5 usage fields:
ANN_OMINFO
Displays the maximum number of calls which can be simultaneously
attached to the group.
ANNATT
Displays the number of calls routed to the group.
ANNOVFL
Displays the number of calls routed but not connected, either because the
maximum allowable number of connections to the group has already been
reached or because the group is maintenance busy.
ANNTRU
Usage count of calls connected to the group. Scan rate is 100 seconds.
ANNSBU
Usage count of the number of equipped tracks which are in tk_system_busy,
tk_pm_busy or tk_deloaded state. Scan rate is 100 seconds.
ANNMBU
Usage count of the number of equipped tracks which are in tk_man_busy or
tk_seized state. Scan rate is 100 seconds.
CF3P
This OM group measures the usage of 3-port conference circuits. The key to
the group is the CLLI – CF3P. CF3P has one info field and 8 usage fields:
CONF_OM_INFO
Displays the number of software equipped conference ports.
CNFSZRST
The number of times a circuit has been assigned (excluding ITOPS
requests).
CNFOVFLT
Displays the number of times non-ITOPS requests could not be satisfied
because all remaining idle circuits have been assigned to ITOPS.
CNFQOCCT
Usage count of the number of requests waiting in the queue for a circuit.
Scan rate is 10 seconds.
CNFQOVFT
Displays the number of requests rejected because the waiting queue is full.
CNFQABNT
Displays the number of requests in the waiting queue abandoned.
CNFTRUT
Usage counts of the number of conference circuits in tk_cp_busy,
tk_cp_busy_deload and tk_lockout states. Scan rate is 10 seconds.
CNFSBUT
Usage counts of the number of conference circuits in tk_remote_busy,
tk_pm_busy, tk_system_busy, tk_carrier_fail and tk_deloaded states. Scan
rate is 10 seconds.
CNFMBUT
Usage counts of the number of conference circuits in tk_man_busy,
tk_seized, and tk_nwm_busy states.
CF6P
This OM group measures the usage of 6-port conference circuits. The key to
the group is the CLLI – CF6P. There is one info field and 8 usage fields:
CONF6_OM_INFO
Displays the number of software equipped conference ports.
CNF6SZRS
The number of times a circuit has been assigned (excluding ITOPS
requests).
CNF6OVFL
Displays the number of times non-ITOPS requests could not be satisfied
because all remaining idle circuits have been assigned to ITOPS.
CNF6QOCC
Usage count of the number of requests waiting in the queue for a circuit.
Scan rate is 10 seconds.
CNF6OVFL
Displays the number of requests rejected because the waiting queue is full.
CNF6QABAN
Displays the number of requests in the waiting queue abandoned.
CNF6TRU
Usage counts of the number of conference circuits in tk_cp_busy,
tk_cp_busy_deload and tk_lockout states. Scan rate is 10 seconds.
CNF6SBU
Usage counts of the number of conference circuits in tk_remote_busy,
tk_pm_busy, tk_system_busy, tk_carrier_fail and tk_deloaded states. Scan
rate is 10 seconds.
CNF6MBU
Usage counts of the number of conference circuits in tk_man_busy,
tk_seized and tk_nwm_busy. If all 6-port conference circuits have been
configured as two 3-port conference circuits, all pegging will be against the
CF3P OM group.
CMC
This OM group measures the performance of the Central Message Controller
(CMC). The key to this group is either CMC0 or CMC1. CMC has 8 fields:
CMCLERR
The number of errors detected in the functioning of a CMC link to a network
module or I/O controller.
CMCERR
The number of errors detected in the functioning of a CMC or its associated
clock.
CMCFLT
The number of error events from which the CMC or its associated clock
could not recover. This register is incremented before the system tries to
recover the CMC; if the system can recover then the event is removed from
this register and will increment either CMCLERR or CMCERR.
CMCDIAG
The number of system-called diagnostics, that is, the total of CMCLERR,
CMCERR and CMCFLT.
CMCLKSBU
Usage Count of the number of peripheral CMC message links in
system-busy state. Scan rate is 100 seconds.
CMCLKMBU
Usage Count of the number of peripheral CMC message links in man-busy
state. Scan rate is 100 seconds.
CMCSBU
Usage count of the amount of time the CMC itself is in system-busy state,
due to failure of the CMC or its associated clock. Scan rate is 100 seconds.
CMCMBU
Usage count of the amount of time the CMC itself is in man-busy state as a
result of commands from an authorized MAP. Scan rate is 100 seconds.
CP
This OM group measures the usage of call processing. CP has five info
fields and 31 usage fields:
CPOINFOX
There are five info fields. The first always has a value of zero. The
remaining four give the provisioned number of CP letters, wakeup blocks,
CP (call processes) and CCB (call condense blocks), respectively.
CCBSZ
The number of times a CCB has been allocated in response to a message
from a terminal which was in a valid state to originate a call.
CCBSZ2
The extension register for CCBSZ.
CCBOVFL
The number of messages lost because there were no idle CCB available to
assign to them.
CCBTRU
Usage count of the number of CCB assigned. Scan rate is 100 seconds.
CCBTRU2
The extension register for CCBTRU.
CPSZ
The number of times a call process has been activated.
CPSZ2
The extension register for CPSZ.
CPTRU
Usage count of the number of call processes active at any one time. Scan
rate is 100 seconds.
CPTRU2
The extension register for CPTRU.
CPTRAP
Number of calls failing during call processing because the CPU hardware
detected an illegal software condition.
CPSUIC
Number of calls failing during call processing because unexpected results
were detected.
ORIGDENY
The number of times an activated call process was not allowed to work on a
new origination because the limit on new originations has been reached.
WAITDENY
The number of calls forcibly released because call processing has requested
a brief suspension and the associated call process was the only one
available to process requests for service from other calls.
CPLSZ
Number of seizures of CP letters to carry messages to calls already in the
system.
CPLSZ2
The extension register for CPLSZ.
CPLOOVFL
Number of messages from terminals which could be originating calls, where
the message could not be passed by either the message buffer or a CP letter.
CPLPOVL
Number of progress messages from terminals where the message could not
be passed because no CP letters were available.
CPLTRU
Usage count of the number of CP letters assigned. Scan rate is 100 seconds.
CPLOSZ
Number of messages from terminals which are in a state to originate a call,
which are stored in a CP letter rather than an originating buffer.
CPLLOW
The lowest number of idle CP letters at any point during a given transfer
period.
OUTBSZ
The number of outgoing messages which must be placed in a buffer because
the CMC was busy.
OUTBOVFL
The number of outgoing messages lost because no idle buffer was available.
OUTBTRU
Usage count of the number of output buffers assigned. Scan rate is 100
seconds.
MULTSZ
The number of seizures of a multi-block.
MULTTRU
Usage count of the number of multi-blocks assigned. Scan rate is 100
seconds.
MULTOVFL
Number of failed attempts at initiating a multi-linked call because no idle
multi-block was available.
WAKESZ
Count of CPWAKEUP block seizures.
WAKEOVFL
Count of unsuccessful CPWAKEUP block seizures.
WAKETRU
Usage count of the number of CPWAKEUP blocks assigned. Scan rate is
100 seconds.
CINITC
Count of all the CCB in use at the time of a cold restart.
WINITC
Count of all the CCB in use at the time of a warm restart.
INITDENY
An estimate of the number of call originations denied during cold and warm
restart initialization periods.
CP2
This OM group extends the CP group. CP2 has two info fields and 11 usage
fields:
CPO2INFOX
There are two info fields. The first always has a value of zero. The second
gives the number of provisioned ECCB.
ECCBSZ
The number of ECCB in use by applications.
ECCBOVFL
The number of times a request for an ECCB failed because there were none
free.
ECCBTRU
Usage count of the number of ECCB assigned. Scan rate is 100 seconds.
CPWORKU
The number of times the CP capacity index exceeded four. This index is the
count of half-second intervals during which the CP queue was never empty
over the last 5 seconds.
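One reading of the CPWORKU definition can be sketched as follows (the list-of-samples representation and names are assumptions):

```python
def cp_capacity_index(queue_nonempty_halfseconds):
    """One reading of the CPWORKU definition above: the index is the number
    of half-second intervals, out of the last ten (5 seconds), in which the
    CP work queue was never empty."""
    last_ten = queue_nonempty_halfseconds[-10:]
    return sum(1 for nonempty in last_ten if nonempty)

def cpworku_pegged(index):
    """CPWORKU is pegged when the capacity index exceeds four."""
    return index > 4
```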
INEFDENY
The number of times an origination request has been purposely discarded
because a second message arrived indicating that the call has been
abandoned.
CPLHI
The highest number of CP letters in simultaneous use reached.
CCBHI
The highest number of CCB in simultaneous use reached.
CPHI
The highest number of Call Processes in simultaneous use reached.
OUTBHI
The highest number of outgoing buffers in simultaneous use reached.
MULTHI
The highest number of multi-blocks in simultaneous use reached.
WAKEHI
The highest number of wakeup blocks in simultaneous use reached.
CPU
This group measures the performance of the central processing unit (CPU).
CPU has one register with 8 usage fields:
MTCHINT
The number of mismatch interrupts due to hardware-detected differences
between the two CPUs.
TRAPINT
The number of trap interrupts that have occurred.
CPUFLT
The number of times a CPU, data store, program store, a link to a CMC or a
CMC data port is machine busied as a result of a diagnostic failure.
SYSWINIT
The number of warm restarts which have occurred during the current
transfer period.
SYSCINIT
The number of cold restarts which have occurred during the current transfer
period.
SYNCLOSS
The number of times the CPUs were put into simplex mode following a
mismatch interrupt.
MSYLOSSU
Usage count of the amount of time the CPUs are operating in simplex mode
as a result of MAP commands or jamming of the inactive switch on a CPU.
Scan rate is 100 seconds.
SSYLOSSU
Usage count of the amount of time the CPUs are operating in simplex mode
as a result of a system action such as a diagnostic failure. Scan rate is 100
seconds.
CSL
This group measures the performance of console devices such as TTY and
MAP. CSL has four usage groups:
CSLERR
The number of device errors detected by the I/O system. A device is
permitted to recover from up to five errors between successive audits before
being left permanently system-busy.
CSLFLT
The number of times the system did not recover an I/O device following the
occurrence of an error previously pegged in CSLERR, either because the
device did not pass diagnostic tests, or because this was its sixth error in
the current audit interval.
CSLSBU
Usage count of the number of I/O consoles in system-busy state. The scan
rate is 100 seconds.
DCM
Refer to OM groups PM or PMTYP.
DDU
This group measures the performance of the disk drive units. DDU consists
of 4 usage groups:
DDUERROR
A count of the I/O errors detected in the operation of an in-service disk
drive unit, causing an out-of-service condition.
DDUFAULT
A count of the Return to Service failures. Incremented each time a DDU
fails to recover from an error previously pegged in DDUERROR.
DDUMBUSY
Usage count of the number of DDU in man-busy state. Scan rate is 100
seconds.
DDUSBUSY
Usage count of the number of DDU in system-busy state. Scan rate is 100
seconds.
DS1CARR
This group is not used by International.
DTSR
This group measures the switch’s ability to return dial tone within three
seconds. The keys to this group are LMDP, LMDT, LCMDT, LCMDP, LCMKS,
and DLMKS. DTSR has four usage registers for each key:
TOTAL
A count of the total sampled calls.
TOTAL_2
The extension register of TOTAL.
DELAY
A count of sampled calls where dial tone delay exceeds 3 seconds or which
encountered receiver queue overflow.
DELAY_2
The extension register of DELAY.
DTSRPM
This group measures each peripheral’s ability to return dial tone within
three seconds. The key to the group is the index for the
peripheral. DTSRPM has one info field which gives the key and the name
of the peripheral, and six usage fields:
DTSRPM_OMINFO
The index of the peripheral (LCM) plus the name.
DPLTOT
A count of the total sampled calls originated by DP lines for this LCM.
DPLDLY
A count of sampled calls where dial tone delay for DP lines exceeds 3
seconds or which encountered receiver queue overflow.
DGTTOT
A count of the total sampled calls originated by DT lines for this LCM.
DGTDLY
A count of sampled calls where dial tone delay for DT lines exceeds 3
seconds or which encountered receiver queue overflow.
KSTOT
A count of the total sampled calls originated by keyset lines for this LCM.
KSDLY
A count of sampled calls where dial tone delay for keyset lines exceeds 3
seconds or which encountered receiver queue overflow.
EXT
This group measures the usage of Extension blocks, which are registers for
storing auxiliary call data. The key to this group is the type of ext block,
indexed by EXT_FORMAT_CODE. There is one info field and four usage
fields.
EXTINFO
This displays the index of the ext block plus the external name.
EXTSEIZ
The number of times a request for an extension block of this type was
successful.
EXTOVFL
The number of times a request for an extension block of this type failed
because there were none unassigned.
EXTUSAGE
Usage count of the number of extension blocks assigned. Scan rate is 100
seconds.
EXTHI
The highest number of extension blocks of that type in simultaneous use
reached.
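The seize/overflow/usage/high-water register pattern used by EXT (and, similarly, by the CP group's CCB registers) can be sketched as follows (an illustrative model, not DMS code):

```python
class ExtensionBlockPool:
    """Sketch of the EXT register pattern above: seizures, overflows,
    sampled usage, and a high-water mark for one extension block type."""
    def __init__(self, provisioned):
        self.free = provisioned
        self.in_use = 0
        self.extseiz = self.extovfl = self.extusage = self.exthi = 0

    def seize(self):
        if self.free == 0:
            self.extovfl += 1            # request failed: none unassigned
            return False
        self.free -= 1
        self.in_use += 1
        self.extseiz += 1                # successful request
        self.exthi = max(self.exthi, self.in_use)
        return True

    def release(self):
        self.in_use -= 1
        self.free += 1

    def scan(self):                      # 100-second usage scan
        self.extusage += self.in_use
```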
FTRQ
This group measures the usage of feature queue blocks, which are registers
for storing feature information against an agent. The key to this group is the
AL10PROG
The number of times a 10-member list has a new number programmed.
AL10INTG
The number of times a 10-member list is interrogated for a number.
AL10USGE
The number of times a 10-member list has a member used.
AL10CERR
The number of times a 10-member list has been incorrectly accessed by a
subscriber.
AL30PROG
The number of times a 30-member list has a new number programmed.
AL30INTG
The number of times a 30-member list is interrogated for a number.
AL30USGE
The number of times a 30-member list has a member used.
AL30CERR
The number of times a 30-member list has been incorrectly accessed by a
subscriber.
AL60PROG
The number of times a 60-member list has a new number programmed.
AL60INTG
The number of times a 60-member list is interrogated for a number.
AL60USGE
The number of times a 60-member list has a member used.
AL60CERR
The number of times a 60-member list has been incorrectly accessed by a
subscriber.
ALHNPROG
The number of times a 100-member list has a new number programmed.
ALHNINTG
The number of times a 100-member list is interrogated for a number.
ALHNUSGE
The number of times a 100-member list has a member used.
ALHNCERR
The number of times a 100-member list has been incorrectly accessed by a
subscriber.
ADLCERR
The number of times a subscriber without the ADL feature attempts to
access the feature.
ICDIVF
This group measures the usage of a call diversion to a fixed destination
feature, i.e. Call diversion to an Operator (CDO) and Call Diversion Fixed
(CDF). ICDIVF has 16 fields:
CDOACT
The number of times a subscriber activates the CDO feature.
CDODACT
The number of times a subscriber deactivates the CDO feature.
CDOINTG
The number of times a subscriber interrogates the CDO feature.
CDOUSGE
The number of times an incoming call is diverted by the CDO feature.
CDODENY
The number of times an incoming call is denied the diversion specified by
the CDO feature because the terminator is engaged in a diverted call or the
call has already been diverted five times.
CDOOVFL
The number of times an incoming call is denied the diversion specified by
the CDO feature due to a lack of system resources.
CDOCERR
The number of times a subscriber incorrectly accesses the CDO feature.
CDODERR
The number of times an incoming call is denied the diversion specified by
the CDO feature due to an inaccessible target number.
CDFACT
The number of times a subscriber activates the CDF feature.
CDFDACT
The number of times a subscriber deactivates the CDF feature.
CDFINTG
The number of times a subscriber interrogates the CDF feature.
CDFUSGE
The number of times an incoming call is diverted by the CDF feature.
CDFDENY
The number of times an incoming call is denied the diversion specified by
the CDF feature because the terminator is engaged in a diverted call or the
call has already been diverted five times.
CDFOVFL
The number of times an incoming call is denied the diversion specified by
the CDF feature due to a lack of system resources.
CDFCERR
The number of times a subscriber incorrectly accesses the CDF feature.
CDFDERR
The number of times an incoming call is denied the diversion specified by
the CDF feature due to an inaccessible target number.
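The DENY, OVFL, and DERR registers above distinguish three failure causes, and the DENY case includes the five-diversion limit. A sketch of which register class a diversion attempt would peg; the function and parameter names are hypothetical, and real pegging happens inside call processing:

```python
MAX_DIVERSIONS = 5  # from the CDODENY/CDFDENY definitions

def divert_outcome(diversion_count: int, terminator_diverting: bool,
                   resources_free: bool, target_reachable: bool) -> str:
    """Classify an incoming diversion attempt by the register it pegs:
    DENY (limit or terminator busy diverting), OVFL (no resources),
    DERR (inaccessible target), or USGE (call is diverted)."""
    if diversion_count >= MAX_DIVERSIONS or terminator_diverting:
        return "DENY"
    if not resources_free:
        return "OVFL"
    if not target_reachable:
        return "DERR"
    return "USGE"
```

The same classification applies to the programmable-destination registers in the ICDIVP group that follows (CDA, CDS, and CDB).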
ICDIVP
This group measures the usage of a call diversion to a programmable
destination feature, i.e. Call diversion to an Announcement (CDA), Call
Diversion to a Subscriber (CDS) and Call Diversion on Busy (CDB).
ICDIVP has 27 fields:
CDAACT
The number of times a subscriber activates the CDA feature.
CDAPROG
The number of times a subscriber programs the CDA feature.
CDADACT
The number of times a subscriber deactivates the CDA feature.
CDAINTG
The number of times a subscriber interrogates the CDA feature.
CDAUSGE
The number of times an incoming call is diverted by the CDA feature.
CDADENY
The number of times an incoming call is denied the diversion specified by
the CDA feature because the terminator is engaged in a diverted call or the
call has already been diverted five times.
CDAOVFL
The number of times an incoming call is denied the diversion specified by
the CDA feature due to a lack of system resources.
CDACERR
The number of times a subscriber incorrectly accesses the CDA feature.
CDADERR
The number of times an incoming call is denied the diversion specified by
the CDA feature due to an inaccessible target number.
CDSACT
The number of times a subscriber activates the CDS feature.
CDSPROG
The number of times a subscriber programs the CDS feature.
CDSDACT
The number of times a subscriber deactivates the CDS feature.
CDSINTG
The number of times a subscriber interrogates the CDS feature.
CDSUSGE
The number of times an incoming call is diverted by the CDS feature.
CDSDENY
The number of times an incoming call is denied the diversion specified by
the CDS feature because the terminator is engaged in a diverted call or the
call has already been diverted five times.
CDSOVFL
The number of times an incoming call is denied the diversion specified by
the CDS feature due to a lack of system resources.
CDSCERR
The number of times a subscriber incorrectly accesses the CDS feature.
CDSDERR
The number of times an incoming call is denied the diversion specified by
the CDS feature due to an inaccessible target number.
CDBACT
The number of times a subscriber activates the CDB feature.
CDBPROG
The number of times a subscriber programs the CDB feature.
CDBDACT
The number of times a subscriber deactivates the CDB feature.
CDBINTG
The number of times a subscriber interrogates the CDB feature.
CDBUSGE
The number of times an incoming call is diverted by the CDB feature.
CDBDENY
The number of times an incoming call is denied the diversion specified by
the CDB feature because the terminator is engaged in a diverted call or the
call has already been diverted five times.
CDBOVFL
The number of times an incoming call is denied the diversion specified by
the CDB feature due to a lack of system resources.
CDBCERR
The number of times a subscriber incorrectly accesses the CDB feature.
CDBDERR
The number of times an incoming call is denied the diversion specified by
the CDB feature due to an inaccessible target number.
ICONF
This group measures the usage of the Three-Way Call (3WC) and Six-Way
Call (6WC) features. ICONF has 12 fields:
TWCUSGE
The number of times a subscriber engages a 3-port conference circuit via the
use of R3.
TWCDENY
The number of times a subscriber is unable to engage a 3-port conference
circuit because the leg the subscriber is attempting to conference is in an
invalid call state.
TWCOVRFL
The number of times a subscriber is unable to engage a 3-port conference
circuit due to a lack of system resources.
TWCCERR
The number of times a subscriber incorrectly attempts to engage a 3-port
conference circuit.
SWCUSGE
The number of times a subscriber engages a 6-port conference circuit via the
use of R3.
SWCDENY
The number of times a subscriber is unable to engage a 6-port conference
circuit because the leg the subscriber is attempting to conference is in an
invalid call state.
SWCOVFL
The number of times a subscriber is unable to engage a 6-port conference
circuit due to a lack of system resources.
SWCCERR
The number of times a subscriber incorrectly attempts to engage a 6-port
conference circuit.
ADDNUSGE
The number of times a subscriber successfully initiates an enquiry call.
ADDNDENY
The number of times a subscriber is unable to initiate an enquiry call
because the active call is in an improper state.
ADDNOVFL
The number of times a subscriber is unable to initiate an enquiry call due to
a lack of system resources.
ADDNCERR
The number of times a subscriber incorrectly attempts to initiate an enquiry
call.
ICWT
This group measures the usage of the Call Waiting (CWT) and the Cancel
Call Waiting (CCW) features. ICWT has 8 fields:
CWTUSGE
The number of times the call waiting tone is applied to a subscriber.
CWTABNDN
The number of times a call which is waiting on another call is terminated
before it is answered.
CWTDENY
The number of times system restrictions prevent a subscriber from being
waited on.
CWTOVFL
The number of times insufficient resources prevent a subscriber from being
waited on.
CWTCERR
The number of times a subscriber incorrectly attempts to invoke the CWT
feature.
CCWACT
The number of times a subscriber successfully activates the CCW feature.
CCWUSGE
The number of times a call is prevented from being waited on because the
subscriber has CCW active.
CCWCERR
The number of times a subscriber incorrectly attempts to activate the CCW
feature.
IDND
This group measures the usage of the Do Not Disturb (DND) feature. IDND
has eight usage fields:
DNDACT
The number of times a subscriber successfully activates DND.
DNDDACT
The number of times a subscriber successfully deactivates DND.
DNDINTG
The number of times a subscriber interrogates DND.
DNDUSGE
The number of times an incoming call to a subscriber with DND active is
diverted to treatment.
DNDDENY
The number of times an incoming call to a subscriber with DND active is
not diverted because a call is already being diverted, or because the
incoming call would be diverted for the sixth time.
DNDOVFL
The number of times an incoming call to a subscriber with DND active is
not diverted due to insufficient system resources.
DNDCERR
The number of times a customer incorrectly attempts to access the DND
feature.
DNDDERR
The number of times an incoming call to a subscriber with DND active is
not diverted due to incorrect routing or translation datafill.
IFDL
This group measures the usage of the features which automatically route to a
fixed destination. These are the Hotline (HTL) and Warmline (WLN)
features. IFDL has nine usage fields:
HTLUSGE
The number of times a subscriber goes off hook and is routed to the HTL
destination.
HTLOVFL
The number of times a subscriber cannot be routed to HTL destination due
to data corruption or software errors.
WLNACT
The number of times a subscriber successfully activates Warmline.
WLNPROG
The number of times a subscriber successfully programs Warmline.
WLNDACT
The number of times a subscriber successfully deactivates Warmline.
WLNINTG
The number of times a subscriber interrogates Warmline.
WLNUSGE
The number of times a subscriber goes off hook and is routed to the WLN
destination.
WLNOVFL
The number of times a subscriber cannot be routed to WLN destination due
to data corruption or software errors.
WLNCERR
The number of times a subscriber incorrectly accesses the WLN feature.
ILR
This group measures the usage of the International Line Restrictions (ILR)
feature. ILR has 17 usage fields:
DNIACT
The number of times a subscriber activates the Deny National and
International version of ILR.
DNIDEACT
The number of times a subscriber deactivates the Deny National and
International version of ILR.
DNIUSGE
The number of times a subscriber is prevented from making a National or an
International call with DNI active.
DAIACT
The number of times a subscriber activates the Deny All International
version of ILR.
DAIDACT
The number of times a subscriber deactivates the Deny All International
version of ILR.
DAIUSGE
The number of times a subscriber is prevented from making an International
call with DAI active.
DABEACT
The number of times a subscriber activates the Deny All But Emergency
version of ILR.
DABEDACT
The number of times a subscriber deactivates the Deny All But Emergency
version of ILR.
DABEUSGE
The number of times a subscriber is prevented from making any but an
Emergency call with DABE active.
DIDDACT
The number of times a subscriber activates the Deny International Direct
Dial version of ILR.
DIDDDACT
The number of times a subscriber deactivates the Deny International Direct
Dial version of ILR.
DIDDUSGE
The number of times a subscriber is prevented from making an International
Direct Dial call with DIDD active.
DNIDACT
The number of times a subscriber activates the Deny National and
International Direct Dial version of ILR.
DNIDDACT
The number of times a subscriber deactivates the Deny National and
International Direct Dial version of ILR.
DNIDUSGE
The number of times a subscriber is prevented from making a National or an
International Direct Dial call with DNID active.
ILRINTG
The number of times a subscriber interrogates the ILR feature.
ILRCERR
The number of times a subscriber incorrectly accesses the ILR feature.
INDC
This group measures the usage of the No Double Connect (INDC) feature.
INDC has five usage fields:
NDCACT
The number of times a subscriber activates the INDC feature.
NDCDACT
The number of times a subscriber deactivates the INDC feature.
NDCINTG
The number of times a subscriber interrogates the INDC feature.
NDCUSGE
The number of times an active call prevents an incoming call from being
waited on, or prevents a toll break-in interruption, due to INDC being active.
NDCCERR
The number of times a subscriber incorrectly accesses the INDC feature.
IOC
This group measures the performance of the Input-Output Controllers. IOC
has seven fields:
IOCERR
The number of times an error is detected in the functioning of an in-service
IOC.
IOCLKERR
The number of times a device error is detected by the IOC on one of its
peripheral-side links.
IOCFLT
The number of times the IOC is unable to recover following an error which
was previously pegged in IOCERR, and thus remains system-busy.
IOCLKSBU
Usage count of the number of system-busy links between the IOC and the
peripheral devices.
IOCLKMBU
Usage count of the number of man-busy links between the IOC and the
peripheral devices.
IOCSBU
Usage count of the number of system-busy IOCs.
IOCMBU
Usage count of the number of man-busy IOCs.
IOSYS
This group measures the performance of the Input-Output System. IOSYS
has one field:
IOSYSERR
The number of times the I/O system detects an error on an incoming or
outgoing message, as indicated by an error or rebounded-message interrupt
by a CMC.
IWUC
This group measures the usage of the Wakeup Call (WUC) feature. IWUC
has ten usage fields.
WUCACT
The number of times a subscriber activates the WUC feature.
WUCDACT
The number of times a subscriber deactivates the WUC feature.
WUCINTG
The number of times a subscriber interrogates the WUC feature.
WUCUSGE
The number of times a successful wake-up call is generated to the WUC
subscriber.
WUCDENY
The number of times the subscriber is prevented from activating a wakeup
call due to feature restrictions.
WUCABDN
The number of times a wake-up call was attempted for the second time but
the subscriber was busy or did not answer.
WUCOVFL
The number of times a wake-up attempt could not be performed due to
insufficient wakeup feature storage.
WUCCERR
The number of times a subscriber incorrectly accesses the WUC feature.
WUCRSET
The number of times a wakeup call was attempted but the subscriber did not
answer or was off-hook.
WUCNRSC
The number of times a wake-up attempt could not be performed due to
insufficient call processing resources.
LM
Refer to groups PM or PMTYP.
LMD
This group measures the performance of local and remote line module
traffic. The key to the group is the internal index into the line module (LCM
for International). LMD has one information field and 11 usage fields:
LMD_OMINFO
Displays the internal peripheral index plus the external name.
NTERMATT
The number of attempts made to provide a path (port channel plus line
drawer channel) between a line to which a call is to terminate, and a network
port. The attempt is only counted after call processing has determined that
the line is available (for example, not busy).
NORIGATT
The number of originating attempts reported by this line module to the CC.
Note that the same customer attempt may cause several machine attempts to
be pegged if the originating line module voice paths are congested.
LMTRU
The usage count of the number of lines in line_cp_busy or
line_cp_busy_deload state. Includes time from when the system attempts to
provide a path between the line and a network port, until that line is released
from the call. The scan rate is 100 seconds.
TERMBLK
The number of attempts to find a voice path from the network to a
terminating line which fail either because all of the LM channels to the
network are busy, or because it is impossible to match an idle channel on any
of the links to the network with an idle channel from the line shelf serving
the terminating line.
ORIGFAIL
The number of originating attempts failing because of partial dial,
permanent signal, extra pulse or bad tones, large twist, or unexpected
message type.
PERCLFL
The number of calls attempting to terminate on this line module which fail
and are given system failure (SYFL) treatment because of the inability to
ring the terminating line properly. Also included are ground start lines that
report loop faults during attempted terminations.
STKCOINS
The number of times an attempt to collect or return coins becomes stuck.
REVERT
The number of reverted calls initiated on this line. Pegged when ringing
starts, after the caller has gone on hook for the first time.
MADNTATT
The number of times that any Multiple Appearance Directory Number
(MADN) group secondary member on the LM has been notified of an
incoming call. MADN lines are not currently supported in International.
ORIGBLK
The number of originations previously pegged in LMD_NORIGATT which
fail for lack of a path from the originating peripheral module to the network.
ORIGABN
The number of originations previously pegged in LMD_NORIGATT which
are abandoned before the call setup is complete.
LOGS
This group measures the performance of the LOG system. LOGS has four
usage fields:
LOSTREC
The total number of log messages lost due to overflows, either of the central
buffer or of individual device buffers.
SWERRCT
The total number of software error reports generated by the switch,
including those which are not reported due to log suppression or buffer
overflow.
PMSWERCT
The total number of software error reports generated by peripheral modules.
PMTRAPCT
The total number of trap reports generated by peripheral modules.
MACHACT
This group measures the performance of CPU usage by different classes of
base level processes. It does not include measurements of the time spent at
interrupt level. MACHACT has five usage fields:
INTLEV
The units of CPU time used in processing foreground tasks. These tasks run
at class levels SYSTEM 6 and SYSTEM 7.
CPLEV
The units of CPU time used in processing work that is related to the
handling of calls. This includes call setup, translation, network connections,
terminations, billing, feature processing and all other actions related to call
processing. It does not include the time spent accepting call processing
messages as this is still part of the interrupt level process. Also included are
tasks (processes) running at class level CPCLASS (5).
MTCELEV
The units of CPU time used in processing high-priority maintenance actions
at MAINTCLASS level (5). Activities include network maintenance,
reloading new peripherals, CC SYNC operations, and routine maintenance
audits.
BKGDLEV
The units of CPU time used in processing all other maintenance actions, at
SYSTEM 0 and NGBKGCLASS levels (4,3,2,1). Activities include MAP I/O,
LOG output, OM reporting, and routine audits.
PREVLEV
The units of CPU time used for the Operational Measurement scan and
transfer function, and the CPU time used by LOG and MAP devices defined
as guaranteed in table TERMDEV, at GBKGCLASS levels (4,3,2,1).
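Because the five registers above partition the measured base-level CPU time, the share spent on any one class can be derived by dividing that register by the sum. The derivation below is an illustration, not part of the group definition, and the sample readings are hypothetical:

```python
def cp_occupancy(intlev: int, cplev: int, mtcelev: int,
                 bkgdlev: int, prevlev: int) -> float:
    """Call processing (CPLEV) share of measured base-level CPU time,
    as a percentage of the five MACHACT registers combined."""
    total = intlev + cplev + mtcelev + bkgdlev + prevlev
    return 100.0 * cplev / total if total else 0.0

# Hypothetical register values for one collection interval.
share = cp_occupancy(120, 540, 60, 200, 80)
print(f"{share:.1f}% of measured CPU time spent on call processing")
```

The same division applied to MTCELEV or BKGDLEV gives the maintenance and background shares for the interval.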
MTA
This group measures the performance of minibar drivers used for
maintenance actions. The key is the CLLI – MTADRIVER. MTA has one
information field and four usage fields:
MTA_OM_INFO
This displays the number of drivers assigned in table MTAMDRIVE.
MTASZRS
The number of times a set operation is performed on a MTA by the MTA
driver.
MTASZFL
The number of times a set operation is abandoned because the driver is in
use or out of service.
MTATRU
The usage count of the number of MTA drivers performing set operations.
Included are drivers in seized or nwm-busy states. The scan rate is 10
seconds.
MTAMBU
The usage count of the number of MTA drivers in a man-busy state.
Included are drivers in cp_busy or lockout states. The scan rate is 10
seconds.
MTRERR
This group measures errors and inconsistencies in the International Metering
System. MTRERR has four usage fields:
LATECHG
The number of times a time-of-day changeover occurs and a peripheral fails
to be updated with the new tariffs.
BADMDI
The number of times a call is made from an agent whose Metering Data
Index (MDI) is invalid.
METOVFL
The number of times any subscriber’s software meter reaches its maximum
limit of 9 999 999, and wraps to 0.
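The METOVFL description implies a software meter that wraps modulo 10 000 000 (values 0 through 9 999 999). One reading of that behaviour, with hypothetical function and variable names:

```python
METER_MAX = 9_999_999  # maximum meter value from the METOVFL definition

def add_pulses(meter: int, pulses: int) -> tuple:
    """Return (new meter value, overflow pegs) after adding metering
    pulses to a subscriber's software meter; the meter wraps to 0 each
    time it passes METER_MAX, pegging METOVFL once per wrap."""
    total = meter + pulses
    overflows = total // (METER_MAX + 1)
    return total % (METER_MAX + 1), overflows
```

Under this reading, a meter standing at 9 999 998 that receives three pulses wraps once and ends at 1.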
MTRUSG
The MTRUSG OM group provides general information on metering usage.
MTRUSG has 16 fields.
NMSPCFL
The number of times a hard fault is identified in the network-resident
connection memory or in a speech path segment internal to the network, as a
result of diagnostic tests triggered by an integrity failure previously pegged
in NMSPCER. The path segment affected is made unavailable to call
processing.
NMCFLT
The number of times a network module controller cannot recover from an
error previously pegged in NMCERR. The controller is left system-busy.
NMSBU
The usage count of the number of network modules in system-busy state.
The scan rate is 100 seconds.
NMMBU
The usage count of the number of network modules in man-busy state. The
scan rate is 100 seconds.
NMPTSBU
The usage count of the number of network module ports in system-busy
state. The scan rate is 100 seconds.
NMPTMBU
The usage count of the number of network module ports in man-busy state.
The scan rate is 100 seconds.
NMJRSBU
The usage count of the number of junctors in system-busy state. The scan
rate is 100 seconds.
NMJRMBU
The usage count of the number of junctors in man-busy state. The scan rate
is 100 seconds.
NWMTGCNT
This group is not currently supported for International.
OFZ
This group has been replaced by OTS and SOTS.
OFZ2
This group has been replaced by OTS and SOTS.
OTS
This group measures the traffic load on the switch. The sum of the
incoming traffic calls represents the external pressure on the switch. The
ORGFSET2
The extension register for ORGFSET.
NINC
The total number of incoming attempts from incoming traffic which are
recognized by the CC.
NINC2
The extension register for NINC.
INCTRM
The number of calls from incoming traffic which connect to terminating
traffic (that is, trunk to line).
INCTRM2
The extension register for INCTRM.
INCOUT
The number of calls from incoming traffic which connect to outgoing traffic
(that is, trunk to trunk).
INCOUT2
The extension register for INCOUT.
INCTRMT
The number of calls from incoming traffic which connect to a tone or an
announcement.
INCABNM
The number of calls from incoming traffic which are machine abandoned
before being connected to terminating traffic, outgoing traffic, tone,
announcement, lock-out status, or feature activation or deactivation. This
may occur because of upstream office delays or problems.
INCABNC
The number of calls from incoming traffic which are abandoned before
being connected to terminating traffic, outgoing traffic, tone, announcement,
lock-out status, or feature activation or deactivation. This may occur
because of customer abandonment.
INCLKT
The number of calls from incoming traffic which fail to connect or receive a
treatment. This may occur because the true identity of the incoming call has
been lost or the trunk has been force released. These are incoming call
attempts which could originate but were not able to terminate to an agent.
INCFSET
The number of calls from incoming traffic which invoke a custom calling
feature activation or deactivation. This capability is not currently supported
in International.
INCFSET2
The extension register for INCFSET.
NSYS
The total number of calls recognized by the CC as being system generated
traffic. This includes all forms of originations that cannot be included in
NORG or NINC. For example, a line to alternate trunk because the first
trunk encountered outpulsing trouble, or a line to alternate hunt line because
the previous hunt line was busy.
NSYS2
The extension register for NSYS.
SYSTRM
The number of calls from system generated traffic which connect to
terminating traffic. System generated calls to BUSY line treatment are
considered to be line terminations.
SYSOUT
The number of calls from system generated traffic which connect to
outgoing traffic.
SYSTRMT
The number of calls from system generated traffic which connect to a tone
or an announcement due to an error condition.
SYSABDN
The number of calls from system generated traffic which are abandoned
before disposition to terminating traffic, outgoing traffic, tone,
announcement, lock-out status, or feature activation or deactivation. This
may occur because of customer abandonment.
SYSLKT
The number of calls from system generated traffic which are locked-out
prior to any other routing.
SYSFSET
The number of calls from system generated traffic which invoke a custom
calling feature. This is not currently supported in International.
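Registers such as NINC2 and INCTRM2 are described above as extension registers for their base counts. Assuming the conventional 16-bit base register width (an assumption; this section does not state the width), a full count can be recombined from the pair:

```python
BASE_WIDTH = 65536  # assumed 16-bit base OM register; not stated here

def full_count(base: int, extension: int) -> int:
    """Recombine a base OM register with its extension register into a
    single count; the extension holds the base register's overflows."""
    return extension * BASE_WIDTH + base

# e.g. a NINC reading of 12034 with NINC2 = 3
print(full_count(12034, 3))
```

The same recombination applies to every base/extension pair in this group (NSYS/NSYS2, INCOUT/INCOUT2, and so on).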
PCMCARR
This group measures the performance of the 30–channel PCM Carrier links
for the International Digital Trunk Controller (IDTC). The key to this group
is the range of carriers (16 for each IDTC). PCMCARR has one information
field and 20 usage fields:
D30OMINF
This displays the D30CLLI, which consists of the PMNAME and the
D30CKT.
LLFAERR
The number of times a Local Loss of Frame Alignment error occurs in this
carrier. A LLFA error occurs when 3 or 4 consecutive frame alignment
patterns have been received with an error.
LLMAERR
The number of times a Local Loss of Multiframe Alignment error occurs in
this carrier. A LLMA error occurs when 2 consecutive multiframe
alignment patterns have been received with an error.
RFAIERR
The number of times a Remote Frame Alarm Indication error occurs in this
carrier. A RFAI error occurs when the remote equipment indicates that it is
experiencing frame level errors and/or equipment failures.
RMAIERR
The number of times a Remote Multiframe Alarm Indication error occurs in
this carrier. A RMAI error occurs when the remote equipment indicates that
it is experiencing multiframe level errors and/or equipment failures.
AISERR
The number of times an Alarm Indication Signal error occurs in this carrier.
An AIS error occurs when a continuous stream of 1s is detected.
BERERR
The number of times a Bit Error Rate error occurs in this carrier. A BER
error occurs when a frame alignment pattern is found to be incorrect.
SLIPERR
The number of times a frame is slipped in the carrier.
SIGLERR
The number of times a transient change is detected in the supervisory
signaling channels of the carrier.
LLFAFLT
The number of times a Local Loss of Frame Alignment fault occurs in this
carrier. A LLFA fault occurs when a LLFA error persists past the LLFAOST
threshold or the number of non-persistent errors reaches the LLFAOL
threshold.
LLMAFLT
The number of times a Local Loss of Multiframe Alignment fault occurs in
this carrier. A LLMA fault occurs when a LLMA error persists past the
LLMAOST threshold or the number of non-persistent errors reaches the
LLMAOL threshold.
RFAIFLT
The number of times a Remote Frame Alarm Indication fault occurs in this
carrier. A RFAI fault occurs when a RFAI error persists past the RFAIOST
threshold or the number of non-persistent errors reaches the RFAIOL
threshold.
RMAIFLT
The number of times a Remote Multiframe Alarm Indication fault occurs in
this carrier. A RMAI fault occurs when a RMAI error persists past the
RMAIOST threshold or the number of non-persistent errors reaches the
RMAIOL threshold.
AISFLT
The number of times an Alarm Indication Signal fault occurs in this carrier.
An AIS fault occurs when an AIS error persists past the AISOST threshold
or the number of non-persistent errors reaches the AISOL threshold.
BERFLT
The number of times a Bit Error Rate fault occurs in this carrier. A BER
fault occurs when the number of BER errors exceeds the BEROL threshold.
SLIPFLT
The number of times the number of frames slipped exceeds the SLIPOL
threshold.
SIGLFLT
The number of times the number of transient changes in supervisory
signalling channels exceeds the SIGLOL threshold.
CARRSYSB
The usage count of the number of times the carrier was in system-busy state
because of fault occurrences. The scan rate is 100 seconds.
CARRCBSY
The usage count of the number of times the carrier was in CBSY state
because the C-side peripheral was not in service. The scan rate is 100
seconds.
CARRPBSY
The usage count of the number of times the carrier was in PBSY state
because the P-side peripheral was not in service. The scan rate is 100
seconds.
CARRMANB
The usage count of the number of times the carrier was in manual-busy state
due to manual actions. The scan rate is 100 seconds.
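Every fault register in this group applies the same escalation rule: an error becomes a fault when it persists past its set threshold (the ...OST value) or when enough non-persistent errors accumulate (the ...OL value). A sketch of that rule; the function name and the threshold values below are hypothetical:

```python
def is_fault(persist_time_s: float, error_count: int,
             ost_threshold_s: float, ol_threshold: int) -> bool:
    """True when a carrier error should escalate to a fault: either it
    has persisted past the OST (set time) threshold, or the count of
    non-persistent errors has reached the OL (occurrence limit)."""
    return persist_time_s > ost_threshold_s or error_count >= ol_threshold

# Hypothetical LLFA thresholds: LLFAOST = 2.5 s, LLFAOL = 10 errors.
print(is_fault(3.0, 1, 2.5, 10))   # persisted past OST
print(is_fault(0.1, 2, 2.5, 10))   # transient and below the limit
```

The same two-threshold check underlies LLFAFLT, LLMAFLT, RFAIFLT, RMAIFLT, and AISFLT; BERFLT, SLIPFLT, and SIGLFLT use only the occurrence-limit half.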
PM
This group measures the performance of each peripheral module. The key is
the range of PMs. PM has one info field and 22 usage fields:
PM_OM_INFO_TYPE
This displays the device name, that is, the device class and device number,
and an optional * which indicates that the PM node is datafilled in table
PMEXCEPT.
PMERR
The number of errors detected in an in service PM.
PMFLT
The number of PM errors, previously pegged in PMERR, which result in the
PM going to a system-busy state.
PMMSBU
The usage count of the number of times the PM is in a system-busy state.
The scan rate is 100 seconds.
PMUSBU
The usage count of the number of times a unit in the PM is in a system-busy
state. The scan rate is 100 seconds.
PMMMBU
The usage count of the number of times the PM is in a man-busy state. The
scan rate is 100 seconds.
PMUMBU
The usage count of the number of times a unit in the PM is in a man-busy state.
The scan rate is 100 seconds.
PMSBP
The number of times the PM is made system-busy from an in-service or an
in-service-trouble state.
PMMBP
The number of times the PM is made man-busy from an in-service or an
in-service-trouble state.
PMSWXFR
The number of times a transfer of activity occurred due to system
intervention resulting in a WARM SWACT or a TAKEOVER.
PMMWXFR
The number of times a transfer of activity occurred due to manual
intervention resulting in a WARM SWACT or a TAKEOVER.
PMSCXFR
The number of times a transfer of activity occurred due to system
intervention resulting in a COLD SWACT.
PMMCXFR
The number of times a transfer of activity occurred due to manual
intervention resulting in a COLD SWACT.
PMCCTDG
The number of times the system has referred a line or a trunk on the PM to
maintenance software for checking because of repeated difficulties during
call processing.
PMCCTFL
The number of instances counted in PMCCTDG where a wrong card, no
card or a card fault was found.
PMPSERR
The number of errors detected on the P-side interface associated with the
peripheral.
PMPSFLT
The number of faults detected on the facilities associated with the peripheral.
PMRGERR
The number of times a problem is detected with a ringing generator
associated with an in service peripheral. This applies to LCMs only.
PMRGFLT
The number of times a problem is detected with an in service ringing
generator associated with an in service peripheral. This applies to LCMs
only.
PMSBTCO
The number of times a terminal is in CP-busy or CP-busy-deload state when
the PM is made system-busy or C-side-busy from an in-service or an
in-service-trouble state.
PMMBTCO
The number of times a terminal is in CP-busy or CP-busy-deload state when
the PM is made man-busy from an in-service or an in-service-trouble state.
PMCCTOP
The number of times an outside plant circuit failure is detected on a line or
trunk by system diagnostics. Retests do not repeg this OM.
PMINTEG
The number of times an integrity failure is reported to the CC by the
peripheral. This does not apply to an LCM.
PMOVLD
This group measures attempts to originate or terminate that were denied in
either the CC or the peripheral during overload. PMOVLD has one
information field and two usage fields.
PMOVLD_INFO_TYPE
This displays the host name and peripheral number.
PORGDENY
The number of originations denied because the peripheral is in an overload
condition and has insufficient real time to process an originating call. The
pegging of this register implies that the origination is lost. The counts are
only reported to the CC after the peripheral recovers from the overload
condition, so until this happens the peg count is inaccurate.
PTRMDENY
The number of terminations denied by the CC because the peripheral is in an
overload condition and has insufficient real time to process a terminating
call. This is not currently supported in International.
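The deferred reporting behavior described for PORGDENY can be sketched in a few lines. This is an illustrative model only, with invented names (Peripheral, porgdeny, cc_reported); it is not DMS-100 software, but it shows why the CC's peg count is inaccurate for the duration of an overload.

```python
# Hypothetical sketch of a peripheral deferring its denied-origination
# peg count until it recovers from overload, as the text describes.

class Peripheral:
    def __init__(self):
        self.overloaded = False
        self.porgdeny = 0          # local count of denied originations
        self.cc_reported = 0       # count the CC has actually seen

    def originate(self):
        if self.overloaded:
            self.porgdeny += 1     # origination lost; peg locally
            return False
        return True

    def recover(self):
        # Counts reach the CC only on recovery, so the CC's view of
        # PORGDENY is stale for the duration of the overload.
        self.overloaded = False
        self.cc_reported += self.porgdeny
        self.porgdeny = 0

pm = Peripheral()
pm.overloaded = True
for _ in range(5):
    pm.originate()          # five originations denied and lost
assert pm.cc_reported == 0  # CC peg count still inaccurate
pm.recover()
assert pm.cc_reported == 5  # counts flushed to the CC on recovery
```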
PMTYP
This group provides totals of the counts within the PM group on a per-PM
type basis. The key to the group is the range of PM types. PMTYP has one
information field and 22 usage fields:
PMTYP_OM_INFO_TYPE
This displays the type of PM, and the count of the number of PMs of that
type.
PMTERR
This is the sum of the register PM_PMERR for this PM type. Excluded are
those excepted in table PMEXCEPT.
PMTFLT
This is the sum of the register PM_PMFLT for this PM type. Excluded are
those excepted in table PMEXCEPT.
PMTMSBU
This is the sum of the register PM_PMMSBU for this PM type. Excluded
are those excepted in table PMEXCEPT. Scan rate is 100 seconds.
PMTUSBU
This is the sum of the register PM_PMUSBU for this PM type. Excluded
are those excepted in table PMEXCEPT. Scan rate is 100 seconds.
PMTMMBU
This is the sum of the register PM_PMMMBU for this PM type. Excluded
are those excepted in table PMEXCEPT. Scan rate is 100 seconds.
PMTUMBU
This is the sum of the register PM_PMUMBU for this PM type. Excluded
are those excepted in table PMEXCEPT. Scan rate is 100 seconds.
PMTSBP
This is the sum of the register PM_PMSBP for this PM type. Excluded are
those excepted in table PMEXCEPT.
PMTMBP
This is the sum of the register PM_PMMBP for this PM type. Excluded are
those excepted in table PMEXCEPT.
PMTSWXFR
This is the sum of the register PM_PMSWXFR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTMWXFR
This is the sum of the register PM_PMMWXFR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTSCXFR
This is the sum of the register PM_PMSCXFR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTMCXFR
This is the sum of the register PM_PMMCXFR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTCCTDG
This is the sum of the register PM_PMCCTDG for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTCCTFL
This is the sum of the register PM_PMCCTFL for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTPSERR
This is the sum of the register PM_PMPSERR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTPSFLT
This is the sum of the register PM_PMPSFLT for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTRGERR
This is the sum of the register PM_PMRGERR for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTRGFLT
This is the sum of the register PM_PMRGFLT for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTSBTCO
This is the sum of the register PM_PMSBTCO for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTMBTCO
This is the sum of the register PM_PMMBTCO for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTCCTOP
This is the sum of the register PM_PMCCTOP for this PM type. Excluded
are those excepted in table PMEXCEPT.
PMTINTEG
This is the sum of the register PM_PMINTEG for this PM type. Excluded
are those excepted in table PMEXCEPT.
PM1
This group is not currently supported on DMS-100 International switches.
PM2
Refer to groups PM or PMTYP.
RADR
This group measures the performance of the Receiver Attachment Delay
Recorder. RADR generates test call originations for timing receiver
attachment under various traffic loads. The key is the RCVR_KIND type
which for international means the UTRRCVR kind. RADR has one
information field and three usage fields:
RAD_PHYS_TUPLE_FOR_OMS
This displays the desired number of test calls per hour (RADCALLR), the
lower delay threshold in seconds (RADLDLYT) and the upper delay
threshold in seconds (RADUDLYT).
RADTESTC
The number of tests carried out in the form of requests for receivers.
RADLDLYP
The number of requests which took longer than the lower delay threshold to
satisfy.
RADUDLYP
The number of requests which took longer than the upper delay threshold to
satisfy.
RCVR
DMS-100 International switches use Universal Tone Receivers (UTRs).
Refer to the UTR OM group.
SOTS
This group measures the performance of the No Circuit Class (NCCLS)
category together with the outgoing and terminating networks. SOTS has 25
usage fields:
SOTSNCBN
The number of times the No Circuit Business Network condition is
encountered. Currently not supported on DMS-100 International switches.
SOTSNCID
The number of times the No Circuit Inward Dial condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCIM
The number of times the No Circuit Intermachine condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCIT
The number of times the No Circuit Intertoll condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCLT
The number of times the No Circuit Local Tandem condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCOF
The number of times the No Circuit Offnet condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCON
The number of times the No Circuit Onnet condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCOT
The number of times the No Circuit Other Trunks condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCRT
The number of times the No Circuit Trunks condition is encountered.
Currently not supported on DMS-100 International switches.
SOTSNCTC
The number of times the No Circuit Toll Completion condition is
encountered. Currently not supported on DMS-100 International switches.
SOTSNOSC
The number of times the No Circuit Service Trunks condition is
encountered. Currently not supported on DMS-100 International switches.
SOTSPDLM
The number of machine dialed calls that route to Partial Dial treatment.
SOTSPSGM
The number of machine dialed calls that route to Permanent Signal
treatment.
SOUTNWT
The number of attempts to find a network path from a line or a trunk to a
selected outgoing or test trunk.
SOUTNWT2
The extension register for SOUTNWT.
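High-volume registers such as SOUTNWT are paired with extension registers (SOUTNWT2). A common scheme, assumed here rather than quoted from the text, is that the base register is 16 bits wide and the extension register counts its wraps, so the true total is extension × 65536 + base.

```python
# Sketch of a 16-bit peg register carrying into its extension register.
BASE_MAX = 1 << 16   # assumed 16-bit base register

def peg(base, ext):
    base += 1
    if base == BASE_MAX:       # base wrapped; carry into the extension
        base = 0
        ext += 1
    return base, ext

base, ext = BASE_MAX - 2, 0
for _ in range(5):
    base, ext = peg(base, ext)

assert (ext, base) == (1, 3)                      # wrapped exactly once
assert ext * BASE_MAX + base == (BASE_MAX - 2) + 5
```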
SOUTMFL
The number of first trial match failures to find a network path from a line or
a trunk to a selected outgoing or test trunk.
SOUTRMFL
The number of outgoing calls failing and getting NBLH treatment due to a
second or last trial network match failure on an attempt to connect to an
outgoing or test trunk.
SOUTOSF
The number of first trial seize failures occurring after an outgoing trunk has
been selected and the necessary network paths acquired, following which the
call has been routed onward in an attempt to select another outgoing trunk.
SOUTROSF
The number of outgoing calls failing and getting SSTO treatment due to
outgoing seize failure.
STRMNWT
The number of attempts made to find a voice path to a terminating line.
STRMNWT2
The extension register for STRMNWT.
STRMFL
The number of attempts made to find a voice path to a terminating line
which fail due to the unavailability of a network connection.
STRMBLK
The number of attempts made to find a voice path to a terminating line
which fail due to the unavailability of an LM channel or the impossibility of
matching an idle channel from the network to the terminating line shelf.
STRMRBLK
The number of attempts made to find a voice path from the network to a
terminating line which fail and route to BNLN treatment. This is also
pegged by STRMBLK.
STRMGSGL
The number of attempts made to terminate to a ground start line which fail
due to glare.
SPC
This group measures the performance of the semi-permanent connections
feature. SPC has six usage fields:
SPCNTCAT
The number of attempts by the administration, via table control, to set up a
semi-permanent connection.
SPCNTCSU
The number of attempts by the administration, via table control, to set up a
semi-permanent connection which are successful.
SPDISCTC
The number of attempts by the administration, via table control, to
disconnect a semi-permanent connection which are successful.
SPCNAUAT
The number of attempts by the audit to establish a semi-permanent
connection.
SPCNAUSU
The number of attempts by the audit to establish a semi-permanent
connection which are successful.
SPCNAUAT
The number of attempts by the audit to idle a semi-permanent connection.
STN
This group is not currently supported on DMS-100 International switches.
SVCT
This group is not currently supported on DMS-100 International switches.
TFCANA
This group measures the usage of the traffic analysis / traffic separation
system. This system is used to measure traffic for specific groups classified
by Source Traffic Separation Number (STSN) and Destination Traffic
Separation Number (DTSN). The counts are collected at Source Traffic
Separation (STS) and Destination Traffic Separation (DTS) intersections.
The key to the group is the traffic analysis register number. TFCANA has
six usage registers per key:
TFANPEG
The number of network connections at each STSN X DTSN intersection as
defined in table TFANINT. The count is made at the point where an idle
destination terminal is available and a successful network connection is
made.
TFANPEG2
The extension register for TFANPEG.
TFANSU
The total setup usage at each STSN X DTSN intersection. Setup usage
(setup time) is accumulated by time stamping. Scan rate is 100 seconds.
TFANSU2
The extension register for TFANSU.
TFANCU
The total connect usage at each STSN X DTSN intersection. Connect usage
(connect time) is collected only after setup usage at the intersection has been
collected. Scan rate is 100 seconds.
TFANCU2
The extension register for TFANCU.
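Accumulating usage "by time stamping", as described for TFANSU and TFANCU, amounts to charging the interval between state-change timestamps to the register for the STSN x DTSN intersection. The sketch below assumes this mechanism and a conversion to hundred-call-second (CCS) units; the event format is invented.

```python
# Rough sketch of time-stamped setup and connect usage for one call.
def accumulate_usage(events):
    """events: list of (time_s, state) transitions for one call."""
    setup_usage = connect_usage = 0.0
    for (t0, state), (t1, _next_state) in zip(events, events[1:]):
        if state == "SETUP":
            setup_usage += t1 - t0
        elif state == "CONNECTED":
            connect_usage += t1 - t0
    # Convert seconds to hundred-call-second (CCS) units.
    return setup_usage / 100.0, connect_usage / 100.0

events = [(0.0, "SETUP"), (6.0, "CONNECTED"), (126.0, "IDLE")]
tfansu, tfancu = accumulate_usage(events)
assert tfansu == 0.06    # 6 s of setup time
assert tfancu == 1.2     # 120 s of connect time
```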
TM
Refer to groups PM and PMTYP.
TONES
This group measures the usage of certain tones. The key to the group is the
CLLI of the tone generator (refer to table TONES). TONES has two usage
fields:
TONENATT
The number of calls routed to the given tone. This is pegged before
determining whether the call can indeed be connected to the tone.
TONEOVFL
The number of calls routed to a given tone which cannot be connected
because the maximum allowable number of calls are already connected.
TRK
This group measures the performance of the office trunks. The key consists
of the trunk group number and the external identifier (CLLI). TRK has one
information field and 19 usage fields:
OM2TRKINFO
This displays the trunk direction, the number of total trunk circuits in the
group, and the number of working trunk circuits in the group (excluding
TK_UNEQUIPPED and TK_OFFLINE).
INCATOT
The number of incoming seizures recognized on this group.
PRERTEAB
The number of incoming attempts abandoned before routing can be
completed.
INFAIL
The number of events on a trunk that appears to have, or actually has,
originated a call, which indicate a possible need for maintenance action.
Each such event generates a log message and results in a call failure if a
call was indeed in progress.
NATTMPT
The number of times routing directed an outgoing trunk call to this trunk
group.
NOVFLATB
The number of times a call allowed access to the given trunk group
overflows the group and is routed onward because no idle trunk is
available.
GLARE
The number of times a previously selected trunk had to be dropped because
the peripheral module detected an origination before it could seize the trunk,
and the customer sub-group data indicated that this office should yield to
glare.
OUTFAIL
The number of occurrences of error on an outgoing trunk after an attempt
has been made to seize the trunk. The trunk will be released and a log
message generated.
DEFLDCA
The number of calls prevented from accessing this trunk group although
they were routed to it, due to the action of network management controls.
DREU
The usage count of the amount of time (in hundreds of seconds) during
which directional reservation is activated for this two-way group. Scan rate
is 100 seconds.
PREU
The usage count of the amount of time (in hundreds of seconds) during
which protective reservation is activated for this two-way group. Scan rate
is 100 seconds.
TRU
The usage count for the number of trunks found to be in tk_cp_busy,
tk_cp_busy_deload, or tk_lockout states. Scan rate is 100 seconds.
SBU
The usage count for the number of trunks found to be in tk_pm_busy,
tk_remote_busy, tk_system_busy, tk_carrier_fail, or tk_deloaded states.
Scan rate is 100 seconds.
MBU
The usage count for the number of trunks found to be in tk_man_busy,
tk_seized, or tk_nwm_busy states. Scan rate is 100 seconds.
OUTMTCHF
The number of attempts to get a network path from an incoming trunk or
line to a selected trunk of this group, which fail because of network
blockage.
CONNECT
The number of outgoing seizure attempts on this trunk group which appear
to have resulted in successful connections, since they have not been
followed by indications of glare or seize failure.
TANDEM
The number of incoming calls on this group which are initially routed to an
outgoing trunk group.
AOF
Not supported on DMS-100 International switches.
ANF
Not supported on DMS-100 International switches.
TOTU
The total trunk group usage. The sum of TRK_TRU plus TRK_SBU plus
TRK_MBU. Scan rate is 100 seconds.
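The state-to-register mapping for TRU, SBU, and MBU, and the TOTU sum, can be sketched as a simple usage scan. The state names come from the register descriptions above; the scan function itself is an illustrative assumption, not DMS software.

```python
# Sketch of a 100-second usage scan over a trunk group, classifying
# each trunk's state into the TRU, SBU, and MBU buckets defined above.
TRU_STATES = {"tk_cp_busy", "tk_cp_busy_deload", "tk_lockout"}
SBU_STATES = {"tk_pm_busy", "tk_remote_busy", "tk_system_busy",
              "tk_carrier_fail", "tk_deloaded"}
MBU_STATES = {"tk_man_busy", "tk_seized", "tk_nwm_busy"}

def scan(trunk_states):
    tru = sum(1 for s in trunk_states if s in TRU_STATES)
    sbu = sum(1 for s in trunk_states if s in SBU_STATES)
    mbu = sum(1 for s in trunk_states if s in MBU_STATES)
    # TOTU is the sum of the three usage buckets.
    return {"TRU": tru, "SBU": sbu, "MBU": mbu, "TOTU": tru + sbu + mbu}

counts = scan(["tk_cp_busy", "tk_idle", "tk_man_busy", "tk_system_busy"])
assert counts == {"TRU": 1, "SBU": 1, "MBU": 1, "TOTU": 3}
```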
TRMTCM
This group measures the usage of Customer Miscellaneous treatments. Only
the usage fields listed are supported by International.
TCMUNDT
Undefined treatment. The default value for entries in Class of Service
screening and Prefix treatment tables when no treatment is required.
TCMPDIL
Partial dial treatment. The treatment given when at least one digit has been
dialed, but not enough to complete the call.
TCMPSIG
Permanent signal timeout treatment. The treatment given when no digits
have been dialed before timeout.
TCMVACT
Vacant code treatment. The treatment given when no valid translation can
be determined for the dialed digits.
TCMBLDN
Blank directory number treatment. The treatment given when an unassigned
directory number is dialed.
TCMTRBL
Trouble intercept treatment. The treatment given when a line or trunk calls a
line with the plug up (PLP) option assigned.
TCMANCT
Machine intercept treatment. This treatment can be used for routing of
disconnected or out-of-service directory numbers to an announcement.
TCMDISC
Disconnect Timing treatment. The treatment given when a line fails to go
on-hook within 10 seconds after the other party terminates the call.
TRMTCU
This group measures the usage of Customer Unauthorized treatments. Only
the usage fields listed are supported by International:
TCUORSS
Originating service suspension treatment. This treatment is given when a
line with denied origination (DOR) or suspended service (SUS) assigned
attempts to originate.
TCUDNTR
Denied terminating treatment. This treatment is given when a call is routed
to a line with denied termination (DTM) assigned.
TCUFNAL
Feature not allowed treatment. This treatment is given when a line tries to
access a feature that it is not authorized to use.
TCUNACK
Negative acknowledgement treatment. This treatment is given when a
subscriber’s feature request cannot be performed due to some feature
interaction or feature restriction. Also given as a treatment to an
interrogation attempt which is correctly dialed but the feature status
interrogated is not in effect.
TRMTCU2
This group is not currently supported on DMS-100 International switches.
TRMTER
This group measures the usage of Equipment Related treatments. Only the
usage fields listed are supported by International:
TERSYFL
System failure treatment. The treatment that is applied if the call must be
aborted due to a switching unit failure, whether hardware or software.
TERRODR
Reorder treatment. This treatment is given to calls for which distorted
signals are received during dialing or in-pulsing.
TERINBT
Inbusy treatment. The treatment is given to calls routing to a line which is in
INB state.
TRMTFR
This group measures the usage of Equipment Related treatments. Only the
usage fields listed are supported on DMS-100 International switches:
TFRBUSY
Busy line treatment. This treatment is given when a line tries to route to
another line which is busy (and call waiting is not in effect), is the
caller's own line, has been seized for testing, or is out of service.
TFRCONF
Confirm tone treatment. This treatment is given when a subscriber tries to
access a feature and is successful. Also given as a treatment to an
interrogation attempt which is correctly dialed and where the feature status
interrogated is in effect.
TFRILRR
International line restrictions treatment. This treatment is given to a line
which tries to route to a translation class that has been denied by the ILR
option.
TFRIWUC
International wakeup call treatment. This treatment is given to a line which
answers a wakeup call activated through the WUC casual feature.
TROUBLEQ
This group measures the number of times the system refers a line to a queue
for diagnostics. The key is the queue type. TROUBLEQ has one
information field and three usage fields:
TROUBLEQ_OM_INFO
This displays the size of the queue.
TRBQATT
The number of times the system referred a line to this queue for diagnostics.
TRBQOCC
The number of lines in the queue when the queue was sampled. Scan rate is
100 seconds.
TRBQOVFL
The number of times a line was sent to this queue for diagnostics but could
not be placed on the queue because the queue was full.
TS
This group measures time-switch usage on the peripheral side of the
network. TS has the following usage fields:
TS0
The usage count of the peripheral-side time-switch 0.
TS1
The usage count of the peripheral-side time-switch 1.
TS2
The usage count of the peripheral-side time-switch 2.
TS3
The usage count of the peripheral-side time-switch 3.
TS4
The usage count of the peripheral-side time-switch 4.
TS5
The usage count of the peripheral-side time-switch 5.
TS6
The usage count of the peripheral-side time-switch 6.
TS7
The usage count of the peripheral-side time-switch 7.
UTR
This group measures the usage of the Universal Tone Receivers. UTRs are
requested for digit collection by both DIGITONE lines and MF trunk calls.
The key to the group is the peripheral range. UTR has one information field
and 10 usage fields:
UTR_OMINFO
This displays the PM type, the XPM number and the number of UTRs
equipped for the peripheral.
UTRSZRS
The number of times a UTR has been assigned in response to a request.
Calls failing to get a UTR will not proceed to the digit collection phase.
UTROVFL
The number of times a request for a UTR could not be satisfied because all
available UTRs were busy.
UTRQOCC
The usage count of the number of requests in the wait queue for a UTR.
Scan rate is 10 seconds.
UTRQOVFL
The number of attempts to secure a position in the wait queue which were
denied because the queue was full.
UTRQABDN
The number of UTR requests which are deleted from the wait queue because
the requestor had given up.
UTRTRU
The usage count of the number of UTRs currently available. Scan rate is 10
seconds.
UTRSAMPL
The number of times the usage registers UTRQOCC and UTRTRU were
updated.
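The UTR request flow implied by these registers (seize an idle UTR, otherwise overflow and try to queue, with full-queue and abandonment pegs) can be modeled briefly. This control flow is inferred from the register descriptions, not quoted; class and method names are invented.

```python
# Inferred sketch of UTR request handling and its peg registers.
from collections import deque

class UtrPool:
    def __init__(self, n_utrs, queue_size):
        self.idle = n_utrs
        self.queue = deque()
        self.queue_size = queue_size
        self.pegs = {"UTRSZRS": 0, "UTROVFL": 0,
                     "UTRQOVFL": 0, "UTRQABDN": 0}

    def request(self, call_id):
        if self.idle > 0:
            self.idle -= 1
            self.pegs["UTRSZRS"] += 1      # UTR assigned on request
            return True
        self.pegs["UTROVFL"] += 1          # all available UTRs busy
        if len(self.queue) >= self.queue_size:
            self.pegs["UTRQOVFL"] += 1     # no room in the wait queue
            return False
        self.queue.append(call_id)
        return False

    def abandon(self, call_id):
        self.queue.remove(call_id)
        self.pegs["UTRQABDN"] += 1         # requestor gave up waiting

pool = UtrPool(n_utrs=1, queue_size=1)
pool.request("a")          # seizes the only UTR
pool.request("b")          # overflows, waits in queue
pool.request("c")          # overflows, queue full
pool.abandon("b")
assert pool.pegs == {"UTRSZRS": 1, "UTROVFL": 2,
                     "UTRQOVFL": 1, "UTRQABDN": 1}
```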
OMs for International subscriber features
Operational measurements are provided for the following subscriber features
on the DMS-100 International switch:
• ADL Abbreviated dialing
• CDA Call diversion to announcement
• CDO Call diversion to operator
• CDS Call diversion to subscriber
• CDB Call diversion on busy
• CDF Call diversion fixed
• CWT/CCW Call waiting/cancel call wait
• IDND International do not disturb
• ESG Emergency service group
• HTL Hot line
• ICR International call recording
• ICT International call transfer
example, Deny National and all International calls). OM pegs are also
required on a call restriction basis.
NWM controls. These displays remain on the VDU until they are erased or
replaced by others via NWM command input.
The trunk group status display provides the identity of trunk groups, the
number of calls offered, the number and percentage of calls overflowing,
attempts per circuit per hour, connections per circuit per hour, the measure
of traffic usage, the number of calls deflected and the name of any active
control on the trunk group.
The control status displays are divided into four categories or levels. These
levels are called the group, code, route and automatic controls. All of these
levels display the different controls (available on a particular level) which
are active in the DMS-100 Family and their total number.
The menus of commands available to the network manager are distributed
among the different levels of displays provided. Any command displayed on
a particular level may be entered.
The network manager may at any time display operational measurement data
on the VDU by accessing the DMS-100 Family operational measurement
system. Network management in the DMS-100 Family employs a
teleprinter which is primarily used for scheduled hard-copy printout of
NWM and OM reports. The VDU and printer may be located remotely.
Status board lamp display
The status board provides the network manager with a lamp display, via
Signal Distributor (SD) points, of the status of selected trunk groups
(normally toll). Lamps are illuminated when all trunks in a group are busy.
A maximum of 32 trunk groups can be associated with a single SD point
which operates only when all groups are in the busy state. A maximum of
1792 SD points can be provisioned. The trunk groups can be outgoing or
two-way groups.
The status board lamp display is updated periodically at an office definable
interval. Data table Office Engineered Parameter (OFCENG) field name
Network Management Trunk Group Busy Lamp Update (NWMTGBLU) is
used when changes to the interval are required. The interval is in units of
ten seconds with a default value of two minutes.
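Because NWMTGBLU is datafilled in units of ten seconds, the default two-minute interval corresponds to a datafill value of 12. A small conversion helper (illustrative only; the function names are not DMS identifiers) makes the relationship concrete.

```python
# Conversion between the NWMTGBLU datafill value and the actual
# status board lamp update interval.
def nwmtgblu_to_seconds(value):
    return value * 10              # datafill is in units of ten seconds

def seconds_to_nwmtgblu(seconds):
    if seconds % 10:
        raise ValueError("interval must be a multiple of ten seconds")
    return seconds // 10

assert nwmtgblu_to_seconds(12) == 120     # default: two minutes
assert seconds_to_nwmtgblu(300) == 30     # a five-minute interval
```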
The status board can be located locally or remotely. A purchased telemetry
unit may be used to remote the status board to other buildings.
Network management controls
Several network management controls are currently available in DMS-100
Family.
Flexible reroute
The flexible reroute control is a percentage based reroute which may be
activated at the MAP. It may have single or multiple vias. Up to sixteen
reroute controls with up to seven multiple vias are allowed. Flexible
reroutes are counted against the number of manual trunk group controls
allowed in the switch at any one time. Five options are supported in the
Flexible Reroute command:
• Regular/immediate reroute
• Direct routed/alternate routed traffic
• Hard to reach traffic/all traffic
• Equal Access traffic/non-equal access traffic/all traffic
• Cancel in chain return
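A percentage-based reroute, as the flexible reroute control is described, sends a configured fraction of affected calls via the reroute vias and leaves the rest on the normal route. The sketch below (names and selection method are assumptions) rotates through up to seven vias, matching the limit stated above.

```python
# Illustrative percentage-based reroute over multiple vias.
import itertools

def make_rerouter(percent, vias):
    assert 1 <= len(vias) <= 7            # up to seven multiple vias
    via_cycle = itertools.cycle(vias)
    state = {"seen": 0, "rerouted": 0}
    def route(normal_route):
        state["seen"] += 1
        # Reroute whenever the rerouted fraction falls below the target.
        if state["rerouted"] * 100 < state["seen"] * percent:
            state["rerouted"] += 1
            return next(via_cycle)
        return normal_route
    return route

route = make_rerouter(50, vias=["VIA_A", "VIA_B"])
choices = [route("DIRECT") for _ in range(4)]
assert choices == ["VIA_A", "DIRECT", "VIA_B", "DIRECT"]
```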
The controlling DMS-100 Family switching system has the ability to apply
three IDOC signal level filters. The thresholds of the signal levels and the
period of their application are controlled by entries in the IDOC data tables
(NWMIDOC and NWMSD). The level (1–3) is selected by an entry in field
IDOCLEV of table NWMIDOC.
Level one is applied when the number of incoming Multifrequency (MF)
calls waiting for a receiver exceeds the On-Threshold (ONTHLD) value for
ONFILTER time, as set in table NWMIDOC. This level is deactivated if it
is less than the value set for Off-Threshold (OFFTHLD) (less than
ONTHLD) for OFFILTER time, as set in table NWMIDOC. The decision to
apply or remove the IDOC level is made every minute.
Level two is applied independently of IDOC level 1 and is active if the
percentage of time devoted to call processing by the CPU of the DMS-100
Family switch is greater than ONTHLD for ONFILTER time. This level is
deactivated when the call-processing CPU usage is less than OFFTHLD for
OFFILTER time. The decision to apply or remove IDOC level 2 is made
every minute.
Level three is applied when the system has lost call-processing capability.
The SD points which transmit control signals reflecting the status of the
IDOC levels are assigned by entries in table NWMSD. Some SD points are
wired to the status board lamp assembly, which then provides indicators of
the length of the MF receiver waiting queue, and CPU occupancy.
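The on/off behavior of IDOC levels 1 and 2 is a threshold hysteresis: the level is applied after the metric exceeds ONTHLD for ONFILTER time and removed after it stays below OFFTHLD for OFFILTER time, with a decision each minute. The sketch below assumes ONFILTER and OFFILTER are expressed as counts of one-minute decisions; that representation is an assumption, not datafill documentation.

```python
# Sketch of IDOC on/off hysteresis with per-minute decisions.
class IdocLevel:
    def __init__(self, onthld, offthld, onfilter, offfilter):
        assert offthld < onthld        # OFFTHLD is below ONTHLD
        self.onthld, self.offthld = onthld, offthld
        self.onfilter, self.offfilter = onfilter, offfilter
        self.active = False
        self.streak = 0

    def decide(self, metric):          # called once per minute
        if not self.active:
            self.streak = self.streak + 1 if metric > self.onthld else 0
            if self.streak >= self.onfilter:
                self.active, self.streak = True, 0
        else:
            self.streak = self.streak + 1 if metric < self.offthld else 0
            if self.streak >= self.offfilter:
                self.active, self.streak = False, 0
        return self.active

lvl = IdocLevel(onthld=80, offthld=60, onfilter=2, offfilter=2)
assert lvl.decide(90) is False   # one minute over threshold: not yet
assert lvl.decide(95) is True    # second minute over: level applied
assert lvl.decide(70) is True    # between thresholds: stays on
assert lvl.decide(50) is True    # one minute under OFFTHLD
assert lvl.decide(50) is False   # second minute under: level removed
```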
Pre-planned number control (PPLN)
The PPLN feature applies remote DOC in response to an external signal
from another office. This feature consists of preplanned controls that are
activated by either scan points, reception of SDOC CCIS messages or via
the NWM VDU.
Scan points may be activated by external ground, loop, or battery. In each
case the external resistance must be less than 6980 ohms; the required
battery voltage is 52 volts.
A maximum of 256 pre-plans can be remotely activated over 256 scan
points. These values are set by office parameters NWMPPLN and NWMSC
respectively. Each pre-plan can implement one of the trunk group controls
listed below over a series of trunk groups (maximum 32 trunk groups). The
trunk group controls have the following activation order:
1 IRR Immediate Reroute
2 DRE Directional Reservation Equipment
3 PRE Protective Reservation Equipment
4 CANT Cancel To
5 SKIP Skip
6 STR Selective Trunk Reservation
7 HUNT (Regular Translations)
8 RR Regular Reroute
9 CANF Cancel From
When a specific pre-plan is assigned to either DRE or PRE, up to 63 trunks
may be reserved.
Selective incoming load control (SILC)
SILC is a substitute for the IDOC control for connected offices that cannot
or do not respond to IDOC signals (such as Equal Access [EA] interLATA
carriers). When SILC controls are activated, selected incoming calls are
blocked to reduce the amount of traffic that is accepted by the switch.
There are two thresholds for SILC controls similar to the thresholds for
IDOC levels 1 and 2 (referred to as MC1 and MC2). Each threshold has two
modes of call blocking, but only one mode may be used at a time:
• Blocking by a preset percentage of incoming calls
• Blocking by a preset gap between incoming calls (call gapping).
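The two SILC blocking modes can be sketched side by side: percentage blocking drops a preset fraction of incoming calls, while call gapping admits at most one call per gap interval. Both implementations below are illustrative assumptions about how the selection might work, not DMS algorithms.

```python
# Illustrative SILC blocking modes: percentage blocking and call gapping.
def percentage_blocker(percent):
    state = {"seen": 0, "blocked": 0}
    def block(_arrival_time):
        state["seen"] += 1
        # Block whenever the blocked fraction falls below the target,
        # so e.g. 25% blocks one call in every four.
        if state["blocked"] * 100 < state["seen"] * percent:
            state["blocked"] += 1
            return True
        return False
    return block

def call_gapper(gap_s):
    state = {"last_admitted": None}
    def block(arrival_time):
        last = state["last_admitted"]
        if last is None or arrival_time - last >= gap_s:
            state["last_admitted"] = arrival_time
            return False           # admitted
        return True                # within the gap: blocked
    return block

pct = percentage_blocker(25)
assert sum(pct(t) for t in range(8)) == 2     # 25% of 8 calls blocked

gap = call_gapper(gap_s=10)
results = [gap(t) for t in (0, 3, 12, 15, 25)]
assert results == [False, True, False, True, False]
```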
lines are given preferred service. Originations from these lines will be
handled before all others.
ESP can be queried to determine its status (on/off). The information printed
will also identify the user performing the enable or disable operation.
Network management displays
The status of the network is displayed on the screen of the MAP VDU and in
levels, where the top-level display gives an overall picture of the whole
network and the lower-level display reflects the status of the various NWM
controls. The network manager can select a desired level by entering the
appropriate command from the menu, and then telescope from the higher to
lower levels.
Network management level display
The NWM level display is outlined in figure 7–7. The first three lines of the
VDU display provide a general view of the traffic-handling capability of the
DMS-100 Family. This display appears at the top of the telescoping levels
available to the network manager. The next lower line displays the variable
information and is up-dated every minute to reflect the constantly changing
traffic load, and controls applied or removed by the network manager. The
individual display fields and ranges in values are as follows:
Display field Ranges Description
Figure 7–7
NWM level display
NWM
0 Quit /
1 /
2 /
3 /
4 Display___ /
5 ___Finals___ /
6 ___Groups___ /
7 /
8 /
9 /
10 /
11 /
12 /
13 /
14 Page /
15 AutoCtrl /
16 GrpCtrl /
17 CodeCtrl /
18 RteCtrl /
USER ID
hh:mm
Ofrd OFFERED. This field displays the peg count of those calls
allowed access to the final trunk group. The peg count
includes those calls deflected by network management.
Ovf OVERFLOW. This field displays the peg count and
percentage (%) of calls overflowing from the trunk group. The
percentage calculation does not include those calls deflected
by NWM.
ACH ATTEMPTS PER CIRCUIT PER HOUR. This field displays
the outgoing call attempts per circuit per hour in the trunk
group.
CCH CONNECTIONS PER CIRCUIT PER HOUR. This field
displays the number of outgoing connections per circuit per
hour in a final trunk group.
ICCH INCOMING CONNECTIONS PER CIRCUIT PER HOUR.
This field is similar to CCH but pegs the incoming
connections.
CCS HUNDRED CALL SECONDS PER HOUR. This field
displays the traffic usage on a trunk group. Both incoming
and outgoing usage is included.
Defl DEFLECTED. This field displays the number of calls
deflected from a trunk group by any of the following controls:
DRE, PRE, SKIP, or CanT.
Figure 7–8
Display finals commands
Note: IntCCtrl (menu item 13) is present in DMS-200 or DMS-300 offices only.
Auto controls The auto control level is accessed from the top NWM level
by the input command “Autoctrl.” Figure 7–9 outlines the NWM “AutoCtrl”
level display. It displays the automatic controls that are active or disabled.
The types of automatic controls available are:
• IDOC Internal Dynamic Overload Control (1–3)
• PPLN Preplan Number Control (0–255)
• AOCR Automatic Out-of-Chain Reroutes (0–63)
• SILC Selective Incoming Load Control (level 1 or 2)
AutoCtrl AutoCtrl
0 Quit_ / IDOC PPln AOCR SDOC
2 /
3 / Active 321 0 0 0
4 List_ / Disabled 31 0 0 0
5 Apply_ /
6 Remove_ /
7 Disable_ / AUTOCTRL:
8 Enable_ /
9 _IDOC_ /
10 _PPln_ /
11 _AOCR_ /
12 /
13 _SDOC_ /
14 Page /
15 /
16 /
17 /
18 /
Group controls The group controls level is accessed from the top NWM
level by the input command “GrpCtrl.” The commands in this menu enable
the network manager to list, apply, or remove any of the group controls on
selected trunk groups.
Figure 7–10 outlines the NWM “GrpCtrl” level display. The following group
controls are available in the DMS-100 Family:
• DRE
• PRE
• CanT
• Skip
• CanF
• STR
• ITB
• SILC
• TASI (DMS-300)
Group controls can be applied to any trunk group as required by the network
manager by telescoping to the appropriate control menu. Once a group
control has been activated, it is displayed on NWM level display.
Figure 7–10
GRPCTRL menu and example display
Note: TASI (menu item 15) is activated only on a DMS-300 with feature package NTX308AA.
Similarly, the heading TASI appears only on a DMS-300 switch; the value represents the number of
active controls. DMS-300 has TASI instead of STR.
AOCR – 0–63
SILC – Level 1 or 2
SDOC – 1–3
Note: The brackets [ ] indicate that the enclosed parameters are
optional.
• Group Control
The REMOVE command removes the specified control from all trunk
groups or if ALL is not entered, from the trunk group selected for the
control. The REMOVE command format for group control is:
REMOVE ctrl [ALL]
where:
ctrl = type of group control, DRE, PRE, CanT, CanF, Skip, ITB,
STR, or SILC.
• Route Control
The REMOVE command removes the active route control defined by the
parameter RrtNo, or all active reroute numbers.
Any controls that have been deactivated are no longer displayed on the
NWM MAP. Also this is logged in the log system and an output report
provided. The REMOVE command format for route control is:
REMOVE rrte [RrtNo ALL]
Note: The brackets [ ] indicate that the enclosed parameters are
optional.
Traffic counts
In the DMS-100 Family, certain OM tables are of particular interest to the
network manager. Summary reports of these measurements can be directed
to a network management output device (usually a teleprinter) at intervals
scheduled by the network management personnel on an auto (normally 15 or
30 minutes or optionally at 5, 10, 15, 20 or 30 minutes), half hourly, hourly,
daily, weekly or monthly basis.
These are some of the reports provided for network management in the
operational measurements:
• Code Blocking Reports (CBK)
• Reroute Report (RRTE)
• Receiver Attachment Delay Report (RADR)
• Preroute Peg Count Reports (PRP)
• Trunk group report (TRK)
Status output
The network manager can list the various controls implemented in the
DMS-100 Family. The information appears as a typical display on the network management MAP for the various controls (see Network Management System Reference Manual, 297-1001-453).
Grpctrl Level
The LIST command displays a list of trunk groups which have the specified
control in effect on them.
LIST ctrl [fsclli1 ... fsclli9 | ALL]
where:
ctrl = type of group control: PRE, DRE, CanT, CanF, Skip, ITB, STR, or SILC
fsclli = full or short form CLLI; up to nine can be entered
ALL = all fsclli with the specified control
The following is displayed on the network management MAP.
Database management
The database management in DMS-100 Family provides the tools and
capabilities which enable the user to modify office data resident in memory.
The following defines the various database management and administration
features available in the DMS-100 Family switching system.
Memory alteration
The DMS-100 Family incorporates a flexible and efficient means of altering
the contents of memory (program parameters and office data update).
Office data modification
Office data modification may be used to add, change, or delete routine
related office data, office parameters and trunk data in local and remote
locations. In the DMS-100 Family system, office data updates are termed DMOs (data modification orders). MDC customers can modify LENs, DNs, or features assigned to lines in their own customer group only, by use of the customer station rearrangement feature.
TTY/VDU entry for immediate activation The data modification facility permits fast and accurate input of DMOs and provides readily understandable machine output. All data modification programs are resident in DMS-100 Family systems; no special procedures need be followed for loading or execution.
The data modification system is data table oriented. The Table Editor (TE)
provides a number of table oriented commands for use by maintenance and
administrative personnel in executing DMOs.
DMOs are entered by typing in table editor commands and the associated
parameters using the keyboard of a teleprinter or VDU, which may be a
designated MAP or a dedicated DMO I/O terminal. The MAP may be
located either locally or remotely.
Extensive use is made of validity checks, error checks, and system safeguards to avoid input errors that could compromise translation data integrity. Failure to pass the checks aborts the order (command) and outputs an error message to the operator, in sufficient detail to identify clearly the reason for the failure. Diagnostic messages and related data are output on the TTY or VDU screen.
TTY/VDU entry in pending order file (POF) for delayed activation
DMOs may be activated immediately or placed in a Pending Order File
(POF) in the DMS data store for activation at a later time.
The pending order file has four capabilities:
• Manual request of the pending order file dump (such as, listing by total
file, due date, or unique identifier).
• Automatic reminder output message of pending orders prior to the due
date.
• Manual activation (by either total file, due date or unique identifier).
• Orders may be activated singly or collectively.
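The four POF capabilities listed above can be modeled in a short sketch. This is illustrative only (the class and field names are invented, not DMS software): it shows a dump by total file, due date, or unique identifier; a reminder check prior to the due date; and activation singly or collectively.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class PendingOrder:
    order_id: str        # unique identifier
    due: date            # due date for activation
    dmos: list           # the DMOs to execute when the order is activated

@dataclass
class PendingOrderFile:
    orders: list = field(default_factory=list)

    def dump(self, due=None, order_id=None):
        """Manual dump: by total file, due date, or unique identifier."""
        return [o for o in self.orders
                if (due is None or o.due == due)
                and (order_id is None or o.order_id == order_id)]

    def reminders(self, today, days_ahead=1):
        """Automatic reminder of pending orders prior to the due date."""
        return [o for o in self.orders
                if 0 <= (o.due - today).days <= days_ahead]

    def activate(self, due=None, order_id=None):
        """Manual activation, singly or collectively."""
        selected = self.dump(due, order_id)
        for o in selected:
            self.orders.remove(o)
        return selected
```

Calling `activate()` with no arguments activates the total file; passing an identifier activates a single order.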
Memory reload
The system’s memory can be reloaded completely and rapidly by means of local or remote controls, or a reset terminal interface (RTIF), using automatic or manual recovery through bootstrapping. See the “Software engineering” chapter for a description of the bootstrap loader.
Teletypewriter (TTY) input/output
The MAP provides an interface between the maintenance personnel and
DMS-100 Family systems. The tasks performed at the MAP include general
maintenance functions (error detection and diagnosis), administration
functions (network management, customer data modification, etc.), and
trunk and line testing functions. The basic components of the MAP include
a VDU, with keyboard, a voice communication module, testing facilities and
position furniture.
Printers and TTYs are used in conjunction with the VDU for maintenance,
traffic counts, service orders, and trunk testing. Hard copies of critical status
indicators and plant and traffic data are available from the TTYs whenever
requested. The VDUs and TTYs can be located locally or remotely as per
operating company requirements.
Automatic traffic and engineering measurements
See Operational measurement on page 7–13 for operational measurements.
Memory verification
In DMS-100 Family systems, Data Store (DS) is divided into protected store
and unprotected store. The protected store contains critical system data
(such as, addresses of procedures in program store or constants), office data
(hardware configuration) and all the translation data. The unprotected
memory or store contains transient data or per-call type information. The
transient or per-call data includes items such as called number or channel
numbers, which are removed from store once the call is finished.
The DMS-100 Family switching system can make an on-line comparison of
all the data stored in protected memory with a backup magnetic tape. If any
mismatches between the backup tape and memory occur, the addresses and
contents of memory at those addresses are printed out on an input/output
device. The system has an input message capability to print out on an
input/output device all the protected memory addresses and contents of
memory stored at those addresses.
Routing of output messages
Output messages in DMS-100 Family systems can be dedicated to an output
terminal, assignable by the operating company with a backup terminal in
case the primary terminal fails. Also, all output messages in response to an
input message are output on the terminal that the input message was entered
on.
Figure 7–11
Structure of typical table and subtable
[Figure: a table with field names and numbers 1 through n across the top and keyed tuples down the side; the data field of one tuple is a subtable pointer leading to a separately keyed subtable.]
Note: The cursor is automatically positioned initially at the top of the table by the “Table” command.
Input prompter
The prompter in table editor has two input modes: prompting and
non-prompting.
Prompting mode
• The name of the required field or parameter is displayed.
• The user must then input syntactically correct data for the field entirely
on the current line.
• If the input data is not correct, an error message is printed and the field or parameter must be re-entered.
• After two syntax errors on the same field, the valid syntax range is displayed.
• While in the prompt mode, the user may enter “ABORT” which has the
effect of killing the command.
Non-prompting mode All commands are initially in the non-prompting
mode. The required parameters are presumed to be on the current line. The
system processes one parameter at a time until it either runs out of input or
encounters an input error. At that time, the system goes into the prompt
mode looking for the missing or invalid parameter. Any input line in either
mode can be continued to the next line by placing a “+” at the end of the
line.
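The prompting and non-prompting behavior described above can be sketched as follows. All names here are hypothetical, not table editor internals; the sketch shows the “+” line continuation, the fallback from non-prompting to prompting mode on missing or invalid input, the widened prompt after two syntax errors, and ABORT killing the command.

```python
def read_logical_line(lines):
    """Join physical input lines: a trailing '+' continues the line."""
    parts = []
    for line in lines:
        if line.rstrip().endswith("+"):
            parts.append(line.rstrip()[:-1])
        else:
            parts.append(line)
            break
    return " ".join(parts).split()

def collect_params(tokens, fields, prompt):
    """fields maps field name -> validator.  Starts in non-prompting mode,
    consuming tokens from the current line one at a time; on missing or
    invalid input it switches to prompting mode for the remaining fields."""
    values, ti, prompting = {}, 0, False
    for name, valid in fields.items():
        if not prompting and ti < len(tokens) and valid(tokens[ti]):
            values[name] = tokens[ti]
            ti += 1
            continue
        prompting = True                  # ran out of input, or input error
        errors = 0
        while True:
            # after two syntax errors the valid range would be displayed
            label = name if errors < 2 else name + " (valid range shown)"
            answer = prompt(label)
            if answer == "ABORT":         # ABORT kills the command
                return None
            if valid(answer):
                values[name] = answer
                break
            errors += 1
    return values
```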
Description of table editor commands
The table editor commands consist of two or more alphabetic characters
followed, in most cases, by parameters. When the table editor is ready to
accept a command, it displays the prompt character at the start of a new line
on the terminal.
There are three basic categories of commands in the table editor: primitive,
conditional, and compound. The commands covered in each category are
shown in table 7–3.
Primitive commands such as LIST, ADD, DELETE, TOP, and BOTTOM,
are the basic commands used to manipulate the tables.
Conditional commands are used as field value tests and logical commands, as follows.
Table 7–3
Table editor commands
Logical Commands:
Boolean_result logop Boolean_result
These commands apply the indicated logical operation (logop) to their left
and right parameters and return a Boolean indicating the logical result:
AND
OR
Compound commands are primitives which can optionally have a
conditional second part, such as, list all (FIELD 2 EQ “XXX”). For further
details see NTP 297-1001-310.
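A compound command such as list all (FIELD 2 EQ “XXX”) can be modeled as a primitive plus an optional conditional second part. The sketch below is illustrative only (tuples as dictionaries, conditions as nested triples), not the table editor’s actual implementation:

```python
def eval_condition(row, cond):
    """cond is ('field', 'EQ', value), or (cond, 'AND', cond),
    or (cond, 'OR', cond); AND/OR combine Boolean results."""
    left, op, right = cond
    if op == "EQ":
        return row.get(left) == right
    if op == "AND":
        return eval_condition(row, left) and eval_condition(row, right)
    if op == "OR":
        return eval_condition(row, left) or eval_condition(row, right)
    raise ValueError("unknown operator: %s" % op)

def list_all(table, cond=None):
    """Primitive LIST ALL with an optional conditional second part."""
    return [row for row in table if cond is None or eval_condition(row, cond)]
```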
Table 7–4
Service order commands
Command Usage Applicable to
NEW
SONUMBER: NOW 85 8 7 AM
DN: 3621495
LCC: 1FR
LATANAME: NILLATA
LTG: 1
LEN: HOST 00 0 02 31
OPTION: DGT
OPTION: $
COMMAND AS ENTERED:
NEW NOW 85 8 7 AM 3621495 1FR NILLATA 1 HOST 00 0 02 31 ( DGT ) $
Operational Measurements (OMs) and traffic data are provided for the SMS
and SLC-96. Call originations from and termination to SMS/SLC-96 lines
are included in the Traffic Separations Measurement System (TSMS).
Registers for Subscriber Line Usage (SLU) and message rate are assignable
to SMS/SLC-96 lines. The MAP provides the capability to query and
display the switch status data of special service circuits, such as complete
cross-connect information and the assigned circuit paths.
Dump/restore
This is a process carried out by Northern Telecom when an office is to
receive a software upgrade, a Batch Change Supplement (BCS). The
process consists of five sequential steps.
1 With the two CPUs in sync, a copy of the data in the tables is dumped to
tape.
2 The CPUs are split, sync is dropped and the inactive side is loaded with
the new software. This is a no-data image.
3 Using a facility called MATE10, communication is established with the
inactive CPU.
4 The data files on the tape are transmitted to the inactive CPU and the
data tables are restored using batch DMO procedures.
5 Activity is then switched and the office is running on the new software.
Pending order file (POF)
The POF is a collection of service orders and DMOs which are not due on
the day they are collated. The POF facility provides a method of verifying
and executing the POF data files. A prompt facility can send a reminder
message to the log device indicating that a particular file of pending orders
is now due (see Pending Order Subsystem Reference Manual, 297-1001-126).
There are two methods of creating and starting POFs. One is for the service
order system, the other is used through the table editor.
POF – service order system created To create a POF via the service
order system the service order clerk enters a service order number and the
date of activation. The DMOs generated by the service order are stored in a
special table called DMOTAB. The prompt message information is stored in
another table called NPENDING. Both of these tables are accessible via the
table editor and are stored in the DMS memory. Editing of a POF created
via the service order system requires deletion of the tuples in DMOTAB and
NPENDING that have the service order number as a key, and re-adding the
POF via the service order system.
POF – table editor created Creation of a POF via the table editor allows
the user more flexibility. The user enters the “POF” mode after entering the table for which he wants to create the DMOs. DMOs in “POF” mode are
not entered directly in the table but rather sent to a user-specified file on a
user-specified device. Error checking is performed on the DMOs before
they are added to the file. When the user has finished entering DMOs, he
leaves “POF” mode or quits the table. Both actions will close the file just
created. Editing of DMO files is accomplished by use of the system file
editor.
A prompt message is not automatically created for DMO files as it is for
service order POFs. The prompt can be manually created for DMO files via
the “CREATE” command in the PENDING subsystem. “CREATE” places
an entry in table NPENDING, and a log message is output at the requested
time.
DMO files can also be created via the file editor, or they could be created
off-line to the DMS-100 and then loaded into the DMS-100 (as a DMS-type
file) via tape, data link, etc. This last is the recommended method of
applying batch data changes to the DMS. Two resident modules are
provided to use with DMO files: DMOVER which verifies the validity of
data files (especially useful for files created off-line), and DMOPRO which
verifies and processes the data files (actually inputs the data).
Pending subsystem This is accessed from the CI by entering the
command “PENDING.” The prompt string will change to “POF” until the
“LEAVE” command is input.
While in the PENDING mode the user can “ACTIVATE” or “DISPLAY”
one, all, or a group (specified by date and time) of POFs. To examine a POF
the user looks at table “DMOTAB” for service order POFs, or prints or edits
the user defined file if the POF is in the DMO format.
Journal file (JF)
Only those DMOs that have been activated and deemed valid are entered into the journal file. Each is identified by a unique JF identification (ID) number assigned automatically by the journal file system. These ID numbers are resident only on the journal file, but are output to the user in the form of a confirmation message when the associated DMO or service order is activated and stored on the JF. (The message is output to the device from which the database change is initiated.) See Journal File Description, 297-1001-127.
JF intermediate storage mechanism
Once valid service orders and DMOs are activated, they are stored in a buffer area in DMS-100 Family core memory that is provided for those journal file records that are to be recorded on tape. When full, the buffers are transferred to the journal file. These buffers reside in protected store and thus survive restarts (including cold starts).
JF tape service order/DMO organization
The ID numbers (and their associated data updates) residing on the JF file
are not necessarily in any sequential order since several users can be
activating different data table changes concurrently. When service orders are performed, only one JF record ID number is output for the entire service order even though several data tables can be updated, because service orders are executive routine driven (updating numerous tables simultaneously). Internally on the JF tape, however, the individual data table changes (the number of changes being dependent on the service order executive routine) are recorded as individual entities, each of which retains the identical JF record ID number. When DMOs are effected, one record ID number is output for every data table (tuple) change initiated.
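The ID-numbering rule can be summarized in a small sketch (hypothetical names, not journal file internals): a service order receives a single JF record ID however many tables it updates, while a plain DMO receives one ID per tuple change.

```python
import itertools

class JournalFile:
    """One JF record ID per service order (however many tables it
    updates); one ID per tuple change for a plain DMO."""
    def __init__(self):
        self._next_id = itertools.count(1)
        self.records = []            # (jf_id, table, change) entries on "tape"

    def log_service_order(self, table_changes):
        jf_id = next(self._next_id)  # single ID for the whole order
        for table, change in table_changes:
            self.records.append((jf_id, table, change))
        return jf_id

    def log_dmo(self, table, change):
        jf_id = next(self._next_id)  # one ID per tuple change
        self.records.append((jf_id, table, change))
        return jf_id
```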
Starting and maintaining a JF
A JF can be kept on tape or disk; access is through the DIRP subsystem. If the office does not have disk, the usual place for the JF is at the end of the backup image tape. Because both the backup image and the JF must be mounted at the same time, this not only saves a tape drive but also positions the tape at the JF after the system has reloaded, allowing immediate application of the JF. If the CPUs are not in sync, JF updates are inhibited from the table editor and from service orders; an override capability is provided.
The JF (and DMOs as well) is stopped before a new backup image of the system is taken. Once the new image has been taken and is in position on the reserved tape drive, a new JF is started at the end of the new image tape. If the office has disk, both the image and the JF are on disk, so the position does not matter. The old JF must be transferred to an archive tape or disk containing all old JFs, cataloged by JF name, which includes the creation date.
The user manipulates the JF with five commands:
• START – starts a new JF
• STOP – stops a JF
• RESTART – reopen and continue a JF
• STATUS – reports state of JF
• APPLY – execute the DMOs in the JF
Note: APPLY can be used only after a system reload and before a new
JF START is done.
Billing
DMS switch billing features
The DMS international switch supports the following billing features:
• ITOPS billing
• International Call Recording
• Inter Administration Accounting
• ICAMA
• DMS-100 metering
ITOPS billing
ITOPS billing uses ICAMA to bill calls that are handled by an ITOPS
operator. For a complete description of ITOPS billing, refer to “ITOPS
billing feature” in this document.
International Call Recording
The International Call Recording (ICR) feature allows recording of all calls
or only selected calls in a DMS switch. For a complete description of ICR
billing, refer to “ICR feature description” in this document.
Inter Administration Accounting
The Inter Administration Accounting (IAA) feature records details of all
transit calls which can then be used by operating companies for division of
revenue purposes. For a complete description of IAA billing, refer to “IAA
billing feature” in this document.
International Centralized Automatic Message Accounting
The International Centralized Automatic Message Accounting (ICAMA)
feature is used in DMS toll offices to record call details. For a complete
description of ICAMA billing, refer to “ICAMA billing feature” in this
document.
DMS-100 meter billing
Meters are data store registers that are incremented as required for each call.
At required times, the data store registers are transferred to tape, disk,
back-up file or out-of-service file. A complete description of the DMS-100
software meter billing feature is contained in this chapter.
PPM pulsing
[Figure: example PPM pulse timing diagrams on a 0–25 s axis, showing pulses spaced evenly within each successive metering phase]
where
N is the number of pulses in each phase
T is the duration of each phase (≤ 50 min)
* repeated until call disconnect
Note: Pulse-refresh is the period of time between pulses in a phase, and is equal to TX/NX. A null phase occurs when N = 0 and T = 0. A free phase occurs when N = 0 and T > 0.
• When a tariff change occurs, the current phase completes before the new
tariff becomes effective.
• The maximum supported pulse rate is two pulses per second.
• The minimum supported pulse rate is one pulse per 50 min.
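Assuming pulses are spaced evenly at the pulse-refresh interval within each phase, the pulse schedule and the two rate limits above can be sketched as follows (illustrative only; phase handling in the switch may differ):

```python
def pulse_times(phases):
    """phases: list of (N, T) pairs, T in seconds.  Returns the offsets
    in seconds at which metering pulses are sent, assuming pulses are
    evenly spaced at the pulse-refresh interval T/N within each phase."""
    times, start = [], 0.0
    for n, t in phases:
        if n == 0:                 # null phase (T = 0) or free phase (T > 0)
            start += t
            continue
        refresh = t / n            # pulse-refresh interval
        if refresh < 0.5:
            raise ValueError("exceeds the maximum rate of 2 pulses per second")
        if refresh > 50 * 60:
            raise ValueError("below the minimum rate of 1 pulse per 50 minutes")
        times.extend(start + i * refresh for i in range(n))
        start += t
    return times
```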
The time-of-day system
The metering system is an application which uses the time-of-day (TOD)
system. The TOD system uses the following tables to set up time of day
definitions:
• DAYTYPES
• TODHEAD
• DAYOWEEK
• DAYOYEAR
• TIMEODAY
Figure 8–2
Meter pulse application
[Figure FW-31061: meter pulses flow from the toll office over incoming and outgoing trunks through the DMS-100 local office to the originating party.]
Figure 8–3
Tandeming of meter pulses to hardware meters and incoming trunks
[Figure FW-31061: tandem pulses from an incoming trunk pass through the DMS-100 (CC and IDTCs) to an outgoing trunk; meter pulses also reach the originating party’s SPM through the ILGC and LCM.]
Meter pulse tandeming is performed when the following conditions are met:
• When the field HWMETER of table MSRCDATA contains a value of
LNRCVMOJ or TKRCVMOJ for a specific metering data index (MDI)
and logical network name (LNETWORK),
• When the field FUNCTION of table MTSIGSYS contains a value of RECEPTION for the trunk group types OPR, MTR, or ITOPS in table TRKGRP.
Meter pulse generation is performed under the following conditions:
• When the field HWMETER of table MSRCDATA contains a value of
TKPULSE for a specified MDI and LNETWORK.
• When the field FUNCTION of table MTSIGSYS contains a value of GENERATION for trunk group types OPR, MTR, or ITOPS in table TRKGRP, and MTOGSSI is specified for the trunk group in table TRKSGRP.
• When the entry in field CALLSET (ALLCALLS or DEMAND) of table MTSIGSYS is matched.
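The datafill conditions above can be restated as two predicates. The sketch is illustrative: the field values are taken from the bullets, but the helper names are invented and the MTOGSSI subgroup check is omitted for brevity.

```python
METERING_TRUNK_TYPES = {"OPR", "MTR", "ITOPS"}

def tandems_pulses(hwmeter, function, trunk_type):
    """Tandeming: HWMETER (table MSRCDATA) is LNRCVMOJ or TKRCVMOJ, and
    FUNCTION (table MTSIGSYS) is RECEPTION for a metering trunk group."""
    return (hwmeter in ("LNRCVMOJ", "TKRCVMOJ")
            and function == "RECEPTION"
            and trunk_type in METERING_TRUNK_TYPES)

def generates_pulses(hwmeter, function, trunk_type, callset_matched):
    """Generation: HWMETER is TKPULSE, FUNCTION is GENERATION for a
    metering trunk group, and the CALLSET entry is matched."""
    return (hwmeter == "TKPULSE"
            and function == "GENERATION"
            and trunk_type in METERING_TRUNK_TYPES
            and callset_matched)
```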
Line and trunk metering
The metering system provides software meters for the following:
• Standard or regular subscriber lines.
• Coin lines are lines that require the collection of a coin or coins in order
to originate a call, or to extend the duration of the call. Coin lines can
also be metered or non-metered. A non-metered coin line requires only
one coin to originate a call, and the call can continue indefinitely without
further charges. Metered coin lines collect a coin for a specified amount
of call time. When the prepaid time is reached, more coins are required
to extend the call time.
• Trunk groups
Metering for standard lines and coin lines is specified in table LINEATTR.
Metering options for standard lines include no metering, software metering
only, or both software and hardware metering. Metering options for coin
lines include hardware metering, or both software and hardware metering.
Metering for trunk groups is specified in table TRKGRP. No distinction is
made between subgroups or among members; however, only OPR, MTR,
and ITOPS trunks support metering. Metering options for trunk groups
include no metering or software metering.
Any line or trunk without software or hardware metering has all originating
calls processed without incurring charges.
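The allowed combinations of agent type and metering option described above can be captured in a small validation table (illustrative names; “software+hardware” stands for the combined option):

```python
ALLOWED = {
    "standard": {"none", "software", "software+hardware"},   # table LINEATTR
    "coin":     {"hardware", "software+hardware"},           # table LINEATTR
    "trunk":    {"none", "software"},                        # table TRKGRP
}

def metering_allowed(agent_kind, option, trunk_type=None):
    """Check a metering option against the rules above; only OPR, MTR,
    and ITOPS trunk groups support metering at all."""
    if (agent_kind == "trunk" and option != "none"
            and trunk_type not in ("OPR", "MTR", "ITOPS")):
        return False
    return option in ALLOWED[agent_kind]
```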
Software metering
Each line or trunk group with software metering has at least one software
meter assigned.
Hardware metering
The generation of pulses for a call originated by a line or trunk group that
requires hardware metering, begins on call answer. The pulses may be used
to either increment the counter on an SPM, or to collect a coin on a metered
coin line. Service order option SPM must be assigned to a line to allow meter pulses to be transmitted. Pulses can be either 12 kHz or 16 kHz.
If the line is datafilled as a non-metered coin line, then a battery reversal is
sent at the point of call answer and continues for the duration of the call. No
other pulses are transmitted. The battery reversal collects a single coin.
Metering for subscriber features
Charging for subscriber features and feature administration is optional. The
metering system provides the capability of charging for the following items:
• assignment of a feature to a subscriber
• administration activation of a feature
• subscriber activation of a feature
• administration programming of a feature
• subscriber programming of a feature
• feature use by a subscriber
• subscriber interrogation of a feature
The DIRP OOS subsystem sets up a recording volume to receive all OOS
billing records, and the DIRP BIL subsystem sets up a recording volume to
create a billing file.
OOS records for a billing period can be incorporated into the billing file in
either of the following ways:
• the active OOS file is rotated to standby at the end of the billing period
• the active OOS file can be closed before the billing file is created
If OOS records for a billing period are not to be added to the billing file, the
OOS file may be left in an active and open state, and the files will not be
processed.
DIRP commands are used to create, mount, demount, rotate, and close the
OOS and billing files. For additional information on the DIRP system, refer
to Device Independent Recording Package Product Guide, 297-1001-013,
Device Independent Recording Package Administration Guide,
297-1001-345, and Device Independent Recording Package Translation
Guide, 297-1001-356.
Billing file content
Billing file data consists of out-of-service and in-service data.
Out-of-service data are all billing records recorded in the billing OOS file.
In-service data consists of billing records for each software meter assigned
to a subscriber. Multiple records are created for subscribers that are
assigned more than one software meter, as one record is generated for each
meter. Billing records for members of multiline hunt (MLH) and distributed
line hunt (DLH) groups are generated with the pilot number of the group.
The OOS billing records and in-service billing records are identical in
format. A billing end-of-OOS record is generated after the last OOS record
is written to distinguish them from in-service billing records.
Data is written to billing files in blocks consisting of 56 line billing records
or 40 trunk billing records. To ensure fixed size blocks, the last block of
billing records is padded with blank billing records.
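The fixed-size blocking rule (56 line records or 40 trunk records per block, last block padded with blank filler records) can be sketched as:

```python
BLOCK_SIZES = {"line": 56, "trunk": 40}
BLANK = None   # stand-in for a blank filler record

def to_blocks(records, kind):
    """Split billing records into fixed-size blocks, padding the last
    block with blank filler records to keep every block the same size."""
    size = BLOCK_SIZES[kind]
    blocks = []
    for i in range(0, len(records), size):
        block = records[i:i + size]
        block += [BLANK] * (size - len(block))   # pad to fixed size
        blocks.append(block)
    return blocks
```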
Billing records
There are two different billing records for lines and trunks.
Line record format
Each line record in the billing file contains the following data:
Directory number A ten-digit number identifying the owner of the meter.
Date A six digit number specifying the date the billing record was created,
in the form YYMMDD.
Figure 8–5 illustrates the formats of trunk billing records, including a standard record and a blank filler record.
Figure 8–5
Trunk billing records
Figure 8–6
Byte format for a line billing record
[Figure: 16-bit word layout of a line billing record; example: DN 6137267734, Wrap No, Date 861105, Name SUBSMET, Count 000000061142]
Figure 8–7
Byte format for a trunk billing record
The table history queue (THQ) audit verifies that each block in the queue is in chronological sequence, and
that each allocated block is in use.
THQ audits are run daily at 03:15, or by commands available at the
MTRSYS menu level of the MAP.
Billing recovery process
Since software meters reside in computing module (CM) data store, they are
not affected by an image reload. Line data, however, only reflects the
information stored when the image was taken, and must be updated by a
journal file (JF) application.
Since call processing can resume before a JF application, the software
meters must be synchronized with the current line data. Since each meter
block contains line ownership information, the line to software meter link
can be restored. Any meter not linked to a line is marked as a recycle meter.
When the JF is applied, an attempt is made to link the line being modified
with new data to one of the recycle meters. If a new line has been added,
the recycle meter associated with the line is used instead of allocating a new
meter.
To avoid retaining recycle meters associated with lines that are not restored,
all recycle meters that have not been linked to lines by the third meter audit
are deleted and written to an OOS file. Meters can be deleted with or
without writing their contents to an OOS file by use of the RCLR command.
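The recycle-meter scheme can be sketched as a small model (hypothetical names, not switch software): meters orphaned after an image reload become recycle meters, JF application re-links lines to them, and meters still unlinked by the third audit are deleted to the OOS file.

```python
class MeterPool:
    """Sketch of the recycle-meter scheme: each meter block keeps its
    line ownership, so the line-to-meter link can be restored after a
    reload; orphaned meters are marked as recycle meters."""
    def __init__(self, meters, live_lines):
        # meters: {meter_id: owning line}
        self.linked  = {m: l for m, l in meters.items() if l in live_lines}
        self.recycle = {m: l for m, l in meters.items() if l not in live_lines}

    def apply_jf_line(self, line):
        """Re-link a line restored by the JF to its recycle meter, if any."""
        for m, l in list(self.recycle.items()):
            if l == line:
                self.linked[m] = line
                del self.recycle[m]
                return m
        return None                  # a new meter would be allocated instead

    def third_audit(self):
        """Delete remaining recycle meters (written to the OOS file)."""
        oos = list(self.recycle)
        self.recycle.clear()
        return oos
```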
Meter backup utility
The metering system contains a backup utility to restore meter data should
the CM data store memory become corrupt. The backup utility copies the
metering data to permanent store in bulk format using DIRP.
Since the backup utility runs automatically, a number of backups are stored
in the same file. The frequency of meter backup images is determined by
the office parameters BACKUP_METER_FREQUENCY_LINES and
BACKUP_METER_FREQUENCY_TRUNKS in table OFCENG.
At the MTRSYS level of the MAP, the command RESTORE can be used to
retrieve the most recent backup data from storage and re-establish meter
integrity. If this command is used after an image reload, the JF should be
applied prior to meter backup.
DIRP MAP level commands can be used to create, mount, demount, rotate
and close the backup files. The MTRSYS MAP level command MSTORE
can be used to create a backup file.
System restarts
The impact of different types of system restarts on software metering is as follows:
• Although software metering survives a CC warm restart, any call
spanning the restart has its charge calculated by CC metering not
XMS-based peripheral module (XPM) metering.
• Although software metering survives a CC cold restart, calls do not. As
a result, any call active when a cold restart occurs is not charged.
• Although software metering survives a CC reload restart, calls do not.
As a result, any call active when a reload restart occurs is not charged.
• Although software metering survives a CC reload from image restart,
line data does not. As a result, JF updates must be run to obtain the
latest line information for the audit process.
• Although software metering survives an XPM cold SWACT, calls do
not. As a result, any call terminated by the XPM SWACT is treated as a
failed call.
• Software metering is not affected by an XPM warm SWACT.
Logs
The DMS-100 switch generates log messages that indicate various events
such as surveillance status, maintenance actions, information items, alarms
and suggested corrective actions. There are over 1600 log messages,
arranged within approximately 140 groups.
Control of log messages
Log messages can be controlled as follows:
• Log output is customized by changing the customer data tables listed in
Customer Data Schema, 297-1001-451.
• Commands at the LOGUTIL level of the MAP can temporarily override
parameters set in the customer data tables.
LOGUTIL
The LOGUTIL level of the MAP contains commands that allow you to
browse software buffers for information about messages, and to temporarily
control the routing and generating of reports.
The following documents contain additional information on the operation
and features of LOGUTIL:
• Log Report Reference Manual, 297-1001-840
• DMS-100 Family Maintenance and Operations Manual, NED 297-0003
Table 8–1
DMS-100i billing logs
Log Definition
AMAB logs
AMAB122 Displays the international call recording (ICR) international
format record generated by a call.
AMAB153 Displays the ICR Turkey format record generated by a call.
AMAB154 Indicates that an ICR extension block could not be obtained for
a call.
AMAB160 Displays the international CAMA (ICAMA) record code BC or
the international inter administration accounting (IAA) record
code BD generated by a call.
AMAB161 Indicates that recording units could not be obtained for a call.
APS logs
APS1xx Indicates that an attendant pay station (APS) call was made,
and the log is to be used to obtain the call detail and cost. The
suffix xx indicates the printer device if the log was generated
by a hotel billing information center (HOBIC). If the log was
not generated by the HOBIC, the suffix is 00.
MTR logs
MTR100 Generated when a subscriber’s meter count surpasses the
maximum meter count for the second time in one billing period,
and a wraparound occurs.
MTR107 Generated every time the recovery, table history queue (THQ)
audit, Charge Updating, and THQCLEAN processes begin to
run.
MTR109 Generated every time the recovery, THQ audit, Charge
Updating, and THQCLEAN processes complete.
MTR114 Generated when the changeover system sends a negative
acknowledgement indicating that the international line group
controller (ILGC) or the international digital trunk controller
(IDTC) is not using the correct tariff number tables for a given
list of logical networks.
MTR116 Generated when the displayed line has no meter block
allocated due to an error condition.
MTR118 Generated when the meter block for the displayed line is also
referenced by another line agent.
MTR119 Generated when all recycle meters in the system are set to
zero, or when the audit runs for the third time after a Reload
Restart.
MTR120 Generated when an attempt is made to charge for a feature
and there is no charge specified in table FEATCHG.
MTR121 Generated when a meter is found with inconsistent control
information during the AGENT/METER audit run.
MTR122 Generated when a recycled meter has been found with
inconsistent control information during the AGENT/METER
audit run.
SCR logs
SCR1xx Indicates that a selective call recording (SCR) call was made,
and the log is to be used to obtain the call detail and cost. The
suffix xx indicates the printer device if the log was generated
by the HOBIC. If the log was not generated by the HOBIC, the
suffix is 00.
End
Priority logs
The following table lists billing priority logs, their alarm class, and the
recommended action.
Table 8–2
Priority logs
Log Alarm class Action
Operational measurements
Operational measurements (OM) provide information on switch
performance and activity. OM data is organized by group, with each OM
group consisting of related measurements that are displayed in registers.
Each register has a unique name, and no group has more than 32 registers.
Note: When OMs are polled, the contents of active registers do not
necessarily contain current information, since the frequency of updated
data received at the computing module (CM) varies depending on the
peripheral module type and its status.
Table 8–3
DMS-100i billing OM groups
Group Description
Table 8–4
DMS-100i billing priority OM registers
Performance factor OM group Register Associated logs
OM thresholding
To assemble a more useful set of statistics, OMs can be assigned thresholds
for specific counts over a specified period of time; an alarm is triggered
when the count reaches the threshold within that interval.
Maintenance assistance package (feature package NTX053AA) allows
operating company personnel to create a threshold level for individual OMs.
The OMs to be assigned thresholds are entered in table OMTHRESH. The
following information is included in the table:
• OM register name key
• enable trigger (Y or N)
• alarm level (none, minor, major, critical)
• event threshold (1 to 32 767)
• time interval (1 to 32 767 min)
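As a sketch of this mechanism, the following fragment models one table OMTHRESH tuple and its trigger decision. The register name, counts, and the simple counter model are illustrative assumptions, not behavior taken verbatim from this document:

```python
# Illustrative model of one OMTHRESH tuple; names and values are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OmThreshold:
    register: str        # OM register name (key field)
    enabled: bool        # enable trigger (Y or N)
    alarm: str           # alarm level: none, minor, major, or critical
    events: int          # event threshold (1 to 32 767)
    interval_min: int    # time interval in minutes (1 to 32 767)

def check_threshold(t: OmThreshold, count_in_interval: int) -> Optional[str]:
    """Return the alarm level to raise once the register's count within
    the interval reaches the threshold, or None if no alarm is due."""
    if not t.enabled or t.alarm == "none":
        return None
    return t.alarm if count_in_interval >= t.events else None

t = OmThreshold("MTRERR", enabled=True, alarm="minor", events=100, interval_min=30)
print(check_threshold(t, 150))   # minor
```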
Table 8–5
DMS-100 billing SPMS index
Basic index Description Associated OM group Associated OM registers Associated log
Alarms
The following table lists the DMS-100 billing alarms that appear at the EXT
level of the MAP display, their alarm class, and recommended maintenance
actions:
Table 8–6
DMS-100 billing EXT alarms
Alarm Alarm class Action
METBCK major The metering backup DIRP file has not been
mounted, which can jeopardize the security of the
billing system. Mount the backup file.
METOOS minor The metering out-of-service DIRP file has not been
mounted, which can potentially deny all metering
system data changes. Mount the out-of-service file.
NTHQBLKS minor Less than 10% of the THQ blocks are free for use
by CC metering. Contact the next level of support.
Note: For more information on the DIRP file system, refer to Device
Independent Recording Package Product Guide, 297-1001-013. For
information on tape mounting and demounting procedures refer to Bellcore
Format AMA Maintenance Guide, 297-1001-570.
Commands
This section includes the user-interface command levels and commands that
are associated with DMS-100 billing, including:
• a list and brief description of the AUDIT, DIRP, MTRSYS, LTPMTR,
MTRTTP, and TTPSUB MAP levels, and of specific CI commands, used
for support and for checking meter contents
• a “tree” diagram showing the relationship of the MAP levels and
sublevels to be used
• descriptions of useful non-menu commands
Menu commands
Figure 8–8 illustrates the hierarchy of DMS-100 MAP menu levels with the
levels associated with billing highlighted. The menu commands available at
the MAP levels associated with billing are illustrated in figures 8–9 to 8–15.
Figure 8–8
DMS MAP menu levels
MAPCI
MTC
MTRSYS
AUDIT
LTPMTR
TTPSUB
MTRSYS
0 Quit
2 Qmtrblk
3 Tariff
4 Errscan
5 Errstop
6 Mstore
7 Restore_
8 Billing_
9 TNT_
10 Audit
11
12 THQCLEAN
13
14
15
16
17
18
Commands associated with the MTRSYS level of the MAP display are as
follows:
Qmtrblk
Displays three values: the number of used, unused, and recycle meter blocks.
Since recycle meters may be in used or unused meter blocks, the sum of
these three values does not represent the total number of meter blocks
allocated. A log with this information is also generated.
An example of command output is as follows:
MTRBLKS LINES: USED 619 UNUSED 382 RECYCLE 0
TRUNKS: USED 1253 UNUSED 795
Note: Recycle meter blocks do not apply to trunks, because trunk assignments
are static, while line assignments change frequently. As a result, lines follow a
different recovery process should a RELOAD from image be performed.
Tariff
Displays the rate of charging for the specified network. A second optional
parameter is the tariff index. This command will display the ANSWER,
INITIAL and OVERTIME tariffs and the time unit. The data is derived
from table MTARIFF.
Errscan
Enables scanning for the metering error status report. This report
corresponds to three MTRERR OM group fields and is displayed on the
screen under the headings: METER OVERFLOW, INVALID MDI and
LATE CHANGEOVER. This command is not supported for trunk metering.
Errstop
Disables scanning for the metering error status report. This command is not
supported for trunk metering.
Mstore
Manually performs backup of software meter counts onto a backup facility.
Restore
Restores meter counts from the backup facility should the software meters
become corrupt.
Billing
Stores charges for lines or trunks on a recording medium from their
software meters, and invokes the billing utility.
TNT
Displays the tariff number table index for the network (or all networks)
based on the current time of day.
Audit
Invokes the AUDIT level of the MAP display.
THQclean
Clears all table history queues to free resources for central control (CC)
metering. This command should be used with caution, as it may reduce the
switch call capacity.
AUDIT
0 Quit
2
3
4 LnMtrs
5 TkMtrs
6
7 THQAUD
8
9
10
11
12
13
14
15
16
17
18
Commands associated with the AUDIT level of the MAP display are as
follows:
LnMtrs
Verifies that the meterblocks allocated represent the line datafill, and that
there is no corruption in the software meters data structure.
TkMtrs
Verifies that the meterblocks allocated represent the trunk datafill, and that
there is no corruption in the software meters data structure.
THQAUD
Ensures the THQs used in CC metering are sane.
TTP
0 Quit_
2 Post_
3 Seize_
4
5 Bsy_
6 RTS_
7 Tst_
8
9
10 Cktloc
11 Hold
12 Next_
13 Rls_
14 Ckt_
15 Trnslvf_
16 Stksdr_
17 Pads_
18 Level_
The only command associated with the TTP level of the MAP display that
impacts billing services is the LEVEL command. The LEVEL command
has several optional parameters depending on the office configuration. Use
the parameter TTPSUB with this command to access the trunk software
metering facility.
TTPSUB
0 Quit_
2 Post_
3
4
5 Bsy_
6 RTS_
7 Tst_
8
9
10
11 Hold
12 Next_
13 Rls_
14 GrpMtrs
15 Print_
16
17
18
Commands associated with the TTPSUB level of the MAP display are as
follows:
Post
Posts a specified trunk circuit.
Bsy_
Busies the specified trunk circuit or the posted trunk circuit if no circuit is
specified.
RTS_
Returns to service the specified trunk circuit or the posted trunk circuit if no
circuit is specified.
Tst_
Performs diagnostic tests on the specified trunk circuit or the posted trunk
circuit if no circuit is specified.
Hold
Holds the posted trunk circuit.
Next_
Seizes the next circuit in the posted trunk group or the held group.
Rls_
Performs a forced release of the specified trunk circuit or the posted one if
no circuit is specified.
GrpMtrs
Displays the software meters and their current values for the trunk group of
the posted trunk circuit.
An example of command output is as follows:
LTP
0 Quit_
2 Post_
3
4
5 Bsy_
6 RTS_
7 Diag
8
9 AlmStat
10 CktLoc
11 Hold
12 Next_
13
14
15
16 Prefix
17 LCO_
18 Level_
The only command associated with the LTP level of the MAP display that
impacts billing services is the LEVEL command. The LEVEL command
has several optional parameters depending on the office configuration. Use
the parameter LTPMTR with this command to access the line software
metering facility.
LTPMTR
0 Quit_
2 Post_
3 Counts_
4 Meters_
5 Print_
6
7
8
9
10
11 Hold
12 Next_
13
14
15
16
17
18
Commands associated with the LTPMTR level of the MAP display are as
follows:
Post
Posts a specified line.
Counts
Displays meter counts for one or all meters.
Meters
Displays all meters owned by the posted line.
Hold
Holds the posted line.
Next_
Seizes the next circuit in the set of posted lines.
Print_
Prints all meter counts for the posted line. This command takes one
parameter, the device name. If the device is a storage device instead of an
input/output device, the information is recorded in a file called MTRCOUNTS.
The information consists of the DN, date, LEN, meter names, meter counts
and WRAP bits.
DIRP MAP menu
The device independent recording package (DIRP) level of the MAP display
as illustrated in figure 8–15 is used to access the commands that control the
files and recording volumes of the DIRP.
Figure 8–15
DIRP MAP menu
DIRP IOD
0 Quit IOC 0 1 2 3
2 Audit_ STAT . . . .
3 Query_
4 Mnt_ DIRP: . XFER: . DPPP: . DPPU: . NOP: .
5 Dmnt_ SLM : . NX25: . MLP : .
6 Rotate_
7 Close_
8 RsetVol_
9 _AMA_
10 _OM_
11 _JF_
Hidden commands
12
13 cleanup
14 revive
15
16 _Active
17 _Stdby_
18 _Paralel
Audit_
Manually initiates a specific audit procedure.
Query_
Displays the current status of recording files for the specified subsystem.
Mnt_
Assigns a recording volume to a subsystem.
Dmnt_
Demounts the specified volume from a DIRP pool, removing it from use by
the specified subsystem.
Rotate_
Initiates manual rotation of duties for the recording files of the specified
subsystem.
Close_
Manually closes specific DIRP files.
RsetVol_
Resets the state of a volume to indicate to DIRP that the specified volume is
now available for recording.
Cleanup
Renames the terminated removed (R) files and deletes closed parallel files.
Revive
Allows the DIRP child processes DIRPGI, DIRPDSON, and DIRPTSON to
be recreated after they have failed.
Non-menu commands
QMTRTBL
Enables the user to view the contents of the deassigned meters tables. If a
device and file name are specified, the contents are written into the specified
file.
This command has the following syntax:
>QMTRTBL direct_no MTR meter_name {dev_name} {file_name}
or
>QMTRTBL direct_no ALL {dev_name} {file_name}
where
direct_no   is the directory number of the subscriber
MTR         specifies that only the contents of the table for the
            specified meter are to be queried
meter_name  is one of the meter names datafilled in the system
ALL         specifies that the contents of all the deassigned meters
            are to be queried
dev_name    this optional parameter enables the contents of the
            specified table or all tables to be copied onto the
            specified device
file_name   this optional parameter provides the name of the file to
            which the contents of the specified table or all tables
            are to be written
DN MTR NAME MTR COUNT MTR DATA BITS DATE AND TIME
––––––––––––––––––––––––––––––––––––––––––––––––––––––––––––
2000001 localcalls 000009754 T T F F F T 91/04/03 19:00
2000010 localcalls 738402938 F T F T F T 91/04/10 12:00
IRATE
This command, when entered from the CI prompt, accesses an international
rating test program (IRATE) that enables the operating company to verify
the rating system datafill.
The IRATE program allows the operating company to enter test call details,
and request the rate step or charge calculation or both for the test call, with
the results appearing on the MAP display.
Table 8–7 lists the IRATE subcommands, their associated parameters, and a
description.
Table 8–7
IRATE subcommands
Subcommand Parameters Description
HELP — Help
Displays online documentation about the IRATE program and
its available subcommands.
Q subcommand name Query
Displays more detailed information about the named subcommand.
QUIT — Quit
Enables you to exit from the IRATE program, and return to the
CI level.
CC — Calculate charge
Used to calculate the charges for the test call if the rate step is
known.
CB — Calculate both
Used to calculate both the rate step and the charges based on
the current settings.
End
MTRPRINT
This command prints the line billing file located on a tape onto an
input/output device.
The command has the following syntax:
>MTRPRINT direct_no
where
direct_no is the directory number of the subscriber
MTKPRINT
This command prints the trunk billing file located on a tape onto an
input/output device.
The command has the following syntax:
DELFM
This command deletes the feature meter from a line’s meter block. The line
must not have chargeable features assigned, or any features active.
The command has the following syntax:
RCLR
This command clears the recycle meters.
The command has the following syntax:
>RCLR Y or N
where
Y specifies that the recycle meters are to be written to the
out-of-service file. This is the default value.
N specifies that the recycle meters are not
to be written to an out-of-service file.
METVER
This command verifies the consistency of the metering tariff tables, and
should be used after changes to these tables have been made to ensure that
all networks have valid tariffs for all time periods.
The command has the following syntax:
Translations
Translations is the process by which information in data tables is accessed
by the system and processed to allow the network or feature to operate. In
order to implement DMS-100 billing, certain tables must be datafilled in a
particular sequence to ensure the smooth operation of the network.
Translations database
In order to perform translations, the switch must access data stored in the
computing module (CM) memory called the translations database.
The translations database contains numerous data tables. Each table has a
specific purpose and contains a certain type of data. For example, table
C7RTESET logically associates linksets to be used in routes through a CCS7
network. When processing a call, the DMS switch may access many tables
to collect the data needed to complete the call.
A table consists of horizontal rows and vertical columns of data. Each row
contains one record of data and is called a tuple. Each column is called a
field. Figure 8–16 illustrates the terminology used to describe a table.
Figure 8–16
Illustration of translations terms
[Figure labeling the table, subtable, and data portions of an example table]
key field
Each table has a key field or fields. Tables may have more than one key
field. These fields uniquely identify any tuple in the table. Knowing the key
fields of a table is important when using the table editor.
range
The range of a field is the set of all possible data values that can be entered
in the field. For example, a field called NUMBER may have a range of 1
through 20. RANGE is also a command that can be entered at the switch to
determine the range of the table or field.
subfield
A subfield is a division of a field. For example, in table C7NETSSN, field
SSNAMES contains two subfields: SSNAME and SSNUMBER.
table editor
The table editor is the user interface to the translation database. It allows the
user to view tables, add or delete tuples, and change data in tuples.
tuple
A tuple is one row of data in a table.
vector
A vector is a field that can contain more than one entry. Each entry is
separated by a space; a plus (+) sign allows continuation to the next line of
data input; and a dollar ($) sign indicates the end of the vector. For example,
the OPTCARD field in table LTCINV can contain up to 10 optional cards;
each entry is separated by a space and the vector ends with the dollar sign.
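As an informal illustration of this input convention, the following sketch parses a vector field. The entry values are hypothetical:

```python
def parse_vector(lines):
    """Collect vector entries: entries are separated by spaces, a '+'
    continues input on the next line, and '$' terminates the vector."""
    entries = []
    for line in lines:
        for tok in line.split():
            if tok == "$":
                return entries
            if tok == "+":
                break  # continuation marker: move on to the next input line
            entries.append(tok)
    raise ValueError("vector not terminated with '$'")

# Hypothetical OPTCARD-style value entered across two input lines:
print(parse_vector(["MSP6 UTR +", "CMR $"]))   # ['MSP6', 'UTR', 'CMR']
```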
This section describes the datafill requirements for tables used by the
DMS-100 billing system.
Translations table flow
The DMS-100i billing translation process is shown in the table flow chart on
page 8–53 and includes the following tables.
Table TRKGRP specifies the metering data index (MDI) and the
XLANAME for trunk calls.
Table LINEATTR specifies the line attribute index for line originated calls.
Table PXCODE translates the incoming digit string in segments.
Table MSRCDATA specifies the source tariff index (STI) for each logical
network and MDI combination.
Table MDESTIDX specifies the destination tariff index (DTI) for each
logical network and the associated metering zone combination.
Table MTARFIDX specifies the tariff index (TRFIDX) for each logical
network, STI, and DTI combination.
Table TIMEODAY specifies the tariff number table (TNT) number for each
logical network and day type and time combination.
Table MTARFNUM specifies the tariff number (TARIFNUM) for each
logical network, TNT number and TRFIDX combination.
Table MTARIFF specifies the metering pulse rate for each of the three
phases of a call.
[Flow chart: tables TRKGRP and LINEATTR feed table PXCODE; PXCODE feeds
tables MDESTIDX and MSRCDATA; these feed tables TIMEODAY and MTARFIDX,
which feed table MTARFNUM, which feeds table MTARIFF]
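The lookup sequence in the flow above can be sketched with ordinary dictionaries standing in for the DMS tables. Every datafill value below (network name, indexes, pulse rates) is hypothetical:

```python
# Sketch of the metering translation chain; dicts stand in for DMS tables.
# All datafill values are hypothetical.
MSRCDATA = {("NET1", 5): 2}             # (logical network, MDI) -> STI
MDESTIDX = {("NET1", 7): 4}             # (logical network, metering zone) -> DTI
MTARFIDX = {("NET1", 2, 4): 9}          # (network, STI, DTI) -> TRFIDX
TIMEODAY = {("NET1", "WORKDAY", 9): 1}  # (network, day type, hour) -> TNT
MTARFNUM = {("NET1", 1, 9): 33}         # (network, TNT, TRFIDX) -> TARIFNUM
MTARIFF  = {33: {"ANSWER": 1, "INITIAL": 2, "OVERTIME": 6}}  # pulse rates

def tariff_for_call(network, mdi, zone, day_type, hour):
    """Walk the table chain from trunk/line data down to the tariff."""
    sti = MSRCDATA[(network, mdi)]
    dti = MDESTIDX[(network, zone)]
    trfidx = MTARFIDX[(network, sti, dti)]
    tnt = TIMEODAY[(network, day_type, hour)]
    tarifnum = MTARFNUM[(network, tnt, trfidx)]
    return MTARIFF[tarifnum]

print(tariff_for_call("NET1", 5, 7, "WORKDAY", 9))
```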
Datafill sequence
The following tables require datafill to implement DMS-100i billing
services. The tables are listed in the order in which they are to be datafilled.
[Figure: ICAMA configurations — a local office connects through ANI trunks
and outgoing trunks either to a DMS toll office or to an ICAMA system]
Call classes for ICAMA are based on definitions of the universal translation
(UXLA) call classes. Table 8–8 lists the call-class codes supported by
ICAMA.
Table 8–8
ICAMA call class codes
Class Description Feature support
Restarts/SWACTs
This section discusses the impact of the restarts and SWACT on ICAMA
billing.
Cold or reload restarts in the CC
If a cold restart occurs before the call information is written to permanent
storage, the call information is lost. Records would be lost for calls in
progress, as well as any completed calls whose records are still in the
temporary buffer, but have not yet been written to tape or disk.
Figure 8–18 shows the relationship between the time the restart occurs and
the impact of the restart on the recording of call data.
Figure 8–18
Impact of cold or reload restarts on ICAMA recording
[Timeline diagram showing call events (a through i) relative to the time of
the restart]
SWACT in the CC
Because all calls are aborted during a SWACT, no data is recorded for calls
in progress.
Cold SWACT/restart in the XPM
If a restart or cold SWACT occurs in the peripheral, the calls are aborted,
although the call record is still generated with an error status. The call
duration is also recorded.
Warm SWACT in the XPM
If a warm SWACT occurs while the call is not in a talking state, for
example, dialing, ringing, answer, or disconnect, ICAMA generates a call
record with a status code of 2 to indicate a call failure.
Call class
A binary coded decimal (BCD) digit code that identifies the call class being
recorded. This information is obtained from translations.
Record code
A two–character field that uniquely identifies the type of record. For
ICAMA, the value in this field is BC.
Call info
A two-BCD digit code in this field that identifies general call information
using a Y/N (yes/no) format as illustrated in figure 8–21.
Figure 8–21
Call information field details
Meaning 0 1 2 3 4 5 6 7
Answered N Y N Y N Y N Y
Service analyzed N N Y Y N N Y Y
Chargeable call N N N N Y Y Y Y
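The Y/N matrix above amounts to a three-bit encoding. As a sketch, with the bit assignment inferred from the column pattern ("answered" as the least significant bit):

```python
def call_info_digit(answered: bool, service_analyzed: bool,
                    chargeable: bool) -> int:
    """Encode the three call-information flags into the 0-7 value of
    figure 8-21: answered = bit 0, service analyzed = bit 1,
    chargeable call = bit 2."""
    return (int(answered)
            | int(service_analyzed) << 1
            | int(chargeable) << 2)

print(call_info_digit(True, False, True))    # 5: answered, chargeable
print(call_info_digit(False, True, False))   # 2: service analyzed only
```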
Calling number
A ten-digit number, right-justified and padded with #F characters when
required, that identifies the call originator. This information is obtained
from ANI.
Called number
An eighteen-digit number, right-justified and padded with #F characters
when required, that identifies the digits received from translations. These
digits include the country code and national significant number (NSN).
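A minimal sketch of this right-justified, #F-padded field format follows; the sample number is hypothetical:

```python
def pack_number(digits: str, width: int) -> str:
    """Right-justify a digit string in a fixed-width BCD field, padding
    the unused leading positions with the filler nibble #F."""
    if len(digits) > width:
        raise ValueError("number longer than field")
    return "F" * (width - len(digits)) + digits

print(pack_number("442071234567", 18))   # FFFFFF442071234567
```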
Start time
A twelve-digit BCD field that identifies the year, month, day, hour, minute,
and second when the call was answered.
Duration
An eight–digit BCD field that identifies the duration of the call measured in
seconds, according to the following equation:
Duration = Disconnect time – Start time
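Assuming the twelve-digit YYMMDDHHMMSS form of the start-time field described above, the duration computation can be sketched as:

```python
from datetime import datetime

def call_duration_seconds(start: str, disconnect: str) -> int:
    """Duration = disconnect time - start time, in whole seconds.
    Timestamps use an illustrative YYMMDDHHMMSS form matching the
    twelve-digit BCD start-time field."""
    fmt = "%y%m%d%H%M%S"
    t0 = datetime.strptime(start, fmt)
    t1 = datetime.strptime(disconnect, fmt)
    return int((t1 - t0).total_seconds())

print(call_duration_seconds("910403190000", "910403190245"))   # 165
```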
OG Trunk ID
A four–digit BCD field that identifies the outgoing trunk group number.
IC trunk ID
A four–digit BCD field that identifies the incoming trunk group number.
Figure 8–22 summarizes the number of BCD units required for each field in
the call record. A BCD unit contains four bits.
Figure 8–22
Call record tape format
Call record
Figure 8–23 contains two boxes. The left box is an ICAMA call record that
contains sample data from the bottom of the figure. The right box displays
the position of each data field.
Figure 8–23
ICAMA record format example
Byte 1 Byte 0
BCD BCD
WD 2 3 0 1 Word Byte 1 Byte 0
0 9 0 C B 0 CALL CLASS RECORD CODE
1 0 5 0 0 1 CALL INFO (CALL MODE)
2 F F 0 0 2 (SERV FEAT)
3 F F 6 8 3 CALLING
4 9 0 8 1 4 NUMBER
5 F F F F 5
6 0 0 0 2 6 CALLED
7 6 6 6 1 7 NUMBER
8 1 6 9 4 8
9 SPARE 9 9 9 SPARE
10 9 0 5 8 10
11 7 0 3 2 11 START TIME
12 3 1 5 4 12
13 4 5 3 0 13 DURATION
14 0 0 0 0 14
15 0 0 0 0 15
16 0 0 0 0 16
17 0 2 1 0 17 OG TRUNK ID
18 4 3 2 0 18 IC TRUNK ID
Called number
An eighteen-digit number, right-justified and padded with #F characters,
when required, that defines the digits received from translations. These
digits include the country code and national significant number (NSN).
Start time
A twelve-digit BCD field that specifies the year, month, day, hour, minute,
and second when the call was answered.
Duration
An eight-digit BCD field giving the duration of the call measured in
seconds.
Pulse count
An eight-digit BCD field containing the number of meter pulses
accumulated for the call. For unanswered calls, the value entered is 0 (zero).
OG Trunk ID
A four-digit BCD field that specifies the number of the outgoing trunk group
used during the call.
Figure 8–24 illustrates the different field names, the number of BCD digits
required for each field in the call, and a definition of each field.
Figure 8–24
ICR record format
Date 6 Year/month/day
Time 6 Hour/minute/second
[Remainder of figure: DMS-100 ANI information digits distinguishing
international and national calls]
Activating IAA
IAA is activated as follows:
1 To indicate those trunks that are to be recorded, datafill Y in subfield
IAA of field GRPINFO in table TRKGRP.
Note: This step applies to MTR trunks only. IAA is not valid for OPR
trunks, nor is the field displayed for ANI trunks.
Figure 8–26
IAA record format
Date 6 Year/month/day
Time 6 Hour/minute/second
Activating ITOPS
ITOPS cannot be activated or deactivated by the operating company, because
this feature is software load dependent.
Office parameter VALIDATE_CCITT_LUHN_DIGIT in table OFCENG is
used to specify if check digit verification is required for CCITT format
calling card numbers. The default value is Y (validate check digit).
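The document does not spell out the algorithm, but the parameter name points at the standard Luhn check; the following is a sketch of that computation, not DMS code:

```python
def luhn_check(card_number: str) -> bool:
    """Standard Luhn check-digit validation, the algorithm implied by
    office parameter VALIDATE_CCITT_LUHN_DIGIT (sketch only)."""
    digits = [int(d) for d in card_number]
    total = 0
    # Double every second digit, counting from the digit next to the
    # check digit (the rightmost digit); subtract 9 from two-digit results.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_check("79927398713"))   # True (classic Luhn test number)
print(luhn_check("79927398710"))   # False
```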
Office parameter GENERATE_ITOPS_LOG_ENTRY in table OFCVAR is
used to generate a log each time an ITOPS call record is created. The log
number can be AMAB180 for a standard ITOPS record, and AMAB181 to
AMAB186, or AMAB188 depending on the extension record involved in
the call.
Record code
This field contains information that identifies the type of call record. For
ITOPS, the value is hexadecimal BE (#BE).
Call class
A two-digit binary coded decimal (BCD) code that identifies the call class
being recorded. Only operator assisted (code 12) or operator originated calls
(code 13) are allowed.
Information digits
Information digits 1 to 6 record various events that occur during a call as
follows:
• Information digit 1 indicates if:
— ANI has failed for the call
— the operator has entered the called number
— the calling number was identified by the operator
Values for digit 1 that apply to ITOPS are as follows:
Meaning Value
0 1 2 3 4 5 6 7
ANI failed? N Y N Y N Y N Y
Operator dialed? N N Y Y N N Y Y
Operator identified? N N N N Y Y Y Y
Values for digit 2 are as follows:
Meaning Value
0 1 2 3 4 5 6 7
Call answered? N Y N Y N Y N Y
Call failure? N N Y Y N N Y Y
Service analyzed? N N N N Y Y Y Y
Values for digit 3 are as follows:
Meaning Value
0 1 2 3 4 5 6 7
Trouble report? N Y N Y N Y N Y
Charge adjust? N N Y Y N N Y Y
No connect? N N N N Y Y Y Y
Values for digit 4 are as follows:
Meaning Value
0 1 2 3 4 5 6 7
Dial rate key? N Y N Y N Y N Y
No charge origination? N N Y Y N N Y Y
No charge key? N N N N Y Y Y Y
Values for digit 5 are as follows:
Meaning Value
0 1 2 3 4 5 6 7
Transferred call? N Y N Y N Y N Y
Cancel timing? N N Y Y N N Y Y
Cancel call? N N N N Y Y Y Y
Values for digit 6 are as follows:
Meaning Value
0 1 2 3 4 5 6 7
Calling number verify? N Y N Y N Y N Y
Trunk offering (TBI)? N N Y Y N N Y Y
Called number overwrite? N N N N Y Y Y Y
Operator number
A four-digit BCD number that indicates the number of the operator that
handled the call. If more than one operator was involved, the number
specifies the last operator to handle the call. Operator numbers are datafilled
in table ITOPSOPR.
Team number
A two-digit BCD number that identifies the team to which the last operator
to handle the call belongs. Team numbers are datafilled in table ITOPSOPR.
Outgoing trunk identification
A four-digit BCD number that represents the external outgoing trunk group
name.
Incoming trunk identification
A four-digit BCD number that represents the external incoming trunk group
name.
As shown in figure 8–28, the size of the special billing extension record is
22 BCDs, divided among the following fields:
Extension code
A two-digit BCD code that specifies the type of extension record present.
For special billing, the extension code is hexadecimal E6 (#E6).
Billing number code
A one-digit BCD code specifying the type of special billing number used.
Billing digit
A one-digit BCD code that indicates if the billing number code entered was
verified by the operator, appears on a hot list, or both. Valid entries are as
follows:
• 0 – default value
• 1 – operator-verified, but not a hot list number
• 2 – not operator-verified, but is a hot list number
• 3 – operator-verified, and is a hot list number
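The four values above amount to a two-bit encoding; as a sketch, with the bit assignment inferred from the list:

```python
def billing_digit(operator_verified: bool, hot_list: bool) -> int:
    """Encode the two billing-digit conditions into the 0-3 code listed
    above: operator-verified = bit 0, hot list number = bit 1."""
    return int(operator_verified) | int(hot_list) << 1

print(billing_digit(True, False))   # 1: operator-verified, not hot list
print(billing_digit(False, True))   # 2: not verified, hot list
print(billing_digit(True, True))    # 3: verified and hot list
```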
Billing number
An 18-digit field that contains the special billing number to which the call is
billed. This field can contain a combination of numeric digits and letter
characters, provided the total number of characters does not exceed 18. The
digits are right-justified and the unused character positions are padded with
hexadecimal Fs (#F) when required.
[Figure 8–29: hotel billing extension record layout — ITOPS extension code
E7 (2 BCDs), room number (6 BCDs), guest name (40 BCDs)]
As shown in figure 8–29, the total size of the hotel billing extension
record is 48 BCDs divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For a
hotel billing extension, the BCD extension code is hexadecimal E7 (#E7).
Room number
A field of six BCDs that specifies the room number to which the call is
billed. This field can contain six numeric digits, three alphanumeric
characters, or a combination of numeric and alphanumeric characters.
Guest name
A field of 40 BCDs (20 alphanumeric characters) that contains the name of
the hotel guest as entered by the operator. This field is left-justified and
padded with hexadecimal Fs (#F) as required.
[Figure 8–30: charge extension record layout — ITOPS extension code E8
(2 BCDs), quoted amount (10 BCDs)]
As shown in figure 8–30, the total size of the charge extension record
is 12 BCDs, divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For a
charge extension record, the extension code is hexadecimal E8 (#E8).
Quoted amount
A ten-digit BCD field that contains the quoted cost of the call in the local
currency. The code is right-justified and padded with hexadecimal Fs (#F)
when required.
As shown in figure 8–31, the total size of the charge adjustment
extension record is 20 BCDs, divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For a
charge adjustment extension record, the extension code is hexadecimal E9
(#E9).
Time
A four-BCD field that specifies the time the operator entered the charge
adjustment. This field is of the form HH:MM.
Charge adjustment type
A two-BCD field that specifies the reason for the charge adjustment entry.
The assignment of charge adjustment types is controlled by the operating
company.
Charge adjustment indicator
A one-BCD field that specifies the type of credit that has been given to the
customer.
Charge adjustment amount
A ten-BCD field that specifies the amount of the charge adjustment. The
type of adjustment value stored is dependent on the value stored in the
adjustment indicator field. All adjustment values are right-justified and
padded with hexadecimal Fs (#F) when required.
Filler
A one-BCD field used to ensure that the charge adjustment record ends on
an even byte boundary, as one byte requires two BCDs.
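The filler rule can be checked with a one-line computation; the field sizes below are taken from the charge adjustment record described above:

```python
def bcd_filler(field_bcds: int) -> int:
    """Filler BCDs needed so a record ends on an even byte boundary
    (one byte holds two BCDs)."""
    return field_bcds % 2

# Charge adjustment fields: extension code (2) + time (4) + type (2)
# + indicator (1) + amount (10) = 19 BCDs, so one filler BCD is needed
# to reach the stated 20-BCD record size.
print(bcd_filler(2 + 4 + 2 + 1 + 10))   # 1
```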
As shown in figure 8–32, the total size of the foreign alternate route
extension record is 6 BCDs, divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For a
foreign alternate route extension record, the extension code is hexadecimal
EA (#EA).
Alternate route country code
A three-BCD field that specifies the country involved as the alternate route.
Valid values for this field are 000 through 999. Since leading zeros are
required for some country codes, padding is not performed in this field.
Alternate route type
A one-BCD field that indicates the type of alternate route call handled by the
operator.
[Figure 8–33: database call extension record layout — ITOPS extension code
EB (2 BCDs), database class (2 BCDs)]
As shown in figure 8–33, the total size of the database call extension
record is 4 BCDs, divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For a
database call extension record, the extension code is hexadecimal EB (#EB).
Database class
A two-BCD field that specifies the call class of the database call. The
assignment of database call class types is controlled by the operating
company.
As shown in figure 8–34, the size of the special long billing extension record
is 23 BCDs, divided among the following fields:
Extension code
A two-BCD code that specifies the type of extension record present. For
special billing, the extension code is hexadecimal ED (#ED).
Billing number code
A one-digit BCD code specifying the type of special billing number used.
The only valid entry is 7, indicating a CCITT calling card.
Billing digit
A one-digit BCD code that indicates if the billing number code entered was
operator-verified, appears on a hot list, or both. Valid entries are as follows:
• 0 – default value
• 1 – operator-verified, but not a hot list number
• 2 – not operator-verified, but is a hot list number
• 3 – operator-verified, and is a hot list number
Billing number
A 19-BCD field that contains the CCITT calling card number to which the
call is billed. The digits are right-justified.
MASSTC feature
The MASSTC command, when entered from the CI prompt, accesses an
international mass table control (MASSTC) program that enables operating
companies to simultaneously activate data changes to the ITOPS tables
listed in table 8–9.
Table 8–9
ITOPS tables associated with MASSTC
Table Form NTP Purpose of table
—continued—
Table 8–9
ITOPS tables associated with MASSTC (continued)
Table Form NTP Purpose of table
End
Figure 8–35
MASSTC program states
[State diagram showing the MASSTC program states and their transitions
(Duplicate, Save, Scrap, Enable) from the INITIAL state]
INITIAL state The MASSTC program is normally in this state. In this
state, the following conditions apply:
• active tables may be edited, and any changes to the datafill will
immediately affect the rating of calls
• the inactive tables are empty
• attempts to add a tuple to an inactive table will fail
Enter the STATUS subcommand in the INITIAL state to display the
following information:
INITIAL STATE
NO INACTIVE DATA
THE FOLLOWING TABLES HAVE INACTIVE TWINS...
TIMEZONE ATTRIB SCHEDEF RSLOC
HOLITRMT MODSET TAXMAPS TAX
RSFOR RSNAT MODMAP CHGHEAD
CHGATRIB ATRIMOD RATEMOD RNDING
DUPLICATED state The MASSTC program enters this state after the
DUPLICATE subcommand has been entered. Enter the STATUS subcommand in this
state to display the following information:
DUPLICATED STATE
OLD DATA IS ACTIVE
NEW DATA IS INACTIVE
THE FOLLOWING TABLES WERE DUPLICATED AT yy/mm/dd hh:mm....
TIMEZONEI ATTRIBI SCHEDEFI RSLOCI
HOLITRMTI MODSETI TAXMAPSI TAXI
RSFORI RSNATI MODMAPI CHGHEADI
CHGATRIBI ATRIMODI RATEMODI RNDINGI
SWITCHED state The MASSTC program enters this state only after the
SWAP command has been entered. In this state, the following conditions
apply:
• the active tables contain the new data
• the inactive tables contain the old data
• inactive tables cannot be edited
Enter the STATUS command in this state to display the following
information:
SWITCHED STATE
NEW DATA IS ACTIVE
OLD DATA IS INACTIVE
THE FOLLOWING TABLES WERE DUPLICATED AT yy/mm/dd hh:mm...
TIMEZONE ATTRIB SCHEDEF RSLOC
HOLITRMT MODSET TAXMAPS TAX
RSFOR RSNAT MODMAP CHGHEAD
CHGATRIB ATRIMOD RATEMOD RNDING
Table 8–10
MASSTC subcommands
Subcommand Description
DUPLICATE Copies the contents of each active table into the corresponding
inactive table.
SWAP Exchanges the contents of the active and inactive tables. This
command can be used at any time during a MASSTC session.
SAVE Saves the current active data permanently. The saved data will
be copied into the active table on all restarts.
SCRAP Erases all new data. The new data must be inactive when this
command is issued.
QUIT Exits the MASSTC program. The previously active table data
will be used on all restarts.
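The states and subcommands above amount to a small state machine. The following Python sketch models only those transitions; the exact transition set (for example, whether SWAP is accepted before a DUPLICATE) is an assumption based on the subcommand descriptions, and the class name is illustrative.

```python
# Assumed transition table for the MASSTC session states; see the
# subcommand descriptions above. This models state changes only, not
# the underlying ITOPS table data.
MASSTC_TRANSITIONS = {
    "INITIAL":    {"DUPLICATE": "DUPLICATED"},
    "DUPLICATED": {"SWAP": "SWITCHED", "SCRAP": "INITIAL"},
    "SWITCHED":   {"SWAP": "DUPLICATED", "SAVE": "INITIAL"},
}

class MassTableControl:
    def __init__(self):
        self.state = "INITIAL"   # the program is normally in this state

    def command(self, subcommand):
        nxt = MASSTC_TRANSITIONS[self.state].get(subcommand)
        if nxt is None:
            raise ValueError(f"{subcommand} not valid in {self.state} state")
        self.state = nxt
        return self.state
```

For example, entering DUPLICATE, SWAP, SWAP in sequence moves the session from INITIAL to DUPLICATED, SWITCHED, and back to DUPLICATED.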
ITOPS
ITOPS description
The International Traffic Operator Position System (ITOPS) allows an
operating company to provide complete operator assistance on local and toll
calls in a national and international environment. ITOPS combines
hardware and software components to create an effective call-processing
system.
ITOPS is made up of several interactive terminals that are used to handle
calls requiring operator assistance. Each terminal is known as an ITOPS
position and consists of a microprocessor-based controller, a keyboard, and a
cathode-ray tube (crt). An operator uses an operator headset with the
position. Figure 9–1 illustrates the components of an ITOPS position.
Figure 9–1
ITOPS position components
[Figure not reproduced: an ITOPS position consisting of a TOPS monitor, ITOPS keyboard, operator headset, and ITOPS controller, with voice and data links to the DMS switch.]
When handling a call at an ITOPS position, the operator uses the keyboard
and screen to transmit and receive call details. These call details are
exchanged in data form between the DMS switch and the ITOPS position
controller. The operator’s headset provides voice contact
with calling and called parties, other operators, and supervisors.
ITOPS resides on a DMS-200i toll switch or a DMS-100i/200i local/toll
switch. The ITOPS positions are grouped into teams or traffic offices
according to the call-handling requirements of the operating company. A
traffic office can contain multiple operator positions and assistance
positions. One in-charge position is allowed for each traffic office.
A rate step is a value that reflects the rating characteristics of a call. The
rate step is used with the rate schedule to determine the charges applicable
to a call.
Manual entry of a rate step
The rate step feature allows the operator to manually enter a rate step when
rating table entries do not exist for the called number. The ITOPS operator
contacts the rate and route operator for the rate step calculation or consults a
list of rates.
ITOPS rating test program (IRATE)
The operating company uses IRATE to verify its rating system datafill.
IRATE uses live data from the rating system data tables but is completely
independent of call processing and ITOPS positions. IRATE allows the
operating company to do the following:
• enter relevant details of a test call from the command interpreter (CI)
level
• request rate step, charge calculation, or both rate step and charge
calculation for the test call, with the results appearing at the terminal
Charge calculator
ITOPS can calculate charges on all types of local, toll, and foreign calls and
can take into account factors such as time zones, holidays, and taxes.
Charge application based on attributes: ITOPS allows the operating
company to vary call charges based on call attributes. The following are call
attributes:
• coin
• hotel
• time and charges
• database call
• attended pay station
• person to person
• bill to called party
• bill as person call back
• calling card
• bill to third party
Different charges may be assigned to all combinations of these attributes.
Discount and surcharge application
The operating company can apply discounts or surcharges to the subscriber
based on the date and time of the call. First, the date is checked to
determine whether a discounted holiday rate applies. Next, the system
checks the day of the week and the method of billing (station or person).
Finally, the system checks the origination time of the call. These combined
factors determine any discounts applied to the call.
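The ordering of those checks can be sketched as follows; the holiday table, discount percentages, and time boundaries are illustrative assumptions, not datafill from any NTP.

```python
import datetime

HOLIDAYS = {(12, 25)}  # illustrative holiday table

def discount_percent(when, person_billed):
    """Return the discount (%) using the check order described above."""
    # 1. the holiday check comes first
    if (when.month, when.day) in HOLIDAYS:
        return 50
    # 2. day of week combined with the billing method (station vs person)
    if when.weekday() >= 5:  # Saturday or Sunday
        return 15 if person_billed else 25
    # 3. origination time of the call
    if when.hour >= 18:      # evening discount
        return 10
    return 0
```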
The operating company may also apply surcharges to a call. Surcharges
may be applied on the following types of billing and charges:
• station collect billing
• station third number billing
• station time and charges
• station calling card billing
• person collect billing
• person third number billing
• person time and charges
• person calling card billing
Charge quotations
ITOPS allows the operator to calculate and quote charges to the subscriber
for hotel calls, attended pay station calls, coin calls, and calls requiring time
and charges quotations. Hotel charges are reported to operators at the hotel
billing information center (HOBIC) for quoting. The operator at an ITOPS
position quotes information for the coin calls and the calls requiring time
and charges.
Third-party billing
ITOPS allows the subscriber to bill a call to a third party. The operator has
the option of calling the third party for verification that they will accept the
charges.
Calling-card billing
ITOPS allows the subscriber to use a calling-card number as a method of
billing. The operator verifies the card number by accessing a calling card
database and requesting validation. This function is not directly supported
from an ITOPS position, but the operator has the ability to forward connect
to the appropriate operator who does have this validation information.
ITOPS supports four types of calling-card formats:
• revenue accounting office (RAO)
• overseas
• directory number
• Consultative Committee on International Telephony and Telegraphy
(CCITT)
Hot list
The hot list feature allows the operating company to define a table
containing up to 64 fraudulent calling-card or third-party numbers. The
operator is alerted on the display screen of the fraudulent number and can
take the appropriate steps outlined by the operating company.
Tax calculation
ITOPS offers the operating company the option of calculating taxes for a
call based on a percentage of the total call charge or based on the
charge-per-minute or charge-per-second rate of the call.
Charge adjustments
ITOPS allows the operator to make several types of charge adjustments:
• monetary
• time
• entire charge deletion
• mark the call for later adjustment
Table 9–1
Call origination types
Operator assisted (OA) The subscriber dialed the number directly but
requires operator assistance.
Direct dialed, international (FOR) The subscriber dialed a call direct from a
foreign origination without operator assistance.
Intercept (INT) A call has been intercepted and the subscriber has changed
the number.
Book (BOOK) The subscriber wants to book a call in the delay call
database.
Notify (NFY) The specified notify period has expired and the call should be
recalled. No operator intervention is required.
Time and charges (T&C) Time and charges need to be quoted and the call
should be recalled. No operator intervention is required.
Held (HLD) The call held by the operator has ended and it should be
recalled. No operator intervention is required.
Coin recall (RCL xx) A specified number of minutes have elapsed and the
call should be recalled. No operator intervention is required.
End
Some calls that originate as DD and OA are recalled to the operator because
no calling number is identified. These calls are:
• operator number identification (ONI) — ONI calls originate at end
offices. End (local) offices are switching offices that accommodate
terminating subscriber lines and provide trunks for establishing
connections to and from other switching offices. These end offices are
not equipped to automatically provide the calling number automatic
number identification (ANI) spill. Therefore, the call is routed to an
operator for collection of the calling number. The operator display for
this call varies depending on whether it is DD or OA.
• automatic number identification failed (ANIF) — ANIF calls originate at
end offices that are equipped to automatically provide the calling
number, but fail to do so on a given call because of resource failure. The
call is routed to an operator for collection of the calling number. The
operator display for this call varies depending on whether it is DD or
OA.
• recall — Recall calls are handled by an operator, released, and returned
to an operator for further operator assistance. The returning operator may
not be the original operator.
— Recalls are initiated by the system for coin calls when additional
coins are required or for alternate billing requests. The operator
screen display varies according to the type of recall.
Determining station class
The station class of the call indicates the type of station from which the call
originates, such as coin, attended pay station, hotel, or non-coin. The station
class is determined by whether the trunk group is dedicated or combined.
Dedicated trunks carry traffic for one station class only, such as coin.
Combined trunks carry traffic from various kinds of stations, such as coin,
non-coin, and hotel.
If a call arrives at an ITOPS office over a dedicated trunk group, the system
automatically determines the station class. If the call arrives on a combined
trunk group, the ITOPS system determines the station class by examining
the ANI information provided by the end office.
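A minimal sketch of this decision follows, assuming illustrative trunk-group datafill and ANI information digits; the real ANI code assignments are office specific.

```python
# Illustrative trunk-group datafill and ANI information digits.
TRUNK_GROUPS = {
    "TG_COIN":     {"dedicated": True, "station_class": "coin"},
    "TG_COMBINED": {"dedicated": False},
}
ANI_INFO_DIGITS = {"27": "coin", "00": "non-coin", "06": "hotel"}

def station_class(trunk_group, ani_digits=None):
    tg = TRUNK_GROUPS[trunk_group]
    if tg["dedicated"]:
        # dedicated trunks carry traffic for one station class only
        return tg["station_class"]
    # combined trunks: examine the ANI information from the end office
    return ANI_INFO_DIGITS[ani_digits]
```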
Determining whether an operator is needed
Once the system identifies the station class and call type, it can determine
whether operator assistance is required. OA and OH calls always route to
an operator. DD calls do not normally route to an operator unless there is a
resource failure. ONI and ANIF calls, recalls, and service code calls always
route to an operator.
Routing a call to an operator
ITOPS uses queues to manage calls requiring operator assistance. The
system uses two queues for position management (idle position queues) and
six queues for incoming calls distribution (calls-waiting queues).
Idle position queues: The ITOPS office maintains two queues
associated with operator positions. These queues keep track of the number
of positions that have both loops (loop 1 and loop 2) available, and the
number of positions that have only one loop available. Most incoming calls
are routed to positions that have both loops available. The ITOPS system
searches the queue for the most idle position and connects the call to
that position. If there are no positions with both loops available, the system
places the call in one of the calls-waiting queues.
Calls-waiting queues: Six calls-waiting (CW) queues are associated
with an ITOPS office:
• nontransfer (general)
• transfer 1
• transfer 2
• nontransfer recall
• transfer 1 recall
• transfer 2 recall
When a call arrives at the ITOPS office and an operator position is available
to process a call (both loops are available), the call is connected directly to
that position. However, if a position is not available, the call is timestamped
and placed in one of the CW queues.
The ITOPS system is configured to automatically distribute calls evenly
across all positions so that no one position is overburdened.
Each operator position is assigned to process calls from one or more of the
CW queues. For example, one operator position can service the general
queue, another can service the transfer queues, and another can service all
queues. The association of positions with queues is defined by the operating
company in datafill.
Calls in the recall queue are handled first. The recall category consists of
calls that have been previously connected to an operator but must be
reconnected for additional assistance. After all recalls are serviced, the
oldest call in the nonrecall queue is connected to an operator.
Nonrecall calls are newly originated calls that have not yet received operator
assistance. The queue in which a call is placed is defined in datafill; for
example, calls can be queued based on call type or trunk group type.
Through datafill, call types can be prioritized within the queues to give
certain call types higher priority.
Dequeueing calls: Within each priority level, calls are serviced on a
first-in, first-out basis, depending on the type of call.
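This per-priority, first-in first-out behaviour can be sketched with a heap keyed on priority and arrival order; the call names and priority values are illustrative.

```python
import heapq
import itertools

class CallsWaitingQueue:
    """Serve higher-priority calls first; FIFO within a priority level."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # arrival order stands in for the timestamp

    def enqueue(self, call_id, priority):
        # lower number = higher priority; the sequence number breaks
        # ties first-in, first-out within a priority level
        heapq.heappush(self._heap, (priority, next(self._seq), call_id))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

q = CallsWaitingQueue()
q.enqueue("general-1", priority=2)
q.enqueue("recall-1", priority=1)   # recalls are serviced first
q.enqueue("general-2", priority=2)
```

Dequeueing this queue returns the recall first, then the two general calls in arrival order.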
Calls-waiting queue thresholds: The system provides a threshold
mechanism to limit the amount of time a call must wait in the CW queue. If
the threshold is reached, any calls received requiring operator assistance are
deflected for as long as the threshold is exceeded.
Overflow: If the queue size is exceeded, an overflow condition occurs, and
all additional calls requiring operator assistance are deflected. Queue
overflow conditions indicate office engineering problems. Refer to ITOPS
Planning and Engineering Guide, 297-2181-155, for information about
proper engineering of an ITOPS office.
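A minimal sketch of the deflection behaviour, assuming the threshold is a datafilled limit on queue length (the threshold value here is illustrative):

```python
from collections import deque

CW_THRESHOLD = 3   # illustrative datafilled threshold

cw_queue = deque()

def offer_call(call_id):
    """Queue an incoming call, or deflect it while the threshold is exceeded."""
    if len(cw_queue) >= CW_THRESHOLD:
        return "deflected"
    cw_queue.append(call_id)
    return "queued"
```

Calls are deflected only while the queue sits at or above the threshold; once operators drain the queue, new calls are accepted again.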
Enhanced Automatic Call Distribution (ACD) queuing: The
enhanced ACD system simplifies and increases the flexibility of call
queuing. This enhanced system takes the calls destined to operator services,
analyzes their call properties, and assigns them to the appropriate operator or
call queue. It provides up to 256 call classes, 64 call queues, and up to 99
combinations of attributes to create new call classes.
Call handling
When a call arrives, the operator is notified by a call arrival tone followed
by a screen display. The screen display depends on the call origination type
and station class. For complete details on screen displays, refer to ITOPS
Operator Guide, 297-2181-300.
Procedures: The basic call-handling procedures are as follows:
1 The operator asks the subscriber for the service desired.
2 The operator enters the appropriate information (calling or called
number, or billing information) and performs the necessary functions.
3 The operator releases the call from the position.
As the operator is processing the call, the DMS is working to establish an
outgoing route for the call. Once the route is established and the operator
releases the call from the position, the DMS connects the calling and called
parties directly, and the three-port conference circuit is dropped.
Billing: A call cannot be floated from an operator position until all billing
requirements are satisfied. For all call types, the ITOPS billing system
provides a series of billing classes.
The rate at which a call is billed is determined automatically by the ITOPS
rating system software. This process is transparent to the operator; however,
the operator can provide rate information to the customer upon request.
Immediate rate information is required for the following kinds of calls:
• coin-paid, initial, and subsequent periods
• hotel-paid/APS-paid
• time-and-charges requests
Operator force configuration determines the type of TTY devices that are
present in an office. There are two types of operator force configurations:
single-traffic office and multitraffic office.
In a single-traffic office configuration, the operator force consists of one
traffic office (the operators are all located in the same place). The
supervisor in a single-traffic office configuration can monitor the status of
the office from an in-charge position that has force management
capabilities.
In a multitraffic office configuration, a force management position monitors
the status of the different traffic offices. The force manager position can
obtain the following types of information for each office:
• the total number of occupied positions
• the number of occupied positions not accepting new calls
• the number of positions that are out of service
The voice trunk connects the operator to any network point through a
conference circuit. The conference circuits are used for connections between
all parties involved in the call. For this reason, three-port conference circuits
are required for ITOPS calls.
The data trunk allows the data flow between the position and the central
control through the digital modem. Through this interface, manipulation of
the voice and data trunks is possible from the maintenance and
administration position (MAP). Refer to the ITOPS Routine Maintenance
Procedures, 297-2181-523, ITOPS Maintenance Guide, 297-2181-524,
ITOPS Card Replacement Procedures, 297-2181-525, and ITOPS Trouble
Locating and Clearing Procedures, 297-2181-520, for details on the
maintenance support for ITOPS position trunks.
Figure 9–2 illustrates the ITOPS operator position connections.
Figure 9–2
ITOPS operator position connections
[Figure not reproduced: the ITOPS position connects through the network to a three-port conference circuit (voice, via the IDTC) and to a digital modem (data) linking it to the central control.]
ITOPS devices
ITOPS devices are TTYs that are used for various activities throughout the
ITOPS system. Devices monitor performance of the operator workforce and
print operator statistics and information about the system and the traffic
office. They also administer the controlled traffic and operator study
registers, and print time and charges (T&C) data.
The following are ITOPS devices:
• traffic administration data system (TADS) TTY
• force administration data system (FADS) TTY
Call transfer types: The force supervisor can restrict the call-origination
types presented to an ITOPS operator by assigning that operator a call
transfer set or a controlled traffic set of calls.
The call transfer set is composed of the call transfer types that operators can
handle. Operators can transfer calls they cannot handle to operators who can
handle those types of calls. Calls can be marked for transfer in two ways:
• The operating company may datafill a table that automatically transfers a
call to an available operator.
• The operator may transfer the call, through keying action, to another
position for processing.
When a call is marked as a transfer call, it can be presented only to operators
capable of handling this transfer type. The call transfer set active at any
given ITOPS operator position is the combination of the call transfer sets of
the position and of the operator logged into it.
The controlled traffic set is composed of the call-origination types that an
operator can handle. Unless specified otherwise, operators are assumed to
be able to handle all ITOPS call types. The controlled traffic set is beneficial
in training new operators because it allows the force supervisor to restrict the
call-origination types to those an operator can handle.
Queue thresholds: ITOPS provides calls-waiting queues when there are
more calls than can be handled by the operator work force. The force
supervisor determines the queue threshold or limit on the number of calls
that can be stored in a queue. Queue thresholds are indicated by the
following signs:
• A display flashes “calls waiting” on the in-charge and assistance position
screens when a queue reaches its assigned threshold. This signal
indicates that operators need to increase their call-handling speed.
• The flashing signal is removed when operators handle calls and the
number of calls in the queue drops below the threshold.
• If the call cannot be queued because the call-deflect threshold is reached,
the call is deflected to treatment and a display flashes “calls deflected”
on the in-charge and assistance position screens.
Enhanced Automatic Call Distribution
The enhanced Automatic Call Distribution (ACD) feature simplifies and
increases the flexibility of call queuing. This system takes the calls destined
to operator services, analyzes their call properties, and assigns them to the
appropriate operator or call queue. The enhanced ACD feature is included
as part of the base ITOPS software, but activating it for use is optional.
To increase flexibility, the enhanced ACD system can contain up to 256 call
classes and up to 64 call queues. The system allows the operator to enter up
to 99 combinations of attributes to create new call classes.
Figure 9–4
Typical ITOPS call configuration
[Figure not reproduced: calling party A and called party B are connected through a three-port conference circuit; the ITOPS position connects to the conference circuit over a voice trunk and to a digital modem over a data trunk.]
The delay call database allows the operator to perform the following tasks:
• retrieve calls from the database using the calling number of the
subscriber or the serial number designated by the delay call database
• modify details of a stored call
• delete individual call entries from the database
• delete unprocessed calls in the database that are older than the operating
company’s definable limit. (This is called mass call deletion.)
The delay call database can accommodate 5120 calls at any given time.
Timed delay calls: The operator can store calls, set a recall time, and have
the calls automatically recalled to the position when the timer expires. The
details of the call are displayed at the operator’s crt, but no parties are
involved in the call. The operator must connect the calling and called parties
in the call. If both parties are reached, the call is completed and the operator
can then start timing it. If both parties cannot be reached, the call can again
be stored in the delay call database.
The database can also store calls without a specified time. These calls are
recalled only when they are manually retrieved by the operator.
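The timed and untimed storage described above can be sketched as follows. The field names and serial-number scheme are illustrative assumptions; only the 5120-call capacity is taken from the text.

```python
import heapq

class DelayCallDatabase:
    CAPACITY = 5120   # maximum calls stored at any given time

    def __init__(self):
        self._timed = []       # heap of (recall_time, serial)
        self._by_serial = {}   # serial -> stored call details
        self._next_serial = 1

    def store(self, call, recall_time=None):
        """Store a call; a recall_time makes it a timed delay call."""
        if len(self._by_serial) >= self.CAPACITY:
            raise OverflowError("delay call database full")
        serial = self._next_serial
        self._next_serial += 1
        self._by_serial[serial] = call
        if recall_time is not None:
            heapq.heappush(self._timed, (recall_time, serial))
        return serial

    def due(self, now):
        """Timed calls whose recall timer has expired."""
        out = []
        while self._timed and self._timed[0][0] <= now:
            _, serial = heapq.heappop(self._timed)
            if serial in self._by_serial:
                out.append(self._by_serial.pop(serial))
        return out

    def retrieve(self, serial):
        """Manual retrieval, as for untimed delay calls."""
        return self._by_serial.pop(serial)
```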
Untimed delay calls: Untimed delay calls are returned to an ITOPS
operator when a foreign operator informs the ITOPS operator that an
attempted call can now be processed. The operator retrieves the call from
the database and the foreign operator becomes the called party.
The operator’s crt is updated with the call details and the ITOPS operator
attempts to reach the original calling party. When the calling party is
reached, the calling and called connection is established and the ITOPS
operator can start timing the call.
Route queuing: Route queuing is a method of storing calls in the ITOPS
database that are waiting for an outgoing route (trunk group). When a
member in the trunk group becomes available, it is held. The call is recalled
to the ITOPS position so that the operator can place the call using the
available outgoing trunk. This saves the time and effort of checking
repeatedly for an available trunk.
With the route queuing feature, if the subscriber tries to complete a call over
a busy outgoing route, the call can be queued on that outgoing trunk. When a
member in the trunk group becomes available, it is held for the subscriber.
Then the call is recalled to an available ITOPS position, and the operator can
complete the call.
If more than one call requires the same outgoing route, these calls are
queued. The first call requiring the route has the first chance of completion.
Subsequent calls must wait. When the first call is complete, the required
route is idled and is held for the next call in the queue waiting for that route.
At this point, the trunk is not available for normal call processing. This
process continues until there are no more calls waiting for the particular
route and the route becomes idle.
International Rating System - NTXB84
The International Rating System feature package provides the capabilities
needed to rate calls and to calculate charges, including tax, based on
predefined characteristics. Using the rating system, the operating company
can provide subscribers with a reliable and consistent method for rating
ITOPS calls.
The International Rating System allows the operating company to create rate
schedules consisting of a common set of rating characteristics that apply
from an originating point to a terminating point. These schedules can be
used for various combinations of originating and terminating points that
share the same rating characteristics.
Although the rate schedule and the rate step drive the charge calculation for
all rateable calls, billing details also affect the final charges. Call charges
may vary depending on whether the call was person or station billed,
operator handled or assisted, direct dialed, or coin or noncoin. Different
charges may be assigned to various combinations of these factors as required
by the call. The rate schedule provides the necessary refinement of the
charges based on the billing details.
Charge calculator
The charge calculator is a set of procedures used with ITOPS tables that
determines the charges to be applied to all types of local, toll, and foreign
calls. The final charge depends on the following factors:
• type of call
— direct dialed (DD)
— operator assisted (OA)
— operator handled (OH)
— operator assisted, international (INW)
— direct dialed, international (FOR)
— directory assistance (DA)
— intercept (INT)
— book (BOOK)
— database
— operator (OPER)
— special (SPL)
— notify (NFY)
• hotel
• time and charges
• database call
• attended pay station
• person to person
• bill to called party
• bill as person call back
• calling card
• bill to third number
Different charges may be assigned to all combinations of these attributes.
Call duration: Most operating companies have two basic rates for billing
toll calls: an initial period rate and a subsequent period rate. The operating
company may apply charges to the subscriber on a minute or second basis.
For example, an initial rate per period might be $3 for the first three minutes
or less. The subsequent rate per period might be $1 for every 45 seconds
after the initial period expires.
The initial rate is always charged, even if the call terminates before the
initial period. The subsequent rate is charged at the start of every subsequent
period.
The operating company has the option to datafill tables for accommodating
any of the following three scenarios:
1 The initial and subsequent period rates are measured in minutes. The
charge is $2.00 for the one-minute initial period and $.55 for the
one-minute subsequent interval. The net charge for a call lasting one
minute and forty seconds is $2.55.
2 The initial and subsequent period rates are measured in blocks of sixty
seconds. The charges are $1.60 for the sixty-second initial period rate
and $1.40 for the sixty-second subsequent period rate. The net charge
for a call lasting one minute and forty seconds is $3.00.
3 The initial and subsequent period rates are measured in seconds. The
charge is $1.50 for the sixty-second initial period and $.50 for the
thirty-second subsequent period rate. The net charge for a call lasting
one minute and forty seconds is $2.50.
The schedule name and rate step number create an indexing key into the call
duration table. When charge information is required, the index key is used
to obtain the length of the initial and subsequent periods as well as the
charges associated with the index.
The charges can be in cents, dollars, pounds, or other currency as long as the
monetary system is based on decimal and arithmetic values.
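The three scenarios above can be reproduced with a sketch of the call duration table lookup. Charges are kept in cents so the arithmetic stays decimal; the schedule name and rate step values are illustrative.

```python
import math

# Hypothetical call-duration table keyed by (schedule name, rate step).
# Each entry: (initial period s, initial charge, subsequent period s,
# subsequent charge), with charges in cents, matching the three scenarios.
RATE_TABLE = {
    ("SCHED_A", 1): (60, 200, 60, 55),    # scenario 1: per-minute periods
    ("SCHED_A", 2): (60, 160, 60, 140),   # scenario 2: 60-second blocks
    ("SCHED_A", 3): (60, 150, 30, 50),    # scenario 3: 30-second subsequent periods
}

def call_charge(schedule, rate_step, duration_s):
    """The initial rate is always charged; each subsequent period is
    charged at its start, so a partial period is billed in full."""
    init_len, init_chg, sub_len, sub_chg = RATE_TABLE[(schedule, rate_step)]
    if duration_s <= init_len:
        return init_chg
    return init_chg + math.ceil((duration_s - init_len) / sub_len) * sub_chg

# A call lasting one minute and forty seconds (100 s) under each scenario:
for step in (1, 2, 3):
    print(call_charge("SCHED_A", step, 100))   # 255, 300, 250 cents
```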
Time zones: Time zone variations are needed when an office served by the
ITOPS switch is in a different time zone from the switch itself. In these
cases the time in the originating office is used to calculate charges.
The table specifying time zone information provides, in minutes, the time
difference between the local time and the time of the called party. The
default value is zero. Even when the called party is paying for the call, the
time used here is the time of the originating party.
Holidays: The charge calculator defines the year’s holidays that receive
special rating, as well as specifying the type of treatment applied to calls
made during the holidays.
Taxes: Taxes are applied to a call after the basic charge (initial and
subsequent), class charges, and any surcharges are calculated.
Federal, state/provincial, and municipal taxes can be applied singly or in
combinations. For example, the state/provincial tax and the municipal tax
may be combined as the second tax and applied to the charges after the first
tax.
The operating company can specify two ways for taxes to be applied:
• tax rates for taxes based on the entire call charge
• tax rates for fixed rates based on the initial period (charge-per-minute or
charge-per-second) rate of the call
Tax types or methods are either fixed or rate. Fixed tax types are flat tax
charges applied to a call. Rate tax types are based on the charge of the
call. In addition to specifying the taxing method, the datafill can be entered
to allow either prediscount or postdiscount charge taxes to be applied.
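A sketch of the two taxing methods and the prediscount/postdiscount option follows, with illustrative amounts in cents; the function and parameter names are assumptions.

```python
def apply_tax(charge, discount, tax_kind, tax_value, prediscount=False):
    """Total due after discount and one tax.

    tax_kind "RATE": tax_value is a percentage of the charge.
    tax_kind "FIXED": tax_value is a flat amount in cents.
    prediscount: tax the charge before the discount is subtracted.
    """
    base = charge if prediscount else charge - discount
    if tax_kind == "RATE":
        tax = round(base * tax_value / 100)
    else:  # FIXED
        tax = tax_value
    return charge - discount + tax
```

For example, a 1000-cent charge with a 100-cent discount and a 10 percent postdiscount rate tax comes to 990 cents; the same tax applied prediscount comes to 1000 cents.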
Rounding factors: Rounding factors, like taxes, are applied to a call after
the basic charge (initial and subsequent), class charges, and any surcharges
have been calculated.
The operating company can specify how rounding will be applied to charges
in any of the following ways:
• Rounding for coin lines is applied to charges before any tax is applied.
Rounding to the smallest coin available for use in the coin phone will
occur, depending on the rounding factor.
• Rounding is applied to each tax before adding to charges.
• Rounding is applied to total charges after tax.
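Rounding to the smallest coin can be sketched as follows; the coin denomination and the round-up direction are assumptions, since the actual behaviour depends on the datafilled rounding factor.

```python
import math

def round_to_coin(charge_cents, coin_cents=25):
    """Round a coin-line charge up to the smallest usable coin.

    coin_cents is the assumed smallest coin accepted by the coin phone.
    """
    return math.ceil(charge_cents / coin_cents) * coin_cents
```

With a 25-cent smallest coin, a 260-cent charge rounds to 275 cents, while a charge already on a coin boundary is unchanged.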
Rate-step calculation
The rate-step calculation feature allows the operating company to calculate
and define rates for each office. Figure 9–5 on page 9–34 shows the various
rates the ITOPS rating system can calculate. The home switch in Figure 9–5
is in country WW and handles areas 1 and 2. This allows the operating
company to create a different set of rates from each office in country WW to
any other office in country WW. Also, from each office in country WW, the
operating company may set up a different set of rates for each country (in
this case XX, YY, and ZZ).
A caller from office A calling office B can have a different rate from one
calling from office C to office B, or office D to B, or even office B to A.
This is done by defining a schedule set for each office (area code office
code). A rate schedule is a set of charges for the type of call being made. A
schedule set consists of multiple rate schedules applicable to a chargeable
call.
Figure 9–5
Rating areas
[Figure not reproduced: the home switch in country WW serves areas 1 and 2; countries XX, YY, and ZZ are foreign rating areas.]
Finding a rate step: Once the schedule sets and schedules are defined,
the index into the specific schedule must be provided to get the charges for
the call. This index is found by mapping the calling schedule set and the
called number to a specific schedule and rate step. A rate step is a value
used in a standard algorithm that calculates the final charges for all rateable
calls and is used as an index into a schedule. There are three types of called
numbers: national, foreign, and local.
National rating: This type of rating applies when the called number is a
national toll call. National rate step calculation allows the definition of a
schedule and a rate step based on calling schedule set name and the called
area and office code. There is also an indicator of what type of rate
calculation is to be done to find the rate step. Only a single rate area is
defined.
Foreign rating: Foreign rating is used when calling a foreign country.
This type of rating allows the definition of one set of charges from each
schedule set name to each country (or country and city).
Local rating: Local rating is used when the called number is within the
same local serving area. This type of rating provides a schedule and a rate
step for each calling schedule set name.
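The three lookups can be sketched as mappings from the calling schedule set name and called-number class to a schedule and rate step; all names and values here are illustrative datafill.

```python
# Illustrative schedule-set datafill for each rating type.
NATIONAL_RATING = {("SETA", "area2-office1"): ("SCHED_NAT", 5)}
FOREIGN_RATING  = {("SETA", "XX"): ("SCHED_FOR", 12)}
LOCAL_RATING    = {"SETA": ("SCHED_LOC", 1)}

def find_rate_step(schedule_set, called_class, called_key=None):
    """Map the calling schedule set and called number to (schedule, rate step)."""
    if called_class == "LOCAL":
        # local rating: one schedule and rate step per schedule set name
        return LOCAL_RATING[schedule_set]
    if called_class == "NATIONAL":
        # national rating: keyed by called area and office code
        return NATIONAL_RATING[(schedule_set, called_key)]
    # foreign rating: keyed by country (or country and city)
    return FOREIGN_RATING[(schedule_set, called_key)]
```

The resulting schedule and rate step then index the rating tables to obtain the charges for the call.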
Manual entry of a rate step
When rating table entries do not exist for a called number, the operator can
enter a rate step manually. The ITOPS operator contacts the rate and route
operator for the rate step calculation or consults a list of rates for entry into
the system.
ITOPS rating test program
The ITOPS rating test program (IRATE) feature allows the operating
company to verify its rating system datafill. IRATE may be used on its own;
however, it is designed to be used with the mass table control feature
(described below). IRATE uses live data from the rating system data tables
but is completely independent of call processing and ITOPS positions.
IRATE allows the operating company to accomplish the following tasks:
• enter relevant details of a test call from the command interpreter level
• request rate step, charge calculation, or both for the test call with the
results appearing at the terminal
[State diagram: mass table control states Initial, Duplicated, and Switched, with the transition commands Duplicate, Scrap, Enable, and Swap]
Initial state: Active tables may be edited, and any changes to the datafill
have immediate implications in the rating of calls. The inactive tables are
empty.
Duplicated state: Two commands can cause the system to enter the
duplicated state. One command causes the inactive tables to start out empty,
and the other command causes the inactive tables to start out with a copy of
the active data.
All tables can be edited. Table control error checks ensure that the active and
inactive data form consistent sets.
Switched state: The system enters the switched state only after the swap
state is entered. In the switched state, the active tables contain new data, and
the inactive tables contain the old data.
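The mass table control states described above (initial, duplicated, switched) can be sketched as a small transition table. The command names and transition targets below are inferred from the state descriptions and are assumptions for illustration, not documented DMS command semantics; in particular, the two hypothetical commands that enter the duplicated state correspond to starting the inactive tables empty or as a copy of the active data.

```python
# Sketch of the mass table control state machine.  Command names with
# underscores are hypothetical stand-ins; the real command set and
# transition behavior are defined by the DMS table control software.
TRANSITIONS = {
    ("initial", "duplicate_empty"): "duplicated",  # inactive tables start empty
    ("initial", "duplicate_copy"): "duplicated",   # inactive tables copy active data
    ("duplicated", "scrap"): "initial",            # discard the inactive data
    ("duplicated", "swap"): "switched",            # new data becomes active
    ("switched", "swap"): "duplicated",            # swap back to the old data
}

def apply_command(state, command):
    """Return the next state, or raise if the command is not valid here."""
    try:
        return TRANSITIONS[(state, command)]
    except KeyError:
        raise ValueError(f"command {command!r} not valid in state {state!r}")
```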
Charge modifications
ITOPS allows the operator to make charge adjustments for subscribers. The
operator can make monetary or time adjustments, delete the entire charge for
a call, and mark the call for later adjustment.
When the subscriber is unable to complete a direct dialed (DD) call, the
operator can assist the subscriber and still charge the DD rate by using the
dial rate function. This function may be used to assist handicapped
subscribers or to redial calls affected by bad transmission, and is applied
at the operating company’s discretion.
ITOPS call details billing record
ITOPS uses the international centralized automatic message accounting
(ICAMA) and the international call recording (ICR) formats as guidelines
for its call details billing record. This record is divided into a standard call
details billing record and six extension records.
A standard call details billing record is created for each call arriving at an
operator position. Extension records are included for certain types of
operator-handled calls, and they contain additional information needed to
bill the call.
Table 9–2 describes the items found in the standard call details billing
record.
Table 9–2
Standard call details billing record
Item Description
ITOPS originating type How the call was originally presented to the operator
Service feature code The station class of the calling and called parties
Start/answer time The time the call was answered, or the time an unanswered call arrived at the operator position
Information digits one through six Various events that occur during a call, such as the operator entering the called number, an ANI failure, a call failure after successful call setup, a trouble report, a charge adjustment, billing at the direct dialed rate, a call transfer, or the use of toll break-in to verify the calling number
Team number The team number of the operator who last handled the call
Outgoing trunk identification The outgoing trunk group number representing the position; if no outgoing trunk is used, a default value is used
Incoming trunk identification The incoming trunk group number representing the position; if no incoming trunk is used, a default value is used
End
Certain ITOPS billing records have extension records that provide additional
information for call billing. Some calls can have more than one type of
extension record. ITOPS extension records are required for the types of
calls shown in table 9–3.
Table 9–3
ITOPS extension records
Item Description
Special billing Used for calls billed to a number other than the calling number, such as a third party or calling card number
Time and charge quotations Required for time and charge quotations
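The record structure described above, a standard record per operator-handled call with zero or more extension records appended, can be sketched as follows. The field names and types are illustrative only; the actual record layout follows the ICAMA and ICR formats, not this sketch.

```python
# Hypothetical sketch of a standard call details billing record with
# optional extension records (tables 9-2 and 9-3).  Field names are
# illustrative, not the actual ICAMA/ICR record layout.
from dataclasses import dataclass, field

@dataclass
class ExtensionRecord:
    kind: str    # e.g. "special billing", "time and charge quotations"
    data: dict   # additional information needed to bill the call

@dataclass
class CallDetailsRecord:
    originating_type: str        # how the call was presented to the operator
    service_feature_code: str    # station class of calling and called parties
    start_answer_time: str
    information_digits: list     # up to six call-event indicators
    team_number: int
    outgoing_trunk: int = 0      # default value when no outgoing trunk is used
    incoming_trunk: int = 0      # default value when no incoming trunk is used
    extensions: list = field(default_factory=list)  # zero or more extension records
```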
Figure 9–7
ITOPS hardware architecture
[Diagram: in the switching office, the central control complex and network connect through an MTM (MTR, digital modem, three-port conference circuit) and an IDTC (incoming two-way trunks for the calling party, outgoing trunk for the called party, PCM30 channels); a channel bank links 4-wire analog voice and 4-wire FSK analog data circuits to each ITOPS position and to the TTY, administration, and HOBIC devices in the ITOPS office]
Notes:
One three-port conference circuit is allotted for each ITOPS position.
One digital modem is allotted for each ITOPS position and TTY.
The channel bank may be located in either the switching or the ITOPS office.
Transmission facilities
Digital facilities provide communication paths between the ITOPS positions
and the DMS switch. The digital facilities provide a connection between
the 4-wire analog voice and 4-wire analog data circuits of each operator position.
For complete details on the above positions, refer to the ITOPS Operator
Guide, 297-2181-300, and the ITOPS Force Management Guide,
297-2181-310.
ITOPS administrative equipment
ITOPS administration has the following responsibilities:
• controls the number of calls reaching a group of operators
• balances the work load among the traffic offices
• receives information to determine short and long-range staffing needs
Table 9–4
Teletypewriters associated with a HOBIC
Teletypewriter Description
Autoquote (AQ) TTY A receive-only TTY at the hotel that provides the AQ
service.
Record (REC) TTY A receive-only TTY located at the HOBIC. This TTY
receives a duplicate copy of messages sent to the
AQ and VQ TTYs. It serves as a backup if any AQ
or VQ TTY malfunctions.
Table 9–5
Equipment provisioning for single-traffic and multitraffic offices
End
Maintenance
This chapter summarizes the maintenance facilities provided on the
DMS-100 International switch. While the majority of the information
provided in this chapter applies to all DMS-100 International applications,
specific maintenance capabilities may vary depending on the market in
which the system is deployed, the application of the system, and the types of
optional hardware and software provisioned.
Maintenance capabilities and procedures for the DMS-100 Family are
documented extensively in Northern Telecom Practices (NTPs). Detailed
descriptive information, subdivided by maintenance subsystem (for
example, lines maintenance or trunks maintenance) is provided in the
following documents:
• Input/Output Devices Maintenance Guide, 297-1001-590
• Networks Maintenance Guide, 297-1001-591
• Peripheral Modules Maintenance Guide, 297-1001-592
• External Devices Maintenance Guide, 297-1001-593
• Lines Maintenance Guide, 297-1001-594
• Trunks Maintenance Guide, 297-1001-595
• DMS SuperNode and DMS SuperNode SE Computing Module
Maintenance Guide, 297-5001-548
• DMS SuperNode and DMS SuperNode SE Message Switch Maintenance
Guide, 297-5001-549
• International Traffic Operator Position System (ITOPS) Maintenance
Guide, 297-2181-524
• Remote Line Concentrating Module/Outside Plant Module Maintenance
Guide, 297-2701-520
Additional maintenance documents can be located by referring to the chapter
“Documentation” in this book, or by consulting the following documents:
• Index to Maintenance Procedures, 297-1001-500, describes all
procedural maintenance documents available for DMS-100 Family
switching systems, including alarm clearing, trouble locating, card
replacement, and routine maintenance procedures.
MAP provisioning
MAPs can be located on site or connected via modem from a remote site. The
number of MAPs provisioned on a particular system is specified by the
operating company, and depends upon the size and application of the office,
and the types of MAP capabilities required. A minimum of two MAPs are
provisioned on a system. Where required, a larger number of MAPs can be
provisioned, allowing the various input/output functions required to
maintain and administer the system to be divided among different personnel
and performed concurrently.
The maximum number of MAPs which can be provisioned on a system is
limited by the number of input/output controller (IOC) device ports
available. This capacity, however, is not a practical constraint; any
normal configuration has termination capacity for over 100 MAPs.
Printers are used in conjunction with the MAP for data entry and retrieval,
including customer data modifications, network management and
operational measurement (OM) reports, and log reports.
A dedicated MAP, assigned to position zero in data table TERMDEV, is
identically equipped in all systems. This MAP is commonly referred to as
the operator position. MAP assignments are recorded in protected data store
memory and preserved on office image tapes to ensure these assignments are
not lost in the event of system restarts.
MAP components
The MAP consists of the following components:
• visual display unit (VDU)
• keyboard
• voice communication module
• position furniture
• external test equipment jacks
• printer (or teletype)
• data set (modem) for remote MAP
Position furniture
Figure 10–2
MAP display layout
[Diagram: the display is divided into a command menu display area and a work area]
trunks, 101 test lines, local talk lines and communications lines terminate at
the voice communication module. Jack-ended trunks are hard wired directly
to the jacks located on the position furniture. Each trunk is associated with
two jacks, one for the transmit side and one for the receive side.
Refer to page 10–28 for additional information on test lines.
Printer
A printer can be provisioned with the MAP to provide a permanent copy of
output reports or of data stored on disk, tape, or alternate MAPs. The VDU
and printer accept and display both uppercase and lowercase characters.
Multiple printers can be provisioned on the system.
Teletypewriters
Teletypewriters (TTYs) can be used in DMS-100 Family systems for data
entry and retrieval, and can be located on site or at a remote location. TTYs
are used to implement trunk and line work orders and network management
controls, initiate diagnostics, and receive log messages, OM reports, and
network management reports. TTY reports include report number, time,
report trouble, and related data.
Furniture
The optional MAP furniture is a modular system providing table surfaces
which are positioned for either standing or sitting use. The furniture,
consisting of tables and shelf units, is assembled in various configurations to
provide work space and documentation storage for administrative or
maintenance functions, and to mount the VDU, jack field, and
communications module.
MAP interface to the DMS-100 Family system
Each MAP is connected to the system via a device controller card located in
the input/output controller (IOC) shelf, which resides in the input/output
equipment (IOE) frame. Up to four MAPs can be connected to a single
device controller.
MAP access and security features
The DMS-100 Family system offers the following capabilities on an
optional basis:
• automatic dial-back
• command screening
• password control
• access control
• audit trail
• automatic logout of dial-up lines
Automatic dial-back
With automatic dial-back, the user dials and logs in with the prescribed user
ID, password, and terminal number. This number is an automatic dial-back
ID which may be a valid telephone number of up to ten digits or a
predefined index number. In either case, the automatic dial-back identifiers
are listed in data table DIALBACK. After the number is input by the user,
the line is disconnected and the table is referenced to validate the terminal
number. If valid, automatic dial-back is made to the telephone number
assigned to the terminal ID. If the login attempt is invalid, it is recorded in
an audit file.
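The validation flow described above can be sketched briefly. Table DIALBACK is named in the text; its layout here, and the terminal IDs and telephone numbers shown, are assumptions for illustration.

```python
# Sketch of automatic dial-back validation.  The DIALBACK table layout
# and the entries below are illustrative assumptions.
DIALBACK = {
    "T101": "6135550199",   # terminal ID -> dial-back telephone number
    "7": "6135550123",      # predefined index number
}

def dialback_number(terminal_id, audit_log):
    """After login and disconnect, look up the number to dial back.

    Returns the telephone number assigned to the terminal ID, or None
    if the ID is invalid, in which case the attempt is recorded in the
    audit file."""
    number = DIALBACK.get(terminal_id)
    if number is None:
        audit_log.append(f"invalid dial-back login: terminal {terminal_id}")
    return number
```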
Command screening
The command screening feature allows the system to do extensive screening
of user commands prior to execution. Command screening can be applied to
any user, terminal, or both. DMS-100 Family system users and terminals are
assigned one or more privilege command classes from 0–31, depending on the
functions of the user or terminal. To ensure system security, all commands
initially default to an operating company-defined class until privilege
classes are assigned.
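A minimal sketch of the screening check follows. Representing each set of privilege classes (0–31) as a 32-bit mask, and requiring both the user and the terminal to hold one of the command's classes, are implementation assumptions for illustration, not documented DMS behavior.

```python
# Sketch of command screening with privilege classes 0-31.  The bitmask
# representation and the both-must-match rule are assumptions.

def class_mask(classes):
    """Fold privilege class numbers (0-31) into a 32-bit mask."""
    mask = 0
    for c in classes:
        if not 0 <= c <= 31:
            raise ValueError(f"privilege class {c} out of range 0-31")
        mask |= 1 << c
    return mask

def command_permitted(user_classes, terminal_classes, command_classes):
    """Permit a command only if both the user and the terminal hold at
    least one of the command's privilege classes."""
    cmd = class_mask(command_classes)
    return bool(class_mask(user_classes) & cmd) and \
           bool(class_mask(terminal_classes) & cmd)
```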
Password control
The optional password control feature disables all automatic logon
procedures for the DMS-100 Family system and secures the system against
unauthorized users. Users and passwords must be identified to the system
before login. No user can display or change the password of another.
Passwords are encoded and scrambled; however, the decoding algorithm does
not reside in the system. The operating company selects the
parameters for the number of characters in each password and determines
the effective time interval before a password must be changed by the user.
Password expiration warnings and a predesignated number of logon attempts
before permanent lockout from the system are included in this feature. As
an added security measure, the system prompts the user for the current
password when a password change is attempted.
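The aging and lockout checks described above can be sketched as follows. The interval, warning window, and attempt limit are operating company-settable parameters; the values and function shape here are illustrative assumptions only.

```python
# Sketch of password aging and logon lockout.  All parameter values
# are hypothetical; the real values are operating company datafill.
MAX_PASSWORD_AGE_DAYS = 30   # effective interval before a change is forced
WARN_BEFORE_DAYS = 5         # expiration warning window
MAX_FAILED_ATTEMPTS = 3      # logon attempts before permanent lockout

def login_status(password_age_days, failed_attempts):
    """Classify a login attempt against the aging and lockout rules."""
    if failed_attempts >= MAX_FAILED_ATTEMPTS:
        return "locked out"
    if password_age_days >= MAX_PASSWORD_AGE_DAYS:
        return "password change required"
    if password_age_days >= MAX_PASSWORD_AGE_DAYS - WARN_BEFORE_DAYS:
        return "expiration warning"
    return "ok"
```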
Access control
This feature allows control of user access to a DMS-100 Family system by
controlling login access to consoles. Consoles can selectively enable or
disable logins. These consoles can be automatically disabled on login
failure when a time limit is set for logon sequence completion. A time limit
specifying the maximum time an enabled console may be left unused can
also be set, and various access and security related events are logged. The
login sequence is made more secure by erasing the password display from
the VDU screen immediately after entry.
Access to customer data tables is controlled by the privilege class assigned
to each table. When a user attempts to access a table, the user’s privilege
class is compared to the privilege class of the table, and access is granted
only if the classes match.
The system can store up to 1000 security reports, which can only be
accessed via authorized operating company personnel. In addition, the
operating company can specify alarm levels to flag these reports.
Automatic logout of dial-up lines
Automatic logout of dial-up lines allows users to be automatically logged
out. The connection is dropped when a facility open condition is detected.
Optionally, the terminal may be disconnected.
Figure 10–3
MTC level MAP display example
The alarm banner is displayed at all sublevels of the MTC subsystem, and is
updated continuously. In cases where multiple alarm conditions exist in the
same functional area (for example, two CM alarms), the most severe alarm
is displayed. When the most severe fault is cleared, the next most severe
alarm is displayed. When all alarms are cleared, the status field returns to a
dot, indicating no faults within the subsystem. Additional information on
the DMS alarm system is provided on page 10–18.
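The banner behavior described above, showing only the most severe alarm per functional area and returning a cleared field to a dot, can be sketched in a few lines. The severity ordering is taken from the alarm classes named in this chapter; the function itself is an illustration, not MAP software.

```python
# Sketch of one alarm banner status field.  An empty alarm list means
# no faults in the subsystem, displayed as a dot.
SEVERITY = {"critical": 3, "major": 2, "minor": 1}

def banner_field(active_alarms):
    """Return the status field for one functional area (e.g. CM, Net),
    given the list of alarm classes currently active there."""
    if not active_alarms:
        return "."
    return max(active_alarms, key=SEVERITY.__getitem__)
```

When the most severe fault is cleared from the list, the next call simply yields the next most severe alarm, matching the banner behavior.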
The abbreviated maintenance subsystem names used in the MTC level alarm
banner correspond to the following functional areas of the switch:
• NT40-based systems:
— CC central control
— CMC central message controller
— IOD input/output devices
— Net network
— PM peripheral modules
— CCS common channel signaling
— Lns lines
— Trks trunks
— Ext external devices
• SuperNode-based systems:
— CM computing module
— MS message switch
— IOD input/output devices
— Net network
— PM peripheral modules
— CCS common channel signaling
— Lns lines
— Trks trunks
— Ext external alarms
— Appl applications
Detailed procedural maintenance manuals are provided by Northern
Telecom to clear all alarms which can possibly appear in the alarm banner.
MTC level MAP commands
The MTC level command set (shown along the left side of figure 10–3)
provides command access to major architectural areas within the switching
system; for example, command 12, “NET”, accesses the MAP maintenance
sublevel for the switching network subsystem. The MTC level command set
or “menu” also provides access to specific maintenance functions; for
example, command 5 “Bert” accesses the bit error rate test utility.
Two types of commands are available at the MAP MTC level and its sublevels:
menu commands, which appear on the display, and non-menu commands, which do
not. In both cases, online help is available to explain the usage and
syntax of each command. The non-menu commands available at the MTC level,
or any of its sublevels, can be determined by listing the software directory
associated with that level.
Full details of all available MAP commands, functions, and system
responses are provided in the following documents:
• Nonmenu Commands Reference Manual, 297-1001-820
• Menu Commands Reference Manual, 297-1001-821
MTC sublevels
To facilitate the isolation of faults in the DMS-100 Family system, the MAP
has a telescoping feature. Telescoping is the following of a branching
process to determine the smallest replaceable unit which should be changed
to restore system status to normal. The status data at any telescoping level
being displayed is continuously updated to reflect the current status without
the need to request an update. Supplementary data, within a particular level
or a lower level, is requested for displays by entering the appropriate
commands.
Each maintenance (MTC) sublevel provides a command and status display
increment to the base MTC level, consisting of more detailed status
information pertaining to its associated function, and a specific command set
tailored to maintenance aspects of the subsystem. For example, figure 10–4
shows an example of the MAP display which results by telescoping from the
MTC level to the NET sublevel.
Figure 10–4
Network level display example
an audible alarm and the appearance of the alarm status indication, *C* for
critical and M for major. Audible alarms can be silenced through the
command SIL input at the keyboard. A blank space below a system status
fault indication denotes a minor alarm. Dead-system alarms cannot be
silenced by software command, only by operation of the audible alarm reset
switch. Additional information on the DMS alarm system is provided on
page 10–18.
Isolation of faults
To initiate tests from the MAP, the appropriate subsystem menus have to be
accessed. Tests on trunks, analog or digital, and service circuits are initiated
after the appropriate circuits have been posted at the trunk test position
(TTP) level. Tests on lines are performed after the line has been posted at
the line test position (LTP) level. Tests on the central control complex and
the networks are initiated after requesting the diagnostics be run on the
appropriate equipment.
When any unit is removed from service, the LOG system is updated to
reflect this change. For example, if a trunk is made busy and removed from
service, the trunk log subsystem will be updated and an output message
provided at the TTP or printer assigned to the LOG system.
Log report system
The MAP units are software driven from the DMS-100 Family system.
Included in the software is a log report (LOG) system, which provides
information storage and retrieval for system-related messages or reports.
During operation, reports are generated by the DMS-100 Family system
software and sent to the LOG system.
In the LOG system, these reports are categorized into a number of report
classes according to the subsystem which generated the report. The LOG
system stores these reports in several LOG buffers. There is one LOG
buffer for each subsystem. Each subsystem has a number of basic report
types associated with it. Each output report is associated with a given report
type and consists of a fixed format and variable data.
Optionally, on the basis of information stored in the LOG system, reports
are routed to MAPs or other output devices associated with specified classes
of users. These output messages carry a sequence number and can be printed
in order of alarm severity: critical, major, minor.
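The structure described above, one LOG buffer per subsystem, with each report carrying a type, a sequence number, and fixed and variable parts, can be sketched as follows. The buffer depth and the report dictionary layout are assumptions for illustration, not the actual LOG system implementation.

```python
# Sketch of per-subsystem LOG buffers.  Buffer depth and report layout
# are illustrative assumptions.
from collections import defaultdict, deque

BUFFER_SIZE = 100  # per-subsystem buffer depth (assumed)
buffers = defaultdict(lambda: deque(maxlen=BUFFER_SIZE))
_seq = 0

def log_report(subsystem, report_type, variable_data):
    """Store a report in its subsystem's LOG buffer and return it.

    Each report pairs a fixed format (identified by its report type)
    with variable data, and carries a sequence number."""
    global _seq
    _seq += 1
    report = {"seq": _seq, "subsystem": subsystem,
              "type": report_type, "data": variable_data}
    buffers[subsystem].append(report)   # oldest reports age out of the buffer
    return report
```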
For more information on the DMS-100 Family log report system, refer to the
Log Report Reference Manual, 297-1001-840.
same time, enables the Activity Switch Timer (AST). If, for some reason,
the TRAP is not served, the AST will fire and an activity switch will occur.
The sanity timer guards against software or micro-program loop errors not
detected by the TRAP system.
Calls encountering trouble
The objective of the DMS-100 Family system recovery system is to
minimize service interruption. Calls encountering trouble are handled in
two ways:
• The system attempts to complete calls satisfactorily without making the
customer aware that a trouble exists. For example, when problems occur
during path setup through the switching network, a retry is attempted
without the customer’s knowledge.
• Calls that cannot be completed due to internal or external trouble
conditions are routed to operating company-defined treatments, such as
tones (for example, 120 IPM) or recorded announcements.
Safeguards
The DMS-100 Family system provides safeguards, such as daily scheduled CPU
changeovers, routine diagnostics, per-call tests, and other background
tests, through which trouble conditions within the service recovery and
protection facilities can be detected before service is affected.
Further safeguards are provided to prevent an inadvertent degradation of
service due to improper data commands or actions. Any data changes
implemented in the system are checked for validity and format and tested.
In addition, the system provides input terminal restrictions which can restrict
data changes to be implemented through the appropriate terminals.
Through MAP commands, units of the system may be removed from service. For
duplicated units (that is, CPUs, CMCs, and networks), if one of the
duplicated units is already busied out or inactive and a command is input to
busy the mate unit or make it inactive, the system provides a warning
message and takes no action to busy the unit. For unduplicated units, such
as the MTM, the system provides warning messages if circuits are service
busy and requires command validation before the unit is removed from
service.
Safeguards in the DMS-100 Family system ensure that when a number of trunks
in a trunk group are busied out, either by the system or by maintenance
personnel, alarm conditions are indicated on the MAP. Alarm conditions are
classified as critical, major, or minor, depending on the percentage of
trunks busied in the trunk group. Once a percentage threshold is exceeded,
the appropriate alarm is triggered for the affected trunk group. The
threshold percentages, from 0–100%, are defined by the operating company in
the Common Language Location Identifier (CLLI) maintenance tables for all
trunk groups.
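The threshold classification described above can be sketched briefly. The percentage values below are illustrative only; the actual thresholds are operating company datafill in the CLLI maintenance tables.

```python
# Sketch of trunk group alarm thresholds.  The percentages are
# hypothetical; real thresholds come from CLLI maintenance tables.
THRESHOLDS = [          # (minimum % of trunks busied, alarm class)
    (75, "critical"),
    (50, "major"),
    (25, "minor"),
]

def trunk_group_alarm(busied, total):
    """Return the alarm class for a trunk group, or None if the busied
    percentage is below every threshold."""
    percent = 100.0 * busied / total
    for threshold, alarm in THRESHOLDS:
        if percent >= threshold:
            return alarm
    return None
```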
software includes the ability to take babbling nodes out of service as they
occur.
Babbler line handler The babbling line handler software package can
diagnose the line card and loop of a line that has been reported as a babbling
line in the peripheral. The line card will be removed from service, and
placed in the cut state to prevent system degradation. An audit will revisit
each babbling line and return it to service if babbling has ceased.
System alarms
The DMS-100 Family alarm system performs three functions:
• generates audible and visual indications as a result of trouble conditions
detected within the system, or in associated equipment
• provides a hardware-generated dead system alarm to alert maintenance
personnel when the DMS-100 Family system software is not functioning
• provides manual facilities for alarm conditions, including:
— silencing audible alarms
— transferring alarms to a remote monitoring location
— grouping the alarms of the DMS-100 Family system with other
systems
— transferring alarms from an unattended trunk or line test center to the
main system.
There are normally two NT3X82 dead system alarm packs in the office. These
packs are wired so that the dead system alarm is generated only when both
packs lose communication with the CPU. In that case, the DS relays on both
packs, normally held operated by software, are released, generating an
audible and visual critical alarm that indicates a dead system. The signals
are usually cross-connected at the distribution frame (DF) and can be picked
up by engineered telemetry equipment.
In addition, there is an NT3X84 alarm sending circuit pack in the office.
When the DS relays in the NT3X82 packs are released, a ground is extended to
the DS relay on the alarm-sending pack, causing it to operate. This action
starts the dead system tone generator (480 Hz) and activates a logic circuit
that selects and seizes an idle E&M trunk from two regular operator trunks.
These trunks and the pack interface are cross-connected at the DF. The
480-Hz tone is then connected to the seized trunk. If neither trunk is
idle, Trunk 1 is automatically seized, the tone is connected to it, and the
M lead is pulsed at 60 IPM. Switches in the trunk selector logic circuit
can be set to accommodate either Type I or Type II E&M signaling.
Alarm system operation
Alarm and control inputs detected by the alarm system hardware are
interfaced with the alarm system software through scan points. These scan
points detect inputs generated by hard wired alarm contacts in the DMS-100
Family system hardware, by the operation of manual control switches, or by
the alarm circuits of miscellaneous equipment associated with the DMS-100
Family system.
Alarm and control inputs generated by the software of the other nine
maintenance subsystems that make up the DMS-100 Family maintenance system
are designated system inputs. Each maintenance subsystem controls its own
alarm status display in the system status area of the VDU at the MAP. The
alarm system software checks for changes in the alarm status of these
subsystems every five seconds and updates the audible and visual alarm
indications accordingly. This software responds to alarm or control inputs
by operating or releasing the appropriate signal distribution (SD) points in
the alarm system hardware to initiate or terminate the corresponding audible
and visual alarm or control function.
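The five-second scan described above can be sketched as a diff between successive alarm-status snapshots that operates or releases SD points. The callback interface below is a stand-in for the real SD hardware, and the snapshot representation is an assumption for illustration.

```python
# Sketch of one pass of the periodic alarm scan.  previous/current map
# subsystem name -> alarm class (or None); operate_sd/release_sd stand
# in for driving real signal distributor points.

def scan_alarms(previous, current, operate_sd, release_sd):
    """Diff alarm status between scans and drive SD points accordingly.

    Returns the current snapshot, to be used as `previous` on the next
    five-second scan."""
    for subsystem, status in current.items():
        before = previous.get(subsystem)
        if status and not before:
            operate_sd(subsystem)      # alarm appeared: start indications
        elif before and not status:
            release_sd(subsystem)      # alarm cleared: stop indications
    return dict(current)
```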
Alarm classes
Trouble conditions are assigned to an alarm class by the maintenance
subsystem which detects the trouble. The alarm classes, in decreasing order
of severity, are critical, major, and minor. Critical alarms are reported within
two seconds of occurrence. Major alarms are reported within 30 seconds of
occurrence. Minor alarm trouble notification is reported within two minutes
of the occurrence of the trouble. All alarms displayed at the MAP are
entered in the LOG system and output reports routed to the appropriate
printers.
Audible alarms
Audible alarms are activated either on-site or transferred by the remote
alarm transfer circuit to a remote monitoring location. On-site audible
signaling devices are mounted on two different audible alarm panels, the
main audible alarm panel and an optional audible alarm panel, for the Trunk
Test Center (TTC) and/or the Line Test Center (LTC).
Up to two main audible alarm panels may be supplied per office, depending on
office size and configuration; for example, an additional audible alarm
panel may be provided for the power plant if it is located on another floor.
The audible alarm panels may be either wall or column mounted.
The main audible alarm panel provides the signaling devices. The TTC
audible alarm panel also provides a TTC chime which signals an incoming
call on a 101 communication test line.
Table 10–1
ACD alarm conditions
Designation Device Function
Critical Bell Loud bell that can signal a dead system alarm, a critical system equipment failure, a critical power plant failure, or a critical system or power plant failure in another system if alarm grouping is in effect
non-locking audible alarm reset switch at the ACD. The dead system
audible alarm cannot be silenced by software command.
Trunk test center alarms
The Trunk Test Center (TTC) is the area of a DMS-100 Family system
where Trunk Test Positions (TTPs) are located. The alarm system software
detects the trunk group alarm conditions, and initiates the corresponding
audible and visual alarms.
There are two alarm conditions specifically associated with the TTC: an
incoming call on a 101 communication test line and a trunk group
out-of-service alarm. Audible signaling devices for these alarms are
mounted on the TTC audible alarm panel. An incoming 101 call sounds the
TTC chime. The trunk group alarm is a system alarm detected by the
software of the trunk maintenance subsystem. Depending on the number of
trunks out of service (for example, system busy and manual busy) within one
trunk group, a minor, major, or critical system alarm is generated.
TTC night alarm transfer
When the TTC area is unattended, the night alarm transfer circuit provides
the facility to transfer the TTC alarms to the main office alarm system. With
night alarm transfer in effect, incoming calls to 101 test lines at the TTP
generate a minor system alarm instead of sounding the TTC chime.
Line test center alarms
The Line Test Center (LTC) is the area of a DMS-100 Family system where
line test positions are located. Alarms will be generated when the quantity
of line failures reaches or exceeds the operating company-defined threshold
value.
For alarm purposes, two fault types are used: diagnostic failures and
permanent signal/partial dial conditions. The threshold for each type may
be set independently, and the alarm class displayed is the more severe of
the two. A diagnostic failure indicates that a line has failed a manual
short or extended diagnostic test, an automatic short or extended diagnostic
test, or a system-invoked extended diagnostic test.
Remote alarm transfer
The remote alarm transfer circuit allows the transfer of major and minor
alarm indications, for the system and its power plant, to a remote
alarm-receiving circuit in a distant office. This permits the local office to be
unattended. Since the remote alarm-receiving circuit monitors only major
and minor alarm classes, critical alarms are transferred to the remote
location as majors. If alarm grouping is in effect, alarms originating on
preceding and succeeding floors are also transferred to the remote location.
The transfer circuit communicates with the remote alarm-receiving circuit in
the distant office over two leads using either –130V or +130V supplies for
signaling.
Alarm grouping
The alarm grouping circuit provides the capability to group alarms
originating in the DMS-100 Family system with those originating on
preceding (lower) and succeeding (upper) floors which have compatible
office alarm systems.
Alarm circuit power detection
In the DMS-100 Family system, loss of alarm circuitry power is detected.
The office alarm system contains alarm circuits to indicate a failure in the
supply that powers the alarm circuits themselves. A failure in the fuses that
supply the alarm circuits causes an audible alarm to be sounded.
External alarms
The external alarm maintenance subsystem monitors the alarm circuits of
any equipment outside the DMS-100 Family system, such as door alarms,
fire alarms, and other miscellaneous building alarms, through the operation
of DMS scan points (SC).
Operating company defined alarm capabilities
The DMS-100 Family system can accommodate up to 7,168 operating
company-assignable scan points. The signal distributor (SD) points are
limited only by the number of MTMs that can be added to a DMS office.
Software facilities permit very flexible operating company-designed logic
arrangements between the SCs and SDs: detection of an SC change of state
can cause multiple SD operations, and multiple SCs can control a single SD.
The operating company-designed logic is defined via the MAP using the alarm
scan table and the alarm signal distributor table.
Alarm sending and checking system
The Alarm Sending and Checking System (ASCS) is a software feature
compatible with the DMS-100 Family system office alarm system. When
activated, the feature provides a facility for sending an indication of an
alarm condition occurring in the DMS office over a trunk to a remote
operator position. If the receiving operator is at a Traffic Operator Position
System (TOPS) position, an ANI-8 information digit, spilled over the trunk,
is translated to an alarm indication on the video screen. The operator uses
the TOPS facility to access further information. When the trunk termination
is a regular operator position, upon answer of the call a tone is generated
across the trunk indicating that the originating office has encountered an
alarm condition. The operator then dials a DN allocated to the ASCS
checking facility, and hears a specific tone from the office indicating the
severity of the alarm condition. The tones and directory numbers used for
The SRT can be accessed from the station to be tested by dialing one of
the following number formats. Concurrent eligibility of the ten- and
thirteen-digit formats is also supported:
• Seven-digit number: a three-digit access code followed by the last
four digits of the DN for the station being tested, or a two-digit
access code followed by the last five digits of the DN for the station
being tested.
• Ten-digit number: a three-digit access code followed by the
seven-digit DN of the station being tested.
• Thirteen-digit number: a three-digit access code followed by the NPA
and the seven-digit DN of the station being tested.
• Concurrent eligibility of 10- and 13-digit numbers: if this option is
adopted, there is a delay of a few seconds when no NPA is dialed, so
that the LCD can be assured that no additional digits are forthcoming.
The seven-digit number can be made available concurrently with the
ten-digit, thirteen-digit, or concurrent-eligibility options.
A dial tone returned to the station signals that the SRT is ready for use; a
reorder tone signals that the SRT is not ready for use. The SRT times out
after 3.5 minutes if the test sequence is not acted on. (The Electronic
Business Set (EBS) line is automatically returned to the idle (IDL) state
seven minutes after the start of the test.) After accessing the SRT, the tester
conducts tests that are applicable according to the Lines Maintenance Guide,
297-1001-594.
• Headset (HSET) returns the trunk in the control position on the TTP to
the communication module for voice access.
Capability is provided to direct a T101 call to a designated TTP, for
example, a 101XX call will be routed to the TTP designated XX. If the TTP
is busy, the call will be offered to any free TTP. Tests to a distant office
code 101 test line are directed from the MAP position. Refer to the Trunks
Maintenance Guide, 297-1001-595 for additional information.
T102 test line
The purpose of the T102 test line, also known as a milliwatt test line, is to
apply a 1004 Hz test tone towards the originating office to facilitate simple
one-way manual or automatic transmission loss measurements. The test
tone is applied for a timed duration of nine seconds during which an
off-hook answer signal is provided. An on-hook signal, followed by a quiet
termination, is then transmitted to the originating end until the connection is
released by the originating end.
Seven-digit access is available for the T102 test line, as well as T102 access.
The DMS Family system directs manual and automatic tests to the milliwatt
test line.
Detailed information on test line operation is provided in the Trunks
Maintenance Guide, 297-1001-595.
ATME test line
The automatic transmission measuring equipment (ATME) test line provides
the following test capabilities on the DMS-100 International switch:
• loss measurement
• frequency deviation
• noise
• supervision
• busy flash
The PCM level meter (PLM) card contains circuitry for measuring the level
and frequency of PCM samples representing analog voice frequencies or
tones. The PLM card contains a level meter and a frequency meter which
are activated by control signals from the MTM at the appropriate time for
the channel under test.
The following tests also can be performed using automatic line testing
(ALT):
• short diagnostic
• extended diagnostic
• line insulation
• on-hook balance network
The system determines that hazardous voltage exists on a line when any of
the following conditions apply:
• the voltage at TIP (A-lead) equals or exceeds +75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at TIP (A-lead) is equal to or less than –75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at RING (B-lead) equals or exceeds +75.0 volts ± 4.0 volts
with respect to ground at any time during a 30 ms interval
• the voltage at RING (B-lead) is equal to or less than –75.0 volts ± 4.0
volts with respect to ground at any time during a 30 ms interval
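The four hazardous-voltage conditions above reduce to a symmetric threshold test on both leads. The sketch below models the ±4.0 V tolerance at its most sensitive extreme (flagging at 71 V); how the real detector behaves inside the tolerance band is not specified by the text, so that choice is an assumption.

```python
# Hedged sketch of the hazardous-voltage rule: a line is flagged when
# TIP (A-lead) or RING (B-lead) reaches +/-75.0 V (+/-4.0 V tolerance)
# with respect to ground at any time during a 30 ms interval.
THRESHOLD_V = 75.0
TOLERANCE_V = 4.0

def hazardous(tip_samples_v, ring_samples_v):
    """True if any sample in the 30 ms window crosses the threshold.

    Assumption: the worst-case (most sensitive) limit of the tolerance
    band is used, i.e. 75.0 - 4.0 = 71.0 V.
    """
    limit = THRESHOLD_V - TOLERANCE_V
    return any(abs(v) >= limit for v in tip_samples_v + ring_samples_v)

print(hazardous([0.0, -48.0], [0.0, 90.0]))  # RING exceeds +75 V
print(hazardous([-48.0], [0.0]))             # normal battery voltage
```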
Transmission
The DMS-100 International switching system is specified for application as
a Class 5 or higher office, and is capable of meeting the transmission
plan objectives of all normal configurations.
Transmission level
The DMS-100 International switch operates at 0 dBr when configured in a
digital network, unless otherwise specified by the operating company.
Table 11–1
Digital standard level points
Bit position
Word no. 1 2 3 4 5 6 7 8
1 0 0 1 1 0 1 0 0
2 0 0 1 0 0 0 0 1
3 0 0 1 0 0 0 0 1
4 0 0 1 1 0 1 0 0
5 1 0 1 1 0 1 0 0
6 1 0 1 0 0 0 0 1
7 1 0 1 0 0 0 0 1
8 1 0 1 1 0 1 0 0
The long-term variation due to temperature and voltage changes may change
the loss by not more than +/– 0.5 dB.
Analog trunk equipment transmission levels
Analog test access is provided via a maintenance trunk module (MTM).
Test trunk circuits in DMS-100 International systems incorporate either a
fixed or variable pad in order to introduce the loss level required for
application in the transmission plan. The variable loss in the range 0 to
15.75 dB in 0.25 dB steps adjustable by software or manual switches allows
for compensation of cable losses and/or the requirements of the transmission
plan objectives. An example is the 101 communications test trunk
(NT5X30).
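The 0 to 15.75 dB range in 0.25 dB steps maps naturally onto a 64-step setting, which can be sketched as below. The encoding (a simple step count) is an illustrative assumption, not the documented NT5X30 control format.

```python
# Sketch: mapping a requested pad loss onto the 0-15.75 dB range in
# 0.25 dB steps (64 settings). The integer-setting encoding is a
# hypothetical illustration only.
STEP_DB = 0.25
MAX_SETTING = 63  # 63 * 0.25 dB = 15.75 dB

def pad_setting(loss_db):
    """Nearest pad setting for a requested loss, clamped to the range."""
    setting = round(loss_db / STEP_DB)
    return max(0, min(MAX_SETTING, setting))

print(pad_setting(3.1), pad_setting(3.1) * STEP_DB)  # 12 -> 3.0 dB
```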
Characteristics of subscriber line interfaces (NT6X93, NT6X94)
Input return loss
The return loss of the impedance presented to the 2–wire port against the
reference impedance will be equal to or greater than the limits given in
figure 11–1.
Figure 11–1
Input return loss limit
FW-31173
(Graph: input return loss (dB) versus frequency (Hz); axis markings at
14 and 18 dB and at 300, 500, 2000, and 3400 Hz.)
(Figure: terminal balance return loss test arrangement showing exchange
send (Ti) and receive (To) test points, transmit and receive pads Po and
Pi, a weighting network, the terminal balance return loss limit test
network, a uniform-spectrum signal generator and sinusoidal oscillator
on the standard send side, and a root mean square or quasi root mean
square detector on the standard receive side.)
Note: This equipment may also be all-digital, with equivalent functions. The “standard send side”
and “standard receive side” are then not present.
The TBRL limits given assume the transmit and receive pads, Po and Pi,
are set to 0 dB.
Figure 11–3
Terminal balance return loss limit
FW-31175
(Graph: terminal balance return loss (dB) versus frequency (Hz); axis
markings at 15 and 20 dB and at 300, 500, 2500, and 3400 Hz.)
Table 11–2
Longitudinal conversion loss
Frequency (hertz) Minimum LCL (90% of interfaces)
204 58 dB
504 58 dB
1004 58 dB
3004 53 dB
Loss tolerances
The 1004 Hz loss for 95% of L-L connections will be within +/– 0.5 dB of
the value measured as the average loss.
Attenuation/frequency distortion
The attenuation/frequency distortion of 95% of L-L connections, relative to
1004 Hz, will meet the limits shown in figure 11–4.
Figure 11–4
Attenuation distortion limits
FW-31176
(Graph: attenuation distortion (dB) relative to 1004 Hz versus
frequency; axis markings at 3.0, 2.0, 1.5, 0.6, 0, and –0.6 dB.)
Table 11–3
Variation of gain with input level
Input level Gain variation
+3 to –40 dBm0 +/– 0.5 dB
–40 to –50 dBm0 +/– 1.0 dB
–50 to –55 dBm0 +/– 3.0 dB
where:
LTN is the total weighted noise level for the local digital exchange
PTN is the total weighted noise power for the local digital exchange
For example, if the output relative level (LO) of the exchange is –7 dBr, the
total weighted noise power (PTN) = 263 pWp, corresponding to
LTN = –66 dBmp.
The above noise values for digital local exchanges are based on an output
relative level of –7 dBr. In cases where essentially higher output relative
levels are used, for example, for intra-office calls, the noise contribution of
the PCM process will increase proportionally.
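The worked example above (PTN = 263 pWp corresponding to LTN ≈ –66 dBmp) follows from the standard conversion between picowatts and dBm: since 1 mW = 10^9 pW, L(dBmp) = 10 log10(P in pWp) – 90. A minimal check:

```python
import math

# Conversion between psophometrically weighted noise power in pWp and
# its level in dBmp: 1 mW = 1e9 pW, so L = 10*log10(P_pWp) - 90.
def pwp_to_dbmp(p_pwp):
    return 10 * math.log10(p_pwp) - 90

print(round(pwp_to_dbmp(263), 1))  # about -65.8, quoted as -66 dBmp
```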
Crosstalk
The crosstalk between individual channels will be such that with a
sine-wave signal of frequency 1100 Hz and a level of 0 dBm0 applied to the
input port, the crosstalk level received in any other channel will not exceed
–67 dBm0.
For measurement, an auxiliary signal (a low level activating signal) should
be injected into the disturbed channel; a pseudo-noise signal as specified in
CCITT Rec. O.131 at a level of –60 to –50 dBm0 is suitable. It is necessary
to use a frequency selective detector when performing this measurement.
Total distortion, including quantizing distortion
With a sine-wave signal at a nominal frequency of 820 Hz or preferably
1020 Hz (see Recommendation O.132) applied to the input port of a
connection, the ratio of signal-to-total distortion power measured with the
proper noise weighting (see Table 4 of Recommendation G.223), should
exceed the value given by the formula:
S/N(total) = LS + LO – 10 log10 (10^((LS + LO – S/N)/10) + 10^(LTN/10))
where:
S/N(total) is the modified signal-to-total distortion ratio for digital local
exchanges
LS is the signal level of the measuring signal in dBm0
LO is the output relative level of the local exchange in dBr
S/N is the signal-to-total distortion ratio for PCM-channel
translating equipment in Recommendation G.712
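The formula can be exercised numerically as below. The exact printed form of the equation is garbled in this document, so the expression in the sketch is a reconstruction from the surrounding definitions (it reduces to the G.712 ratio S/N when the exchange noise LTN is negligible); consult CCITT Q.551 if the exact form matters.

```python
import math

# Numeric sketch of the modified signal-to-total-distortion ratio for
# digital local exchanges, as reconstructed from the definitions above:
#   S/N(total) = LS + LO
#                - 10*log10(10**((LS + LO - S/N)/10) + 10**(LTN/10))
def sn_total(ls_dbm0, lo_dbr, sn_db, ltn_dbmp):
    return (ls_dbm0 + lo_dbr
            - 10 * math.log10(10 ** ((ls_dbm0 + lo_dbr - sn_db) / 10)
                              + 10 ** (ltn_dbmp / 10)))

# With negligible exchange noise (very low LTN) the result approaches
# the channel-equipment ratio S/N itself:
print(round(sn_total(0, 0, 40, -200), 3))
```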
(Graph: signal-to-total distortion ratio limit (dB), 0 to 40 dB, versus
input level, –60 to 0 dBm0.)
Impulse noise
Noise counts will not exceed 5 counts in 5 minutes at a threshold level of
–35 dBm0.
Inter-modulation distortion
When measured in accordance with the 4-tone test method at a composite
input level of –13 dBm0, 95% of test connections will equal or exceed:
Table 11–4
Inter-modulation distortion
dB below received power
R2 R3
line-to-line 44 45
The 4-tone test method involves the transmission of four equal level tones
(856, 863, 1374 & 1385 Hz) at the given composite level.
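Because the four tones are at equal levels and add on a power basis, each individual tone sits 10 log10(4) (about 6 dB) below the composite level, i.e. near –19 dBm0 for the –13 dBm0 composite quoted above:

```python
import math

# Per-tone level of n equal, non-coherent tones whose power sum is the
# composite level: each tone is 10*log10(n) dB below the composite.
def per_tone_level(composite_dbm0, n_tones=4):
    return composite_dbm0 - 10 * math.log10(n_tones)

print(round(per_tone_level(-13.0), 1))  # about -19.0 dBm0
```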
Table 11–5
Group delay distortion with frequency
Frequency (Hz) Group delay distortion (µs)
500 600
600 400
1000 200
2500 200
2600 300
2800 400
3000 600
The IDTC transmission features are fully compliant with CCITT Red Book
Recs. G.703, G.704, and G.705, and include the following.
Table 11–6
PCM-30 carrier interface characteristics
Characteristic Specification
Rate (input) 2.048 Mb/s +/– 50 PPM
(output) 2.048 Mb/s, phase locked to the office clock
Structure 32 channels per frame, 8 bits per channel
Signal structure compatible bits numbered 1 to 8, bit 1 transmitted first
data format
Idle code 1 1 0 1 0 1 0 1 (CCITT Rec. G.174, paragraph 16.1)
0 1 0 1 0 1 0 0 (CCITT Rec. Q.503, paragraph 3.6)
Channels channels numbered 0 to 31, channel 0 transmitted first
Format 256 bits per frame, 240 speech bits (30 X 8)
Signaling and framing contained in channels 0 and 16
Code high density bipolar 3 (HDB3) code
PCM-30 receiver input see CCITT Rec. G.703, section 6.3
signal
PCM-30 transmitter – see CCITT Rec. G.703, section 6.2
– nominal peak voltage of a mark:
coaxial 75 ohm resistive 2.37V, symmetrical pair 120
ohm resistive 3.0V
– positive/negative unbalance: 5%
– half amplitude width: 244 +/– 25 ns
– unbalance in width of positive/negative pulse: 5%
– peak voltage of a space (no pulse): coaxial 75 ohm
resistive 0 +/– 0.237V, symmetrical pair 120 ohm resistive
0 +/– 0.3V
Line impedance 75 ohm resistive (coaxial), 120 ohm balanced (symmetrical pair)
Transformer interwinding isolation 500 Vdc minimum
Channel and port mapping IDTC – distributed over equipped network ports
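The frame format in table 11–6 (32 channels of 8 bits, channel 0 then bit 1 transmitted first) can be checked with a little bit bookkeeping; this is a sketch of the arithmetic only, not switch software:

```python
# Bit bookkeeping for the PCM-30 frame of table 11-6: 32 channels x
# 8 bits = 256 bits per frame; channels numbered 0-31, bits 1-8,
# channel 0 and bit 1 transmitted first.
BITS_PER_CHANNEL = 8
CHANNELS = 32

def frame_offset(channel, bit):
    """0-based transmit position of (channel 0-31, bit 1-8) in a frame."""
    assert 0 <= channel < CHANNELS and 1 <= bit <= BITS_PER_CHANNEL
    return channel * BITS_PER_CHANNEL + (bit - 1)

print(frame_offset(0, 1), frame_offset(31, 8))  # 0 255
# 30 speech channels x 8 bits = the 240 speech bits quoted, the other
# two channels carrying framing and signaling:
print((CHANNELS - 2) * BITS_PER_CHANNEL)  # 240
```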
Table 11–7
Network interface characteristics
Characteristic Specification
Rate 2.56 Mb/s
Structure 32 channels per frame, 10 bits per channel
Data format bits numbered 9 to 0, bit 9 transmitted first
Channels channels numbered 0 to 31, channel 0 transmitted first
Ports IDTC – ports numbered 0 to 15, plane 0 and plane 1
Frame frame bit 0 of channel 0
Signaling message channel – port 0, channel 0, bits 9 to 2, bit 9 is
most significant, bit 1 not used
Code bi-phase with frame plus violation once per frame
Receiver sensitivity 0.25V peak-to-peak
Driver output 2.4V +/– 0.1 peak-to-peak
Rise time 50 nanoseconds, minimum
Line impedance 100 ohms
Transformer interwinding isolation 500 Vdc minimum
Clock synchronization
The DMS-100 International switch is synchronized using the Preselected
Alternate Master Slave (PAMS) arrangement. There are three possible
synchronous clock system configurations, as shown in figure 11–6 on
page 11–13.
• Master-Internal office: The free-running oscillator in one plane of the
central messaging element (central message controller on NT40–based
systems, or message switch on SuperNode-based systems) is used as the
network master clock. The associated hardware is duplicated in both
planes of the central messaging element; one clock is active and the
other is inactive. The inactive clock remains in sync with the active
clock to allow an immediate switch of clocking activity if required.
• Master-External office: This office is equipped to synchronize to an
analog reference clock. This analog clock may come either from a
source external to the office or from atomic clocks located in the office.
• Slave office: The central messaging element clocks in this type of office
are slaved to the clock in a master office, or to another slave office above
it in the network hierarchy, by clock signals carried over a dedicated
PCM-30 timing link. The PCM-30 timing links are duplicated for
reliability, and are provisioned in an International Digital Trunk
Controller (IDTC).
Figure 11–6
Synchronous clock system outline of possible configurations
(Figure: a master office, optionally synchronized to an external
reference source, feeds slave offices over DS-1 timing links. Optional
external source frequencies: 10.24 MHz, 10.0 MHz, 5.0 MHz, 2.56 MHz,
2.048 MHz, 1.024 MHz, 1.0 MHz.)
NT40 clocking
DMS-100 International switches based on the NT40 processing platform can
be configured as follows.
Standard synchronizable clock
The clock hardware consists of an NT3X14BA synchronous master clock
counter card and an NT3X15BB synchronous master clock card. These
cards are usually mounted in the CMC shelf, but the NT3X15BB
synchronous master clock card can be located in the Central Processing Unit
(CPU). Each office has two sets of clock circuit cards, one in each CMC
(CPU) to operate as active and standby sources.
The synchronous master clock card receives power from the associated
shelf. This circuit pack contains on-board regulators to produce the stable
voltages required by clock hardware components.
Backplane wiring options on the NT3X15BB synchronous master clock
allow a master office which is operating with an external reference oscillator
to synchronize to signals of 10.24, 10.0, 5.0, 2.56, 2.048, 1.024 MHz, or 1.0
MHz with a sine wave nominally +2.5V and 50 ohm impedance.
Stratum 1 synchronization
The DMS-200 uses the master external office configuration to achieve
Stratum 1 level accuracy, by interfacing with a cesium clock reference
mounted in a Master Reference Frequency Frame (MRFF).
The MRFF has two configurations:
• The NT5X23AA consists of two in-service cesium clocks, each being
independently powered and connected to its respective CMC clock
circuit. A third non in-service cesium reference is provided as a spare.
• The NT5X23AB consists of a two-clock system as described above
without provision for a spare.
Each cesium clock reference is connected to its associated CMC clock by a
maximum of 200 feet (61 m) of 50 ohm coaxial cable and control wiring. The
control wiring allows the DMS-200 to provide indications of trouble on the
MRFF equipment and allows the DMS-200 to operate status lamps (active,
standby and spare) on the MRFF display and alarm circuit packs,
NT5X24AA, which are provided for each reference clock and are located on
the MRFF.
Stratum 2 synchronization
Stratum 2 level synchronization for a DMS-200 is provided by using the
master-external or the slave configuration.
The stratum 2 Synchronizable Clock Master Oscillator, NT3X16AA, is an
ovenized Voltage Controlled Crystal Oscillator (VCXO). The NT3X16AA
is located on a Stratum 2 Oscillator Shelf, NT3X95AA, which is installed in
an NTOX43 Input/Output Equipment (IOE) frame. There are two clock
modules in the shelf, one for each CMC. The NT3X16AA, located in the
IOE frame is connected to its associated CMC Controller, NT3X15DA, by a
maximum of 200 feet (61 m) of 50 ohm coaxial cable and control wiring.
The Stratum 2 hardware mounted in the CMC consists of a Synchronizable
Master Clock Counter (NT3X14BC) and a Stratum 2 Master Oscillator
Controller (NT3X15DA).
Stratum 3 synchronization
Stratum 3 synchronization in the DMS-100 International switch uses the
slave configuration with a clock that meets Stratum 3 performance criteria.
The VCXO for Stratum 3 is located on the NT3X15CA, Stratum 3
synchronizable clock master oscillator, housed on the CMC shelf. The
NT3X14BA, synchronizable master clock counter, is housed in the same
location.
Synchronization loop
The block diagram in figure 11–7 on page 11–16 illustrates half of a
duplicated system. The main components are identified and their locations
shown within the DMS-100 International office. The control loop for the
active clock and its standby loop are identified.
A distributed phase-lock loop is formed by the clock oscillator whose output
is distributed to all hardware modules in the DMS-100 International office, a
digital phase comparator in the IDTC common equipment, the DMS-100
International internal message system, and the control algorithm in the CPU
which controls the clock oscillator frequency through a digital-to-analog
converter.
The incoming master clock signal received over the timing link is compared
in the PCM-30 card with the local clock signal. Firmware in the IDTC
samples the phase differences between the two signals and reports to the
synchronous clock software through the message system of the DMS-100
International switch, the IDTC message processor, the Network Message
Controller (NMC), and the CMC message processor.
In addition to checking for phase differences, the PCM-30 card also detects
and reports slips, bipolar violations, and lost synchronization.
The IDTC collects phase comparison samples at 400 ms intervals. After 32
samples are collected (12.8 seconds), a phase report is sent to the CC. The
phase report contains the 32 samples and information on slips.
In the CC control algorithm, the 32 phase values are normalized and
summed. The resulting average phase sample is integrated with 30–bit
resolution. The integrator output is then added to the average phase sample,
multiplied by a coefficient. The 30-bit word is truncated (to match the
resolution of the digital-to-analog converter in the NT3X15 synchronous
master clock card) and is sent to the NT3X14 controller. The control word
is thus updated after each phase report, every 12.8 seconds.
The D/A converter supplies a DC voltage to control the frequency of the
Voltage-Controlled Crystal Oscillator (VCXO) on the associated circuit
pack.
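The control update described above (average the 32 phase samples, integrate, combine with a proportional term, truncate to the D/A width) has the shape of a simple PI controller. The sketch below illustrates that shape only; the coefficient value and D/A word width are illustrative assumptions, not the documented NT3X14/NT3X15 parameters.

```python
# Hedged sketch of the CC clock-control update performed every 12.8 s.
COEFF = 4        # hypothetical proportional coefficient
DAC_BITS = 12    # hypothetical D/A resolution (< 30-bit accumulator)

def control_word(samples, integrator):
    """One control update from a 32-sample phase report.

    Returns (truncated D/A control word, new integrator value)."""
    avg = sum(samples) // len(samples)   # normalize and sum the samples
    integrator += avg                    # 30-bit-style integration
    word = integrator + COEFF * avg      # integrator + coeff * average
    return word >> (30 - DAC_BITS), integrator  # truncate to D/A width

print(control_word([1 << 20] * 32, 0))
```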
Figure 11–7
Synchronous clock system (simplified block diagram)
(Figure: a DS-1 line card in the DCM/DTC feeds phase samples through
the DCM/DTC processor, network module, and NMC to the CMC message
processor. In the CMC, the clock control loop comprises a phase
detector, the synchronous oscillator (NT3X15) with its D/A converter,
and the controller (NT3X14), closed by the clock control algorithm in
CPU software. The 10.24 MHz clock is regenerated and distributed to the
digital circuits; a standby or active external control loop runs to the
mate CMC.)
The Stratum 2 and standard clocks use an ovenized crystal oscillator while
the Stratum 3 uses a temperature compensated oscillator. The D/A converter
allows the oscillators to be adjusted to a typical accuracy of:
• standard synchronizable clock: 1.0 x 10^–9
• Stratum 2 clock: 1.0 x 10^–11
• Stratum 3 clock: 7.5 x 10^–9
The output of the VCXO assembly is a 10.24 MHz square wave to the
various digital circuits in the system.
The phase detector in the standby control loop comprises two counters
which allow the oscillator frequency to be compared with other sources – the
clock signal from the mate CMC, or an external reference clock when in the
master-external operation configuration. When used with an external
reference source, failure of such a source can be detected by the Central
Control (CC). Clock activity is then switched to use the mate CMC
oscillator as the office clock.
DMS SuperNode clocking
The clock system in the SN and SNSE switches is fully redundant. There
are two clock systems, one in each MS. One clock system is designated the
master and the other the slave. Frequency and phase synchronization is
maintained between the two systems, so that if a master clock system fails,
the slave clock system automatically assumes the master role.
The NT9X53 clock cards produce signals at two frequencies: the system
clock source at a nominal frequency of 10.24 MHz, and the subsystem clock
source at 16.384 MHz. Both signals are divided to 8 kHz for their respective
frame pulses. The frame pulses are synchronous.
The subsystem clock signal drives the MS T-bus, and is distributed to the
CM, ENET, and application processors (AP) using DS512 links. The system
clock signal is distributed to the JNET and the I/O controller (IOC) using
DS30 links.
The NT9X54 provides the electrical interfaces for various clock signals.
The types of clock signals include the following:
• analog external reference signals from atomic or loran-C clocks
• composite clock signals from a timing signal generator (TSG)
• DMS remote clock (Stratum 2 and 2.5)
• mate frame pulse
Stratum levels
A stratum level is a rating given to an oscillator to indicate its holdover
accuracy. The highest level of accuracy is Stratum 1. This rating is reserved
for the most accurate oscillators available, such as atomic and loran.
The following table shows the maximum acceptable drift for each stratum
level. The numbers shown represent how much of a complete cycle the
oscillator drifts on every cycle. When the numbers are inverted, it indicates
how many complete cycles it takes for the oscillator to be out of
synchronization by one full cycle.
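The "inverted number" reading described above can be made concrete: a drift of d cycles per cycle means roughly 1/d cycles must elapse before the oscillator is out of synchronization by one full cycle. At the Stratum 2 adjustment accuracy quoted earlier (1.0 x 10^–11) on the 10.24 MHz office clock:

```python
# Cycles, and elapsed time, for a given per-cycle drift to accumulate
# one full cycle of error on the 10.24 MHz office clock.
def cycles_to_one_cycle_slip(drift):
    return 1.0 / drift

def seconds_to_slip(drift, clock_hz=10.24e6):
    return cycles_to_one_cycle_slip(drift) / clock_hz

print(round(seconds_to_slip(1.0e-11) / 3600, 1))  # roughly 2.7 hours
```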
Synchronization configurations
The MS clock systems can be set up in any of the following configurations.
Master internal In a master internal office, the master clock is free
running (not synchronized to any external reference). Only signaling
transfer points are configured this way.
Master external In a master external office, the master clock is
synchronized to an external reference clock by a PLL. Phase comparison
between the master clock and the reference is performed on the NT9X53
card. In this configuration, the clock source for the network is typically a
Stratum 1.
Slave In a slave office, the master clock is synchronized to an incoming
PCM-30 carrier. Phase comparison between the master clock and the
reference is performed at the IDTC.
clock ensures less than 1 slip in 10 hours after being in the free-run mode for
72 hours.
Frequency capture width
When in a slave mode, a clock must be capable of synchronizing to a master
which is offset from the correct frequency by its maximum or minimum
allowed frequency. The DMS-100 International switch is capable of
synchronizing to a master with the following offsets:
• standard synchronizable clock: ± 7.5 x 10^–7
• stratum 2 clock: ± 1.6 x 10^–8
• stratum 3 clock: ± 4.6 x 10^–6
Compression law
The compression law used in DMS-100 International switches is a linear
approximation to A = 256 law as shown in CCITT G.711 tables.
The characteristics, decision levels and code assignments described in the
following sections include the effect of compression and expansion.
Table 11–8
CODEC decision levels
Level number n Level magnitude Xn
0 0
1 ≤ n ≤ 16 2n–1
17 ≤ n ≤ 32 4 n – 33
33 ≤ n ≤ 48 8 n – 161
49 ≤ n ≤ 64 16 n – 545
65 ≤ n ≤ 80 32 n – 1569
81 ≤ n ≤ 96 64 n – 4129
97 ≤ n ≤ 112 128 n – 10273
113 ≤ n ≤ 128 256 n – 24609
Figure 11–8
Codec transfer characteristics
(Figure: coder transfer characteristic relating the decision levels X1,
X2, X3, X4, ... on the coder input axis to the output values Y1, Y2,
Y3, ...)
Y0 = X0 = 0
Yn = (Xn + Xn+1) / 2, n = 1, 2, ..., 127
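The decision levels of table 11–8 and the midpoint output values of figure 11–8 can be sketched together as below. Note one reconstruction: the final constant in the table is printed as "2409", which appears truncated; 24609 is the value that keeps the step size doubling from segment to segment (levels 31, 95, 223, 479, 991, 2015, 4063, ending at 8159 for n = 128), and is what the sketch assumes.

```python
# Piecewise sketch of the CODEC decision levels (table 11-8) and the
# coder output values (figure 11-8). Last constant taken as 24609; the
# printed "2409" appears truncated (see lead-in).
SEGMENTS = [  # (n_max, coefficient, constant): Xn = coeff * n - const
    (16, 2, 1), (32, 4, 33), (48, 8, 161), (64, 16, 545),
    (80, 32, 1569), (96, 64, 4129), (112, 128, 10273), (128, 256, 24609),
]

def x(n):
    """Decision level magnitude Xn for 0 <= n <= 128."""
    if n == 0:
        return 0
    for n_max, coeff, const in SEGMENTS:
        if n <= n_max:
            return coeff * n - const
    raise ValueError(n)

def y(n):
    """Coder output value Yn = (Xn + Xn+1)/2, n = 1..127."""
    return 0 if n == 0 else (x(n) + x(n + 1)) / 2

print(x(16), x(112), x(128))  # 31 4063 8159
```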
Equipment
This chapter provides general equipment installation and provisioning
information for DMS switching systems. Exact provisioning, installation,
and layout for each system varies depending on the market in which the
system is deployed, the application of the system, and the capabilities
purchased. For more detailed information on hardware and software
functionalities and provisioning rules, consult the following Northern
Telecom Practices (NTPs):
• DMS-100 Family Provisioning Manual, 297-1001-450
• Hardware Description Manual, 297-1001-805
Physical
The DMS-100 Family system hardware is packaged into single- or
double-bay frames equipped with appropriate shelves. Bays are identified
by their primary function. Shelves and drawers are identified by their
specific function.
Table 12–1 identifies the DMS-100 Family frames or bays with their
corresponding shelves and drawers.
Table 12–1
DMS-100 family bays and corresponding shelves/drawers
Bays Shelves/drawers
PS — Program Store
DS — Data Store
CU — Cooling Unit
DCE — Digital Carrier DCM — Digital Carrier Module
Equipment (4 max)
CU — Cooling Unit
IDTE — Digital Trunk Equipment IDTC — Digital Trunk Controller
CU — Cooling Unit
CU — Cooling Unit
LTE — Line Trunk Equipment LTC — Line Trunk Controller (2 max)
CU — Cooling Unit
MEX — Memory Extension DS — Data Store
(Duplicated)
FSP — Frame Supervisory Panel
CU — Cooling Unit
MSS — Maintenance Spare Storage XXX — For storage of spare circuit
packs
NETC — Network Frame Planes NCO — Network Crosspoint Plane 0
Combined (Duplicated networks)
NCI — Network Crosspoint Plane 1
CU — Cooling Unit
PDC — Power Distribution Center FPA — Fuse Panel “A” Feed (5 max)
FP — Filter Panel
CU — Cooling Unit
RCME — Remote Control & RMM — Remote Maintenance Module
Maintenance Equipment
HIE — Host Interface Equipment Shelf
LD — Line Drawer
FP — Filter Panel
RSE — Remote Service Equipment RSM — Remote Service Module
Table 12–2
Shelf heights
Shelf Inches Millimeters
All printed circuit boards measure 12.5 inches (317 mm) high and 10 inches
(254 mm) deep except miscellaneous alarm Printed Circuit Boards (PCBs)
(which are about half the height), first generation line circuit PCBs (which
are 4 inches [102 mm] by 4 inches [102 mm]), and second generation line
circuit PCBs (which are 3 inches [76 mm] by 3.5 inches [89 mm]). All
PCBs with the exception of the line circuit have face plates of suitable
widths. Both single and double sided PCBs are used.
Circuit pack extenders are provided for PCBs that can tolerate the additional
time delays imposed. Extenders can swivel through a 180-degree arc.
Circuit pack extenders are not provided for all units, such as high-speed
logic circuits, that cannot tolerate extra distance.
The alignment guides for installing circuit packs in a shelf or drawer allow
for considerable misalignment while attempting to seat a circuit pack
without causing equipment damage. Wherever possible, quick-fasten
devices (rather than bolts or screws) are used for all replaceable units.
The heads of fastening devices for all mounted units are accessible with a
standard screwdriver.
Equipment frames
A single-bay frame has been designed and used throughout the DMS-100
Family system. In some cases (for example, CCC), two bays are fixed
together to form a double-bay frame.
Frame structure members do not impede normal maintenance access or
removal of frame component modules or subassemblies.
EMI hardware, mounting hardware, cabling hardware, and rear panels are
provided for a given bay configuration and are normally shipped loose. This
equipment is provided in addition to the basic framework.
Equipment frame dimensions
All NT40 based DMS-100 Family equipment frames use an identical
framework assembly with specific dimensions:
Depth: 18 inches (457 mm)
Width: 27 inches (686 mm)
Height: Framework only 84.0 inches (2.13 m)
Overframe Cable Duct 10.0 inches (254 mm)
Cross-Aisle Cable Duct 7.1 inches (180 mm)
Subtotal 101.1 inches or 8′5″ ± 1″ (2.6 m ± 25 mm)
Ladder Type Cable Rack 12.0 inches (304 mm)
(Top is 9′4″ (2.8 m) from floor)
Total 113.1 inches (2.9 m)
Recommended clear ceiling height is 132 inches (11 feet). Minimum clear
ceiling height is 120 inches (10 feet).
The level above the 101.1 inches or 8′5″ (2.6 m) is reserved for the ladder
type cable rack which is normally used for cabling to the Distributing Frame
(DF), transmission systems, and power distribution.
The DMS SuperNode equipment frame has specific dimensions:
Depth: 24 inches (610 mm)
Width: 42 inches (1.1 m)
Height: 72 inches (1.8 m)
Table 12–3
Earthquake bracing requirements
Where in Building Equipment is Installed
ATC Zone Ground Section Mid Section Top Section
5, 6, 7 E M M
3, 4 E E M
1, 2 E* E* E
Note: E — The reinforced base frame is required.
M — The reinforced base frame and mechanical bracing frame are
required.
E* — NT0X25AA Earthquake Framework assembly is recommended.
This cluster may be continued in a line if the gap and cable restrictions are
met.
For all installations, DMS-100 frames should be located at least one foot
from walls, posts, or pipes. Any foreign equipment which is not earthquake
braced should be located away from DMS-100 equipment at a distance at
least the foreign equipment height plus one foot.
Figure 12–1
Typical floor plan for 25 000 line, 5000 trunk DMS-100 office
(Figure: floor plan, roughly 29′3″ deep with 20′0″ bays, showing
equipment lineups (LCE, LGE, PDC, TME, DNI, NETC, MIS), the main
distributing frame, 3′0″ maintenance aisles with a 3″ growth allowance,
and an 8′0″ maintenance area containing the LTP.)
Legend:
CCC Central control complex
DNI Digital network interconnections
DTE Digital trunk equipment
IOE Input/output equipment
LCE Line concentrating equipment
LGE Line group equipment
MEX Memory extension
MSS Maintenance spare storage
PDC Power distribution center
SLC Speech link connecting
TME Trunk module equipment
TTP Trunk test position
Figure 12–2
Typical cable duct and cable rack layout
(Figure: RLCM layout showing the MDF (9′0″), duplicated RLCM bays,
power board, and battery, with cross-aisle cable duct runs and a 3′0″
minimum maintenance aisle.)
Notes:
1 Single line denotes cross aisle cable duct
Legend:
RLCM Remote Line Concentrating Module
PWR BD Power Board (operating company/NTI)
MDF Main Distributing Frame (operating company)
The total length of power cable run between a PDC frame and any remote
line concentrating module frame shall not exceed 41 feet (12.5 m).
Figure 12–4
RLCM typical cable duct and cable rack layout
(Figure: RLCM lineup (frames 02, 01, 00) with power board and battery;
10″ over-frame trough with top at 7′11″ ± 1″, equipment top at
7′1″ ± 1″, and ladder type cable rack at 9′4″.)
Figure 12–5
Typical floor plan for a 1280 line DMS-100 Family RSC office
(Figure: floor plan showing the MDF (9′0″), LCE lineup, power board,
and battery, with 3′0″ minimum maintenance aisles and duct runs V1
and V8.)
Figure 12–6
RSC typical cable duct and cable rack layout
(Figure: cable rack layout above the MDF (9′0″), RME, RCE, and LCE
lineups, power board, and battery; 15″ and 12″ ladder cable racks at
9′4″.)
The OPM cabinet is composed of the main compartment and the end access
compartment.
The main compartment houses the re-packaged RLCM, ac breakers,
rectifiers, batteries, and environmental control system. The compartment is
accessed from the front of the cabinet via a pair of doors. The majority of
the equipment is housed on a pair of swing-out, double-latched, hinged bay
frames. The hinged bays allow access to both the rear of the shelves and to
additional equipment positioned against the back wall of the cabinet.
The end access compartment, physically isolated from the main
compartment, houses protection, termination, and cross-connection
equipment. Access is provided via a single door.
The doors of the cabinet are hinged with recessed lock pins and are provided
with padlocking facilities. For security purposes, cabinet door alarms are
provided.
Each hinged equipment frame contains three shelves of equipment. Four of
the shelves within the swinging bays are occupied by the RLCM equipment
consisting of a dual-shelf LCM, an RMM, and a Host Interface Equipment
(HIE) shelf.
The remaining two shelves consist of an FSP which includes office
repeaters, a Power Control Unit (PCU) for ac power, a rectifier system
consisting of a pair of rectifiers, and a battery control unit. An
Environmental Control Unit (ECU) is located at the bottom of each bay.
The battery strings, ac entrance panel and optional test equipment are
contained in fixed bays located behind the swing-out equipment bays.
Outside plant and power cables enter the cabinet at the left end of the cabinet
base. Outside plant cable and termination is provided on the Service
Protection Center (SPC) located in the end access compartment. The outside
plant cable protection consists of both VF and DS-1 line protection. VF line
protection is provided by carbon protector modules. DS-1 line protection is
provided by gas tube protector modules.
Protection for 640 subscriber VF pairs and termination for 675 pairs is
always provided. Optionally, an additional 1375 pair termination may be
added, which will allow the end access compartment to serve a cross
connection function.
The environmental control system regulates the environmental conditions in
the main cabinet compartment to protect the OPM electronics. The OPM is
designed for operation within an ambient temperature range from +5°C
(41°F) to +40°C (104°F) and a relative humidity range from 5 to 99%.
Figure 12–7
Outside plant module — interior view
(Diagram: bay 0 and bay 1 shelf layouts showing the LCA:0 and LCA:1 shelves, PCU, BCU, FSP, HIE, and RMM shelves, ECU at the bottom of each bay, fuse alarm panel, and reactors 0 and 1 with NTBIX panels.)
Table 12–4
Floor plan data

Sub    Type of frame          Heat dissip.   Abbrev.   DC fuse         Typical   Approx. weight
syst                          (watts/hr)               rating          current   kg      lb
6      Message switching      1050           MS6E      6–20 A, 2–5 A   23 A      227     500
       equipment
7      Message switching      650            MS7E      6–20 A, 2–5 A   15 A      —       —
       equipment
(Entries for the Inverter and Peripheral frames are not reproduced.)
DC power equipment
The DMS-100 Family dc power equipment is furnished by the operating
company. Current drains for the different DMS-100 Family bays and frames
are supplied to the operating company for determining the size of the power
plant and rectifiers on a per-site basis.
Power distribution center
The power distribution center is described in Chapter 13 of this document.
Cable distribution
In DMS-100 Family systems, the overframe cabling duct is provided as part
of the standard equipment frame. Earthquake protection is provided as
described in Equipment frame earthquake resistance on page 12–10.
The cable duct is raised from the framework 4 inches (102 mm) to facilitate
ventilation of the equipment mounted on the framework.
DMS-100 Family cable is routed in a frame modular cable trough as shown
in figure 12–8 on page 12–26. The compartments are designed to provide
functional separation for the various cables, and to minimize
electromagnetic interference between digital, analog, and power circuits. A
cross-aisle duct, whose section is somewhat similar, is available to distribute
cable between aisles.
The cable assigned to the different compartments varies with the equipment
lineup. Figure 12–8 shows the preferred standard. However, if it is
necessary to mix peripheral equipment types within a lineup, some care is
necessary to separate digital and analog cable, or cables for which there is
danger of cross-talk interference. The cable trough is sectioned with metal
dividers separating the digital cables from the analog to reduce such
interference.
The sectional area of the overframe cable duct is designed to accommodate
the worst case cable cross-section for an 18-frame lineup. It is expected that
for normal office layouts the trough capacity will not be limited by the
cross-section area, but by the need to segregate various cable types if
peripheral equipment is mixed within a lineup. Should such situations arise
cable may be rerouted with cross-aisle ducts.
A trough is available to bridge single frame gaps within a lineup without
requiring stanchion or empty framework. Building columns require
cross-aisle ducts to route cable to the continuation of the lineup. Cabling to
the DF, power, and other switching systems, running perpendicular to the
equipment lineups, is distributed overframe via ladder cable racks. No
provision has been made to route alarm and other miscellaneous signals,
which are usually connected directly between the Frame Supervisory Panels
(FSP) in each frame.
Figure 12–8
Cable trough assembly
(Cross-section diagram: an 18” trough at the rear of the framework, 7’–1” ± 1” above the floor and raised 4” from the framework, with 6”, 2.4”, and 3” compartments separating digital, analog, and power cables; inserts shown for CC cabling, digroup, and power.)
Building provisions
Ceiling height
A minimum clear ceiling height of 10 feet (3 m) is required for the
DMS-100 Family switching system. The recommended clear ceiling height
is 11 feet (3.4 m).
Building ceiling supports
Building ceiling supports are not required for DMS-100 Family equipment
frames or intra-switch cabling. Cable racks for cabling to MDF, power, and
other equipment (for example, ladder-type cable racks) are not supported by
the DMS-100 Family frames and require ceiling supports.
Column and cable hole spacing and arrangements
Due to the compactness of the DMS-100 Family systems, column and cable
hole spacing and arrangements are not critical and flexible floor plan layouts
may be engineered to accommodate various arrangements as required.
Frame handling and door openings
Frames in the DMS-100 Family can be handled in a vertical or horizontal
position provided that when they are lying horizontal, they are positioned on
their side, not on the front or back. Each frame must be kept fully packed, to
avoid damage, until it reaches the office floor location.
Hoisting Tool No. ITA-9938 can be used to hoist equipment frames in a
horizontal position.
Crated frames are 7 feet 5 inches (2.26 m) high, 2 feet 5 inches (.74 m) deep,
and up to 5 feet 6 inches (1.68 m) wide. To accommodate hoisting fixtures,
the recommended equipment entrance door opening into the building should
at least be 5 feet (1.5 m) wide by 10 feet (3 m) high. Within the building,
the uncrated frames on shipping dollies require an equipment entrance door
opening of 3 feet (.91 m) wide by 8 feet (2.4 m) high.
Air conditioning
The size of the building area housing the DMS-100 Family system, the
building insulating properties, the climatic condition, the interior airflow and
the size of the system itself all influence the air conditioning requirements.
The heat to be dissipated in DMS-100 Family systems per square foot
averaged over the equipment room floor area is 40 watts. This is for
DMS-100 Family switching equipment only. Other factors, such as the
number of maintenance personnel, have to be taken into consideration to
calculate the overall heat to be dissipated.
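As a rough sketch of how the 40 W per square foot equipment figure can be combined with other loads, the estimate below treats personnel and lighting allowances as illustrative assumptions; the function names and the 100 W-per-person value are not from this specification.

```python
def switch_room_heat_watts(floor_area_sqft, watts_per_sqft=40.0):
    """Heat dissipated by the DMS-100 Family switching equipment alone,
    using the 40 W per square foot average quoted above."""
    return floor_area_sqft * watts_per_sqft

def total_heat_watts(floor_area_sqft, personnel=0, lighting_watts=0.0,
                     watts_per_person=100.0):
    """Add illustrative allowances for maintenance staff and lighting on
    top of the equipment figure (the per-person value is an assumption)."""
    return (switch_room_heat_watts(floor_area_sqft)
            + personnel * watts_per_person
            + lighting_watts)

# Example: a 2000 sq ft equipment room
print(switch_room_heat_watts(2000))  # prints 80000.0
```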
Environment
Ambient temperature and humidity
The DMS equipment, including remotes, is designed for operation within the
ambient temperature and relative humidity ranges shown in table 12–5:
Table 12–5
Ambient temperature and humidity
(Columns: Normal (Note 1) and Short Term (Note 2); table values not reproduced.)
PCB design
Northern Telecom Procurement Specification 25001 on Printed Circuit
Boards restricts all PCB laminated material to NEMA LI-1 Grade FR-4
epoxy glass material.
Each printed circuit board is marked to indicate that the material passed the
UL flammability test (UL 94 V-0).
Structural material
All structural material, including the PCBs and backplanes, has an oxygen
index of 28% or greater and a 94 V-0 rating as determined by the
Underwriters Laboratories standard 94 test for flammability of plastic materials.
Component selection
All components are selected to meet the appropriate IEC needle flame test.
Wire and cable
Wire and cable used throughout the DMS-100 Family of products meet
ASTM D2633 requirements for thermoplastic insulation and jacketed wire
and cable.
Transportation and storage environments
Transportation
DMS-100 Family equipment packaged for transportation is capable of
enduring the rigors of shipping via truck, rail, sea, or air. The environmental
conditions during transportation must not exceed specifications:
• Ambient Temperature: –40°F (–40°C) to 160°F (71°C)
• Humidity: 10% to 95%; max water vapor pressure not to exceed
25 mmHg
• Vibration: up to 3.5 g at 5 Hz to 500 Hz
• Shock: equivalent to a 6-inch (152-mm) drop for a 1000-lb (454-kg)
equipped bay
Storage
DMS-100 Family equipment may be stored packed in a sheltered
environment under specific environmental conditions:
• Ambient Temperature: –40°F (–40°C) to 160°F (71°C)
• Humidity: 10% to 95%; max water vapor pressure not to exceed
25 mmHg.
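The transport and storage limits above can be sketched as a single compliance check; the function below is illustrative only and uses the –40°F to 160°F, 10–95% humidity, and 25 mmHg vapor-pressure figures quoted in the bullets.

```python
def within_storage_limits(temp_f, rel_humidity_pct, vapor_pressure_mmhg):
    """True when the conditions fall inside the transport/storage
    envelope quoted above for packaged DMS-100 Family equipment."""
    return (-40.0 <= temp_f <= 160.0
            and 10.0 <= rel_humidity_pct <= 95.0
            and vapor_pressure_mmhg <= 25.0)
```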
Grounding
The use of ac coupling for signal leads, together with input-output isolation
of ac to dc converters, essentially eliminates the need for a ground window.
In the DMS-100 Family, the battery return, signal ground, and frame ground
are separate entities although they are brought to the same dc potential. See
Cable distribution on page 12–25 for physical grounding arrangements.
AC coupling
AC coupling is employed between the central message controller and
network, and between network and peripheral equipment. This ac coupling
refers only to the signal leads. Network and peripheral frames need not be
insulated from the floor unless they form part of the CPU lineup.
DC coupling
The CPU, associated memory, and the central message controller are all dc
coupled. These units are considered for grounding purposes as a single
group entity. All frames in a lineup that contains this entity must be
insulated from foreign grounds.
Frame ground
A frame ground bus is bonded to each frame in the lineup at the top. The
bus terminates at the PDC frame ground which is also the battery return
ground.
The PDC frame ground buses of all PDCs are bonded together and connected
at one point to the central office ground for that floor for a dedicated power
plant, or to the battery ground reference point for a shared power plant.
Framework isolation
The equipment lineup that contains the CCC module must have all
framework isolated from any ground other than framework ground. This
isolation of frames prevents possible malfunction of memory shelves due to
electromagnetic fields which may be generated if an unusually large fault
current was allowed to flow through the CCC framework.
DMS-100 Family frames not in the CCC module lineup need not be isolated
from other grounds. Isolation may be specified by the operating company
during the engineering period, if local arrangements so require.
Signal ground
Advantage is taken of the printed circuit backplane construction to isolate
the signal ground on a per-shelf basis. A single ground connection is taken
from the signal ground of each shelf and all of them connected to a single
bonding point of the frame ground stop at that frame. Exceptions to this rule
are in the central control complex and the MSB, where all signal grounds
from the shelves are connected to a single point on the frame ground bus for
all the frames in the central control complex.
Illumination
Lighting levels are provided and maintained in the DMS-100 Family System
so as to afford satisfactory and safe working conditions at all times. To this
Electromagnetic interference
Electromagnetic emissions
The system, with available options, conforms to the emission requirements
of the Federal Communications Commission (Part 15, Subpart J). Frames are
available which are in compliance with FCC regulations for location on
customer premises.
Radiated susceptibility
The system shall not exhibit any malfunctions or be degraded beyond its
specified tolerances when subjected to electric field strengths of 5 V/m or
less, root-mean-square value corresponding to the peak of the envelope, if
modulated, over the frequency range 10 kHz to 10 GHz.
Miscellaneous
Hardware
In addition to the DMS-100 Family equipment bays, other equipment may
be supplied as requested for testing purposes. This equipment is shown in
Table 12–6 on page 12–31.
Table 12–6
Additional equipment
Description   Code
(Table entries not reproduced.)
Floor maintenance
Floor maintenance procedures as outlined on pages 1 and 2 of AT&T
practice BSP 770-140-010, Issue 1, August 1977, are suitable in the
DMS-100 environment.
Craftsperson interfaces
Input/output system
The I/O system consists of this equipment:
• Magnetic Tape Devices (MTD)
• Disk Drive Units (DDU)
• Visual Display Units (VDU)
• Teleprinters (PRT) Receive and/or Send
• Data Buffer Units
• Data Sets (Modems)
• Billing Media Converter (BMC)
• Distributed Processing Peripheral (DPP).
A minimum of five I/O ports, excluding MTDs, must be provided for all
DMS-100 Family systems:
• 1 VDU port for the Maintenance and Administration Position (MAP)
• 2 VDU, PRT ports for the Technical Assistance Service (TAS)
• 2 VDU ports for the Portable Maintenance (AXU).
I/O devices, as shown in table 12–7, are supported by the DMS-100 software
for logging maintenance and machine activity.
Table 12–7
I/O devices
(Table entries not reproduced.)
Table 12–8
Teleprinters
Make/model   Baud rate
(Table entries not reproduced.)
Table 12–9
Device requirements

Cabling distance (feet)   Interface type   Modem required
up to 50                  RS232            No
up to 1200                Current Loop     No
over 50                   RS232            Yes
Each terminal, regardless of its type, can appear either in a local or remote
location. Modems may be arranged for either dial-up or dedicated
operation.
Table 12–10 shows the modem recommended for use with DMS-100 Family
systems:
Table 12–10
Modem requirements
108/113
RIXON C202/C212   300/120
Power requirements
This chapter summarizes power and grounding specifications for DMS-100
Family switching systems. Specific system configurations vary depending
upon the market in which the system is deployed, the application and
size of the system, and the equipment provisioned. Additional detail on power
and grounding is provided in the following documents:
• Power Distribution and Grounding Guide, 297-1001-156
• DMS-100 Family Provisioning Manual, 297-1001-450.
Operating voltage
Power in the DMS SuperNode and the DMS–100 Family switching systems
is distributed at a nominal potential of –48 V dc. The operating voltage
ranges cover two conditions:
• Normal conditions: –42.75 V dc to –55.8 V dc
Note: Normal conditions occur under battery float (high end) and
maximum voltage drop (low end) operation.
The actual operating voltages are measured at the input to the Power
Distribution Center (PDC), and include the loop voltage drops between the
office battery terminals and the PDC. Except for line current and some relay
equipment, all input power in the DMS-100 Family system is further
processed by converters and inverters to different dc and ac voltages.
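A minimal sketch of the normal-condition window above, as it would be checked against a reading taken at the PDC input; the constant and function names are illustrative, not from the specification.

```python
# Normal operating window for the nominal -48 V dc distribution,
# measured at the PDC input: -42.75 V (battery float, high end) to
# -55.8 V (maximum voltage drop, low end).
NORMAL_HIGH = -42.75  # least negative voltage allowed
NORMAL_LOW = -55.8    # most negative voltage allowed

def in_normal_range(v_dc):
    """True when a PDC input reading falls inside the normal window."""
    return NORMAL_LOW <= v_dc <= NORMAL_HIGH
```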
Battery noise limits
The DMS-100 Family system may share power plants with other systems
provided that the requirements of power plant sharing are met and that the
office voltage range and battery noise level are within specified limits:
• Voltage range –44.75 V dc and –55.8 V dc
• The noise present at the battery terminals of any dc source to which the
DMS-100 switch will be connected, shall not exceed 55 dBrnc or
300 mV rms in any 3 kHz band between 10 kHz and 20 MHz.
• Step voltage changes on the dc source (measured at PDC) should not
exceed 5 V dc in magnitude at a rate of change of 1 V/ms. Faster rates
of change can be tolerated if the step voltage magnitude is less than 5 V.
The product of magnitude and rate of change should not exceed
5 V2/ms., and the voltage limits (operating voltage) must not be
exceeded.
End cell switching is permissible for use with the DMS-100 Family systems
subject to the above step voltage changes.
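The step-change rule above combines three limits: a 5-V cap on step magnitude, a 1-V/ms baseline rate, and, for faster ramps, a 5 V²/ms cap on the product of magnitude and rate. The checker below is a sketch of that logic with assumed names (it does not verify the separate operating-voltage window).

```python
def step_change_ok(step_v, rate_v_per_ms):
    """Illustrative check of a dc-source step measured at the PDC."""
    if step_v > 5.0:           # step magnitude may never exceed 5 V
        return False
    if rate_v_per_ms <= 1.0:   # at up to 1 V/ms, any step under 5 V passes
        return True
    # Faster rates are tolerated only while magnitude x rate <= 5 V^2/ms
    return step_v * rate_v_per_ms <= 5.0
```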
Power consumptions
The current drain and power consumption are normally engineered by
Northern Telecom on a per-job basis.
Table 13–1 provides the maximum power consumption on a per-frame basis,
assuming that each frame is fully equipped and all equipment is operating at
maximum capacity.
Table 13–1
Maximum power consumption per frame
Frame   Code   Max watt
(Table entries not reproduced.)
Power alarms
DMS-100 Family power plant failures/alarms are detected through alarm
scan points in the associated maintenance trunk module. Power plant alarms
are grouped together (the grouping is definable by the operating company) and
may be assigned optionally by the operating company into any of these
categories:
• Critical alarm from system power plant
• Major alarm from system power plant
• Minor alarm from system power plant.
The DMS-100 Family alarm control and display panel incorporates visual
lamps to indicate four types of failures:
• Critical power plant failure
• Major power plant failure
• Minor power plant failure
• Power distribution center Alarm Battery Supply (ABS) failure.
Figure 13–1
Typical PDC arrangement
(Diagram: –48 V office battery and office return in the power room feeding PDC-00 through PDC-02 over A and B feeders; fuses rated 600 A maximum each; return cables omitted for clarity.)
Figure 13–2
Framework arrangement – non-ISG installations
(Diagram: No. 6 AWG bonding between the FBB, LR, BR, TBR, and TB1 points, with the LRB and FBB bars.)
Note: Earlier installations may have framework ground referenced to the BR plate of PDC-00.
logic return isolated and connected to the DMS SPG at only one point. This
isolation ensures that all other building grounds, grounding conductors, and
return conductors do not contaminate the logic return or framework ground.
Central office building ground
It is recommended that the resistance between the central office building
principal ground and earth be as low as is practically possible
(preferably 5 Ω or less); in no case should it exceed 25 Ω. This coincides
with the requirements set by the National Electrical Code (ANSI/NFPA No.
70–1978, Article 250–84) for the commercial ac ground in any building.
Figure 13–3
Framework ground and logic return – ISG systems
(Diagram: BR and LR bars with no strap between the LRB and FBB; DMS SuperNode logic returns connected to the DMS SPG at a single point; see page 13–9 for conductor size.)
Note: Dedicated FBE and LRE bars are preferred for the STP equipment.
AC grounding arrangements
Conforming with national electrical codes, each ac distribution circuit must
be provided with an additional conductor, commonly referred to as the
“green wire.” A separate green-wire network is required for each ac source.
Commercial ac is a source that originates (has a ground reference) external
to the DMS-100 system. Each protected ac inverter constitutes a source that
originates (has a ground reference) internal to the DMS-100 switch.
The main ac and telecommunication system grounds must be connected
together at the building principal ground. A connection is also required
between the Central Office DMS SPG ground appearance used for the
DMS-100 System and the green-wire network for all commercial ac circuits
used on or within the DMS-100 System. As a minimum, the size of the
connection should conform to electrical code requirements for equipment
grounding conductors.
The DMS-100 switch may be configured with either of two green-wire
network configurations. Because of the ground interconnections and the
circuit isolation inherent to the DMS-100 switch, the green wire network can
be configured to make contact with the DMS-100 framework while retaining
operational and personnel safety integrity. In this configuration, the
DMS-100 switch and the green wire are isolated from incidental contact
with building ground.
If an isolated green-wire network is chosen, the green wire and ac
components will be isolated from the DMS-100 framework. All ac will be
insulated from contact with the DMS-100 framework and incidental building
ground.
Four types of equipment can be affected:
• ac receptacle
• conduit and junction boxes
• shoulder bushings and mounting hardware
• the ac safety ground wire.
The green wire for each protected ac inverter is bonded to the inverter
chassis and referenced to the DMS framework ground. To preserve isolation
of the commercial ac green-wire network, inverters must not be used to
power any equipment whose chassis may have electrical contact with the
building ground.
Frame loads
The secondary distribution of dc power from the PDC to individual DMS
frames is described in the following paragraphs in conjunction with
identified figures. All figures are based on the latest equipment and frame
Figure 13–4
DMS SuperNode cabinet power distribution
(Diagram: 13 power feeders from the A and B busbars in the PDC to the EMI cabinet; FSP shelf with input terminal, ABS and alarm circuits, and EMI filters; cooling provided by a four-blower shelf.)
Note 1: Plane 0 of the DMS SuperNode frame is powered from the A Bus of the PDC frame.
Note 2: Plane 1 of the DMS SuperNode frame is powered from the B Bus of the PDC frame.
Note 3: The power module circuits are fused in the PDC at 25 A.
Note 4: The blower circuits are fused in the PDC at 5 A. Two of the blower circuits are fed
from the A Bus of the PDC, and the other two blower circuits are fed from the B Bus.
PDC (see figure 13–5). A cooling unit is located in the bottom shelf of each
frame.
The cooling unit for present vintage systems (NT3X90AC) contains five
fans, each of which requires –48V dc power. Power for the NT3X90AC
type cooling unit is obtained from two 5-A fuses at the PDC.
The cooling unit in earlier vintage systems, (NT0X30) contains five fans,
each of which requires 120 V, 60 Hz no-break ac power. The no-break
ac supply is generated by separate dc-ac inverters. Two inverters feed a
maximum of four cooling units. Each inverter is fed from a 20-A fuse at
the PDC. To increase reliability, the inverters are alternately connected to
the A Bus and the B Bus of the PDC.
The ABS is protected by a 10-A fuse at the FSP of the PDC. In a DMS
equipped with an Enhanced Alarm System (EAS), the ABS power feeder
originates from a 20-A fuse or circuit breaker at the power plant powering
the PDC. In a DMS not equipped with EAS, this feeder originates at the A
Bus of the PDC.
Figure 13–5
CCC and MEX frame power distribution
(Diagram: 20-A feeders from the PDC through 10-A fuses to the power converters on the CC/MEX shelves, with cooling units per notes 2 and 3.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to the
battery return bus of the PDC.
Note 2: The CCC frame requires a cooling unit at all times. The MEX frame requires a cooling
unit only when four memory shelves are provisioned.
Note 3: The current NT3X90AC type cooling unit requires two 5-A fuses at the PDC.
Each NT3X90AA and NT3X90AB type cooling unit requires a 5-A fuse at the PDC.
Earlier NT0X30 type cooling units with separate NT0X87 type inverters require a 20 A-fuse at the
PDC for each inverter.
Note 4: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit breaker
at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder originates at
the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Figure 13–6
NT5X13 NM frame power distribution
(Diagram: A and B feeders from the PDC buses through 20-A fuses to the FSP, then through 10-A fuses to the power converters of NM planes 0 and 1 (two shelves each); ABS through a 10-A and a 1.33-A fuse; inverters and cooling unit fed through 5-A fuses per note 2.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to
the battery return bar of the PDC.
Note 2: The NT3X90AC type cooling unit has two fans. Each is powered (–48 V) through a
5 A fuse at the PDC. The NT3X90AA and NT3X90AB type cooling units are equipped with
self-contained inverters as follows:
a. The first NM frame contains two inverters.
b. Each remaining NM frame contains one inverter.
Note 3: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Figure 13–7
NT0X48 NM frame power distribution
(Diagram: A and B feeders from the PDC through 20-A fuses to the FSP; NM shelves of planes 0 and 1 fed through 10-A fuses in the FSP shelf; ABS through a 10-A and a 1.33-A fuse; cooling units per notes 2 and 3.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to the
battery return bar of the PDC.
Note 2: Each NM frame is equipped with a cooling unit.
Note 3: The current NT3X90AC type cooling unit requires two 5 A fuses at the PDC.
Each NT3X90AA and NT3X90AB type unit requires a 5 A fuse at the PDC.
Earlier NT0X30 type cooling units with separate NT0X87 type inverters require a 20 A fuse at
the PDC for each inverter. The NT0X87 type inverters are not always used on NM frames be-
cause two NT0X87 inverters are required for every four cooling units.
Note 4: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse within the PDC.
In a DMS-100F system equipped with the EAS, the 20-A power feeders
associated with the primary alarm MTM (OAU) and secondary alarm MTM
originate directly from the power plant powering the PDCs. When a TME
requires two power feeders, it can be powered either (a) by two feeders from
the PDC, or (b) by one feeder from the PDC and another feeder from the
power plant.
E&M trunk interface
For E&M circuits, the battery return of the PDC is used as a separate logic
return, and a talk battery supply is provided from the PDC battery feed
through a filter at the FSP. The talk battery supply is then distributed to each
shelf through a 3-A fuse at the FSP.
Figure 13–8
TME frame power distribution
(Diagram: A and B feeders from the PDC buses to the TME frame FSP; 10-A breakers to the TM, MTM, or OAU shelves (positions 65 and 51) and their power converters; talk battery distributed through a filter and 3-A fuses; a 5-A breaker for the AXU shelf; ABS through a 10-A and a 1.33-A fuse.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to the
battery return bar of the PDC.
Note 2: For an OAU application, the 10-A circuit breaker assigned to the top shelf is fed
through a separate 20-A fuse in the PDC. For an OAU application with EAS, the –48V feeders
and their returns for the primary and secondary alarm MTMs originate directly from the power
plant powering the PDC. The MTM in the top shelf is a secondary alarm, and the MTM in position
51 is a primary alarm.
Note 3: For an OAU application, the 5-A breaker is provided for the AXU shelf associated with
the MTM in shelf position 51.
Note 4: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit breaker
at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder originates at
the A Bus of the PDC. The ABS is protected by a 10-A fuse within the PDC.
Figure 13–9
DCE frame power distribution
(Diagram: 20-A feeders from the PDC to the FSP, then through 10-A fuses to the power converters on the DCM shelves; ABS through a 10-A and a 1.33-A fuse.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to
the battery return bar of the PDC.
Note 2: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Figure 13–10
LME frame power distribution
(Diagram: A or B feeders from the PDC buses through 20-A fuses to the FSPs of bays 0 and 1; 10-A fuses to the +5 V, +12 V, and +24 V power converters and LD cards 15–19; talk battery A and B distributed through filters and 5-A fuses to LD 00–09 and LD 10–19; ABS through a 10-A and a 1.33-A fuse.)
Figure 13–11
LCE frame power distribution
(Diagram: 20-A feeders from the PDC through filters and 10-A fuses to the power converters of LCM 0 and LCM 1; talk battery B to LSG 00–09 through 7.5-A fuses; ABS through a 10-A and a 1.33-A fuse, distributed to other frames.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to
the battery return bar of the PDC.
Note 2: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Figure 13–12
LGE frame power distribution
(Diagram: A and B feeders from the PDC buses through 20-A fuses to the FSP, then through 10-A fuses to the LGE shelves for LGC 0 and LGC 1; ABS through a 10-A and a 1.33-A fuse.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to
the battery return bar of the PDC.
Note 2: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Figure 13–13
MTC frame power distribution
(Diagram: A and B feeders from the PDC buses to MTC frame planes 0 and 1; MTD 1 and MTD 2 powered per note 2; IOC shelf or modem shelf powered through 20-A fuses per note 3, with an inverter supplying an ac modem when a modem shelf is used; ABS through a 10-A and a 1.33-A fuse, distributed to other frames.)
Note 1: Each feeder (–48 V) has a corresponding return of the same gauge connected to the
battery return bus of the PDC.
Note 2: When the MTD is a Hewlett-Packard unit, the MTD is powered directly from the
20-A fuse in the PDC. When the MTD is a Cook Electric unit, power is supplied through a
10-A fuse in the FSP.
Note 3: When an IOC shelf is used, power from the 20-A fuse (PDC) is fed through a
10-A fuse (FSP) to the IOC. If a modem shelf is used instead of an IOC, power from the
20-A fuse is fed directly to an inverter which then powers the modem.
Note 4: In a DMS with EAS, the ABS power feeder originates from a 20-A fuse or circuit
breaker at the power plant powering the PDC. In a DMS not equipped with EAS, this feeder
originates at the A Bus of the PDC. The ABS is protected by a 10-A fuse at the PDC.
Positions of panels on the PDC frame are identified by the level of the holes
in the vertical supports of the frames.
Figure 13–14
Typical PDC frame configuration (front view)
(Diagram: fuses 62F00–62F14 and the ground panel at the top; FA B feed at level 58, FA A at 54, FA B at 50, FSP at 45; alternating A and B fuse distribution panels or filler panels from level 41 down to 21; FA A and FA B filter units with fuses A and B on the filter panel at level 16; spare fuse holder and filler panel at the bottom.)
DC-DC converters
The DMS-100 Family uses most of the input power at voltages other than
–48 V dc. The voltages are derived from the nominal –48 V dc by dc-dc
converters in the shelf or frame in which they are used. Although these
voltages are dictated by the technology used, most of them are standardized
to +5 V, +12 V, +15 V, or +24 V.
Converter features
Several different types of dc-dc converters are required to meet the voltage
and current requirements of the DMS-100 switch. All of the dc-dc
converters used in the DMS-100 switch have three features:
• Isolation between battery return and logic ground.
• Regulation of the output voltages against changes in load and/or battery
voltage, within specified limits.
• Protection against overload of the outputs.
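The regulation feature above can be illustrated with a small tolerance check. This is a minimal sketch: the ±5% band and the function names are illustrative assumptions, not values from this specification, which defines the limits per converter type.

```python
# Sketch: verify that dc-dc converter outputs stay within a regulation band.
# The nominal rails match the standardized voltages listed above; the ±5%
# tolerance is an assumed illustrative figure, not a specified limit.
NOMINAL_RAILS_V = (5.0, 12.0, 15.0, 24.0)
TOLERANCE = 0.05  # assumed ±5% regulation band

def within_regulation(measured_v, nominal_v, tolerance=TOLERANCE):
    """Return True if a measured output lies inside the regulation band."""
    return abs(measured_v - nominal_v) <= nominal_v * tolerance

def check_converter(outputs):
    """outputs: dict mapping nominal rail voltage -> measured voltage."""
    return {nom: within_regulation(meas, nom) for nom, meas in outputs.items()}
```

For example, `check_converter({5.0: 5.1, 12.0: 11.2})` would pass the +5-V rail but flag the +12-V rail as out of regulation.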
AC power distribution
Protected AC sources
The only protected 60 Hz 120 V ac power required in the DMS-100 Family
System is for the Maintenance and Administration Position (MAP) VDU,
Technical Assistance (TAS) data sets, analog recorded announcement
equipment and cooling units. The protected ac source is derived from –48 V
dc through separate 500-VA inverters. Inverter power may be provided for a
MAP-associated printer, a TTY, and other input/output devices at the request
of the operating company.
Outside plant module power requirements
The OPM cabinet electrical system consists of power distribution
equipment, two rectifiers, and batteries. The OPM uses a 30-A, single-phase,
220-V, 60-Hz commercial ac supply. Backup batteries may be provisioned
to provide power during ac failure conditions. The batteries may be sized
(200 AH) to provide the power required to maintain OPM operation for up
to 8 hours at a traffic level of approximately 5 ccs/line.
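The battery-sizing figures above imply a simple reserve-time calculation. The sketch below is a back-of-envelope check only: it ignores temperature, aging, and discharge-curve derating, and the function names are illustrative.

```python
# Sketch: ideal battery reserve arithmetic for the OPM figures quoted
# above (200 AH capacity, up to 8 hours of reserve). Real battery
# engineering applies derating factors not modeled here.
def reserve_hours(capacity_ah, avg_load_a):
    """Ideal reserve time in hours for a given average dc load."""
    return capacity_ah / avg_load_a

def implied_load_a(capacity_ah, reserve_h):
    """Average load implied by a capacity and a target reserve time."""
    return capacity_ah / reserve_h
```

With these figures, `implied_load_a(200, 8)` gives a 25-A average load, which is consistent with the 30-A ac service feeding the rectifiers.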
An emergency generator plug is provided for attachment of a 220-V ac
generator to supply power during extended commercial ac failure. Any
emergency generator attached to the OPM must be capable of supplying an
average power of 1500 watts with peaks to 4000 watts.
The cabinet ground bar, located in the end access compartment, is to be
connected to a ground rod provided by the user in accordance with local
utility codes. The maximum recommended ground-to-earth resistance is 25 Ω.
There are eight terminations on the ground bar. These consist of the
protector block ground connection, electronic equipment ground, the
incoming ac main ground, a connection to the ground rod and finally, four
incoming outside plant cable sheath grounds.
Figure 13–15
Recommended power plant configuration
[Diagram: 750-MCM feeders from the power plant to PDC 00 and other PDCs as required, with ac ground, miscellaneous ground, and logic return connections]
Note: The power plant may be in a different location than the DMS.
Figure 13–16
Specific isolated battery return configuration shown here with MDG/ground window
arrangement
[Diagram: isolated battery return — 750-MCM battery and battery-return feeds run from the –48-V power plant (positive and negative terminals) to PDC 00; the logic return, ac ground (from the ac cabinet), and miscellaneous ground tie to the CO main ground bus (MGB), which connects through the grounding electrode conductor and framework ground]
Figure 13–17
Configuration with non-isolated battery return
[Diagram: non-isolated battery return — 750-MCM feeders from the power plant to PDC 00 and other PDCs as required, with ac ground, miscellaneous ground, and logic return connections]
Figure 13–18
Shared power plant
[Diagram: shared power plant — a –48-V plant with a battery reference and 750-MCM feeds serves the PDCs; miscellaneous and other equipment on the floor is fed through a +48-V converter; the battery reference ties to the building principal ground]
Note: The DMS system must not be located more than one floor away from the battery main ground reference point.
Documentation
A variety of document types are used to support and maintain an accurate
record of the design and capabilities of the DMS-100 Family. These
document types include job specific, non-proprietary hardware, systems,
proprietary, installation, and various other documents. This section lists and
describes these different document types.
Documentation media
Northern Telecom’s documentation for the DMS-100 Family and the DMS
SuperNode switching system is available on HELMSMAN CD-ROM or in paper,
microfiche, or microfilm form. User documentation is included when a
switching system is shipped. Northern Telecom’s primary documentation
offering is CD-ROM; however, the user may choose a mixture of paper and
microfiche. Software listings of each specific BCS program are available
only on microfiche.
Documentation ordering
Documentation can be ordered in one of two ways:
• Documentation can be ordered through the NT86XX questionnaire at
customer information (CI) time of the switch or any extension (growth)
to an existing switch in accordance with the purchasing agreement.
• Documentation may be purchased by submitting a purchase order for
documents to the Northern Telecom merchandise order specialist.
Documentation standing order services can also be ordered through the
merchandise order specialist.
Documentation catalogs
Documentation catalogs, giving document titles and prices, will be released
annually.
Documentation structure
There are three types of standard documentation provided with DMS-100
Family Systems:
• Job specific documentation on page 14–7
• Non-proprietary hardware documentation on page 14–8
• Systems documentation on page 14–9.
Figure 14–1
Modular documentation system structure
[Diagram: the documentation hierarchy runs from the product system level down through the circuit pack level (circuit schematics) to the component level (ROMs, hybrids), and covers firmware segments and software programs]
Table 14–1
Associated documents
PEC code Description
Changes, issues, and releases are controlled and recorded by the change
control system. Two customer documents contain information related to this
system. They are the document index (DI) and the office inventory record
(OIR). In the DI, the customer is informed of the documents which are
being provided for each office and the issues of these documents. The
documents defined as requirements in the DI are those that are compatible
with the associated equipment. In the OIR, the products supplied against
each order are listed by product engineering code (PEC), title, release
number, and quantity installed. The OIR is intended to be kept (by the
customer) as an office log of changes.
Northern Telecom publications
Northern Telecom publications (NTPs) are issued to document various
aspects of the DMS-100 Family of switches and in general cover several
disciplines:
• Administration
• Engineering
• Installation and testing
• Operations
• Provisioning
• Translations
• Maintenance.
In chapters 4 through 9 the system level documents are listed in the first
section of the chapter. Subsequent sections are listed in alphabetical order
by product name. Within each section the documents are listed in numerical
order.
Publications are identified in chapter 2 of DMS-100 Family Guide to
Northern Telecom Publications, 297-1001-001 by NTP number, title,
release, BCS, and status.
NTP index
This is an explanation of the column headers used in the chapter 2 table of
DMS-100 Family Guide to Northern Telecom Publications, 297-1001-001:
Number The standard ten-digit NTP number appears in this column. All
DMS-100 Family publications are in division 297.
Title This column contains the title of the NTP.
Release This column contains a two-part code consisting of a two-digit
stream number to the left of the decimal point and a two-digit issue number
to the right of the decimal point. The code is not printed on some NTPs; it
will be added when the NTP is reissued.
If an NTP is reissued with new information specific to one or more BCS
software releases, the stream number is increased by one. If the NTP is
reissued to make other corrections, the issue number is increased by one.
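The stream/issue rules above amount to a simple versioning scheme. The sketch below assumes, based on this document’s own publication history (01.02 followed by 02.01), that a stream increment restarts the issue number at 01; the function name is illustrative.

```python
# Sketch of the NTP release-code rules described above.
# A code is two-digit stream "." two-digit issue, e.g. "02.03".
def bump_release(code, bcs_specific):
    """Return the next release code.

    bcs_specific=True  -> new BCS-specific content: stream + 1
                          (issue restarts at 01, inferred from the
                          publication history, not stated in the text)
    bcs_specific=False -> other corrections: issue + 1
    """
    stream, issue = (int(part) for part in code.split("."))
    if bcs_specific:
        stream, issue = stream + 1, 1
    else:
        issue += 1
    return f"{stream:02d}.{issue:02d}"
```

This reproduces the history on the title page: `bump_release("01.02", True)` gives "02.01", and two editorial reissues take it through "02.02" to "02.03".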
BCS This column contains the batch change supplement (BCS) number.
This is the release number of the software to which the NTP applies.
Status This column contains the document publication status:
PREL Preliminary
STD Standard
Note 1: Some NTPs are rated STD N by means of an N suffix to the
NTP number on the document (for example, 297-2101-102N); however,
the term STD N is used in the index.
Note 2: Some earlier issues of NTPs used a three-part rating code such
as 01D02. Those publications are considered DRAFT rated if the letter
is W, and STD N if the letter is D.
Job specific documentation
Documents are provided for a particular DMS-100 Family office with the
initial delivery of the system. These documents are office specific.
Document index
The document index (DI) is an index of all standard and job related
documents provided for each office and includes the issue of each document
supplied.
Office-inventory record
The office inventory record (OIR) contains a list of all the components of
a customer office, with the quantity and release of each. Input is from job
specifications and on-site equipment audits. The OIR is used for keeping
track of the contents of a particular office, change control, and extended
warranty service.
Office feature record
The office feature record (OFR) contains a short description of each
PROTEL software subsystem resident in a particular office and a list of
feature packages provided in the software load.
Central office job specifications
The main function of the job specification is to list the materials needed to
assemble and install a job. There are three types of job specifications:
material requirement, configuration, and cable. After the job specification
has been used by manufacturing to assemble the job, and by the installer to
install the job, it is no longer required since the job drawings contain all the
information about the contents of the office.
Central office job drawings
Job drawings are generated by customer engineering job specifications, job
information memorandum (JIM) or 88K orders. Job drawings are
permanent records of what is in a particular office and indicate the location
of components when the positioning is variable. A breakdown of the
drawings for a standard job would be up to 20 job drawings, six facility
drawings showing floor plan, lighting, cable racks, ceiling inserts,
Systems documentation
Northern Telecom publications
NTPs are required to engineer, operate, and maintain a Northern Telecom
product. NTPs describe the DMS-100:
• System description—Describes the system, its features, and how it
operates.
• Engineering, equipment application and ordering information—Contains
the information required to select the proper equipment when planning a
new office or expanding the facilities of an existing office.
• System/equipment performance specification—Written for engineers to
enable them to assess the applicability of the system or equipment to
their requirements.
• Identification, installation, and tests—Written for technicians of
telephone companies whose duties include installation, field adjustments,
and tests that are necessary before turnover for regular operation.
• Operating procedures—Written for all field personnel who operate
equipment. The contents have four parts:
— Identification of controls and indicators and their function
— Step-by-step procedures for operating the controls
— Alternative methods of operation under trouble conditions
— Recommended methods for maintaining service under trouble
conditions
• Simple trouble test and repair procedures—Written for operating
company field service technicians and aimed at quick restoral of service
without extensive test, adjustment, and repair procedures.
• Preventive in-service maintenance—Written for operating company
technicians:
— Recommended time intervals between tests and adjustments
— Detailed test and adjustment procedures which indicate limits at or
beyond which corrective adjustment or repair must be done
— Special procedures for maintenance and cleaning of floors, offices
and equipment which is likely to affect the operational reliability of
the equipment.
• Traffic—Written for operating company personnel; covers traffic
provisioning, translations, and administration.
• Maintenance center procedures—Document tests and adjustments that
are required on equipment or apparatus prior to their use as replacement
items in the field.
Optional documentation
Northern Telecom is expanding the available DMS-100 Family
documentation as described in this section.
Feature description manual (FDM)
The Feature description manual (FDM) contains brief descriptions of
DMS-100 Family features, and indexes of software packages.
The FDM contains the following information:
• Hardware requirements
• Feature impact summary table
• Feature descriptions
• Features listed by BCS
• Feature number to feature package and BCS cross–reference
• Feature package to feature number and BCS cross–reference
• Feature name to feature number, package and BCS cross–reference
Proprietary documentation
This section identifies the DMS-100 Family proprietary documents.
Proprietary information list
The proprietary documentation comprises the following documents:
• Release control record (RCR)
• Detailed assembly drawing (AD)
• Program documentation index (PDI)
• Central control software program listings
• Program description information
• Central control cross–references
• XPM peripheral module software program listings
— Program listings
— cross–references
— Directories
• Operating manuals
— Support operating system (SOS) loader reference manual
— DMS-100 Family Software update controller user manual
• Programmers manuals
• Software debug manuals
• DMS-100 Family system description
• Network integrity fault analysis guidelines
• Technical assistance manuals (TAM)
• Installation manuals
Program listings, cross reference, and directory are produced for each
version of the firmware/software released.
Program listings
The XPM software is written in assembler code. Line peripheral code is
assembled using a relocatable assembler; the rest of the PMs use an absolute
assembler. The program listings provided on the fiche are assembled source
listings.
Cross references
For the XPM software that uses the absolute assembler, a cross reference is
provided on a subsystem basis. For XPM firmware/software that uses the
relocatable assembler, a cross reference consists of an alphabetical list of all
the labels in the subsystem or module listing.
Directories
A directory exists only for the XPM firmware/software that uses the
absolute assembler. The directory (label dictionary) is provided on a
subsystem basis. It consists of an alphabetical list of all the labels defined
within the subsystem.
Operating manuals
These manuals provide information on tools to implement and manage
updates to the DMS-100 and peripheral software loads.
SOS loader reference manual
The support operating system (SOS) loader reference manual contains
information common to all software versions.
The SOS loader reference manual gives detailed information for using
on-line tools to determine what modules are loaded in a specific office. It is
also used to determine the physical address of the program code and data
elements in a specific office.
DMS-100 family software update controller user manual
The DMS-100 Family software update controller (DSU) user manual
describes how to use the DSU to implement and manage updates to
DMS-100 software loads.
Programmers manuals
This documentation, common to all software versions, provides
information on the PROTEL language and supporting information on
machine architecture and the operating system. It consists of six manuals:
• PROTEL Introductory Manual
• PROTEL Reference Manual
• DMS-100 Family central control complex guide 1 (machine architecture)
• DMS-100 Family central control complex guide 2 (user’s manual)
• XPM Assembler user’s manual
• XPM Pascal manual.
Installation manuals
These manuals are used by DMS-100 Family equipment installation
personnel. They describe all the various tests that are applied before the
system is turned over to a telephone operating company. These manuals are
available under a separate installation agreement.
List of terms
This chapter expands and defines abbreviations used in this document. For
information on terms not defined in this chapter, refer to Glossary of Terms
and Abbreviations, 297-1001-825.
A-link
A signaling data link that connects service switching points (SSP) and
service control points (SCP) to signaling transfer points (STP). See also
service control point (SCP) or service switching point (SSP).
ANI
See automatic number identification (ANI).
application-specific unit (ASU)
A combination of hardware and software components that carries out a
particular function on the signals carried on the channel buses (C-bus) and
frame transport buses (F-bus) in a link peripheral processor (LPP).
Examples of ASUs are Ethernet interface units (EIU), CCS7 link interface
units (LIU7), and network interface units (NIU).
ASU
See application-specific unit (ASU).
automatic number identification (ANI)
A system whereby a calling number is identified automatically and
transmitted to the automatic message accounting (AMA) office equipment
for billing.
BCD
Binary coded decimal.
BCS
Batch change supplement.
BDW
Block descriptor word.
capability code
An address that allows a Common Channel Signaling 7 (CCS7) node to
identify itself by more than one point code. For example, each node of a
signaling transfer point pair is identified by the same capability code and by
individual capability codes. See also point code.
CB
See channel bank (CB).
C-bus
See channel bus (C-bus).
CC
Central control. On NT-40 based systems, the Central Control subsystem.
On SuperNode-based systems, the DMS-Core.
CCITT
See Consultative Committee on International Telephony and Telegraphy
(CCITT).
CCS
See common channel signaling (CCS).
CCS7
See Common Channel Signaling 7 (CCS7).
CCS7 link interface unit (LIU7)
A peripheral module (PM) that processes messages entering and leaving a
link peripheral processor (LPP) through an individual signaling data link.
Each LIU7 consists of a set of cards and a paddle board provisioned in one
of the link interface shelves of the LPP. See also link peripheral processor
(LPP).
central processing unit (CPU)
The hardware unit of a computing system that contains the circuits that
control and perform the execution of instructions.
central side (C-side)
The side of a node that faces away from the peripheral modules (PM) and
toward the central control (CC). Also known as control side. See also
peripheral side (P-side).
channel bank (CB)
Communication equipment performing the operation of multiplexing. A
channel bank is used typically for multiplexing voice grade channels.
connectionless signaling
A type of signaling in which no fixed end-to-end connection is associated
with the call. The route followed by the information and signaling between
the originating and terminating subscriber is not fixed and can change from
one message to the next. For example, signaling used to access a database
for 800-number translations and maintenance signaling messages between
signaling points are considered connectionless signaling. Also known as
transaction services.
connection-oriented signaling
A signaling process in which a fixed end-to-end path is established for the
call. The signaling protocol establishes a fixed path although the signaling
itself can travel by way of different paths for the duration of the call. All
information associated with the call follows a fixed path even though the
signaling itself is not connection-oriented. Also known as trunk signaling.
Consultative Committee on International Telephony and Telegraphy (CCITT)
Consultative Committee on International Telephony and Telegraphy
(CCITT) operates under the auspices of the United Nations and is the forum
for international agreement on recommendations for international
communication systems.
CPU
See central processing unit (CPU).
C-side
See central side (C-side).
DCE
See digital carrier equipment (DCE) frame.
DDU
See disk drive unit (DDU).
destination point code (DPC)
A Common Channel Signaling 7 (CCS7) term defining the termination of a
signaling message. See also originating point code (OPC).
digital carrier equipment (DCE) frame
An equipment frame that houses digital carrier modules (DCM).
digital trunk controller (DTC)
A peripheral module (PM) that connects DS30 links from the network with
digital trunk circuits.
DMS SuperNode
A central control complex (CCC) for the DMS-100 switch. The two major
components of DMS SuperNode are the computing module (CM) and the
message switch (MS). Both are compatible with the network module (NM),
the I/O controller (IOC), and XMS-based peripheral modules (XPM).
DMS SuperNode SE (SNSE)
A smaller version of DMS SuperNode designed to service smaller offices
(maximum 20 000 lines). It is based on existing SuperNode technology and
can be used in all existing applications of SuperNode, including Common
Channel Signaling 7 (CCS7) and international. SNSE supports all
SuperNode software features at a reduced call processing capacity.
DMS SuperNode Signaling Transfer Point (DMS-STP)
A high-throughput data packet switch providing connectivity between the
nodes of a Common Channel Signaling 7 (CCS7) network.
DMS SuperNode Signaling Transfer Point/Service Switching Point Integrated
Node (DMS-STP/SSP INode)
A CCS7 integrated node that combines the functionality of a signaling
transfer point (STP) and a service switching point (SSP). The integrated
node consists of the DMS-core, DMS-bus, I/O controller (IOC), office alarm
system (OAS), JNET or ENET, link peripheral processors (LPP), and
peripheral modules (PM) such as digital trunk controllers (DTC), line group
controllers (LGC), and maintenance trunk modules (MTM).
DN
See directory number (DN).
DNPC
See dual network packaged core (DNPC).
double shelf network (DSN)
A network with one network plane on a single shelf of a double shelf
network equipment (DSNE) frame, permitting two complete networks for
each plane in a single bay.
double shelf network equipment (DSNE) frame
A frame that packages one network plane on a single shelf, permitting two
complete networks for each plane in a single bay.
DPC
See destination point code (DPC).
DPCC
See dual-plane combined core cabinet (DPCC).
DS
Data store. The data store contains transient information for each call, as
well as customer data and office parameters.
DS-0
A protocol for data transmission that represents one channel in a 24-channel
DS-1 trunk.
DS-0A
An asynchronous DS-0. See DS-0.
DS-1
The 8-bit 24-channel 1.544-Mbit/s digital signaling format used in the
DMS-100 Family switches. The DS-1 signal is the North American
standard for digital trunks. It is a closely specified bipolar pulse stream.
DS-1 carries 24 information channels of 64 kbit/s each (DS-0s).
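The DS-1 figures above can be checked arithmetically. The framing-bit accounting in this sketch is standard DS-1 background rather than something stated in the text: each frame carries 24 channels of 8 bits plus 1 framing bit, at 8000 frames per second.

```python
# Worked check of the DS-1 rate quoted above: 24 channels x 8 bits plus
# 1 framing bit per frame, 8000 frames per second. The framing-bit detail
# is standard DS-1 background, not stated in this document.
CHANNELS = 24
BITS_PER_CHANNEL = 8
FRAMING_BITS = 1
FRAMES_PER_SECOND = 8000

frame_bits = CHANNELS * BITS_PER_CHANNEL + FRAMING_BITS  # 193 bits per frame
ds1_rate = frame_bits * FRAMES_PER_SECOND                # 1 544 000 bit/s
ds0_rate = BITS_PER_CHANNEL * FRAMES_PER_SECOND          # 64 000 bit/s per DS-0
```

This recovers both figures in the definition: the 1.544-Mbit/s aggregate and the 64-kbit/s DS-0 channel rate.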
DS30
A 10-bit 32-channel 2.048-Mbit/s speech-signaling and message-signaling
link as used in the DMS-100 Family switches.
DS30A
A 32-channel transmission link between the line concentrating module
(LCM) and controllers in the DMS-100 Family switches. DS30A is similar
to DS30, though intended for use over shorter distances.
DS512 fiber link
The fiber optic transmission link implemented in the DMS SuperNode
processor. The DS512 is used for connecting the computing module (CM)
to the message switch. One DS512 fiber link is the equivalent of 16 DS30
links.
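The DS512/DS30 equivalence above also explains the name: a quick check, with the variable names being illustrative only.

```python
# Check of the DS512 = 16 x DS30 equivalence stated above.
# DS30: 32 channels at 2.048 Mbit/s (from the DS30 definition).
DS30_CHANNELS = 32
DS30_RATE_MBIT = 2.048
DS30_PER_DS512 = 16

ds512_channels = DS30_CHANNELS * DS30_PER_DS512    # 512 channels, hence "DS512"
ds512_rate_mbit = DS30_RATE_MBIT * DS30_PER_DS512  # 32.768 Mbit/s aggregate
```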
DSN
See double shelf network (DSN) or dual shelf network.
DSNE
See double shelf network equipment (DSNE) frame.
DTC
See digital trunk controller (DTC).
DTE
See digital trunk equipment (DTE).
ICR
International call recording.
IDTC
See international digital trunk controller (IDTC).
INode
See integrated node (INode). See also DMS SuperNode Signaling Transfer
Point/Service Switching Point Integrated Node (DMS-STP/SSP INode).
integrated node (INode)
A combination of a DMS SuperNode Signaling Transfer Point (DMS-STP)
and a DMS Signaling Point/Service Switching Point (DMS SP/SSP). It has
all the functions of both, and requires fewer frames and cabinets.
international digital trunk controller (IDTC)
A digital trunk controller (DTC) that acts as an interface between a DMS
switch and PCM30 trunks. See also digital trunk controller (DTC) and
PCM30 digital trunk controller (PDTC).
interperipheral connection (IPC)
A connection in the interperipheral message link (IPML) in common
channel interoffice signaling. Two IPCs can share the message handling
load.
IOC
See I/O controller (IOC).
I/O controller (IOC)
An equipment shelf that provides an interface between up to 36 I/O devices
and the central message controller (CMC). The IOC contains a peripheral
processor (PP) that independently performs local tasks, thus relieving the
load on the CPU.
IOD
See I/O device (IOD).
I/O device (IOD)
A device that allows data to be entered into a data processing system,
received from the system, or both.
IOE
See I/O equipment (IOE) frame.
I/O equipment (IOE) frame
A frame that houses I/O devices.
IPC
See interperipheral connection (IPC).
ITOPS
International traffic operator position system.
JF
Journal file. A utility that records changes to the datafill tables of the
DMS-100 family switches. The JF provides a means of restoring the tables
if it is necessary to reload office software from a backup source.
JNET
See junctored network (JNET).
junctored network (JNET)
A time-division multiplexed system that allows for switching of 1920
channels per network pair (fully duplicated). Additional channels are
established through the use of external junctors, internal junctors, and a
digital network interconnecting (DNI) frame. Channels then can be routed
directly, or use alternate routing, through the use of junctors, a DNI frame,
and software control. Capacity for a DMS-100 switch is 32 network pairs or
61 440 channels (1920 channels × 32 network pairs).
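The capacity figure in the JNET definition is straightforward arithmetic, reproduced here as a check (variable names are illustrative):

```python
# The JNET capacity arithmetic from the definition above:
# 1920 channels per fully duplicated network pair, up to 32 pairs.
CHANNELS_PER_NETWORK_PAIR = 1920
MAX_NETWORK_PAIRS = 32

max_channels = CHANNELS_PER_NETWORK_PAIR * MAX_NETWORK_PAIRS  # 61 440
```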
LEN
Line equipment number. A seven digit functional reference that identifies
line circuits. The LEN provides physical location information on equipment
such as site, frame number, unit number, line subgroup (shelf), and circuit
pack.
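The LEN's positional fields can be illustrated with a small parser. This is a sketch only: the document states that the reference encodes site, frame, unit, line subgroup (shelf), and circuit information, but the space-separated input format and example values below are assumptions for illustration.

```python
# Illustrative LEN parser. The space-separated layout and the example
# 'HOST 00 0 02 15' are assumptions for this sketch; this document
# specifies only which fields the seven-digit reference encodes.
from typing import NamedTuple

class LEN(NamedTuple):
    site: str
    frame: int
    unit: int
    lsg: int      # line subgroup (shelf)
    circuit: int

def parse_len(text):
    """Parse a space-separated LEN string into its positional fields."""
    site, frame, unit, lsg, circuit = text.split()
    return LEN(site, int(frame), int(unit), int(lsg), int(circuit))
```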
LIM
See link interface module (LIM).
link
• In a DMS switch, a connection between any two nodes.
• A four-wire group of conductors providing transmit and receive paths for
the serial speech or message data between components of DMS-100
Family switches. Speech links connect peripheral modules (PM) to the
network modules (NM). Message links connect NM controllers or I/O
controllers (IOC) to the central message controller (CMC).
• A logical switched virtual circuit (SVC). Up to 256 logical SVCs are
carried on a physical X.25 communication cable.
link interface module (LIM)
A peripheral module (PM) that controls messaging between link interface
units (LIU) in a link peripheral processor (LPP). The LIM also controls
messages between the LPP and the DMS-bus component. An LIM consists
of two LIM units and two frame transport buses (F-bus). The two LIM units
operate in a load-sharing mode with each other. See also frame transport bus
(F-bus), link peripheral processor (LPP), and local message switch (LMS).
link interface shelf (LIS)
A shelf in a link peripheral processor (LPP) that houses application-specific
units (ASU) and associated power converters.
link interface unit for CCS7 (LIU7)
See CCS7 link interface unit (LIU7).
link peripheral processor (LPP)
The DMS SuperNode equipment frame or cabinet that contains two types of
peripheral modules (PM): a link interface module (LIM) and one or more
application-specific units (ASU). See also application-specific unit (ASU),
CCS7 link interface unit (LIU7), and link interface module (LIM).
linkset
• A group of links related to one application instance.
• A collection of links connecting two adjacent signaling points in CCITT
no. 6 signaling (N6), common channel interoffice signaling no. 6
(CCIS6), and Common Channel Signaling 7 (CCS7).
link status signal unit (LSSU)
A type of signal unit that contains information about signaling unit state
changes. The LSSU has priority over other types of signal units.
LIS
See link interface shelf (LIS).
LIU7
See CCS7 link interface unit (LIU7).
LMS
See local message switch (LMS).
local message switch (LMS)
A shelf in the link peripheral processor (LPP) frame or cabinet. The LMS
exchanges messages between application-specific units (ASU) in the LPP
and provides access to the DMS-bus. Also known as link interface module
(LIM).
loopback
The reflection of data signals of known characteristics to their point of origin
so that the reflected bit stream can be compared with the transmitted bit
stream.
LPP
See link peripheral processor (LPP).
LSSU
See link status signal unit (LSSU).
magnetic tape drive (MTD)
In a DMS switch, a device used to record DMS-100 Family data. An MTD
can be mounted on either a magnetic tape center (MTC) frame or an
input/output equipment (IOE) frame. Also known as tape drive.
maintenance and administration position
See MAP.
maintenance trunk module (MTM)
In a trunk module equipment (TME) frame, a peripheral module (PM) that is
equipped with test and service circuit cards and contains special buses to
accommodate test cards for maintenance. The MTM provides an interface
between the DMS-100 Family digital network and the test and service
circuits.
MAP
Maintenance and administration position. A group of components that
provides a user interface between operating company personnel and the
DMS-100 Family switches. The interface consists of a video display unit
(VDU) and keyboard, a voice communications module, test facilities, and
special furniture.
MAPCI
MAP command interpreter.
message signal unit (MSU)
A type of signal unit that contains signaling information. The MSUs are
buffered until positive acknowledgement is received.
message switch (MS)
A high-capacity communications facility that functions as the messaging
hub of the dual-plane combined core (DPCC) of a DMS SuperNode
processor. The MS controls messaging between the DMS-bus components
by concentrating and distributing messages and by allowing other DMS-STP
components to communicate directly with each other.
message transfer part (MTP)
A CCITT no. 7 signaling (N7) protocol that provides a connectionless
transport system for carrying common channel interoffice signaling no. 6
(CCIS6) and Common Channel Signaling 7 (CCS7) signaling messages.
nailed-up cross-connection
A special-services connection in which channels on a DS-1 link used for
special-services cards are not switched through the DMS-100 network.
Instead, they are looped around in the subscriber carrier module–100S
(SMS) or subscriber carrier module–100 urban (SMU) formatter card onto a
second DS-1 link leading to a channel bank, DMS-100 switch, or other
telephone equipment.
NET
network
network (NET)
• An organization of stations that can intercommunicate but not
necessarily on the same channel.
• Two or more interrelated circuits.
• A combination of terminals and circuits in which transmission facilities
interconnect user stations directly.
• A combination of circuits and terminals serviced by a single switching or
processing center.
• An interconnected group of computers or terminals.
• The NET module frame of the DMS-100 switch.
network interface unit (NIU)
A DMS SuperNode application-specific unit (ASU) that provides
channelized access for F-bus resident link interface units (LIU) using a
channel bus (C-bus). The NIU resides in a link peripheral processor (LPP)
frame.
network module (NM)
The basic building block of the DMS-100 Family switches. The NM
accepts incoming calls and uses connection instructions from the central
control complex (CCC) to connect the incoming calls to the appropriate
outgoing channels. Network module controllers control the activities in the
NM.
network module controller (NMC)
A group of circuit cards that communicates with the central message
controller (CMC). The NMC is located in the network module (NM). The
NMC organizes the flow of internal messages by directing messages to the
peripheral modules (PM) or interpreting connection instructions to the
crosspoint switches.
network operations protocol (NOP)
A protocol that provides an interface between a DMS-100 Family switch
and its remote systems.
PDTC
See PCM30 digital trunk controller (PDTC).
PEC
See product engineering code (PEC).
peg count
The number of times an event occurs; for example, the number of telephone
calls originated during a specified period of time.
peripheral module (PM)
Any hardware module in the DMS-100 Family switches that provides an
interface to external line, trunk, or service facilities. A PM contains
peripheral processors (PP), which perform local routines, thus relieving the
load on the CPU.
peripheral processor (PP)
A hardware device in the peripheral module (PM) that performs local
processing independent of the CPU. The PP is driven by read-only memory
(ROM) in the PM, thus releasing CPU run time for higher level activities.
peripheral side (P-side)
The side of a node facing away from the central control (CC) and toward the
peripheral modules (PM). See also central side (C-side).
per-trunk signaling (PTS)
A conventional telephony method of signaling that multiplexes the control
signal of a call with voice or data over the same trunk.
PM
See peripheral module (PM).
point code
The address of a signaling point. See also capability code.
power distribution center (PDC)
The frame containing the components for distributing office battery feeds to
equipment frames of the DMS-100 Family switches. The PDC accepts A
and B cables from the office battery and provides protected subsidiary feeds
to each frame or shelf. It also contains noise suppression and alarm circuits
and provides a dedicated feed for the alarm battery supply.
PP
See peripheral processor (PP).
SPMS
See Switch Performance Monitoring System (SPMS).
SS7
See signaling system 7 (SS7).
SS#7
See signaling system #7 (SS#7).
SSN
See subsystem number (SSN).
SSP
See service switching point (SSP).
ST
See signaling terminal (ST) or symbol table (ST).
S/T-bus
An internal eight-wire bus (of which only four wires are used to transmit and
receive messages) that connects terminals to the NT1 for access to the
ISDN. Messages are transmitted from port to port over the S/T-bus. Also
known as S/T-interface and S/T-loop. Formerly known as transaction bus
(T-bus).
STP
See signaling transfer point (STP).
SWACT
See switch of activity (SWACT).
switch of activity (SWACT)
In a DMS fault-tolerant system, a reversal of the states of two identical
devices devoted to the same function. A SWACT makes an active device
inactive and an inactive device active.
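The state reversal performed by a SWACT can be sketched as follows (an illustrative model only, with hypothetical class and unit names, not DMS code):

```python
# Illustrative model of a SWACT: two identical devices devoted to the
# same function exchange their active/inactive states.
class RedundantPair:
    def __init__(self):
        # One unit starts active, its mate starts inactive.
        self.states = {"unit0": "active", "unit1": "inactive"}

    def swact(self):
        """Reverse the states: active becomes inactive and vice versa."""
        for unit, state in self.states.items():
            self.states[unit] = "inactive" if state == "active" else "active"

pair = RedundantPair()
pair.swact()
print(pair.states)  # {'unit0': 'inactive', 'unit1': 'active'}
```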
Switch Performance Monitoring System (SPMS)
A feature for assessing DMS-100 switch performance and highlighting poor
performance items.
system load module (SLM)
A mass storage system in a DMS SuperNode processor that stores office
images. From the SLM, new loads or stored images can be booted into the
computing module (CM).
T-bus
transaction bus. See the preferred term, S/T-bus.
TCAP
See transaction capabilities application part (TCAP).
telephone user part (TUP)
A CCITT no. 7 signaling (N7) protocol that provides signaling between a
Common Channel Signaling 7 (CCS7) switching office and a designated
customer setup.
TL
See transmission link (TL).
T-link
A full-duplex, byte-oriented adaptation protocol designed to transfer
synchronous or asynchronous data over a digital circuit at data terminal
equipment (DTE) data rates of up to 64 kbit/s.
TM
See trunk module (TM).
TME
See trunk module equipment (TME) frame.
transaction bus (T-bus)
See S/T-bus.
transaction capabilities application part (TCAP)
A service that provides a common protocol for remote operations across the
Common Channel Signaling 7 (CCS7) network. The protocol consists of
message formatting, content rules, and exchange procedures. TCAP
provides the ability for the service switching point (SSP) to communicate
with a service control point (SCP). TCAP is used by the ISDN layer facility
message to transport service information for transaction signaling, not
associated with an active call, over primary rate interface (PRI) links.
transaction services
See connectionless signaling.
transmission link (TL)
In a Common Channel Signaling 7 (CCS7) network, a T1 digital carrier
terminating on a digital trunk controller (DTC). In the DMS switch, the TL
is a single voice carrier on a DS30 link over connections through the
network and into the message switch and buffer 7 (MSB7).