Purpose
The Technical Reference Guide provides detailed technical information on iDirect technology
and major features as implemented in iDS Release 8.3.
Intended Audience
The intended audience for this guide includes network operators using the iDirect iDS system,
network architects, and anyone upgrading to iDS Release 8.3.
Note: It is expected that the user of this material has attended the iDirect IOM
training course and is familiar with the iDirect network solution and associated
equipment.
Document Conventions
This section illustrates and describes the conventions used throughout the manual. Take a
look now, before you begin using this manual, so that you’ll know how to interpret the
information presented.
Bold italic Trebuchet font: Used to emphasize information for the user, such as in notes. Example: "Note: Several remote model types can be configured as iSCPC remotes."
Getting Help
The iDirect Technical Assistance Center (TAC) is available to help you 24 hours a day, 365 days
a year. Software user guides, installation procedures, a FAQ page, and other documentation
that supports our products are available on the TAC webpage. Please access our TAC webpage
at: http://tac.idirect.net.
If you are unable to find the answers or information that you need, you can contact the TAC at
(703) 648-8151.
If you are interested in purchasing iDirect products, please contact iDirect Corporate Sales by
telephone or email.
Telephone: (703) 648-8000
Email: [email protected]
This chapter presents a high-level overview of an iDirect network. It provides a sample iDirect
network and describes IP architecture in SCPC and TDMA networks.
System Overview
An iDirect network is a satellite based TCP/IP network with a Star topology in which a Time
Division Multiplexed (TDM) broadcast downstream channel from a central hub location is
shared by a number of remote nodes. An example iDirect network is shown in Figure 1.
The iDirect Hub equipment consists of an iDirect Hub Chassis with Hub Line Cards, a Protocol
Processor (PP), a Network Management System (NMS) and the appropriate RF equipment. Each
remote node consists of an iDirect broadband router and the appropriate external VSAT
equipment. The remotes transmit to the hub on one or more shared upstream carriers using Deterministic TDMA (D-TDMA).
IP Architecture
The following figures illustrate the basic iDirect IP Architecture with the different levels of configuration available to you:
• Figure 2, “iDirect IP Architecture – Multiple VLANs per Remote”
• Figure 3, “iDirect IP Architecture – VLAN Spanning Remotes”
• Figure 4, “iDirect IP Architecture – Classic IP Configuration”
iDirect allows you to mix traditional IP routing based networks with VLAN based configurations. This capability directly supports customers with conflicting IP address ranges, and supports multiple independent customers at a single remote site, by allowing multiple VLANs to be configured directly on the remote.
In addition to end-to-end VLAN connections, the system supports RIPv2 in an end-to-end manner, including over the satellite link; RIPv2 can be configured on a per-network-interface basis.
In addition to the network architectures discussed so far, the iDirect iSCPC solution allows you
to configure, control and monitor point-to-point Single Carrier per Channel (SCPC) links.
These links, sometimes referred to as “trunks” or “bent pipes,” may terminate at your
teleport, or may be located elsewhere. Each end-point in an iSCPC link sends and receives
data across a dedicated SCPC carrier. As with all SCPC channels, the bandwidth is constant
and available to both sides at all times, regardless of the amount of data presented for
transmission. SCPC links are less efficient in their use of space segment than are iDS TDMA
networks. However, they are very useful for certain applications. Figure 5 shows an iDirect
system containing an iSCPC link and a TDMA network, all under the control of the NMS.
This chapter provides general guidelines for designing mesh networks using iDirect
equipment. Various physical and network topologies are presented, including how each
different configuration may affect the cost and performance of the overall network. Network
and equipment requirements are specified, as well as the limitations of the current phase of
iDirect’s Mesh solution. Overviews are provided for the commissioning procedure for an
iDirect Mesh network; converting existing star networks to mesh; and creating new mesh
networks.
iDirect’s Mesh offering provides a full-mesh solution implemented as a mesh overlay network
superimposed on an iDirect star network. The mesh overlay provides direct connectivity
between remote terminals with a single trip over the satellite, thereby halving the latency
and reducing satellite bandwidth requirements. As with other iDirect features, mesh is being
implemented in a phased manner. The first phase was delivered in iDS Release 7.0. Phase II of
mesh, which was delivered in iDS Release 8.2 and is supported in this release, added the
following enhancements to the original Mesh feature:
• The ability to configure multiple mesh inroutes per inroute group
• The ability to configure separate data rates for star and mesh inroutes
• Support for TRANSEC over mesh
If you are running a Mesh Phase I release (iDS 7.0, 7.1 or 8.0), you are limited to a single
inroute per mesh inroute group. In addition, TRANSEC over mesh is not supported in Mesh
Phase I. For details of iDirect hardware and features supported for each release, see “Mesh
Feature Set and Capability Matrix” on page 36.
In the network shown in Figure 6, the one-way transmission delay from user A to user B over a
geosynchronous satellite averages 550 ms. The extended length of the delay is due to the
“double-hop” transmission path: remote A to the satellite; the satellite to the hub; the hub
back to the satellite; and the satellite to remote B. This transmission delay, added to the
voice processing and routing delays in each terminal, results in an unacceptable quality of
service for voice. In addition, the remote-to-remote transmission requires twice as much
satellite bandwidth as a single-hop call.
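The 550 ms figure follows from simple propagation arithmetic. The Python sketch below assumes a geostationary altitude of about 35,786 km and straight-line paths; the true slant range plus voice processing and routing delays account for the remainder.

    SPEED_OF_LIGHT_KM_S = 299_792.458
    GEO_ALTITUDE_KM = 35_786          # assumed; the slant range to a real site is longer

    one_leg_ms = GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S * 1000   # ground <-> satellite

    single_hop_ms = 2 * one_leg_ms    # remote -> satellite -> hub
    double_hop_ms = 4 * one_leg_ms    # remote -> satellite -> hub -> satellite -> remote

    print(f"single hop: {single_hop_ms:.0f} ms")   # ~239 ms
    print(f"double hop: {double_hop_ms:.0f} ms")   # ~478 ms; slant range and processing
                                                   # delays bring the average to ~550 ms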
A more cost-effective use of satellite bandwidth and improved quality of service for real-time
traffic can be achieved by providing remote-to-remote connections over a single satellite
hop, as provided in mesh networks (Figure 7).
In a full-mesh network, all remotes can communicate directly with one another. A mesh
network is ideal for any application that is intolerant of the double-hop delays inherent in star
networks and where remote-to-remote communications are required. A mesh satellite
network typically consists of a master terminal, which provides network management and
network synchronization, and remote user terminals.
One advantage of the iDirect Mesh implementation is that mesh remote terminals continue to
be part of the star network. This allows the monitor and control functions and the timing
reference for the mesh network to be provided by the existing hub equipment over the SCPC
downstream carrier.
In an iDirect Mesh network, the hub broadcasts to all remotes on the star outbound channel.
This broadcast transmits user traffic as well as the control and timing information for the
entire network of inbound mesh and star channels. The mesh remotes transmit user data on
mesh TDMA inbound channels, which other mesh remotes are configured to receive.
Note: The following remote model types are supported over iDirect Mesh: iNFINITI
5300/5350; iNFINITI 7300/7350; iNFINITI 8350; Evolution e8350; iConnex-100;
iConnex-700; and iConnex e800.
Each mesh remote is configured with a “home” mesh inroute. A mesh remote receives its
home inroute using the second demodulator on the Indoor Unit (IDU). All mesh transmissions
to the remote must be sent on the home inroute of the destination remote. Therefore, any
peer remote sending single-hop data must frequency hop to the peer’s home inroute before
transmitting.
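A minimal sketch of the home-inroute rule, using hypothetical remote and carrier names: the transmit carrier is always selected by the destination, never the sender.

    home_inroute = {                       # hypothetical remotes and carriers
        "remote_a": "mesh_inroute_1",
        "remote_b": "mesh_inroute_2",
    }

    def carrier_for_transmission(sender, destination):
        """A sender must frequency-hop to the destination's home inroute."""
        return home_inroute[destination]

    assert carrier_for_transmission("remote_a", "remote_b") == "mesh_inroute_2"
    assert carrier_for_transmission("remote_b", "remote_a") == "mesh_inroute_1"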
Note: iDirect Mesh is logically a full-mesh network topology. All remotes can
communicate directly with each other (and the hub) in a single-hop. This is
accomplished by allowing the remote to receive both the outbound channel
from the hub and its home TDMA mesh inbound channel. This is sometimes
referred to as a star/mesh configuration. When referring to the iDirect product
portfolio, “star/mesh” and “mesh” are synonymous.
Network Architecture
All mesh networks consist of a single broadcast outbound channel and at least one mesh TDMA
inbound channel per inroute group.
Transponder Usage
The outbound and inbound channels must use the same transponder.
Note: The outbound loopback signal is demodulated on the same line card (M1D1
only) that modulates the outbound channel. This line card is capable of
demodulating a star or mesh inbound channel.
The outbound channel supporting a mesh network carries all outbound user data, as well as the network monitoring and control information used to control the mesh inbound channels, including timing and slot allocation and dynamic bandwidth allocation changes for the remotes. The hub is the only node in a mesh network that transmits on the mesh outbound channel.
The outbound channel in a mesh network has the following capabilities:
Bandwidth Management (QoS): The outbound channel possesses the full suite of QoS (Quality
of Service) functionality provided by iDirect. This includes Committed Information Rate (CIR),
minimum and maximum information rates, Class Based Weighted Fair Queuing (CBWFQ), etc.
Group QoS is fully supported for mesh networks beginning with iDS Release 8.2.
Centralized Management: The iDirect mesh network can be managed from the centralized
Network Operations Center (NOC) running the iDirect NMS applications. The hub provides
connectivity for this centralized network management.
Network Synchronization: The iDirect TDMA inbound channels take advantage of significant
bandwidth efficiency and performance enhancements provided by the accurate timing and
frequency synchronization that the outbound channel provides. The centralized hub provides
the frequency and timing references to the remote terminals over the outbound channel. This
results in lower equipment costs for the remote terminals.
The hub receives the D-TDMA inbound channels but does not transmit on these channels. The remote terminals are assigned transmit time slots on the inbound channels based on the dynamic bandwidth allocation algorithms provided by the hub.
The D-TDMA channels provide the following capabilities:
Multiple Frequencies: A mesh network can contain one or more D-TDMA mesh inbound
channels for remote-to-remote and remote-to-hub connectivity within an inroute group. Each
terminal is able to quickly hop between these frequencies to provide the same efficient
bandwidth usage as a single large TDMA channel, but without the high-power output and large
antenna requirements for large mesh inbound channels. Beginning with iDS Release 8.2,
iDirect supports separate inbound carriers with different data rates for star and mesh. See
“Mesh/Star Frequency Hopping ” on page 18 for details.
Dynamic Allocation: Bandwidth is only assigned to remote terminals that need to transmit
data, and is taken away from idle terminals. These allocation decisions are made several
times a second by the hub which is constantly monitoring the bandwidth demands of the
remote terminals. The outbound channel is then used to transmit the dynamic bandwidth
allocation of the mesh inbound carriers.
Single Hop: Data is able to traverse the network directly from a remote terminal to another
remote terminal with a single trip over the satellite. This is critical for latency-sensitive
applications, such as voice and video connections.
iDirect networks support a number of features, including the following:
• Application and Group QoS
• Voice jitter handling
• IP routing
• TCP/HTTP acceleration
• cRTP
All such iDirect features are valid and available for mesh networks.
Note: The different sizes of the star and mesh carriers in the figure represent the
higher power transmission required for mesh inroutes to operate at the
contracted power.
Multiple mesh and star inroute groups may co-exist in a single network. Each mesh inroute
group uses its own inbound channels for remote-to-remote traffic within the respective group
and for star return traffic. Aside from the bandwidth required and slot availability in a hub chassis for each inroute, there are no limitations on the number or combination of inroute groups in a network. However, a mesh inroute group is limited to 250 remotes and eight inroutes.
[Figure: A hub serving a star remote group and a mesh remote group over star and mesh outbound/inbound carriers, plus a private hub serving its own mesh remote group over a separate mesh outbound.]
Network Topology
When determining the best topology for your iDirect Mesh network, you should consider the
following points regarding TCP traffic acceleration, single-hop versus double-hop traffic,
traffic between mesh inroute groups, and the relative size of your star and mesh carriers.
All unreliable (un-accelerated) traffic between mesh-enabled remotes in an inroute group
takes a single hop. By default, all reliable traffic between the same remotes is accelerated
and takes a double-hop. This must be considered when determining the outbound channel
bandwidth.
In certain networks, the additional outbound traffic required for double-hop traffic may not
be acceptable. For example, in a network where almost all the traffic is remote-to-remote,
there is no requirement for a large outbound channel, other than for the accelerated TCP
traffic. In that case, iDirect provides the ability to configure TCP traffic to take a single hop
between mesh remotes. However, the single hop TCP traffic will not be accelerated.
Note: When TCP acceleration (sometimes called “spoofing”) is disabled, each TCP
session is limited to a maximum of 128 kbps due to latency introduced by the
satellite path. Under ideal conditions, maximum throughput is 800 kbps
without acceleration.
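The un-accelerated ceiling follows from the TCP bandwidth-delay product. The sketch below assumes a standard 64 KB receive window and an illustrative 650 ms round-trip time; both inputs are assumptions, not values taken from this guide.

    window_bits = 64 * 1024 * 8     # assumed 64 KB TCP receive window
    rtt_s = 0.65                    # assumed single-hop round-trip time, seconds

    max_throughput_kbps = window_bits / rtt_s / 1000
    print(f"un-accelerated TCP ceiling ~= {max_throughput_kbps:.0f} kbps")   # ~807 kbps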
A network may consist of multiple inroute groups. Although un-accelerated traffic within a
mesh inroute group takes a single hop, all traffic between inroute groups takes a double hop.
For example, if a network contains two mesh inroute groups (group A and group B), then a
mesh remote in group A can communicate with a mesh remote in group B only via the hub.
You may configure different symbol rates for star and mesh carriers in an inroute group. The
symbol rate for star carriers must be between 64 and 5750 ksym/s. The symbol rate for mesh
carriers must be between 128 ksym/s and 2048 ksym/s.
When configuring two symbol rates, the following restrictions apply (a validation sketch follows the list):
• All carriers (star and mesh) must have the same FEC rate and modulation type.
• All star carriers in the inroute group must have the same symbol rate.
• All mesh carriers in the inroute group must have the same symbol rate.
• The symbol rate for star-only carriers must be greater than or equal to the symbol rate for
mesh carriers.
• The difference between the two symbol rates must be a multiple of 2^n, where n is an integer.
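The sketch below encodes these checks for illustration. The ranges come from the text; the same-FEC and 2^n rules are noted as comments because the text does not pin down every detail. iBuilder remains the authoritative validator.

    def check_symbol_rates(star_ksym, mesh_ksym):
        """Return a list of violated restrictions (empty means acceptable)."""
        problems = []
        if not 64 <= star_ksym <= 5750:
            problems.append("star symbol rate must be 64-5750 ksym/s")
        if not 128 <= mesh_ksym <= 2048:
            problems.append("mesh symbol rate must be 128-2048 ksym/s")
        if star_ksym < mesh_ksym:
            problems.append("star symbol rate must be >= mesh symbol rate")
        # Also required: one shared rate per carrier type, the same FEC rate and
        # modulation on all carriers, and a rate difference that is a multiple
        # of 2^n; n is not specified in the text, so that check is omitted here.
        return problems

    print(check_symbol_rates(1024, 512))   # [] -> acceptable
    print(check_symbol_rates(512, 1024))   # star rate below mesh rate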
Note: Since there is only one transmitter per remote, the overall data rate a remote
achieves on a star inroute is reduced by the amount of time spent transmitting
on the mesh inroutes. Since it takes a longer time to transmit an equal amount
of data at a lower data rate, the star inroute capacity of the remote can be
significantly reduced by mesh transmissions when different symbol rates are
used for star and mesh.
The following section provides an example of a typical network topology carrying high-volume
star traffic and low-volume mesh traffic.
Frequency Hopping
Mesh Phase II supports frequency hopping between mesh and star inbound channels within an
inroute group.
Figure 13. Mesh Frequency Hopping: Inroute Group with Two Inroutes
Note: Allowing only non-TCP traffic to be transmitted directly from one remote to
another adds to the QoS functionality within the iDirect platform. By default,
only allowing the traffic that benefits from a single hop between remotes results
in fewer configuration issues for the Network Operator. Mesh inbound channels
can be scaled appropriately for time-sensitive traffic such as voice and video.
Routing
Prior to the introduction of the mesh feature, all upstream data from a remote was routed
over the satellite to the hub protocol processor. With the introduction of iDirect Mesh,
additional routing information is provided to each remote in the form of a routing table. This
table contains routing information for all remotes in the mesh inroute group and the subnets
behind those remotes. The routing table is periodically updated based on the addition or
deletion of new remotes in the mesh inroute group; the addition or deletion of static routes in
the NMS; enabling or disabling of RIP; or in the event of failure conditions detected on the
remote or line card. The mesh routing table is periodically multicast to all remotes in the
mesh inroute group.
To increase remote-to-remote availability, the system provides data path redundancy for
mesh traffic. It is possible for a remote to drop out of the mesh network due to a deep rain
fade at the remote site. The remote detects this condition when it fails to receive its own
bursts. However, because the hub has a large antenna, the remote may still be able to
operate in the star network. Under these circumstances, the mesh routing table is updated,
causing all traffic to and from that remote to be routed through the hub. When the rain fade
passes, the mesh routing table is updated again, and mesh traffic for that remote again takes
the single-hop path.
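The fallback can be pictured as a routing-table rewrite. The sketch below uses hypothetical remote names and an invented table layout; it is not the format of the multicast routing update.

    mesh_routes = {
        "10.1.1.0/24": {"via": "remote_b", "path": "mesh"},   # single hop
        "10.1.2.0/24": {"via": "remote_c", "path": "mesh"},
    }

    def apply_rain_fade(routes, failed_remote):
        """Re-point routes for a faded remote through the hub (double hop)."""
        for route in routes.values():
            if route["via"] == failed_remote:
                route["path"] = "hub"   # star fallback until the fade passes

    apply_rain_fade(mesh_routes, "remote_b")
    print(mesh_routes["10.1.1.0/24"])   # {'via': 'remote_b', 'path': 'hub'}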
To operate in the mesh network, a mesh remote requires power, frequency and timing
information determined by the hub from its SCPC loopback signal. Because of this, the entire
mesh network falls back to star mode if the hub fails to receive its loopback. In that event,
the routing table is updated causing all traffic to or from all remotes to be routed through the
hub. Once the hub re-acquires the outbound loopback signal, the mesh routing table is again
updated and the remotes rejoin the mesh network.
Hardware Requirements
This section describes the hub and remote hardware requirements for mesh networks. Please
refer to the section “Mesh Feature Set and Capability Matrix” on page 36 for a detailed list of
iDirect products and features that support mesh.
TDM Outbound Receiver: Continuously demodulates the outbound carrier from the hub and
provides the filtered IP packets and network synchronization information. The outbound
receiver connects to the antenna LNB via the L-band receive IFL cable. The down-converted
satellite spectrum from the LNB is also provided to the D-TDMA receiver.
TDMA Satellite Transmitter: The TDMA transmitter is responsible for transmitting all data
destined for the hub or for other remote terminals over the satellite TDMA channels.
TDMA Satellite Receiver: The TDMA receiver is responsible for demodulating a TDMA carrier
to provide remote-to-remote mesh connectivity. The receiver tunes to the channel based on
control information received from the hub.
Note: Compared to star VSAT networks, where the small dish size and low-power BUC
are acceptable for many applications, a mesh network typically requires both
larger dishes and larger BUCs to close the link. See "Network Considerations" on
page 22.
Whenever possible, iBuilder enforces hardware requirements during network configuration.
Network Considerations
This section discusses the following topics with respect to iDirect Mesh networks: “Link
Budget Analysis,” “Uplink Control Protocol (UCP),” and “Bandwidth Considerations.”
Note: iDirect recommends that a Link Budget Analysis (LBA) be performed for each site to determine optimal network performance and cost.
• Antenna Size: Using the same carrier parameters as those used for the Reference site, the antenna size is correctly determined when the PEB is less than or equal to the reference PEB.
• HPA Size: Use the same carrier parameters as those used for the Reference site to
determine the required HPA size.
Frequency
In a star configuration, frequency offsets introduced to the upstream signal (by frequency
down-conversion at a remote’s LNB, up-conversion at a remote’s BUC, satellite frequency
translation, and down-conversion at the hub) are all nulled out by Uplink Control Protocol
messages from the hub to each remote. This occurs every 20 seconds. Short-term frequency
drift by each remote can be accommodated by the hub because it uses a highly stable
reference to demodulate each burst.
A remote does not have such a highly stable local reference source. The remote uses the
outbound channel as a reference source for the inbound channel. A change in temperature of
a DRO LNB can cause a significant frequency drift to the reference. In a mesh network, this
can have adverse effects on both the SCPC outbound and TDMA inbound carriers, resulting in
a remote demodulator that is unable to reliably recover data from the mesh channel. A PLL
LNB offers superior performance, since it is not subject to the same short term frequency
drift.
Power
A typical iDirect star network consists of a hub with a large antenna, and multiple remotes
with small antennas and small BUCs. In a star network, UPC adjusts each remote’s transmit
power on the inbound channel until a nominal carrier-to-noise signal strength of
approximately 9 dB is achieved at the hub. Because of the large hub antenna, the operating
point of a remote is typically below the contracted power (EPEBW) at the satellite. For a
mesh network, where remotes typically have smaller antennas than the hub, a remote does not reliably receive data from another remote using the same power. It is therefore important to maximize the use of all available power.
UPC for a mesh network adjusts the remote Tx power so that it always operates at the EIRP at
beam center on the satellite to close the link, even under rain fade conditions. This can be
equal to or less than the contracted power/EPEBW. Larger antennas and BUCs are required to
meet this requirement. The EIRP at beam center and the size of the equipment are calculated
based on a link budget analysis.
The UPC algorithm uses a combination of the following parameters to adjust each remote
transmit power to achieve the EIRP@BC at the satellite:
• Clear-sky C/N for each TDMA inbound and for the SCPC outbound loopback channels
(obtained during hub commissioning)
• The hub UPC margin (the extent to which external hub-side equipment can accommodate hub UPC; see note 1 below)
• The outbound loopback C/N at the hub
• Each remote inbound C/N at the hub
The inbound UPC algorithm determines hub-side fade, remote-side fade, and correlated fades
by comparing the current outbound and inbound signal strengths against those obtained
during clear sky calibration. For example, if the outbound loopback C/N falls below the clear
sky condition, it can be assumed that a hub-side fade (compensated by hub side UPC)
occurred. Assuming no remote side fade, an equivalent downlink fade of the inbound channel
would be expected. No power correction is made to the remote. If hub-side UPC margin is
exceeded, then outbound loopback C/N is affected by both uplink and downlink fade and a
significant difference compared to clear sky would be observed.
Similarly if the inbound C/N drops for a particular remote and the outbound loopback C/N
does not change compared to the clear sky value, UPC increases the remote transmit power
until the inbound channel clear sky C/N is attained. Similar C/N comparisons are made to
accommodate correlated fades.
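The comparison logic can be sketched as follows. The tolerance value and the suggested correction are illustrative assumptions; this is the shape of the decision, not iDirect's actual UPC algorithm.

    def classify_and_adjust(loopback_cn_db, inbound_cn_db,
                            clear_loopback_db, clear_inbound_db,
                            hub_upc_margin_db):
        """Classify a fade from C/N deltas and suggest a correction (illustrative)."""
        loopback_drop = clear_loopback_db - loopback_cn_db
        inbound_drop = clear_inbound_db - inbound_cn_db
        tolerance = 0.5   # assumed measurement tolerance, dB

        if tolerance < loopback_drop <= hub_upc_margin_db:
            # Hub-side fade: hub UPC compensates, and an equivalent downlink fade
            # is expected on the inbound, so the remote's power is left alone.
            return "hub-side fade: no remote power correction"
        if loopback_drop > hub_upc_margin_db:
            return "hub uplink plus downlink fade: re-evaluate against clear sky"
        if inbound_drop > tolerance:
            # Remote-side fade: raise remote Tx power back toward clear-sky C/N.
            return f"remote-side fade: increase remote Tx power ~{inbound_drop:.1f} dB"
        return "clear sky: no correction"

    print(classify_and_adjust(9.0, 7.0, 9.2, 9.1, hub_upc_margin_db=3.0))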
UPC now operates on a per carrier basis. Each carrier’s individually commissioned clear-sky
C/N is used by the algorithm when monitoring and adjusting the carrier.
Note: For each remote in a mesh network, the inbound C/N at the hub is likely to be
greater than that typically observed in a star network. Also, when a remote is
in the mesh network, the nominal C/N signal strength value for a star network
is not used as the reference.
In the event of an outbound loopback failure, the UPC algorithm reverts to star mode. This
redundancy allows remotes in a mesh inroute group to continue to operate in star only mode.
Figure 17 illustrates Uplink Power Control.
1. iDirect equipment does not support hub-side UPC. Typical RFT equipment at a teleport installation uses a beacon receiver to measure downlink fade. An algorithm running in the beacon receiver calculates the equivalent uplink fade and adjusts an attenuator to ensure a constant power (EPEBW) at the satellite for the outbound carrier. The beacon receiver and attenuator are outside of iDirect's control. For a hub without UPC, the margin is set to zero.
Timing
An inbound channel consists of a TDMA frame with an integer number of traffic slots. In a star network, the arrival time at the hub of the start of the TDMA frame is determined during the acquisition process. The acquisition algorithm adjusts the start of transmission of the frame for each remote in time, such that each frame arrives at the satellite at exactly the same time. The burst scheduler in the protocol processor ensures that two remotes do not burst at the same time. With this process, the hub line card knows when to expect each burst relative to the outbound channel transmit reference. As the satellite moves within its station-keeping box, the uplink control protocol adjusts the start timing of the frame for each remote, so that the inbound channel frame always arrives at the hub at the same time.
A similar mechanism that informs a remote when to expect the start of frame for the inbound
channel is required. This is achieved by determining the round trip time for hub-to-satellite-
to-hub from the outbound channel loopback. This information is relayed to each remote. An
algorithm determines when to expect the Start of the inbound channel, and determines burst
boundaries.
Note: A mesh remote listens to all inbound channel bursts, including bursts it originates.
Only those bursts transmitted from other remotes and destined for that remote
are processed by software. All other traffic is dropped, including bursts
transmitted by the remote itself.
Bandwidth Considerations
When determining bandwidth requirements for a mesh network, it is important to understand
that there are a number of settings that must be common to all remotes in an inroute group.
In a star network, a VLAN can be configured on a hub-remote pair basis. For a mesh network,
all remotes in the inroute group must have a common VLAN configuration. VLAN IDs require
two bytes of header for transmission. An additional two bytes is required to indicate that the
destination applies to mesh traffic only. Star traffic is unaffected; however, if VLAN is
configured, it is also enabled for traffic to the hub.
In a star network, remote status is periodically sent to the hub and reported in iMonitor. With
the same periodicity, additional status information is reported on the health of the mesh link.
This traffic is nominal.
There is a finite amount of processing capability on any remote. A mesh remote receives and
processes star outbound traffic; processes and sends star and mesh inbound traffic; and
receives and processes mesh inbound traffic. The amount of traffic a remote can sustain on the outbound and inbound channels varies greatly depending on the short-term ratio of these traffic types. It must be understood that although a line card can support an inbound channel of 4 Mbps aggregated across many remotes, a remote-to-remote connection will not support this rate. However, a
remote does drop inbound traffic not destined for it, thereby limiting unnecessary processing
of bursts. Sample performance curves are available from iDirect.
Mesh Commissioning
The commissioning of a mesh network requires a few steps that are not required for the
commissioning of a star network. Due to the requirement for the mesh inbound channel to
operate at the contracted power point on the satellite, calibration of both the outbound
loopback and each mesh inbound channel at the hub under clear sky conditions is required
during commissioning. Signal strength measurements (C/N) of the respective channels
observed in iMonitor are recorded in iBuilder. The clear sky C/N values obtained during
commissioning are used for uplink power control of each remote.
Note: In a mesh network, where relatively small antennas (compared to the hub
antenna) are used at remote sites, additional attention to Link Budget Analysis
(LBA) is required. Each remote requires an LBA to determine antenna and BUC
size for the intended availability and data rate.
Note: In order for a mesh network to operate optimally and to prevent over-driving
the satellite, commissioning must be performed under clear sky conditions. See
the iBuilder User Guide for more information.
Pre-Migration Tasks
Prior to converting an existing star network to a mesh network, iDirect recommends that you
perform the following:
• A link budget analysis comparison for star/mesh versus the star-only network.
• Verification of the satellite transponder configuration for the hub and each remote. All
hubs and remotes must be in the same geographic footprint. They must be able to receive
their own transmit signals. This precludes the use of the majority of spot beam and hemi-beam transponders for mesh networks.
• Verification that all ODU hardware requirements are met: externally referenced PLL LNBs
for Private Hubs; PLL LNB for all remotes; and BUC and antenna sizing for a given data
rate.
• Calibration of each outbound and inbound channel to determine clear sky C/N values.
• Re-commissioning of each remote. This applies to initial transmit power only, and can be
achieved remotely.
Migration Tasks
To migrate from a star to a mesh network, perform the following:
• Reconfigure the M1D1 Tx Line Card. In iBuilder, a check box to indicate that the card is
enabled for mesh automatically generates the required configuration for the outbound
loopback carrier. The outbound channel clear sky and UPC margin information must also be entered in iBuilder.
• Calibrate the inbound carrier on an M1D1 or M0D1. This is performed at the same time as
commissioning the first remote in a mesh inroute group. See the iBuilder User Guide for
more information. Subsequent mesh inbound channels can be calibrated and added to the
network without affecting existing outbound or inbound channels.
• Re-commission the initial Tx power setting and record the outbound and inbound clear sky
C/N conditions. Selection of mesh in iBuilder automatically configures the second
demodulator for the inbound channel. Incorrect commissioning of a remote may prevent
the remote from acquiring into the network.
If Mesh is disabled at the line card or inroute group level, all mesh traffic stops for that inroute group, regardless of the settings for each remote in the group. Frequency requirements still apply.
The Mesh options in Figure 19 are only available at the inroute group level if the Enabled
check box is selected. If Mesh is enabled, the values set at the inroute group level apply to all
remotes in the inroute group, possibly overriding the individual remote settings. If Mesh is
disabled, the individual remote settings are honored.
Note: These additional fields are sent in the remote status message only for mesh-
enabled remotes. Non-mesh remotes do not incur the additional overhead for
this information, and archived information for non-mesh remotes is not meaningful.
• Mesh traffic will never show up on the IP statistics display, since this display represents
traffic upstream from the protocol processor.
Consider Table 2. Assume that Remote 1 and Remote 2 are passing 150 kbps of traffic between
each other. At the same time, Remote 1 is also sending 150 kbps of traffic to the Internet. The
Mesh, SAT, and IP traffic graphs will show the following statistics for these two remotes:
The IP traffic graph will show 150 kbps on the upstream for Remote 1.
The SAT traffic graph will show 450 kbps on the upstream for Remotes 1 and 2, 300 kbps for
the mesh traffic and 150 kbps for the Internet-bound traffic.
The mesh traffic graph will show 300 kbps received and 300 kbps transmitted for Remotes 1
and 2, as shown in Table 2.
Note: In the example above, the total throughput on the channel is not 600 kbps.
Each byte in mesh is actually counted twice: once by the sender and once by the
receiver.
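The accounting in this example can be reproduced directly:

    mesh_r1_to_r2 = 150    # kbps, single hop over the mesh inroute
    mesh_r2_to_r1 = 150
    r1_to_internet = 150   # kbps, star traffic through the hub

    ip_upstream_r1 = r1_to_internet            # 150: the IP graph sees only hub-bound traffic
    sat_upstream = mesh_r1_to_r2 + mesh_r2_to_r1 + r1_to_internet   # 450 kbps total
    mesh_tx = mesh_r1_to_r2 + mesh_r2_to_r1    # 300 kbps transmitted, and counted
    mesh_rx = mesh_tx                          # again at the receivers: 300 kbps received

    print(ip_upstream_r1, sat_upstream, mesh_tx, mesh_rx)   # 150 450 300 300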
You may use the mesh IP statistics to determine if there is mesh traffic loss on the link. In
order to do this, you must select all mesh remotes for the display. When you do this, the
transmitted kbps and received kbps should be identical. If they are not, it is likely that there is
packet loss across the mesh link.
[Figure: Example topology with mesh remotes, the protocol processor, an upstream LAN segment, and an upstream router connecting to the Internet.]
Note: Probe Mesh is primarily intended for debugging. When Probe Mesh is enabled,
the remotes send debug information to iMonitor. This increases the processing
on the remotes and uses upstream bandwidth that could otherwise be used to
send traffic.
Product Type           iDirect Model                                             Phase Supported
Line Card              M1D1 (Required for mesh outroute and supports inroute)    Phase I and II
Line Card              M1D1-T (Required for TRANSEC for mesh outroute and        Phase II
                       supports inroute)
Line Card              M0D1 (Supports mesh inroute)                              Phase I and II
Private Hub            Private Hub (Mesh)                                        Phase I
5000 iNFINITI Series   5300                                                      Phase I and II
5000 iNFINITI Series   5350                                                      Phase I and II
7000 iNFINITI Series   7300                                                      Phase I and II
7000 iNFINITI Series   7350                                                      Phase I and II
8000 iNFINITI Series   8350                                                      Phase II
8000 Evolution Series  e8350 (DVB-S2 hardware)*                                  Phase II
iConnex                iConnex 100                                               Phase II
iConnex                iConnex 700 (Formerly iConnex 200)                        Phase I and II

* In Releases 8.2 and 8.3, DVB-S2 hardware works in legacy mode only.
** Cross-strapped capability may be developed in future iDS releases.
This chapter describes the modulation modes and Forward Error Correction (FEC) rates that
are supported in iDS Release 8.3. It also describes possible upstream and downstream
combinations.
Note: For specific Eb/No values for each FEC rate and Modulation combination, refer
to the iDirect Link Budget Analysis Guide, which is available for download from
the TAC web page located at http://tac.idirect.net.
[Table: Supported modulation modes, FEC rates, and payload sizes for TDMA, Spread Spectrum, and iSCPC carriers; see the footnotes below.]
§ SCPC channel framing uses a modified HDLC header, which requires bit-stuffing to prevent false end-of-frame
detection. The actual payload is variable, and always slightly less than the numbers indicated in the table.
§§ The TDMA Payload Bytes value removes the TDMA header overhead of 10 bytes: Demand=2 + LL=6 + PAD=2.
SAR, Encryption, and VLAN features add additional overhead.
§§§ This FEC combination is not recommended for new designs. For new network designs, iDirect recommends
using FEC 0.495. If you have an existing network operating at an information rate of 10 Msps or greater, the
network may experience errors due to an FEC decoding limitation.
This section provides information about Spread Spectrum technology in an iDirect network. It
discusses the following topics:
• “What is Spread Spectrum?” on page 41
• “Downstream Specifications” on page 43
• “Upstream Specifications” on page 44
Spreading takes place when the input data (dt) is multiplied with the PN code (pnt) which
results in the transmit baseband signal (txb). The baseband signal is then modulated and
transmitted to the receiving station. Despreading takes place at the receiving station when
the baseband signal is demodulated (rxb) and correlated with the replica PN (pnr) which
results in the data output (dr).
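A toy illustration of this multiply-and-correlate process, using a spreading factor of 4 and ±1 chips. The PN sequence is invented for the example, and modulation and channel effects are omitted.

    import random

    SF = 4                                             # chips per symbol
    data = [1, -1, -1, 1]                              # dt: one value per symbol
    pn = [random.choice((1, -1)) for _ in range(SF)]   # pnt (pnr is a replica)

    # Spread: multiply each data symbol chip-by-chip with the PN code (txb).
    txb = [d * chip for d in data for chip in pn]

    # Despread: correlate the received baseband with the replica PN and average
    # over each symbol's chips (rxb = txb here: no noise, no channel).
    rxb = txb
    dr = [sum(rxb[i * SF + j] * pn[j] for j in range(SF)) // SF
          for i in range(len(data))]
    assert dr == data    # the original data is recovered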
Beginning with iDS Release 8.0, Spread Spectrum transmission is supported in both TDMA and
SCPC configurations. SS mode is employed in iDirect networks to minimize adjacent satellite
interference (ASI). ASI can occur in applications such as Comms-On-The-Move (COTM) because
the small antenna (typically sub-meter) used on mobile vehicles has small aperture size, large
beam width, and high pointing error which can combine to cause ASI. Enabling SS reduces the
spectral density of the transmission so that it is low enough to avoid interfering with adjacent
satellites.
Conversely, when receiving through a COTM antenna, SS improves carrier performance in cases of ASI (that is, a degraded carrier-to-interference ratio).
The iDirect SS is an extension of BPSK modulation in both upstream and downstream. The
signal is spread over wider bandwidth according to a Spreading Factor (SF) that you select.
You can select an SF of 1, 2, or 4. Each symbol in the spreading code is called a “chip”, and
the spread rate is the rate at which chips are transmitted. For example, selecting an SF of 1
means that the spread rate is one chip per symbol (which is equivalent to regular BPSK, and
therefore, there is no spreading). Selecting an SF of 4 means that the spread rate is four chips
per symbol.
An additional Spreading Factor, COTM SF=1, is for upstream TDMA carriers only. Like an SF of
1, if you select COTM SF=1, there is no spreading. However, the size of the carrier unique
word is increased, allowing mobile remotes to remain in the network when they might
otherwise drop out. An advantage of this spreading factor is that you can receive error-free
data at a slightly lower C/N compared to regular BPSK. However, carriers with COTM SF=1
transmit at a slightly lower information rate.
COTM SF=1 is primarily intended for use by fast moving mobile remotes. The additional unique
word overhead allows the remote to tolerate more than 10 times as much frequency offset as
can be tolerated by regular BPSK. That makes COTM SF=1 the appropriate choice when the
Doppler effect caused by vehicle speed and acceleration is significant even though the link
budget does not require spreading. Examples include small maritime vessels, motor vehicles,
trains, and aircraft. Slow moving, large maritime vessels generally do not require COTM SF=1.
Spread Spectrum can also be used to hide a carrier in the noise of an empty transponder.
However, SS should not be confused with Code Division Multiple Access (CDMA), which is the
process of transmitting multiple SS channels simultaneously on the same bandwidth.
Spread Spectrum may also be useful in situations where local or RF interference is
unavoidable, such as hostile jamming. However, iDirect designed the Spread Spectrum feature
primarily for COTM and ASI mitigation. iDirect SS may be a good solution for overcoming some
instances of interference or jamming, but it is recommended that you discuss your particular
application with iDirect sales engineering.
Note: You must install the M1D1-TSS HLC in a slot that has one empty slot to the right.
For example, if you want to install the HLC in slot 4, slot 5 must be empty. Be sure
that you also check chassis slot configuration in iBuilder to verify that you are not
installing the HLC in a reserved slot.
Among iNFINITI remotes, spread spectrum is supported only by the 8350 series. The 3000, 5000, and 7000 series iNFINITI remotes do not support spread spectrum.
Downstream Specifications
The specifications for the spread spectrum downstream channel are outlined in Table 6.
Note: Beginning with iDS Release 8.2, the iBuilder selections for Spreading Factors of
2 and 4 on the iBuilder Carrier Information tab changed to COTM SF=2 and
COTM SF=4. These Spreading Factors are identical to 2 and 4 in iDS Release
8.0.
Upstream Specifications
The specifications for the spread spectrum upstream channel are outlined in Table 8. The
Spreading Factor COTM 1, used in fast moving mobile applications, is described on page 42.
Note: Beginning with iDS Release 8.2, the iBuilder selections for Spreading Factors of 2
and 4 on the iBuilder Carrier Information tab changed to COTM SF=2 and COTM
SF=4. These Spreading Factors are identical to 2 and 4 in iDS Release 8.0.
Parameter                  Value
Modulation                 BPSK
Spreading Factor           1, COTM 1, 2, or 4
Symbol Rate                64 ksym/s - 1.875 Msym/s
Chip Rate                  7.5 Mchip/s maximum
FEC Rate                   0.66, 0.431, 0.533
BER Performance            Refer to the iDirect Link Budget Analysis Guide
Maximum Frequency Offset   1/8% of Fsym
Unique Word Overhead       128 symbols
Burst Size                 1024 bits
Occupied Bandwidth         1.2 * Symbol Rate
Hardware Platform          iNFINITI series 8350
This chapter describes how you can configure Quality of Service definitions to achieve
maximum efficiency by prioritizing traffic.
QoS Measures
When discussing QoS, at least four interrelated measures are considered. These are
Throughput, Latency, Jitter, and Packet Loss. This section describes these parameters in
general terms, without specific regard to an iDirect network.
Throughput. Throughput is a measure of capacity and indicates the amount of user data that
is received by the end user application. For example, a G.729 voice call without additional compression (such as cRTP) or voice suppression requires a constant 24 Kbps of application-level RTP data to achieve acceptable voice quality for the duration of the call. Therefore, this application requires 24 Kbps of throughput. When adequate throughput cannot be achieved on a continuous basis to support a particular application, QoS can be adversely affected.
Latency. Latency is a measure of the amount of time between events. Unqualified latency is
the amount of time between the transmission of a packet from its source and the receipt of
that packet at the destination. If explicitly qualified, it may also mean the amount of time
between a request for a network resource and the time when that resource is received. In
general, latency accounts for the total delay between events and it includes transit time,
queuing, and processing delays. Keeping latency to a minimum is very important for VoIP
applications for human factor reasons.
Packet Loss. Packet Loss is a measure of the number of packets that are transmitted by a
source, but not received by the destination. The most common cause of packet loss on a
network is network congestion. Congestion occurs whenever the volume of traffic exceeds the
available bandwidth. In these cases, packets are filling queues internal to network devices at
a rate faster than those packets can be transmitted from the device. When this condition
exists, network devices drop packets to keep the network in a stable condition. Applications
that are built on a TCP transport interpret the absence of these packets (and the absence of their related ACKs) as congestion, and they invoke standard TCP Slow Start and congestion avoidance techniques. With real-time applications, such as VoIP or streaming video, it is often impossible to gracefully recover lost packets because there is not enough time to retransmit them. Packet loss may affect the application in adverse ways. For example, parts of words in a voice call may be missing or there may be an echo; video images may break up or become block-like (pixelation effects).
Service Levels
A Service Level may represent a single application (such as VoIP traffic from a single IP
address) or a broad class of applications (such as all TCP based applications). Each Service
Level is defined by one or more packet-matching rules. The set of rules for a Service Level
allows logical combinations of comparisons to be made between the following IP packet fields (a minimal matching sketch follows the list):
• Source IP address
• Destination IP address
• Source port
• Destination port
• Protocol (such as DiffServ DSCP)
• TOS priority
• TOS precedence
• VLAN ID
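A minimal matching sketch over these fields; the rule format and field names are illustrative, not the iBuilder schema.

    def matches(rule, packet):
        """A packet matches when every field named in the rule agrees."""
        return all(packet.get(field) == value for field, value in rule.items())

    voip_rule = {"protocol": "UDP", "dst_port": 5060, "vlan_id": 51}
    packet = {"protocol": "UDP", "src_ip": "10.1.1.7", "dst_port": 5060, "vlan_id": 51}

    print(matches(voip_rule, packet))   # True -> packet belongs to this Service Level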
Packet Scheduling
Packet Scheduling is a method used to transmit traffic according to priority and classification.
In a network that has a remote that always has enough bandwidth for all of its applications,
packets are transmitted in the order that they are received without significant delay.
Application priority makes little difference since the remote never has to select which packet
to transmit next.
In a network where there are periods of time in which a remote does not have sufficient bandwidth to transmit all queued packets, the remote's scheduling algorithm must determine which packet, from the set of queued packets across a number of service levels, to transmit next.
For each service level you define in iBuilder, you can select any one of three queue types to
determine how packets using that service level are to be selected for transmission. These are
Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort Queue.
The procedures for defining profiles and service levels are detailed in chapter 8, “Configuring
Quality of Service for iDirect Networks” of the iBuilder User Guide.
Priority Queues are emptied before CBWFQ queues are serviced and CBWFQ queues are in
turn emptied before Best Effort queues are serviced. Figure 23 on page 49 presents an
overview of the iDirect packet scheduling algorithm.
The packet scheduling algorithm (Figure 23) first services packets from Priority Queues in
order of priority, P1 being the highest priority. It selects CBWFQ packets only after all Priority
Queues are empty. Similarly, packets are taken from Best Effort Queues only after all CBWFQ
packets are serviced.
You can define multiple service levels using any combination of the three queue types. For
example, you can use a combination of Priority and Best Effort Queues only.
Priority Queues
There are four levels of user Priority Queues:
• Level 1: P1 (Highest priority)
• Level 2: P2
• Level 3: P3
• Level 4: P4 (Lowest priority)
All queues of higher priority must be empty before any lower-priority queue is serviced. If
two or more queues are set to the same priority level, then all queues of equal priority are
emptied using a round-robin selection algorithm prior to selecting any packets from lower
priority queues.
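The selection order can be sketched as follows. The round-robin among equal-priority queues and the CBWFQ weights are simplified away, so this shows the shape of the algorithm rather than the shipped scheduler.

    from collections import deque

    def next_packet(priority_qs, cbwfq_qs, best_effort):
        """Pick the next packet: P1..P4 first, then CBWFQ, then best effort."""
        for level in sorted(priority_qs):       # 1 = P1 = highest priority
            for q in priority_qs[level]:        # equal-priority queues would be
                if q:                           # drained round-robin; shown here
                    return q.popleft()          # simply in list order
        for q in cbwfq_qs:                      # a real CBWFQ applies per-class
            if q:                               # weights, omitted in this sketch
                return q.popleft()
        return best_effort.popleft() if best_effort else None

    p_queues = {1: [deque(["voip"])], 4: [deque(["telnet"])]}
    print(next_packet(p_queues, [deque(["web"])], deque(["bulk"])))   # voip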
Group QoS
Group QoS (GQoS), introduced in iDS Release 8.0, enhances the power and flexibility of
iDirect’s QoS feature for TDMA networks. It allows advanced network operators a high degree
of flexibility in creating subnetworks and groups of remotes with various levels of service
tailored to the characteristics of the user applications being supported.
Group QoS is built on the Group QoS tree: a hierarchical construct within which containership
and inheritance rules allow the iterative application of basic allocation methods across groups
and subgroups. QoS properties configured at each level of the Group QoS tree determine how
bandwidth is distributed when demand exceeds availability.
Group QoS enables the construction of very sophisticated and complex allocation models. It
allows network operators to create network subgroups with various levels of service on the
same outbound carrier or inroute group. It allows bandwidth to be subdivided among
customers or Service Providers, while also allowing oversubscription of one group’s configured
capacity when bandwidth belonging to another group is available.
Note: Group QoS applies only to TDMA networks. It does not apply to iDirect iSCPC
connections.
Note: If you are upgrading from a pre-8.0 iDirect Release, your TDMA networks can be
converted from the older QoS implementation to comply with the Group QoS
feature. See your Network Upgrade Procedure for special upgrade instructions
regarding this conversion.
For details on using the Group QoS feature, see the chapter titled “Configuring Quality of
Service for iDirect Networks” in the iBuilder User Guide.
Bandwidth Pool
A Bandwidth Pool is the highest node in the Group QoS hierarchy. As such, all sub-nodes of a
Bandwidth Pool represent subdivisions of the bandwidth within that Bandwidth Pool. In the
iDirect network, a Bandwidth Pool consists of an outbound carrier or an inroute group.
Bandwidth Group
A Bandwidth Pool can be divided into multiple Bandwidth Groups. Bandwidth Groups allow a
network operator to subdivide the bandwidth of an outroute or inroute group. Different
Bandwidth Groups can then be assigned to different Service Providers or Virtual Network
Operators (VNO).
Bandwidth Groups can be configured with any of the following:
• CIR and MIR: Typically, the sum of the CIR bandwidth of all Bandwidth Groups equals
the total bandwidth. When MIR is larger than CIR, the Bandwidth Group is allowed to
exceed its CIR when bandwidth is available.
• Priority: A group with highest priority receives its bandwidth before lower-priority
groups.
Service Group
A Service Provider or a Virtual Network Operator can further divide a Bandwidth Group into
sub-groups called Service Groups. A Service Group can be used strictly to group remotes into
sub-groups or, more typically, to differentiate groups by class of service. For example, a
platinum, gold, silver and best effort service could be defined as Service Groups under the
same Bandwidth Group.
Like Bandwidth Groups, Service Groups can be configured with CIR, MIR, Priority and Cost.
Service Groups are typically configured with either a CIR and MIR for a physical separation of
the groups, or with a combination of Priority, Cost and CIR/MIR to create tiered service. By
default, a single Service Group is created for each Bandwidth Group.
Application Group
An Application defines a specific service available to the end user. Application Groups are
associated with any Service Group. The following are examples:
• VoIP
• Video
• Oracle
• Citrix
• VLAN
• NMS Traffic
• Default
Each Application List can have one or more matching rules such as:
• Protocol: TCP, UDP, and ICMP
• Source and/or Destination IP or IP Subnet
• Source and/or Destination Port Number
• DSCP Value or DSCP Ranges
• VLAN
Each Application List can be configured with any of the following (an illustrative tree follows the list):
• CIR/MIR
• Priority
• Cost
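For illustration, the hierarchy might be pictured as the following tree. The nested structure and all values are assumptions for the example, not a configuration exported from iBuilder.

    gqos_tree = {
        "bandwidth_pool": "Outbound 10 Mbps",
        "bandwidth_groups": [{
            "name": "Service Provider A", "cir_kbps": 6000, "mir_kbps": 10000,
            "service_groups": [{
                "name": "Gold", "cir_kbps": 4000, "mir_kbps": 6000, "priority": 1,
                "applications": [
                    {"name": "VoIP",    "cir_kbps": 1000, "priority": 1},
                    {"name": "Default", "cost": 1.0},
                ],
            }],
        }],
    }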
Service Profiles
Service Profiles are derived from the Application Group by selecting Applications and
matching rules and assigning per remote CIR and MIR when applicable. While the Application
Group specifies the CIR/MIR by Application for the whole Service Group, the Service Profile
specifies the per-remote CIR/MIR by Application. For example, the VoIP Application could be
configured with a CIR of 1 Mbps for the Service Group in the Application Group and a CIR of 14
Kbps per-remote in the Service Profile.
Typically, all remotes in a Service Group use the Default Profile for that Service Group. When
a remote is created under an inroute group, the QoS Tab allows the operator to assign the
remote to a Bandwidth Group and Service Group. The new remote automatically receives the
default profile for the Service Group. The Group QoS interface can also be used to assign a
remote to a Service Group or change the assignment of the remote from one Service Group to
another.
In order to accommodate special cases, however, additional profiles (other than the Default
Profile) can be created. For example, profiles can be used by a specific remote to prioritize
an Application that is not used by other remotes; to prioritize a specific VLAN on a remote; or
to prioritize traffic to a specific IP address (such as a file server) connected to a specific
remote in the Service Group. Or a Network Operator may want to configure some remotes for
a single VoIP call and others for two VoIP calls. This can be accomplished by assigning
different profiles to each group of remotes.
Note: Another solution would be to create a single Bandwidth Group with two Service
Groups. This solution would limit the flexibility, however, if the satellite
provider decides in the future to further split each group into sub-groups.
VoIP could also be configured as priority 1 traffic. In that case, demand for VoIP must be fully
satisfied before serving lower priority applications. Therefore, it is important to configure an
MIR to avoid having VoIP consume all available bandwidth.
Note that cost could be used instead of priority if the intention were to have a fair allocation, rather than to satisfy the Platinum service before any bandwidth is allocated to Gold, and then satisfy the Gold service before any bandwidth is allocated to Silver. For example (an allocation sketch follows the list):
• Platinum – Cost 0.1 - CIR 6 Mbps, MIR 12 Mbps
• Gold – Cost 0.2 - CIR 6 Mbps, MIR 18 Mbps
• Silver – Cost 0.3 - No CIR, No MIR Defined
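One plausible reading of cost-based allocation is that, once CIRs are met, spare bandwidth is shared in proportion to 1/cost, so a lower cost yields a larger share. The sketch below illustrates that reading with the costs above; it is not iDirect's allocator.

    def share_spare(spare_kbps, groups):
        """Split spare bandwidth in proportion to 1/cost (assumed semantics)."""
        weights = {g["name"]: 1.0 / g["cost"] for g in groups}
        total = sum(weights.values())
        return {name: spare_kbps * w / total for name, w in weights.items()}

    groups = [{"name": "Platinum", "cost": 0.1},
              {"name": "Gold",     "cost": 0.2},
              {"name": "Silver",   "cost": 0.3}]

    print(share_spare(3000, groups))
    # Platinum ~1636 kbps, Gold ~818 kbps, Silver ~545 kbps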
In this scenario, the network operator wants to use a single outbound carrier of 10 Mbps to provide service for two companies, Company A and Company B, while ensuring that each customer receives the bandwidth that it paid for. The scenario is complicated by the fact that, on oil rigs where both companies are present, the network operator would like to use a single remote to serve both by separating their terminals into VLAN-51 for Company A and VLAN-52 for Company B. Both companies would also like to prioritize their VoIP traffic.
Configuration:
If we had separate remotes for each company, this would be a simple “Physical Segregation”
scenario. However, keeping both companies in the same Service Group and allocating
bandwidth by VLAN and application would not provide the strict separation of 8 Mbps for
Company A and 2 Mbps for Company B. Instead, the solution is to create two Service Groups:
• Company A: CIR/MIR 8 Mbps/8 Mbps
• Company B: CIR/MIR 2 Mbps /2 Mbps
Service Profiles for both companies would have VoIP and Default with the appropriate priority,
cost, CIR and MIR. In order to allow the same remote to serve both companies, the remote is
assigned to both Service Groups as shown in Figure 29. Note that this is an unusual
configuration and is not recommended for the typical application.
Application Throughput
Application throughput depends both on proper QoS classification and prioritization and on proper management of the available bandwidth. For example, if a VoIP application requires 16 Kbps and a remote is only given 10 Kbps, the application fails regardless of priority, since there is not enough available bandwidth.
Bandwidth assignment is controlled by the Protocol Processor. As a result of the various
network topologies (for example, a shared TDM downstream with a deterministic TDMA
upstream), the Protocol Processor has different mechanisms for downstream control versus
upstream control. Downstream control of bandwidth is provided by continuously evaluating
network traffic flow and assigning bandwidth to remotes as needed. The Protocol Processor
assigns bandwidth and controls the transmission of packets for each remote according to the
QoS parameters defined for the remote’s downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst time plan which assigns individual bursts to specific remotes. The
burst time plan is produced once per TDMA frame (typically 125 ms or 8 times per second).
Note: There is a 250 ms delay from the time that the remote makes a request for
bandwidth and when the Protocol Processor transmits the burst time plan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.
QoS Properties
There are several QoS properties that you can configure based on your traffic throughput
requirements. These are discussed in the sections that follow. For information on configuring
these properties, see chapter 8, "Configuring Quality of Service for iDirect Networks" of the
iBuilder User Guide.
Static CIR
You can configure a static Committed Information Rate (CIR) or an upstream minimum
information rate for any upstream (TDMA) channel. Static CIR is bandwidth that is guaranteed
even if the remote does not need the capacity. By default, a remote is configured with a single slot per TDMA frame. Increasing this value is an inefficient configuration because the additional slots are wasted whenever the remote is inactive; no other remote can be given these slots unless the remote with the static CIR has not been acquired into the network. Static CIR is the highest-priority upstream bandwidth and applies only in the upstream direction. The downstream does not need or support the concept of a static CIR.
Dynamic CIR
You can configure Dynamic CIR values for remotes in both the downstream and upstream
directions. Dynamic CIR is not statically committed and is granted only when demand is
actually present. This allows you to support CIR based service level agreements and, based on
statistical analysis, oversubscribe networks with respect to CIR. If a remote has a CIR but
demand is less than the CIR, only the actual demanded bandwidth is granted. It is also
possible to indicate that only certain QoS service levels “trigger” a CIR request. In these
cases, traffic must be present in a triggering service level before the CIR is granted. Triggering
is specified on a per-service level basis.
Additional burst bandwidth is assigned evenly among all remotes in the network by default.
All available burstable bandwidth (BW) is equally divided between all remotes requesting
additional BW, regardless of already allocated CIR.
Previously, a remote in a highly congested network would often not get burst bandwidth
above its CIR. For example, consider a network with a 3 Mbps upstream and three remotes,
R1, R2, and R3. R1 and R2 are assigned a CIR of 1 Mbps each and R3 has no CIR. If all remotes
request 2 Mbps each, 1 Mbps is given to R3, making the total used BW 3 Mbps. In this case, R1
and R2 receive no additional BW.
Using the same example network with even distribution, the additional 1 Mbps of BW is
distributed by giving each remote an additional 333 Kbps. Even bandwidth distribution is the
default configuration.
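A minimal sketch of this default even-distribution behavior follows; the allocate function, names, and Kbps values are illustrative, not the Protocol Processor implementation.

# Sketch of default even distribution of burst bandwidth (Python).
# Each remote first receives min(demand, CIR); remaining upstream
# capacity is split evenly among remotes still demanding more.
def allocate(upstream_kbps, remotes):
    grants = {name: min(demand, cir) for name, demand, cir in remotes}
    leftover = upstream_kbps - sum(grants.values())
    hungry = [(n, d) for n, d, c in remotes if d > grants[n]]
    if hungry and leftover > 0:
        share = leftover // len(hungry)     # even split, regardless of CIR
        for name, demand in hungry:
            grants[name] += min(share, demand - grants[name])
    return grants

# The example network: 3 Mbps upstream; R1 and R2 have 1 Mbps CIR.
print(allocate(3000, [("R1", 2000, 1000),
                      ("R2", 2000, 1000),
                      ("R3", 2000, 0)]))
# {'R1': 1333, 'R2': 1333, 'R3': 333}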
Further QoS configuration procedures can be found in chapter 8, “Configuring Quality of
Service for iDirect Networks” of the iBuilder User Guide.
latency and may result in a poor user experience. iDirect recommends that this feature be
used with care. The iBuilder GUI enforces a minimum of one slot per remote every two
seconds. For more information, please see the section titled “Upstream and Downstream Rate
Shaping” in chapter 7, “Configuring Remotes” of the iBuilder User Guide.
Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the
upstream. When enabled, Sticky CIR favors remotes that have already received their CIR over
remotes that are currently asking for it. When disabled (the default setting), the Protocol
Processor reduces the bandwidth assigned to all remotes to accommodate a new remote
entering the network. Sticky CIR can be configured at the Bandwidth Group and Service
Group levels in iBuilder.
Application Jitter
Jitter is the variation of latency on a packet-by-packet basis of application traffic. For an
application like VoIP, the transmitting equipment spaces each packet at a known fixed interval
(every 20 ms, for example). However, in a packet switched network, there is no guarantee
that the packets will arrive at their destination with the same interval rate. To compensate
for this, the receiving equipment employs a jitter buffer that attempts to play out the
arriving packets at the desired perfect interval rate. To do this it must introduce latency by
buffering packets for a certain amount of time and then playing them out at the fixed
interval.
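The buffering trade-off can be illustrated with a minimal sketch; the 20 ms interval matches the VoIP example above, while the 60 ms buffer depth and the function name are assumptions.

# Sketch of a receive-side jitter buffer (Python). Packets arriving
# with variable delay are held and played out at a fixed interval;
# the buffer delay is the latency deliberately introduced.
PACKET_INTERVAL_MS = 20   # sender pacing (e.g. VoIP)
BUFFER_DELAY_MS = 60      # assumed play-out buffer depth

def playout_times(arrival_times_ms):
    """Map arrival times to fixed-interval play-out times; None = late."""
    start = arrival_times_ms[0] + BUFFER_DELAY_MS
    slots = [start + i * PACKET_INTERVAL_MS for i in range(len(arrival_times_ms))]
    return [slot if arrived <= slot else None
            for arrived, slot in zip(arrival_times_ms, slots)]

print(playout_times([0, 18, 45, 61, 79]))  # [60, 80, 100, 120, 140]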
While jitter plays a role in both downstream and upstream directions, a TDMA network tends
to introduce more jitter in the upstream direction. This is due to the discrete nature of the
TDMA time plan where a remote may only burst in an assigned slot. The inter-slot times
assigned to a particular remote do not match the desired play out rate, which results in jitter.
Another source of jitter is other traffic that a node transmits between (or in front of)
successive packets in the real-time stream. In situations where a large packet needs to be
transmitted in front of a real-time packet, jitter is introduced because the node must wait
longer than normal before transmission.
The iDirect system offers features that limit the effect of such problems; these features are
described in the sections that follow.
Packet Segmentation
Beginning with iDS Release 8.2, Segmentation and Reassembly (SAR) and Packet Assembly and
Disassembly (PAD) have been replaced by a more efficient iDirect implementation. Although
you can continue to configure the downstream segment size in iBuilder, all upstream packet
segmentation is now handled internally for optimal efficiency.
You may wish to change the downstream segment size if you have a small outbound carrier
and need to reduce jitter in your downstream packets. Typically, this is not required. For
details on configuring the downstream segment size, see the chapter on “Configuring
Remotes” in the iBuilder User Guide.
Application Latency
Application latency is typically a concern for transaction-based applications such as credit
card verification systems. For applications like these, it is important that the priority traffic
be expedited through the system and sent, regardless of the less important background
traffic. This is especially important in bandwidth-limited conditions where a remote may only
have only one or a few TDMA slots. In this case, it is important to minimize latency as much as
possible once the QoS prioritization decision has been made, allowing a highly prioritized
packet to move immediately to the front of the transmit queue.
During acquisition, the iNFINITI remote attempts to join the network according to the burst
plan assigned to the remote by the hub. The initial transmit power must be set correctly so
that the remote can join the network and stay in the network. This chapter describes the best
practices for setting Transmit (TX) Initial Power in an iDirect network.
At any time after site commissioning, you can check the TX Initial Power setting by observing
the Remote Status and UCP tabs in iMonitor. If the remote modem is in a “steady state” and
no power adjustments are being made, you can compare the current TX Power to the TX
Initial Power parameter to verify that TX Initial Power is 3 dB higher than the TX Power. For
detailed information on how to set TX Initial Power, refer to the “Remote Installation and
Commissioning Guide”.
Note: Best nominal Tx Power measurements are made during clear sky conditions at
the hub and remote sites.
Ideal Case: Optimal Detection Range
[Figure: C/N axis from 6 to 14 dB with the threshold C/N marked.] Under ideal circumstances,
the average C/N of all remotes on the upstream channel is equal to the center of the UCP
adjustment range. Therefore the optimal detection range extends below the threshold C/N.
(This example illustrates the TPC Rate 0.66 threshold.)
TX Initial Power Too High: Skewed Detection Range
[Figure: C/N axis from 6 to 14 dB with the threshold C/N marked.] When the TX Initial Power
is set too high, remotes entering the network skew the average C/N above the center of the
UCP Adjustment Range. Therefore, during this period the optimal detection range does not
include the threshold C/N, and remotes experiencing rain fade may experience performance
degradation.
Bursts can still be detected below threshold, but the probability of detection and
demodulation is reduced. This can lead to long acquisition times (Figure 32).
TX Initial Power Too Low: Skewed Detection Range
[Figure: C/N axis from 6 to 14 dB with the threshold C/N marked.] When the TX Initial Power
is set too low, remotes entering the network skew the average C/N below the center of the
UCP Adjustment Range. This could cause remotes coming in at the higher end (e.g. 14 dB) to
experience some distortion in the demodulation process. Additionally, a remote acquiring at
a low C/N (below threshold) experiences a large number of CRC errors when it enters the
network until its power is increased.
This chapter describes how the Global NMS works in a global architecture and a sample Global
NMS architecture.
In this example, there are four networks connected to three different Regional Network
Control Centers (RNCCs). A group of remote terminals has been configured to roam among
the four networks.
Note: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
operators, both remote and local, can share the NMS server simultaneously with any number
of VNOs. (Only one VNO is shown in Figure 34.) All users can run iBuilder, iMonitor, or both
on their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet.
Remote NMS connections are made over the public Internet, protected by a VPN or port
forwarding, or over a dedicated leased line.
This chapter describes basic recommended security measures to ensure that the NMS and
Protocol Processor servers are secure when connected to the public Internet. iDirect
recommends that you implement additional security measures over and above these minimal
steps.
Root Passwords
Root password access to the NMS and Protocol Processor servers should be reserved for only
those you want to have administrator-level access to your network. Restrict the distribution
of this password information.
Servers are shipped with default passwords. Change the default passwords after the
installation is complete and make sure these passwords are changed on a regular basis and
when an employee leaves your company.
When selecting your new passwords, iDirect recommends that you follow these practices for
constructing difficult-to-guess passwords (a simple check is sketched after the list):
• Use passwords that are at least 8 characters in length.
• Do not base passwords on dictionary words.
• Use passwords that contain a mixture of letters, numbers, and symbols.
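These practices can be checked mechanically, as sketched below; the word list and exact checks are illustrative assumptions, not an iDirect utility.

# Sketch of a password policy check (Python) matching the guidance above.
import string

DICTIONARY = {"password", "idirect", "admin", "satellite"}  # illustrative

def acceptable(pw: str) -> bool:
    return (len(pw) >= 8                                    # minimum length
            and pw.lower() not in DICTIONARY                # not a dictionary word
            and any(c in string.ascii_letters for c in pw)  # letters
            and any(c in string.digits for c in pw)         # numbers
            and any(c in string.punctuation for c in pw))   # symbols

print(acceptable("Buc!2041x"))  # True
print(acceptable("password"))  # False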
This chapter describes how the Protocol Processor works in a global architecture. Specifically
it contains “Remote Distribution,” which describes how the Protocol Processor balances
remote traffic loading and “De-coupling of NMS and Datapath Components,” which describes
how the Protocol Processor Blades continue to function in the event of a Protocol Processor
Controller failure.
Remote Distribution
The actual distribution of remotes and processes across a blade set is determined by the
Protocol Processor controller dynamically in the following situations:
• At system startup, the Protocol Processor Controller determines the distribution of
processes based on the number of remotes in the network(s).
• When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
• When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor controller does not perform dynamic load-balancing on remotes. Once
a remote is assigned to a particular blade, it remains there unless it is moved due to one of
the situations described above.
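The assignment policy can be summarized in a minimal sketch; the function names and blade labels are illustrative, not the actual Protocol Processor Controller logic.

# Sketch of least-load remote-to-blade assignment (Python). New remotes
# go to the least-loaded blade; a blade failure redistributes its
# remotes across the survivors; no other rebalancing occurs.
def add_remote(blades, remote):
    least = min(blades, key=lambda b: len(blades[b]))
    blades[least].append(remote)

def fail_blade(blades, failed):
    for remote in blades.pop(failed):   # spread orphans over remaining blades
        add_remote(blades, remote)

blades = {"blade1": [], "blade2": []}
for r in ["R1", "R2", "R3"]:
    add_remote(blades, r)
fail_blade(blades, "blade2")
print(blades)  # {'blade1': ['R1', 'R3', 'R2']}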
De-coupling of NMS and Datapath Components
The Protocol Processor de-couples its NMS and datapath components by distributing its
processes across multiple Protocol Processor Blades. A high-level architecture of the
Protocol Processor, with one possible configuration of processes across two blades, is shown
in Figure 35.
[Figure 35: High-level Protocol Processor architecture. The NMS servers monitor and control
processes on both blades. On PP Blade 1, samnc spawns and controls sarmt, sarouter, sana,
and pp_controller; on PP Blade 2, samnc spawns and controls sarmt and sarouter.]
This chapter describes how you can design your network through a Distributed NMS server,
manage it through iDS supporting software, and back up or restore the configuration.
You can distribute your NMS server processes across multiple server machines. The primary
benefits of machine distribution are improved server performance and better utilization of
disk space.
iDirect recommends a distributed NMS server configuration once the number of remotes being
controlled by a single NMS exceeds 500-600. iDirect has tested the new distributed platform
with over 3000 remotes with iDS 7.0.0. Future releases continue to push this number higher.
The most common distribution scheme for larger networks is shown in Figure 36.
This section describes how TRANSEC and FIPS are implemented in an iDirect Network. It
includes the following sections:
• “What is TRANSEC?” defines Transmission Security.
• “iDirect TRANSEC” describes protocol implementation.
• “TRANSEC Downstream” describes the data path from the hub to the remote.
• “TRANSEC Upstream” describes the data path from the remote to the hub.
• “TRANSEC Key Management” describes public and private key usage.
• “TRANSEC Remote Admission Protocol” describes acquisition and authentication.
• “Reconfiguring the Network for TRANSEC” describes conversion requirements.
What is TRANSEC?
Transmission Security (TRANSEC) prevents an adversary from exploiting information available
in a communications channel without necessarily having defeated the encryption inherent in
the channel. Even if an encrypted wireless transmission is not compromised, information such
as timing and traffic volumes can be determined by using basic signal processing techniques.
This information could provide someone monitoring the network with a variety of information
about unit activity. For example, even if an adversary cannot defeat the encryption placed on
individual packets, it might be able to determine answers to questions such as:
• What types of applications are active on the network currently?
• Who is talking to whom?
• Is the network or a particular remote site active now?
• Can network activity be correlated with real-world activity, based on traffic analysis
and correlation?
There are a number of components to TRANSEC, one of them being activity detection. With
current VSAT systems an adversary can determine traffic volumes and communications
activities with a simple spectrum analyzer. With a TRANSEC compliant VSAT system an
adversary is presented with a strongly encrypted and constant wall of data. Other
components of TRANSEC include remote and hub authentication. TRANSEC eliminates the
ability of an adversary to bring a non-authorized remote into a secured network.
iDirect TRANSEC
iDirect achieves full TRANSEC compliance by presenting to an adversary who may be
eavesdropping on the RF link a constant “wall” of fixed-size, strongly encrypted (such as
Advanced Encryption Standard (AES) and 256 bit key Cipher Block Chaining (CBC) Mode) traffic
segments, which do not vary in frequency in response to network utilization.
Other than network messages that control the admission of a remote terminal into the
network, all portions of all packets are encrypted, and their original size is hidden. The
content and size of all user traffic (Layer 3 and above), as well as network link layer (Layer 2)
traffic is completely indeterminate from an adversary’s perspective. Further, no higher layer
information is revealed by monitoring the physical layer (Layer 1) signal.
The solution includes a remote-to-hub and a hub-to-remote authentication protocol based on
standard X.509 certificates designed to prevent man-in-the-middle attacks. This
authentication mechanism prevents an adversary’s remote from joining an iDirect TRANSEC
secured network. In a similar manner, it prevents an adversary from coercing a TRANSEC
remote into joining the adversary’s network. While these types of attacks are extremely
difficult to achieve even on a non-TRANSEC iDirect network, the mechanisms put in place for
the TRANSEC feature render them completely impossible.
Note: In this release, HiFin encryption cards are no longer required on your protocol
processor blades for TRANSEC key management.
All hub line cards and remote model types associated with a protocol processor must be
TRANSEC compatible. The only iDirect hardware components that operate in TRANSEC mode
are the M1D1-T and M1D1-TSS Hub Line Cards, the iNFINITI 7350 and 8350 remotes, and the
iConnex 100 and iConnex 700 remotes. Therefore, these are the only iDirect products capable
of operating in a FIPS 140-2 Level 1 compliant mode.
For more information, see “Chapter 16, Converting an Existing Network to TRANSEC” of the
iBuilder User Guide.
TRANSEC Downstream
A simplified block diagram for the iDirect TRANSEC downstream data path is shown in Figure
38. Each function represented in the diagram is implemented in software and firmware on a
TRANSEC capable line card.
Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. In this diagram, the encrypted path is shown as solid
black, and the unencrypted (clear) path is shown in dashed red. The Packet Ingest function
receives variable length packets which can belong to four logical classes: User Data, Bypass
Burst Time plan (BTP), Encrypted BTP, and Bypass Queue. All packets arriving at the transmit
Hub Line Card carry this class indication in a pre-pended header placed there by the
protocol processor (not shown). The Packet Ingest function determines the message type and
places the packet in the appropriate queue. If the packet is not valid, it is not placed in any
queue and it is dropped.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the
Clear Queue are always sent unencrypted, and time-sensitive BTP messages from the BTP
Queue can be sent in either mode. A BTP sent in the clear contains minimal traffic analysis
information for an adversary and is only utilized to allow remotes attempting to exchange
admission control messages with the hub to do so. Traffic sent in the clear bypasses the
Segmentation Engine and the AES Encryption Engine, and proceeds to the physical framing and
FEC engines for transmission. Clear, unencrypted packets are transmitted without regard to
segmentation; they are allowed to exist on the RF link with variable sized framing.
Encrypted traffic next enters the Segmentation Engine. The Segmentation Engine segments
incoming packets based on a configured size and provides fill-packets when necessary. The
Segmentation Engine allows the iDirect TRANSEC downstream to transmit a configurable,
fixed size TDM packet segment on a continuous basis.
After segmentation, fixed sized packets enter the Encryption Engine. The encryption
algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit
the Encryption Engine with a pre-pended header as shown in Figure 39.
The Encryption Header consists of five 32 bit words with four fields. The fields are:
• Code. This field indicates if the frame is encrypted or not, and if encrypted indicates the
entry within the key ring (described under the key management section later in this
document) to be utilized for this frame. The Code field is one byte in length.
• Seq. This field is a sequence number that increments with each segment. The Seq field is
two bytes in length (16 bits, unsigned).
• Rsvd. This field is 1 byte and is reserved for future use.
• Initialization Vector (IV). IV is utilized by the encryption/decryption algorithm and
contains random data. The IV field is 16 bytes (128 bits) in length.
A new IV is generated for each segment. The first IV is generated from the cipher text of the
initial Known Answer Test (KAT) conducted at system boot time. Subsequent IVs are taken
from the last 128 bits of the cipher text of the previously encrypted segment. IVs are
continuously updated regardless of key rotations and they are independent of the key rotation
process. They are also continuously updated regardless of the presence of user traffic since
the filler segments are encrypted. While no logic is included to ensure that IVs do not repeat,
the chance of repetition is very small; estimates place the probability of an IV repeating at
1 in 2^102 at the maximum iDirect downstream data rate.
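The chaining described above can be sketched with the widely available Python cryptography package; the key source, first IV, and 64 byte segment size below are illustrative assumptions, not the line card implementation.

# Sketch of downstream AES-256-CBC segment encryption with chained IVs
# (Python, "cryptography" package). The next IV is the last 128 bits
# of the previous segment's cipher text.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)   # 256 bit symmetric key (illustrative source)
iv = os.urandom(16)    # first IV (derived from the boot-time KAT in the real system)

def encrypt_segment(segment, iv):
    """Encrypt one fixed-size, block-aligned segment; return (cipher text, next IV)."""
    assert len(segment) % 16 == 0
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(segment) + enc.finalize()
    return ct, ct[-16:]                    # IV chains from this cipher text

seg1, iv = encrypt_segment(b"\x00" * 64, iv)   # filler segments keep IVs moving
seg2, iv = encrypt_segment(b"\x00" * 64, iv)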
The Segment is of fixed, configurable length and consists of a series of fixed length Fragment
Headers (FH) followed by variable length data Fragments (F). The entire Segment is
encrypted in a single operation by the encryption engine. The FH contains sufficient
information for the source packet stream, post decryption on the receiver, to be
reconstructed. Each Fragment contains a portion of a source packet.
The Encryption Header is transmitted unencrypted but contains only enough information for a
receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
framing and forward error correction coding. These functions are essentially independent of
TRANSEC but complete the downstream transmission chain and are thus depicted in Figure 38.
TRANSEC Upstream
A simplified block diagram for the iDirect TRANSEC upstream data path is shown in Figure 40.
The functions represented in this diagram are implemented in software and firmware on a
TRANSEC capable remote.
The encrypted path is shown in solid black, and the unencrypted (clear) path is shown in
dashed red. The Packet Ingest function determines the message type and places the packet in
the appropriate queue or drops it if it is not valid.
Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. The upstream (remote to hub) path differs from the
downstream (hub to remote) in that the upstream is configured for TDMA. Variable length
packets from a remote LAN are segmented in software, and can be considered as part of the
Packet Ingest function. Therefore there is no need for the firmware level segmentation
present in the downstream. Additionally, since the remote is not responsible for the
generation of BTPs, there is no need for the additional queues present in the downstream.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the
Clear Queue are always sent unencrypted. The overwhelming majority of traffic is extracted
from the Data Queue. Traffic sent in the clear bypasses the Encryption Engine and proceeds to
the FEC engine for transmission.
The encryption algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC
Mode. Packets exit the Encryption Engine with a pre-pended header as shown in Figure 41.
Note: TRANSEC overhead reduces the payload size shown in Table 5 on page 40 by the
following amounts for each FEC rate: .431: 7 bytes; .533: 4 bytes; .660: 4 bytes;
.793: 6 bytes.
The Encryption Header consists of a single 32 bit word with 3 fields. The fields are:
IV Seed. This field is a 29 bit field utilized to generate a 128 bit IV; a sketch of this
expansion follows the field descriptions. The IV Seed starts at zero and increments with each
transmitted burst. The full 128 bit IV is generated by padding the seed and passing it through
the encryption engine, encrypting it with the current AES key for the inroute. Remotes can
therefore expand the same seed into the same full IV. However, this does not create any
problems because, due to addressing requirements, it is impossible for any two remotes within
the same upstream to generate the same plain text data. While no logic is included to ensure
that IVs do not repeat for a single terminal, repetition is prevented in practice because the
key rotates every two hours by default. Since the seed increments with each transmission
burst, the total number of bursts before the seed wraps is 2^29, or 536,870,912. Given the
two-hour key rotation period, a single terminal would need to send nearly 75,000 TDMA bursts
per second to exhaust the range of the seed. This far exceeds any possible iDirect upstream
data rate.
Key ID. This field indicates the entry within the key ring (described under the key
management section later in this document) to be utilized for this frame.
Enc. This field indicates if the frame is encrypted or not.
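As noted under IV Seed, the seed-to-IV expansion can be sketched as follows; the zero-padding layout is an assumption, and the code uses the Python cryptography package.

# Sketch of upstream IV expansion (Python): a 29 bit seed is padded to
# one AES block and encrypted with the current inroute key.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                    # current 256 bit inroute key (illustrative)

def expand_iv(seed):
    assert 0 <= seed < 2 ** 29          # seed wraps after 2^29 bursts
    block = seed.to_bytes(16, "big")    # assumed padding: zero-extend to 128 bits
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(block) + enc.finalize()

iv0 = expand_iv(0)   # first burst
iv1 = expand_iv(1)   # seed increments with each burst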
The Segment is of fixed, configurable length and consists of the standard iDirect TDMA frame.
A detailed description of the standard frame is beyond the scope of this document; in general,
it consists of a Demand Header indicating the amount of bandwidth a remote is requesting, the
iDirect Link Layer (LL) Header, and the actual Payload. This Segment is encrypted. The
Encryption Header is
transmitted unencrypted but contains only enough information for a receiver to decrypt the
segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
forward error correction coding. This function is essentially independent of TRANSEC but
completes the upstream transmission chain (as shown in Figure 40).
A remote always bursts in its assigned slots, generating encrypted fill payloads when no
traffic is present. The iDirect Hub dynamic allocation algorithm always operates in a mode
whereby all available time slots within all time plans are filled.
TRANSEC Key Management
Note: In this release, HiFin encryption cards are no longer required on your protocol
processor blades for TRANSEC key management.
Key Distribution Protocol (Figure 42), Key Rolling (Figure 43), and Host Keying Protocol (Figure
44) are based on standard techniques utilized within an X.509 based PKI.
Key Distribution Protocol assumes that, upon receipt of a certificate from a peer, the host
is able to validate and establish a chain of trust based on the contents of the certificate.
iDirect TRANSEC utilizes standard X.509 certificates and methodologies to verify the peer’s
certificate.
After the completion of the sequence shown in Figure 42, a peer may provide a key update
message again in an unsolicited fashion as needed. The data structure utilized to complete
key update (also called a key roll) is shown in Figure 43.
This data structure conceptually consists of a set of pointers (Current, Next, Fallow), a two
bit identification field (utilized in the Encryption Headers described above), and the actual
symmetric keys themselves. A key update consists of generating a new key, placing it in the
last fallow slot just prior to the Current pointer, updating the next pointers (circular update
so 11 rolls to 00) and current pointers and generating a Key Update message reflecting these
changes. The key roll mechanism allows for multiple keys to be “in play” simultaneously so
that seamless key rolls can be achieved. By default the iDirect TRANSEC solution rolls any
symmetric key every two hours, but this is a user configurable parameter.
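A minimal sketch of the key ring and roll just described follows; the four-slot ring matches the two bit identification field, while the class shape and key generation are illustrative.

# Sketch of the key ring roll (Python): four slots addressed by a two
# bit key ID; a roll writes a new key into the slot just prior to
# Current and advances the pointers circularly (11 rolls to 00).
import os

class KeyRing:
    def __init__(self):
        self.slots = [os.urandom(32) for _ in range(4)]   # key IDs 0..3
        self.current = 0

    def roll(self):
        fallow = (self.current - 1) % 4        # last fallow slot before Current
        self.slots[fallow] = os.urandom(32)    # new symmetric key
        self.current = (self.current + 1) % 4  # circular pointer update
        return {"current": self.current, "next": (self.current + 1) % 4}

ring = KeyRing()
print(ring.roll())   # {'current': 1, 'next': 2}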
The iDirect Host Keying Protocol is shown in Figure 44. This protocol describes how hosts
are originally provided an X.509 certificate from a
Certificate Authority. iDirect provides a Certificate Authority Foundry module with its
TRANSEC hub. Host key generation is done on the host in all cases.
The Fast Acquisition feature reduces the average acquisition time for remotes, particularly in
large networks with hundreds or thousands of remotes. The acquisition messaging process
used in prior versions is included in this release. However, the Protocol Processor now makes
better use of the information available regarding hub receive frequency offsets common to all
remotes to reduce the overall network acquisition time. No additional license is required for
this feature.
Feature Description
Fast Acquisition is configured on a per-remote basis. When a remote is attempting to acquire
the network, the Protocol Processor determines the frequency offset at which a remote
should transmit and conveys it to the remote in a time plan message. From the time plan
message, the remote learns when to transmit and at what frequency offset. The remote
transmit power level is configured in the option file. Based on the time plan message, the
remote calculates the correct Frame Start Delay (FSD). The fundamental aspects of
acquisition are how often a remote gets an opportunity to come into the network, and how
many frequency offsets need to be tried for each remote before it acquires the network.
If a remote can acquire the network more quickly by trying fewer frequency offsets, the
number of remotes that are out of the network at any one time can be reduced. This
determines how often other remotes get a chance to acquire. This feature reduces the
number of frequency offsets that need to be tried for each remote.
By using a common hub receive frequency offset, the fast acquisition algorithm can determine
an anticipated range smaller than the complete frequency sweep space configured for each
remote. As the common receive frequency offset is updated and refined, the sweep window is
reduced.
If an acquisition attempt fails within the reduced sweep window, the sweep window is
widened to include the entire sweep range. Fast Acquisition is enabled by default. You can
disable it by applying a custom key.
For a given ratio x:y, the hub informs the remote to acquire using the smaller frequency offset
range calculated by the Fast Acquisition scheme. After x attempts within the reduced range,
the remote sweeps the entire range y times before returning to the narrower acquisition range.
The default ratio is 100:1; that is, the remote tries 100 frequency offsets within the reduced
(common) range before resorting to one full sweep of its configured frequency offsets.
If you want to modify the ratio, you can use the custom keys that follow to override the
defaults; a sketch of the resulting sweep pattern appears after the keys. You must apply the
custom key to the hub side for each remote in the network.
[REMOTE_DEFINITION]
sweep_freq_fast = 100
sweep_freq_entire_range = 1
sweep_method = 1 (Fast Acquisition enabled)
sweep_method = 0 (Fast Acquisition disabled)
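The resulting sweep pattern can be sketched as follows (as referenced above); the generator is illustrative, not remote firmware.

# Sketch of the x:y sweep schedule (Python): x attempts in the narrow
# (common offset) range, then y full-range sweeps, repeating.
def sweep_ranges(x=100, y=1):
    while True:
        for _ in range(x):
            yield "narrow"   # reduced window around the common offset
        for _ in range(y):
            yield "full"     # remote's entire configured sweep range

gen = sweep_ranges()
attempts = [next(gen) for _ in range(102)]
print(attempts[99], attempts[100], attempts[101])   # narrow full narrow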
Fast Acquisition cannot be used on 3100 series remotes when the upstream symbol rate is less
than 260 Ksym/s. This is because the FLL on 3100 series remotes is disabled for upstream
rates less than 260 Ksym/s.
The NMS disables Fast Acquisition for any remote that is enabled for an iDirect Music Box and
for any remote that is not configured to utilize the 10 MHz reference clock. In IF-only
networks, such as a test environment, the 10 MHz reference clock is not used.
The Remote Sleep Mode feature conserves remote power consumption during periods of
network inactivity. This section explains how Remote Sleep Mode is implemented. It includes
the following sections:
• “Feature Description” explains how Remote Sleep Mode works.
• “Awakening Methods” describes how remotes exit Remote Sleep Mode.
Feature Description
Remote Sleep Mode is supported on all iNFINITI series remotes. In this mode, the BUC is
powered down, thus reducing power consumption.
When Sleep Mode is enabled on the iBuilder GUI for a remote, the remote enters Remote
Sleep Mode after a configurable period elapses with no data to transmit. By default, the
remote exits Remote Sleep Mode whenever packets arrive on the local LAN for transmission on
the inbound carrier.
Note: You can use the console command powermgmt mode set sleep to enable Remote
Sleep Mode, or powermgmt mode set wakeup to disable it.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. You can select
which types of traffic automatically “trigger wakeup” on the remote by selecting or clearing a
check box for any of the QoS service levels used by the remote. If no service levels are
configured to trigger wakeup on the remote, you can manually force the remote to exit sleep
mode by disabling sleep mode on the remote configuration screen.
Until a remote enters sleep mode, the protocol processor continues to allocate traffic slots
(including minimum CIR) to it. Just before entering sleep mode, the remote notifies the
NMS, and the real time state of the remote is updated in iMonitor. Once the remote enters
sleep mode, as far as the protocol processor is concerned, the remote is out of the network.
Therefore, no traffic slots are allocated to the remote while it is in sleep mode. When the
remote receives traffic that triggers wakeup, the remote returns to the network and traffic
slots are allocated as normal by the protocol processor.
Awakening Methods
There are two methods by which a remote is “awakened” from Sleep Mode. They are
“Operator-Commanded Awakening”, and “Activity-Related Awakening”.
Operator-Commanded Awakening
With Operator-Commanded Awakening, you can manually force a remote into Remote Sleep
Mode and subsequently “awaken” it via the NMS. This can be done remotely from the Hub since
the remote continues to receive the downstream while in sleep mode.
[SAT0]
forced = 1
Note: When this custom key is set to 1, a remote with RIP enabled will always advertise
the satellite route as available on the local LAN, even if the satellite link is down.
Therefore, the Sleep Mode feature is not compatible with configurations that rely on
the ability of the local router to detect loss of the satellite link.
To enable Remote Sleep Mode, see the chapter on configuring remotes in the iBuilder User
Guide.
To configure service level based wake up, see the QoS Chapter in the iBuilder User Guide.
This section contains information pertaining to Automatic Beam Selection (ABS) for roaming
remotes in a maritime environment.
Theory of Operation
Since the term “network” is used in many ways, the term “beam” is used rather than the
term “network” to refer to an outroute and its associated inroutes.
ABS is built on iDirect’s existing mobile remote functionality. When a modem is in a particular
beam, it operates as a traditional mobile remote in that beam.
In a maritime environment, a roaming remote terminal consists of an iDirect modem and a
controllable, steerable, stabilized antenna. The ABS software in the modem can command the
antenna to find and lock to any satellite. Using iBuilder, you can define an instance of the
remote in each beam that the modem is permitted to use. You can also configure and monitor
all instances of the remote as a single entity. The remote options file (which conveys
configuration parameters to the remote from the NMS) contains the definition of each of the
remote’s beams. Options files for roaming remotes, called “consolidated” options files, are
described in detail in the iBuilder User Guide.
As a vessel moves from the footprint of one beam into the footprint of another, the remote
must shift from the old beam to the new beam. Automatic Beam Selection enables the remote
to select a new beam, decide when to switch, and to perform the switch-over, without human
intervention. ABS logic in the modem reads the current location from the antenna and decides
which beam will provide optimal performance for that location. This decision is made by the
remote, rather than by the NMS, because the remote must be able to select a beam even if it
is not communicating with the network.
To determine the best beam for the current location, the remote relies on a beam map file
that is downloaded from the NMS to the remote and stored in memory. The beam map file is a
large data file containing beam quality information for each point on the Earth's surface as
computed by the satellite provider. Whenever a new beam is required by remotes using ABS,
the satellite provider must generate new map data in a pre-defined format referred to as a
“conveyance beam map file.” iDirect provides a utility that converts the conveyance beam
map file from the satellite provider into a beam map file that can be used by the iDirect
system.
Note: In order to use the iDirect ABS feature, the satellite provider must enter into an
agreement with iDirect to provide the beam map data in a specified format.
The iDirect NMS software consists of multiple server applications. One such server
application, known as the map server, manages the iDirect beam maps for remotes in its
networks. The map server reads the beam maps and waits for map requests from remote
modems.
A modem has a limited amount of non-volatile storage, so it cannot save an entire map of all
beams. Instead, the remote asks the map server to send a map of a smaller area (called a
beam “maplet”) that encompasses its current location. When the vessel nears the edge of its
current maplet, the remote asks for another beam maplet centered on its new location. The
geographical size of these beam maplets varies in order to keep the file size approximately
constant. A beam maplet typically covers a 1000 km square.
A steerable, stabilized antenna must know its geographical location in order to point to the
satellite. The antenna includes a GPS receiver for this purpose. The remote must also know its
geographical location to select the correct beam and to compute its distance from the
satellite. The remote periodically commands the antenna controller to send the current
location to the modem.
IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-
established after a beam switch-over. The process of joining the network after a new beam is
selected uses the same internet routing protocols that are already established in the iDirect
system. When a remote joins a beam, the Protocol Processor for that beam begins advertising
the remote's IP addresses to the upstream router using the RIP protocol. When a remote
leaves a beam, the Protocol Processor for that beam withdraws the advertisement for the
remote's IP addresses. When the upstream routers see these advertisements and withdrawals,
they communicate with each other using the appropriate IP protocols to determine their
routing tables. This permits other devices on the Internet to send data to the remote over the
new path with no manual intervention.
Operational Scenarios
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic
Beam Selection. Steps for configuring network elements such as iDirect networks (beams) and
roaming remotes are documented in the iBuilder User Guide. Steps specific to configuring ABS
functionality, such as adding an ABS-capable antenna or converting a conveyance beam map
file, are described in “Appendix C, Configuring Networks for Automatic Beam Selection” of
the iBuilder User Guide.
9. The NMS operator runs the map server as part of the NMS.
Adding a Vessel
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
1. The NMS operator configures the remote modem in one beam.
2. The NMS operator adds the remote to the remaining beams.
3. The NMS operator saves the modem's options file and delivers it to the installer.
4. The installer installs the modem aboard a ship.
5. The installer copies the options file to the modem using iSite.
6. The installer manually selects a beam for commissioning.
7. The modem commands the antenna to point to the satellite.
8. The modem receives the current location from the antenna.
9. The installer commissions the remote in the initial beam.
10. The modem enters the network and requests a maplet from the NMS map server.
11. The modem checks the maplet. If the commissioning beam is not the best beam, the
modem switches to the best beam as indicated in the maplet. This beam is then assigned
a high preference rating by the modem to prevent the modem from switching between
overlapping beams of similar quality.
12. Assuming a center-beam location in clear sky conditions:
13. The installer sets the initial transmit power to 3 dB above the nominal transmit power.
14. The installer sets the maximum power to 6 dB above the nominal transmit power.
Note: Check the levels the first time the remote enters each new beam and adjust the
transmit power settings if necessary.
Normal Operations
This scenario describes the events that occur during normal operations when a modem is
receiving map information from the NMS.
1. The ship leaves port and travels to next destination.
2. The modem receives the current location from the antenna every five minutes.
3. While in the beam, the antenna automatically tracks the satellite.
4. As the ship approaches the edge of the current maplet, the modem requests a new
maplet from the map server.
5. When the ship reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes the best beam.
b. Saves the best beam to non-volatile storage.
c. Reboots.
d. Reads the new best beam from non-volatile storage.
e. Commands the antenna to move to the correct satellite and beam.
f. Joins the new beam.
Mapless Operations
This scenario describes the events that occur during operations when a modem is not
receiving beam mapping information from the NMS; a sketch of the fallback beam selection
follows the steps.
1. While operational in a beam, the remote periodically asks the map server for a maplet.
The remote does not attempt to switch to a new beam unless one of the following
conditions is true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite
longitude and attempts to switch to that beam.
3. After five minutes, if the remote is still not acquired, it marks the new beam as unusable
and selects the best beam from the remaining visible, usable beams in the options file.
This step is repeated until the remote is acquired in a beam, or all visible beams are
marked as unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
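The fallback selection in steps 2 through 4 can be sketched as follows; selecting by longitude proximity reflects step 2, and the names and values are illustrative.

# Sketch of mapless beam fallback (Python): choose the nearest visible,
# usable beam by satellite longitude; failed beams are marked unusable
# until every beam has been tried, then all marks are cleared.
def next_beam(beam_longitudes, usable, ship_lon):
    if not any(usable.values()):
        for b in usable:                  # all beams tried: reset marks
            usable[b] = True
    candidates = [b for b in beam_longitudes if usable[b]]
    return min(candidates, key=lambda b: abs(beam_longitudes[b] - ship_lon))

beams = {"A": -30.0, "B": 10.0, "C": 60.0}
usable = {b: True for b in beams}
chosen = next_beam(beams, usable, ship_lon=5.0)   # -> "B"
usable[chosen] = False                            # acquisition failed
print(next_beam(beams, usable, 5.0))              # -> "A"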
Error Recovery
This section describes the actions taken by the modem under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the network,
it will reboot after five minutes.
2. If the antenna is initializing, the remote waits for the initialization to complete. It will
not attempt to switch beams during this time.
This chapter describes how you can establish a primary and a backup hub that are
geographically diverse. It includes:
• “Feature Description,” which describes how geographic redundancy is accomplished.
• “Configuring Wait Time Interval for an Out-of-Network Remote,” which describes how you
can set the wait period before switchover.
Feature Description
The Hub Geographic Redundancy feature builds on the previously developed Global NMS
feature and the existing dbBackup/dbRestore utility. You configure the Hub Geographic
Redundancy feature by defining all the network information for both the Primary and Backup
Teleports in the Primary NMS. All remotes are configured as roaming remotes and they are
defined identically in both the Primary and Backup Teleport network configurations.
Only iNFINITI remotes can currently participate in Global NMS networks. Since the backup
teleport feature also uses the Global NMS capability, this feature is also restricted to iNFINITI
remotes.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup
Teleport. During failover conditions (when roaming network remotes fail to see the
downstream carrier through the Primary Teleport NMS) you can manually enable the
downstream transmission on the Backup Teleport, allowing the remotes to automatically
(after the configured default wait period of five minutes) acquire the downstream
transmission through the Backup Teleport NMS.
iDirect recommends the following for most efficient switchover:
• A separate IP connection (at least 128 Kbps) between the Primary and Backup Teleport
NMS for database backup and restore operations. A higher rate line can be employed to
reduce this database archive time.
• The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different. For example, either the FEC, frequency, frame length, or data rate values must
be different.
• On a periodic basis, backup and restore your NMS configuration database between your
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.
This chapter describes carrier bandwidth optimization and carrier spacing. It includes the
following sections:
• “Overview” describes how reducing carrier spacing increases overall available bandwidth.
• “Increasing User Data Rate” provides an example of how you can increase user data rates
without increasing occupied bandwidth.
• “Decreasing Channel Spacing to Gain Additional Bandwidth” provides an example of how
you can increase occupied bandwidth.
Overview
The Field Programmable Gate Array (FPGA) firmware uses optimized digital filtering, which
reduces the amount of satellite bandwidth required for an iDirect carrier. Instead of using a
40% guard band between carriers, the guard band may now be reduced to as low as 20% on
both the broadcast Downstream channel and the TDMA Upstream. Figure 45 shows an overlay
of the original spectrum and the optimized spectrum.
This optimization translates directly into a cost savings for existing and future networks
deployed with iDirect remote modems.
The spectral shape of the carrier is not the only factor contributing to the guard band
requirement. Frequency stability parameters of a system may result in the need for a guard
band of slightly greater than 20% to be used. iDirect complies with the adjacent channel
interference specification in IESS 308 which accounts for adjacent channels on either side
with +7 dB higher power.
Be sure to consult the designer of your satellite link prior to changing any carrier parameters
to verify that they do not violate the policy of your satellite operator.
because the automatic frequency control algorithm uses the hub receiver’s estimate of
frequency offset to adjust each remote transmitter frequency. Hub stations which use a
feedback control system to lock their downconverter to an accurate reference may have
negligible offsets. Hub stations using a locked LNB will have a finite frequency stability range.
Another reason to add guard band is to account for frequency stability of other carriers
directly adjacent on the satellite which are not part of an iDirect network. Be sure to review
this situation with your satellite link designer before changing carrier parameters.
The example that follows accounts for a frequency stability range for systems using
equipment with more significant stability concerns. Given the “Current Carrier Parameters”
from the previous example and a total frequency stability of +/-5 kHz, compute the new
carrier parameters:
Solution:
• Subtract the total frequency uncertainty from the available bandwidth to determine the
amount of bandwidth left for the carrier (882.724 kHz – 10 kHz = 872.724 kHz).
• Divide this result by the minimum channel spacing (872.724 / 1.2 = 727.270 kHz).
• Use the result as the carrier symbol rate and compute the remaining parameters.
New Carrier Parameters
• User Bit (info) Rate: 1153.450 kbps
• Carrier Bit Rate: 1454.540 kbps
• Carrier Symbol Rate: 727.270 ksps
• Occupied Bandwidth: 882.724 kHz
• Guard Band Between Carriers: 21.375% (Channel Spacing = 1.21375)
A 15.345% improvement in user bit rate was achieved at no additional cost.
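The computation can be reproduced with a short script; the 0.793 FEC rate, QPSK modulation (2 bits per symbol), and a prior user rate of 1000 kbps are inferred from the example's numbers and should be treated as assumptions.

# Sketch of the carrier computation above (Python).
FEC_RATE = 0.793          # inferred from 1454.540 x 0.793 = 1153.450
BITS_PER_SYMBOL = 2       # QPSK (inferred)
SPACING = 1.2             # minimum channel spacing (20% guard band)

def carrier(avail_khz, stability_khz):
    usable = avail_khz - 2 * stability_khz   # subtract the +/- uncertainty
    symbol_rate = usable / SPACING           # ksps
    carrier_bit = symbol_rate * BITS_PER_SYMBOL
    user_bit = carrier_bit * FEC_RATE
    return symbol_rate, carrier_bit, user_bit

sym, cbit, ubit = carrier(882.724, 5.0)
print(round(sym, 3), round(cbit, 3), round(ubit, 3))
# 727.27 1454.54 1153.45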
This chapter describes basic hub line card failover concepts, transmit/receive versus receive-
only line card failover, failover sequence of events, and failover operation from a user’s point
of view.
For information about configuring your line cards for failover, refer to the “Networks, Line
Cards, and Inroute Groups” chapter of the iBuilder User Guide.
Note: If your Tx line card fails, or you only have a single Rx line card and it fails, all
remotes must re-acquire into the network after failover is complete.
Rx-only line cards take longer to failover than Tx(Rx) cards because they need to receive a
new options file, flash it, and reset.
[Figure: Line card failover sequence. The Event Server determines that a line card has failed
and notifies the Configuration Server. If the failed card is an Rx-only line card, the ACTIVE
options file of the failed card is sent to the spare and the spare is reset. If it is not, the
spare is commanded to switch its role from Standby to Primary and the ACTIVE options file of
the failed card is sent to the spare without a reset. The necessary changes (such as the
serial number) are then applied to puma.]