
Technical Reference Guide

iDS Release 8.3

June 15, 2010


Copyright © 2010 VT iDirect, Inc. All rights reserved. Reproduction in whole or in part without permission is
prohibited. Information contained herein is subject to change without notice. The specifications and information
regarding the products in this document are subject to change without notice. All statements, information, and
recommendations in this document are believed to be accurate, but are presented without warranty of any kind,
express or implied. Users must take full responsibility for their application of any products. Trademarks, brand
names and products mentioned in this document are the property of their respective owners. All such references
are used strictly in an editorial fashion with no intent to convey any affiliation with the name or the product's
rightful owner.

Document Name: REF_Technical Reference Guide iDS 8.3_Rev E_061510.pdf

Document Part Number: T0000152

Contents

About This Guide


Purpose. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Intended Audience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Contents Of This Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Document Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Related Documents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Getting Help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

1 iDirect System Overview


System Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
IP Architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2 Mesh Technical Description


Mesh Theory of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Network Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Transponder Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Outbound TDM Channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Inbound D-TDMA Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Mesh Topology Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Physical Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Frequency Hopping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Mesh Frequency Hopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Mesh/Star Frequency Hopping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
Mesh Data Path. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Single-Hop and Double-Hop Traffic Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Real-Time Call Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Hardware Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
HUB RFT Equipment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Hub Chassis Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Private Hub Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Hub ODU Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Remote IDU Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Remote ODU Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
Network Considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Link Budget Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Uplink Control Protocol (UCP) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Bandwidth Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Mesh Commissioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Star-to-Mesh Network Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Pre-Migration Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Migration Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
Configuring and Monitoring Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . 29
Building Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Special Mesh Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Turning Mesh On and Off in iBuilder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Changes to Acquisition/Uplink Control in iBuilder . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Monitoring Mesh Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Additional Hub Statistics Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Additional Remote Status Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Mesh Traffic Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Remote-to-Remote Mesh Probe . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Long-Term Bandwidth Usage Report for Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Mesh Feature Set and Capability Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3 Modulation Modes and FEC Rates


iDirect Modulation Modes And FEC Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

4 iDirect Spread Spectrum Networks


What is Spread Spectrum? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Spread Spectrum Hardware Components . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Downstream Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

Supported Forward Error Correction (FEC) Rates . . . . . . . . . . . . . . . . . . . . . 43
Upstream Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

5 QoS Implementation Principles


Quality of Service (QoS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
QoS Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
QoS Application, iSCPC and Filter Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Classification Profiles for Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Service Levels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Packet Scheduling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
Group QoS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Group QoS Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Group QoS Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Application Throughput . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
QoS Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Packet Segmentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Application Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
Maximum Channel Efficiency vs. Minimum Latency . . . . . . . . . . . . . . . . . . . . 62

6 Configuring Transmit Initial Power


What is TX Initial Power? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
How To Determine The Correct TX Initial Power . . . . . . . . . . . . . . . . . . . . . . 63
All Remotes Need To Transmit Bursts in The Same C/N Range. . . . . . . . . . . . . 64
What Happens When TX Initial Power Is Set Incorrectly? . . . . . . . . . . . . . . . . 65
When TX Initial Power is Too High . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
When TX Initial Power is Too Low . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

7 Global NMS Architecture


How the Global NMS Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Sample Global NMS Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

8 Hub Network Security Recommendations


Limited Remote Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Root Passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

9 Global Protocol Processor Architecture
Remote Distribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
De-coupling of NMS and Datapath Components . . . . . . . . . . . . . . . . . . . . . . . 71

10 Distributed NMS Server


Distributed NMS Server Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
iBuilder and iMonitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
dbBackup/dbRestore and the Distributed NMS . . . . . . . . . . . . . . . . . . . . . . . 75
Distributed NMS Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

11 Transmission Security (TRANSEC)


What is TRANSEC? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
iDirect TRANSEC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
TRANSEC Downstream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
TRANSEC Upstream . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
TRANSEC Key Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
TRANSEC Remote Admission Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
Reconfiguring the Network for TRANSEC . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

12 Fast Acquisition
Feature Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

13 Remote Sleep Mode


Feature Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Awakening Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Operator-Commanded Awakening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Activity Related Awakening . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
Enabling Remote Sleep Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

14 Automatic Beam Selection


Automatic Beam Selection Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Theory of Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
Beam Characteristics: Visibility and Usability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
Selecting a Beam without a Map . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Controlling the Antenna . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
IP Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Operational Scenarios . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Creating the Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Adding a Vessel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Normal Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Mapless Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Blockages and Beam Outages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Error Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

15 Hub Geographic Redundancy


Feature Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Configuring Wait Time Interval for an Out-of-Network Remote . . . . . . . . . . . . 100

16 Carrier Bandwidth Optimization


Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Increasing User Data Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Decreasing Channel Spacing to Gain Additional Bandwidth . . . . . . . . . . . . . . . 103

17 Hub Line Card Failover


Basic Failover Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Tx(Rx) versus Rx-Only Line Card Failover. . . . . . . . . . . . . . . . . . . . . . . . . . . 105
Failover Sequence of Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

List of Figures

Figure 1. Sample iDirect Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1


Figure 2. iDirect IP Architecture – Multiple VLANs per Remote . . . . . . . . . . . . . . . . . . . . . 3
Figure 3. iDirect IP Architecture – VLAN Spanning Remotes . . . . . . . . . . . . . . . . . . . . . . . . 4
Figure 4. iDirect IP Architecture – Classic IP Configuration . . . . . . . . . . . . . . . . . . . . . . . . 5
Figure 5. iDirect IP Architecture - TDMA and iSCPC Topologies . . . . . . . . . . . . . . . . . . . . . 6
Figure 6. Double-Hop Star Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Figure 7. Single-Hop Mesh Overlay Network Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Figure 8. Basic Mesh Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Figure 9. Integrated Mesh and Star Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Figure 10. Segregated Mesh and Star Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Figure 11. Mesh Private Hub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Figure 12. High-Volume Star / Low-Volume Mesh Topology . . . . . . . . . . . . . . . . . . . . . . . 16
Figure 13. Mesh Frequency Hopping: Inroute Group with Two Inroutes . . . . . . . . . . . . . . . 17
Figure 14. Mesh Frequency Hopping: Communicating Between Inroutes . . . . . . . . . . . . . . 18
Figure 15. Frequency Hopping with Star and Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Figure 16. Mesh VSAT Sizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Figure 17. Uplink Power Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Figure 18. Specifying UPC Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Figure 19. Common Remote Parameters for Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Figure 20. Mesh, SAT, IP statistics collection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Figure 21. Spread Spectrum Network Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Figure 22. Remote and QoS Profile Relationship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
Figure 23. iDirect Packet Scheduling Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Figure 24. Group QoS Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Figure 25. Physical Segregation Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Figure 26. CIR Per Application Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
Figure 27. Tiered Service Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Figure 28. Third Level VLAN Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
Figure 29. Shared Remote Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Figure 30. C/N Nominal Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
Figure 31. TX Initial Power Too High . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Figure 32. TX Initial Power Too Low . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Figure 33. Global NMS Database Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
Figure 34. Sample Global NMS Network Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
Figure 35. Protocol Processor Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Figure 36. Sample Distributed NMS Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Figure 37. dbBackup and dbRestore with a Distributed NMS . . . . . . . . . . . . . . . . . . . . . . 75
Figure 38. Downstream Data Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Figure 39. SCPC TRANSEC Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Figure 40. Upstream Data Path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Figure 41. TDMA TRANSEC Slot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Figure 42. Key Distribution Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Figure 43. Key Rolling and Key Ring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Figure 44. Host Keying Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Figure 45. Overlay of Carrier Spectrums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
Figure 46. Adding an Upstream Carrier By Reducing Carrier Spacing . . . . . . . . . . . . . . . . 104
Figure 47. Failover Sequence of Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

List of Tables

Table 1. Mesh-Related Constants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29


Table 2. Mesh IP Statistics Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Table 3. iDirect Products Supporting Mesh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
Table 4. Mesh Feature Set and Compatibility Matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Table 5. Modulation Modes and FEC Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Table 6. Spread Spectrum: Downstream Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Table 7. Spread Spectrum: Supported FEC Rates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Table 8. Spread Spectrum: Upstream Specifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Table 9. Power Consumption in Remote Sleep Mode. . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

About This Guide

Purpose
The Technical Reference Guide provides detailed technical information on iDirect technology
and major features as implemented in iDS Release 8.3.

Intended Audience
The intended audience for this guide includes network operators using the iDirect iDS system,
network architects, and anyone upgrading to iDS Release 8.3.

Note: It is expected that the user of this material has attended the iDirect IOM
training course and is familiar with the iDirect network solution and associated
equipment.

Contents Of This Guide


This document contains the following major sections:
• “iDirect System Overview”
• “Mesh Technical Description”
• “Modulation Modes and FEC Rates”
• “iDirect Spread Spectrum Networks”
• “QoS Implementation Principles”
• “Configuring Transmit Initial Power”
• “Global NMS Architecture”
• “Hub Network Security Recommendations”
• “Global Protocol Processor Architecture”
• “Distributed NMS Server”
• “Transmission Security (TRANSEC)”
• “Fast Acquisition”
• “Remote Sleep Mode”
• “Automatic Beam Selection”
• “Hub Geographic Redundancy”

• “Carrier Bandwidth Optimization”
• “Hub Line Card Failover”

Document Conventions
This section illustrates and describes the conventions used throughout the manual. Take a
look now, before you begin using this manual, so that you’ll know how to interpret the
information presented.

Convention: Blue Courier font
Description: Used when the user is required to enter a command at a command line prompt or in a console.
Example:
    [SWITCH_PORT_n]
    vid = vlan_id

Convention: Courier font
Description: Used when showing resulting output from a command that was entered at a command line or on a console.
Example: Output similar to the following sample appears:
    [SECURITY]
    password = $idi2$/bFMhf$5H8mYAaP1sTZ0m1Ny/dYyLaS40/
    admin_password = $idi2$146rgm$.KtDb4OH5CEBxzH6Ds2xM.ehHCH
    os_password = $1$UTKh0V$cc/UfNThFmBI7sT.zYptQ0

Convention: Bold Trebuchet font
Description: Used when the user is required to type information or values into a field within a windows-type software interface, and when specifying names of commands, menus, folders, tabs, dialogs, list boxes, and options.
Example: 1. If you are adding a remote to an inroute group, right-click the Inroute Group and select Add Remote.
The Remote dialog box has a number of user-selectable tabs across the top. The Information Tab is visible when the dialog box opens.

Convention: Blue Trebuchet font
Description: Used to show all hyperlinked text within a document.
Example: For instructions on adding an iSCPC line card to the network tree and selecting a Hub RFT for the line card, see "Adding an iSCPC Line Card" on page 108.

Convention: Bold italic Trebuchet font
Description: Used to emphasize information for the user, such as in notes.
Example: Note: Several remote model types can be configured as iSCPC remotes.

Convention: Red italic Trebuchet font
Description: Used when the user needs to strictly follow the instructions or have additional knowledge about a procedure or action.
Example: WARNING! The following procedure may cause a network outage.
Related Documents
The following iDirect documents are available at http://tac.idirect.net and may also contain
information relevant to this release. Please consult these documents for information about
installing and using iDirect’s satellite network software and equipment.
• iDS Release Notes
• iDS Software Installation Guide or Network Upgrade Procedure Guide
• iDS iBuilder User Guide
• iDS iMonitor User Guide
• iDS Software Installation Checklist/Software Upgrade Survey
• iDS Installation and Commissioning Guide for Remote Satellite Routers
• iDS Link Budget Analysis Guide

Getting Help
The iDirect Technical Assistance Center (TAC) is available to help you 24 hours a day, 365 days
a year. Software user guides, installation procedures, a FAQ page, and other documentation
that supports our products are available on the TAC webpage. Please access our TAC webpage
at: http://tac.idirect.net.
If you are unable to find the answers or information that you need, you can contact the TAC at
(703) 648-8151.
If you are interested in purchasing iDirect products, please contact iDirect Corporate Sales by
telephone or email.
Telephone: (703) 648-8000
Email: [email protected]

1 iDirect System Overview

This chapter presents a high level overview of an iDirect Network. It provides a sample iDirect
network and describes IP architecture in SCPC and TDMA networks.

System Overview
An iDirect network is a satellite based TCP/IP network with a Star topology in which a Time
Division Multiplexed (TDM) broadcast downstream channel from a central hub location is
shared by a number of remote nodes. An example iDirect network is shown in Figure 1.

Figure 1. Sample iDirect Network

The iDirect Hub equipment consists of an iDirect Hub Chassis with Hub Line Cards, a Protocol
Processor (PP), a Network Management System (NMS) and the appropriate RF equipment. Each
remote node consists of an iDirect broadband router and the appropriate external VSAT
equipment. The remotes transmit to the hub on one or more shared upstream carriers using

Deterministic Time Division Multiple Access (D-TDMA), based on dynamic timeplan slot
assignment generated at the Protocol Processor.
A Mesh overlay can be added to the basic Star network topology, allowing traffic to pass
directly between remote sites without traversing the hub. This allows real-time traffic to
reach its destination in a single satellite hop, significantly reducing delay. It also saves the
bandwidth required to retransmit Mesh traffic from the hub to the destination remote. For a
description of the iDirect Mesh overlay architecture, see “Mesh Technical Description” on
page 7.
The choice of upstream carriers is determined either at network acquisition time or
dynamically at run-time, based on a network configuration setting. iDS software has features
and controls that allow the system to be configured to provide QoS and other traffic
engineered solutions to remote users. All network configuration, control, and monitoring
functions are provided via the integrated NMS. The iDS software provides:
• Packet-based and network-based QoS
• TCP acceleration
• AES link encryption
• Local DNS cache on the remote
• End-to-end VLAN tagging
• Dynamic routing protocol support via RIPv2 over the satellite link
• Multicast support via IGMPv2
• VoIP support via voice optimized features such as cRTP
An iDirect network interfaces to the external world through IP over Ethernet via 10/100
Base-T ports on the remote unit and the Protocol Processor at the hub.

IP Architecture
The following figures illustrate the basic iDirect IP Architecture with the different levels of
configuration available to you:
• Figure 2, “iDirect IP Architecture – Multiple VLANs per Remote”
• Figure 3, “iDirect IP Architecture – VLAN Spanning Remotes”
• Figure 4, “iDirect IP Architecture – Classic IP Configuration”


Figure 2. iDirect IP Architecture – Multiple VLANs per Remote


Figure 3. iDirect IP Architecture – VLAN Spanning Remotes


Figure 4. iDirect IP Architecture – Classic IP Configuration

iDirect allows you to mix traditional IP routing based networks with VLAN based
configurations. This capability directly supports customers that have conflicting IP address
ranges, and supports multiple independent customers at a single remote site by
configuring multiple VLANs directly on the remote.
In addition to end-to-end VLAN connection, the system supports RIPv2 in an end-to-end
manner including over the satellite link; RIPv2 can be configured on a per-network-interface basis.
In addition to the network architectures discussed so far, the iDirect iSCPC solution allows you
to configure, control and monitor point-to-point Single Carrier per Channel (SCPC) links.
These links, sometimes referred to as “trunks” or “bent pipes,” may terminate at your
teleport, or may be located elsewhere. Each end-point in an iSCPC link sends and receives
data across a dedicated SCPC carrier. As with all SCPC channels, the bandwidth is constant
and available to both sides at all times, regardless of the amount of data presented for
transmission. SCPC links are less efficient in their use of space segment than are iDS TDMA
networks. However, they are very useful for certain applications. Figure 5 shows an iDirect
system containing an iSCPC link and a TDMA network, all under the control of the NMS.


Figure 5. iDirect IP Architecture - TDMA and iSCPC Topologies

2 Mesh Technical Description

This chapter provides general guidelines for designing mesh networks using iDirect
equipment. Various physical and network topologies are presented, including how each
different configuration may affect the cost and performance of the overall network. Network
and equipment requirements are specified, as well as the limitations of the current phase of
iDirect’s Mesh solution. Overviews are provided for the commissioning procedure for an
iDirect Mesh network; converting existing star networks to mesh; and creating new mesh
networks.
iDirect’s Mesh offering provides a full-mesh solution implemented as a mesh overlay network
superimposed on an iDirect star network. The mesh overlay provides direct connectivity
between remote terminals with a single trip over the satellite, thereby halving the latency
and reducing satellite bandwidth requirements. As with other iDirect features, mesh is being
implemented in a phased manner. The first phase was delivered in iDS Release 7.0. Phase II of
mesh, which was delivered in iDS Release 8.2 and is supported in this release, added the
following enhancements to the original Mesh feature:
• The ability to configure multiple mesh inroutes per inroute group
• The ability to configure separate data rates for star and mesh inroutes
• Support for TRANSEC over mesh
If you are running a Mesh Phase I release (iDS 7.0, 7.1 or 8.0), you are limited to a single
inroute per mesh inroute group. In addition, TRANSEC over mesh is not supported in Mesh
Phase I. For details of iDirect hardware and features supported for each release, see “Mesh
Feature Set and Capability Matrix” on page 36.

Mesh Theory of Operation


The iDirect Star network solution is ideal for networks which primarily require communication
between remote terminals and a common point such as the Internet, a PSTN or a corporate
data center. However, for real-time applications requiring remote-to-remote connectivity, a
star network is not suitable.
For example, consider a Voice over IP (VoIP) call from remote User A to remote User B in a
star network (Figure 6).


Figure 6. Double-Hop Star Network Topology

In the network shown in Figure 6, the one-way transmission delay from user A to user B over a
geosynchronous satellite averages 550 ms. The extended length of the delay is due to the
“double-hop” transmission path: remote A to the satellite; the satellite to the hub; the hub
back to the satellite; and the satellite to remote B. This transmission delay, added to the
voice processing and routing delays in each terminal, results in an unacceptable quality of
service for voice. In addition, the remote-to-remote transmission requires twice as much
satellite bandwidth as a single-hop call.
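
To make the delay arithmetic concrete, the short Python sketch below computes the one-way propagation delay for single-hop and double-hop paths. This is a minimal illustration: the ~38,000 km slant range is an assumed typical value, not a figure from this guide, and terminal processing time is ignored.

    # Rough one-way delay for mesh (single-hop) vs. star (double-hop) paths.
    # Assumption: ~38,000 km slant range per earth-satellite leg (illustrative).
    SLANT_RANGE_KM = 38_000
    C_KM_PER_S = 299_792          # speed of light

    leg_ms = SLANT_RANGE_KM / C_KM_PER_S * 1000   # one leg, about 127 ms

    single_hop_ms = 2 * leg_ms    # remote A -> satellite -> remote B
    double_hop_ms = 4 * leg_ms    # remote A -> satellite -> hub -> satellite -> remote B

    print(f"single hop: {single_hop_ms:.0f} ms")  # ~254 ms
    print(f"double hop: {double_hop_ms:.0f} ms")  # ~507 ms; voice processing and
                                                  # routing delays push this toward
                                                  # the ~550 ms average cited above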


A more cost-effective use of satellite bandwidth and improved quality of service for real-time
traffic can be achieved by providing remote-to-remote connections over a single satellite
hop, as provided in mesh networks (Figure 7).

Figure 7. Single-Hop Mesh Overlay Network Topology

In a full-mesh network, all remotes can communicate directly with one another. A mesh
network is ideal for any application that is intolerant of the double-hop delays inherent in star
networks and where remote-to-remote communications are required. A mesh satellite
network typically consists of a master terminal, which provides network management and
network synchronization, and remote user terminals.
One advantage of the iDirect Mesh implementation is that mesh remote terminals continue to
be part of the star network. This allows the monitor and control functions and the timing
reference for the mesh network to be provided by the existing hub equipment over the SCPC
downstream carrier.
In an iDirect Mesh network, the hub broadcasts to all remotes on the star outbound channel.
This broadcast transmits user traffic as well as the control and timing information for the
entire network of inbound mesh and star channels. The mesh remotes transmit user data on
mesh TDMA inbound channels, which other mesh remotes are configured to receive.

Note: The following remote model types are supported over iDirect Mesh: iNFINITI
5300/5350; iNFINITI 7300/7350; iNFINITI 8350; Evolution e8350; iConnex-100;
iConnex-700; and iConnex e800.


Each mesh remote is configured with a “home” mesh inroute. A mesh remote receives its
home inroute using the second demodulator on the Indoor Unit (IDU). All mesh transmissions
to the remote must be sent on the home inroute of the destination remote. Therefore, any
peer remote sending single-hop data must frequency hop to the peer’s home inroute before
transmitting.

Note: iDirect Mesh is logically a full-mesh network topology. All remotes can
communicate directly with each other (and the hub) in a single-hop. This is
accomplished by allowing the remote to receive both the outbound channel
from the hub and its home TDMA mesh inbound channel. This is sometimes
referred to as a star/mesh configuration. When referring to the iDirect product
portfolio, “star/mesh” and “mesh” are synonymous.

Figure 8. Basic Mesh Topology

Network Architecture
All mesh networks consist of a single broadcast outbound channel and at least one mesh TDMA
inbound channel per inroute group.

Transponder Usage
The outbound and inbound channels must use the same transponder.


Outbound TDM Channel


The outbound channel for a mesh network is similar to the outbound channel for a star
network, except for the differences noted in this section.
The hub must be able to receive its own mesh outbound channel (“loopback” signal). This
signal provides methods to:
• Determine hub-side rain fade
• Measure frequency offset introduced by the hub-side equipment
• Determine and track the location of the satellite relative to the hub
The hub accurately tracks the movement of the satellite. This information is used by each
remote to determine upstream timing synchronization.

Note: The outbound loopback signal is demodulated on the same line card (M1D1
only) that modulates the outbound channel. This line card is capable of
demodulating a star or mesh inbound channel.
The outbound channel supporting a mesh network carries all outbound user data and the
network monitoring and control information used to control the mesh inbound channels,
including timing and slot allocation for the inbound channels and dynamic bandwidth allocation
changes for the remotes. The hub is the only node in a mesh network that transmits on the
mesh outbound channel.
The outbound channel in a mesh network has the following capabilities:
Bandwidth Management (QoS): The outbound channel possesses the full suite of QoS (Quality
of Service) functionality provided by iDirect. This includes Committed Information Rate (CIR),
minimum and maximum information rates, Class Based Weighted Fair Queuing (CBWFQ), etc.
Group QoS is fully supported for mesh networks beginning with iDS Release 8.2.
Centralized Management: The iDirect mesh network can be managed from the centralized
Network Operations Center (NOC) running the iDirect NMS applications. The hub provides
connectivity for this centralized network management.
Network Synchronization: The iDirect TDMA inbound channels take advantage of significant
bandwidth efficiency and performance enhancements provided by the accurate timing and
frequency synchronization that the outbound channel provides. The centralized hub provides
the frequency and timing references to the remote terminals over the outbound channel. This
results in lower equipment costs for the remote terminals.

Inbound D-TDMA Channels


Each mesh remote terminal must be able to listen to its mesh home inbound channel echo
return. If a remote can hear itself, it can be assumed that all other remotes will also be able
to hear this remote. (See “Routing” on page 20 to determine how the system behaves if a
remote does not hear its own bursts.) The same low-noise block converter (LNB) must be used
for both the outbound and inbound channels. Frequency offsets introduced in the LNB are
estimated for the outbound channel and applied to the inbound demodulator.
A mesh network consists of one or more inroute groups. Each mesh inroute group supports one
or more inbound Deterministic Time Division Multiple Access (D-TDMA) channels. This shared
access channel provides data and voice IP connectivity for remote-to-remote and remote-to-
hub communications. Although the hub receives and demodulates the mesh inbound channels,


it does not transmit on these channels. The remote terminals are assigned transmit time slots
on the inbound channels based on the dynamic bandwidth allocation algorithms provided by
the hub.
The D-TDMA channels provide the following capabilities:
Multiple Frequencies: A mesh network can contain one or more D-TDMA mesh inbound
channels for remote-to-remote and remote-to-hub connectivity within an inroute group. Each
terminal is able to quickly hop between these frequencies to provide the same efficient
bandwidth usage as a single large TDMA channel, but without the high-power output and large
antenna requirements for large mesh inbound channels. Beginning with iDS Release 8.2,
iDirect supports separate inbound carriers with different data rates for star and mesh. See
“Mesh/Star Frequency Hopping ” on page 18 for details.
Dynamic Allocation: Bandwidth is only assigned to remote terminals that need to transmit
data, and is taken away from idle terminals. These allocation decisions are made several
times a second by the hub which is constantly monitoring the bandwidth demands of the
remote terminals. The outbound channel is then used to transmit the dynamic bandwidth
allocation of the mesh inbound carriers.
Single Hop: Data is able to traverse the network directly from a remote terminal to another
remote terminal with a single trip over the satellite. This is critical for latency-sensitive
applications, such as voice and video connections.
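
As a rough illustration of the dynamic allocation described above, the Python sketch below splits a frame's time slots among remotes in proportion to their current demand. The proportional-share policy and all names are assumptions for illustration; iDirect's actual allocation algorithms are not described in this guide.

    # Demand-driven TDMA slot allocation (illustrative sketch only).
    def allocate_slots(demands_kbps, total_slots):
        """Split a frame's slots in proportion to demand; idle remotes get none.
        Rounding remainders are ignored in this sketch."""
        total_demand = sum(demands_kbps.values())
        if total_demand == 0:
            return {remote: 0 for remote in demands_kbps}
        return {remote: int(total_slots * d / total_demand)
                for remote, d in demands_kbps.items()}

    # The hub recomputes a plan like this several times per second and
    # broadcasts it to the remotes on the outbound channel.
    print(allocate_slots({"remote_a": 256, "remote_b": 64, "remote_c": 0}, 120))
    # {'remote_a': 96, 'remote_b': 24, 'remote_c': 0}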
iDirect networks support a number of features, including the following:
• Application and Group QoS
• Voice jitter handling
• IP routing
• TCP/HTTP acceleration
• cRTP
All such iDirect features are valid and available for mesh networks.

Mesh Topology Options


Physical Topology
You can design and implement a mesh network topology as either integrated mesh and star, or
as segregated mesh and star. Both options are discussed in this section.

Integrated Mesh and Star Topology


To implement an integrated mesh and star network on an existing hub outbound carrier and
infrastructure, the Network Operator uses the current outbound channel for the network, but
adds additional mesh inbound channel(s). The existing outbound is used for both existing star
remotes and for newly added mesh remotes. The resulting hybrid network that includes star
and mesh sub-networks is shown in Figure 9.

Note: The different sizes of the star and mesh carriers in the figure represent the
higher power transmission required for mesh inroutes to operate at the
contracted power.


Figure 9. Integrated Mesh and Star Network

Multiple mesh and star inroute groups may co-exist in a single network. Each mesh inroute
group uses its own inbound channels for remote-to-remote traffic within the respective group
and for star return traffic. There are no limitations to the number or combination of inroute
groups in a network, other than the bandwidth required and slot availability in a hub chassis
for each inroute. However, a mesh inroute group is limited to 250 remotes and eight inroutes.

Segregated Mesh and Star Topology


To implement a segregated mesh and star network on an existing hub outbound carrier and
infrastructure, the Network Operator adds a new outbound channel and one or more inbound
channels to the existing network, resulting in a totally separate mesh network. This topology
can be achieved on two iDirect product platforms:
• Hub Mesh: Separate outbound carriers and separate inbound carrier(s) on the iDirect
15000 series™ Satellite Hub (see Figure 10).
• Mesh Private Hub: A standalone segregated mesh option using an additional outbound
carrier and a single inbound carrier on the iDirect 10000 series™ Private Satellite Hub (see
Figure 11).


Figure 10. Segregated Mesh and Star Networks

Figure 11. Mesh Private Hub


Network Topology
When determining the best topology for your iDirect Mesh network, you should consider the
following points regarding TCP traffic acceleration, single-hop versus double-hop traffic,
traffic between mesh inroute groups, and the relative size of your star and mesh carriers.
All unreliable (un-accelerated) traffic between mesh-enabled remotes in an inroute group
takes a single hop. By default, all reliable traffic between the same remotes is accelerated
and takes a double-hop. This must be considered when determining the outbound channel
bandwidth.
In certain networks, the additional outbound traffic required for double-hop traffic may not
be acceptable. For example, in a network where almost all the traffic is remote-to-remote,
there is no requirement for a large outbound channel, other than for the accelerated TCP
traffic. In that case, iDirect provides the ability to configure TCP traffic to take a single hop
between mesh remotes. However, the single hop TCP traffic will not be accelerated.

Note: When TCP acceleration (sometimes called “spoofing”) is disabled, each TCP
session is limited to a maximum of 128 kbps due to latency introduced by the
satellite path. Under ideal conditions, maximum throughput is 800 kbps
without acceleration.
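
The per-session figure in the note is consistent with the classic TCP bound of one window per round-trip time. A minimal sketch, assuming an unscaled 16 KB receive window and roughly one second of effective round-trip time over the double-hop path (both values are assumptions, not figures from this guide):

    # Un-accelerated TCP throughput ceiling: window_size / round_trip_time.
    window_bytes = 16 * 1024      # unscaled TCP window (assumed)
    rtt_s = 1.0                   # approx. round trip over a double-hop path (assumed)

    max_kbps = window_bytes * 8 / rtt_s / 1000
    print(f"{max_kbps:.0f} kbps")  # ~131 kbps, in line with the ~128 kbps noted above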
A network may consist of multiple inroute groups. Although un-accelerated traffic within a
mesh inroute group takes a single hop, all traffic between inroute groups takes a double hop.
For example, if a network contains two mesh inroute groups (group A and group B), then a
mesh remote in group A can communicate with a mesh remote in group B only via the hub.
You may configure different symbol rates for star and mesh carriers in an inroute group. The
symbol rate for star carriers must be between 64 and 5750 ksym/s. The symbol rate for mesh
carriers must be between 128 ksym/s and 2048 ksym/s.
When configuring two symbol rates, the following restrictions apply:
• All carriers (star and mesh) must have the same FEC rate and modulation type.
• All star carriers in the inroute group must have the same symbol rate.
• All mesh carriers in the inroute group must have the same symbol rate.
• The symbol rate for star-only carriers must be greater than or equal to the symbol rate for
mesh carriers.
• The difference between the two symbol rates must be a multiple of 2^n, where n is an
integer.

Note: Since there is only one transmitter per remote, the overall data rate a remote
achieves on a star inroute is reduced by the amount of time spent transmitting
on the mesh inroutes. Since it takes a longer time to transmit an equal amount
of data at a lower data rate, the star inroute capacity of the remote can be
significantly reduced by mesh transmissions when different symbol rates are
used for star and mesh.
The following section provides an example of a typical network topology carrying high-volume
star traffic and low-volume mesh traffic.


High-Volume Star / Low-Volume Mesh


The high-volume star / low-volume mesh topology reflects the requirement to operate a
network with higher data rate requirements for the star inbound and lower data rate
requirements for the mesh inbound channels. This topology combines high-volume data traffic
between the remotes and a central data repository (for example, Internet, intranet or HQ),
with lower data rate mesh inbound channels used for low-volume traffic sent directly
between remote peers (for example, one to four voice lines). Figure 12 shows an example of a
high-volume star / low-volume mesh network topology.
The benefit of high-volume star / low-volume mesh is that the Network Operator avoids the
additional costs associated with higher-specification BUCs and space segment that would be
required for a higher-bandwidth mesh inbound channel (i.e., 256+ kbps) that would not be
fully occupied.
To support this topology, Mesh Phase II allows you to configure different data rates for star
and mesh traffic within a single inroute group.

Figure 12. High-Volume Star / Low-Volume Mesh Topology


Frequency Hopping
Mesh Phase II supports frequency hopping between mesh and star inbound channels within an
inroute group.

Mesh Frequency Hopping


A mesh remote receives a single mesh inbound channel, but can transmit on multiple mesh
inbound channels. Frequency hopping cannot be disabled for an inroute group containing
multiple mesh inbound channels. A mesh remote listens to both the TDM outbound channel
and its configured “home” mesh inbound channel. (See Figure 13 on p. 17). The remote does
not listen to multiple mesh inbound channels. This requires that a remote always transmit to
another mesh remote on the home inroute of the destination remote. (See Figure 14 on p.
18).

Figure 13. Mesh Frequency Hopping: Inroute Group with Two Inroutes


Figure 14. Mesh Frequency Hopping: Communicating Between Inroutes

Mesh/Star Frequency Hopping


Frequency hopping also allows remotes to transmit on both mesh and star inbound carriers,
but the remote only receives its home mesh inbound channel. (See Figure 15 on p. 19.) You
can configure different symbol rates for star and mesh carriers in the same inroute group.
However, some restrictions apply:
• All star-only carriers in the inroute group must have the same symbol rate.
• All mesh carriers in the inroute group must have the same symbol rate.
• The difference between the two symbol rates must be a multiple of 2^n, where n is an
integer.
• The symbol rate for star-only carriers must be greater than or equal to the symbol rate for
mesh carriers.
• No more than eight carriers (star and mesh) are allowed in the inroute group.
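
These checks can be expressed as a small validation routine, sketched below in Python. This is illustrative only: in particular, the "multiple of 2^n" rule is interpreted here as requiring the rate difference to be zero or an exact power of two, which is an assumption; iBuilder performs the authoritative checks during configuration.

    # Illustrative validation of the inroute-group symbol rate rules above.
    def validate_inroute_group(star_ksym, mesh_ksym):
        """star_ksym / mesh_ksym: symbol rates (ksym/s) of star and mesh carriers."""
        errors = []
        if len(star_ksym) + len(mesh_ksym) > 8:
            errors.append("no more than eight carriers (star and mesh) allowed")
        if len(set(star_ksym)) > 1:
            errors.append("all star carriers must share one symbol rate")
        if len(set(mesh_ksym)) > 1:
            errors.append("all mesh carriers must share one symbol rate")
        if star_ksym and mesh_ksym:
            star, mesh = star_ksym[0], mesh_ksym[0]
            if star < mesh:
                errors.append("star symbol rate must be >= mesh symbol rate")
            diff = star - mesh
            if diff and (diff & (diff - 1)):   # power-of-two test (assumed reading)
                errors.append("rate difference must be a power of two")
        return errors

    print(validate_inroute_group([1024, 1024], [512]))   # [] -- passes these checks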


Figure 15. Frequency Hopping with Star and Mesh

Mesh Data Path


This section describes traffic selection, routing, and real-time setup for traffic in the mesh
data path.

Single-Hop and Double-Hop Traffic Selection


In a mesh network, the data path is dependent upon the type of traffic. This dependency is
important when designing and sizing the network and its associated outbound and inbound
channels. By default, only real-time, non-connection-oriented (non-TCP/un-accelerated)
traffic traverses a mesh link from remote to remote. This results in non-real-time, remote-to-
remote traffic (TCP), which is latency tolerant, to flow through the hub. This must be taken
into consideration when sizing bandwidth. Double-hop traffic is accelerated on both hops.

Note: Allowing only non-TCP traffic to be transmitted directly from one remote to
another adds to the QoS functionality within the iDirect platform. By default,
only allowing the traffic that benefits from a single hop between remotes results
in fewer configuration issues for the Network Operator. Mesh inbound channels
can be scaled appropriately for time-sensitive traffic such as voice and video.


Routing
Prior to the introduction of the mesh feature, all upstream data from a remote was routed
over the satellite to the hub protocol processor. With the introduction of iDirect Mesh,
additional routing information is provided to each remote in the form of a routing table. This
table contains routing information for all remotes in the mesh inroute group and the subnets
behind those remotes. The routing table is periodically updated based on the addition or
deletion of new remotes in the mesh inroute group; the addition or deletion of static routes in
the NMS; enabling or disabling of RIP; or in the event of failure conditions detected on the
remote or line card. The mesh routing table is periodically multicast to all remotes in the
mesh inroute group.
To increase remote-to-remote availability, the system provides data path redundancy for
mesh traffic. It is possible for a remote to drop out of the mesh network due to a deep rain
fade at the remote site. The remote detects this condition when it fails to receive its own
bursts. However, because the hub has a large antenna, the remote may still be able to
operate in the star network. Under these circumstances, the mesh routing table is updated,
causing all traffic to and from that remote to be routed through the hub. When the rain fade
passes, the mesh routing table is updated again, and mesh traffic for that remote again takes
the single-hop path.
To operate in the mesh network, a mesh remote requires power, frequency and timing
information determined by the hub from its SCPC loopback signal. Because of this, the entire
mesh network falls back to star mode if the hub fails to receive its loopback. In that event,
the routing table is updated causing all traffic to or from all remotes to be routed through the
hub. Once the hub re-acquires the outbound loopback signal, the mesh routing table is again
updated and the remotes rejoin the mesh network.
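
A minimal sketch of this path selection, assuming a hub-distributed table that records each remote's home inroute and whether it is currently hearing its own bursts. All names and the table layout are illustrative; the real routing tables also carry the subnets behind each remote.

    # Mesh-vs-hub path selection (illustrative sketch only).
    def next_hop(dst_remote, mesh_table, self_in_mesh):
        """Return (path, inroute) for traffic addressed to dst_remote."""
        entry = mesh_table.get(dst_remote)
        if self_in_mesh and entry and entry["mesh_ok"]:
            # Single hop: frequency hop to the destination's home inroute.
            return ("single-hop", entry["home_inroute"])
        # Otherwise fall back to the star path through the hub.
        return ("via-hub", None)

    table = {"remote_b": {"home_inroute": 2, "mesh_ok": True},
             "remote_c": {"home_inroute": 1, "mesh_ok": False}}  # rain-faded peer
    print(next_hop("remote_b", table, self_in_mesh=True))  # ('single-hop', 2)
    print(next_hop("remote_c", table, self_in_mesh=True))  # ('via-hub', None)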

Real-Time Call Setup


Call setup times for real-time applications, such as VoIP voice calls within an iDirect mesh
network, are identical to those of an iDirect star network (assuming other variables, such as
available bandwidth and QoS settings, are similar). This mesh and star similarity also holds
true in situations where a central call manager is installed at the hub to coordinate call setup.

Hardware Requirements
This section describes the hub and remote hardware requirements for mesh networks. Please
refer to the section “Mesh Feature Set and Capability Matrix” on page 36 for a detailed list of
iDirect products and features that support mesh.

HUB RFT Equipment


The outbound TDM loopback channel and the inbound TDMA channel must take the same RF
path at the Hub. The Uplink Control Protocol (UCP) assumes that the frequency offsets that
are introduced in the hub down-conversion equipment and the signal strength degradations in
the downlink path are common to both the outbound TDM loopback channel and the inbound
TDMA channel. UCP does not work correctly and mesh peer remotes cannot communicate
directly with each other if the hub RFT uses different equipment for each channel.


Hub Chassis Hardware


For a Hub Chassis configuration:
• The outbound carrier must be sourced by an M1D1 (or M1D1-T) iNFINITI line card.
• The receive cable must be physically connected to the receive port on the M1D1 card.
• The inbound carrier must be demodulated by either an M1D1 or M0D1 iNFINITI line card.

Note: A NetModem II Plus line card does not support mesh.

Private Hub Hardware


Only iNFINITI Private Hubs support mesh for both the outbound and inbound carriers. Minihub-
15 and Minihub-30 Private Hubs do not support mesh.

Hub ODU Hardware


If an LNB is used at the hub (Hub Chassis or Private Hub), it must be an externally referenced
PLL downconverter LNB.

Remote IDU Hardware


Only iNFINITI 5300/5350, 7300/7350, 8350, Evolution e8350, and iCONNEX-100/700/e800
remote models support mesh.
The iDirect mesh terminal consists of the following components and features that are all
integrated into a single indoor unit (IDU):
Integrated Features: IP Router, TCP Optimization, RTTM feature (Application and System
QoS), cRTP, Encryption, MF-TDMA, D-TDMA, Automatic Uplink Power Control and Turbo
Coding.

TDM Outbound Receiver: Continuously demodulates the outbound carrier from the hub and
provides the filtered IP packets and network synchronization information. The outbound
receiver connects to the antenna LNB via the L-band receive IFL cable. The down-converted
satellite spectrum from the LNB is also provided to the D-TDMA receiver.

TDMA Satellite Transmitter: The TDMA transmitter is responsible for transmitting all data
destined for the hub or for other remote terminals over the satellite TDMA channels.

TDMA Satellite Receiver: The TDMA receiver is responsible for demodulating a TDMA carrier
to provide remote-to-remote mesh connectivity. The receiver tunes to the channel based on
control information received from the hub.

Remote ODU Hardware


In addition to the correct sizing of the ODU equipment (remote antenna and remote BUC) to
close the link, a PLL LNB must be used for the downconverter at the remote.


Note: Compared to star VSAT networks, where the small dish size and low-power BUC
are acceptable for many applications, a mesh network typically requires both
larger dishes and BUCs to close the link. See “Network Considerations” on
page 22.
Whenever possible, iBuilder enforces hardware requirements during network configuration.

Network Considerations
This section discusses the following topics with respect to iDirect Mesh networks: “Link
Budget Analysis,” “Uplink Control Protocol (UCP),” and “Bandwidth Considerations.”

Link Budget Analysis


When designing a mesh network, attention must be given to ensuring that equipment is
correctly sized. A Link Budget Analysis (LBA) for the outbound channel is performed in the
same way for both a star and mesh network. Typically, the outbound channel operates at the
Equal Power Equal BandWidth (EPEBW) point on the satellite.
In a star network, the inbound channel is typically configured to operate at a point lower than
the EPEBW point. A mesh inbound channel operates at or near EPEBW. The link budget
analysis provides a per-carrier percentage of transponder power or Power Equivalent
Bandwidth (PEB) where the availability of the remote-remote pair is met. For a given data
rate, this PEB is determined by the worst-case remote-to-remote (or possibly remote-to-hub)
link. The determination of BUC size, antenna size, FEC rate and data rate is an iterative
process designed to find the optimal solution.
Once determined, the PEB is used as the target or reference point for sizing subsequent mesh
remotes. It can be inferred that a signal reaching the satellite from any other remote at the
operating or reference point is detected by the remote in the worst-case EIRP contour
(assuming fade is not greater than the calculated fade margin). Remote sites in more
favorable EIRP contours may operate with a smaller antenna/BUC combination.

Note: iDirect recommends that an LBA be performed for each site to determine
optimal network performance and cost.


Mesh Link Budget Outline


This section outlines the general tasks for determining a mesh link budget. Refer to Figure 16.

Figure 16. Mesh VSAT Sizing

To determine a mesh Link Budget Analysis, perform the following tasks (a sketch of the
iterative sizing loop follows the task list):


1. Reference Mesh VSAT: Using the EIRP and G/T footprints of the satellite of interest and
the region to be covered, determine the current or future worst-case site (Step 1). The
first link calculation is this worst-case site back to itself (Step 2). Evaluate various
combinations of antenna size, HPA size, and FEC to find the combination that provides the
most efficient transponder usage and practical VSAT sizing for the desired carrier rate
(Steps 3 and 4). The percentage of transponder power, or Power Equivalent Bandwidth
(PEB), required is the reference point for subsequent link budgets.
2. Forward/Downstream Carrier: Using the reference site and its associated antenna size
determined in Task 1, determine the combination of modulation and FEC that provides
the most efficient transponder usage.
3. Successive Mesh VSATs: The sizing of additional sites is a two-step process, with the first
link budget sizing the antenna and the second sizing the HPA.
• Antenna Size: Calculate a link budget using the Reference VSAT as the transmit site
and the new site as the receive site. Using the same carrier parameters as those for
the Reference site, the antenna size is correctly determined when the PEB is less than
or equal to the reference PEB.
• HPA Size: Use the same carrier parameters as those used for the Reference site to
determine the required HPA size.
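
The loop below sketches this iterative determination in Python. It is illustrative only:
compute_peb is a stand-in for a full link budget calculation (EIRP and G/T contours, rain
models, waveform Eb/No requirements), and every candidate value, threshold, and returned
number is assumed for the example rather than taken from an iDirect specification.

from itertools import product

def compute_peb(antenna_m, hpa_w, fec, data_rate_kbps):
    """Stand-in for a full link budget: returns (peb_percent, availability)."""
    margin = antenna_m * hpa_w * (1.0 - fec)       # toy link-margin figure
    peb = data_rate_kbps / (margin * 100.0)        # toy PEB percentage
    availability = min(0.9999, 0.99 + margin / 1000.0)
    return peb, availability

candidates = product([1.2, 1.8, 2.4],         # antenna sizes (m)
                     [4, 8, 16],              # HPA sizes (W)
                     [0.431, 0.533, 0.660])   # FEC rates

best = None
for antenna, hpa, fec in candidates:
    peb, avail = compute_peb(antenna, hpa, fec, data_rate_kbps=512)
    if avail >= 0.999 and (best is None or peb < best[0]):
        best = (peb, antenna, hpa, fec)

if best:
    peb, antenna, hpa, fec = best
    print(f"Reference PEB {peb:.2f}% with {antenna} m / {hpa} W / FEC {fec}")

The lowest-PEB combination that still meets the availability target becomes the reference
point for sizing subsequent remotes.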

Uplink Control Protocol (UCP)


Changes have been made to the iDirect UCP algorithm used to maintain optimal operation (at
the physical layer) of a remote in a mesh network. These changes affect frequency, power,
and timing.

Frequency
In a star configuration, frequency offsets introduced to the upstream signal (by frequency
down-conversion at a remote’s LNB, up-conversion at a remote’s BUC, satellite frequency
translation, and down-conversion at the hub) are all nulled out by Uplink Control Protocol
messages from the hub to each remote. This occurs every 20 seconds. Short-term frequency
drift by each remote can be accommodated by the hub because it uses a highly stable
reference to demodulate each burst.
A remote does not have such a highly stable local reference source. The remote uses the
outbound channel as a reference source for the inbound channel. A change in temperature of
a DRO LNB can cause a significant frequency drift to the reference. In a mesh network, this
can have adverse effects on both the SCPC outbound and TDMA inbound carriers, resulting in
a remote demodulator that is unable to reliably recover data from the mesh channel. A PLL
LNB offers superior performance, since it is not subject to the same short term frequency
drift.

Power
A typical iDirect star network consists of a hub with a large antenna, and multiple remotes
with small antennas and small BUCs. In a star network, UPC adjusts each remote’s transmit
power on the inbound channel until a nominal carrier-to-noise signal strength of
approximately 9 dB is achieved at the hub. Because of the large hub antenna, the operating
point of a remote is typically below the contracted power (EPEBW) at the satellite. For a
mesh network, where remotes typically have smaller antennas than the hub, a remote does
not reliably receive data from another remote using the same power. It is therefore
important to maximize the use of all available power.
UPC for a mesh network adjusts the remote Tx power so that it always operates at the EIRP at
beam center on the satellite to close the link, even under rain fade conditions. This can be
equal to or less than the contracted power/EPEBW. Larger antennas and BUCs are required to
meet this requirement. The EIRP at beam center and the size of the equipment are calculated
based on a link budget analysis.
The UPC algorithm uses a combination of the following parameters to adjust each remote
transmit power to achieve the EIRP@BC at the satellite:
• Clear-sky C/N for each TDMA inbound and for the SCPC outbound loopback channels
(obtained during hub commissioning)

24 Technical Reference Guide


iDS Release 8.3
Network Considerations

• The hub UPC margin (the extent to which external hub-side equipment can accommodate
hub UPC; see the Note below)
• The outbound loopback C/N at the hub
• Each remote inbound C/N at the hub
The inbound UPC algorithm determines hub-side fade, remote-side fade, and correlated fades
by comparing the current outbound and inbound signal strengths against those obtained
during clear sky calibration. For example, if the outbound loopback C/N falls below the clear
sky condition, it can be assumed that a hub-side fade (compensated by hub-side UPC)
occurred. Assuming no remote side fade, an equivalent downlink fade of the inbound channel
would be expected. No power correction is made to the remote. If hub-side UPC margin is
exceeded, then outbound loopback C/N is affected by both uplink and downlink fade and a
significant difference compared to clear sky would be observed.
Similarly if the inbound C/N drops for a particular remote and the outbound loopback C/N
does not change compared to the clear sky value, UPC increases the remote transmit power
until the inbound channel clear sky C/N is attained. Similar C/N comparisons are made to
accommodate correlated fades.
UPC now operates on a per-carrier basis. Each carrier’s individually commissioned clear-sky
C/N is used by the algorithm when monitoring and adjusting the carrier.
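
The sketch below illustrates the fade-discrimination logic just described. The threshold and
step values, the function signature, and the decision structure are assumptions made for
illustration; the actual UPC algorithm runs in the iDirect hub and is considerably more
involved.

THRESHOLD_DB = 0.5   # C/N deviation treated as a real fade (assumed value)
STEP_DB = 0.5        # per-adjustment power change (assumed value)

def upc_adjustment(outbound_loopback_cn, inbound_cn,
                   clear_sky_outbound_cn, clear_sky_inbound_cn):
    """Return the remote Tx power change (dB) the hub would request."""
    hub_fade = clear_sky_outbound_cn - outbound_loopback_cn
    inbound_fade = clear_sky_inbound_cn - inbound_cn

    if hub_fade > THRESHOLD_DB and inbound_fade <= hub_fade + THRESHOLD_DB:
        # Hub-side fade: the inbound drop is explained by the hub downlink
        # fade, so no correction is sent to the remote.
        return 0.0
    if inbound_fade > THRESHOLD_DB:
        # Remote-side (or correlated) fade: step the remote Tx power up
        # toward the clear-sky inbound C/N.
        return min(STEP_DB, inbound_fade)
    return 0.0

# Inbound C/N down 2 dB while the loopback sits at clear sky: remote fade.
print(upc_adjustment(12.0, 8.0, 12.0, 10.0))   # -> 0.5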

Note: For each remote in a mesh network, the inbound C/N at the hub is likely to be
greater than that typically observed in a star network. Also, when a remote is
in the mesh network, the nominal C/N signal strength value for a star network
is not used as the reference.
In the event of an outbound loopback failure, the UPC algorithm reverts to star mode. This
redundancy allows remotes in a mesh inroute group to continue to operate in star-only mode.
Figure 17 illustrates Uplink Power Control.

Note: iDirect equipment does not support hub-side UPC. Typical RFT equipment at a
teleport installation uses a beacon receiver to measure downlink fade. An
algorithm running in the beacon receiver calculates equivalent uplink fade and
adjusts an attenuator to ensure a constant power (EPEBW) at the satellite for
the outbound carrier. The beacon receiver and attenuator are outside of
iDirect’s control. For a hub without UPC, the margin is set to zero.


Figure 17. Uplink Power Control

Timing
An inbound channel consists of a TDMA frame with an integer number of traffic slots. In a star
network, during the acquisition process, the arrival time of the Start of the TDMA
frame/inbound channel at the hub is determined. The acquisition algorithm adjusts in time
the start of transmission of the frame for each remote such that it arrives at the satellite at
exactly the same time. The burst scheduler in the protocol processor ensures that two
remotes do not burst at the same time. With this process the hub line card knows when to
expect each burst relative to the outbound channel transmit reference. As the satellite moves
within its station keeping box, the uplink control protocol adjusts the Start timing of a frame
for each remote, so that the inbound channel frame always arrives at the hub at the same
time.
A similar mechanism that informs a remote when to expect the start of frame for the inbound
channel is required. This is achieved by determining the round trip time for hub-to-satellite-
to-hub from the outbound channel loopback. This information is relayed to each remote. An
algorithm determines when to expect the Start of the inbound channel, and determines burst
boundaries.
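
As a rough illustration, the sketch below shows how such an expectation might be formed,
assuming (hypothetically) that the remote takes the hub-to-satellite delay as half of the
relayed loopback round-trip time and knows its own delay to the satellite from acquisition.
All names and values are illustrative, not iDirect's actual algorithm.

def expected_inbound_start(outbound_frame_ref_s, hub_loopback_rtt_s,
                           remote_to_sat_delay_s):
    """Estimate when the start of the inbound TDMA frame reaches the remote."""
    hub_to_sat_delay = hub_loopback_rtt_s / 2.0
    # The frame arrives at the satellite aligned with the hub's transmit
    # reference; the remote hears it one remote-to-satellite hop later.
    return outbound_frame_ref_s + hub_to_sat_delay + remote_to_sat_delay_s

def burst_boundaries(frame_start_s, slot_len_s, num_slots):
    """Burst boundaries fall at integer multiples of the traffic slot length."""
    return [frame_start_s + i * slot_len_s for i in range(num_slots)]

start = expected_inbound_start(0.0, 0.478, 0.239)   # roughly GEO path delays
print(burst_boundaries(start, 0.005, 4))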

Note: A mesh remote listens to all inbound channel bursts, including bursts it originates.
Only those bursts transmitted from other remotes and destined for that remote
are processed by software. All other traffic is dropped, including bursts
transmitted by the remote itself.

Bandwidth Considerations
When determining bandwidth requirements for a mesh network, it is important to understand
that there are a number of settings that must be common to all remotes in an inroute group.
In a star network, a VLAN can be configured on a hub-remote pair basis. For a mesh network,
all remotes in the inroute group must have a common VLAN configuration. VLAN IDs require
two bytes of header for transmission. An additional two bytes are required to indicate that the
destination applies to mesh traffic only. Star traffic is unaffected; however, if VLAN is
configured, it is also enabled for traffic to the hub.
In a star network, remote status is periodically sent to the hub and reported in iMonitor. With
the same periodicity, additional status information is reported on the health of the mesh link.
This traffic is nominal.
There is a finite amount of processing capability on any remote. A mesh remote receives and
processes star outbound traffic; processes and sends star and mesh inbound traffic; and
receives and processes mesh inbound traffic. The amount of traffic a remote can maintain on
the outbound and inbound channels varies greatly depending on the short-term ratio of these
traffic types. It must
be understood that although a line card can support an inbound channel of 4 Mbps aggregated
across many remotes, a remote-remote connection will not support this rate. However, a
remote does drop inbound traffic not destined for it, thereby limiting unnecessary processing
of bursts. Sample performance curves are available from iDirect.

Mesh Commissioning
The commissioning of a mesh network requires a few steps that are not required for the
commissioning of a star network. Due to the requirement for the mesh inbound channel to
operate at the contracted power point on the satellite, calibration of both the outbound
loopback and each mesh inbound channel at the hub under clear sky conditions is required
during commissioning. Signal strength measurements (C/N) of the respective channels
observed in iMonitor are recorded in iBuilder. The clear sky C/N values obtained during
commissioning are used for uplink power control of each remote.


Note: In a mesh network, where relatively small antennas (compared to the hub
antenna) are used at remote sites, additional attention to Link Budget Analysis
(LBA) is required. Each remote requires an LBA to determine antenna and BUC
size for the intended availability and data rate.

Note: In order for a mesh network to operate optimally and to prevent over-driving
the satellite, commissioning must be performed under clear sky conditions. See
the iBuilder User Guide for more information.

Star-to-Mesh Network Migration


There are tasks to perform prior to migrating from a star to a mesh network, as well as tasks
to perform during the migration. These tasks are summarized in this section. See the iBuilder
User Guide for a detailed procedure.

Pre-Migration Tasks
Prior to converting an existing star network to a mesh network, iDirect recommends that you
perform the following:
• A link budget analysis comparison for star/mesh versus the star-only network.
• Verification of the satellite transponder configuration for the hub and each remote. All
hubs and remotes must be in the same geographic footprint. They must be able to receive
their own transmit signals. This precludes the use of the majority of spot beam and hemi-
beam transponders for mesh networks.
• Verification that all ODU hardware requirements are met: externally referenced PLL LNBs
for Private Hubs; PLL LNB for all remotes; and BUC and antenna sizing for a given data
rate.
• Calibration of each outbound and inbound channel to determine clear sky C/N values.
• Re-commissioning of each remote. This applies to initial transmit power only, and can be
achieved remotely.

Migration Tasks
To migrate from a star to a mesh network, perform the following:
• Reconfigure the M1D1 Tx Line Card. In iBuilder, selecting the check box that enables the
card for mesh automatically generates the required configuration for the outbound
loopback carrier. The outbound channel clear sky and UPC margin information must also
be entered in iBuilder.
• Calibrate the inbound carrier on an M1D1 or M0D1. This is performed at the same time as
commissioning the first remote in a mesh inroute group. See the iBuilder User Guide for
more information. Subsequent mesh inbound channels can be calibrated and added to the
network without affecting existing outbound or inbound channels.
• Re-commission the initial Tx power setting and record the outbound and inbound clear sky
C/N conditions. Selection of mesh in iBuilder automatically configures the second
demodulator for the inbound channel. Incorrect commissioning of a remote may prevent
the remote from acquiring into the network.


Configuring and Monitoring Mesh Networks


This section describes the functionality of the iDirect NMS to build and monitor mesh
networks. Complete details are contained in the iBuilder User Guide and the iMonitor User
Guide.

Building Mesh Networks


As with star networks, iBuilder provides all the tools necessary to create and configure mesh
networks. All mesh restrictions, such as remote and line card model types, carrier types, etc.,
are checked automatically by the software. For detailed information on building mesh
networks, including special considerations for link budgets and commissioning, see the
iBuilder User Guide.

Special Mesh Constants


One significant difference between star/mesh and pure star networks is that, in a mesh
network, the line card and all remotes must listen to their own loopback satellite
transmissions. During the commissioning process for mesh line cards and remotes, the
ideal clear-sky values (in dB) for these loopback signals should be calculated and recorded
in iBuilder. The over-the-air values (i.e. non-loopback) for the same signals must also be
calculated and recorded in iBuilder.
The clear-sky loopback values shown in Table 1 should be recorded during star/mesh
configuration.

Table 1. Mesh-Related Constants

SCPC Loopback Clear-Sky C/N
  Meaning: Ideal clear-sky SCPC signal quality (C/N) as perceived by the
  transmit line card.
  Where Recorded: The Transmit line card for the mesh network.

Hub UPC Margin
  Meaning: Transmit power range of the external uplink power equipment at
  the hub.
  Where Recorded: The Transmit line card for the mesh network.

TDMA Clear-Sky C/N
  Meaning: Calibrated clear-sky TDMA signal quality as perceived by the line
  card. You must commission the first mesh remote to get this value.
  Where Recorded: The Uplink Control parameters tab on the mesh inroute
  group dialog.

SCPC Clear-Sky C/N
  Meaning: Calibrated clear-sky SCPC signal quality (C/N) as perceived by
  each remote.
  Where Recorded: Each mesh remote in the mesh network.

TDMA Loopback Clear-Sky C/N
  Meaning: Calibrated clear-sky TDMA signal quality (C/N) as perceived by
  each remote.
  Where Recorded: Each mesh remote in the mesh network.

Turning Mesh On and Off in iBuilder


For operational flexibility, iBuilder allows you to toggle the mesh transmit capability at
various levels of granularity in your network: remote, Tx line card, and inroute group. When
you turn mesh off for a specific remote, it affects only that remote; other mesh remotes in the
same inroute group continue to operate as mesh. However, when you turn mesh off at the Tx


line card or inroute group level, all mesh traffic stops for that inroute group, regardless of the
settings for each remote in the group. Frequency requirements still apply.

Changes to Acquisition/Uplink Control in iBuilder


The addition of mesh topology to the iDS system required some changes to the
Acquisition/Uplink Control dialog in iBuilder. Specifically:
• Acquisition/Uplink Control parameters are now specified per inroute group rather than per
network. The tab for entering these values has moved from the Network dialog box to the
Inroute Group dialog box, and you must specify Acq/UCP parameters for each inroute group
in a network.
• The power adjust range is relative, not absolute. Prior to iDS Release 7.0, you specified
absolute fine and coarse adjust ranges based on a fixed nominal C/N value. Beginning
with iDS Release 7.0, you specify the fixed nominal value in a separate field, and the
fine/coarse range values are specified relative to this nominal value. (See Figure 18.)

Figure 18. Specifying UPC Parameters


Common Remote Parameters for Mesh Inroute Groups


The following features must be turned on or off for all mesh remotes in an inroute group:
• UDP Header Compression
• cRTP
• UDP Payload Compression
iBuilder allows you to configure these features at the inroute group level of a mesh-enabled
inroute group, as shown here.

Figure 19. Common Remote Parameters for Mesh

The Mesh options in Figure 19 are only available at the inroute group level if the Enabled
check box is selected. If Mesh is enabled, the values set at the inroute group level apply to all
remotes in the inroute group, possibly overriding the individual remote settings. If Mesh is
disabled, the individual remote settings are honored.

Monitoring Mesh Networks


iMonitor provides monitoring tools and reports for mesh overlay networks. A number of mesh-
related parameters have been added to existing messages, and some new displays provide
detailed real-time and historical mesh information.

Additional Hub Statistics Information


The hub statistics message now contains the following information for mesh channels:
• SCPC SNR cal: The SCPC carrier-to-noise ratio as perceived by the SCPC loopback channel
on the mesh line card.
• SCPC symbol offset: The offset between the nominal and actual frame position on the
SCPC loopback channel on the mesh line card.
• SCPC frequency offset: The offset between the nominal and actual frequency as
perceived by the SCPC loopback channel on the mesh line card.
• SCPC frame lock status: The current lock status of the transmit line card’s SCPC loopback
channel.
• SCPC lostlock count: The number of times the mesh line card lost lock on the SCPC
loopback channel since the last statistics message.


Additional Remote Status Information


The remote status message, sent to the NMS from each in-network remote every 20 seconds,
now contains the following additional information for mesh-enabled remotes:
• SCPC C/N: the carrier-to-noise ratio of the downstream SCPC channel as perceived at the
remote site.
• TDMA Loopback C/N: the carrier-to-noise ratio of the remote’s TDMA carrier as perceived
at the remote site through loopback.
• TDMA Symbol Offset: the offset between the TDMA transmission symbol timing and the
TDMA received symbol timing. This information is for debug purposes only; the actual UCP
symbol adjustments are still calculated at the hub and transmitted through UCP
messages.
• TDMA Frequency Offset: The offset between expected frequency and actual frequency
perceived by the mesh remote’s TDMA receiver. This information is for debug purposes
only; the actual UCP frequency adjustments are still calculated at the hub and
transmitted through UCP messages.
• Rx and Tx Reliable: the count of reliable bytes sent to (Rx) and from (Tx) this remote on
the mesh channel. Reliable traffic is typically TCP.
• Rx and Tx Unreliable: the count of unreliable bytes sent to (Rx) and from (Tx) this
remote on the mesh channel. Unreliable traffic is typically UDP voice or other real-time
traffic.
• Rx and Tx OOB: the count of control and overhead traffic bytes (link layer, etc.) sent to
(Rx) and from (Tx) this remote on the mesh channel.

Note: These additional fields are sent in the remote status message only for mesh-
enabled remotes. Non-mesh remotes do not incur the additional overhead for
this information, and archived information for non-mesh remotes is not
meaningful.

Mesh Traffic Statistics


The NMS collects mesh traffic to and from remotes, saves it in the data archive, and provides
it to iMonitor for real-time and/or historical display. To display mesh statistics, select the
Mesh Traffic Graph option from the network, inroute group, or individual remote level. As
with the IP and SAT traffic graphs, data for multiple remotes is aggregated into a single value
when you select more than one remote.
The data types collected per-remote are the same for mesh as for the SAT graph: unreliable
bytes sent/received, reliable bytes sent/received, overhead bytes sent/received.
When viewing statistics for mesh-enabled remotes, it is important to keep the following facts
in mind:
• Remote-to-remote traffic traverses the satellite on the TDMA inroute.
• When viewing the SAT traffic graph, the upstream graph includes any remote-to-remote
mesh traffic.
• When viewing the mesh traffic graph, the displays for sent and received do not include
non-mesh traffic. That is, traffic from the remote(s) destined for an upstream host is not
included on the display.


• Mesh traffic will never show up on the IP statistics display, since this display represents
traffic upstream from the protocol processor.
Consider Table 2. Assume that Remote 1 and Remote 2 are passing 150 kbps of traffic between
each other. At the same time, Remote 1 is also sending 150 kbps of traffic to the Internet. The
Mesh, SAT, and IP traffic graphs will show the following statistics for these two remotes:
The IP traffic graph will show 150 kbps on the upstream for Remote 1.
The SAT traffic graph will show 450 kbps on the upstream for Remotes 1 and 2, 300 kbps for
the mesh traffic and 150 kbps for the Internet-bound traffic.
The mesh traffic graph will show 300 kbps received and 300 kbps transmitted for Remotes 1
and 2, as shown in Table 2.

Table 2. Mesh IP Statistics Example

            Tx kbps   Rx kbps   Total kbps
Remote 1    150       150       300
Remote 2    150       150       300
Total       300       300

Note: In the example above, the total throughput on the channel is not 600 kbps.
Each byte in mesh is actually counted twice: once by the sender and once by the
receiver.
You may use the mesh IP statistics to determine if there is mesh traffic loss on the link. In
order to do this, you must select all mesh remotes for the display. When you do this, the
transmitted kbps and received kbps should be identical. If they are not, it is likely there is
packet loss across the mesh link.
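
A minimal sketch of that comparison, assuming the per-remote Tx and Rx counts have already
been read from the display (the data layout here is hypothetical):

def mesh_loss_ratio(remote_stats):
    """remote_stats: (tx_kbps, rx_kbps) pairs for ALL mesh remotes."""
    total_tx = sum(tx for tx, _ in remote_stats)
    total_rx = sum(rx for _, rx in remote_stats)
    if total_tx == 0:
        return 0.0
    # Each mesh byte is counted once by the sender and once by the receiver,
    # so with no loss total_rx equals total_tx.
    return (total_tx - total_rx) / total_tx

stats = [(150, 150), (150, 148)]    # Remote 1, Remote 2 (illustrative)
print(f"Estimated mesh loss: {mesh_loss_ratio(stats):.1%}")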


Figure 20. Mesh, SAT, IP Statistics Collection (mesh traffic statistics are collected at the
remotes; IP traffic statistics are collected on the upstream LAN segment, upstream of the
protocol processor; SAT traffic statistics are collected on the tunnel LAN segment at the
protocol processor)

Remote-to-Remote Mesh Probe


The Probe Mesh pane is available from the individual mesh remote in the iMonitor network
tree view. It allows you to examine statistics on mesh communications between pairs of mesh
remotes.
Specifically, Probe Mesh allows you to select a pair of remotes and observe the following data for
each:
• The number of attempts to transmit to the peer remote
• The number of bursts successfully transmitted to the peer remote
• The number of bursts received from the peer remote
• The number of bursts received from the peer remote that were dropped
To display the Probe Mesh pane:
• Right-click on a mesh remote and click Probe Mesh to display the Select Mesh Remotes
Pair dialog box.
• Select the peer remote from the Remote Two list and click OK. The Probe Mesh pane is
displayed showing the information described above.


Note: Probe Mesh is primarily intended for debugging. When Probe Mesh is enabled,
the remotes send debug information to iMonitor. This increases the processing
on the remotes and uses upstream bandwidth that could otherwise be used to
send traffic.

Long-Term Bandwidth Usage Report for Mesh


iMonitor provides a version of the Long-Term Bandwidth Usage Report specifically for mesh
remotes, allowing fast and flexible bandwidth utilization analysis. A percent-of-max-capacity
figure is also calculated, which you can use to quantify unused bandwidth margin on both the
upstream and downstream channels.
At each level of the Tree, you can report on all remotes below the element you have selected.
To generate, view, save, or print the Mesh Long-Term Bandwidth Usage report, follow the
directions below:
• Right-click a network, inroute group, or remote.
• Select MESH Long Term Bandwidth Usage. The Long Term Bandwidth Usage Parameters
dialog box appears.
For further details on report parameters, see the iMonitor User Guide.


Mesh Feature Set and Capability Matrix


The tables in this section show the iDirect hardware and features supported in each phase of
the mesh release. The mesh phases supported by the various iDirect releases are:
• Mesh Phase I: Releases 7.0, 7.1, 8.0
• Mesh Phase II: Releases 8.2, 8.3

Table 3. iDirect Products Supporting Mesh

Product Type            iDirect Model                                           Phase Supported
Line Card               M1D1 (Required for mesh outroute and supports inroute)  Phase I and II
Line Card               M1D1-T (Required for TRANSEC for mesh outroute          Phase II
                        and supports inroute)
Line Card               M0D1 (Supports mesh inroute)                            Phase I and II
Private Hub             Private Hub (Mesh)                                      Phase I
5000 iNFINITI Series    5300                                                    Phase I and II
5000 iNFINITI Series    5350                                                    Phase I and II
7000 iNFINITI Series    7300                                                    Phase I and II
7000 iNFINITI Series    7350                                                    Phase I and II
8000 iNFINITI Series    8350                                                    Phase II
8000 Evolution Series   e8350 (DVB-S2 hardware)*                                Phase II
iConnex                 iConnex 100                                             Phase II
iConnex                 iConnex 700 (Formerly iConnex 200)                      Phase I and II

* In Releases 8.2 and 8.3, DVB-S2 hardware works in legacy mode only.


Table 4. Mesh Feature Set and Compatibility Matrix

iDS Feature                                              Mesh Phase
Maximum Mesh Inroute Data Rate (3 Mbps)                  Phase I and II
NMS Functionality                                        Phase I and II
RTTM/QoS                                                 Phase I and II
Priority Queuing                                         Phase I and II
Group QoS                                                Phase I (8.0 only) and Phase II
Turbo Product Codes                                      Phase I and II
Spot Beam Capability**                                   Not Supported
TCP (Non-Accelerated)                                    Phase I and II
Inbound TCP Accelerated Data Path (Remote-Hub-Remote)    Phase I and II
Single Mesh Inroute per Inroute Group                    Phase I and II
Multiple Mesh Inroutes per Inroute Group                 Phase II
Multiple Mesh and Star Inroutes per Inroute Group        Phase II
Frequency Hopping                                        Phase II
Link Encryption                                          Possible future
TRANSEC                                                  Phase II
Adaptive Coding & Modulation (ACM)                       Possible future
DVB-S2 hardware support*                                 Phase II

* In Releases 8.2 and 8.3, DVB-S2 hardware works in legacy mode only.
** Cross-strapped capability may be developed in future iDS releases.

3 Modulation Modes and FEC Rates

This chapter describes the modulation modes and Forward Error Correction (FEC) rates that
are supported in iDS Release 8.3. It also describes possible upstream and downstream
combinations.

iDirect Modulation Modes And FEC Rates


The complete set of modulation modes, channel types, and FEC rates is shown in the following
tables. Cells marked with an “X” represent combinations of modulation and FEC rates that are
not supported.

Note: For specific Eb/No values for each FEC rate and Modulation combination, refer
to the iDirect Link Budget Analysis Guide, which is available for download from
the TAC web page located at http://tac.idirect.net.


Table 5. Modulation Modes and FEC Rates

SCPC - Star and Mesh Networks

FEC       Tx Hardware  Rx Hardware          BPSK  QPSK  8PSK  SS BPSK†  Block Size  Payload Bytes §
.431 §§§  M1D1         iNFINITI             Yes   Yes   X     X         1K          53
.533 §§§  M1D1         iNFINITI             Yes   Yes   X     X         1K          66
.495      M1D1         iNFINITI, Evolution  Yes   Yes   X     Yes       4K          251
.793      M1D1         iNFINITI, Evolution  Yes   Yes   Yes   Yes       4K          404
.879      M1D1         iNFINITI, Evolution  Yes   Yes   Yes   Yes       16K         1800

TDMA

FEC   Tx Hardware          Rx Hardware  BPSK  QPSK  8PSK            SS BPSK†  Block Size  Payload Bytes §§
.431  iNFINITI, Evolution  MxD1         Yes   Yes   X               Yes       1K          43
.533  iNFINITI, Evolution  MxD1         Yes   Yes   X               Yes       1K          56
.660  iNFINITI, Evolution  MxD1         Yes   Yes   Yes (no 3100)   Yes       1K          72
.793  iNFINITI, Evolution  MxD1         Yes   Yes   X               X         4K          394

iSCPC Links

FEC   Hardware Support               BPSK  QPSK  8PSK  SS BPSK†  Block Size  Payload Bytes §
.431  M1D1-iSCPC, 5xxx, 73xx, 8350   Yes   Yes   X     X         1K          53
.533  M1D1-iSCPC, 5xxx, 73xx, 8350   Yes   Yes   X     X         1K          66
.495  M1D1-iSCPC, 5xxx, 73xx, 8350   Yes   Yes   X     Yes       4K          251
.793  M1D1-iSCPC, 5xxx, 73xx, 8350   Yes   Yes   Yes   Yes       4K          404
.879  M1D1-iSCPC, 5xxx, 73xx, 8350   Yes   Yes   Yes   Yes       16K         1800

† Spread Spectrum BPSK (8350 and M1D1-TSS only).

§ SCPC channel framing uses a modified HDLC header, which requires bit-stuffing to prevent false end-of-frame
detection. The actual payload is variable, and always slightly less than the numbers indicated in the table.

§§ The TDMA Payload Bytes value removes the TDMA header overhead of 10 bytes: Demand=2 + LL=6 + PAD=2.
The SAR, Encryption, and VLAN features add additional overhead.

§§§ This FEC combination is not recommended for new designs. For new network designs, iDirect recommends
using FEC 0.495. If you have an existing network operating at an information rate of 10 Msps or greater, the
network may experience errors due to an FEC decoding limitation.

4 iDirect Spread Spectrum Networks

This section provides information about Spread Spectrum technology in an iDirect network. It
discusses the following topics:
• “What is Spread Spectrum?” on page 41
• “Downstream Specifications” on page 43
• “Upstream Specifications” on page 44

What is Spread Spectrum?


Spread Spectrum (SS) is a transmission technique in which a pseudo-noise (PN) code is
employed as a modulation waveform to “spread” the signal energy over a bandwidth much
greater than the signal information bandwidth. The signal is “despread” at the receiver by
using a synchronized replica of the pseudo-noise code. By spreading the signal information
over greater bandwidth, less transmit power is required. A sample SS network diagram is
shown in Figure 21.

Figure 21. Spread Spectrum Network Diagram

Spreading takes place when the input data (dt) is multiplied with the PN code (pnt) which
results in the transmit baseband signal (txb). The baseband signal is then modulated and
transmitted to the receiving station. Despreading takes place at the receiving station when
the baseband signal is demodulated (rxb) and correlated with the replica PN (pnr) which
results in the data output (dr).
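
The toy Python example below demonstrates the relationship just described (dt multiplied by
pnt gives txb at the transmitter; rxb correlated with pnr recovers dr at the receiver), using
+1/-1 chips and an illustrative four-chip PN code. Real PN sequences, synchronization, and
modulation are far more elaborate.

SF = 4
pn = [1, -1, 1, 1]          # illustrative PN code, one data bit long

def spread(data_bits):
    """Multiply each data bit (+1/-1) by the PN chips: dt * pnt -> txb."""
    return [bit * chip for bit in data_bits for chip in pn]

def despread(chips):
    """Correlate with a synchronized PN replica: rxb * pnr -> dr."""
    bits = []
    for i in range(0, len(chips), SF):
        correlation = sum(c * p for c, p in zip(chips[i:i + SF], pn))
        bits.append(1 if correlation > 0 else -1)
    return bits

data = [1, -1, 1]
assert despread(spread(data)) == data    # the original bits are recovered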
Beginning with iDS Release 8.0, Spread Spectrum transmission is supported in both TDMA and
SCPC configurations. SS mode is employed in iDirect networks to minimize adjacent satellite
interference (ASI). ASI can occur in applications such as Comms-On-The-Move (COTM) because
the small antenna (typically sub-meter) used on mobile vehicles has small aperture size, large
beam width, and high pointing error which can combine to cause ASI. Enabling SS reduces the
spectral density of the transmission so that it is low enough to avoid interfering with adjacent
satellites.
Conversely, when receiving through a COTM antenna, SS improves carrier performance in
cases of ASI (channel/interference).
The iDirect SS is an extension of BPSK modulation in both upstream and downstream. The
signal is spread over wider bandwidth according to a Spreading Factor (SF) that you select.
You can select an SF of 1, 2, or 4. Each symbol in the spreading code is called a “chip”, and
the spread rate is the rate at which chips are transmitted. For example, selecting an SF of 1
means that the spread rate is one chip per symbol (which is equivalent to regular BPSK, and
therefore, there is no spreading). Selecting an SF of 4 means that the spread rate is four chips
per symbol.
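
The arithmetic implied by these definitions is simple: the chip rate is the symbol rate
multiplied by the spreading factor and, for the downstream carrier, Table 6 gives the occupied
bandwidth as roughly 1.2 times the chip rate. A quick illustration with an assumed symbol
rate:

symbol_rate_ksym = 256                            # assumed symbol rate
for sf in (1, 2, 4):
    chip_rate = symbol_rate_ksym * sf             # kchip/s
    occupied_bw = 1.2 * chip_rate                 # kHz, per Table 6
    print(f"SF={sf}: {chip_rate} kchip/s, about {occupied_bw:.0f} kHz occupied")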
An additional Spreading Factor, COTM SF=1, is for upstream TDMA carriers only. Like an SF of
1, if you select COTM SF=1, there is no spreading. However, the size of the carrier unique
word is increased, allowing mobile remotes to remain in the network when they might
otherwise drop out. An advantage of this spreading factor is that you can receive error-free
data at a slightly lower C/N compared to regular BPSK. However, carriers with COTM SF=1
transmit at a slightly lower information rate.
COTM SF=1 is primarily intended for use by fast moving mobile remotes. The additional unique
word overhead allows the remote to tolerate more than 10 times as much frequency offset as
can be tolerated by regular BPSK. That makes COTM SF=1 the appropriate choice when the
Doppler effect caused by vehicle speed and acceleration is significant even though the link
budget does not require spreading. Examples include small maritime vessels, motor vehicles,
trains, and aircraft. Slow moving, large maritime vessels generally do not require COTM SF=1.
Spread Spectrum can also be used to hide a carrier in the noise of an empty transponder.
However, SS should not be confused with Code Division Multiple Access (CDMA), which is the
process of transmitting multiple SS channels simultaneously on the same bandwidth.
Spread Spectrum may also be useful in situations where local or RF interference is
unavoidable, such as hostile jamming. However, iDirect designed the Spread Spectrum feature
primarily for COTM and ASI mitigation. iDirect SS may be a good solution for overcoming some
instances of interference or jamming, but it is recommended that you discuss your particular
application with iDirect sales engineering.

Spread Spectrum Hardware Components


The Hub Line Card (HLC) that supports Spread Spectrum is the M1D1-TSS line card and it
occupies two slots in the hub chassis. Therefore, the maximum number of SS HLCs you can
have in one chassis is 10, and you cannot install an M1D1-TSS HLC in slot 20.

Note: You must install the M1D1-TSS HLC in a slot that has one empty slot to the right.
For example, if you want to install the HLC in slot 4, slot 5 must be empty. Be sure
that you also check chassis slot configuration in iBuilder to verify that you are not
installing the HLC in a reserved slot.
The remote that supports spread spectrum is the 8350 series iNFINITI remote. The 3000, 5000,
and 7000 series iNFINITI remotes do not support spread spectrum.


Downstream Specifications
The specifications for the spread spectrum downstream channel are outlined in Table 6.

Note: Beginning with iDS Release 8.2, the iBuilder selections for Spreading Factors of
2 and 4 on the iBuilder Carrier Information tab changed to COTM SF=2 and
COTM SF=4. These Spreading Factors are identical to 2 and 4 in iDS Release
8.0.

Table 6. Spread Spectrum: Downstream Specifications

PARAMETER            VALUE                          ADDITIONAL INFORMATION
Modulation           BPSK                           QPSK is not supported in SS
Spreading Factor     1, 2, or 4                     SF=1 results in no spreading
Symbol Rate          64 ksym/s - 15 Msym/s
Chip Rate            15 Mchip/s maximum
FEC Rate             0.879, 0.793, 0.495
BER Performance      < 1E-8 at 1 dB above the
                     theoretical C/N threshold
Occupied BW          1.2 * Chip Rate                Plus hub downconverter oscillator
                                                    stability factor
Spectral Mask        IESS-308/309, MIL-STD 188xxx
Carrier Suppression  > -30 dBc
Hardware Platform    M1D1-TSS HLC

Supported Forward Error Correction (FEC) Rates


The upstream and downstream FEC rates that are supported in this release are described in
Table 7.

Table 7. Spread Spectrum: Supported FEC Rates

BLOCK SIZE   UPSTREAM FEC       DOWNSTREAM FEC
1K           .66, .431, .533    N/A
4K           N/A                .495, .793
16K          N/A                .879


Upstream Specifications
The specifications for the spread spectrum upstream channel are outlined in Table 8. The
Spreading Factor COTM 1, used in fast moving mobile applications, is described on page 42.

Note: Beginning with iDS Release 8.2, the iBuilder selections for Spreading Factors of 2
and 4 on the iBuilder Carrier Information tab changed to COTM SF=2 and COTM
SF=4. These Spreading Factors are identical to 2 and 4 in iDS Release 8.0.

Table 8. Spread Spectrum: Upstream Specifications

PARAMETER                  VALUE
Modulation                 BPSK
Spreading Factor           1, COTM 1, 2, or 4
Symbol Rate                64 ksym/s - 1.875 Msym/s
Chip Rate                  7.5 Mchip/s maximum
FEC Rate                   .66, .431, .533
BER Performance            Refer to the iDirect Link Budget Analysis Guide
Maximum Frequency Offset   1/8% of Fsym
Unique Word Overhead       128 symbols
Burst Size                 1024 bits
Occupied Bandwidth         1.2 * Symbol Rate
Hardware Platform          iNFINITI series 8350

5 QoS Implementation Principles

This chapter describes how you can configure Quality of Service definitions to achieve
maximum efficiency by prioritizing traffic.

Quality of Service (QoS)


Quality of Service is defined as the way IP traffic is classified and prioritized as it flows
through the iDirect system.

QoS Measures
When discussing QoS, at least four interrelated measures are considered. These are
Throughput, Latency, Jitter, and Packet Loss. This section describes these parameters in
general terms, without specific regard to an iDirect network.

Throughput. Throughput is a measure of capacity and indicates the amount of user data that
is received by the end user application. For example, a G729 voice call without additional
compression (such as cRTP), or voice suppression, requires a constant 24 Kbps of application
level RTP data to achieve acceptable voice quality for the duration of the call. Therefore this
application requires 24 Kbps of throughput. When adequate throughput cannot be achieved
on a continuous basis to support a particular application, QoS can be adversely affected.

Latency. Latency is a measure of the amount of time between events. Unqualified latency is
the amount of time between the transmission of a packet from its source and the receipt of
that packet at the destination. If explicitly qualified, it may also mean the amount of time
between a request for a network resource and the time when that resource is received. In
general, latency accounts for the total delay between events and it includes transit time,
queuing, and processing delays. Keeping latency to a minimum is very important for VoIP
applications for human factor reasons.

Jitter. Jitter is a measure of the variation of latency on a packet-by-packet basis. Referring to
the G729 example again, if voice packets (containing two 10 ms voice samples) are
transmitted every 20 ms from the source VoIP equipment, ideally those voice packets arrive
at the destination every 20 ms; this is a jitter value of zero. When dealing with a packet-
switched network, zero jitter is particularly difficult to guarantee. To compensate for this, all
VoIP equipment contains a jitter buffer that collects voice packets and sends them at the
appropriate interval (20 ms in this example).
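
The sketch below shows a minimal jitter buffer for the 20 ms G729 packets of this example.
The fixed playout delay and the data layout are assumptions made for illustration; real VoIP
equipment typically uses adaptive buffers.

import heapq

INTERVAL_MS = 20    # G729 packet interval from the example above
BUFFER_MS = 40      # fixed playout delay absorbing up to 40 ms of jitter (assumed)

def playout_schedule(arrivals_ms):
    """arrivals_ms: list of (sequence_number, arrival_time_ms) pairs."""
    heap = list(arrivals_ms)
    heapq.heapify(heap)                           # play out in sequence order
    schedule = []
    while heap:
        seq, arrival = heapq.heappop(heap)
        target = BUFFER_MS + seq * INTERVAL_MS    # ideal playout instant
        schedule.append((seq, max(target, arrival)))   # late packets slip
    return schedule

# Packets 0-3 arrive with jitter; playout is re-spaced to every 20 ms.
print(playout_schedule([(0, 5), (1, 31), (2, 44), (3, 71)]))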


Packet Loss. Packet Loss is a measure of the number of packets that are transmitted by a
source, but not received by the destination. The most common cause of packet loss on a
network is network congestion. Congestion occurs whenever the volume of traffic exceeds the
available bandwidth. In these cases, packets are filling queues internal to network devices at
a rate faster than those packets can be transmitted from the device. When this condition
exists, network devices drop packets to keep the network in a stable condition. Applications
that are built on a TCP transport interpret the absence of these packets (and the absence of
their related ACKs) as congestion and they invoke standard TCP slow-start and congestion
avoidance techniques. With real time applications, such as VoIP or streaming video, it is often
impossible to gracefully recover these lost packets because there is not enough time to
retransmit lost packets. Packet loss may affect the application in adverse ways. For example,
parts of words in a voice call may be missing or there may be an echo; video images may break
up or become block-like (pixelation effects).

QoS Application, iSCPC and Filter Profiles


QoS Profiles are defined by Application Profiles, iSCPC Profiles and Filter Profiles. An
Application or iSCPC Profile is a group of service levels, collected together and given a user-
defined name. A QoS Filter Profile encapsulates a single filter definition, and it consists of a
set of rules rather than a set of service levels. Application, iSCPC and Filter Profiles are
applied to downstream and upstream traffic independently, so that upstream traffic may have
certain QoS definitions, whereas downstream traffic may have a different set of QoS
definitions. (Figure 22 on page 47).
iSCPC Profiles and Application Profiles are used differently in TDMA networks than they are in
iSCPC connections.
• For TDMA networks, Application Profiles define the Group QoS Applications that you add
to your Service Profiles. You then assign the Service Profile to your TDMA remotes using
the Group QoS tab for your Bandwidth Pools.
• iSCPC Profiles are assigned directly to iSCPC line cards on the QoS tab. The Line Card
assignments of iSCPC Profiles are mirrored on the iSCPC remote.
Application Profiles are only used for Group QoS. iSCPC Profiles are used only by iSCPC line
cards and remotes and are not associated with Group QoS. See “Group QoS” on page 50 for a
general discussion of Group QoS. For details on configuring profiles, see chapter 8,
“Configuring Quality of Service for iDirect Networks” of the iBuilder User Guide.


Figure 22. Remote and QoS Profile Relationship

Classification Profiles for Applications


This section describes how the iDirect system distinguishes application IP packets from less
important background traffic. Each packet that enters the iDirect system is classified into one
of the configured Service Levels.

Service Levels
A Service Level may represent a single application (such as VoIP traffic from a single IP
address) or a broad class of applications (such as all TCP based applications). Each Service
Level is defined by one or more packet-matching rules. The set of rules for a Service Level
allows logical combinations of comparisons to be made between the following IP packet
fields (a minimal classification sketch follows the list):


• Source IP address
• Destination IP address
• Source port
• Destination port
• Protocol (such as DiffServ DSCP)
• TOS priority
• TOS precedence
• VLAN ID
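
The sketch below illustrates this style of rule-based classification. The rule syntax and
field names are illustrative only and do not reflect the iBuilder configuration format; within
a single rule all fields must match (a logical AND), while a service level's multiple rules are
alternatives (a logical OR).

def classify(packet, service_levels):
    """Return the first service level whose rules match the packet."""
    for name, rules in service_levels:
        for rule in rules:                                  # rules are OR-ed
            if all(packet.get(field) == value               # fields are AND-ed
                   for field, value in rule.items()):
                return name
    return "default"

service_levels = [
    ("VoIP", [{"protocol": "udp", "dst_port": 5060},   # e.g. SIP signaling
              {"dscp": 46}]),                          # e.g. EF-marked media
    ("AllTCP", [{"protocol": "tcp"}]),
]

packet = {"protocol": "udp", "dst_port": 5060, "src_ip": "10.0.0.5"}
print(classify(packet, service_levels))                # -> VoIP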

Packet Scheduling
Packet Scheduling is a method used to transmit traffic according to priority and classification.
In a network that has a remote that always has enough bandwidth for all of its applications,
packets are transmitted in the order that they are received without significant delay.
Application priority makes little difference since the remote never has to select which packet
to transmit next.
In a network where there are periods of time in which a remote does not have sufficient
bandwidth to transmit all queued packets, the remote scheduling algorithm must determine
which packet to transmit next from the set of packets queued across a number of service levels.
For each service level you define in iBuilder, you can select any one of three queue types to
determine how packets using that service level are to be selected for transmission. These are
Priority Queue, Class-Based Weighted Fair Queue (CBWFQ), and Best-Effort Queue.
The procedures for defining profiles and service levels are detailed in chapter 8, “Configuring
Quality of Service for iDirect Networks” of the iBuilder User Guide.
Priority Queues are emptied before CBWFQ queues are serviced and CBWFQ queues are in
turn emptied before Best Effort queues are serviced. Figure 23 on page 49 presents an
overview of the iDirect packet scheduling algorithm.


Figure 23. iDirect Packet Scheduling Algorithm

The packet scheduling algorithm (Figure 23) first services packets from Priority Queues in
order of priority, P1 being the highest priority. It selects CBWFQ packets only after all Priority
Queues are empty. Similarly, packets are taken from Best Effort Queues only after all CBWFQ
packets are serviced.
You can define multiple service levels using any combination of the three queue types. For
example, you can use a combination of Priority and Best Effort Queues only.

Priority Queues
There are four levels of user Priority Queues:
• Level 1: P1 (Highest priority)
• Level 2: P2
• Level 3: P3
• Level 4: P4 (Lowest priority)


All queues of higher priority must be empty before any lower-priority queues are serviced. If
two or more queues are set to the same priority level, then all queues of equal priority are
emptied using a round-robin selection algorithm prior to selecting any packets from lower
priority queues.
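
A sketch of this servicing order, assuming a simple list rotation for the round-robin; the
queue contents and data layout are illustrative.

from collections import deque

def next_packet(queues_by_priority):
    """queues_by_priority: {1: [deque, ...], 2: [...], ...}; 1 is highest."""
    for priority in sorted(queues_by_priority):
        qlist = queues_by_priority[priority]
        for i, q in enumerate(qlist):
            if q:
                # Rotate so the next equal-priority queue is tried first on
                # the following call: a simple round-robin.
                queues_by_priority[priority] = qlist[i + 1:] + qlist[:i + 1]
                return q.popleft()
    return None    # nothing queued at any priority

queues = {1: [deque(["voice1"]), deque(["voice2"])], 2: [deque(["web"])]}
print(next_packet(queues), next_packet(queues), next_packet(queues))
# -> voice1 voice2 web: the P1 queues drain before the P2 queue is served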

Class-Based Weighted Fair Queues


Packets are selected from Class-Based Weighted Fair Queues for transmission based on
the service level (or “class”) of the packet. Each service level is assigned a “cost”. Packet cost
is defined as the cost of its service level multiplied by its length. Packets with the lowest cost
are transmitted first, regardless of service level.
The cost of a service level changes during operation. Each time a queue is passed over in
favor of other service levels, the cost of the skipped queue is credited, which lowers the cost
of the packets in that queue. Over time, all service levels get an opportunity to transmit
occasionally even in the presence of higher priority traffic. Assuming there is a continuously
congested link with an equal amount of traffic on each service level, the total bandwidth
available is divided more evenly by deciding transmission priority based on each service
level's cost.
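
The sketch below applies those rules directly: a packet's cost is its service level cost
multiplied by its length, the queue whose head packet has the lowest effective cost transmits,
and every queue passed over is credited. The credit step is an assumed value; the actual
accounting in the iDirect scheduler is not specified here.

def pick_cbwfq(queues, credit_step=100.0):
    """queues: {name: {"cost": float, "credit": float, "pkts": [lengths]}}."""
    backlogged = {n: q for n, q in queues.items() if q["pkts"]}
    if not backlogged:
        return None
    # Effective cost of the head packet, reduced by any accumulated credit.
    def head_cost(q):
        return q["cost"] * q["pkts"][0] - q["credit"]
    winner = min(backlogged, key=lambda n: head_cost(backlogged[n]))
    for name, q in backlogged.items():
        if name == winner:
            q["credit"] = 0.0              # the winner spends its credit
        else:
            q["credit"] += credit_step     # skipped queues become cheaper
    return winner, queues[winner]["pkts"].pop(0)

queues = {
    "video": {"cost": 1.0, "credit": 0.0, "pkts": [1200, 1200]},
    "bulk":  {"cost": 4.0, "credit": 0.0, "pkts": [1200, 1200]},
}
while (selection := pick_cbwfq(queues)):
    print(selection)    # video wins first; credited bulk is eventually served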

Best Effort Queues


Packets in Best Effort queues do not have priority or cost. All packets in these queues are
treated equally by applying a round-robin selection algorithm to the queues. Best Effort
queues are serviced only if there are no packets waiting in Priority Queues and no packets
waiting in CBWFQ queues.

Group QoS
Group QoS (GQoS), introduced in iDS Release 8.0, enhances the power and flexibility of
iDirect’s QoS feature for TDMA networks. It allows advanced network operators a high degree
of flexibility in creating subnetworks and groups of remotes with various levels of service
tailored to the characteristics of the user applications being supported.
Group QoS is built on the Group QoS tree: a hierarchical construct within which containership
and inheritance rules allow the iterative application of basic allocation methods across groups
and subgroups. QoS properties configured at each level of the Group QoS tree determine how
bandwidth is distributed when demand exceeds availability.
Group QoS enables the construction of very sophisticated and complex allocation models. It
allows network operators to create network subgroups with various levels of service on the
same outbound carrier or inroute group. It allows bandwidth to be subdivided among
customers or Service Providers, while also allowing oversubscription of one group’s configured
capacity when bandwidth belonging to another group is available.

Note: Group QoS applies only to TDMA networks. It does not apply to iDirect iSCPC
connections.

Note: If you are upgrading from a pre-8.0 iDirect Release, your TDMA networks can be
converted from the older QoS implementation to comply with the Group QoS
feature. See your Network Upgrade Procedure for special upgrade instructions
regarding this conversion.


For details on using the Group QoS feature, see the chapter titled “Configuring Quality of
Service for iDirect Networks” in the iBuilder User Guide.

Group QoS Structure


The iDirect Group QoS model has the following structure as shown in Figure 24:

Figure 24. Group QoS Structure

Bandwidth Pool
A Bandwidth Pool is the highest node in the Group QoS hierarchy. As such, all sub-nodes of a
Bandwidth Pool represent subdivisions of the bandwidth within that Bandwidth Pool. In the
iDirect network, a Bandwidth Pool consists of an outbound carrier or an inroute group.

Bandwidth Group
A Bandwidth Pool can be divided into multiple Bandwidth Groups. Bandwidth Groups allow a
network operator to subdivide the bandwidth of an outroute or inroute group. Different
Bandwidth Groups can then be assigned to different Service Providers or Virtual Network
Operators (VNO).
Bandwidth Groups can be configured with any of the following:
• CIR and MIR: Typically, the sum of the CIR bandwidth of all Bandwidth Groups equals
the total bandwidth. When MIR is larger than CIR, the Bandwidth Group is allowed to
exceed its CIR when bandwidth is available.
• Priority: A group with highest priority receives its bandwidth before lower-priority
groups.


• Cost: Cost allows bandwidth allocations to different groups to be unequally
apportioned within the same priority. For equal requests, lower cost nodes are granted
more bandwidth than higher cost nodes.
Bandwidth Groups are typically configured using CIR and MIR for a strict division of the total
bandwidth among the groups. By default, any Bandwidth Pool is configured with a single
Bandwidth Group.

Service Group
A Service Provider or a Virtual Network Operator can further divide a Bandwidth Group into
sub-groups called Service Groups. A Service Group can be used strictly to group remotes into
sub-groups or, more typically, to differentiate groups by class of service. For example, a
platinum, gold, silver and best effort service could be defined as Service Groups under the
same Bandwidth Group.
Like Bandwidth Groups, Service Groups can be configured with CIR, MIR, Priority and Cost.
Service Groups are typically configured with either a CIR and MIR for a physical separation of
the groups, or with a combination of Priority, Cost and CIR/MIR to create tiered service. By
default, a single Service Group is created for each Bandwidth Group.

Application Group
An Application defines a specific service available to the end user. Application Groups are
associated with any Service Group. The following are examples:
• VoIP
• Video
• Oracle
• Citrix
• VLAN
• NMS Traffic
• Default
Each Application can have one or more matching rules, such as:
• Protocol: TCP, UDP, and ICMP
• Source and/or Destination IP or IP Subnet
• Source and/or Destination Port Number
• DSCP Value or DSCP Ranges
• VLAN
Each Application can be configured with any of the following:
• CIR/MIR
• Priority
• Cost

Service Profiles
Service Profiles are derived from the Application Group by selecting Applications and
matching rules and assigning per-remote CIR and MIR when applicable. While the Application
Group specifies the CIR/MIR by Application for the whole Service Group, the Service Profile
specifies the per-remote CIR/MIR by Application. For example, the VoIP Application could be


configured with a CIR of 1 Mbps for the Service Group in the Application Group and a CIR of 14
Kbps per-remote in the Service Profile.
Typically, all remotes in a Service Group use the Default Profile for that Service Group. When
a remote is created under an inroute group, the QoS Tab allows the operator to assign the
remote to a Bandwidth Group and Service Group. The new remote automatically receives the
default profile for the Service Group. The Group QoS interface can also be used to assign a
remote to a Service Group or change the assignment of the remote from one Service Group to
another.
In order to accommodate special cases, however, additional profiles (other than the Default
Profile) can be created. For example, profiles can be used by a specific remote to prioritize
an Application that is not used by other remotes; to prioritize a specific VLAN on a remote; or
to prioritize traffic to a specific IP address (such as a file server) connected to a specific
remote in the Service Group. Or a Network Operator may want to configure some remotes for
a single VoIP call and others for two VoIP calls. This can be accomplished by assigning
different profiles to each group of remotes.

Group QoS Scenarios

Physical Segregation Scenario


Example: A satellite provider would like to split a network with a 10 Mbps outbound carrier
for two Service Providers, allocating 6 Mbps for one and 4 Mbps for the other. The first group
should be allowed to burst up to 8 Mbps when the bandwidth is not being used by the second
group.
Configuration:
The satellite provider could configure two Bandwidth Groups as follows:
• The first group with: CIR/MIR of 6 Mbps/8 Mbps
• The second group with: CIR/MIR of 4 Mbps/4 Mbps
The sum of all CIR bandwidth should not exceed the total bandwidth. A scenario depicting
physical segregation is shown in Figure 25.
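
Expressed as data, the same configuration might look like the sketch below. The structure is
purely illustrative and is not iBuilder's configuration format; it simply makes the CIR/MIR
rule concrete.

bandwidth_pool = {
    "outbound_kbps": 10_000,
    "bandwidth_groups": [
        {"name": "Service Provider 1", "cir_kbps": 6_000, "mir_kbps": 8_000},
        {"name": "Service Provider 2", "cir_kbps": 4_000, "mir_kbps": 4_000},
    ],
}

# Sanity check for the rule stated above: the sum of all CIR bandwidth
# should not exceed the total bandwidth of the pool.
total_cir = sum(g["cir_kbps"] for g in bandwidth_pool["bandwidth_groups"])
assert total_cir <= bandwidth_pool["outbound_kbps"]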


Figure 25. Physical Segregation Scenario

Note: Another solution would be to create a single Bandwidth Group with two Service
Groups. This solution would limit the flexibility, however, if the satellite
provider decides in the future to further split each group into sub-groups.

CIR Per Application Scenario


Example: A Service Provider has a 1 Mbps outbound carrier and would like to make sure that
half of it is dedicated to VoIP with up to two VoIP calls per remote. He also has a critical
application with Citrix traffic that requires an average of 8 Kbps per remote, for a total of
128 Kbps.
Configuration:
The Service Group’s Application List could be configured as follows:
• VoIP – CIR 512 Kbps
• Citrix – CIR 128 Kbps
• NMS – Priority 1, MIR 16K (Set NMS MIR to 1% to 2% of total BW)
• Default – Cost 1.0 (Default cost is 1.0)
The derived “Default Application Profile” could be configured as follows:
• VoIP – CIR 28 Kbps


• Citrix – CIR 8 Kbps
• NMS – Priority 1
• Default – Cost 1.0
A scenario depicting CIR per application is shown in Figure 26.

Figure 26. CIR Per Application Scenario

VoIP could also be configured as priority 1 traffic. In that case, demand for VoIP must be fully
satisfied before serving lower priority applications. Therefore, it is important to configure an
MIR to avoid having VoIP consume all available bandwidth.

Tiered Service Scenario


Example: A network operator with an 18 Mbps outbound carrier would like to provide
different classes of service for customers. The Platinum service will have the highest priority
and is designed for 50 remotes bursting up to an MIR of 256 Kbps. The Gold Service, sold to
200 customers, will have an MIR of 128 Kbps. The Silver Service will be a “best effort” service,
and will allow bursting up to 128 Kbps when bandwidth is available.
Configuration:
There are several ways to configure tiered services. The operator should keep in mind that
when priority is used for a Service Group, the Service Group is satisfied up to the MIR before
lower priority Service Groups are served. Here is one example of how the tiered service could
be configured:

• Platinum – Priority 1 – MIR 12 Mbps


• Gold – Priority 2 – MIR 18 Mbps (Identical to no MIR, since the Bandwidth Pool is only 18
Mbps.)
• Silver – Priority 3 – No MIR Defined (The same as an MIR of 18 Mbps)
A scenario depicting tiered service is shown in Figure 27.

Figure 27. Tiered Service Scenario
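As a rough illustration of the priority rule described above (each Service Group is satisfied up to its MIR before lower priority groups are served), consider the following Python sketch. It is a simplified model for illustration, not the Protocol Processor's actual algorithm.

# Simplified model of strict-priority allocation: satisfy each Service
# Group up to min(demand, MIR) before serving any lower priority group.
def allocate_by_priority(pool_kbps, groups):
    """groups: list of (name, priority, mir_kbps, demand_kbps); a lower
    priority number is served first. Returns {name: allocated_kbps}."""
    grants = {}
    remaining = pool_kbps
    for name, _prio, mir, demand in sorted(groups, key=lambda g: g[1]):
        grant = min(demand, mir, remaining)
        grants[name] = grant
        remaining -= grant
    return grants

# 18 Mbps pool with all three services fully loaded.
print(allocate_by_priority(18_000, [
    ("Platinum", 1, 12_000, 12_000),
    ("Gold",     2, 18_000, 18_000),
    ("Silver",   3, 18_000, 18_000),  # "no MIR" behaves like MIR = pool
]))
# -> Platinum gets 12 Mbps, Gold the remaining 6 Mbps, Silver nothing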

Note that cost could be used instead of priority if the intention were to have a fair allocation
rather than to satisfy the Platinum service before any bandwidth is allocated to Gold; and
then satisfy the Gold service before any bandwidth is allocated to Silver. For example:
• Platinum – Cost 0.1 - CIR 6 Mbps, MIR 12 Mbps
• Gold – Cost 0.2 - CIR 6 Mbps, MIR 18 Mbps
• Silver – Cost 0.3 - No CIR, No MIR Defined

Third Level of Segregation by VLAN Scenario


The iDirect Group QoS model is designed for two levels of physical segregation of bandwidth.
If the user has a need to split the bandwidth into a third level, this could be accomplished by
using VLANs.
Example: A satellite provider would like to divide an 18 Mbps carrier among six distributors,
each with 3 Mbps of bandwidth. One of the distributors would like to offer service to three
network operators, giving them 1 Mbps each. Another would like to provide a tiered service
(Platinum, Gold and Silver), dedicating 256 Kbps for the Platinum VoIP service. This

effectively provides a third level of physical segregation. It could be accomplished by using
VLANs as shown in the example below.
Configuration:
The Service Group’s Application Group for the tiered service could be configured as follows:
• Platinum – VLAN-91 & VoIP - Priority 1 – CIR 256 Kbps, MIR 256 Kbps
• Platinum – VLAN-91 & All Others - Priority 1 – CIR 256 Kbps, MIR 512 Kbps
• Gold – VLAN 92 - Priority 2 – CIR 256 Kbps, MIR 1 Mbps
• Silver – VLAN 93 - Priority 2 – CIR 0, MIR 1 Mbps
A scenario depicting a third level VLAN is shown in Figure 28.

Figure 28. Third Level VLAN Scenario

The Shared Remote Scenario


Example: A network operator provides service to oil rigs for two companies. Many of the oil
rigs have both companies present. Company A bought 8 Mbps of outbound bandwidth, while
Company B bought 2 Mbps of outbound bandwidth. The network operator would like to use a

single outbound carrier of 10 Mbps to provide service for both companies, while ensuring that
each customer receives the bandwidth that they paid for. This scenario is complicated by the
fact that, on oil rigs with both companies present, the network operator would like to use a
single remote to provide service to both by separating their terminals into VLAN-51 for
Company A and VLAN-52 for Company B. Both companies would also like to prioritize their
VoIP.
Configuration:
If we had separate remotes for each company, this would be a simple “Physical Segregation”
scenario. However, keeping both companies in the same Service Group and allocating
bandwidth by VLAN and application would not provide the strict separation of 8 Mbps for
Company A and 2 Mbps for Company B. Instead, the solution is to create two Service Groups:
• Company A: CIR/MIR 8 Mbps/8 Mbps
• Company B: CIR/MIR 2 Mbps /2 Mbps
Service Profiles for both companies would have VoIP and Default with the appropriate priority,
cost, CIR and MIR. In order to allow the same remote to serve both companies, the remote is
assigned to both Service Groups as shown in Figure 29. Note that this is an unusual
configuration and is not recommended for the typical application.

Figure 29. Shared Remote Scenario

Application Throughput
Application throughput depends on proper QoS classification and prioritization and on proper
bandwidth management. For example, if a VoIP application requires 16 Kbps and a remote is
only given 10 Kbps, the application fails regardless of priority, since there is not enough
available bandwidth.
Bandwidth assignment is controlled by the Protocol Processor. As a result of the various
network topologies (for example, a shared TDM downstream with a deterministic TDMA
upstream), the Protocol Processor has different mechanisms for downstream control versus
upstream control. Downstream control of bandwidth is provided by continuously evaluating
network traffic flow and assigning bandwidth to remotes as needed. The Protocol Processor
assigns bandwidth and controls the transmission of packets for each remote according to the
QoS parameters defined for the remote’s downstream.
Upstream bandwidth is requested continuously with each TDMA burst from each remote. A
centralized bandwidth manager integrates the information contained in each request and
produces a TDMA burst time plan which assigns individual bursts to specific remotes. The
burst time plan is produced once per TDMA frame (typically 125 ms or 8 times per second).

Note: There is a 250 ms delay from the time that the remote makes a request for
bandwidth and when the Protocol Processor transmits the burst time plan to it.
iDirect has developed a number of features to address the challenges of providing adequate
bandwidth for a given application. These features are discussed in the sections that follow.

QoS Properties
There are several QoS properties that you can configure based on your traffic throughput
requirements. These are discussed in the sections that follow. For information on configuring
these properties, see chapter 8, “Configuring Quality of Service for iDirect Networks” of the
iBuilder User Guide.

Static CIR
You can configure a static Committed Information Rate (CIR) or an upstream minimum
information rate for any upstream (TDMA) channel. Static CIR is bandwidth that is guaranteed
even if the remote does not need the capacity. By default, a remote is configured with a
single slot per TDMA frame. Increasing this value is considered an inefficient configuration
because these slots are wasted if the remote is inactive. No other remote can be given these
slots unless the remote with the static CIR has not been acquired into the network. Static
CIR is considered the highest priority upstream bandwidth. Static CIR only applies in the
upstream direction. The downstream does not need or support the concept of a static CIR.

Dynamic CIR
You can configure Dynamic CIR values for remotes in both the downstream and upstream
directions. Dynamic CIR is not statically committed and is granted only when demand is
actually present. This allows you to support CIR based service level agreements and, based on
statistical analysis, oversubscribe networks with respect to CIR. If a remote has a CIR but
demand is less than the CIR, only the actual demanded bandwidth is granted. It is also
possible to indicate that only certain QoS service levels “trigger” a CIR request. In these

cases, traffic must be present in a triggering service level before the CIR is granted. Triggering
is specified on a per-service level basis.
Additional burst bandwidth is assigned evenly among all remotes in the network by default.
All available burstable bandwidth (BW) is equally divided between all remotes requesting
additional BW, regardless of already allocated CIR.
Previously, a remote in a highly congested network would often not get burst bandwidth
above its CIR. For example, consider a network with a 3 Mbps upstream and three remotes,
R1, R2, and R3. R1 and R2 are assigned a CIR of 1 Mbps each and R3 has no CIR. If all remotes
request 2 Mbps each, 1 Mbps is given to R3, making the total used BW 3 Mbps. In this case, R1
and R2 receive no additional BW.
With even distribution, using the same example network, the additional 1 Mbps of BW is
evenly distributed by giving each remote an additional 333 Kbps. The default configuration is
to allow even bandwidth distribution.
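The even distribution described above can be illustrated with the following sketch. It is a simplified model that works on ideal kilobit flows; the actual bandwidth manager allocates discrete TDMA slots.

# Simplified model: after CIR grants, spread the leftover bandwidth
# evenly across the remotes that still have unmet demand.
def distribute(pool_kbps, remotes):
    """remotes: {name: (cir_kbps, demand_kbps)} -> {name: grant_kbps}"""
    grants = {n: min(cir, dem) for n, (cir, dem) in remotes.items()}
    leftover = pool_kbps - sum(grants.values())
    hungry = [n for n, (cir, dem) in remotes.items() if dem > grants[n]]
    while leftover > 1e-6 and hungry:
        share = leftover / len(hungry)
        for n in list(hungry):
            extra = min(share, remotes[n][1] - grants[n])
            grants[n] += extra
            leftover -= extra
            if grants[n] >= remotes[n][1]:
                hungry.remove(n)
    return grants

# 3 Mbps upstream; R1/R2 have 1 Mbps CIR, R3 has none; all want 2 Mbps.
print(distribute(3000, {"R1": (1000, 2000), "R2": (1000, 2000), "R3": (0, 2000)}))
# -> roughly {'R1': 1333, 'R2': 1333, 'R3': 333}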
Further QoS configuration procedures can be found in chapter 8, “Configuring Quality of
Service for iDirect Networks” of the iBuilder User Guide.

Free Slot Allocation


Free slot allocation is a round-robin distribution of unused TDMA slots by the centralized
bandwidth manager on a frame-by-frame basis. The bandwidth manager assigns TDMA slots to
particular remotes for each TDMA allocation interval based on current demand and
configuration constraints (such as minimum and maximum data rates, static CIR, dynamic CIR,
and others). At the end of this process it is possible that there are unused TDMA slots. In this
case, if Free Slot Allocation is enabled, the bandwidth manager gives these extra slots to
remotes in a fair manner, respecting any remote’s maximum configured data rate. Beginning
with iDS Release 8.2, Free Slot Allocation is always enabled. It is no longer configurable in
iBuilder. You can disable Free Slot Allocation with a custom key.
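A minimal sketch of the round-robin idea follows. It is illustrative only; the real bandwidth manager also honors the CIR and demand constraints computed earlier in the frame.

# Illustrative round-robin handout of leftover TDMA slots, respecting
# each remote's maximum slots per frame.
def free_slot_allocate(free_slots, remotes):
    """remotes: {name: (current_slots, max_slots)} -> extra slots granted"""
    extra = {n: 0 for n in remotes}
    eligible = [n for n, (cur, mx) in remotes.items() if cur < mx]
    i = 0
    while free_slots > 0 and eligible:
        n = eligible[i % len(eligible)]
        cur, mx = remotes[n]
        if cur + extra[n] < mx:
            extra[n] += 1
            free_slots -= 1
            i += 1
        else:
            eligible.remove(n)
    return extra

print(free_slot_allocate(5, {"R1": (2, 4), "R2": (1, 8), "R3": (6, 6)}))
# -> R1 gets 2 (reaching its max), R2 gets 3, R3 gets 0 (already at max)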

Compressed Real-Time Protocol (cRTP)


You can enable Compressed Real-Time Protocol (cRTP) to significantly reduce the bandwidth
requirements of VoIP flows. cRTP is implemented via standard header compression
techniques. It allows for better use of real-time bandwidth especially for RTP-based
applications, which utilize large numbers of small packets since the 40-byte IP/UDP/RTP
header often accounts for a significant fraction of the total packet length. iDirect has
implemented a standard header compression scheme including heuristic-based RTP detection
with negative cache support for misidentified UDP streams. For example, G.729 voice RTP
results in less than 12 Kbps (uncompressed is 24 Kbps). To enable cRTP, see the section titled
“QoS Tab” in chapter 7, “Configuring Remotes” of the iBuilder User Guide.
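The bandwidth arithmetic behind the G.729 example can be checked directly. The sketch below assumes a standard G.729 flow (20 byte payload every 20 ms) and a 2 byte compressed header, which are typical cRTP values rather than iDirect-specific figures.

# Back-of-the-envelope check of the G.729 cRTP example above.
PAYLOAD = 20       # bytes of G.729 speech per packet (20 ms frames), assumed
FULL_HDR = 40      # uncompressed IP/UDP/RTP header, bytes
CRTP_HDR = 2       # typical compressed header size, bytes (assumed)
PKTS_PER_SEC = 50  # one packet every 20 ms

def kbps(header_bytes):
    return (PAYLOAD + header_bytes) * 8 * PKTS_PER_SEC / 1000

print(kbps(FULL_HDR))  # 24.0 Kbps uncompressed, as stated above
print(kbps(CRTP_HDR))  # 8.8 Kbps, comfortably under 12 Kbps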

Configurable Minimum CIR


It is possible to configure a remote upstream minimum statically committed CIR to less than
one burst in each TDMA frame. This feature allows many remotes to be “packed” into a single
upstream. Reducing a remote’s minimum statically committed CIR increases ramp latency.
Ramp latency is the amount of time it takes a remote to acquire the necessary bandwidth.
The lower the upstream static CIR, the fewer TDMA time plans contain a burst dedicated to
that remote, and the greater the ramp latency. Some applications may be sensitive to this

latency, resulting in a poor user experience. iDirect recommends that this feature be
used with care. The iBuilder GUI enforces a minimum of one slot per remote every two
seconds. For more information, please see the section titled “Upstream and Downstream Rate
Shaping” in chapter 7, “Configuring Remotes” of the iBuilder User Guide.

Sticky CIR
Sticky CIR is activated only when CIR is over-subscribed on the downstream or on the
upstream. When enabled, Sticky CIR favors remotes that have already received their CIR over
remotes that are currently asking for it. When disabled (the default setting), the Protocol
Processor reduces assigned bandwidth to all remotes to accommodate a new remote in the
network. Sticky CIR can be configured in the Bandwidth Group and Service Group level
interfaces in iBuilder.

Application Jitter
Jitter is the variation of latency on a packet-by-packet basis of application traffic. For an
application like VoIP, the transmitting equipment spaces each packet at a known fixed interval
(every 20 ms, for example). However, in a packet switched network, there is no guarantee
that the packets will arrive at their destination with the same interval rate. To compensate
for this, the receiving equipment employs a jitter buffer that attempts to play out the
arriving packets at the desired perfect interval rate. To do this it must introduce latency by
buffering packets for a certain amount of time and then playing them out at the fixed
interval.
While jitter plays a role in both downstream and upstream directions, a TDMA network tends
to introduce more jitter in the upstream direction. This is due to the discrete nature of the
TDMA time plan where a remote may only burst in an assigned slot. The inter-slot times
assigned to a particular remote may not match the desired play out rate, which results in jitter.
Another source of jitter is other traffic that a node transmits between (or in front of)
successive packets in the real-time stream. In situations where a large packet needs to be
transmitted in front of a real-time packet, jitter is introduced because the node must wait
longer than normal before transmission.
The iDirect system offers features that limit the effect of such problems; these features are
described in the sections that follow.

TDMA Slot Feathering


The Protocol Processor bandwidth manager attempts to “feather,” or spread out, each
individual remote's TDMA slots across the upstream frame. This is a desirable attribute in that a
particular remote's bursts are spread out in time, often reducing TDMA-induced jitter. This
feature is enabled by selecting “Reduce Jitter” for an Application’s Service Level in iBuilder.
For details, see the chapter titled “Configuring Quality of Service for iDirect Networks” in the
iBuilder User Guide.
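Feathering can be pictured as placing a remote's bursts as evenly as possible across the slots of a frame. The toy sketch below illustrates the idea; it is not the actual scheduler.

# Toy illustration of feathering: place n bursts for one remote as
# evenly as possible across a frame of frame_slots TDMA slots.
def feather(frame_slots, n):
    return [round(i * frame_slots / n) for i in range(n)]

print(feather(32, 4))  # -> [0, 8, 16, 24]: evenly spaced, low jitter
# versus a clustered assignment such as [0, 1, 2, 3], which leaves a
# long gap between bursts and therefore more jitter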

Packet Segmentation
Beginning with iDS Release 8.2, Segmentation and Reassembly (SAR) and Packet Assembly and
Disassembly (PAD) have been replaced by a more efficient iDirect application. Although you

can continue to configure the downstream segment size in iBuilder, all upstream packet
segmentation is handled internally to optimize upstream packet segmentation.
You may wish to change the downstream segment size if you have a small outbound carrier
and need to reduce jitter in your downstream packets. Typically, this is not required. For
details on configuring the downstream segment size, see the chapter on “Configuring
Remotes” in the iBuilder User Guide.

Application Latency
Application latency is typically a concern for transaction-based applications such as credit
card verification systems. For applications like these, it is important that the priority traffic
be expedited through the system and sent, regardless of the less important background
traffic. This is especially important in bandwidth-limited conditions where a remote may only
have a single or a few TDMA slots. In this case, it is important to minimize latency as much as
possible after the distributor’s QoS decision. This allows a highly prioritized packet to make
its way immediately to the front of the transmit queue.

Maximum Channel Efficiency vs. Minimum Latency


Each TDMA burst carries a discrete number of payload bytes. The remote must break higher-
level packets into TDMA-burst-sized chunks to pack these bursts for transmission. You can
control how bursts are packaged for transmission by selecting between two options on the
iBuilder Service level dialog box: Maximum Channel Efficiency (default) and Minimum Latency.
Maximum Channel Efficiency delays the release of a partially filled TDMA burst to allow for
the possibility that the next packet will fill the burst completely. In this configuration, the
system waits for up to four TDMA transmission attempts before releasing a partial burst.
Minimum Latency never delays partially filled TDMA bursts. Instead, it transmits them
immediately.
In general, Maximum Channel Efficiency is the desired choice, except in certain situations
when it is vitally important to achieve minimum latency for a prioritized service level. For
example, if your network is typically congested and you are configuring the system to work
with a transaction-based application which is bursty in nature and requires a minimum round
trip time, then Minimum Latency may be the better choice. You can configure these settings
in iBuilder from the QoS Service Level dialog box. For details, see the chapter titled
“Configuring Quality of Service for iDirect Networks” in the iBuilder User Guide.
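The trade-off between the two settings can be summarized in a small sketch of the release decision. This is illustrative logic only; the four-attempt limit reflects the behavior described above.

# Illustrative release logic for a partially filled TDMA burst.
MAX_HOLD_ATTEMPTS = 4  # Maximum Channel Efficiency waits up to 4 attempts

def should_release(burst_fill, attempts_held, mode):
    if mode == "min_latency":
        return True           # never delay a partial burst
    if burst_fill >= 1.0:
        return True           # burst is full; always send
    return attempts_held >= MAX_HOLD_ATTEMPTS  # stop waiting for more data

print(should_release(0.6, 1, "max_efficiency"))  # False: hold for more data
print(should_release(0.6, 4, "max_efficiency"))  # True: held long enough
print(should_release(0.6, 0, "min_latency"))     # True: send immediately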

6 Configuring Transmit Initial Power

During acquisition, the iNFINITI remote attempts to join the network according to the burst
plan assigned to the remote by the hub. The initial transmit power must be set correctly so
that the remote can join the network and stay in the network. This chapter describes the best
practices for setting Transmit (TX) Initial Power in an iDirect network.

Note: It is important to set TX Initial Power on a remote modem correctly to ensure
optimal Upstream channel performance.

What is TX Initial Power?


TX Initial Power is the power level at which a remote modem transmits when joining the
network. You can set the Initial Power through iSite or iBuilder. When a remote modem is
attempting to join the network the hub sends SWEEP commands to it. These tell the remote
modem to burst in to the acquisition slot of the upstream channel. Each SWEEP command
contains a different frequency offset which tells the remote modem to change its frequency
slightly and then send a burst. During these acquisition bursts, the remote modem sets its
output power to the TX Initial Power parameter. If TX Initial Power is not set correctly, the
acquisition bursts may not be received and the remote modem cannot join the network.

How To Determine The Correct TX Initial Power


There are two ways to determine the correct TX Initial Power:
• Locally, by using iSite during site commissioning.
• Remotely, by using iBuilder any time after site commissioning.
During site commissioning, the installer uses iSite to set TX Initial Power. This parameter is set
at a low value and it is manually increased until the remote modem is acquired into the
network. The hub then automatically adjusts the remote modem output power to a nominal
setting. With the acq on command enabled, UCP messages are displayed at the console and
the installer can observe the TX power adjustments being made by the hub. When the hub
determines that the bursts are arriving in the nominal C/N range, power adjustments are
stopped (displayed at the console as 0.0 dB adjustment). The installer can type tx power to
read the current power setting.
iDirect recommends that you set the TX Initial Power value to 3 dB above the tx power
reading. For example, if the tx power is -17 dBm, set TX Initial Power to -14 dBm.
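Since both values are in dBm, the rule of thumb is a simple addition, as this small sketch shows:

# Rule of thumb from above: TX Initial Power = nominal tx power + 3 dB.
def initial_power_dbm(nominal_tx_dbm):
    return nominal_tx_dbm + 3.0

print(initial_power_dbm(-17.0))  # -> -14.0 dBm, as in the example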

At any time after site commissioning, you can check the TX Initial Power setting by observing
the Remote Status and UCP tabs in iMonitor. If the remote modem is in a “steady state” and
no power adjustments are being made, you can compare the current TX Power to the TX
Initial Power parameter to verify that TX Initial Power is 3 dB higher than the TX Power. For
detailed information on how to set TX Initial Power, refer to the “Remote Installation and
Commissioning Guide”.

Note: Best nominal Tx Power measurements are made during clear sky conditions at
the hub and remote sites.

All Remotes Need To Transmit Bursts in The Same C/N Range
In a burst mode demodulator, the gain must be set at some nominal point prior to the arrival
of a data burst so that the burst is correctly detected and demodulated. Since a single Hub
Line Card receives bursts from many different remote modems, it constantly calculates the
optimal gain point by taking into account the average levels of all bursts arriving at that Hub
Line Card.
If all the bursts arrive at similar C/N levels, the average is very near optimal for all of
them. However, if many bursts arrive at varying C/N levels, the highest and lowest level
bursts can skew the average such that it is no longer optimal.
The nominal range is 2 dB wide (the green range in the iBuilder Acquisition/Uplink Control
tab). The actual range at which bursts can be optimally detected is approximately 8 dB wide
centered at the nominal gain point (Figure 30).

(Figure: C/N axis from 6 to 14 dB showing the optimal detection range relative to the
threshold C/N. Under ideal circumstances, the average C/N of all remotes on the upstream
channel is equal to the center of the UCP adjustment range; the optimal detection range
therefore extends below the threshold C/N. This example illustrates the TPC Rate 0.66
threshold.)

Figure 30. C/N Nominal Range

What Happens When TX Initial Power Is Set Incorrectly?


If the Initial Power is not set correctly, your network performance can be negatively
impacted. When a remote is acquired by the hub, the center point of the 8 dB wide detection
range is set at the C/N value at the time it is acquired. This section describes what
happens if the Initial Power is too high or too low.

When TX Initial Power is Too High


If TX Initial Power is set too high and the C/N at the time of acquisition is 11.0 dB, the
C/N detection window ranges from 7 dB to 15 dB and the Hub Line Card gain approaches the
upper limit of the nominal range. Since UCP updates occur every 20 seconds, it may take a
minute or more for carriers with too much initial power to adjust lower into the nominal
range. During this time, remotes that are operating under atmospheric fade conditions could
drop out of the network because their bursts no longer fall within the optimal detection range.
Remotes that are trying to acquire with a C/N value of less than 7 dB will not acquire the
network (Figure 31).

(Figure: When TX Initial Power is set too high, remotes entering the network skew the
average C/N above the center of the UCP Adjustment Range. During this period the optimal
detection range does not include the threshold C/N, and remotes experiencing rain fade may
experience a performance degradation.)

Figure 31. TX Initial Power Too High

When TX Initial Power is Too Low


If TX Initial Power is set too low and the C/N at the time of acquisition is 9.0 dB, the
C/N detection window ranges from 5 dB to 13 dB and the Hub Line Card gain approaches the
lower limit of the nominal range. Since UCP updates occur every 20 seconds, it may take a
minute or more for carriers with initial power set too low to adjust higher into the nominal
range. During this time, remotes that are operating under clear sky conditions could drop out
of the network because their bursts no longer fall within the optimal detection range. Remotes
that are trying to acquire with a C/N value of greater than 13 dB will not acquire the network.

Bursts can still be detected below threshold, but the probability of detection and
demodulation is reduced. This can lead to long acquisition times (Figure 32).

(Figure: When TX Initial Power is set too low, remotes entering the network skew the average
C/N below the center of the UCP Adjustment Range. This could cause remotes coming in at the
higher end (e.g., 14 dB) to experience some distortion in the demodulation process.
Additionally, a remote acquiring at a low C/N (below threshold) experiences a large number of
CRC errors when it enters the network until its power is increased.)

Figure 32. TX Initial Power Too Low

7 Global NMS Architecture

This chapter describes how the Global NMS works in a global architecture and a sample Global
NMS architecture.

How the Global NMS Works


The Global NMS allows you to add a single physical remote, as identified by its Derived ID
(DID), to multiple networks at the same time.
A remote that is a member of multiple networks is called a “roaming remote.” For details on
defining and managing roaming remotes, refer to the iBuilder User Guide.
Figure 33 illustrates the current and Global NMS database relationships.

Figure 33. Global NMS Database Relationships

Sample Global NMS Network


This section illustrates a sample global NMS architecture, and it explains how the NMS works
in this type of network (Figure 34).

Figure 34. Sample Global NMS Network Diagram

In this example, there are four different networks connected to three different Regional
Network Control Centers (RNCCs). A group of remote terminals has been configured to roam
among the four networks.

Note: This diagram shows only one example from the set of possible network
configurations. In practice, there may be any number of RNCCs and any number of
protocol processors at each RNCC.
On the left side of the diagram, a single NMS installed at the Global Network Control Center
(GNCC) manages all the RNCC components and the group of roaming remotes. Network
operators, both remote and local, can share the NMS server simultaneously with any number
of VNOs. (Only one VNO is shown in Figure 34.) All users can run iBuilder, iMonitor, or both
on their PCs.
The connection between the GNCC and each RNCC must be a dedicated high-speed link.
Connections between NOC stations and the NMS server are typically standard Ethernet.
Remote NMS connections are made over the public Internet, protected by a VPN or port
forwarding, or over a dedicated leased line.

8 Hub Network Security Recommendations

This chapter describes basic recommended security measures to ensure that the NMS and
Protocol Processor servers are secure when connected to the public Internet. iDirect
recommends that you implement additional security measures over and above these minimal
steps.

Limited Remote Access


Access to the NMS and Protocol Processor servers should be protected behind a commercial-
grade firewall. If remote access is necessary for support, the iDirect Technical Assistance
Center can help you set up appropriate VPN access. Contact the TAC for details (see “Getting
Help” on page xiii).

Root Passwords
Root password access to the NMS and Protocol Processor servers should be reserved for only
those you want to have administrator-level access to your network. Restrict the distribution
of this password information.
Servers are shipped with default passwords. Change the default passwords after the
installation is complete and make sure these passwords are changed on a regular basis and
when an employee leaves your company.
When selecting your new passwords, iDirect recommends that you follow these practices for
constructing difficult-to-guess passwords:
• Use passwords that are at least 8 characters in length.
• Do not base passwords on dictionary words.
• Use passwords that contain a mixture of letters, numbers, and symbols.

9 Global Protocol Processor Architecture

This chapter describes how the Protocol Processor works in a global architecture. Specifically,
it contains “Remote Distribution,” which describes how the Protocol Processor balances
remote traffic loading, and “De-coupling of NMS and Datapath Components,” which describes
how the Protocol Processor Blades continue to function in the event of a Protocol Processor
Controller failure.

Remote Distribution
The actual distribution of remotes and processes across a blade set is determined by the
Protocol Processor controller dynamically in the following situations:
• At system startup, the Protocol Processor Controller determines the distribution of
processes based on the number of remotes in the network(s).
• When a new remote is added in iBuilder, the Protocol Processor Controller analyzes the
current system load and adds the new remote to the blade with the least load.
• When a blade fails, the Protocol Processor Controller re-distributes the load across the
remaining blades, ensuring that each remaining blade takes a portion of the load.
The Protocol Processor controller does not perform dynamic load-balancing on remotes. Once
a remote is assigned to a particular blade, it remains there unless it is moved due to one of
the situations described above.

De-coupling of NMS and Datapath Components


If the Protocol Processor Controller fails, the Protocol Processor Blades continue to function
normally since the NMS and Protocol Processor Controller are independent. However, during a
period of Controller failure, automatic failover does not occur and you cannot reconfigure the Protocol Processor.
You can build process redundancy into your design by running duplicate processes over

multiple Protocol Processor Blades. A high-level architecture of the Protocol Processor, with
one possible configuration of processes across two blades is shown in Figure 35.

(Figure: The NMS server's pp_controller process spawns, monitors, and controls the datapath
processes on each Protocol Processor blade. In this example, PP Blade 1 runs samnc, sarmt,
sada, sana, and sarouter, and PP Blade 2 runs samnc, sarmt, and sarouter.)
Figure 35. Protocol Processor Architecture

10 Distributed NMS Server

This chapter describes how you can design your network through a Distributed NMS server,
manage it through iDS supporting software, and back up or restore the configuration.
You can distribute your NMS server processes across multiple server machines. The primary
benefits of machine distribution are improved server performance and better utilization of
disk space.
iDirect recommends a distributed NMS server configuration once the number of remotes being
controlled by a single NMS exceeds 500-600. iDirect has tested the new distributed platform
with over 3000 remotes with iDS 7.0.0. Future releases continue to push this number higher.

Distributed NMS Server Architecture


The distributed NMS architecture allows you to match your NMS server processes to the server
machines. For example, you can run all servers on a single platform (the current default),
assign each server process to its own server, or assign groups of processes
to individual servers.
Server configuration is performed one time using a special script distributed with the NMS
servers installation package. Once configured, the distribution of server processes across the
servers remains unchanged unless you reconfigure it. This is true even when you upgrade your
system.

The most common distribution scheme for larger networks is shown in Figure 36.

Figure 36. Sample Distributed NMS Configuration

This configuration has the following process distribution:


• NMS Server 1 runs the configuration server (nmssvr), latency server (latsvr), and the PP
controller (cntrlsvr) process.
• NMS Server 2 runs only the Statistics processes (nrdsvr).
• NMS Server 3 runs only the Event processes (evtsvr).
The busiest NMS processes, nrdsvr and evtsvr, are placed on their own servers for maximum
processing efficiency. All other NMS server processes are grouped on NMS Server 1.

iBuilder and iMonitor


From the iBuilder or iMonitor user perspective, a distributed NMS server functions identically
to a single NMS server. In both server configurations, users provide a user name, password,
and the IP address or Host Name of the NMS configuration server at the time of login. The
configuration server stores the location of all other NMS servers and provides this information
to the iBuilder or iMonitor client. Using this information, the client automatically establishes
connections to the server processes on the correct machines.
To set up a D-NMS, refer to the iBuilder User Guide.

dbBackup/dbRestore and the Distributed NMS


The dbBackup and dbRestore scripts are completely compatible with the new distributed
NMS. You can have 1:1 or 1:n redundancy for your NMS servers.
1:n redundancy means that one physical machine backs up all of your active servers. If you
choose this form of redundancy, you must modify the dbBackup.ini file on each NMS server to
ensure that the separate databases are copied to separate locations on the backup machine.
The following diagram shows three servers, each copying its database to a single backup NMS.
If NMS 1 fails, you do not need to run dbRestore prior to switch-over since the configuration
data has already been sent to the backup NMS. If NMS 2 or NMS 3 fails, you need to run
dbRestore prior to the switch-over if you want to preserve and add to the archive data in the
failed server’s database. See Figure 37.

Figure 37. dbBackup and dbRestore with a Distributed NMS

Distributed NMS Restrictions


Some of the server processes must be run on the configuration server, and others can be run
on separate machines, as listed below.
Server processes that must be run on the configuration server machine are:
• Control Server
• Revision Server
• SNMP Proxy Agent Server
Server processes that can run on separate machines are:
• Latency Server
• Event Server
• Real-Time Data Server (nrdsvr)

11 Transmission Security (TRANSEC)

This section describes how TRANSEC and FIPS are implemented in an iDirect Network. It
includes the following sections:
• “What is TRANSEC?” defines Transmission Security.
• “iDirect TRANSEC” describes protocol implementation.
• “TRANSEC Downstream” describes the data path from the hub to the remote.
• “TRANSEC Upstream” describes the data path from the remote to the hub.
• “TRANSEC Key Management” describes public and private key usage.
• “TRANSEC Remote Admission Protocol” describes acquisition and authentication.
• “Reconfiguring the Network for TRANSEC” describes conversion requirements.

What is TRANSEC?
Transmission Security (TRANSEC) prevents an adversary from exploiting information available
in a communications channel without necessarily having defeated the encryption inherent in
the channel. Even if an encrypted wireless transmission is not compromised, information such
as timing and traffic volumes can be determined by using basic signal processing techniques.
This information could provide someone monitoring the network a variety of information on
unit activity. For example, even if an adversary cannot defeat the encryption placed on
individual packets, it might be able to determine answers to questions such as:
• What types of applications are active on the network currently?
• Who is talking to whom?
• Is the network or a particular remote site active now?
• Is it possible to distinguish between network activity and real world activity, based on
traffic analysis and correlation?
There are a number of components to TRANSEC, one of them being activity detection. With
current VSAT systems an adversary can determine traffic volumes and communications
activities with a simple spectrum analyzer. With a TRANSEC compliant VSAT system an
adversary is presented with a strongly encrypted and constant wall of data. Other
components of TRANSEC include remote and hub authentication. TRANSEC eliminates the
ability of an adversary to bring a non-authorized remote into a secured network.

iDirect TRANSEC
iDirect achieves full TRANSEC compliance by presenting to an adversary who may be
eavesdropping on the RF link a constant “wall” of fixed-size, strongly encrypted traffic
segments (using the Advanced Encryption Standard (AES) with a 256 bit key in Cipher Block
Chaining (CBC) mode), which do not vary in frequency in response to network utilization.
Other than network messages that control the admission of a remote terminal into the
network, all portions of all packets are encrypted, and their original size is hidden. The
content and size of all user traffic (Layer 3 and above), as well as network link layer (Layer 2)
traffic is completely indeterminate from an adversary’s perspective. Further, no higher layer
information is revealed by monitoring the physical layer (Layer 1) signal.
The solution includes a remote-to-hub and a hub-to-remote authentication protocol based on
standard X.509 certificates designed to prevent man-in-the-middle attacks. This
authentication mechanism prevents an adversary’s remote from joining an iDirect TRANSEC
secured network. In a similar manner, it prevents an adversary from coercing a TRANSEC
remote into joining the adversary’s network. While these types of attacks are extremely
difficult to achieve even on a non-TRANSEC iDirect network, the mechanisms put in place for
the TRANSEC feature render them completely impossible.

Note: In this release, HiFin encryption cards are no longer required on your protocol
processor blades for TRANSEC key management.
All hub line cards and remote model types associated with a protocol processor must be
TRANSEC compatible. The only iDirect hardware models that operate in TRANSEC mode are the
M1D1-T and M1D1-TSS Hub Line Cards, the iNFINITI 7350 and 8350 remotes, and the iConnex 100
and iConnex 700 remotes. Therefore, these are the only iDirect products that are capable of
operating in a FIPS 140-2 Level 1 compliant mode.
For more information, see “Chapter 16, Converting an Existing Network to TRANSEC” of the
iBuilder User Guide.

TRANSEC Downstream
A simplified block diagram for the iDirect TRANSEC downstream data path is shown in Figure
38. Each function represented in the diagram is implemented in software and firmware on a
TRANSEC capable line card.

Figure 38. Downstream Data Path

Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. In this diagram, the encrypted path is shown as solid
black, and the unencrypted (clear) path is shown in dashed red. The Packet Ingest function
receives variable length packets which can belong to four logical classes: User Data, Bypass
Burst Time plan (BTP), Encrypted BTP, and Bypass Queue. All packets arriving at the transmit
Hub Line Card have this indication present as a pre-pended header placed there by the
protocol processor (not shown). The Packet Ingest function determines the message type and
places the packet in the appropriate queue. If the packet is not valid, it is not placed in any
queue and it is dropped.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the
Clear Queue are always sent unencrypted, and time-sensitive BTP messages from the BTP
Queue can be sent in either mode. A BTP sent in the clear contains minimal traffic analysis
information for an adversary and is only utilized to allow remotes attempting to exchange
admission control messages with the hub to do so. Traffic sent in the clear bypasses the
Segmentation Engine and the AES Encryption Engine, and precedes the physical framing and
FEC engines for transmission. Clear, unencrypted packets are transmitted without regard to
segmentation; they are allowed to exist on the RF link with variable sized framing.
Encrypted traffic next enters the Segmentation Engine. The Segmentation Engine segments
incoming packets based on a configured size and provides fill-packets when necessary. The
Segmentation Engine allows the iDirect TRANSEC downstream to transmit a configurable,
fixed size TDM packet segment on a continuous basis.
After segmentation, fixed sized packets enter the Encryption Engine. The encryption
algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC Mode. Packets exit
the Encryption Engine with a pre-pended header as shown in Figure 39.

(Figure: SCPC TRANSEC frame layout. An Encryption Header (Code, Seq, Rsvd, Initialization
Vector) is followed by the Segment, a series of fragment header and fragment pairs
(FH1 F1 ... FHn Fn), and FEC Coding.)

Figure 39. SCPC TRANSEC Frame

The Encryption Header consists of five 32 bit words with four fields. The fields are:
• Code. This field indicates if the frame is encrypted or not, and if encrypted indicates the
entry within the key ring (described under the key management section later in this
document) to be utilized for this frame. The Code field is one byte in length.
• Seq. This field is a sequence number that increments with each segment. The Seq field is
two bytes in length (16 bits, unsigned).
• Rsvd. This field is 1 byte and is reserved for future use.
• Initialization Vector (IV). IV is utilized by the encryption/decryption algorithm and
contains random data. The IV field is 16 bytes in length (128 bits unsigned).
A new IV is generated for each segment. The first IV is generated from the cipher text of the
initial Known Answer Test (KAT) conducted at system boot time. Subsequent IVs are taken
from the last 128 bits of the cipher text of the previously encrypted segment. IVs are
continuously updated regardless of key rotations and they are independent of the key rotation
process. They are also continuously updated regardless of the presence of user traffic since

the filler segments are encrypted. While no logic is included to ensure that IVs do not repeat,
the chance of repetition is very small; estimates place the probability of an IV repeating at
1:2^102 for a maximum iDirect downstream data rate.
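The IV chaining described above (first IV from the boot-time KAT, each subsequent IV from the last 128 bits of the previous ciphertext) can be sketched with a generic AES-CBC library. The following Python fragment uses the third-party cryptography package to illustrate the chaining idea only; it is not the line card firmware, and key distribution, the KAT, and segment framing are omitted.

# Sketch of downstream IV chaining: each fixed-size segment is encrypted
# with AES-256 in CBC mode, and the next segment's IV is the last
# 16 bytes (128 bits) of the current segment's ciphertext.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)  # stand-in for a 256-bit key delivered by a key roll
iv = os.urandom(16)   # stand-in for the first IV derived from the boot KAT
SEGMENT = 480         # fixed segment size; must be a multiple of 16 here

def encrypt_segment(segment, iv):
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    ct = enc.update(segment) + enc.finalize()
    return ct, ct[-16:]  # next IV = last 128 bits of this ciphertext

for _ in range(3):  # fill segments keep the chain advancing even when idle
    ciphertext, iv = encrypt_segment(os.urandom(SEGMENT), iv)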
The Segment is of fixed, configurable length and consists of a series of fixed length Fragment
Headers (FH) followed by variable length data Fragments (F). The entire Segment is
encrypted in a single operation by the encryption engine. The FH contains sufficient
information for the source packet stream, post decryption on the receiver, to be
reconstructed. Each Fragment contains a portion of a source packet.
The Encryption Header is transmitted unencrypted but contains only enough information for a
receiver to decrypt the segment if it is in possession of the symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
framing and forward error correction coding. These functions are essentially independent of
TRANSEC but complete the downstream transmission chain and are thus depicted in Figure 38.

TRANSEC Upstream
A simplified block diagram for the iDirect TRANSEC upstream data path is shown in Figure 40.
The functions represented in this diagram are implemented in software and firmware on a
TRANSEC capable remote.

Figure 40. Upstream Data Path

The encrypted path is shown in solid black, and the unencrypted (clear) path is shown in
dashed red. The Packet Ingest function determines the message type and places the packet in
the appropriate queue or drops it if it is not valid.
Consider the diagram from left to right with variable length packets arriving on the far left
into the block named Packet Ingest. The upstream (remote to hub) path differs from the
downstream (hub to remote) in that the upstream is configured for TDMA. Variable length
packets from a remote LAN are segmented in software, which can be considered part of the
Packet Ingest function. Therefore there is no need for the firmware level segmentation
present in the downstream. Additionally, since the remote is not responsible for the
generation of BTPs, there is no need for the additional queues present in the downstream.
Packets extracted from the Data Queue are always encrypted. Packets extracted from the Clear
Queue are always sent unencrypted. The overwhelming majority of traffic will be extracted
from the Data Queue. Traffic sent in the clear bypasses the Encryption Engine and precedes
the FEC engine for transmission.

The encryption algorithm utilizes the AES algorithm with a 256 bit key and operates in CBC
Mode. Packets exit the Encryption Engine with a pre-pended header as described in Figure 41.

Figure 41. TDMA TRANSEC Slot

Note: TRANSEC overhead reduces the payload size shown in Table 5 on page 40 by the
following amounts for each FEC rate: .431: 7 bytes; .533: 4 bytes; .660: 4
bytes; .793: 6 bytes.
The Encryption Header consists of a single 32 bit word with 3 fields. The fields are:
• IV Seed. This is a 29 bit field utilized to generate a 128 bit IV. The IV Seed starts at
zero and increments for each transmitted burst. The full 128 bit IV is generated from the
padded seed by encrypting it with the current AES key for the inroute. Two remotes can
therefore expand the same seed into the same full IV. However, this does not create any
problems because, due to addressing requirements, it is impossible for any two remotes
within the same upstream to generate the same plain text data. While no logic is included
to ensure that IVs do not repeat for a single terminal, repetition does not occur in
practice because the key rotates every two hours by default. Since the seed increments for
each transmission burst, the number of total bursts prior to a seed wrapping around is
2^29, or 536,870,912. Given the two-hour key rotation period, a single terminal would need
to send nearly 75,000 TDMA bursts per second to exhaust the range of the seed (see the
sketch after this list). This exceeds any possible iDirect upstream data rate by far.
• Key ID. This field indicates the entry within the key ring (described under the key
management section later in this document) to be utilized for this frame.
• Enc. This field indicates if the frame is encrypted or not.
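The seed-exhaustion arithmetic referenced above is easy to verify:

# Seed-exhaustion arithmetic for the 29-bit upstream IV seed.
seeds = 2 ** 29               # 536,870,912 distinct seed values
key_rotation_s = 2 * 60 * 60  # default key rotation period: two hours

bursts_per_second = seeds / key_rotation_s
print(bursts_per_second)      # ~74,565 bursts/s needed to wrap one seed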
The Segment is of fixed, configurable length and consists of the standard iDirect TDMA
frame. A detailed description of the standard frame is beyond the scope of this document;
in general, it consists of a Demand Header, which indicates the amount of bandwidth a remote
is requesting, the iDirect Link Layer (LL) Header, and ultimately the actual Payload. This
Segment is encrypted. The Encryption Header is transmitted unencrypted but contains only
enough information for a receiver to decrypt the segment if it is in possession of the
symmetric key.
Once an encrypted packet exits the Encryption Engine it undergoes normal processing such as
forward error correction coding. This function is essentially independent of TRANSEC but
completes the upstream transmission chain (as shown in Figure 40).
A remote will always burst in its assigned slots even when traffic is not present by generating
encrypted fill payloads as needed. The iDirect Hub dynamic allocation algorithm will always
operate in a mode whereby all available time slots within all time plans are filled.

TRANSEC Key Management


All hosts in an iDirect Network must have X.509 public key certificates. Hosts include NMS
servers, protocol processor blades, TRANSEC hub line cards, and TRANSEC remotes.
Certificates are required to join an authenticated network. They serve to prevent man-in-the-
middle attacks and unauthorized admission to the network. You must use the iDirect
Certificate Authority (CA) utility (called the CA Foundry) to issue the certificates for your
TRANSEC network. For more information on using and creating certificates, see “Appendix A,
Using the iDirect CA Foundry” of the iBuilder User Guide.

Note: In this release, HiFin encryption cards are no longer required on your protocol
processor blades for TRANSEC key management.
Key Distribution Protocol (Figure 42), Key Rolling (Figure 43), and Host Keying Protocol (Figure
44) are based on standard techniques utilized within an X.509 based PKI.

Figure 42. Key Distribution Protocol

Key Distribution Protocol assumes that, upon the receipt of a certificate from a peer, the
host is able to validate the certificate and establish a chain of trust based on its contents.
iDirect TRANSEC utilizes standard X.509 certificates and methodologies to verify the peer's
certificate.
After the completion of the sequence shown in Figure 42, a peer may provide a key update
message again in an unsolicited fashion as needed. The data structure utilized to complete
key update (also called a key roll) is shown in Figure 43.

Figure 43. Key Rolling and Key Ring

This data structure conceptually consists of a set of pointers (Current, Next, Fallow), a two
bit identification field (utilized in the Encryption Headers described above), and the actual
symmetric keys themselves. A key update consists of generating a new key, placing it in the
last fallow slot just prior to the Current pointer, updating the Next and Current pointers (a
circular update, so 11 rolls over to 00), and generating a Key Update message reflecting these
changes. The key roll mechanism allows multiple keys to be “in play” simultaneously so
that seamless key rolls can be achieved. By default the iDirect TRANSEC solution rolls any
symmetric key every two hours, but this is a user configurable parameter.
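A conceptual sketch of the key ring follows: a four-slot ring addressed by the two bit identification field, where each roll installs a fresh key and advances the pointers circularly. This models the data structure described above for illustration only; it is not iDirect's implementation.

# Conceptual four-slot key ring with a 2-bit key ID. A roll places a
# fresh key in the last fallow slot before the Current pointer and then
# advances Current circularly (ID 3, binary 11, rolls over to 00).
import os

class KeyRing:
    def __init__(self):
        self.keys = [os.urandom(32) for _ in range(4)]  # AES-256 keys
        self.current = 0                                # 2-bit key ID in use

    def roll(self):
        fallow = (self.current - 1) % 4       # last fallow slot
        self.keys[fallow] = os.urandom(32)    # new key enters the ring
        self.current = (self.current + 1) % 4
        return fallow, self.current  # contents of a Key Update message

ring = KeyRing()
for _ in range(5):
    print(ring.roll())  # older keys stay in play, so rolls are seamless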
The iDirect Host Keying Protocol is shown in Figure 44.

Figure 44. Host Keying Protocol

This protocol describes how hosts are originally provided an X.509 certificate from a
Certificate Authority. iDirect provides a Certificate Authority Foundry module with its
TRANSEC hub. Host key generation is done on the host in all cases.

TRANSEC Remote Admission Protocol


Remotes acquire into the network over the clear channel. Specifically, a protocol processor
blade is designated to be in charge of controlling remote admission into the network. The only
time unencrypted traffic is permitted to traverse the network is during the remote admission
sequence. When a remote is given the opportunity to acquire into the network, the
acquisition sequence takes place as follows:
First, the protocol processor generates two time plans per inroute. One is the normal time
plan utilized to indicate to remotes which slots in which inroutes they may burst on. This time
plan is always encrypted. The second time plan is not encrypted, and it indicates the owner of
the acquisition slot and which remotes may burst in the clear (unencrypted) on selected slots.
The union of the two time plans covers all slots in all inroutes.
The time plans are then forwarded and broadcast to all remotes in the normal method.
Remotes that are not yet acquired receive the unencrypted time plan and wait for an
invitation to join the network via this unencrypted message.
The remote designated in the acquisition slot acquires in the normal fashion by sending an
unencrypted response in the acquisition slot of a specific inroute.
Once the physical layer acquisition occurs, the remote must follow the key distribution
protocol before it is trusted by the network, and for it to trust the network it is a part of. This
step must be carried out in the clear. Therefore remotes in this state will request bandwidth
normally and they will be granted unencrypted TDMA slots. The hub and remotes exchange
key negotiation messages in the cleartext channel. Three message types exist:
• Solicitations, which are used to synchronize, request, inform, and acknowledge a peer.
• Certificate Presentations, which contain X.509 certificates.
• Key Updates, which contain AES key information that is signed and RSA encrypted; the
RSA encryption is accomplished by using the remote’s public key and the signature is
created by using the hub’s private key.
After authentication, the key update message must also be completed in the clear. The actual
symmetric keys are encrypted using the remote’s public key information obtained in the
exchanged certificate. Once the symmetric key is exchanged, the remote enters the network
as a trusted entity, and begins normal operation in an encrypted mode.

Reconfiguring the Network for TRANSEC


Once you have ensured that all hardware is TRANSEC-compatible and you have issued
certificates to all X.509 hosts, you can reconfigure your network to operate in TRANSEC mode.
For detailed configuration procedures, see “Reconfiguring the Network for TRANSEC” section
in the “Converting an Existing Network to TRANSEC” chapter of the iBuilder User Guide.

12 Fast Acquisition

The Fast Acquisition feature reduces the average acquisition time for remotes, particularly in
large networks with hundreds or thousands of remotes. The acquisition messaging process
used in prior versions is included in this release. However, the Protocol Processor now makes
better use of the information available regarding hub receive frequency offsets common to all
remotes to reduce the overall network acquisition time. No additional license is required for
this feature.

Feature Description
Fast Acquisition is configured on a per-remote basis. When a remote is attempting to acquire
the network, the Protocol Processor determines the frequency offset at which a remote
should transmit and conveys it to the remote in a time plan message. From the time plan
message, the remote learns when to transmit and at what frequency offset. The remote
transmit power level is configured in the option file. Based on the time plan message, the
remote calculates the correct Frame Start Delay (FSD). The fundamental aspects of
acquisition are how often a remote gets an opportunity to come into the network, and how
many frequency offsets need to be tried for each remote before it acquires the network.
If a remote can acquire the network more quickly by trying fewer frequency offsets, the
number of remotes that are out of the network at any one time can be reduced. This
determines how often other remotes get a chance to acquire. This feature reduces the
number of frequency offsets that need to be tried for each remote.
By using a common hub receive frequency offset, the fast acquisition algorithm can determine
an anticipated range smaller than the complete frequency sweep space configured for each
remote. As the common receive frequency offset is updated and refined, the sweep window is
reduced.
If an acquisition attempt fails within the reduced sweep window, the sweep window is
widened to include the entire sweep range. Fast Acquisition is enabled by default. You can
disable it by applying a custom key.
For a given ratio x:y, the hub informs the remote to acquire using the smaller frequency offset
range calculated based on the Fast Acquisition scheme. After x attempts, the remote sweeps
the entire range y times before returning to the narrower acquisition range. The default ratio
is 100:1. That is, the remote tries 100 frequency offsets within the reduced (common) range
before resorting to one full sweep of the remote's frequency offsets.
If you want to modify the ratio, you can use the custom keys that follow to override the
defaults. You must apply the custom key on the hub side for each remote in the network.

[REMOTE_DEFINITION]
sweep_freq_fast = 100
sweep_freq_entire_range = 1
sweep_method = 1 (Fast Acquisition enabled)
sweep_method = 0 (Fast Acquisition disabled)
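The x:y behavior can be sketched as follows. The logic and the example window widths are illustrative assumptions; only the default 100:1 ratio comes from the text above.

# Illustrative x:y sweep scheduling with the default 100:1 ratio:
# 100 attempts inside the narrow (common-offset) window, then 1 full sweep.
FAST_ATTEMPTS, FULL_SWEEPS = 100, 1

def sweep_window(attempt, narrow, full):
    """Return the frequency offset window to try for this attempt."""
    cycle = FAST_ATTEMPTS + FULL_SWEEPS
    return narrow if (attempt % cycle) < FAST_ATTEMPTS else full

narrow = (-5_000, 5_000)   # Hz; hypothetical window refined from the common offset
full = (-25_000, 25_000)   # Hz; hypothetical full configured sweep range
print(sweep_window(99, narrow, full))   # still the narrow window
print(sweep_window(100, narrow, full))  # falls back to one full sweep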
Fast Acquisition cannot be used on 3100 series remotes when the upstream symbol rate is less
than 260 Ksym/s. This is because the FLL on 3100 series remotes is disabled for upstream
rates less than 260 Ksym/s.
The NMS disables Fast Acquisition for any remote that is enabled for an iDirect Music Box and
for any remote that is not configured to utilize the 10 MHz reference clock. In IF-only
networks, such as a test environment, the 10 MHz reference clock is not used.

13 Remote Sleep Mode

The Remote Sleep Mode feature conserves remote power consumption during periods of
network inactivity. This section explains how Remote Sleep Mode is implemented. It includes
the following sections:
• “Feature Description" explains how Remote Sleep Mode works.
• “Awakening Methods" describes how remotes exit Remote Sleep Mode.

Feature Description
Remote Sleep Mode is supported on all iNFINITI series remotes. In this mode, the BUC is
powered down, thus reducing power consumption.
When Sleep Mode is enabled on the iBuilder GUI for a remote, the remote enters Remote
Sleep Mode after a configurable period elapses with no data to transmit. By default, the
remote exits Remote Sleep Mode whenever packets arrive on the local LAN for transmission on
the inbound carrier.

Note: You can use the powermgmt mode set sleep console command to enable Remote
Sleep Mode, or powermgmt mode set wakeup to disable it.
The stimulus for a remote to exit sleep mode is also configurable in iBuilder. You can select
which types of traffic automatically “trigger wakeup” on the remote by selecting or clearing a
check box for any of the QoS service levels used by the remote. If no service levels are
configured to trigger wakeup on the remote, you can manually force the remote to exit sleep
mode by disabling sleep mode on the remote configuration screen.

Until a remote enters sleep mode, the protocol processor continues to allocate traffic slots
(including minimum CIR) to the remote. Before entering sleep mode, the remote notifies the
NMS, and the real-time state of the remote is updated in iMonitor. Once the remote enters
sleep mode, the protocol processor considers the remote to be out of the network and
allocates no traffic slots to it. When the remote receives traffic that triggers wakeup, the
remote returns to the network and traffic slots are allocated as normal by the protocol
processor.

Awakening Methods
There are two methods by which a remote is “awakened” from Sleep Mode. They are
“Operator-Commanded Awakening”, and “Activity-Related Awakening”.


Operator-Commanded Awakening
With Operator-Commanded Awakening, you can manually force a remote into Remote Sleep
Mode and subsequently “awaken” it via the NMS. This can be done remotely from the Hub,
since the remote continues to receive the downstream while in sleep mode.

Activity-Related Awakening


With Activity-Related Awakening, the remote enters Remote Sleep Mode after a configurable
period elapses with no data to transmit. The remote “wakes up” as soon as it receives traffic
marked with a service level that is configured to trigger wakeup. When a remote is reset, the
activity timer also resets.
When the remote sees no traffic that triggers the wakeup condition for the configured sleep
time-out, it enters Remote Sleep Mode. In this mode, all IP traffic that does not trigger a
wakeup condition is dropped. When a packet with a service level marking that triggers a
wakeup is detected, the remote resets the sleep timer and wakes up. In Remote Sleep Mode,
the remote processes the burst time plans but does not apply them to the firmware. No
indication is sent to the remote’s router that the interface is down; therefore, packets from
the local LAN are still passed to the remote’s distributor queues. Packets that would wake up
the interface are not dropped by the router and remain available to the layers that process
them. The protocol layer that manages the sleep function drops the packets that do not
trigger wakeup.
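The packet handling described above amounts to a simple state machine. The following Python sketch is illustrative only; it is not iDirect firmware, and the class name, method names, and timer mechanism are assumptions:

# Minimal sketch of the Remote Sleep Mode activity logic (illustrative;
# not iDirect firmware).

import time

class SleepController:
    def __init__(self, timeout_s, wakeup_service_levels):
        self.timeout_s = timeout_s                  # configured sleep time-out
        self.triggers = set(wakeup_service_levels)  # levels with Trigger Wakeup
        self.asleep = False
        self.last_activity = time.monotonic()

    def on_packet(self, service_level):
        """Called for each packet arriving from the local LAN."""
        if service_level in self.triggers:
            # A trigger packet resets the timer and wakes the remote.
            self.last_activity = time.monotonic()
            self.asleep = False
            return "transmit"
        if self.asleep:
            # Non-trigger traffic is dropped while asleep.
            return "drop"
        return "transmit"

    def tick(self):
        """Periodic check: enter sleep after the inactivity time-out."""
        if not self.asleep and time.monotonic() - self.last_activity >= self.timeout_s:
            self.asleep = True   # BUC 10 MHz reference would be disabled here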
Power consumed by the remote under normal and low power (Partial Sleep Mode) conditions
is shown in Table 9.


Table 9. Power Consumption in Remote Sleep Mode

Enabling Remote Sleep Mode


You can enable Remote Sleep Mode by using iBuilder. You can also configure the service levels
that trigger the remote to wake up. A sleep time-out period is configurable for each remote.
The sleep time-out is the period of inactivity after which the remote enters low power mode.
The iDirect Sleep Mode feature requires a custom key in iDS Release 8.0. When you enable
Sleep Mode on the Remote QoS tab, the remote will conserve power by disabling the 10 MHz
reference for the BUC after the specified number of seconds have elapsed with no remote
upstream data transmissions. A remote should automatically wake from sleep mode when
packets arrive for transmission on the upstream carrier, provided that Trigger Wakeup is
selected for the service level associated with the packets.
However, in iDS Release 8.0, without the appropriate custom key, a remote will not wake
from Sleep Mode even if packets arrive for transmission that match a service level with
Trigger Wakeup selected. You must configure the following remote-side custom key in iBuilder
on the Remote Custom tab for all remotes with Sleep Mode enabled:


[SAT0]
forced = 1

Note: When this custom key is set to 1, a remote with RIP enabled will always advertise
the satellite route as available on the local LAN, even if the satellite link is down.
Therefore, the Sleep Mode feature is not compatible with configurations that rely
on the ability of the local router to detect loss of the satellite link.
To enable Remote Sleep Mode, see the chapter on configuring remotes in the iBuilder User
Guide.
To configure service level based wake up, see the QoS Chapter in the iBuilder User Guide.

14 Automatic Beam Selection

This chapter describes Automatic Beam Selection (ABS) for roaming remotes in a maritime
environment.

Automatic Beam Selection Overview


An iDirect network is defined as a single outroute and one or more inroutes, all operating with
one satellite and one hub. A Network Management System (NMS) can manage and control
multiple networks.
You can define remotes that “roam” from network to network around the globe. These
roaming remotes are not constrained to a single location or limited to any geographic region.
Instead, by using the capabilities provided by the iDirect “Global NMS” feature, remote
terminals have true global IP access.
The decision of which network a particular remote joins is made by the remote. When joining
a new network, the remote must re-point its antenna to receive a new beam and tune to a
new outroute. Selection of the new beam can be performed manually (by using remote
modem console commands) or automatically. This chapter describes how automatic beam
selection is implemented in an iDirect network.
For detailed information on configuring and monitoring roaming remotes, and for additional
information on the ABS feature, see the iBuilder User Guide and the iMonitor User Guide.

Theory of Operation
Since the term “network” is used in many ways, the term “beam” is used rather than the
term “network” to refer to an outroute and its associated inroutes.
ABS is built on iDirect’s existing mobile remote functionality. When a modem is in a particular
beam, it operates as a traditional mobile remote in that beam.
In a maritime environment, a roaming remote terminal consists of an iDirect modem and a
controllable, steerable, stabilized antenna. The ABS software in the modem can command the
antenna to find and lock to any satellite. Using iBuilder, you can define an instance of the
remote in each beam that the modem is permitted to use. You can also configure and monitor
all instances of the remote as a single entity. The remote options file (which conveys
configuration parameters to the remote from the NMS) contains the definition of each of the


remote’s beams. Options files for roaming remotes, called “consolidated” options files, are
described in detail in the iBuilder User Guide.
As a vessel moves from the footprint of one beam into the footprint of another, the remote
must shift from the old beam to the new beam. Automatic Beam Selection enables the remote
to select a new beam, decide when to switch, and to perform the switch-over, without human
intervention. ABS logic in the modem reads the current location from the antenna and decides
which beam will provide optimal performance for that location. This decision is made by the
remote, rather than by the NMS, because the remote must be able to select a beam even if it
is not communicating with the network.
To determine the best beam for the current location, the remote relies on a beam map file
that is downloaded from the NMS to the remote and stored in memory. The beam map file is a
large data file containing beam quality information for each point on the Earth's surface as
computed by the satellite provider. Whenever a new beam is required by remotes using ABS,
the satellite provider must generate new map data in a pre-defined format referred to as a
“conveyance beam map file.” iDirect provides a utility that converts the conveyance beam
map file from the satellite provider into a beam map file that can be used by the iDirect
system.

Note: In order to use the iDirect ABS feature, the satellite provider must enter into an
agreement with iDirect to provide the beam map data in a specified format.
The iDirect NMS software consists of multiple server applications. One such server
application, known as the map server, manages the iDirect beam maps for remotes in its
networks. The map server reads the beam maps and waits for map requests from remote
modems.
A modem has a limited amount of non-volatile storage, so it cannot save an entire map of all
beams. Instead, the remote asks the map server to send a map of a smaller area (called a
beam “maplet”) that encompasses its current location. When the vessel nears the edge of its
current maplet, the remote asks for another beam maplet centered on its new location. The
geographical size of these beam maplets varies in order to keep the file size approximately
constant. A beam maplet typically covers a 1000 km square.
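The maplet refresh decision can be sketched as follows. This Python model is illustrative; the 50 km edge margin, the planar coordinate handling, and all names are assumptions rather than iDirect source code:

# Sketch of the maplet refresh decision (illustrative only).

from dataclasses import dataclass

@dataclass
class Maplet:
    x0_km: float     # west edge of the maplet in a local planar frame
    y0_km: float     # south edge of the maplet
    size_km: float   # maplets are roughly 1000 km squares

def need_new_maplet(pos_x_km, pos_y_km, maplet, margin_km=50.0):
    # Ask the map server for a replacement maplet when the vessel comes
    # within margin_km of any edge of the current one.
    if maplet is None:
        return True
    inside_x = (maplet.x0_km + margin_km <= pos_x_km
                <= maplet.x0_km + maplet.size_km - margin_km)
    inside_y = (maplet.y0_km + margin_km <= pos_y_km
                <= maplet.y0_km + maplet.size_km - margin_km)
    return not (inside_x and inside_y)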

Beam Characteristics: Visibility and Usability


The remote can determine two characteristics of each beam even without the map:
• A beam is defined as visible if the look elevation to the satellite is greater than the
minimum look elevation. The minimum look elevation defaults to ten degrees above the
horizon.
• A beam is usable unless an attempt to use it fails. The beam is considered unusable for a
period of one hour after the failure, or until all visible beams are unusable.
If the selected beam is unusable, the remote attempts to use another beam, provided one or
more usable beams are available. A beam can become unusable for many reasons, but each
reason ultimately results in the inability of the remote to communicate with the outside
world using the beam. Therefore, the only usability check is based on the “layer 3 state” of
the satellite link; that is, whether or not the remote can exchange IP data with the upstream
router.
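These two checks can be expressed as a short sketch. The Python below is illustrative only; the names, timer handling, and tracker structure are assumptions:

# Sketch of the visibility and usability checks (illustrative only).

import time

MIN_LOOK_ELEVATION_DEG = 10.0   # default minimum look elevation
UNUSABLE_HOLD_S = 3600.0        # a failed beam stays unusable for one hour

def is_visible(look_elevation_deg, min_elevation_deg=MIN_LOOK_ELEVATION_DEG):
    # A beam is visible when the look elevation to its satellite exceeds
    # the configured minimum.
    return look_elevation_deg > min_elevation_deg

class UsabilityTracker:
    def __init__(self):
        self.failed_at = {}   # beam id -> time of last failure

    def mark_unusable(self, beam_id):
        self.failed_at[beam_id] = time.monotonic()

    def is_usable(self, beam_id):
        t = self.failed_at.get(beam_id)
        return t is None or time.monotonic() - t >= UNUSABLE_HOLD_S

    def reset_if_all_unusable(self, visible_beam_ids):
        # When every visible beam is unusable, all are marked usable again.
        if visible_beam_ids and not any(self.is_usable(b) for b in visible_beam_ids):
            self.failed_at.clear()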
Examples of causes that might result in a beam becoming unusable include:
• The NMS operator disables the modem instance.


• A Hub Line Card fails with no available backup.


• The Protocol Processor fails with no backup.
• A component in the upstream or downstream RF chain fails.
• The satellite fails.
• The beam is reconfigured.
• The remote cannot lock to the downstream carrier.
• The receive line card stops receiving the modem.
Anything that causes the remote to inhibit its transmitter causes the receive line card to stop
receiving the modem, which eventually causes Layer 3 to fail. The modem stops transmitting
if it loses downstream lock. A mobile remote will also stop transmitting under the following
conditions:
• The remote has not acquired and no GPS information is available.
• The remote antenna declares loss-of-lock.
• The antenna declares a blockage.

Selecting a Beam without a Map


Under certain circumstances the remote will not have a beam maplet that covers its current
location. When this occurs, the remote uses a round-robin selection algorithm, attempting to
use each visible, usable beam defined in its options file, in turn, for five minutes each, until
it is acquired. This can occur under various conditions:
• When a remote is being commissioned.
• If the vessel travels with the modem turned off and must locate a beam when returned to
service.
• If the remote cannot remain in the network for an extended period due to blockage or
network outage.
• If the map server is unreachable.
In all cases, after the remote establishes communications with the map server, it immediately
asks for a new maplet. When a maplet becomes available, the remote uses the maplet to
compute the optimal beam, and switches to that beam if it is not the current beam.

Controlling the Antenna


To make the system work, the remote must be able to control the antenna. The remote
software communicates with the antenna control unit supplied with the antenna over the
local LAN. Since there is no standard antenna control protocol, the remote code must be
written specifically for each protocol. The following antenna protocols are currently
supported:
• Orbit-Marine AL-7104
• Schlumberger SpaceTrack 4000
• SeaTel DAC
• Open AMIP


A steerable, stabilized antenna must know its geographical location in order to point to the
satellite. The antenna includes a GPS receiver for this purpose. The remote must also know its
geographical location to select the correct beam and to compute its distance from the
satellite. The remote periodically commands the antenna controller to send the current
location to the modem.

IP Mobility
Communications to the customer intranet (or to the Internet) are automatically re-
established after a beam switch-over. The process of joining the network after a new beam is
selected uses the same internet routing protocols that are already established in the iDirect
system. When a remote joins a beam, the Protocol Processor for that beam begins advertising
the remote's IP addresses to the upstream router using the RIP protocol. When a remote
leaves a beam, the Protocol Processor for that beam withdraws the advertisement for the
remote's IP addresses. When the upstream routers see these advertisements and withdrawals,
they communicate with each other using the appropriate IP protocols to determine their
routing tables. This permits other devices on the Internet to send data to the remote over the
new path with no manual intervention.

Operational Scenarios
This section presents a series of top-level operational scenarios that can be followed when
configuring and managing iDirect networks that contain roaming remotes using Automatic
Beam Selection. Steps for configuring network elements such as iDirect networks (beams) and
roaming remotes are documented in the iBuilder User Guide. Steps specific to configuring ABS
functionality, such as adding an ABS-capable antenna or converting a conveyance beam map
file, are described in “Appendix C, Configuring Networks for Automatic Beam Selection” of
the iBuilder User Guide.

Creating the Network


This scenario outlines the steps that must be performed by the customer, the satellite
provider, and the network operator to create a network that uses ABS.
1. The customer selects the satellite provider, and the two agree on the set of beams
(satellites, transponders, frequencies, and footprints) to be used by remotes using ABS.
2. The satellite provider enters into an agreement with iDirect specifying the format of the
conveyance beam map file.
3. The satellite provider supplies the link budget for the hub and remotes.
4. iDirect delivers the map conversion program to the customer specific to the conveyance
beam map file specification.
5. The satellite provider delivers to the customer one conveyance beam map file for each
beam that the customer will use.
6. The customer orders and installs all required equipment and an NMS.
7. The NMS operator configures the beams (iDirect networks).
8. The NMS operator runs the conversion program to create the server beam map file from
the conveyance beam map file or files.


9. The NMS operator runs the map server as part of the NMS.

Adding a Vessel
This scenario outlines the steps required to add a roaming remote using ABS to all available
beams.
1. The NMS operator configures the remote modem in one beam.
2. The NMS operator adds the remote to the remaining beams.
3. The NMS operator saves the modem's options file and delivers it to the installer.
4. The installer installs the modem aboard a ship.
5. The installer copies the options file to the modem using iSite.
6. The installer manually selects a beam for commissioning.
7. The modem commands the antenna to point to the satellite.
8. The modem receives the current location from the antenna.
9. The installer commissions the remote in the initial beam.
10. The modem enters the network and requests a maplet from the NMS map server.
11. The modem checks the maplet. If the commissioning beam is not the best beam, the
modem switches to the best beam as indicated in the maplet. This beam is then assigned
a high preference rating by the modem to prevent the modem from switching between
overlapping beams of similar quality.
12. Assuming center beam in clear sky conditions:
a. The installer sets the initial transmit power to 3 dB above the nominal transmit power.
b. The installer sets the maximum power to 6 dB above the nominal transmit power.

Note: Check the levels the first time the remote enters each new beam and adjust the
transmit power settings if necessary.

Normal Operations
This scenario describes the events that occur during normal operations when a modem is
receiving map information from the NMS.
1. The ship leaves port and travels to its next destination.
2. The modem receives the current location from the antenna every five minutes.
3. While in the beam, the antenna automatically tracks the satellite.
4. As the ship approaches the edge of the current maplet, the modem requests a new
maplet from the map server.
5. When the ship reaches a location where the maplet shows a better beam, the remote
switches by doing the following:
a. Computes best beam.
b. Saves best beam to non-volatile storage.


c. Reboots.
d. Reads the new best beam from non-volatile storage.
e. Commands the antenna to move to the correct satellite and beam.
f. Joins the new beam.

Mapless Operations
This scenario describes the events that occur during operations when a modem is not
receiving beam mapping information from the NMS.
1. While operational in a beam, the remote periodically asks the map server for a maplet.
The remote does not attempt to switch to a new beam unless one of the following
conditions is true:
a. The remote drops out of the network.
b. The remote receives a maplet indicating that a better beam exists.
c. The satellite drops below the minimum look elevation defined for that beam.
2. If not acquired, the remote selects a visible, usable beam based only on satellite
longitude and attempts to switch to that beam.
3. After five minutes, if the remote is still not acquired, it marks the new beam as unusable
and selects the best beam from the remaining visible, usable beams in the options file.
This step is repeated until the remote is acquired in a beam, or all visible beams are
marked as unusable.
4. If all visible beams are unusable, the remote marks them all as usable, and continues to
attempt to use each beam in a round-robin fashion as described in step 3.
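Steps 2 through 4 can be sketched as follows, reusing the is_visible helper and UsabilityTracker from the earlier sketch. The closest-longitude rule for choosing among candidates is an assumption; the guide states only that selection is based on satellite longitude:

# Sketch of the mapless round-robin selection (illustrative only).
# Beam attributes (id, look_elevation_deg, satellite_longitude_deg) are
# assumed for this model.

ATTEMPT_WINDOW_S = 300.0   # try each beam for five minutes

def select_mapless_beam(beams, tracker, remote_longitude_deg):
    # Candidates are the visible, usable beams from the options file.
    visible = [b for b in beams if is_visible(b.look_elevation_deg)]
    candidates = [b for b in visible if tracker.is_usable(b.id)]
    if not candidates:
        # Step 4: every visible beam is unusable, so clear the marks
        # and start the round-robin again.
        tracker.reset_if_all_unusable([b.id for b in visible])
        candidates = visible
    if not candidates:
        return None   # no beam is above the minimum look elevation
    # With no maplet, choose by satellite longitude only (closest wins
    # in this sketch).
    return min(candidates,
               key=lambda b: abs(b.satellite_longitude_deg - remote_longitude_deg))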

Blockages and Beam Outages


This scenario describes the events that occur when a modem cannot join or loses the selected
beam.
1. If the remote fails to join the selected beam after five minutes, it marks the beam as
unusable and selects a new beam based on the maplet.
2. If the remote loses network connectivity for five minutes, it marks the current beam as
unusable and selects a new beam based on the maplet.
3. Any beam marked as unusable remains unusable for an hour or until all beams are marked
as unusable.
4. If only the current beam is visible, the remote will not attempt to switch from that beam,
even after losing connectivity for five minutes.


Error Recovery
This section describes the actions taken by the modem under certain error conditions.
1. If the remote cannot communicate with the antenna and is not acquired into the network,
it will reboot after five minutes.
2. If the antenna is initializing, the remote waits for the initialization to complete. It will
not attempt to switch beams during this time.

15 Hub Geographic Redundancy

This chapter describes how you can establish a primary and a backup hub that are
geographically diverse. It includes:
• “Feature Description,” which describes how geographic redundancy is accomplished.
• “Configuring Wait Time Interval for an Out-of-Network Remote,” which describes how you
can set the wait period before switchover.

Feature Description
The Hub Geographic Redundancy feature builds on the previously developed Global NMS
feature and the existing dbBackup/dbRestore utility. You configure the Hub Geographic
Redundancy feature by defining all the network information for both the Primary and Backup
Teleports in the Primary NMS. All remotes are configured as roaming remotes and they are
defined identically in both the Primary and Backup Teleport network configurations.
Only iNFINITI remotes can currently participate in Global NMS networks. Since the backup
teleport feature also uses the Global NMS capability, this feature is also restricted to iNFINITI
remotes.
During normal (non-failure) operations, carrier transmission is inhibited on the Backup
Teleport. During failover conditions (when roaming network remotes fail to see the
downstream carrier through the Primary Teleport NMS) you can manually enable the
downstream transmission on the Backup Teleport, allowing the remotes to automatically
(after the configured default wait period of five minutes) acquire the downstream
transmission through the Backup Teleport NMS.
iDirect recommends the following for most efficient switchover:
• A separate IP connection (at least 128 Kbps) between the Primary and Backup Teleport
NMS for database backup and restore operations. A higher rate line can be employed to
reduce this database archive time.
• The downstream carrier characteristics for the Primary and Backup Teleports MUST be
different. For example, either the FEC, frequency, frame length, or data rate values must
be different.
• On a periodic basis, backup and restore your NMS configuration database between your
Primary and Backup Teleports. See the NMS Redundancy and Failover Technical Note for
complete NMS redundancy procedures.


Configuring Wait Time Interval for an Out-of-Network Remote
If a roaming remote is configured at both a Primary and Backup Hub, and the remote loses the
Downstream carrier from the Primary Hub, the remote attempts to lock to the Downstream
carrier from the Backup Hub after a configured interval in seconds. By default, this “wait
time” before attempting the switch is 300 seconds (5 minutes). This wait time for beam
switchover can be changed by setting the net_state_timeout custom key value (in
seconds) to the desired wait period.
For example, if you want to make the wait period 10 minutes, use the following custom key:
[REMOTE_DEFINITION]
net_state_timeout=600
For further configuration information, see the chapter “Defining Network Components” in the
iBuilder User Guide.

16 Carrier Bandwidth Optimization

This chapter describes carrier bandwidth optimization and carrier spacing. It includes the
following sections:
• “Overview” describes how reducing carrier spacing increases overall available bandwidth.
• “Increasing User Data Rate” provides an example of how you can increase user data rates
without increasing occupied bandwidth.
• “Decreasing Channel Spacing to Gain Additional Bandwidth” provides an example of how
you can free enough bandwidth to add another carrier.

Overview
The Field Programmable Gate Array (FPGA) firmware uses optimized digital filtering, which
reduces the amount of satellite bandwidth required for an iDirect carrier. Instead of using a
40% guard band between carriers, the guard band may now be reduced to as low as 20% on
both the broadcast Downstream channel and the TDMA Upstream. Figure 45 shows an overlay
of the original spectrum and the optimized spectrum.

Figure 45. Overlay of Carrier Spectrums

This optimization translates directly into a cost savings for existing and future networks
deployed with iDirect remote modems.


The spectral shape of the carrier is not the only factor contributing to the guard band
requirement. Frequency stability parameters of a system may result in the need for a guard
band of slightly greater than 20% to be used. iDirect complies with the adjacent channel
interference specification in IESS 308 which accounts for adjacent channels on either side
with +7 dB higher power.
Be sure to consult the designer of your satellite link prior to changing any carrier parameters
to verify that they do not violate the policy of your satellite operator.

Increasing User Data Rate


Since the amount of required guard band between carriers has been reduced, it is now
possible to fit a higher bit rate carrier into the same satellite bandwidth that was required
previously. Therefore, a network operator can increase the bit rate of existing carriers
without purchasing additional bandwidth.
A consequence of choosing this option is that increasing the bit rate of the carrier to fill the
extra bandwidth requires slightly more power. Increasing the bit rate by 15% would result in
an additional 0.5 dB of power. Be sure to consult the provider of your link budget prior to
adjusting the bit rate of your carriers.
Frequency stability in the system may limit the amount of bit rate increase by increasing the
guard band requirement.
The example that follows illustrates a scenario applicable to a system with negligible
frequency stability concerns. It shows how the occupied bandwidth does not increase when
the user data rate increases. In this example, FEC rate 0.793 with 4 kbit Turbo Product Code is
used.
Current Carrier Parameters:
• User Bit (info) Rate: 1000 kbps
• Carrier Bit Rate: 1261.034 kbps
• Carrier Symbol Rate: 630.517 ksps
• Occupied Bandwidth: 882.724 kHz
• Guard Band Between Carriers: 40% (Channel Spacing = 1.4)
New Carrier Parameters:
• User Bit (info) Rate: 1166.667 kbps
• Carrier Bit Rate: 1471.206 kbps
• Carrier Symbol Rate: 735.603 ksps
• Occupied Bandwidth: 882.724 kHz
• Guard Band Between Carriers: 20% (Channel Spacing = 1.2)
A 16.67% improvement in user data rate is achieved at no additional cost.
It is possible that due to instability of frequency references in a satellite network system, a
carrier may not fall exactly on its assigned center frequency. iDirect networks combat
frequency offset using an automatic frequency control algorithm. Any additional instability
must be accommodated by additional guard band.
The frequency references to the hub transmitter and to the satellite itself are generally very
stable so the main source of frequency instability is the downconverter at the hub. This is


because the automatic frequency control algorithm uses the hub receiver’s estimate of
frequency offset to adjust each remote transmitter frequency. Hub stations which use a
feedback control system to lock their downconverter to an accurate reference may have
negligible offsets. Hub stations using a locked LNB will have a finite frequency stability range.
Another reason to add guard band is to account for frequency stability of other carriers
directly adjacent on the satellite which are not part of an iDirect network. Be sure to review
this situation with your satellite link designer before changing carrier parameters.
The example that follows accounts for a frequency stability range for systems using
equipment with more significant stability concerns. Given the “Current Carrier Parameters”
in the previous example and a total frequency stability of +/-5 kHz, compute the new carrier
parameters:
Solution:
• Subtract the total frequency uncertainty from the available bandwidth to determine the
amount of bandwidth left for the carrier (882.724 kHz – 10 kHz = 872.724 kHz).
• Divide this result by the minimum channel spacing (872.724 / 1.2 = 727.270 kHz).
• Use the result as the carrier symbol rate and compute the remaining parameters.
New Carrier Parameters:
• User Bit (info) Rate: 1153.450 kbps
• Carrier Bit Rate: 1454.540 kbps
• Carrier Symbol Rate: 727.270 ksps
• Occupied Bandwidth: 882.724 kHz
• Guard Band Between Carriers: 21.375% (Channel Spacing = 1.21375)
A 15.345% improvement in user bit rate was achieved at no additional cost.
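Both examples can be reproduced with a short calculation. In the sketch below, the assumption of QPSK modulation (2 bits per symbol) is inferred from the published carrier bit and symbol rates; it is not stated explicitly in this guide:

# Worked check of the two examples above (illustrative; QPSK at 2 bits
# per symbol is an assumption inferred from the published figures).

FEC_RATE = 0.793          # rate 0.793 TPC with 4 kbit blocks
BITS_PER_SYMBOL = 2.0     # assumed QPSK

def carrier_from_bandwidth(occupied_khz, spacing, uncertainty_khz=0.0):
    """Given the available occupied bandwidth, the channel spacing factor,
    and any total frequency uncertainty, recover the carrier parameters."""
    symbol_rate = (occupied_khz - uncertainty_khz) / spacing      # ksps
    carrier_bit_rate = symbol_rate * BITS_PER_SYMBOL              # kbps
    user_rate = carrier_bit_rate * FEC_RATE                       # kbps
    return user_rate, carrier_bit_rate, symbol_rate

# Original carrier: 1000 kbps user rate at 40% guard band -> 882.724 kHz.
occupied = (1000.0 / FEC_RATE / BITS_PER_SYMBOL) * 1.4

print(carrier_from_bandwidth(occupied, 1.2))        # ~1166.7 kbps user rate
print(carrier_from_bandwidth(occupied, 1.2, 10.0))  # ~1153.5 kbps with +/-5 kHz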

Decreasing Channel Spacing to Gain Additional Bandwidth
The amount of required guard band between carriers can also be expressed as the channel
spacing requirement. For example, if the required guard band is 20%, the channel spacing
requirement is 1.2*Carrier_Symbol_Rate (Hz).
Therefore, a network operator may take advantage of the new carrier bandwidth
optimization by reworking their frequency plan such that excess bandwidth is available for
use by another carrier.
For example, consider an iDirect network with a user data (information) rate of 5 Mbps on the
downstream and three upstream carriers of 1 Mbps each. FEC rate 0.793 with 4 kbit TPC is
used for all carriers in this example. Figure 46 shows that an additional Upstream carrier may
be added by reducing the channel spacing of the existing carriers.


Figure 46. Adding an Upstream Carrier By Reducing Carrier Spacing
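The bandwidth arithmetic behind this rework can be checked with a short sketch, using the same assumptions as the previous sketch (QPSK, rate 0.793 FEC). The totals in the comments are derived from those assumptions, not quoted from this guide:

# Rough check of the frequency plan rework above (illustrative only).

def occupied_khz(user_kbps, spacing, fec=0.793, bits_per_sym=2.0):
    # Occupied bandwidth = symbol rate x channel spacing factor
    return user_kbps / fec / bits_per_sym * spacing

before = occupied_khz(5000, 1.4) + 3 * occupied_khz(1000, 1.4)  # ~7061.8 kHz
after  = occupied_khz(5000, 1.2) + 4 * occupied_khz(1000, 1.2)  # ~6809.6 kHz
assert after < before  # the fourth upstream fits in the original allocation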

17 Hub Line Card Failover

This chapter describes basic hub line card failover concepts, transmit/receive versus
receive-only line card failover, the failover sequence of events, and failover operation from a
user’s point of view.
For information about configuring your line cards for failover, refer to the “Networks, Line
Cards, and Inroute Groups” chapter of the iBuilder User Guide.

Basic Failover Concepts


Each second, every line card sends a diagnostic message to the NMS. This message contains
the status of various onboard components. If the NMS fails to receive any diagnostic messages
from a line card for 60 seconds, and all failover prerequisites are met, it considers that the
line card may be in a failed state. It takes another 15 seconds to ensure that the line card has
truly failed. It then starts the failover process.
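The detection timing can be expressed as a small watchdog sketch. The Python below is illustrative only; the NMS implementation is not published, and the names and structure are assumptions:

# Sketch of the NMS line card failure detection timing (illustrative).

import time

SILENCE_LIMIT_S = 60.0   # no diagnostic messages for 60 seconds
CONFIRM_S = 15.0         # a further 15 seconds to confirm the failure

class LineCardWatchdog:
    def __init__(self):
        self.last_heard = time.monotonic()
        self.suspect_since = None

    def on_diagnostic_message(self):
        # Every line card reports its onboard status once per second.
        self.last_heard = time.monotonic()
        self.suspect_since = None

    def should_fail_over(self, prerequisites_met):
        now = time.monotonic()
        if now - self.last_heard < SILENCE_LIMIT_S or not prerequisites_met:
            return False
        if self.suspect_since is None:
            self.suspect_since = now     # start the 15-second confirmation
            return False
        return now - self.suspect_since >= CONFIRM_S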
For Tx(Rx) line cards, the standby assumes the failed card’s role immediately. For Rx line
cards, the standby needs to flash a new options file and reset. The estimated time to
complete a Tx(Rx) line card failover is less than 10 seconds; the estimated time to complete
an Rx-only line card failover is less than 1 minute.

Note: If your Tx line card fails, or you only have a single Rx line card and it fails, all
remotes must re-acquire into the network after failover is complete.

Tx(Rx) versus Rx-Only Line Card Failover


The most important line card in a network is the Tx(Rx) line card; if this card fails, all
remotes drop out of the network. When an Rx-only card in a frequency-hopping inroute group
fails, all remotes automatically begin sharing the other inroutes. While this may result in
diminished bandwidth, remotes do not drop out of the network.
iDirect’s failover method guarantees the fastest failover possible for the Tx(Rx) line cards.
The standby line card in each network is pre-configured with the parameters of the Tx card
for that network, and has those parameters loaded into memory. The only difference between
the active Tx(Rx) card and the standby is that the standby mutes its transmitter (and
receiver). When the NMS detects a Tx(Rx) line card failure, it sends a command to the
standby to un-mute its transmitter (and receiver), and the standby immediately assumes the
role of the Tx(Rx) card.


Rx-only line cards take longer to fail over than Tx(Rx) cards because they need to receive a
new options file, flash it, and reset.

Failover Sequence of Events


Figure 47 shows the sequence of events performed on the NMS server to execute a complete
failover; it is summarized in the numbered steps below. Portions of the failover sequence are
visible in real time. You may perform a historical condition query in iMonitor at any time to
see the alarms and warnings that are generated and archived during the failover operation.

1. The Event Server determines that a line card has failed.
2. The Configuration Server is notified.
3. If automatic failover is not selected, the process stops here: iMonitor shows the line card
in the Alarm state, and the user may initiate a manual failover if desired.
4. Whether the failover is automatic or manually initiated, the failover prerequisites are
checked. If the prerequisites are not met, the process stops; the user will already have
been notified that failover cannot happen.
5. The Configuration Server powers down the slot of the failed card. All subsequent
operations are handled by the Configuration Server unless otherwise noted.
6. For an Rx-only line card, the Configuration Server sends the ACTIVE options file of the
failed card to the spare and resets it. Otherwise, it sends a command to the spare to
switch its role from Standby to Primary and sends the ACTIVE options file of the failed
card, but does not reset the spare.
7. The Configuration Server applies the necessary changes to puma (serial number).
8. The former spare takes over the role of the failed card (Tx, TxRx, or Rx) and its
carrier/inroute group assignments. At this point, the Configuration Server must grab the
exclusive write lock; any user holding the lock loses it, along with any unsaved changes.
9. The failed unit is assigned the new role: Failed.

Figure 47. Failover Sequence of Events

