Optical Control Plane Tutorial
MUPBED Workshop at TNC2007, Copenhagen
Acknowledgement: The author thanks all colleagues from
the OIF for their work, which has been the basis for this
tutorial
The responsibility for the content of this tutorial is with
the author
Hans-Martin.Foisel, T-Systems / Deutsche Telekom
OIF Carrier WG Chair, OIF Vice President
www.oiforum.com
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
Optical Control Plane Goals
[Figure: The optical control plane spans access, edge, metro, core and long-haul network segments, carrying diverse client interfaces (DS-1/DS-3, E-1/E-3, ATM, FR, 10/100bT, IP, Ethernet, FC, FICON) over OC-3/12/48/192 and STM-1o/4/16/64 transport, with a TMF-814 interface to the management plane.]
Offer real multi-vendor and multi-carrier inter-working
Enhance service offering with Ethernet and IP / Optical
Provide end-to-end service activation
Integrated cross-domain provisioning of switched connection services
Provide accurate inventory management
Realizing Optical Control Plane Goals
Framework Elements
Robust and scalable transport infrastructure that
facilitates carriage of desired services
Management plane that complements control plane in
facilitating deployment and management of services
Control plane architecture spanning user and provider
networks that supports multiple provider business
models and user service requests
Control plane protocols based upon existing and
emerging protocols of the data world
Robust Data Communications Network architecture
and mechanisms that enable interaction of the
protocols running at each node
Intelligent Transport Networks
introduce ...
A distributed
Control Plane
Signaling protocols
for dynamic setup
and teardown of
connections
Routing protocols for
automatic routing
Building on concepts/protocols from the
data world
Key Concepts Derived from the Data World
Distributed processing/knowledge/storage
Directory services
E.g., DNS, X.500
Open Distributed Processing
Standardized route determination and topology dissemination protocols
Routing information exchange mechanisms
E.g., RIP, OSPF, BGP, IS-IS/ES-IS
Flexibility in binding time decisions
Difference between provisioning and auto-discovery
Security based upon logical versus physical barriers
E.g., authentication, integrity, encryption
Differentiate between provisioning and more dynamic connection
management
Survivability
Distributed restoration using signaling
Leveraging Existing Protocol Solutions
Caveats
When taking protocol solutions developed for the
classical Internet, they bring along their underlying
principles and architectural assumptions:
The classical Internet architecture served a community of
users with common goals and mutual trust
Commercialization of the Internet brought more business
critical infrastructure and availability requirements
Transport business and operational requirements call for a
control plane architecture enabling boundaries for
policy and information sharing
Optical Control Plane Capabilities
[Figure: The optical control plane (distributed intelligence) reacts to bandwidth requests/releases from clients, to management system requests, and to network failures, using its control plane functions: signalling, routing, and discovery. Resulting benefits: improved bandwidth usage/efficiency, scheduled/unscheduled bandwidth on demand (BoD), OSS simplification, autodiscovery.]
Related Standards Development Organizations
[Figure: Contributions of the related SDOs: ITU-T Recommendations (ASON architecture & requirements), IETF RFCs (GMPLS protocols), OIF Implementation Agreements (ASON/GMPLS E-NNI and UNI, control plane management, interop results, use cases), TMF Solution Sets, and MEF Technical Specifications (Ethernet services, signalling for Ethernet services).]
Protocols and Architectures
Control Plane capabilities are implemented in protocols,
whose elements can be combined to support different
architectures/implementations
Different SDOs contribute various protocol elements and
architectural components
[Figure: Control plane solutions are assembled from IETF RFCs, ITU-T Recommendations and OIF Implementation Agreements.]
Control Plane Specifications - Example
Requirements & Architecture: ITU-T G.8080; IETF RFC 3945; TMF 509
Auto-discovery: ITU-T G.7714, G.7714.1; IETF RFC 4204, RFC 4207
Signaling: ITU-T G.7713, G.7713.2; IETF RFC 3473, RFC 3474, RFC 3946, RFC 4208; OIF UNI 1.0, UNI 2.0, E-NNI 1.0, E-NNI 2.0
Routing: ITU-T G.7715, G.7715.1, G.7715.2; IETF RFC 4202; OIF E-NNI OSPF 1.0
DCN/SCN & Management: ITU-T G.7712, G.7718, G.7718.1; IETF GMPLS MIB RFCs; TMF 814
Optical Internetworking Forum (OIF)
Mission: To foster the development and deployment of
interoperable products and services for data switching
and routing using optical networking technologies
The OIF is the only industry group that brings together
professionals from the data and optical worlds
Its 100+ member companies represent the entire
industry ecosystem:
Carriers and network users
Component and systems vendors
Testing and software companies
12
OIF Technical Committee Working Groups
13
Optical Control Plane
Implementation Agreement Status
OIF Control Plane IA Dashboard
Signaling: OIF-UNI-01.0-R2, OIF-UNI-01.0-R2-RSVP, OIF-ENNI-01.0, OIF-UNI-02.0, OIF-UNI-02.0-RSVP, OIF-ENNI-02.0
Routing: OIF-ENNI-01.0-OSPF
Security: OIF-SEP-01.0, OIF-SEP-02.1, OIF-SMI-01.0, OIF-SMI-02.1
Management: OIF-CDR-01.0, Control Plane Logging and Auditing with Syslog
Status of the individual IAs ranges from Draft through Straw Ballot and Letter Ballot to Approved IA
http://www.oiforum.com/public/impagreements.html
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
15
Business deployment considerations
16
Optical Control Plane
Business Deployment Considerations
Optical control plane viability depends upon
supporting business as well as technical
requirements
Service Provider business models
Commercial and operational practices
Services and network infrastructure heterogeneity
Control and management plane heterogeneity
Network and equipment interoperability
Forms foundation of fundamental optical
control plane architecture principles
17
Service Provider Business Models
Internet Service Provider (ISP)
Delivers IP-based services
Owns all of its infrastructure (i.e., including fiber and duct to the
customer premises)
Leases some of its fiber or transport capability from a third party
Classical Service Provider
Offers L1/L2/L3 services
Owns its transport network infrastructure
Sells services to customers who may resell to others
Carrier's Carrier (Service Broker)
Provides optical networking services
May not own transport infrastructure(s) supporting those services
(connection carried over third party networks)
Research networks (NRENs, GEANT2, Internet2)
18
Commercial & Operational Practices (1)
Enable protection of commercial business operating
practices and resources from external scrutiny or
control
A network operator is likely to support a number of user
services networks; a trust relationship cannot be assumed
between the network and these users (or among the various
users)
A network operator will not relinquish control of its resources
outside of its administrative boundaries, as the network is
a prime asset
Support a pay for service commercial model
Network operators differentiate their services by defining
their own branded bundles of functionality, service quality,
support, and pricing plans
Value-added services must be verifiable and
billable in a value-preserving way
19
Commercial & Operational Practices (2)
Protect security and reliability of optical transport
network
Optical transport network connection persistence
must not be affected by failures of its control plane,
including that of the control communications network
Signaling Communications Network (SCN)
The network must be safeguarded against attacks that
may compromise its control plane, or seek
unauthorized use of its resources
Control plane security
20
Services Heterogeneity
A wide range of services may be offered; e.g.,
Classical data (e.g., best effort Internet, Frame Relay)
Ethernet (e.g., EPL, EVPL, EPLAN, EVPLAN)
L1/L2/L3 Virtual Private Network (VPN)
SONET/SDH switched connection (e.g., STS-n, VC-n)
OTH switched connection (e.g., ODU, OCh)
Many different service deployment scenarios; e.g.,
All services interface at the IP level
Various services interface at L1, L2, and L3
Various options for L1 and L2 topologies and re-configurability
in access, metro, and core networks
21
Network Infrastructure Heterogeneity
Extremely diverse network of networks, with widely varying
topologies, deployed technologies, services/applications
supported
Support operator-specific criteria including cost,
performance, and survivability characteristics
Breadth of existing and emerging data plane
technologies
Choice of infrastructure granularity options
Flexible capacity adjustment schemes
Range of single- and multi-layer survivability
strategies
Differing infrastructure evolution strategies
22
Control & Management Heterogeneity
Control plane-based subnetworks
Management plane-based subnetworks (with various operations
support system environments)
Hybrid control plane / management plane scenarios; e.g.,
Use of signaling protocols in combination with centralized route
calculation
Mix of control plane and management plane based subnetworks
[Example figure: Within Network Provider A, an NMS and EMSs provide management plane based connection control over SNC 1, while control plane based connection control handles SNC 2; together they form the network connection.]
Optical Control Plane
Network Operator Deployment Observations
Optimal network layering, convergence choices,
equipment selection dependent upon multiple factors
Network size, geography, projected growth
Service offerings portfolio, QoS committed in SLAs
Cost, performance, resiliency trade-offs
Operations support system environment
Whether services traverse multiple operator domains
Differing network operator transport infrastructure,
control & management deployments and evolution
strategies
Optical control plane architecture must support multidimensional heterogeneity
24
Heterogeneity & Research Projects
NOBEL
25
Fundamental optical control plane
architecture principles
26
Optical Control Plane
Fundamental Architecture Principles (1)
Decouple services from service delivery mechanisms
Wide range of network infrastructure options
Network operator specific optimizations
Decouple QoS from realization mechanisms
Wide range of survivability options
Network operator specific approaches
Introduce call construct, which reflects a
service association that is distinct from
infrastructure/realization mechanisms
27
Optical Control Plane
Fundamental Architecture Principles (2)
Provide boundaries of policy and information sharing
Range of network operator business models
Varying trust relationships among users and providers,
among users, among providers
Targeted solutions, scalability considerations (scope of
information dissemination), etc.
Establish modular architecture with
interfaces at policy decision points
28
Optical Control Plane
Fundamental Architecture Principles (3)
Provide for various distributions of control functionality among
physical platforms
Different distributions of routing and signaling control
Fully centralized to fully distributed system designs
Decouple topology of the controlled network from that of the
network supporting control plane communications (SCN)
The transmission medium may be different for control plane messages
and transport plane data
Identifiers to distinguish transport resources
from, and among, signaling and routing
control entities, and SCN addresses
29
ASON architecture and standards status
30
Optical Control Plane
ITU-T ASON Recommendation Framework
Rec. G.8080, Automatically Switched Optical Network (ASON)
Protocol-neutral Recommendations:
DCN/SCN: G.7712
Signaling: G.7713
Auto-discovery: G.7714
Routing: G.7715 (link state: G.7715.1; remote path query: G.7715.2)
Control plane initialization: G.7716
Management framework: G.7718 (information model: G.7718.1)
Protocol-specific Recommendations:
G.7713.1 (PNNI-based), G.7713.2 (GMPLS RSVP-TE), G.7713.3 (GMPLS CR-LDP)
G.7714.1 (discovery for SDH/OTN)
Optical Control Plane
ITU-T ASON Architecture
ITU-T G.8080/Y.1304, Architecture of the Automatically Switched
Optical Network
First version Approved Nov.01, several subsequent Amendments,
first major revision Approved June 2006
Subsumes and deprecates ITU-T Rec. G.807, Requirements for
Automatically Switched Transport Networks, Approved July 01
Architecture considers business and operational aspects of real-world deployments
Call and connection separation, connection persistence,
customer/network address space isolation, domain constructs,
reference points and interfaces
Leverages transport layer network constructs utilized in all transport
network architecture and equipment Recommendations
Applicable to all connection-oriented transport networks (whether
circuit or packet)
32
ITU-T ASON Architecture
Calls and Connections
Objective: Support ability to offer enhanced/new
types of transport services facilitated by:
Automatic provisioning of transport network connections
Span one or more managerial/administrative domains
Involves both a Service and Connection perspective
Call : Support the provisioning of end-to-end services while
preserving the independent nature of the various businesses
involved
Connection : Automatically provision network connections
(in support of a service) that span one or more
managerial/administrative domains
33
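To make the call/connection separation concrete, the following minimal Python sketch (not from the tutorial; class and field names are illustrative only) models a call composed of per-demarcation call segments, each supported by one or more connections:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Connection:
    """A network connection provisioned in support of one call segment."""
    connection_id: str
    route: List[str]          # e.g. a list of node identifiers

@dataclass
class CallSegment:
    """Call state held at a service demarcation point (UNI or E-NNI)."""
    segment_id: str
    demarcation: str          # "UNI" or "E-NNI"
    connections: List[Connection] = field(default_factory=list)

@dataclass
class Call:
    """End-to-end service association, independent of how it is realized."""
    call_id: str
    a_end: str                # calling party, named in user terms
    z_end: str                # called party
    segments: List[CallSegment] = field(default_factory=list)

    def add_segment(self, segment: CallSegment) -> None:
        self.segments.append(segment)

# A call spanning two domains: UNI segments at each end plus network segments.
call = Call("call-1", a_end="client-A", z_end="client-Z")
for seg_id, demarc in [("uni-a", "UNI"), ("dom-1", "E-NNI"),
                       ("dom-2", "E-NNI"), ("uni-z", "UNI")]:
    call.add_segment(CallSegment(seg_id, demarc))
call.segments[1].connections.append(Connection("conn-7", ["NE1", "NE2", "NE3"]))
```

The point of the sketch is that the Call object carries the service association, while the Connections under each segment can be rerouted or re-provisioned without changing the call itself.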
ITU-T ASON Architecture
Domains
ASON domains represent generalization of existing
traditional concepts
Transport definitions of administrative/management domains
Internet administrative regions
Domains may express differing:
Administrative and/or managerial responsibilities
Trust relationships, addressing schemes
Distributions of control functionality
Infrastructure capabilities, survivability techniques, etc.
Domains are established by network operator policies
34
ITU-T ASON Architecture
Interfaces (1)
Service demarcation points are where call control is provided
Inter-domain interfaces are service demarcation points
[Figure: A user IP/MPLS router attaches to the provider network at the UNI service demarcation point; call control is exercised at the UNI and within the provider network (Ethernet/ATM/FR over SONET/SDH/OTN) by the optical control plane, with the provider management system in the management plane above the transport plane.]
Design modularized around open interfaces at domain
boundaries
UNI, E-NNI, I-NNI
35
ITU-T ASON Architecture
Interfaces (2)
[Figure: Clients attach to Provider A and Provider B via UNIs; the two provider domains interconnect via an E-NNI, with I-NNIs inside each domain.]
UNI separates the concerns of
the user and provider:
"3.6 Modularity is good. If you can keep things separate, do so." - RFC 1958
Objects referenced are User objects,
and are named in User terms
UNI enables:
Client driven end-to-end service
activation
Multi-vendor inter-working
Multi-client
IP, Ethernet, TDM, etc.
Multi-service
SONET/SDH, Ethernet, etc.
Service monitoring interface for SLA
management
36
ITU-T ASON Architecture
Interfaces (3)
[Figure: UNI-C/UNI-N pairs at the client-network boundary, an E-NNI between the Provider A and Provider B domains, and I-NNIs within each domain.]
E-NNI enables:
End-to-end service activation
Multi-vendor inter-working
Multi-carrier inter-working
Independence of survivability
schemes for each domain
I-NNI supports:
Intra-domain connection
establishment
Explicit connection
operations on individual
switches
37
ITU-T ASON Architecture
Call Control & Interfaces
Call state is maintained at network access points, and at key
network transit points where it is necessary or desirable to apply
policy
Calls that span multiple domains are comprised of call segments, with
call control provided at service demarcation points (UNI/E-NNI)
One or more connections are established in support of individual call
segments, with scope of connection control typically limited to a single
call segment
[Figure: An end-to-end CALL across Domain A and Domain B is composed of call segments (UNI call segment, Domain A call segment, E-NNI call segment, Domain B call segment, UNI call segment), each supported by one or more CONNECTIONS between the NEs.]
Components of Control Plane enabled
Network Domains
[Figure: A control plane enabled network domain comprises the management plane (including CP management), the DCN, the control plane, and the data plane.]
Optical Control Plane Service
Permanent Connection
All intra-/inter-domain calls and connections are
provisioned by Management Plane actions
[Figure: Client domain C1 - transport network provider domains TN1, TN2 - client domain C2; every segment is provisioned by the management plane, yielding a permanent connection. C: client network domain; TN: transport network provider domain.]
Optical Control Plane Service
Soft Permanent Connection
Management plane of a transport network provider
domain is initiating a call/connection
[Figure: C1 connects to TN1 via a provisioned permanent connection; TN1 (the SPC initiating domain) sets up a switched connection across the E-NNI through TN2; TN2 connects to C2 via a permanent connection. The result is a Soft Permanent Connection (SPC). C: client network domain; TN: transport network provider domain.]
Optical Control Plane Service
Switched Connection
Management plane of a client domain is initiating a
call/connection
[Figure: Client domain C1 (the SC initiating domain) requests the call over the UNI to TN1; the switched connection spans TN1 and TN2 via the E-NNI and terminates at C2 over the far-end UNI. C: client network domain; TN: transport network provider domain.]
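The three connection services differ only in which plane initiates the call/connection for each part of the path. A small illustrative Python sketch (names and structure are assumptions for illustration, not from any Recommendation) makes the distinction explicit:

```python
from enum import Enum

class Initiator(Enum):
    MANAGEMENT_PLANE = "management plane"
    CONTROL_PLANE = "control plane"

def classify_service(client_edges: Initiator, network_core: Initiator) -> str:
    """Classify an end-to-end service by who establishes its segments.

    client_edges: how the client-facing segments are established
    network_core: how the connection across the TN domains is established
    """
    if client_edges is Initiator.MANAGEMENT_PLANE:
        if network_core is Initiator.MANAGEMENT_PLANE:
            return "Permanent Connection (PC)"       # fully provisioned by the Mp
        return "Soft Permanent Connection (SPC)"     # switched core, provisioned edges
    return "Switched Connection (SC)"                # client signals over the UNI

print(classify_service(Initiator.MANAGEMENT_PLANE, Initiator.MANAGEMENT_PLANE))
print(classify_service(Initiator.MANAGEMENT_PLANE, Initiator.CONTROL_PLANE))
print(classify_service(Initiator.CONTROL_PLANE, Initiator.CONTROL_PLANE))
```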
G.805 Transport Foundation
43
G.805 Foundation Elements Transport
Resources
Introduction of automated control doesn't
remove/change the attributes of transport
resources
Control Plane needs to be able to configure the same
attributes
Introduction of automated control doesn't modify
the functional components that exist within the
transport plane
44
Transport Network/Equipment Architecture
Informal Specification Approaches
Described in terms of network elements, facilities,
and cross-connections
Facilities identified in terms of the physical layer
characteristics
Cross-connections between constituents of
facilities or embedded facilities
DS1 Service example
[Figure: DS1 service example: DS1 - 3:3 DCS - DS3 - 3:1 DCS - SONET - regenerator - SONET - 3:1 DCS - SONET - 3:3 DCS - DS1, with cross-connections at each DCS.]
Transport Network/Equipment Architecture
Informal Specification Approaches
Issues
Model specific to the technologies used in the NEs
Difficult to understand network topology without
understanding details of the NEs
Subject to differing interpretations of equipment
specifications/behaviors arising from natural language
description
Usage of different terminology; e.g., in doing a functional
decomposition, different specifiers may group
functionality in different ways but use the same term to
denote the functional block
Development of more formalized specification
techniques initiated during 1988 time frame
46
Transport Network Constructs
Formal Specification Techniques
Recognize new challenges of emergent multi-carrier,
multi-vendor telecommunications environment
Increasingly complex networks and behaviors, arising from
deployment of multi-technology networks & equipment
No single network architecture, or single set of network
elements, that suited all operators
Better support for multi-carrier/multi-vendor
interoperability
Unambiguous specifications that dont impose
unnecessary architectural constraints
Network operator transport infrastructure technology
deployment choices and evolution strategies
Network equipment provider innovation re equipment types
47
Transport Network Constructs
Formal Specification Techniques - G.805
Describes the generic characteristics of networks using a
common language
Transcends technology and physical architecture choices
Provides a view of functions or entities that may be distributed
among a number of equipments
Defines elements that support the modeling of topological
and functional concepts
Topology refers to how elements of the network are
interconnected
Functions refer to how signals are transformed during their
passage through the network
Defines small number of architectural components that may
be interconnected to represent various network/equipment
configurations
48
Transport Network Constructs
Topological G.805 Layers
A layer is defined in terms of its set of signal
properties - characteristic information
Networks can be represented in terms of a stack of
client/server relationships
Helps manage the complexity created by the
presence of different types of characteristic
information in networks
Allows the management of each layer to be similar
49
Transport Network Constructs
Topological G.805 Example
[Figure: DS3 client carried over an STM-N signal, shown as a vertical stack of layer networks: the DS3 signal (DS3 client layer network) is mapped into a C-3 container with VC-3 path overhead insertion (VC-3 path layer network); VC-3s are aligned and multiplexed into the STM-1 frame with multiplex section overhead generation (multiplex section layer network); regenerator section overhead is generated and STM-1s are multiplexed into the STM-N frame (regenerator section layer network); finally the signal is converted to the STM-N physical interface (physical media layer network).]
Transport Network Constructs
DS-1 Service Architecture & Equipment
[Figure: Functional model of the DS-1 service: the end-to-end DS-1 path trail is composed of DS-1 path connections carried over DS-3 path trails/connections, which in turn ride on STS-1 trails/connections, SONET line trails/connections, section trails/connections, and optical trails, passing through muxes, 3:3 and 3:1 DCSs, and a regenerator.]
Transport Functional Modeling
Topological G.805 Partitioning
Even for a single layer, complexity arises from the
many different network nodes and connections
between them
Partitioning is defined as the division of layer
networks into separate subnetworks that are
interconnected by links representing the available
transport capacity between them
Helps manage complexity by using the principle of
recursion to tailor the amount of detail to be
understood at a particular time according to the
need of the viewer
Allows the management of each partition to be
similar
52
Transport Functional Modeling
Topological G.805 Partitioning Example
[Figure: The DS3 layer network partitioned (horizontally) into subnetworks interconnected by links.]
G.805 Transport Network Constructs
Architectural Component Definitions
Functional Entities
Adaptation: Adapts client signal into a form suitable for the server layer
Termination: Where information concerning the integrity and supervision of
adapted information may be generated and added, extracted and analyzed
Topological Entities
Trail: Provides an end-to-end connection, offering the means to check transport quality
Network Connection: Same scope as trail but without ensuring integrity
Link: Represents available transport capacity between subnetworks (static)
Link Connection: Transfers information transparently across a link
Subnetwork: Describes flexible connectivity
Subnetwork Connection: Transfers information across a subnetwork
Points
Termination Connection Point (TCP): Any binding involving a termination function
source or sink
Connection Point (CP): Any binding involving an adaptation source or sink
Access Point (AP): Delimits a layer network
54
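As an illustration only (the class names simply mirror the G.805 terms above; this is a sketch, not a normative model), the topological entities can be related in a few lines of Python:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subnetwork:
    """Flexible connectivity within a layer network."""
    name: str

@dataclass
class Link:
    """Available (static) transport capacity between two subnetworks."""
    a_end: Subnetwork
    z_end: Subnetwork
    capacity: int                       # e.g. number of link connections

@dataclass
class LinkConnection:
    """Transfers information transparently across a link."""
    link: Link

@dataclass
class SubnetworkConnection:
    """Transfers information across a subnetwork."""
    subnetwork: Subnetwork

@dataclass
class NetworkConnection:
    """Same scope as a trail, but without the integrity check."""
    segments: List[object] = field(default_factory=list)   # SNCs and link connections

@dataclass
class Trail:
    """End-to-end transfer plus the overhead needed to check its quality."""
    network_connection: NetworkConnection
    overhead: str = "trail termination overhead"
```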
Transport Functional Entities
Trail Termination
Trail Termination Source
Adds overhead (OH) to the input information (payload) to allow the integrity of the transfer to be monitored
Trail Termination Sink
Removes the overhead, outputs the remaining payload information, and determines the integrity of the transfer
The Characteristic Information for a trail is the payload plus the overhead
[Figure: The trail termination source and sink bound the trail; the network connection lies between them.]
Transport Functional Entities
Adaptation
Adaptation Source
Converts client layer
characteristic information to a
form suitable for transport over a
trail in the server layer network
This is termed Adapted
Information
Adaptation Sink
Converts the adapted
information from the server layer
network to the client layer
characteristic information
56
[Figure: Client layer CI is converted to client layer adapted information and carried over a trail in the server layer (server layer CI).]
G.805 Transport Network Constructs
Multi-layer Architecture: DS3 over STM-N
[Figure: The DS3 client signal enters via VC-3/DS3 adaptation at an access point (AP); the VC-3 trail is bounded by VC-3 trail terminations, and the VC-3 network connection (TCP to TCP) is composed of VC-3 subnetwork connections (SNCs) across VC-3 subnetworks and VC-3 link connections (between CPs); each VC-3 link connection is in turn supported by STM-1 MS/VC-3 adaptation over an STM-1 multiplex section trail, and so on down the layer stack.]
Key Observations
Each layer network has its own topology
NEs may have different neighbors in different layer
networks
NEs do not necessarily appear in all layer networks
NEs may perform different functions within a layer
network, or in different layer networks
Link connections in a client layer are created by
configuring trails and adaptation functions in a
server layer
Differences in server layer networks are
transparent to the client
58
Control Components
59
G.8080 Control Plane Constructs
Topological Entity Definitions
Subnetwork Point (SNP): Abstraction of a G.805 CP or TCP.
They are associated to form a connection.
Subnetwork Point Pool (SNPP): A set of subnetwork points
that are grouped for the purposes of routing
SNPP link: A link associated with SNPPs in different
subnetworks.
Routing Area: Defined by a set of subnetworks, the SNPP
links that interconnect them, and the SNPPs representing
the ends of the SNPP links exiting that routing area.
A routing area may contain smaller routing areas
interconnected by SNPP links.
The limit of subdivision results in a routing area that contains a
single subnetwork.
60
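A small Python sketch (illustrative names; the recursion follows the definitions above) of how SNPs, SNPPs, SNPP links and nested routing areas relate:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SNP:
    """Subnetwork Point: control plane abstraction of a G.805 CP or TCP."""
    snp_id: str

@dataclass
class SNPP:
    """Subnetwork Point Pool: SNPs grouped for routing purposes."""
    snpp_id: str
    snps: List[SNP] = field(default_factory=list)

@dataclass
class SNPPLink:
    """Link between SNPPs that belong to different subnetworks."""
    a_end: SNPP
    z_end: SNPP

@dataclass
class RoutingArea:
    """Set of subnetworks plus interconnecting SNPP links; may nest."""
    ra_id: str
    subnetworks: List[str] = field(default_factory=list)
    snpp_links: List[SNPPLink] = field(default_factory=list)
    children: List["RoutingArea"] = field(default_factory=list)

    def is_leaf(self) -> bool:
        # The limit of subdivision: a routing area containing a single subnetwork.
        return not self.children and len(self.subnetworks) == 1

# Nested routing areas, mirroring a multi-level hierarchy.
ra = RoutingArea("RA", children=[
    RoutingArea("RA.1", children=[RoutingArea("RA.1.1", subnetworks=["SN-a"]),
                                  RoutingArea("RA.1.2", subnetworks=["SN-b"])]),
    RoutingArea("RA.2", subnetworks=["SN-c"]),
])
```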
G.8080 Control Plane Constructs
Topological Entity Relationships
Relationship between the architectural entities in the Transport Plane and the Control Plane:
[Figure: An SNP is the control plane view of a G.805 CP or TCP; an SNC corresponds to a subnetwork connection, an SNP link connection to a link connection, and an SNPP link groups the SNPs at either end of a link; adaptation and trail termination bound the trail in the transport plane. SNP: Subnetwork Point; SNPP: SNP Pool.]
G.8080 Control Plane Constructs
Control plane architecture described in terms of components
and interfaces
Represent logical functions (abstract entities) rather than
physical implementations
The actual location/distribution of the components is not
constrained
To facilitate the construction of different scenarios,
leverages the Unified Modeling Language (UML)
Not all of the reference points (UNI, E-NNI) need to be
instantiated
A single instantiation of a G.8080 control plane may control
multiple layer networks with an explicit definition of the
interlayer interaction (including none)
62
Introduction to ASON Components
[Figure: ASON component diagram showing monitor, policy and config ports on the components DA, LRM, CCC/NCC, CC, RC, PC, TAP and TP.]
LRM - Link Resource Manager
CCC - Calling/Called Party Call Controller
NCC - Network Call Controller
CC - Connection Controller
RC - Routing Controller
PC - Protocol Controller
DA - Discovery Agent
TAP - Termination & Adaptation Performer
TP - Traffic Policing Component
Link Resource Manager
Responsible for control-plane
local link connection inventory
Resources provided through
configuration or discovery
Receives requests for resources
from Connection Controller
Provides information to Routing
to facilitate Topology
advertisements
64
Call Controller
Responsible for providing a
service across the network
Orchestrates components to
meet service requested
Different domains can have
different policies
Invoked by Management
Request or by Signaling
messages
Interacts with peer Call
Controllers via Protocol
Controller
65
Connection Controller
Responsible for establishing
connections across a domain
Requests Route to use from
Routing Controller
Requests specific local link
resources from LRM
Interacts with peer Connection
Controllers via Protocol
Controller
66
Routing Controller
Responsible for providing paths
between two points in the
network
Maintains topology view
Paths are calculated to meet
service constraints
Signal type
Diversity
Interacts with peer
Routing Controllers via Protocol
Controller
67
Protocol Controller(s)
Responsible for providing
protocol specific behavior
Can be separate per client
function, or a merged function
[Figure: A single Protocol Controller (PC) serving both the CCC/NCC and the CC.]
Example Component Interactions
[Figure: Example interaction for call setup: a Call Request arrives at the network NCC; call acceptance is coordinated with the far-end NCC (Call Accept); a Connection Request is issued to the Connection Controllers, which use the path computation function in the routing component and interact CC-to-CC across the network; a Connection Indication reports the established connection.]
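The interaction above can be read as the following rough control flow. This Python sketch is only a schematic of the component roles (class and method names are invented for illustration), not a protocol implementation:

```python
class RoutingController:
    def __init__(self, topology):
        self.topology = topology            # {node: [neighbours]}

    def compute_path(self, src, dst):
        # Trivial breadth-first search stands in for real path computation.
        frontier, seen = [[src]], {src}
        while frontier:
            path = frontier.pop(0)
            if path[-1] == dst:
                return path
            for nxt in self.topology.get(path[-1], []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None

class ConnectionController:
    def __init__(self, node):
        self.node = node

    def set_up(self, path):
        # Allocate local link resources (via the LRM) for this hop.
        return self.node in path

class NetworkCallController:
    def __init__(self, rc, ccs):
        self.rc, self.ccs = rc, ccs

    def call_request(self, src, dst):
        # Call-level admission/policy would be applied here.
        path = self.rc.compute_path(src, dst)
        if path is None:
            return "call rejected: no route"
        if all(cc.set_up(path) for cc in self.ccs if cc.node in path):
            return f"call accepted, connection on {path}"
        return "call rejected: no resources"

rc = RoutingController({"A": ["B"], "B": ["A", "C"], "C": ["B"]})
ncc = NetworkCallController(rc, [ConnectionController(n) for n in "ABC"])
print(ncc.call_request("A", "C"))
```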
Identifiers
70
Identifiers Names & Addresses
An identifier provides a set of characteristics for
an entity that makes it uniquely recognizable
Name: identifies an entity
Unique only if it is unique within the context, or namespace,
it is being used in
The same entity may have more than one name in different
namespaces
Address: identifies a position in a specific topology
Unique for the topology
Typically hierarchically composed; allows for address
summarization for locations that are close together
Addresses should reflect connectivity, not identity
71
Categories of Identifiers
Management plane identifiers
Transport plane identifiers (G.805)
Identifiers for transport resources that are used by the
control plane
Identifiers for Signaling & Routing Protocol Controllers (PCs)
Identifiers for locating PCs in the SCN
Identifiers to distinguish transport resources
from, and among, signaling and routing
control entities, and SCN addresses
72
Identifier Spaces
[Figure: Identifier spaces: management plane identifiers (e.g., CTP, TTP); DCN identifiers (MCN/SCN addresses); control plane identifiers (signaling and routing Protocol Controller IDs, the G.8080 CCC, NCC, CC and RC identifiers, UNI/E-NNI TRI, SNPP and SNP IDs, Node ID); data plane identifiers (e.g., G.805 CP, TCP).]
Relationship with GMPLS Architecture
74
Relationship with GMPLS Architecture
Models
Differing terminology and descriptive techniques
More classical MPLS terminology (e.g., LSP) as compared
to transport functional modeling terminology
Natural language architecture descriptions as compared to
formalized control plane component architecture
Peer model, also called the integrated model, corresponds to
ASON architecture with no UNI or E-NNI interfaces
instantiated
Assumes a community of users with mutual trust and
shared goals
No inherent policy or security boundaries
Routing and signaling protocols flow within the network
without any filtering or other constraints imposed
Relationship with GMPLS Architecture
Models
Overlay model, most closely corresponds to ASON
architecture with UNI (with no E-NNI interfaces instantiated)
Edge nodes are not aware of the topology of the core
nodes (core nodes act more as a closed system)
Core and edge nodes may have a routing protocol
interaction for exchange of reachability information to
other edge nodes
Augmented model, most closely corresponds to an ASON
architecture in which E-NNI interfaces have been
instantiated
Reflects the case of policy driven exchange of routing and
topology information between core and edge nodes
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
77
Signaling in Transport Networks
Essentially a Management Plane function
Distributed Connection Management
Signaling has existed for many years in telephony, ISDN, ATM,
and MPLS.
Signalling is extended for transport networks due to
Fixed granularities defined by the multiplexing hierarchy
Protection functions in the data plane
Separation of data plane from control and management planes
Addressing/Naming - Separation of spaces between data plane
and control plane
Connection centric rather than Protocol centric
Connection exists even if control plane ceases
78
Protocols and Architectures
Signaling capabilities are implemented in protocols,
whose pieces can be combined according to different
architectures.
Different SDOs contribute pieces and architectures.
[Figure: Control plane solutions are assembled from IETF RFCs, ITU-T Recommendations and OIF Implementation Agreements.]
Signaling in ASON Architecture
Architectural concepts for ASON signaling include:
Calls, connections, call/connection separation
Reference points, Addressing
Signaling protocols implemented at UNI, INNI, ENNI reference
points.
Call and Connection setup implemented in protocol with user/service
and network addressing.
[Figure: As before, an end-to-end CALL across Domain A and Domain B is composed of UNI, intra-domain and E-NNI call segments, each supported by CONNECTIONS.]
ASON Protocol-Neutral Signaling
ITU-T Rec. G.7713/Y.1704, Distributed Call and Connection
Management (DCM)
First version Approved Nov.01, several subsequent Amendments,
first major revision Consented Feb. 06
Protocol neutral specifications encompassing UNI, I-NNI and E-NNI,
supporting both soft-permanent and switched connections
Provides distributed call and connection management
requirements
Operations procedures, signaling network resilience to user and
network defects, signal flow exception handling
Restoration for single and multiple rerouting domains
Includes attribute specifications, message specifications, state
diagrams, Call and Connection Controller management
Basis for mapping to specific protocol solutions (G.7713.x series)
81
Protocol Specific Signaling
ITU-T Recommendations for ASON signaling protocol extensions Approved
March 03
Rec. G.7713.1, DCM Signaling Mechanism Using PNNI
Rec. G.7713.2, DCM Signaling Mechanism Using GMPLS RSVP-TE
Rec. G.7713.3, DCM Signaling Mechanism Using GMPLS CR-LDP
IETF base GMPLS signaling protocol RFCs Approved by IESG, published Jan. 03
RFC 3471, GMPLS Signaling Functional Description
RFC 3472, GMPLS CR-LDP Extensions
RFC 3473, GMPLS RSVP-TE Extensions
IETF Informational RFCs containing ASON GMPLS signaling protocol extensions
(aligned with G.7713.2 & G.7713.3) and IANA Code Point Assignments Approved
by IESG, published March 03
RFC 3474, IANA Assignments for GMPLS RSVP-TE Usage and Extensions for ASON
RFC 3475, IANA Assignments for GMPLS CR-LDP Usage and Extensions for ASON
RFC 3476, IANA Assignments for LDP, RSVP, and RSVP-TE Extensions for Optical UNI
Signaling
82
OIF User Network Interface
Signaling Specifications
Control Plane work driven by Carrier Working Group requirements
Architecture consistent with ITU-T ASON Recs. G.8080, G.7713
Signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T Recs.
G.7713.2/3
Specifies detailed usage of selected options in protocols
OIF UNI 1.0 Signaling Specification published Oct. 01
Defines the signaling protocols and mechanisms implemented by client and
transport network equipment from different vendors to invoke services
Feature focus on SDH/SONET VC-3/STS-1 and higher
OIF UNI1.0R2: UNI 1.0 Signaling Specification, Release 2 published Feb. 04
OIF-UNI-01.0-R2-Common - User Network Interface (UNI) 1.0 Signaling
Specification, Release 2: Common Part
OIF-UNI-01.0-R2-RSVP - RSVP Extensions for User Network Interface (UNI) 1.0
Signaling, Release 2
Updates UNI 1.0, but does not change UNI 1.0 functionality
Reflects subsequent developments in other standards bodies
Builds upon lessons learned from the OIF's multi-vendor interoperability event
conducted at OFC 2003
83
OIF User Network Interface
Signaling Specifications (cont)
OIF UNI 2.0
Incorporates architectural enhancements per ITU-T ASON Rec.
G.8080 and G.7713 evolution
Base features
Support of Ethernet services (almost complete)
Support of G.709 (complete)
Enhanced security (complete)
Call/connection separation (complete)
Support of sub-STS1 granularity (complete)
84
OIF External Network Node Interface
Signaling Specifications
Control Plane work driven by Carrier Working Group requirements
Architecture consistent with ITU-T ASON Recs. G.8080, G.7713,
G.7715, G.7715.1
Signaling specifications in IAs based upon IETF GMPLS RFCs and ITU-T
Recs. G.7713.2/3
Specifies detailed usage of selected options in protocols
OIF E-NNI 1.0, Intra-Carrier E-NNI Signaling IA, published Feb. 04
Enables end-to-end connection management by providing a uniform
way for carriers to interconnect network domains; feature support
consistent with UNI 1.0/1.0R2
OIF E-NNI 2.0, E-NNI Signaling IA, work in progress
Updated with E-NNI Signaling 1.0 Principal Ballot comments (from
Feb. 04)
Updated to reflect ITU-T Recommendation and IETF RFC progress
Includes updates based upon lessons learned from 2004 and 2005 OIF
World Interoperability Demonstrations
Includes features to support UNI 2.0
85
ITU-T/OIF and IETF
Signaling Protocol Differences
ITU-T G.7713.2 is consistent with RFC 3473 and the other base RFCs; OIF UNI 1.0 R2 and
OIF E-NNI 1.0 additionally specify detailed usage of selected options in the protocols
Both utilize signaling protocols defined in IETF GMPLS RFCs
Due to concerted effort, the signaling protocols are mostly the same!
Same RSVP-TE PATH/RESV processing
Same RSVP-TE refresh mechanism
No change to defined RSVP objects
No new messages
What are the differences between ITU-T/OIF and IETF ASON/GMPLS
signaling protocols?
Three new call-related objects, and some new C-Types associated with UNI and E-NNI
Need for usage of ResvTear/ResvErr (no change to procedures if used)
86
Signaling Protocol Interworking Scenario
Dynamic signalling and routing control over OTN/SONET/SDH network
Dynamic signalling for Ethernet services using ASON interlayer architecture
[Figure: Clients attach via OIF UNI to Provider A (an ITU-T/OIF domain); Provider A interconnects with Provider B via OIF E-NNI; protocol interworking is performed towards Provider C, an IETF GMPLS domain reached via an IETF UNI. OIF signalling is based on G.7713, G.7713.2, G.7713.3; OIF E-NNI routing is based on G.7715, G.7715.1; Ethernet services are based on G.8010, G.8011, MEF 10; the IETF side uses RFC 3472, 3473, 3946, 4203, 4139 and 4208.]
OIF ASON/GMPLS Interworking Project
OIF guideline document on Signaling Protocol Interworking of
ASON / GMPLS network domains
Document defines signaling protocol interworking methods
between network domains utilizing OIF/ITU-T and IETF GMPLS
Interworking of ASON UNI and E-NNI (based on GMPLS RSVP-TE with
ASON extensions, per G.7713.2 and OIF IAs) and IETF interfaces (based
on GMPLS RSVP-TE, per RFC 3473 and RFC 4208)
Detailed interworking scenarios and functions; e.g.,
Required translation, resolution or re-mapping of address and
identifier objects
List of messages or objects supported in one specification, but not the
other, along with the resultant behavior
List of objects which are examined or processed in one specification,
but are tunneled or opaque to the other
Describes pragmatic implementations of interoperable solutions
88
Interlayer Call Technology
Client makes an Ethernet call to destination
Network triggers SONET/SDH calls to match Ethernet service request
Control plane sets up Ethernet and SONET/SDH connections, and controls
GFP/VCAT
[Figure: The client's UNI-C issues an Ethernet call towards the destination; at the ingress UNI-N an interlayer call is invoked, triggering a SONET/SDH call between the OXCs; SONET/SDH connections with GFP/VCAT adaptation carry the Ethernet connection; the Ethernet call progresses across the network and completes at the far-end client.]
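A rough sketch of the triggering logic described above, in Python. The only numeric assumption is the approximate VC-3/STS-1 payload rate of 48.384 Mbit/s; everything else (function names, dictionary fields) is illustrative:

```python
import math

VC3_PAYLOAD_MBPS = 48.384          # approximate VC-3/STS-1 SPE payload rate (assumption)

def vcat_group_size(ethernet_rate_mbps: float) -> int:
    """Number of VC-3 members needed to carry the requested Ethernet rate."""
    return math.ceil(ethernet_rate_mbps / VC3_PAYLOAD_MBPS)

def ethernet_call(dst: str, rate_mbps: float):
    """Client-layer Ethernet call triggers a server-layer SONET/SDH call."""
    members = vcat_group_size(rate_mbps)
    sdh_call = {
        "destination": dst,
        "signal": f"VC-3-{members}v",   # virtually concatenated group
        "adaptation": "GFP",            # Ethernet frames mapped via GFP
    }
    # The control plane would now signal the SDH call/connections (UNI/E-NNI),
    # then report Ethernet call completion back to the client.
    return sdh_call

print(ethernet_call("client-Z", 1000.0))   # -> VC-3-21v for Gigabit Ethernet
```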
Interlayer Signaling
Interlayer architecture enables a business boundary between layers
Service separation between layers is at the interlayer NCC relationship
Note that VCAT is a separate layer
[Figure: ETH NCCs and ETH MAC clients sit above the layer boundary, VC-3 NCCs below it; NCC relationships exist both within a layer and across the interlayer boundary.]
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
91
Basics of IP Routing
IP routing protocol
Exchange of information between IP routers that allow them to
determine how to forward IP packets
There are different types of routing protocols
Distance Vector (RIP, IGRP)
Path Vector (BGP)
Link State (OSPF, IS-IS)
Link State Routing protocols in particular support distribution of
network topology as links and nodes
For IP, every router must have exactly the same network topology
information (links, nodes, and link weights)
Every router must run exactly the same path computation algorithm
Failure to ensure these last two requirements can result in routing
loops and black holes
92
Operation of Link State Routing Protocols
[Figure: A mesh of nodes A through J.]
Nodes establish routing adjacencies
Exchange local link information
Forward received link/node information
93
Routing Topology Database
Link State Advertisements (LSAs) and other advertisements form the Topology Database
Identify link by remote link endpoint
Carry link information, e.g., capacity, weight
Periodic or triggered updates, reliably flooded
Neighbors keep identical topology databases
Each node ends up with the full topology of the network
[Figure: Node A and Node B exchange database summaries (DB A Summary, DB B Summary), request missing information, and end up with identical topology databases.]
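A minimal sketch of the flooding idea (not any particular protocol's encoding; node names and link tuples are invented for the example): each node originates advertisements for its local links, re-floods anything new it receives, and all nodes converge on the same topology database.

```python
from collections import deque

def flood(nodes, adjacency, local_links):
    """nodes: node ids; adjacency: routing adjacencies between nodes;
    local_links: {node: {(node, neighbour, weight), ...}} locally known links."""
    database = {n: set(local_links[n]) for n in nodes}
    queue = deque((n, lsa) for n in nodes for lsa in local_links[n])
    while queue:
        origin, lsa = queue.popleft()
        for neighbour in adjacency[origin]:
            if lsa not in database[neighbour]:     # new information: store and re-flood
                database[neighbour].add(lsa)
                queue.append((neighbour, lsa))
    return database

adj = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
links = {"A": {("A", "B", 1), ("A", "C", 4)},
         "B": {("B", "A", 1), ("B", "C", 2)},
         "C": {("C", "A", 4), ("C", "B", 2)}}
db = flood(adj.keys(), adj, links)
assert db["A"] == db["B"] == db["C"]               # identical topology databases
```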
Shortest Path Calculation Determines
Packet Forwarding
Shortest Path Techniques
Links are
characterized by
a single link
weight
[Figure: A mesh of NEs with a weight on each link; the lowest total weight path determines packet forwarding.]
95
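Shortest-path selection over such a weighted topology is classically done with Dijkstra's algorithm. A compact Python version follows; the NE names and link weights are example values, not taken from the figure:

```python
import heapq

def dijkstra(topology, source):
    """topology: {node: [(neighbour, weight), ...]}; returns cost and previous-hop maps."""
    dist = {source: 0}
    prev = {}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                          # stale heap entry
        for neighbour, weight in topology.get(node, []):
            new_cost = cost + weight
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                prev[neighbour] = node
                heapq.heappush(heap, (new_cost, neighbour))
    return dist, prev

topology = {
    "NE1": [("NE2", 2), ("NE3", 2)],
    "NE2": [("NE1", 2), ("NE4", 2), ("NE5", 1)],
    "NE3": [("NE1", 2), ("NE6", 2)],
    "NE4": [("NE2", 2), ("NE7", 4)],
    "NE5": [("NE2", 1), ("NE7", 2)],
    "NE6": [("NE3", 2), ("NE7", 5)],
    "NE7": [("NE4", 4), ("NE5", 2), ("NE6", 5)],
}
dist, prev = dijkstra(topology, "NE1")
print(dist["NE7"])   # lowest total weight from NE1 to NE7
```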
How is this Useful for Transport Networks?
Basic Network Inventory
Routing Protocols provide network link inventory
Useful for operations and planning
Topology and Resource Utilization
Required for distributed connection path
selection/computation
Disaster Recovery
Want timely information of what's available in the
network (nodes, links, spare capacity, etc.)
96
Extended for Non-IP Networks in IETF
GMPLS
New Link and Router advertisements in RFC 3630,
4202/3
Kept separate from IP link information to avoid
confusion
Opaque LSAs kept out of the IP Topology DB
Link Switching Type and Metric
Non-IP types, e.g., TDM, WDM
Link Characteristics, e.g., Protection
Linear (1+1, 1:1, 1:N), Ring, etc...
Diverse Routing Information
Shared Risk Link Groups (SRLGs)
Other non-IP link characteristics
97
ASON Routing Specifications and Activities
Protocol-neutral routing fundamentals (G.7715, G.7715.1,
G.7715.2)
Function of routing and routing protocols in ASON
ASON link state routing
New routing protocol requirements for ASON
Protocol-specific routing (OIF)
OSPF extensions based on ASON routing requirements
Application of topology abstraction
Future work
PCE
98
ASON Routing
Routing Components
[Figure: ASON routing components: the Routing Controller (RC), Protocol Controller (PC) and Link Resource Manager (LRM), shown with their monitor, policy and config ports alongside the CCC/NCC and Connection Controller (CC).]
Primary function for ASON
transport routing is to
provide path
computation to
Connection
Management (Control
Plane).
Key modules are shown in
light blue:
Path computation and
associated distribution
of topology information
is done by the Routing
Controller (RC)
Conversion into a
specific routing protocol
and associated protocol
functions (e.g., state
machines) are done by
the Protocol Controller
(PC)
ASON Routing
IP Routing and Transport Network Routing
[Figure: IP routing and forwarding versus transport routing and forwarding. IP: OSPF peers exchange LSAs into a topology database, a shortest path algorithm (Dijkstra) populates the IP forwarding table (IP address to next hop), and each packet is forwarded hop by hop in the data plane. Transport: peer Routing Controllers using a G.7715-compliant protocol build an L1 bearer topology, a source route algorithm feeds the signaling, and cross-connects in the data plane carry the SDH path.]
Data Plane in Transport Networks and classic IP Networks differ
For classic IP, every packet is forwarded based on address translation
For label switching (generalized to TDM or WDM), once a cross connection is
made, data flows without needing further path computation
100
ASON Routing
Some differences between IP and transport network routing:
Distribution of routing protocol entities - classic IP routing: always distributed; transport routing: domain-specific, may be distributed or centralized
Path computation - classic IP routing: identical path computation algorithm at each node; transport routing: may be different path computation algorithms at different nodes
Forwarding process - classic IP routing: path computed for each packet at each node; transport routing: path computed only at connection setup, usually only at the source
Forwarding dependency - classic IP routing: data cannot be forwarded without a stable routing database; transport routing: data can be forwarded on existing connections, but new connections cannot be created
Looping - classic IP routing: a potential problem any time the routing table changes; transport routing: prevented by strict source routing
ASON Routing
Specifications
ITU-T Rec. G.7715, ASON Routing, Approved in July 02
Applicable after network has been subdivided into Routing Areas, and necessary
network resources accordingly assigned
Focus upon inter-domain routing supporting optical transport networking
application
Provides architecture, requirements, high-level attributes, messages, and state
diagrams from a protocol-neutral perspective
Protocol neutral routing requirements include support for, e.g.,
Hierarchically contained Routing Areas
Non-congruent routing adjacency topology and transport network topology
Independence from intra-domain protocol and control distribution choices
Policy constraints on information exchange (e.g., imposed at E-NNI)
Architectural evolution (levels, aggregation, segmentation)
Multiple links between nodes, allowing for link and node diversity.
Encompasses different classes of protocols (e.g., link-state, path vector)
Facilitates comparison of specific inter-domain routing protocol proposals
against quantifiable requirements
102
Link State Routing
Objective
Disseminate and update a common network topology view
across all nodes in a domain
Basic Link State Routing Functions:
Hello/Link Adjacency Procedure
Database Synchronization Procedure
Periodic or Event-driven Link Status Updates
Link State Routing Protocols
OSPF
IS-IS
PNNI
103
ASON Routing
Architecture & Requirements Link State
ITU-T Rec. G.7715.1/Y1706.1, ASON Routing Architecture and
Requirements for Link State Protocols, Approved Feb. 04
Based upon ASON foundation Recommendations (G.8080, G.7715)
Further architectural analysis for link state routing
Encompasses exchange of routing information between
hierarchical routing levels, including visibility re reachability
and topology
Node and Link routing attributes
Path computation and routing are impacted by layer specific, layer
independent, and client/server adaptation information elements
Routing protocol must be applicable to any transport layer
network, and representation of routing attributes should not
preclude their applicability to other transport network layers
Layer specific characteristics (per link attribute)
104
G.7715.1 Link Characteristics
Layer-specific characteristics and their classification (Capability / Usage):
Signal Type: Mandatory / Optional
Link Weight: Mandatory / Optional
Resource Class: Mandatory / Optional
Local Connection Type: Mandatory / Optional
Link Capacity: Mandatory / Optional
Link Availability: Optional / Optional
Diversity Support: Optional / Optional
Local Client Adaptations Supported: Optional / Optional
Comparison with IP Link State Routing
Protocols
ASON Link State Routing relies on basic link state functions
Adjacency
Database synchronization
Periodic or event-driven advertisements
Differences
Control plane and data plane topology may be different
Automated discovery of routing peers cannot be done based on SCN
topology, since data plane neighbors may not be neighbors in the SCN
Optical routing advertisements are for Traffic Engineering
rather than IP routing table
Optical link state advertisements are marked as opaque and not
used for IP routing
Instead a separate transport topology database is created
106
Separation of Data and Control Plane
Pre-ASON, routing protocols have assumed a Label
Switching Router
Single node with both data and control plane functions
Single source for data, signaling and routing messages
ASON explicitly separates these
Data plane entities can be separate from control plane
Routing entity can be separate from signaling entity
Routing Implication
Must be able to separately identify the data plane entity
(link or node) from the routing controller
107
Examples of different Distributions
Possible distribution of control
Fully distributed (1:1) each network element also
participates in the control plane
Fully centralized (1:n) only one network element or
proxy participates in the control plane
Variable (m:n) small number of network elements or
proxy servers participate in the control plane
Some potential applications
Proxy for a legacy (management controlled) domain
Centralization of interoperability/E-NNI translation
functions for ease of administration
108
Client Reachability Advertisement
Routing Protocols have assumed a peer model where
client is a full peer to network elements
Clients are advertised as IP address reachability
Access links are part of the TE topology
ASON explicitly separates client and network address
spaces
Clients are identified by a separate namespace
Routing to clients needs to be supported by a separate
mechanism
Client reachability advertisement
Directory type service
109
Layering in the Data Plane
Pre-ASON, optical routing specifications gave a single
parameter for link capacity
Assumes that any signal type can use the link, subject to
pure bandwidth availability
Does not take into account layering issues
ASON requires routing to advertise per signal type
connection availability
Takes into account possible limitations (link supports
some signal types but not others)
Takes into account blocking issues (smaller signal type can
block larger signal type due to positioning in the frame)
110
Hierarchy in the Routing Architecture
Pre-ASON, routing protocols have had limited hierarchy
support
OSPF and IS-IS have limited levels (see next slide for OSPF)
PNNI has richer hierarchy up to 104 theoretical levels
ASON requires flexible hierarchy in the routing
architecture
To match transport network organization
For greater scalability
For greater policy control
Protocol extensions to support hierarchy are needed
111
Routing Hierarchy compared to OSPF
Existing routing protocols need extension to meet
ASON requirements. E.g., for OSPF,
Area boundaries fall within a router (vs. IS-IS area
boundaries, which fall on links so router belongs to a single RA)
Needs extensions for more than two hierarchical routing
levels
Requires operator intervention for re-definition of areas
Transport network architecture (G.805) allows more
flexible partitioning and multiple levels
112
ASON Routing Hierarchy
[Figure: Routing area RA at Level 1 contains RA.1, RA.2 and RA.3 at Level 2; RA.1 in turn contains RA.1.1 and RA.1.2, and RA.2 contains RA.2.1 and RA.2.2 at Level 3.]
In ASON, multiple levels of hierarchy are supported
Domains at lower levels are encompassed by higher levels
Domains are organized as part of carrier administration
113
Protocol Extension Work in Standards
ITU-T
Has defined requirements but not protocol at this point
IETF
Has begun work through analysis of ASON requirements and
evaluation of existing routing protocols
Some initial proposals for extensions are in progress
Will need review through OSPF and IS-IS groups
OIF
Has developed and tested prototype extensions to meet
ASON requirements
Working with IETF/ITU-T to extend the standards
114
OIF External Network Node Interface
Routing Specifications
E-NNI Routing 1.0, Intra-Carrier E-NNI Routing using
OSPF, approved by Q1/07 Principal Ballot
Consistent with ITU-T ASON Recs. G.8080, G.7715 and
G.7715.1 architecture and requirements
Prototypes an instantiation of a routing protocol addressing
ASON routing requirements
Intended to enable interoperable multi-domain SPC
and SC services similar to those implemented for the
OIF Worldwide Interoperability Demonstrations in 2004
and 2005
Documents routing protocol requirements supporting the E-NNI 1.0
interface, and prototype encodings used in OIF interop testing
Will support services provided by OIF UNI 1.0R2, UNI2.0 and
E-NNI Signaling 1.0
115
OIF E-NNI Prototype Extensions
Separation of Routing Controller and Node Identifier
Routing Controller is the control plane entity, Node ID identifies
the transport plane entity
Enabled by the addition of Local/Remote Node ID parameters in
the link status update
Identifies the link ends (data plane topology) separate from the
advertising entity (control plane topology)
Advertisement of TNA
TNA is the OIF's terminology for client address
Reachability to TNA is advertised through OSPF prototype
extension
This supports a separate client namespace, in theory could be
non-IPv4
116
OIF E-NNI Prototype Extensions
Link Bandwidth
OIF extension specifies available connections for each
signal type (e.g., STS-1/VC-3, STS-3c/VC-4, etc.)
More detailed and accurate than a simple measure of
total available bandwidth for the link
Routing Hierarchy
Currently not implemented but under study
Leaking of information up and down levels and protection
from looping are key elements
117
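To illustrate what these prototype extensions convey, the sketch below represents the advertised information as plain Python dictionaries; this is an informal model, not the actual OSPF TLV encodings defined in the IA, and the identifier values are invented:

```python
# Link advertisement: the advertising Routing Controller (control plane entity)
# is identified separately from the transport plane nodes at the link ends.
link_advertisement = {
    "advertising_rc_id": "192.0.2.1",          # Routing Controller (control plane)
    "local_node_id": "node-17",                # transport plane link end
    "remote_node_id": "node-42",               # transport plane far end
    # Available connections per signal type, rather than one bandwidth figure:
    "unreserved_connections": {"STS-1/VC-3": 24, "STS-3c/VC-4": 6, "STS-48c/VC-4-16c": 1},
    "srlg": [101, 202],                        # diversity information (SRLGs)
}

# TNA (client address) reachability advertisement: a separate client namespace.
tna_advertisement = {
    "advertising_rc_id": "192.0.2.1",
    "tna": "client-net-7/port-3",              # need not be an IPv4 address
}

def can_carry(link, signal_type):
    """Connection admission check against the per-signal-type inventory."""
    return link["unreserved_connections"].get(signal_type, 0) > 0

print(can_carry(link_advertisement, "STS-3c/VC-4"))   # True
```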
E-NNI Topology Advertisement
[Figure: Domains A, B and C within a carrier network, each with its own Routing Controller (RC); client devices attach to the NEs via OIF UNI; the RCs communicate over the SCN across the E-NNI boundaries.]
Each domain's Routing Controller (RC) advertises to its peers
across the E-NNI boundary
An abstracted topology can be advertised
118
Routing Domain Abstraction Models
Abstraction must improve scalability, yet
provide more than just reachability information
[Figure: A real domain topology and the three abstract topologies produced by the models below.]
Abstraction Models
1. Abstract node domain collapsed to a single
node; most scalable, least accurate
2. Abstract link series of interconnected edge
nodes; less scalable, more accurate
3. Pseudo-node variation of abstract link that
also shows potential server layer connectivity
119
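An informal sketch of the first two abstraction models (the domain and border-node names are invented for the example): collapsing a domain to a single abstract node versus exposing its border nodes joined by abstract links whose costs summarize the hidden internal topology.

```python
def abstract_node(domain_name, border_nodes):
    """Model 1: the whole domain collapses to one node; only reachability remains."""
    return {"nodes": [domain_name],
            "external_attachments": {b: domain_name for b in border_nodes}}

def abstract_link(domain_name, border_nodes, internal_cost):
    """Model 2: border nodes are kept and joined pairwise by abstract links
    whose cost summarizes the (hidden) internal topology."""
    links = []
    for i, a in enumerate(border_nodes):
        for b in border_nodes[i + 1:]:
            links.append((a, b, internal_cost(a, b)))
    return {"nodes": list(border_nodes), "links": links}

borders = ["A-east", "A-west", "A-south"]
print(abstract_node("Domain-A", borders))
print(abstract_link("Domain-A", borders, internal_cost=lambda a, b: 10))
```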
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
120
Motivation of Control Plane Management
To achieve and sustain automatic call & connection service
management (service management), there are many things that
need to be managed
Control plane (Cp) entity management
Initialization, configuration, policy setting
Ongoing monitoring, maintenance, recovery
Transport plane (Tp) management for ASON
ASON functionality Installation & configuration
ASON resource provisioning (e.g., names) & hand-over (from Mp to Cp)
Ongoing monitoring, maintenance, & recovery
Control plane (Cp) & Management plane (Mp) ongoing
interaction for
Connection management by Mp as needed
Centralized routing (i.e., Mp calculated)
Call performance measurement
Management of call admission control
Transfer of call/connection between Mp and Cp
Challenges of Control Plane Management
Ensure consistent management policy across a multi-carrier environment, e.g.,
Network wide consistency for Cp configuration, such as
time-out setting for timers
Balance between delegation (to Cp) and ultimate control
(by Mp) (i.e., centralized vs. distributed) e.g.,
Avoid duplication of data & process
Maintain consistency between Mp and Cp database
Restore consistency without affecting active services
Smooth migration from Mp-driven service management
(Call/Connection mgmt) to hybrid or Cp-driven SM
Faults correlation and root cause analysis across Cp and
Tp in multi-domain multi-layer environment
122
Scope of Cp Management & Interactions
[Figure: Directs/Supports/Reports relationships among the management plane, the control plane, the data communication network, and the transport plane.]
Transport Resources in Mp and Cp View
Relationship between the architectural entities in the Transport plane, Management plane and Control plane:
[Figure: Transport plane entities (adaptation, trail termination, trail, link connection, subnetwork, CP: connection point, TCP: termination connection point) are seen in the management plane view as CTPs (Connection Termination Points) and TTPs (Trail Termination Points), and in the control plane view as SNPs (Subnetwork Points), SNPPs (SNP Pools), SNP link connections, SNPP links and SNCs.]
Standards for Control Plane Management
[Figure: Standards applicable to control plane management: TMF MTNM v3.5 on the NMS-EMS interface, ITU-T G.7718/G.7718.1 and OIF-CDR-01.0 for management of the control plane, GR-1110-CORE, and G.8080, G.7710 and M.3010 for the network elements spanning the control and transport planes.]
Architecture & Requirements
Rec. G.7718/Y.1709, Framework for ASON Management,
Approved Feb. 05
Deemed essential for supporting viable network deployments
Addresses the management aspects of the ASON control
plane and the interactions between the OSS (NMS, EMS)
and the ASON control plane
Provides architecture and requirements context
Management perspective on control plane components and
constructs, control-related services, domain, transport resources,
policy
Management of restoration and protection
ASON management requirements
FCAPS
Heavy input from Service providers
126
G.7718 ASON Management Requirements
Fundamental requirements:
Impact of Mp failure, Mp-Cp interface failure, and Cp failure
Configuration management
Control plane resources
Identifiers, addresses, protocol parameters (signaling & routing)
Routing areas
RA hierarchies, (dis) aggregation, assignment of Cp resources
Transport resources (in control plane view)
(de)allocation, names and identifiers, discovery, topology, resource and capacity inventory
Call and connection
setup(SPC)/modification/release
Policy
Fault management
Control plane components, resource/connection/call (service),
Performance management
Control plane components
Accounting management
Usage and call details record
TMF MTNM v3.5 Control Plane Management
MTNM for Multi-technology management
TMF 513: Requirements & Use cases
TMF 608: Protocol-neutral model (UML)
TMF 814: CORBA solution
TMF 814A: Implementation Statement Templates and Guideline
Version 3.5 addition: Control plane & VLAN management
Key modeling approaches
Re-use the v3.0 Multi-layer approach for
Routing area (ML-RA), SNPP (ML-SNPP), SNPP Link (ML-SNPP Link),
Re-use of the Subnetwork connection (SNC) object for
Cp Connection
Scope:
Limited to retrieval of Control Plane resources, retrieval of network
topology and end-to-end Call/Connection management (provisioning
of SPCs)
OIF-CDR-01.0 for OIF UNI 1.0 Billing
OIF-CDR-01.0, Call Detail Records for OIF UNI 1.0 Billing, Approved
April 02
Implementation Agreement (IA) for the usage measurement
functions that an Optical Switching System will need to perform in
order to enable carriers to bill for OIF UNI 1.0 optical connections
using their legacy billing systems.
Usage measurement functions: Automatic Message Accounting
(AMA)
Data generation: UNI 1.0 CDR Information Content, as generic as
possible
Data formatting (resulted in CDR)
Billing AMA Format (BAF)
ASCII CDR (ACDR) Format
XML CDR (XCDR) Format
Data transmitting (of CDR): Typically via FTP between management
system and billing system
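As a rough illustration of the data-formatting step, the sketch below renders one call record in an ASCII CDR style; the field names are assumptions for illustration only, and the normative record content is defined in OIF-CDR-01.0.

```python
# Illustrative sketch only: one way to render a UNI 1.0 call record in the
# ASCII CDR (ACDR) style. Field names are assumptions, not from the IA.
import csv
import io

def to_acdr(record: dict) -> str:
    """Serialize one call detail record as a comma-separated ASCII line."""
    fields = ["call_id", "source_tna", "dest_tna", "bandwidth",
              "setup_time", "release_time"]
    buf = io.StringIO()
    csv.writer(buf).writerow([record[f] for f in fields])
    return buf.getvalue().strip()

print(to_acdr({
    "call_id": "0001", "source_tna": "TNA-A", "dest_tna": "TNA-Z",
    "bandwidth": "STS-48c",
    "setup_time": "2002-04-01T10:00:00Z", "release_time": "2002-04-01T11:30:00Z",
}))
# The resulting records would typically be pushed to the billing system via FTP.
```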
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
Interoperability Demonstrations
Objectives / Goals
OIF Perspective
Member evaluation, validation, proof of concept of current OIF draft
specifications & IA for interoperable network solutions
Feedback assessment from multi-vendor testing environment to
standardization/specification work
Carrier Perspective
Early adoption and evaluation of interoperability testing results
demonstrated in a multi-vendor environment.
Feedback to vendor community on early implementations and
integrations based on practical experiences and lessons learned
Industry Perspective
Showcase OIF contributions, build market awareness of emerging
technologies, services and networking solutions.
Public forums (Optical conference & exhibitions) utilized
Interoperability Demos Role in Standards to Deployment
OIF supports close relation of standardization and R&D and early implementations
OIF performs / organizes the next major step towards implementation, interoperability evaluations of prototype implementations:
Proof of concept
Feedback to standardization
Fosters follow-up activities
Figure: flow from Standards & Specifications (OIF, ITU-T, IETF) via Interoperability tests & demonstrations (OIF) and Field trials at carrier sites to Deployment, with OIF feedback from each stage back to the standards work.
Ethernet Switched Connection Characteristics
Figure: An Ethernet client attaches via OIF UNI (Ethernet UNI-C / UNI-N) to the Carrier A domain; the Carrier A, Carrier B, and Carrier C domains are interconnected via OIF E-NNI; a second Ethernet client attaches via OIF UNI (SDH UNI-N) to the Carrier C domain. The Ethernet layer call/connection flow runs end to end between the UNI-Cs, while the SONET/SDH layer call/connection flow runs between the carrier network elements.
OIF UNI 2.0 support for Ethernet clients
OIF UNI 2.0 call control based on ASON specifications
Transport devices integrate multi-layer functions at control plane and
data plane level
Ethernet Private Line Service (E-Line Service Type) triggered by OIF UNI
2.0 connection requests and provisioned by E-NNI
2005 Worldwide Interoperability Demo
7 participating carrier labs around the world: China, Japan,
France, Germany, Italy and USA
13 participating vendors
First multi-layer & multi-domain call/connection demonstration
Orchestrates actions between client and server layers
Integration of control plane (UNI 2.0 Ethernet, E-NNI) and NG-SONET/SDH (GFP-F/VCAT/LCAS) functions
Showing on-demand Ethernet Private Line service through the creation of
end-to-end calls and connections across multiple network layers, network
domains, multiple vendors' equipment, and multiple carrier labs
OIF IAs based on ITU-T ASON standards including:
Requirements and Architecture (G.8080, G.7713, G.7715, G.7715.1)
Signaling protocols (G.7713.2)
World Interoperability Demonstration public observation:
SUPERCOMM 2005 (June 7-9, 2005, Chicago, IL)
Interoperability Demonstrations
Global Test Network Topology
Figure: Global test network spanning the USA, Europe, and Asia. Participating carrier labs: AT&T, Verizon, Deutsche Telekom, France Telecom, Telecom Italia, NTT, and China Telecom. Participating vendors across the labs: Alcatel, Avici, Ciena, Cisco, Fujitsu, Huawei, Lambda OS, Lucent, Mahi, Marconi, Nortel, Sycamore, and Tellabs.
OIF Interoperability Labs in 2005
Lannion, France
Waltham, MA-USA
Middletown, NJ-USA
Beijing, China
Berlin, Germany
Torino, Italy
Musashino, Japan
SuperComm 2005 booth
2007 Worldwide Interoperability Demo
On-Demand Ethernet Services over multi-domain
transport networks
7 participating carrier labs around the world: China,
Japan, France, Germany, Italy and USA
Public demonstration at ECOC2007, Sept. 16-20, 2007:
ECOC2007 Workshop on Global Interoperability in Multi-Domain and Multi-Layer ASON/GMPLS Networks
ECOC2007 exhibition: Live demonstration of the OIF Worldwide Interoperability Test results
ECOC2007 accompanying program: Lab tours to DT premises, demonstrating live the ASON/GMPLS functions of the OIF Worldwide Test Network and the MUPBED European-scale network, and enabling hands-on access to the real telecom world for the visitors
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
Application 1: CP for Bandwidth
Defragmentation
Scenario
After running the NG-SONET/SDH network for a while, available time slots
over SONET/SDH links become fragmented (i.e., many discontinuous, small-
size clusters of bandwidth). Network Operations can invoke the control plane
on a regular basis to (1) identify the clusters for each span in the network,
and (2) run a defragmentation algorithm to pack in-use time slots into a
contiguous space.
Core Technologies
NG-SONET/SDH Defragmentation over a single vendor domain
OTN Control Plane (Auto-Discovery & Self-Inventory)
OTN Mgmt Plane (EMS/NMS update)
Figure: Site 1, Site 2, and Site 3 interconnected by SONET paths 1-2, 1-3, and 2-3.
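The packing step can be illustrated with a minimal sketch (illustrative only, not an OIF- or ITU-T-specified algorithm): given the in-use time slots on a span, reassign them to the lowest-numbered positions so that the remaining free capacity becomes one contiguous block.

```python
# Minimal per-span packing sketch (hypothetical data model, not a
# specified algorithm).
def defragment_span(total_slots, in_use):
    """Return a retuning map {old_slot: new_slot} that packs the in-use
    time slots into the lowest-numbered positions, leaving the span's
    free capacity as one contiguous block."""
    assert all(0 <= s < total_slots for s in in_use)
    moves, target = {}, 0
    for old_slot in sorted(in_use):
        if old_slot != target:
            moves[old_slot] = target   # this connection must be re-timed/moved
        target += 1
    return moves

# Example: 8 slots on a span, connections currently on slots 1, 4 and 6
print(defragment_span(8, {1, 4, 6}))   # -> {1: 0, 4: 1, 6: 2}
```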
Application 2: A-Z Provisioning via EMS/NMS
and Control Plane
Scenario
NMS/EMS receives a service order for SONET STS /SDH VC from an enterprise
customer that has three sites in the region. The order specifies points A & Z
(e.g., from Site 1 to Site 2), payload rate, transparency, protection class, and
other constraints.
The NMS/EMS issues a command to the source node (attached to Site 1), which
then triggers the control plane to set up the SONET/SDH path to Site 3 according
to the requirements specified in the order. Similarly, when the customer
terminates the service, NMS/EMS will invoke the control plane to tear down the
path.
Core Technologies
OTN Control Plane (E-NNI, I-NNI)
OTN Mgmt Plane (EMS/NMS SPC support)
Figure: Site 1 (point A), Site 2, and Site 3 interconnected by SONET paths 1-2, 1-3, and 2-3.
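A minimal sketch of the kind of A-Z order the NMS/EMS might hand to the source node is shown below; the field and method names are illustrative assumptions, not the TMF MTNM model.

```python
# Sketch of an A-Z provisioning order handed from the NMS/EMS to the source
# node, which then signals the soft permanent connection (SPC) through the
# control plane. Names are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class SpcOrder:
    a_end: str         # point A, e.g. "Site1/STM-16/port-3"
    z_end: str         # point Z
    payload_rate: str  # e.g. "VC-4" or "STS-3c"
    transparency: str  # e.g. "path" or "line"
    protection: str    # e.g. "unprotected" or "1+1"

def provision_spc(source_node, order: SpcOrder) -> str:
    """Issue the order to the source node; the control plane sets up the
    connection end to end and returns a call identifier for later release."""
    return source_node.setup_call(order.a_end, order.z_end,
                                  rate=order.payload_rate,
                                  transparency=order.transparency,
                                  protection=order.protection)
```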
Application 3: Bandwidth on Demand (BoD)
in Transport Networks
Scenario
An enterprise customer with three sites subscribes to BoD
SONET/SDH service with a range of SONET/SDH payload rates. The
service plan applies to all SONET/SDH connections between the
sites. Based on business needs, the customer uses UNI signaling to
dial-up the service between any two sites, sends information over
the SONET/SDH path for an unspecified period of time, then hangs
up.
Core Technologies
NG-SONET/SDH GFP/VCAT
OTN Control Plane (O-UNI, E-NNI, and I-NNI)
OTN Mgmt Plane (EMS/NMS SC support, TMF814)
Two Sub-Cases
Case 3a: With NG-SONET/SDH Virtual Concatenation (VCAT)
Case 3b: Without VCAT
Figure: Site 1, Site 2, and Site 3 interconnected by SONET paths 1-2, 1-3, and 2-3; the customer sites attach via UNI.
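A minimal sketch of the client-side dial-up pattern follows, assuming a hypothetical wrapper around the UNI-C signaling agent (the OIF UNI message encoding itself is not shown).

```python
# Minimal sketch of a client-initiated BoD call over the UNI; names and
# call shapes are assumptions, not the OIF UNI 2.0 signaling encoding.
from contextlib import contextmanager

@contextmanager
def bod_call(uni_client, src_tna, dst_tna, rate):
    """Dial up a switched connection for the duration of the 'with' block,
    then hang up. 'uni_client' wraps the UNI-C signaling agent."""
    call_id = uni_client.call_setup(src=src_tna, dst=dst_tna, bandwidth=rate)
    try:
        yield call_id          # customer transfers data while the call is up
    finally:
        uni_client.call_release(call_id)   # hang up when finished

# Usage with a hypothetical client object:
# with bod_call(uni, "TNA-Site1", "TNA-Site3", "STS-3c") as call:
#     transfer_data(call)
```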
Application 3 (cont): Scheduled BoD
Customers with highly predictable traffic profile
Service bandwidth provisioned according to user-provided time-of-day and/or day-of-week schedules, with the capability to make
bandwidth changes as needed.
Automatic tailoring service bandwidth to traffic profile
Measured Bandwidth Usage of a SAN Application
Figure: measured bandwidth (Mb/s, 0-200 scale) over one week, 19 Mar (Th) through 25 Mar (W), varying roughly between 100 Mb/s and 200 Mb/s. Source: EMC
Application 4: GbE Service with Bandwidth
Schedule
Scenario
An enterprise customer with three sites subscribes to a GbE service with
customized bandwidth schedules for weekdays and weekend/holidays as shown
below.
Weekdays
Schedule\Path   Path 1-2   Path 1-3   Path 2-3
8am-5pm         200M       100M       50M
6pm-11pm        300M       200M       100M
12am-7am        50M        500M       500M

Weekend
Schedule\Path   Path 1-2   Path 1-3   Path 2-3
8am-5pm         50M        50M        50M
6pm-11pm        50M        50M        50M
12am-7am        10M        10M        10M
Core Technologies
NG-SONET/SDH GFP/VCAT/LCAS
OTN Control Plane (E-NNI, I-NNI)
OTN Mgmt Plane (EMS/NMS w/ scheduling support)
Figure: Site 1, Site 2, and Site 3 interconnected by GbE paths 1-2, 1-3, and 2-3.
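A minimal sketch of how such a schedule could drive bandwidth changes follows, assuming a hypothetical resize hook that maps to a VCAT/LCAS adjustment; the hour windows approximate the weekday table above.

```python
# Sketch of applying the weekday schedule above to the three GbE paths.
# 'resize_path' stands for a hypothetical hook that would trigger a hitless
# VCAT/LCAS bandwidth adjustment via the management or control plane.
WEEKDAY_SCHEDULE = {                 # path -> [(start_hour, end_hour, Mb/s)]
    "Path 1-2": [(8, 17, 200), (18, 23, 300), (0, 7, 50)],
    "Path 1-3": [(8, 17, 100), (18, 23, 200), (0, 7, 500)],
    "Path 2-3": [(8, 17, 50),  (18, 23, 100), (0, 7, 500)],
}

def scheduled_rate(schedule, path, hour):
    """Return the subscribed rate in Mb/s for 'path' at 'hour' (0-23)."""
    for start, end, rate in schedule[path]:
        if start <= hour <= end:
            return rate
    return None                      # hour falls between scheduled windows

def apply_schedule(schedule, hour, resize_path):
    for path in schedule:
        rate = scheduled_rate(schedule, path, hour)
        if rate is not None:
            resize_path(path, rate)  # e.g. add/remove VCAT members via LCAS

# Example: at 19:00 on a weekday, Path 1-2 should be running at 300 Mb/s
print(scheduled_rate(WEEKDAY_SCHEDULE, "Path 1-2", 19))   # -> 300
```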
Application 5: BoD - GbE Service
Scenario
An enterprise customer with three sites subscribes to a BoD GbE
service with a specified peak rate (P). The service plan applies to
all GbE connections between the sites. Based on business needs,
the customer uses UNI signaling to dial-up the service between
any two sites, sends information at rates <= P for a certain period
of time, then hangs up.
Core Technologies
OTN Control Plane (O-UNI, E-NNI, and I-NNI)
OTN Mgmt Plane (EMS/NMS support)
Figure: Site 1, Site 2, and Site 3 interconnected by GbE paths 1-2, 1-3, and 2-3; the customer sites attach via UNI.
Application 6: OSS Simplification
Figure: Traditional OSS vs. NG-OSS.
Traditional OSS: Service Activation, Service Assurance, and Accounting & Security functions all reside in the OSS, covering Customer, Net Topology, Fault Correlations, Billing, Assignments, Path Computation, Exceptions, Admission Control, Facility, Fault Isolation, Resource Access Cntl, Parameter Mapping, Equipment, Testing, CoS Assign., Srvc Circuit, Protection & Restoration, and Inventory, on top of the Transport Network.
NG-OSS: The OSS retains Inventory, Customer, Assignments, Facility, Fault Correlations, Billing, Exceptions, Admission Control, and Resource Access Cntl, and takes passive roles for all control plane supported functions. The NG-OTN Control Plane takes over Net Topology, Fault Isolation, Path Computation, Exceptions, Testing, Equipment, Parameter Mapping, Srvc Circuit, CoS Assign., and Protection & Restoration, on top of the Transport Network.
Application 7: Control Plane for
Auto-Discovery and Self-Inventory
Scenario
Upon start up of a CP-equipped network, all NEs will discover
each other, identify resources, and create a high quality
network database containing the complete topological view
of the network and a highly accurate resource map.
During network operation, the database will be instantly
updated to reflect any change of network state, such as
resource usage/addition, path setup/tear-down, etc.
A high quality network database is essential to the high
quality OAM&P required for NG-OTN
Core Technologies
OTN Control Plane (I-NNI, E-NNI)
OTN Mgmt Plane (EMS/OSS update)
Figure: Site 1, Site 2, and Site 3 interconnected by SONET paths 1-2, 1-3, and 2-3.
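A minimal sketch of the self-inventory idea follows, assuming a simplified link database updated by discovery and connection events (real ASON discovery and routing follow G.7714 and G.7715; the data model here is illustrative only).

```python
# Minimal sketch of a self-inventory database kept up to date by discovery
# and connection events; the data model is an assumption for illustration.
class TopologyDb:
    def __init__(self):
        self.links = {}   # (local_ne, remote_ne) -> {"capacity": n, "in_use": n}

    def on_discovery(self, local_ne, remote_ne, capacity):
        """A newly discovered link enters the inventory."""
        self.links[(local_ne, remote_ne)] = {"capacity": capacity, "in_use": 0}

    def on_connection_setup(self, local_ne, remote_ne, size):
        """Path setup immediately updates the resource map."""
        self.links[(local_ne, remote_ne)]["in_use"] += size

    def free_capacity(self, local_ne, remote_ne):
        link = self.links[(local_ne, remote_ne)]
        return link["capacity"] - link["in_use"]

db = TopologyDb()
db.on_discovery("NE-1", "NE-2", capacity=64)   # e.g. 64 x VC-4 on the link
db.on_connection_setup("NE-1", "NE-2", size=4)
print(db.free_capacity("NE-1", "NE-2"))        # -> 60
```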
Application 8: Control Plane enabled Application-Network Interworking
Applications communicate with the Adaptation Function through an API
The Adaptation Function administers access to the UNI
The Application integrates an API or manual control
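A rough sketch of the adaptation-function idea, with all class and method names assumed for illustration (the OIF does not define this API):

```python
# Rough sketch: the application asks the adaptation function for
# connectivity; only the adaptation function talks to the UNI, and it
# applies policy first. All names are assumptions for illustration.
class AdaptationFunction:
    def __init__(self, uni_client, policy):
        self.uni = uni_client     # wrapper around the UNI-C signaling agent
        self.policy = policy      # which applications may request which rates

    def request_connectivity(self, app_id, dst_tna, rate):
        """API offered to applications; administers access to the UNI."""
        if not self.policy.allows(app_id, rate):
            raise PermissionError(f"{app_id} may not request {rate}")
        return self.uni.call_setup(src=self.policy.tna_for(app_id),
                                   dst=dst_tna, bandwidth=rate)
```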
ASON/GMPLS Tutorial Outline
Introduction
Requirements & Architecture
Signaling
Routing
Control Plane Management
OIF Interoperability Demonstrations
Control Plane Applications Use Cases
Concluding remarks
ASON Reqts. & Architecture Recap
Requirements intended to enable support for
business/commercial operating practices
Formalized specification technique utilizing components and
interfaces that can be associated in various ways to describe
actual control plane implementations
The actual location/distribution of the control plane components is
not constrained, allowing for the range of fully distributed to
centralized implementations
Architecture does not require that the reference points always be
instantiated as external interfaces (UNI, E-NNI); instantiation of
interfaces and degree of information sharing are based upon operator
business model/policy
A single instantiation of an ASON control plane may control multiple
layer networks with an explicit definition of the interlayer
interaction (including none)
Reference point concepts similar to those of Resource and
Admission Control Function (RACF) model
Standards Development Organizations
(SDO) Interaction
1999/2000 MPLS: flat peer model,
data/signaling congruent, IP only, data behavior
(e.g., connection tear-down w/o request)
ITU-T ASON Umbrella
OIF
Implementation
Agreements
IETF GMPLS Umbrella
2001: Carrier requirements across IETF, OIF, and
ITU-T regarding the need for support of commercial business
& operational practices
2003: Evolution of GMPLS signaling protocol, used
as normative base for ASON extensions
2004-2006: Ongoing communications among all
three SDOs on requirements and protocol work
Goal - Evolution towards convergence
of requirements & protocols
Network of the Future: Future Internet
Clean Slate Internet Design (FIND, GENI)
Activities in Europe and USA
Goal: Basic re-design of the (multi-layer) network architecture, including
Internet
Paradigm shift: The customer view (business and residential) imposes a
number of additional, mostly non-technical requirements
The Internet turned into a non-trusted business environment
Service-centric design of architectures, protocols and networks
Usability / ease of use is a major aspect for future applications and services,
requiring significant efforts in automation
Fundamental technical changes in network functions imposed by clean
slate design
Naming & addressing
Routing & signaling
Security functionality, especially authentication (advanced AAA)
Scalability
Optimization of topologies and hierarchies
Commercial role of the Internet (non-trusted environment)
Monitoring functionality (regarding network functionality)
Technical Implications of Network re-design
Clean slate design will shake the technical foundations of
protocol design as well as network architectures and
operations
Protocol: Protocols and architectures are expected to
change considerably (optics, slim modular protocol stack)
Data plane: Multi-technology environment for provisioning
of end-to-end services
Control and management plane: The Internet might
actually look more telco-like, an intriguing thought!
Thank you!!
Q&A
[email protected]
www.oiforum.com
Backup
OIF documents and links
Reference Material for ITU-T ASON and Transport
Recommendations
Glossary
OIF Documents
OIF presentation and newsletters
www.oiforum.com
OIF Implementation Agreements
http://www.oiforum.com/public/impagreements.html
OIF workshops on ASON/GMPLS implementations in test and
carrier networks
http://www.oiforum.com/public/meetOIW050806.html
http://www.oiforum.com/public/meetOIW073106testbeds.html
http://www.oiforum.com/public/meetOIW101606.html
ITU-T Recommendations
Accessibility Information
Go to the publications link and choose download per URL:
http://www.itu.int/publications/EBookshop.html
There is an explicit button from the download publications page
where you can register up front for 3 free Recommendations
Some Key ITU-T ASON Recommendations
Fundamental (Protocol-Neutral) Architecture & Requirements
G.8080, Architecture for the automatically switched optical network
(ASON), 2006 Revision to be published imminently
G.7713, Distributed call and connection management (DCM), 2006
Revision, to be published imminently
G.7718, Framework for ASON Management, February 05
G.7714, Generalized automatic discovery for transport entities,
August 05 revision
ITU-T G.7715/Y.1706 - Architecture and Requirements for Routing in
the Automatic Switched Optical Networks, July 2002
ITU-T G.7715.1/Y.1706 - ASON Routing Architecture and requirements
for Link State Protocols, Feb. 04
ITU-T G.7712/Y.1703 - Architecture and specification of data
communication network*, March 03
ITU-T G.7716 - Control Plane Initialization, Reconfiguration, and
Recovery, target Consent Nov. 06
Textbooks covering ITU-T Architecture
Aspects (e.g., Functional Modeling, ASON)
Broadband Networking: ATM, SDH, and SONET; Michael Sexton and Andrew Reid; ISBN 0-89006-578-0 (see in particular Chapters 2-4)
http://www.amazon.com/gp/product/0890065780/ref=sib_rdr_dp/103-2003697-9480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283155
Achieving Global Information Networking; Varma and Stephant et al; ISBN: 0890069999 (see in particular Chapters 1-4)
http://www.amazon.com/gp/product/0890069999/ref=dp_return_1/103-2003697-9480609?%5Fencoding=UTF8&n=283155&s=books
SDH/SONET Explained in Functional Models: Modeling the Optical Transport Network; Huub van Helvoort; ISBN 0-470-09123-1
http://www.amazon.com/gp/product/0470091231/ref=sib_rdr_dp/103-2003697-9480609?%5Fencoding=UTF8&me=ATVPDKIKX0DER&no=283155&st=books&n=283155
Optical Networking Standards: A Comprehensive Guide for Professionals; Khurram Kazi; ISBN: 0387240624 (to be published June 2006; see for example Chapters 2, 16)
http://www.amazon.com/gp/product/0387240624/qid=1147161139/sr=1-1/ref=sr_1_1/103-2003697-9480609?s=books&v=glance&n=283155
Some Key ITU-T Functional Modeling Rec.
Fundamental Architecture & Equipment
ITU-T Rec. G.803, Architecture of transport networks based on the
synchronous digital hierarchy (SDH), March 2003
ITU-T Rec. G.805 - Generic functional architecture of transport networks,
March 2000
ITU-T Rec. G.809 - Functional architecture of connectionless layer networks,
March 2003
ITU-T Rec. G.872, Architecture of optical transport networks, November 2001
ITU-T Rec. G.8010, Architecture of Ethernet Layer Networks, February 2004
ITU-T Rec. G.8110, MPLS layer network architecture, January 2005
ITU-T G.8110.1, Architecture of Transport MPLS (T-MPLS) Layer Network,
publication imminent
ITU-T G.783, Characteristics of synchronous digital hierarchy (SDH)
equipment functional blocks, March 2006
ITU-T G.8021, Characteristics of Ethernet transport network equipment
functional blocks,
G.8121, Characteristics of Transport MPLS (T-MPLS) Equipment Functional
Blocks, publication imminent
Etc.
Glossary
ACDR: ASCII CDR
AMA: Automatic message accounting
ASON: Automatically switched optical network
AP: Access point
API: Application programming interface
BAF: Billing AMA Format
BoD: Bandwidth on Demand
CC: Connection controller
CCC: Calling/called call controller
CDR: Call detail record
CORBA: Common object request broker architecture
CP: Connection point
Cp: Control plane
DA: Discovery agent
DCM: Distributed Call and Connection Mngmt
ECF: Equipment control function
EMF: Equipment management function
EMS: Element management system
E-NNI: External NNI
ETF: Equipment transport function
FCAPS: Fault, Configuration, Accounting, Performance, Security
FTP: File transfer protocol
IA: Implementation agreement
I-NNI: Internal NNI
LCAS: Link capacity adjustment scheme
LRM: Link resource manager
MIB: Management information base
Mp: Management plane
NCC: Network call controller
NE: Network element
NMS: Network management system
MLRA: Multi-layer routing area
MLSNPP: Multi-layer SNPP
MTNM: Multi-technology network management
NNI: Network-network interface
OH: Overhead
OSF: Operations system function
OSS: Operations support system
OTN: Optical transport network
PC: Protocol controller
RA: Routing area
RC: Routing controller
SC: Switched connection
SCN: Signaling communication network
SNC: Subnetwork connection
SPC: Soft permanent connection
SNP: Subnetwork point
SNPP: SNP Pool
SRG: Shared risk group
STM: Synchronous Transport Module
TAF: Transport atomic function
TAP: Termination & adaptation performer
TCE: Transport capability exchange
TCP: Termination connection point
TNA: Transport network address
TP: Termination point
Tp: Transport plane
TTP: Trail termination point
UNI: User-network interface
UML: Unified modeling language
VC: Virtual container
VCAT: Virtual concatenation
VLAN: Virtual local area network
WSF: Workstation function
XCDR: XML CDR format
XML: Extensible markup language