Use Case AN
Technical Specification
Use cases for Autonomous Networks
Summary
This is a deliverable of the ITU-T Focus Group on Autonomous Networks (FG-AN).
This document analyses use cases for autonomous networks. It provides use case descriptions and
indicates the basic set of possible requirements for each use case. The use cases are divided into
categories, priorities are indicated, and actor-interaction diagrams are added.
Keywords
Artificial Intelligence, autonomous networks, components, machine learning, requirements, use
cases
1 Scope
This Technical Specification analyses use cases for autonomous networks. It provides use case
descriptions and indicates the basic set of possible requirements for each use case. The use cases are
divided into categories, priorities are indicated, and actor-interaction diagrams are added.
2 References
[ITU-T Y.3172] ITU-T Recommendation Y.3172 (2019), Architectural framework for
machine learning in future networks including IMT-2020.
[ITU-T Y.3173] ITU-T Recommendation Y.3173 (2020), Framework for evaluating
intelligence levels of future networks including IMT-2020.
[ITU-T Y.3174] ITU-T Recommendation Y.3174 (2020), Framework for data handling to
enable machine learning in future networks including IMT-2020.
[ITU-T Y.3176] ITU-T Recommendation Y.3176 (2020), Machine learning marketplace
integration in future networks including IMT-2020.
[ITU-T Y.3179] ITU-T Recommendation Y.3179 (2021), Architectural framework for
machine learning model serving in future networks including IMT-2020.
4 Abbreviations
AI Artificial Intelligence
AN Autonomous Networks
CI/CD Continuous Integration and Continuous Delivery
CN Controller
ER Emergency Response
GNN Graph Neural Networks
GUI Graphical User Interface
IDSA Inter-domain Service Automation
KB Knowledge Base
KPI Key Performance Indicator
LCM Life Cycle Management
MIMO Multiple Input Multiple Output
ML Machine Learning
MLFO Machine Learning Function Orchestrator
mMTC Massive Machine Type Communications
MNO Mobile Network Operator
NF Network Function
nRT RIC Near Real Time RAN Intelligent Controller
OSS Operational Support System
QoE Quality of Experience
5 Conventions
In this Technical Specification, in alignment with the conventions of [Supplement 55 to ITU-T Y-series Recommendations], possible requirements derived from a given use case are classified as follows:
The keywords "it is critical" indicate a possible requirement which would need to be fulfilled (e.g., by an implementation) and enabled in order to provide the benefits of the use case.
The keywords "it is expected" indicate a possible requirement which would be important but not absolutely necessary to be fulfilled (e.g., by an implementation). Thus, this possible requirement would not need to be enabled in order to provide the complete benefits of the use case.
The keywords "it is of added value" indicate a possible requirement which would be optional to be fulfilled (e.g., by an implementation), without implying any sense of importance regarding its fulfilment. Thus, this possible requirement would not need to be enabled in order to provide the complete benefits of the use case.
6 Introduction
As the demand on and expectations of communication networks have grown, so have user subscriptions and expectations of new services. Network operators must find new ways to address these pressures while at the same time controlling operational cost. Autonomous networks are those that possess the ability to monitor, operate, recover, heal, protect, optimize, and reconfigure themselves; these abilities are commonly known as the self-* properties. The impact of autonomy on the network will be felt in all areas, including planning, security, audit, inventory, optimisation, orchestration, and quality of experience. In this context, the main concepts studied by FG-AN are exploratory evolution, real-time responsive experimentation and dynamic adaptation.
The use cases studied in this document are based on contributions and discussions with domain experts or mentors. They are of various types, including use cases that directly describe autonomous behaviours and use cases that describe applications which benefit from them. The collation of use cases included a specific effort to study the impact on the key concepts under study in FG-AN. Effort was made to derive requirements and, further, to classify them. A relation to the architecture is provided in the form of guidance on components derived from the use cases.
The main learnings from this use case analysis are:
• while the use cases for AN are quite varied and require support from domain experts, a common refrain has been the application of the key concepts mentioned above;
• use case analysis may need to be continued as the field is still evolving.
7 Use cases
Expected requirements
● AN-UC01-REQ-006: It is expected that AN enable the exchange of knowledge between various components in the AN and entities in other administrative domains.
NOTE – Examples of entities in other administrative domains are network services which do not implement AN functionalities.
Open issues:
1. Are simulators encapsulated in the Sandbox, or are they open to direct interfacing from autonomous network components?
2. Simulators are heterogeneous and a uniform interface to them does not exist; this makes their interfacing and configuration non-standard and difficult to implement.
3. Additional scenarios, such as the addition of new simulation capabilities and the flagging of new requirements for simulation, need to be handled.
Use case category: Cat 1: describes a scenario related to core autonomous behaviour itself.
Reference: [b-Y.ML-IMT2020-SANDBOX]
Figure 3: actor interaction for Configuring and driving simulators from autonomous components in
the network
Open issues:
1. Are some peers more equal than others (e.g. humans)?
2. What are the messages exchanged? E.g. requests for comments, reports on status, capability exchange?
Use case category: Cat 1: describes a scenario related to core autonomous behaviour itself.
Reference: [b-Y.ML-IMT2020-MLFO]
7.4 Configuring and driving automation loops from autonomous components in the network
Use case id: FG-AN-usecase-004
Use case name: Configuring and driving automation loops from autonomous components in the network
Base contribution: [FGAN-I-12-R1]
Creation date: 21/January/2021
Use case context: Inspired by discussions on "demand mapping" during Y.3173 and discussions during the ITU AI/ML in 5G Challenge 2020.
Use case description: There are different automation loops in various domains of the network, already proposed by different standards bodies and industry bodies. To reflect the decisions of autonomous behaviour in the network, the autonomous network component requires access to automation loops. Automation loops help implement the decisions taken by the autonomous component in the network. Moreover, it is possible that
● AN-UC04-REQ-004: It is critical that AN components consider the reports from closed loops
while deciding the AN behaviour.
NOTE – Examples of AN behaviour are evolution, experimentation and adaptation.
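As a non-normative illustration of AN-UC04-REQ-004, the following Python sketch shows an AN component taking closed-loop reports into account when selecting an AN behaviour. The ClosedLoopReport class, the decide_behaviour function and the thresholds are hypothetical, not part of this specification.

from dataclasses import dataclass
from typing import List

@dataclass
class ClosedLoopReport:
    loop_id: str          # identifier of the reporting automation loop
    domain: str           # e.g. "RAN", "core", "transport"
    kpi_deviation: float  # observed deviation from the KPI target

def decide_behaviour(reports: List[ClosedLoopReport]) -> str:
    """Map closed-loop reports to an AN behaviour (evolution, experimentation, adaptation)."""
    worst = max((r.kpi_deviation for r in reports), default=0.0)
    if worst > 0.5:
        return "adaptation"       # large deviation: adapt the running configuration
    if worst > 0.1:
        return "experimentation"  # moderate deviation: try candidate configurations in a sandbox
    return "evolution"            # stable: explore longer-term improvements

reports = [ClosedLoopReport("ran-loop-1", "RAN", 0.32),
           ClosedLoopReport("core-loop-1", "core", 0.05)]
print(decide_behaviour(reports))  # -> "experimentation"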
Figure 7: actor interaction for Configuring and driving automation loops from autonomous
components in the network
Figure 8: Component cloud for Configuring and driving automation loops from autonomous
components in the network
Figure 9: Component cloud for Domain analytics services for E2E service management
● AN-UC06-REQ-003: It is critical that AN enable capturing and using the knowledge from domain experts and AI/ML mechanisms for the recommendation of solutions for root cause analysis.
NOTE – An example of a representation format for knowledge is a knowledge graph.
Expected requirements
● AN-UC06-REQ-004: It is expected that AN use AI and big data technology to achieve full process automation and intelligent management of the wireless network.
Added value requirements
● AN-UC06-REQ-005: It is of added value that a varying set of KPIs is monitored to identify faults.
● AN-UC06-REQ-006: It is of added value that AN solutions may be monitored, optimized and continuously improved (themselves).
NOTE – For example, the OpenKB and recommendation algorithms may be optimized.
Expected requirements
● AN-UC07-REQ-002: It is expected that autonomous networks (AN) support representation,
autonomous analysis and continuous optimization of policies.
NOTE – Policies may be related to domain-specific workflows and decisions, e.g. energy usage in data centres.
7.9 Network resource allocation for emergency management based on closed loop analysis
Use case id: FG-AN-usecase-9
Use case name: Network resource allocation for emergency management based on closed loop analysis
Base contribution: FGAN-I-090-R2
Creation date: 22/April/2021
Use case context: Discussions during [FGAN-I-055-R1], [FGAN-I-054-R1], [FGAN-I-072]
Use case description: Telecommunication systems are a critical pillar of emergency management. A set of hierarchical AI/ML-based closed loops could be used to intelligently deploy and manage a slice for emergency responders in the affected area. A higher closed loop in the OSS can be used to detect which area is affected by the emergency and deploy a slice for emergency responders to that area. It can then set a resource arbitration policy for the lower closed loop in the RAN. The lower loop can use this policy to intelligently share RAN resources between the public and emergency responder slices. It can also intelligently manage ML pipelines across the edge and emergency responder devices by using split AI/ML models or by offloading inference tasks from the devices to the edge.
Following are related steps in this use case scenario:
1. The MNO may instruct the OSS to detect a certain set of emergencies and provide connectivity to emergency responders according to a predefined SLA.
2. The OSS might deploy a closed loop to achieve this. It might collect data from sources like network analytics data, social media scraping, input from emergency responders, etc.
NOTE – E.g. such closed loops may be hosted in the non-RT RIC and may be used for predictive resource allocation to specific edge locations based on predicted needs, in turn based on the detected emergency.
NOTE – The policy to reallocate resources may depend, among other things, on the type of emergency, e.g. a natural disaster, an earthquake, a law and order situation, traffic accidents, etc.
NOTE – E.g. such closed loops may be hosted nearer to the edge, e.g. in the nRT RIC. The policy input from the higher loop may indicate, among other things, the different sources of data for the lower loop.
5. The RAN domain closed loop might also decide to offload inference tasks from ER devices to the edge or use a split AI/ML model to run inference tasks on the edge and the ER device. This decision might be taken based on available network and compute resources.
NOTE – E.g. some layers of the AI/ML model may be hosted in the wearable devices of the emergency responders, which will help in, say, locating persons under distress using various inputs.
Relation with autonomous behaviour:
1. Workflows for the closed loops are independent of each other. The only interaction between closed loops is via high-level intents over the inter-loop interface.
2. Closed loops can create new closed loops in other network domains without human intervention.
3. Although loops are deployed in a hierarchical fashion, each loop has the ability to evolve independently. It can use different models and ML pipelines as required. Each loop may move up or down the autonomy levels as defined in [ITU-T Y.3173].
4. Closed loops have the ability to split and provision AI/ML models to other closed loops in an automated fashion.
5. By making closed loops in the edge domain autonomous, we also enable lower orchestration delay, better privacy and flexibility for verticals (e.g., industrial campus networks).
6. Higher loops can use the historical knowledge available to them to optimize and generalize lower loops using high-level intent. This increases the efficiency of lower loops while preserving their autonomy (e.g., a higher loop might know that certain kinds of ML models are good for cyclone emergency management, based on previous cyclones).
NOTE – This use case might be well aligned with the use case "Composable, hierarchical closed loops" in [FGAN-I-072] and others above.
Open issues (as seen by the proponent):
1. The "propagation" and "escalation" of intents is something which needs study.
Expected requirements
● AN-UC09-REQ-003: It is expected that closed loops have the ability to provision or recommend AI/ML models to other closed loops in an automated fashion.
● AN-UC09-REQ-004: It is expected that closed loops in the edge domain may be autonomous, in order to enable lower orchestration delay, better privacy and flexibility for verticals (e.g., industrial campus networks).
● AN-UC09-REQ-005: It is expected that higher loops use the knowledge base available to them to optimize and generalize lower loops using high-level intent.
NOTE – This increases the efficiency of lower loops while preserving their autonomy (e.g., a higher loop might know that certain kinds of ML models are good for cyclone emergency management, based on previous cyclones).
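A minimal, non-normative Python sketch of the hierarchical closed-loop interaction described above is given below: a higher OSS-level loop derives a resource-arbitration intent for the detected emergency, and a lower RAN-level loop maps it to slice resource shares. The Intent, OssClosedLoop and RanClosedLoop names and the numeric values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Intent:
    emergency_type: str        # e.g. "earthquake", "traffic_accident"
    er_slice_min_share: float  # minimum RAN resource share for the emergency-responder slice
    data_sources: tuple        # data sources suggested to the lower loop

class RanClosedLoop:
    """Lower loop: shares RAN resources between public and emergency-responder slices."""
    def apply_intent(self, intent: Intent, public_load: float) -> dict:
        er_share = max(intent.er_slice_min_share, 1.0 - public_load)
        return {"er_slice": round(er_share, 2), "public_slice": round(1.0 - er_share, 2)}

class OssClosedLoop:
    """Higher loop: detects the emergency and derives the intent for the lower loop."""
    def derive_intent(self, emergency_type: str) -> Intent:
        min_share = 0.6 if emergency_type == "earthquake" else 0.3
        return Intent(emergency_type, min_share, ("network_analytics", "social_media"))

oss, ran = OssClosedLoop(), RanClosedLoop()
print(ran.apply_intent(oss.derive_intent("earthquake"), public_load=0.5))
# -> {'er_slice': 0.6, 'public_slice': 0.4}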
Figure 10: actor interaction for Network resource allocation for emergency management based on
closed loop analysis
NOTE 1 – Create a high-level abstract model for closed loops, and then create declarative policies for that
high-level model that express the “intent” of creating ML pipelines. The components (“nodes”) of the high-
level service are decomposed into more concrete services (possibly recursively). Declarative policies must be
“translated” into more concrete declarative policies on the decomposed services in conjunction. For example,
“non-RT” level service may impose certain closed loop requirements on a RIC that implements the ML
pipeline. “nRT” level service may impose some other closed loop requirements on a RIC that implements
that ML pipeline. This recursive decomposition coupled with recursive policy mapping happens all the way
down until service components can get realized on the available resources. At that point, the low-level
declarative policies must be translated somehow into imperative policies (e.g. if jitter exceeds a certain
threshold, re-prioritize the traffic associated with the service).
NOTE 2 – "Imperative" policies use the "event/condition/action" pattern, whereas declarative policies use a "capabilities/context/constraints" pattern. Declarative policies are more suitable for top-level "intent" statements, but they need to be translated (by the orchestrator) into corresponding "imperative" policies in order to be actionable. On the "propagation" and "escalation" of intents: the "event/condition/action" statements are the control loops referred to above, which make sure that service components comply with the desired behaviour at all times. By coupling "event/condition/action" control loops with TOSCA's substitution mapping feature, these control loops can be made "cascading", i.e. they can propagate down from high-level abstract "intent" statements to low-level device reconfigurations, and they can escalate back up if necessary.
NOTE 3 – The events are generated (using notifications) by nodes in the service topology model. The conditions are evaluated based on attribute values of nodes in the service topology model. The actions are performed on the service topology model first, and then propagated to the external world (the "resources").
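The declarative-to-imperative translation outlined in NOTE 1 and NOTE 2 can be illustrated by the following non-normative Python sketch, in which a declarative "capabilities/context/constraints" statement is mapped into an "event/condition/action" rule. The policy fields are assumptions; no specific orchestrator or TOSCA implementation is implied.

declarative_policy = {
    "capabilities": ["traffic_prioritization"],
    "context": {"service": "er-slice", "domain": "nRT-RIC"},
    "constraints": {"jitter_ms_max": 20},
}

def to_imperative(policy: dict) -> dict:
    """Translate a declarative constraint into an event/condition/action rule."""
    limit = policy["constraints"]["jitter_ms_max"]
    return {
        "event": "jitter_report",
        "condition": f"jitter_ms > {limit}",
        "action": f"re-prioritize traffic of {policy['context']['service']}",
    }

print(to_imperative(declarative_policy))
# -> {'event': 'jitter_report', 'condition': 'jitter_ms > 20',
#     'action': 're-prioritize traffic of er-slice'}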
Open issues (as seen by the proponent): 1. Handling of accounting for such services is not clear.
Notes on priority of the use case: High. Enables vertical-driven applications and network evolution.
Reference:
Figure 12: actor interaction for Inter-domain service automation (IDSA) - for microfinance
7.11 Autonomous vertical-driven edge service and middle-mile connectivity for rural
financial inclusion (FI)
Use case id: FG-AN-usecase-011
Use case name: Autonomous vertical-driven edge service and middle-mile connectivity for rural financial inclusion (FI)
Base contribution: AN-I-060
Creation date: 31/March/2021
Use case context: Discussions regarding rural broadband architecture(s) and feedback from a survey of banks during summer 2020
Open issues (as seen by the proponent): 1. The policy framework for such an autonomous community network anchored by a specific vertical needs to be understood.
Notes on use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
Notes on priority of the use case: High. Enables vertical-driven applications and network evolution.
Reference:
DAF – SMF:
1. SmfEventSubscription_Subscribe / SmfEventSubscription_Unsubscribe
2. SmfEventSubscription_Notify
SMF – DAF:
1. DafAnalysisSubscriptions_Subscribe
2. DafAnalysisSubscriptions_Notify
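For illustration only, the following Python sketch shows the subscribe/notify pattern behind the service operations listed above. The method names mirror those operations, but the classes, message contents and transport are hypothetical.

class Smf:
    def __init__(self):
        self._subscribers = []
    def smf_event_subscription_subscribe(self, callback):
        self._subscribers.append(callback)
    def smf_event_subscription_notify(self, event: dict):
        for cb in self._subscribers:
            cb(event)   # deliver the notification to each subscribed DAF

class Daf:
    def on_smf_event(self, event: dict):
        print("DAF received SMF event:", event)

smf, daf = Smf(), Daf()
smf.smf_event_subscription_subscribe(daf.on_smf_event)
smf.smf_event_subscription_notify({"type": "session_established", "ue": "ue-1"})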
Open issues: E2E automation frameworks for the composition of infrastructure, NaaS and services do not exist.
Use case category: Cat 1: describes a scenario related to core autonomous behaviour itself.
Reference:
In this context, we introduce the use case "Open, integrated, log analysis".
Open issues:
Use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
Reference:
Notes on use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
● AN-UC017-REQ-002: It is critical that autonomous networks (AN) learn and update the process of information collection from users and derivation from network services.
● AN-UC017-REQ-003: It is critical that autonomous networks (AN) evolve and update the mapping between application QoE metrics, network KPIs and application KPIs.
NOTE – The process of evolution and updating may be triggered by application feature additions, network service updates or user device updates.
Expected requirements
● AN-UC017-REQ-004: It is expected that autonomous networks (AN) enable the plugin of
QoE prediction algorithms which may be integrated based on abstract APIs exposed from AN,
which are agnostic to the type of application and the specific underlying network technology.
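A non-normative Python sketch of the kind of abstract, application-agnostic plugin API envisaged by AN-UC017-REQ-004 follows. The QoePredictor interface, the metric names and the scoring logic are illustrative assumptions.

from abc import ABC, abstractmethod

class QoePredictor(ABC):
    """Abstract plugin API: implementations are agnostic to application and network technology."""
    @abstractmethod
    def predict(self, network_kpis: dict, application_kpis: dict) -> float:
        """Return a predicted QoE score, e.g. a mean opinion score in [1, 5]."""

class SimpleVideoQoePredictor(QoePredictor):
    def predict(self, network_kpis, application_kpis):
        # Toy mapping: start from 5 and penalize throughput shortfall and stall time.
        throughput_penalty = max(0.0, 1.0 - network_kpis["throughput_mbps"] / 10.0)
        stall_penalty = min(1.0, application_kpis["stall_seconds"] / 10.0)
        return round(5.0 - 2.0 * throughput_penalty - 2.0 * stall_penalty, 2)

predictor: QoePredictor = SimpleVideoQoePredictor()
print(predictor.predict({"throughput_mbps": 6.0}, {"stall_seconds": 2.0}))  # -> 3.8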
Figure 12: actor interaction for Quality of Experience (QoE) Prediction as-a-Service (QPaaS)
Following are related steps in this use case scenario (with O-RAN as an example architecture):
1. Analysis of heterogeneous RAN components, their corresponding splits, capabilities, deployment options, interfaces and data models (e.g. E2 nodes and E2AP support).
2. Analyse the information in the near real time RAN intelligent controller (nRT RIC).
3. Discover the capabilities of various RAN nodes and instantiate (potentially cloud-native versions of) applications (e.g. xApps) based on RIC SDKs.
4. Provision and analyse the closed loops at the near real time RIC.
5. In correlation with the non-RT RIC, analyse the devops cycle at the near real time RIC to provision new types of CNFs in the near real time RIC and new types of E2 nodes (or new capability needs in E2 nodes).
6. In the non-RT RIC, analyse the devops cycles of the near-RT RIC and the new capability needs of E2 nodes, and arrive at new use cases (e.g. what are users not able to do with the current network and why?).
NOTE – This fits well with the concept of NetApps and a network application orchestrator (NAO) [AN-I-065], decoupling the network operations logic from the service provider logic and providing clear business roles.
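Step 3 above can be illustrated with the following non-normative Python sketch, which discovers the capabilities advertised by RAN nodes and decides which applications (e.g. xApps) to instantiate. No real RIC SDK is used; the node list, capability names and xApp catalogue are hypothetical.

E2_NODES = [
    {"node_id": "e2-node-1", "capabilities": {"kpm_reporting", "traffic_steering"}},
    {"node_id": "e2-node-2", "capabilities": {"kpm_reporting"}},
]

XAPP_CATALOGUE = {
    "kpimon-xapp": {"kpm_reporting"},                  # needs KPM reports only
    "ts-xapp": {"kpm_reporting", "traffic_steering"},  # needs steering control as well
}

def select_xapps(nodes, catalogue):
    """Return, per node, the xApps whose required capabilities the node supports."""
    plan = {}
    for node in nodes:
        plan[node["node_id"]] = [name for name, required in catalogue.items()
                                 if required <= node["capabilities"]]
    return plan

print(select_xapps(E2_NODES, XAPP_CATALOGUE))
# -> {'e2-node-1': ['kpimon-xapp', 'ts-xapp'], 'e2-node-2': ['kpimon-xapp']}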
Expected requirements
● AN-UC020-REQ-003: It is expected that autonomous networks (AN) provide tailor-made recipes for application management and optimization specific to verticals deployed at the edge.
NOTE – These recipes may be the result of offline, generalized analytics at the AN. These recipes may be considered by the NAO while designing, developing and deploying applications at the edge.
Figure 13: actor interaction for Evolving Edge applications for verticals using Private 5G
Open issues (as seen by the proponent):
1. Considering that "achieving optimal operation in one domain only is risky" and that "cross-tenant metrics exchange" is needed, and considering the SA5 1:many relationship with vendors in 28.530, what is the possible feedback from the CSP and CSCs? Other than the stock "metric feedback" and "policy" to the NF, is there an orthogonal feedback from "ev" to "dev" which can potentially create new NFs?
2. How do we discover "sub-problems" on the fly?
3. How to do dynamic discovery of trade-offs per service?
4. What tools do we have to "design algorithms"?
Notes on use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
Notes on priority of the use case: High.
- Has the potential to impact future service development and evolution.
- Reuses the MLFO and the [ITU-T Y.3172] architecture.
Reference:
A simple workflow scheme for autonomy with varying autonomy levels was discussed. It included task specification by a human, followed by task understanding, feasibility check, task planning and task execution by the system. A request for support from a human, and control by a human, can apply to any of these workflow steps. Monitoring of performance levels by humans and learning by the system are additional steps.
Thus, it may be relevant to consider a multi-agent system where the agents have varied competences (capabilities + options for actions + constraints).
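The workflow scheme above is illustrated by the following non-normative Python sketch: the human specifies the task, the system performs task understanding, feasibility check, task planning and task execution, and it may request human support at any step. The confidence values and the escalation rule are illustrative assumptions.

WORKFLOW_STEPS = ["task_understanding", "feasibility_check", "task_planning", "task_execution"]

def run_task(task: str, confidence: dict) -> str:
    """Run the workflow, escalating to the human when a step's confidence is too low."""
    for step in WORKFLOW_STEPS:
        if confidence.get(step, 0.0) < 0.5:
            return f"request human support at step '{step}' for task '{task}'"
    return f"task '{task}' executed autonomously"

# The human specifies the task; the system reports its confidence per step.
print(run_task("reconfigure cell", {"task_understanding": 0.9, "feasibility_check": 0.4}))
# -> request human support at step 'feasibility_check' for task 'reconfigure cell'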
Thus, it may be relevant to consider the following aspects for this specific use
case:
1) AI-enabled applications are increasingly being deployed at the edge. Low
latency, low power consumption and small footprint are considerations for AI
applications at the edge. Accelerated, AI-enabled applications at the edge are
important enablers for future networks.
2) As AI technology and AI models evolve, the acceleration platform must also be adaptable while at the same time satisfying the requirements above. Also, reduced time to market, development time and cost to reach production readiness are important factors influencing deployment decisions by network operators. Developing a fully customized circuit board for each application may not fit this bill.
3) Solutions that can be plugged into a larger edge application, providing both the flexibility of a custom implementation and the ease of use and reduced time to market of an off-the-shelf solution, are needed.
4) Adaptive computing includes hardware that can be highly optimized for specific applications, such as Field Programmable Gate Arrays (FPGAs). In addition to FPGAs, new types of adaptive hardware, such as the adaptive System-on-Chip (SoC), which contains FPGA fabric coupled with one or more embedded CPU subsystems, have been introduced recently.
5) Prebuilt platforms, APIs and software tools enable full customization of the adaptive hardware, providing even more flexibility and optimization. This can be used to design highly flexible yet efficient systems at the edge.
6) By exploiting the development and adoption of standards for interfaces and protocols at the edge, different AI-enabled edge applications can use similar hardware components.
1. An environment model, including the network environment, is built for the user, e.g. radio propagation models, signal strengths with respect to areas, and mobility prediction models.
3. Simulations are used (offline and/or in real time) to determine the changes and adaptations needed in the network to satisfy the needs of the user, e.g. digital twins which include environment simulations and user-specific criteria.
In summary, the use case proposes to monitor, identify the need for ev, generate a new f() to support this need, and "(re-)inject" that function through a devops pipeline into the closed loops. Note that this may require multi-domain coordination to modify the closed loops and may be challenging from an implementation perspective. Implementation may depend on the capabilities provided by the underlying closed loop frameworks, e.g. ZSM.
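The monitor, identify-the-need-for-ev, generate-new-f() and re-inject cycle summarized above is illustrated by the following non-normative Python sketch. The ClosedLoop and devops_reinject placeholders do not represent a ZSM or any other framework API.

class ClosedLoop:
    def __init__(self, analysis_fn):
        self.analysis_fn = analysis_fn     # the f() currently used by the loop
    def run(self, kpi_samples):
        return self.analysis_fn(kpi_samples)

def current_f(samples):                    # existing analysis function
    return sum(samples) / len(samples)

def generate_new_f():                      # "ev": generate a candidate replacement for f()
    return lambda samples: sorted(samples)[len(samples) // 2]   # e.g. switch to a median

def devops_reinject(loop, new_fn):         # devops pipeline step: validate, then swap f()
    assert callable(new_fn)
    loop.analysis_fn = new_fn

loop = ClosedLoop(current_f)
if loop.run([1, 1, 50]) > 10:              # monitoring flags a need for evolution
    devops_reinject(loop, generate_new_f())
print(loop.run([1, 1, 50]))                # -> 1 (new f() in use)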
Expected requirements
● AN-UC026-REQ-003: It is expected that management of network services, applications, VNFs, configurations or AI/ML models is done at runtime in coordination with devops pipelines.
● AN-UC026-REQ-004: It is expected that domain-specific closed loops allow management of network services, applications, VNFs, configurations or AI/ML models in coordination with AN components.
NOTE – There can be a spectrum of adaptation changes (levels of “mutation”) of network services:
a) no adaptation at all
3. This model is then used, in conjunction with real data, for maintenance and optimization by the intelligent maintenance assistance system. This step may involve querying the virtual model and analysing real alarms and cell data, along with the virtual model, to create intelligent assistance for frontline workers. This step may use network data management, system management and core algorithms for the whole system. The output from this step may include 3D models which can be rendered in AR glasses, AI-processed network information for display, and real-time remote guidance information.
4. As the network services evolve and new network functions are plugged in (virtual or physical), the following evolution steps are applied:
a. The AR app is updated to collect new data, including new equipment data, new sensor data and new environment information.
b. The backstage support system is updated with new data management systems, core algorithms, etc.
6. A software development kit (SDK) may be exposed to 3rd party developers, who may develop new applications to analyse the AR-collected data. This may in turn help operators to provide new value-added applications in the intelligent maintenance assistant system.
Open issues (as seen by the proponent):
1. How can autonomy be applied to simplify this process or enhance it to enable more?
2. The bidirectional relationship: how does the (real world) AR+AI system feed the (digital world) network management and operations (databases), and how does the network support the AI+AR system, e.g. by providing inventory and process information and access to controls?
Notes on use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
Notes on priority of the use case: High. Has the potential to impact future service development and evolution.
Reference:
● AN-UC029-REQ-004: It is critical that autonomous networks (AN) update the data collection
mechanisms and data analysis mechanisms along with the result rendering mechanisms based on
the analysis by AI/ML on the collected data from AR and the evolution of the underlay networks.
● AN-UC029-REQ-005: It is critical that autonomous networks (AN) provide periodic and/or
asynchronous updates to humans about the operation of the intelligent assistant system.
Expected requirements
● AN-UC029-REQ-006: It is expected that autonomous networks (AN) enable the exposure of programming capabilities to 3rd party developers for the creation of novel applications which can help automate the operation and maintenance of the network, including the evolution and adaptation of network functions.
NOTE – Such novel applications may analyse the data collected using AR, suggest new data collection
mechanisms based on gaps in collected data, suggest new analytical methods, or suggest new targets for
application of analysis.
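As a non-normative illustration of AN-UC029-REQ-006, the following Python sketch shows a hypothetical SDK through which a third party registers an application that analyses AR-collected data and flags gaps to the operator. The SDK class, the callback signature and the data fields are assumptions.

class MaintenanceAssistantSdk:
    def __init__(self):
        self._apps = []
    def register_app(self, name, analyse_fn):
        """analyse_fn takes AR-collected records and returns suggestions for the operator."""
        self._apps.append((name, analyse_fn))
    def on_ar_data(self, records):
        return {name: fn(records) for name, fn in self._apps}

def coverage_gap_app(records):
    # Third-party logic: flag equipment seen in AR scans but missing sensor readings.
    return [r["equipment_id"] for r in records if not r.get("sensor_readings")]

sdk = MaintenanceAssistantSdk()
sdk.register_app("coverage-gap-detector", coverage_gap_app)
print(sdk.on_ar_data([{"equipment_id": "antenna-7", "sensor_readings": []},
                      {"equipment_id": "router-2", "sensor_readings": [42]}]))
# -> {'coverage-gap-detector': ['antenna-7']}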
7.30 Demand forecasting and live service migration methods in edge computing systems
Use case id: FG-AN-usecase-030
Use case name: Demand forecasting and live service migration methods in edge computing systems
Base contribution: FGAN-I-109
Creation date: 01 July 2021
Use case context: Discussions in the weekly meeting during the presentation of FGAN-I-109
Use case description: Virtualization and cloudification of services have brought automation, flexible placement and programmability to the network topology. The efficiency of service delivery can be significantly improved using these techniques. However, there are significant challenges in hosting ultra-reliable low-latency communication (URLLC) and massive machine type communication (mMTC) services in 5G in centralized topologies. Monitoring of networks by telco operators has revealed that the network topology is not static and the load is not uniform over a long service time.
This use case describes dynamic network topology and service placement using a genetic algorithm to analyse and predict services. In addition, efficient forecasting and live migration methods for services, as an application to edge computing systems, are introduced. This approach can enable intelligent allocation of operator equipment resources, providing flexible and efficient topologies. Simulation-based analysis showed that network equipment efficiency can be significantly increased by these techniques.
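The genetic-algorithm idea of this use case can be illustrated by the following non-normative Python sketch, which evolves a placement vector mapping services to edge sites so that the forecast demand is served with as little site overload as possible. The fitness function, the demand figures and the GA parameters are illustrative assumptions, not the proponent's implementation.

import random

SERVICES_DEMAND = [4, 3, 5, 2, 6]      # forecast load per service
SITE_CAPACITY = [8, 8, 8]              # capacity per edge site

def fitness(placement):
    """Negative total overload across sites (higher is better)."""
    load = [0] * len(SITE_CAPACITY)
    for service, site in enumerate(placement):
        load[site] += SERVICES_DEMAND[service]
    return -sum(max(0, l - c) for l, c in zip(load, SITE_CAPACITY))

def evolve(generations=50, pop_size=20):
    pop = [[random.randrange(len(SITE_CAPACITY)) for _ in SERVICES_DEMAND]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                      # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))               # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                       # mutation
                child[random.randrange(len(child))] = random.randrange(len(SITE_CAPACITY))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))   # e.g. a placement with zero overload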
Figure 17: actor interaction for Demand forecasting and live service migration
Open issues (as seen by the proponent):
- There is no unified standard format for intents for closed loops.
- There are no open-source solutions to convert intents to closed loops.
Notes on use case category: Cat 1: describes a scenario related to core autonomous behaviour itself.
Notes on priority of the use case: High. Has the potential to impact future service development and evolution.
Reference:
● Suppose multiple users try to access the channel at a particular time and resources are scarce; then we need to schedule the users into different frame durations (msec). For this, we consider an auctioning mechanism to schedule each user in a particular time slot. The utility for each auction is a function of user requirements such as rate, latency, etc.
● Once we perform user scheduling, we can allocate the channel and the power per channel according to the algorithm described above.
● In the above procedure, time domain and frequency domain scheduling are performed separately.
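The auction-based time-domain scheduling sketched above is illustrated by the following non-normative Python example: each user bids with a utility derived from its requirements (rate, latency) and the highest-utility users win the next frames. The utility weights and the user values are assumptions.

def utility(user):
    """Toy utility: favour high required rate and tight latency budgets."""
    return user["rate_req_mbps"] / 10.0 + 5.0 / user["latency_budget_ms"]

def auction_slots(users, num_slots):
    """Assign each of num_slots frames to the highest-utility users in turn."""
    ranked = sorted(users, key=utility, reverse=True)
    return {slot: ranked[slot % len(ranked)]["id"] for slot in range(num_slots)}

users = [{"id": "ue-1", "rate_req_mbps": 50, "latency_budget_ms": 10},
         {"id": "ue-2", "rate_req_mbps": 8,  "latency_budget_ms": 1},
         {"id": "ue-3", "rate_req_mbps": 20, "latency_budget_ms": 50}]
print(auction_slots(users, num_slots=4))
# -> {0: 'ue-2', 1: 'ue-1', 2: 'ue-3', 3: 'ue-2'}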
story-2: surveillance videos. A high UL rate is required for cameras; macro cell coverage and interference are a problem. Allocation for rate-demanding UEs in the presence of interference and low macro coverage; priority allocation for UL-intensive UEs to achieve QoS.
story-3: power-constrained devices (wearables, etc.) and privacy. User-specific data cannot be exposed to a 3rd party; to do analytics, no data should be taken out of the trust zone (enterprise or private network). All AI/ML, analytics, etc. need to be done within the private network.
1. Controllers are formed based on, e.g., intents and/or evolution (Ev) to optimize resource allocation with various considerations, including latency, throughput or privacy-preserving analytics.
2. Modelling of inter-controller interaction using game theory: "players" in the game would be the equivalence classes of controllers. Players may be selected from the evolvable population.
8. Outer loop: collect the data from the set of solutions, train the AI/ML model, and infer the equilibrium from the new input data using the trained model.
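Modelling inter-controller interaction as a game can be illustrated by the following non-normative Python sketch: two controllers each choose an action, payoffs depend on both choices, and best-response iteration looks for a pure-strategy Nash equilibrium. The payoff values are assumptions; in the use case, strategies and equilibria would instead be learnt over experiments, e.g. in the Sandbox.

# payoff[a1][a2] = (payoff to controller 1, payoff to controller 2)
PAYOFF = {
    "share": {"share": (3, 3), "hoard": (0, 4)},
    "hoard": {"share": (4, 0), "hoard": (1, 1)},
}
ACTIONS = list(PAYOFF)

def best_response(player, other_action):
    if player == 1:
        return max(ACTIONS, key=lambda a: PAYOFF[a][other_action][0])
    return max(ACTIONS, key=lambda a: PAYOFF[other_action][a][1])

def find_equilibrium(a1="share", a2="share", max_iters=10):
    for _ in range(max_iters):
        new_a1, new_a2 = best_response(1, a2), best_response(2, a1)
        if (new_a1, new_a2) == (a1, a2):
            return a1, a2            # neither controller wants to deviate
        a1, a2 = new_a1, new_a2
    return a1, a2

print(find_equilibrium())            # -> ('hoard', 'hoard') for this prisoner's-dilemma-like game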
Open issues (as seen by the proponent):
1. What are the controllers trying to achieve as players? (use case story)
2. How to quantify computational cost?
3. How to evaluate whether other controllers (players) are trustable among themselves (so far, the FG has considered whether controllers are trusted/deployable)? This aspect of trust may be taken as an input from another layer.
a. AI/ML may be used to overcome some of the disadvantages of game theory. What are those? The different Nash equilibria and the approaches to reach them could be learnt over some experiments using AI/ML, and then the learnings could be used to optimize these experiments.
4. In the context of trust: players can be selfish or transparent, and this understanding may change as Ev progresses. A trust index can be studied; this is a new problem.
5. How to use RL (which needs quick feedback) here, and should a real network or simulated data be used? Different RL mechanisms, such as zero-regret approaches, and simulated data can be used. Convergence may take a long time in RL; a digital twin can be used to introduce RL in AN.
6. Small cell coverage is small and the number of users is low, so the distribution of service requirements is sparse. As the coverage density increases to serve a large number of users, capacity also increases (e.g. a specific indoor scenario such as a factory), but resource allocation may be more complicated for a macro cell; interference with the macro cell, and a game with (coordination with) the macro cell, may be considered later.
Notes on use case category: Cat 1: describes a scenario related to core autonomous behaviour itself.
Notes on priority of the use case: High. Justification: the architecture components derived and introduced here will help in defining meta-evolution controllers [I-98].
Reference: [b-Sankar-1], [b-Sankar-2], [b-Ahmad], [b-Al-Turjman], [b-Ciavaglia-2]
Expected requirements
● AN-UC032-REQ-002: It is expected that autonomous networks (AN) enable experimentation with various gaming strategies, payoffs and equilibria, with controllers as players.
NOTE – Experimentation may be conducted in the Sandbox and coordinated, e.g., by an experimentation manager, and may result in analysis of strategies, payoffs and equilibria.
Figure 20: component cloud for AI enabled Game theory based resource allocation
Step-3: link tasks: A task corresponds to a worker utilized in the workflow. Tasks in the workflow may receive input parameters, and the execution logic of a task may be implemented in Python functions called workers. Worker tasks may be registered in the main Python file "main.py" in the same directory where the worker was created.
NOTE – All workers which one wants to use in FRINX Machine must be included in this file.
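The worker pattern described in Step-3 can be illustrated by the following non-normative Python sketch: the execution logic of a task lives in a Python function (a worker) that receives input parameters, and the workers are registered in main.py. The registration decorator and task names are hypothetical, not the actual FRINX Machine worker API.

WORKERS = {}

def register_worker(task_name):
    """Decorator used in main.py to register a worker for a workflow task."""
    def wrap(fn):
        WORKERS[task_name] = fn
        return fn
    return wrap

@register_worker("check_controller_health")
def check_controller_health(task_input: dict) -> dict:
    # Execution logic of the task: inspect the input parameters and return a result.
    loop_id = task_input["controller_id"]
    return {"status": "COMPLETED", "output": {"controller_id": loop_id, "healthy": True}}

# The workflow engine would look the worker up by task name and pass the task's input.
print(WORKERS["check_controller_health"]({"controller_id": "loop-1"}))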
Description of the relation (if any) of the use case with autonomous behaviour or the key technical enablers:
- In the context of the use case, controllers (closed loops) are represented as workflows; modules are modelled as tasks.
- Controller specifications and module specifications are created using the designer, sanity checked and stored in the resource database.
- The workflow manager is used to visualize the controllers, and to monitor and analyse them.
- Deploy will link service-x with the controllers.
Step-3: Other forms of "underlays", e.g. FRINX Machine or ZSM managed domains, would host their own ways of achieving the above-mentioned integration of controllers in their service domains. This forms the "application" (or deployment) side of the controller lifecycle.
Expected requirements
● AN-UC037-REQ-002: It is expected that autonomous networks reuse existing mechanisms for the dynamic management of agreements between different administrative domains of network operators, to achieve seamless autonomous behaviour in the case of underlay networks that support scaling across shared resource pools and across administrative domains.
NOTE – Examples of existing mechanisms for the dynamic management of agreements are blockchain mechanisms or smart contracts.
Figure 29: actor interaction for Negotiated boundaries in AN for seamless network sharing: part-2
Use case category: Cat 2: describes a scenario related to the application of autonomous behaviour in the network.
Reference:
7.40 Awareness in AN
Use case id: FG-AN-usecase-040
Use case name: Awareness in AN
Base contribution:
Creation date: 28/09/2021
Use case context: Discover and leverage nearby control loops and AN
Use case description: Today we have heterogeneous networks (HetNet) comprising networks like RAN, core and transport, and the convergence of wireless and wireline networks of different technologies. The evolution of HetNet technologies would comprise control loops in the journey towards AN. In such a HetNet scenario, multiple control loops from different vendors and operators surround an AN at any given location or endpoint. AN need to be aware of the surrounding control loops, or of other AN in general, for various reasons such as collaboration, coordination, security, training of machine learning models, split learning and offload. AN must have an interface to communicate and interact with other control loops in different AN and with control loops within the AN. The interface shall enable other control loops to be autonomous, discover
[b-ETSI GR ZSM 009-3] Draft Group Report ETSI GR ZSM 009-3, Zero-Touch
Network and Service Management (ZSM) Closed-loop
automation: Advanced topics.
[b-ETSI GS ZSM 002] Group Specification ETSI GS ZSM 002 (2019), Zero-touch
network and Service Management (ZSM); Reference
Architecture.
[b-ETSI GS ZSM 013] Draft Group Specification ETSI GS ZSM 013, Zero-touch
network and Service Management (ZSM); Automation of
CI/CD for ZSM services and managed services.
[b-Jagannath] Jagannath, J., Polosky, N., Jagannath, A., Restuccia, F., and
Melodia, T. (2019), Machine Learning for Wireless
Communications in the Internet of Things: A Comprehensive
Survey. Ad Hoc Networks Vol. 93, p. 101913.
[b-Liu] Liu, L., Hu, H., Luo, Y. and Wen, Y. (2020), When wireless
video streaming meets AI: A deep learning approach. IEEE
Wireless Communications, Vol. 27, No. 2, pp. 127–133.
[b-Mao] Mao, Q., Hu, F. and Hao Q. (2018), Deep Learning for
Intelligent Wireless Networks: A Comprehensive Survey. In
IEEE Communications Surveys & Tutorials, Vol. 20, No. 4,
pp. 2595-2621.
[b-Nahum] Nahum, C. V., Pinto, L., Tavares, V. B., Batista, P., Lins, S.,
Linder, N. and Klautau, A. (2020), Testbed for 5G
Connected Artificial Intelligence on Virtualized Networks.
IEEE Access, Vol. 8, pp. 223202-223213.
[b-Wang-1] Wang, C., Wu, Q., Weimer, M. and Zhu, E. (2021), FLAML: A Fast and Lightweight AutoML Library. In Fourth Conference on Machine Learning and Systems (MLSys 2021).
[b-Wu-2] Wu, Q., Wang, C., Langford, J., Mineiro, P. and Rossi, M. (2021), ChaCha for Online AutoML. In Proceedings of the 38th International Conference on Machine Learning, PMLR 139, pp. 11263-11273.
___________________