A Markov Decision Process (MDP) is introduced to address the issues present in the system and is solved with a dynamic programming technique [4]. However, this technique does not scale as the size of the problem grows. Hence, to alleviate this issue, deep reinforcement learning (DRL), which combines Deep Neural Networks (DNNs) with Reinforcement Learning (RL), is introduced. Multi-dimensional information plays an effective role in finding the route and its capacity.

Related work has also considered the objective of minimizing the time by which all packets from the two users are delivered to the destination; these works involve online methodologies [14][15][16]. The authors in [14] considered a multi-access wireless system with energy harvesting (EH) transmitters, and the access problem was modeled as a partially observable Markov decision process (POMDP). In [15], policies for optimally controlling the power of EH nodes in a multi-access system were studied, where the EH process collects energy according to a given harvesting model.
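As a point of reference for the dynamic programming approach mentioned above, the following is a minimal value-iteration sketch for a generic finite MDP; the transition structure P and all names are illustrative assumptions, not the model used in [4]. The full sweep over every state in each iteration is what prevents this approach from scaling to large state spaces, which motivates the DRL formulation.

    # Minimal value-iteration sketch for a generic finite MDP (illustrative only).
    # P[s][a] is a list of (probability, next_state, reward) tuples.
    def value_iteration(P, num_states, num_actions, gamma=0.9, tol=1e-6):
        V = [0.0] * num_states
        while True:
            delta = 0.0
            for s in range(num_states):  # full sweep over all states each iteration
                best = max(
                    sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                    for a in range(num_actions)
                )
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < tol:  # stop once the value function has converged
                return V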
The M channels are independent and identically distributed. The transmission schedule is chosen by the relay: a channel is used by the relay to transmit packets to the sink, and the relay does not transmit a packet when the channel state is poor. When a particular buffer is full and the relay does not transmit a packet from it, packets are lost as new packets keep arriving in the following frame. The transmission of packets therefore depends entirely on the channel states and on the buffer states associated with the communication pairs and the transmission mode.

Each time slot (TS) state is divided into two parts: the current channel state and the current UE battery state. The current channel state is represented as Ht = {H1,t, ..., HN,t} and the current UE battery state as Bt = {B1,t, ..., BN,t}. Combining the two gives the system state St = {Ht, Bt}, on which the UE selection decision is based. Any selection strategy must satisfy Kt ⊆ A with |Kt| = K, i.e., Σ_{i=1}^{N} Ii,t = K. The reward Rt is the total rate achieved in the slot, and the corresponding objective is

    max_n J_l(n)
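To make the state and reward above concrete, the short sketch below builds St = {Ht, Bt} and computes a sum-rate reward over the K scheduled UEs. The log2(1 + SNR)-style per-UE rate and the function names are assumptions for illustration, since the paper's exact rate equation is not reproduced here.

    import math

    def build_state(H_t, B_t):
        # S_t = {H_t, B_t}: current channel states and current UE battery states.
        return {"H": list(H_t), "B": list(B_t)}

    def total_rate_reward(H_t, scheduled, K):
        # R_t: total rate over the K scheduled UEs (|K_t| = K).
        assert len(scheduled) == K, "exactly K UEs must be scheduled"
        # Assumed Shannon-style per-UE rate; the paper's exact expression may differ.
        return sum(math.log2(1.0 + H_t[i]) for i in scheduled)

    # Example with N = 4 UEs and K = 2 scheduled per time slot (illustrative values).
    H_t = [3.1, 0.4, 2.2, 1.7]   # per-UE channel quality (e.g., SNR)
    B_t = [0.8, 0.5, 0.9, 0.2]   # per-UE battery levels
    S_t = build_state(H_t, B_t)
    print(total_rate_reward(H_t, scheduled=[0, 2], K=2))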
The layer at the input is much smaller than the replay memory of the reinforcement model; Q_minibatch(s, a) ∈ R^(batch_size x L) holds the Q-values for the current mini-batch of states, where L is the size of the action space and a ∈ A. To map the learned features onto the Q-value vector, a fully connected output layer is used, whose output is Q(s, a) ∈ R^(1 x L). From Figure 3 we can observe that ψ_A(θ_a) is a neural network that produces the output Q(s, a). The LSTM layer parameters {w1, ..., wn} are connected to the output layer parameter wf, and the number of LSTM units is denoted by n. Q(St, a) is evaluated by ψ_A(θ_a) at the start of each learning period.

We parameterize an approximate value function Q(s, a; θ_a) using the proposed learning network with network parameters (weights) θ_a, as shown in Figure 3. The action At that achieves the maximum Q(St, At) is selected with probability 1 - ε, where 0 < ε << 1, and a random action is selected otherwise. After executing the chosen action At, the BS receives the reward Rt and the system moves to the new state. We use experience replay to store the BS's experience at every TS as a tuple et = (St, At, Rt, St+1) in a dataset D = {e1, ..., et}. The value of L is set to the replay memory size, which means that L experience tuples can be stored. Here, et is produced by the control policy π(a|s). In each TS, rather than updating θ_a using transitions from the current state, we randomly sample a transition (s, a, r, s') from D; updating the network parameters in this way avoids the problems caused by strong correlations among transitions of the same episode [22]. With the sampled transitions, yt = r + γ max_{a'} Q(s', a'; θ_a^-) is the target Q-value, where the network weights θ_a^- are obtained from the previous iteration. Finally, this yields multi-channel access control with different attribute sequences in software defined network communication.
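As a concrete illustration of the learning network ψ_A(θ_a) and of the ε-greedy, replay-based update just described (formalized in Algorithm 1 below), here is a minimal PyTorch sketch. It is not the authors' implementation: the state encoding, layer sizes, and hyperparameters (NUM_UE, NUM_ACTIONS, HIDDEN, GAMMA, EPSILON, the learning rate, and the replay size) are all illustrative assumptions.

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    NUM_UE = 10        # N UEs (assumed)
    NUM_ACTIONS = 10   # L = |A|, size of the action space (assumed)
    HIDDEN = 64        # n LSTM units (assumed)
    GAMMA = 0.9        # discount factor (assumed)
    EPSILON = 0.1      # exploration probability, 0 < epsilon << 1

    class QNetwork(nn.Module):
        # psi_A(theta_a): LSTM layer {w_1, ..., w_n} plus output layer w_f,
        # producing Q(s, a) in R^(1 x L) for a state sequence.
        def __init__(self, state_dim, num_actions, hidden):
            super().__init__()
            self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
            self.fc = nn.Linear(hidden, num_actions)

        def forward(self, state_seq):              # (batch, seq_len, state_dim)
            out, _ = self.lstm(state_seq)
            return self.fc(out[:, -1, :])          # (batch, L) Q-values

    state_dim = 2 * NUM_UE                                   # S_t = {H_t, B_t}
    q_net = QNetwork(state_dim, NUM_ACTIONS, HIDDEN)         # theta_a
    target_net = QNetwork(state_dim, NUM_ACTIONS, HIDDEN)    # theta_a^-
    target_net.load_state_dict(q_net.state_dict())           # refreshed from the previous iteration
    optimizer = torch.optim.SGD(q_net.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)                             # experience memory D

    def select_action(state_seq):
        # Epsilon-greedy selection at the BS (Algorithm 1, lines 6-11).
        if random.random() < EPSILON:
            return random.randrange(NUM_ACTIONS)
        with torch.no_grad():
            return int(q_net(state_seq).argmax(dim=1))

    def replay_update(batch_size=32):
        # One SGD step on L_t(theta_a) = (y_t - Q(s, a; theta_a))^2
        # over a randomly sampled mini-batch (Algorithm 1, lines 14-16).
        if len(replay) < batch_size:
            return
        s, a, r, s_next, done = zip(*random.sample(replay, batch_size))
        s, s_next = torch.stack(s), torch.stack(s_next)
        a = torch.tensor(a)
        r = torch.tensor(r, dtype=torch.float32)
        done = torch.tensor(done, dtype=torch.float32)
        with torch.no_grad():                      # y_t uses the frozen weights theta_a^-
            y = r + GAMMA * (1.0 - done) * target_net(s_next).max(dim=1).values
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = ((y - q) ** 2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()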
 1: Initialize the experience memory D.
 2: Initialize the parameters of the action-generator network ψ_A with random weights θ_a.
 3: Initialize the total number of episodes Ep.
 4: Initialize the environment and obtain the initial observation S1.
 5: for t = 1, 2, ... do
 6:     if rand() < ε then
 7:         Select a random action At ∈ A;
 8:     else
 9:         Compute Q(St, a) for all actions a ∈ A using ψ_A, and
10:         select At = argmax_{a ∈ A} Q(St, a).
11:     end if
12:     Execute At, observe the reward Rt and the new state St+1.
13:     Store the transition (St, At, Rt, St+1) in D.
14:     Sample a random mini-batch of transitions (s, a, r, s') from D.
15:     Set yt = r if t + 1 is the terminal step of the episode (t + 1 = Ep); otherwise, set yt = r + γ max_{a'} Q(s', a'; θ_a^-).
16:     Perform a stochastic gradient descent step on the loss Lt(θ_a) = (yt - Q(s, a; θ_a))^2 to update the network parameters θ_a as indicated by (13).
17: end for

Algorithm 1: NRELA algorithm procedure with respect to access control.

V. Experimental Simulations

In this section, we evaluate the Novel Re-Enforcement Learning Approach (NRELA), which improves transmission efficiency in cognitive radio software defined networks by using multiple channels to increase network throughput, and we evaluate energy consumption with respect to different numbers of nodes. For the simulation setup we use NS3 on an Ubuntu operating system with different node communication patterns. The simulation parameters used in our implementation are shown in Table 1.

Table 1: Simulation parameters.
Parameter                          Value
Area of network                    1500 x 1500
Nodes within the presented area    60
Time of simulation                 30 s
Range of transmission              250 m
Speed of mobility                  0-20 m/sec
Number of ...                      10
We report time comparison results in software defined networks for node communication, accounting for packets dropped in the middle of hop-by-hop data delivery. Table 2 shows the analysis results with respect to time for data communication between nodes.

Number of Nodes    NRELA    CWA-CD
10                 1.3      1.28
20                 2.0      2.7
30                 3.2      2.8
40                 4.2      4.5
50                 5.6      4.9
60                 5.4      5.5

Table 2: Node communication with time comparison results. The table gives the time comparison results for the different approaches with respect to the number of communicating nodes.

As the number of nodes increases, the energy consumed by real-time host-to-host data transmission also increases; our NRELA scheme nevertheless provides efficient communication without loss of data delivery in software defined network communication, as shown in Figures 4 and 5.

Fig. 4: Comparison of time with respect to nodes.

VI. Conclusion

In this paper, we present the Novel Re-enforcement Learning model (NRELA) to provide a solution for user access control, and we also describe the battery prediction problem for a multi-user, energy-sharing based communication system. The main intent of the proposed system is to maximize the uplink sum rate, which is driven by the system's instantaneous information during data sharing. Energy optimization is also discussed to minimize packet loss. Simulation results show that the proposed approach satisfies different conditions and increases effectiveness in terms of parameters such as throughput, packet loss, latency, and energy optimization.
References
[1]. J. Zhu, Y. Song, and D. Jiang, "A New Deep-Q-Learning-Based Transmission Scheduling Mechanism for the Cognitive Internet of Things," IEEE, 2017.
[2]. N. Khalil, M. R. Abid, D. Benhaddou, et al., "Software defined sensors networks for Internet of Things," in IEEE Ninth International Conference on Intelligent Sensors, Sensor Networks and Information Processing, IEEE, 2014, pp. 1-6.
[3]. S. Jeschke, et al. (Eds.), "Industrial Internet of Things: Cybermanufacturing Systems," ISBN 978-3-319-42559-7, Springer, Switzerland, 2016.
[4]. J. Yang, S. He, Y. Lin, et al., "Multimedia cloud transmission and storage system based on Internet of Things," Multimedia Tools and Applications.
[11]. M. Chu and H. Li, "Reinforcement Learning based Multi-Access Control and Battery Prediction with Energy Harvesting in IoT Systems," arXiv:1805.05929v2 [eess.SP], Sep. 2018.
[12]. A. Haldorai and A. Ramu, "Security and channel noise management in cognitive radio networks," Computers & Electrical Engineering, vol. 87, p. 106784, Oct. 2020. doi:10.1016/j.compeleceng.2020.106784
[13]. A. Haldorai and A. Ramu, "Canonical Correlation Analysis Based Hyper Basis Feedforward Neural Network Classification for Urban Sustainability," Neural Processing Letters, Aug. 2020. doi:10.1007/s11063-020-10327-3
[14]. A. Cammarano, C. Petrioli, and D. Spenza, "Online energy harvesting prediction in environmentally powered wireless sensor networks," IEEE Sensors J., vol. 16, no. 17, pp. 6793-6804, Sep. 2016.
[15]. H. U. Yildiz, V. C. Gungor, and B. Tavli, "A hybrid energy harvesting framework for energy efficiency in wireless sensor networks based smart grid applications," in 2018 17th Annual Mediterranean Ad Hoc Networking Workshop (Med-Hoc-Net), Capri, Italy: IEEE, June 2018, pp. 1-6.
[16]. A. A. Nasir, X. Zhou, S. Durrani, and R. A. Kennedy, "Relaying protocols for wireless energy harvesting and information processing," IEEE Trans. Wireless Commun., vol. 12, no. 7, pp. 3622-3636, Jul. 2013.
[17]. R. Zhang, J. Wang, Z. Zhong, C. Li, X. Du, and M. Guizani, "Energy-efficient beamforming for 3.5 GHz 5G cellular networks based on 3D spatial channel characteristics," Elsevier Comput. Commun., vol. 121, no. 5, pp. 59-70, Mar. 2018.
[18]. X. Zhou, B. Bai, and W. Chen, "Greedy relay antenna selection for sum rate maximization in amplify-and-forward MIMO two-way relay channels under a holistic power model," IEEE Commun. Lett., vol. 19, no. 9, pp. 1648-1651, 2015.