Ascending Hierarchical Classification For Camera Clustering Based On FoV Overlaps For WMSN
IET Wireless Sensor Systems
Research Article
Ala-Eddine Benrazek1, Brahim Farou1,2, Hamid Seridi1,2, Zineddine Kouahla1,2, Muhammet Kurulay3
1LabSTIC, 8 May 1945 University, P.O. Box 401, 24000 Guelma, Algeria
2Department of Computer Science, 8 May 1945 University, P.O. Box 401, 24000 Guelma, Algeria
3Mathematics Department, Yildiz Technical University, Istanbul, Turkey
E-mail: [email protected]
Abstract: Wireless multimedia sensor networks (WMSNs) currently face the problem of rapidly decreasing energy due to the acquisition, processing and transmission of massive multimedia data. This decrease in energy shortens the life of the network, resulting in higher overhead costs and a deterioration in quality of service. This study presents a new grouping strategy that mitigates these energy problems. The objective is to group the cameras in a WMSN according to their fields of view (FoVs). The proposed system begins by searching for all polygons created by the intersection of pairs of camera FoVs. Based on the generated surfaces, an ascending hierarchical classification is applied to group cameras with strongly overlapping fields of view. The results obtained with 300 randomly positioned cameras show the effectiveness of the proposed method in minimising redundant detection, reducing energy consumption, increasing network lifetime, and reducing network overload.
Fig. 1 FoV of camera sensor [8]
assigned to their leader in a dynamic and cooperative manner for target tracking.
Selina Sharmin et al. [16] have developed a zone coverage system, sensitive to the network lifetime, to solve the region coverage problem. The system uses a grouping mechanism based on the degree of overlap and residual energy levels. In the event of sensor failure, this proposal promotes network management and improves the quality of monitoring. In [17], Premlata Sati et al. have presented an automatic rotation mechanism of the FoV of each camera to maximise the coverage area with the minimum number of cameras in the surveillance zone and consequently avoid redundant detections. However, this approach requires the prior installation of panoramic cameras.

Y_B = Y_A + R_s × sin(α)   (2)

X_C = X_A + R_s × cos((α + θ) mod 2π)   (3)

Y_C = Y_A + R_s × sin((α + θ) mod 2π)   (4)

• Clusters: a subset of cameras with overlapping fields of view. The area of the overlap between the FoVs of two nodes determines whether they can be in the same cluster according to the proposed clustering algorithm.
• Cut-off threshold (β): a value defined to determine the most appropriate cut-off level for a group of cameras. It must be as relevant as possible, as illustrated in Fig. 2. This value is used in the clustering phase and stops the gathering process for cameras with little overlap.
• Isolated camera: a camera is considered isolated if and only if all its overlapping surfaces with other cameras are null or lower than the cut-off threshold (β).

If there is a vertex among the set of triangle vertices that satisfies the conditions (6), the coordinates of these vertices are added to the list of polygon points.
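To make the geometry concrete, the following Java sketch computes the FoV triangle vertices B and C from (2)-(4). The class and field names are illustrative assumptions, and the X_B line is the cosine analogue of (2), whose printed form falls outside this excerpt.

```java
// Minimal sketch (names are illustrative): vertices of a camera's FoV triangle.
// A(ax, ay) is the camera position, alpha its orientation, theta the FoV angle,
// rs the sensing radius, following (2)-(4).
final class FovTriangle {
    final double ax, ay, bx, by, cx, cy;

    FovTriangle(double ax, double ay, double alpha, double theta, double rs) {
        this.ax = ax;
        this.ay = ay;
        this.bx = ax + rs * Math.cos(alpha);            // cosine analogue of (2)
        this.by = ay + rs * Math.sin(alpha);            // (2)
        double beta = (alpha + theta) % (2 * Math.PI);  // (alpha + theta) mod 2*pi
        this.cx = ax + rs * Math.cos(beta);             // (3)
        this.cy = ay + rs * Math.sin(beta);             // (4)
    }
}
```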
Fig. 3 Different possibilities for FoV overlapping
Y = a × X + b
Y = a′ × X + b′   (7)

((X_A ≤ X ≤ X_B ∧ X_C ≤ X ≤ X_D) ∨
 (X_A ≤ X ≤ X_B ∧ X_C ≥ X ≥ X_D) ∨
 (X_A ≥ X ≥ X_B ∧ X_C ≥ X ≥ X_D) ∨
 (X_A ≥ X ≥ X_B ∧ X_C ≤ X ≤ X_D))
∧
((Y_A ≤ Y ≤ Y_B ∧ Y_C ≤ Y ≤ Y_D) ∨
 (Y_A ≤ Y ≤ Y_B ∧ Y_C ≥ Y ≥ Y_D) ∨
 (Y_A ≥ Y ≥ Y_B ∧ Y_C ≥ Y ≥ Y_D) ∨
 (Y_A ≥ Y ≥ Y_B ∧ Y_C ≤ Y ≤ Y_D))   (8)

P(X, Y) accepted, if (8) is checked
P(X, Y) unaccepted, otherwise   (9)

The coordinates of each accepted intersection point were also added to the polygon's point list. At the end of this step, all points of the polygon were detected and saved in a list for further processing, as shown in Fig. 4.

3.2 Polygon surface area calculation

The next step was to calculate the area of the polygons using (10) [18]. Let A_i(x_i, y_i), i = 0, …, n, be the vertices of a polygon, where n is the number of polygon vertices and A_0 = A_n:

Surface = (1/2) × Σ_{i=0}^{n} (x_i × y_{i+1} − x_i × y_{i−1})   (10)

Then, the polygonal surfaces generated by the intersection of the fields of view of each pair of cameras were calculated and recorded in the surface matrix: the polygon area generated by the intersection between two cameras was stored in the matrix cell indexed by their identifiers. After calculating all the surfaces, a symmetrical square matrix with a zero diagonal was obtained, as shown in Fig. 5.

Fig. 5 Matrix of surfaces
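The following Java sketch illustrates, under assumed names and types, the two computations just described: solving the line system (7) with the acceptance test (8)-(9), and the surveyor's formula (10). It is a minimal illustration, not the authors' implementation.

```java
import java.util.List;

// Illustrative sketch of Sections 3.1-3.2: intersection of two FoV edges and
// the surveyor's (shoelace) formula (10). Point is an assumed helper type.
record Point(double x, double y) {}

final class PolygonGeometry {

    // Intersection of segments AB and CD: solve the two line equations (7),
    // then accept the point only if it lies within both segments, per (8)-(9).
    // Returns null for parallel lines or a rejected point.
    static Point segmentIntersection(Point a, Point b, Point c, Point d) {
        double a1 = b.y() - a.y(), b1 = a.x() - b.x();
        double c1 = a1 * a.x() + b1 * a.y();
        double a2 = d.y() - c.y(), b2 = c.x() - d.x();
        double c2 = a2 * c.x() + b2 * c.y();
        double det = a1 * b2 - a2 * b1;
        if (det == 0) return null;          // parallel: no unique solution of (7)
        double x = (b2 * c1 - b1 * c2) / det;
        double y = (a1 * c2 - a2 * c1) / det;
        if (within(x, a.x(), b.x()) && within(x, c.x(), d.x())
                && within(y, a.y(), b.y()) && within(y, c.y(), d.y())) {
            return new Point(x, y);         // accepted, (9)
        }
        return null;                        // unaccepted, (9)
    }

    // Covers both orderings of the segment endpoints, as the clauses of (8) do.
    private static boolean within(double v, double lo, double hi) {
        return (lo <= v && v <= hi) || (hi <= v && v <= lo);
    }

    // Surveyor's formula (10); vertices must be in order, and the wrap-around
    // indexing plays the role of the convention A_0 = A_n.
    static double polygonArea(List<Point> poly) {
        int n = poly.size();
        double twiceArea = 0;
        for (int i = 0; i < n; i++) {
            Point p = poly.get(i);
            Point next = poly.get((i + 1) % n);
            Point prev = poly.get((i - 1 + n) % n);
            twiceArea += p.x() * (next.y() - prev.y());
        }
        return Math.abs(twiceArea) / 2;
    }
}
```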
Fig. 6 Algorithm 1: AHC for grouping cameras
3.3 Ascending hierarchical classification (AHC)

Hierarchical classification methods use an iterative process to group or redistribute the original data. They are based on the selection of an aggregation criterion that determines how to agglomerate two clusters or divide a cluster.

This article focused on the AHC for the successive merging of cameras. At each iteration, the two clusters with the largest overlapping area are merged. Initially, each camera is considered a single-camera cluster. The first step was therefore to group together the two cameras closest in terms of overlap area. Following an iterative process, the AHC continued to merge the most overlapping clusters while respecting the previously defined cut-off threshold (β). The process stopped when all cameras were grouped.

Maximum jump was chosen as the grouping strategy [19] due to its ability to generate small homogeneous groups with large inter-group variability. This strategy was applied to the surface matrix obtained at the end of the previous step, as presented by Algorithm 1 (see Fig. 6). When all the cameras are grouped, the base station (cloud computing) notifies every camera of its group ID.

The complexity of the proposed clustering algorithm is of quadratic order, O(n²), in the worst case.
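Since Algorithm 1 itself is available only as a figure (Fig. 6), the following Java sketch reconstructs the grouping loop described above under stated assumptions: it interprets the maximum-jump (complete-linkage) criterion as scoring a cluster pair by the smallest pairwise overlap between their members, and repeatedly merges the best-scoring pair while its score exceeds β. All names are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Hedged reconstruction of the AHC grouping loop over the surface matrix.
// surface[i][j] = overlap area between cameras i and j (symmetric, zero diagonal).
final class AhcCameraGrouping {

    static List<List<Integer>> cluster(double[][] surface, double beta) {
        List<List<Integer>> clusters = new ArrayList<>();
        for (int i = 0; i < surface.length; i++) {
            clusters.add(new ArrayList<>(List.of(i)));  // each camera starts alone
        }
        while (true) {
            int bestA = -1, bestB = -1;
            double best = beta;                         // merges must exceed the cut-off
            for (int a = 0; a < clusters.size(); a++) {
                for (int b = a + 1; b < clusters.size(); b++) {
                    double link = linkage(surface, clusters.get(a), clusters.get(b));
                    if (link > best) { best = link; bestA = a; bestB = b; }
                }
            }
            if (bestA < 0) return clusters;             // no pair left above beta
            clusters.get(bestA).addAll(clusters.remove(bestB));
        }
    }

    // Complete-linkage score: the weakest overlap between any two members.
    private static double linkage(double[][] s, List<Integer> ca, List<Integer> cb) {
        double min = Double.POSITIVE_INFINITY;
        for (int i : ca) for (int j : cb) min = Math.min(min, s[i][j]);
        return min;
    }
}
```

Note that this naive double loop rescans all cluster pairs at every merge and is shown only for clarity; the fast AHC algorithm of [19], based on reducible neighbourhoods, avoids that rescanning.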
4 Algorithm test results

The results were obtained by executing the proposed clustering strategy on a workstation with an Intel® Core™ i5-4200U CPU (1.6 GHz) and 4 GB of RAM. The authors developed their own simulator in Java to test the proposed approach. The system operated without any constraints on the location and properties of the cameras; all cameras were randomly placed in the surveillance zone. The cameras were configured with an FoV angle θ = 60° and R_s = 25 m in a sensing area of 300 m × 300 m. Three types of cameras, with low, medium, and high resolution, were used to carry out the experiments, namely:

• Cyclops camera, with low resolution – CIF (352 × 288) [20];
• MeshEye camera, with medium resolution – SD (640 × 480) [21];
• SleepCAM camera, with high resolution – 1080p HD (1920 × 1080) [22, 23].

Fig. 7 shows the developed application. This simulator allows the user to create cameras with the previously mentioned features (Fig. 7a), to delete them (Fig. 7d), and to modify camera locations (Figs. 7b and c) according to the user's needs through the translation (11) or rotation (12) of the cameras.

In the plane, the translation by the vector u(a, b) transforms the point M(x, y) into M′(x′, y′) as follows:

x′ = x + a
y′ = y + b   (11)
Fig. 7 Simulator presentation
(a) Main interface; dialog boxes for (b) rotation, (c) translation, and (d) deletion of cameras
Let O(x_o, y_o) be the centre of rotation and θ the rotation angle. The rotation transforms the point M(x, y) into M′(x′, y′) as follows:

x′ = (x − x_o) × cos θ − (y − y_o) × sin θ + x_o
y′ = (x − x_o) × sin θ + (y − y_o) × cos θ + y_o   (12)
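A small Java sketch of these two editing operations follows, with illustrative names; it simply applies (11) and (12) to a camera position.

```java
// Illustrative helpers for the simulator's camera editing operations:
// translation by a vector u(a, b), eq. (11), and rotation by theta about
// a centre O(xo, yo), eq. (12). Names are assumptions.
final class Transforms {

    static double[] translate(double x, double y, double a, double b) {
        return new double[] { x + a, y + b };                    // (11)
    }

    static double[] rotate(double x, double y, double theta, double xo, double yo) {
        double dx = x - xo, dy = y - yo;
        return new double[] {
            xo + dx * Math.cos(theta) - dy * Math.sin(theta),    // (12)
            yo + dx * Math.sin(theta) + dy * Math.cos(theta)
        };
    }
}
```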
Fig. 12 Estimated energy consumption using MeshEye camera
where T_slp and P_slp are the period and power consumption of a node in sleep mode; E_up, E_cap, and E_proc are, respectively, the energies consumed to activate a node, capture an image and perform the desired task.

The number of activated cameras is equal to the number of clusters, because in each cluster only one camera is activated at a time. The average energy conservation is calculated using (14):

AvgEC = (AvgClusterSize × E_camera) / E_cluster   (14)

E_cluster = E_camera + (AvgClusterSize − 1) × P_slp × T_slp
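As a hedged numeric illustration of (14), the sketch below plugs placeholder values, not figures reported in this paper, into the cluster-energy expression:

```java
// Hedged sketch of the energy model around (14); all numeric values are
// placeholders for illustration only, not measurements from the paper.
final class EnergyModel {
    public static void main(String[] args) {
        double avgClusterSize = 4.0;  // assumed average number of cameras per cluster
        double eCamera = 2.0;         // assumed energy of one active camera (J)
        double pSlp = 0.05;           // assumed sleep-mode power (W)
        double tSlp = 10.0;           // assumed sleep period (s)

        // E_cluster = E_camera + (AvgClusterSize - 1) * P_slp * T_slp
        double eCluster = eCamera + (avgClusterSize - 1) * pSlp * tSlp;
        // AvgEC = (AvgClusterSize * E_camera) / E_cluster, eq. (14)
        double avgEC = (avgClusterSize * eCamera) / eCluster;
        System.out.printf("E_cluster = %.2f J, AvgEC = %.2f%n", eCluster, avgEC);
    }
}
```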
Figs. 11–13 show, respectively, the variation of the energy consumed according to the number and type of cameras, with and without clustering: the Cyclops ‘CIF (352 × 288) low-resolution’ camera, the MeshEye ‘SD (640 × 480) medium-resolution’ camera and the SleepCAM ‘1080p HD (1920 × 1080) high-resolution’ camera. The tests presented in Figs. 11–13 show that:

• if the camera resolution increases, the energy consumed also increases;
• if the number of cameras increases, the energy consumed increases;
• the system with clustering consumes less energy.

Fig. 14 shows the average energy conservation rate (AvgECR) between the two systems, with and without clustering, using the three types of cameras mentioned above. The results show that the AvgECR increases slightly when the number of low- or medium-resolution cameras increases, but it increases substantially when the number of high-resolution cameras increases.

The energy consumed during the run of the proposed clustering algorithm is given by (15):

E(J) = P(W) × t(s)   (15)

where P(W) is the power and t(s) is the running time. The running time, frequency and power were then measured using 300 cameras:

• Average running time: the average running time of the algorithm on a workstation with an Intel® Core™ i5-4200U CPU (1.6 GHz) is t(s) = 106.5 ms.
• Average frequency: before the execution of the algorithm, the average CPU frequency is 1.16 GHz. When running the algorithm, the average CPU frequency is 1.94 GHz. The state change is therefore 0.78 GHz.
• Average power: the Intel® Core™ i5-4200U CPU consumes 15 W when running at its maximum capacity of 2.6 GHz. Thus, the power consumed while running the clustering algorithm is P(W) = 0.78 × 15 / 2.6 ≈ 4.5 W.
• Average energy: the average energy consumed by the clustering algorithm is E(J) = 4.5 × 106.5 × 10⁻³ ≈ 0.48 J.

5 Conclusion

This article presented a new method for grouping cameras according to FoV overlap areas for WMSNs, using the AHC coupled with the maximum jump criterion. The aim of this method is to facilitate cooperation and coordination between cameras in a
group instead of the entire network, in order to reduce energy consumption and network overload by optimising the detection of redundant events; thus, less processing is required and less data is produced and transmitted, which extends the life of the network.

The system first detects the intersection points of the cameras' FoVs and records them in a list. The overlapping of the fields of view generates polygons whose surfaces must be calculated. Depending on the surface areas, the system determines the camera groups through an iterative process. The proposed system was tested in several cases and obtained regrouping rates that stabilise at 0.75% regardless of the number of cameras. Based on the promising findings presented in this paper, work on the remaining issues is continuing and will be presented in future papers.

6 Acknowledgments

The authors would like to acknowledge Ms Gabriela KOUAHLA, certified translator and language consultant, for her valuable assistance in proofreading this article.

7 References

[1] Gungor, V.C., Hancke, G.P.: ‘Industrial wireless sensor networks: challenges, design principles, and technical approaches’, IEEE Trans. Ind. Electron., 2009, 56, (10), pp. 4258–4265
[2] Li, S., Da Xu, L., Zhao, S.: ‘The internet of things: a survey’, Inf. Syst. Front., 2015, 17, (2), pp. 243–259
[3] Alaei, M., Barcelo-Ordinas, J.M.: ‘Node clustering based on overlapping FoVs for wireless multimedia sensor networks’. Proc. Wireless Communication and Networking Conf., Sydney, 2010, pp. 1–6
[4] Liu, X.: ‘A survey on clustering routing protocols in wireless sensor networks’, Sensors, 2012, 12, (8), pp. 11113–11153
[5] Gherbi, C., Aliouat, Z., Benmohammed, M.: ‘A survey on clustering routing protocols in wireless sensor networks’, Sens. Rev., 2017, 37, (1), pp. 12–25
[6] Alaei, M., Barcelo-Ordinas, J.M.: ‘A cluster-based scheduling for object detection in wireless multimedia sensor networks’. Proc. 5th ACM Symp. on QoS and Security for Wireless and Mobile Networks, Tenerife, Canary Islands, Spain, 2009, pp. 50–56
[7] Alaei, M., Barcelo-Ordinas, J.M.: ‘MCM: multi-cluster-membership approach for FoV-based cluster formation in wireless multimedia sensor networks’. Proc. 6th Int. Wireless Communications and Mobile Computing Conf., Caen, France, 2010, pp. 1161–1165
[8] Alaei, M., Barcelo-Ordinas, J.M.: ‘A method for clustering and cooperation in wireless multimedia sensor networks’, Sensors, 2010, 10, (4), pp. 3145–3169
[9] Alaei, M., Barcelo-Ordinas, J.M.: ‘A collaborative node management scheme for energy-efficient monitoring in wireless multimedia sensor networks’, Wirel. Netw., 2013, 19, (5), pp. 639–659
[10] Chaurasiya, S.K., Mondal, J., Dutta, S.: ‘Field-of-view based hierarchical clustering to prolong network lifetime of WMSN with obstacles’. Proc. Int. Conf. on Electronics, Communication and Computational Engineering (ICECCE), Hosur, 2014, pp. 72–77
[11] Kheirkhah, M.M., Khansari, M.: ‘Clustering wireless camera sensor networks based on overlapped region detection’. Proc. 7th Int. Symp. on Telecommunications (IST'2014), Tehran, 2014, pp. 712–719
[12] Mishra, S., Chaurasiya, S.K.: ‘Cluster based coverage enhancement for directional sensor networks’. Proc. 1st Int. Conf. on Next Generation Computing Technologies (NGCT), Dehradun, 2015, pp. 212–216
[13] Costa, D.D., Silva, I., Guedes, L.A., et al.: ‘Optimal sensing redundancy for multiple perspectives of targets in wireless visual sensor networks’. Proc. 13th Int. Conf. on Industrial Informatics (INDIN), Cambridge, 2015, pp. 185–190
[14] Jung, K., Lee, J.Y., Jeong, H.Y.: ‘Improving adaptive cluster head selection of TEEN protocol using fuzzy logic for WMSN’, Multimedia Tools Appl., 2017, 76, (17), pp. 18175–18190
[15] Zarifneshat, M., Khadivi, P., Saidi, H.: ‘A semi-localized algorithm for cluster head selection for target tracking in grid wireless sensor networks’, Ad Hoc Sens. Wirel. Netw., 2015, 25, (3–4), pp. 263–287
[16] Sharmin, S.: ‘α-Overlapping area coverage for clustered directional sensor networks’, Comput. Commun., 2017, 109, pp. 89–103
[17] Sati, P., Goel, P., Goswami, S.: ‘Enhancing coverage area in self-orienting directional sensor networks’, Int. J. Inf. Comput. Technol., 2014, 4, pp. 1661–1666
[18] Braden, B.: ‘The surveyor's area formula’, Coll. Math. J., 1986, 17, (4), pp. 326–337
[19] Bruynooghe, M.: ‘Classification ascendante hiérarchique des grands ensembles de données: un algorithme rapide fondé sur la construction des voisinages réductibles’, Les Cahiers de l'Analyse des Données, 1978, 3, (1), pp. 7–33
[20] Rahimi, M., Baer, R., Iroezi, O.I., et al.: ‘Cyclops: in situ image sensing and interpretation in wireless sensor networks’. Proc. 3rd Int. Conf. on Embedded Networked Sensor Systems, San Diego, CA, USA, 2005, pp. 192–204
[21] Hengstler, S., Prashanth, D., Fong, S., et al.: ‘MeshEye: a hybrid-resolution smart camera mote for applications in distributed intelligent surveillance’. Proc. 6th Int. Conf. on Information Processing in Sensor Networks, Cambridge, MA, USA, 2007, pp. 360–369
[22] Mekonnen, T., Harjula, E., Heikkinen, A., et al.: ‘Energy efficient event driven video streaming surveillance using sleepyCAM’. Proc. 2017 IEEE Int. Conf. on Computer and Information Technology (CIT), Helsinki, Finland, 2017, pp. 107–113
[23] Mekonnen, T., Harjula, E., Koskela, T., et al.: ‘sleepyCAM: power management mechanism for wireless video-surveillance cameras’. Proc. 2017 IEEE Int. Conf. on Communications Workshops (ICC Workshops), Paris, France, 2017, pp. 91–96