
20436394, 2019, 6, Downloaded from https://ietresearch.onlinelibrary.wiley.com/doi/10.1049/iet-wss.2019.0030 by INASP/HINARI - ALGERIA, Wiley Online Library on [04/11/2022]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License.
IET Wireless Sensor Systems

Research Article

Ascending hierarchical classification for camera clustering based on FoV overlaps for WMSN

ISSN 2043-6386
Received on 11th February 2019
Revised on 14th July 2019
Accepted on 24th July 2019
E-First on 5th September 2019
doi: 10.1049/iet-wss.2019.0030
www.ietdl.org

Ala-Eddine Benrazek¹, Brahim Farou¹,², Hamid Seridi¹,², Zineddine Kouahla¹,², Muhammet Kurulay³
¹LabSTIC, 8 May 1945 University, P.O. Box 401, 24000 Guelma, Algeria
²Department of Computer Science, 8 May 1945 University, P.O. Box 401, 24000 Guelma, Algeria
³Mathematics Department, Yildiz Technical University, Istanbul, Turkey

E-mail: [email protected]

Abstract: Wireless multimedia sensor networks (WMSNs) currently face the problem of rapidly decreasing energy due to the acquisition, processing and transmission of massive multimedia data. This decrease in energy affects the lifetime of the network, resulting in higher overhead costs and a deterioration in quality-of-service. This study presents a new grouping strategy that mitigates these energy problems. The objective is to group the cameras in a WMSN according to their fields of view (FoVs). The proposed system begins by finding all polygons created by the intersection of pairs of cameras' FoVs. Based on the generated surfaces, an ascending hierarchical classification is applied to group cameras with strongly overlapping fields of vision. The results obtained with 300 randomly positioned cameras show the effectiveness of the proposed method in minimising redundant detection, reducing energy consumption, increasing network lifetime, and reducing network overload.

1 Introduction

Over the past decade, researchers have paid particular attention to wireless multimedia sensor networks (WMSNs) to develop more efficient, flexible, and cost-effective surveillance systems adapted to emerging applications for smart city construction and internet of things systems [1, 2]. This network consists of a set of visual sensors represented by smart cameras distributed in the surveillance zone and powered by batteries. In surveillance systems, the role of cameras is to extract visual data captured in the environment and share it with other cameras or the base station for analysis and processing to track or re-identify objects.

Traditional WMSNs are characterised by the production of a large amount of multimedia data, and require very high bandwidth for management and sharing as well as an extensive computing centre for data processing and analysis. However, the main weakness of WMSNs is the rapid decrease in energy due to the continuous operation of the (autonomous) cameras.

One of the key solutions to address power consumption and bandwidth management, and to reduce the amount of data produced by monitoring applications, is clustering and camera planning. Clustering has several objectives, namely [3]: network scalability, reduction of power consumption to extend battery and network life, stabilisation of the network topology, and minimisation of overall costs.

Each camera has a detection zone or range called a field of view (FoV), within which it can detect everything. In WMSNs, camera FoVs can overlap, meaning that there are common sub-areas that cause a network energy loss due to redundant detection in the overlaps. To address this problem, this article presents a clustering method based on the FoV overlap surface criterion instead of the distance criterion between cameras.

The main purpose of this method is to form groups that restrict the communication area and reduce the redundancy of the detected data, thus reducing power consumption and extending network life in the context of video surveillance systems in smart cities.

The remainder of the paper is organised as follows: Section 2 presents the related work. Section 3 describes how to calculate the area of overlapping polygons and explains the grouping method. Results and discussions are presented in Section 4. Section 5 draws conclusions.

2 Related work

Several studies in the literature use clustering algorithms in wireless sensor networks (WSNs) to develop energy-efficient routing protocols that increase the network monitoring lifetime [4, 5]. In WMSNs, Alaei et al. [3, 6–9] have proposed several camera classification algorithms based on FoV overlap areas to establish cooperation between the formed groups to detect objects. This line of work aims to conserve energy and increase the lifetime of the network; the authors have shown how the network lifetime is extended with the proposed camera selection and scheduling algorithm compared to non-collaborative scenarios. The authors of [10] have used the same principle, proposing a hierarchical grouping algorithm based on the overlapped fields of view.

In [11], the authors have introduced a clustering algorithm that uses the technique of scene cutting to detect overlapping regions in order to reduce redundant video streams and minimise energy waste. This method is reliable, but only in areas with low overlap. Mishra et al. [12] have presented a method of grouping cameras according to their communication radius to improve the coverage of directional sensor networks. The authors model the groups as circles representing the communication range and select the first node as the centre of the circles. Unfortunately, cameras do not have a circular field of vision, which makes this approach inapplicable beyond the communication aspect itself.

Costa et al. [13] have proposed an algorithm to select the minimum number of activated cameras needed to cover all targets. The authors take the multiple views of the targets into account to calculate the redundancy in the WMSN. This approach requires computational power at the camera level to support the recognition mechanism, which in most cases fails for multiple reasons such as a high number of targets, environmental complexity, and camera performance limits.

Jung et al. [14] have introduced a clustering algorithm named FL-TEEN that uses a fuzzy inference system to improve the adaptability of cluster-head selection and thus enhance sensor node lifetime performance. This semi-automatic approach requires human intervention to generate rules for each environment. Zarifneshat et al. [15] have presented a distributed semi-localised clustering scheme where the camera selection process is

IET Wirel. Sens. Syst., 2019, Vol. 9 Iss. 6, pp. 382-388 382
© The Institution of Engineering and Technology 2019
Fig. 1 FoV of camera sensor [8]

Fig. 2 Example with cut-off threshold

assigned to their leader in a dynamic and cooperative manner for target tracking.
Sharmin et al. [16] have developed a zone coverage system, sensitive to the network lifetime, to solve the region coverage problem. The system uses a grouping mechanism based on the degree of overlap and residual energy levels. This proposal promotes network management, in the event of sensor failure, to improve the quality of monitoring. In [17], Sati et al. have presented an automatic rotation mechanism of each camera's FoV to maximise the coverage area with the minimum number of cameras in the surveillance zone and consequently avoid redundant detections. However, this approach requires the prior installation of panoramic cameras.

3 Proposed clustering method

The proposed clustering method was executed in the Base Station (Cloud Computing) during the initialisation phase, before the monitoring process begins. It consisted of three main steps, namely: (i) determination of the FoV intersection polygon for two cameras having an overlap between them; (ii) calculation of the surface area of each polygon; and (iii) finally, application of the ascending hierarchical classification (AHC), based on the principle of complete linkage, to group the cameras that have the largest overlap area. Before describing the steps of the clustering method, some useful concepts used in this work are presented.

• Field of View (FoV): the FoV of a camera is defined as the area in which the camera can easily and accurately detect the objects it covers. According to [6], the FoV can be modelled by the surface of an isosceles triangle (ABC), as shown in Fig. 1. Vertex A is the camera position; the other two vertices are calculated by (1), (2), (3) and (4), as follows:

X_B = X_A + R_s × cos(α)    (1)

Y_B = Y_A + R_s × sin(α)    (2)

X_C = X_A + R_s × cos((α + θ) mod 2π)    (3)

Y_C = Y_A + R_s × sin((α + θ) mod 2π)    (4)

• Clusters: a cluster is a subset of cameras with overlapping fields of view. The surface of the overlap area between the FoVs of two nodes determines whether they can be in the same cluster according to the proposed clustering algorithm.

• Cut-off threshold (β): a value defined to determine the most appropriate cut-off level for a group of cameras. It must be as relevant as possible, as illustrated in Fig. 2. This value is used in the clustering phase and stops the gathering process for cameras with little overlap.

• Isolated camera: a camera is considered isolated if and only if all of its overlap surfaces with the other cameras are null or lower than the cut-off threshold (β).

3.1 Overlap area determination

The proposed model used irregular polygons as the overlapping fields of view between two cameras. To determine these polygons, all the vertices generated by the intersection of the two FoV triangles were first identified. Fig. 3 shows some examples of the intersection of two fields of view.

There are two types of vertices (Fig. 3): vertices generated by the intersection of the triangles' sides, and vertices of one triangle included in the other triangle. First, it is necessary to look for the vertices of each triangle inside the other triangle. Equation (5) and the conditional expression (6) were used to determine whether or not a point lies in a triangle.

Let T be a triangle with vertices A(X_1, Y_1), B(X_2, Y_2) and C(X_3, Y_3), let M(X, Y) be a point in space, and let m, k ∈ ℝ:

X = k × (X_1 − X_2) + m × (X_3 − X_1) + X_1
Y = k × (Y_1 − Y_2) + m × (Y_3 − Y_1) + Y_1    (5)

M ∈ (ABC), if m ≥ 0 ∧ k ≥ 0 ∧ m + k ≤ 1
M ∉ (ABC), otherwise    (6)

If a vertex of one triangle satisfies condition (6) with respect to the other triangle, its coordinates are added to the list of polygon points.
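The FoV construction (1)–(4) and the point-in-triangle test (5)–(6) translate directly into code. The sketch below is illustrative, not the authors' implementation: the function names are ours, and the membership test solves for the barycentric coefficients k and m in the standard orientation M = A + k(B − A) + m(C − A) (equation (5) as printed subtracts the B term in the opposite order; the standard orientation is what makes condition (6) select the triangle's interior).

```python
import math

def fov_triangle(xa, ya, rs, alpha, theta):
    """Vertices of the isosceles FoV triangle, equations (1)-(4).
    A is the camera position; B and C lie at sensing range rs."""
    xb = xa + rs * math.cos(alpha)                            # (1)
    yb = ya + rs * math.sin(alpha)                            # (2)
    xc = xa + rs * math.cos((alpha + theta) % (2 * math.pi))  # (3)
    yc = ya + rs * math.sin((alpha + theta) % (2 * math.pi))  # (4)
    return (xa, ya), (xb, yb), (xc, yc)

def point_in_triangle(px, py, a, b, c):
    """Solve P - A = k(B - A) + m(C - A) for (k, m), then apply
    condition (6): P is inside iff k >= 0, m >= 0 and k + m <= 1."""
    v1 = (b[0] - a[0], b[1] - a[1])       # B - A
    v0 = (c[0] - a[0], c[1] - a[1])       # C - A
    v2 = (px - a[0], py - a[1])           # P - A
    den = v1[0] * v0[1] - v0[0] * v1[1]
    if den == 0:                          # degenerate (flat) triangle
        return False
    k = (v2[0] * v0[1] - v0[0] * v2[1]) / den
    m = (v1[0] * v2[1] - v2[0] * v1[1]) / den
    return k >= 0 and m >= 0 and k + m <= 1
```

With the paper's settings (R_s = 25 m, θ = 60°), `fov_triangle(0, 0, 25, 0, math.pi / 3)` yields a triangle covering the sector between 0° and 60°; any vertex of a second camera's triangle that passes `point_in_triangle` is kept as a vertex of the overlap polygon.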

Fig. 3 Different possibilities for FoV overlapping

Fig. 4 Example of polygon intersection result

The next step was to determine the points of intersection between the sides of the triangles (two by two) by solving (7) to find the intersection point of two straight lines. The conditional expression (9) was used to verify whether or not the intersection point is accepted as a vertex of the polygon.
Let AB and CD be two lines expressed with (7):

Y = a × X + b
Y = a′ × X + b′    (7)

where a, b ∈ ℝ with (a, b) ≠ (0, 0), and a′, b′ ∈ ℝ with (a′, b′) ≠ (0, 0).

(X_A ≤ X ≤ X_B ∧ X_C ≤ X ≤ X_D) ∨
(X_A ≤ X ≤ X_B ∧ X_C ≥ X ≥ X_D) ∨
(X_A ≥ X ≥ X_B ∧ X_C ≥ X ≥ X_D) ∨
(X_A ≥ X ≥ X_B ∧ X_C ≤ X ≤ X_D) ∨
(Y_A ≤ Y ≤ Y_B ∧ Y_C ≤ Y ≤ Y_D) ∨
(Y_A ≤ Y ≤ Y_B ∧ Y_C ≥ Y ≥ Y_D) ∨
(Y_A ≥ Y ≥ Y_B ∧ Y_C ≥ Y ≥ Y_D) ∨
(Y_A ≥ Y ≥ Y_B ∧ Y_C ≤ Y ≤ Y_D)    (8)

P(X, Y) accepted, if (8) is checked
P(X, Y) unaccepted, otherwise    (9)

The coordinates of each accepted intersection point were also added to the polygon's point list. At the end of this step, all points of the polygon were detected and saved in a list for further processing, as shown in Fig. 4.

Fig. 5 Matrix of surfaces

3.2 Polygon surface area calculation

The next step was to calculate the area of the polygons using (10) [18].
Let A_i(x_i, y_i), i = 0, …, n, be a polygon, where n is the number of polygon vertices, such that A_0 = A_n:

Surface = (1/2) × Σ_{i=0}^{n} x_i × (y_{i+1} − y_{i−1})    (10)

Then, the polygonal surfaces generated by the intersections of the fields of view of each pair of cameras were calculated and recorded in the surface matrix: the polygon area generated by the intersection of two cameras was stored at the matrix entry indexed by their identifiers. After calculating all surfaces, a symmetrical square matrix with a zero diagonal was obtained, as shown in Fig. 5.
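The two geometric building blocks of Sections 3.1 and 3.2 — accepting an intersection point of two triangle sides (conditions (8)–(9)) and the surveyor's formula (10) — can be sketched as follows. This is an illustrative reading, not the paper's code: the segment test uses the equivalent parametric form of the line equations (7), and `polygon_area` assumes the vertices are already ordered along the polygon boundary, as the list construction of Section 3.1 provides.

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection of segments [p1, p2] and [p3, p4].
    Solves the pair of line equations (7) in parametric form and
    accepts the point only if it lies within both segments'
    coordinate ranges, which is what conditions (8)-(9) check."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if den == 0:                       # parallel sides: no unique point
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
    u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
    if 0 <= t <= 1 and 0 <= u <= 1:    # inside both segments
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None                        # rejected, as in (9)

def polygon_area(points):
    """Surveyor's (shoelace) formula, equation (10)."""
    n = len(points)
    s = sum(points[i][0] * (points[(i + 1) % n][1] - points[(i - 1) % n][1])
            for i in range(n))
    return abs(s) / 2.0
```

Running `segment_intersection` on each pair of sides of two FoV triangles, together with the interior-vertex test of (5)–(6), yields the overlap polygon's vertex list; `polygon_area` on that list fills one entry of the surface matrix of Fig. 5.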

Fig. 6 Algorithm 1: AHC for grouping cameras

3.3 Ascending hierarchical classification (AHC)

Hierarchical classification methods use an iterative process to group or redistribute the original data. They are based on the selection of an aggregation criterion that determines how to agglomerate two clusters or divide a cluster.
This article focused on the AHC for the successive merging of cameras. At each iteration, the two clusters with the largest overlapping area measurement are merged into one cluster. Initially, all cameras are considered as single-camera clusters. The first step was to group together the two closest cameras in terms of overlap area. Following an iterative process, the AHC continued to merge the most overlapping clusters while respecting the cut-off threshold (β) defined previously. The process stopped when all cameras were grouped.
The maximum jump (complete linkage) was chosen as the grouping strategy [19] due to its ability to generate small homogeneous groups with large inter-group variability. This strategy was applied to the surface matrix obtained at the end of the previous step, as presented in Algorithm 1 (see Fig. 6). When all the cameras are grouped, the Base Station (Cloud Computing) notifies each camera of its group ID.
The complexity of the proposed clustering algorithm is of quadratic order, O(n²), in the worst case.

4 Algorithm test results

The results were obtained after executing the proposed clustering strategy on a workstation with an Intel® Core™ i5-4200U CPU (1.6 GHz) and 4 GB of RAM. The authors developed their own simulator in JAVA to test the proposed approach. The system operated without any constraints on the location and properties of the cameras; all cameras were randomly placed in the surveillance zone. The cameras were configured with a FoV angle θ = 60° and R_s = 25 m in a sensing area of 300 m × 300 m. Three types of cameras, with low, medium, and high resolution, were used to carry out the experiments, namely:

• Cyclops camera, with low resolution – CIF (352 × 288) [20];
• MeshEye camera, with medium resolution – SD (640 × 480) [21];
• SleepCAM camera, with high resolution – 1080p HD (1920 × 1080) [22, 23].

Fig. 7 shows the developed application. The simulator allows the user to create cameras with the previously mentioned features (Fig. 7a), to delete them (Fig. 7d), and to modify camera locations (Figs. 7b and c) according to the user's needs by translation (11) or rotation (12) of the cameras.
In the plane, the translation by the vector u(a, b) transforms the point M(x, y) into M′(x′, y′) as follows:

x′ = x + a
y′ = y + b    (11)
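Algorithm 1 of Section 3.3 is only shown as a figure (Fig. 6), but the complete-linkage AHC loop it describes can be sketched as follows. This is our own simplified reading, with invented names: clusters start as singletons, the linkage between two clusters is the smallest pairwise overlap between their member cameras (the maximum-jump/complete-linkage criterion), and merging stops once no linkage exceeds the cut-off threshold β, leaving weakly overlapping cameras isolated.

```python
def ahc_complete_linkage(surface, beta):
    """Group cameras by their FoV overlap areas.
    surface[i][j]: overlap area between cameras i and j
    (symmetric matrix with zero diagonal); beta: cut-off threshold."""
    clusters = [{i} for i in range(len(surface))]
    while len(clusters) > 1:
        best, pair = beta, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Complete linkage: a cluster pair is only as close
                # as its least-overlapping pair of member cameras.
                link = min(surface[i][j]
                           for i in clusters[a] for j in clusters[b])
                if link > best:
                    best, pair = link, (a, b)
        if pair is None:          # nothing left above the threshold
            break
        a, b = pair
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters
```

On a toy matrix where cameras 0–1 and 2–3 overlap strongly but the two pairs do not overlap each other, the loop returns those two groups and stops, since the inter-group linkage falls below β. Note that this naive version re-scans all cluster pairs at each merge and is therefore costlier than the quadratic-order bound stated for Algorithm 1.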

Fig. 7 Simulator presentation
(a) Main interface, Dialog box for, (b) Rotation, (c) Translation and, (d) Deletion of cameras

Let O(x_o, y_o) be the origin and θ the rotation angle. The rotation transforms the point M(x, y) into M′(x′, y′) as follows:

x′ = x × cos(θ) − y × sin(θ) + x_o
y′ = x × sin(θ) + y × cos(θ) + y_o    (12)
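Transforms (11) and (12) are simple to implement; a sketch with our own function names, following the equations exactly as written (in (12) the input point is therefore expressed relative to the rotation origin O, whose coordinates are added back after rotating):

```python
import math

def translate(x, y, a, b):
    """Translation by vector u(a, b), equation (11)."""
    return x + a, y + b

def rotate(x, y, xo, yo, theta):
    """Rotation by angle theta, equation (12) as printed:
    rotate (x, y) about the origin, then offset by O(xo, yo)."""
    xp = x * math.cos(theta) - y * math.sin(theta) + xo
    yp = x * math.sin(theta) + y * math.cos(theta) + yo
    return xp, yp
```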

Fig. 8 shows an example illustrating the different steps that the proposed system performs to group the cameras. Figs. 8a and c present the first two steps: determination of the overlap areas (intersection polygons) and calculation of the areas of the polygons. Fig. 8b shows the last step of grouping the clustered cameras, represented with the same colour, using the AHC algorithm.

Fig. 8 Simulation example
(a) Intersection polygons, (b) Clustering result, (c) Surfaces matrix

The system has been tested for six scenarios containing 50, 100, 150, 200, 250, and 300 randomly positioned cameras. Since the quality of the results is closely linked to the location of the cameras, the system was tested 50 times for each case, where each execution represents a random spatial dispersion of the cameras. As a result, the system was run 300 times.

Fig. 9 shows the average number and the maximum number of clusters in the network with a cut-off threshold β = 0.5% of the total area of a camera's FoV. For 300 cameras, an average of 76 groups and a maximum of 84 groups were obtained, giving a grouping rate of 0.74%. For 150 cameras, a rate of 0.8% was recorded. For 50 cameras, the grouping rate was 0.75%. Therefore, it can be concluded that the proposed system achieved a stability of about 0.75%.

Fig. 9 Average and maximum number of clusters

Fig. 10 shows that the number of cameras in a group increases logarithmically with the number of cameras in the network. This increase is due to the creation of wider overlapping areas between the cameras' fields of view. In the 300-camera test, the average group size reached 6.22 cameras, while in the 50-camera network it reached 4.2 cameras.

To evaluate the proposed method's energy behaviour, the same evaluation method as in [3] was used. It is assumed that the cameras are periodically woken up for surveillance and can operate in three possible states: capture, processing, and transmission. Each camera belongs to a unique cluster (Single Cluster Membership) according to the results obtained by the proposed method. Equation (13) was used to calculate the energy consumption of a camera over a time interval that was empirically set at T = 5 s:

E_camera = T_slp × P_slp + E_up + E_cap + E_proc    (13)

Fig. 10 Average of cluster-size

Fig. 11 Estimated energy consumption using Cyclops camera

Fig. 12 Estimated energy consumption using MeshEye camera

Fig. 13 Estimated energy consumption using SleepCAM camera

Fig. 14 Average energy conservation ratio (AvgECR)

where T_slp and P_slp are the period and the power consumption of a node in sleep mode; E_up, E_cap, and E_proc are, respectively, the energies consumed to activate a node, capture an image, and perform the desired task.

The number of activated cameras is equal to the number of clusters, because in each cluster only one camera is activated at a time. The average energy conservation is calculated using (14):

AvgEC = (AvgClusterSize × E_camera) / E_cluster    (14)

where

E_cluster = E_camera + (AvgClusterSize − 1) × P_slp × T

Figs. 11–13 show, respectively, the variation of the energy consumed according to the number and type of cameras, with and without clustering, for the Cyclops 'CIF (352 × 288) low-resolution' camera, the MeshEye 'SD (640 × 480) medium-resolution' camera and the SleepCAM '1080p HD (1920 × 1080) high-resolution' camera. The tests presented in Figs. 11–13 show that:

• if the camera resolution increases, the energy consumed also increases;
• if the number of cameras increases, the energy consumed increases;
• the system with clustering consumes less energy.

Fig. 14 shows the average energy conservation ratio (AvgECR) between the two systems, with and without clustering, using the three types of cameras mentioned above. The results show that the AvgECR increases slightly when the number of low- or medium-resolution cameras increases, but increases substantially when the number of high-resolution cameras increases.

The energy consumed during the run of the proposed clustering algorithm is given by (15):

E(J) = P(W) × t(s)    (15)

where P(W) is the power and t(s) is the running time. The running time, frequency and power were then measured using 300 cameras:

• Average running time: the average running time of the algorithm on a workstation with an Intel® Core™ i5-4200U CPU (1.6 GHz) is t(s) = 106.5 ms.
• Average frequency: before the execution of the algorithm, the average CPU frequency is 1.16 GHz; while running the algorithm, it is 1.94 GHz. The state change is equal to 0.78 GHz.
• Average power: the Intel® Core™ i5-4200U CPU consumes 15 W when running at its maximum capacity of 2.6 GHz. Thus, the power consumed while running the clustering algorithm is P(W) = (0.78 × 15)/2.6 = 4.6 W.
• Average energy: the average energy consumed by the clustering algorithm is E(J) = 4.6 × 106.5 × 10⁻³ = 0.48 J.

5 Conclusion

This article presented a new method for grouping cameras according to FoV overlap areas for WMSNs, using the AHC coupled with the maximum jump criterion. The aim of this method is to facilitate cooperation and coordination between cameras in a

group instead of the entire network, to reduce energy consumption and network overload by optimising the detection of redundant events; less processing and less data produced and transmitted extend the life of the network.

The system first detects the points of intersection of the cameras' FoVs and records them in a list. The overlapping fields of view generate polygons whose surfaces must be calculated. Depending on the surface areas, the system determines the camera groups through an iterative process. The proposed system was tested in several cases and obtained regrouping rates which stabilise at 0.75% regardless of the number of cameras. Based on the promising findings presented in this paper, work on the remaining issues is continuing and will be presented in future papers.

6 Acknowledgments

The authors would like to acknowledge Ms Gabriela KOUAHLA, certified translator and language consultant, for her valuable assistance in proofreading this article.

7 References

[1] Gungor, V.C., Hancke, G.P.: 'Industrial wireless sensor networks: challenges, design principles, and technical approaches', IEEE Trans. Ind. Electron., 2009, 56, (10), pp. 4258–4265
[2] Li, S., Da Xu, L., Zhao, S.: 'The internet of things: a survey', Inf. Syst. Front., 2015, 17, (2), pp. 243–259
[3] Alaei, M., Barcelo-Ordinas, J.M.: 'Node clustering based on overlapping FoVs for wireless multimedia sensor networks'. Proc. Wireless Communication and Networking Conf., Sydney, 2010, pp. 1–6
[4] Liu, X.: 'A survey on clustering routing protocols in wireless sensor networks', Sensors, 2012, 12, (8), pp. 11113–11153
[5] Gherbi, C., Aliouat, Z., Benmohammed, M.: 'A survey on clustering routing protocols in wireless sensor networks', Sens. Rev., 2017, 37, (1), pp. 12–25
[6] Alaei, M., Barcelo-Ordinas, J.M.: 'A cluster-based scheduling for object detection in wireless multimedia sensor networks'. Proc. 5th ACM Symp. on QoS and Security for Wireless and Mobile Networks, Tenerife, Canary Islands, Spain, 2009, pp. 50–56
[7] Alaei, M., Barcelo-Ordinas, J.M.: 'MCM: multi-cluster-membership approach for FoV-based cluster formation in wireless multimedia sensor networks'. Proc. 6th Int. Wireless Communications and Mobile Computing Conf., Caen, France, 2010, pp. 1161–1165
[8] Alaei, M., Barcelo-Ordinas, J.M.: 'A method for clustering and cooperation in wireless multimedia sensor networks', Sensors, 2010, 10, (4), pp. 3145–3169
[9] Alaei, M., Barcelo-Ordinas, J.M.: 'A collaborative node management scheme for energy-efficient monitoring in wireless multimedia sensor networks', Wirel. Netw., 2013, 19, (5), pp. 639–659
[10] Chaurasiya, S.K., Mondal, J., Dutta, S.: 'Field-of-view based hierarchical clustering to prolong network lifetime of WMSN with obstacles'. Proc. Int. Conf. on Electronics, Communication and Computational Engineering (ICECCE), Hosur, 2014, pp. 72–77
[11] Kheirkhah, M.M., Khansari, M.: 'Clustering wireless camera sensor networks based on overlapped region detection'. Proc. 7th Int. Symp. on Telecommunications (IST'2014), Tehran, 2014, pp. 712–719
[12] Mishra, S., Chaurasiya, S.K.: 'Cluster based coverage enhancement for directional sensor networks'. Proc. 1st Int. Conf. on Next Generation Computing Technologies (NGCT), Dehradun, 2015, pp. 212–216
[13] Costa, D.D., Silva, I., Guedes, L.A., et al.: 'Optimal sensing redundancy for multiple perspectives of targets in wireless visual sensor networks'. Proc. 13th Int. Conf. on Industrial Informatics (INDIN), Cambridge, 2015, pp. 185–190
[14] Jung, K., Lee, J.Y., Jeong, H.Y.: 'Improving adaptive cluster head selection of TEEN protocol using fuzzy logic for WMSN', Multimedia Tools Appl., 2017, 76, (17), pp. 18175–18190
[15] Zarifneshat, M., Khadivi, P., Saidi, H.: 'A semi-localized algorithm for cluster head selection for target tracking in grid wireless sensor networks', Ad Hoc Sens. Wirel. Netw., 2015, 25, (3–4), pp. 263–287
[16] Sharmin, S.: 'α-Overlapping area coverage for clustered directional sensor networks', Comput. Commun., 2017, 109, pp. 89–103
[17] Sati, P., Goel, P., Goswami, S.: 'Enhancing coverage area in self-orienting directional sensor networks', Int. J. Inf. Comput. Technol., 2014, 4, pp. 1661–1666
[18] Braden, B.: 'The surveyor's area formula', Coll. Math. J., 1986, 17, (4), pp. 326–337
[19] Bruynooghe, M.: 'Classification ascendante hiérarchique des grands ensembles de données: un algorithme rapide fondé sur la construction des voisinages réductibles', Les cahiers de l'analyse de données, 1978, 3, (1), pp. 7–33
[20] Rahimi, M., Baer, R., Iroezi, O.I., et al.: 'Cyclops: in situ image sensing and interpretation in wireless sensor networks'. Proc. 3rd Int. Conf. on Embedded Networked Sensor Systems, San Diego, CA, USA, 2005, pp. 192–204
[21] Hengstler, S., Prashanth, D., Fong, S., et al.: 'MeshEye: a hybrid-resolution smart camera mote for applications in distributed intelligent surveillance'. Proc. 6th Int. Conf. on Information Processing in Sensor Networks, Cambridge, MA, USA, 2007, pp. 360–369
[22] Mekonnen, T., Harjula, E., Heikkinen, A., et al.: 'Energy efficient event driven video streaming surveillance using sleepyCAM'. Proc. 2017 IEEE Int. Conf. on Computer and Information Technology (CIT), Helsinki, Finland, 2017, pp. 107–113
[23] Mekonnen, T., Harjula, E., Koskela, T., et al.: 'sleepyCAM: power management mechanism for wireless video-surveillance cameras'. Proc. 2017 IEEE Int. Conf. on Communications Workshops (ICC Workshops), Paris, France, 2017, pp. 91–96
