7.2. Performance of the Algorithms Solving the MTCBC-k Problem
In the next experiment, the performance of the different variants (without clusters, with horizontal clustering, and with vertical clustering) was compared with each other, as well as with two baseline methods. The first baseline algorithm, abbreviated as BC, ensured k-barrier coverage but did not perform maximum target coverage. In detail, in the first step, the sectors encoded by the barrier coverage solution were set to active. For those sensors that were not selected this way, a random sector was chosen. This configuration was never changed afterwards. The second baseline method, abbreviated as MTC, performed maximum target coverage without ensuring barrier coverage.
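As an illustration, the following minimal sketch shows how the BC baseline could fix the sensor configuration once at start-up; the `Sensor` structure, the function name, and the `barrier_sectors` mapping (the output of a k-barrier coverage solver) are hypothetical stand-ins, not the paper's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Sensor:
    num_sectors: int        # number of selectable sectors
    active_sector: int = 0  # index of the currently active sector

def configure_bc_baseline(sensors, barrier_sectors):
    """Fix the active sectors once and for all: sensors forming the barriers
    follow the k-barrier coverage solution (barrier_sectors maps sensor index
    to sector index), all other sensors pick a random sector."""
    for i, sensor in enumerate(sensors):
        if i in barrier_sectors:
            sensor.active_sector = barrier_sectors[i]
        else:
            sensor.active_sector = random.randrange(sensor.num_sectors)
    # The resulting configuration is never changed afterwards.
```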
During the simulations, two types of scenarios were examined. In the first, the positions of all targets were known at each step, while in the second, the algorithms could only use the positions of targets covered by a sector at that moment. In what follows, the two scenarios will be referred to as omniscient and only-camera. For all methods, the implementation varied slightly between the two scenarios.
In the only-camera scenario, all methods were modified to ensure that each sensor scanned its surroundings whenever possible. Specifically, for each sensor whose active sector was not determined by the algorithm, the sector following the currently active one was selected to be active in the next time step. In addition, for the ILP variants, k-barrier coverage was ensured with the minimal number of sensors in order to increase the number of sensors that could scan their surroundings.
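A minimal sketch of this scanning rule, reusing the hypothetical `Sensor` structure from the previous sketch (the names below are illustrative only):

```python
def advance_free_sensors(sensors, fixed_by_algorithm):
    """Rotate the active sector of every sensor whose sector was not fixed
    by the coverage algorithm, so that it keeps scanning its surroundings
    in the only-camera scenario."""
    for i, sensor in enumerate(sensors):
        if i not in fixed_by_algorithm:
            # Switch to the consecutive sector, wrapping around.
            sensor.active_sector = (sensor.active_sector + 1) % sensor.num_sectors
```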
In the omniscient scenario, only the ILP variants were modified, in such a way that, among the permissible sector selections, the one requiring the fewest changes was preferred.
The performance was analyzed based on two different criteria. In both cases, for each target, only those time steps were taken into account in which it was inside the ROI and within the detection distance of at least one sensor. In the first case, tracking efficiency was measured: for each target, the number of time steps during which it was covered was divided by the total number of time steps considered for that target, and then the average of these ratios was taken. The resulting measure is referred to as the average tracking ratio in the sequel.
The second performance indicator, average coverage ratio, characterizes the efficiency of coverage. Here, in each time step, the number of covered targets was divided by the number of all targets to be considered in that time step. After this, the average of these ratios was taken.
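The two metrics could be computed as in the following sketch; the per-step bookkeeping (which targets are considered and which are covered) is assumed to be available from the simulation, and the container names are illustrative.

```python
def average_tracking_ratio(covered_steps, considered_steps):
    """covered_steps[t] / considered_steps[t]: number of time steps in which
    target t was covered / was in the ROI and within detection distance of
    at least one sensor. Returns the mean of the per-target ratios."""
    ratios = [covered_steps.get(t, 0) / n
              for t, n in considered_steps.items() if n > 0]
    return sum(ratios) / len(ratios)

def average_coverage_ratio(covered_per_step, considered_per_step):
    """covered_per_step[s] / considered_per_step[s]: number of covered /
    considered targets in time step s. Returns the mean of the per-step ratios."""
    ratios = [c / n for c, n in zip(covered_per_step, considered_per_step) if n > 0]
    return sum(ratios) / len(ratios)
```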
In the simulations, one pixel represented one meter. There were 100 targets on the field at each time step. They moved two meters per time step randomly, but with a downward tendency. The simulations ran until 1000 targets had completely left the belt. Once a group left the field, a new random group was generated. The detection distance of the sensors was 100 m. Three sensor setups were tested, referred to as DENSE, SPARSE, and NARROW; their parameters can be found in Table 2. In the DENSE setup, the sensors were first placed in a grid, and each sensor position was then independently shifted by a vector drawn from a zero-mean normal distribution whose covariance is a diagonal matrix with equal diagonal elements. The resulting sensor network was capable of providing three-barrier coverage. The placement of the sensors was performed in a similar manner for the other two setups as well. Note that the area of the belt was the same for all setups. In the case of vertical clustering, two clusters were created. Clearly, for the MTCBC-1 task, the single horizontal cluster contains all sensors, and thus this version of the algorithm is not different from the one without clusters.
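The sensor placement and target motion described above could be generated roughly as in the sketch below; the grid dimensions, spacing, jitter standard deviation, and downward drift are placeholder values, not the parameters actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def place_sensors(rows, cols, spacing, sigma):
    """Place sensors on a rows x cols grid and jitter each position with a
    zero-mean normal vector (diagonal covariance, equal variances sigma**2)."""
    xs, ys = np.meshgrid(np.arange(cols) * spacing, np.arange(rows) * spacing)
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    return grid + rng.normal(0.0, sigma, size=grid.shape)

def move_targets(positions, step=2.0, downward_bias=0.5):
    """Move each target two meters per time step in a random direction that
    is biased downwards (negative y)."""
    angles = rng.uniform(0.0, 2 * np.pi, size=len(positions))
    directions = np.column_stack([np.cos(angles), np.sin(angles)])
    directions[:, 1] -= downward_bias              # downward tendency
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    return positions + step * directions
```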
The average runtime of the algorithms is given in Table 3. For each method, its runtime is also expressed as a percentage of the runtime of the optimal algorithm. Optimal refers to the variant in which the ILP is solved in each time step without clustering. The rest of the names are self-descriptive.
The data clearly show that combining the barrier and maximum target coverage problems comes at a cost: the enlarged ILP causes a slowdown by a factor of 20 or more. If the computations can be run in parallel on the clusters, the runtime can be reduced to approximately half of the optimal time with two clusters. In the case of vertical clustering, the active sector of some sensors is fixed in advance, so the problem to be solved was smaller than in the case of horizontal clustering (the number of clusters was the same in both cases). As a result, the runtime of vertical clustering was slightly better. Finally, it is noteworthy that the greedy algorithm runs approximately 50 times faster than the optimal one.
The values of the average coverage and tracking ratios can be found in Table 4 and Table A1, respectively. As it turned out, these values were very close to each other in almost all cases, and therefore the table with the average tracking values was placed in Appendix A. In these tables, DENSE(2) refers to the case in which the MTCBC-2 problem was solved with the DENSE setup; the other names can be interpreted in the same way.
As expected, each method performed worse in all setups when two-barrier coverage was required than when only single-barrier coverage needed to be maintained; the only exception was the MTC algorithm, which is agnostic to the number of barriers by definition. The reason is that with a single barrier, fewer sensors were needed to form the barrier, and consequently more sensors were available to cover the targets. For a similar reason, the performance of all algorithms was also worse in the SPARSE setups, where the number of cameras was lower.
Looking at the details, it is noteworthy that, except for the omniscient SPARSE(2) case, the performance of horizontal clustering was at most 1–2% lower than that of the optimal method. Since the calculations ran in parallel on the individual clusters, it could happen that a target was covered by sensors from both clusters, leaving uncovered other targets that the optimal method was able to cover. If the cameras are densely placed, this is less of a problem, because even if a sensor unnecessarily covers a target, the sensors nearby can still be used to cover the other targets. However, if the number of cameras is lower, the unnecessary multiple coverage results in a noticeable performance degradation, as evidenced by the difference in favor of the optimal method in the SPARSE(2) setup in the omniscient scenario. In the only-camera scenario, however, multiple coverage, even when it occurred, did not cause significant performance degradation. This indicates that in the SPARSE(2) setup, the optimal method was aware of roughly the same target positions as horizontal clustering.
It is also notable that horizontal clustering outperformed vertical clustering in all setups. Multiple coverage can also occur with vertical clustering, so this alone does not explain the difference between the two types of clustering. The difference in efficiency thus shows that fixing the sectors that join the results obtained on the vertical clusters, and thereby losing flexibility, comes at a price. This becomes even more apparent in the SPARSE(2) setup, where the number of possible barrier coverage configurations was the lowest. On the other hand, there are certainly real-world scenarios in which vertical clustering performs as well as or even better than horizontal clustering. Additionally, the number of vertical clusters can be arbitrary, while the number of horizontal clusters can be at most k when k-barrier coverage is to be maintained.
Interestingly, the performance of the greedy method was quite similar to that of vertical clustering, while its runtime was roughly 25 times shorter. Additionally, its performance was less influenced by how well the positions of the targets were known: on average, the coverage performance of the optimal method dropped noticeably more between the omniscient and the only-camera scenarios than that of the greedy method (see Table 4).
As expected, the MTC algorithm achieved the best coverage ratios in all cases, thanks to the fact that all cameras could be used to cover the targets; averaging across the setups, it reached the highest coverage ratios in both the omniscient and the only-camera scenarios. Not surprisingly, the BC method performed the worst, achieving the lowest coverage ratio on average.
Based on these results, a new metric, called relative coverage ratio, can be introduced that illustrates the performance of each algorithm compared to the best and worst methods. In this way, it also helps to examine the relative effectiveness of the algorithms compared to each other.
Formally, if $r_{\mathrm{BC}}$ and $r_{\mathrm{MTC}}$ denote the average coverage ratios of the BC and MTC algorithms, the relative coverage ratio $\tilde{r}$ of any method is calculated as
$$\tilde{r} = \frac{r - r_{\mathrm{BC}}}{r_{\mathrm{MTC}} - r_{\mathrm{BC}}},$$
where $r$ denotes the average coverage ratio of the method.
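For instance, this normalization could be computed with the following trivial helper, included only to make the definition concrete:

```python
def relative_coverage_ratio(r, r_bc, r_mtc):
    """Normalize an average coverage ratio r between the BC (worst) and
    MTC (best) baselines, mapping r_bc -> 0 and r_mtc -> 1."""
    return (r - r_bc) / (r_mtc - r_bc)
```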
The relative coverage values of the algorithms, averaged over all setups, can be found in Table 5. Note that a relative tracking ratio metric can be introduced in a similar way; the corresponding values are given in Table A2. The data do not offer new insights beyond what has already been stated, but they clearly illustrate the relative performance of the methods.