Search Results (625)

Search Parameters:
Keywords = moving target detection

22 pages, 7296 KiB  
Article
LittleFaceNet: A Small-Sized Face Recognition Method Based on RetinaFace and AdaFace
by Zhengwei Ren, Xinyu Liu, Jing Xu, Yongsheng Zhang and Ming Fang
J. Imaging 2025, 11(1), 24; https://doi.org/10.3390/jimaging11010024 - 13 Jan 2025
Viewed by 215
Abstract
For surveillance video management in university laboratories, issues such as occlusion and low-resolution face capture often arise. Traditional face recognition algorithms are typically static and rely heavily on clear images, resulting in inaccurate recognition for low-resolution, small-sized faces. To address the challenges of occlusion and low-resolution person identification, this paper proposes a new face recognition framework by reconstructing RetinaFace-ResNet and combining it with Quality-Adaptive Margin (AdaFace). Currently, although there are many target detection algorithms, they all require a large amount of data for training. However, datasets for low-resolution face detection are scarce, leading to poor detection performance of the models. This paper aims to solve RetinaFace's weak face recognition capability in low-resolution scenarios and its potential inaccuracies in face bounding box localization when faces are at extreme angles or partially occluded. To this end, Spatial Depth-wise Separable Convolutions are introduced. RetinaFace-ResNet is designed for face detection and localization, while AdaFace is employed to address low-resolution face recognition by using feature norm approximation to estimate image quality and applying an adaptive margin function. Additionally, a multi-object tracking algorithm is used to solve the problem of moving occlusion. Experimental results demonstrate significant improvements, achieving an accuracy of 96.12% on the WiderFace dataset and a recognition accuracy of 84.36% in practical laboratory applications. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
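To make the quality-adaptive margin idea cited above (feature norm as an image-quality proxy) concrete, here is a minimal PyTorch-style sketch of a norm-adaptive margin head. The function name, hyper-parameters, and the simplified margin form are illustrative assumptions, not the LittleFaceNet or official AdaFace implementation.

```python
import torch
import torch.nn.functional as F

def norm_adaptive_margin_logits(embeddings, weights, labels, m=0.4, h=0.33, s=64.0):
    """Simplified sketch of a quality-adaptive (norm-aware) margin.

    embeddings: (B, D) un-normalised face features; their norm stands in for
    image quality. weights: (C, D) class prototypes. labels: (B,) class ids.
    The exact AdaFace formulation differs; this only illustrates the idea.
    """
    norms = embeddings.norm(dim=1, keepdim=True)                      # (B, 1)
    # Batch-normalise the norms and clip to [-1, 1]: low-quality (small-norm)
    # samples receive a weaker margin, high-quality ones a stronger one.
    norm_hat = ((norms - norms.mean()) / (norms.std() + 1e-6) / h).clamp(-1.0, 1.0)

    cosine = F.linear(F.normalize(embeddings), F.normalize(weights))  # (B, C)
    theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))

    g_angle = -m * norm_hat          # angular margin scaled by quality proxy
    g_add = m * norm_hat + m         # additive margin scaled by quality proxy

    target = torch.zeros_like(cosine).scatter_(1, labels.view(-1, 1), 1.0)
    margin_cos = torch.cos(theta + g_angle) - g_add
    logits = torch.where(target.bool(), margin_cos, cosine)
    return s * logits                # feed to cross-entropy during training
```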
22 pages, 1044 KiB  
Article
The Efficiency of the New Extended EWMA Control Chart for Detecting Changes Under an Autoregressive Model and Its Application
by Kotchaporn Karoon and Yupaporn Areepong
Symmetry 2025, 17(1), 104; https://doi.org/10.3390/sym17010104 - 11 Jan 2025
Viewed by 316
Abstract
Control charts are frequently used instruments for process quality monitoring. The new extended exponentially weighted moving average (new extended EWMA, or NEEWMA) control chart has a symmetrical design: its lower control limit (LCL) and upper control limit (UCL) are equally spaced from the center line. Because of this symmetry, the NEEWMA chart is very good at identifying even the smallest changes in operation, detecting deviations from the target in both upward and downward directions. This study derives an explicit formula for the average run length (ARL) of the NEEWMA control chart based on the autoregressive (AR) model with exponential white noise. The focus is on the zero-state performance of the NEEWMA control chart, which is derived using explicit formulas. Banach's fixed-point theorem was used to prove the existence and uniqueness of the solution. The accuracy of this formula is validated by comparing it to the numerical integral equation (NIE) method using percentage accuracy (%Acc). The results show that the explicit formula is more efficient than the NIE method for evaluating the ARL, particularly regarding computation time. The performance of the NEEWMA control chart is compared with the EWMA and extended EWMA control charts by evaluating both the ARL and the standard deviation of the run length (SDRL). The NEEWMA control chart outperforms the others in detection performance, followed by the extended EWMA and EWMA control charts. Further verification of its superior performance is provided through comparisons using the average extra quadratic loss (AEQL) and the performance comparison index (PCI), which confirm that it outperforms both the EWMA and extended EWMA control charts across various parameters and shift sizes. Finally, an illustrative example using real-life economic data demonstrates its efficiency. Full article
(This article belongs to the Section Mathematics)
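For readers unfamiliar with run-length analysis, the sketch below estimates an average run length (ARL) by Monte Carlo for a plain EWMA chart on AR(1) data with exponential white noise. The NEEWMA statistic, limits, and explicit ARL formula from the paper are not reproduced; every parameter value here is an arbitrary assumption.

```python
import numpy as np

def ewma_run_length(lam=0.1, L=2.7, phi=0.2, mu0=1.0, shift=1.0, max_n=100_000, rng=None):
    """Run length of a plain EWMA chart on AR(1) data with Exp(mu0*shift) white
    noise; shift=1.0 is in-control, shift>1.0 simulates an upward mean shift."""
    rng = rng or np.random.default_rng()
    center = mu0 / (1 - phi)                        # in-control process mean
    sigma_x = mu0 / np.sqrt(1 - phi**2)             # in-control process std
    limit = L * sigma_x * np.sqrt(lam / (2 - lam))  # steady-state EWMA limits
    x, z = center, center
    for n in range(1, max_n + 1):
        x = phi * x + rng.exponential(mu0 * shift)  # AR(1) with exponential noise
        z = lam * x + (1 - lam) * z                 # EWMA recursion
        if abs(z - center) > limit:
            return n                                # chart signals: run length
    return max_n

arl0 = np.mean([ewma_run_length(shift=1.0) for _ in range(2000)])  # in-control ARL
arl1 = np.mean([ewma_run_length(shift=1.5) for _ in range(2000)])  # after a 50% shift
print(f"ARL0 ~ {arl0:.1f}, ARL1 ~ {arl1:.1f}")
```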
23 pages, 10942 KiB  
Article
MambaShadowDet: A High-Speed and High-Accuracy Moving Target Shadow Detection Network for Video SAR
by Xiaowo Xu, Tianwen Zhang, Xiaoling Zhang, Wensi Zhang, Xiao Ke and Tianjiao Zeng
Remote Sens. 2025, 17(2), 214; https://doi.org/10.3390/rs17020214 - 9 Jan 2025
Viewed by 366
Abstract
Existing convolutional neural network (CNN)-based video synthetic aperture radar (SAR) moving target shadow detectors have difficulty modeling long-range dependencies, while transformer-based ones often suffer from greater complexity. To handle these issues, this paper proposes MambaShadowDet, a novel lightweight deep learning (DL) detector based on a state space model (SSM), dedicated to high-speed and high-accuracy moving target shadow detection in video SAR images. By introducing an SSM with linear complexity into YOLOv8, MambaShadowDet effectively captures global feature dependencies while relieving the computational load. Specifically, it designs Mamba-Backbone, combining SSM and CNN to effectively extract both global contextual and local spatial information, as well as a slim path aggregation feature pyramid network (Slim-PAFPN) to enhance multi-level feature extraction and further reduce complexity. Abundant experiments on the Sandia National Laboratories (SNL) video SAR data show that MambaShadowDet achieves superior moving target shadow detection performance with a detection accuracy of 80.32% F1 score and an inference speed of 44.44 frames per second (FPS), outperforming existing models in both accuracy and speed. Full article
14 pages, 3058 KiB  
Article
A Combined Frame Difference and Convolution Method for Moving Vehicle Detection in Satellite Videos
by Xin Luo, Jiatian Li, Xiaohui A and Yuxi Deng
Sensors 2025, 25(2), 306; https://doi.org/10.3390/s25020306 - 7 Jan 2025
Viewed by 243
Abstract
To address the challenges of missed detections caused by insufficient shape and texture features and blurred boundaries in existing detection methods, this paper introduces a novel moving vehicle detection approach for satellite videos. The proposed method leverages frame difference and convolution to effectively integrate spatiotemporal information. First, a frame difference module (FDM) is designed, combining frame difference and convolution. This module extracts motion features between adjacent frames using frame difference, refines them through backpropagation in the neural network, and integrates them with the current frame to compensate for the missing motion features in single-frame images. Next, the initial features are processed by a backbone network to further extract spatiotemporal feature information. The neck incorporates deformable convolution, which adaptively adjusts convolution kernel sampling positions, optimizing feature representation and enabling effective multiscale information integration. Additionally, shallow large-scale feature maps, which use smaller receptive fields to focus on small targets and reduce background interference, are fed into the detection head. To enhance small-target feature representation, a small-target self-reconstruction module (SR-TOD) is introduced between the neck and the detection head. Experiments using the Jilin-1 satellite video dataset demonstrate that the proposed method outperforms comparison models, significantly reducing missed detections caused by weak color and texture features and blurred boundaries. For the satellite-video moving vehicle detection task, this method achieves notable improvements, with an average F1-score increase of 3.9% and a per-frame processing speed enhancement of 7 s compared to the next best model, DSFNet. Full article
(This article belongs to the Section Vehicular Sensing)
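A minimal PyTorch sketch of the frame-difference idea described above is given below: the difference of adjacent frames supplies a motion cue that learnable convolutions refine and fuse back into the current frame. The channel sizes, fusion rule, and module name are assumptions rather than the paper's exact FDM design.

```python
import torch
import torch.nn as nn

class FrameDifferenceModule(nn.Module):
    """Illustrative frame-difference module: motion cues from adjacent frames
    are refined by small convolutions and added back to the current frame."""
    def __init__(self, channels=3, hidden=16):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 3, padding=1),
        )

    def forward(self, frame_t, frame_prev):
        diff = frame_t - frame_prev      # raw motion cue between adjacent frames
        motion = self.refine(diff)       # refined by backprop-trained convolutions
        return frame_t + motion          # compensate missing motion information

fdm = FrameDifferenceModule()
cur, prev = torch.rand(1, 3, 256, 256), torch.rand(1, 3, 256, 256)
fused = fdm(cur, prev)                   # would then be fed to the backbone
```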
33 pages, 12646 KiB  
Article
A Binocular Vision-Assisted Method for the Accurate Positioning and Landing of Quadrotor UAVs
by Jie Yang, Kunling He, Jie Zhang, Jiacheng Li, Qian Chen, Xiaohui Wei and Hanlin Sheng
Viewed by 382
Abstract
This paper introduces a vision-based target recognition and positioning system for UAV mobile landing scenarios, addressing challenges such as target occlusion due to shadows and the loss of the field of view. A novel image preprocessing technique is proposed, utilizing finite adaptive histogram equalization in the HSV color space, to enhance UAV recognition and the detection of markers under shadow conditions. The system incorporates a Kalman filter-based target motion state estimation method and a binocular vision-based depth camera target height estimation method to achieve precise positioning. To tackle the problem of poor controller performance affecting UAV tracking and landing accuracy, a feedforward model predictive control (MPC) algorithm is integrated into a mobile landing control method. This enables the reliable tracking of both stationary and moving targets via the UAV. Additionally, with a consideration of the complexities of real-world flight environments, a mobile tracking and landing control strategy based on airspace division is proposed, significantly enhancing the success rate and safety of UAV mobile landings. The experimental results demonstrate a 100% target recognition success rate and high positioning accuracy, with x and y-axis errors not exceeding 0.01 m in close range, the x-axis relative error not exceeding 0.05 m, and the y-axis error not exceeding 0.03 m in the medium range. In long-range situations, the relative errors for both axes do not exceed 0.05 m. Regarding tracking accuracy, both KF and EKF exhibit good following performance with small steady-state errors when the target is stationary. Under dynamic conditions, EKF outperforms KF with better estimation results and a faster tracking speed. The landing accuracy is within 0.1 m, and the proposed method successfully accomplishes the mobile energy supply mission for the vehicle-mounted UAV system. Full article
(This article belongs to the Special Issue Swarm Intelligence in Multi-UAVs)
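The abstract's adaptive histogram equalization in HSV space can be approximated with OpenCV's CLAHE applied to the brightness channel only, as in the hedged sketch below; the clip limit, tile grid, and the substitution of CLAHE for the paper's exact variant are assumptions.

```python
import cv2

def enhance_hsv_clahe(bgr_image, clip_limit=2.0, tile_grid=(8, 8)):
    """Equalise only the V (brightness) channel in HSV so marker colours are
    preserved while shadowed regions are brightened; parameters are illustrative."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    v_eq = clahe.apply(v)
    return cv2.cvtColor(cv2.merge((h, s, v_eq)), cv2.COLOR_HSV2BGR)

# frame = cv2.imread("landing_marker.png")   # hypothetical input frame
# enhanced = enhance_hsv_clahe(frame)
```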
26 pages, 65511 KiB  
Article
Research on Cam–Kalm Automatic Tracking Technology of Low, Slow, and Small Target Based on Gm-APD LiDAR
by Dongfang Guo, Yanchen Qu, Xin Zhou, Jianfeng Sun, Shengwen Yin, Jie Lu and Feng Liu
Remote Sens. 2025, 17(1), 165; https://doi.org/10.3390/rs17010165 - 6 Jan 2025
Viewed by 343
Abstract
With the wide application of UAVs in modern intelligent warfare as well as in civil fields, the demand for counter-unmanned aircraft system (C-UAS) technology is increasingly urgent. Traditional detection methods have many limitations in dealing with "low, slow, and small" targets. This paper presents a pure laser automatic tracking system based on a Geiger-mode avalanche photodiode (Gm-APD). Combining the target motion state prediction of the Kalman filter and the adaptive target tracking of CamShift, a Cam–Kalm algorithm is proposed to achieve high-precision and stable tracking of moving targets. The proposed system also introduces two-dimensional Gaussian fitting and edge detection algorithms to automatically determine the target's center position and the tracking rectangular box, thereby improving the automation of target tracking. Experimental results show that the system designed in this paper can effectively track UAVs in a 70 m laboratory environment and a 3.07 km to 3.32 km long-distance scene while achieving low center-positioning error and mean squared error (MSE). This technology provides a new solution for the real-time tracking and ranging of long-distance UAVs, shows the potential of pure laser approaches in long-distance low, slow, and small target tracking, and provides essential technical support for C-UAS technology. Full article
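A rough OpenCV sketch of the Cam–Kalm combination described above is shown below: a constant-velocity Kalman filter predicts the target centre and CamShift refines the window on the current frame. Treating the Gm-APD intensity map directly as CamShift's probability image, and all noise settings, are assumptions.

```python
import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                       # state: [x, y, vx, vy]
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

def track_step(prob_image, window):
    """One predict-refine-correct cycle; `window` is (x, y, w, h)."""
    pred = kf.predict()                                               # predicted centre
    x, y, w, h = window
    window = (int(pred[0, 0]) - w // 2, int(pred[1, 0]) - h // 2, w, h)
    rot_box, window = cv2.CamShift(prob_image, window, criteria)      # adaptive refinement
    cx, cy = rot_box[0]                                               # refined centre
    kf.correct(np.array([[cx], [cy]], np.float32))                    # filter update
    return window
```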
18 pages, 4325 KiB  
Article
Bioimpedance Spectra Confirm Breast Cancer Cell Secretome Induces Early Changes in the Cytoskeleton and Migration of Mesenchymal Stem Cells
by Ana Laura Sánchez-Corrales, César Antonio González-Díaz, Claudia Camelia Calzada-Mendoza, Jesús Arrieta-Valencia, María Elena Sánchez-Mendoza, Juan Luis Amaya-Espinoza and Gisela Gutiérrez-Iglesias
Appl. Sci. 2025, 15(1), 358; https://doi.org/10.3390/app15010358 - 2 Jan 2025
Viewed by 390
Abstract
Mesenchymal stem cell (MSC) treatments take advantage of the ability of these cells to migrate to target sites, although they have been shown to move in response to tumor influence. Currently, tools are being developed to detect these opportune changes in cellular behavior patterns. To date, there have been no reports of such changes in the morphological patterns or migration of MSCs in the presence of a tumor environment, even though they would provide information of high diagnostic value. We determined the changes in the cytoskeleton and migration of MSCs exposed to the secretome of breast tumor cells via bioimpedance records. MSCs were cultured and incubated in the presence of 24 and 48 h secretomes of the MCF-7 tumor cell line. The proliferation, migration, morphology, cytoskeleton, and electrical bioimpedance were evaluated at 48 h for cells treated with the 24 and 48 h secretomes. The secretomes induced early morphological changes related to the migration of MSCs, directly confirmed via bioimpedance, but no changes in cell proliferation were found. These changes cannot be related to a transformation or malignancy phenotype. The modification of the bioimpedance patterns recorded from the first hours suggests that this method can be applied in an innovative way to detect early changes in a cellular population in the clinical diagnostic setting. Full article
(This article belongs to the Special Issue Advanced Technologies for Health Improvement)
21 pages, 7791 KiB  
Article
Simulation Study on Detection and Localization of a Moving Target Under Reverberation in Deep Water
by Jincong Dun, Shihong Zhou, Yubo Qi and Changpeng Liu
J. Mar. Sci. Eng. 2024, 12(12), 2360; https://doi.org/10.3390/jmse12122360 - 22 Dec 2024
Viewed by 400
Abstract
Deep-water reverberation caused by multiple reflections from the seafloor and sea surface can affect the performance of active sonars. To detect a moving target under reverberation conditions, a reverberation suppression method using multipath Doppler shift in deep water and wideband ambiguity function (WAF) is proposed. Firstly, the multipath Doppler factors in the deep-water direct zone are analyzed, and they are introduced into the target scattered sound field to obtain the echo of the moving target. The mesh method is used to simulate the deep-water reverberation waveform in time domain. Then, a simulation model for an active sonar based on the source and short vertical line array is established. Reverberation and target echo in the received signal can be separated in the Doppler shift domain of the WAF. The multipath Doppler shifts in the echo are used to estimate the multipath arrival angles, which can be used for target localization. The simulation model and the reverberation suppression detection method can provide theoretical support and a technical reference for the active detection of moving targets in deep water. Full article
(This article belongs to the Section Ocean Engineering)
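As a hedged illustration of the wideband ambiguity function (WAF) used above, the sketch below time-scales the transmitted replica for each candidate Doppler factor and correlates it with the received echo; the grids, normalisation, and interpolation-based scaling are assumptions, not the paper's simulation model.

```python
import numpy as np

def wideband_ambiguity(echo, replica, fs, scales, max_delay_s):
    """Coarse |chi(tau, s)|: correlate the echo with time-scaled replicas.
    Rows index the Doppler scale s, columns the delay tau >= 0."""
    n = len(replica)
    t = np.arange(n) / fs
    lags = int(max_delay_s * fs)
    out = np.zeros((len(scales), lags + 1))
    for i, s in enumerate(scales):
        scaled = np.interp(s * t, t, replica, left=0.0, right=0.0)  # replica at s*t
        corr = np.correlate(echo, scaled, mode="full") * np.sqrt(s)
        zero_lag = len(scaled) - 1          # zero-delay index in the 'full' output
        out[i] = np.abs(corr[zero_lag: zero_lag + lags + 1])
    return out

# A moving target's echo peaks at a scale s != 1, away from the near-unity-scale
# reverberation ridge, which is the basis of the Doppler-domain separation above.
```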
21 pages, 15422 KiB  
Article
A Lightweight Model for Weed Detection Based on the Improved YOLOv8s Network in Maize Fields
by Jinyong Huang, Xu Xia, Zhihua Diao, Xingyi Li, Suna Zhao, Jingcheng Zhang, Baohua Zhang and Guoqiang Li
Agronomy 2024, 14(12), 3062; https://doi.org/10.3390/agronomy14123062 - 22 Dec 2024
Viewed by 505
Abstract
To address the issue of the computational intensity and deployment difficulties associated with weed detection models, a lightweight target detection model for weeds based on YOLOv8s in maize fields was proposed in this study. Firstly, a lightweight network, designated as Dualconv High Performance GPU Net (D-PP-HGNet), was constructed on the foundation of the High Performance GPU Net (PP-HGNet) framework. Dualconv was introduced to reduce the computation required to achieve a lightweight design. Furthermore, an Adaptive Feature Aggregation Module (AFAM) and Global Max Pooling were incorporated to augment the extraction of salient features in complex scenarios. Then, the newly created network was used to reconstruct the YOLOv8s backbone. Secondly, a four-stage inverted residual moving block (iRMB) was employed to construct a lightweight iDEMA module, which was used to replace the original C2f feature extraction module in the Neck to improve model performance and accuracy. Finally, Dualconv was employed instead of the conventional convolution for downsampling, further diminishing the network load. The new model was fully verified using the established field weed dataset. The test results showed that the modified model exhibited a notable improvement in detection performance compared with YOLOv8s. Accuracy improved from 91.2% to 95.8%, recall from 87.9% to 93.2%, and mAP@0.5 from 90.8% to 94.5%. Furthermore, the number of GFLOPs and the model size were reduced to 12.7 G and 9.1 MB, respectively, representing decreases of 57.4% and 59.2% compared to the original model. Compared with prevalent target detection models, such as Faster R-CNN, YOLOv5s, and YOLOv8l, the new model showed superior performance in both accuracy and lightweight design. The new model proposed in this paper effectively reduces the cost of the hardware required to achieve accurate weed identification in maize fields with limited resources. Full article
(This article belongs to the Collection AI, Sensors and Robotics for Smart Agriculture)
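For orientation, a baseline (unmodified) YOLOv8s fine-tuning run with the Ultralytics API might look like the sketch below; the dataset YAML and hyper-parameters are placeholders, and the paper's D-PP-HGNet backbone, iDEMA module, and Dualconv downsampling are custom changes not reproduced here.

```python
from ultralytics import YOLO

# Fine-tune a stock YOLOv8s detector on a weed dataset (paths/values are placeholders).
model = YOLO("yolov8s.pt")
model.train(data="maize_weeds.yaml", epochs=150, imgsz=640, batch=16)
metrics = model.val()          # reports precision, recall, and mAP on the val split
model.export(format="onnx")    # export for lightweight deployment on edge hardware
```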
27 pages, 24936 KiB  
Article
Multipath and Deep Learning-Based Detection of Ultra-Low Moving Targets Above the Sea
by Zhaolong Wang, Xiaokuan Zhang, Weike Feng, Binfeng Zong, Tong Wang, Cheng Qi and Xixi Chen
Remote Sens. 2024, 16(24), 4773; https://doi.org/10.3390/rs16244773 - 21 Dec 2024
Viewed by 342
Abstract
An intelligent approach is proposed and investigated in this paper for the detection of ultra-low-altitude sea-skimming moving targets for airborne pulse Doppler radar. Without suppressing interferences, the proposed method uses both target and multipath information for detection based on their distinguishable image features and deep learning (DL) techniques. First, the image features of the target, multipath, and sea clutter in the real-measured range-Doppler (RD) map are analyzed, based on which the target and multipath are defined together as the generalized target. Then, based on the composite electromagnetic scattering mechanism of the target and the ocean surface, a scattering-based echo generation model is established and validated to generate sufficient data for DL network training. Finally, the RD features of the generalized target are learned by training the DL-based target detector, such as you-only-look-once version 7 (YOLOv7) and Faster R-CNN. The detection results show the high performance of the proposed method on both simulated and real-measured data without suppressing interferences (e.g., clutter, jamming, and noise). In particular, even if the target is submerged in clutter, the target can still be detected by the proposed method based on the multipath feature. Full article
(This article belongs to the Special Issue Array and Signal Processing for Radar)
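Since the method above detects targets directly in range-Doppler (RD) maps, the sketch below shows the standard way such a map is formed from a slow-time × fast-time pulse matrix (an FFT across pulses per range bin). The shapes, windowing, and the assumption that pulse compression has already been applied are illustrative.

```python
import numpy as np

def range_doppler_map(pulse_matrix):
    """Build an RD map from a (num_pulses, num_range_bins) data matrix of a
    pulse Doppler radar; pulse compression is assumed already applied."""
    num_pulses, _ = pulse_matrix.shape
    windowed = pulse_matrix * np.hanning(num_pulses)[:, None]     # slow-time window
    rd = np.fft.fftshift(np.fft.fft(windowed, axis=0), axes=0)    # Doppler dimension
    return 20 * np.log10(np.abs(rd) + 1e-12)                      # dB, Doppler x range

# In the paper's setting, the target and its sea-surface multipath appear as
# distinguishable features in such RD maps, which a detector like YOLOv7 then learns.
```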
29 pages, 34806 KiB  
Article
An Adaptive YOLO11 Framework for the Localisation, Tracking, and Imaging of Small Aerial Targets Using a Pan–Tilt–Zoom Camera Network
by Ming Him Lui, Haixu Liu, Zhuochen Tang, Hang Yuan, David Williams, Dongjin Lee, K. C. Wong and Zihao Wang
Eng 2024, 5(4), 3488-3516; https://doi.org/10.3390/eng5040182 - 20 Dec 2024
Viewed by 743
Abstract
This article presents a cost-effective camera network system that employs neural network-based object detection and stereo vision to assist a pan–tilt–zoom camera in imaging fast, erratically moving small aerial targets. Compared to traditional radar systems, this approach offers advantages in supporting real-time target differentiation and ease of deployment. Based on the principle of knowledge distillation, a novel data augmentation method is proposed to coordinate the latest open-source pre-trained large models in semantic segmentation, text generation, and image generation tasks to train a BicycleGAN for image enhancement. The resulting dataset is tested on various model structures and backbone sizes of two mainstream object detection frameworks, Ultralytics' YOLO and MMDetection. Additionally, the algorithm implements and compares two popular object trackers, BoT-SORT and ByteTrack. The experimental proof-of-concept deploys the YOLOv8n model, which achieves an average precision of 82.2% and an inference time of 0.6 ms. Alternatively, the YOLO11x model maximises average precision at 86.7% while maintaining an inference time of 9.3 ms without bottlenecking subsequent processes. Stereo vision achieves a median localisation error of 90 mm for a drone flying at over 1 m/s in an 8 m × 4 m area of interest. Stable single-object tracking with the PTZ camera is successful at 15 fps with an accuracy of 92.58%. Full article
(This article belongs to the Special Issue Feature Papers in Eng 2024)
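A minimal Ultralytics sketch of the detection-plus-tracking step described above is given below; the weights file, video source, and thresholds are placeholders, and the paper's custom-trained small-aerial-target model and PTZ control loop are not included.

```python
from ultralytics import YOLO

# Run detection with ByteTrack association on a recorded feed (placeholder paths).
model = YOLO("yolo11x.pt")
results = model.track(
    source="ptz_feed.mp4",          # hypothetical PTZ camera recording
    tracker="bytetrack.yaml",       # ByteTrack; "botsort.yaml" selects BoT-SORT
    persist=True,
    conf=0.25,
)
for r in results:
    print(r.boxes.id, r.boxes.xyxy)  # track IDs and boxes per frame (id may be None)
```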
25 pages, 1330 KiB  
Article
Combined Barrier–Target Coverage for Directional Sensor Network
by Balázs Kósa, Márk Bukovinszki, Tamás V. Michaletzky and Viktor Tihanyi
Sensors 2024, 24(24), 8093; https://doi.org/10.3390/s24248093 - 18 Dec 2024
Viewed by 481
Abstract
Over the past twenty years, camera networks have become increasingly popular. In response to various demands imposed on these networks, several coverage models have been developed in the scientific literature, such as area, trap, barrier, and target coverage. In this paper, a new type of coverage task, the Maximum Target Coverage with k-Barrier Coverage (MTCBC-k) problem, is defined. Here, the goal is to cover as many moving targets as possible from time step to time step while continuously maintaining k-barrier coverage over the region of interest (ROI). This approach is different from independently solving the two tasks and then merging the results. An Integer Linear Programming (ILP) formulation for the MTCBC-k problem is presented. Additionally, two types of camera clustering methods have been developed. This approach allows for solving smaller ILPs within clusters, and combining their solutions. Furthermore, a polynomial-time greedy algorithm has been introduced as an alternative to solve the MTCBC-k problem. An example was also provided of how the aforementioned methods can be modified to handle a more realistic scenario, where only the targets detected by the cameras are known, rather than all the targets within the ROI. The simulations were run with both dense and sparse camera placements, convincingly supporting the usefulness of the clustering and greedy methods. Full article
(This article belongs to the Section Intelligent Sensors)
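As a simplified stand-in for the greedy heuristic mentioned above, the sketch below greedily picks camera/orientation candidates that cover the most still-uncovered targets; the k-barrier constraint, the ILP, and the clustering step are deliberately omitted, and all names are illustrative.

```python
from typing import Dict, List, Set

def greedy_target_cover(coverage: Dict[str, Set[int]], budget: int) -> List[str]:
    """Greedily select camera/orientation candidates that see the most targets
    not yet covered. `coverage` maps a candidate id to the target ids it sees."""
    covered: Set[int] = set()
    chosen: List[str] = []
    candidates = dict(coverage)                    # work on a copy
    for _ in range(budget):
        best = max(candidates, key=lambda c: len(candidates[c] - covered), default=None)
        if best is None or not (candidates[best] - covered):
            break                                  # no candidate adds new targets
        chosen.append(best)
        covered |= candidates.pop(best)
    return chosen

# Three directional-camera orientations seeing different moving targets:
print(greedy_target_cover({"cam1_az30": {1, 2}, "cam2_az90": {2, 3, 4}, "cam3_az00": {4}}, budget=2))
# -> ['cam2_az90', 'cam1_az30']
```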
21 pages, 6412 KiB  
Article
Detection of Flight Target via Multistatic Radar Based on Geosynchronous Orbit Satellite Irradiation
by Jia Dong, Peng Liu, Bingnan Wang and Yaqiu Jin
Remote Sens. 2024, 16(23), 4582; https://doi.org/10.3390/rs16234582 - 6 Dec 2024
Viewed by 485
Abstract
As a special microwave detection system, multistatic radar has obvious advantages in covert operation, anti-jamming, and anti-stealth due to its configuration of spatial diversity. As a high-orbit irradiation source, a geosynchronous orbit satellite (GEO) has the advantages of a low revisit period, large beam coverage area, and stable power of ground beam compared with traditional passive radar irradiation sources. This paper focuses on the key technologies of flight target detection in multistatic radar based on geosynchronous orbit satellite irradiation with one transmitter and multiple receivers. We carry out the following work: Firstly, we aim to address the problems of low signal-to-noise ratio (SNR) and range cell migration of high-speed cruise targets. The Radon–Fourier transform constant false alarm rate detector-range cell migration correction (RFT-CFAR-RCMC) is adopted to realize the coherent integration of echoes with range cell migration correction (RCM) and Doppler phase compensation. It significantly improves the SNR. Furthermore, we utilize the staggered PRF to solve the ambiguity and obtain multi-view data. Secondly, based on the aforementioned target multi-view detection data, the linear least square (LLS) multistatic positioning method combining bistatic range positioning (BR) and time difference of arrival positioning (TDOA) is used, which constructs the BR and TDOA measurement equations and linearizes by mathematical transformation. The measurement equations are solved by the LLS method, and the target positioning and velocity inversion are realized by the fusion of multistatic data. Finally, using target positioning data as observation values of radar, the Kalman filter (KF) is used to achieve flight trajectory tracking. Numerical simulation verifies the effectiveness of the proposed process. Full article
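The final tracking step above (a Kalman filter driven by multistatic position fixes) can be illustrated with a constant-velocity filter, as in the sketch below; the 3D state, noise covariances, and update rate are assumptions rather than the paper's configuration.

```python
import numpy as np

dt = 1.0
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # state: [x, y, z, vx, vy, vz]
H = np.hstack([np.eye(3), np.zeros((3, 3))])     # only position is observed
Q = 1e-2 * np.eye(6)                             # process noise (assumed)
R = 25.0 * np.eye(3)                             # positioning-error covariance (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given a multistatic position fix z (3-vector)."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict along the flight path
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # update with the position fix
    P = (np.eye(6) - K @ H) @ P
    return x, P

x, P = np.zeros(6), 1e3 * np.eye(6)
for z in [np.array([100.0, 200.0, 5000.0]), np.array([150.0, 260.0, 5000.0])]:
    x, P = kf_step(x, P, z)
print(x[:3], x[3:])    # smoothed position and estimated velocity
```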
22 pages, 4090 KiB  
Article
Visual Servoing Using Sliding-Mode Control with Dynamic Compensation for UAVs’ Tracking of Moving Targets
by Christian P. Carvajal, Víctor H. Andaluz, José Varela-Aldás, Flavio Roberti, Carolina Del-Valle-Soto and Ricardo Carelli
Drones 2024, 8(12), 730; https://doi.org/10.3390/drones8120730 - 2 Dec 2024
Viewed by 589
Abstract
An Image-Based Visual Servoing (IBVS) control structure for target tracking by Unmanned Aerial Vehicles (UAVs) is presented. The scheme contains two stages. The first is a sliding-mode controller (SMC) that allows a UAV to track a target; the control strategy is designed as a function of the image features. SMC is commonly used in control systems that exhibit strong non-linearities and are constantly exposed to external disturbances; such disturbances can be caused by environmental conditions or induced by the estimation of the position and/or velocity of the target to be tracked. In the second stage, a controller compensates for the UAV dynamics, correcting the velocity errors produced by the dynamic effects of the UAV. In addition, the corresponding stability analysis of the sliding mode-based visual servo controller and the sliding mode dynamic compensation control is presented. The proposed scheme employs the kinematics and dynamics of the robot in a cascade control structure based on the same control strategy. In order to evaluate the proposed scheme for tracking moving targets, experimental tests are carried out in a semi-structured working environment with a hexarotor-type aerial robot. For detection and image processing, the OpenCV C++ library is used; the data are published on a ROS topic at a frequency of 50 Hz. The robot controller is implemented in MATLAB. Full article
(This article belongs to the Special Issue Flight Control and Collision Avoidance of UAVs)
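To illustrate the sliding-mode idea underlying the controller above, here is a one-channel, first-order sketch with a boundary-layer (saturated) switching term; the gains and the simplified error model are assumptions, and the paper's full IBVS design and dynamic compensation are not reproduced.

```python
import numpy as np

def smc_velocity_command(e, lam=1.5, k=0.8, phi=0.05):
    """First-order sliding-mode-style law for one image-feature error channel e,
    assuming kinematics e_dot = u + d with a bounded disturbance d: an
    exponential-convergence term plus a saturated switching term (boundary
    layer of width phi instead of sign(s), to limit chattering)."""
    s = e                                    # sliding variable
    sat = np.clip(s / phi, -1.0, 1.0)        # smooth replacement for sign(s)
    return -lam * e - k * sat                # commanded feature-space velocity

# Example: a normalised image-feature error of 0.2 yields a corrective command:
print(smc_velocity_command(0.2))
```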
22 pages, 10759 KiB  
Article
Design of a Cyber-Physical System-of-Systems Architecture for Elderly Care at Home
by José Galeas, Alberto Tudela, Óscar Pons, Juan Pedro Bandera and Antonio Bandera
Electronics 2024, 13(23), 4583; https://doi.org/10.3390/electronics13234583 - 21 Nov 2024
Viewed by 682
Abstract
The idea of introducing a robot into an Ambient Assisted Living (AAL) environment to provide additional services beyond those provided by the environment itself has been explored in numerous projects. Moreover, new opportunities can arise from this symbiosis, which usually requires both systems to share the knowledge (and not just the data) they capture from the context. Thus, by using knowledge extracted from the raw data captured by the sensors deployed in the environment, the robot can know where the person is and whether he/she should perform some physical exercise, as well as whether he/she should move a chair away to allow the robot to successfully complete a task. This paper describes the design of an Ambient Assisted Living system where an IoT scheme and robot coexist as independent but connected elements, forming a cyber-physical system-of-systems architecture. The IoT environment includes cameras to monitor the person’s activity and physical position (lying down, sitting…), as well as non-invasive sensors to monitor the person’s heart or breathing rate while lying in bed or sitting in the living room. Although this manuscript focuses on how both systems handle and share the knowledge they possess about the context, a couple of example use cases are included. In the first case, the environment provides the robot with information about the positions of objects in the environment, which allows the robot to augment the metric map it uses to navigate, detecting situations that prevent it from moving to a target. If there is a person nearby, the robot will approach them to ask them to move a chair or open a door. In the second case, even more use is made of the robot’s ability to interact with the person. When the IoT system detects that the person has fallen to the ground, it passes this information to the robot so that it can go to the person, talk to them, and ask for external help if necessary. Full article
(This article belongs to the Special Issue Emerging Artificial Intelligence Technologies and Applications)