Search Results (3)

Search Parameters:
Keywords = real-time motion imagery recognition

18 pages, 4515 KiB  
Article
Historical Blurry Video-Based Face Recognition
by Lujun Zhai, Suxia Cui, Yonghui Wang, Song Wang, Jun Zhou and Greg Wilsbacher
J. Imaging 2024, 10(9), 236; https://doi.org/10.3390/jimaging10090236 - 20 Sep 2024
Viewed by 1061
Abstract
Face recognition is a widely used computer vision technology that plays an increasingly important role in user authentication systems, security systems, and consumer electronics. The models for most current applications are based on high-definition digital cameras. In this paper, we focus on digital images derived from historical motion picture films. Historical motion picture films often have poorer resolution than modern digital imagery, making face detection a more challenging task. To approach this problem, we first propose a trunk–branch concatenated multi-task cascaded convolutional neural network (TB-MTCNN), which efficiently extracts facial features from blurry historical films by combining the trunk with branch networks and employing various sizes of kernels to enrich the multi-scale receptive field. Next, we build a deep neural network-integrated object-tracking algorithm to compensate for failed recognition over one or more video frames. The framework combines simple online and real-time tracking with deep data association (Deep SORT), and TB-MTCNN with the residual neural network (ResNet) model. Finally, a state-of-the-art image restoration method is employed to reduce the effect of noise and blurriness. The experimental results show that our proposed joint face recognition and tracking network can significantly reduce missed recognition in historical motion picture film frames.
(This article belongs to the Section Computer Vision and Pattern Recognition)
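The abstract above describes a detect-then-track fallback: TB-MTCNN detects faces, and Deep SORT with a ResNet embedding carries identities across frames the detector misses. The sketch below only illustrates that fallback logic with simple stand-ins; the `detect_face` callback, the `Track` class, and its constant-velocity predictor are hypothetical placeholders, not the paper's TB-MTCNN, Deep SORT, or ResNet code.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Box = Tuple[float, float, float, float]  # (x, y, width, height)


@dataclass
class Track:
    """Stand-in for a Deep SORT track: keeps the last box and a crude velocity."""
    box: Box
    velocity: Tuple[float, float] = (0.0, 0.0)

    def predict(self) -> Box:
        # Constant-velocity prediction; a real system would use a Kalman filter.
        x, y, w, h = self.box
        return (x + self.velocity[0], y + self.velocity[1], w, h)

    def update(self, box: Box) -> None:
        self.velocity = (box[0] - self.box[0], box[1] - self.box[1])
        self.box = box


def run_pipeline(frames: List[object],
                 detect_face: Callable[[object], Optional[Box]]) -> List[Optional[Box]]:
    """detect_face(frame) -> Box or None; a placeholder for a face detector."""
    track: Optional[Track] = None
    results: List[Optional[Box]] = []
    for frame in frames:
        box = detect_face(frame)
        if box is not None:
            track = track or Track(box)
            track.update(box)            # detection refreshes the track
        elif track is not None:
            box = track.predict()        # detector missed: reuse the track's prediction
            track.box = box
        results.append(box)
    return results


if __name__ == "__main__":
    # Toy "frames": the fake detector fires only on frames that carry a box.
    toy = [(10.0, 10.0, 32.0, 32.0), None, (14.0, 12.0, 32.0, 32.0), None, None]
    print(run_pipeline(toy, detect_face=lambda frame: frame))
```

The point of the sketch is only the control flow: recognition continues from the track's prediction whenever the detector returns nothing for a frame.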

15 pages, 5405 KiB  
Article
Motor Imagery Classification Based on a Recurrent-Convolutional Architecture to Control a Hexapod Robot
by Tat’y Mwata-Velu, Jose Ruiz-Pinales, Horacio Rostro-Gonzalez, Mario Alberto Ibarra-Manzano, Jorge Mario Cruz-Duarte and Juan Gabriel Avina-Cervantes
Mathematics 2021, 9(6), 606; https://doi.org/10.3390/math9060606 - 12 Mar 2021
Cited by 20 | Viewed by 3923
Abstract
Advances in the field of Brain-Computer Interfaces (BCIs) aim, among other applications, to improve the movement capacities of people suffering from the loss of motor skills. The main challenge in this area is to achieve real-time and accurate bio-signal processing for pattern recognition, especially in Motor Imagery (MI). The significant interaction between brain signals and controllable machines requires instantaneous brain data decoding. In this study, an embedded BCI system based on fist MI signals is developed. It uses an Emotiv EPOC+ Brainwear®, an Altera SoCKit® development board, and a hexapod robot for testing locomotion imagery commands. The system is tested to detect the imagined movements of closing and opening the left and right hand to control the robot locomotion. Electroencephalogram (EEG) signals associated with the motion tasks are sensed on the human sensorimotor cortex. Next, the SoCKit processes the data to identify the commands allowing the controlled robot locomotion. The classification of MI-EEG signals from the F3, F4, FC5, and FC6 sensors is performed using a hybrid architecture of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. This method takes advantage of the deep learning recognition model to develop a real-time embedded BCI system, where signal processing must be seamless and precise. The proposed method is evaluated using k-fold cross-validation on both our own dataset and a public Scientific Data dataset. Our dataset comprises 2400 trials obtained from four test subjects, with each trial lasting three seconds of imagined fist closing and opening. The recognition tasks reach 84.69% and 79.2% accuracy using our data and a state-of-the-art dataset, respectively. Numerical results support that motor imagery EEG signals can be successfully applied in BCI systems to control mobile robots and related applications such as intelligent vehicles.
(This article belongs to the Special Issue Statistical Data Modeling and Machine Learning with Applications)
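As a rough illustration of the CNN-LSTM hybrid described above, the sketch below applies a temporal convolution across the four EEG channels (F3, F4, FC5, FC6) and feeds the resulting feature sequence to an LSTM. It is a minimal PyTorch sketch under stated assumptions: the layer widths, kernel size, and an assumed 128 Hz sampling rate (so a 3 s trial is 384 samples) are illustrative choices, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

# Assumed shapes: 4 EEG channels (F3, F4, FC5, FC6); 3 s trials at an assumed 128 Hz.
N_CHANNELS, N_SAMPLES, N_CLASSES = 4, 384, 2


class CnnLstmMI(nn.Module):
    """Illustrative CNN-LSTM hybrid for motor-imagery EEG (not the paper's exact layers)."""

    def __init__(self) -> None:
        super().__init__()
        # Temporal convolution extracts local features from the raw channels.
        self.conv = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(4),  # downsample 384 -> 96 time steps
        )
        # The LSTM models the longer-range temporal dynamics of the trial.
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, N_CLASSES)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples)
        feats = self.conv(x)              # (batch, 32, 96)
        feats = feats.permute(0, 2, 1)    # (batch, 96, 32) for the LSTM
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])         # class logits: (batch, 2)


if __name__ == "__main__":
    model = CnnLstmMI()
    dummy = torch.randn(8, N_CHANNELS, N_SAMPLES)  # a batch of 8 synthetic trials
    print(model(dummy).shape)  # torch.Size([8, 2])
```

The design mirrors the split the abstract describes: convolutions handle short-range waveform features, while the recurrent layer captures how those features evolve over the three-second imagery window.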

5748 KiB  
Article
A Real-Time High Performance Computation Architecture for Multiple Moving Target Tracking Based on Wide-Area Motion Imagery via Cloud and Graphic Processing Units
by Kui Liu, Sixiao Wei, Zhijiang Chen, Bin Jia, Genshe Chen, Haibin Ling, Carolyn Sheaff and Erik Blasch
Sensors 2017, 17(2), 356; https://doi.org/10.3390/s17020356 - 12 Feb 2017
Cited by 14 | Viewed by 6557
Abstract
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system uses a front-end web-based server to interact with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud- and GPU-based computing provides an efficient real-time target recognition and tracking approach compared to workflows that use only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that the GC-MTT approach provides drastically improved tracking at low frame rates under realistic conditions.
(This article belongs to the Special Issue Video Analysis and Tracking Using State-of-the-Art Sensors)
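The per-pixel work that the GC-MTT system offloads to GPUs maps naturally onto array libraries. The sketch below shows only the simplest such kernel, differencing consecutive WAMI frames into a motion mask, with CuPy standing in for the paper's custom CUDA code and the Hadoop/cloud layer omitted entirely; `motion_mask` and its threshold are illustrative assumptions, not the paper's pipeline.

```python
"""Hedged sketch of GPU-offloaded frame differencing for moving-target detection."""
try:
    import cupy as xp          # GPU arrays via CUDA, if a device is available
    ON_GPU = True
except ImportError:            # CPU-only environments fall back to NumPy
    import numpy as xp
    ON_GPU = False


def motion_mask(prev_frame, curr_frame, threshold: float = 25.0):
    """Return a boolean mask of pixels that changed between two grayscale frames."""
    prev = xp.asarray(prev_frame, dtype=xp.float32)
    curr = xp.asarray(curr_frame, dtype=xp.float32)
    diff = xp.abs(curr - prev)          # elementwise; runs on the GPU under CuPy
    return diff > threshold


if __name__ == "__main__":
    import numpy as np
    a = np.zeros((512, 512), dtype=np.uint8)
    b = a.copy()
    b[100:110, 200:210] = 255           # a small synthetic "moving target"
    mask = motion_mask(a, b)
    moved = int(mask.sum())             # CuPy arrays support .sum() like NumPy
    print(f"GPU={ON_GPU}, changed pixels={moved}")
```

In a wide-area setting this kind of elementwise kernel is applied to very large frames at every time step, which is why the paper's architecture pushes it onto GPUs while the cloud layer handles distribution and data association.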