
Search Results (435)

Search Parameters:
Keywords = image mosaic

16 pages, 11788 KiB  
Article
YOLOv5DA: An Improved YOLOv5 Model for Posture Detection of Grouped Pigs
by Wenhui Shi, Xiaopin Wang, Xuan Li, Yuhua Fu, Xiaolei Liu and Haiyan Wang
Appl. Sci. 2024, 14(22), 10104; https://doi.org/10.3390/app142210104 - 5 Nov 2024
Viewed by 350
Abstract
Accurate posture detection is the foundation for analyzing animal behavior, which can promote animal welfare. With the development of computer vision, such technology has been widely used in analyzing animal behavior without physical contact. However, computer vision technology for pig posture detection often suffers from problems of missed or false detection due to complex scenarios. To solve this problem, this study proposed a novel object detection model YOLOv5DA, which was based on YOLOv5s and designed for pig posture detection from 2D camera video. Firstly, we established the annotated dataset (7220 images) including the training set (5776 images), validation set (722 images), and test set (722 images). Secondly, an object detection model YOLOv5DA based on YOLOv5s was proposed to recognize pig postures (standing, prone lying, and side lying), which incorporated Mosaic9 data augmentation, deformable convolution, and adaptive spatial feature fusion. Comparative and ablation experiments were conducted to verify the model’s effectiveness and reliability. Finally, we used YOLOv5DA to detect the posture distribution of pigs. The results revealed that the standing posture was more frequent in the morning and afternoon and the side-lying posture was most common at noon. This observation demonstrated that the posture of pigs is influenced by temperature variations. The study demonstrated that YOLOv5DA could accurately identify three postures of standing, prone lying, and side lying with an average precision (AP) of 99.4%, 99.1%, and 99.1%, respectively. Compared with YOLOv5s, YOLOv5DA could effectively handle occlusion while increasing the mean average precision (mAP) by 1.7%. Overall, our work provided a highly accurate, effective, low-cost, and non-contact strategy for posture detection in grouped pigs, which can be used to monitor pig behavior and assist in the early prevention of disease. Full article
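Mosaic-style augmentation, one of the three additions in YOLOv5DA, stitches several training images into one composite so each batch exposes the detector to more object scales and contexts. As an editor's illustrative sketch only (the paper's Mosaic9 tiles nine images around a shifted centre; this minimal numpy version tiles four, and `mosaic4` is a hypothetical name, not the authors' code):

```python
import numpy as np

def mosaic4(images, out_size=256, seed=0):
    """Combine four images (each at least out_size x out_size) into a 2x2 mosaic."""
    rng = np.random.default_rng(seed)
    h = w = out_size
    # A random centre point splits the canvas into four unequal quadrants.
    cy = int(rng.integers(h // 4, 3 * h // 4))
    cx = int(rng.integers(w // 4, 3 * w // 4))
    canvas = np.zeros((h, w, 3), dtype=np.uint8)
    regions = [(slice(0, cy), slice(0, cx)),   # top-left
               (slice(0, cy), slice(cx, w)),   # top-right
               (slice(cy, h), slice(0, cx)),   # bottom-left
               (slice(cy, h), slice(cx, w))]   # bottom-right
    for img, (rs, cs) in zip(images, regions):
        rh, rw = rs.stop - rs.start, cs.stop - cs.start
        canvas[rs, cs] = img[:rh, :rw]         # crop each source image to fit
    return canvas
```

In a real pipeline the bounding-box labels would be cropped and shifted along with each tile.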

18 pages, 3425 KiB  
Article
Corneal Endothelial Microscopy: Does a Manual Recognition of the Endothelial Cells Help the Morphometric Analysis Compared to a Fully Automatic Approach?
by Giulia Carlotta Rizzo, Rosa Di Grassi, Erika Ponzini, Silvia Tavazzi and Fabrizio Zeri
Viewed by 397
Abstract
This study investigated whether manual integration in the recognition of the endothelial cells produces different outcomes of morphometric parameters compared to a fully automatic approach. Eight hundred and ninety endothelial images, originally acquired by the Perseus Specular Microscope (CSO, Florence, Italy), from seven positions of right and left corneas were selected from the database of the Research Centre in Optics and Optometry at the University of Milano-Bicocca. For each image selected, two procedures of cell identification were performed by the Perseus: an automatic identification and a manual-integrated procedure to add potential additional cells with the available editing tool. At the end of both procedures, the endothelial cell density (ECD), coefficient of variation (CV), and hexagonality (HEX) of the mosaic were calculated. The HEX in the two procedures was significantly different for all comparisons (p < 0.001), but clinically negligible. No significant differences were found for the CV and ECD in the images of both eyes irrespective of the corneal position of acquisition (except for ECD in three corneal portions, p < 0.05). To conclude, it is possible to recognise a significantly higher number of cells using the manual-integrated procedure than it is using the fully automatic one, but this does not change the morphological parameters achieved. Full article

15 pages, 6456 KiB  
Article
Image Stitching of Low-Resolution Retinography Using Fundus Blur Filter and Homography Convolutional Neural Network
by Levi Santos, Maurício Almeida, João Almeida, Geraldo Braz, José Camara and António Cunha
Information 2024, 15(10), 652; https://doi.org/10.3390/info15100652 - 17 Oct 2024
Viewed by 491
Abstract
Great advances in stitching high-quality retinal images have been made in recent years. By contrast, very few studies have been carried out on low-resolution retinal imaging. This work investigates the challenges of low-resolution retinal images obtained by the D-EYE smartphone-based fundus camera. The proposed method uses homography estimation to register and stitch low-quality retinal images into a cohesive mosaic. First, a Siamese neural network extracts features from a pair of images, after which the correlation of their feature maps is computed. This correlation map is fed through four independent CNNs to estimate the homography parameters, each specializing in different corner coordinates. Our model was trained on a synthetic dataset generated from the Microsoft Common Objects in Context (MSCOCO) dataset; this work added an important data augmentation phase to improve the quality of the model. The model is then evaluated on the FIRE retina and D-EYE datasets, with performance measured by the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM). The obtained results are promising: the average PSNR was 26.14 dB, with an SSIM of 0.96 on the D-EYE dataset. Compared to the method that uses a single neural network for homography calculations, our approach improves the PSNR by 7.96 dB and achieves a 7.86% higher SSIM score. Full article
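Of the two metrics quoted in this abstract, PSNR reduces to a log-scaled mean-squared error against a reference image. A minimal numpy sketch (SSIM is considerably more involved and is omitted here):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: higher means closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")                  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For 8-bit images `peak` is 255; a reported 26.14 dB corresponds to a pixel RMSE of roughly 12.6 grey levels.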

14 pages, 4478 KiB  
Article
A New Kiwi Fruit Detection Algorithm Based on an Improved Lightweight Network
by Yi Yang, Lijun Su, Aying Zong, Wanghai Tao, Xiaoping Xu, Yixin Chai and Weiyi Mu
Agriculture 2024, 14(10), 1823; https://doi.org/10.3390/agriculture14101823 - 16 Oct 2024
Viewed by 605
Abstract
To address the challenges associated with kiwi fruit detection methods, such as low average accuracy, inaccurate recognition of fruits, and long recognition time, this study proposes a novel kiwi fruit recognition method based on an improved lightweight network S-YOLOv4-tiny detection algorithm. Firstly, the YOLOv4-tiny algorithm utilizes the CSPdarknet53-tiny network as a backbone feature extraction network, replacing the CSPdarknet53 network in the YOLOv4 algorithm to enhance the speed of kiwi fruit recognition. Additionally, a squeeze-and-excitation network has been incorporated into the S-YOLOv4-tiny detection algorithm to improve accurate image extraction of kiwi fruit characteristics. Finally, enhancing dataset pictures using mosaic methods has improved precision in the characteristic recognition of kiwi fruits. The experimental results demonstrate that the recognition and positioning of kiwi fruits have yielded improved outcomes. The mean average precision (mAP) stands at 89.75%, with a detection precision of 93.96% and a single-picture detection time of 8.50 ms. Compared to the YOLOv4-tiny detection algorithm network, the network in this study exhibits a 7.07% increase in mean average precision and a 1.16% acceleration in detection time. Furthermore, an enhancement method based on the Squeeze-and-Excitation Network (SENet) is proposed, as opposed to the convolutional block attention module (CBAM) and efficient channel attention (ECA). This approach effectively addresses issues related to slow training speed and low recognition accuracy of kiwi fruit, offering valuable technical insights for efficient mechanical picking methods. Full article
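The squeeze-and-excitation network added to S-YOLOv4-tiny reweights feature channels using globally pooled statistics. A framework-free numpy sketch of the mechanism only (the placeholder weights `w1`/`w2` stand in for the block's learned bottleneck parameters; this is not the authors' implementation):

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """SE channel recalibration on an (H, W, C) feature map.

    Squeeze: global average pool per channel. Excite: bottleneck MLP with
    ReLU then sigmoid. Scale: reweight each channel of the input.
    w1 has shape (C, C // r), w2 has shape (C // r, C) for reduction ratio r.
    """
    z = feat.mean(axis=(0, 1))             # squeeze -> (C,)
    s = np.maximum(z @ w1, 0.0)            # reduction + ReLU
    s = 1.0 / (1.0 + np.exp(-(s @ w2)))    # expansion + sigmoid gate in (0, 1)
    return feat * s                        # broadcast over H and W
```

The gate vector `s` lets the network emphasise channels whose pooled response correlates with the target (here, kiwi fruit texture) and suppress the rest.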

21 pages, 55191 KiB  
Article
Analysis of the Biennial Productivity of Arabica Coffee with Google Earth Engine in the Northeast Region of São Paulo, Brazil
by Maria Cecilia Manoel, Marcos Reis Rosa and Alfredo Pereira de Queiroz
Remote Sens. 2024, 16(20), 3833; https://doi.org/10.3390/rs16203833 - 15 Oct 2024
Viewed by 632
Abstract
Numerous challenges are associated with the classification of satellite images of coffee plantations. The spectral similarity with other types of land use, variations in altitude, topography, production system (shaded and sun), and the change in spectral signature throughout the phenological cycle are examples that affect the process. This research investigates the influence of biennial Arabica coffee productivity on the accuracy of Landsat-8 image classification. The Google Earth Engine (GEE) platform and the Random Forest algorithm were used to process the annual and biennial mosaics of the Média Mogiana Region, São Paulo (Brazil), from 2017 to 2023. The parameters evaluated were the general hits of the seven classes of land use and coffee errors of commission and omission. It was found that the seasonality of the plant and its development phases were fundamental in the quality of coffee classification. The use of biennial mosaics, with Landsat-8 images, Brightness, Greenness, Wetness, SRTM data (elevation, aspect, slope), and LST data (Land Surface Temperature) also contributed to improving the process, generating a classification accuracy of 88.8% and reducing coffee omission errors to 22%. Full article
(This article belongs to the Special Issue Cropland Phenology Monitoring Based on Cloud-Computing Platforms)

18 pages, 9898 KiB  
Article
Land Cover Mapping in East China for Enhancing High-Resolution Weather Simulation Models
by Bingxin Ma, Yang Shao, Hequn Yang, Yiwen Lu, Yanqing Gao, Xinyao Wang, Ying Xie and Xiaofeng Wang
Remote Sens. 2024, 16(20), 3759; https://doi.org/10.3390/rs16203759 - 10 Oct 2024
Viewed by 708
Abstract
This study was designed to develop a 30 m resolution land cover dataset to improve the performance of regional weather forecasting models in East China. A 10-class land cover mapping scheme was established, reflecting East China’s diverse landscape characteristics and incorporating a new category for plastic greenhouses. Plastic greenhouses are key to understanding surface heterogeneity in agricultural regions, as they can significantly impact local climate conditions, such as heat flux and evapotranspiration, yet they are often not represented in conventional land cover classifications. This is mainly due to the lack of high-resolution datasets capable of detecting these small yet impactful features. For the six-province study area, we selected and processed Landsat 8 imagery from 2015–2018, filtering for cloud cover. Complementary datasets, such as digital elevation models (DEM) and nighttime lighting data, were integrated to enrich the inputs for the Random Forest classification. A comprehensive training dataset was compiled to support Random Forest training and classification accuracy. We developed an automated workflow to manage the data processing, including satellite image selection, preprocessing, classification, and image mosaicking, thereby ensuring the system’s practicality and facilitating future updates. We included three Weather Research and Forecasting (WRF) model experiments in this study to highlight the impact of our land cover maps on daytime and nighttime temperature predictions. The resulting regional land cover dataset achieved an overall accuracy of 83.2% and a Kappa coefficient of 0.81. These accuracy statistics are higher than existing national and global datasets. The model results suggest that the newly developed land cover, combined with a mosaic option in the Unified Noah scheme in WRF, provided the best overall performance for both daytime and nighttime temperature predictions. 
In addition to supporting the WRF model, our land cover map products, with a planned 3–5-year update schedule, could serve as a valuable data source for ecological assessments in the East China region, informing environmental policy and promoting sustainability. Full article
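The overall accuracy and Kappa coefficient reported above both derive from the classification confusion matrix; Kappa additionally discounts the agreement expected by chance given the class marginals. A minimal numpy sketch of the two statistics:

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=np.float64)
    n = cm.sum()
    po = np.trace(cm) / n                             # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / n ** 2   # chance agreement
    return po, (po - pe) / (1.0 - pe)
```

A Kappa of 0.81 alongside 83.2% accuracy indicates the map's agreement is far above what the class distribution alone would produce.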
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

23 pages, 12047 KiB  
Article
Autonomous Underwater Vehicle Navigation Enhancement by Optimized Side-Scan Sonar Registration and Improved Post-Processing Model Based on Factor Graph Optimization
by Lin Zhang, Lianwu Guan, Jianhui Zeng and Yanbin Gao
J. Mar. Sci. Eng. 2024, 12(10), 1769; https://doi.org/10.3390/jmse12101769 - 5 Oct 2024
Viewed by 661
Abstract
Autonomous Underwater Vehicles (AUVs) equipped with Side-Scan Sonar (SSS) play a critical role in seabed mapping, where precise navigation data are essential for mosaicking sonar images to delineate the seafloor’s topography and feature locations. However, the accuracy of AUV navigation, based on Strapdown Inertial Navigation System (SINS)/Doppler Velocity Log (DVL) systems, tends to degrade over long-term mapping, which compromises the quality of sonar image mosaics. This study addresses the challenge by introducing a post-processing navigation method for AUV SSS surveys, utilizing Factor Graph Optimization (FGO). Specifically, the method utilizes an improved Fourier-based image registration algorithm to generate more robust relative position measurements. Then, through the integration of these measurements with data from SINS, DVL, and surface Global Navigation Satellite System (GNSS) within the FGO framework, the approach notably enhances the accuracy of the complete trajectory for AUV missions. Finally, the proposed method has been validated through both the simulation and AUV marine experiments. Full article
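Fourier-based image registration of the kind this abstract builds on rests on phase correlation: two images differing by a translation have a cross-power spectrum whose phase encodes the shift, and its inverse FFT is an impulse at that shift. A numpy sketch of the basic core only (the paper describes an improved, more robust variant; this recovers integer circular shifts):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular (dy, dx) shift such that b == np.roll(a, (dy, dx))."""
    cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    cross /= np.maximum(np.abs(cross), 1e-12)        # keep phase only
    corr = np.fft.ifft2(cross).real                  # impulse at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # Fold peaks past the midpoint back to negative shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

In the SSS mosaicking context, shifts recovered between overlapping sonar strips become the relative-position measurements fed into the factor graph.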

18 pages, 12594 KiB  
Article
A Simple Model to Study Mosaic Gene Expression in 3D Endothelial Spheroids
by Lucinda S. McRobb, Vivienne S. Lee, Fahimeh Faqihi and Marcus A. Stoodley
J. Cardiovasc. Dev. Dis. 2024, 11(10), 305; https://doi.org/10.3390/jcdd11100305 - 2 Oct 2024
Viewed by 526
Abstract
Aims: The goal of this study was to establish a simple model of 3D endothelial spheroids with mosaic gene expression using adeno-associated virus (AAV) transduction, with a future aim being to study the activity of post-zygotic mutations common to vascular malformations. Methods: In this study, 96-well U-bottom plates coated with a commercial repellent were seeded with two immortalized human endothelial cell lines and aggregation monitored using standard microscopy or live-cell analysis. The eGFP expression was used to monitor the AAV transduction. Results: HUVEC-TERT2 could not form spheroids spontaneously. The inclusion of collagen I in the growth medium could stimulate cell aggregation; however, these spheroids were not stable. In contrast, the hCMEC/D3 cells aggregated spontaneously and formed reproducible, robust 3D spheroids within 3 days, growing steadily for at least 4 weeks without the need for media refreshment. The hCMEC/D3 spheroids spontaneously developed a basement membrane, including collagen I, and expressed endothelial-specific CD31 at the spheroid surface. Serotypes AAV1 and AAV2QUADYF transduced these spheroids without toxicity and established sustained, mosaic eGFP expression. Conclusions: In the future, this simple approach to endothelial spheroid formation combined with live-cell imaging could be used to rapidly assess the 3D phenotypes and drug and radiation sensitivities arising from mosaic mutations common to brain vascular malformations. Full article
(This article belongs to the Section Basic and Translational Cardiovascular Research)

14 pages, 8002 KiB  
Article
A UAV Thermal Imaging Format Conversion System and Its Application in Mosaic Surface Microthermal Environment Analysis
by Lu Jiang, Haitao Zhao, Biao Cao, Wei He, Zengxin Yun and Chen Cheng
Sensors 2024, 24(19), 6267; https://doi.org/10.3390/s24196267 - 27 Sep 2024
Viewed by 542
Abstract
UAV thermal infrared remote sensing technology, with its high flexibility and high temporal and spatial resolution, is crucial for understanding surface microthermal environments. Despite DJI Drones’ industry-leading position, the JPG format of their thermal images limits direct image stitching and further analysis, hindering their broad application. To address this, a format conversion system, ThermoSwitcher, was developed for DJI thermal JPG images, and this system was applied to surface microthermal environment analysis, taking two regions with various local zones in Nanjing as the research area. The results showed that ThermoSwitcher can quickly and losslessly convert thermal JPG images to the Geotiff format, which is further convenient for producing image mosaics and for local temperature extraction. The results also indicated significant heterogeneity in the study area’s temperature distribution, with high temperatures concentrated on sunlit artificial surfaces, and low temperatures corresponding to building shadows, dense vegetation, and water areas. The temperature distribution and change rates in different local zones were significantly influenced by surface cover type, material thermal properties, vegetation coverage, and building layout. Higher temperature change rates were observed in high-rise building and subway station areas, while lower rates were noted in water and vegetation-covered areas. Additionally, comparing the temperature distribution before and after image stitching revealed that the stitching process affected the temperature uniformity to some extent. The described format conversion system significantly enhances preprocessing efficiency, promoting advancements in drone remote sensing and refined surface microthermal environment research. Full article
(This article belongs to the Special Issue Advances on UAV-Based Sensing and Imaging)

19 pages, 10946 KiB  
Article
Crop Growth Analysis Using Automatic Annotations and Transfer Learning in Multi-Date Aerial Images and Ortho-Mosaics
by Shubham Rana, Salvatore Gerbino, Ehsan Akbari Sekehravani, Mario Brandon Russo and Petronia Carillo
Agronomy 2024, 14(9), 2052; https://doi.org/10.3390/agronomy14092052 - 7 Sep 2024
Viewed by 1115
Abstract
Growth monitoring of crops is a crucial aspect of precision agriculture, essential for optimal yield prediction and resource allocation. Traditional crop growth monitoring methods are labor-intensive and prone to errors. This study introduces an automated segmentation pipeline utilizing multi-date aerial images and ortho-mosaics to monitor the growth of cauliflower crops (Brassica Oleracea var. Botrytis) using an object-based image analysis approach. The methodology employs YOLOv8, a Grounding Detection Transformer with Improved Denoising Anchor Boxes (DINO), and the Segment Anything Model (SAM) for automatic annotation and segmentation. The YOLOv8 model was trained using aerial image datasets, which then facilitated the training of the Grounded Segment Anything Model framework. This approach generated automatic annotations and segmentation masks, classifying crop rows for temporal monitoring and growth estimation. The study’s findings utilized a multi-modal monitoring approach to highlight the efficiency of this automated system in providing accurate crop growth analysis, promoting informed decision-making in crop management and sustainable agricultural practices. The results indicate consistent and comparable growth patterns between aerial images and ortho-mosaics, with significant periods of rapid expansion and minor fluctuations over time. The results also indicated a correlation between the time and method of observation which paves a future possibility of integration of such techniques aimed at increasing the accuracy in crop growth monitoring based on automatically derived temporal crop row segmentation masks. Full article

26 pages, 14527 KiB  
Article
SimMolCC: A Similarity of Automatically Detected Bio-Molecule Clusters between Fluorescent Cells
by Shun Hattori, Takafumi Miki, Akisada Sanjo, Daiki Kobayashi and Madoka Takahara
Appl. Sci. 2024, 14(17), 7958; https://doi.org/10.3390/app14177958 - 6 Sep 2024
Viewed by 491
Abstract
In the study of neural synapses in the nervous system, experts manually (or pseudo-automatically) detect bio-molecule clusters (e.g., of proteins) in many TIRF (Total Internal Reflection Fluorescence) images of a fluorescent cell and analyze their static/dynamic behaviors. This paper proposes a novel method for the automatic detection of bio-molecule clusters in a TIRF image of a fluorescent cell and conducts several experiments on its performance, e.g., mAP @ IoU (mean Average Precision @ Intersection over Union) and F1-score @ IoU, as objective/quantitative means of evaluation. The best of the proposed methods achieved an mAP of 0.695 and an F1-score of 0.250 at IoU = 0.5, and would have to be improved, especially with respect to its recall @ IoU. However, the proposed method can automatically detect bio-molecule clusters that are neither necessarily circular nor uniform in size, and it can output various histograms and heatmaps for deeper analyses of the detected clusters, whereas the particles detected by the Mosaic Particle Tracker 2D/3D, one of the most common conventional tools used by experts, can only be circular and uniform in size. In addition, this paper defines and validates SimMolCC, a novel similarity measure between the automatically detected bio-molecule clusters of fluorescent cells, and shows some examples of SimMolCC-based applications. Full article
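The mAP @ IoU and F1-score @ IoU figures in this abstract all hinge on Intersection over Union, which scores how well a detected cluster's bounding box overlaps a ground-truth box. A minimal sketch of the IoU test applied at the 0.5 threshold used above:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    ih = max(0.0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = iw * ih
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(detection, truth, thresh=0.5):
    """A detection counts as correct when its IoU with the matched truth box
    reaches the threshold (0.5 in the evaluation quoted above)."""
    return iou(detection, truth) >= thresh
```

Precision, recall, F1, and AP then follow from counting true positives under this matching rule.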
(This article belongs to the Special Issue Object Detection and Image Classification)

25 pages, 5178 KiB  
Article
Sugarcane Mosaic Virus Detection in Maize Using UAS Multispectral Imagery
by Noah Bevers, Erik W. Ohlson, Kushal KC, Mark W. Jones and Sami Khanal
Remote Sens. 2024, 16(17), 3296; https://doi.org/10.3390/rs16173296 - 5 Sep 2024
Viewed by 779
Abstract
One of the most important and widespread corn/maize virus diseases is maize dwarf mosaic (MDM), which can be induced by sugarcane mosaic virus (SCMV). This study explores a machine learning analysis of five-band multispectral imagery collected via an unmanned aerial system (UAS) during the 2021 and 2022 seasons for SCMV disease detection in corn fields. The three primary objectives are to (i) determine the spectral bands and vegetation indices that are most important or correlated with SCMV infection in corn, (ii) compare spectral signatures of mock-inoculated and SCMV-inoculated plants, and (iii) compare the performance of four machine learning algorithms, including ridge regression, support vector machine (SVM), random forest, and XGBoost, in predicting SCMV during early and late stages in corn. On average, SCMV-inoculated plants had higher reflectance values for blue, green, red, and red-edge bands and lower reflectance for near-infrared as compared to mock-inoculated samples. Across both years, the XGBoost regression model performed best for predicting disease incidence percentage (R2 = 0.29, RMSE = 29.26), and SVM classification performed best for the binary prediction of SCMV-inoculated vs. mock-inoculated samples (72.9% accuracy). Generally, model performances appeared to increase as the season progressed into August and September. According to Shapley additive explanations (SHAP analysis) of the top performing models, the simplified canopy chlorophyll content index (SCCCI) and saturation index (SI) were the vegetation indices that consistently had the strongest impacts on model behavior for SCMV disease regression and classification prediction. The findings of this study demonstrate the potential for the development of UAS image-based tools for farmers, aiming to facilitate the precise identification and mapping of SCMV infection in corn. Full article
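Of the two indices that SHAP ranked most influential, the SCCCI is commonly defined as the red-edge index NDRE normalized by NDVI; the exact formulation used in the paper (and that of the saturation index, SI) may differ. A sketch under that common definition, with band reflectances as plain fractions:

```python
def sccci(nir, red_edge, red):
    """Simplified Canopy Chlorophyll Content Index: NDRE / NDVI
    (a common definition; the study may use a variant).

    NDRE = (NIR - RedEdge) / (NIR + RedEdge)
    NDVI = (NIR - Red) / (NIR + Red)
    """
    ndre = (nir - red_edge) / (nir + red_edge)
    ndvi = (nir - red) / (nir + red)
    return ndre / ndvi
```

Normalizing by NDVI reduces the canopy-cover signal so the index tracks chlorophyll status more directly, which is plausibly why it responds to SCMV infection.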
(This article belongs to the Special Issue Crops and Vegetation Monitoring with Remote/Proximal Sensing II)

19 pages, 12227 KiB  
Article
Local-Peak Scale-Invariant Feature Transform for Fast and Random Image Stitching
by Hao Li, Lipo Wang, Tianyun Zhao and Wei Zhao
Sensors 2024, 24(17), 5759; https://doi.org/10.3390/s24175759 - 4 Sep 2024
Viewed by 898
Abstract
Image stitching aims to construct a wide field of view with high spatial resolution, which cannot be achieved in a single exposure. Typically, conventional image stitching techniques, other than deep learning, require complex computation and are thus computationally expensive, especially for stitching large raw images. In this study, inspired by the multiscale feature of fluid turbulence, we developed a fast feature point detection algorithm named local-peak scale-invariant feature transform (LP-SIFT), based on the multiscale local peaks and scale-invariant feature transform method. By combining LP-SIFT and RANSAC in image stitching, the stitching speed can be improved by orders compared with the original SIFT method. Benefiting from the adjustable size of the interrogation window, the LP-SIFT algorithm demonstrates comparable or even less stitching time than the other commonly used algorithms, while achieving comparable or even better stitching results. Nine large images (over 2600 × 1600 pixels), arranged randomly without prior knowledge, can be stitched within 158.94 s. The algorithm is highly practical for applications requiring a wide field of view in diverse application scenes, e.g., terrain mapping, biological analysis, and even criminal investigation. Full article
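The RANSAC stage this abstract pairs with LP-SIFT rejects bad feature correspondences before the transform is fitted. The paper estimates full homographies; as a much-reduced editor's illustration, this numpy sketch runs the same hypothesize-and-verify loop for a pure translation between matched keypoints:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2D translation from (N, 2) point matches contaminated by outliers.

    Each iteration samples one match, hypothesizes a shift, and counts how
    many other matches agree within tol pixels; the final estimate is refit
    on the largest consensus set.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        shift = dst[i] - src[i]                         # one-match hypothesis
        resid = np.linalg.norm(dst - (src + shift), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return (dst[best_inliers] - src[best_inliers]).mean(axis=0), best_inliers
```

A homography needs four matches per hypothesis instead of one, but the sample-score-refit structure is identical.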
(This article belongs to the Special Issue Multi-Modal Data Sensing and Processing)

10 pages, 1136 KiB  
Brief Report
Fibroblast Activation Protein Is Expressed by Altered Osteoprogenitors and Associated to Disease Burden in Fibrous Dysplasia
by Layne N. Raborn, Zachary Michel, Michael T. Collins, Alison M. Boyce and Luis F. de Castro
Cells 2024, 13(17), 1434; https://doi.org/10.3390/cells13171434 - 27 Aug 2024
Viewed by 769
Abstract
Fibrous dysplasia (FD) is a mosaic skeletal disorder involving the development of benign, expansile fibro-osseous lesions during childhood that cause deformity, fractures, pain, and disability. There are no well-established treatments for FD. Fibroblast activation protein (FAPα) is a serine protease expressed in pathological fibrotic tissues that has promising clinical applications as a biomarker and local pro-drug activator in several pathological conditions. In this study, we explored the expression of FAP in FD tissue and cells through published genetic expression datasets and measured circulating FAPα in plasma samples from patients with FD and healthy donors. We found that FAP genetic expression was increased in FD tissue and cells, and present at higher concentrations in plasma from patients with FD compared to healthy donors. Moreover, FAPα levels were correlated with skeletal disease burden in patients with FD. These findings support further investigation of FAPα as a potential imaging and/or biomarker of FD, as well as a pro-drug activator specific to FD tissue. Full article
(This article belongs to the Section Tissues and Organs)

17 pages, 6274 KiB  
Article
Enhanced Automatic Wildfire Detection System Using Big Data and EfficientNets
by Armando Fernandes, Andrei Utkin and Paulo Chaves
Fire 2024, 7(8), 286; https://doi.org/10.3390/fire7080286 - 16 Aug 2024
Viewed by 850
Abstract
Previous works have shown the effectiveness of EfficientNet—a convolutional neural network built upon the concept of compound scaling—in automatically detecting smoke plumes at a distance of several kilometres in visible camera images. Building on these results, we have created enhanced EfficientNet models capable of precisely identifying the smoke location due to the introduction of a mosaic-like output and achieving extremely reduced false positive percentages due to using partial AUROC and applying class imbalance. Our EfficientNets beat InceptionV3 and MobileNetV2 in the same dataset and achieved a true detection percentage of 89.2% and a false positive percentage of only 0.306% across a test set with 17,023 images. The complete dataset used in this study contains 26,204 smoke and 51,075 non-smoke images. This makes it one of the largest, if not the most extensive, datasets reported in the scientific literature for smoke plume imagery. So, the achieved percentages are not only among the best reported for this application but are also among the most reliable due to the extent and representativeness of the dataset. Full article
(This article belongs to the Special Issue Intelligent Fire Protection)
