Event-Driven Vision Sensor Architectures and Application Scenarios

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: 25 March 2025 | Viewed by 830

Special Issue Editor


Dr. Juan Antonio Leñero-Bardallo
Guest Editor
Institute of Microelectronics of Seville (IMSE-CNM), CSIC, Universidad de Sevilla, 41092 Sevilla, Spain
Interests: event-driven vision sensors; sun sensors; imagers; vision sensors; HDR image sensors

Special Issue Information

Dear Colleagues,

Event-based vision sensors have evolved over the last decade from proof-of-concept lab prototypes to industrial products. Many companies, such as Prophesee, Sony, and Samsung, now include them in their product catalogs. The proliferation of artificial intelligence algorithms and systems has significantly contributed to incorporating event-driven sensors as a front-end to obtain pre-processed information from visual scenes.

The scope of this Special Issue is twofold:

  • Presentation, description, and implementation of new sensor architectures at the pixel and system levels. Competitive modern implementations that exploit the advantages of vertical integration technologies and hybridization processes are welcome.
  • Case studies where event-driven sensors are used successfully. Examples include industrial applications, robotics, surveillance, space navigation, scientific instrumentation, biomedical applications, etc.

Dr. Juan Antonio Leñero-Bardallo
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • event-driven
  • vision sensors
  • applications
  • event camera applications

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (1 paper)

Research

19 pages, 3759 KiB  
Article
Fusing Events and Frames with Coordinate Attention Gated Recurrent Unit for Monocular Depth Estimation
by Huimei Duan, Chenggang Guo and Yuan Ou
Sensors 2024, 24(23), 7752; https://doi.org/10.3390/s24237752 - 4 Dec 2024
Viewed by 583
Abstract
Monocular depth estimation is a central problem in computer vision and robot vision, aiming at obtaining the depth information of a scene from a single image. In some extreme environments, such as dynamic scenes or drastic lighting changes, monocular depth estimation methods based on conventional cameras often perform poorly. Event cameras are able to capture brightness changes asynchronously but cannot acquire color or absolute brightness information. It is therefore an ideal choice to make full use of the complementary advantages of event cameras and conventional cameras. However, how to effectively fuse event data and frames to improve the accuracy and robustness of monocular depth estimation remains an open problem. To overcome these challenges, a novel Coordinate Attention Gated Recurrent Unit (CAGRU) is proposed in this paper. Unlike conventional ConvGRUs, the CAGRU abandons the practice of using convolutional layers for all the gates and instead designs coordinate attention as an attention gate, combining it with the convolutional gate. Coordinate attention explicitly models inter-channel dependencies and spatial coordinate information. The coordinate attention gate, in conjunction with the convolutional gate, enables the network to model feature information spatially, temporally, and across channels. On this basis, the CAGRU can enhance the information density of sparse events in the spatial domain while recursively propagating temporal information, thereby achieving more effective feature screening and fusion. It can effectively integrate feature information from event cameras and standard cameras, further improving the accuracy and robustness of monocular depth estimation. Experimental results show that the proposed method achieves significant performance improvements on different public datasets.
(This article belongs to the Special Issue Event-Driven Vision Sensor Architectures and Application Scenarios)
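
For readers curious how an attention gate can be combined with a convolutional gate inside a ConvGRU, the following is a minimal sketch in Python/PyTorch based only on the abstract above. The module names (CoordinateAttention, CAGRUCell), the decision to place coordinate attention on the update gate, and all layer sizes are illustrative assumptions, not the authors' released implementation.

# Hedged sketch of a coordinate-attention GRU cell ("CAGRU") for event/frame
# fusion. Everything below is an assumption drawn from the abstract, not the
# paper's official code.
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: pooling is factorised into H- and W-directional
    pooling so the attention map retains positional information along each
    spatial axis as well as inter-channel dependencies."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        x_h = x.mean(dim=3, keepdim=True)                        # pool over W -> (N, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # pool over H -> (N, C, W, 1)
        y = self.act(self.conv1(torch.cat([x_h, x_w], dim=2)))   # joint 1x1 conv
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                        # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))    # (N, C, 1, W)
        return x * a_h * a_w


class CAGRUCell(nn.Module):
    """ConvGRU cell whose update gate is modulated by coordinate attention,
    while the reset gate stays purely convolutional; this is one plausible
    reading of 'combining the attention gate with the convolutional gate'."""
    def __init__(self, in_ch: int, hidden_ch: int, kernel_size: int = 3):
        super().__init__()
        p = kernel_size // 2
        self.reset = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=p)
        self.update_conv = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=p)
        self.update_att = CoordinateAttention(hidden_ch)
        self.candidate = nn.Conv2d(in_ch + hidden_ch, hidden_ch, kernel_size, padding=p)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        xh = torch.cat([x, h], dim=1)
        r = torch.sigmoid(self.reset(xh))                          # convolutional reset gate
        z = torch.sigmoid(self.update_att(self.update_conv(xh)))   # attention-modulated update gate
        h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


if __name__ == "__main__":
    # Toy usage: fuse an encoded event representation into a recurrent state
    # initialised from frame features (hypothetical 32-channel, 64x64 maps).
    cell = CAGRUCell(in_ch=32, hidden_ch=32)
    frame_feat = torch.randn(1, 32, 64, 64)   # e.g. features from an RGB frame
    event_feat = torch.randn(1, 32, 64, 64)   # e.g. features from an event voxel grid
    fused = cell(event_feat, frame_feat)
    print(fused.shape)                        # torch.Size([1, 32, 64, 64])

In this reading, the attention gate decides, per spatial position and channel, how much of the new event-derived evidence overwrites the frame-derived state, which is one way sparse events could be densified against frame context during the recurrence.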
