Ubiquitous Computing: Car Dashboard Prototype
using Image Processing Techniques
Abhinav Chaudhary, Ganpati Dev Pandey, Tanmay Tyagi, Aditya C Singh
Abstract—This paper presents an implementation of Ubiquitous Computing in the context of developing a car dashboard prototype utilizing advanced image processing techniques. The system processes images of analog car dashboards, specifically detecting and analyzing key regions such as the speedometer and RPM gauges. Using the OpenCV and Numpy libraries, multiple image processing steps, including grayscale conversion, Gaussian blur, and edge detection, were implemented to accurately identify circular dials. Hough Circle Detection was applied to locate the gauges, and template matching was used to detect needle positions. The results show a successful mapping of needle positions to corresponding values. Future improvements will involve real-time data acquisition and extending the system's capabilities using an ESP32.

I. INTRODUCTION

Ubiquitous Computing, also known as pervasive computing, aims to integrate computation into the everyday physical world, making it accessible and useful in various environments. The goal is to enable devices to interact seamlessly with their surroundings and provide meaningful data that enhances user experiences. In modern automobiles, there is a growing demand for intelligent systems that provide real-time information to the driver, enhancing both safety and convenience.

In this project, we explore the application of Ubiquitous Computing by developing a car dashboard prototype that processes visual inputs from analog dials. Many traditional vehicles still use analog dashboards, which require manual monitoring of speed and RPM. Our motivation stems from the desire to bridge the gap between traditional analog systems and modern, smart interfaces. By using image processing techniques, we aim to automate the interpretation of dashboard readings, allowing for more efficient and accurate monitoring.

The motivation behind this project is threefold. First, it aims to enhance user experience: modern drivers are increasingly accustomed to digital displays, while many older or budget-friendly vehicles still rely on analog gauges. By integrating smart systems, we can provide an enriched user experience without requiring the complete replacement of the existing infrastructure. Second, the project focuses on improving safety and efficiency through accurate, real-time monitoring of speed, RPM, and other dashboard parameters, which are crucial for safe driving. Automating this process with image processing can help reduce human error and provide more reliable feedback to the driver. Lastly, this approach offers a cost-effective modernization solution. Many vehicles on the road today are not equipped with digital displays, and instead of opting for expensive replacements, a system that can process analog data and provide digital interpretations offers a more affordable means of upgrading existing vehicles.

The primary use cases of this project include:

• Smart Dashboards for Older Vehicles: The system can be retrofitted into older vehicles with analog dials, providing drivers with real-time digital readouts without altering the existing dashboard.
• Data Logging and Analytics: By integrating this system with a data storage platform like Firebase, it can be used to log and analyze driving behavior, performance, and vehicle health over time.
• Vehicle Diagnostic Systems: As part of a broader automotive diagnostic platform, this system could automate the detection and alerting of irregularities in RPM, speed, and other crucial metrics, potentially identifying vehicle issues early.
• Augmented Reality (AR) Integration: With the rise of AR technologies, the visual data extracted from the dashboard can be overlaid on a heads-up display (HUD) for drivers, further enhancing the driving experience.
• Fleet Management: In commercial vehicles, this system could be integrated into fleet management software, allowing companies to monitor vehicle performance remotely.

This project aims to contribute to the broader vision of intelligent automotive systems, where vehicles communicate with drivers and their surrounding environments more efficiently. By using image processing techniques, we not only enhance the monitoring and interpretation of dashboard metrics but also lay the groundwork for future innovations in smart vehicle technologies.

II. METHODOLOGY

A. Image Collection

The project dataset consists of eight images of the Honda Jazz 1.2 V MT IVTEC dashboard, captured with an iPhone 15 under various driving conditions to support template, speed, RPM, and fuel detection. The first image was taken from a close distance with the gear in neutral and the accelerator pressed. Subsequent images were captured while the vehicle was in motion: one at approximately 10 km/h with the right indicator activated and another from a slightly left angle at 20 km/h, where the seat belt warning was illuminated. Additional images aimed at speed and RPM detection were taken from varying distances and angles, including stationary shots to ensure clarity in template recognition. All images were
captured from the driver's seat, providing a comprehensive view of the dashboard.

B. Libraries and Tools

For this project, the following libraries were utilized:

• OpenCV: Image processing tasks like grayscale conversion, Gaussian blur, and edge detection.
• Matplotlib: Visualization of image processing outputs.
• Numpy: Array operations and mathematical computations.

C. Image Processing Techniques

The following steps outline the image processing workflow:

1) Grayscale Conversion: Conversion from BGR to grayscale to simplify processing. This reduces the image from three color channels (Blue, Green, and Red) to a single intensity channel, which makes further processing computationally more efficient and helps focus on the structural features of the image.
2) Gaussian Blur: Noise reduction is achieved using a Gaussian Blur filter with a 5x5 kernel. This filter smooths the image by averaging pixel values in a local neighborhood, reducing high-frequency noise and irrelevant details while preserving important edges.
3) Histogram Equalization: After applying the Gaussian Blur, histogram equalization is used to enhance the contrast of the image by redistributing the pixel intensities. This process stretches the range of pixel values, improving the visibility of edges and features in the image.
4) Canny Edge Detection: This technique is one of the most widely used edge detection algorithms. It works in several stages:
   • Gradient Calculation: The image gradients (changes in intensity) are computed in both horizontal and vertical directions using Sobel operators. This highlights areas with significant intensity changes, which often correspond to object boundaries.
   • Non-Maximum Suppression: The algorithm then filters out pixels that are not part of an edge by suppressing non-maximum values along the gradient directions. This step ensures that only the local maxima of the gradient are preserved, making the edges thinner and more defined.
   • Double Thresholding: Two threshold values (a lower and an upper threshold) are applied to classify pixels as strong edges, weak edges, or non-edges. Pixels with gradient values above the upper threshold are marked as strong edges, while those between the two thresholds are considered weak edges.
   • Edge Tracking by Hysteresis: Finally, weak edges that are connected to strong edges are retained, while those that are isolated are discarded. This step refines the edge map and reduces noise, ensuring that the most important edges remain.
5) Hough Circle Detection: This method is used to identify circular shapes in the image, particularly the speedometer and RPM gauges, which have circular dials.
   • Edge Detection Input: The Hough Circle Detection algorithm operates on the edge map produced by the Canny Edge Detector. It looks for circular patterns by considering the points that form potential circles.
   • Parameter Space: The algorithm transforms the edge points into a parametric space defined by the circle equation (x − a)² + (y − b)² = r², where (a, b) are the coordinates of the circle's center and r is its radius. It uses a voting procedure to determine the possible centers and radii of the circles.
   • Accumulator Matrix: For each edge point in the image, the algorithm calculates the possible circle centers and increments a corresponding location in the accumulator matrix. The points in this matrix that receive the most votes represent the most likely circle centers.
   • Circle Detection: After identifying potential centers and radii from the accumulator matrix, the algorithm selects the most likely circle parameters based on a predefined threshold. These parameters are then used to draw the detected circles, representing the speedometer and RPM gauges.
6) ROI Extraction: Once the circular dials (speedometer and RPM gauges) are detected, the Region of Interest (ROI) is extracted by cropping the identified circles from the image. This step isolates the gauges for further analysis, such as needle detection and reading interpretation.

D. Speed and RPM Detection

Once the circular dials (speedometer and RPM gauge) are extracted, the next crucial step is to detect the needle's position and map its angle to real-world speed or RPM values. The process leverages interpolation, which allows us to translate the needle's angular position into its corresponding measurement value.

1) Needle Detection: The needle is detected using edge detection techniques such as Canny Edge Detection and the Hough Line Transform, which help identify the straight lines in the dial. The detected lines are then filtered to isolate the needle by selecting the line that passes through the center of the dial and has the correct orientation.

2) Mapping Angle to Speed and RPM: Once the needle's position is identified, we calculate the angle it forms relative to a reference point (typically the 0 mark on the gauge). The angle is then mapped to either the speed or RPM value using linear interpolation.

The mapping formulas are as follows:

Speed Mapping Formula:

Speed = (Angle − min angle) / (max angle − min angle) × ΔSpeed + min speed
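The speed and RPM mappings reduce to a single linear-interpolation helper. A minimal Python sketch is given below; the angle and value limits are the ones quoted in this paper, while the function name and the sample angles are purely illustrative:

```python
def angle_to_value(angle_deg, min_angle, max_angle, min_value, max_value):
    """Linearly interpolate a needle angle (in degrees) to a gauge reading."""
    fraction = (angle_deg - min_angle) / (max_angle - min_angle)
    return fraction * (max_value - min_value) + min_value

# Speedometer limits from this paper: -30 deg -> 0 km/h, 240 deg -> 220 km/h.
speed = angle_to_value(105.0, -30.0, 240.0, 0.0, 220.0)   # -> 110.0 km/h
# Tachometer limits from this paper: -30 deg -> 0 RPM, 120 deg -> 8000 RPM.
rpm = angle_to_value(45.0, -30.0, 120.0, 0.0, 8000.0)     # -> 4000.0 RPM
```

The same helper serves both gauges; only the calibration constants differ, which is exactly what the gauge calibration step supplies.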
where:
• ΔSpeed = max speed − min speed
• min angle = −30° (corresponding to 0 km/h)
• max angle = 240° (corresponding to 220 km/h)

RPM Mapping Formula:

RPM = (Angle − min angle) / (max angle − min angle) × ΔRPM + min rpm

where:
• min angle = −30° (corresponding to 0 RPM)
• max angle = 120° (corresponding to 8,000 RPM)

3) Gauge Calibration: The calibration process involves defining the minimum and maximum angles corresponding to the lowest and highest possible values for speed and RPM. These angles are mapped linearly to the real-world values displayed on the gauge. This ensures accurate real-time readings, where the visual position of the needle corresponds to actual speed or RPM measurements.

E. Template Matching

Template matching is a key technique used to identify specific patterns within dashboard images. It is particularly effective for detecting needle positions on gauges and various dashboard indicators. The process involves the following steps:

• Template and Target: A smaller template image is created for the needle and dashboard indicators (e.g., seatbelt sign, warning lights) to facilitate detection within the larger image.
• Sliding Window Technique: The template is moved across the target image one pixel at a time, comparing it to overlapping regions of the target.
• Similarity Measurement: We use Normalized Cross-Correlation (NCC) to calculate a correlation coefficient, indicating the similarity between the template and the target image region.
• Thresholding: A threshold is applied to the NCC values to classify significant matches. If the coefficient exceeds this threshold, it suggests potential detection of the needle or indicator, helping to filter out false positives.
• Indicator Detection: The technique is also applied to identify critical indicators such as:
  – Seatbelt Warning Indicator: For detecting the visibility of the seatbelt icon.
  – Check Engine Light: To accurately detect any warning lights.
  – Turn Signal Indicators: For identifying the active status of turn signals.
• Post-processing: Additional validation steps may be implemented to confirm detections, including cross-referencing indicators with expected dashboard behavior based on the vehicle's operational state.

F. Digit Detection

Digit detection is essential for extracting numeric values from dashboard gauges. This project employs EasyOCR for effective recognition from dashboard images, following these steps:

1) Image Preprocessing: Dashboard images are preprocessed with grayscale conversion, normalization, and noise reduction to enhance readability and contrast.
2) Region of Interest (ROI) Selection: Areas containing numeric readings (e.g., speed, RPM) are designated as ROIs, focusing processing efforts on relevant sections to improve accuracy.
3) OCR Application: EasyOCR is applied to the selected ROIs, utilizing deep learning models to accurately recognize digits and provide confidence scores.
4) Post-processing: Detected digits are validated against expected formats, filtering out misrecognized characters to ensure reliable readings.
5) Integration with Speed and RPM Detection: Extracted values are integrated with the mapping functions to correlate detected needle angles with actual speed and RPM, enhancing analysis accuracy.

III. RESULTS

The prototype successfully identified the speedometer and RPM gauges and mapped the needle positions to their corresponding values. The results are presented in the following subsections.

A. Detected Circles

The Hough Circle Detection algorithm efficiently located the circular dials on the dashboard, identifying the boundaries of the gauges, as shown in Figure 1. The detected circles serve as the foundation for subsequent analysis of needle positions.

Fig. 1. Detected circles on the dashboard using Hough Circle Detection.

B. Detected Edges

Canny Edge Detection was applied to highlight significant edges within the dashboard image. This step is crucial for identifying the outlines and contours of the gauges, facilitating accurate needle detection. The results of edge detection are displayed in Figure 2.
Fig. 2. Detected edges on the dashboard using Canny Edge Detection.

C. Matched Templates

Template matching was employed to accurately detect the needle positions and other indicators on the dashboard. The results indicate successful recognition of the needle's angle and other critical signs, as illustrated in Figure 4.

Fig. 4. Matched templates showing detected needle positions and indicators on the dashboard.

The overall system demonstrated high accuracy in interpreting the dashboard's data, effectively mapping the detected needle positions to corresponding speed and RPM values.

IV. DISCUSSION AND FUTURE WORK

The current system provides a solid foundation for a fully functional car dashboard prototype, effectively identifying critical indicators using Hough Circle Detection, Canny Edge Detection, and template matching. Future work will focus on enhancing system robustness by incorporating a wider variety of images, including those captured from different angles, which will improve the algorithm's detection capabilities. Additionally, we aim to integrate real-time data acquisition through Firebase for dynamic dashboard updates and develop a web interface to connect the image-processed data with an ESP32 module and a cloud server.

Moreover, incorporating machine learning techniques, such as convolutional neural networks, will further enhance needle detection accuracy and expand the analysis to other gauges. Gathering user feedback will be essential for iterative improvements, ultimately leading to a more intuitive and comprehensive user experience that may include monitoring additional vehicle metrics like fuel efficiency and engine temperature.

V. TEAM COLLABORATION AND WORKFLOW

The project was executed collaboratively by four team members, each responsible for specific tasks:

Aditya C Singh: Collected a diverse set of dashboard images and co-created the project presentation.
Abhinav Chaudhary: Handled the report writing, image preprocessing, and implemented the speed and RPM detection algorithms.
Tanmay Tyagi: Collaborated with Aditya on template matching and co-created the project presentation.
Ganpati Dev Pandey: Focused on digit detection using EasyOCR to extract numeric values from the dashboard images.

This division of labor ensured efficient progress and effective communication throughout the project.

REFERENCES

• OpenCV Documentation: https://opencv.org/
• Matplotlib Documentation: https://matplotlib.org/
• Numpy Documentation: https://numpy.org/
• Akbari Sekehravani, E., Babulak, E., and Masoodi, M., "Implementing Canny Edge Detection Algorithm for Noisy Image," Bulletin of Electrical Engineering and Informatics, vol. 9, no. 4, pp. 1404–1410, 2020. DOI: 10.11591/eei.v9i4.1837.
• Rizon, M., Yazid, H., Saad, P., md Shakaff, A. Y., and Saad, A., "Object Detection Using Circular Hough Transform," American Journal of Applied Sciences, vol. 2, pp. 1606–1609, 2005. DOI: 10.3844/ajassp.2005.1606.1609.
• Jaided AI, EasyOCR Documentation, 2021. https://www.jaided.ai/easyocr/documentation/