Final Project Report
is a bonafide student of this institute and the work has been carried out by him/her under the
supervision of Prof. Jayshree Muley and it is approved for the partial fulfillment of the
requirement of Savitribai Phule Pune University, for the award of the degree of Bachelor of
Engineering (Computer Engineering).
(Dr. V. V. Patil)
Principal,
International Institute of Information Technology, Pune,
Place: Pune
Date:
ACKNOWLEDGEMENT
We would like to express our heartfelt gratitude to Prof. Jayshree Muley, our internal guide,
for her constant support and valuable guidance as we work on the ongoing project titled “Car
Damage Detection”. Her expertise and feedback have been instrumental in shaping the direction
of this research, and we look forward to her continued mentorship as the project progresses.
We also extend our sincere thanks to Dr. Ajitkumar Shitole, Head of the Department, for
providing us with the opportunity to undertake this project. The research involved has been a great
learning experience.
We also extend sincere thanks to all the staff members of the Department of Computer
Engineering and Dr. Vaishali V. Patil, Principal, for helping us in various aspects.
ABSTRACT
This project involves creating a Full Stack Computer Vision web application to detect and assess
vehicle damage using machine learning and deep learning techniques. The main goal is to
accurately identify various types of damage, such as dents, scratches, cracked glass, bumper dents,
shattered headlights, and more. The application uses the VGG16 convolutional neural network
(CNN), enhanced through transfer learning, to achieve high accuracy on a custom dataset of car
damages. This method improves performance while reducing training time and computational
demands.
The application features a user-friendly interface where users can upload images of damaged cars.
The backend, built with Python and Django, processes each image to pinpoint damaged areas and
classify the type and severity of damage. The application also estimates both the depth and area
size of the damage, providing users with valuable insights. The frontend, developed with HTML,
CSS, JavaScript, and Ajax, ensures a responsive and interactive user experience.
This system supports vehicle owners, insurers, and repair services by offering an efficient tool for
damage assessment. Rigorous testing demonstrates the application’s effectiveness in streamlining
damage evaluations, enhancing decision-making for repairs and insurance claims.
TABLE OF CONTENTS
LIST OF ABBREVIATIONS
LIST OF FIGURES
ABBREVIATION ILLUSTRATION
AI Artificial Intelligence
AJAX Asynchronous JavaScript and XML
CNN Convolutional Neural Network
CSS Cascading Style Sheets
DL Deep Learning
HTML HyperText Markup Language
ML Machine Learning
OCR Optical Character Recognition
R-CNN Region-based Convolutional Neural Network
UML Unified Modeling Language
VGG Visual Geometry Group
YOLO You Only Look Once
LIST OF FIGURES
1. INTRODUCTION
The automotive industry has witnessed remarkable advancements in technology over the past few
decades, leading to increased efficiency, safety, and overall performance of vehicles. However, as
vehicles become more complex and integral to daily life, the need for efficient and accurate
damage assessment has become paramount. Traditional methods of assessing vehicle damage often
involve manual inspections, which can be time-consuming, subjective, and prone to human error.
This project aims to address these challenges through the development of a Full Stack Computer
Vision web application for car damage detection.
1.1 Motivation
The motivation for this project arises from the need for quick, accurate car damage assessment in
cases like accidents, insurance claims, and inspections. With millions of vehicles on the road, the
risk of damage is high, highlighting the demand for efficient evaluation solutions. Leveraging
advancements in artificial intelligence and machine learning, this project aims to automate damage
assessments, reducing dependency on manual inspections and improving accuracy. By employing
deep learning, specifically the VGG16 CNN model, the project strives to create a system that
identifies car damage types and assesses their severity, representing a significant advancement in
promoting efficiency, precision, and transparency in automotive damage evaluations.
2.3 Deep Learning Based Car Damage Detection, Classification and Severity
Author: Ritik Gandhi
Description:
This paper proposes a deep learning-based framework to automate car damage detection
and severity assessment for faster insurance claims, using models like ResNet50, YOLO, and
DenseNet, with transfer learning outperforming fine-tuning. The integrated application aims to
enhance customer service and profitability in the insurance industry.
2.8 Car Damage Detection and Analysis Using Deep Learning Algorithm For
Automotive
Authors: Rakshata P, Padma H V, Pooja M
Description:
This research explores deep learning techniques for image restoration, applying them to car
damage detection by reconstructing missing parts of images while keeping the reconstruction
consistent with the surrounding image features.
2.9 A Deep Learning and Transfer Learning Approach for Vehicle Damage
Detection
Authors: Lin Li, Koshin Ono, Chun-Kit Ngan
Description:
This paper uses Mask R-CNN for vehicle image segmentation and CNN for damage
classification, employing transfer learning with pre-trained weights from Microsoft COCO and
ImageNet. The system, tested on 864 vehicle images, achieves 87.5% accuracy in detecting
bumper damages, aiming to reduce time and cost in insurance claim processing.
3.1 Introduction
The project aims to automate the vehicle damage detection and assessment process, allowing
insurance firms to streamline their claims management operations. The system will enable users to
upload images of damaged vehicles, which will be processed using deep learning models to
determine the location and severity of the damage.
The scope of this project encompasses the development of a web application that utilizes machine
learning and computer vision technologies to identify and assess various types of car damage. The
application will allow users to upload images of damaged vehicles, which will be analyzed to
detect and classify damages such as dents, scratches, cracks, shattered glass, and more. The system
will also provide insights into the severity of the damage by estimating both the area size and
depth.
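As a sketch of how the area and severity estimates described above might be derived, the fraction of image pixels flagged as damaged can be bucketed into severity levels. The helper name and the thresholds below are illustrative assumptions, not values from the system:

```python
# Illustrative sketch: bucket damage severity from the fraction of image
# pixels flagged as damaged. Thresholds are assumptions for illustration.

def severity_from_area(damaged_pixels: int, total_pixels: int) -> tuple[float, str]:
    """Return (damaged area as a percentage of the image, severity label)."""
    if total_pixels <= 0:
        raise ValueError("total_pixels must be positive")
    pct = 100.0 * damaged_pixels / total_pixels
    if pct < 5:
        label = "minor"
    elif pct < 20:
        label = "moderate"
    else:
        label = "severe"
    return pct, label

# Example: a 640x480 image with 30,000 damaged pixels
pct, label = severity_from_area(30_000, 640 * 480)
```

A real system would obtain `damaged_pixels` from a segmentation mask rather than taking it as an input.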
The project aims to:
• Develop a user-friendly interface for uploading images.
• Implement VGG16 for image classification and damage detection.
• Provide detailed analysis reports for users, including type of damage and severity
assessment.
• Facilitate the decision-making process for vehicle owners, insurance companies, and repair
shops.
1. Vehicle Owners:
o Characteristics: Individuals seeking to assess damages on their vehicles.
o Requirements: User-friendly interface, accurate damage assessment, detailed
reports.
2. Insurance Agents:
o Characteristics: Professionals evaluating claims based on vehicle damage.
o Requirements: Accurate damage classification, severity estimates, and detailed reports to
support claim decisions.
1. It is assumed that users have access to a stable internet connection to upload images and receive
results.
2. The application depends on the availability of a trained VGG16 model for accurate damage
detection.
3. The performance of the application is contingent on the hardware specifications of the server
hosting the application.
• Description: Users must be able to upload images of their vehicles through a simple and
intuitive interface.
• Functional Requirements:
o The system shall allow users to upload images in common formats (e.g., JPEG,
PNG).
o The system shall provide real-time feedback on the upload status.
o The system shall preprocess the uploaded images for further analysis.
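The format requirement above can be enforced server-side by checking each file's magic bytes rather than trusting its extension. This is a hedged sketch, assuming only JPEG and PNG are accepted; the helper name is illustrative, not part of the actual codebase:

```python
# Sketch of server-side upload validation: identify JPEG and PNG files by
# their leading magic bytes instead of their file extensions.
from typing import Optional

JPEG_MAGIC = b"\xff\xd8\xff"
PNG_MAGIC = b"\x89PNG\r\n\x1a\n"

def detect_image_format(data: bytes) -> Optional[str]:
    """Return 'jpeg' or 'png' for supported data, None otherwise."""
    if data.startswith(JPEG_MAGIC):
        return "jpeg"
    if data.startswith(PNG_MAGIC):
        return "png"
    return None
```

Rejecting unsupported data before preprocessing keeps malformed uploads out of the analysis pipeline.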
• Description: The application shall utilize the VGG16 model to detect and classify damages
in the uploaded images.
• Functional Requirements:
o The system shall identify and classify specific types of damage (dents, scratches,
etc.).
o The system shall provide an accuracy score for each classification.
o The system shall estimate the area and depth of the damage.
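The classification-with-score requirement above amounts to post-processing the model's raw outputs. A minimal sketch, assuming the class list shown (the report's exact label set is not specified), converts logits into a predicted class and a confidence value:

```python
# Illustrative post-processing: softmax over raw model outputs (logits),
# returning the top damage class and its confidence. Class names are
# assumptions for illustration.
import math

CLASSES = ["dent", "scratch", "crack", "shattered_glass", "broken_lamp", "no_damage"]

def classify(logits: list[float]) -> tuple[str, float]:
    """Return (predicted class, confidence in [0, 1])."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]
```

In the deployed system the logits would come from the VGG16 model's final layer; here they are passed in directly.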
• The web application shall have a responsive design, ensuring usability across different devices
(desktops, tablets, smartphones).
• The main user interface shall include sections for image upload, results display, and report
generation.
The application will run on standard server hardware capable of handling web requests and
running the VGG16 model.
• The application will be served through a web framework (e.g., Django) to handle requests and
responses.
• The system will utilize libraries such as Keras and OpenCV for image processing and
model inference.
• The application will communicate over HTTPS to ensure secure data transfer.
• API endpoints will be defined for handling image uploads and retrieving analysis results.
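One way the analysis endpoint's JSON response could be structured is sketched below; the field names are assumptions for illustration, since the report does not define an API schema:

```python
# Hypothetical response payload for the damage-analysis endpoint,
# serialized to JSON. Field names are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class DamageReport:
    damage_type: str   # e.g. "dent"
    confidence: float  # classifier confidence, 0..1
    area_pct: float    # estimated damaged area, % of image
    severity: str      # "minor" | "moderate" | "severe"

report = DamageReport("dent", 0.91, 12.4, "moderate")
payload = json.dumps(asdict(report))
```

A fixed schema like this lets the AJAX frontend render results without parsing free-form text.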
• The system shall process uploaded images and return results within 5 seconds under normal
load conditions.
• The application shall support concurrent users without performance degradation.
• The system shall implement measures to prevent unauthorized access to user data.
• Regular backups shall be performed to protect against data loss.
• The application shall use encryption for data storage and transmission.
• User authentication and authorization mechanisms shall be in place to ensure secure access.
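The authentication and encrypted-storage requirements above imply that passwords should never be stored in plain text. A minimal sketch using the standard library follows; the iteration count is an illustrative choice, not the project's actual setting:

```python
# Hedged sketch: store passwords as salted PBKDF2 hashes and verify them
# with a constant-time comparison.
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None,
                  iterations: int = 200_000) -> Tuple[bytes, bytes]:
    """Return (salt, digest) for storage alongside the user record."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison
```

In a Django deployment the framework's built-in password hashers would normally handle this.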
• Usability: The application shall be intuitive and easy to navigate for all user classes.
• Reliability: The system shall maintain a high level of uptime and reliability, ensuring users
can access services as needed.
1. Web Framework: Django (Python) will be used as the backend framework for developing
the application.
2. Front-end Technologies:
• HTML, CSS, JavaScript for the user interface.
• AJAX for asynchronous data loading and improved user experience.
3. Machine Learning Libraries:
• Keras for implementing the VGG16 model.
• OpenCV for image processing tasks.
4. Database Software: MySQL or PostgreSQL for managing the relational database.
5. Operating System: Windows or Linux
6. Programming Language: Python
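The Keras-plus-VGG16 setup listed above can be sketched as follows. The class count, added layer sizes, and optimizer are illustrative assumptions; the report does not specify the project's actual configuration:

```python
# Hedged sketch: VGG16 transfer learning in Keras. Freeze the ImageNet
# convolutional base and train only a small classifier head, which cuts
# training time and compute as described in the abstract.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 6  # assumed label count (dent, scratch, crack, ...)

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # keep pre-trained features fixed

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on the custom damage dataset, optionally unfreezing the last convolutional block for fine-tuning.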
1. Server Specifications:
• CPU: Multi-core processor (e.g., Intel Xeon or equivalent) for efficient processing.
• RAM: Minimum of 16 GB to handle concurrent user requests and image processing.
• Storage: SSD storage with at least 100 GB to accommodate images and database size.
2. Development Environment:
• Personal computer with at least 8 GB of RAM and a modern processor for development
and testing purposes.
• GPU (Graphics Processing Unit): Recommended for faster training of the VGG16 model,
especially if a large dataset is used.
The Agile Model focuses on iterative and incremental development, making it ideal for projects
where requirements are expected to evolve over time. This model is preferable here because:
• Requirements for the car damage detection system are likely to change or be refined during
development.
• Each module or feature can be tested and improved continuously throughout the project.
Phases in the Agile Model:
• Iteration Planning: Define each iteration’s goals and plan tasks like image processing,
database integration, frontend, and backend development.
• Development & Testing (Incremental): Develop and test each feature incrementally. For
instance, implement damage detection in one iteration, then work on report generation in
the next.
• Review & Feedback: Collect feedback at the end of each iteration to improve
functionality, accuracy, and user experience.
• Final Deployment & Maintenance: Deploy the complete system once all features are
integrated and thoroughly tested.
5.1 Advantages
1. Automated Damage Assessment: The system provides a quick and automated way to assess
car damages, reducing the need for manual inspection.
2. Precision in Damage Detection: By using a trained VGG16 model and computer vision
techniques, it can accurately detect various types of damages such as dents, scratches,
cracks, and broken parts.
3. Time Efficiency: Automating the process of damage detection significantly reduces the
time taken to identify and report the damage severity and size.
4. Scalability: The system can be scaled to handle multiple users and can integrate additional
functionalities for other vehicle types or additional damage categories.
5. Cost-Effective: Reduces the costs associated with manual inspections by minimizing the
labor required, potentially saving users and companies money in the long term.
6. User-Friendly Interface: With a frontend design that’s accessible to all users, the platform
makes it easy to upload images and receive damage reports.
5.2 Limitations
1. Image Quality Dependency: The system's accuracy heavily depends on the quality of the
uploaded images. Poor image resolution or lighting conditions can reduce detection
accuracy.
2. Model Limitations: The VGG16 model, while powerful, may sometimes fail to recognize
highly obscure or complex damages that deviate from the training data.
3. Damage Type Restriction: Currently limited to a set of predefined damage types, which
may not cover every possible scenario of car damage.
4. Processing Power: Deep learning models like VGG16 require substantial computational
resources, which could limit real-time application unless optimized or run on powerful
servers.
5.3 Applications
1. Insurance Claims Processing: Can be used by insurance companies to automate and speed
up the claims process by providing an accurate damage assessment report.
2. Automobile Industry: Useful for car manufacturers and service centers to quickly assess
and report vehicle damages in quality control or during servicing.
3. Car Rental and Leasing Services: The system can assist rental and leasing companies in
documenting and analyzing damage before and after rentals.
4. Automated Inspection Kiosks: This technology can be integrated into self-service
inspection kiosks in parking lots or service centers, enabling customers to inspect vehicle
damage without needing a technician.
5. E-commerce for Used Cars: Helps online used car marketplaces by providing a reliable
method to validate the condition of cars, enhancing customer trust and transparency.
Conclusions
The car damage detection system developed in this project successfully automates the process of
vehicle damage detection and assessment. Powered by deep learning and transfer learning, the
system significantly reduces the time and cost associated with insurance claim processing.
Published evaluations, such as Qaddour and Siddiqa (2021), report that Inception ResNetV2 can
outperform VGG-16 and VGG-19 in damage detection, localization, and severity assessment,
pointing to a promising direction for future model upgrades. The web-based system is practical
and scalable, making it a valuable tool for the insurance industry, especially in handling large
volumes of vehicle damage claims efficiently and accurately.
Future Work
• Data Expansion: Incorporating a larger dataset of damaged vehicle images would improve
model accuracy and reduce overfitting.
• Integration with Repair Cost Databases: In future versions, the system could integrate
with repair part pricing databases to provide real-time repair cost estimates based on the
detected damage.
• Mobile Application: A mobile app could be developed to allow users to directly capture
and upload images of damaged vehicles, making the system more accessible.
• Additional Models: Testing additional deep learning models, such as EfficientNet or
MobileNet, could enhance the system’s performance, especially for mobile deployment
where lightweight models are essential.
• Geolocation and Tracking: Adding geolocation features could allow the system to track
the location of accidents and suggest nearby repair shops or service centers.
1. Satisfiability Analysis
- Classification and Severity Assessment: Car damage detection is feasible using CNNs like
VGG16, which can handle complex patterns and classify damage types (e.g., dents, scratches)
based on spatial features in images.
- Feasibility of Image Processing: CNNs can effectively approximate the extent and severity of
damage through pattern recognition, making real-time analysis achievable.
2. Complexity Analysis
- Problem Class (P): Inference with a trained CNN runs in polynomial time in the image size, so
each classification query is a tractable (class P) problem.
- NP-Hard Aspect: Training a CNN requires non-convex parameter optimization in high
dimensions, which is NP-Hard in general, but good solutions are found in practice with methods
like stochastic gradient descent (SGD).
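The SGD method named above can be illustrated with a toy one-parameter problem; the loss, learning rate, and noise level are chosen purely for illustration:

```python
# Toy illustration of stochastic gradient descent (SGD) minimizing the
# squared loss L(w) = (w - 3)^2. The injected gradient noise stands in
# for mini-batch sampling during CNN training.
import random

def sgd(w0: float, lr: float = 0.1, steps: int = 200) -> float:
    w = w0
    for _ in range(steps):
        grad = 2 * (w - 3)             # exact gradient dL/dw
        grad += random.gauss(0, 0.01)  # noise mimics a sampled mini-batch
        w -= lr * grad
    return w
```

Despite the noisy gradients, the iterate settles near the minimizer w = 3, which is why SGD works well even on the non-convex losses of deep networks.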
3. Mathematical Models
- Graph Theory & Linear Algebra: CNN operations, such as matrix multiplications, effectively
process image data in polynomial time, making the solution practical.
- Probability & Statistics: Probabilistic and statistical methods aid in model tuning and improve
the accuracy and reliability of damage detection.
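The linear-algebra claim above can be made concrete with a minimal 2-D convolution written as explicit sums (pure Python, no libraries): each output value is a dot product of the kernel with an image patch, so a convolutional layer is polynomial-time arithmetic.

```python
# Minimal valid-mode 2-D convolution (technically cross-correlation, as
# used in CNN layers). Each output entry is a kernel-patch dot product.

def conv2d(image, kernel):
    H, W = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge kernel applied to a 3x3 image with a vertical stripe
edge = conv2d([[1, 2, 0], [1, 2, 0], [1, 2, 0]], [[1, -1], [1, -1]])
```

Libraries such as Keras and OpenCV perform the same computation as batched matrix multiplications for speed.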
• Jihad Qaddour, Syeda Ayesha Siddiqa, "Automatic Damaged Vehicle Estimator Using
Enhanced Deep Learning Algorithm.", International Journal of Advanced Trends in
Computer Science and Engineering. Illinois State University, Normal, IL, USA, 2021.
DOI.
This paper's Mask R-CNN, enhanced with an Inception ResNetV2 backbone, automates car
damage detection, localization, and severity assessment, improving the efficiency of insurance
claims processing.
• Hashmat Shadab Malik, Mahavir Dwivedi, S. N. Omakar, Satya Ranjan Samal, Aditya
Rathi, Edgar Bosco Monis, Bharat Khanna and Ayush Tiwari, "Deep Learning-Based Car
Damage Detection", EasyChair, March 22, 2020.
This study uses pre-trained CNNs and YOLO to achieve high accuracy in car damage
classification, supporting automated insurance claims processing.
• Ritik Gandhi, ‘Deep Learning Based Car Damage Detection, Classification and Severity’,
Shri Govindram Seksaria Institute of Technology and Science, Indore, India, October 06,
2021.
A framework using ResNet50, YOLO, and DenseNet provides faster insurance claims by
automating car damage detection and severity assessment.
• Xinkuang Wang, Wenjing Li, Zhongcheng Wu, “CarDD: A New Dataset for Vision-based
Car Damage Detection”, Journal of Latex Class Files, Vol. 18, No. 9, March 2022.
CarDD introduces a comprehensive dataset with over 9,000 annotated car damage images,
advancing research in vision-based car damage detection.
• Priti Warungse, Atharva Kasar, Apurva Kshirsagar, Geeta Hade, Nishant Khandhar, “Car
Damage Detection Using Computer Vision”, JETIR November 2023.
Using ResNet, this study presents a high-performance framework to streamline insurance
claims through accurate car damage detection.
• Phyu Mar Kyu, Kuntpong Woraratpanya, “Car Damage Detection and Classification”,
Conference Paper · July 2020.
VGG16 and VGG19 models are applied to detect and assess car damage severity, with
VGG19 achieving a 95.22% accuracy in insurance-related applications.
• Rakshata P, Padma H V, Pooja M, “Car Damage Detection and Analysis Using Deep
Learning Algorithm For Automotive”, International Journal of Scientific Research &
Engineering Trends 2019.
This research utilizes image reconstruction techniques for car damage detection, preserving
image features for effective insurance claim processing.
• Lin Li, Koshin Ono, Chun-Kit Ngan, “A Deep Learning and Transfer Learning Approach
for Vehicle Damage Detection”, The International FLAIRS Conference Proceedings · April
2021.
Using Mask R-CNN and CNN with transfer learning, this study achieves high accuracy in
damage detection, reducing insurance claim costs and time.
[1] P. M. Kyu and K. Woraratpanya, “Car Damage Detection and Classification,” in Proceedings
of the 11th International Conference on Advances in Information Technology, Bangkok, Thailand,
Jul. 2020, pp. 1–6, DOI: 10.1145/3406601.3406651.
[2] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document
recognition,” Proceedings of the IEEE, vol. 86, no. 11, 1998.
[3] Jeffrey de Deijn, “Automatic Car Damage Recognition using Convolutional Neural Networks.”
[4] Qinghui Zhang, Xianing Chang, and Shanfeng Bian, “Vehicle Damage-Detection
Segmentation Algorithm Based on Improved Mask RCNN,” IEEE Access, 2020. DOI:
10.1109/ACCESS.2020.2964055.
[6] Anwar, Syed Muhammad, Muhammad Majid, Adnan Qayyum, Muhammad Awais, Majdi
Alnowami, and Muhammad Khurram Khan. “Medical image analysis using convolutional neural
networks: a review.” Journal of Medical Systems 42, no. 11 (2018): 226.
[7] Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–
117.
[8] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object
detection and semantic segmentation,” in The IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), June 2014.
[11] Najmeddine Dhieb, Hakim Ghazzai, Hichem Besbes, and Yehia Massoud. 2019. Extreme
Gradient Boosting Machine Learning Algorithm For Safe Auto Insurance Operations. In 2019
IEEE International Conference of Vehicular Electronics and Safety (ICVES). IEEE, 1–5.
[12] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You only look once: Unified, real-time
object detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, 2016, pp. 779–788.
[13] Ahmed, E., Jones, M., & Marks, T. K. (2015). An improved deep learning architecture for
person re-identification. In Proceedings of the IEEE computer society conference on computer
vision and pattern recognition, Vol. 7 (pp. 3908–3916). IEEE
http://dx.doi.org/10.1109/CVPR.2015.7299016. Computer Society.
[14] Mahmood, Z.; Haneef, O.; Muhammad, N.; Khattak, S. Towards a Fully Automated Car
Parking System. IET Intell. Transp. Syst. 2018, 13, 293–302.
[15] A data-based structural health monitoring approach for damage detection in steel bridges
using experimental data.
[16] “A Bridge Structural Health Data Analysis Model Based on Semi-Supervised Learning” (Yu
Chongchong, Wang Jingyan, Tan Li, Tu Xuyan).
[17] “Classification With Cooperative Semi-Supervised Learning Using Bridge Structural Health
Data” (Chongchong Yu, Lili Shang, Li Tan, Yang Yang, Xuyan Tu).
[20] Prateek Prasanna, Kristin J. Dana, Nenad Gucunski, Basily B. Basily, and Hung M. La,
“Automated Crack Detection on Concrete Bridges,” IEEE.