AI and Data Science
INTERNSHIP PRESENTATION
On
“Artificial Intelligence and Data Science”
Carried out at
Abeyaantrix Edusoft, Davanagere.
during
5th February 2025 – 15th May 2025
Website: www.asoftlab.com
Domain:
Artificial Intelligence and Data Science
Description:
During the internship, I was assigned to work on tasks related to:
• Data Collection and Preprocessing: Cleaning raw datasets, handling missing
values, performing feature selection and engineering using Python libraries
such as Pandas and NumPy.
• Python: Primary language used for data analysis, machine learning, and deep learning
tasks.
• Jupyter Notebook: Used extensively for writing and testing Python code in an
interactive environment.
• Pandas: For data preprocessing, handling missing values, and working with
dataframes.
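The preprocessing workflow above can be sketched in a few lines of Pandas and NumPy. The column names, fill strategy, and derived feature here are illustrative assumptions, not the actual internship dataset:

```python
import numpy as np
import pandas as pd

# Illustrative raw data with a missing value in each numeric column.
raw = pd.DataFrame({
    "height": [170.0, np.nan, 165.0, 180.0],
    "weight": [65.0, 72.0, np.nan, 81.0],
    "label":  [0, 1, 0, 1],
})

# Handle missing values: fill numeric gaps with each column's median.
clean = raw.fillna(raw.median(numeric_only=True))

# Simple feature engineering: derive a ratio feature from two columns.
clean["ratio"] = clean["weight"] / clean["height"]

# Feature selection: keep only the engineered feature and the label.
features = clean[["ratio", "label"]]
print(clean.isna().sum().sum())  # → 0 (no missing values remain)
```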
• Convolutional Neural Networks (CNNs) are highly effective for visual pattern
recognition tasks.
• This project aims to accurately identify sign language digits using a CNN-based model.
• To apply computer vision and deep learning techniques for gesture classification.
• To create an assistive tool that bridges communication between deaf individuals and
the hearing community.
• To enable real-time recognition of sign language digits using image or video input.
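The feature-extraction idea that makes CNNs effective for visual pattern recognition can be illustrated with a plain NumPy 2-D convolution (single channel, no padding). This is a conceptual sketch of the core operation, not the project's actual model:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a kernel over an image (valid padding) and sum elementwise products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where brightness changes left to right,
# the kind of low-level pattern a CNN's first layer learns from gestures.
image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # right half of the image is bright
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
fmap = conv2d_valid(image, sobel_x)
print(fmap.shape)  # → (4, 4)
```

In a real CNN the kernels are not hand-crafted like this Sobel filter; they are learned from the training images.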
06/30/2025 11
5. Model Training:
• Feed the training data to the model.
• The model adjusts its internal weights to minimize the difference between its
predictions and the true labels.
• Use a loss function such as categorical_crossentropy and an optimizer such as Adam.
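A minimal NumPy sketch of what "adjusting weights to minimize the loss" means: one gradient-descent step on a softmax classifier under categorical cross-entropy. The tiny synthetic data and learning rate are illustrative, not the project's Keras training loop:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))           # 8 samples, 4 flattened features
y = np.eye(3)[rng.integers(0, 3, 8)]  # one-hot labels for 3 classes
W = np.zeros((4, 3))                  # weights to be learned

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(W):
    p = softmax(X @ W)
    return -np.mean(np.sum(y * np.log(p + 1e-12), axis=1))

before = cross_entropy(W)
# Gradient of categorical cross-entropy w.r.t. W for a softmax output layer.
grad = X.T @ (softmax(X @ W) - y) / len(X)
W -= 0.5 * grad                       # one gradient-descent step (lr = 0.5)
after = cross_entropy(W)
print(after < before)  # → True: the loss decreases after the update
```

In Keras, `model.compile(loss="categorical_crossentropy", optimizer="adam")` followed by `model.fit(...)` performs many such steps automatically.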
6. Model Evaluation:
• After training, test the model on unseen images.
• Measure its accuracy — how many images it correctly classifies.
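"How many images it correctly classifies" reduces to comparing the model's argmax predictions with the true labels. An illustrative NumPy computation with made-up probabilities, not the project's actual results:

```python
import numpy as np

# Hypothetical per-class probabilities for 4 test images over 3 classes.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],
])
true_labels = np.array([0, 1, 2, 1])

predicted = probs.argmax(axis=1)             # class with highest probability
accuracy = np.mean(predicted == true_labels)
print(accuracy)  # → 0.75 (3 of 4 images classified correctly)
```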
7. Programming Language: Python
• Libraries: TensorFlow, Keras, OpenCV, NumPy
• Development Environment: Jupyter Notebook, Google Colab
Training and validation accuracy curves tracked each other closely, indicating minimal overfitting and good generalization to unseen data.
Real-time testing confirmed the model's responsiveness and accuracy, making it suitable
for integration into assistive communication tools.
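Real-time recognition boils down to repeatedly grabbing a camera frame, converting it to the model's input shape, and predicting. The sketch below shows the preprocessing step; the frame size, target size, and `model` name are assumptions, and crude stride-based downsampling stands in for `cv2.resize`:

```python
import numpy as np

def preprocess_frame(frame, size=64):
    """Convert a BGR camera frame to a (1, size, size, 1) input tensor."""
    gray = frame.mean(axis=2)                  # grayscale: average the channels
    step = max(1, gray.shape[0] // size)
    small = gray[::step, ::step][:size, :size]  # in practice: cv2.resize(gray, (size, size))
    return small[np.newaxis, :, :, np.newaxis] / 255.0

# A real-time loop would look roughly like this (needs OpenCV and a trained model):
#   cap = cv2.VideoCapture(0)
#   while True:
#       ok, frame = cap.read()
#       digit = model.predict(preprocess_frame(frame)).argmax()
#       cv2.imshow("sign", frame)              # overlay/display the prediction

frame = np.zeros((256, 256, 3), dtype=np.uint8)  # dummy frame for this sketch
print(preprocess_frame(frame).shape)  # → (1, 64, 64, 1)
```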
Conclusion
• The project successfully demonstrates the use of deep learning for recognizing sign
language digits with high accuracy.
• It provides an effective and accessible tool for bridging communication gaps between
hearing and deaf individuals.
• CNNs enable accurate feature extraction and classification of complex
hand gestures.
• Real-time implementation shows the potential for practical applications in assistive
technology.
• Future improvements can include recognizing full sign language alphabets and
integrating multilingual support.