CG Mini Project


CHAPTER 1

INTRODUCTION

1.1 Introduction to computer graphics


Computer graphics is the art of drawing pictures, lines, charts, and similar figures using computers with the help of programming. A computer graphics image is made up of a number of pixels, and a pixel is the smallest addressable graphical unit represented on the computer screen. Computer graphics is a field of computing that enables the creation, manipulation, and representation of visual images using computers. It encompasses a wide range of techniques and technologies that have revolutionized the way we interact with digital information and visual media.

The importance of computer graphics extends across various domains, including entertainment
(such as movies, video games, and virtual reality), design (architecture, industrial design, and
graphic design), simulation (flight simulators, medical simulations), education (interactive
learning tools, digital textbooks), and scientific visualization (data analysis, molecular modeling).

1.2 Introduction to OpenGL


OpenGL (Open Graphics Library) is a widely used application programming interface for rendering 2D and 3D graphics. Developed by Silicon Graphics Inc. (SGI) in 1992, OpenGL was designed to provide a standardized interface for hardware-accelerated 2D and 3D graphics. Over the years, it has become an industry-standard API and is supported by a wide range of platforms, including Windows, macOS, Linux, and mobile platforms such as Android and iOS.

OpenGL continues to evolve with advancements in graphics hardware and software technologies.
Recent developments focus on enhancing support for modern rendering techniques, improving
compatibility with new hardware architectures, and integrating with emerging technologies such
as VR, AR, and real-time ray tracing. OpenGL remains a fundamental tool for developers and
graphics professionals.


1.3 Introduction to OpenCV

OpenCV (Open Source Computer Vision Library) is a powerful open-source computer vision and
machine learning software library. It provides a wide range of tools and functions that facilitate
tasks such as image and video processing, object detection, facial recognition, and much more.
Originally developed by Intel, OpenCV now has a large community of contributors and is
supported on multiple platforms including Windows, Linux, macOS, Android, and iOS.
OpenCV provides functions for loading, manipulating, and saving images in various formats. Operations like resizing, cropping, filtering (blur, sharpen), color space conversion (RGB, HSV, grayscale), and morphological operations are supported. It also enables video capture from cameras or video files, and supports tasks like frame extraction, video stabilization, and motion tracking.
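
As a brief illustration of these image operations, the following sketch loads an image, resizes it, converts it to grayscale, blurs it, and saves the result. The filenames are placeholders chosen only for this example.

import cv2

# Load an image from disk (placeholder filename)
img = cv2.imread("driver.jpg")

# Resize to a fixed resolution
small = cv2.resize(img, (320, 240))

# Convert from BGR (OpenCV's default channel order) to grayscale
gray = cv2.cvtColor(small, cv2.COLOR_BGR2GRAY)

# Smooth with a 5x5 Gaussian blur to reduce noise
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Save the processed image
cv2.imwrite("driver_processed.jpg", blurred)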

1.4 Introduction to the project

A drowsy driver project in computer graphics typically involves developing a system or application that utilizes computer vision and graphics techniques to detect and mitigate drowsiness in drivers. This innovative field combines real-time monitoring with advanced visual analytics to enhance road safety. Here is an introduction to such a project:

Overview of the Drowsy Driver Project in Computer Graphics

Drowsy driving poses a significant risk on roads worldwide, leading to accidents and fatalities.
In response, researchers and engineers in computer graphics have been leveraging their expertise
to develop systems that can detect signs of driver drowsiness and intervene to prevent potential
accidents.

Real-Time Detection: Utilizing computer vision algorithms to monitor driver behavior continuously and detect signs of drowsiness such as eye closure, head nods, or changes in facial expressions.

Driver State Analysis: Analyzing data obtained from cameras or sensors to assess the level of
driver alertness and predict potential instances of drowsiness.



CHAPTER 2

INBUILT FUNCTIONS IN COMPUTER GRAPHICS

2.1 OPENGL
OpenGL (Open Graphics Library) is a cross-platform API (Application Programming Interface)
for rendering 2D and 3D vector graphics. It is widely used in computer graphics and interactive
applications, particularly in areas such as video games, virtual reality, scientific visualization, and
simulation.

Cross-platform: OpenGL is supported on multiple platforms, including Windows, macOS, Linux, and various mobile operating systems. This allows developers to write OpenGL code that can run on different types of hardware without significant modifications.

Rendering Pipeline: OpenGL uses a pipeline model for rendering graphics. This pipeline consists
of stages such as vertex processing, primitive assembly, rasterization, fragment processing, and
framebuffer operations. Developers can control and customize these stages to achieve different
visual effects.

Shader-based: Modern OpenGL (OpenGL 3.0 and later versions) is heavily shader-centric.
Shaders are small programs written in languages like GLSL (OpenGL Shading Language) that run
directly on the GPU. They allow developers to implement complex rendering algorithms and
achieve realistic lighting, shadow effects, and other visual enhancements.

Immediate Mode vs. Modern OpenGL: Earlier versions of OpenGL (before version 3.0) used
immediate mode rendering, where commands were issued directly to the GPU for immediate
execution. Modern OpenGL encourages the use of vertex buffer objects (VBOs) and shaders for
more efficient rendering.
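
To make the contrast concrete, the short sketch below draws a single colored triangle using the legacy immediate-mode calls described above. It assumes PyOpenGL is installed together with a GLUT implementation (for example freeglut); it is illustrative only and not part of the drowsiness-detection project.

from OpenGL.GL import *
from OpenGL.GLUT import *

def display():
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)  # immediate mode: vertices are sent one call at a time
    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5)
    glColor3f(0.0, 1.0, 0.0); glVertex2f(0.5, -0.5)
    glColor3f(0.0, 0.0, 1.0); glVertex2f(0.0, 0.5)
    glEnd()
    glFlush()

glutInit()
glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB)
glutInitWindowSize(400, 400)
glutCreateWindow(b"Immediate-mode triangle")
glutDisplayFunc(display)
glutMainLoop()

In modern OpenGL the same vertex data would instead be stored once in a vertex buffer object and drawn by shaders, which avoids the per-vertex function-call overhead of immediate mode.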

Overall, OpenGL remains a fundamental tool in the field of computer graphics due to its versatility,
cross-platform support, and powerful rendering capabilities. It continues to be a popular choice for
developers seeking to create interactive and visually compelling applications.


2.2 OPENCV
OpenCV (Open Source Computer Vision Library) is primarily designed for computer vision tasks
such as image and video processing, object detection, and machine learning. However, its
capabilities can also be useful in various aspects of computer graphics, particularly in the
preprocessing and manipulation of images and videos that are essential for graphics rendering.
Here are some specific ways OpenCV can be applied in computer graphics:

Image Loading and Preprocessing: OpenCV provides robust functions for loading images from
various formats and performing preprocessing tasks such as resizing, color space conversion, noise
reduction, and image enhancement. These operations are crucial before using images as textures
or inputs in graphical applications.

Camera Calibration: OpenCV includes tools for camera calibration, which is essential in computer
graphics for accurately rendering virtual objects in a scene that matches the perspective of a real-
world camera. This is particularly important in augmented reality (AR) and virtual reality (VR)
applications.
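
A minimal calibration sketch using OpenCV's chessboard-based routine is shown below; the image folder name and the 9x6 corner pattern are assumptions for illustration only.

import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the assumed calibration chessboard

# 3D reference points of the chessboard corners (all on the z = 0 plane)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

objpoints, imgpoints = [], []
for fname in glob.glob("calib_images/*.jpg"):  # placeholder folder of chessboard photos
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        objpoints.append(objp)
        imgpoints.append(corners)

# Estimate the camera matrix and lens distortion coefficients
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
print(camera_matrix)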

Augmented Reality (AR): OpenCV can be used to detect and track markers or objects in a video
stream, allowing virtual objects to be superimposed onto the real world in AR applications. This
involves image processing and real-time computer vision techniques, which OpenCV excels at.

Video Processing: OpenCV provides efficient methods for reading, writing, and processing video
streams. This is beneficial in graphics applications where real-time video input needs to be
processed, such as video games or interactive simulations.
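
For example, the sketch below reads frames from a video file (the filename and the thresholds are assumptions) and uses simple frame differencing to flag motion between consecutive frames:

import cv2

cap = cv2.VideoCapture("road_trip.mp4")  # placeholder video file
prev_gray = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        # Pixels that changed between consecutive frames indicate motion
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        if cv2.countNonZero(mask) > 5000:  # example threshold, tune per application
            print("Motion detected in this frame")
    prev_gray = gray

cap.release()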

Machine Learning Integration: OpenCV integrates with machine learning frameworks like
TensorFlow and PyTorch, enabling advanced applications such as image segmentation, object
recognition, and generative models. These techniques can enhance the realism and interactivity of
computer-generated graphics.

While OpenCV itself is not a graphics rendering library like OpenGL, it complements graphics
APIs by providing powerful tools for image and video manipulation, camera calibration, and
computer vision tasks. Integrating OpenCV with graphics libraries allows developers to create
more sophisticated and interactive graphical applications with enhanced visual quality and realism.



CHAPTER 3
REQUIREMENTS

3.1 Hardware Requirements

The hardware requirements for a drowsy driver detection project can vary depending on the
specific approach and technologies employed. Here’s a general outline of the typical hardware
components and considerations involved in such projects:

1. Camera System

• Type: High-resolution cameras capable of capturing detailed facial features and eye
movements are essential. Infrared (IR) cameras might be used for night-time detection.
• Placement: Ideally positioned to capture the driver’s face, including eyes, mouth, and
head movements, without obstructing the driver's view.

2. Sensors

• Eye Tracking Sensors: Either integrated into the camera system or as standalone sensors
to monitor eye movements, blink rate, and eyelid closure duration.
• Head Position Sensors: Gyroscopes or accelerometers to detect head nods or changes in
head position indicating drowsiness.

3. Computational Hardware

• Processor: High-performance CPUs (Central Processing Units) capable of real-time image and video processing.
• Graphics Processing Unit (GPU): Depending on the complexity of image processing
and machine learning algorithms, a GPU might be necessary for parallel processing and
acceleration.
• Memory (RAM): Sufficient RAM to handle data buffers and temporary storage for
image/video frames and intermediate processing results.


4. Display and Feedback Mechanisms

• Alert System: Displays or heads-up displays (HUDs) to communicate alerts to the driver,
such as visual warnings, auditory signals, or seat vibrations.
• User Interface: Touchscreens or control panels for configuration, monitoring, and
interaction with the system.

5. Power Supply

• Battery or Vehicle Power: Depending on the installation (in-vehicle or lab environment), sufficient power supply to sustain continuous operation without interruptions.

6. Data Storage and Connectivity

• Storage: Hard drives or solid-state drives (SSDs) for storing video footage, images, and
processed data.
• Connectivity: Interfaces (e.g., USB, Ethernet) for data transfer and communication with
external systems or cloud services, if applicable.

7. Environmental Considerations

• Temperature and Vibration Resistance: Components should be selected or ruggedized to withstand the conditions inside a vehicle (e.g., temperature variations, vibrations).

8. Compliance and Safety

• Regulatory Compliance: Ensure compliance with relevant safety and privacy regulations governing driver monitoring systems.
• Data Security: Implement measures to protect collected data and ensure user privacy.


3.2 Software Requirements

To implement a drowsy driver detection system using cv2, numpy, mediapipe, pygame, and
scipy, you can leverage each library's strengths for different aspects of the project. Here’s how
each of these software components can be utilized:

1. OpenCV (cv2)

• Purpose: OpenCV is a powerful computer vision library that provides tools for image
and video processing, including face detection, eye tracking, and facial landmark
detection.
• Usage:
o Face Detection: Identify the driver's face within the camera frame.
o Eye Tracking: Track the driver's eyes to monitor blink rate and eye closure
duration.
o Image Processing: Preprocess video frames to enhance features relevant to
drowsiness detection.
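
As an illustration of OpenCV-only face detection (the project itself locates the face with MediaPipe, as shown in Chapter 4), a minimal Haar-cascade sketch might look like this:

import cv2

# Load the frontal-face Haar cascade bundled with opencv-python
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
ret, frame = cap.read()
if ret:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        # Draw a rectangle around each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
cap.release()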

2. NumPy

• Purpose: NumPy is essential for numerical computing in Python, providing support for
large, multi-dimensional arrays and matrices.
• Usage:
o Data Manipulation: Efficiently handle and manipulate image and video data
arrays from OpenCV.
o Mathematical Operations: Perform calculations and transformations on image
data during preprocessing or feature extraction.
• Data Preprocessing: Before analysis, NumPy can be used for tasks like resizing images,
converting color spaces, or normalizing pixel values.
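
A small sketch of typical NumPy operations on a frame returned by OpenCV (the frame here is a dummy array whose shape is only an example):

import numpy as np

# A webcam frame from OpenCV is simply a NumPy array, e.g. 480x640 pixels with 3 channels
frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy frame for illustration

# Crop a region of interest (the upper half of the frame) with array slicing
roi = frame[0:240, :, :]

# Normalize pixel values from [0, 255] to [0.0, 1.0]
normalized = frame.astype(np.float32) / 255.0

# The mean intensity can hint at lighting conditions in the cabin
mean_intensity = normalized.mean()
print(mean_intensity)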


3. MediaPipe

• Purpose: MediaPipe offers ready-to-use, high-level building blocks for performing tasks
such as hand tracking, pose estimation, and face detection.
• Usage:
o Facial Landmark Detection: Precisely locate key points on the driver's face,
such as eyes, mouth, and nose.
o Gesture Recognition: Potentially detect head nods or other facial expressions
indicative of drowsiness.
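
A minimal sketch of MediaPipe Face Mesh on a single image follows; the filename is a placeholder, and landmark index 33 is one of the left-eye points also used in the Chapter 4 implementation.

import cv2
import mediapipe as mp

mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1,
                                  refine_landmarks=True)

img = cv2.imread("driver.jpg")  # placeholder image
results = face_mesh.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    landmarks = results.multi_face_landmarks[0].landmark
    # Landmark coordinates are normalized; convert to pixel coordinates
    h, w = img.shape[:2]
    x, y = int(landmarks[33].x * w), int(landmarks[33].y * h)
    print("Left eye corner at pixel:", x, y)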

4. Pygame

• Purpose: Pygame is a cross-platform set of Python modules designed for writing video
games.
• Usage:
o User Interface: Create simple graphical interfaces or alerts to notify the driver of
detected drowsiness.
o Audio Alerts: Play sound effects or alarms when drowsy behavior is detected.
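
A minimal alert sketch with the Pygame mixer (the sound filename here is a placeholder; the project itself loads 700-hz-beeps-86815.mp3 in Chapter 4):

import pygame

pygame.mixer.init()
alarm = pygame.mixer.Sound("alarm.wav")  # placeholder sound file

alarm.play()
# Block until the sound has finished playing
while pygame.mixer.get_busy():
    pygame.time.wait(100)

pygame.mixer.quit()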

5. SciPy

• Purpose: SciPy builds on NumPy and provides additional scientific computing tools and
algorithms.
• Usage:
o Statistical Analysis: Perform statistical tests or calculations on data related to
driver behavior or drowsiness indicators.
o Signal Processing: Utilize signal processing functions for analyzing
physiological data if integrated with sensors like heart rate monitors.
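
For example, scipy.spatial.distance supplies the Euclidean distance used in the eye aspect ratio (EAR) calculation of Chapter 4; the coordinates below are made-up values for illustration.

from scipy.spatial import distance as dist

# Two hypothetical eyelid landmarks (x, y) in pixel coordinates
upper_lid = (120, 80)
lower_lid = (120, 92)

vertical_gap = dist.euclidean(upper_lid, lower_lid)
print(vertical_gap)  # 12.0 - small values suggest the eye is closing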

By integrating cv2, numpy, mediapipe, pygame, and scipy into your drowsy driver detection
project, you can leverage their combined capabilities to effectively monitor and mitigate drowsy
driving risks.



CHAPTER 4
IMPLEMENTATION

To implement the drowsy driver detection system using the provided Python code, you'll need to
follow these steps:

Step-by-Step Implementation:

Install Required Libraries:

Ensure you have the necessary libraries installed. You can install them using pip if they are not
already installed:

pip install opencv-python pygame numpy mediapipe scipy

Prepare Your Environment:

• Place your alarm sound file (700-hz-beeps-86815.mp3) in the same directory as your
Python script.

Write and Execute the Python Script:


import cv2
import pygame
import numpy as np
from scipy.spatial import distance as dist
import mediapipe as mp

# Initialize Pygame mixer for alarm sound
pygame.mixer.init()
sound = pygame.mixer.Sound("700-hz-beeps-86815.mp3")

# Constants
EYE_AR_THRESH = 0.3
EYE_AR_CONSEC_FRAMES = 15  # about 0.5 seconds at 30 fps

# Function to compute the eye aspect ratio (EAR)
def eye_aspect_ratio(eye):
    A = dist.euclidean(eye[1], eye[5])  # first vertical eyelid distance
    B = dist.euclidean(eye[2], eye[4])  # second vertical eyelid distance
    C = dist.euclidean(eye[0], eye[3])  # horizontal distance between eye corners
    ear = (A + B) / (2.0 * C)
    return ear

# Initialize Mediapipe Face Mesh
mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

# Initialize video capture
cap = cv2.VideoCapture(0)

frame_counter = 0
drowsy = False

while True:
    ret, frame = cap.read()
    if not ret:
        break

    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(frame_rgb)

    if results.multi_face_landmarks:
        for face_landmarks in results.multi_face_landmarks:
            # Select the six landmarks of each eye used for the EAR calculation
            leftEye = [face_landmarks.landmark[i] for i in [33, 160, 158, 133, 153, 144]]
            rightEye = [face_landmarks.landmark[i] for i in [362, 385, 387, 263, 373, 380]]

            # Convert normalized landmark coordinates to pixel coordinates
            leftEye = [(int(p.x * frame.shape[1]), int(p.y * frame.shape[0])) for p in leftEye]
            rightEye = [(int(p.x * frame.shape[1]), int(p.y * frame.shape[0])) for p in rightEye]

            # Draw polylines around the eyes
            cv2.polylines(frame, [np.array(leftEye, dtype=np.int32)], isClosed=True,
                          color=(0, 255, 0), thickness=1)
            cv2.polylines(frame, [np.array(rightEye, dtype=np.int32)], isClosed=True,
                          color=(0, 255, 0), thickness=1)

            leftEAR = eye_aspect_ratio(leftEye)
            rightEAR = eye_aspect_ratio(rightEye)
            ear = (leftEAR + rightEAR) / 2.0

            if ear < EYE_AR_THRESH:
                frame_counter += 1
                if frame_counter >= EYE_AR_CONSEC_FRAMES:
                    cv2.putText(frame, "DROWSINESS DETECTED", (10, 30),
                                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
                    if not drowsy:
                        sound.play()
                        drowsy = True
            else:
                frame_counter = 0
                if drowsy:
                    sound.stop()
                    drowsy = False

    cv2.imshow('Drowsiness Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Release video capture and close all windows
cap.release()
cv2.destroyAllWindows()

# Stop Pygame mixer
pygame.mixer.quit()



Explanation:

• Imports: Import necessary libraries (cv2, pygame, numpy, scipy.spatial.distance, mediapipe).
• Pygame Initialization: Initialize Pygame mixer and load the alarm sound.
• Constants: Define thresholds (EYE_AR_THRESH and EYE_AR_CONSEC_FRAMES)
for drowsiness detection.
• eye_aspect_ratio() Function: Calculates the eye aspect ratio using Euclidean distances between facial landmarks (a worked numeric example follows this list).
• Mediapipe Face Mesh Initialization: Sets up the face mesh for detecting facial
landmarks.
• Video Capture Loop: Continuously captures frames from the webcam (cap =
cv2.VideoCapture(0)).
• Face Detection and Eye Tracking: Uses Mediapipe to detect facial landmarks and track
eyes. Draws polylines around the eyes.
• Drowsiness Detection: Computes EAR for both eyes and detects drowsiness based on
the defined thresholds. Displays "DROWSINESS DETECTED" and plays the alarm
sound when drowsiness is detected.
• User Interface: Displays the processed frame with annotations (cv2.imshow()).
• Key Press Event: Press 'q' to quit the loop (if cv2.waitKey(1) & 0xFF == ord('q')).
• Cleanup: Releases video capture and closes all OpenCV windows (cap.release(),
cv2.destroyAllWindows()). Stops Pygame mixer (pygame.mixer.quit()).
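
As referenced above, the EAR used by eye_aspect_ratio() is EAR = (|p2 - p6| + |p3 - p5|) / (2 * |p1 - p4|), i.e. the average vertical eyelid distance divided by the horizontal eye width. The short check below uses hypothetical landmark coordinates to show why 0.3 is a reasonable threshold: an open eye yields a value above it and a nearly closed eye falls well below it.

from scipy.spatial import distance as dist

def eye_aspect_ratio(eye):
    A = dist.euclidean(eye[1], eye[5])
    B = dist.euclidean(eye[2], eye[4])
    C = dist.euclidean(eye[0], eye[3])
    return (A + B) / (2.0 * C)

# Hypothetical pixel coordinates in the order p1, p2, p3, p4, p5, p6
open_eye   = [(0, 0), (8, 5), (22, 5), (30, 0), (22, -5), (8, -5)]
closed_eye = [(0, 0), (8, 1), (22, 1), (30, 0), (22, -1), (8, -1)]

print(eye_aspect_ratio(open_eye))    # ~0.33, above EYE_AR_THRESH = 0.3
print(eye_aspect_ratio(closed_eye))  # ~0.07, well below the threshold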

Execution:

• Run the script (python your_script_name.py).
• Ensure your webcam is connected and positioned correctly.
• Observe the output window for drowsiness detection alerts based on eye closures.



CHAPTER 5
SNAPSHOTS

Fig 5.1: Active status using small semicircle

Fig 5.2: Drowsiness detected



CONCLUSION
In conclusion, drowsy driver detection projects represent a crucial advancement in automotive safety technology. By leveraging computer vision and machine learning, these systems aim to prevent accidents caused by driver drowsiness, thereby saving lives and reducing the societal impact of road accidents. Continued research and development in this field promise to further improve road safety and enhance the driving experience for all motorists. The future use of drowsy driver detection projects is poised to transform road safety and transportation efficiency across various sectors.

Continued innovation and deployment of these technologies will play a vital role in achieving these outcomes and improving overall quality of life through enhanced transportation safety.

FUTURE WORK

1. Automotive Industry Integration

• In-Vehicle Systems: Integration of drowsy driver detection into vehicles as a standard safety feature, similar to existing technologies like lane departure warning systems and adaptive cruise control.
• Autonomous Vehicles: Essential for ensuring passenger safety in autonomous vehicles,
where human oversight may still be required in certain scenarios or as a fail-safe
mechanism.

2. Fleet Management and Logistics

• Commercial Fleets: Deployment in commercial vehicle fleets (e.g., trucks, buses) to mitigate risks associated with driver fatigue, reduce accidents, and optimize operational efficiency.
• Transportation Logistics: Enhancing logistics operations by ensuring drivers are alert
and capable of safely navigating long-haul routes, minimizing delays and improving
delivery reliability.

3. Public Transport and Mass Transit

• Public Transportation: Implementation in buses, trains, and other forms of public transport to safeguard passengers and operators from accidents caused by driver drowsiness.
• Airline Pilots: Adaptation for use in aviation to monitor pilots' alertness during long
flights and ensure aviation safety standards are maintained.

4. Personal Safety and Consumer Electronics


• Wearable Devices: Development of wearable devices or smart glasses equipped with
drowsy driver detection capabilities, providing real-time alerts and actionable feedback to
individual drivers.
• Consumer Vehicles: Integration into personal vehicles through aftermarket solutions or
as part of advanced driver assistance systems (ADAS) to enhance driver safety for all
motorists.

5. Health and Wellbeing Applications

• Health Monitoring: Potential integration with health monitoring systems to detect fatigue-related health conditions and provide early intervention or medical alerts to drivers.
• Public Health Initiatives: Support for public health initiatives aimed at reducing road
accidents and promoting safe driving behaviors through technology-driven interventions.
