ML Learning 2
PROJECT - H
Now, let's get started.
After that, go to Chrome and download Python!
like this ➖
STEP 1: Installing Python:
3. Click on the link that says "Download Python" (this should lead to the official
Python website).
5. Once the download is complete, open the downloaded file and follow the
installation instructions:
On Windows: Run the installer, check "Add Python to PATH," and follow the
setup wizard.
On Mac: Open the downloaded .pkg file and follow the installation
instructions.
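To confirm Python installed correctly (an extra check, not one of the original steps), open a terminal or command prompt and run:
python --version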
1. Open the File Explorer (Windows), Finder (Mac), or your file manager
(Linux).
2. Navigate to a location where you want to create your project folder (e.g.,
Documents).
1. Open Visual Studio Code.
1. In the terminal, type the following commands and press Enter after each
one:
cd path/to/your/project
python -m venv venv
This command tells Python to create a virtual environment named venv in your project directory. You can name your virtual environment anything you like, but venv is a common convention.
After creating the virtual environment, you need to activate it. The command to
activate the virtual environment depends on your operating system.
On Windows:
.\venv\Scripts\activate
On Mac/Linux:
source venv/bin/activate
When the virtual environment is activated, you should see the environment
name (e.g., (venv) ) at the beginning of your command prompt.
To verify that the packages are installed correctly, you can use the following
command:
pip list
This command will list all the packages installed in the virtual environment, and
you should see opencv-python, mediapipe, tensorflow, and numpy among them.
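Optionally (this is an extra check, not one of the original steps), you can also confirm that the packages import cleanly:
python -c "import cv2, mediapipe, tensorflow, numpy; print('all packages imported successfully')"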
Summary
1. Create a Virtual Environment:
python -m venv venv
2. Activate the Virtual Environment:
On Windows:
.\venv\Scripts\activate
On Mac/Linux:
source venv/bin/activate
3. Install the Required Packages:
pip install opencv-python mediapipe tensorflow numpy
4. Verify the Installation:
pip list
After completing these steps, you will have a virtual environment set up with all
the necessary packages installed. Let me know once you have completed this
setup, and we can proceed with the next steps of the project!
We'll start by writing the initial code to capture video from the webcam and
detect hands using MediaPipe.
Step-by-Step Implementation
1. Import Libraries
First, we need to import the necessary libraries. This includes OpenCV for video
capture and MediaPipe for hand tracking.
# Explanation:
# cv2: OpenCV library for video capture and processing
# mediapipe: Library for hand tracking and landmark detection
# Explanation:
# mp_hands: Accesses the hands solution in MediaPipe
# hands: Initializes the hands module for hand detection
# mp_draw: Utility to draw hand landmarks on the frames
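The import and initialization lines these comments refer to did not come through in the export; here is a minimal sketch using the names from the comments (the default Hands() parameters are an assumption):
import cv2                             # OpenCV for video capture and processing
import mediapipe as mp                 # MediaPipe for hand tracking

mp_hands = mp.solutions.hands          # access the hands solution in MediaPipe
hands = mp_hands.Hands()               # initialize the hands module for hand detection
mp_draw = mp.solutions.drawing_utils   # utility to draw hand landmarks on the frames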
# Explanation:
# cap: Captures video from the default camera (usually the webcam)
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
# Explanation:
# ret, frame: Reads a frame from the video capture
# cv2.flip(frame, 1): Flips the frame horizontally for a mirror view
# cv2.cvtColor(frame, cv2.COLOR_BGR2RGB): Converts the frame from BGR to RGB
# hands.process(frame_rgb): Processes the frame to detect hand landmarks
# mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS): Draws the detected hand landmarks on the frame
# Explanation:
# cv2.imshow('Hand Gesture Recognition', frame): Displays the frame with hand landmarks
# cv2.waitKey(1): Waits for 1 millisecond for a key press. If 'q' is pressed, the loop breaks
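The code these explanation comments describe was dropped in the export. Below is a minimal sketch of the capture-and-display loop, using the identifiers from the comments (cap, frame, frame_rgb, results); the camera index 0 and the 'q'-to-quit check are the usual defaults, not values taken from the original. The draw_landmarks snippet shown above goes inside this loop, right after hands.process().
cap = cv2.VideoCapture(0)                                 # open the default webcam

while True:
    ret, frame = cap.read()                               # read a frame from the video capture
    if not ret:
        break
    frame = cv2.flip(frame, 1)                            # mirror view
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)    # BGR -> RGB for MediaPipe
    results = hands.process(frame_rgb)                    # detect hand landmarks

    # ... draw_landmarks block from above goes here ...

    cv2.imshow('Hand Gesture Recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):                 # quit on 'q'
        break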
6. Release Resources
Release the webcam and close all OpenCV windows when the loop is exited.
# Explanation:
# cap.release(): Releases the webcam
# cv2.destroyAllWindows(): Closes all OpenCV windows
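For completeness, these are the two lines the comments above describe (they also appear in the full listing below):
cap.release()
cv2.destroyAllWindows()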
Full Code
Here is the full code combining all the steps:
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
# Step 5: Display the Frame
cv2.imshow('Hand Gesture Recognition', frame)
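Only fragments of the full listing survived the export, so here is a minimal sketch reconstructed from the step explanations above. The default Hands() parameters and the frame-read check are assumptions; everything else follows the identifiers used in the comments.
import cv2
import mediapipe as mp

# Step 1-2: Import libraries and initialize MediaPipe Hands
mp_hands = mp.solutions.hands
hands = mp_hands.Hands()
mp_draw = mp.solutions.drawing_utils

# Step 3: Capture video from the default camera
cap = cv2.VideoCapture(0)

while True:
    # Step 4: Read, mirror, convert, and process each frame
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(frame_rgb)

    # Draw detected hand landmarks on the frame
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)

    # Step 5: Display the frame; press 'q' to quit
    cv2.imshow('Hand Gesture Recognition', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# Step 6: Release resources
cap.release()
cv2.destroyAllWindows()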
Make sure your virtual environment is activated. If it's not, activate it using the
following commands:
On Windows:
.\venv\Scripts\activate
On Mac/Linux:
source venv/bin/activate
Then navigate to your project folder and run the script:
cd path/to/your/project
python gestures.py
What to Expect:
The script will start capturing video from your webcam.
If hand landmarks are detected, they will be displayed on the video feed.
The video feed will be shown in a window titled 'Hand Gesture Recognition'.
To exit the video feed, press the 'q' key on your keyboard.
Troubleshooting:
If you encounter errors, check the following:
The virtual environment is activated.
The required packages (opencv-python, mediapipe, tensorflow, numpy) appear in pip list.
Your webcam is connected and not in use by another application.
Step-by-Step Implementation
We'll now add the following features:
1. Open Hand (Palm)
2. Pointing Up
if len(landmark_list) != 0:
    # Example logic for gesture recognition
    # Open Hand (Palm) Gesture
    if landmark_list[4][1] < landmark_list[3][1] and landmark_list[8][1] < landmark_list[6][1]:
        gesture = "Syntax Sarcasm"
    # Pointing Up Gesture
    elif landmark_list[4][1] > landmark_list[3][1] and landmark_list[8][1] < landmark_list[6][1]:
        gesture = "Join the Workshop"
    else:
        gesture = None

For reference, the indices above follow MediaPipe's hand landmark numbering: 4 is the thumb tip, 3 the thumb IP joint, 8 the index fingertip, and 6 the index PIP joint, so comparing their y-coordinates is a rough way to check whether a finger is extended.
# Overlay the recognized gesture text on the frame (the putText position, font, and colour below are assumed; only the trailing cv2.LINE_AA survived in the original)
if gesture:
    cv2.putText(frame, gesture, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2, cv2.LINE_AA)
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
# Initialize list to store landmark coordinates
landmark_list = []
for id, lm in enumerate(hand_landmarks.landmark):
    # Get the coordinates
    h, w, c = frame.shape
    cx, cy = int(lm.x * w), int(lm.y * h)
    landmark_list.append([cx, cy])
# Step 6: Release Resources
cap.release()
cv2.destroyAllWindows()
Next Steps:
1. Run the gestures.py script to test the gesture recognition and text overlay.
What to Expect:
When you run the gestures.py script:
The script will open a window displaying the video feed from your
webcam.
How to Test:
python gestures.py
Hold your hand open with the palm facing the camera.
Point your index finger upwards with the other fingers folded.
You can also try pointing your index finger to the side with the other fingers closed to see how the recognition responds.
Gestures to Keep:
For the initial implementation, we have two simple gestures:
1. Open Hand (Palm)
Use Case: This gesture is common and easy to recognize, making it ideal for a demo.
2. Pointing Up
Use Case: This gesture is distinct and easy to perform, making it suitable for recognition.
Summary:
1. Run the script:
python gestures.py
enjoy, folks!
don't forget to post your output photo!
see you again,
byeeeeee!