Hand Tracking in Python | MediaPipe | OpenCv | Dushyant Singh | Truth Power Info | 2022

Dushyant Singh
2 min read · Mar 22, 2022


In the Marvel Iron Man movies, Tony Stark controls his systems and machines with hand and body movements, and it looks fantastic.

Behind scenes like that is a system that recognizes objects using specialized computer vision algorithms. Deep learning gives computer systems the flexibility to detect objects in exactly this way.

In this blog, I’m going to share a real-life hand and finger tracking application in Python.

In this article, we will use the mediapipe Python library to detect hand landmarks. We will be using the Hands model from mediapipe solutions to detect and track all 21 landmarks of each hand. We will also see how to access individual landmarks, which can be used for different computer vision applications such as sign language detection, gesture control, etc.
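MediaPipe reports each landmark as coordinates normalized to [0, 1] relative to the frame size, so to draw or measure with them you first convert to pixels. A minimal sketch of that conversion (the helper name `to_pixel` is my own, not part of MediaPipe):

```python
def to_pixel(norm_x, norm_y, width, height):
    """Convert MediaPipe's normalized landmark coordinates to pixel coordinates."""
    # Normalized values are fractions of the frame size; clamping keeps
    # slightly out-of-frame landmarks inside the image bounds.
    x = min(max(int(norm_x * width), 0), width - 1)
    y = min(max(int(norm_y * height), 0), height - 1)
    return x, y

# Example: a landmark at (0.5, 0.25) in a 640x480 frame
print(to_pixel(0.5, 0.25, 640, 480))  # (320, 120)
```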

Required Libraries

  • Mediapipe is a cross-platform library developed by Google that provides amazing ready-to-use ML solutions for computer vision tasks.
  • OpenCV is a widely used computer vision library for Python, covering image analysis, image processing, detection, recognition, and more.

Download the code from GitHub: https://github.com/Dushyantsingh-ds/ai-projects/tree/main/Projects/Hand%20Pose%20Tracking

1. Install Dependencies

!pip install mediapipe opencv-python

2. Import Dependencies

import mediapipe as mp
import cv2
import numpy as np
import uuid
import os

3. Detect the object

mp_drawing = mp.solutions.drawing_utils
mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)

with mp_hands.Hands(min_detection_confidence=0.8, min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break

        # BGR to RGB (mediapipe expects RGB input)
        image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Flip on horizontal for a mirror view
        image = cv2.flip(image, 1)

        # Mark the image as read-only to improve performance
        image.flags.writeable = False

        # Run hand detection
        results = hands.process(image)

        # Make the image writeable again for drawing
        image.flags.writeable = True

        # RGB back to BGR for OpenCV display
        image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)

        # print(results)  # uncomment to inspect the raw detections

        # Render the detected landmarks
        if results.multi_hand_landmarks:
            for num, hand in enumerate(results.multi_hand_landmarks):
                mp_drawing.draw_landmarks(
                    image, hand, mp_hands.HAND_CONNECTIONS,
                    mp_drawing.DrawingSpec(color=(121, 22, 76), thickness=2, circle_radius=4),
                    mp_drawing.DrawingSpec(color=(250, 44, 250), thickness=2, circle_radius=2),
                )

        cv2.imshow('Hand Tracking', image)

        if cv2.waitKey(10) & 0xFF == ord('q'):
            break

cap.release()
cv2.destroyAllWindows()
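Once you have the 21 landmarks per hand, you can build simple gesture logic on top of them. As an illustration (the helper below is my own heuristic, not part of MediaPipe), a common fingers-up check counts a finger as extended when its tip landmark sits above its PIP joint in image coordinates (y grows downward), which works for an upright hand facing the camera:

```python
# MediaPipe hand landmark indices for fingertips and the joints below them.
FINGER_TIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
FINGER_PIPS = [6, 10, 14, 18]   # corresponding PIP joints

def count_fingers_up(landmarks):
    """Count extended fingers given 21 (x, y) normalized landmark tuples.

    Heuristic only: assumes an upright hand; the thumb is ignored
    because it flexes sideways rather than vertically.
    """
    count = 0
    for tip, pip in zip(FINGER_TIPS, FINGER_PIPS):
        # Smaller y means higher in the image, i.e. the finger is raised.
        if landmarks[tip][1] < landmarks[pip][1]:
            count += 1
    return count

# Toy example: all four fingertips above their PIP joints
hand = [(0.5, 0.9)] * 21
for i in FINGER_TIPS:
    hand[i] = (0.5, 0.2)
for i in FINGER_PIPS:
    hand[i] = (0.5, 0.5)
print(count_fingers_up(hand))  # 4
```

Inside the loop above, you could feed it each detected hand with `count_fingers_up([(lm.x, lm.y) for lm in hand.landmark])`.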

Output

Connect with me:

Medium: https://dushyantsingh-ds.medium.com/
Linkedin: https://linkedin.com/in/dushyantsingh-ds/
Instagram: https://www.instagram.com/dushyantsingh.ds/
Twitter: https://twitter.com/dushyantsingh_d
Facebook: https://www.facebook.com/dushyantsingh.india
Github: https://github.com/Dushyantsingh-ds
Telegram : https://t.me/dushyantsingh_d
