YOLOv8 Pose Estimation: Next-Gen KeyPoint Detection

Allan Kouidri
-
1/30/2024
[Image: athletes running, with YOLOv8 pose estimation keypoints overlaid]

Pose estimation is a crucial aspect of computer vision that involves detecting the position and orientation of keypoints, often representing different body parts, in images. YOLOv8 has introduced advanced models specifically designed for this task, capable of accurately identifying these keypoints in various settings.

What is YOLOv8 Pose Estimation?

YOLOv8 Pose Estimation is a cutting-edge technology within the field of computer vision, specifically tailored for identifying and mapping human body keypoints in images or video frames. 

This technology interprets the human form by assigning 2D or 3D coordinates to specific body parts, such as elbows, knees, or the head, effectively capturing the posture and gestures of individuals. It does this by processing visual data and recognizing patterns corresponding to human anatomical features.
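In 2D, a pose is typically just an ordered set of (x, y, confidence) triplets, one per body part. Here is a minimal sketch assuming the 17-keypoint COCO convention used by the YOLOv8 '-pose' models (the names are the standard COCO keypoint labels; the coordinates are illustrative):

```python
# Minimal sketch of a 2D pose result, assuming the 17-keypoint COCO
# convention used by the YOLOv8 '-pose' models (names are the standard
# COCO keypoint labels; the coordinate values are illustrative).
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def to_named_pose(keypoints):
    """Map a flat list of (x, y, confidence) triplets to named keypoints."""
    assert len(keypoints) == len(COCO_KEYPOINTS)
    return dict(zip(COCO_KEYPOINTS, keypoints))

# Example: one detected person (x, y in pixels, confidence in [0, 1])
pose = to_named_pose([(320 + i, 100 + 10 * i, 0.9) for i in range(17)])
print(pose["left_shoulder"])  # (325, 150, 0.9)
```

Capturing "posture" then amounts to reasoning over these named coordinates, e.g. comparing the relative heights of hips and knees.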

Applications

The implications of YOLOv8 Pose Estimation are far-reaching and diverse, with significant utility across various industries and domains:

  • Gesture Recognition: In user interface control and interactive systems, pose estimation allows for intuitive and natural user interaction through gestures.
  • Animation and Gaming: By capturing human motion, it facilitates the creation of realistic animations, enhancing the development process in gaming and film production.
  • Sports and Fitness: It offers advanced tools for athlete performance analysis, enabling precise monitoring of body movements for training and rehabilitation purposes.
  • Healthcare: In medical diagnostics and patient monitoring, it aids in the assessment of physical therapies and tracking of patient movements without invasive methods.
  • Security and Surveillance: Pose estimation can enhance surveillance systems by analyzing human behavior and detecting unusual activities or postures.

YOLOv8 Approach

The YOLOv8 model adopts a unique approach to pose estimation:

  • -pose Suffix Models: YOLOv8 integrates specialized models indicated by the '-pose' suffix. These models are specifically trained to handle the complexities of human pose estimation.
  • Training on COCO Keypoints Dataset: The models are trained on robust datasets like COCO keypoints, renowned for their diversity and comprehensive range of human poses and scenarios. This training ensures that the models can accurately identify and estimate human poses in various real-world settings.
  • Efficient and Accurate: YOLOv8 models balance efficiency and accuracy, making them suitable for real-time applications. They are capable of processing images quickly while maintaining a high level of precision in pose estimation.
  • Versatility in Diverse Tasks: These models are adept at handling a wide range of pose estimation tasks, from simple posture recognition to complex activities involving multiple people in dynamic environments.

List of available models

| Model           | size (pixels) | mAP pose 50-95 | mAP pose 50 | Speed A100 TensorRT (ms) | params (M) | FLOPs (B) |
|-----------------|---------------|----------------|-------------|--------------------------|------------|-----------|
| YOLOv8n-pose    | 640           | 50.4           | 80.1        | 1.18                     | 3.3        | 9.2       |
| YOLOv8s-pose    | 640           | 60.0           | 86.2        | 1.42                     | 11.6       | 30.2      |
| YOLOv8m-pose    | 640           | 65.0           | 88.8        | 2.0                      | 26.4       | 81.0      |
| YOLOv8l-pose    | 640           | 67.6           | 90.0        | 2.59                     | 44.4       | 168.6     |
| YOLOv8x-pose    | 640           | 69.2           | 90.2        | 3.73                     | 69.4      | 263.2     |
| YOLOv8x-pose-p6 | 1280          | 71.6           | 91.2        | 10.04                    | 99.1       | 1066.4    |

Comparison of the YOLOv8 Pose Estimation models [1]

The YOLOv8 pose models appear to be a highly accurate and fast solution for pose estimation tasks, suitable both for real-time applications and for scenarios requiring detailed pose analysis.

Their performance on standard datasets like COCO keypoints, and the ability to reproduce these results, are strong indicators of their reliability and practical utility.
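As a rough sanity check on the real-time claim, the per-image A100 TensorRT latencies from the table can be converted into approximate throughput. This is a back-of-the-envelope calculation that ignores pre/post-processing and batching, so treat it as an upper bound:

```python
# Convert the per-image A100 TensorRT latencies from the table above into
# approximate frames per second (1000 ms / latency). This ignores
# pre/post-processing and batching, so it is an upper bound.
latencies_ms = {
    "YOLOv8n-pose": 1.18,
    "YOLOv8s-pose": 1.42,
    "YOLOv8m-pose": 2.0,
    "YOLOv8l-pose": 2.59,
    "YOLOv8x-pose": 3.73,
    "YOLOv8x-pose-p6": 10.04,
}

fps = {name: round(1000 / ms) for name, ms in latencies_ms.items()}
print(fps["YOLOv8n-pose"])     # 847 images/s
print(fps["YOLOv8x-pose-p6"])  # 100 images/s
```

Even the largest model comfortably exceeds typical video frame rates on that hardware; the smaller variants leave ample headroom for the rest of the pipeline.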

Easily run YOLOv8 Pose Estimation

The Ikomia API lets you run YOLOv8 pose estimation with minimal code.

Setup

To begin, install the API in a virtual environment [2]. This keeps dependencies isolated and ensures a smooth start with the API's capabilities.


pip install ikomia

Run YOLOv8 Pose Estimation with a few lines of code

You can also directly run the notebook we have prepared.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils import ik
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow()

# Add algorithm
algo = wf.add_task(ik.infer_yolo_v8_pose_estimation(
                            conf_thres='0.5',
                            iou_thres='0.25',
                            input_size='640'), auto_connect=True)

# Run on your image  
wf.run_on(url="https://images.pexels.com/photos/33703/relay-race-competition-stadium-sport.jpg?cs=srgb&dl=pexels-pixabay-33703.jpg&fm=jpg&w=1920&h=1280")

# Inspect your result
display(algo.get_image_with_graphics())

[Output: the input image with detected pose keypoints drawn on the running athletes]

List of parameters:

  • model_name (str) - default 'yolov8m-pose': Name of the YOLOv8 pre-trained model. Other models available:

             - yolov8n-pose

             - yolov8s-pose

             - yolov8l-pose

             - yolov8x-pose

  • input_size (int) - default '640': Size of the input image.
  • iou_thres (float) - default '0.7': Intersection over Union threshold used for non-maximum suppression; the maximum allowed overlap between two boxes, in [0, 1].
  • cuda (bool): If True, CUDA-based inference (GPU). If False, run on CPU.
  • model_weight_file (str, optional): Path to model weights file .pt.
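To make the iou_thres parameter concrete, here is what Intersection over Union computes for two axis-aligned boxes (a self-contained sketch; the helper and the example boxes are illustrative, not part of the Ikomia API):

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# Two half-overlapping boxes: IoU = 1/3 (~0.33). During non-maximum
# suppression, a pair exceeding iou_thres keeps only the higher-confidence
# box, so a low threshold suppresses more aggressively than a high one.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))  # 0.333...
```

In the example above, with iou_thres='0.25' the lower-confidence of the two boxes would be suppressed, while with the default '0.7' both would be kept.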

Explore further with pose/keypoints estimation

This tutorial introduced the basics of pose estimation using YOLOv8. To expand your knowledge:

  • Deepen Your Understanding: Dive into our detailed OpenPose guide for an in-depth look at another significant pose estimation technology.

  • Consult Documentation and Tools:

             - For a comprehensive presentation of the API, refer to the documentation.

             - Explore Ikomia HUB to access the list of cutting-edge algorithms.

             - Additionally, Ikomia STUDIO offers a practical, user-friendly interface for these technologies.

References

‍[1] Ultralytics Pose Estimation models

[2] How to create a virtual environment in Python
