ByteTrack: The Future of Multi-Object Tracking in Computer Vision

Allan Kouidri
-
11/21/2023

In the dynamic realm of computer vision, the quest for robust and efficient Multi-Object Tracking (MOT) solutions is ever-evolving. Among the myriad of innovations, one name stands out: ByteTrack.

This groundbreaking AI algorithm has raised the bar for accuracy and efficiency in tracking multiple objects across video frames. From autonomous vehicles navigating bustling city streets to advanced surveillance systems monitoring for safety, ByteTrack delivers reliable tracking in the most challenging environments.

This article offers an in-depth exploration of ByteTrack, highlighting its core functionalities and its unique position in the domain of multi-object tracking (MOT).

Additionally, we will provide a step-by-step tutorial on how to seamlessly combine ByteTrack with state-of-the-art object detection frameworks, complete with concise Python code examples.

What is Multi-Object Tracking (MOT)?

Before delving into ByteTrack, it is essential to understand the core of MOT. The primary goal of MOT is to provide a consistent label for each object across frames, which requires solving two main problems: detection and association. 

Detection identifies objects in each frame, and association ensures that the object identified is the same from one frame to the next. Most traditional methods handle these tasks in two steps, but ByteTrack introduces a more integrated approach.
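
To make the two sub-problems concrete, here is a minimal illustration of what a tracker's output looks like; the data layout is invented purely for illustration:

# Detection answers "what is in this frame?"; association answers "which
# box in frame 1 is the same object as which box in frame 0?".
# Tracker output: for each frame, a mapping of persistent track ID -> box.
frame_0 = {1: [100, 50, 140, 120], 2: [300, 60, 340, 130]}
frame_1 = {1: [104, 52, 144, 122], 2: [290, 61, 330, 131]}

# A consistent label means that ID 1 in frame_0 and ID 1 in frame_1
# refer to the same physical object, even though its box has moved.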

ByteTrack tracking pedestrians in a street mall in Australia

What is ByteTrack?

ByteTrack stands as a pioneering force in the realm of computer vision, specifically tailored for the complex task of Multi-Object Tracking (MOT). This innovative AI algorithm is not just a tool but a game-changer, designed to assign unique identifiers to objects within a video, thereby enabling the consistent and accurate tracking of each object over time.

At its core, ByteTrack improves on traditional tracking methods with a deceptively simple idea: instead of discarding low-confidence detections, it associates every detection box, including the low-scoring ones that typically correspond to occluded or blurred objects. This makes it exceptionally adept at scenarios that confound conventional tracking systems, such as densely populated scenes where occlusions and rapid movements are common, and it maintains accurate identification even when objects are temporarily obscured or briefly leave the frame.

What sets ByteTrack apart is its robustness and adaptability. It can effectively track multiple objects across a variety of settings, from urban landscapes for autonomous vehicles to crowded public spaces for surveillance systems. This versatility is crucial in a world where the applications of computer vision are constantly expanding.

ByteTrack's efficiency is another of its standout features. It's designed to process information swiftly, making it suitable for real-time applications. This is particularly important in scenarios where immediate data processing is critical, such as in autonomous driving or emergency response situations.

Furthermore, ByteTrack's integration with state-of-the-art object detection frameworks, like YOLO (You Only Look Once) and Faster R-CNN, enhances its tracking capabilities. By starting with high-precision object detections, ByteTrack lays a solid foundation for its tracking process, ensuring that each subsequent step is built on reliable data.

In summary, ByteTrack is more than just an algorithm; it's a comprehensive solution for real-world challenges in the field of computer vision. Its ability to accurately track multiple objects in real-time, regardless of the complexities of the environment, positions it as a crucial tool in the ever-evolving landscape of AI and technology.

How does ByteTrack work?

ByteTrack is an AI-based MOT algorithm that builds upon the foundation of object detectors to provide real-time and reliable tracking of multiple objects in a video stream. Here’s a closer look at how ByteTrack functions:

ByteTrack integration with object detectors

ByteTrack starts with the output from an object detection model. Object detectors like the YOLO series (You Only Look Once) or Faster R-CNN are commonly used for this purpose. These detectors provide bounding boxes and associated confidence scores that represent the likelihood of each box containing an object.
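
For example, a detector's output for a single frame is essentially an array of boxes with scores. The layout below is an illustrative convention rather than any specific framework's format:

import numpy as np

# One row per detection: [x1, y1, x2, y2, confidence].
# ByteTrack keeps all rows, including the low-confidence ones that most
# trackers would discard outright.
detections = np.array([
    [100, 50, 140, 120, 0.92],  # clearly visible object
    [300, 60, 340, 130, 0.85],  # clearly visible object
    [210, 55, 245, 125, 0.23],  # partially occluded object, low score
])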

Frame-by-frame detection and association

ByteTrack uses these detections across consecutive frames to track objects. It applies the following steps:

  1. Detection: The object detector identifies objects in each frame and assigns each a confidence score.
  2. High-confidence association: ByteTrack first takes the high-confidence detections and associates them across frames using a motion model (typically a Kalman filter) and an assignment algorithm such as the Hungarian algorithm. This step links detections of the same object across different frames, creating a trajectory for each object.
  3. Low-confidence association: After the high-confidence pass, ByteTrack uniquely incorporates the lower-confidence detections that would typically be discarded. It associates these detections with existing trajectories if they match the expected location and appearance of a tracked object (a simplified sketch of both association stages follows below). This step is crucial for maintaining the identity of objects when they are occluded or not clearly visible.
  4. Track lifecycle management: ByteTrack manages the creation and deletion of tracks, initializing new tracks for detections that match no existing trajectory and terminating tracks that are no longer supported by sufficient evidence in subsequent frames.
ByteTrack associates every detection box (both low & high confidence) [1]
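
To make the two association stages tangible, below is a simplified sketch in the spirit of BYTE. It is not ByteTrack's actual implementation: the real tracker matches detections against Kalman-filter-predicted track positions and uses tuned thresholds, whereas this sketch matches raw boxes with an IoU cost and SciPy's Hungarian solver, with illustrative threshold values.

import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    # Intersection-over-union of two boxes in [x1, y1, x2, y2] format.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def hungarian_match(tracks, dets, iou_thresh=0.3):
    # Assign detections to tracks by minimizing total (1 - IoU) cost.
    # Returns matched (track_idx, det_idx) pairs plus leftover indices.
    if not tracks or not dets:
        return [], list(range(len(tracks))), list(range(len(dets)))
    cost = np.array([[1.0 - iou(t, d) for d in dets] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_thresh]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_t = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_d = [j for j in range(len(dets)) if j not in matched_d]
    return matches, unmatched_t, unmatched_d

def byte_associate(track_boxes, boxes, scores, high=0.6, low=0.1):
    # Split detections by confidence score.
    hi_idx = [i for i, s in enumerate(scores) if s >= high]
    lo_idx = [i for i, s in enumerate(scores) if low <= s < high]

    # Stage 1: match high-confidence detections against all tracks.
    m1, unmatched_t, _ = hungarian_match(track_boxes, [boxes[i] for i in hi_idx])

    # Stage 2: give the still-unmatched tracks a second chance against the
    # low-confidence detections (often occluded or blurred objects).
    m2, still_unmatched, _ = hungarian_match(
        [track_boxes[t] for t in unmatched_t], [boxes[i] for i in lo_idx])

    stage1 = [(t, hi_idx[d]) for t, d in m1]
    stage2 = [(unmatched_t[t], lo_idx[d]) for t, d in m2]
    lost = [unmatched_t[t] for t in still_unmatched]
    return stage1 + stage2, lost

In the full algorithm, high-confidence detections that remain unmatched after stage 1 seed new tracks, while tracks left unmatched by both stages are kept alive for a grace period before deletion, which is the lifecycle management of step 4.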

Optimizing track continuity

The core innovation of ByteTrack is in how it deals with detections of varying confidence. By not discarding low-confidence detections, ByteTrack effectively utilizes more information available in the video, which helps in situations where objects may be partially occluded or their appearance changes rapidly due to lighting or pose variations.
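
Matching a low-confidence detection to a trajectory presupposes knowing where that trajectory should be in the current frame. ByteTrack obtains this from a Kalman filter with a constant-velocity motion model; the toy function below mimics only the intuition of the predict step with a naive constant-velocity extrapolation and is not the filter itself:

def predict_box(prev_box, curr_box):
    # Assume the box keeps the velocity observed between its last two
    # frames; boxes are [x1, y1, x2, y2]. A Kalman filter does this more
    # robustly, smoothing noise and tracking uncertainty over time.
    velocity = [c - p for p, c in zip(prev_box, curr_box)]
    return [c + v for c, v in zip(curr_box, velocity)]

# A track observed at frames t-1 and t; where to look for it at t+1.
print(predict_box([100, 50, 140, 120], [104, 52, 144, 122]))
# -> [108, 54, 148, 124]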

Handling occlusions and interactions

During scenarios where objects overlap or occlude each other, ByteTrack's strategy significantly improves its ability to keep tracking objects correctly. The algorithm's robustness in such challenging conditions makes it stand out from traditional MOT approaches that struggle with occlusions and interactive dynamics.

Real-time processing

ByteTrack is designed to be efficient. By relying on simple yet effective association strategies, it can process video frames in real-time, which is critical for applications like autonomous driving or real-time surveillance.

ByteTrack performance and applications

In terms of performance, ByteTrack has demonstrated outstanding results on standard benchmarks like the MOTChallenge, reaching 80.3 MOTA, 77.3 IDF1, and 63.1 HOTA on the MOT17 test set while running at around 30 FPS [1]. It excels at maintaining accurate track identities even in crowded scenes where objects frequently interact.

ByteTrack performance [1]

The practical applications of ByteTrack are vast:

  • Autonomous vehicles: Enhancing perception systems to accurately track pedestrians, vehicles, and other elements in real-time.
  • Surveillance systems: Improving anomaly detection and activity recognition in complex environments.
  • Robotics: Allowing robots to navigate and interact with dynamic environments more effectively.
  • Sports analytics: Providing detailed tracking of athletes for performance analysis and strategic planning.

Tracking of 100m sprint athletes using ByteTrack (source: [2])

Challenges and future directions

Despite its impressive capabilities, ByteTrack, like any AI system, is not without challenges. The reliance on the quality of initial detections can be a limiting factor; if the detector performs poorly, the tracking will suffer.

Additionally, ByteTrack’s performance can be affected by extreme conditions such as heavy occlusion, high-speed movements, or drastic appearance changes.

The future of ByteTrack involves integrating it with more advanced detectors and exploring the use of deep learning for more sophisticated association strategies. Furthermore, adapting ByteTrack for 3D tracking in autonomous systems or virtual environments could significantly enhance its utility.

Get started with Ikomia API 

Using the Ikomia API, you can effortlessly create an object detection and tracking workflow in just a few lines of code.

To get started, you need to install the API in a virtual environment [3].


pip install ikomia

Run ByteTrack with a few lines of code

Here we run the following workflow: 

We use the SOTA YOLOv8 algorithm for object detection, followed by object tracking using the ByteTrack algorithm.

To process your video, simply modify the 'input_video_path' variable with your file path.

You can also open the notebook we have prepared and run it directly.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
import cv2


# Replace 'your_video_path.mp4' with the actual video file path
input_video_path = 'your_video_path.mp4'
output_video_path = 'output_video.avi'

# Init your workflow
wf = Workflow()

# Add object detection algorithm
detector = wf.add_task(name="infer_yolo_v8", auto_connect=True)

# Add ByteTrack tracking algorithm
tracking = wf.add_task(name="infer_bytetrack", auto_connect=True)
tracking.set_parameters({
    "categories": "person"
})

# Open the video file
stream = cv2.VideoCapture(input_video_path)
if not stream.isOpened():
    print("Error: Could not open video.")
    exit()

# Get video properties for the output
frame_width = int(stream.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(stream.get(cv2.CAP_PROP_FRAME_HEIGHT))
frame_rate = stream.get(cv2.CAP_PROP_FPS)

# Define the codec and create VideoWriter object
# The 'XVID' codec is widely supported and provides good quality
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter(output_video_path, fourcc, frame_rate, (frame_width, frame_height))

while True:
    # Read image from stream
    ret, frame = stream.read()

    # Test if the video has ended or there is an error
    if not ret:
        print("Info: End of video or error.")
        break

    # Run the workflow on current frame
    wf.run_on(array=frame)

    # Get results
    image_out = tracking.get_output(0)
    obj_detect_out = tracking.get_output(1)

    # Convert the result to BGR color space for saving and displaying
    img_res = cv2.cvtColor(image_out.get_image_with_graphics(obj_detect_out), cv2.COLOR_RGB2BGR)

    # Save the resulting frame
    out.write(img_res)

    # Display
    display(img_res, title="ByteTrack", viewer="opencv")

    # Press 'q' to quit the video processing
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# After the loop release everything
stream.release()
out.release()
cv2.destroyAllWindows()

Tracking people in a train station using ByteTrack

To adjust the parameters, refer to the algorithm documentation available on Ikomia HUB.
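
As a pattern, parameters are passed as a string dictionary to set_parameters(), as already done with categories in the code above. The threshold keys below are hypothetical placeholders used only to illustrate the pattern, not verified parameter names; check the infer_bytetrack page on Ikomia HUB for the actual keys and their defaults:

# 'categories' comes from the tutorial above; the two threshold keys are
# hypothetical placeholders -- verify the real names on Ikomia HUB.
tracking.set_parameters({
    "categories": "person",
    "conf_high_thres": "0.6",  # hypothetical: high/low confidence split
    "conf_low_thres": "0.1",   # hypothetical: minimum score kept for stage 2
})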

Build your own workflow with Ikomia

In this tutorial, we have explored the process of creating a workflow for object detection and tracking using YOLOv8 and ByteTrack. 

Explore a wide range of ready-to-use algorithms on Ikomia HUB and enjoy the freedom to craft your own workflow with your preferred object detection model! Don't hesitate to experiment with other object tracking algorithms, such as DeepSORT, to find the one that best suits your project needs.

For a comprehensive presentation of the API, we recommend referring to our detailed documentation. For those seeking a more interactive experience, Ikomia STUDIO provides an accessible, user-friendly interface, equipping you with the same powerful functionalities found within our API.

References

[1] ByteTrack: Multi-Object Tracking by Associating Every Detection Box - https://arxiv.org/pdf/2110.06864.pdf

[2] Understanding Multiple Object Tracking using DeepSORT - https://learnopencv.com/understanding-multiple-object-tracking-using-deepsort/

[3] How to create a virtual environment
