The Ultimate Guide to MMSegmentation and MMseg in Semantic Segmentation

Allan Kouidri
-
1/25/2024
MMSegmentation semantic segmentation on Shibuya Crossing

MMSegmentation (MMSeg) has emerged as a top-tier toolkit in the realm of semantic segmentation, gaining notable popularity in the Python community. Its documentation and setup process, while thorough, can initially seem overwhelming to newcomers.

This guide aims to simplify your introduction to MMSegmentation/MMSeg, highlighting essential steps and addressing typical challenges encountered while employing the MMSegmentation/MMSeg API. 

We'll also introduce an efficient method to leverage MMSegmentation's capabilities via the Ikomia API, enhancing your overall experience in semantic segmentation.

Prepare to elevate your semantic segmentation projects!

MMSegmentation/MMSeg: the semantic segmentation toolbox

Semantic segmentation is a vital and constantly advancing area in the field of computer vision.

Among the most prominent and cutting-edge tools in this domain is MMSegmentation (MMSeg), a comprehensive open-source toolbox for semantic segmentation, developed on the PyTorch platform.

What is MMSegmentation/MMSeg?

MMSegmentation is part of the OpenMMLab project and is developed by the Multimedia Laboratory at the Chinese University of Hong Kong. Dedicated to semantic segmentation, it offers an extensive collection of segmentation models and algorithms, making it a go-to choice for both researchers and practitioners in the field.

Key Features of MMSegmentation/MMSeg

  • Diverse Model Zoo: MMSegmentation provides a rich array of models, including classics like Fully Convolutional Networks (FCN) and DeepLabV3, as well as state-of-the-art models like Pyramid Scene Parsing Network (PSPNet) and HRNet.
  • Modular Design: The library’s modular design allows for easy customization and modification of components to fit specific project requirements (see the config sketch after this list).
  • High Performance: Optimized for both speed and accuracy, MMSegmentation ensures efficient training and inference processes.
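
For a concrete sense of this modularity, here is a minimal, illustrative sketch in MMSegmentation's Python config style. It inherits the PSPNet config used later in this guide and swaps components simply by overriding dictionary keys (the specific overrides are examples, not recommendations):


_base_ = ['./pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py']

# Override only the parts you want to change; everything else is inherited
model = dict(
    backbone=dict(depth=101),          # e.g. swap the ResNet-50 backbone for ResNet-101
    decode_head=dict(num_classes=19),  # e.g. match the class count of your dataset
)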

Benefits of Using MMSegmentation/MMSeg

  • Accelerated R&D: Its extensive model support and high efficiency streamline the research and development process in semantic segmentation.
  • Ease of Experimentation: The library's modularity facilitates experimentation with different architectures and configurations, encouraging innovation.
  • Strong Community and Support: As an open-source project, MMSegmentation boasts a robust community of users and contributors, offering a wealth of resources and collaborative opportunities.

Practical Applications of MMSegmentation/MMSeg

MMSegmentation isn’t just a theoretical tool; it has practical implications in various sectors:

  • Autonomous Driving: Enhancing perception systems for better navigation and safety.
  • Medical Image Analysis: Assisting in precise segmentation in medical diagnostics.
  • Remote Sensing: Used in land cover classification and environmental monitoring.
  • Retail: In store layout analysis and customer behavior studies.

Getting Started with MMSegmentation/MMSeg

For this section, we will navigate through the MMSegmentation documentation for semantic segmentation [1]. It's advisable to review the entire setup process beforehand, as we've identified certain steps that may be tricky or simply not work.

Prerequisite

OpenMMLab suggests specific Python and PyTorch versions for optimal results.

  • Linux | Windows | macOS
  • Python 3.7+
  • PyTorch 1.8+
  • CUDA 10.2+

For this demonstration, we used a Windows setup with CUDA 11.8 installed.

Environment setup

The first step in preparing your environment involves creating a Python virtual environment [2] and installing the necessary Torch dependencies.

Creating the virtual environment

Following the recommendation, we used Python 3.8:


python -m virtualenv openmmlab  --python=python3.8
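
Before installing anything into it, activate the environment. On Windows (as used in this demonstration):


openmmlab\Scripts\activate


On Linux or macOS:


source openmmlab/bin/activate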

Installing Torch and Torchvision

Once you activate the 'openmmlab' virtual environment, the next step is to install the required PyTorch dependencies.


pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118
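
To confirm that the CUDA-enabled build was installed correctly, you can optionally run a quick check:


python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"


This should print the installed Torch version, the CUDA version it was built against (11.8 here), and True.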

Installing MMlab dependencies 

Then we install the following dependencies:


pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"

Subsequently, we installed 'mmsegmentation' as a dependency:


pip install "mmsegmentation>=1.0.0"

Downloading the checkpoint

To obtain the necessary checkpoint file (.pth) and configuration file (.py) for MMSegmentation/MMSeg, use the following command:


mim download mmsegmentation --config pspnet_r50-d8_4xb2-40k_cityscapes-512x1024 --dest .

Executing this command will download both the checkpoint and the configuration file directly into your current working directory.

Inference using MMSegmentation/MMSeg API

To test our setup, we ran inference on a sample image with the PSPNet model. This step verifies that the installation and setup work as expected.


from mmseg.apis import inference_model, init_model, show_result_pyplot
import mmcv

config_file = 'pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py'
checkpoint_file = 'pspnet_r50-d8_512x1024_40k_cityscapes_20200605_003338-2966598c.pth'

# build the model from a config file and a checkpoint file
model = init_model(config_file, checkpoint_file, device='cuda:0')

# test a single image and show the results
img = 'demo.png'  # or img = mmcv.imread(img), which will only load it once
result = inference_model(model, img)
# visualize the results in a new window
show_result_pyplot(model, img, result, show=True)

We ran into the following error: "No module named 'ftfy'".

This is resolved by installing the ftfy module:


pip3 install ftfy

Following the execution of the inference code snippet, we successfully achieved image segmentation:

MMSegmentation with the MMLab API
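
If you need the raw predictions rather than the rendered visualization, note that in MMSeg 1.x inference_model returns a SegDataSample whose pred_sem_seg field holds the class-index mask. A minimal sketch, continuing from the inference code above:


import numpy as np

# Extract the (H, W) class-index mask as a NumPy array
mask = result.pred_sem_seg.data.squeeze().cpu().numpy().astype(np.uint8)
print(mask.shape, np.unique(mask))  # image size and the class ids present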

The installation process was completed without any significant issues. During the testing phase of the inference code snippet, we encountered a minor hiccup with the missing 'ftfy' dependency, but this was quickly resolved.

In the upcoming section, we'll explore how to use MMSegmentation/MMSeg through the Ikomia API. This approach bypasses the complexity of installing various dependencies and simplifies the process into just 2 straightforward steps.

Easier MMSegmentation/MMSeg semantic segmentation with a Python API

With the Ikomia team, we've been working on a prototyping tool that removes tedious installation and testing phases and speeds up development.

We wrapped it in an open-source Python API. Now we're going to explain how to use it to run semantic segmentation with MMSegmentation in less than 10 minutes.

Environment setup

As before, you need to install the API in a virtual environment [2].

Then the only thing you need to do is install ikomia:


pip install ikomia

MMSegmentation/MMSeg inference

You can also directly load the open-source notebook we have prepared.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow()

# Add the semantic segmentation algorithm
segmentor = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

segmentor.set_parameters({
        "model_name": "pspnet",
        "model_config": "pspnet_r50-d8_4xb2-40k_cityscapes-512x1024.py",
        "cuda": "True",
    })

# Run the workflow on image
wf.run_on(url="https://github.com/open-mmlab/mmsegmentation/blob/main/demo/demo.png?raw=true")

# Inspect your result
display(segmentor.get_image_with_mask())

MMSegmentation with the Ikomia API
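
Beyond displaying the result, you may want to save the overlay to disk. A short sketch, assuming get_image_with_mask() returns an RGB NumPy array (OpenCV expects BGR, hence the conversion):


import cv2

# Save the segmentation overlay; convert channel order from RGB to BGR for OpenCV
img_out = segmentor.get_image_with_mask()
cv2.imwrite("segmentation_result.png", cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR))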

List of parameters:

- model_name (str, default="maskformer"): model name.

- model_config (str, default="maskformer_r50-d32_8xb2-160k_ade20k-512x512"): name of the model configuration file.

- config_file (str, default=""): path to the model config file (only if use_custom_model=True). This file is generated at the end of a custom training. Use the train_mmlab_segmentation algorithm from Ikomia HUB to train a custom model (see the example after this list).

- model_weight_file (str, default=""): path to model weights file (.pt) (only if use_custom_model=True). The file is generated at the end of a custom training.

- cuda (bool, default=True): CUDA acceleration if True, run on CPU otherwise.
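
For instance, pointing the algorithm at a model you have fine-tuned yourself might look like the sketch below; the file paths are placeholders, and the keys follow the parameter list above:


# Hypothetical example: the file paths below are placeholders
segmentor.set_parameters({
    "config_file": "path/to/your/custom_config.py",
    "model_weight_file": "path/to/your/custom_weights.pt",
    "cuda": "True",
})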

The MMLab semantic segmentation framework offers a wide range of models. To make it easier to choose a valid (model_name, model_config) pair, you can call the get_model_zoo() function to get a list of possible values.


from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add the semantic segmentation algorithm
detector = wf.add_task(name="infer_mmlab_segmentation", auto_connect=True)

# Get list of possible models (model_name, model_config)
print(detector.get_model_zoo())

Fast MMSegmentation/MMSeg execution: from setup to results in just 8 minutes 

To carry out semantic segmentation, we simply installed Ikomia and ran the workflow code snippets. All dependencies were seamlessly handled in the background.

Creating a Semantic Segmentation Workflow Using Ikomia and MMSegmentation/MMSeg

This guide has delved into the nuances of developing a semantic segmentation workflow using MMSegmentation (MMSeg).

Advancing Your Expertise in Semantic Segmentation

Customizing your model to meet specific needs and combining it with other state-of-the-art models is a vital skill in the field of Computer Vision.

Keen on elevating your proficiency in semantic segmentation?

Discover how to fine-tune your semantic segmentation model →


References

[1] MMSegmentation documentation.

[2] How to create a virtual environment.
