Understanding DeepLabV3 in Image Segmentation

Allan Kouidri
-
12/12/2023
DeeplabV3 semantic segmentation

What is DeepLabV3?

DeepLabV3 is an advanced neural network architecture designed for the task of semantic image segmentation. This technique involves labeling each pixel in an image with a class, corresponding to what that pixel represents.

DeepLabV3+ is a significant advancement over its predecessors in the DeepLab series, offering enhanced accuracy and efficiency in segmenting complex images.

The Evolution of DeepLab Series

The DeepLab series has played a pivotal role in advancing semantic image segmentation research. Here's a look at its evolutionary journey:

  • DeepLabV1: This initial version introduced atrous (dilated) convolutions, a novel concept at the time. The use of atrous convolutions allowed the model to capture wider context in images without reducing their resolution, marking a significant step forward in segmentation accuracy.

  • DeepLabV2: Building on the foundations of its predecessor, DeepLabV2 introduced Atrous Spatial Pyramid Pooling (ASPP). This innovation greatly improved the model's ability to handle objects of varying scales, enhancing its versatility and effectiveness in different segmentation scenarios.

  • DeepLabV3 (2017): Further refining the concept of ASPP, DeepLabV3 integrated this advanced pooling technique into a deeper and more robust network. This version marked a substantial improvement in the series, optimizing both the depth of the network and the efficiency of scale-variant segmentation.

  • DeepLabV3+ (2018): Presented at ECCV 2018, DeepLabV3+ is an incremental yet significant update to DeepLabV3. It not only surpassed its predecessor but also achieved state-of-the-art (SOTA) performance in mean Intersection over Union (mIoU), reaching 89.0% mIoU on the PASCAL VOC 2012 test set and 82.1% on the Cityscapes dataset. These results underscored the series' continual progress in pushing the boundaries of semantic image segmentation.

DeepLabV3 architecture: key innovations

The architecture of DeepLabV3+ is a sophisticated blend of novel and proven techniques in the field of deep learning and computer vision. 

It represents a significant evolution from its predecessors, focusing on enhancing segmentation accuracy, particularly for object boundaries and fine details. Here's a deeper dive into the key components of the DeepLabV3+ architecture.

Encoder-Decoder structure

Encoder

The encoder in DeepLabV3+ is primarily responsible for extracting semantic information from the image. It utilizes a modified Xception model, which is a powerful deep convolutional neural network known for its efficiency and accuracy. 

The encoder employs atrous convolution to enlarge the field of view of filters, enabling the capture of broader context without reducing the spatial resolution of the feature map.
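To make this concrete, here is a minimal PyTorch sketch (an illustration, not the exact Detectron2 implementation used by DeepLabV3+) of an ASPP-style block: parallel 3x3 convolutions with different dilation rates gather context at several scales, and because the padding matches the dilation rate, the spatial size of the feature map is preserved.

import torch
import torch.nn as nn

class MiniASPP(nn.Module):
    """Simplified ASPP-style block: parallel atrous convolutions at several rates."""
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding == dilation keeps the spatial resolution unchanged for a 3x3 kernel
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        # 1x1 convolution fuses the concatenated multi-scale responses
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))

feat = torch.randn(1, 256, 64, 64)   # coarse feature map from the backbone
print(MiniASPP()(feat).shape)        # torch.Size([1, 256, 64, 64]) -- resolution preserved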

Decoder

The decoder's primary function is to refine the segmentation results, especially along object boundaries. It takes the coarse semantic features from the encoder and progressively refines them by combining them with low-level features from earlier in the network. This combination helps in capturing fine details and improves the localization of object edges.
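A simplified PyTorch sketch of this idea (not the exact DeepLabV3+ decoder, just the general pattern): the coarse encoder output is upsampled, concatenated with a reduced low-level feature map, and refined before the per-pixel prediction.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDecoder(nn.Module):
    """Toy decoder: fuse coarse encoder features with low-level features."""
    def __init__(self, encoder_channels=256, low_level_channels=256, num_classes=19):
        super().__init__()
        # 1x1 conv reduces the low-level features before concatenation
        self.reduce = nn.Conv2d(low_level_channels, 48, kernel_size=1)
        # 3x3 conv refines the fused features, then a 1x1 conv predicts per-pixel classes
        self.refine = nn.Sequential(
            nn.Conv2d(encoder_channels + 48, 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, encoder_feat, low_level_feat):
        # Upsample coarse encoder features to the low-level feature resolution
        encoder_feat = F.interpolate(encoder_feat, size=low_level_feat.shape[-2:],
                                     mode="bilinear", align_corners=False)
        fused = torch.cat([encoder_feat, self.reduce(low_level_feat)], dim=1)
        return self.refine(fused)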

DeepLab architecture
The encoder module of DeepLabV3+ captures multi-scale contextual information by applying atrous convolution at multiple scales, while the efficient decoder module refines the segmentation results along object boundaries. [1]

Atrous separable convolution

The atrous separable convolution, a central innovation in DeepLabV3+, melds atrous convolution with depthwise separable convolution. 

  • Atrous convolution lets the model control the resolution at which feature responses are computed within deep convolutional neural networks, providing finer control over how image detail is captured.
Atrous convolution
[2]

  • Depthwise separable convolution, on the other hand, divides the process into two distinct steps: depthwise convolution (a) and pointwise convolution (b).
Atrous depthwise conv
[1]

This division not only streamlines the computational process but also reduces the overall size of the model, resulting in a more efficient yet powerful network architecture.

  • (c) Atrous Depthwise Convolution: the model incorporates atrous convolution within its depthwise convolution step, as sketched below. This integration markedly lowers computational complexity while preserving or even improving performance.
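Putting these pieces together, an atrous separable convolution can be sketched as a depthwise dilated 3x3 convolution followed by a 1x1 pointwise convolution; the layer configuration below is illustrative, not the exact one used in DeepLabV3+.

import torch.nn as nn

def atrous_separable_conv(in_ch, out_ch, dilation):
    """Depthwise dilated 3x3 conv (one filter per channel) + 1x1 pointwise conv."""
    return nn.Sequential(
        # (a) + (c): depthwise convolution, made atrous via `dilation`
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                  dilation=dilation, groups=in_ch),
        # (b): pointwise convolution mixes information across channels
        nn.Conv2d(in_ch, out_ch, kernel_size=1),
    )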

Atrous conv kernel
[3]

With a dilation rate of 2, a 3x3 atrous convolution kernel effectively covers a receptive field equivalent to a 5x5 kernel. Layering multiple atrous convolutional layers significantly expands the receptive field and produces denser feature maps than standard convolutional layers.

This enhanced feature extraction capability is a key advantage of atrous convolutions, allowing for more detailed and comprehensive analysis of input images.
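To check the claim above: the effective extent of a k x k kernel with dilation rate r is k + (k - 1)(r - 1). A quick, purely arithmetic helper (hypothetical, for illustration) makes this explicit.

def effective_kernel_size(k: int, rate: int) -> int:
    """Effective extent of a k x k kernel dilated by `rate`."""
    return k + (k - 1) * (rate - 1)

for rate in (1, 2, 4):
    print(rate, effective_kernel_size(3, rate))
# rate 1 -> 3, rate 2 -> 5 (the 5x5 equivalence mentioned above), rate 4 -> 9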

How does Atrous convolution help with segmentation?

Atrous convolution enables the construction of deeper networks that maintain high-level information at finer resolutions without increasing the parameter count.

DeepLabV3 multigrid
[2]

The use of atrous convolution results in a backbone capable of extracting fine resolution feature maps, thus preserving more detailed information throughout the network.
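As a minimal illustration (a toy PyTorch comparison, not taken from the DeepLab code), swapping a stride-2 convolution for a stride-1 convolution with dilation 2 keeps the feature map at full resolution while still looking at a wider context.

import torch
import torch.nn as nn

x = torch.randn(1, 64, 128, 128)

# Downsampling variant: stride 2 halves the spatial resolution
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)
print(strided(x).shape)   # torch.Size([1, 64, 64, 64])

# Atrous variant: stride 1 with dilation 2 keeps the full resolution
atrous = nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=2, dilation=2)
print(atrous(x).shape)    # torch.Size([1, 64, 128, 128])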

Xception as backbone network

DeepLabV3+ adapts the Xception model as its backbone network. Xception, which stands for "Extreme Inception," is a deep convolutional neural network that replaces standard Inception modules with depthwise separable convolutions. 

This choice of backbone contributes to the efficiency and effectiveness of the model, particularly in terms of computational resource utilization and accuracy in capturing complex features.

Simplified Semantic Segmentation with DeepLab via Ikomia API

Experience an effortless approach to semantic segmentation using DeepLab with the Ikomia API. This user-friendly method cuts out most of the usual coding complexity and dependency setup.

Initial setup

To leverage the Ikomia API's full potential, start by installing it in a virtual environment [4].


pip install ikomia

Run DeepLab with a few lines of code

You can also directly load the notebook we have prepared.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display


# Init your workflow
wf = Workflow()

# Add the Deeplab algorithm
deeplab = wf.add_task(name="infer_detectron2_deeplabv3plus", auto_connect=True)
deeplab.set_parameters({"dataset": "Cityscapes"})

# Run on your image  
wf.run_on(url="https://images.pexels.com/photos/378570/pexels-photo-378570.jpeg?")

# Inspect your results
display(deeplab.get_image_with_mask())

  • dataset (str) - Default 'Cityscapes': Use model trained on the Cityscapes dataset. Use "Custom" if using a custom model.
  • config_file (str, optional): Path to the .yaml config file.
  • model_weight_file (str, optional): Path to model weights file .pth.
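For example, to run the algorithm with your own trained model, you can pass the parameters documented above; the file paths below are placeholders to replace with your own.

deeplab.set_parameters({
    "dataset": "Custom",                            # switch from the pretrained Cityscapes model
    "config_file": "path/to/your/config.yaml",      # placeholder path to your .yaml config
    "model_weight_file": "path/to/your/model.pth",  # placeholder path to your .pth weights
})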

Creating a Semantic Segmentation Workflow with Ikomia

In this guide, we explored how to develop a semantic segmentation workflow using DeepLabV3+. 

Extending Your Skills on Semantic Segmentation

Tailoring your model to specific requirements and integrating it with other cutting-edge models is a crucial aspect in the realm of Computer Vision.

Interested in further enhancing your semantic segmentation capabilities? 

Explore fine-tuning your own semantic segmentation model →

Resources and Tools to develop advanced Computer Vision solutions

  • For a deeper understanding of the API, our comprehensive documentation is a valuable resource. 
  • Additionally, the Ikomia HUB presents a variety of advanced algorithms to explore. 
  • And for a more accessible experience, Ikomia STUDIO offers an intuitive interface with the same extensive functionalities as the API, but tailored for ease of use.

References

[1] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

[2] Rethinking Atrous Convolution for Semantic Image Segmentation

[3] https://github.com/vdumoulin/conv_arithmetic/tree/master 

[4] How to create a virtual environment in Python 
