Face Detection and Blurring: Mastering Techniques with Ikomia

Allan Kouidri
-
6/14/2023
Face detection and blurring with Ikomia API

In this case study, we will explore the process of creating a workflow for face detection using Kornia, followed by blurring faces with OpenCV. 

The Ikomia API simplifies the development of Computer Vision workflows and provides easy experimentation with different parameters to achieve optimal results.

Get started with Ikomia API

With Ikomia API, creating a face detection workflow using Kornia followed by blurring with OpenCV becomes effortless, requiring only a few lines of code. To get started, you need to install the API in a virtual environment.

How to install a virtual environment


pip install ikomia

API documentation

API repo

Run the face detection and blur algorithms with a few lines of code

You can also directly run the open-source notebook we have prepared.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init the workflow
wf = Workflow()

# Add and connect algorithms
face = wf.add_task(name="infer_face_detection_kornia", auto_connect=True)
blur = wf.add_task(name="ocv_blur", auto_connect=True)

# Set parameters
blur.set_parameters({
    "kSizeWidth": "61",
    "kSizeHeight": "61"
})

# Run on your image
wf.run_on(path="img_people.jpg")

# Inspect results
display(face.get_image_with_graphics())
display(blur.get_output(0).get_image())

First, we get the output of the Kornia algorithm, displaying the bounding boxes and confidence scores.

Here is the output from the OpenCV blur algorithm.

The Kornia face detector

Kornia is an open-source Computer Vision library for Python, specifically designed for use with the PyTorch deep learning framework. It provides differentiable Computer Vision applications, such as deep edge detection, semantic and panoptic segmentation, object detection and tracking, and image classification.

The Kornia face detection algorithm uses a lightweight deep learning model named YuNet. It offers millisecond-level detection speed, making it ideal for edge computing. Testing this algorithm on my computer, I measured up to:

  • 35 frames per second (FPS) on GPU (NVIDIA GeForce RTX 3060) 
  • 70 FPS on CPU

YuNet achieves an impressive balance between accuracy and speed, outperforming other small-size detectors with a compact architecture of only 75,856 parameters. The model has been trained on WIDER FACE, the largest public face detection dataset, with 32,203 images and 393,703 faces. It reaches 81.1% mAP on the WIDER FACE validation hard track.

Here is an example taken from the YuNet paper testing the model on the world’s largest selfie.

World's largest selfie (Wu et al. 2023)

You can find the source code for YuNet here.

How does OpenCV Blur work? 

The blur algorithm in OpenCV is used to reduce image noise and smooth out details in an image by applying a blur or averaging effect.

The blur algorithm works by convolving each pixel in the image with a kernel, which is a small matrix. The kernel defines how neighboring pixel values are combined to produce the new value for the current pixel. In the case of blurring, the kernel typically consists of equal weights, resulting in a uniform averaging effect.

A 3x3 normalized box filter K looks like this:

K = (1/9) × [[1, 1, 1],
             [1, 1, 1],
             [1, 1, 1]]
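As a quick standalone illustration (outside the Ikomia workflow), convolving an image with this normalized kernel via OpenCV's generic ‘filter2D’ function gives the same result as the dedicated ‘blur’ function; the image path below is just a placeholder:

import cv2
import numpy as np

img = cv2.imread("img_people.jpg")

# 3x3 normalized box kernel, as in the formula above
kernel = np.ones((3, 3), dtype=np.float32) / 9.0

manual = cv2.filter2D(img, -1, kernel)  # generic convolution with the box kernel
auto = cv2.blur(img, (3, 3))            # dedicated box-filter function

# The two results should match (differences of at most 1 may appear from rounding)
print(cv2.absdiff(manual, auto).max())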

Here is a general overview of the steps involved in the OpenCV blur algorithm:

    1. Define a kernel size, which determines the size of the neighborhood considered for each pixel.  

    2. Apply the blur operation by convolving the kernel with each pixel in the image.

    3. For each pixel, compute the average value of the pixel intensities within the neighborhood defined by the kernel.

    4. Assign the computed average value as the new pixel intensity for that pixel.

    5. Repeat the process for all pixels in the image.

    6. The resulting image will have reduced noise and a smoothed appearance.

The OpenCV blur algorithm enables you to customize the blur effect according to your own needs by specifying the kernel size and other parameters.
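For instance, the blur applied in the workflow above can be reproduced directly with OpenCV in Python; a larger kernel gives a stronger effect (file names are placeholders):

import cv2

img = cv2.imread("img_people.jpg")

# Strong blur: each pixel becomes the average of its 61x61 neighborhood
strong_blur = cv2.blur(img, (61, 61))

# Milder blur with a smaller 15x15 kernel
mild_blur = cv2.blur(img, (15, 15))

cv2.imwrite("strong_blur.jpg", strong_blur)
cv2.imwrite("mild_blur.jpg", mild_blur)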

Step-by-step face detection and blurring with the Ikomia API

In this section, we will demonstrate how to utilize the Ikomia API to create a workflow for face detection and blur processing as presented above.

Step 1: Import


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

  • The ‘Workflow’ class is the base object for creating a workflow. It provides methods for setting inputs (image, video, directory), configuring task parameters, obtaining time metrics, and retrieving specific task outputs, such as graphics, segmentation masks, and texts.
  • The ‘display’ function offers a flexible and customizable way to display images (input/output) and graphics, such as bounding boxes and segmentation masks.

Step 2: Create Workflow


wf = Workflow()

We initialize a workflow instance. The “wf” object can then be used to add tasks to the workflow instance, configure their parameters, and run them on input data.

Step 3: Add and connect the algorithms

Here, we use the names of the algorithms to add them to the workflow:


face = wf.add_task(name="infer_face_detection_kornia", auto_connect=True)
blur = wf.add_task(name="ocv_blur", auto_connect=True)

Step 4: Setting the parameters

The Kornia face detector takes two parameters:

  • ‘cuda’: Set to True to run the detector on GPU, False for CPU (default: True)
  • ‘conf_thres’: Minimum confidence score of the detected face (default: 0.6)
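For example, you can set these parameters explicitly before running the workflow; the sketch below simply restates the default values listed above:

face.set_parameters({
    "cuda": "True",
    "conf_thres": "0.6"
})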

The OpenCV blur takes two parameters that control its behavior:

  • ‘kSizeWidth’: width of the blur kernel
  • ‘kSizeHeight’: height of the blur kernel

Increasing the kernel size values will result in a more pronounced blurring effect.


blur.set_parameters({
    "kSizeWidth": "61",
    "kSizeHeight": "61"
})

Step 5: Apply your workflow to your image

You can apply the workflow to your image using the ‘run_on()’ function. In this example, we use the image path:


wf.run_on(path="img_people.jpg")
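The ‘run_on()’ function can also take other input sources. For instance, depending on your version of the API, you can pass an image URL instead of a local path (the URL below is only a placeholder):

wf.run_on(url="https://example.com/img_people.jpg")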

Step 6: Display your results

Finally, you can display your image results using the display function: 


display(face.get_image_with_graphics())
display(blur.get_output(0).get_image())
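If you prefer to save the blurred result to disk rather than display it, the output is a NumPy array that can be written with OpenCV (a minimal sketch; it assumes the array is in RGB order, hence the conversion):

import cv2

# Retrieve the blurred image as a NumPy array
img_out = blur.get_output(0).get_image()

# Assuming an RGB array; skip the conversion if your output is already BGR
cv2.imwrite("img_people_blurred.jpg", cv2.cvtColor(img_out, cv2.COLOR_RGB2BGR))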

Here are some examples obtained by playing with the parameters:

  • Confidence threshold:


face.set_parameters({
    "conf_thres": "0.6"
})


face.set_parameters({
    "conf_thres": "0.3"
})

By decreasing the confidence threshold, we can increase the number of detected faces, but there is a higher risk of getting more false positives.

  • Kernel size:

blur.set_parameters({
    "kSizeWidth": "61",
    "kSizeHeight": "61"
})


blur.set_parameters({
    "kSizeWidth": "15",
    "kSizeHeight": "15"
})

In this case, we decreased the kernel size, resulting in a less pronounced blurring effect.

Build your own workflow with Ikomia

To learn more about the API, refer to the documentation. You may also check out the list of state-of-the-art algorithms on Ikomia HUB and try out Ikomia STUDIO, which offers a friendly UI with the same features as the API.
