Streamlining Computer Vision Deployments with Ikomia SCALE on AWS Lambda

Allan Kouidri - 4/26/2024

When working on a computer vision project, you usually start by developing your models and then move on to organizing them into workflows. But then comes the big question: “How do I deploy a computer vision model efficiently?” With many deployment options out there, picking the right one can be overwhelming.

Introducing Ikomia SCALE, a platform designed to simplify the deployment of computer vision models. It takes care of much of the technical work of onboarding and coding for various devices. Whether you're considering serverless architectures like AWS Lambda or other major platforms, Ikomia SCALE aims to make the deployment process straightforward.

In this blog post, we'll walk you through a comprehensive guide on how to use the Ikomia ecosystem effectively:

  1. Creating a Workflow: We'll start by showing you how to build a workflow using the Ikomia API, preparing your Computer Vision model for deployment.
  2. Deployment: Next, we'll guide you through the deployment process using AWS Lambda as our example. However, the steps are similar whether you choose Google Cloud, Scaleway, or another provider, and regardless of whether you're deploying on CPU or GPU infrastructure. We'll show you just how easy it is to get your model operational.
  3. Accessing Your REST API Endpoint: Finally, we'll guide you through accessing your REST API endpoint, which lets your applications send requests to your deployed model and receive results via standard HTTP methods.

For a more detailed exploration of what Ikomia SCALE offers, check out the official page [1].

Let’s dive in and make Computer Vision deployment as simple as it gets!

Creating your workflow: Easy text extraction

Let's explore a project similar to the one we previously detailed, which involved text extraction from ID cards using deep learning. This solution was developed using the Ikomia API. In this example, we will perform text detection and recognition, followed by key information extraction, using pre-trained MMOCR models. For a more personalized project, you could fine-tune your model to better suit your specific use case.

Setup

First, you need to install the Ikomia Command Line Interface (CLI) in a virtual environment [2]:


pip install ikomia-cli[full]

The Ikomia CLI allows you to interact with Ikomia SCALE from the command line, making it easier to manage your projects.

In this section, we'll create a workflow for the following steps:

  1. Text detection
  2. Text recognition
  3. Key information extraction (KIE)
  4. Saving the workflow


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display

# Init your workflow
wf = Workflow('Text recognition MMOCR')

# Add the text detection algorithm
det = wf.add_task(name="infer_mmlab_text_detection", auto_connect=True)

# Add the text recognition algorithm
rec = wf.add_task(name="infer_mmlab_text_recognition", auto_connect=True)

# Add the key information extraction (KIE) algorithm
kie = wf.add_task(name="infer_mmlab_kie", auto_connect=True)

# Run on your image
wf.run_on(url="https://img.20mn.fr/swO8brjgTbyagat2g2rb-A/1444x920_nouvelle-carte-nationale-identite-francaise.jpg")

# Get results
original_image_output = kie.get_output(0)
text_detection_output = kie.get_output(1)

# Display results
display(original_image_output.get_image_with_graphics(text_detection_output))

# Save the workflow as a JSON file in your current folder
wf.save("./text_extraction_workflow.json")

Using only the pre-trained models, the text is detected and extracted fairly well. However, the key information extraction (KIE) model might need to be fine-tuned to label the extracted information more accurately.

At the end of the workflow, we saved it as a JSON file; this is the artifact you will push to Ikomia SCALE in the next step.
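
Before pushing, you can optionally double-check the saved file by reloading it and running it again locally. This is a minimal sketch, assuming the Workflow load() method of the Ikomia API and a placeholder image path of your own:

from ikomia.dataprocess.workflow import Workflow

# Reload the workflow from its JSON definition
wf = Workflow()
wf.load("./text_extraction_workflow.json")

# Run it on any local image to confirm it behaves as before
# ("./my_id_card.jpg" is a placeholder path, not part of the original example)
wf.run_on(path="./my_id_card.jpg")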

Push your workflow

Pre-requisites

First of all, create an Ikomia account or log in. You can choose to sign up with your email or with your Google or GitHub account.

Create a project 

A project in Ikomia SCALE serves as the main container for your workflows, grouping them together neatly. To start a new project, simply follow these steps:

  1. Navigate to the dashboard and click on the New Project button.
  2. You will be prompted to fill out a form with the following details:
    • Workspace: Choose a workspace to store your project. Select your personal workspace if you do not plan to share the project with others.
    • Project Name: Enter a name for your project.

Generate your access tokens

Access tokens are required to authenticate with Ikomia SCALE. To generate a token, use the following command:


!ikcli login --token-ttl "< token_duration_in_seconds >" --username "< your_login >" --password "< your_password >"

Replace < token_duration_in_seconds >, < your_login >, and < your_password > with the desired token lifetime and your Ikomia SCALE credentials. After generating your token, set it as an environment variable:


%env IKOMIA_TOKEN= < your_token >

Where < your_token > is the access token you generated.
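
If you are working in a plain Python script rather than a notebook, you can set the same environment variable with the standard library instead of the %env magic:

import os

# Make the access token available to the Ikomia CLI/API in this process
# (replace the placeholder with the token generated by `ikcli login`)
os.environ["IKOMIA_TOKEN"] = "<your_token>"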

Push the workflow to SCALE

Once your access token is set, you can push your workflow to the Ikomia SCALE platform with the following command:


!ikcli project push <project_name> text_extraction_workflow.json

Replace <project_name> with the name of your project on Ikomia SCALE, such as 'Text_extraction' in this example. This command uploads your workflow JSON file directly to the specified project in SCALE, allowing it to be managed and deployed as needed.

Deploy your workflow

Now that you've added your workflow to your project on Ikomia SCALE, it's time to put it into action and create a live endpoint for use.

  1. Navigate to your project: Go to your project page on Ikomia SCALE. Here, you'll see an overview of your project and the workflows contained within.

  2. Select your workflow for deployment: Click on the workflow you wish to deploy from the list. Ikomia SCALE provides flexibility with various cloud providers and regions for deployment.

Ikomia SCALE offers three primary compute infrastructures for your deployments:

  • Serverless: Opt for a CPU-only environment where you're billed solely for the execution time of your workflow. This is an economical option for workloads with variable usage. This is the one I will choose for this use case. 
  • CPU Instances (Coming Soon): Choose CPU-only dedicated instances, where billing is based on active usage time, down to the second.
  • GPU Instances (Coming Soon): Utilize dedicated instances with GPU acceleration for intense compute tasks, also with per-second billing.

  3. Create a Deployment: On the right-hand side of the project interface, you will find the deployment settings.

  • Select the Provider: Choose a cloud provider such as AWS, Google Cloud, or others.
  • Choose Deployment Type: Pick from serverless, CPU instance, or GPU instance.
  • Pick a Region: Decide on the geographical region that best suits your latency and data residency needs.
  • Determine the Size: Select the appropriate size based on your compute and memory requirements, from XS to XL.

  4. Launch Your Deployment: Hit the 'Add deployment' button to start your workflow's deployment process.

The new deployment will now be listed on the left-hand side of the page under the deployments section. The time it takes for the deployment to become operational will vary based on the complexity of the workflow and the configuration options you've selected. 

In this instance, I chose the AWS serverless option with an M-sized configuration. It took approximately 10 minutes for the deployment to be fully set up and running smoothly.

Test your deployment 

SCALE provides a user-friendly Test Interface to ensure your deployed workflow is performing correctly.

Open the Test Interface: To access the Test Interface, go to your workflow’s page and click on the ‘Test me’ button associated with your deployment. Alternatively, you can directly visit the endpoint’s URL.

Run Your Workflow: Here’s how you can execute your workflow:

  • Upload an Image: You can upload an image file directly from your computer.
  • Choose a Sample Image: SCALE offers a selection of sample images that you can use for testing.

Once the workflow execution finishes, the Test Interface will display the results.

The interface showcases the outputs in an organized manner, including the detected text, the bounding box coordinates, and the confidence scores for each recognized element. It’s a seamless way to ensure that your workflow is correctly set up and ready for integration with your systems.

Integrating a deployment

Once you've deployed your workflow in Ikomia SCALE, each deployment provides a REST API endpoint. This endpoint facilitates the integration of your computer vision model into your applications or systems, enabling them to send images to the model and receive results in return. 
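
As a rough illustration of what calling such an endpoint can look like from Python, here is a minimal sketch using the requests library. The endpoint URL, payload layout, and response handling below are assumptions for illustration only; the authoritative request and response format for SCALE deployments is described in the documentation linked below.

import base64
import os

import requests

# Hypothetical values: copy the real endpoint URL from your deployment page and
# reuse the access token generated earlier with `ikcli login`.
ENDPOINT_URL = "https://<your-deployment-endpoint>"
TOKEN = os.environ["IKOMIA_TOKEN"]

# Read and base64-encode a local image. This payload layout is an assumption,
# not the documented SCALE schema.
with open("my_id_card.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"image": image_b64},
    timeout=60,
)
response.raise_for_status()

# Inspect the raw JSON results (detected text, boxes, KIE labels, ...)
print(response.json())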

For a deep dive into the endpoint request and response format, integration processes, and best practices, please consult our documentation.

References

[1] Ikomia SCALE

[2] How to create a virtual environment in Python
