Comprehensive Guide to Mastering MMDetection (MMDet) for Object Detection

Allan Kouidri
-
1/16/2024

MMDetection/MMDet stands out as a premier object detection toolkit, particularly popular among Python enthusiasts. If you're new to MMDetection/MMDet, the initial journey through its documentation and setup process might seem a bit overwhelming.

This article is designed to walk you through the crucial steps, emphasizing key points and tackling common hurdles in using the MMDetection/MMDet API, which is a dedicated wrapper for object detection applications.

Additionally, we will introduce a streamlined approach to utilize the power of MMDetection/MMDet through the Ikomia API.

Get ready to enhance your object detection endeavors!

MMDetection/MMDet: the object detection toolbox

Object detection stands as a crucial and ever-evolving field. One of the latest and most notable tools in this domain is MMDetection, an open-source object detection toolbox based on PyTorch. 

What is MMDetection/MMDet?

MMDetection is a comprehensive toolbox that provides a wide array of object detection algorithms. Developed by the Multimedia Laboratory, CUHK, it's part of the OpenMMLab project. It's designed to facilitate research and development in object detection, instance segmentation, and other related areas.

Key Features of MMDetection/MMDet

  • Richness of models: It offers support for a wide variety of models, including popular ones like Faster R-CNN, Mask R-CNN, and YOLO, as well as cutting-edge models like Cascade R-CNN and Grid R-CNN.
  • Modularity: MMDetection is designed with a modular architecture, making it highly flexible and customizable. Researchers and developers can easily modify components to suit their specific needs.
  • High efficiency: The toolbox is optimized for both speed and memory efficiency, ensuring quick training and inference times without compromising performance.
  • Easy to use: With comprehensive documentation and a user-friendly design, MMDetection is accessible to beginners while still being powerful enough for advanced users.
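This modularity shows up directly in MMDetection's config system, where a detector is assembled from interchangeable components. The fragment below is an illustrative sketch in the mmdet 3.x config style (the exact values are examples, not a complete, runnable training config):

```python
# Illustrative MMDetection-style config: a detector is assembled from
# swappable backbone, neck, and head sub-dicts.
model = dict(
    type='RetinaNet',
    backbone=dict(type='ResNet', depth=50),
    neck=dict(type='FPN', in_channels=[256, 512, 1024, 2048], out_channels=256),
    bbox_head=dict(type='RetinaHead', num_classes=80),
)

# Swapping a component only touches one sub-dict, leaving the rest untouched:
model['backbone'] = dict(type='ResNeXt', depth=101)
```

This is what makes experimentation cheap: trying a different backbone or neck is a one-line config change rather than a code rewrite.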

Benefits of Using MMDetection/MMDet

  • Accelerated research and development: With its extensive model support and high efficiency, MMDetection accelerates the research and development process in object detection.
  • Ease of experimentation: The modular design allows for easy experimentation with different model components, fostering innovation and discovery.
  • Community support: Being an open-source project, MMDetection has a growing community of users and contributors, which means a wealth of shared knowledge and resources.

Real-world Applications

MMDetection is not just a research tool; it has practical applications in various fields:

  • Autonomous vehicles: Enhancing perception systems for improved safety and navigation.
  • Medical imaging: Assisting in the detection and diagnosis of diseases.
  • Surveillance: Improving the accuracy and efficiency of security systems.

Getting Started with MMDetection/MMDet

For this section, we will navigate through the MMDetection/MMDet documentation for object detection [1]. It's advisable to review the entire setup process beforehand, as we've identified certain steps that can be tricky or that simply do not work as documented.

Prerequisite

OpenMMLab suggests specific Python and PyTorch versions for optimal results:

  • Linux | Windows | macOS
  • Python 3.7 +
  • PyTorch 1.8 +
  • CUDA 9.2 +

For this demonstration, we used a Windows setup with CUDA 11.8.

Environment setup

The first step in preparing your environment involves creating a Python virtual environment and installing the necessary Torch dependencies.

Creating the virtual environment

We followed the recommendation by using Python 3.8:


python -m virtualenv openmmlab --python=python3.8

Installing Torch and Torchvision

Once you activate the 'openmmlab' virtual environment, the next step is to install the required PyTorch dependencies.


pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu118

Installing OpenMMLab dependencies

Then we install the following dependencies:


pip install -U openmim
mim install mmengine
mim install "mmcv>=2.0.0"

Subsequently, we installed 'mmdet' as a dependency:


mim install mmdet

Downloading the checkpoint

To obtain the necessary checkpoint file (.pth) and configuration file (.py) for MMDetection, use the following command:


mim download mmdet --config rtmdet_tiny_8xb32-300e_coco --dest .

Executing this command will download both the checkpoint and the configuration file directly into your current working directory.
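Because the checkpoint filename carries a training timestamp and hash, it can be convenient to locate the downloaded pair programmatically. Here is a small standard-library helper; the assumption that checkpoints are prefixed with the config name follows the filename pattern `mim download` produced in our run:

```python
from pathlib import Path

def find_downloaded_pair(config_name: str, directory: str = "."):
    """Locate the config (.py) and checkpoint (.pth) files that
    `mim download` placed in `directory` for a given config name."""
    d = Path(directory)
    config = d / f"{config_name}.py"
    # Checkpoints are prefixed with the config name, e.g.
    # rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth
    checkpoints = sorted(d.glob(f"{config_name}_*.pth"))
    return (config if config.exists() else None,
            checkpoints[-1] if checkpoints else None)
```

Calling `find_downloaded_pair("rtmdet_tiny_8xb32-300e_coco")` then yields the two paths needed for inference, or `None` for anything missing.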

Inference using MMDetection/MMDet API

For testing our setup, we conducted an inference test using a sample image with the RTMDet model. This step is crucial to verify the effectiveness of the installation and setup.


from mmdet.apis import init_detector, inference_detector

config_file = 'rtmdet_tiny_8xb32-300e_coco.py'
checkpoint_file = 'rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth'
model = init_detector(config_file, checkpoint_file, device='cpu')  # or device='cuda:0'
inference_detector(model, 'demo/demo.jpg')
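When the call succeeds, `inference_detector` returns a `DetDataSample` whose `pred_instances` carry `bboxes`, `scores`, and `labels`. Filtering out low-confidence boxes is a typical next step; the sketch below shows that filtering with plain Python lists standing in for the tensors (the attribute names mentioned in the docstring assume MMDetection 3.x):

```python
def filter_detections(bboxes, scores, labels, score_thr=0.3):
    """Keep only detections whose confidence reaches score_thr.

    With MMDetection 3.x you would typically pass in the contents of
    result.pred_instances.bboxes / .scores / .labels (as lists or arrays).
    """
    return [(b, s, l)
            for b, s, l in zip(bboxes, scores, labels)
            if s >= score_thr]

# Example with dummy values: only the first box clears the threshold.
dets = filter_detections(
    bboxes=[[10, 10, 50, 50], [0, 0, 5, 5]],
    scores=[0.9, 0.1],
    labels=[0, 2],
    score_thr=0.3,
)
```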

However, running this snippet failed with an error.

Note: upon reviewing the MMDetection GitHub issues, we found that this particular problem had been reported in September 2023. As of the publication date of this article, no solution has been offered for it.

Inference via Bash command

To further test the RTMDet model, we employed a Bash command for inference. The command used was:


python demo/image_demo.py demo/demo.jpg rtmdet_tiny_8xb32-300e_coco.py --weights rtmdet_tiny_8xb32-300e_coco_20220902_112414-78e30dcc.pth --device cpu

This command is designed to perform object detection on the specified image ('demo/demo.jpg') using the RTMDet model and its corresponding weights, while running the process on the CPU. This time the inference ran successfully.


Experiencing MMDetection/MMDet: the challenge of a failed inference with the MMDetection API

While the installation steps ran smoothly, we encountered a significant hurdle: a failed inference attempt with the MMDetection API. This experience highlights the complexities and potential issues one might face while working with this object detection toolkit.

In the next section, we show how to use MMDetection via the Ikomia API in only two steps.

Easier MMDetection/MMDet object detection with a Python API

With the Ikomia team, we've been working on a prototyping tool that removes the tedious installation phase and speeds up testing.

We wrapped it in an open source Python API. Now we're going to explain how to use it to detect objects with MMDetection in less than 10 minutes.

Environment setup

As before, you need to install the API in a virtual environment. [2]

Then the only thing you need to install is Ikomia:


pip install ikomia

MMDetection/MMDet inference

You can also run the open-source notebook we have prepared directly.


from ikomia.core import IODataType
from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display


# Init your workflow
wf = Workflow()

# Add object detection algorithm
detector = wf.add_task(name="infer_mmlab_detection", auto_connect=True)

detector.set_parameters({
        "model_name": "rtmdet",
        "model_config": "rtmdet_tiny_8xb32-300e_coco",
        "conf_thres": "0.5",
})

# Run the workflow on image
wf.run_on(url="https://github.com/open-mmlab/mmdetection/blob/main/demo/demo.jpg?raw=true")


# Get and display results
image_output = detector.get_output(0)
detection_output = detector.get_output(1)

# MMLab detection framework mixes object detection and instance segmentation algorithms
if detection_output.data_type == IODataType.OBJECT_DETECTION:
    display(image_output.get_image_with_graphics(detection_output), title="MMLAB detection")
elif detection_output.data_type == IODataType.INSTANCE_SEGMENTATION:
    display(image_output.get_image_with_mask_and_graphics(detection_output), title="MMLAB detection")


List of parameters:

- model_name (str, default="yolox"): model name.

- model_config (str, default="yolox_s_8x8_300e_coco"): name of the model configuration file.

- conf_thres (float, default=0.5): object detection confidence.

- use_custom_model (bool, default=False): flag to enable the custom train model choice.

- config_file (str, default=""): path to the model config file (only if use_custom_model=True). The file is generated at the end of a custom training. Use the train_mmlab_detection algorithm from Ikomia HUB to train a custom model.

- model_weight_file (str, default=""): path to the model weights file (.pt) (only if use_custom_model=True). The file is generated at the end of a custom training.

- cuda (bool, default=True): CUDA acceleration if True, run on CPU otherwise.
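For example, loading a custom-trained model combines the flags above. The sketch below mirrors the string-valued parameter style used earlier; the file paths are placeholders for the artifacts that train_mmlab_detection would produce, not real files:

```python
# Hypothetical parameter set for a custom-trained model; the paths are
# placeholders for the config and weights produced by train_mmlab_detection.
custom_params = {
    "use_custom_model": "True",
    "config_file": "path/to/your_config.py",
    "model_weight_file": "path/to/your_weights.pt",
    "conf_thres": "0.5",
    "cuda": "True",
}
# detector.set_parameters(custom_params)  # on the infer_mmlab_detection task
```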

The MMLab framework for object detection and instance segmentation offers a wide range of models. To ease the choice of the (model_name, model_config) pair, you can call the function get_model_zoo() to get a list of possible values.


from ikomia.dataprocess.workflow import Workflow

# Init your workflow
wf = Workflow()

# Add object detection algorithm
detector = wf.add_task(name="infer_mmlab_detection", auto_connect=True)

# Get list of possible models (model_name, model_config)
print(detector.get_model_zoo())
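The returned list can then be narrowed down to one architecture. The entry structure below (dicts with model_name/model_config keys) is an assumption for illustration, modeled on the parameters used earlier; the filter itself is generic:

```python
def configs_for(zoo, model_name):
    """Return the entries of a model-zoo listing that match one
    architecture. Assumes each entry is a dict with a 'model_name' key,
    which is an assumption about the listing's format."""
    return [entry for entry in zoo if entry.get("model_name") == model_name]

# Dummy illustration of the assumed shape of the listing:
zoo = [
    {"model_name": "rtmdet", "model_config": "rtmdet_tiny_8xb32-300e_coco"},
    {"model_name": "yolox", "model_config": "yolox_s_8x8_300e_coco"},
]
```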

Fast MMDetection execution: from setup to results in just 8 minutes 

To carry out object detection, we simply installed Ikomia and ran the workflow code snippets. All dependencies were seamlessly handled in the background. 

Build a custom workflow with Ikomia

In this guide, we have explored the process of creating a workflow for object detection with MMDetection/MMDet. 

In object detection, it's often necessary to integrate various algorithms to meet specific requirements. For instance, combining object detection with tracking can significantly enhance the overall functionality.

Discover Deep Sort: The Future of Object Tracking Explained →  

References

[1] MMDetection documentation.

[2] How to create a virtual environment.
