While it’s quite simple to run object detection with pre-trained models, things get harder when you need to train a custom object detection model. This post describes how to tackle this task efficiently, in a few clicks, with the Ikomia platform.
Automatic grape detection
Our use case today is detecting grapes in the context of image-based monitoring and field robotics in viticulture.
First, we need a suitable dataset to train our custom object detection model. We chose the Embrapa WGISD dataset, available on GitHub. It is part of the research work Grape detection, segmentation and tracking using deep neural networks and three-dimensional association by Santos et al., published in 2020 in Computers and Electronics in Agriculture.
When searching for object detection algorithms, the YOLO series is one of the most popular. Here is a non-exhaustive list of key benefits:
- fast: real-time detection on GPU-enabled devices
- efficient: YOLOv3 and, more recently, YOLOv4 achieve state-of-the-art performance
- complete: several architectures are available, from Tiny YOLO for edge computing to YOLOv4 for maximum accuracy
- affordable: training can run on a standard computer with a single consumer GPU
- open source framework: supported by a vibrant community
Take a look at the darknet repository for more details on the YOLO framework.
Let’s train in Ikomia STUDIO
Training a custom object detector has never been so easy. The Ikomia HUB offers all the building blocks for our training pipeline, and everything is ready to use, with no code. You will be able to launch your training in less than 5 minutes. Just follow these four steps.
1- Install algorithms from the HUB
To build our custom training workflow, we only need two algorithms.
The first one converts the dataset to the Ikomia format to ensure compatibility with any training algorithm. The annotations of the WGISD dataset are stored in YOLO format: each image comes with a text file describing object categories and box coordinates. A dedicated loader already exists in the HUB.
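In YOLO format, each line of the annotation file describes one object as class_id x_center y_center width height, with coordinates normalized to [0, 1] relative to the image size. Here is a minimal Python sketch of a parser for such a file (the function is ours, for illustration only):

def parse_yolo_annotation(txt_path):
    # One object per line: "class_id x_center y_center width height",
    # coordinates normalized to [0, 1] relative to the image dimensions.
    boxes = []
    with open(txt_path) as f:
        for line in f:
            if not line.strip():
                continue
            class_id, x_c, y_c, w, h = line.split()
            boxes.append((int(class_id), float(x_c), float(y_c), float(w), float(h)))
    return boxes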
The second one effectively trains our object detection model with the darknet framework. This plugin provides several model architectures, so you can choose the one that best fits your needs. In this tutorial, we use the Tiny YOLOv4 network.
Here are the steps to install them in Ikomia Studio:
- Open the HUB from Ikomia Studio
- Search for the YOLO_Dataset and YoloTrain plugins (use the ‘yolo’ keyword in the search bar)
- Install them sequentially

Installation of YOLO algorithms
2- Load the dataset
First, get the dataset from the GitHub repository:
cd your-favorite-dataset-folder
git clone https://github.com/thsant/wgisd.git
Second, create a file named classes.txt in the wgisd folder to store the class labels. The dataset consists of a single class representing grapes, so just put one line, “grapes”, in the file.
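If you prefer to script this step, the following Python snippet is equivalent (the path is a placeholder for your own clone location):

from pathlib import Path

# Placeholder path: point it at your own wgisd clone.
Path("your-favorite-dataset-folder/wgisd/classes.txt").write_text("grapes\n")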
Then, load the dataset in Ikomia Studio with the YOLO_Dataset plugin:
- Search for the freshly installed plugin in the process library (left pane)
- Fill in the parameters:
  - data folder (should be path-to-wgisd/data)
  - classes file (should be path-to-wgisd/classes.txt)
- Click Apply

YOLO dataset loading and visualization
3- Set up the YOLO train plugin
We are now ready to add the YOLO training job to the workflow, so search for the freshly installed plugin in the process library (left pane).
Before launching our training, we need to dig into the available parameters.
Model choice: the darknet framework provides several deep learning architectures, so we have to select the model to train for our grape detection. As we want fast and efficient training, we choose Tiny YOLOv4. You are of course free to select any model available in the list.
Input size: must be a multiple of 32. A higher input resolution generally means higher accuracy at the cost of memory usage, so set the input size according to your GPU memory. A quick sanity check is sketched below.
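As an illustration, here is a small helper of ours, not part of the plugin, that rounds any value to the nearest valid input size:

def nearest_valid_input_size(size):
    # darknet expects the input width/height to be divisible by 32,
    # the total stride of the network.
    return max(32, round(size / 32) * 32)

print(nearest_valid_input_size(600))  # -> 608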
Train/Eval split ratio: the plugin automatically divides the dataset into training and evaluation subsets. A value of 0.9 means 90% of the data is used for training and 10% for evaluation; a sketch of such a split follows.
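For illustration, this is how such a split is typically computed; the plugin’s exact implementation may differ:

import random

def split_dataset(items, train_ratio=0.9, seed=0):
    # Shuffle for an unbiased split, then cut at the requested ratio.
    items = list(items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

train_set, eval_set = split_dataset(range(300), train_ratio=0.9)
print(len(train_set), len(eval_set))  # -> 270 30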
Hyper-parameters: common parameters that drive the optimization process. Consult the official repository for more information.
Auto-configuration: a training job in the darknet framework is driven by a configuration file, and this feature generates that file automatically based on best practices. Experts who want full control can disable auto-configuration and supply a valid configuration file (.cfg) of their own.
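For reference, a darknet .cfg file starts with a [net] section containing fields like the ones below; the values here are purely illustrative, not what auto-configuration actually produces:

[net]
batch=64
subdivisions=16
width=608
height=608
channels=3
learning_rate=0.001
max_batches=2000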
Output folder: contains all the files generated during the training job and needed at inference time.
4- Start training
Now press the Apply button to add the YoloTrain job to the current workflow.
The training process starts immediately. Thanks to the seamless integration of MLflow, you can monitor the training progress live. Parameters and metrics such as mean Average Precision and loss are automatically reported and visible in the MLflow dashboard.
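You can also query the logged runs programmatically with the MLflow Python client. A minimal sketch, assuming Ikomia’s embedded tracking server listens on MLflow’s default local address (this may differ in your setup):

import mlflow

# Assumption: the embedded MLflow server uses MLflow's default local address.
mlflow.set_tracking_uri("http://localhost:5000")
runs = mlflow.search_runs()  # returns a pandas DataFrame of runs
print(runs.head())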
Please find below the results of a run with a Tiny YOLOv4 model, an input size of 608 pixels, and all other parameters set to default:
- Mean Average Precision (mAP@0.5): 85.9%
- Minimum loss value: 2.99
- Number of epochs: 2000
- Training time: 30 minutes (single GPU – NVIDIA GTX 1060)
- Model size: 22.4 MB
Finally, Ikomia STUDIO gives you several options:
- Modify training parameters to start a new run and compare
- Save current workflow for future training

Training process of YOLO model
Test your custom trained model
Once your custom model is trained, you can easily test it within Ikomia STUDIO. Close your previous training workflow and follow these steps:
- Open the HUB from Ikomia STUDIO
- Search for the YoloV4 plugin and install it
- Open some grape images
- Select the freshly installed YoloV4 in the process library (left pane)
- Fill in the parameters:
  - Input size: must be a multiple of 32 (can be different from the training input resolution)
  - Model: the same as used for training (Tiny YOLOv4 for us)
  - Trained on: custom
  - Configuration, weights and labels files generated during training (default folder: user-folder/Ikomia/Plugins/C++/YoloTrain/data/models/)
- Press Apply
- Enjoy!

Grapes detection with TinyYOLOv4
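If you also want to test the trained model outside Ikomia STUDIO, OpenCV’s DNN module can load darknet models directly. A minimal sketch, with placeholder file names standing in for the configuration and weights files generated during training:

import cv2

# Placeholder file names: use the .cfg and .weights produced by your training run.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny-custom.cfg", "yolov4-tiny-custom_best.weights")
model = cv2.dnn_DetectionModel(net)
# Same preprocessing as darknet: resize, scale pixels to [0, 1], convert BGR to RGB.
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

img = cv2.imread("grapes.jpg")
class_ids, confidences, boxes = model.detect(img, confThreshold=0.25, nmsThreshold=0.4)
for class_id, conf, box in zip(class_ids, confidences, boxes):
    print(class_id, conf, box)  # box is (x, y, width, height) in pixels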
Conclusion
I hope you will enjoy the simplicity of Ikomia STUDIO. No more time wasted training custom object detection models such as YOLO. You may find other interesting algorithms in the HUB, so feel free to have a look and discover them.
Related post: Train deep learning models with Ikomia STUDIO