Mastering Canny Edge Detection with OpenCV: A Comprehensive Guide

Guillaume Demarcq - 5/17/2023
Canny edge detection using Ikomia API

In this case study, we will explore how to implement Canny edge detection with OpenCV. By following the step-by-step process, we will be able to create a Canny edge detection workflow and analyze the results.

How does Canny edge detection work?

Edge detection is an essential image processing technique commonly employed in various computer vision applications, including data extraction, image segmentation, feature extraction, and pattern recognition.

This technique helps reduce the amount of noise and irrelevant details in an image while retaining its structural information. As a result, edge detection plays a crucial role in enhancing the accuracy and performance of computer vision algorithms.

Whether you're working on object detection, image recognition, or other Computer Vision tasks, edge detection is a critical step in your workflow.

Canny edge detection is widely regarded as one of the most popular and effective methods for edge detection in Computer Vision. It employs a multi-stage algorithm to detect a wide range of edges in images. This algorithm can be broken down into four basic steps:

Noise reduction


Canny edge detection operates on the principle of identifying edges in images with remarkable precision and reliability. The method stands out for its ability to capture a wide range of edges, which makes it useful for tasks ranging from object detection to intricate pattern recognition. Let's walk through each stage of the algorithm in turn.

At its very outset, the Canny algorithm addresses the fundamental challenge of noise in images. Imagine looking through a slightly fogged-up window; details are obscured, and clarity is compromised. In a similar vein, noise in images can mask vital details, making edge detection a daunting task. To counter this, Canny employs a Gaussian filter—a mathematical tool that, much like a gentle breeze that clears the fog from the window, smooths out the image.

This filtering acts as a preparatory step, ensuring that the subsequent stages of edge detection are not misled by spurious or irrelevant variations in pixel intensity. It's a delicate balance, where the goal is to reduce noise without erasing the important structural elements that define the edges of objects within the image.

This initial phase is foundational, setting the stage for the sophisticated processes that follow. It underscores a commitment to precision, ensuring that the edges detected are not artifacts of noise but genuine boundaries that separate distinct regions of the image. As we proceed to the subsequent steps, this focus on clarity and accuracy continues to guide the algorithm, enabling it to uncover the intricate tapestry of edges that define the visual content of the image.

example image after gaussian filter has been applied
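To make this step concrete, here is a minimal sketch of the smoothing stage using OpenCV directly. The kernel size and sigma below are illustrative choices rather than values prescribed by the algorithm, and the image path is a placeholder.

import cv2

# Load the image in grayscale: edge detection works on intensity values
img = cv2.imread("path/to/your/image.png", cv2.IMREAD_GRAYSCALE)

# Smooth with a 5x5 Gaussian kernel (sigma = 1.4 is a common choice)
# to suppress noise before computing gradients
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)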

Gradient calculation

Following the preparatory stage of noise reduction, the Canny Edge Detection algorithm ventures into the domain of gradient calculation. This phase is similar to navigating the contours of a landscape, where the aim is to discern the steepness and direction of slopes—the gradients—that signify transitions from one terrain to another. In the context of an image, these transitions are edges, where the intensity of the image shifts markedly.

Unveiling the Landscape: The Role of Sobel Filters

To achieve this, the algorithm employs Sobel filters, mathematical tools adept at measuring these gradients. By applying these filters in both the horizontal (x) and vertical (y) directions, the algorithm computes two derivatives: Gx and Gy. These derivatives are like taking a closer look at the slopes of our landscape in both directions, allowing us to quantify how sharply the image's intensity changes.

  • Gradient Intensity Matrix: This matrix is a composite measure, derived from Gx and Gy, that quantifies the "steepness" or intensity change at each point in the image. It highlights areas where the transition between light and dark is most pronounced: our potential edges.
  • Gradient Direction: Alongside intensity, knowing the direction of these changes is crucial. The gradient direction, calculated from Gx and Gy, tells us the orientation of the edges. It's like understanding whether a slope ascends from left to right, from bottom to top, or in any other direction.

Interpreting the Signals: Identifying Edges

The calculation of gradients is a pivotal moment in the Canny algorithm, as it transforms the smoothed image into a map of potential edges, characterized by their intensity and direction. However, not all identified edges are of equal importance. Some might be faint whispers of transitions, while others are clear demarcations between distinct regions. It's here that the algorithm begins to sift through these signals, distinguishing the significant edges that define the structure of the image from the less important ones.

This step is about enhancing the algorithm's "vision," enabling it to see the world within the image in terms of its basic structural components. The gradient calculation is a testament to the algorithm's nuanced approach, where it meticulously charts the landscape of the image, preparing to make critical decisions about which edges truly matter. The journey through Canny Edge Detection is one of increasing refinement and focus, with each step building upon the last to reveal the image's underlying structure with precision and clarity.

Example image after gradient calculation with Sobel filters has been applied
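As a rough illustration of this step, the sketch below computes the two derivatives and combines them into a gradient magnitude and direction with OpenCV and NumPy. It assumes the blurred image from the previous sketch, and the kernel size is an example value.

import cv2
import numpy as np

# Horizontal and vertical derivatives of the smoothed image (Sobel filters)
gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)

# Gradient intensity: how sharply the intensity changes at each pixel
magnitude = np.hypot(gx, gy)

# Gradient direction (in radians): the orientation of the change
direction = np.arctan2(gy, gx)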

Non-maximum suppression


Non-maximum suppression sharpens the detected edges from the Canny Edge Detection algorithm, acting like a fine-tuning process. It's akin to finding the highest ridge in a mountain range; only the highest points are kept, while the rest are suppressed. Here's how it works:

  1. Edge Evaluation: Each pixel's gradient magnitude is checked against its neighbors in the gradient direction. This is like looking along the ridge line to see if the pixel stands at the highest point.
  2. Selective Thinning: Pixels not on the "peak" are suppressed, meaning they're set to a lower intensity or made invisible. This leaves behind only the sharpest, most defined edges, reducing them to one-pixel width.

The result is a cleaner, more precise edge map, where only the most significant edges remain, making the image's structural details crisp and clear.

example image after edges have been thinned down
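For the curious, here is a didactic (and deliberately unoptimized) sketch of non-maximum suppression in plain Python and NumPy. It assumes the magnitude and direction arrays from the previous sketch; a production implementation would be vectorized.

import numpy as np

def non_maximum_suppression(magnitude, direction):
    # Keep a pixel only if it is the local maximum along its gradient direction
    h, w = magnitude.shape
    suppressed = np.zeros_like(magnitude)
    angle = np.rad2deg(direction) % 180  # fold directions into 4 bins over [0, 180)

    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:   # horizontal gradient: compare left/right
                neighbors = (magnitude[i, j - 1], magnitude[i, j + 1])
            elif a < 67.5:               # 45-degree diagonal
                neighbors = (magnitude[i - 1, j + 1], magnitude[i + 1, j - 1])
            elif a < 112.5:              # vertical gradient: compare up/down
                neighbors = (magnitude[i - 1, j], magnitude[i + 1, j])
            else:                        # 135-degree diagonal
                neighbors = (magnitude[i - 1, j - 1], magnitude[i + 1, j + 1])

            if magnitude[i, j] >= max(neighbors):
                suppressed[i, j] = magnitude[i, j]
    return suppressed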

Hysteresis thresholding

Hysteresis thresholding is the concluding step in the Canny Edge Detection process, acting as a decisive filter for distinguishing between true and false edges. This method involves a dual-threshold approach to finalize which edges are significant enough to be kept. It's somewhat like deciding which climbers get to stay on the mountain based on their strength and their connection to the lead climber.

  1. Two Thresholds: Imagine two gates at the mountain base—only climbers above a certain strength level (the high threshold) can pass through the first gate unaided. Climbers below a lower strength level (the low threshold) are turned away at the second gate.
  2. Selective Preservation: Climbers (or edge pixels) whose strength lies between these two thresholds can only stay if they are in a chain connected to a climber who passed through the first gate. This simulates the preservation of edge pixels between the thresholds, contingent on their connection to a "strong" edge pixel.

This approach ensures that the final edge map only includes the most relevant edges, effectively reducing noise and preventing the fragmentation of edge structures. By applying this nuanced, two-level screening, the Canny algorithm enhances the clarity and relevance of the detected edges, ensuring that the resulting image captures essential structural details with high fidelity.

Example image after hysteresis thresholding has been applied
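Again purely for illustration, the sketch below applies a simple hysteresis step to the suppressed magnitudes from the previous sketch, growing the strong-edge mask into connected weak pixels. The default threshold values are arbitrary examples.

import cv2
import numpy as np

def hysteresis_threshold(suppressed, low=100, high=200):
    # Strong edges pass the high threshold; weak edges sit between the two
    strong = (suppressed >= high).astype(np.uint8)
    weak = ((suppressed >= low) & (suppressed < high)).astype(np.uint8)

    # Repeatedly grow the strong mask into adjacent weak pixels until stable
    kernel = np.ones((3, 3), np.uint8)
    while True:
        grown = (cv2.dilate(strong, kernel) & weak) | strong
        if np.array_equal(grown, strong):
            break
        strong = grown

    return strong * 255  # binary edge map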

Run the Canny edge detection with a few lines of code

You can easily create a Canny edge detection workflow with just a few lines of code. All you need to do is install the API in a virtual environment.

First, we recommend setting up a new virtual environment [1].


pip install ikomia

You can also run the open-source notebook we have prepared directly.

For a detailed step-by-step approach, jump to this section.


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
from ikomia.utils import ik

# Init your workflow
wf = Workflow()

# Add the Canny Edge Detector
canny = wf.add_task(ik.ocv_canny(), auto_connect=True)

# Run on your image    
# wf.run_on(path="path/to/your/image.png")
wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg")

# Inspect your results
display(canny.get_input(0).get_image())
display(canny.get_output(0).get_image())

example image with computers on the table, before edge detection is applied
example image with computers on the table, after edge detection is applied

On top, you see the original input image; underneath, the output image, commonly known as the edge map.

Step-by-step Canny edge detection

Now let's dive into the detailed steps of creating Canny edge detection workflows using the Ikomia API.

Step 1: Import


from ikomia.dataprocess.workflow import Workflow
from ikomia.utils.displayIO import display
from ikomia.utils import ik

  • Workflow is the base class object used to create a workflow object. It provides methods for setting image, video and directory inputs, setting task parameters, getting time metrics and getting specific task outputs defined by their types (graphics, segmentation masks, texts…).
  • The display function allows for flexible and customizable display of images (inputs and outputs) and graphics such as bounding boxes and segmentation masks.
  • ik is an auto-completion system designed to conveniently and easily access algorithms and settings.

Step 2: Create workflow


wf = Workflow()

We initialize a workflow instance. The “wf” object can then be used to add tasks to the workflow instance, configure their parameters, and run them on input data.

Step 3: Add the OpenCV Canny algorithm

We can use the ik namespace to search for algorithms.


canny = wf.add_task(ik.ocv_canny(), auto_connect=True)

image of canny search

Step 4: Algorithm settings

The OpenCV Canny edge detection takes several parameters that control its behavior:

  • threshold1: the lower threshold of the hysteresis procedure. Edges with intensity gradients below this value will be discarded.
  • threshold2: the upper threshold value. Edges with intensity gradients above this value will be considered strong edges.
  • apertureSize: the size of the Sobel kernel used for edge detection. This parameter affects the level of detail in the edges detected.
  • L2gradient: a Boolean flag that indicates whether to use the L2 norm for gradient calculation. If set to 1, the algorithm will use the Euclidean distance to calculate the gradient magnitude. If set to 0 (default), the algorithm will use the L1 norm, which is less computationally expensive.

Adjusting these parameters can significantly impact the performance of the Canny edge detection. For example, increasing the threshold values will result in detecting fewer edges, while increasing the aperture size will result in detecting more detailed edges.
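For reference, these parameters mirror those of OpenCV's own cv2.Canny function, so the same settings can be reproduced with OpenCV directly. In the sketch below, the image path and threshold values are placeholders.

import cv2

# Read the image in grayscale and apply Canny with explicit parameters
img = cv2.imread("path/to/your/image.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, threshold1=100, threshold2=200, apertureSize=3, L2gradient=False)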

Step 5: Setting the parameters

Here, we use the auto-completion to find the names of the available parameters.


canny = wf.add_task(ik.ocv_canny(threshold1="100", threshold2="200", apertureSize="3", L2gradient="0"), auto_connect=True)

Step 6: Apply your workflow on your image

The run_on() function allows you to apply your workflow to the image. For this example, we get our image from a URL:


wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg")

Step 7: Display your results

Finally, you can display your image results using the display function:


display(canny.get_output(0).get_image())

Example image of the final result after edge detection, with display results active

Default parameters:


canny.set_parameters({
    ik.ocv_canny.threshold1: "0",
    ik.ocv_canny.threshold2: "255",
    ik.ocv_canny.apertureSize: "3",
    ik.ocv_canny.L2gradient: "0"
})


example image with default display results values

canny.set_parameters({
    ik.ocv_canny.threshold1: "150",
    ik.ocv_canny.threshold2: "200",
    ik.ocv_canny.apertureSize: "3",
    ik.ocv_canny.L2gradient: "0"
})


By increasing the threshold1 value, fewer edges are detected.

example image with increased threshold value and fewer edges

canny.set_parameters({
    ik.ocv_canny.threshold1: "150",
    ik.ocv_canny.threshold2: "200",
    ik.ocv_canny.apertureSize: "5",
    ik.ocv_canny.L2gradient: "0"
})

By increasing the aperture size, more detailed edges can be detected.
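If you want to compare settings more systematically, a small loop over parameter combinations can help. The sketch below simply reuses the workflow calls shown above; the specific values are arbitrary examples.

# Try a few parameter combinations on the same image (values are arbitrary examples)
settings = [
    {"t1": "50",  "t2": "150", "aperture": "3"},
    {"t1": "150", "t2": "200", "aperture": "3"},
    {"t1": "150", "t2": "200", "aperture": "5"},
]

for s in settings:
    canny.set_parameters({
        ik.ocv_canny.threshold1: s["t1"],
        ik.ocv_canny.threshold2: s["t2"],
        ik.ocv_canny.apertureSize: s["aperture"],
        ik.ocv_canny.L2gradient: "0"
    })
    wf.run_on(url="https://raw.githubusercontent.com/Ikomia-dev/notebooks/main/examples/img/img_work.jpg")
    display(canny.get_output(0).get_image())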

Create your own workflow with Ikomia

To learn more about the API, refer to the documentation. You may also check out the list of state-of-the-art algorithms on Ikomia HUB and try out Ikomia STUDIO, which offers a friendly UI with the same features as the API.

References

[1] How to install a virtual environment
