make a canny edge detection function from scratch and use it to calculate camera pose in matlab

Introduction

Canny edge detection is an image processing technique widely used in computer vision for finding edges, i.e. the boundaries between objects in an image. In this tutorial, we will build a Canny edge detection function from scratch in MATLAB and then use it as part of a camera pose estimation pipeline.

Canny Edge Detection

There are five steps in the Canny edge detection process:

  1. Gaussian blurring
  2. Gradient Calculation
  3. Non-Maximum Suppression
  4. Double Thresholding
  5. Edge Tracking by Hysteresis

1. Gaussian Blurring

The first step in Canny edge detection is to apply a Gaussian filter to the image to suppress noise. This can be done with the fspecial and imfilter functions in MATLAB:

main.m
function blurred = gaussian_blur(img, sigma)
% Smooth the image with a Gaussian kernel to suppress noise before differentiation
kernel_size = 2*ceil(3*sigma)+1;              % kernel covers +/- 3 sigma
kernel = fspecial('gaussian',kernel_size,sigma);
blurred = imfilter(img,kernel,'replicate');   % replicate border pixels to avoid dark image borders
end
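
In newer MATLAB releases (R2015a and later), the same blur can also be done in a single call with the built-in imgaussfilt; a minimal alternative sketch:

main.m
% Roughly equivalent one-liner using the built-in Gaussian filter
blurred = imgaussfilt(img, sigma, 'Padding', 'replicate');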

2. Gradient Calculation

The second step is to compute the gradient of the blurred image with Sobel operators. This can be done using the imgradientxy function in MATLAB:

main.m
function [magnitude,orientation] = gradient_calculation(img)
% Sobel gradients in the x and y directions
[Gx,Gy] = imgradientxy(img,'sobel');
% Gradient magnitude, normalized to [0, 1] so the later thresholds can be given as fractions
magnitude = sqrt(Gx.^2 + Gy.^2);
magnitude = magnitude / (max(magnitude(:)) + eps);
% Gradient direction in degrees, in the range (-180, 180]
orientation = atan2(Gy,Gx)*180/pi;
end
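
MATLAB also ships an imgradient function that returns the Sobel magnitude and direction in one call. A minimal sketch is shown below; note that its direction may differ in sign from the atan2(Gy,Gx) convention used above (which would mirror the diagonal bins in the non-maximum suppression step), so the explicit version is used throughout this tutorial:

main.m
% Built-in alternative (direction convention may differ in sign from atan2(Gy,Gx) above)
[magnitude, orientation] = imgradient(img, 'sobel');
magnitude = magnitude / (max(magnitude(:)) + eps);   % normalize as above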

3. Non-Maximum Suppression

The third step is to perform non-maximum suppression to thin out the edges. This can be done as follows:

main.m
function suppressed = non_max_suppression(magnitude, orientation)
[rows,cols] = size(magnitude);
suppressed = zeros(rows,cols);
angle = orientation;
% map negative angles into [0, 180) so every pixel falls into one of the four direction bins below
angle(angle < 0) = angle(angle < 0) + 180;
for i=2:rows-1
    for j=2:cols-1
        q = 0;
        r = 0;
        % pick the two neighbouring pixels that lie along the gradient direction
        if (0 <= angle(i,j) && angle(i,j) < 22.5) || (157.5 <= angle(i,j) && angle(i,j) <= 180)
            q = magnitude(i,j+1);
            r = magnitude(i,j-1);
        elseif (22.5 <= angle(i,j) && angle(i,j) < 67.5)
            q = magnitude(i+1,j-1);
            r = magnitude(i-1,j+1);
        elseif (67.5 <= angle(i,j) && angle(i,j) < 112.5)
            q = magnitude(i+1,j);
            r = magnitude(i-1,j);
        elseif (112.5 <= angle(i,j) && angle(i,j) < 157.5)
            q = magnitude(i-1,j-1);
            r = magnitude(i+1,j+1);
        end
        % compare neighbours to determine if central pixel is a local maximum
        if (magnitude(i,j) >= q) && (magnitude(i,j) >= r)
            suppressed(i,j) = magnitude(i,j);
        end
    end
end
end
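
As a quick sanity check, the thinned magnitude can be compared with the raw magnitude; a minimal sketch using cameraman.tif, a grayscale test image that ships with the Image Processing Toolbox (after suppression the edges should be roughly one pixel wide):

main.m
img = im2double(imread('cameraman.tif'));   % bundled grayscale test image, scaled to [0, 1]
blurred = gaussian_blur(img, 1.4);
[mag, ori] = gradient_calculation(blurred);
thin = non_max_suppression(mag, ori);
figure; imshowpair(mag, thin, 'montage');   % right-hand image shows the thinned edges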

4. Double Thresholding

The fourth step is to perform double thresholding to determine potential edges. This can be done as follows:

main.m
function [thresholded, strong_edges] = double_thresholding(suppressed, low_threshold, high_threshold)
% Perform double thresholding
[rows,cols] = size(suppressed);
thresholded = zeros(rows,cols);
strong_edges = zeros(rows,cols);
% Classify each pixel as a strong edge (1), weak edge (0.5), or non-edge (0)
for i = 1:rows
    for j = 1:cols
        if suppressed(i,j) > high_threshold
            thresholded(i,j) = 1;
            strong_edges(i,j) = 1;
        elseif suppressed(i,j) < low_threshold
            thresholded(i,j) = 0;
        else
            thresholded(i,j) = 0.5;
        end
    end
end
end
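
Because gradient_calculation normalizes the magnitude to [0, 1], the thresholds can be given as fractions of the strongest gradient. Continuing the sanity-check sketch from step 3 (where thin holds the suppressed magnitude):

main.m
low_threshold  = 0.05;   % below this fraction of the strongest gradient: not an edge
high_threshold = 0.15;   % above this fraction: strong edge; in between: weak edge
[thresholded, strong_edges] = double_thresholding(thin, low_threshold, high_threshold);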

5. Edge Tracking by Hysteresis

The fifth and final step is to track the edges by hysteresis. This can be done as follows:

main.m
function edge_map = hysteresis(strong_edges, thresholded)
% Promote weak edges (value 0.5) that touch a strong edge in their 8-neighbourhood.
% A single pass is a simplification; repeating until nothing changes would also
% follow longer chains of weak edges.
[rows,cols] = size(strong_edges);
edge_map = strong_edges;
for i = 2:rows-1
    for j = 2:cols-1
        if (thresholded(i,j) == 0.5) && (any(strong_edges(i-1:i+1,j-1:j+1),'all'))
            edge_map(i,j) = 1;
        end
    end
end
end
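
The five steps can now be chained into a single helper (saved, for example, as my_canny.m; the name is arbitrary). The sketch below simply wires together the functions defined above:

main.m
function edge_map = my_canny(img, sigma, low_threshold, high_threshold)
% Full Canny pipeline built from the functions defined above
blurred = gaussian_blur(img, sigma);
[magnitude, orientation] = gradient_calculation(blurred);
suppressed = non_max_suppression(magnitude, orientation);
[thresholded, strong_edges] = double_thresholding(suppressed, low_threshold, high_threshold);
edge_map = hysteresis(strong_edges, thresholded);
end

As a quick check, my_canny(im2double(imread('cameraman.tif')), 1.4, 0.05, 0.15) can be displayed next to MATLAB's built-in edge(img, 'canny'); the two maps will not match exactly, since the built-in uses slightly different smoothing and automatic thresholds, but the major edges should agree.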

Camera Pose Estimation

The Canny edge detection function can be used in camera pose estimation by first detecting the edges in the reference image and in the new image, and then matching the two images with a feature detection and matching algorithm such as SIFT or SURF. Once corresponding points are extracted, the camera pose can be estimated with homography estimation or a Perspective-n-Point (PnP) algorithm.

main.m
% Load two images, img1 is the reference image and img2 is the new image.
img1 = imread('reference_image.jpg');
img2 = imread('new_image.jpg');

% Convert images to grayscale doubles in [0, 1]
gray_img1 = im2double(rgb2gray(img1));
gray_img2 = im2double(rgb2gray(img2));

% Perform canny edge detection on both images
blurred_img1 = gaussian_blur(gray_img1, 1);
[magnitude1, orientation1] = gradient_calculation(blurred_img1);
suppressed1 = non_max_suppression(magnitude1, orientation1);
[thresholded1, strong_edges1] = double_thresholding(suppressed1, 0.05, 0.15);
edge_map1 = hysteresis(strong_edges1, thresholded1);

blurred_img2 = gaussian_blur(gray_img2, 1);
[magnitude2, orientation2] = gradient_calculation(blurred_img2);
suppressed2 = non_max_suppression(magnitude2, orientation2);
[thresholded2, strong_edges2] = double_thresholding(suppressed2, 0.05, 0.15);
edge_map2 = hysteresis(strong_edges2, thresholded2);

% Perform feature matching (e.g. SIFT or SURF) and estimate the camera pose
% using homography estimation or a PnP algorithm (see the sketch below)
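
Below is a minimal sketch of the matching and pose-estimation step using the Computer Vision Toolbox. It assumes calibrated camera intrinsics are available in a variable cameraParams (for example from the cameraCalibrator app or estimateCameraParameters; this variable is not defined above). SURF features are detected on the grayscale images, since descriptor matching works better on intensity images than on binary edge maps, and the relative pose is recovered from the essential matrix; if 3-D world points were known, a PnP solver such as estimateWorldCameraPose could be used instead:

main.m
% cameraParams is assumed to come from a prior camera calibration (hypothetical variable)

% Detect and describe SURF features on the grayscale images
points1 = detectSURFFeatures(gray_img1);
points2 = detectSURFFeatures(gray_img2);
[features1, valid_points1] = extractFeatures(gray_img1, points1);
[features2, valid_points2] = extractFeatures(gray_img2, points2);

% Match descriptors between the two images
index_pairs = matchFeatures(features1, features2);
matched1 = valid_points1(index_pairs(:,1));
matched2 = valid_points2(index_pairs(:,2));

% Estimate the essential matrix with RANSAC and recover the relative camera pose
[E, inlier_idx] = estimateEssentialMatrix(matched1, matched2, cameraParams);
[relative_orientation, relative_location] = relativeCameraPose(E, cameraParams, ...
    matched1(inlier_idx), matched2(inlier_idx));

relative_orientation and relative_location give the rotation and the translation direction (up to scale) of the new camera relative to the reference camera.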

This is how a Canny edge detection function written from scratch can be combined with feature matching to estimate the camera pose in MATLAB.
