Extracts the edges of a color image without converting it to grayscale.
Changes in color are detected even when the grayscale intensities of two pixels are the same. The edge strength is typically greater than or equal to the magnitude obtained by simply filtering a grayscale version of the image.
Optionally, the edge orientation can also be returned.
The image generated by the example code (presented here as the screenshot) shows two edge types:
White - edges found by both methods.
Red - edges found only by the color method.
This clearly shows that a significant amount of information is lost by the standard grayscale method, but recovered by the color gradient method.
figure, im = imread('peppers.png'); imshow(im)
%get color edges and normalize magnitude
C = coloredges(im);
C = C / max(C(:));
%get grayscale edges and normalize magnitude
G_image = single(rgb2gray(im)) / 255;
G = sqrt(imfilter(G_image, fspecial('sobel')').^2 + imfilter(G_image, fspecial('sobel')).^2);
G = G / max(G(:));
figure, imshow(uint8(255 * cat(3, C, G, G)))
The RGB color of each pixel is treated as a 3D vector, and the edge strength is the magnitude of the maximum directional gradient. This also works if the image is in any other (3-dimensional) color space. Direct formulas for the eigenvalues of the 2x2 matrix formed from the Jacobian were used, so this function is fully vectorized and yields good results without sacrificing performance.
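For readers outside MATLAB, the idea can be sketched in NumPy roughly as follows. This is an illustrative translation, not the actual coloredges implementation: the function name is mine, and central differences (np.gradient) stand in for the Sobel derivatives used above.

```python
import numpy as np

def color_edges(im):
    """Di Zenzo-style color gradient magnitude (sketch).

    im: float array of shape (H, W, 3), values in [0, 1].
    Returns the square root of the largest eigenvalue of the
    2x2 structure tensor J^T J at each pixel, where J stacks
    the x/y derivatives of the three color channels.
    """
    # per-channel derivatives (central differences; Sobel would also work)
    gy, gx = np.gradient(im, axis=(0, 1))
    # entries of J^T J, summed over the 3 color channels
    a = np.sum(gx * gx, axis=2)
    b = np.sum(gx * gy, axis=2)
    c = np.sum(gy * gy, axis=2)
    # largest eigenvalue of [[a, b], [b, c]], computed in closed form
    lam_max = 0.5 * (a + c) + np.sqrt(0.25 * (a - c) ** 2 + b * b)
    return np.sqrt(lam_max)
```

Because everything is elementwise array arithmetic, this stays vectorized with no per-pixel eigenvalue solver.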
% per-channel grayscale edges: filter each RGB channel separately with Sobel
R_image = single(im(:,:,1)) / 255;
Ri = sqrt(imfilter(R_image, fspecial('sobel')').^2 + imfilter(R_image, fspecial('sobel')).^2);
G_image = single(im(:,:,2)) / 255;
Gi = sqrt(imfilter(G_image, fspecial('sobel')').^2 + imfilter(G_image, fspecial('sobel')).^2);
B_image = single(im(:,:,3)) / 255;
Bi = sqrt(imfilter(B_image, fspecial('sobel')').^2 + imfilter(B_image, fspecial('sobel')).^2);
figure, imshow(uint8(255 * cat(3, Ri, Gi, Bi)));
When I convert the given colour image to HSV and run this MATLAB code on the HSV image, I get no edge information. What is the reason?
Please provide information regarding
'color image edge detection using split gaussian function'
Anyway, this is an interesting topic but I haven't studied it extensively, so if at times I seem out of touch I apologize :)
I didn't implement Simoncelli's, but it's not surprising that the 5-point stencil is worse.
The estimation window is thin (5x1 or 1x5); it was really made for one-dimensional cases. It will be highly dependent on the direction of the axes, which makes it noisy.
(I assume you're not talking about the other 5-point stencil, which computes the Laplacian, not a directional derivative.)
The idea behind the Sobel and Prewitt filters is to take a simple difference estimator and blur it in 2 dimensions (i.e., low-pass filter it). The blurring is mostly isotropic, unlike the 5-point stencil, which gives high preference to one axis (x or y).
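That decomposition is easy to verify: the Sobel kernel factors exactly into a 1-D difference along one axis and a 1-D binomial smoothing along the other. A small NumPy check (the variable names are mine):

```python
import numpy as np

# Sobel y-derivative kernel = (difference along y) x (smoothing along x)
diff = np.array([[1.0], [0.0], [-1.0]])   # central difference along y
smooth = np.array([[1.0, 2.0, 1.0]])      # binomial blur along x
sobel_y = diff @ smooth                   # outer product of the two 1-D kernels
# sobel_y == [[ 1,  2,  1],
#             [ 0,  0,  0],
#             [-1, -2, -1]]  -- same as MATLAB's fspecial('sobel')
```

Replacing the smoothing row with [1, 1, 1] gives the Prewitt kernel instead.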
Following that line of reasoning, I tried convolving a difference operator with a Gaussian; it gives good results and you can vary the sigma. Here's the code (the Sobel and 5-point stencil versions are commented out).
% yfilter = fspecial('sobel');
% yfilter = [-1; 8; 0; -8; 1] / 12;
sigma = 0.5;
yfilter = imfilter(fspecial('gaussian', ceil(sigma * 6), sigma), [1; 0; -1]);
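A 1-D version of the same smoothed-difference construction in NumPy, for readers outside MATLAB (a sketch: the function name and the 3-sigma truncation radius are my choices, approximating fspecial's sizing rather than matching it exactly):

```python
import numpy as np

def gaussian_derivative_kernel(sigma):
    """Build a 1-D derivative-of-Gaussian kernel by convolving a
    sampled, normalized Gaussian with the central difference
    [1, 0, -1], mirroring the MATLAB snippet above."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()  # normalize so the smoothing part has unit gain
    return np.convolve(g, [1.0, 0.0, -1.0], mode='same')
```

The resulting kernel is antisymmetric and sums to zero, as a derivative estimator should; increasing sigma trades noise suppression for edge localization.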
Hey Joao,
Well, you are right to a point, but Sobel-like edges consider only 3x3 neighborhoods, while more advanced derivative estimators such as Simoncelli's or the 5-point stencil use a larger neighborhood and have been shown to be more robust against noise.
Actually, my question was whether you had evaluated any other derivatives. Thanks for the response. I would appreciate it if you could try others and give feedback. I have come to realize that the algorithm only works with Sobel-like derivatives.
Since some people asked by e-mail, here's more insight into the algorithm:
This seems to be an old technique and it's very well-known; I learned about it when I was a student, in my Computer Vision class. Unfortunately the lecture notes are not in English. But after searching around a bit, I found it's from this paper:
Silvano Di Zenzo, "A note on the gradient of a multi-image", Computer Vision, Graphics, and Image Processing, 1986
You can download the PDF easily. What I did is all in that paper, but there are many ways of calculating the maximum eigenvalue (gradient magnitude). The author used those sin/cos formulas, but I calculated it directly by applying the eigenvalue formulas for 2x2 matrices, which you can find in Section 1.2 of The Matrix Cookbook (available online).
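For reference, the closed-form largest eigenvalue of a symmetric 2x2 matrix [[a, b], [b, c]] can be written down directly; a small sketch (the function name is mine, and it works elementwise on arrays, which is what keeps the implementation vectorized):

```python
import numpy as np

def eig_max_sym2x2(a, b, c):
    """Largest eigenvalue of the symmetric 2x2 matrix [[a, b], [b, c]],
    from the quadratic characteristic polynomial:
    lambda = (a + c)/2 + sqrt(((a - c)/2)^2 + b^2)."""
    return 0.5 * (a + c) + np.sqrt(0.25 * (a - c) ** 2 + b * b)
```

Plugging in the structure-tensor entries per pixel gives the squared gradient magnitude without any sin/cos evaluations or per-pixel eigensolver calls.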
I'm not sure the derivative estimation can be improved much. I may be ignoring some more recent work, but my impression is that this line of research was dropped a while ago, and most people nowadays use simple estimates of the derivatives. The papers I read showed improvements on synthetic images but not on real-life data. (Correct me if I'm wrong, please!)
The reason is that obtaining "good" edges depends on your problem, and usually it doesn't make sense to optimize it at such a low level. There's always a risk of overfitting your data. I come from a pattern recognition background, so usually we try to get good low-level features but sort them out at a higher level (e.g., fitting line segments, classifying windows with SVMs).
This method yields better results when grayscale data doesn't show edges but color data does. Sobel filters are the same as simple differences (i.e., xfilter = [-1 1]) with a small amount of smoothing. So I'm not sure you can squeeze much more performance out of edge detection code, even with those methods.
Although it seems theoretically correct, I have tried your code with different derivatives (such as the 5-point stencil and Simoncelli's 5-tap derivatives), and the results got worse.
Could you think of any cause for this problem?
Updated example and screenshot, to show the differences between the standard method and the one presented here.