How to eliminate shadow from the foreground image?

I have foreground and background images. When I subtract the background from the foreground, the foreground image also contains the shadow of a workpiece, and I want to eliminate that shadow.
I came across code to eliminate the shadows ( http://en.pudn.com/downloads266/sourcecode/graph/texture_mapping/detail1221134_en.html ) that does the following job.
Demo foreground image
Demo background image
Resultant image detecting the shadows in the foreground
Separated shadow from the foreground image
The code for the aforementioned operation is:
a = imread('foreground.jpg');
b = imread('background.jpg');
da = double(a);
db = double(b);
D = imabsdiff(a, b);          % absolute difference between foreground and background
r = zeros(240, 320);
h = a;
% Scan every pixel; a pixel is classified as shadow when the background/foreground
% channel ratios and the chromaticity differences fall inside the ranges below.
for ix = 1:240
  for iy = 1:320
    if D(ix,iy) > 20                                              % pixel differs noticeably from background
      if da(ix,iy,1)~=0 && da(ix,iy,2)~=0 && da(ix,iy,3)~=0       % avoid division by zero
        if (db(ix,iy,1)/da(ix,iy,1) < 4) && (db(ix,iy,1)/da(ix,iy,1) > 1.5)          % red ratio range
          if (db(ix,iy,2)/da(ix,iy,2) < 2.8) && (db(ix,iy,2)/da(ix,iy,2) > 1.3)      % green ratio range
            if (db(ix,iy,3)/da(ix,iy,3) < 2.05) && (db(ix,iy,3)/da(ix,iy,3) > 1.14)  % blue ratio range
              if (db(ix,iy,3)/da(ix,iy,3) < db(ix,iy,1)/da(ix,iy,1)) && (db(ix,iy,3)/da(ix,iy,3) < db(ix,iy,2)/da(ix,iy,2)) && (db(ix,iy,2)/da(ix,iy,2) < db(ix,iy,1)/da(ix,iy,1))  % ratio ordering B < G < R
                if abs(da(ix,iy,1)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,1)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.129     % similar red chromaticity
                  if abs(da(ix,iy,2)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,2)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.028   % similar green chromaticity
                    if abs(da(ix,iy,3)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,3)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.143 % similar blue chromaticity
                      r(ix,iy) = 0;
                      h(ix,iy,1) = 255;   % mark the shadow pixel white in h
                      h(ix,iy,2) = 255;
                      h(ix,iy,3) = 255;
                    end
                  end
                end
              end
            end
          end
        end
      end
    end
  end
end
imshow(h);       % foreground with detected shadow painted white
im = h - a;      % the detected shadow region by itself
imshow(im);
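For reference, the same per-pixel shadow test can be written in vectorized form. This is only a sketch: the thresholds are the ones hard-coded in the loop above, the names ratio, chromDiff, shadowMask and hShadow are introduced here just for illustration, and D(:,:,1) mirrors the loop's D(ix,iy), which looks only at the first channel of the difference image.
% Vectorized sketch of the same shadow test (thresholds copied from the loop above).
ratio = db ./ max(da, eps);                    % per-channel background/foreground ratio
sumA  = repmat(sum(da, 3), [1 1 3]);           % per-pixel channel sums, replicated to 3 channels
sumB  = repmat(sum(db, 3), [1 1 3]);
chromDiff = abs(da ./ max(sumA, eps) - db ./ max(sumB, eps));   % chromaticity difference per channel
shadowMask = D(:,:,1) > 20 & all(da ~= 0, 3) & ...
    ratio(:,:,1) > 1.5  & ratio(:,:,1) < 4    & ...
    ratio(:,:,2) > 1.3  & ratio(:,:,2) < 2.8  & ...
    ratio(:,:,3) > 1.14 & ratio(:,:,3) < 2.05 & ...
    ratio(:,:,3) < ratio(:,:,1) & ratio(:,:,3) < ratio(:,:,2) & ratio(:,:,2) < ratio(:,:,1) & ...
    chromDiff(:,:,1) < 0.129 & chromDiff(:,:,2) < 0.028 & chromDiff(:,:,3) < 0.143;
hShadow = a;
hShadow(repmat(shadowMask, [1 1 3])) = 255;    % paint detected shadow pixels white, like h above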
As I wanted to identify the shadow in my own foreground image, I applied the same code to my foreground and background images.
Foreground image with shadows
Background image
I used the following code, but it does not identify the shadows in my case; I do not obtain a shadow image like the one shown above. Could anybody suggest how I could detect and eliminate the shadows from the foreground image?
My code:
a = imread('C:\Users\PDCA 1\Desktop\Input images\case1\materialleft_mat1.jpg');
b = imread('C:\Users\PDCA 1\Desktop\Input images\case1\backgroundleft_mat1.jpg');
da = double(a);
db = double(b);
D = imabsdiff(a, b);
N = size(D, 1);
M = size(D, 2);
r = zeros(N, M);
h = a;
for ix = 1:N
  for iy = 1:M
    if D(ix,iy) > 20
      if da(ix,iy,1)~=0 && da(ix,iy,2)~=0 && da(ix,iy,3)~=0
        if (db(ix,iy,1)/da(ix,iy,1) < 4) && (db(ix,iy,1)/da(ix,iy,1) > 1.5)
          if (db(ix,iy,2)/da(ix,iy,2) < 2.8) && (db(ix,iy,2)/da(ix,iy,2) > 1.3)
            if (db(ix,iy,3)/da(ix,iy,3) < 2.05) && (db(ix,iy,3)/da(ix,iy,3) > 1.14)
              if (db(ix,iy,3)/da(ix,iy,3) < db(ix,iy,1)/da(ix,iy,1)) && (db(ix,iy,3)/da(ix,iy,3) < db(ix,iy,2)/da(ix,iy,2)) && (db(ix,iy,2)/da(ix,iy,2) < db(ix,iy,1)/da(ix,iy,1))
                if abs(da(ix,iy,1)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,1)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.129
                  if abs(da(ix,iy,2)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,2)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.028
                    if abs(da(ix,iy,3)/(da(ix,iy,1)+da(ix,iy,2)+da(ix,iy,3)) - db(ix,iy,3)/(db(ix,iy,1)+db(ix,iy,2)+db(ix,iy,3))) < 0.143
                      r(ix,iy) = 0;
                      h(ix,iy,1) = 255;
                      h(ix,iy,2) = 255;
                      h(ix,iy,3) = 255;
                    end
                  end
                end
              end
            end
          end
        end
      end
    end
  end
end
imshow(h);
im = h - a;
imshow(im);
The results are totally wrong; the shadows identified for my image are not at all correct.
The wrong shadow identified in the foreground image
And the erroneous shadow is given as
Can anybody please tell me where I have made the mistake and how to identify the shadow for my set of images?
Also, how do I get rid of the identified shadow so that I obtain only the foreground object, without the shadow? Any help is much appreciated. Thanks in advance.
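Once a correct shadow mask is available, my idea for removing the shadow would be something along these lines. This is only a sketch, reusing the hypothetical shadowMask from the vectorized version above, and it simply copies the corresponding background pixels into the shadow region:
% Sketch only: suppress detected shadow pixels by replacing them with background.
cleaned = a;
idx = repmat(shadowMask, [1 1 3]);   % expand the 2-D mask to all three channels
cleaned(idx) = b(idx);               % fill the shadow area with the background pixels
imshow(cleaned);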

Accepted Answer

Image Analyst
Image Analyst on 15 Mar 2015
As a purely instructional exercise, I show you how to get the weld area. I wouldn't recommend this way in the real world for reasons I explained in my other answers. This is just purely illustrative for a classroom example and not ideal, optimized, or robust enough for an industrial application. Change the filenames to get it to work.
clc;
close all;
workspace; % Make sure the workspace panel with all the variables is showing.
format longg;
format compact;
fontSize = 20;
% Check that user has the Image Processing Toolbox installed.
hasIPT = license('test', 'image_toolbox');
if ~hasIPT
    % User does not have the toolbox installed.
    message = sprintf('Sorry, but you do not seem to have the Image Processing Toolbox.\nDo you want to try to continue anyway?');
    reply = questdlg(message, 'Toolbox missing', 'Yes', 'No', 'Yes');
    if strcmpi(reply, 'No')
        % User said No, so exit.
        return;
    end
end
%==========================================================================
baseFileName = 'part.jpg';
folder = 'C:\Users\srinivasan\Documents\Temporary';
% Get the full filename, with path prepended.
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~exist(fullFileName, 'file')
    % File doesn't exist -- didn't find it there. Check the search path for it.
    fullFileNameOnSearchPath = baseFileName; % No path this time.
    if ~exist(fullFileNameOnSearchPath, 'file')
        % Still didn't find it. Alert user.
        errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
        uiwait(warndlg(errorMessage));
        return;
    end
end
grayImage = imread(fullFileName);
% Get the dimensions of the image.
% numberOfColorBands should be = 1.
[rows, columns, numberOfColorBands] = size(grayImage);
if numberOfColorBands > 1
    % It's not really gray scale like we expected - it's color.
    % Convert it to gray scale by taking only the green channel.
    grayImage = grayImage(:, :, 2); % Take green channel.
end
% Display the original gray scale image.
subplot(2, 3, 1);
imshow(grayImage, []);
title('Original Grayscale Image', 'FontSize', fontSize);
% Enlarge figure to full screen.
set(gcf, 'Units', 'Normalized', 'OuterPosition', [0 0 1 1]);
% Give a name to the title bar.
set(gcf, 'Name', 'Demo by ImageAnalyst', 'NumberTitle', 'Off')
%==========================================================================
baseFileName = 'background.jpg';
folder = 'C:\Users\srinivasan\Documents\Temporary';
% Get the full filename, with path prepended.
fullFileName = fullfile(folder, baseFileName);
% Check if file exists.
if ~exist(fullFileName, 'file')
    % File doesn't exist -- didn't find it there. Check the search path for it.
    fullFileNameOnSearchPath = baseFileName; % No path this time.
    if ~exist(fullFileNameOnSearchPath, 'file')
        % Still didn't find it. Alert user.
        errorMessage = sprintf('Error: %s does not exist in the search path folders.', fullFileName);
        uiwait(warndlg(errorMessage));
        return;
    end
end
backgroundImage = imread(fullFileName);
% Get the dimensions of the image.
% numberOfColorBands should be = 1.
[rows, columns, numberOfColorBands] = size(backgroundImage);
if numberOfColorBands > 1
    % It's not really gray scale like we expected - it's color.
    % Convert it to gray scale by taking only the green channel.
    backgroundImage = backgroundImage(:, :, 2); % Take green channel.
end
% Display the image.
subplot(2, 3, 2);
imshow(backgroundImage, []);
title('Background Image', 'FontSize', fontSize);
% Compute the difference image
diffImage = double(grayImage) - double(backgroundImage);
% Display the image.
subplot(2, 3, 3);
imshow(diffImage, []);
title('Difference Image', 'FontSize', fontSize);
colorbar;
axis on;
textureImage = stdfilt(diffImage, true(9));
subplot(2, 3, 4);
imshow(textureImage, []);
title('Texture Image', 'FontSize', fontSize);
colorbar;
axis on;
% Let's compute and display the histogram.
[pixelCount, grayLevels] = hist(textureImage(:), 256);
subplot(2, 3, 5);
bar(grayLevels, pixelCount);
grid on;
title('Histogram of texture image', 'FontSize', fontSize);
% Find mask
mask = textureImage > 30;
% Get the convex hull to join the two sides.
mask = bwconvhull(mask, 'union');
% Display the mask image.
subplot(2, 3, 6);
imshow(mask, []);
axis on;
title('Mask Image', 'FontSize', fontSize);
% I don't know what you want to do after this.
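If the goal is simply to keep the part region, one possible continuation (a sketch only; maskedImage is a name introduced here, and grayImage, mask and fontSize come from the code above) is to apply the mask to the grayscale image:
% Sketch only: use the convex-hull mask to keep just the part/weld region.
maskedImage = grayImage;
maskedImage(~mask) = 0;            % zero out everything outside the mask
figure;
imshow(maskedImage, []);
title('Masked Image (part region only)', 'FontSize', fontSize);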

More Answers (2)

Image Analyst
Image Analyst on 14 Mar 2015
The mistake was not correcting your image capture geometry to avoid shadows in the first place. Do that and you don't need to do any shadow removal post-capture and your software will be much simpler.
  4 Comments
Image Analyst
Image Analyst on 14 Mar 2015
Then post the images in the workshop environment. Right now it looks like you just put the part onto a table top - a table top where there is no reason for it to not be black. Post the actual environment. And I've worked on machine vision apps of course. If you have a factory situation and you need to inspect this part for the weld (say for a medical instrument or whatever), then you will place the part into a jig where it is accurately positioned every time. If you don't, then you can't inspect welds. For example if this were just a big bin with a bunch of parts dropped in at random angles and orientations then there's no way you can reliably inspect welds. So in that case, where you have an accurately positioned part, you can simply crop the image to the known location of the part.
If you don't want to do that and insist on using your poor input image, then I'd conclude that this is merely a student project and not a real-world industrial or scientific project where you're interested in the best analysis possible, and that you're more interested in developing an algorithm for a "made up" problem. The accurate, real-world industrial problem is far easier to solve than the student project with poor images. They require different algorithms, so I need to know which it is. Hopefully you'll go with the easier and more accurate method rather than the harder and less accurate one.
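For the fixtured case described above, the crop itself is trivial. This is only a sketch: fullImage and the rectangle [xmin ymin width height] are placeholders for your image and for wherever the jig actually positions the part:
% Sketch only: crop to the known, fixtured part location.
croppedPart = imcrop(fullImage, [100, 50, 400, 300]);   % placeholder image and rectangle
imshow(croppedPart);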



Srinivasan
Srinivasan on 15 Mar 2015
Thank you very much for your kind help. I am trying to find the depth of the weld seam from stereo images taken with two calibrated cameras.
At first I wanted to find the depth from the right and left images using the process described here. Although I have calibrated the two cameras and rectified the right and left images, I could not compute the disparity correctly because the workpieces have textureless regions.
Since I want to find the correspondences between the two images, I am now trying feature-based correspondences to get the depth of the weld seam. I am trying to use the boundary points of the workpieces for the correspondences.
I have uploaded all the background and foreground images taken from the left and right cameras here.
I kindly request you to suggest an approach to determine the depth of the weld seam. The cameras are identical, so I obtained the same calibration results for both. I have used an interocular distance of 20 mm. The rotation of the left camera with respect to the right camera is eye(3), as there is no rotation involved. The translation of the left camera with respect to the right camera is [20;0;0].
I used the MATLAB calibration app to determine the intrinsic and extrinsic parameters of both cameras. After determining the camera parameters of the left and right cameras, I used the following code to obtain the stereo parameters:
stereoParams = stereoParameters(cameraParamsLeft,cameraParamsRight,eye(3),[20;0;0]);
Please suggest a way to find the depth using features detected in the two images, as I have yet to come across MATLAB code that uses features in both images for stereo matching. Thanks again for your consideration.
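A minimal sketch of the feature-based route, assuming the Computer Vision Toolbox functions detectSURFFeatures, extractFeatures, matchFeatures and triangulate, would look like the following. The file names are placeholders, and triangulate expects the matched points in the original (undistorted, unrectified) image coordinates that stereoParams was estimated from:
% Sketch only: feature-based correspondences between the left and right images,
% then triangulation with the stereoParams built above. File names are placeholders.
IL = rgb2gray(imread('left_view.jpg'));
IR = rgb2gray(imread('right_view.jpg'));
ptsL = detectSURFFeatures(IL);
ptsR = detectSURFFeatures(IR);
[featL, validL] = extractFeatures(IL, ptsL);
[featR, validR] = extractFeatures(IR, ptsR);
pairs = matchFeatures(featL, featR);
matchedL = validL(pairs(:, 1)).Location;   % M-by-2 pixel coordinates, left image
matchedR = validR(pairs(:, 2)).Location;   % M-by-2 pixel coordinates, right image
% Triangulate to 3-D; units follow the calibration (mm here, since the baseline is 20 mm).
worldPoints = triangulate(matchedL, matchedR, stereoParams);
depths = worldPoints(:, 3);                % Z coordinate = depth from camera 1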
  1 Comment
Image Analyst
Image Analyst on 15 Mar 2015
My first approach would probably be to have a robot pick up the part and put it onto a jig that rotates, and then use a laser scanner to do profilometry on the part as it spins.

