Color Tracking in a loaded Video

13 views (last 30 days)
Raoul Can
Raoul Can on 26 Jan 2016
Commented: Iran Neto on 25 Jan 2022
Hello,
I want to track objects by color, but not with a live webcam. I want to track objects by color in a loaded video, but it doesn't work:
VideoReader.getFileFormats()                 % list the supported file formats
vidObj  = VideoReader('C:\VIDEO.MP4');       % open the video file
get(vidObj)
nFrames = get(vidObj, 'NumberOfFrames');
width   = vidObj.Width;                      % get image width
height  = vidObj.Height;                     % get image height
for iFrame = 1:nFrames
    I = read(vidObj, iFrame);                % get one RGB frame
    % Subtract the grayscale image from the red channel to keep the red regions.
    diff_im = imsubtract(I(:,:,1), rgb2gray(I));
    diff_im = medfilt2(diff_im, [3 3]);      % median filter to remove noise
    diff_im = im2bw(diff_im, 0.18);          % threshold to a binary mask
    diff_im = bwareaopen(diff_im, 300);      % remove blobs smaller than 300 pixels
    bw      = bwlabel(diff_im, 8);           % label connected components
    stats   = regionprops(bw, 'BoundingBox', 'Centroid');
    imshow(I);
    hold on;                                 % keep the frame while drawing overlays
    for object = 1:length(stats)
        bb = stats(object).BoundingBox;
        bc = stats(object).Centroid;
        rectangle('Position', bb, 'EdgeColor', 'r', 'LineWidth', 2)
        plot(bc(1), bc(2), '-m+')
        a = text(bc(1)+15, bc(2), strcat('X: ', num2str(round(bc(1))), ...
            ' Y: ', num2str(round(bc(2)))));
        set(a, 'FontName', 'Arial', 'FontWeight', 'bold', 'FontSize', 12, 'Color', 'black');
    end
    hold off;
    drawnow;                                 % update the figure for every frame
end
Can anybody help me? And my second question: if I have a snapshot from a live video and want to compare it with a second snapshot, for example when one of the objects is at a different position, how can I do this?
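For the comparison, what I imagined (just a sketch that reuses the segmentation from above; the frame numbers are only placeholders) is to segment both snapshots and compare the centroids:
frame1 = read(vidObj, 1);                    % first snapshot (placeholder frame index)
frame2 = read(vidObj, 50);                   % second snapshot (placeholder frame index)
bw1 = bwareaopen(im2bw(medfilt2(imsubtract(frame1(:,:,1), rgb2gray(frame1)), [3 3]), 0.18), 300);
bw2 = bwareaopen(im2bw(medfilt2(imsubtract(frame2(:,:,1), rgb2gray(frame2)), [3 3]), 0.18), 300);
stats1 = regionprops(bw1, 'Centroid');
stats2 = regionprops(bw2, 'Centroid');
if ~isempty(stats1) && ~isempty(stats2)
    shift = stats2(1).Centroid - stats1(1).Centroid;   % [dx, dy] in pixels
    fprintf('Object moved by dx = %.1f, dy = %.1f pixels\n', shift(1), shift(2));
end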

Accepted Answer

Raoul Can
Raoul Can on 26 Jan 2016
Edited: Raoul Can on 26 Jan 2016
I loaded my own video but I didn't see the bounding box. But I think it is what I need, thanks. Can I fill the bounding box with the respective color for better color detection?
I'm trying to use similar code for live tracking to control my robot arm. How can I implement this in my own code?
.....
.....
n = length(Daten);
for i = 1:length(Daten)
    drive_x = RobotArm_Koordinate_x(i)/1000;
    drive_y = RobotArm_Koordinate_y(i)/1000;
    drive_z = RobotArm_Koordinate_z(i)/1000;
    % Zero (home) position
    R_DriveCoordinate(handles, x, y, z, theta);
    % Drive to ...
    R_DriveCoordinate(handles, drive_x, drive_y, drive_z, drive_theta(i));
    ....
end
At the moment I can select a color and make a snapshot. After that I can click on the "OK" button and drive to the coordinates.
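Roughly, what I have in mind for feeding the detection into the drive loop looks like this (only a sketch: 'vid' is my Image Acquisition Toolbox camera object, and the scale/offset in the pixel-to-robot mapping are made-up placeholders for my real calibration):
snapshot = getsnapshot(vid);                           % one camera snapshot
diff_im  = imsubtract(snapshot(:,:,1), rgb2gray(snapshot));
bw       = bwareaopen(im2bw(medfilt2(diff_im, [3 3]), 0.18), 300);
stats    = regionprops(bw, 'Centroid');
if ~isempty(stats)
    bc = stats(1).Centroid;                            % pixel coordinates [x, y]
    drive_x = (bc(1)*0.5 + 10)/1000;                   % placeholder pixel-to-robot calibration
    drive_y = (bc(2)*0.5 + 10)/1000;                   % placeholder pixel-to-robot calibration
    drive_z = 0.05;                                    % placeholder fixed height
    R_DriveCoordinate(handles, drive_x, drive_y, drive_z, drive_theta(1));
end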

More Answers (3)

Image Analyst
Image Analyst on 26 Jan 2016
See my attached demo where I track a green Sharpie marker.
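The demo itself is in the attachment, but the basic idea is to threshold in HSV color space. A rough sketch (not the attached demo, and the green limits below are only illustrative, you have to tune them for your own video) looks like this:
vidObj = VideoReader('C:\VIDEO.MP4');                  % path from the question
while hasFrame(vidObj)
    rgbFrame = readFrame(vidObj);
    hsvFrame = rgb2hsv(rgbFrame);
    % Keep pixels whose hue/saturation/value fall inside a "green" band.
    mask = hsvFrame(:,:,1) > 0.25 & hsvFrame(:,:,1) < 0.45 & ...
           hsvFrame(:,:,2) > 0.40 & hsvFrame(:,:,3) > 0.30;
    mask = bwareaopen(mask, 300);                      % drop small specks
    stats = regionprops(mask, 'BoundingBox', 'Centroid');
    imshow(rgbFrame); hold on;
    for k = 1:numel(stats)
        rectangle('Position', stats(k).BoundingBox, 'EdgeColor', 'g', 'LineWidth', 2);
        plot(stats(k).Centroid(1), stats(k).Centroid(2), 'm+');
    end
    hold off; drawnow;
end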
  19 Comments
Image Analyst
Image Analyst on 25 Jan 2022
@Iran Neto It depends on whether you're casting to double or not. If you're using a uint8 image, then the Color Thresholder app and rgb2hsv() will give you the value image in the range 0-1. If you're using a double version, like double(rgbImage), then rgb2hsv() will give you the value image in the range 0-255.
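A quick way to see this (illustrative only, using one of MATLAB's sample images):
rgbImage = imread('peppers.png');                      % built-in sample image, uint8 (0-255)
hsvFromUint8  = rgb2hsv(rgbImage);                     % value channel comes back in [0, 1]
hsvFromDouble = rgb2hsv(double(rgbImage));             % value channel comes back in [0, 255]
vA = hsvFromUint8(:,:,3);
vB = hsvFromDouble(:,:,3);
fprintf('uint8 input:  max V = %g\n', max(vA(:)));
fprintf('double input: max V = %g\n', max(vB(:)));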



Raoul Can
Raoul Can on 26 Jan 2016
Edited: Image Analyst on 25 Jan 2022
Oh, I have a problem with my video. I cannot see the one with the boxes, why? And how can I change the size of the images?
I want to see only the image with the bounding boxes as full screen. How can I change this?
  1 Comment
Image Analyst
Image Analyst on 25 Jan 2022
If you don't see the boxes with your video, then you don't have anything in your video that matches the green range that I defined for my video. You'll have to adjust your thresholds.
You can maximize the window whenever you want with this code:
g = gcf;
g.WindowState = 'maximized';



Raoul Can
Raoul Can on 27 Jan 2016
Edited: Raoul Can on 27 Jan 2016
Hi,
I don't understand it. In this example we subtract the grayscale image from the red channel to extract the red components in the image.
diff_im = imsubtract(data(:,:,1), rgb2gray(data));
And in your example we don't have a variable for the difference, so I have to select the difference (not segmented) directly. Is this correct? Or why do I have to select the non-segmented part? I don't understand.
  1 Comment
Image Analyst
Image Analyst on 25 Jan 2022
That code you showed is a different way of detecting red. I don't think it's as robust as doing a true color segmentation like I did when I converted the image to HSV color space. Doing it in RGB space like you're showing probably won't work for all shades of red. I don't get a difference image by subtracting a grayscale version of the image from the red channel, because I'm not doing it like that. I get the red mask by thresholding the Hue, Saturation, and Value channels. Again, this is better and more robust.
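For example, a red mask from HSV thresholds (the limits below are only illustrative and need tuning; note that red hue wraps around 0) would look something like this:
hsvImage = rgb2hsv(data);                              % 'data' as in the snippet above
hue = hsvImage(:,:,1);
sat = hsvImage(:,:,2);
val = hsvImage(:,:,3);
redMask = (hue < 0.05 | hue > 0.95) & sat > 0.5 & val > 0.3;   % illustrative limits
redMask = bwareaopen(redMask, 300);                    % remove small blobs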

