How to choose the parameters of vision.ForegroundDetector?
Hi everyone,
I am trying to use the ForegroundDetector from the Computer Vision Toolbox to detect moving objects in a grayscale image.
However, I am having trouble choosing the right parameters for the function, namely: the number of Gaussians, the number of training frames, the learning rate, and the initial variance.
Is there any way to determine the best parameters, or is it purely empirical, depending on the images?
I feel that the crucial parameter for me is the initial variance. But I am not very familiar with statistics and image processing, so I don't really understand what these parameters mean...
Thanks for any help.
Answers (2)
Dima Lisin
on 6 Jan 2015
Hi Benjamin,
Initial variance is indeed crucial, and it depends on the range of pixel values in your video. If the data type of your frame is 'double' or 'single', with the pixel values ranging between 0 and 1, then you should use the default value of InitialVariance, which is (30/255)^2. If your frame is of type 'uint8', with the values ranging between 0 and 255, then you should set InitialVariance to 30^2.
The other parameters depend on the content of your video. You can reduce NumGaussians if you have a nice static background, as in an indoor environment. On the other hand, more Gaussians help when your background is non-stationary. Examples would be outdoor scenes with rustling leaves, or the surface of a sea or a lake.
LearningRate controls how quickly your background model adapts to changes in the background. If the LearningRate is too high, then slow-moving objects may become part of the background. If the LearningRate is too low, then your background model will not be able to adapt to lighting changes.
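As a rough illustration of the trade-offs above, here are two hypothetical configurations; the specific values are guesses to show the direction of the adjustments, not recommendations:

```matlab
% Static indoor scene: fewer Gaussians, modest learning rate.
indoorDetector = vision.ForegroundDetector( ...
    'NumGaussians', 3, ...
    'NumTrainingFrames', 50, ...
    'LearningRate', 0.005);

% Outdoor scene with rustling leaves or water: more Gaussians, and a
% higher learning rate so the model can track the changing background.
outdoorDetector = vision.ForegroundDetector( ...
    'NumGaussians', 5, ...
    'NumTrainingFrames', 150, ...
    'LearningRate', 0.02);
```

In either case, it is worth tuning LearningRate against real footage: watch whether slow-moving objects get absorbed into the background (rate too high) or whether lighting changes leave ghost foreground regions (rate too low).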
Stephan Zimmer
on 5 Mar 2021
Hey Benjamin,
have you found a solution to all of your questions? I am currently writing my master's thesis and have the same problems you had. I would be very happy if you could share your experience!
Best regards,
Stephan
Ahsan Malik
on 13 Nov 2015
Hello everyone. I want to ask whether there is any way to train the foreground detector on a separate video containing only the background, and then test it on a video with the same background plus foreground objects. Or can we take the 50 training samples from somewhere other than the start?
Jonathan Meerholz
on 29 Sep 2020
I know this is a late response; however, I believe it is still useful for others.
As far as I understand, say you set up your detector as follows:
detector = vision.ForegroundDetector(...
'NumTrainingFrames', 150, ...
'InitialVariance', 30*30);
Then the next 150 calls to the detector will be used to train the background model. Thus, you could use 150 frames of a training video to train it, and then run it on the actual footage afterwards:
video_training = VideoReader("video_training_path"); % Path to training video
video_input = VideoReader("video_input_path");       % Path to input video
% Training loop
for i = 1:150
    frame_current = readFrame(video_training); % Read training frame
    bw_training = detector(frame_current);     % Train detector
end
% Detection loop
while hasFrame(video_input)
    frame_current = readFrame(video_input); % Read actual video input
    bw_mask = detector(frame_current);      % Use trained detector
end