Why does using normxcorr2 slow my code down so much?

I am working on template matching in fingerprints. After extracting minutiae points, I extract a 29-by-29 template around each minutia, and I then want to find the template that is most unique with respect to the fingerprint image. I am supposed to use cross-correlation for this: I match each extracted template across the whole fingerprint image to find the one whose correlation elsewhere is lowest compared with its correlation at its original location. From what I've read, normxcorr2 seems to be the best tool for that, but my implementation takes over a minute for a single minutia point, and with roughly 200 minutiae extracted a full run would take well over an hour. The approach I used is to extract a template of the same size around every pixel in the fingerprint image and cross-correlate it with the minutia template.
What can I do to speed this up? I can post the code if needed. Please help.
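To make it concrete, this is roughly the loop I'm describing (a simplified sketch; the variable names here are illustrative, not my exact code):

```matlab
winsize = 29;                          % template window size
half = floor(winsize / 2);
[rows, cols] = size(thinImg);          % thinned fingerprint image
localcorrvalues = zeros(rows, cols);
for row = half+1 : rows-half
    for col = half+1 : cols-half
        % extract a window of the same size around the current pixel
        querytemplate = thinImg(row-half:row+half, col-half:col+half);
        % correlate it with one minutia template (primtemplate)
        corrmat = normxcorr2(primtemplate, querytemplate);
        localcorrvalues(row, col) = max(corrmat(:));
    end
end
```

This pair of loops runs once per minutia template, so normxcorr2 ends up being called roughly rows × cols × 200 times.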

11 Comments

You can run MATLAB's profile function for 1-2 iterations. It will produce a report showing the processing time for each component of your function/script. Then you can see where the bottlenecks are.
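For example, wrapping a single call in the profiler (assuming your entry point looks something like the call you use elsewhere):

```matlab
profile on
[unRow, unCol] = getUniqueTemplate(thinImg, templatelist, winsize);
profile viewer   % opens a report of time spent per function and per line
```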
Thank you Adam Danz for your reply. I did this earlier and saw that the time was spent mostly in the normxcorr2 function and in a function I wrote to extract a template image around a pixel coordinate.
What can I do about this?
If that's the case, I'm not sure there are many options. If you're certain that normxcorr2 is the bottleneck and that the rest of your code is running optimally in terms of coding efficiency and memory management, there may not be a way to speed that up. If you have concerns that other parts of your code may be suboptimal, you could attach the minimally required section of code and an example file to read in and I could take a look at it at some point.
The good news is that 1-2 hours is manageable for something that only needs to be done once. If it's something you'd run frequently, I can see how that time is a problem. I'd suggest saving outputs as they are produced so if you have an error, you can determine where to start the analysis again. I also recommend using dbstop if error so if you have an error, you can poke around to see what caused the error without waiting for the error to happen again.
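A minimal sketch of both suggestions (the loop structure and file names here are hypothetical):

```matlab
dbstop if error   % on any error, pause execution with the workspace intact

for k = 1:numel(templatelist)
    % ... process template k ...
    % checkpoint the result so a crash at k = 150 doesn't lose templates 1-149
    save(sprintf('template_result_%03d.mat', k), 'localcorrvalues');
end
```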
Perhaps someone with a background in this type of analysis could suggest a different approach to try.
These are the two functions I wrote. The first extracts an image template of a specified size around a pixel coordinate; the second compares each template extracted from a minutia point to the template of every pixel in the fingerprint image.
Sorry this is late; I've been having power issues.
Also, I was wondering whether it makes sense to get SURF or SIFT interest points from each template and pick the one with the most interest points as the most unique template, as an alternative to this normalized cross-correlation approach of finding the most unique template?
Why are you using the Live Code file format instead of an m-file? Live Code is great for instructions and presentations, but I wouldn't use it for lengthy data analysis. The first thing I'd do is move the code to an m-file and see if that makes a big difference.
Anyway, I can't run your code without having the inputs. You could attach a mat file or any files needed so I can run your functions.
[unRow, unCol] = getUniqueTemplate(thinImg, templatelist, winsize);
This is how it is meant to be called. I've attached the templatelist and thinImg data.
Please let me know if it is very slow on your system as well. Thanks a lot.
And as I asked above, I was also wondering about using SURF for the matching: getting SURF or SIFT interest points from each template and picking the one with the most interest points as the most unique, instead of this normalized cross-correlation approach. I will need to find the location of the template in another fingerprint image of the same finger. Do you think this approach is okay?
Thanks again.
Neither mat file contains the winsize variable.
I apologize, I forgot to add that winsize is 29.


 Accepted Answer

The normxcorr2 function is the bottleneck in your code, consuming 49% of the total processing time. However, you can make some improvements to make the process more efficient.
Using variable names from this line of code,
corrmat = normxcorr2(primtemplate, querytemplate);
it's often the case that querytemplate is a matrix of 0s, and there's no purpose in sending a matrix of 0s through normxcorr2. You can reduce the number of calls to normxcorr2 by adding this condition after computing querytemplate. It will skip to the next iteration of the col loop before executing normxcorr2.
if all(querytemplate == 0, 'all')   % or ~any(querytemplate(:)) on releases before R2018b
    continue                        % skip the normxcorr2 call for an all-zero window
end
I haven't deconstructed your code but at a glance, it looks like you may be repeating computations when a new primtemplate is chosen. Once two windows have been correlated, there's no reason to execute that computation again.
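One way to avoid part of that repetition is to extract every pixel window once, up front, and reuse it for all ~200 minutia templates instead of re-extracting it inside each loop (a sketch with illustrative variable names; I haven't profiled it against your code):

```matlab
half = floor(winsize / 2);
[rows, cols] = size(thinImg);
windows = cell(rows, cols);
for row = half+1 : rows-half
    for col = half+1 : cols-half
        % extract each window exactly once
        windows{row, col} = thinImg(row-half:row+half, col-half:col+half);
    end
end
% later, inside the per-primtemplate loop:
%   querytemplate = windows{row, col};
```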
Lastly, the normxcorr2 function supports gpuArray inputs, which can greatly speed up processing.
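For example (requires the Parallel Computing Toolbox; a sketch, not benchmarked on your data):

```matlab
gprim  = gpuArray(primtemplate);      % move both inputs to the GPU
gquery = gpuArray(querytemplate);
corrmat = gather(normxcorr2(gprim, gquery));  % gather brings the result back to the CPU
```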

7 Comments

Thank you so much Adam Danz, I really appreciate the time you've taken to help me.
I will add that line to the code; I didn't know how to check for and bypass all-zero arrays. Also, does it make sense to use SURF to get interest points from each of the templates and pick the one with the most interest points as the most unique?
I'm fairly convinced that normxcorr2 is not the right approach, but at the same time I don't have another suggestion and would need to invest much more time to dig into it.
If you check out the first example in the normxcorr2 documentation (R2020a), the surf() plot reveals one clear peak. In many of the iterations in your code, normxcorr2 produces lots of peaks of the same height spread across the window (the first iteration that reaches normxcorr2 is a good example). You're just selecting the first peak in,
localcorrvalues(row,col) = corrmat(ypeak(1),xpeak(1)); %get the correlation value
and I don't know the reason behind that. Even when there is one peak and the correlation is high (for example, when tempcount==1, row==15, col==177, where the correlation at that peak is ~0.91, surf plot below), the querytemplate is still a sparse matrix (image shown below) with very little non-zero data.
I wanted to find the template whose correlation at other places in the fingerprint is lowest compared with its correlation at its original position; that was the idea for getting the most unique template out of those presented. But truly, I had also started thinking that normxcorr2 may not be what I need. That's why I am thinking of using the SURF algorithm for the template matching instead. Do you think I can swap normxcorr2 for SURF?
You could try it using matchFeatures().
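A rough sketch of that pipeline with SURF features (Computer Vision Toolbox; `otherFingerprint` here is a hypothetical second image of the same finger):

```matlab
pts1 = detectSURFFeatures(template);
pts2 = detectSURFFeatures(otherFingerprint);
[feat1, valid1] = extractFeatures(template, pts1);
[feat2, valid2] = extractFeatures(otherFingerprint, pts2);
indexPairs = matchFeatures(feat1, feat2);       % nearest-neighbor descriptor matching
matchedInTemplate = valid1(indexPairs(:, 1));
matchedInOther    = valid2(indexPairs(:, 2));   % candidate template locations
```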
I will go with the SURF algorithm. Thanks for your help, I really appreciate it.


More Answers (0)

Asked: 6 Apr 2020
Edited: 9 Apr 2020
