This submission includes 3 m-files and 6 image files:
1- Zernike_main.m (The main script that takes care of everything)
2- Zernikmoment.m (Calculates the Zernike moments for an NxN ROI)
3- radialpoly.m (Calculates the radial polynomials which are prerequisites for calculating Zernike moments)
4- Six .png files to test the code.
When you run Zernike_main.m, it calculates the Zernike moments of order n=4 and repetition m=2 for the input images. Since the first-row images are just rotated versions of a single object (an oval), the magnitudes of their Zernike moments are the same. In addition, the differences between the phases of the moments are proportional to the rotation angles of the images. As expected, the Zernike moments of two different shapes (e.g. an oval and a rectangle) are entirely different. This behavior reflects the ability of Zernike moments to describe the shape of objects.
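The rotation behavior described above can be checked with a short, self-contained sketch. This is a NumPy re-implementation of the same pipeline (a radial polynomial plus the unit-disk moment sum), not the author's MATLAB files; the overall scale factor here is my own normalization and may differ from the MATLAB code by a constant, which does not affect the rotation property.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_{n,m}; requires |m| <= n and n - |m| even."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Zernike moment of order n, repetition m for a square N x N image."""
    N = img.shape[0]
    X, Y = np.meshgrid(np.arange(1, N + 1), np.arange(1, N + 1))
    x = (2 * X - N - 1) / N          # map 1-based indices into (-1, 1)
    y = (2 * Y - N - 1) / N
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                # keep only pixels inside the unit disk
    prod = img * radial_poly(rho, n, m) * np.exp(-1j * m * theta)
    Z = prod[mask].sum() * (n + 1) / np.pi * (2 / N) ** 2  # pixel-area normalization
    return Z, abs(Z), np.degrees(np.angle(Z))

# An oval and its exact 90-degree rotation (np.rot90 avoids interpolation error).
N = 100
yy, xx = np.mgrid[0:N, 0:N]
oval = ((((xx - (N - 1) / 2) / 30) ** 2 + ((yy - (N - 1) / 2) / 15) ** 2) <= 1).astype(float)
Z1, A1, phi1 = zernike_moment(oval, 4, 2)
Z2, A2, phi2 = zernike_moment(np.rot90(oval), 4, 2)
print(A1, A2)   # magnitudes agree; phases differ by m * 90 = 180 degrees
```

For a 90-degree rotation and m = 2, the phase shifts by exactly m times the rotation angle, i.e. 180 degrees, so Z2 = -Z1 up to floating-point error while |Z| is unchanged.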
License agreement: to acknowledge use of the code, please cite the following papers:
A. Tahmasbi, F. Saki, S. B. Shokouhi, "Classification of Benign and Malignant Masses Based on Zernike Moments," Comput. Biol. Med., vol. 41, no. 8, pp. 726-735, 2011.
F. Saki, A. Tahmasbi, H. Soltanian-Zadeh, S. B. Shokouhi, "Fast opposite weight learning rules with application in breast cancer diagnosis," Comput. Biol. Med., vol. 43, no. 1, pp. 32-41, 2013.
Can this be used for image reconstruction?
Please tell me how to extract features from X-ray images using Zernike moments.
I've tried your code. It is really good, but when I ran it, it showed this error:
Index exceeds matrix dimensions.
Error in Zernikmoment (line 47)
Product = p(x,y).*Rad.*exp(-1i*m*Theta);
Error in process_predict_leaf (line 80)
[mom, amplitude, angle] = Zernikmoment(b,4,0); % call Zernikemoment function with n=4, m=0
Can you explain the error? I really need help, as I'm new to Zernike moments.
I have a question: how can I get more values of the Zernike moments?
Thank you very much for the detailed answer. I understand now how it works in terms of optical aberrations. There is only one thing that remains unclear to me: is there any way to predict which value of m would yield the best results for a particular application? Are there any rules of thumb? Or is calculating moments for different values of m and comparing errors the only way of finding the best m?
Yes, it should and, hopefully, it will.
Thanks for your comment. As you may know, Zernike moments were first developed to study optical aberrations, though they are now heavily used in image processing applications. Speaking of optical aberrations, please take a look at the following image:
You will see the pupil functions associated with different aberrations on the left-hand side and their corresponding spatial images on the right-hand side. The spatial images are simply the Fourier transforms (FT) of the pupil functions, since a lens effectively performs an FT.
The important point is that different combinations of the order (n) and repetition (m) of the Zernike moments represent different, unique optical aberrations. For example, let's focus on the tip and tilt aberrations. For both of those, the order n is 1 and the absolute repetition |m| is 1 (note that n - |m| must be even, so n = 1 with m = 0 is not a valid combination). Here, the sign of the repetition m rotates the pupil function by 90 degrees, with one sign giving tip and the other giving tilt. Note that the tip and tilt aberrations correspond to a horizontal shift and a vertical shift in the position of the image, respectively.
The take-home message is that the repetition parameter typically only rotates the aberration pattern. I hope this answers your question.
Please feel free to contact me if you have any other concerns.
Will it run for any order n with an appropriate value of the repetition m?
Thank you for the piece of code that you've shared with us. I am new to the whole idea of Zernike polynomials, and I don't understand what the 'repetition' parameter is or how I could predict its value for a particular application. I've read several articles about Zernike moment algorithms and none of them really explains it, even though it is a key definition. Could you explain it to me? I think many people would be grateful for that (including me, of course).
Thank you for your time.
I'm trying to use your code for 1024x1024 images and I keep getting the following error:
Error using .*
Integers can only be combined with integers of the same class, or scalar doubles.
Error in Zernikmoment (line 47)
Product = p(x,y).*Rad.*exp(-1i*m*Theta);
Could you possibly explain to me what causes the error?
I have been reading the article you sent me.
Very informative...I Love the code..
Here is the main publication I was referring to:
Hope this helps.
I believe I have located the article at:
The link to the detailed study on the frequencies of Zernike moments seems to be broken.
Please could you tell me the title of the publication?
I'm trying to implement Zernike moments for fingerprints of microorganisms. I have read some of your papers and will certainly cite them in my work, and use them to enrich my knowledge of the subject.
Can you please help me construct Zernike-moment-based kernels? When convolved with an image, these kernels should detect its edges.
Thanks in advance
Can you provide some help on how to reconstruct the signal after applying the Zernike transform?
Sir, please tell me how I can compute the Zernike moments at some predefined points using your code. Thank you.
I want to use your code in my application to match templates. I have explained my issue here:
I am not very familiar with image moments, so this might seem very trivial to you. The outputs A and phi are almost identical for images with and without noise (shown in the screenshot at that link). Is this beyond the scope of your code? Is there anything I can alter so that it solves my issue?
* 'A' seems very similar
This code is excellent.
My question is how to use Zernike moments for classification with a backpropagation algorithm. Please help.
Thanks for your feedback. I think the code should work for MxN images with minor modifications. I will try to incorporate this feature into the next update of the code.
Unfortunately, I don't have the code for Pseudo-Zernike moments.
This code is excellent.
But I wonder whether it works for images of size MxN.
And do you happen to have code for the Pseudo-Zernike moments?
Can you please be a little more specific about what you mean by the "kernel"? My understanding is that you want to plot the radial polynomials. Is this correct?
Can you please provide code for plotting the kernel?
Thank you for your interest in my code. I have addressed your concerns as follows:
1- Regarding "Product = p(x,y).*Rad.*exp(-1i*m*Theta);", I should say this is actually correct. Remember that the Zernike moments are calculated by multiplying the input image p(x,y) by the complex conjugate of the 2-D Zernike basis functions (typically denoted by V_n,m = R_n,m(\rho) . exp(j m \theta)). In other words, we have "Product = p(x,y) . V*_n,m = p(x,y) . R_n,m(\rho) . exp(-j m \theta)". The negative sign that you are referring to is therefore due to the complex conjugate operator. For a clearer explanation, please see my paper, p. 730, eq. 11 (available online at http://www.utdallas.edu/~a.tahmasbi/publications/Zernike_CBM_2011.pdf ). You can also take a look at the classic paper on Zernike moments by Dr. Khotanzad, p. 490, eq. 5 (available online at http://optics.sgu.ru/~ulianov/Bazarova/LASCA_literature/InvariantImageRecognitionZernikeMoments.pdf ), and further verify this using eq. 49 on the web page you sent me.
2- You are right: the mapping we have used in the code loses the information in the corners of the image. This is, however, a common strategy for mapping the image space onto the unit disk that researchers typically use when calculating Zernike moments (see, e.g., Dr. Khotanzad's paper above). From a practical point of view, it usually does not cause any problems, since the object of interest is scaled such that it is circumscribed by the unit disk.
3- The code can easily be generalized to non-square ROIs. Since this is a question that many other people have also asked, I will update the code on the MATLAB Central page to handle rectangular images when I get a chance (probably in a few weeks). You can also do it yourself.
4- I can make some MxN test image sets, but I will need some time to do this. You can also make those images from the square images I put on the MATLAB Central page (just add, say, 5 rows above and below the ROI). Then you can rotate the object within the same ROI and verify that the magnitudes of the moments remain the same.
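The index-to-unit-disk mapping discussed in point 2 can be checked numerically. A small Python sketch for illustration (the MATLAB code applies the same formula to 1-based indices): (2X - N - 1)/N places the N pixel centers symmetrically inside (-1, 1), which is why the extra -1 appears in the mapping.

```python
import numpy as np

N = 5
X = np.arange(1, N + 1)        # 1-based pixel indices, as in MATLAB
x = (2 * X - N - 1) / N        # symmetric pixel-center coordinates in (-1, 1)
print(x)                       # [-0.8 -0.4  0.   0.4  0.8]
```

Without the -1, i.e. (2X - N)/N, the coordinates would run from -0.6 to 1.0 and the disk would be off-center.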
First of all, thanks for sharing your code.
Secondly, I would like to ask a question; I don't know whether the mistake is mine or in your code. To calculate the complex polynomial, in all the formulas I found
(e.g. http://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/SHUTLER3/node11.html )
it is written that j = sqrt(-1). In your code you wrote -1*i instead, which is not the same, because sqrt(-1) is just the definition of the imaginary unit i. So am I wrong, or should it be:
Product = p(x,y).*Rad.*exp(i*m*Theta)
in your code, without the -1?
I hope it is understandable what I want to say.
Thanks again, Frederik
Hello Amir. Thank you for your code. I have used it to calculate the Zernike moments of an image, but I'm having problems reconstructing the image from its moments. Could you give me some help, please?
1. Your code assumes that the image is square; what if it isn't?
2. I don't understand Product = p(x,y).*Vnm. Why not Product = p.*Vnm?
Thank you for the code.
But why do you use (2.*X-N-1) in the image mapping, and not (2.*X-N)?
Actually, I want to find global features of an image with Zernike moments, but I don't know how to proceed.
Can anyone say how to find the Zernike moments of an RGB image? Thanks in advance.
Thank you for your code. Can you please help me modify it by substituting p(x/a+x1, y/a+x2) for p(x,y), where p(x,y) is the original image, (x1, x2) is the centroid of p(x,y), x1 = m10/m00, x2 = m01/m00, a = sqrt(β/m00), and β is a predetermined value? In fact, this performs scale and translation normalization. Thank you very much.
1- It mainly depends on your application, and you need some experience to pick an appropriate order (n) and repetition (m) for your moments. However, the rule of thumb is that lower orders provide less information (detail) but are more robust to noise. Higher orders, on the other hand, provide more information about the details of the object but are more sensitive to measurement noise. An efficient approach is to calculate the Zernike moments for a variety of m and n values. You may then apply a feature selection algorithm to remove the correlated features (moments). Finally, you can pass a group of, say, 20 reasonably uncorrelated moments to your classifier.
2- I think with appropriate preprocessing steps, the Zernike moments should be useful for Farsi digit recognition.
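The feature-selection step mentioned in point 1 can be sketched as a simple greedy filter on pairwise correlations. This is a Python illustration with synthetic data; `drop_correlated` and the 0.95 threshold are my own names and values, not part of the author's code.

```python
import numpy as np

def drop_correlated(F, thresh=0.95):
    """Greedily keep feature columns whose absolute correlation with
    every already-kept column stays below thresh."""
    C = np.abs(np.corrcoef(F, rowvar=False))
    keep = []
    for j in range(F.shape[1]):
        if all(C[j, k] < thresh for k in keep):
            keep.append(j)
    return keep

# Synthetic demo: five independent features plus two near-duplicates.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 5))
dup = base[:, :2] * 2.0 + 1e-6 * rng.normal(size=(200, 2))
F = np.hstack([base, dup])
kept = drop_correlated(F)
print(kept)  # the two duplicated columns (indices 5 and 6) are removed
```

In practice, F would hold the |Z| values of many (n, m) combinations, one column per moment, computed over your training images.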
Oh, excuse me, I see the answer now.
But I have other questions:
1. How do I know which values are appropriate to assign to n and m?
I'm using Zernike moments for handwritten Farsi digit recognition.
2. Are the features obtained from Zernike moments useful for classification in handwritten Farsi digit recognition?
Your code calculates one Zernike moment and its magnitude. How can I change it to calculate more Zernike moments?
Yes, the code is capable of calculating the Zernike moments of any order and repetition. All you need to do is change m (repetition) and n (order). The default ones are n=4 and m=2. Hope this helps.
Sir, I am using Zernike moments for handwritten character recognition. I have implemented Zernike moments in terms of geometric moments, but only up to orders 3 and 4. I want to extend the code; can your files extend it, or extract higher-order Zernike moments?
Thanks for the reply.
I understand what you are saying. I now have identification working for some palm prints. I used a higher Zernike order and a better database, and I also made a unit disk for the picture. I will make some additions and see what happens.
In principle, the Zernike moments can be extracted from any shape. However, in your case, there could be a variety of potential issues. First of all, you need to make sure the objects (palm prints) are of equal size within the ROI. If not, you need to equalize the size of your objects in all ROIs.
The other thing is that palm prints are more complicated objects than simple shapes such as ovals and circles. Thus, you might need to extract a set of higher-order Zernike moments. You can then use these moments to classify the palm prints of different people more reliably.
FYI, the magnitudes of the Zernike moments of two similar shapes might vary slightly due to pixelation and noise, but the difference should be insignificant.
I used your Zernike code to find the Zernike moments of palm prints, but there are some problems: when I use it for those, A = 0 and Theta = 0.
I made a change to the code: I replaced p with p = logical(not(p));
After that it works with palm prints, and for your pictures the values equal yours. But the problem is that the values are not equal for the same person's palm print.
Thanks! The input arguments "m" and "n" are scalars. This means that if you would like to extract, say, 32 moments, you need a "for loop" in which you change "n" and "m" and call Zernikemoment(p, n, m). Depending on your application, you can either change "n" from y to 32+y or use different combinations of "m" and "n". For more information, see Table 1 on page 731 of this article:
Hope this helps.
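As a sketch of that loop, the valid (n, m) combinations obey |m| <= n with n - |m| even. A Python illustration (the function name is mine; inside the loop you would call the MATLAB routine Zernikemoment(p, n, m) once per pair):

```python
def valid_pairs(n_max):
    """All (order, repetition) pairs with |m| <= n and n - |m| even."""
    return [(n, m) for n in range(n_max + 1)
                   for m in range(-n, n + 1)
                   if (n - abs(m)) % 2 == 0]

pairs = valid_pairs(4)
print(len(pairs))  # 15 valid pairs up to order 4
```

Since the moment magnitudes for m and -m coincide, feature loops typically keep only m >= 0 and raise n_max until the desired number of features (say 32) is reached.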
Nice job! One question: if we say 32 moments, then when I call [ZOH, AOH, PhiOH] = Zernikmoment(p,n,m);, I expect ZOH to be an array of 32 values. Or what do 32 moments mean here?
The Zernike moments are rotation-invariant, no question about it! So, if you use the sample pictures included in the package, you will see this property.
The reason you are getting different results for the abs of the Zernike moments is as follows: the MATLAB function "imrotate" does not preserve the size of an object within the ROI. Please note that the ROI size will be the same, but the original image will be shrunk within the new ROI. Thus, you are changing the object size, which alters the abs of the Zernike moments. Hope this helps.
Hey, can you please help me understand how this code is invariant to rotation? Because if I apply 'imrotate' with 30 or 45 degrees to an image, the result is different.
Yes, that is correct. By changing 'n' you can change the order of the Zernike moments to generate a set of, say, 32 features. However, you should keep in mind that the variable 'm' (i.e. the repetition of the moments) also plays an important role in the behavior of the moments. Hence, 'm' should be selected appropriately for each 'n'. To find a suitable repetition for your chosen order, please see:
Hope this helps! Let me know if you have any other concerns.
I am new to Zernike moments. I tried to understand your code and have a few doubts. How do I find 32 order features for a given image? Is it by varying the number 'n'? Please reply.
Thanks, guys! Chris: I refer you to one of our papers, in which we normalize the ROIs before extracting the Zernike moments (i.e. we remove the dependency of the Zernike moments on the translation and scaling of the object in a preprocessing step). Here it is:
Another way is to use the Zernike moment invariants explained nicely here:
Good job! And how about the Zernike moment invariants?
Nice job!! Thank you so much.
- Updated to make the submission available as a toolbox in MATLAB R2014b.
Just updated the description.
Added more sample shapes with different rotation angles.
Rephrased the summary for clarity.
Fixed several minor mistakes in the description of the file.