Finding a non-linear gamma model's parameters using lsqnonlin

I am going to do a gamma correction on some images. According to these equations, I have to find the unknown parameters using the lsqnonlin function.
In this problem I have a test target like this:
I know the RGB values of every color square as (R,G,B)_in, and I captured this color palette with my camera, which gives another set of RGB values, (R,G,B)_out'. So I am going to estimate A and k with the lsqnonlin function.
After estimating A and k, I have to estimate the gamma value from equation (3), but I don't know how to use lsqnonlin to solve my problem(s).
The initial values are:
Any help will be appreciated. Thanks :)
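A minimal sketch of the kind of lsqnonlin call I have in mind, assuming a placeholder power-law model out = A*in^k just to show the mechanics (the real residual would come from the equations above, and the data here are synthetic):

% Sketch only: fit A and k in a placeholder model out = A * in.^k.
% Replace the residual with the real model equations.
in  = linspace(0.05, 1, 24).';                 % stand-in for the 24 chart colors
out = 0.9 * in.^2.2 + 0.01*randn(size(in));    % synthetic "camera" data for the demo

model    = @(p, x) p(1) * x.^p(2);             % p = [A, k]
residual = @(p) model(p, in) - out;            % lsqnonlin minimizes sum(residual.^2)

p0   = [1, 1];                                 % initial guess for [A, k]
pHat = lsqnonlin(residual, p0);                % requires the Optimization Toolbox
A = pHat(1);  k = pHat(2);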

Answers (2)

Image Analyst on 31 Dec 2014
I understand the equations - they're simplified versions of what I've used before. But I really don't think you need to do any of that, at least not in RGB color space. What you really should be doing is RGB to LAB calibration. But first let's talk about what you think you want to do.
So, what is your RGBout? It's the sRGB values - the nominal values - supplied for your Color Checker chart by the manufacturer, x-rite. By the way, you can get the chart directly from x-rite - one of the big 4 manufacturers of spectrophotometers: http://xritephoto.com/ph_product_overview.aspx?ID=1192. Those are probably the best reference RGB values you have, unless you're specifically trying to match some other camera that you're defining as the master, gold-standard camera.
Anyway, the x-rite values are sRGB values, and as you can see on the Wikipedia page for sRGB http://en.wikipedia.org/wiki/SRGB there is not a single gamma for sRGB, though there is a rough overall gamma of 2.4. So the reference values you're trying to match are already non-linear. If you're going to use your second set of equations to map a set of RGB values (Rout', etc.) into non-linear RGB (Rout) using the gamma formula, then Rout' should be non-linear and Rout should be linear - which they're not: they already have a gamma built into them if you use your ColorChecker chart's sRGB values as the reference.
So what do you get if you do what you showed? You get a gamma for mapping one non-linear set of RGB into another non-linear set of RGB. If your camera was set up with a gamma close to 2.4 (say it's 2.3), then the gamma you'll get is for mapping a curve of gamma 2.3 into a curve of gamma about 2.4. That's not much change at all, so the gamma you get out will be close to 1. And if the gamma is close to 1, you'll think your camera is linear - which is very deceptive, because it was actually 2.3; you applied the equations to the wrong quantities and got out the wrong value. You'll think "great, I'm almost linear, just what a CCD should be," but then you'll plot the gray-level intensity of the gray chips vs. the Y value of the chips (also supplied by x-rite) and notice that it's not linear - there's a gamma curve of around 2.3 due to your camera. Then you'll scratch your head asking "how could that be when I just showed the gamma is 1?" It's because you did not map your camera's RGB (with its gamma of 2.3) into linear values; you mapped it into non-linear values close to what you already had, except maybe for a brightness offset.
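As a rough illustration of that gray-chip check, a single gamma could be fitted between the chart's Y values and the camera's gray levels along these lines; the numbers below are made-up placeholders (not the official chart data), and cam = Y^(1/gamma) is just one common convention for the camera's encoding:

% Sketch: fit one gamma between the chart's gray-chip luminance (Y) and the
% camera's measured gray levels, both normalized to [0, 1]. Placeholder data.
Y_chart = [0.03 0.09 0.19 0.36 0.59 0.90];     % hypothetical chart Y values
camGray = [0.21 0.33 0.48 0.64 0.80 0.96];     % hypothetical camera gray levels

residual = @(g) camGray - Y_chart.^(1/g);      % model: cam = Y^(1/gamma)
gammaHat = lsqnonlin(residual, 2.2);           % initial guess of 2.2

plot(Y_chart, camGray, 'o', Y_chart, Y_chart.^(1/gammaHat), '-');
xlabel('Chart Y (linear)'); ylabel('Camera gray level');
title(sprintf('Fitted gamma = %.2f', gammaHat));

If the fitted gamma comes out near 2.2-2.4 even though the RGB-to-RGB fit gave a gamma near 1, that is exactly the situation described above.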
But what if you're using a good scientific/industrial camera where you can turn the gamma off (set gamma = 1) and get linear RGB values out? Well then your equations would tell you what gamma you'd need to map your linear RGB values into sRGB values. What good is that? None, really. It would merely tell you what Wikipedia did - that the gamma of sRGB is about 2.4. But so what? How does that help you? It doesn't.
OK, so what a complicated mess. I hope you followed it but I wouldn't be surprised if you didn't because it IS complicated and it takes people years to get a good feeling on this color science topic. Even I often get confused.
So, how do you avoid this problem? Well, it's an unnecessary problem that you don't even need to worry about, so don't. What you need to do is characterize your camera so that whatever image it produces, you map it into XYZ color space. Now, if you're using a good camera that you can tell to be linear, then you can map the RGB into XYZ. The XYZ values are supplied by x-rite and you can take them as the "true" values. The great thing is that XYZ is almost a linear function of linear RGB. In other words, X is pretty much like red, Y is like green, and Z is like blue, so you can get a fairly linear mapping of RGB into XYZ. This is good because you don't need wild, crazy, higher-order curves that can throw off the estimate in between your training colors (which are the 24 chip colors).
So then you map RGB into XYZ with a fairly smooth polynomial that has some higher-order terms and cross terms, like X = a function of R, G, B, R^2, G^2, B^2, R*G, R*B, and G*B, plus an offset term. Then you use least squares to solve for the coefficients mapping all of those terms into an estimated X, and you do the same for Y and Z. Then you use the analytical equations http://www.easyrgb.com/index.php?X=MATH&H=07#text7 to go from XYZ into LAB. NOW THIS IS WHAT YOU WANT! You have a calibrated system that is independent of exposure level, gamma, etc. You can take a picture of your scene with a color checker chart in it with any camera under any lighting conditions/colors (within reason) and get the same calibrated LAB values out. Once you have that, you just carry out your segmentation algorithm in LAB color space.
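A minimal sketch of that least-squares polynomial fit, using the backslash operator for the coefficients and xyz2lab (Image Processing Toolbox, R2014b or later) in place of the EasyRGB equations; the chip data below are random placeholders just so the sketch runs end to end:

% Sketch of the polynomial RGB -> XYZ calibration described above.
% In practice RGBmeas would be the 24 measured chip colors from your camera
% (linear, normalized 0-1) and XYZref the chart's published XYZ values.
RGBmeas = rand(24, 3);                         % placeholder camera RGB
XYZref  = RGBmeas * [0.41 0.21 0.02;           % placeholder "true" XYZ made with an
                     0.36 0.72 0.12;           % sRGB-like 3x3 matrix, demo only
                     0.18 0.07 0.95];

% Design matrix: R, G, B, their squares, the cross terms, and an offset.
terms = @(c) [c, c.^2, ...
              c(:,1).*c(:,2), c(:,1).*c(:,3), c(:,2).*c(:,3), ...
              ones(size(c,1), 1)];

C = terms(RGBmeas) \ XYZref;                   % 10x3 least-squares coefficients

% Apply the fitted mapping to any RGB data, then convert XYZ to LAB.
XYZest = terms(RGBmeas) * C;
LAB    = xyz2lab(XYZest);                      % Image Processing Toolbox

The same C is then applied to every pixel of a new image: reshape the image to an N-by-3 list of RGB triplets, run it through terms(...)*C, and convert to LAB.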
OK, I'm sure I've lost you by now. It's hard to impart years of color science training and practice in an Answers message. I suggest you read Charles Poynton's Color FAQ and Gamma FAQ here: http://www.poynton.com/
  6 Comments
Star Strider on 31 Dec 2014
Can you include Image Analyst’s approach as part of your method? It would seem to me that correcting misconceptions about the appropriate instrumentation and analysis would go far in advancing photomicroscopic diagnosis.
I speak from some experience. The group I was with was one of the first two (in the early 1990s, the other was Pfürtscheller’s group) to discover that it was possible to determine the task a person was performing by analysing the person’s EEG activation patterns. (In our group, it was my idea!) Don’t be reticent in advancing new approaches.
Image Analyst on 1 Jan 2015
Well go ahead, be my guest. But I warn you that the colors they use in their paper and claim to be the official RGB values of the chips are not what is supplied. Just look at the link I gave you and see, for example, that the official yellow is (231, 199, 31), but in their paper they give (255, 217, 0). Hmmmmm.... Makes you wonder where they got their values. Perhaps they are not the sRGB values but they undid the sRGB transfer function to linearize it - if they did so, they didn't mention it.
Anyway, have a crack at their Appendix A if you want. What you'll end up with is some altered RGB image different than your original one. But then what? You're still going to have to do color classification or segmentation. Anyway, good luck with this learning adventure. I hope by going through it you will start to realize what I said. Don't feel bad - it's very complicated, I know from experience of learning it myself and teaching it to hundreds of students.



Image Analyst on 30 Dec 2014
As you might guess from my icon to the left, I do color-calibrated image analysis all the time. You can do color standardization, where you match your colors to some "gold standard" image (RGB-to-RGB conversion). Or you can do calibrated color analysis, where you convert your colors to CIE LAB values (RGB-to-LAB conversion). Or you can do both if you need to (which is only required in certain situations). RGB-to-LAB is really best because you're mapping the RGB to known standards, whereas with RGB-to-RGB you're just mapping the RGB to some other RGB that was arbitrary to begin with. If you don't understand, ask.
If you want the gamma you can get that, but why do you want the gamma? Just to characterize the camera? Or do you really want to measure something in the image, like color, in which case you don't really need the gamma?
  1 Comment
Afsaneh on 31 Dec 2014 (edited)
I am working on cancer detection and I need pictures with exact RGB values, so gamma correction is important as a pre-processing step.

