Clarifying sub-pixel concept

Mathematically, a sub-pixel is a pixel with non-integer coordinates (a, b), that is, the fractional parts of a and b are nonzero.
According to the convention I am calling "Mira", a pixel P(c, d) with integer coordinates c and d lies at the center of the cell that begins at (c - 0.5, d - 0.5) and ends at (c + 0.5, d + 0.5). Under this convention, the sub-pixels around a pixel P(c, d) do not replace it.
What I have just described is only one standardization, though. Can a sub-pixel also be seen this way: the pixel P(c, d) is divided into m x n sub-pixels, making the pixel P(c, d) disappear?
If both approaches are correct, which is the more accurate representation?

16 Comments

Huh? What convention? What is Mira? What disappears? What do you mean by affordable? Basically you just have to consider whether you're working with whole pixels or center-to-center. For example is the line [0, 1, 1, 1, 0] three pixels long, or just two?
"What convention? What is Mira?"
"What disappears?"
The value that pixel P(c, d) had before it was divided into m x n sub-pixels.
"What do you mean by affordable?"
The most accurate representation.
"For example is the line [0, 1, 1, 1, 0] three pixels long, or just two?"
I didn't understand your matrix.
Nothing disappears. You still have your original matrix.
In [0, 1, 1, 1, 0] (a white line), if 0 is the background and 1 is the foreground, what is the distance from one end of the foreground (element 2) to the other end (element 4)? Is the line 2 or 3 pixels long? You can make a case for either answer. What if you have 3 pixels in a right triangle? What is the area? Is it 3 because there are 3 pixels, or is it 0.5 because, measuring from center to center, the area of the triangle is 0.5? Again, a case can be made for either answer, and it depends on how you want to interpret it. Have you noticed that bwarea() gives a different area than regionprops()? Why? Different interpretations, that's why.
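A quick numeric sketch of the two interpretations of that line (in Python/NumPy rather than MATLAB, purely for illustration):

```python
import numpy as np

line = np.array([0, 1, 1, 1, 0])
fg = np.flatnonzero(line)            # foreground indices (0-based): [1, 2, 3]

# Interpretation 1: count whole pixels; the segment is 3 pixels long
whole_pixel_length = int(fg.size)

# Interpretation 2: center-to-center distance between the end pixels is 2
center_to_center = int(fg[-1] - fg[0])
```

Both numbers describe the same foreground run; they just answer different questions about it.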
Lucas on 30 Jun 2016 (edited)
I think it is better to adopt the standardization I described, because the original intensity values will then sit at the cell centers.
Ben, analytically, what is a sub-pixel? Wikipedia says it is the information inside one pixel; in other words, sub-pixels exist when the scene information exceeds what a pixel can hold. So at first I thought of the pixel as a downscaling of m x n sub-pixels, but that does not entirely make sense, because if I estimate the sub-pixels (in this sense, invert the downscaling), the native pixel is still there, and downscaling never preserves the intensities that were in the HR image. Using the P.S.F. would not help, because I would have to upscale. I have also seen sub-pixel described as information (an edge or a texture) smaller than a pixel; perhaps that falls under the P.S.F. and the minimum resolvable angle.
Ben, what is the exact meaning of sub-pixel? I would very much like to find a way to treat an image in the continuous domain.
I am not sure who "Ben" is referring to.
"Ben" is something like "Well". Derived from "Bem", from Brazilian portuguese.
I have no idea what you mean when you say the pixel disappears. Basically, sub-pixel means smaller than a pixel, or a location specified with precision finer than the grid of pixel centers. Don't make it more complicated than it needs to be.
It can't be smaller than a pixel, because the pixel is the smallest image element.
A pixel of an output image is the smallest image element of that output image, but the array that represents it might have information of higher resolution. It is common, for example, to use an internal array of higher resolution when calculating anti-aliasing of lines.
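That internal-higher-resolution idea can be sketched as supersampled anti-aliasing (Python/NumPy for illustration; the function name `supersampled_line` is made up): render on a grid `factor` times finer, then box-average each factor x factor sub-square down to one output pixel.

```python
import numpy as np

def supersampled_line(h, w, factor=4):
    """Render a diagonal line on an internal grid `factor` times finer,
    then box-average each factor x factor sub-square down to one pixel."""
    H, W = h * factor, w * factor
    fine = np.zeros((H, W))
    for x in range(W):                        # 1-fine-pixel-wide diagonal
        y = int(round(x * (H - 1) / (W - 1)))
        fine[y, x] = 1.0
    # Each output pixel is the mean of its factor x factor sub-square
    return fine.reshape(h, factor, w, factor).mean(axis=(1, 3))

img = supersampled_line(4, 4)
# Diagonal output pixels get partial coverage values instead of hard 0/1.
```

The fractional coverage values are exactly the sub-pixel information the output pixel alone cannot represent.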
Imagine, for example, that you have pixels like
**
*?
***
compared to
*
*
?
*
*
where the ? is the pixel location to be filled in. Then you might choose to sub-divide each pixel into a finer grid and do curve interpolation from several pixels back, for all of the surrounding pixels, projecting which sub-squares of the unknown pixel would "probably" be gone through, and seeing how many such sub-squares and how far into the finer grid they go.
With the top array, only the sub-squares near the left of the pixel might get filled in because of the curve of the arrangement. That would leave most of the pixel empty, probably leading to a decision to leave the pixel empty or dim for anti-aliasing. But with the bottom array, the sharp lines project further in and it is likely you would choose a strong presence for the pixel.
This shows that even if you only have "perfect" information about a fixed resolution, you might be able to interpolate that information at a finer level in order to make decisions about what value to write into the final image.
And of course there is the case where you have a high-resolution image that is being transformed into a lower-resolution image. In such a case you do have information at higher resolution than the output image, and you can use that higher-resolution data to figure out what the "best" output is for each lower-resolution pixel. Imagine, for example, that you are going down in resolution by a ratio of 5 input pixels to 3 output pixels. Writing [x, y) for the semi-open interval that starts at x and ends just before y, your 5 inputs occupy
[1,2) [2,3) [3,4) [4,5) [5,6)
your outputs are going to have to reflect
[1, 2 2/3) [2 2/3, 4 1/3) [4 1/3, 6)
which requires that pixels be "logically" fractionally long in terms of where to pick up information for constructing the final output. This particular case can be handled by replicating each original pixel 3 times, like
[1 1 1 2 2 2 3 3 3 4 4 4 5 5 5]
and then dividing that into thirds
{1 1 1 2 2} {2 3 3 3 4} {4 4 5 5 5}
so you would calculate the mean of 3 times the first pixel and 2 times the second pixel to arrive at the first output pixel, and you would calculate the mean of the second pixel, 3 times the third pixel, and the 4th pixel, to arrive at the second output pixel, and you would take the mean of 2 times the fourth pixel and 3 times the fifth pixel to arrive at the third output pixel. This is a sub-pixel calculation process: you have information about fractions of a pixel that you will use to calculate the output pixel.
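The replicate-and-average scheme above can be sketched as (Python/NumPy rather than MATLAB, purely for illustration):

```python
import numpy as np

src = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # the 5 input pixels

fine = np.repeat(src, 3)                # replicate each pixel 3 times (length 15)
out = fine.reshape(3, 5).mean(axis=1)   # split into thirds and average each third

# Equivalent weighted means described in the text:
# out[0] = (3*src[0] + 2*src[1]) / 5           -> 1.4
# out[1] = (src[1] + 3*src[2] + src[3]) / 5    -> 3.0
# out[2] = (2*src[3] + 3*src[4]) / 5           -> 4.6
```

Replicating by 3 and grouping by 5 works because 15 is the least common multiple of the two lengths, so every fractional pixel boundary lands on a whole fine-grid boundary.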
Lucas on 2 Jul 2016 (edited)
There is a sophisticated and much more accurate, though traditional, approach to interpolating sub-pixels. Given:
where the upper-case letters represent the pixels at integer coordinates.
The sub-pixels can be defined by a 24-tap fractional DCT interpolation. It is an improvement on the one done in this paper:
My default sub-pixel precision is 1/64 (1/8 x 1/8). It will be done by step-by-step interpolation.
I'm not worried about processing time.
Here I have given an idea for improving the traditional sub-pixel estimation methods, but I am still forming my own idea of what the image is.
OK, fine. So you've now disproven your previous assertion that there is no location smaller than a pixel, as Walter and I already knew. You can now take those sub-pixel interpolated values and put them into a new array. Whether you call those elements "pixels" or just "elements" is a matter of semantics.
Lucas on 2 Jul 2016 (edited)
Alright, Image Analyst, your viewpoint isn't wrong.
I can, as I already wrote, consider a sub-pixel as belonging to the set of an HR image, and a pixel as the result of downscaling that HR image.
Let the pixel be P(a,b). I want to do sub-pixel interpolation by a factor of 2 (the pixel P(a,b) will be one element of the resulting 2 x 2 matrix; let's call it Mb). This means three new sub-pixels will be created, and the pixel P(a,b) will itself be seen as a sub-pixel.
Coming back to the definition in the first paragraph, I can say that the matrix Mb is a set of sub-pixels of P(a,b) if and only if the result of downscaling it is P(a,b) itself (my statement; I'm open to discussion).
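That consistency constraint can be sketched as follows, assuming "downscaling" means block-averaging (Python/NumPy purely for illustration; `consistent_with` is a made-up helper name):

```python
import numpy as np

def consistent_with(original, upsampled, factor=2):
    """Check the constraint: block-averaging the interpolated image by
    `factor` must reproduce the original pixel values exactly."""
    h, w = original.shape
    blocks = upsampled.reshape(h, factor, w, factor).mean(axis=(1, 3))
    return bool(np.allclose(blocks, original))

P = np.array([[10.0]])                                # a single pixel P(a, b)
Mb = np.repeat(np.repeat(P, 2, axis=0), 2, axis=1)    # trivial 2 x 2 replication
ok = consistent_with(P, Mb)       # True: the block mean is exactly 10

Mb2 = np.array([[8.0, 9.0], [11.0, 13.0]])            # block mean is 10.25
bad = consistent_with(P, Mb2)     # False: downscaling does not return P
```

Nearest-neighbour replication satisfies the constraint trivially; smoother interpolations generally do not unless their block means are corrected back to the original values.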
Yep, just use imresize() and you should be all set.
OK, suppose I interpolate an image. As I said, a set of pixels interpolated from P(a,b), including P(a,b) itself, is a set of its sub-pixels if and only if downscaling them gives back P(a,b). Is there a way I can do that?
Whether you scale up or scale down, if the new pixel locations don't lie on integer locations of your original image, then you will be interpolating image intensity values at sub-pixel locations. Yes, there is a way you can do that: it's with imresize(). Note that this only interpolates values; it does not give you better optical resolution, as if you had a better lens or a sensor with more pixels over the field of view. If you take a picture of the moon with your telescope at an optical resolution of 10 meters, and you digitally resample it to a "resolution" of 1 mm, it's not as if you'd now have a picture as sharp as if you'd scanned the moon with a microscope; it will look blurry.
You can get subpixel resolution but you have to have multiple images. As a simple example, if you had a picture of something, and then moved your sensor over in your camera, say with a piezoelectric translator, by half a pixel, then you could combine those images to come up with a new image that had twice the resolution of either one of those single images, subject to the limitations of the lens of course.
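An idealized, noise-free sketch of that two-exposure idea (Python/NumPy for illustration): two images sampled half a pixel apart interleave into one image with twice the resolution of either exposure.

```python
import numpy as np

# A finely sampled "scene"; each camera exposure sees only every other sample.
scene = np.sin(np.linspace(0, np.pi, 16))

img_a = scene[0::2]    # first exposure
img_b = scene[1::2]    # second exposure, sensor shifted by half a pixel

# Interleave the two half-pixel-shifted images into one
# double-resolution image.
combined = np.empty(scene.size)
combined[0::2] = img_a
combined[1::2] = img_b
```

In practice the shift is never exactly half a pixel and the samples are blurred by the lens, so real multi-frame super-resolution also involves registration and deconvolution; this sketch only shows why the extra exposure carries genuinely new information.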
