How can I convert RGB image to NTSC without using 'rgb2ntsc' command?

I tried the following code. But it did not give me the same output as 'rgb2ntsc'.
%RGBImage is the RGB image, which is an m*n*3 matrix
YIQ(:,:,1) = (0.299.*(RGBImage(:,:,1)) + 0.587.*(RGBImage(:,:,2)) + 0.114.*(RGBImage(:,:,3)));
YIQ(:,:,2) = (0.596.*(RGBImage(:,:,1)) - 0.274.*(RGBImage(:,:,2)) - 0.322.*(RGBImage(:,:,3)));
YIQ(:,:,3) = (0.211.*(RGBImage(:,:,1)) - 0.523.*(RGBImage(:,:,2)) + 0.312.*(RGBImage(:,:,3)));

4 Comments

Looks like the right formula, according to wikipedia. How much are they off? Do you want to attach your RGB image? And your code to show how they are different?
These are the images. NTSCbyCode.bmp is converted by the code mentioned above. NTSCbyMatlab.bmp is converted by MATLAB built-in function. rgbimage.bmp is the original image.
Why can't you use that built-in function? You haven't tagged this as homework, so why prevent yourself from using the convenient built-in function?
OP is long gone. The thread resurrection was my doing. I've just been prowling old threads and adding/improving answers where I think it might help (or where it interests me).


Answers (4)

You are using the correct approach. I have verified that both approaches give the same answer. Can you share the RGB image you tested with?
RGBImage=imread('frame_32.jpg');
YIQ(:,:,1) = (0.299.*(RGBImage(:,:,1)) + 0.587.*(RGBImage(:,:,2)) + 0.114.*(RGBImage(:,:,3)));
YIQ(:,:,2) = (0.596.*(RGBImage(:,:,1)) - 0.274.*(RGBImage(:,:,2)) - 0.322.*(RGBImage(:,:,3)));
YIQ(:,:,3) = (0.211.*(RGBImage(:,:,1)) - 0.523.*(RGBImage(:,:,2)) + 0.312.*(RGBImage(:,:,3)));
subplot(121), imshow(YIQ);title('Without rgb2ntsc')
subplot(122), imshow(rgb2ntsc(RGBImage));title('Using rgb2ntsc')

3 Comments

Thank you very much for your help. I have attached my files in the above comment. The images look the same, but the values are not the same.
From that book:
"In the NTSC format, image data consists of three components: luminance (Y), hue (I), and saturation (Q), where the choice of letters YIQ is conventional."
No it's not. I and Q do not represent hue and saturation. YIQ is a transformation of YUV where I represents the in-phase component of an electrical signal and Q represents the quadrature (90deg out of phase) component of the signal.
Like all the other luma-chroma models, you'd have to convert to cylindrical coordinates to get a representation of hue and chroma. If you want to get saturation out of a luma-chroma model, you'll have to normalize chroma, where the normalization limits are a function of both Y and H. At that point, you'd have something rather esoteric. Maybe you could call it HSY or something.
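To illustrate the point above, here is a minimal sketch of deriving hue and (unnormalized) chroma from I and Q by converting to cylindrical coordinates. It assumes the Image Processing Toolbox is available for rgb2ntsc(); 'peppers.png' is just a stand-in test image.

```matlab
% sketch: hue and chroma from the I and Q channels
rgbpict = im2double(imread('peppers.png'));
yiqpict = rgb2ntsc(rgbpict);

I = yiqpict(:,:,2);
Q = yiqpict(:,:,3);

H = atan2(Q,I);        % hue angle in radians
C = sqrt(I.^2 + Q.^2); % chroma (unnormalized; this is not saturation)
```

Note that C here is raw chroma; getting something resembling saturation would still require the Y- and H-dependent normalization described above.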


Convert your image to double first and then apply the formula. The rgb2ntsc function handles this conversion of its input internally, but we must do it manually.
RGBImg = imread('%yourImage.format%'); %Input to the rgb2ntsc function
RGBImage = double(imread('%yourImage.format%')); %Image used for the formula
YIQ(:,:,1) = (0.299.*(RGBImage(:,:,1)) + 0.587.*(RGBImage(:,:,2)) + 0.114.*(RGBImage(:,:,3)));
YIQ(:,:,2) = (0.596.*(RGBImage(:,:,1)) - 0.274.*(RGBImage(:,:,2)) - 0.322.*(RGBImage(:,:,3)));
YIQ(:,:,3) = (0.211.*(RGBImage(:,:,1)) - 0.523.*(RGBImage(:,:,2)) + 0.312.*(RGBImage(:,:,3)));
figure;subplot(121);imshow(uint8(YIQ));subplot(122);imshow(rgb2ntsc(RGBImg));
It can be a lot simpler.
rgbpict = imread('peppers.png');
rgbpict = im2double(rgbpict);
% pay attention to matching the class and scaling of all inputs to imapplymatrix()
A = [0.299 0.587 0.114; 0.5959 -0.2746 -0.3213; 0.2115 -0.5227 0.3112];
yiqpict1 = imapplymatrix(A,rgbpict);
% compare to existing tools
yiqpict2 = rgb2ntsc(rgbpict);
immse(yiqpict1,yiqpict2) % should be close to zero
ans = 2.1763e-09
A significant part (about an order of magnitude) of the error you're seeing in your own conversion is due to the improper rounding in your transformation matrix. The remaining negligible error in this example is due to the fact that rgb2ntsc() uses fewer digits in the transformation matrix and it does the transformation by matrix division with the inverse of A. There's bound to be a difference.
% use the inverse of the matrix used by rgb2ntsc()
A = inv([1.0 0.956 0.621; 1.0 -0.272 -0.647; 1.0 -1.106 1.703]);
yiqpict1 = imapplymatrix(A,rgbpict);
% compare again
immse(yiqpict1,yiqpict2) % even closer to zero
ans = 1.5210e-33
The above example will work if you have IPT and are using R2016b or newer. Otherwise, consider the general approach here:
clc; clear all; close all
A = imread('football.jpg');
% figure; imshow(A);
T = [1.0 0.956 0.621; 1.0 -0.272 -0.647; 1.0 -1.106 1.703].';
[so(1),so(2),thirdD] = size(A);
if (thirdD == 1)
A2 = double(A)/T;
else
A2 = reshape(reshape(double(A),so(1)*so(2),thirdD)/T,so(1),so(2),thirdD);
end
%A2 = im2uint8(mat2gray(A2));
% just as double class

5 Comments

Yes, that's exactly the code from rgb2ntsc(), excepting the casting that you added. With those changes though, you've lost half of both I and Q via truncation. This is what you'd get if you converted back to RGB. Note the loss of color information.
There's a reason why rgb2ntsc() uses im2double() instead of double(). That way, the data has a consistent scale, regardless of the input class. Since I and Q are nominally centered on zero, the output of rgb2ntsc() remains floating point. To pre-empt the common solution, simply offsetting by 128 will still result in data truncation, as both I and Q span a range wider than would fit between [0 255] when scaled as they are.
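As a sketch of that truncation, observe what happens to the chroma channels when the result is cast directly to uint8 (assumes IPT; 'peppers.png' is a stand-in image):

```matlab
rgbpict = im2double(imread('peppers.png'));
yiqpict = rgb2ntsc(rgbpict);

% I and Q are centered on zero, so casting to uint8 clips the
% entire negative half of each channel to 0
yiq8 = uint8(255*yiqpict);
min(yiq8(:,:,2),[],'all') % 0 -- the negative half of I is gone
```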
For example, the range of I would be
[-1 1]*0.5959*255 + 128
ans = 1×2
-23.9545 279.9545
So if you want to fit that in uint8, you're going to have to come up with some way to rescale the data.
Yes sir, maybe use mat2gray to rescale?
Normalizing to the data extents is easy, but now you have completely lost the reference for scaling. You can no longer reconstruct the image, since the normalization limits used to rescale it have been lost.
You also have no way of comparing your custom-scaled images with any other YIQ images or even to other images using this custom scaling, since the scaling will differ depending on the content of each image.
If you want to rescale to fit chroma data within a fixed interval, you have to use a fixed scaling factor (e.g. 1/(2*0.5959) for I). If you want to center the data at a specific value in that interval, you have to use a fixed offset (e.g. 128).
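A minimal sketch of that fixed-scale approach follows. The scale factors come from the row extents of the forward matrix above (0.5959 for I; for Q the extent works out to 0.5227); because they are fixed constants, the scaling is invertible and comparable across images.

```matlab
rgbpict = im2double(imread('peppers.png'));
yiqpict = rgb2ntsc(rgbpict);

% fixed scale factors derived from the transformation matrix rows
Y8 = uint8(255*yiqpict(:,:,1));
I8 = uint8(255*(yiqpict(:,:,2)/(2*0.5959) + 0.5)); % centered at 128
Q8 = uint8(255*(yiqpict(:,:,3)/(2*0.5227) + 0.5)); % centered at 128
yiq8 = cat(3,Y8,I8,Q8);
```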
That aside, I wasn't actually suggesting that nonstandard scaling is a good solution. You could use it, but you still wouldn't be able to compare yours to other YIQ images. The question is why the image data needs to be crammed into uint8 in the first place. If you really want your data in a luma-chroma model and you want it to be in uint8, do you really need YIQ? Why not just use YCbCr?
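For comparison, a sketch using YCbCr, which already has a standard integer-valued encoding (assumes IPT):

```matlab
rgbpict = imread('peppers.png');
ycc = rgb2ycbcr(rgbpict);  % uint8 in, uint8 out, standard scaling
rgbback = ycbcr2rgb(ycc);  % round-trip loses only quantization error
```

Because the scaling is part of the standard, these images remain comparable to any other YCbCr data without extra bookkeeping.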
Yes sir, maybe use double for analysis and uint8 to save?
I suppose it depends what you want to do with it. If everything you need to do is scale-independent, then using a uint8-scaled double would probably be fine. Plenty of people do it. It would still be essentially a custom convention for the image, so you wouldn't be able to compare it to a YIQ image in the standard scaling without rescaling one or the other first.


Asked on 30 Jul 2018
Edited: DGM on 5 Nov 2021
