How would they know that all human beings are most sensitive to EXACTLY 33 degrees from the V axis on the UV plane?
How would they know that red is 30% as bright as white, green 59%, and blue 11%? A lot of experimental research has gone into color spaces, starting with CIE XYZ.
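For reference, those weights are the standard published NTSC/Rec. 601 luma coefficients; a quick sanity check in Python:

Code:
# NTSC / Rec. 601 luma weights; they sum to 1, so white maps to full luminance.
KR, KG, KB = 0.299, 0.587, 0.114

def luma(r, g, b):
    # Relative luminance of an RGB triple under the NTSC weights.
    return KR * r + KG * g + KB * b

print(luma(1, 0, 0))  # 0.299 -> red is ~30% as bright as white
print(luma(0, 1, 0))  # 0.587 -> green ~59%
print(luma(0, 0, 1))  # 0.114 -> blue ~11%
print(luma(1, 1, 1))  # 1.0   -> white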
Psycopathicteen, it's really the other way around, as tepples alluded to. The planes are defined by human vision.
As for why you'd want YIQ, the point was to be backward compatible with black-and-white television sets. A black-and-white TV would only pick up the Y (intensity); color TVs would pick up everything.
Keep in mind that another reason television engineers did not want to broadcast the raw RGB values (or rather, green as luminance, G-R, and G-B) has to do with bandwidth and noise. The matrices "tilt" the primary color magnitudes in favor of what the human visual system actually sees. The noticeable effect of bumping, say, 5 counts of red up or down on the familiar 0-255 scale depends on what values of red, green, and blue you start with: sometimes it makes the color appear to shift, sometimes it makes the intensity brighter or darker, and sometimes some combination of both.
YIQ and friends try to make increases or decreases in value "more linear," or at least more consistently meaningful, across the range: "is it more purplish or more greenish?" on one axis, and "is it more orangish or more bluish?" on the other, perpendicular axis.
The bandwidth (or rather, the maximum amount the value can change over a given period of time) of the I and Q signals can be much, much less than that of the luminance, because the human visual system responds far more to lightness and darkness than to shade of color. Again, this was important for maximizing the number of over-the-air television broadcast channels in the allotted FCC frequency range.
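For concreteness, here is the standard FCC RGB-to-YIQ matrix (published coefficients, rounded), with the over-the-air bandwidth each row was allotted noted in comments; a minimal sketch, not broadcast-grade code:

Code:
import numpy as np

# FCC NTSC RGB -> YIQ (standard published coefficients, rounded).
# Over the air: Y got ~4.2 MHz, I ~1.3 MHz, and Q only ~0.4 MHz, because
# the eye tolerates blur along the green-purple (Q) axis best.
RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],   # Y: luma
    [0.596, -0.274, -0.322],   # I: roughly orange <-> cyan ("warm/cool")
    [0.211, -0.523,  0.312],   # Q: roughly green <-> purple
])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

print(rgb_to_yiq([1.0, 0.0, 0.0]))  # saturated red: Y = 0.299, I and Q positive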
I know that in the YUV color space they chose to multiply (B-Y) by .492 and (R-Y) by .877, so that the composite (C+Y) signal never goes below -.333 or over 1.333.
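A quick numeric check of that claim, scanning the eight fully saturated corner colors and assuming the peak chroma amplitude sqrt(U^2 + V^2) rides on top of Y:

Code:
from itertools import product
from math import sqrt

KR, KB = 0.299, 0.114
lo = hi = 0.0
for r, g, b in product((0.0, 1.0), repeat=3):
    y = KR * r + (1 - KR - KB) * g + KB * b
    u = 0.492 * (b - y)           # scaled B-Y
    v = 0.877 * (r - y)           # scaled R-Y
    c = sqrt(u * u + v * v)       # peak chroma subcarrier amplitude
    lo = min(lo, y - c)
    hi = max(hi, y + c)

print(lo, hi)  # ~ -0.333 and ~ 1.333, as claimed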
The IQ plane, I know, is the same as the UV plane but rotated 33 degrees, so I (warmness/coolness) gets higher "bandwidth" than Q (greenness/purpleness). But what makes that particular shade of orange/blue define how warm or cool a color is? Why 33, and not 30?
Another thing I can't figure out is: if .114 grey is supposed to be the same brightness as pure blue, why does .114 grey look significantly darker on every computer screen I can find? Is there something in the human brain that makes unsaturated colors look darker than saturated colors of the same luminance?
I have two guesses as to what I and Q were supposed to represent.
1) The folks at NTSC liked the color RGB(1, .4, 0) and rounded the phase off to the nearest degree.
2) The I axis is supposed to be .299(R-Y) - .114(B-Y) multiplied by a scale factor, but from playing with my calculator, it seems like it is closer to 34 degrees than 33 degrees.
EDIT: I just tried I = [.30(R-Y) - .11(B-Y)]/.41 and that comes out very close to 33 degrees.
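Redoing that calculator experiment in code (the helper below is just for illustration: it converts a proposed a*(R-Y) + b*(B-Y) axis into UV coordinates and measures its angle from the V axis):

Code:
from math import atan2, degrees

# U = 0.492*(B-Y) and V = 0.877*(R-Y), so an axis a*(R-Y) + b*(B-Y)
# points along (b/0.492, a/0.877) in the UV plane.
def angle_from_v(a, b):
    return abs(degrees(atan2(b / 0.492, a / 0.877)))

print(angle_from_v(0.299, -0.114))  # ~34.2 degrees: the guess in 2)
print(angle_from_v(0.30,  -0.11))   # ~33.2 degrees: the EDIT's version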
Whatever it is, if you zero the Q component, you end up with the orange-and-teal color palette of modern movies.
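A minimal sketch of that effect (standard matrix values, inverse computed numerically; zeroing Q collapses every color onto the single orange <-> teal axis):

Code:
import numpy as np

RGB_TO_YIQ = np.array([
    [0.299,  0.587,  0.114],
    [0.596, -0.274, -0.322],
    [0.211, -0.523,  0.312],
])
YIQ_TO_RGB = np.linalg.inv(RGB_TO_YIQ)

def zero_q(rgb):
    # Round-trip RGB through YIQ with the Q component forced to zero.
    y, i, _q = RGB_TO_YIQ @ np.asarray(rgb, dtype=float)
    return YIQ_TO_RGB @ np.array([y, i, 0.0])

print(zero_q([0.0, 1.0, 0.0]))  # pure green comes back cyan/teal-ish
print(zero_q([1.0, 0.0, 1.0]))  # magenta comes back orange-ish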
Do you think that perhaps, originally, someone in post-processing was red-green colorblind when they came up with this, somewhat by accident? And he was just doing what it took to make it look best according to how his vision saw things? Because that's what I see the orange-teal contrast doing.
/looks around
Waaah! Tepples, the contrast in your avatar is...
AHH, OH NO, the contrast in my avatar is...
AAAAAAAH! The color scheme of this website is...
it's everywhere!
/runs away screaming
Another question: doesn't the QAM modulation of I and Q defeat the purpose of having them filtered by different amounts? Did they originally plan to modulate the luma's amplification high during the +Q and -Q phases, and low during +I and -I, because the I signal is more likely to cause noticeable dot crawl than Q due to its larger bandwidth? Did any television do what I just described?
Bandlimiting Q more means that sometimes (at certain horizontal positions) more bandwidth can be recovered into Y, especially before comb filters were widely used. It means that medium-resolution Y content (3.6 - 0.6 ≈ 3 MHz, or ~275 pixels per visible period) can be represented by shifting it horizontally by a quarter of a colorburst cycle, or about 1/5th of a pixel.
I'm not certain what you mean by "Did they originally plan to modulate the luma's amplification high during the +Q and -Q phases, and low during +I and -I, because the I signal is more likely to cause noticeable dot crawl than Q due to its larger bandwidth?" It sounds like you're suggesting modulating some part of the Y signal at 7.2 MHz? But the bandwidth for OTA transmission was limited to about 6 MHz, so that's not it.
The signal looks like this:
Y+I, Y+Q, Y-I, Y-Q
if you multiply this by this wave:
0, 1, 0, 1
you get this:
0, Y+Q, 0, Y-Q
when you use this convolution filter:
.5, .5, .5, .5
you get this:
Y, Y, Y, Y
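A toy numeric version of the steps above, assuming constant Y, I, and Q over each colorburst cycle (the second case shows what happens when that assumption fails):

Code:
import numpy as np

# Four samples per colorburst cycle: Y+I, Y+Q, Y-I, Y-Q, repeated.
Y, I, Q = 0.5, 0.2, 0.1
signal = np.tile([Y + I, Y + Q, Y - I, Y - Q], 4)
mask   = np.tile([0.0, 1.0, 0.0, 1.0], 4)
gated  = signal * mask

# 4-tap boxcar over one colorburst cycle (0.5 per tap compensates for
# half the samples having been zeroed out):
kernel = np.array([0.5, 0.5, 0.5, 0.5])
print(np.convolve(gated, kernel, mode="valid"))  # all 0.5 = Y; Q cancels

# But if Y varies within a colorburst cycle, the detail is lost:
y_varying = np.tile([0.2, 0.8, 0.2, 0.8], 4)
gated2 = (y_varying + np.tile([I, Q, -I, -Q], 4)) * mask
print(np.convolve(gated2, kernel, mode="valid"))  # constant 0.8; the 0.2/0.8 alternation is gone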
1- You're assuming that Y is constant for the entire colorburst cycle. It's not. (If it were, Y resolution would be a mere 164 pixels; the 256-pixel and 320-pixel modes of, e.g., the NES and Genesis are counterexamples.)
2- I still don't understand your original question.
Quote:
1- You're assuming that Y is constant for the entire colorburst cycle. It's not. (If it were, Y resolution would be a mere 164 pixels; the 256-pixel and 320-pixel modes of, e.g., the NES and Genesis are counterexamples.)
I'm not assuming that Y is constant. That is the way LPFs work: they get rid of higher frequencies by adding together delayed versions of the signal.
Quote:
it sounds like you're suggesting modulating some part of the Y signal at 7.2 MHz?
I meant turning Y "on" during the +Q and -Q phases and "off" during the +I and -I phases, and low-pass filtering it, so that high-frequency orange-blue information doesn't leak into luma while still keeping a high enough bandwidth for Y.
psycopathicteen wrote:
I'm not assuming that Y is constant. That is the way LPFs work: they get rid of higher frequencies by adding together delayed versions of the signal.
But it's not done via a boxcar lowpass filter as in your example. And even after the switch from analog to digital demodulation, they were still going to use a filter with better performance than a boxcar, such as a Chebyshev II or a Butterworth.
Quote:
Quote:
it sounds like you're suggesting modulating some part of the Y signal at 7.2 MHz?
I meant turning Y "on" during the +Q and -Q phases and "off" during the +I and -I phases, and low-pass filtering it, so that high-frequency orange-blue information doesn't leak into luma while still keeping a high enough bandwidth for Y.
So, yes, you're suggesting modulating luma at 7.2 MHz (and then passing both the baseband and the 7.2 MHz-modulated copies) and then lowpassing to get rid of some of the higher-frequency image. No, they didn't do that; it just destroys information.
For proper OTA broadcast, the I and Q portions are bandlimited before modulation, so orange-blue information doesn't usually leak into luma with a matched receiver. Proper demodulation of Y includes subtracting the remodulated I and Q, and that's all that's necessary to take advantage of the narrower bandwidth of Q. But since we have clearly seen instances of both luma->chroma and chroma->luma crosstalk, the last important part is this: the color subcarrier is chosen such that it's 180 degrees out of phase on every scanline and every field. So a light spot due to chroma->luma interference will be dark on the scanlines above and below, and also on the following field.
Given that last bit, it is clear why one would think 2D or 3D demodulation of color would improve performance.
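A sketch of the simplest such comb (a hypothetical two-line version, assuming the adjacent scanlines carry the same picture content so only the chroma sign differs):

Code:
import numpy as np

def two_line_comb(line_above, line_below):
    # Chroma is 180 degrees out of phase between adjacent scanlines, so the
    # sum cancels chroma (leaving luma) and the difference isolates chroma.
    return (line_above + line_below) / 2.0, (line_above - line_below) / 2.0

t = np.arange(16)
y = 0.5 + 0.1 * np.sin(2 * np.pi * t / 16)  # slowly varying luma
c = 0.2 * np.cos(2 * np.pi * t / 4)         # subcarrier, 4 samples per cycle
luma, chroma = two_line_comb(y + c, y - c)  # second line has chroma inverted
print(np.allclose(luma, y), np.allclose(chroma, c))  # True True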
How do you calculate a Butterworth filter?
That's kind of tangent-y, but do you mean "how do you apply one to discrete-time samples?" or "how do you design one?" or "how do you apply one to analog signals?"
Regardless, the Wikipedia page seems like a good place to start.
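For the discrete-time case mentioned above, scipy makes the design step a single call; a minimal sketch (the sample rate, cutoff, and order here are example values, not anything from a real TV standard):

Code:
import numpy as np
from scipy.signal import butter, filtfilt

fs = 14.318e6   # example: 4x the NTSC colorburst frequency
cutoff = 0.6e6  # example: roughly the Q bandwidth discussed above

# 4th-order lowpass Butterworth: maximally flat passband, monotone rolloff.
b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")

x = np.random.randn(1000)  # stand-in signal
y = filtfilt(b, a, x)      # zero-phase filtering, just to show usage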