I know that this question has potential for hotly contested debate, but...
Which NES emulator has a palette that best matches a properly tuned TV? I test my game in FCEUX, as I like its debugger. But my beta-tester uses Nestopia and says that the colors are too dark (I'm about to download and test Nestopia myself). I also test in Nintendulator. What looks bright blue in FCEUX looks purple in Nintendulator.
I'd like to know which emulator I should use when tweaking my game's palette choices.
ps- I've experimented with FCEUX's "config -> palette -> NTSC Color Emulation", but I can't get it to match nintendulator's without loading a custom palette.
So what are the thoughts of the NTSC world?
My Vizio TV has picture-in-picture that can be set to inset or side-by-side mode. So I hooked a laptop up to VGA input and an NES up to composite input and ran an "all colors" demo on both. I was able to get Nestopia to make a nearly indistinguishable image.
The problem with the game you're talking about is that $01-$0C are very dark colors. Their luminance is halfway between $0D (superblack) and $00 (medium gray), so small areas of them are hard to notice on a $xF (black) background.
Would you kindly post/share/email your nestopia PAL file?
I want to try to load it into FCEUX.
1. Download Nestopia
2. Options > Video > Palette > Editor > YUV > Save As
The result is this.
And remember that the NTSC filter in Nestopia kind of blurs things, much as an actual TV would.
Even if it's not for the colors, you should run your software through Nestopia. In my own personal experience it has proven to be very accurate, sometimes even more so than Nintendulator. Combined with the NTSC filter, it makes a great tool for previewing programs, losing only to the real thing.
Accuracy-wise, FCEUX sucks, the only reason I use it is because of the debug tools. Nintendulator's CPU debugger is pretty good too, but PPU debugging not so much (and it's slow as hell).
Anyway, I think all 3 emulators have an important part in development, and I always use them during all my development sessions, and every week or so I test on real hardware, just to make sure everything is OK.
Thank you both for your input.
Are there any other emulators worth trying?
I know that nesticle has a horrible reputation for accuracy. But I did like that one could use it to visualize the waveforms on the 5 APU channels in real time. Are there any better emulators that can do the same (in real time)?
ps- I have used audacity to analyze sounds before, but never in conjunction with an NES emulator. It's kinda funny.. My company has a Sun (now Oracle) J4400 disk array. It began making a high-pitched beeping sound, but it was in a server room filled with other servers, so the entire room sounded like a jet engine. Sun support had no idea why the J4400 would beep. The tech said that he was unaware that it could. So I had a support tech at the colo record the sound with his cell phone. I used audacity to separate the white noise from the background and submitted the wave-form to second level support. The J4400 was making a 2600 Hz tone (which I found funny for other reasons).
clueless wrote:
I know that nesticle has a horrible reputation for accuracy. But I did like that one could use it to visualize the waveforms on the 5 APU channels in real time. Are there any better emulators that can do the same (in real time)?
Some NSF players can.
tokumaru wrote:
Accuracy-wise, FCEUX sucks, the only reason I use it is because of the debug tools. Nintendulator's CPU debugger is pretty good too, but PPU debugging not so much (and it's slow as hell).
I think the PPU debug window performance problems were fixed in the latest beta. FCEUX nametable viewer causes a BSOD on my Windows.
You should try out Jnes. I loved Nestopia and used to use it a lot, but the sound on Jnes blows it out of the water. Other than that it looks a lot more bare (not as many features as Nestopia). For example, it can only do an 8-pixel wrap, while Nestopia lets you configure the screen.
But overall it just sounds more crisp; Nestopia sounds slightly muffled when you compare it. Try it yourself with Tetris Theme #1 and you'll be rocking out in no time to Jnes.
Also it has a video filter that looks identical to Nestopia and ZSNES. You need to select 'HQ4X' under drawing method. The Nestopia YUV palette does look sweet, I ain't gonna lie; I tried to match it with Jnes the best I could. Under custom palettes select BMFFINER3.pal; most games look exactly the same, minus maybe one color that will be slightly different. Honestly, if you don't have them side by side you wouldn't be able to notice a difference from opening them both.
I'm not sure what other features you would need other than better pixel wrapping.. Video and sound are my biggest gripes. Anyway, check it out. I would also recommend ZSNES for Super Nintendo and DOSBox for any Windows games that are too old to install onto your computer. With DOSBox you might need to adjust the 'cpu emulation speed'. By default it starts off pretty low, like 3000 cycles, but I remember my games running faster (depending on how fast your CPU was), so I would crank it up to around 8-10k cycles. Hope this info helps.
-Game on brotha, game on
Necrobumped a thread just to promote an emulator with known inaccurate audio?
I just tried Jnes and the audio has noticeable aliasing. Tried Nestopia and it's clean.
thefox wrote:
tokumaru wrote:
Accuracy-wise, FCEUX sucks, the only reason I use it is because of the debug tools. Nintendulator's CPU debugger is pretty good too, but PPU debugging not so much (and it's slow as hell).
I think the PPU debug window performance problems were fixed in the latest beta. FCEUX nametable viewer causes a BSOD on my Windows.
It did that on my school's PCs too.
I use FCEUX for general development because of the ease of debugging, and Nintendulator and Nestopia (mainly Nestopia) for accuracy tests. Mainly Nestopia because Nintendulator sometimes doesn't do bankswitching right, and it has a bankswitch, PPU, and CPU problem with one of my current projects that works perfectly on an NES (iirc, I haven't worked on it in a little while), although Nestopia also has the PPU problem, oddly. FCEUX gets the splits wrong on the same project, but bankswitches and runs perfectly. Weird combo between all 3 "best" emulators.
Meh, maybe, but old Nestopia still sounds a tad muddy to me. 'To each their own'.. As for the necro work, these ROMs have been long dead and need all the help from Google web searches for their revival.
-Game on None-Troll-Brothas, game on
dlock wrote:
Meh, maybe, but old Nestopia still sounds a tad muddy to me. 'To each their own'.. As for the necro work, these ROMs have been long dead and need all the help from Google web searches for their revival.
-Game on None-Troll-Brothas, game on
For many versions, Nesticle used the wrong waveform entirely for the 50% duty cycle square.
mikejmoffitt wrote:
For many versions, Nesticle used the wrong waveform entirely for the 50% duty cycle square.
I wonder if that was a stylistic choice... it still bothered me though.
On the subject of emulator palettes vs real palettes:
Forget it. A 100% accurate reproduction of the NTSC NES's palette is impossible because the NTSC NES runs on the YIQ color space, and most computers I know of use RGB. YIQ is capable of generating colors that cannot be reproduced with RGB, and indeed, the blues and purples of the NES are out of gamut.
The only reason I'm so pessimistic on this is because I spent way too long trying to make it work anyway.
This is what I came up with, and although it's not perfect, it looked close enough when I compared it against my CRT TV which is sitting right next to me.
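To make the out-of-gamut point concrete, here's a tiny Python sketch using the commonly cited FCC YIQ-to-RGB matrix; the sample Y/I/Q values are made up for illustration, not measured NES output:

```python
# Rough sketch: a saturated blue-ish YIQ color pushed through the usual
# YIQ -> RGB matrix. The input values are illustrative, not NES measurements.
def yiq_to_rgb(y, i, q):
    r = y + 0.956 * i + 0.621 * q
    g = y - 0.272 * i - 0.647 * q
    b = y - 1.106 * i + 1.703 * q
    return r, g, b

print(yiq_to_rgb(0.5, -0.3, 0.3))   # roughly (0.40, 0.39, 1.34): B lands well above 1.0
```

Anything above 1.0 has nowhere to go on an sRGB display except clipping or desaturation.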
Quote:
the NTSC NES runs on the YIQ color space, and most computers I know of use RGB. YIQ is capable of generating colors that cannot be reproduced with RGB, and indeed, the blues and purples of the NES are out of gamut.
...but TVs convert everything to RGB for the electron guns, so a computer can reproduce that aspect.
Reasons for inability to reproduce that come to mind:
- People remember the colors different than they were
- Different TVs produced different colors
- The colors depend on horizontally-adjacent colors, not just the pixel's palette index.
TVs do convert to RGB, but given the analog nature of the electronics used, the values don't need to be clamped to 100%. You can't output a blue that's less than 0, but you can output a blue that's greater than 255, and that's what happens on my TV, at least.
When I plug my NES into my LCD TV (where the RGB channels do need to be finite), the palette gets really ugly clamping, especially on the out-of-gamut colors, such as $22, $13, and $23. (if I recall correctly)
Then it should be possible to make a palette where white is something less than 255, to give a bit of headroom for out-of-gamut blues.
I'm pretty sure that I have the NES NTSC filter do this, so that white isn't 1.0, since some of the blues go beyond that. In other words, if an RGB TV can do it, so can an emulator.
tepples wrote:
Then it should be possible to make a palette where white is something less than 255, to give a bit of headroom for out-of-gamut blues.
I tried it, the palette comes out uncomfortably dark (or uncomfortably desaturated). I tried a couple of methods to clip the OOG colors. One was a hue-preserving clip (if B is clipped but R and G are left alone, then the hue shifts), but the color lost its brilliance. I'd have to set this simulation back up to remember exactly what happened, but it definitely looked a lot better than just a simple RGB clip, even if the saturation was wrong. I tried a hue and saturation preserving clip, and the color kept its brilliance, but the luminance was obviously wrong compared to the rest of the colors.
No matter what I did, I couldn't get the palette to look "right" unless I cranked the generator's brightness way down, which would require me cranking my display's brightness up to cancel it out. This is basically the same as displaying a color that is brighter than 1.0, which gave me the idea that I'd never be able to attain the exact palette I want, unless LCD technology changed somehow.
Then again, maybe I think it's impossible because I was completely alone when trying to solve this.
Drag wrote:
On the subject of emulator palettes vs real palettes:
Forget it. A 100% accurate reproduction of the NTSC NES's palette is impossible because the NTSC NES runs on the YIQ color space, and most computers I know of use RGB. YIQ is capable of generating colors that cannot be reproduced with RGB, and indeed, the blues and purples of the NES are out of gamut.
I intend one day, when I build some computer, for its NES/Famicom emulator to have a mode that outputs the NTSC signal directly, to result in 100% accurate colors (and to emulate the RGB Famicom when making an RGB output signal). I don't know when. But you can do the same thing in any emulators you make, if you are able to make them output NTSC signals! The other idea is to give the emulator an RGB emulation mode, using the RGB palette and timing and so on (I think there is one extra dot, or one dot missing, or something like that?), so that the game could be written to detect NTSC/PAL/RGB/Dendy mode.
I took another stab at a palette generator. Yes, I know, there's a dozen of these already. However, I'm taking a different approach: I'm converting YIQ to various CIE color spaces (using the physical red, green, and blue as defined by the NTSC specification), and converting from CIE to RGB. When I say I want a palette that looks like my TV, I'm being absolutely serious, and I'm doing all I can to figure out how to correctly simulate a CRT's colors. The only issue is that sRGB's gamut is absolutely piss-poor by comparison, since a lot of colors wind up out of range unless you drop the saturation or contrast down.
Also, if you view source, please excuse how sloppy my coding was. :S
You're saying that the gamut of a device that merely has a brighter picture is wider? I think you might be saying that if you treat 1.0,1.0,1.0 as white in sRGB, you get a poorer gamut than if you treat, say, 0.5,0.5,0.5 as white. It seems you're saying a TV basically does the latter, so that it can do super-saturated colors, but also compensates by making the screen really bright so that 0.5 white appears bright. But everything in the sRGB world treats 1,1,1 as white, so you can't just crank up your monitor's brightness and use 0.5,0.5,0.5 as white, since nothing else will and thus the screen will blind you, save for your emulator. Throw in scanline emulation and you darken the picture more, prompting even more need to crank up the monitor's brightness.
There is a scanline emulation algorithm that doesn't darken as much, which involves a form of bloom. I can explain more later. But I do remember the Dreamcast's license screen using 0.75,0.75,0.75 for white.
So my current project is to try to simulate the three colored phosphors in CIELuv. The FCC specification specifies three coordinates on the CIE color space. Red, green, and blue are defined as (.67, .33), (.21, .71), and (.14, .08).
I chose CIELuv because I read somewhere that CIELab is geared more towards colored surfaces and dyes, and CIELuv is a better approximation of colored lights specifically (i.e., CRT displays).
I've converted the R, G, and B points (called "primaries") to their CIELuv equivalents. What I do is I convert YIQ to RGB, and then starting at the white point (FCC defines it as C, (.3101, .3161), but SMPTE defines it as D65 (.3127, .3290)), I treat the primaries as vectors that radiate from the white point, and I use R, G, and B from YIQ->RGB as magnitudes for the vectors. The resulting point is the color I need, but this is where I get stuck.
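For reference, this is roughly what that xyY → XYZ → CIELuv setup looks like in code (a minimal sketch; the helper names are mine, not from any library):

```python
# Sketch of converting the FCC primaries and white point C from xyY to XYZ and
# then to CIELUV, as described above. Function names are made up for illustration.
def xyY_to_XYZ(x, y, Y=1.0):
    X = x * Y / y
    Z = (1.0 - x - y) * Y / y
    return X, Y, Z

def XYZ_to_Luv(X, Y, Z, white):
    Xn, Yn, Zn = white
    def uv(X, Y, Z):
        d = X + 15.0 * Y + 3.0 * Z
        return 4.0 * X / d, 9.0 * Y / d
    un, vn = uv(Xn, Yn, Zn)
    u, v = uv(X, Y, Z)
    yr = Y / Yn
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > (6.0 / 29.0) ** 3 else (29.0 / 3.0) ** 3 * yr
    return L, 13.0 * L * (u - un), 13.0 * L * (v - vn)

white_C = xyY_to_XYZ(0.3101, 0.3161)                       # FCC white point C
red_Luv = XYZ_to_Luv(*xyY_to_XYZ(0.67, 0.33), white_C)     # FCC red primary
```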
I was translating the result point back into CIEXYZ, and then converting XYZ -> outputRGB (where outputRGB is the color being sent to the screen), and this gave me some pretty nifty results (you can see it in the HTML5 app I made), but this creates some problems.
The main one: I need to be able to apply gamma curves to R, G, and B. If I apply it to outputRGB's R, G, and B, I get an approximation, but it isn't exactly "correct", because what I should be doing is applying the gamma curve to the "simulated" R, G, and B before converting it to outputRGB. I can't apply the curves to YIQ->RGB's R, G, and B, because those represent the difference between white and color, so instead of the individual channels getting darker, they just become unsaturated, and that's not correct.
Another observation I've made; it seems that the NES's palette is hue-shifted. If I just use a raw YIQ->RGB conversion, the hues are off by a little bit. Color x8 indeed has the colorburst hue, but on my television, it's yellow-orange instead. Indeed, other colors end up wrong too, but by shifting the hues over just a fraction, everything looks correct again. Why? I don't know. Maybe televisions shift the hues themselves, maybe the NES somehow shifts the colorburst phase by a fraction of a clock (unlikely).
Finally, most of the NES's colors are out of the sRGB gamut used by computers. I'm unsure of a good method to perform gamut-mapping; what I do right now is I desaturate the color until its R, G, and B components are all in range, and although the luminance is correct, the color itself ends up washed out.
I doubt anyone here knows about anything I've just said, but if anyone else is more familiar with gamuts, and color spaces, and all sorts of this stuff, I would really appreciate your input.
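As a concrete illustration of the "desaturate until R, G, and B are all in range" clip mentioned above, here's a rough sketch; it blends toward a gray of the same luma and assumes the luma itself is already in range:

```python
# Sketch of gamut clipping by desaturation: keep the color's luma, shrink its
# chroma just enough that every channel lands back inside [0, 1].
def desaturate_into_gamut(r, g, b):
    # NTSC luma weights; assumes the luma itself is already within [0, 1]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    t = 1.0                       # fraction of the original chroma we can keep
    hi = max(r, g, b)
    lo = min(r, g, b)
    if hi > 1.0:
        t = min(t, (1.0 - y) / (hi - y))
    if lo < 0.0:
        t = min(t, (0.0 - y) / (lo - y))
    return tuple(y + t * (c - y) for c in (r, g, b))

print(desaturate_into_gamut(0.40, 0.39, 1.34))   # pulls the over-bright blue back to 1.0
```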
blargg wrote:
Reasons for inability to reproduce that come to mind:
- People remember the colors different than they were
- Different TVs produced different colors
- The colors depend on horizontally-adjacent colors, not just the pixel's palette index.
Also different board revisions potentially generate different amounts of noise in the output signal?
tepples wrote:
Then it should be possible to make a palette where white is something less than 255, to give a bit of headroom for out-of-gamut blues.
Supposedly in NTSC black should be 16 and white should be 235. No idea from where the hell this comes (blacker than black and whiter than white are used for blanking signals but no sane video encoder should allow those for normal colors for starters), but that's the gist of it.
Sik wrote:
Supposedly in NTSC black should be 16 and white should be 235. No idea from where the hell this comes (blacker than black and whiter than white are used for blanking signals but no sane video encoder should allow those for normal colors for starters), but that's the gist of it.
Well, http://en.wikipedia.org/wiki/YCbCr, for starters; values of 16 and 235 aren't far enough away from 0 and 255 to also be able to encode sync or Y+C (140 IRE) in band.
Different TVs produce different colors, yes. However, they all seem to consistently render color x8 as yellow-orange, and not as a sickly green-yellow. HDTVs are the only TVs I've seen that render color x8 as something other than yellow.
The out-of-gamut blues are extremely out of gamut. As in, white needs to almost be 150 before the blues are completely in-gamut. I mean, if you don't mind having a palette that dark, go ahead and knock yourself out.
Drag wrote:
Different TVs produce different colors, yes. However, they all seem to consistently render color x8 as yellow-orange, and not as a sickly green-yellow. HDTVs are the only TVs I've seen that render color x8 as something other than yellow.
I had a TV that could render yellow as green. You had to mess with the hue setting, though. I wouldn't be surprised if HDTVs screw up at the hue values...
Drag wrote:
The out-of-gamut blues are extremely out of gamut. As in, white needs to almost be 150 before the blues are completely in-gamut. I mean, if you don't mind having a palette that dark, go ahead and knock yourself out.
How the hell did that even work? o_O
Sik wrote:
Drag wrote:
The out-of-gamut blues are extremely out of gamut. As in, white needs to almost be 150 before the blues are completely in-gamut. I mean, if you don't mind having a palette that dark, go ahead and knock yourself out.
How the hell did that even work? o_O
It's only RGB that's limited to 0-255. CRT televisions don't operate on RGB, they operate on YIQ, which can produce colors that are WAY out of RGB range, and because this is analog electronics we're talking about, there's no problem with it. (Phosphors don't have limits like that; they just shine brighter and brighter the more energy you give to them)
So far, I've been treating YIQ like a color wheel, because that's basically what it is. I = sin(hue), Q = cos(hue). So, I figured out which "angle" is supposed to be colorburst, and lo and behold, if you do a simple YIQ->RGB conversion, you get that pale green color I keep talking about. However, if you do YIQ->CIELuv->RGB, you get yellow. So I'm not crazy, maybe the NES actually does output a pale green color for color 8, and it's just the physics of colored lights mixing together (or the definition of the NTSC color primaries) that turns it into yellow.
So I set color x8 as colorburst, and computed the rest of the hues from there. I haven't uploaded my latest version of the app yet, but I believe the hue setting should be spot on now.
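For illustration, the "color wheel" view above can be written out like this; the reference angle for colorburst is a placeholder, not a measured value:

```python
# Sketch: hue index as a phase on the color wheel, 30 degrees per step,
# with $x8 pinned to the colorburst phase. BURST_DEG is an assumption.
import math

BURST_DEG = 180.0    # placeholder reference angle for the colorburst phase

def hue_to_iq(hue_index, saturation=1.0):
    angle = math.radians(BURST_DEG + (hue_index - 8) * 30.0)
    return saturation * math.sin(angle), saturation * math.cos(angle)

print(hue_to_iq(8))   # exactly the burst phase
print(hue_to_iq(9))   # 30 degrees away
```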
--------------------------------------------------
I noticed a quirk. I went into my TV's settings and turned the saturation all the way down (my TV is capable of completely disregarding colorburst). Every 4th color (starting with color x4) has a slightly lighter luminance than the other colors. This is completely without chroma (as far as I know), so the luminance signals for those colors are slightly lighter? Does this happen for anyone else?
---------------------------------------------------
NTSC buffs, please help me with this one. I know that the hue of the picture is relative to the hue of the colorburst. Is the saturation of the colors relative to the amplitude of the colorburst too?
What happens if the colorburst has a DC offset? Does that have any effect on the colors? I hypothesized that a positive DC offset on the colorburst would lighten the colors near the colorburst hue, and darken the colors near its complement, however I don't know if this actually happens.
The reason I'm thinking about this is because the NES may have a DC bias on its colorburst signal. If it does, and it affects the output, then I need to take it into account.
Drag wrote:
I noticed a quirk. I went into my TV's settings and turned the saturation all the way down (my TV is capable of completely disregarding colorburst). Every 4th color (starting with color x4) has a slightly lighter luminance than the other colors. This is completely without chroma (as far as I know), so the luminance signals for those colors are slightly lighter? Does this happen for anyone else?
I wonder whether that has anything to do with the tint bits, which attenuate the signal during those phases. Or it might have something to do with eight phase units (eight half cycles of the master clock) equaling one pixel, causing intermodulation between the pixel clock and chroma subcarrier.
Drag wrote:
NTSC buffs, please help me with this one. I know that the hue of the picture is relative to the hue of the colorburst. Is the saturation of the colors relative to the amplitude of the colorburst too?
Yes.
Quote:
What happens if the colorburst has a DC offset? Does that have any effect on the colors?
Supposedly, nothing. Macrovision uses a DC offset during the back porch to defeat VCRs' AGC. However, this does imply that colors will be different when routed through a VCR (depending on the VCR, possibly only in recordings, or possibly all the time).
If anything were to happen, it would be the AGC behavior I mentioned: displacing the black level up, and so increasing contrast and darkening already-dark colors. There should be no phase dependence.
Quote:
The reason I'm thinking about this is because the NES may have a DC bias on its colorburst signal. If it does, and it affects the output, then I need to take it into account.
The voltages on the wiki are colorburst low: 1V, colorburst high: 1.712V, black: 1.3V, and the average of the first two is 1.356V. So it sounds like yes, there's a slight DC offset.
Drag wrote:
It's only RGB that's limited to 0-255. CRT televisions don't operate on RGB, they operate on YIQ, which can produce colors that are WAY out of RGB range, and because this is analog electronics we're talking about, there's no problem with it. (Phosphors don't have limits like that; they just shine brighter and brighter the more energy you give to them)
Well, it's 8-bit digitized RGB that uses 0-255 to represent some range of brightness for each component. The brightness of 255 isn't absolute. TVs use RGB as well, converting RF/composite/component into RGB before the amplifiers. At that point, they are voltages from 0 to some maximum, which can be mapped to the 0-255 range. Analog electronics have limits as well, and so do phosphors; they're just working well within the limits. I believe that later CRTs did some digital processing on the signal, so they may very well have digitized each component (R, G, B) into some number of bits. It would be indistinguishable if they used enough bits. Also remember that the gamma-to-linear conversion occurs mostly at the phosphors, so the digitized version would have favorable non-linearity that gave more precision in darker tones, where the eye is more sensitive to smaller differences.
lidnariq wrote:
Drag wrote:
NTSC buffs, please help me with this one. I know that the hue of the picture is relative to the hue of the colorburst. Is the saturation of the colors relative to the amplitude of the colorburst too?
Yes.
I always wondered the same thing too. Does a smaller colorburst make colors more saturated or less saturated?
psycopathicteen wrote:
Does a smaller colorburst make colors more saturated or less saturated?
A smaller colorburst would likely make the picture more saturated, because the saturation levels in the picture will look bigger when you compare them to the smaller colorburst amplitude.
So my next challenge is that I don't know what exact saturation the colorburst is supposed to represent. In my app, a "sat" of 1.0 is basically a color explosion at the moment, so I know that's not correct.
For now, I've updated the palette generator. Some of the new tweak settings are from suggestions. The hue was me, because even though a hue tweak of 0.0 should represent the exact hues being sent, it still doesn't look right unless I shift them by -0.15. Your mileage may vary.
A CRT TV technically has a "gam" of 2.2, and it does help a little bit for some of the darker colors, but it seems to mess everything else up right now. Meh.
Also, if anyone has a better suggestion for gamut mapping (converting the out-of-gamut colors to in-gamut colors), I'm open to suggestions.
Edit: If anyone's interested, this is what I've been using on my NES for comparison. It's based on the older PALTEST.NES demo floating around here, but I needed to have the colors all touching each other. I tried to minimize the flickering glitches as much as I could. No emphasis toggling support though, sorry.
Depends on the television.
The last DSP-based ones scaled luma by the size of the sync pulse (scaling the entire input such that sync-to-blank was 40 IRE), separated out chroma, and scaled that such that the colorburst was also 40 IRE peak-to-peak.
I believe the older analog sets only scaled everything according to the sync pulse. (Since NTSC was amplitude modulated, some kind of compensation for the distance to the transmitter is needed).
Many sets don't do this scaling for composite input, however.
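Written out as code, that normalization looks roughly like this (a sketch; the names and the sample-list representation are mine):

```python
# Sketch of the DSP-style normalization described above: scale luma so the
# sync-to-blank swing is 40 IRE, and scale the separated chroma so the
# colorburst is 40 IRE peak-to-peak.
def normalize_composite(luma_samples, chroma_samples, sync_to_blank_ire, burst_pp_ire):
    luma_gain = 40.0 / sync_to_blank_ire
    chroma_gain = 40.0 / burst_pp_ire
    return ([s * luma_gain for s in luma_samples],
            [s * chroma_gain for s in chroma_samples])
```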
OTOH, the 2600 has two variants, one which transmits its colorburst at the same amplitude as the rest of the color, and the other which attenuates the amplitude for the colorburst (ties /BLANK to CHROMA through a 680Ω resistor); I don't recall people have mentioned variation of saturation between the CX2600 and the CX2600A or the various mods (which largely seem to ignore /BLANK).
lidnariq wrote:
OTOH, the 2600 has two variants, one which transmits its colorburst at the same amplitude as the rest of the color, and the other which attenuates the amplitude for the colorburst (ties /BLANK to CHROMA through a 680Ω resistor); I don't recall people have mentioned variation of saturation between the CX2600 and the CX2600A or the various mods (which largely seem to ignore /BLANK).
That could easily be because nobody noticed or cared about it. However, I haven't thrown out the possibility that the colorburst is mainly used for its phase and nothing else. If a picture is too saturated, you can just twist the saturation knob, so it's plausible.
Scaling the entire signal level by the sync pulse works well for RF because RF uses negative modulation, where sync is the highest signal level. Composite uses positive modulation, so it's less helpful there.
I too have been assuming that color burst is for phase.
I was using the R, G, and B primaries as defined by the FCC, but the FCC defines those primaries with a white point of C. I've been using a slightly different white point so that the gray colors looked correct, and not bluish.
If anyone is versed in the CIE color space: In order to use a different white point, can I move the R, G, and B primaries the same amount as I've moved the white point?
In other words: R' = (.67, .33), G' = (.21, .71), B' = (.14, .08), and the C white point is (.310, .316). If I use a white point of (.314, .330), and the difference between the two white points is (.004, .014), can I add (.004, .014) to my primaries to make: R' = (.674, .344), G' = (.214, .724), B' = (.144, .094)? Or is that an incorrect way to shift the white point?
I can suggest for the emulator to have two modes:
- NTSC mode: Set up the conversion matrix and attenuation amount.
- RGB mode: Set up the 64-entry palette, tint mode, and extra dot mode.
Therefore you have accurate emulation, and you can configure it, and it works both with games meant for the NES and NTSC Famicom and with games designed for the RGB Famicom. And you can make it act like a black and white TV set by using only the Y signal in the conversion matrix, in case you want to test if the picture is good on black and white TV sets. What I suggested also means you can turn off the attenuation or set it to whatever you want, allowing you to test how it might look on clone systems with wrong attenuation, to not use it at all, or to use RGB tint bits to see if the game still works in that mode.
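As a sketch of how those two modes might be represented in an emulator's configuration (field names are made up for illustration):

```python
# Illustrative records for the two suggested modes; nothing here comes from a
# real emulator's settings, it's just one way to lay the idea out.
from dataclasses import dataclass
from typing import List

@dataclass
class NtscMode:
    conversion_matrix: List[List[float]]   # 3x3 matrix for decoding the composite signal to RGB
    emphasis_attenuation: float            # how strongly the tint bits attenuate the signal (0 = off)

@dataclass
class RgbMode:
    palette: List[int]                     # the 64-entry RGB palette
    tint_mode: str                         # how the emphasis bits behave on this PPU
    extra_dot: bool                        # whether the extra/missing dot timing quirk is emulated
```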
Nestopia still can make a more accurate palette than anything my own palette generator can produce.
I don't understand it; I have the three coordinates that represent the exact color a TV's red, green, and blue are. I just want to mix various combinations of various intensities of red, green, and blue. So, I'm mixing the colors in CIELuv space, which is supposed to represent the behavior of mixing differently-colored lights together (like on a CRT for example). I even tracked down the exact hue colorburst is supposed to be.
Why can't I make my palette generator produce the correct colors? Do I need a better way to remap the out-of-range colors? What's so hard about this? This should be absolutely everything needed to make perfect NES palettes. :\
To address the out-of-gamut colors, I'm trying out using some rendering intents, and the first one I'm trying is absolute colorimetric.
So basically, the color's RGB is converted to CIEXYZ coordinates (X, Y, Z), and then converted to CIELuv coordinates (L, u, v).
I also have the CIExyY coordinates for the R, G, B primaries (the points on the XYZ graph which give you #FF0000, #00FF00, and #0000FF), which I've converted from xyY to XYZ and then to Luv.
That's as far as I've gotten. I don't know what I'm doing, and I can't find anything helpful from a google search. I don't know what kind of 3D shape the sRGB primaries make; in 2D, they make a triangle, but in 3D, do they linearly connect to the white and black points to make a diamond shape, or is it supposed to be rounder?
Ugh, I'm so confused. Whatever shape it makes, if the color is outside of that shape, then it's out of the sRGB range (meaning R, G, or B is <00 or >FF), and the color needs to be remapped so the point is inside of the shape. If I've interpreted correctly, absolute colorimetry means the out-of-range colors are just brought closer to the white point until they're on the edge of the sRGB's "shape", which means one or more channel is FF.
Not sure if I can help you with the color science stuff yet, but I did have a question. How did you determine that NES uses Y'IQ colorspace instead of Y'UV? To the best of my knowledge, Y'IQ hasn't been used since the 1970s due to decoder cost. And I think you had the in-phase and quadrature-phase components backwards on page 2. Composite color for Y'IQ should be:
C = Q∙sin(ωt + 33°) + I∙cos(ωt + 33°)
For Y'UV:
C = U∙sin(ωt) + V∙cos(ωt)
For Both:
ω = 2π∙fsc
fsc = 315/88 MHz
Colorburst = A∙sin(ωt ± 180°) = -A∙sin(ωt)
A = 20 IRE = 142.8mV
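Here are the Y'UV chroma expression and the colorburst above transcribed into code, just to make the phase relationship concrete; amplitudes are normalized rather than in IRE:

```python
# Direct transcription of the quoted Y'UV chroma and colorburst expressions.
import math

FSC = 315.0e6 / 88.0                 # color subcarrier frequency in Hz
W = 2.0 * math.pi * FSC              # omega

def chroma_yuv(u, v, t):
    # C = U*sin(wt) + V*cos(wt)
    return u * math.sin(W * t) + v * math.cos(W * t)

def colorburst(a, t):
    # Colorburst = A*sin(wt +/- 180 deg) = -A*sin(wt)
    return -a * math.sin(W * t)
```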
What does it mean to say that something "uses" a particular color space? The NES merely has particular waveforms that it outputs in response to palette indices. These consist of either a constant amplitude or a square wave of a particular phase between two amplitudes. Game programmers choose indices based on what colors appear on a TV. They don't choose based on describing the NES circuitry as "thinking" of a particular color in a particular encoding space, outputting the proper phase of wave for it, then the TV decoding the phase and interpreting that as "asking" for a particular color, then converting that to RGB and displaying it.
In that case, the NES "uses" the color space of the TVs with which the artists tested their palette choices. So if we're emulating a TV, we have to know the color space of that TV.
Ste wrote:
Not sure if I can help you with the color science stuff yet, but I did have a question. How did you determine that NES uses Y'IQ colorspace instead of Y'UV? To the best of my knowledge, Y'IQ hasn't been used since the 1970s due to decoder cost. And I think you had the in-phase and quadrature-phase components backwards on page 2. Composite color for Y'IQ should be:
C = Q∙sin(ωt + 33°) + I∙cos(ωt + 33°)
For Y'UV:
C = U∙sin(ωt) + V∙cos(ωt)
For Both:
ω = 2π∙fsc
fsc = 315/88 MHz
Colorburst = A∙sin(ωt ± 180°) = -A∙sin(ωt)
A = 20 IRE = 142.8mV
I'm using YIQ because the NTSC standard uses YIQ for composite color encoding in all the documentation I can find. Even if modern TVs use YUV decoders, YIQ and YUV are the same, the only difference is that YIQ's axis is rotated slightly. A simple tweak of the hue knob could convert between the two encodings, hypothetically.
Moreover, the NES doesn't actually "use" YIQ or YUV like you might be thinking; YIQ and YUV are the same; there's Y=Luminance, and then both I/Q and U/V are sine waves that are superimposed on the Y signal. If you disregard the exact definition of I/Q and U/V, this signal looks like each scanline is just a sine wave where the phase is the hue, the amplitude (of the sine wave) is the chroma, and the "bias" is the luminance. This is all the NES is doing; it's outputting a hue and a luminance, it's not actually converting RGB to YIQ first or anything like that.
The reason I'm working with color spaces is because the way my TV displays rgb[FF,AA,00] (for example) is different from how rgb[FF,AA,00] appears on my computer screen, and I want to be able to display the color my TV makes, but on my computer screen. However, I'm having a LOT of trouble getting it to look right.
The biggest problem I have is that YIQ generates a lot of colors that are out of the RGB range. I need to do something called "gamut mapping" in order to make an approximation of the intended color, but I have NO idea how to do this correctly, and Google doesn't turn up any programmer-friendly help. Hence, I've been stuck.
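For what it's worth, the "only difference is a rotation" point can be sketched like this, using the usual 33° relation between the I/Q and U/V axes (bandwidth differences ignored):

```python
# Sketch: I/Q are the U/V chroma axes rotated by a fixed 33 degrees
# (the common textbook relation; treat it as illustrative here).
import math

def iq_to_uv(i, q, angle_deg=33.0):
    a = math.radians(angle_deg)
    u = -i * math.sin(a) + q * math.cos(a)
    v = i * math.cos(a) + q * math.sin(a)
    return u, v
```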
I think what you're supposed to do is take the voltage difference between $0F and $20 to represent only 0% to 75% or 80% luma (in sRGB, #000000 to #BFBFBF or #CCCCCC). That'll darken the overall picture but give headroom for over-bright colors.
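A minimal sketch of that headroom idea; the 0.8 scale factor is illustrative, not derived from measurements:

```python
# Map full-scale white to only ~80% of the output range so over-bright,
# out-of-gamut blues have somewhere to go before clipping.
WHITE_LEVEL = 0.8

def with_headroom(r, g, b):
    return tuple(min(1.0, max(0.0, c * WHITE_LEVEL)) for c in (r, g, b))
```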
Drag wrote:
The reason I'm working with color spaces is because the way my TV displays rgb[FF,AA,00] (for example) is different from how rgb[FF,AA,00] appears on my computer screen, and I want to be able to display the color my TV makes, but on my computer screen. However, I'm having a LOT of trouble getting it to look right.
Can you elaborate on this? All I can think of is gamma. Does it display say rgb[FF,00,00] the same, and rgb[00,AA,00] the same, but not when combined? I know there was talk here many years ago about TVs doing some adjustment of hues in the skin-tone range of hues.
Well, that was just an example, really; the idea was that the colors that my (or anyone's) TV displays are different from the colors that a computer displays. TVs are a real crapshoot when it comes to standards, because every manufacturer uses a different gamut, which means the colors will always be slightly different from TV to TV. However, the colors seem to be consistently different in the same way, when you compare to a computer or a digital (LCD) tv.
Gamma probably has a huge role in this difference, and the way you need to apply gamma is by applying it individually to the R channel, the G channel, and the B channel, which means the hue shifts as the brightness changes. I still haven't figured out a good way to simulate this with the CIE graph.
As far as "looking right", I'm mostly talking about colors x2, x8, and xC. x2 ends up looking too purple in a lot of the available NES palette generators, x8 is a brilliant "simpsons" marigold color which gets browner as it gets darker (and 08 is the darkest color on the NES palette; very close to black, actually), and xC is cyan when it's light, and turns much bluer as it gets dark. These are the biggest points I have right now.
Drag wrote:
Gamma probably has a huge role in this difference, and the way you need to apply gamma is by applying it individually to the R channel, the G channel, and the B channel, which means the hue shifts as the brightness changes. I still haven't figured out a good way to simulate this with the CIE graph.
As stupid as it sounds, in what other way could one end up possibly implementing it? Doing the calculation on each component as-is seems the easiest way o_O;
Sik wrote:
As stupid as it sounds, in what other way could one end up possibly implementing it? Doing the calculation on each component as-is seems the easiest way o_O;
What I meant was:
YIQ -> CIE -> RGB
I wanted to apply the gamma to the color while it was still in the CIE stage. That way, the gamma is respective to the red, the green, and the blue defined by the FCC. If I apply the gamma to the color after I convert it to RGB, by applying gamma to R, G, and B, I'm applying the gamma to the red, the green, and the blue defined by sRGB, instead of the ones defined by FCC.
The reason I don't know how to do it is because of the way YIQ -> CIE works; I start with a luminance and then add the chroma to it. This is different from RGB, where you start with black, and then add color to it. To simulate the gamma, I need to change the luminance as well as the chroma (I think), but I don't know the way I need to do this.
Gamma should be applied to the RGB values, nothing else. The whole point about gamma is how the screen shows the RGB ramp, it has nothing to do with YUV or stuff like that.
And while we're on the topic, gamma is not an unwanted side-effect; it's a deliberate scheme for encoding luminance into an electrical/digital signal suited for human eyes' greater sensitivity to variations in luminance in the darker end than in the lighter end. A linear encoding would waste accuracy in the light tones and bring out more noise/quantization effects in the darker tones.
The 2.odd gamma characteristic was a fortunate side-effect of the roughly quadratic response of CRT kinescopes. The picture signal represents voltage, but light emission is roughly proportional to beam power, which in turn is proportional to the square of voltage: P = I²R = V²/R.
Sik wrote:
Gamma should be applied to the RGB values, nothing else. The whole point about gamma is how the screen shows the RGB ramp, it has nothing to do with YUV or stuff like that.
Ok, maybe I'm not explaining myself clearly.
YIQ generates a signal for R that goes to the electron beam, and similar signals for G and B. The red, green, and blue phosphors of the screen are excited, and the level of light they emit, compared to the voltage being sent to the gun, represents a 2.2 gamma curve.
This is what I'm trying to simulate: 3 phosphors, each one whose input produces an output following a 2.2 gamma curve, combining to form a color which I can plot on the CIE graph, to be converted to sRGB for display on a computer. I'm not an idiot trying to do something incorrect with the gamma, I assure you; I want the resulting CIE color to represent the thing I just mentioned. Converting YIQ -> CIEXYZ -> sRGB and applying the gamma right before I display it on screen is not what I'm trying to do, because the gamma is not relative to the TV's phosphors in that case.
As far as I can tell, YIQ produces a linear value for red, green, and blue. The gamma curve comes from the phosphor. If I'm rendering YIQ directly to CIEXYZ (or CIELuv, like I'm actually doing), the color I get does not take the phosphor's gamma into account. This is the problem I'm having, and why I can't get the colors to look right.
YIQ can produce negative values for R, G, and B. I have no clue how to apply a gamma curve to this kind of output, nor how a negative value affects the overall color.
Drag wrote:
As far as I can tell, YIQ produces a linear value for red, green, and blue. The gamma curve comes from the phosphor. If I'm rendering YIQ directly to CIEXYZ (or CIELuv, like I'm actually doing), the color I get does not take the phosphor's gamma into account. This is the problem I'm having, and why I can't get the colors to look right.
This is not quite right. The R, G, and B components represent perceptually linear brightness (or something close to it), not linear light intensity. That means a 50% R component is roughly 22% as bright as a 100% R component if the CRT has a gamma of 2.2. It also means it's impossible to convert YIQ directly to CIELuv without first converting it to some intermediate RGB colorspace to do gamma correction.
Drag wrote:
YIQ can produce negative values for R, G, and B. I have no clue how to apply a gamma curve to this kind of output, nor how a negative value affects the overall color.
Those RGB values are used to drive the CRT. (Actually, they're adjusted by the contrast and brightness knobs, and then used to drive the CRT, so they may not be negative by the time they get to the cathode.) Any negative values in a properly-adjusted TV set will represent a voltage too low to drive the phosphors, and are equivalent to zero. It is possible for negative values to affect the picture: way back when the local cable company broadcast the SMPTE color bars on channel 70, the darker-than-black portion of the PLUGE pulse would indeed appear darker than the black level if you turned the brightness up too high.
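Putting those two replies together, the ordering being described could be sketched like this (an assumed illustration, not anyone's actual code): gamma-encoded RGB drive values, clamp negatives, convert to linear light per phosphor, then to CIE XYZ via a matrix built from the phosphor chromaticities.

```python
# Sketch of the pipeline: drive signal -> clamp -> CRT gamma -> linear light -> XYZ.
CRT_GAMMA = 2.2

def drive_to_linear(r, g, b):
    # Negative drive values can't push a phosphor below cutoff, so clamp at zero,
    # then apply the CRT's roughly 2.2 power law to get linear light output.
    return tuple(max(0.0, c) ** CRT_GAMMA for c in (r, g, b))

def linear_to_xyz(rgb_linear, m):
    # m: 3x3 matrix built from the phosphor chromaticities and the white point
    return tuple(sum(m[row][col] * rgb_linear[col] for col in range(3))
                 for row in range(3))
```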
blargg wrote:
What does it mean to say that something "uses" a particular color space?
The NES uses a particular color space in the sense that if the PPU engineer assumed that the displaying CRT used NTSC phosphors, he would have chosen resistors for the color-signal-generating resistor ladder that produce a lower saturation level than if he assumed that the displaying CRT used SMPTE-C phosphors.
Drag wrote:
Even if modern TVs use YUV decoders, YIQ and YUV are the same, the only difference is that YIQ's axis is rotated slightly.
The main difference between YUV and true YIQ decoding is that with the latter, the I signal is assumed to occupy a 1.5 MHz bandwidth (Q a 0.5 MHz bandwidth), whereas with YUV decoding, both are assumed to occupy a 0.5 MHz bandwidth. Unless you are decoding with different bandwidths for I and Q, you are not truly doing YIQ decoding, but merely "rotated YUV" decoding.
blargg wrote:
Can you elaborate on this? All I can think of is gamma. Does it display say rgb[FF,00,00] the same, and rgb[00,AA,00] the same, but not when combined? I know there was talk here many years ago about TVs doing some adjustment of hues in the skin-tone range of hues.
Television sets differ in many ways other than gamma. They differ at least also in
- phosphor chromaticities ("So what does 100% red look like?")
- white point ("What does 100% white look like?")
- behavior with out-of-spec signals.
The first two points define the color space. Three hexadecimal RGB values say nothing about how a color actually looks until you specify a color space. If none is specified, modern systems assume sRGB. The last point is of particular importance for home computers and consoles. Signals can be out-of-spec with regards to
- the peak-to-peak signal level
- the sync level relative to blanking level
- the white level relative to blanking and/or sync
- the color burst amplitude
- oversaturated red/green/blue levels after decoding
In the case of the NES, pretty much everything except the peak-to-peak signal level is wrong.
Since the NTSC standard does not define how to deal with non-standard signals, this should be the greatest source of variation between different television sets and computer images. In particular, an analogue CRT can be expected to react very differently to a >100% red/green/blue signal than a digital device (which will probably just clip).
Drag wrote:
For now, I've updated the palette generator. Some of the new tweak settings are from suggestions. The hue was me, because even though a hue tweak of 0.0 should represent the exact hues being sent, it still doesn't look right unless I shift them by -0.15. Your mileage may vary.
I agree. With the color burst being at $x8, there is no red. Just pink and orange, with nothing in between.
Quote:
The main difference between YUV and true YIQ decoding is that with the latter, the I signal is assumed to occupy a 1.5 MHz bandwidth (Q a 0.5 MHz bandwidth), whereas with YUV decoding, both are assumed to occupy a 0.5 MHz bandwidth. Unless you are decoding with different bandwidths for I and Q, you are not truly doing YIQ decoding, but merely "rotated YUV" decoding.
Some machines even use 1.5 MHz for U and V, which is pretty stupid since you are left with only 2 MHz of Y bandwidth, and both color axes have a cropped upper sideband, due to the 4.2 MHz channel limit.
Has it been confirmed that color burst is always at the same phase as color $8? I'm asking because of the several Famicom and Twin Famicom consoles I have, each outputs colors with a slightly different hue shift. In particular, while the -G 2C02 revision seems to place burst at color 8 +/- 0 degrees, the -H 2C02 (in a AN-505BK Twin Famicom) seems to output burst at 8 -10 degrees, equivalent to a palette setting of +10 degrees, which is very greenish.
psycopathicteen wrote:
Some machines even use 1.5 MHz for U and V
Yes, that's what SMPTE 170M "Composite Analog Video Signal - NTSC for Studio Applications" calls for.
psycopathicteen wrote:
which is pretty stupid since you are left with only 2 MHz of Y bandwidth
Not with a notch or comb filter.
psycopathicteen wrote:
and both color axes have a cropped upper sideband, due to the 4.2 MHz channel limit
... which doesn't exist in studio applications. (Leaving me to wonder what a broadcast TV station does when playing back a composite studio tape --- just crop the upper sideband, or decode and reencode with narrowband color difference signals before transmission?)
Edit: found the answer myself.
SMPTE EG 27 wrote:
When this signal is transmitted, a low-pass filter in the transmitter bandwidth limits the luminance (Y) signal and the upper sidebands of the color-difference signals (either B-Y and R-Y or I and Q) to 4.2 MHz. Transmission of equal-bandwidth color-difference signals to the receiver has the effect of limiting the recoverable chroma bandwidth to 0.6 MHz as a result of the truncation of the upper sidebands of the chroma modulation in the transmitter’s 4.2 MHz filter.
NewRisingSun wrote:
Has it been confirmed that color burst is always at the same phase as color $8? I'm asking because of the several Famicom and Twin Famicom consoles I have, each outputs colors with a slightly different hue shift. In particular, while the -G 2C02 revision seems to place burst at color 8 +/- 0 degrees, the -H 2C02 (in a AN-505BK Twin Famicom) seems to output burst at 8 -10 degrees, equivalent to a palette setting of +10 degrees, which is very greenish.
In the digital domain, it's definitely always phase $8, and there's no ability to specify anything other than some multiple of 30°. Whether subsequent analog effects skew the phase afterwards is something still under discussion (e.g. viewtopic.php?t=10101).
As far as I've been able to tell from the discussions, it really looks like there shouldn't be an appreciable phase error between colorburst and other voltages. Due to the common collector amplifier that's certainly present in the NES, and in the schematic for the Famicom, the output impedance shouldn't vary appreciably by output voltage.
Is the master clock input exactly 50% duty? If not, that might mess with the phase generator, causing the odd phases ($9, $B, $1, $3, $5, $7) to be offset.
lidnariq wrote:
NewRisingSun wrote:
Has it been confirmed that color burst is always at the same phase as color $8?
In the digital domain, it's definitely always phase $8, and there's no ability to specify anything other than some multiple of 30°.
I will repeatedly say this over and over until I understand: If colorburst is phase 8, why does the on-screen color not have the greenish-yellow colorburst hue? It really looks like it should be color 9 and not color 8. Every single NTSC color generator matrix/formula/whatever I've used, with NES parameters plugged in (including forcing color 8 to be the literal colorburst hue), just does not produce results that look like anything I've ever seen the NES look like. Even if I set color 9 to be the colorburst hue, it still doesn't look right unless I shift the hues slightly, which screws up other colors. Moreover, there are things like gamma curves (R, G, and B each have their own independent gamma curves, but you'd never know if you asked anyone because everyone wants you to apply gamma only to the luminance, which doesn't produce the right colors, especially in the darker $0x-$1x range of the palette) to worry about too, and really, it doesn't seem like anyone's legitimately interested in a palette that resembles a physical CRT's color output. Moreover, what's the correct way to deal with out-of-gamut colors (which the NES is fond of producing, especially in the blues)?
So this is basically why I completely gave up on this.
Drag wrote:
If colorburst is phase 8, why does the on-screen color not have the greenish-yellow colorburst hue?
It has on the Sharp AN-500B Twin Famicom. The Sharp AN-505BK Twin Famicom looks even more greenish (positive hue shift, about +10°). I've never had a US NES, but all video captures seem to indicate a negative hue shift (about -10°), making on-screen color #8 look less greenish. All this on the same TV. On a different TV, the AN-500B looks just as greenish as the AN-505BK. The reason seems to be that the NES does not properly output a sine wave for the color burst but some sort of weird filtered triangle wave, which causes further variation across models and television sets.
The PAL NES on the other hand has rock-solid (i.e. constantly washed-out) colors across all sets I've seen.
Drag wrote:
R, G, and B each have their own independent gamma curves, but you'd never know if you asked anyone because everyone wants you to apply gamma only to the luminance
There is not a single television standards document that ever calls for R/G/B having different electro-optical transfer characteristics.
Drag wrote:
especially in the darker $0x-$1x range of the palette
Those colors are very much affected by how you assume your emulated TV handles non-standard video levels, in particular, the NES' too-small sync amplitude. Also try both 0 and 7.5% black-level setup. These are more likely sources of variation than putative gamma curves.
Consider the following ways of interpreting the NES' video signal --- all valid methods found in models of TV capture cards and television sets, see attached pictures. I could post another threesome of pictures for NTSC-J with 0% black-level setup. (All pictures use a -13° hue shift from color 8, and no NTSC->sRGB color correction).
Drag wrote:
it doesn't seem like anyone's legitimately interested in a palette that resembles a physical CRT's color output.
That's what I've been doing for the past 8 years or so...
Drag wrote:
Moreover, what's the correct way to deal with out-of-gamut colors (which the NES is fond of producing, especially in the blues)
No television standards document mentions a correct way of doing so, assuming that with proper receiver adjustment, you just get the same R/G/B values that originated in the hypothetical television camera. You can either just clip at 0 and 255, or reduce saturation for that color until none clip.
NewRisingSun wrote:
Drag wrote:
R, G, and B each have their own independent gamma curves, but you'd never know if you asked anyone because everyone wants you to apply gamma only to the luminance
There is not a single television standards document that ever calls for R/G/B having different electro-optical transfer characteristics.
Gamma correction applies only to an all-positive signal. The common practice in digital video editing in the YCbCr (YUV) domain is to gamma-correct only the Y channel. But in RGB, yes you're supposed to gamma correct all three.
My guess for the phase error is slew rate. There's definitely a low-pass characteristic in the net output from the PPU. Some designs of amplifier have a "slope overload" characteristic, which produces more phase delay at high amplitudes than at low amplitudes because the overall rate of change in voltage over time has limits. There's a difference between this limit and impedance that depends on voltage: impedance that depends on voltage would affect the lower-frequency luma signal, while slew rate mostly affects chroma. There's a test for this: the chroma component of $08 or $38 has a larger amplitude than colorburst, and the chroma component of $18 or $28 has an even larger amplitude. If there's more phase delay on $18 and $28 than on $08 and $38, you've found your culprit.
And since 2006, there's been a standard for how to handle out-of-gamut colors: xvYCC.
tepples wrote:
If there's more phase delay on $18 and $28 than on $08 and $38, you've found your culprit.
That's not what I've seen with the NES. And it doesn't explain why two consoles look different on one TV, but the same on another.
I agree however that NTSC is definitely susceptible to the slew rate phenomenon: contrary to what is typically claimed, the famous NTSC phase shift problem is not caused by strange things happening to terrestrial signals in the air, but by receiver equipment, and especially transmitters, having an amplitude-dependent phase shift. The first NTSC transmitters shifted the highest amplitudes 30 degrees further than the lowest amplitudes; adjusting the (amplitude-independent) hue control on the receiver could therefore only result in either bright or dim colors looking correct. Hence the popularity of PAL, whose patent specifically mentions its ability to perfectly correct this "differential phase error", as it's called.
Another question, which tepples might be able to answer: SMPTE-170M states that the peak-to-peak color burst amplitude shall be 40 IRE. The NES outputs about 50 IRE (with 75 ohm load), suggesting that saturation ought to be attenuated by 40/50. However, SMPTE-170M describes the color burst as a sine wave, whereas the NES outputs a square (or triangle?) wave. If the square-wave 3.58 MHz signal is filtered into a sine-wave, its peak-to-peak amplitude would be larger, but by how much?
NewRisingSun wrote:
If the square-wave 3.58 MHz signal is filtered into a sine-wave, its peak-to-peak amplitude would be larger, but by how much?
True, an ideal brick wall at 4.2 MHz will cause the sine wave to have a bigger peak-to-peak amplitude. But the real amplitude depends on how the filter's transfer function looks around 3.58 MHz (fundamental) and 10.74 MHz (third harmonic, which is the first overtone of a square wave). Attenuation of -6 dB per octave, for example, will turn a square wave into a triangle wave, and if the corner isn't well above 4 MHz, it'll reduce the amplitude of the fundamental.
Disclaimer: I have more of a digital signal processing background than an analog EE background.
Two consoles looking different on one TV but the same on another might be an impedance mismatch. The TV where they look the same might have a lower input load.
tepples wrote:
True, an ideal brick wall at 4.2 MHz will cause the sine wave to have a bigger peak-to-peak amplitude.
By how much?
tepples wrote:
But the real amplitude depends on how the filter's transfer function looks around 3.58 MHz (fundamental) and 10.74 MHz (third harmonic, which is the first overtone of a square wave). Attenuation of -6 dB per octave, for example, will turn a square wave into a triangle wave, and if the corner isn't well above 4 MHz, it'll reduce the amplitude of the fundamental.
The way I read the drawing in the NTSC document from 1954, it's 0% attenuation at 4.2 MHz and 100% attenuation at 4.5 MHz. (The 1941 document on the other hand has a drawing indicating 0% attenuation at 4.0 MHz and 100% attenuation at 4.5 MHz.)
NewRisingSun wrote:
tepples wrote:
True, an ideal brick wall at 4.2 MHz will cause the sine wave to have a bigger peak-to-peak amplitude.
By how much?
Googling "fourier transform of square wave" finds "Square wave" on Wikipedia, which shows the amplitude of the fundamental frequency as 4/π ≈ 1.27, or 20 log(4/π) ≈ 2.1 dB above the flat part of the ideal square wave.
Empirically I had found that the value must be about 1.3, so that seems correct. Thanks.
tepples wrote:
Two consoles looking different on one TV but the same on another might be an impedance mismatch.
And slew rate would cause an impedance mismatch to affect phase?
As I said, I'm no EE, but perhaps an amplifier tracks the input voltage better when it's feeding a smaller load. I only took time to learn about slew rate in the first place when I realized it led to the same distortion phenomenon as DPCM slope overload.
I believe that slew rate is due to limited drive current of an amplifier, combined with the inevitable capacitance in parallel with the load. Here's a great picture of how a limited slew rate can cause a phase shift (red=in, green=out):