I'm not sure which forum to put this thread in; I chose NESemdev, because I guess this sort of has to do with emulation...
But I apologize if this is the wrong forum.
I'm curious as to which NES games look better with NTSC artifacts rather than without. In my experience, Blaster Master and the Castlevania games look better with NTSC artifacts, but most early (1983 - 1987) games, the Mario games, and other games with bright, simplistic graphics look slightly worse. To me, it seems that games with detailed, gritty graphics look better with NTSC artifacts, while games with simplistic graphics (and earlier releases) look worse.
But what are other good examples of games that look good with NTSC artifacts - and those that look worse?
It's hard to say. I was impressed by how much better Castlevania 3 looks with them, while some other games aren't affected as much. Super C and the Gradius games also look significantly better with them. Batman looks much better, too.
The Final Fantasy games look better without them, I think, though I'm not sure why.
I guess games with dark graphics look better with them, while dark games with more fantastic graphics look better without them.
Games with outlined graphics will look less different, because this lessens the impact of NTSC limitations. I've encountered this issue connecting a SNES to my TV via S-Video versus composite; games like Super Mario World, Zelda: A Link to the Past, and Super Mario Kart look great with S-Video, while things like Donkey Kong Country look poor due to the pixel perfect image, but good with composite. Like you say, attempts at gradients and texturing look better with composite, because it adds more variation and texture, while S-Video or RGB exposes the limited number of colors of the system too much.
I also noticed the Genesis version of Toy Story looks more pixelated playing on an emulator than when I played it on a TV before, something that I recall a reviewer commented on as well. I think Toy Story is also a game that looks better with composite rather than in its pixel perfect format, since I believe it renders its graphics in pseudo-3D style similar to Donkey Kong Country.
IMO, the best graphic style for composite-predominant consoles would be one that uses dithering often but still remains clear to look at - and is colorful, depending on the atmosphere. That way, games would look fine whether with or without NTSC artifacts.
A lot of Genesis games use an odd dithering technique consisting of vertical lines, rather than the usual dot pattern, because an NTSC TV will blend the lines together (though I don't know about Toy Story, never played it). Ristar makes use of this in combination with shadow mode on the map screen, and Sonic 2 uses it in Chemical Plant Zone to simulate "transparent" tubes.
Quote:
That way, games would look fine whether with or without NTSC artifacts.
In fact, my opinion is that most games look fine with or without NTSC artifacts. I don't remember any game that looked horrible on the console but good on an emulator, or the other way around.
I'm personally not a big fan of dithering, although I still use the technique sometimes. To get good graphics, I usually avoid having a large surface all in the same color except in particular cases (a background sky or something). If you have a large area with nothing in it, add details or shadow effects. That way you'll usually end up with good graphics, both with and without NTSC artifacts.
Better is subjective. Some people feel that simply because the NTSC emulation generates an image you'd also see when capturing a real NES, it's "better". Other people may feel that, although some games have graphics that were designed with this in mind, for the vast majority of games it really doesn't matter. Myself, I'm a wholehearted advocate of the NTSC emulation. I feel it really adds that touch of "authenticity" to the experience. Plus, even if one doesn't care whether or not the artifacts are emulated, I feel that using the NTSC emulation with the RGB presets is still preferable to not using it at all, because a real RGB signal, when captured, is vertically sharp but slightly soft horizontally, and when set to an RGB preset Blargg's filters exhibit this same behavior. To me, that just makes it feel more "real" (for example, try running any of the VS. games with an RGB preset; it looks really good IMO).
As far as Genesis games go, yeah, I believe far more games used such dithering than people realize; pretty much every main Sonic game uses it, and I've seen it in countless other games. The famed "256-color" Eternal Champions CD even used it (because in no way does it really display 256 real colors at once). It's a very effective technique on the Genesis because, at least in NTSC regions, I would guess that maybe 1% of Genesis consoles - maybe 2% at most - are hooked up with anything better than composite/RF video.
Why are the "resolution" settings for Composite and S-Video presets anything other than maximized? If I understand "resolution" correctly as the N/Y channel bandwidth, there is no channel limit in Composite and S-Video environments. The ONLY environment where there is a limit on N/Y channel bandwidth is a radio-frequency modulated signal; here, there is a 4.2 MHz limit on the N/Y bandwidth, which means an effective resolution of 4.2 MHz * (52 + 59/90) µs = 221.153... pixels, down from the NES' pixel clock of (Fsc * 6/4) * (52 + 59/90) µs = 282.824... pixels. That bandwidth limit is precisely the difference between RF and composite, after all.
Right now, the composite setting looks like RF, and the S-Video setting looks like a well-comb-filtered RF signal. Better change those presets. What exactly does the "Sharpness" control do?
Quote:
Why are the "resolution" settings for Composite and S-Video presets anything other than maximized?
Because a PC display has more resolution than a TV. Doesn't composite impose a pseudo-limit due to the chroma carrier's frequency, or can a good comb filter separate the two?
Quote:
What exactly does the "Sharpness" control do?
Sharpness applies edge enhancement to the resulting image; that is, where there is a delta in luma it increases it, and compensates with the nearby samples to keep the total the same.
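Roughly like this (a Python sketch; the 3-tap kernel and the strength value are just for illustration, not the filter's actual coefficients):
Code:
def sharpen(luma, strength=0.5):
    # boost each sample against its neighbors; the kernel sums to 1, so flat areas are untouched
    out = []
    for i in range(len(luma)):
        left = luma[max(i - 1, 0)]
        right = luma[min(i + 1, len(luma) - 1)]
        out.append((1 + 2 * strength) * luma[i] - strength * (left + right))
    return out

print(sharpen([0, 0, 0, 1, 1, 1]))   # [0.0, 0.0, -0.5, 1.5, 1.0, 1.0] - the edge is exaggerated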
Quote:
Because a PC display has more resolution than a TV.
But even a TV has more than 282 pixels of horizontal resolution, so this doesn't apply here. You can make this point for high-resolution (512x240) modes of the Super NES.
(I once took the time to count the number of RGB phosphors per line on my old Trinitron TV, and it happened to be exactly 480 horizontally.)
Quote:
Doesn't composite impose a pseudo-limit due to the chroma carrier's frequency
Why would it? If we can produce high-resolution pictures with our algorithm without pseudo-limiting at the subcarrier frequency, why couldn't a TV set do the same? Most TV sets are actually WAY more sophisticated than our little algorithm here. You don't even need a comb filter for that, just a notch filter at 3.58 MHz would be sufficient.
Again, if there WAS a limit at 4.2 MHz, or as you suggest, at 3.58 MHz, there would be no point in using a baseband composite connection over a radio-frequency modulated signal. But there is.
Quote:
Sharpness applies edge enhancement to the resulting image
Hm. On my TV, there is indeed a sharpness control. Below center, it operates like your "resolution" (Y channel filtering, with no filtering at center), above center, it operates like your "sharpness" (edge enhancement). It might be less confusing if you combined the two in the manner I've described, and clearly indicated the center position with no filtering and no edge enhancement.
NewRisingSun wrote:
Why are the "resolution" settings for Composite and S-Video presets anything other than maximized? If I understand "resolution" correctly as the N/Y channel bandwidth, there is no channel limit in Composite and S-Video environments.
We have to distinguish between TVs and computer monitors. The Apple IIe in 80-column text mode and "double hi-res" mode was capable of generating a signal all the way up to 7.2 MHz for use with specialized monitors, but TVs of the time couldn't display it clearly. This is because they used a crossover circuit to separate the luma (0-3.0 MHz) from the chroma (3.0-4.2 MHz). Composite computer monitors such as the ones that filled school computer labs appear to have used a (more expensive) notch filter, along with a "monochrome" button to ignore the color burst and disable all chroma processing. The advantage of baseband over RF wasn't the ability to handle signals above 4.2 MHz as much as fewer processing steps, where each step introduces noise and filter roll-offs.
Quote:
The ONLY environment where there is a limit on N/Y channel bandwidth is a radio-frequency modulated signal; here, there is a 4.2 MHz limit on the N/Y bandwidth, which means an effective resolution of 4.2 MHz * (52 + 59/90) µs = 221.153... pixels
Harry Nyquist wrote:
You forgot to double it.
Oh, and the Y is short. It's NIK-wist, not NYE-kwist.
For a signal in monochrome mode (luma up to 4.2), the horizontal resolution is 442 pixels. With the crossover, this drops to 3.0 MHz * 2 * (52 + 2/3) µs = 316 pixels, very close to the "320x240" commonly quoted for LDTV and each field of SDTV.
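To spell out that arithmetic (a quick Python sketch; resolution = 2 * bandwidth * active line time, using the 52 + 2/3 µs figure from above):
Code:
active_us = 52 + 2 / 3
for bandwidth_mhz in (4.2, 3.0):
    # the factor of 2 is the Nyquist doubling mentioned above
    print(bandwidth_mhz, "MHz ->", round(2 * bandwidth_mhz * active_us), "pixels")
# 4.2 MHz -> 442 pixels, 3.0 MHz -> 316 pixels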
Some of the richness of color in games that use dithering, like Blaster Master, comes from aliasing. An isolated pixel looks different depending on where the pixel falls relative to the phase of the color subcarrier.
So the NES generates a 2.68 MHz signal?
Quote:
but TVs of the time couldn't display it clearly. This is because they used a crossover circuit to separate the luma (0-3.0 MHz) from the chroma (3.0-4.2 MHz).
Is that a verified or an ad-hoc explanation? I would rather assume that old TV sets, as opposed to computer monitors, didn't have a baseband composite input and thus had to be fed an RF-modulated, and thus bandlimited, signal, and that this is the actual cause of the lack of sharpness. There certainly were TV sets with good notch and even comb filters available in the early-to-mid 1980s.
Quote:
Some of the richness of color in games that use dithering, like Blaster Master, comes from aliasing.
The richness of color in Blaster Master comes from chroma subsampling horizontally, not from aliasing.
Quote:
An isolated pixel looks different depending on where the pixel falls relative to the phase of the color subcarrier.
Those are cross-color and cross-luma artifacts. They cannot be used for effect with games, because the absolute phase of the color subcarrier is undefined on the Famicom.
NewRisingSun wrote:
So the NES generates a 2.68 MHz signal?
A signal with alternating white and black pixels is a square wave, with fundamental frequency 2.68 MHz. The signal has harmonics at 8 MHz and above, but these are filtered out either inside the NES or inside the TV. (I don't have an oscilloscope, so I can't test it myself.)
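To show where those numbers come from (a quick Python sketch; the pixel clock is the usual 6/4 of the color subcarrier):
Code:
fsc = 315 / 88                 # NTSC color subcarrier, ~3.58 MHz
pixel_clock = fsc * 6 / 4      # ~5.37 MHz
fundamental = pixel_clock / 2  # alternating pixels repeat every two pixels
print(fundamental)                          # ~2.68 MHz
print([fundamental * n for n in (3, 5)])    # square waves have only odd harmonics: ~8.05, ~13.4 MHz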
NewRisingSun wrote:
Those are cross-color and cross-luma artifacts. They cannot be used for effect with games, because the absolute phase of the color subcarrier is undefined.
They cannot be used predictably, except that they are guaranteed to differ from one 8x8 or 16x16 pixel metatile to the next. The games that use subcarrier crosstalk artifacts rely on the effect when they are used unpredictably.
Quote:
but these are filtered out either inside the NES or inside the TV.
And I'm arguing that the NES doesn't filter at all, and that the TV doesn't necessarily filter either.
Quote:
The games that use subcarrier crosstalk artifacts rely on the effect when they are used unpredictably.
And what games would that be? Again, to make that point, you have to separate crosstalk artifacts from chroma subsampling. I agree that Blaster Master and others use chroma subsampling for effect, but how and where do they use crosstalk artifacts?
NewRisingSun wrote:
And I'm arguing that the NES doesn't filter at all, and that the TV doesn't necessarily filter either.
Who has a scope to settle this?
NewRisingSun wrote:
Again, to make that point, you have to separate crosstalk artifacts from chroma subsampling. I agree that Blaster Master and others use chroma subsampling for effect, but how and where do they use crosstalk artifacts?
We could answer that by taking a screenshot in an emulator that only blurs chroma, compared to an equivalent screenshot in an emulator like Nestopia that emulates the whole composite encoder and decoder.
kevtris should be able to tell us what the NES output looked like on a scope. I doubt it's filtered.
NewRisingSun wrote:
But even a TV has more than 282 pixels of horizontal resolution, so this doesn't apply here.
I was referring to the combination of electron beam size, phosphor spacing, and how quickly the beam's amplitude can change (bandwidth).
NewRisingSun wrote:
On my TV, there is indeed a sharpness control. Below center, it operates like your "resolution" (Y channel filtering, with no filtering at center), above center, it operates like your "sharpness" (edge enhancement). It might be less confusing if you combined the two in the manner I've described, and clearly indicated the center position with no filtering and no edge enhancement.
This would be something for the emulator author to implement. I want to provide as many parameters as there are in the algorithm I use, though it is true that resolution and sharpness are similar in their negative regions.
NewRisingSun wrote:
Again, to make that point, you have to separate crosstalk artifacts from chroma subsampling. I agree that Blaster Master and others use chroma subsampling for effect, but how and where do they use crosstalk artifacts?
I think tepples' point was that games rely on all kinds of artifacts to "spice up" textures so that they don't visually repeat every 16 pixels, even though the texture is commonly that size. Vertical lines get ragged edges and dithered areas get slight color fringing. Since the artifact period is 3 pixels, a 16-pixel-wide texture only repeats exactly every 48 pixels (the least common multiple of 16 and 3), so a 16x16 texture acts somewhat like a 48x48 texture.
tepples wrote:
We could answer that by taking a screenshot in an emulator that only blurs chroma, compared to an equivalent screenshot in an emulator like Nestopia that emulates the whole composite encoder and decoder.
The algorithm I use processes luma and chroma separately when you have the artifacts set to -1, so Nestopia should allow this. I made some comparisons between the three (RGB, reduced chroma resolution, reduced chroma resolution and chroma->luma artifacts):
Quote:
I think tepples' point was that games rely on all kinds of artifacts to "spice up" textures so that they don't visually repeat every 16 pixels, even though the texture is commonly that size. Vertical lines get ragged edges and dithered areas get slight color fringing. Since the artifact period is 3 pixels, a 16-pixel-wide texture only repeats exactly every 48 pixels (the least common multiple of 16 and 3), so a 16x16 texture acts somewhat like a 48x48 texture.
I know that this is what happens; the question is if this is what the creators intended --- did they want a 16x16 texture to act like a 48x48 texture, or is that just an annoyance they put up with? I think it's the latter, again because the exact result is unpredictable, as 1) it may change whenever the screen is turned off or the unit is reset, and 2) the artifacts will be different, or even nonexistent, with different methods of Y/C separation from TV set to TV set.
I think people have this insistence on crosstalk artifacts because the Apple II uses them. The difference is that on the Apple II, they are entirely predictable and thus can be used for effect. Not on the Famicom.
As far as the bandwidth question is concerned --- while I would prefer the "composite" setting to be at maximum "resolution" as well for the reasons I outlined --- I'd rather add an RF setting for filtered Y --- we can all agree that for S-Video, "resolution" should be maximum, right?
By the way, the car in Blaster Master uses color 15 in the NTSC version, but 16 in the PAL version. Apparently, Sunsoft expects an NTSC NES to have a palette that makes color 15 look more red than magenta.
NewRisingSun wrote:
the question is if this is what the creators intended --- did they want a 16x16 texture to act like a 48x48 texture, or is that just an annoyance they put up with?
The question for me is, what did it look like when I played it so many years ago? The difference between these two questions shows two different paths one can take with an emulator. Trying to make it work how the designers of a particular game intended is going to get subjective because it's hard to know what they really intended, and each designer may have wanted the graphics to go in a different "direction".
NewRisingSun wrote:
I think people have this insistence on crosstalk artifacts because the Apple II uses them. The difference is that on the Apple II, they are entirely predictable and thus can be used for effect. Not on the Famicom.
I get your point about this, and maybe some people mistakenly believe that this effect on the NES and other systems is prominent. My insistence on them is that they exist and affect the image (and are significant, as shown above), hence are necessary to recreate the experience of playing a NES game.
NewRisingSun wrote:
As far as the bandwidth question is concerned --- while I would prefer the "composite" setting to be at maximum "resolution" as well for the reasons I outlined --- I'd rather add an RF setting for filtered Y --- we can all agree that for S-Video, "resolution" should be maximum, right?
One reason I don't have resolution increased for composite is that this makes the chroma->luma artifacts too prominent, partly due to the part of the algorithm that ensures that solid color areas don't have any repeating patterns due to roundoff error. As for S-Video and RGB, the preset has the resolution slightly reduced since again I think that a computer monitor is sharper than a TV, and so there is some room for increase. The presets aren't magic at all, just a set of parameters to make it useful with less configuration.
Quote:
The question for me is, what did it look like when I played it so many years ago?
Okay, but if that's the standard, you'd have to add all those other annoying things like differences in gamma, phosphor chromaticities and so on, which cannot be added to your optimized algorithm as they are not linear. That's why I assumed you just wanted the games to look "nice".
Quote:
partly due to the part of the algorithm that ensures that solid color areas don't have any repeating patterns due to roundoff error.
Setting resolution to maximum doesn't exhibit any repeating patterns in Nestopia... Didn't you fix the rounding errors already when you switched to single-precision numbers? I remember reading something about that.
Quote:
The presets aren't magic at all, just a set of parameters to make it useful with less configuration.
I just didn't want the inexperienced user to get a wrong impression about what S-Video is all about. That's all.
To me it's not about emulating what you see on a TV, but it's about emulating the image in the signal going to that TV. When the NTSC emulation can pass a double-blind test with a captured NES signal (which it already can for many, although trained eyes can still tell the difference), then it's good enough for me.
Quote:
To me it's not about emulating what you see on a TV, but it's about emulating the image in the signal going to that TV.
That's already been done:
You might say that's not the "image" being sent, but the truth is, there is no image, just this waveform. The only image is what appears on a TV, and what appears depends on how a TV decodes it. The only other decoding method is what tepples described, where you convert the waveform back into the NES palette indices that the PPU used to generate the signal, and end up with what an emulator has, which you can then display just the same, pixelated and without any blending.
So I find this an odd standard. What else matters than what you see on the TV with a NES connected? It seems like that is the only standard to apply. Unlike with the Turing Test where intelligence is hard to define clearly, it's easy to define this. Kevtris will soon have his FPGA NES outputting the exact same signal as a NES (which isn't at all difficult), so the TV is really the difficult and variable part.
blargg wrote:
You might say that's not the "image" being sent, but the truth is, there is no image, just this waveform.
No, but that waveform directly represents said image, and that's what I'm referring to. We shouldn't have to worry about all those other things that NRS referred to, IMO, simply because no two sets will display the exact same image given the exact same signal.
Quote:
The only image is what appears on a TV, and what appears depends on how a TV decodes it.
To a degree, you're correct, it depends on things like comb filters, FM traps, etc. But still, there's a "base" signal which is encoded in the NTSC standard, and that signal is pretty much the same no matter which NES generates it.
Quote:
So I find this an odd standard. What else matters than what you see on the TV with a NES connected? It seems like that is the only standard to apply.
I understand it's a very fine distinction that I'm making, but nonetheless it is a distinction. The reason I think what we have now is pretty much spot-on is the following: if you take the NTSC filter output, display it with a 15KHz video card on either an RGB or S-Video monitor, the result will be virtually indistinguishable from a real NES connected via composite (and if you connected that emulated output via composite it would still look really, really close, although you might have a few additional artifacts due to the two layers of composite, one emulated, one natural). To me, that's the gold standard. Besides, it's virtually impossible to truly emulate scanlines on a 31KHz+ monitor unless you're running at some insanely high resolution such that you can properly handle the soft scanline transitions that you get on a real TV; if you try to do it on a 480-line viewport then it's always going to look unnatural.
Quote:
Kevtris will soon have his FPGA NES outputting the exact same signal as a NES (which isn't at all difficult), so the TV is really the difficult and variable part.
I understand it's not difficult, since the NES's NTSC output is pretty much fully understood now. My only question is, what more really needs to be done to the emulation except for maybe minor tweaking?
Quote:
We shouldn't have to worry about all those other things that NRS referred to, IMO, simply because no two sets will display the exact same image given the exact same signal.
I don't understand that sentence. The things I talked about ARE the difference between any given two sets.
Quote:
if you take the NTSC filter output, display it with a 15KHz video card on either an RGB or S-Video monitor, the result will be virtually indistinguishable from a real NES connected via composite
If you use a TV out, you don't need to pre NTSC-filter the picture, because it's the TV card's and the displaying TV set's job to do that. That will only work however if you can get the TV card to output the slightly nonstandard timing that the NES uses. If you output a pre-NTSC-filtered image with standard NTSC timing, you'll get TWO layers of artifacting.
Quote:
If you use a TV out, you don't need to pre NTSC-filter the picture, because it's the TV card's and the displaying TV set's job to do that. That will only work however if you can get the TV card to output the slightly nonstandard timing that the NES uses. If you output a pre-NTSC-filtered image with standard NTSC timing, you'll get TWO layers of artifacting.
Since PC video outputs use proper 180 degree colorburst phase shift every scanline, they tend to output pretty clean video. Or if you're using S-Video out, it's definitely clean. When I connected a TV via composite, the NTSC filter made the output look much closer than without it. Without it, you get something like the Wii's Virtual Console.
If you say so. Still, the optimal thing would be to feed a normal RGB image to the TV card and have it output at NES timings. Unfortunately, it seems one would need to write new TV card drivers for that, if they allow such adjustment at all.
I'm a bit confused over the debating here, but I have a few questions:
- The NES does not try to filter/blend its own video signal. Is that correct, incorrect, or unknown?
- Artifacts are caused by imperfection of the TV's chroma/luma separator. With the NES's video signal, it is mostly because of this. Is this correct, incorrect, or unknown?
- If the above two are correct, would a theoretical TV with a very high quality chroma/luma separator be able to show noticeably less artifacts from a NES signal than usual?
composite video is always blended together, because it's composite video.
strangenesfreak wrote:
The NES does not try to filter/blend its own video signal. Is that correct, incorrect, or unknown?
The PPU doesn't filter luma and chroma before combining them. It is unknown whether anything filters the combined signal, but it's likely, especially on the RF side. The RF modulator has to do some filtering so that video signal components over 4.2 MHz don't bleed into the audio.
Quote:
Artifacts are caused by imperfection of the TV's chroma/luma separator. With the NES's video signal, it is mostly because of this. Is this correct, incorrect, or unknown?
Artifacts are caused by the luma and chroma signals not being properly band-limited before being combined.
Quote:
If the above two are correct, would a theoretical TV with a very high quality chroma/luma separator be able to show noticeably less artifacts from a NES signal than usual?
It depends on how high quality. An NTSC separator circuit specifically designed for Nintendo consoles could sync to the pixel clock (3/2 times the frequency of the color subcarrier), find the peak and trough in each pixel, and recover a nearly pixel-perfect stream comparable in quality to PlayChoice RGB video. But such a separator would produce worse results for any signal that isn't output by an NES or Super NES, so it's not likely that any commercial TV will implement it.
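A rough sketch of what such a pixel-locked separator might do, assuming the composite signal has already been digitized at 8 samples per NES pixel (the sample rate and function name are made up for illustration):
Code:
def separate_pixel_locked(samples, samples_per_pixel=8):
    luma, chroma_amp = [], []
    for i in range(0, len(samples) - samples_per_pixel + 1, samples_per_pixel):
        pixel = samples[i:i + samples_per_pixel]
        peak, trough = max(pixel), min(pixel)
        luma.append((peak + trough) / 2)        # midpoint of the subcarrier swing = luma level
        chroma_amp.append((peak - trough) / 2)  # half the swing = chroma amplitude (saturation)
        # recovering the hue would additionally need the phase of the peak
        # relative to the color burst, which this sketch omits
    return luma, chroma_amp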
Quote:
Artifacts are caused by the luma and chroma signals not being properly band-limited before being combined.
Well, they also appear with band-limited signals. Otherwise broadcast TV would always look squeaky clean, which it doesn't (at least on NTSC).
Quote:
It depends on how high quality.
It can be mathematically shown that it is impossible to perfectly separate the Y and C signals from any composite signal without additional information. The composite signal (N) is defined for NTSC-J as:
N = Y + U*sin(2*PI*Fsc*t) + V*cos(2*PI*Fsc*t)
(American NTSC scales Y by 0.925 and adds 7.5%. Modern NTSC en-/decoders also don't use I and Q, but U and V.)
Only N, Fsc and t are known.
"Additional information" can be the assumption that the previous line has the same picture data with the opposite subcarrier phase (that would be a line-comb filter), the assumption that the previous field has the same picture data with the opposite subcarrier phase (that would be a field-comb filter), or an assumption about a pixel clock, as tepples pointed out.
Quote:
But such a separator would produce worse results for any signal that isn't output by an NES or Super NES, so it's not likely that any commercial TV will implement it.
A commercial TV could detect nonstandard signals by determining the change of subcarrier phase from line to line. If it's 1/3 instead of 1/2, switch to "pixel clock separator".
NewRisingSun wrote:
Quote:
Artifacts are caused by the luma and chroma signals not being properly band-limited before being combined.
Well, they also appear with band-limited signals. Otherwise broadcast TV would always look squeaky clean, which it doesn't (at least on NTSC).
I wasn't talking about RF noise artifacts, which are minimal with 2 meter cables of decent quality.
Quote:
Quote:
It depends on how high quality.
It can be mathematically shown that it is impossible to perfectly separate the Y and C signals from any composite signal without additional information.
Provided that the Y and C signals were properly band-limited (Y in 0-3.0 MHz, C in 3.0-4.2 MHz) before being combined. On most 8-bit and 16-bit computers, they are not, and Y bleeds heavily into C (artifact colors) and vice versa (jagged vertical edges).
Quote:
"Additional information" can be the assumption that the previous line has the same picture data with the opposite subcarrier phase (that would be a line-comb filter), the assumption that the previous field has the same picture data with the opposite subcarrier phase (that would be a field-comb filter), or an assumption about a pixel clock, as tepples pointed out.
And I contend that such additional information is necessary to separate Y from C when the signals have been incompletely filtered before being combined.
Quote:
Quote:
But such a separator would produce worse results for any signal that isn't output by an NES or Super NES, so it's not likely that any commercial TV will implement it.
A commercial TV could detect nonstandard signals by determining the change of subcarrier phase from line to line. If it's 1/3 instead of 1/2, switch to "pixel clock separator".
And if it's 0, then what? Are we dealing with an Apple II or a Sega Genesis? Adding a decoder variant for each brand of classic console and classic computer would increase the cost of product development and the bill of materials. In a world where Wal-Mart discontinues any product that doesn't cut its price every 12 months, that's not suitable for the mass market.
Of Nestopia's NTSC filter attributes - Resolution, Sharpness, Color Bleed, Artifacts, and Fringing - which are affected by the pixel clock conflict between the NTSC standard and the NES video signal, which by the imperfectly combined signals on the NES's part, and which by a TV's imperfect Y/C separator?
In Nestopia 1.37:
- Both res and sharpness appear to be TV-side low-pass filtering of luma: res is IIR, while sharpness is FIR. A separator that uses a notch filter instead of a low-pass would set them higher.
- Color bleed appears to be band-pass filtering of chroma.
- Artifacts appears to represent C bleeding into Y inside the PPU, which produces jagged vertical lines.
- Fringing appears to represent Y bleeding into C inside the PPU.
As far as I can tell, res=1.0 sharp=1.0 bleed=1.0 artifacts=0.0 fringing=0.0 looks like the output of a GameCube or Wii running acNES.
Just for curiosity, but quality-wise, how are NTSC and PAL video signals different?
There are also similar artifacts on a PAL TV. I guess they aren't exactly the same, though; it's difficult to say if they look better or worse. They look pretty much the same, but they don't look very good when scrolling, unlike NTSC artifacts (or at least what Nestopia emulates; I've never seen actual NTSC artifacts in my life).
tepples wrote:
I wasn't talking about RF noise artifacts
Neither am I. Cross-color/cross-luma artifacts appear with any composite signal, bandlimited or not.
tepples wrote:
Provided that the Y and C signals were properly band-limited (Y in 0-3.0 MHz, C in 3.0-4.2 MHz) before being combined.
You haven't read properly. I said IMPOSSIBLE. It's IMPOSSIBLE to perfectly separate Y and C from any composite signal without additional information; it does not matter whether they were bandlimited before combining or not.
Also, that's not the "proper" bandlimiting method. The proper method is to limit U and V to 1 MHz, but Y is never limited before combining.
Bregalad wrote:
it's difficult to say if they look better or worse
Cross-luma is worse because it doesn't change from field to field. Cross-color is better because the subcarrier frequency is higher at 4.43 MHz instead of 3.58 MHz.
NewRisingSun wrote:
Neither am I. Cross-color/cross-luma artifacts appear with any composite signal, bandlimited or not.
It's IMPOSSIBLE to perfectly separate Y and C from any composite signal without additional information; it does not matter whether they were bandlimited before combining or not.
Absolutely correct. It's harder to see said crosstalk in a non-interlaced signal that conforms to the NTSC standard since it doesn't crawl but rather it flickers at 60Hz, but it's nonetheless there and a trained eye can see it (I myself have witnessed this looking at the output of my Commodore 128, and it's not even standard-conforming). It's not a factor of how the signal is generated but it's an inherent limitation of the signal itself. That's why S-Video was invented, to keep the two signals separate and thus prevent any crosstalk. You still have lossiness because U and V are combined into one signal but you don't have UV bleeding into Y and vice-versa.
Quote:
It's harder to see said crosstalk in a non-interlaced signal that conforms to the NTSC standard
Is that like a negative number that's positive?
Well, I know what you mean, I'm basically talking about if you were to take a standard NTSC signal and get rid of the half-lines so that it doesn't trigger interlacing but otherwise is standard, then there would be no crawl but the crosstalk would still be there.
LocalH wrote:
Well, I know what you mean, I'm basically talking about if you were to take a standard NTSC signal and get rid of the half-lines so that it doesn't trigger interlacing but otherwise is standard, then there would be no crawl but the crosstalk would still be there.
You mean like Neo-Geo AES's composite output?
strangenesfreak wrote:
- The NES does not try to filter/blend its own video signal. Is that correct, incorrect, or unknown?
- Artifacts are caused by imperfection of the TV's chroma/luma separator. With the NES's video signal, it is mostly because of this. Is this correct, incorrect, or unknown?
To NewRisingSun: Since it appears that you and tepples disagree on a number of things, I assume you agree with tepples that the NES doesn't try to purposely distort its own signal, since when tepples answered that, you did not object. But you don't agree with tepples that poor band-limiting is the only reason why artifacts appear. Would that mean you would agree that inherent imperfection of chroma/luma separation is the main cause of artifacts from the NES's video signal - and any other color composite signal?
It seems that a Wikipedia article on dot crawl may agree with that statement:
Quote:
Dot crawl can be greatly reduced by using a good comb filter in the receiver to separate the encoded chrominance signal from the luminance signal. However, the only complete solution to dot crawl is to not use composite video, and to use S-Video or component video processing instead.
Also, unless you agree with tepples on this, what do you think Nestopia's NTSC filter attributes - Resolution, Sharpness, Color Bleed, Artifacts, and Fringing - correspond to in terms of the pixel clock conflict between the NTSC standard and the NES, how the NES combines luma and chroma, how the TV attempts to separate luma and chroma, and how the TV filters the video signal?
I'm still curious as to what are good examples of games that look better with NTSC artifacts - I never really intended this thread to turn into this debate. XD
strangenesfreak wrote:
Would that mean you would agree that inherent imperfection of chroma/luma separation is the main cause of artifacts from the NES's video signal - and any other color composite signal?
Yes. In any composite signal, luma and chroma occupy overlapping frequency spectra; therefore, they cannot be perfectly separated without additional assumptions. It does not matter where the signal comes from, although the lack of pre-filtering for the chroma signal component makes for more artifacts than normal.
strangenesfreak wrote:
what do you think Nestopia's NTSC filter attributes - Resolution, Sharpness, Color Bleed, Artifacts, and Fringing - correspond to in terms of the pixel clock conflict between the NTSC standard and the NES,
The NTSC standard doesn't have a pixel clock, because it doesn't have pixels, being an analogue system.
strangenesfreak wrote:
how the NES combines luma and chroma,
You combine them by just adding them. Technically, the NES doesn't actually COMBINE anything, because it never starts out with separate luma and chroma signals; it generates the finished composite signal in one go.
strangenesfreak wrote:
how the TV attempts to separate luma and chroma,
"Artifacts" and "fringing" both are the amount of crosstalk between the luma and chroma signal components, with "artifacts" describing the resulting cross-luma, "fringing" the resulting cross-color artifacts.
strangenesfreak wrote:
and how the TV filters the video signal?
"Resolution" is the bandwidth of the demodulated luma signal component. "Sharpness" is edge enhancement on the demodulated luma signal component. "Color Bleed" is the bandwidth of the demodulated chroma signal components.
Sorry, regarding the "pixel clock" error, I meant conflict between the NES's pixel clock and the NTSC colorburst; that was mentioned in Brad Taylor's PPU document:
Quote:
You see, a 3.58 MHz signal, the NTSC colorburst, is required to be modulated
into a luminance carrying signal in order for color to be generated on an
NTSC monitor. Since the PPU's video out consists of basically square waves
(as opposed to sine waves, which would be preferred), it takes an entire
colorburst cycle (1/3.58 MHz) for an NTSC monitor to identify the color of a
PPU pixel accurately.
But now you remember that the PPU renders pixels at 5.37 MHz- 1.5x the rate
of the colorburst. This means that if a single pixel resides on a scanline
with a color different to those surrounding it, the pixel will probably be
misrepresented on the screen, sometimes appearing faintly.
Well, to somewhat fix this problem, they added this extra pixel into every
odd frame (shifting the colorburst phase over a bit), and changing the way
the monitor interprets isolated colored pixels each frame. This is why when
you play games with detailed background graphics, the background seems to
flicker a bit. Once you start scrolling the screen however, it seems as if
some pixels become invisible; this is how stationary PPU images would look
without this cycle removed from odd frames.
So how does this come in play in fringing? Would that mean fringing is based on both "natural" cross-color artifacts and this quirk of the PPU's video signal?
Taylor's document uses some very weird terminology.
In standard NTSC, one scanline holds 227+1/2 cycles of the color subcarrier (not the "burst" --- the burst is the 8-12 cycles during the back porch when the subcarrier is not suppressed). That means that the color subcarrier phase will be 180 degrees different from scanline to scanline, and field to field, causing any artifacts to be opposite from scanline to scanline and field to field, appearing as a two-stage dot crawl (0°, 180°) upon closer inspection, but canceling each other out over two (2) fields when viewed from a distance.
On the NTSC NES, one scanline holds 227+1/3 cycles of the color subcarrier. That means that the color subcarrier phase will be 120 degrees different from scanline to scanline and field to field. This means that it takes three (3) fields for any artifacts to cancel each other out, making for a more visible three-stage (0°, 120°, 240°) dot crawl.
That's why the PPU makes one particular scanline, during every second field only, longer by one pixel, so that while there are still three positions of the subcarrier from scanline to scanline, there are only two stages (0°, 240°) from field to field, causing a less fidgety "dot jitter" instead of "dot crawl", at the expense of artifacts not canceling each other out over time.
At least one game (Battletoads) manages to bypass that extra pixel, allowing you to see the original three-stage dot crawl.
In the PAL NES, there are also three stages of the color subcarrier from scanline to scanline, but its phase is LOCKED from field to field, causing any artifacts to be unchanging and thus very visible.
None of this is adjustable by the NTSC filter.
In short: for any given spot on the screen, the color subcarrier phase and thus the kind of artifacts, will be from field to field:
Standard NTSC: 0°, 180°, 0°, 180°, 0°, 180°...
NES normal: 0°, 240°, 0°, 240°, 0°, 240°...
NES Battletoads: 0°, 120°, 240°, 0°, 120°, 240°...
PAL NES: 0°, 0°, 0°, 0°...
Whereas the color subcarrier phase will be from scanline to scanline:
Standard NTSC: 0°, 180°, 0°, 180° ...
Any NES: 0°, 120°, 240°, 0°, 120°, 240°...
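Those per-scanline steps fall straight out of the fractional subcarrier cycles per line; a quick sketch, if anyone wants to check:
Code:
def phase_step_per_line(subcarrier_cycles_per_line):
    # only the fractional part of a cycle changes the phase at a given screen position
    return (subcarrier_cycles_per_line % 1) * 360

print(phase_step_per_line(227 + 1/2))   # standard NTSC: 180 degrees per scanline
print(phase_step_per_line(227 + 1/3))   # NTSC NES: ~120 degrees per scanline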
Interesting, thanks for the info! So in normal rendering, the NTSC NES skips the 120° stage for the field-to-field phase alteration - but why that stage specifically? How would the signal look different if it was (0°, 120°) or (120°, 240°) instead?
Quote:
but why that stage specifically? How would the signal look different if it was (0°, 120°) or (120°, 240°) instead?
It's not that specific stage --- as I wrote before, the absolute phase of the color subcarrier is actually undefined, so it might as well be 0°/120° or 120°/240°.
I just modified the algorithm for the hypothetical situation "what if the NES DID pre-limit the chroma signal bandwidth before combining it with the luma signal?"
First, the NES as it is, with no chroma bandwidth limiting before output (and no luma limiting after decoding in the receiver). This is what you get with the NTSC filter with "field merging" disabled:
The same with chroma bandwidth limiting before output. The luma/chroma separation in the receiver is the same. Obviously, cross-color artifacts will not be reduced because luma is not limited. However, cross-luma artifacts ARE greatly reduced:
Basically, the latter is how the NES could have looked with more effort (and cost) from Nintendo.
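In case anyone wants to experiment with the idea, here is a minimal sketch of that hypothetical encoder (the 3-tap low-pass and the function names are purely illustrative, not what my modified algorithm actually uses): band-limit U and V before modulating them onto the subcarrier, so the chroma sidebands stay close to Fsc and less of them lands in the luma band.
Code:
import math

FSC = 315 / 88  # NTSC color subcarrier in MHz; times below in microseconds

def lowpass(values, taps=(0.25, 0.5, 0.25)):
    # crude chroma band-limiting: a short FIR low-pass, edges clamped
    n = len(taps) // 2
    padded = [values[0]] * n + list(values) + [values[-1]] * n
    return [sum(w * padded[i + j] for j, w in enumerate(taps)) for i in range(len(values))]

def encode_prefiltered(y, u, v, times):
    u, v = lowpass(u), lowpass(v)
    return [yi + ui * math.sin(2 * math.pi * FSC * t) + vi * math.cos(2 * math.pi * FSC * t)
            for yi, ui, vi, t in zip(y, u, v, times)]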
Just for curiosity, but what should the Artifacts setting be set to for emulating the hypothetical, chroma band-limited situation you illustrated? I would think Artifacts should be changed, since you said cross-luma artifacts were reduced.