This is one of a series of attempts at showing how to generate bandlimited sound in the simplest possible way with as little math as possible. I use a NES square wave as an example, but this applies to any waveforms that are mostly flat. For a more detailed version, see my previous attempt.
This is the original waveform from the NES at its 1.79 MHz clock rate:
PC sound runs at a much lower rate than this, so samples must be taken less often:
Unfortunately, this simple approach doesn't give good sound quality. One sign of this is that the high and low parts of the waveform sometimes come out one sample longer than at other times. Instead of exploring the technical reasons why this doesn't work, we can simply look at what would happen if we connected the original NES signal to the sampler's line input on the PC. This is what we'd get:
The apparent noise around the transitions is caused by filtering that the PC's sampler performs before actually sampling the signal. If we looked at the signal after filtering but before sampling, this is what we'd see:
Every transition looks the same after filtering. It's the resulting sample points that differ, based on where the transition falls relative to the two nearest sample times. If it falls exactly on a sample point, you get the most ripply sampled version. If it falls exactly between sample points, you get the least ripply version. If it falls somewhere in between, you get varying degrees of ripples.
Generating a sampled version is just a matter of finding each transition in the original NES signal, then adding the appropriate samples as described above. It doesn't really matter why the ripples are there, just that you generate the same result as the PC would when sampling the real thing. The following portable C sample code implements this and writes the result to a sound file:
bandlimited_square.c