Question on APU Mixer

This is an archive of a topic from NESdev BBS, taken in mid-October 2019 before a server upgrade.
Question on APU Mixer
by on (#79949)
Hello all,

I'm currently finishing up my APU. I'm working on the mixer module right now and want to make sure I have it correct before I start debugging each individual channel. My question is about the mixer's LUT generation. I am referencing: http://wiki.nesdev.com/w/index.php/APU_Mixer_Emulation

My audio codec's sample input is a signed 16-bit integer value. The wiki page above says to generate a LUT for the pulse channels with the following equation:

Code:
pulse_table[n] = 95.52 / (8128.0 / n + 100)

So what would be the correct method of converting the resulting floating-point numbers to signed 16-bit integer samples for my audio codec's D-to-A converter? For example, if n=5, then according to the above equation, LUT[5] = 0.05535.

I need to convert that to a value that makes sense for my audio codec's signed 16-bit sample input. How would I do it?

Thanks everyone!

Jonathon

by on (#79954)
Floating-point samples generally cover a range from -1.00 to 1.00. Integer samples cover a range from -32768 to 32767. So to convert floating-point samples to integer samples, first clip them to the range [-1.00, 1.00), and then multiply each sample by 32768.
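
In C, that conversion might look something like this (a minimal sketch; the function name is illustrative):

Code:
#include <stdint.h>

/* Convert a floating-point sample to a signed 16-bit sample.
   Clip to [-1.0, 1.0) first so the scaled result fits in an int16_t. */
int16_t sample_to_s16(double s)
{
    if (s < -1.0)
        s = -1.0;
    else if (s > 32767.0 / 32768.0)
        s = 32767.0 / 32768.0;
    return (int16_t)(s * 32768.0);  /* -1.0 -> -32768, just under 1.0 -> 32767 */
}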

by on (#79957)
So I think what you're saying is to take the resulting pulse_out[n] and tnd_out[n] floating-point values for each sample, add them together to get the final sample, then clip to [-1.0, 1.0) and multiply by 32768. Is that correct?

If so, that won't exactly work for me. Recall that my emu is in hardware (an FPGA), so having floating-point values at any step in the conversion process is going to cause problems. I need to convert the raw floating-point LUT values to integers so that everything is integer from the very start. So what would be the integer equivalents of the pulse and tnd LUTs, assuming the final result needs to be a signed 16-bit sample?

Thanks!

EDIT: I got the answer I was looking for on #nesdev. Thanks Kevtris!

by on (#79959)
jwdonal wrote:
So I think what you're saying is to take the resulting pulse_out[n] and tnd_out[n] floating-point values for each sample, add them together to get the final sample, then clip to [-1.0, 1.0) and multiply by 32768. Is that correct?

That, or convert pulse_out[] and tnd_out[] to integers in advance and clip to +/- 32768 when adding them. Is that what kevtris recommended?

by on (#79960)
Haha, yes, I suppose it would be nice if I also posted the answer. Sorry, I'm a dork. :)

Kevtris said:

1) Multiply all LUT values by 65535 and round down
2) Invert the MSbit of the final 16-bit _summed_ result (i.e. pulse_out + tnd_out) to convert the sample to signed.
3) Done!

And, obviously, if the DAC expects unsigned samples then skip step 2.

And thank you for your help tepples! You led me in the right direction!

EDIT: Just FYI, the above solution worked perfectly for me.
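
EDIT 2: For anyone who finds this thread later, here is a rough C sketch of the recipe above. The table and function names are mine, and the tnd formula is the companion one from the same wiki page:

Code:
#include <stdint.h>
#include <math.h>

static uint16_t pulse_lut[31];  /* index = pulse1 + pulse2 (0..30) */
static uint16_t tnd_lut[203];   /* index = 3*triangle + 2*noise + dmc (0..202) */

/* Step 1: scale the wiki's floating-point LUT values by 65535 and round down. */
void build_luts(void)
{
    pulse_lut[0] = 0;
    tnd_lut[0] = 0;
    for (int n = 1; n <= 30; n++)
        pulse_lut[n] = (uint16_t)floor(65535.0 * 95.52 / (8128.0 / n + 100.0));
    for (int n = 1; n <= 202; n++)
        tnd_lut[n] = (uint16_t)floor(65535.0 * 163.67 / (24329.0 / n + 100.0));
}

/* The two table maxima sum to at most 65533, so the unsigned sum always
   fits in 16 bits. Step 2: invert the MSbit of the summed result to
   convert the unsigned (offset-binary) sample to signed. Skip the XOR
   if your DAC expects unsigned samples. */
int16_t mix(unsigned pulse_index, unsigned tnd_index)
{
    uint16_t u = pulse_lut[pulse_index] + tnd_lut[tnd_index];
    return (int16_t)(u ^ 0x8000);
}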