Hello all,
I'm currently finishing up my APU. I'm working on the mixer module right now and want to make sure it's correct before I start debugging each individual channel. My question is about the mixer's LUT generation. I am referencing: http://wiki.nesdev.com/w/index.php/APU_Mixer_Emulation
My audio codec's sample input is a signed 16-bit integer value. The wiki page above says to generate a LUT for the pulse channels with the following equation:
Code:
pulse_table[n] = 95.52 / (8128.0 / n + 100)
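For reference, here is a quick C sketch of how I'm generating the table on the host side to sanity-check my values (just my own test code, assuming the standard 31-entry table indexed by pulse1 + pulse2, with entry 0 forced to zero):
Code:
#include <stdio.h>

/* Pulse mixer LUT; index n = pulse1 + pulse2, so 0..30 (31 entries). */
static double pulse_table[31];

static void build_pulse_table(void)
{
    pulse_table[0] = 0.0;                       /* n = 0 would divide by zero */
    for (int n = 1; n < 31; n++)
        pulse_table[n] = 95.52 / (8128.0 / n + 100.0);
}

int main(void)
{
    build_pulse_table();
    printf("pulse_table[5] = %.5f\n", pulse_table[5]);  /* prints 0.05535 */
    return 0;
}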
So what would be the correct method of converting the resulting floating-point numbers to signed 16-bit integer samples for my audio codec's D-to-A converter? For example, if n = 5, the equation above gives pulse_table[5] ≈ 0.05535.
I need to convert that to a value that makes sense for my audio codec's signed 16-bit sample input. How would I do it?
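Just to make the question concrete, the kind of conversion I have in mind looks something like the sketch below (mixer_to_s16 is my own made-up helper; I'm not sure whether centering the unipolar output around zero like this is the right approach, or whether I should be using a DC-blocking filter instead):
Code:
#include <stdint.h>

/* Sketch only: map the mixer's 0.0..1.0 output onto the signed 16-bit range.
 * Subtracting 0.5 is my guess at removing the DC offset of the unipolar
 * signal; this is exactly the part I'm unsure about. */
static int16_t mixer_to_s16(double mixed)       /* mixed in [0.0, 1.0] */
{
    double centered = (mixed - 0.5) * 2.0;      /* roughly [-1.0, 1.0] */
    if (centered > 1.0)  centered = 1.0;        /* clamp for safety */
    if (centered < -1.0) centered = -1.0;
    return (int16_t)(centered * 32767.0);
}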
Thanks everyone!
Jonathon