Since the OP was stuck on not even using multiplies, just shifts, I assumed that there were non-technical reasons for avoiding more flexible approaches, and stuck to one that was only slightly more "daring".
On the topic of compiler optimization: unfortunately, C generally (and, I believe, always in C99) uses round-toward-zero for signed division rather than round-toward-negative-infinity (Euclidean division), so a compiler can't simply optimize a signed divide-by-2^n into a right shift; when the operand is negative, it must also adjust the result.
tepples wrote:
It's fine to rely on implementation-defined behavior so long as you use a compile-time assertion.
And so long as most implementations actually have that behavior; otherwise your code breaks on the ones that don't, leaving the user to find an alternate implementation and make sure they catch every dependence on it.
Zepper wrote:
Code:
output = (value * 192) >> 7;
output_dac = (duty_sign)? -output: output;
That would work, and you could also fold in the upscaling from the NES 4-bit DAC to your 16-bit sample range. You wouldn't even need a right shift anymore, just a multiply, e.g.
multiplier = gain * 0.1 * 32767 / 15 and then
output = dac * multiplier.
The NES DACs are not signed, BTW. They actually do
output_dac = duty_sign ? 0 : output. And it does make an audible difference; notes have less punch with signed.