This has been beaten to death, but I've never actually understood how it works. I gather that the CPU has to write to audio RAM through the SPC700 and the two have to be "synced up", but what does that really mean? My guess was that because the SPC700 runs at a lower frequency, you need to make sure you're writing to it when a cycle is happening on both chips, but the clocks of the main CPU and the SPC700 aren't even related by a clean ratio (1/2.68 ≈ 0.373), so I have no clue how this works. I guess it's never perfect, but a cycle takes up some period of time, so maybe the SPC700 could pick the data up toward the end of a cycle and it wouldn't make a difference, or something like that. Then again, a cycle doesn't last until the next one begins, does it?
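From what I've read since, the two chips never have to line up cycle-for-cycle at all: they talk through four latched I/O ports ($2140-$2143 on the 65816 side, $F4-$F7 on the SPC700 side), and the "syncing" is just a software handshake on top of those latches. Something like this sketch, where the port-accessor names are made up for illustration:

```c
#include <stdint.h>

/* Hypothetical accessors for the four communication ports; on real
 * hardware these are memory-mapped ($2140-$2143 on the 65816 side,
 * $F4-$F7 on the SPC700 side).  The names here are invented. */
extern void    cpu_write_port(int port, uint8_t value);
extern uint8_t cpu_read_port(int port);

/* Send one byte to the SPC700 with a simple counter handshake:
 * put the data on port 1, bump the counter on port 0, then spin
 * until the SPC700 echoes the counter back.  The latches hold the
 * value until each side gets around to reading it, so the two
 * clocks never need any particular relationship. */
void send_byte(uint8_t data)
{
    static uint8_t counter = 0;

    cpu_write_port(1, data);        /* the data byte         */
    cpu_write_port(0, ++counter);   /* the "new data" signal */

    while (cpu_read_port(0) != counter)
        ;                           /* wait for the echo     */
}
```

I believe the boot ROM's upload protocol works roughly like this too: each side only ever waits on a value appearing in a port, never on the other chip's clock.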
What I was really wondering, though: in the 32 kHz, 16-bit sound demo that apparently eats all the CPU time, is that because of decompression, or because of waiting for the right moment to write to the SPC700?

I don't even know what bit depth in sound is, but I guess it's the range of values the sound wave can take, so if I drew a line graph of the waveform, both axes would only have whole numbers: the y axis would span the bit depth, and the x axis would tick along at the sample rate (the "kHz" number).

The thing is, would you ever really need to jump from 0 to 65535 in a single sample? Could you do a sort of lossy compression where, instead of 16-bit audio, you have 8-bit audio in which each sample can only move up or down by a limited step from the previous one (see the sketch below)? Would that sound like crap even at 32 kHz (or rather, would it sound better than 16-bit 16 kHz)? You'd also have to have the SPC700 decompress it, and I'm not too sure how feasible that is.

I think I heard that the DSP (for the most part, though there's a way around it) actually uses lossless "BRR" compression (I don't know how it's formatted, but I can look into it). But if it is lossless, then that means the streaming rate could be inconsistent, unless you doctored all your samples to work around it and made them kind of lossy, but whatever. Of course, a constant streaming rate only matters if you're streaming the data to the SPC700 and feeding it straight through the DSP, i.e. not building up a buffer in audio RAM.
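The 8-bit delta idea I'm describing would look something like this. The function names are made up, and STEP is an arbitrary scale factor I picked; each sample can only move by a clamped step from the last one, and that clamping is exactly where the loss comes from:

```c
#include <stdint.h>
#include <stddef.h>

#define STEP 256  /* arbitrary scale: one delta unit = 256 PCM units */

static int16_t clamp16(int32_t v)
{
    if (v >  32767) return  32767;
    if (v < -32768) return -32768;
    return (int16_t)v;
}

/* Turn 16-bit PCM into signed 8-bit deltas.  Any jump steeper than
 * +/-127 steps gets clamped, so sharp transients smear (the loss). */
void delta_encode(const int16_t *pcm, int8_t *out, size_t n)
{
    int32_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        int32_t diff = (pcm[i] - prev) / STEP;
        if (diff >  127) diff =  127;
        if (diff < -128) diff = -128;
        out[i] = (int8_t)diff;
        prev   = clamp16(prev + diff * STEP); /* track what the decoder sees */
    }
}

/* Rebuild 16-bit PCM by accumulating the deltas. */
void delta_decode(const int8_t *in, int16_t *pcm, size_t n)
{
    int32_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev   = clamp16(prev + in[i] * STEP);
        pcm[i] = (int16_t)prev;
    }
}
```

For what it's worth, BRR turns out to be a relative of this idea: each block is a fixed 9 bytes for 16 samples (a header byte with shift and filter bits, then sixteen 4-bit residuals), so the compressed data rate is actually constant, which would answer my streaming-rate worry.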
I actually thought of something else... if the SPC700 were completely busy streaming data, it couldn't decompress any lossy samples at the same time, could it? Grr...
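Just to put numbers on that worry, here's a toy budget calculation. The ~1.024 MHz SPC700 clock is real, but the cycles-per-byte cost of the handshake is a pure guess on my part:

```c
#include <stdio.h>

/* Back-of-envelope budget for streaming raw 32 kHz / 16-bit mono audio.
 * cycles_per_byte is an assumed handshake cost, not a measured figure. */
int main(void)
{
    const double spc_clock       = 1024000.0;  /* Hz, SPC700 core clock */
    const double sample_rate     = 32000.0;    /* samples per second    */
    const double bytes_per_samp  = 2.0;        /* 16-bit mono           */
    const double cycles_per_byte = 20.0;       /* assumed handshake cost*/

    double bytes_per_sec = sample_rate * bytes_per_samp;
    double cycles_needed = bytes_per_sec * cycles_per_byte;
    double busy_fraction = cycles_needed / spc_clock;

    printf("bytes/sec:   %.0f\n", bytes_per_sec);            /* 64000 */
    printf("SPC700 busy: %.0f%%\n", busy_fraction * 100.0);  /* ~125% */
    return 0;
}
```

At those guessed numbers the transfer alone already overruns the SPC700's cycle budget, which would mean no time left over for decompression, exactly the problem I was afraid of.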