Hey guys!
I've been working on a Mega Man fangame in JavaScript for the past few months called Mega Man Battle Royale - I've posted the prototypes here: older one (more stuff) and newer one (with audio). I've been trying my best to make every aspect of the game as polished and accurate as possible (jump curve, channel separation, sprite flicker on overlap, grabbing colors from Nestopia/FCEUX, etc.).
To view a fullscreen preview, change the word "details" in each URL to "debug". Also note that the newer version works best in Firefox (it looks like Chrome has some issues with looping sounds).
I stopped development on the newer version because loading times were getting excessively long for just two songs and a big sound pack (5.7 MB+ across three requests to SoundCloud can take up to 10 minutes). Instead, I turned to the Web Audio API for sound sequencing. I was able to generate buffers for every possible note on the two square channels and the triangle channel, but then I got stuck on emulating the noise channel.
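For reference, this is roughly how I'm building the square-wave buffers (function and parameter names are my own, and the AudioContext wiring is omitted):

```javascript
// Fill a plain Float32Array with one NES-style square wave at a given duty
// cycle. The NES squares support duty cycles of 12.5%, 25%, 50%, and 75%.
// In the browser the result gets copied into an AudioBuffer, e.g.:
//   const buf = ctx.createBuffer(1, samples.length, ctx.sampleRate);
//   buf.copyToChannel(samples, 0);
function squareWaveSamples(freq, duty, sampleRate, lengthSec) {
  const n = Math.floor(sampleRate * lengthSec);
  const samples = new Float32Array(n);
  const period = sampleRate / freq; // samples per cycle (may be fractional)
  for (let i = 0; i < n; i++) {
    const phase = (i % period) / period; // 0..1 position within the cycle
    samples[i] = phase < duty ? 0.5 : -0.5;
  }
  return samples;
}
```

The triangle buffers are built the same way, just with a 16-step ramp instead of a two-level wave.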
I'm seeing in a lot of places that the noise samples are entirely different from white noise, so I'm still super confused about what the waveform looks like (and how to generate it), how to change its pitch, and so on. Is there a standard for the contents of a typical 32,767-step sample, or is it determined somewhat randomly at runtime?
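From what I've pieced together so far, the noise channel seems to be driven by a 15-bit linear-feedback shift register rather than true white noise - here's my current (possibly wrong) understanding in code:

```javascript
// Sketch of my understanding of the NES APU noise channel: a 15-bit LFSR
// clocked at a rate picked from a 16-entry period table (not shown here).
// Each clock, feedback = bit 0 XOR bit 1 (XOR bit 6 instead in the short,
// 93-step mode), the register shifts right, and feedback enters at bit 14.
// The channel is silenced whenever bit 0 is set, so the 1-bit output below
// is what the long mode's 32,767-step sequence consists of.
function makeNoiseGenerator(shortMode = false) {
  let lfsr = 1; // the register starts out non-zero
  return function nextSample() {
    const tap = shortMode ? 6 : 1;
    const feedback = (lfsr & 1) ^ ((lfsr >> tap) & 1);
    lfsr = (lfsr >> 1) | (feedback << 14);
    return (lfsr & 1) ? 0 : 1; // bit 0 set => channel muted
  };
}
```

If that's right, "changing the pitch" would just mean clocking this register faster or slower, not altering the sequence itself - but I'd appreciate confirmation.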
I'd also like advice on which method I should use to retrieve sound data: reading and caching data from an .nsf file, writing a program that parses .txt exports from FamiTracker, or manually typing each and every sound effect into the code.
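For context, the kind of format I had in mind for the FamiTracker route would be something like this (a made-up layout, just to illustrate - not any existing standard):

```javascript
// Hypothetical compact encoding: each track is a flat array of
// [noteNumber, startFrame, lengthFrames] triples. Easy to generate from a
// FamiTracker text export, and cheap to ship as JSON.
const square1 = [
  // note, start, length (frames at 60 fps)
  45, 0, 12,
  47, 12, 12,
  49, 24, 24,
];

// Walk a flat track array and yield one event object per triple.
function* events(track) {
  for (let i = 0; i < track.length; i += 3) {
    yield { note: track[i], start: track[i + 1], length: track[i + 2] };
  }
}
```

That's the sort of thing I'd write a converter for, but if .nsf parsing is more practical I'd rather not reinvent the wheel.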
In short, I have two questions:
1) How does the NES's noise channel work, and how do I imitate it?
2) What's the best way to encode sound data for use in JavaScript?
Many thanks!