Sendo wrote:
Could you explain in a bit more detail, man? Please bear in mind I'm a complete noob, so I wouldn't have the first idea about creating a sound engine! (I'm not sure I even know what one is in this context.)
Sorry! Sometimes I get long-winded, so lately I've been trying to condense what I say; in doing so, I guess I can be too brief. Basically, by "sound engine" I just mean the code in your project that handles playing sound effects and music.
With something like Famitracker or NT2, when you're done making all your songs, they all get compiled into an NSF (NES Sound Format file). This is what you include in your project if you choose to use one of those tools. The file actually contains code that you execute, and it plays your songs for you. The downsides are that the NSF may be large, since its code is designed for features you may not have used, and that its code uses RAM you may have allocated to something else; basically, it may consume a lot of resources you could conserve by making your own sound engine. What's nice is that you don't have to worry about designing your own engine, and you get a nice user interface to make music in.
Making a sound engine can be extremely beneficial, depending on your project's restrictions. Have you experimented with making sound on the NES by storing values directly into the NES's sound registers? This is where it would all start. Making a sound engine is basically figuring out how to get the values into the sound registers that make the sounds you want. It takes a bit of getting used to, but once you're familiar with how to make a specific pitch at a specific volume with a specific tone, you have the essentials down. It's just a matter of getting the right sounds to play at the right time; that's all music really is.
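To make the register side concrete, here's a small C sketch (just the arithmetic, not NES code itself) of how a target frequency becomes the bytes you'd store to pulse 1's registers at $4000-$4003. The register bit layout and the timer formula are the APU's documented behavior; the function names and the choice to leave the sweep off are mine.

```c
#include <stdint.h>

/* NTSC NES CPU clock in Hz (a real hardware constant). */
#define CPU_CLOCK 1789773UL

/* The pulse channels derive pitch from an 11-bit timer value:
   frequency = CPU_CLOCK / (16 * (timer + 1)).
   Solve for timer and round to nearest. */
static uint16_t pulse_timer_for_freq(double freq) {
    return (uint16_t)(CPU_CLOCK / (16.0 * freq) - 1.0 + 0.5);
}

/* Build the four bytes you'd store to $4000-$4003 to start a
   sustained tone on pulse 1: duty/volume, sweep (disabled),
   timer low, timer high. This only computes the values; on real
   hardware you'd write them to the memory-mapped registers. */
static void pulse1_note_bytes(double freq, uint8_t volume,
                              uint8_t duty, uint8_t out[4]) {
    uint16_t t = pulse_timer_for_freq(freq);
    out[0] = (uint8_t)(((duty & 3) << 6) | 0x30 | (volume & 0x0F));
    out[1] = 0x00;                        /* sweep unit off */
    out[2] = (uint8_t)(t & 0xFF);         /* timer low 8 bits */
    out[3] = (uint8_t)((t >> 8) & 0x07);  /* timer high 3 bits */
}
```

For A-440, for example, the timer works out to 253 ($FD), which lands within a fraction of a hertz of the target.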
One of the main reasons I designed my own sound engine was to conserve space. When you design the engine, you design how the data (music) that's fed into it is structured. For example, I might design an engine that expects one byte to define the length of the note I want played in the song, and then another to define the pitch. Or, as I have done in my current project, I might design a way for the pitch and length to fit into one byte, which in the long run saves a lot of space. If space or RAM usage isn't really a concern for you, you may not need to make your own engine. I will say the engines Famitracker and NT2 offer are pretty flexible, and you should be able to do most things with them, so the features they support shouldn't be an issue. It's just the resources they consume.
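As an illustration of that kind of packing (my own hypothetical layout, not the actual format described above), one byte can hold a 5-bit pitch-table index and a 3-bit length-table index, halving the data size compared to two bytes per note:

```c
#include <stdint.h>

/* Hypothetical one-byte note format: low 5 bits index a pitch
   table (32 pitches), high 3 bits index a length table (8 common
   durations). */
#define NOTE(len_idx, pitch_idx) \
    ((uint8_t)((((len_idx) & 0x07) << 5) | ((pitch_idx) & 0x1F)))

static uint8_t note_pitch(uint8_t n)  { return n & 0x1F; }
static uint8_t note_length(uint8_t n) { return n >> 5;   }
```

The trade-off is range: 32 pitches and 8 lengths cover a lot of chiptune music, but if you need more of either, you're back to multi-byte notes or escape codes.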
In terms of how I use Cubase 6, I just use it for actually composing the music. Once I like the way it sounds, I create a version of the music designed for my sound engine completely by hand. By that, I mean I'm basically defining the hex values that my sound engine will read to play the song. I don't have any special conversion tools or anything to streamline the process, unfortunately.
Does that sort of make sense? I know that, being new to this, it can be a lot to absorb; I started off with absolutely no programming experience whatsoever. So if you'd like me to clarify anything, I'd be happy to do so.