psycopathicteen wrote:
I've run into this situation before when I point out examples of inefficient code in games, and I get responses like "but that only takes 5% of a frame, it's probably not worth the time optimizing." I know it's a small amount of time, but it's still enough to add up quickly if you make several other optimizations of the same size. I think people spend too much time trying to find the "king optimization" that would completely obliterate slowdown, or measuring which optimization is bigger, instead of actually making the optimizations.
If you're referring to me, I am usually just trying to point out that it's perfectly normal to find thousands of instances of slightly sub-optimal code in a completed game. There is always something else to optimize, and the smaller the kind of optimization you look for, the more instances you will find. There is no such thing as perfect optimization; there is always a faster or smaller way to do it if you look long enough.
As always, unless I'm paying you, I don't have a say in what's worth your time. If you want to go and make a thousand micro-optimizations, nobody here is stopping you. If you want to prove that you can do this to save a game like Super R-Type from slowdown, go ahead and do it, but until you put up a patch I think you're just blowing smoke. The reason these optimizations weren't done in the first place is the same reason you haven't done them: it's a lot of time and effort, and eventually you need to either ship the game or call it off.
As tepples was getting at, the usual approach to optimization in the professional world goes something like:
1. Work on completing the game.
2. Notice a performance issue that you think needs to be fixed.
3. Profile the code to measure what is contributing the most to your performance problem.
4. Use the profiling and your knowledge of the game to identify the best optimization candidate (expected performance impact vs. expected work).
5. Optimize your best candidate.
6. If performance is not good enough, go back to 3 and select the next best candidate.
7. If performance is good enough, go back to 1.
Your approach seems to skip the profiling step entirely and tackle optimization by arbitrarily picking routines and rewriting them. This is a backwards and blind approach. If you're not profiling, you're not optimizing. You need to measure first, optimize, then measure again afterward to make sure you've made a difference. If you try to do this without assessing the impact of each change, you will waste tremendous amounts of time optimizing things that simply don't contribute enough to performance.
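To make that concrete, here's a minimal sketch of the measure/optimize/measure loop in C. Everything in it is made up for illustration (the routine, the workload, the frame count); the only point is that you time the suspect code before you touch it and again after.

Code:
/* Minimal measure-first sketch. update_entities() is a hypothetical
 * routine suspected of eating frame time; the workload is a stand-in,
 * not taken from any real game. */
#include <stdio.h>
#include <time.h>

#define ENTITY_COUNT 256
static int positions[ENTITY_COUNT];

static void update_entities(void)
{
    for (int i = 0; i < ENTITY_COUNT; i++)
        positions[i] += i & 7;          /* stand-in per-entity work */
}

int main(void)
{
    enum { FRAMES = 100000 };

    clock_t start = clock();
    for (int f = 0; f < FRAMES; f++)
        update_entities();
    clock_t end = clock();

    /* Record this number BEFORE optimizing, change one thing, then run
     * it again -- if the number doesn't move, the change didn't matter. */
    printf("%.3f ms for %d frames\n",
           1000.0 * (double)(end - start) / CLOCKS_PER_SEC, FRAMES);
    return 0;
}

On real SNES code you'd get the numbers from a cycle-counting emulator or a raster-time measurement instead of clock(), but the discipline is identical: no before/after numbers, no optimization.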
And yeah, sometimes it is worth trying to make a lot of micro-optimizations, but it is usually the very last resort. It is probably the most tedious, difficult, and time-consuming way to try to optimize something.
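For a sense of scale, an individual micro-optimization usually looks something like this (a made-up C example of strength reduction; nothing here is from an actual game):

Code:
/* The kind of change "micro-optimization" means: replacing a multiply
 * with shifts and an add. Each edit like this saves a few cycles;
 * only hundreds of them together move the needle. */

/* before: multiplies are expensive on CPUs without a hardware
 * multiplier (the 65816 has no multiply instruction at all) */
unsigned offset_slow(unsigned row) { return row * 320; }

/* after: 320 = 256 + 64, so two shifts and an add give the same result */
unsigned offset_fast(unsigned row) { return (row << 8) + (row << 6); }

Finding one of these takes minutes; finding the hundreds you'd need to fix real slowdown is the tedious part.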
One of the best things I ever read on this topic was Michael Abrash's "Black Book" of graphics programming.
It's available here for free:
http://www.drdobbs.com/parallel/graphics-programming-black-book/184404919

A lot of the book deals with now-outdated hardware, but there is a whole lot of good material on optimization and effective programming. In particular I recommend reading the first chapter, which presents a simple file-checksum program and then proceeds to optimize it again and again, laying out the whole approach.
Chapter 1:
http://twimgs.com/ddj/abrashblackbook/gpbb1.pdf
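To give a flavor of that chapter's progression without reproducing its listings, here is the general shape of it, paraphrased from memory in C: the first version pays stdio overhead on every single byte, and a later version amortizes the I/O over large reads.

Code:
/* The shape of the chapter 1 progression, paraphrased rather than
 * copied: same 16-bit checksum, far less per-byte overhead. */
#include <stdio.h>

/* naive version: one getc() call per byte */
unsigned short checksum_naive(FILE *f)
{
    unsigned short sum = 0;
    int c;
    while ((c = getc(f)) != EOF)
        sum += (unsigned short)c;
    return sum;
}

/* faster version: read big blocks, then sum from a local buffer */
unsigned short checksum_buffered(FILE *f)
{
    static unsigned char buf[4096];
    unsigned short sum = 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        for (size_t i = 0; i < n; i++)
            sum += buf[i];
    return sum;
}

The book keeps going from there, restructuring the program around what the hardware is actually doing, which is exactly the mindset the rest of it teaches.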