I'd heard that a lot of 65816 programmers tend to keep the M flag 8-bit and the X flag 16-bit. I found I rarely need 16-bit indexing and started to wonder whether this might be the wrong way around, so last night I rewrote my current projects to default to 16-bit M and 8-bit X. (This actually went pretty quickly.) And... I'm not convinced that this scheme is any better or worse. The hardware seems screwy enough that both schemes are about equally bad.
Let's have a look at the implications of both schemes...
8-bit M, 16-bit X:
- May have to toggle the size of M more often, which can be bug-prone
- Instructions such as TAX, TAY will be 16-bit moves while X is 16-bit, and it's easy (especially for 6502 coders) to forget to clear the upper byte first (see the sketch after this list)
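Here's a rough sketch of the kind of thing that bites me with this scheme (generic WDC-style syntax; "offset" and "table" are just made-up labels):

    sep #$20        ; M = 1: 8-bit accumulator
    rep #$10        ; X = 0: 16-bit index registers

    lda offset      ; 8-bit load: only the low byte changes, the hidden B byte keeps old junk
    tax             ; TAX is a 16-bit move because X is 16-bit, so the junk lands in X's high byte
    lda table,x     ; index can be way out of range

    ; one fix: clear B first
    lda #$00
    xba             ; B = 0
    lda offset
    tax             ; high byte of X is now guaranteed zero

The annoying part is that the broken version looks perfectly normal to 6502 eyes.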
16-bit M, 8-bit X:
- Encourages the use of 16-bit vars where 8-bit vars will do (slightly larger and slower code)
- Can't use STZ for 8-bit values (it becomes a 16-bit store while M is 16-bit)
- Sometimes I find myself deliberately loading 8-bit variables into my 16-bit A, knowing that I will soon TAX or TAY and chop off the upper byte (sketched below). I'm not sure this is good practice. I do use Hungarian notation for 65816 ASM code, so it's pretty obvious when I'm doing this.
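And a rough sketch of what I mean under this scheme (again generic syntax; "flag", "count" and "table" are made-up labels):

    rep #$20        ; M = 0: 16-bit accumulator
    sep #$10        ; X = 1: 8-bit index registers

    stz flag        ; 16-bit store: writes $0000 over flag AND the byte after it

    lda count       ; 16-bit load: picks up count plus whatever byte follows it in memory
    tax             ; X is 8-bit, so only the low byte survives; X = count
    lda table,x

The TAX trick works fine as long as I remember that the load itself read past the variable, which is why I'm not sure it's good practice.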
So my impression is the first scheme results in slightly smaller and faster code, but it has slightly more opportunities for gotchas that could add hours of debugging time. Of course, bits of code where every cycle counts can still be written to use whatever scheme is appropriate, so I don't think the performance hit is a significant concern.
Thoughts?