I'm sure a lot of people have implemented this, but I'm posting it here anyway in case someone hasn't done it this way; I think it's faster than testing bits. It only covers the N and Z flags.
Code:
BYTE g_Flags[256];
...
void cpuinit()
{
    WORD i;

    /* Precompute the status bits for every possible 8-bit result:
       bit 1 (0x02) is Z when the value is zero, bit 7 (0x80) is N. */
    for (i = 0; i < 256; i++)
        g_Flags[i] = (i == 0 ? 0x02 : 0x00) | (i & 0x80);
}
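If anyone wants to sanity-check the table outside of an emulator, here is a minimal standalone sketch. The stdint typedefs, build_flags() and the asserts are just for this example, they are not part of my core:
Code:
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

typedef uint8_t  BYTE;
typedef uint16_t WORD;

static BYTE g_Flags[256];

/* same precomputation as cpuinit() above */
static void build_flags(void)
{
    WORD i;
    for (i = 0; i < 256; i++)
        g_Flags[i] = (i == 0 ? 0x02 : 0x00) | (i & 0x80);
}

int main(void)
{
    build_flags();
    assert(g_Flags[0x00] == 0x02); /* zero result -> Z set, N clear */
    assert(g_Flags[0x01] == 0x00); /* small positive -> neither set */
    assert(g_Flags[0x80] == 0x80); /* bit 7 set -> N set, Z clear */
    printf("flag table looks good\n");
    return 0;
}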
So when we have to set or clear the flags for an instruction (suppose it's an LDA, which only affects N and Z):
Code:
.. // Code for LDA here
g_CpuContext.P &= 0x7D;                    // clear N (bit 7) and Z (bit 1)
g_CpuContext.P |= g_Flags[g_CpuContext.A]; // OR in the precomputed N/Z bits
First we clear the N and Z flags, because we don't know whether the new value needs them cleared; then we OR in the bits we pre-calculated in the cpuinit() routine.
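For comparison, this is roughly what the usual bit-testing version looks like. It's only a sketch: set_nz() and the stripped-down g_CpuContext struct are just names I made up for this example, not code from my emulator:
Code:
#include <stdint.h>

typedef uint8_t BYTE;

/* minimal stand-in for the real CPU context */
static struct { BYTE A; BYTE P; } g_CpuContext;

/* conventional flag update: test the result every time */
static void set_nz(BYTE value)
{
    g_CpuContext.P &= 0x7D;      /* clear N and Z */
    if (value == 0)
        g_CpuContext.P |= 0x02;  /* set Z when the result is zero */
    if (value & 0x80)
        g_CpuContext.P |= 0x80;  /* set N when bit 7 is set */
}
The table version replaces the two tests and branches with a single indexed read, which is why I think it ends up a bit faster.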
If this post helps someone, I'm glad; if not, the admins are welcome to delete it.