emulator debugger design (attn: byuu)

This is an archive of a topic from NESdev BBS, taken in mid-October 2019 before a server upgrade.
emulator debugger design (attn: byuu)
by on (#174983)
You seem to be fixated on a design where the emulator components know absolutely nothing about the existence of a debugger (no "debugger cruft" in the chip classes, as you put it). I don't think that's a realistic approach, for a couple of reasons:

1) If the emulator doesn't know anything about the debugger, then the debugger conversely has to have extremely invasive knowledge of and access to the emulator. Result: you try to refactor a core component, your debugger breaks. If your design requires "#define private public" to work, you need to rethink your design.

2) The design of the emulator can unnecessarily limit what the debugger is able to do. For example, a debugger obviously needs to read from emulated memory, but reading e.g. SA-1 IRAM or BWRAM in bsnes causes a cothread switch, which will probably have bad effects if it happens from the debugger thread. Other chips have memory-mapped registers that latch counters or clear interrupt lines when read, which you likewise don't want the debugger to accidentally trigger. Historically, bsnes debuggers have handled this problem by hardcoding address ranges which the debugger isn't allowed to read from. This has two problems: (a) the address ranges with side effects on reads depend on the type of cartridge and expansion device installed, which further increases the amount of knowledge the debugger needs about the emulated system, and (b) it limits the capabilities of the debugger to a frankly unacceptable degree. Debugging SA-1 games is pretty much impossible if you can't see IRAM or BWRAM.

This is how I handle the debugger-reading-arbitrary-MMIO problem in bsnes-classic. It's shamelessly stolen from MAME, which implements a single, extremely powerful debugger across thousands of emulated systems. Yes, it requires every read/write handler in every chip to include the debugger test, but debugger_access() compiles down to nothing in a non-debug build, and all the debug-specific code for side-effect-free reads likewise gets optimized out as dead code.
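
Roughly, the shape of it is this (a sketch only: debugger_access() is the real hook, but the chip, the address, and the DEBUGGER define here are made-up illustrations, not bsnes-classic code):

Code:
#include <cstdint>

#if defined(DEBUGGER)
inline bool& debugger_flag() { static bool value = false; return value; }
inline bool debugger_access() { return debugger_flag(); }
#else
inline constexpr bool debugger_access() { return false; }  //dead-codes the debug-only path
#endif

struct ExampleChip {
  uint8_t counter = 0;
  uint8_t latch   = 0;

  //hypothetical MMIO read handler: a normal read latches the counter,
  //a debugger read observes state without mutating it
  uint8_t read(uint16_t addr) {
    if(addr == 0x4200) {
      if(!debugger_access()) latch = counter;  //side effect skipped for debugger reads
      return latch;
    }
    return 0x00;
  }
};
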
Re: emulator debugger design (attn: byuu)
by on (#175019)
> You seem to be fixated on a design where the emulator components know absolutely nothing about the existence of a debugger

It's not possible and my design reflects that. For instance:
https://gitlab.com/higan/higan/blob/mas ... ry.cpp#L23

However, yes, I obviously want to minimize the intrusions as much as humanly possible.

> If your design requires "#define private public" to work, you need to rethink your design.

Do you really think there's a code quality improvement to go from this:

Code:
struct CPU {
privileged:  //public: with -DDEBUGGER, private: without
  struct Registers {
    Reg a, x, y, s, d, b, p;
  } r;
} cpu;

void debug() {
  if(cpu.r.a) cpu.r.x += cpu.r.y;
}


To this?

Code:
struct CPU {
  enum class Register : uint {
    A, X, Y, S, D, B, P,
  };

  Reg& getRegister(Register n) {
    switch(n) {
    case Register::A: return r.a;
    case Register::X: return r.x;
    case Register::Y: return r.y;
    case Register::S: return r.s;
    case Register::D: return r.d;
    case Register::B: return r.b;
    case Register::P: return r.p;
    }
    struct bad_register_exception {};
    throw bad_register_exception{};
  }

  void setRegister(Register n, Reg value) {
    getRegister(n) = value;
  }

private:
  struct Registers {
    Reg a, x, y, s, d, b, p;
  } r;
} cpu;

void debug() {
  if(cpu.getRegister(CPU::Register::A)) cpu.setRegister(CPU::Register::X,
    cpu.getRegister(CPU::Register::X) + cpu.getRegister(CPU::Register::Y)
  );
}


If so, then I really have no words. We're at a total impasse.

There's only one debugger for higan. If I change CPU::Registers, then I have to patch code anyway. It's really not that significant whether said patching goes inside the CPU core or in the debugger itself in that case.

If there were a dozen SNES debuggers maintained off some official API, then obviously the latter would start to make more sense. But as it stands, it's a crazy amount of boilerplate for no reason whatsoever.

> This is how I handle the debugger-reading-arbitrary-MMIO problem in bsnes-classic.

Yeah, I like that idea. I can wrap something around "bool debugging();" to block accesses internally.

I could further extend the uint8 return value to a maybe<uint8>, so that if there's no value to read, you get back ?? in the hex view instead of wondering why a given range is all zeroes.
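
Roughly like this, with std::optional standing in for a maybe<> type (a sketch of the idea, not actual higan code; debugRead and renderHexCell are hypothetical names):

Code:
#include <cstdint>
#include <cstdio>
#include <optional>

//hypothetical debugger-side read: returns nothing when the address can't be
//observed without side effects, so the hex view can render "??"
std::optional<uint8_t> debugRead(uint32_t addr) {
  (void)addr;
  return std::nullopt;  //placeholder; a real core would do a side-effect-free bus read
}

void renderHexCell(uint32_t addr, char out[3]) {
  if(auto value = debugRead(addr)) {
    std::snprintf(out, 3, "%02x", static_cast<unsigned>(*value));
  } else {
    out[0] = out[1] = '?';  //unreadable: show "??" instead of a misleading 00
    out[2] = '\0';
  }
}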

> It's shamelessly stolen from MAME, which implements a single, extremely powerful debugger across thousands of emulated systems.

That would be a dream for me. I really don't want to implement separate debuggers for all of my emulators.
Re: emulator debugger design (attn: byuu)
by on (#175025)
byuu wrote:
Do you really think there's a code quality improvement to go from this:


Funny, I thought you liked standard interfaces. That's exactly how the MAME debugger is designed. At startup, each CPU in the emulated machine registers its register set with the debugger: it informs the debugger how many registers it has, what their names are, and whether each needs special treatment (e.g. because it's a set of flags, or a pseudo-register that maps to different physical registers depending on CPU mode). Then the debugger uses a standard interface with get_register(which) and set_register(which, value) methods to access the registers. Each CPU's implementation of the interface casts "which" to an appropriate enum and does whatever CPU-specific work is necessary (e.g. constraining values to 8-bit in 8-bit mode).
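
The shape of it is roughly this (illustrative names and signatures only, not MAME's actual API):

Code:
#include <cstdint>
#include <string>
#include <vector>

//generic view of one register, as the debugger sees it
struct RegisterInfo {
  int         index;  //opaque id the CPU core understands
  std::string name;   //"A", "X", "P", ...
  int         bits;   //current width, for display formatting
};

//interface every emulated CPU implements; the debugger only talks to this
struct DebuggableCPU {
  virtual std::vector<RegisterInfo> listRegisters() const = 0;
  virtual uint64_t getRegister(int index) const = 0;
  virtual void setRegister(int index, uint64_t value) = 0;  //core applies its own constraints
  virtual ~DebuggableCPU() = default;
};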

With this design, the debugger doesn't need individual knowledge of the dozens of emulated CPU types in MAME. When someone adds support for a new CPU, all the new code, including the CPU-specific debugger interface implementation, goes in devices/cpu/foo.

Such a design can easily be extended from CPUs to other devices with registers like the PPU.

Your design puts responsibility on the debugger to know that if flags.x == 1 then the high bytes of X and Y must be 0, and that if flags.e == 1, flags.m and flags.x aren't allowed to change. That's almost as bad as requiring the debugger to know what address ranges not to read from to avoid unwanted side effects on emulated state.
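
Concretely, with that kind of interface the constraint lives inside the core's own setter, something like this (a sketch, reusing the register/flag names from the examples above):

Code:
#include <cstdint>

struct CPU65816 {
  struct { uint16_t x = 0, y = 0; } r;
  struct { bool e = false, m = false, x = false; } flags;

  //a debugger write goes through the core, which enforces the mode rules:
  //with flags.x set the index registers are 8-bit, so the high byte is forced to 0
  void setIndexX(uint16_t value) {
    r.x = flags.x ? (value & 0x00ff) : value;
  }
};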

Your strawman example is nonsensical because a real debugger wouldn't be doing hardcoded operations on hardcoded registers, it'd be interpreting commands from the user. Your entire design appears to be optimizing for test mocks rather than actual usage. Who cares how tiny and elegant debug(vram.write, 0x2000, 0xff); is? You're never going to do that, unless your debugger UI is going to involve the user writing scripts in C++ and compiling them into the emulator (which is something I admit I wouldn't entirely put past you).