For me, this is a case of wishing I hadn't watched (and skimmed) said YT videos explaining, in part, how this demo was accomplished. I also watched the one on
Smash It.
When viewing these videos, the first thing I asked right off the bat was "how exactly was this guy generating 65816 machine code from C# and F#?" The answer is: he isn't. I'm not that privy to or savvy with these tools and PLs, but from what I can discern, it looks like all of this is accomplished as follows:
C# or F# is used as the main PL. OpenGL is used to render the graphics and other video/animation sequences on a per-frame basis (or more likely, just what changed per frame) into a custom format, which is later converted into a SNES-compatible graphics data format. It most definitely is not "raw frames", but something more clever. Getting the actual graphics on-screen, though, is just DMAing or HDMAing data to the PPU. The toolset generates 65816 code -- and from what I can tell, the fellow is not particularly 65816-savvy (you can tell from every rep/sep line having a comment describing which register sizes are being changed). So what you end up with is a toolset that generates tons and tons of monotonously repeated 65816 code doing DMA or HDMA to blit graphics/changes where appropriate. He very rarely goes into the innards/guts of what matters -- the actual 65816 code -- and when he does, there's a lot of "uhhh", giving the feeling that he/whoever wrote it once, tweaked it over time, and chose not to think about it any more. There seems to be some intelligent/proper use of HDMA for certain things, but otherwise the "guts" I was hoping for weren't covered. I'm basing most of this on
what you see here. (They also used mode 20/LoROM, interestingly enough, "out of laziness" to quote the video -- the resulting ROM size is 32 Mbit / 4 MBytes.)
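To make the "generated code" point concrete, here's a toy sketch (mine, not the actual toolset -- the structure and sizes are illustrative guesses) of a generator that spits out exactly the kind of monotonous 65816 DMA-blit code I mean. The PPU/DMA register addresses ($2116, $4300-$4305, $420B) are the standard SNES ones; note the rep/sep comments of the sort that gave him away:

```python
# Toy sketch: emit repetitive 65816 code that DMAs graphics data to the
# SNES PPU via DMA channel 0. Register usage is standard SNES hardware;
# the generator structure is my guess at the approach, not the real tool.

def emit_dma_blit(src_bank: int, src_addr: int, size: int, vram_addr: int) -> str:
    """Emit 65816 assembly that DMAs `size` bytes to VRAM via channel 0."""
    lines = [
        "  rep #$20        ; 16-bit accumulator",
        f"  lda #${vram_addr:04X}",
        "  sta $2116       ; VRAM word address (VMADDL/VMADDH)",
        f"  lda #${src_addr:04X}",
        "  sta $4302       ; DMA0 source address low/high (A1T0L/A1T0H)",
        f"  lda #${size:04X}",
        "  sta $4305       ; DMA0 transfer size (DAS0L/DAS0H)",
        "  sep #$20        ; 8-bit accumulator",
        f"  lda #${src_bank:02X}",
        "  sta $4304       ; DMA0 source bank (A1B0)",
        "  lda #$01",
        "  sta $4300       ; DMA0 control: CPU->PPU, two regs write once (DMAP0)",
        "  lda #$18",
        "  sta $4301       ; B-bus target $2118, VRAM data write (BBAD0)",
        "  lda #$01",
        "  sta $420B       ; fire DMA channel 0 (MDMAEN)",
    ]
    return "\n".join(lines)

# A per-frame "script" is just many of these, back to back:
frame_code = "\n".join(
    emit_dma_blit(0x7E, 0x2000 + i * 0x800, 0x800, i * 0x400) for i in range(4)
)
```

Run this and you get pages of near-identical register stores per frame -- which, as far as I can tell, is what the bulk of the generated ROM code looks like.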
He doesn't really go into the SPC700 player, but from what this thread says, it's probably a bit more impressive. SPC700 and audio in general are something I've always shied away from, so I'll just leave it at that. A tool called
GNU Rocket -- an existing demoscene sync tracker, as far as I know, not something he created -- was also involved (in the case of Nu), which looks to be how the A/V is kept properly synced. That leads me to believe there's audio that's output and converted into data that's streamed/fed to the SPC700 periodically. But maybe not.
In short: this whole thing is designed
COMPLETELY for a type of A/V playback -- I would almost call it "smart/optimised FMV", with some additional effects. It's designed in such a way, I think, that any system/console/whatever could be used as a playback target. The visuals all boil down to pre-rendered OpenGL and shader math. It's sort of like a glorified Bad Apple release, except with more smarts to it; it's not really "software 3D", which is what I was hoping for.
All of this is certainly not something I could do (or even dream of doing) -- I don't have this type of skill -- but it did pose a kind of ethical quandary for me: how exactly does this qualify as a demo, in the classic demoscene sense of the word?
To me, this isn't a demo at all. It isn't showing off any kind of capability of the system; it's just glorified audio/video playback. Compare this to
8088mph or any of the late-80s/early-90s demos on the PC, Amiga, Apple IIGS, C64, etc., where code wasn't really "generated" in this way (macros excluded) but hand-coded, and where visuals/effects often relied on unexpected hardware tweaks/capabilities (
example,
another example (the latter I believe is running on an emulator, given some of the visual anomalies I see)), or did stuff that was otherwise unorthodox (ex. ATX's split-screen SNES mode demo).
But that left me with a bit of a conundrum: there was still a substantial amount of engineering and effort put into accomplishing this... so is it worthy of being called a demo after all? Maybe so. I'm still not sure how I feel about it, though. I surely can't be the only old demoscene guy left pondering this question.