Thefox mentioned Small C in another thread where we got onto the topic of higher-level languages. I'm always toying with implementing compilers and thought this would be a great thing to investigate.
Although a high-level language is never going to produce code that's as good as or better than hand-written assembly in every case, it can drastically decrease the cost of development and maintenance (in terms of time). When combined with hand-written assembly for performance-critical segments such as the display kernel, this can often produce programs which are "fast enough" for microcomputers.
We have already seen some adoption of cc65 within the community, and the "fast enough" principle has proven true in some cases. However, C89 is a big language whose complexity can hurt the efficiency of the generated code.
So, assuming the above is true and that such a language would be valuable to some community members, I am starting research on Small C with an eye toward porting it to the NES platform.
Questions
What would be the advantage of implementing Small-C over the existing cc65?
I'm sure more questions will abound as we try to answer the first one.
Links
Wikipedia's very uninformative article
Dr. Dobb's Small C Resource CD-ROM
Small-C Compiler for the BBC/Master
If I remember correctly, CC65 is/was a Small C variant that has grown a little more ANSI compliance since.
cc65 docs wrote:
It is based on a C compiler that was originally adapted for the Atari 8bit computers by John R. Dunning. The original C compiler is a Small C descendant but has several extensions, and some of the limits of the original Small C compiler are gone
Honestly, I don't think a new C/Small C compiler for 6502 could realistically be much better than CC65. In theory - maybe, although the question is how much extra speed we are talking about, is it worth huge effort to develop a compiler to gain, say, 5-10%? The same % could be gained easier, just by rewriting some time critical part of code into assembly code.
Another thing is that compiled code speed is most often a smaller issue than its size.
Is LLVM flexible enough to target 8-bit MCUs? And is 6502 similar enough to an AVR or PIC?
LCC might be easier to target, though LCC is non-free in the same way as cc65 proper.
tepples wrote:
Is LLVM flexible enough to target 8-bit MCUs? And is 6502 similar enough to an AVR or PIC?
I was looking this up the other day. The answers are effectively "no", "heck no", and "sorta".
LLVM used to have a PIC16 (14-bit instruction word) target, but it was removed between 2.8 and 2.9.
The thread at http://lists.cs.uiuc.edu/pipermail/llvm ... 43394.html provides the quote "An eight-bit, accumulator based, Harvard ISA is the Trifecta of Doom as far as LLVM is concerned, basically."
PIC16 is a little stricter than 6502 (zero or one indexing registers); PIC18 is a little looser (three indexing registers instead of two, one of which can do all sorts of funny indexing modes). But the pointer registers (FSR0,1,2) are even less first class than X and Y are.
AVRs have a (comparatively) large number of general purpose registers, so the ISA is basically incomparable.
It's plausible one might manage an asm-level translation from PIC asm to 6502 asm, but I suspect the results will be inferior to what cc65 currently generates.
Shiru and I appear to be on the same page here.
I don't think there would be any speed benefit over cc65 for a Small C implementation. Small C still retains concepts that have to be virtualized on the 6502; namely parameter stacks and execution frames.
Funnily enough, HLLs can be compiled into very dense, but very slow programs. See Threaded Code. Small C does not use threaded code proper, but does use subroutines for complex math and switch statement evaluation, and can be made to use jsr stubs to implement comparison. This might result in Small C code being slightly smaller than cc65 code, but the same could be accomplished with a little tinkering in cc65.
So for now I don't see any reason to try and implement Small C.
Now onto the thought of a "C Subset" that might be defined for the platform. Here's my evaluation of the low-level (target) components of C and how well they fit the NES architecture.
Matches Hardware
1-byte signed and unsigned math
n-byte unsigned math
8-bit signed and unsigned comparison
Static array dereference within byte range
Pointer dereference within byte range (although pointer storage is expensive)
Subroutine pointers
Easily Implemented, but Less Efficient
Subroutine pointer tables
Full pointers (storage still limited or expensive)
n-byte signed math
n-byte comparison
Parameterized subroutines
Bloated
Execution Frames / Automatic Storage
Parameter stack
Yay, now that's out of the way we can talk about the language features. Keeping in mind what target components each feature uses, we can categorize them similarly.
Matches Hardware
Structures
Native Arrays
Native Arrays of Structures (the compiler would interleave this into byte arrays)
Complex expressions (without subroutine calls, using graph reduction to avoid a stack)
Conditional flow control (if, else, do, while, for)
Imperative flow control (break, continue, return)
1-byte math
Easily Implemented, but Less Efficient
Pointers
Parameterized subroutines
Subroutine return values
Table-based flow control (switch, case, default)
n-byte math
Bloated
Variable-length parameter lists
Recursion
Automatic storage
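The "interleaved into byte arrays" idea for native arrays of structures can be sketched in plain C as a structure-of-arrays transform. Everything below (struct, names, sizes) is a hypothetical illustration of the technique, not output of any actual compiler:

```c
#include <assert.h>
#include <stdint.h>

/* Array-of-structures view (what the programmer would write): */
struct actor { uint8_t x, y, hp; };

/* Structure-of-arrays layout (what a 6502-friendly compiler might
   emit instead): one byte array per field, each indexable with a
   single indexed addressing mode. */
#define MAX_ACTORS 8
uint8_t actor_x[MAX_ACTORS];
uint8_t actor_y[MAX_ACTORS];
uint8_t actor_hp[MAX_ACTORS];

/* actors[i].hp -= d becomes one indexed read and one indexed write. */
void damage_actor(uint8_t i, uint8_t d) {
    actor_hp[i] = (uint8_t)(actor_hp[i] - d);
}
```

On the 6502 each of these field accesses maps to a single `lda actor_hp,x` / `sta actor_hp,x` pair, which is the whole point of the interleaving.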
So if we take only those features present in the first two categories, we end up with a language that is fairly easy to implement and should produce code that is similar in size and run-time speed to hand-written (but not necessarily hand-optimized) assembly. However, the language would lack some of the expressiveness of C.
So then the question becomes, is this worth pursuing? Is the reduction in compiled code size worth giving up cc65 and developing a new compiler? Is the gain in productivity for the "glue code" worth the development invested in the compiler?
All I can say is, "maybe".
Any input to the babbling?
Shiru wrote:
Honestly, I don't think a new C/Small C compiler for 6502 could realistically be much better than CC65. In theory - maybe, although the question is how much extra speed we are talking about, is it worth huge effort to develop a compiler to gain, say, 5-10%? The same % could be gained easier, just by rewriting some time critical part of code into assembly code.
Another thing is that compiled code speed is most often a smaller issue than its size.
THIS.
As Shiru has pointed out to us on several occasions, size is the real issue, not speed. Yes, we all look at CC65 compiled code and freak out about how much slower it will be, but THAT'S NOT THE PROBLEM! There's already a great solution to that issue: just optimize the code by hand for the few times it actually matters. We want the HLL not because we can't write assembly or can't conceptualize bank-switching, but because of the time investment involved with maintaining assembly on a large project. Any HLL is going to have the size issue ultimately, and even assembly has the same issue; it's just easier to overcome in assembly since you're already so intimate with the hardware. The way I think of it: I'd rather spend a day writing in a HLL and an extra day optimizing where needed to create a game that runs at 1/2 frame rate, as opposed to spending 5 days writing the entire thing in assembly at full frame rate.
Eventually NROM just isn't big enough, and the NES's address bus can only handle so much linearly. The only 'limitless' linear solution is to move to the SNES. So while more efficient/denser compiled code will allow you to fit more code into each KB, you eventually run out of space even if you have the most heavily optimized, compressed assembly. Don't think the issue is that big enough ROMs don't exist or are too expensive; they're dirt cheap and plentiful. You can't even buy ROMs in current new-stock technology smaller than 128KB. But you've got to bankswitch if you want access to the dirt cheap bits of today's market. It's only a dime's worth of hardware away, plus a compiler that will let you reach those banks easily.
Sometimes I wonder if complex compiler bank handling is even that necessary, depending on one's knowledge of bank switching and lack of desire to conserve ROM/bank space. Given 8/16KB banks, I'm pretty sure I can put all my screen drawing routines in a single bank; sprite updates in another; AI, sound, game state logic, whatever, they all get their own bank if they come close to needing it. I think most of us can handle a specialized function call or special designators to handle different banks as long as things can be structured properly. My original thought was to develop each bank on its own and manually piece it together somehow. But it's really not that easy or development-friendly to do all that manually, even if there isn't any need to save bank space and segregate everything related to each separate piece of the game engine. If done cleanly you'd probably have all of these separate pieces in different source files anyway. It's not hard to conceptualize only being able to call functions in that file or in the fixed 'file'/bank. WRAM, if available, could be divided/banked in a similar fashion, I'm sure, allowing a large number of variables and/or scratch RAM.
So as I see it we just need to come up with a reasonable structure/framework that allows things to be compiled easily to get things off the ground. Any HLL would do, CC65 is already there, may as well take advantage of the groundwork it's already laid. We need to walk before we can run. We need to start simple and get a HLL compiled and up and running on something like UxROM. There's always room for improvement from the compiler and/or hardware. I'll do the compiler work myself if I have to, I just need to put down the hardware first...
One of the best features of C IMO is portability of the game to other devices like Shiru did with Zooming Secretary on the iOS. Personally I'd take any HLL though, the features that qbradq is presenting would be a great advantage assuming the compiler will let me bankswitch even if the bankswitching isn't abstracted away from me.
If cc65 is too high level, then I'm not sure what everyone is looking for that can't be done with ca65.
Perhaps:
https://www.assembla.com/spaces/ca65hl/wiki
It's still kind of a WIP, but it is fully functional as can be seen here:
http://www.romhacking.net/documents/635/
For loops could be implemented easily. Parameters, I have working in my own way with another library/macpac.
Of course this is very limited compared to something that you could call C, but it can (at least for me) make things much easier and faster, and it is almost always going to generate code that is as good as assembly.
It is a complex subject. First, we should define why exactly a HLL is necessary and preferable to assembly. Then we have to figure out what the problem is with existing solutions.
My opinion is that a HLL is necessary because it speeds up development time very significantly. It also makes code maintenance easier.
C is a nice choice because of the portability. What if you don't want the code you wrote to be specific to the NES, but to target other platforms? Even for something "similar" such as the SNES, you'd have to rewrite all the code to take into account the extra instructions and 16-bit registers. With a HLL, it would be trivial, and the code would be optimized for the available instruction set.
This is why C is preferable to anything else, even if I am fully aware that there are quite a few problems with the C language. Yes, the syntax is ambiguous, anything that uses function pointers is going to be impossible to read or write without consulting the documentation, and the way pointers are declared and used is confusing and error-prone. Casting is often necessary but is ugly. The usage of ++ or -- statements inside an expression is ambiguous. The worst two "features" of C are using == for the equality test, while a simple = is also valid but will yield wrong behavior, and the fact that the names "char", "short" and "int" say nothing about their respective bit sizes and the data ranges that are safe before an overflow happens.
HOWEVER, despite all these problems, C is simple, easy to use, and has become a sort of universal standard over the past 20 years, especially in embedded development. You can develop in C for any platform in the world, including the NES, thanks to CC65. So what is the problem?
The problem is that, each time I look at code generated by CC65, it nearly gives me a heart attack. And yes, I am serious. Of course I can understand it cannot be as optimal as hand-optimized assembly code, BUT I can't help feeling that CC65 is terrible when it comes to generating "efficient" code.
But why would that be a problem?
The answer is that, in order to get a game running, it is of course not necessary to get the most efficient 6502 code possible for a given program, but it is desirable to get somewhat close. If the generated code is 10x what it "should" be, you'll have to use a huge ROM and a complex mapper for a game that would fit in an NROM cartridge, and the game would run at reduced frame rate for no reason.
I made some demo that simply puts some text in a dialogue box. At some point I have a routine that shifts the lines up a row. This very routine took 3 frames to complete, when in assembly I could probably do it in a few scanlines!
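For reference, the kind of routine described (shifting dialogue-box lines up one row) can be sketched in portable C. The box dimensions and function name here are hypothetical, not taken from the actual demo:

```c
#include <assert.h>
#include <string.h>

#define COLS 32   /* hypothetical row width in tiles  */
#define ROWS 4    /* hypothetical dialogue box height */

/* Shift every row of the text buffer up by one; blank the last row. */
void scroll_up(unsigned char buf[ROWS][COLS]) {
    memmove(buf[0], buf[1], (ROWS - 1) * COLS); /* rows 1..N-1 -> 0..N-2 */
    memset(buf[ROWS - 1], ' ', COLS);           /* clear the bottom row */
}
```

Hand-written 6502 would do the same with a short indexed copy loop, which is why a 3-frame compiled version feels so painful.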
CC65 is inefficient for multiple reasons, including:
1) The usage of a software stack for everything (variables, arguments, etc.)
2) The usage of an AX pseudo-register. There are already few registers; why waste them this way and end up with even fewer?
3) It never uses indexed addressing. Y is reserved for the stack and X for the high bits of AX. Any array access will take about 15 instructions instead of 1, using multiple additions and compares before a final indirect addressing.
If a compiler for the 6502 should be (re)written, it should fix those 3 points, and probably a few others. Then it'd surpass CC65 very easily. This could also be an unofficial modified version of CC65, because it's open source.
I am just brainstorming, but I want to prove a point: that CC65 is terribly inefficient, that a HLL is necessary, and that C, despite its problems, is the best candidate for that HLL.
cc65 maybe 'sucks', although I'd prefer to use terms like 'not very effective', 'not perfect' etc, but it is certainly better than some other C compilers for microcomputers around. Just try tcc816 in a large project to feel the difference. cc65 is already here, it works, it is proven to be good enough to make a NES game with scrolling that runs at full frame rate, even without rewriting parts of the game code into assembly. Sure, things could be better, but making a C compiler is a huge task, especially one that produces better code and is as reliable. Even having a competent and motivated person to handle the task, it may take many years, like 10. So in reality we have a very simple choice: use what we have right now to make things we want to do right now, or dream about better stuff not actually doing anything for indefinite period of time.
I edited my post above to remove all occurrence of "suck".
And yes, that pretty much sums it up, but I'm still unsatisfied with the current situation.
I don't know if it would take 10 years. First of all there is no need to write a complete compiler; only the backend has to be made. No boring parenthesis matching, etc.
Then, Dennis Ritchie says in his book it can be done in 1-2 months. Of course this is for a full-time employee (or multiple employees?) and probably for a cheap non-optimized compiler.
There are many things CC65 does rather poorly. Nested arrays [][][], for example, seem to get exponentially slower with each []. Yes, its software stack is cumbersome, etc.
I still find it really useful. I spend some amount of time coding around its flaws, profiling, looking at disassembly etc. to figure out how to make it do the same thing in a more efficient way, but the time it saves vs. doing that from scratch in assembly is massive, so it's still worth it.
I wish it was a more efficient compiler, but I don't expect a better one to appear any time soon. If you want to try to write one yourself (or even retarget an existing one), I wish you good luck.
As for the current state of things CC65 is pretty good vs. the alternatives. It does sometimes require me to work around its implementation, but the productivity gain is still worth it. For less advanced users who don't have the ability to address these issues with it, there's still a lot of good things that can be built with it (there are lots of gameplay types that don't require 60Hz). Slow code is still a lot better than no code.
Bregalad wrote:
anything that uses function pointers is going to be impossible to read or write without consulting the documentation
There's a rule of thumb.
Code:
// If the function's prototype looks like this:
int func(int num_words, const char *const *words);
// Then the typedef for a pointer to such a function looks like this:
typedef int (*func_type)(int num_words, const char *const *words);
It isn't any more confusing than the 6502's habit of needing to subtract 1 from each entry of a table of function pointers called with RTS.
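As a usage sketch of that rule of thumb (all names here are hypothetical), the typedef makes a C jump table, the analogue of an RTS dispatch table, read naturally:

```c
#include <assert.h>

/* Pointer-to-function typedef, per the rule of thumb above. */
typedef int (*handler_type)(int arg);

static int on_load(int arg) { return arg + 1; }
static int on_save(int arg) { return arg - 1; }

/* The C analogue of an RTS jump table: no subtract-1 needed,
   the compiler stores plain function addresses. */
static handler_type handlers[] = { on_load, on_save };

int dispatch(int which, int arg) {
    return handlers[which](arg);
}
```

A call like `dispatch(0, 41)` indexes the table and calls through the pointer, which is exactly what an `rts`-trick table does by hand in 6502 assembly.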
Bregalad wrote:
The usage of ++ or -- statements inside an expression are ambigious.
In what way? Don't use more than one ++ or -- on the same variable, and don't use it on global variables in the same expression as a function call.
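A minimal example of that rule, with the undefined cases left only as comments (function and variable names hypothetical):

```c
#include <assert.h>

int demo(void) {
    int i = 0;
    int a = i++;   /* fine: one ++ per variable per expression; a = 0, i = 1 */

    /* Undefined behavior, never write this:  i = i++ + ++i;            */
    /* Also risky: using g++ in the same expression as a function call  */
    /* that modifies the global g; the evaluation order is unspecified. */

    return a + i;  /* 0 + 1 */
}
```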
Bregalad wrote:
The worst 2 "features" of C is using == for equality test, while using a simple = is also correct but will yield in wrong functionality
It's possible for compilers to detect a mistaken = for equality test when parsing the expression for a while loop's condition. C doesn't require compilers to emit a diagnostic when the top level of this expression is operator =, but many compilers do so. For example, the -Wall flag in GCC enables this.
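For illustration, here is the conventional idiom that keeps an intentional assignment-in-condition from looking like the == mistake (the function is a hypothetical example):

```c
#include <assert.h>

/* An explicit comparison against the assigned value documents that
   the = inside the condition is deliberate; GCC's -Wall would warn
   about a bare `while (c = *s++)` written at the top level. */
int count_chars(const char *s) {
    int c, n = 0;
    while ((c = *s++) != '\0')  /* assign, then explicitly test */
        n++;
    return n;
}
```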
Bregalad wrote:
the names "char", "short", "int" means nothing to their respective bitsizes
C defines char as the smallest addressable unit that is at least 8 bits, which in the ISAs used for video games (x86, PowerPC, and ARM) is in fact 8 bits. C99 introduced the defined data ranges of int32_t and friends over a decade ago.
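A quick sketch of the C99 types in question (variable names are illustrative only):

```c
#include <assert.h>
#include <stdint.h>

/* C89 only promises minimum ranges (char >= 8 bits, int >= 16,
   long >= 32); C99's <stdint.h> names the widths outright. */
uint8_t small_val = 255;     /* exactly 8 bits  */
int32_t wide_val  = 100000;  /* exactly 32 bits */
```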
tepples wrote:
Bregalad wrote:
anything that uses function pointers is going to be impossible to read or write without consulting the documentation
There's a rule of thumb.
Code:
// If the function's prototype looks like this:
int func(int num_words, const char *const *words);
// Then the typedef for a pointer to such a function looks like this:
typedef int (*func_type)(int num_words, const char *const *words);
It becomes even easier to handle/read if you use typedef twice:
Code:
typedef int myFunctionType(int num_words, const char *const *words);
typedef myFunctionType *myFunctionPointerType;
Actually, I just prefer typedefing the function itself, and then using myFunctionType * for pointers to it.
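A small sketch of that style in use (names hypothetical): the function typedef can even declare the function itself, and the pointer type is just the typedef with a star.

```c
#include <assert.h>

typedef int handler_t(int x);    /* a function type itself       */
typedef handler_t *handler_ptr;  /* ...and a pointer to one      */

/* A function can be declared (though not defined) via the typedef: */
handler_t succ;
int succ(int x) { return x + 1; }

handler_ptr p = succ;            /* reads naturally either way   */
```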
Quote:
There are many things CC65 does rather poorly. Nested arrays [][][], for example seem to get exponentially slower with each [].
Interesting. That's probably what was so slow in my text routine, since it used lots of two-dimensional arrays. It's a matter of coding style; I use them all the time personally (well, all the time where it makes sense, I mean; obviously I don't use them just for the sake of using them).
Some people prefer to mess with pointers directly, but I really prefer using arrays and multi-dimensional arrays. They can be easily reduced to single-dimensional arrays at compilation time, where strength reduction optimisation can create efficient code.
Example:
Code:
for (i = 0; i < 16; ++i)
for (j = 0; j < 16; ++j)
array[i][j] = something;
becomes, after array dimension reduction,
Code:
for (i = 0; i < 16; ++i)
for (j = 0; j < 16; ++j)
array[i*16 + j] = something;
which in turn becomes after strength reduction
Code:
for (i = 0; i < 256; i+=16)
for (j = 0; j < 16; ++j)
array[i + j] = something;
which has a chance of compiling into something somewhat efficient.
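A compilable version of the three stages above (array names and the fill value are hypothetical; the original's `something` is left as a value derived from the indices so the check is non-trivial) can verify that they produce identical arrays:

```c
#include <assert.h>
#include <string.h>

unsigned char a2[16][16];  /* two-dimensional original         */
unsigned char a1[256];     /* after dimension reduction        */
unsigned char a1s[256];    /* after strength reduction as well */

void fill_all(void) {
    int i, j;
    for (i = 0; i < 16; ++i)        /* stage 1: 2-D indexing    */
        for (j = 0; j < 16; ++j)
            a2[i][j] = (unsigned char)(i ^ j);
    for (i = 0; i < 16; ++i)        /* stage 2: flat i*16 + j   */
        for (j = 0; j < 16; ++j)
            a1[i * 16 + j] = (unsigned char)(i ^ j);
    for (i = 0; i < 256; i += 16)   /* stage 3: i steps by row, */
        for (j = 0; j < 16; ++j)    /* the multiply is gone     */
            a1s[i + j] = (unsigned char)((i / 16) ^ j);
            /* (i / 16 only recovers the row number for this fill
               value; a fill independent of the row needs no divide) */
}
```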
PS:
Also
Quote:
I wish it was a more efficient compiler, but I don't expect a better one to appear any time soon. If you want to try to write one yourself (or even retarget an existing one), I wish you good luck.
Thank you. I've felt over the years that, if a tool I need is insufficient, I'd rather write my own. That's how I came up with CompressTools, for instance: at first I was like "I wish something like this existed", and then I made it exist. It was a big project but it was totally worth it. I'll probably update it more in the future to make it better, but at least the unique concept of assembler/compressor is here.
Doing a C compiler is, however, a bigger task, and I don't really feel I am currently competent enough to do one, unfortunately. I'd need someone more experienced to coach me through the process. If I can ever find that someone, then an efficient 6502 compiler may become reality. Otherwise...
I have some other ideas, like a multiplatform MML compiler for music. Once all this is done I will eventually be able to make games.
PPS:
Quote:
there are lots of gameplay types that don't require 60Hz
Yes, but the ROM usage problem remains. Code generated by CC65 will take 10 times the amount of ROM that assembly-written code would take. This doesn't apply to data, thank god, but Portopia, a detective game which does not require 60Hz gameplay and fits on an NROM cartridge, would probably need a 256KB PRG-ROM if generated with CC65, if you see what I mean.
Well, I'm not the one who is demanding it fit in NROM.
I have advocated AxROM elsewhere, and I think a dedicated 32k for C code is a pretty reasonable amount of space.
Wow, some really great discussion on this topic! Let me be clear that I think cc65 is a great C compiler, it's just that some of the idioms of C are not efficient on the 6502. Seriously, think about a three-dimensional array. It's a pointer to an array of pointers to arrays of pointers to arrays. How is that going to be efficient on any platform?
Personally I want a language that cooperates well with assembly, that is easy to read and maintain and that leverages the advantages of the platform. Honestly I don't understand the portability concerns. If I wanted to port a game from NES to another platform I'd want to re-code it. Finally, code (and compilers) that are cross-platform have more difficulty leveraging the platform they are targeted for.
So this curious journey is now leading to the idea of a language that implements one or more design patterns for banked execution. Here are the patterns I am aware of. If someone is aware of others, please let me know.
Banked Data
In this model a fixed segment contains all executable code and a banked segment contains static data, which is either referenced in place or moved to RAM. This is very easy to implement but is limited in executable code size.
Banked Contiguous
This model is what Metroid uses. The common code is stored in a fixed segment. All data and code required for a given area of the game is stored in a separate banked segment. In any given area of the game, the full program code is present in a contiguous address space. This works well for medium-sized projects, as Metroid demonstrates, but influences the design of the game.
Banked Data with Trampoline
I am not aware of any game that uses this, but I haven't studied many. This is the scheme I came up with for my MMC1 projects: 32K of program space is available, and routines in the fixed segment read data from banked segments and copy it to RAM, then restore the banked segment back to program space. Works pretty well but still limits executable code size.
Banked Library and Data
This is what SMB3 uses. All common code lives in two fixed segments. Another segment is used to dynamically bank in data tables as needed, and a fourth is used to dynamically bank in program code as needed. Program code is organized into several libraries with a consistent header or entry-point method, and each library contains any data tables that might be specific to it.
Program Banking
Software is segregated into separate self-contained programs. A small fixed ROM or RAM-loaded trampoline is then used to swap between programs.
Out of all of these, I like the idea of banked libraries as found in SMB3. Let me know what other banking strategies you may have in mind!
Almost all multicarts, except CNROM multis such as Donkey Kong Classics, use Program Banking to launch the selected activity.
In addition, Action 53 uses Banked Data with Trampoline to pull lines of instruction text and compressed screenshot tiles out of their hiding places in other banks. At power-on and reset, it copies an "interbank fetch" subroutine into RAM that copies a block of data starting at a given 24-bit address to a fetch buffer in RAM. (Bit 15 of these addresses is always 1, as in Super NES LoROM.)
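A host-side sketch of such an interbank fetch, with the banks modeled as a flat array; all names and sizes below are hypothetical, not Action 53's actual routine:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define BANK_SIZE 0x8000u  /* hypothetical 32K banks           */
#define NUM_BANKS 4

static uint8_t rom[NUM_BANKS][BANK_SIZE];
static uint8_t fetch_buf[64];  /* the fetch buffer in "RAM"    */

/* Copy len bytes (len <= sizeof fetch_buf assumed) starting at the
   24-bit address bank:addr into the fetch buffer. Bit 15 of addr is
   always set, as in SNES LoROM, so it is masked off here. */
void interbank_fetch(uint8_t bank, uint16_t addr, uint8_t len) {
    memcpy(fetch_buf, &rom[bank][addr & 0x7FFFu], len);
}
```

On real hardware the copy loop would run from the RAM-resident subroutine while the source bank is switched in, then switch the original bank back.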
I imagine that Metroid's use of Banked Contiguous comes from its FDS heritage, as bank switching is extremely slow on that mapper.
Are SMB3's DPCM samples in the fixed bank or in a switchable bank? If they're in a switchable bank, it can't really use the full Banked Library and Data except in scenes whose music uses no samples.
The best compromise I can think of is a subset of Banked Library and Data. The ROM is divided into a 16K fixed bank and a 16K or 8K switchable bank. Routines that need less than 16K of data go in their own bank. Routines that need to access data in more than one 16K bank go in the fixed bank, as do interrupt handlers and routines called by routines in multiple banks.
Quote:
Seriously, think about a three-dimensional array. It's a pointer to an array of pointers to arrays of pointers to arrays. How is that going to be efficient on any platform?
Really? I thought arrays were "flattened" to single-dimensional arrays like in my previous example. However now I am not too sure. The confusion between arrays and pointers in C has always puzzled me, to be honest. It's just one of the bad things about the language. I admit that C is not a very good language technically; it is just that it is a standard, and that it is extremely portable.
The part about "idioms" might be true, but I think it is possible to do a clever analysis and re-organize the data in order to be efficient on the 6502: auto variables become static, index values into arrays are placed in the X and Y registers, and strength reduction is always applied whenever possible. There are a million other optimisation steps that can make the code better each time, and turn horrible code into good code. That's how modern compilers work (I guess).
The tricky part is what I'll call "stack deletion" optimisation. However SDCC already does something like this. Now the question is if it is better to add 6502 to SDCC, or to modify CC65 in order to apply stack deletion, or to continue to code games in assembly or extremely poor generated code from C.
Quote:
Personally I want a language that cooperates well with assembly,
C cooperates extremely well with assembly. asm() statements can inline assembly code, and with separate compilation and linking it is possible to mix portions of C and assembly (plus a ton of other languages, but that would not apply in our case).
Quote:
Honestly I don't understand the portability concerns. If I wanted to port a game from NES to another platform I'd want to re-code it.
Oh, really?
Quote:
Well, I'm not the one who is demanding it fit in NROM. I have advocated AxROM elsewhere, and I think a dedicated 32k for C code is a pretty reasonable amount of space.
I get your point. However, it is a shame to be limited to games where little realtime action takes place AND small games that use a large mapper, isn't it? What if you actually wanted to make a large game that would "normally" fit in AOROM, for instance (256KB)? Would you have to use MMC5 with 1MB PRG-ROM just because you used CC65?
Bregalad wrote:
What if you actually wanted to make a large game, that would "normally" fit in AOROM for instance (256kb) ?
It takes a lot of art to fill that much space. If you have a Rareware budget for artists, you can probably get a Rareware budget for programmers too.
I'll meditate on this. However, my former goal was to make some simple games to prove to people that I'm a serious programmer, so that artists and other people would join, and we'd be able to make bigger/better games.
However, it sounds like for now I've just proved that I am a good programmer but can't keep focus on one project without immediately moving to another, etc... I think it's my nature to be like this ^^ Wonder what would have happened to me if I was born back when computers didn't exist.
tepples wrote:
Bregalad wrote:
What if you actually wanted to make a large game, that would "normally" fit in AOROM for instance (256kb) ?
It takes a lot of art to fill that much space. If you have a Rareware budget for artists, you can probably get a Rareware budget for programmers too.
There may be things other than graphics in there, though. Maybe you will fill up a lot of space with level data, if your game has a lot of it? (However, depending on the game, this might not be necessary: MATCHNUM uses the level number as a random number seed, then starts from a solved state and makes random backwards moves until a solvable level is generated. KNAR does something similar. Depending on the game, though, this might not be appropriate, or even if it normally is, the Famicom might be too slow or have too little RAM. Therefore, you still might want to fill up a lot of ROM space with level data.)
We've heard from the experts coming from a low-level background. Now let's hear from a fool tumbling down from high-level languages.
Yet another C compiler will be just a curiosity. I don't see the experts using it, and the fools still won't know what to do with it.
The most successful higher-level language implementations have been parsers for BASIC that output assembly, usually using "kernels" as templates for certain game features. These are NOT runtime engines; they do not run BASIC code in some HAL.
I'm thinking batari BASIC as a prime example:
http://bataribasic.com/ http://www.randomterrain.com/atari-2600 ... mands.html
qbradq wrote:
Seriously, think about a three-dimensional array. It's a pointer to an array of pointers to arrays of pointers to arrays. How is that going to be efficient on any platform?
No, this is incorrect; these are not the same thing in C, and even though you can often implicitly convert an array to a pointer, you cannot always do this. (Other languages, like Java, are a different story.)
To create a three dimensional array, you do something like:
Code:
int x[2][2][2] = {{{1,2},{3,4}},{{1,2},{3,4}}};
In memory, this will be represented as data only, which looks like:
1,2,3,4,1,2,3,4
To create arrays of pointers, you must do this differently:
Code:
int yyy0[] = {1,2};
int yyy1[] = {3,4};
int yyy2[] = {1,2};
int yyy3[] = {3,4};
int *yy0[] = {yyy0,yyy1};
int *yy1[] = {yyy2,yyy3};
int **y[] = {yy0,yy1};
In memory this is very different. (It looks like how it is defined.)
To access data in both of these structures, the syntax is the same: x[1][1][1] and y[1][1][1] look identical. The underlying implementation for fetching them, however, is different. When fetching from x, the compiler knows the dimensions of each part of the array, so there are no "arrays of pointers" involved. For y it is a different story. The type of y[1] is an int**, so it fetches that int** to evaluate y[1][1], which fetches the next pointer, etc. The types of these things are known at compile time, so even though the syntax is the same, the compiler knows how they are different and will generate very different machine code.
The difference becomes very apparent when you want to implicitly cast a multidimensional array to an array of pointers (i.e. you can't do this). Trying to pass x[1] as an int** will correctly generate an error, because there is no pointer data in x[1].
Anyhow, my criticism of CC65's performance on multidimensional arrays was referring to its treatment of arrays specifically. Pointers are not the issue here, but rather the very inefficient manner in which it accesses data in multidimensional arrays vs. rolling the same thing by hand as a single-dimensional array.
qbradq wrote:
Out of all of these, I like the idea of banked libraries as found in SMB3. Let me know what other banking strategies you may have in mind!
I think so too. A structure like that can be made fairly bankswitching-friendly with a HLL if limited properly, I think. So you basically have the fixed bank, and at least one library bank. You may have a separate data bank or something, but that can be ignored for now.
If you set up these rules I think it would make things much simpler for the compiler to handle banking:
1) A library bank can ONLY call functions within its own library and the fixed bank. It operates as if there is no banking. The functions in a given library bank assume their fellow library functions are always there.
2) The fixed bank contains the code that handles all bankswitching before it calls functions that are in a library. Segregate high-level tasks into separate banks to make things simpler.
To start with, the compiler wouldn't have to figure out how to follow these rules; you could compile each bank separately based on how you manually segregated functions into the fixed bank or library bank #n, using separate files or something. Eventually you could get fancy enough to compile the entire thing into a large linear virtual space, then determine which functions can fit together in which library banks, with the compiler handling everything and ensuring it follows the rules above.
Interrupts make this a little more tricky, but if the interrupt handlers are always available in the fixed bank, it's not too hard to keep track of what's going on, keep the banks straight, and still follow these rules.
batari Basic looks very interesting. It has a number of limitations similar to what I'm talking about. For one there's no parametrized subroutines or functions. Another thing is all math is 8-bit, which works out pretty OK for the 2600. It looks like the final version uses a program-at-a-time banking method, which fits the 2600's limitations.
The compiler I've been working on is similar to batari Basic, but has parameterized subroutines, n-byte math and a more modern syntax.
I like the idea of offering game "kernels" to end users to allow rapid development of pet projects. Sounds pretty awesome really!
Thank you all again for your input. This has given me a lot of perspective.
qbradq wrote:
I like the idea of offering game "kernels" to end users to allow rapid development of pet projects. Sounds pretty awesome really!
I guess the NES version of this would be selectable/configurable VBlank handlers. Depending on the type of game people are making, they might need to favor some types of VRAM updates over others.
Bregalad wrote:
I get your point. However, it is a shame to be limited to making games where little realtime action takes place AND small games that use a large mapper, isn't it? What if you actually wanted to make a large game, one that would "normally" fit in AOROM for instance (256kb)? Would you have to use MMC5 with 1MB PRG-ROM just because you used CC65?
I don't think that's a fair assumption to make at all. In my experience, as games get larger, the bulk of the space is used by data, not code. (It depends what kind of game you're making; there is no one-size-fits-all rule, but this is my general experience.) I actually think 32kb dedicated to C code could be enough to make even large-scope NES games like Blaster Master or SMB3 (though DPCM may be out of the question), though like any limited space there's always a way to fill it up if you want.
Also, if it isn't enough space, as shiru mentioned before, it's really easy to convert offending bits from C to ASM. A lot easier than developing them as ASM from scratch. The ability to prototype and tune in C alone is a big productivity booster even if you have to recode some of it later (by then you already have done the hard work of tweaking and reworking in the easier language).
It's a really tight squeeze to fit a C game into NROM when you have to put code and data all together. Once you get them out of that bank how much more space have you made for code? Probably at least 2x, maybe 4x! This doesn't scale up when you're making a bigger game, eventually it becomes about making more levels and more music, etc. and a lot less about adding more code.
So, in short, no: if I was trying to make a game that would "normally" fit in AOROM 256k, but with CC65, I suspect I would complete the coding part about 5x faster by using C, and waste at most an extra 30k of space due to C bloat by the end. (These are completely spurious ballpark numbers, but if I had to guess, this is what I'd guess.) I would certainly NOT need 4x the space because of C, not at that scale. On a 32k game, yes, maybe, but not on a 256k game.
Speaking of programming a large game in C with a mapper, for now I think I'd go with an 8+24 PRG and CHR RAM configuration, i.e. the first 8K of PRG with bankswitching for all kinds of large data (music, level maps, graphics), and the top 24K fixed for C code, vectors, maybe a bit of DPCM. Too bad that to my knowledge there are no discrete logic mappers that allow this configuration, and MMC3 probably would be overkill.
qbradq wrote:
The compiler I've been working on is similar to batari Basic, but has parameterized subroutines, n-byte math and a more modern syntax.
I like the idea of offering game "kernels" to end users to allow rapid development of pet projects. Sounds pretty awesome really!
Thank you all again for your input. This has given me a lot of perspective.
I'm just glad someone's working on this instead of us just debating about it!
Even better that the debate is providing useful input on your efforts.
Shiru wrote:
Too bad that to my knowledge there are no discrete logic mappers that allow this configuration, and MMC3 probably would be overkill.
You could do it with discrete logic but it's going to be more than two chips worth, and no, there aren't any current discrete mappers like that. As for the MMC3 choice you're only using a very small subset of what the MMC3 has to offer. So you could simplify the MMC3 down to something that'd fit in the smallest CPLDs on the market. You'd still have MMC3 compatibility for emus, but not the overkill on the hardware.
I thought about this, sure, it could be done this way.
As for the 24K for the fixed bank, and where that number comes from: I tried removing music and level data from the Sir Ababol game, leaving only compiled C code and some minor data pieces. The full game takes ~29K; with these things removed it is ~14K, so there is ~15K of data (half of the total size). I think a more complex game will have enough extra code to go over 16K, but it probably could fit in 24K.
Shiru wrote:
Too bad that to my knowledge there are no discrete logic mappers that allow this configuration, and MMC3 probably would be overkill.
You could fake it by using UNROM and just duplicating the top half (what will be banked into $A000-BFFF) across each changeable bank. A little wasteful, but I don't think problematically so.
Quote:
Speaking of programming a large game in C with a mapper, for now I think I'd go with 8+24 PRG and CHR RAM configuration, i.e. first 8K of PRG with bankswitching for all kinds of large data (music, level maps, graphics), top 24K fixed for C code, vectors, maybe a bit of DPCM.
Sounds like MMC2 to me, except for the CHR-RAM part, but having automatically switchable CHR-ROM is very nice too !
And rainwarrior, thank you very much for clarifying multi-dimensional arrays in C! This has never been totally clear in my mind, but now I think it is. So it turns out that arrays and pointers are different things in C, but they just happen to be "accessed" with the same syntax.
Shiru wrote:
I think I'd go with 8+24 PRG and CHR RAM configuration, i.e. first 8K of PRG with bankswitching for all kinds of large data (music, level maps, graphics), top 24K fixed for C code, vectors, maybe a bit of DPCM.
So you want to take the top half of the address space and "punch out" a window for banked data.
Quote:
Too bad that to my knowledge there are no discrete logic mappers that allow this configuration, and MMC3 probably would be overkill.
As Bregalad pointed out, an MMC2 with the CHR banking logic bypassed implements exactly this memory model, but you probably don't want to rip up a Punch-Out!!. So either clone the MMC2 in a CPLD or build a three-chip discrete mapper that uses 74161 and
[nope, messed up, see lidnariq's post below]
lidnariq wrote:
Shiru wrote:
Too bad that to my knowledge there are no discrete logic mappers that allow this configuration, and MMC3 probably would be overkill.
You could fake it by using UNROM and just duplicating the top half (what will be banked into $A000-BFFF) across each changeable bank. A little wasteful, but I don't think problematically so.
Clever, thanks for the idea, I somehow didn't think about it. 256K ROM would then give 144K of usable space in this configuration, not too bad, considering it'll keep it simple hardware-wise and compatible with emulators.
lidnariq wrote:
You could fake it by using UNROM and just duplicating the top half (what will be banked into $A000-BFFF) across each changeable bank. A little wasteful, but I don't think problematically so.
Yeah you should be fine doing that. Something told me I should hold off on saying it wasn't possible until you chimed in. I never thought about just tossing some cheap bits at the problem.
Since the extra fixed bank is being faked at the preparation stage, the other nice thing is that you no longer have to have a strict 24/8 division. I'd assume larger variable banks would be more generally useful, but you could obviously go for more fixed bank too.
(That said, the "real" solution is 74'161 + 74'157 + some logic whose truth table has three identical values, such as:
* Variable bank from $8000-$9FFF: 74'1G02 / 74'1G32 or two transistors
* from $A000-$BFFF: A13>A14 (single transistor)
* from $C000-$DFFF: A14>A13 (single transistor)
* from $E000-$FFFF: 74'1G08 / 74'1G00 or two transistors)
On second thought, I like this approach even more, as it does not require the switchable part to actually be 8K, which is pretty flexible. If the linear space of compiled C code takes, say, 20K, the switchable part could be 12K rather than 8K, reducing wasted space (200K of usable space on a 256K ROM). Or, if the code grew slightly past 24K and an 8+16K configuration wouldn't be effective anymore (it would require data duplication as well), this method would still work without changes, although the switchable part would be smaller and the amount of wasted space would increase. Another thing: to store CHR data in PRG (with a CHR RAM configuration), a few 16K banks could contain only CHR data, without duplicating the unswitchable part, and a small routine that unpacks/uploads CHR data could be placed near the top of memory. The same could be done with sound and music data. This again increases the amount of usable space.
Sure, it isn't a perfect way to go, but considering simplicity and flexibility, it is a good option.
Edit: ninja'd by lidnariq with this
Yes, this is an elegant solution, and the day the compiler is updated and produces more efficient banked code that fits in 16k, you don't have to "rethink" everything, you just adapt it and that's it. Very elegant.
qbradq wrote:
batari Basic looks very interesting. It has a number of limitations similar to what I'm talking about. For one there's no parametrized subroutines or functions. Another thing is all math is 8-bit, which works out pretty OK for the 2600. It looks like the final version uses a program-at-a-time banking method, which fits the 2600's limitations.
The compiler I've been working on is similar to batari Basic, but has parameterized subroutines, n-byte math and a more modern syntax.
I like the idea of offering game "kernels" to end users to allow rapid development of pet projects. Sounds pretty awesome really!
Thank you all again for your input. This has given me a lot of perspective.
While trying to learn C I've always fought with the syntax and formatting of instructions. I've never had to do that with relaxed languages like BASIC. I'd rather focus on game logic than on white space or brackets being off.
I'd really like to see this project take off. I've tried to help test Scratchalan but I haven't heard back from the developer in ages. You can count on me to test the heck out of this once binaries are released. Got the PowerPak ready
Idea for automatically making variables static (not on stack)
You have functions, and they need some amount of local variables or function arguments. If you can reduce this with an optimizer, all the better.
Start by making a call graph. One root for each thread (main thread, IRQ, NMI). Consider everything that may be assigned to a function pointer, as well as calls to the function pointer.
If a function may possibly call itself due to a loop in the call graph, it needs stack variables. Otherwise, it can use static variables.
Use the call graph to allocate static space that doesn't overlap.
Duplicate functions that are called from IRQs if they overlap with code from main.
Looking over the disassembly notes, SMB3 has 5 8k banks of code for objects alone. Out of the 32 banks in the ROM, 19 contain code. I haven't been able to find (and can't be bothered to make) a code / data log for the game so I don't know how dense this code is, but I'd be willing to bet it's more than 24k. Like a lot more.
Dwedit wrote:
Idea for automatically making variables static (not on stack)
You have functions, and they need some amount of local variables or function arguments. If you can reduce this with an optimizer, all the better.
Start by making a call graph. One root for each thread (main thread, IRQ, NMI). Consider everything that may be assigned to a function pointer, as well as calls to the function pointer.
If a function may possibly call itself due to a loop in the call graph, it needs stack variables. Otherwise, it can use static variables.
Use the call graph to allocate static space that doesn't overlap.
Duplicate functions that are called from IRQs if they overlap with code from main.
Even being able to USE statically allocated variables/arguments would be very helpful, even if it didn't do any automatic detection of stack/static variables. In CC65 you always have to use the slow software stack, even though the majority of functions in a typical application don't need recursion (99% of the time you don't need recursion, or you can rework your algorithms not to use it). The only "workaround" is to use global variables (for parameters) and/or static locals (for local variables), but both solutions suck.
I believe Atalan supports this type of automatic detection. I'm kind of surprised nobody has written a NES game with Atalan (that I know of...), but I guess it's mostly because of the esoteric syntax.
Actually, having automatic static allocation (removing the stack) can make better code than assembly (memory usage wise)!
Why?
Because the compiler can build a call graph and re-use variables among mutually exclusive functions. This is harder to do in assembly, as it has to be done by hand, and oh my god, HOW MANY TIMES DID I MESS THIS UP? Quite a lot.
Atalan looks amazing. The problem is that it's a new language, and that it's not (yet?) portable. C, on the other hand, is easily portable.
thefox wrote:
I'm kind of surprised nobody has written a NES game with Atalan (that I know of...), but I guess it's mostly because of the esoteric syntax.
Nope. I'd love to use Atalan but the NES page is barely there. Certainly not enough information for a new user to start making games.
http://atalan.kutululu.org/nes.html
Some related rambling - I know this is about C, but maybe there is something useful in my head:
I have a configuration file I include as a part of every module - modules are there to stay organized and to help with build time, since my code is macro heavy and does build slowly.
Rather than explain I'll show:
Code:
__MAPPER__ = 4 ; 0 = NROM, 4 = MMC3 , set constants below as needed
.if __MAPPER__ = 0
__NUM_16K_PRG_BANKS__ = 2
__NUM_8K_CHR_BANKS__ = 1
__MIRRORING__ = 0 ; Horizontal
.endif
.if __MAPPER__ = 4
; modify:
__MMC3_8K_BANKABLE_CODE_BANKS__ = 4 ; number of banks designated as CODE (remainder of bankable banks are DATA )
__MMC3_PRGROM_BANK_MODE__ = 1 << 6 ; 1: $C000-$DFFF swappable, $8000-$9FFF fixed to second-last bank
__MMC3_TOTAL_PRG_BANKS__ = 8 ; total number of all CPU BANKS in 8K
__MMC3_TOTAL_CHR_BANKS__ = 8 ; total number of all CHR BANKS in 1K
; do not modify
__MMC3_DATA_BANK_SELECT__ = __MMC3_PRGROM_BANK_MODE__ | 6 ; 6: Select 8 KB PRG ROM bank at $C000-$DFFF / $8000-$9FFF
__MMC3_CODE_BANK_SELECT__ = __MMC3_PRGROM_BANK_MODE__ | 7 ; 7: Select 8 KB PRG ROM bank at $A000-$BFFF
__NUM_16K_PRG_BANKS__ = __MMC3_TOTAL_PRG_BANKS__ / 2
__NUM_8K_CHR_BANKS__ = __MMC3_TOTAL_CHR_BANKS__ / 8
; check:
.if (__MMC3_TOTAL_PRG_BANKS__ & (__MMC3_TOTAL_PRG_BANKS__ - 1)) <> 0
.error "MMC3 banks are not a power of two."
.endif
.endif
The idea is that it is preferential to bank $C000 to allow for DPCM samples. I treat $8000 as fixed data, $A000 as bankable code, $C000 as bankable data, $E000 as fixed code, and just stick to that. I think this is a pretty good way to do it. All of this is hidden behind macros; I don't really have to worry about the details at all once it is set up. I can also change the mapper to 0 and make a new build, and unless I run out of space or require an MMC3 feature, I can build NROM with only this change.
I've decided that I am going to hook a request onto NMI to change data banks to avoid any threading issues (code banking is already clean, with no tracking needed), so if you want to change banks, you have to drop a frame. If NMI needs to change banks, it can check the current bank and save it by reading the bank number at a specific point; perhaps the last byte of the bank stores the bank number.
The parameter stuff: the stack only works well with the JSR and RTS instructions. Deal with any recursion manually and use zeropage for parameters and local variables. This can all be tracked at runtime (for accidental reuse) with thefox's latest revision of Nintendulator with Lua scripting. Perhaps this is something to consider in a new language design: runtime range checking, etc.
edit: batari Basic: very limited. My HL macros can do far more as far as syntax and control of the code, though it has some BIOS/API code taking care of some things for you, so it is more than just a compiler.
I may have oversimplified my statement that batari BASIC is just a compiler. However limited, it's got a thriving homebrew community and even managed to "port" Super Mario Brothers over:
http://www.youtube.com/watch?v=o5igUFICNB0
Whatever batari BASIC is doing seems to be a very good strategy - certainly worth studying for a similar project on the NES.
AFAIK the only reason it works is because of how strict you have to be programming the 2600, because something like this...well, it could work for NES, but it'd need reworking for sure. Is it possible? I definitely think so, though.
batari BASIC is a great example of a compiler that is specifically designed to work within the limits of a target system. It works very well.
I found this, a C compiler that is part of a commercial development suite for the 8051 family of uCs, which supports banked program code via pragmas. Pretty interesting.
http://www.crossware.com wrote:
The compiler will generate the necessary code to automatically switch code banks whenever a banked function is called and the linker will automatically locate each function in the specified bank.
This is what I have going on in ca65. If you can inline asm in cc65, can you also inline macros? I suppose you would have to integrate/be limited by:
http://www.cc65.org/doc/cc65-7.html
Quote:
I'm kind of surprised nobody has written a NES game with Atalan (that I know of...), but I guess it's mostly because of the esoteric syntax.
I actually tried Atalan for my NES project a while ago after feeling really hooked on it from reading the docs. I proceeded to try and convert a minimal part of my game to it: the in-game map editor I used for debugging, figuring it would be a good candidate for a trial period.
The experience did not live up to the expectation, as Atalan would produce lots of dummy stores (never read back, so no idea what they were for) to separately allocated temporary variables in zeropage, which made me realize even the simplest more large-scale project would quickly run out of zeropage storage. But even worse was the fact that the compiler would frequently segfault (that's what you get from C code, after all) whenever it didn't like the syntax I'd written. And debugging your compiler is the last situation you want to be in when using a HLL...
Looking at the sparse activity of the source repository of Atalan leads me to classify it as semi-abandonware. It's a bit of a shame, because I think it's a lovely design spec for a language, and if someone had time to really polish the implementation I'd really love to use it. But that someone won't be me as part of my reasons for wanting some HLL functionality was having too little time to spend on my nesdev hobby...
But I then switched to using Movax's HLA macros as a fallback plan and haven't looked back since
Speaking of high-level functionalities in CA65 though... has anyone succeeded in making an include guard for .h files in CA65? If I try to do it with the equivalent CA65 code:
Code:
.IFNDEF __BLOCKTYPES_H__
.DEFINE __BLOCKTYPES_H__
; code
.ENDIF
...then the .IFNDEF will result in CA65 saying "blocktypes.h(1): Error: Identifier expected" the second time blocktypes.h is included.
^ This is because .define is not the same define as C.
In ca65:
.define foo bar
After this line, any text that matches foo is replaced with bar, all the time; the only exception is .undefine. So your .ifndef actually ends up as .ifndef followed by a blank space.
You need to do this:
Code:
.IFNDEF __BLOCKTYPES_H__
__BLOCKTYPES_H__ = 1
; code
.ENDIF
Bananmos wrote:
Quote:
I'm kind of surprised nobody has written a NES game with Atalan (that I know of...), but I guess it's mostly because of the esoteric syntax.
I actually tried Atalan for my NES project a while ago after feeling really hooked on it from reading the docs. I proceeded to try and convert a minimal part of my game to it: the in-game map editor I used for debugging, figuring it would be a good candidate for a trial period.
The experience did not live up to the expectation, as Atalan would produce lots of dummy stores (never read back, so no idea what they were for) to separately allocated temporary variables in zeropage, which made me realize even the simplest more large-scale project would quickly run out of zeropage storage. But even worse was the fact that the compiler would frequently segfault (that's what you get from C code, after all) whenever it didn't like the syntax I'd written. And debugging your compiler is the last situation you want to be in when using a HLL...
Looking at the sparse activity of the source repository of Atalan leads me to classify it as semi-abandonware. It's a bit of a shame, because I think it's a lovely design spec for a language, and if someone had time to really polish the implementation I'd really love to use it. But that someone won't be me as part of my reasons for wanting some HLL functionality was having too little time to spend on my nesdev hobby...
Ah, that's too bad. Now that I think of it, I think I tried it myself some time ago and remember it crashing when trying to compile some example project. I also don't like some of the syntax choices he/they made for the language.
Bananmos wrote:
Speaking of high-level functionalities in CA65 though... has anyone succeeded in making an include guard for .h files in CA65? If I try to do it with the equivalent CA65 code:
Code:
.IFNDEF __BLOCKTYPES_H__
.DEFINE __BLOCKTYPES_H__
; code
.ENDIF
...then the .IFNDEF will result in CA65 saying "blocktypes.h(1): Error: Identifier expected" the second time blocktypes.h is included.
Here's what works:
Code:
.IFNDEF __BLOCKTYPES_H__
__BLOCKTYPES_H__ = 1 ; Value doesn't matter
; code
.ENDIF
The reason is that .define doesn't work the way you think it does (.ifndef checks for existence of symbols, .define defines C style macros). The naming of the directives is kind of unfortunate and confusing.
Yea, I didn't like the syntax of Atalan myself. Now that I've got a weekend without work I'm going to try and get a base implementation banged out for now. I'm torn about two points. I'm not sure if I want to use a BASIC or C syntax. That's really just a matter of taste, but I guess I'll go with C-style syntax. It's just what I'm used to.
The other thing I'm trying to decide is if I want to generate the machine code directly or compile to ca65 assembly. I'm going to try the latter as it will be able to cross-link with libraries provided in ca65 or generic syntax pretty easily.
I'll see what I come up with
qbradq wrote:
The other thing I'm trying to decide is if I want to generate the machine code directly or compile to ca65 assembly. I'm going to try the latter as it will be able to cross-link with libraries provided in ca65 or generic syntax pretty easily.
I think this is a clear winner, because you can get a lot of other benefits as well, like being able to add debugging info (CA65 supports an undocumented directive which the C compiler uses to embed debug line info in the assembly files) and not having to write a linker.
Movax/TheFox:
Thanks guys! I knew the .define implementation in ca65 was a bit weird... but didn't quite realize the consequences.
Also beware of .defined (this one doesn't check for existence of a .define either). .if .defined( foo ) is the same as .ifdef foo.
thefox, honestly the main reason I want to use ca65 is so it'll be compatible with your Nintendulator builds
Also, .dbg is now documented in the snapshot docs, kinda. Is there another directive you are referring to?
ca65 is the way to go, not only because of the debug features. It provides a linker, it will allow you to configure memory for various mappers, and, which is important, to reuse existing code, like sound engines. Without ca65 support you'll get a lot of extra work to regain these features, especially if you choose binary output without an intermediate assembly source output.
As for syntax, I'm pretty sure there is room for both C-like and BASIC-like. The latter is an important thing to lower the entry barrier for newcomers (absolute, with no prior programming experience); it works well, as we can see in the cases of batari Basic and BasiEgaXorz. C is obviously the better choice for those who already know C or use CC65 and want to have a better option without changing too much in their existing code base.
qbradq wrote:
Also, .dbg is now documented in the snapshot docs, kinda. Is there another directive you are referring to?
Where? I haven't been able to find any reference to it in the docs. But yeah, .dbg was the one I was referring to.
My mistake, it's not in the docs. I have it in my compiler generator notes, I guess I just took notes from a compiled assembly file.
So far I've got no working code due to me trying to settle on a code organization style (for the language, not the compiler), and trying to finish the second quest of the Legend of Zelda
I am trying to avoid having to use forward declarations, but at the same time not have to compile all the code in one shot.
So the good news is I've made a lot of progress on my parser. The bad news is I'm going to have to start over. I missed some pretty big requirements during design and it's pretty unworkable now. At least I'll be able to reuse some of the code.
This was the first time I've tried to write a single-pass compiler, so I was expecting some level of failure.
qbradq wrote:
This was the first time I've tried to write a single-pass compiler, so I was expecting some level of failure.
It's only a failure if you quit or give up. You're recovering and going back at it again, so you can just term it as a 'hiccup'.
Just wanted to mention that my new design is working very well. Every time I write one of these things I get better at it
I should be ready for a 0.1 release soon, and I'll put it in a new thread. This one was really just meant as an exploration of the idea, and it served its purpose very well.
Thank you to everyone for your input! It's been very valuable.
qbradq wrote:
Just wanted to mention that my new design is working very well. Every time I write one of these things I get better at it
I should be ready for a 0.1 release soon, and I'll put it in a new thread. This one was really just meant as an exploration of the idea, and it served its purpose very well.
Thank you to everyone for your input! It's been very valuable.
Can't wait to try any version! Are you going to make the default binaries whatever is easiest to manufacture? I used to think that would be whatever RetroUSB has, but that seems to have faded out a bit...
I'm not exactly sure what you mean. Can you help me understand better?
Isn't NROM the simplest and easiest one to build? Most programs for NROM would probably work on some other mappers, too.
Hi,
as the author of Atalan I can assure you it's not yet in a state of semi-abandonware
I'm rewriting significant (well, almost all) parts of Atalan based on the experience I acquired when writing version 0.
That's why I'm not committing the changes to the repository: the compiler currently compiles only a subset of the language, and I am also redesigning some parts of the language.
As for using Atalan for NES development, some work has been done by Marcel Cevani (author of the Scratchalan plugin). I personally haven't done any serious NES development with Atalan, as the platform of my heart is the Atari 800
There is some NES-specific support, however (for example, specific support for PPU reading).
However, I cannot promise any date when Atalan will be available in some stable version
I believe C is not a good match for the 6502. There are many features in the language that are impossible to implement effectively on the 6502. Portability of any 8-bit platform game to some other platform will be very limited (except for text-based games), so that's not an interesting feature.
Rudla