In C (and C++), you can define structures containing arbitrary data types, and you can even pack fields down to the bit with bitfields. For example, you could do something like this:
Code:
union {
    struct {
        unsigned grayscale: 1;
        unsigned leftBorderBackground: 1;
        unsigned leftBorderSprites: 1;
        unsigned showBackground: 1;
        unsigned emphasizeRed: 1;
        unsigned emphasizeGreen: 1;
        unsigned emphasizeBlue: 1;
    };
    uint8_t data;
} PPUMask;

void PPU::writeHandler (uint16_t address, uint8_t value) {
    switch (address & 0x0007) {
        case 1: PPUMask.data = value; break;
    }
}

void PPU::draw (void) {
    // ...
    if (PPUMask.showBackground) {
        // ...
    }
}
By naming bitfields while also being able to access the entire PPUMask structure at once, one can write very flexible and readable code. But every C person will tell you that this is bad, because the C standard lets compilers pad structures and lay out bitfields however they want, and for the sake of portability you should not assume anything about bit ordering, byte order and so on. So why not add keywords to the C standard that let me specify what kind of bit or byte ordering I expect, if I expect anything, and whether I can accept structure padding in any given case, and let the compiler do the additional work? Something like this stylized example:
Code:
struct IFFHeader {
    char ID[4];
    be_uint32_t length; // big-endian chunk size
} binary; // don't pad anything

union {
    struct {
        unsigned grayscale: 1;
        unsigned leftBorderBackground: 1;
        unsigned leftBorderSprites: 1;
        unsigned showBackground: 1;
        unsigned emphasizeRed: 1;
        unsigned emphasizeGreen: 1;
        unsigned emphasizeBlue: 1;
    } binary lsbfirst; // bits are packed LSB-first, from 0x01 up to 0x40
    uint8_t data;
} PPUMask;
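Under the hood, a qualifier like binary lsbfirst would just mean the compiler emits the masks and shifts itself instead of me writing them. Roughly this, sketched by hand (the accessor style and names are my assumptions, following the bit order above):

```cpp
#include <cstdint>

// What a compiler could generate for a 'binary lsbfirst' showBackground
// access: bit 3 (0x08) of the packed byte, per the layout in the example.
struct PPUMaskBits {
    uint8_t data;

    bool showBackground() const { return (data >> 3) & 1; }
    void setShowBackground(bool v) {
        data = (uint8_t)((data & ~0x08u) | (v ? 0x08u : 0u));
    }
};
```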
But no. If you look at stackoverflow.com and similar sites, as well as most "portable" code that processes binary data in any form, you are expected to do something horrible like this in the name of portability:
Code:
#define PPUMASK_SHOWBG 0x08

if (PPUMask & PPUMASK_SHOWBG) {
    // ...
}

// read big-endian chunk size
uint32_t iffLength = ((uint32_t)chunkHeader[4] << 24) | ((uint32_t)chunkHeader[5] << 16) |
                     ((uint32_t)chunkHeader[6] << 8) | (uint32_t)chunkHeader[7];
Basically, when processing binary data you are expected to eschew high-level structures and do everything by hand in the name of portability: masking and shifting bits out of packed bytes, assembling multibyte fields from binary structures, and so on, with all the potential for error that this hard-to-read code brings. Instead of just telling the compiler what I want and letting it do the work as needed on the particular platform, including any optimizations that follow from that. And most C/C++ programmers, and certainly the people doing the standard, seem to be perfectly fine with it.
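For what it's worth, the byte-order half of the wish can already be approximated in C++ with a small wrapper type. This be_uint32_t is my sketch, not an existing standard type:

```cpp
#include <cstdint>

// Sketch of a big-endian 32-bit field: bytes are stored exactly as they
// appear in the file, and the conversion happens on access. The name
// be_uint32_t matches the wishlist above; the implementation is mine.
struct be_uint32_t {
    uint8_t bytes[4];

    operator uint32_t() const {
        return ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
               ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];
    }
};

struct IFFHeader {
    char ID[4];
    be_uint32_t length; // reads back as a host-order uint32_t
};
```

Mainstream compilers typically recognize the shift-and-or pattern and turn the conversion into a single byte-swapped load where the target supports it, so this usually costs nothing over the hand-written version.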