I'm writing a ROM-editing tool in C and need to extract 8x8 tiles from CHR-ROM. First, I want to convert 2bpp NES tiles to indexed 8bpp images. Second, I want to render them to an OpenGL canvas so the user can edit them. I haven't learned a lick of OpenGL yet, but I just managed to wrap my head around the conversion part. This is my code:
Code:
#include <stdbool.h> /* for the bool casts below */
#include <stdio.h>

typedef unsigned char byte;

/* Get 16 bytes from somewhere in CHR-ROM. */
byte* data = rom_data(0x040010, 0x10);
/* 8bpp image that will hold the tile. */
byte image[64];
int pixel = 0;
byte* low_bit = data;      /* bitplane 0 */
byte* high_bit = data + 8; /* bitplane 1 */
int row = 0;
while (pixel < 64) {
    image[pixel++] = (bool)(low_bit[row] & 0b10000000) + (bool)(high_bit[row] & 0b10000000) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b01000000) + (bool)(high_bit[row] & 0b01000000) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00100000) + (bool)(high_bit[row] & 0b00100000) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00010000) + (bool)(high_bit[row] & 0b00010000) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00001000) + (bool)(high_bit[row] & 0b00001000) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00000100) + (bool)(high_bit[row] & 0b00000100) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00000010) + (bool)(high_bit[row] & 0b00000010) * 2;
    image[pixel++] = (bool)(low_bit[row] & 0b00000001) + (bool)(high_bit[row] & 0b00000001) * 2;
    ++row;
}
/* Print the 8bpp image pixel-by-pixel to validate. */
for (int y = 0; y < 8; ++y) {
    for (int x = 0; x < 8; ++x) {
        printf("%d ", image[x + y * 8]);
    }
    putchar('\n');
}
This is the output it prints:
Code:
0 0 0 0 0 0 0 0
0 3 3 3 1 1 1 0
0 3 3 3 1 1 0 0
0 3 3 3 1 0 0 0
0 2 2 2 0 0 0 0
0 2 2 0 0 0 0 0
0 2 0 0 0 0 0 0
0 0 0 0 0 0 0 0
I have two questions:
- Is there a faster, less verbose way to do this in C? All of the casting, bitmasking, and multiplication feels unnecessary.
- Is it even possible to display indexed 8bpp images in OpenGL, and will that be hardware-accelerated? Or will I need to use a shader and a 24bpp image?