
Best way to convert 24bit to 16bit

Started by
15 comments, last by VisualLR 24 years, 7 months ago
Well, technically that macro works for 15-bit mode, but it clips the higher-intensity values, which is why you get odd-looking pixels here and there, so don't use it.

For really good 24->16 conversion, you should look into color dithering. If you're not familiar with it, it's the principle that adjacent pixels, when viewed from afar (or at high resolution), "blend" together to form a third color (the average of the pixels). A big bitmap of checkered black and white pixels looks grey on a high-resolution display.

the only problem with dithered images is that when you zoom out, most algorithms work in such a way that some white or black pixels show up "alone" and stick out a little...

but dithering is pretty complicated, so for a more basic (and still effective) approach, you just need to modify your macro a little to this:

#define RGB16BIT(r,g,b) (((b) >> 3) | (((g) >> 2) << 5) | (((r) >> 3) << 11))

[This message has been edited by foofightr (edited November 25, 1999).]

Listen, if the conversion is not time-critical (ie in the initialization area) you should do yourself a favor and learn dithering.

Basically, the last post got it right. You start at the top-left pixel, chop off the lowest three bits of the red component, and store the result in the new 16-bit bitmap. Then, however, you take the bits you chopped off and distribute them to some or all of the surrounding pixels in the original bitmap (easiest is half to the pixel to the right and half to the pixel below). You do this across the first row; then, starting at the last pixel of the row below, you go right to left, this time passing the pixel error to the left and down. You zig-zag down the picture in this fashion, and in the end the resulting 16-bit bitmap is VERY accurate, with very little distortion/banding/color error.

- Splat

Well, I tried the macro you gave me, but it sorta shaded the image. The macro I already had works a bit better, except for one single pixel... I'm beginning to think it's not the actual conversion code, but something else more sinister. I saved the converted 16-bit bitmap and this is what happens:

This is the 24bit image.

This is the 16bit image.

As you can see, there's a bad pixel there, and it happens with every image around the same place...

Any ideas?

------------------
Luis Sempe
visuallr@netscape.net
http://www.geocities.com/SiliconValley/6276

What might make my macro look "shaded" is that you're feeding it values already in the 5-6-5 bit range for each color! You're not supposed to do that; it needs full 8-bit intensities for each of RED, GREEN, BLUE. It shifts the values down from 8->5, 8->6, 8->5 bits for red/green/blue, respectively, and shifts them into their correct location for 16-bit color.

To me it's kinda dumb to do half the work of the macro while passing its arguments. The only way I can see your macro working is if you use it like this:

short mycolor = RGB16BIT(_24bit.red >> 3, _24bit.green >> 2, _24bit.blue >> 3);

with mine, you use it like this:

short mycolor = RGB24to16(_24bit.red, _24bit.green, _24bit.blue);

.. this lets the macro do all the work

Hope this helps

P.S. Splat: you know what I mean about the "zooming out" thing?

foofightr, you were right about the problem I had with your macro; however, that one pixel still keeps getting messed up, the one in the images I posted, so I'm guessing it's not a problem with the macro. I haven't been able to figure out what's causing it, though. Any ideas? I'm all out.

thanks!

------------------
Luis Sempe
visuallr@netscape.net
http://www.geocities.com/SiliconValley/6276

Is this pixel showing up when you do a straight blt or from a colorkeyed blt?

Josh

foofightr: Yeah, with dithering it's crucial that you always pass the EXACT error and do not round, especially up. If you do, in a white region the error will accumulate until you get a pixel that's "whiter than white", in which case your storage for that color will overflow from 255 to 0, and you get black.

If that's not what you were talking about, then I suggest using a slightly more advanced error-passing strategy. If you are not zig-zagging across the bitmap (left to right, then right to left), then you cannot get away with the simple 50/50 error passing I described above - you must pass some of the error on the diagonals downward, both to the left and to the right.

- Splat


Splat, how does your dithering formula work in pseudo-code? I don't really understand how this "pixel error" is distributed from line to line.

Vader

Ok, here's the thing:

Let's start at the top-left pixel. Looking at the 8-bit source red component, we see that it is 10111011 (187). So we shift it down to the 5-bit version, 10111, and put that in the right place in the destination bitmap. But look, we cut off 011 (3) from the value. So let's distribute that to some nearby pixels (assuming we're zig-zagging down and to the right): we'll add 2 to the pixel to the right and 1 to the pixel below the current one, IN THE SOURCE BITMAP. In effect the total sum of the source bitmap stays the same, because whatever we cut off we put somewhere else.

By continuing across the first line, we do our best to pick the best color in the 5-bit color space for each pixel. Good! However, in almost every case we weren't exact. So, for example, let's say we underestimated the color value of EVERY pixel: the next row will make up for that, since we INCREASED the color values in it.

I'm sorry if that was unclear. There are many FAQs on this topic, some even here at Gamedev.Net I think.

- Splat

Josh, It's showing up when I do a straight blit.

