Hi,
I know how to convert from 24-bit to 16-bit, but when I tried writing an algorithm to convert the other way around, it didn't work... This is my 24-bit -> 16-bit code:
b = (tempBuffer[(i*3)+0] >> 3);  /* drop the low 3 bits: 8-bit channel -> 5-bit */
g = (tempBuffer[(i*3)+1] >> 3);
r = (tempBuffer[(i*3)+2] >> 3);
color = _RGB16BIT(r, g, b);
((unsigned short *)buffer)[i] = color;  /* index into the 16-bit destination */
so, how do I make this work the other way?
thanks!
Luis Sempe
visual@guate.net
http://www.geocities.com/SiliconValley/6276
...beware the 5-6-5 16-bit cards... ...use the DDPIXELFORMAT struct to find out where the bits are supposed to go / how long they are... something along the lines of (don't know the exact member name, sorry):

unsigned long rmask = ddpixelformat.???RBitMask;
int i = 0;
/* count how far the mask sits above bit 0 */
while ((rmask & 1) == 0) { rmask >>= 1; i++; }
/* then back off until the mask tops out at bit 7, so the channel lands in 8 bits */
while ((rmask & 0x80) == 0) { rmask <<= 1; i--; }

unsigned char R;
if (i > 0)
    R = (tempBuffer & ddpixelformat.???RBitMask) >> i;
else
    R = (tempBuffer & ddpixelformat.???RBitMask) << -i;
Same thing for the other colors (you can do the i calculation once at initialization, and since most (all?) cards use the 5-6-5 or 5-5-5 layout, you can even do away with the if (i > 0) test: i comes out positive for R and G and negative for B in both layouts). Now use the 24-bit macro.
Also, you'll get banding if you just cut off the least significant bits of the 24-bit color. Go read up somewhere (read: Gamasutra) about pixel-format conversion and dithering methods.