To start off, I'll point out that I know practically nothing about game development or programming or any of that; this is more information-hunting than anything else. I do have a background in CGI, which, in terms of graphics and rendering, follows many of the same principles, just with totally different execution.
Basically, I'm trying to figure out the difference between a 16bpc graphics environment and a 32bpc graphics environment, aside from the obvious stuff like "32bpc has more information", which I already know.
Many current PC games/engines, and all console games, support only an 8bpc environment. What I'm about to say may be totally wrong, but it's how I've come to understand it on my own.
The way I understand it, in a 32-bit (8bpc) environment, no value anywhere in the scene can go higher than 1 or lower than 0 (or 100%, or 255, however you want to look at it). This is why a lot of games seem to have very "flat" lighting and shading: if you illuminate the entire scene to 50% intensity, then your key light can contribute no more than another 50% before everything clips, which doesn't yield the most dynamic lighting.
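To illustrate what I mean, here's a rough sketch with made-up intensity values (not taken from any real engine, just arithmetic):

```python
# A clamped 8bpc-style buffer squeezes every channel into 0..1 (0..255),
# so anything the lights add beyond "full white" is simply thrown away.
ambient = 0.50      # scene-wide fill at 50% intensity
key_light = 0.80    # key light at 80% intensity

clamped = min(ambient + key_light, 1.0)   # 8bpc result: 1.0, the extra 0.3 is lost
unclamped = ambient + key_light           # floating-point result: 1.3, still there
                                          # for later exposure/tone adjustments
print(clamped, unclamped)
```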
In CG-land, there are no such restrictions: I can take the illumination levels as high or as low as I want, and everything is still there. LightWave, like all other high-end CG apps, uses a 128-bit floating-point renderer. Because of this, I have no concept of what limitations exist in a 16bpc environment; I have no experience with it, because nothing I use supports it. It's all or nothing: CG artists and production houses simply don't waste their time with 16bpc, because it offers no advantage whatsoever over 32bpc, which has far more information.
Obviously, that isn't the case in a game environment, where memory is often a big factor in what you can add.
Now, I know the difference in image formats, HDR and all that, since I use them regularly in my CG work. But, as before, I have practically no experience with FP16 images; in fact, I'm not even sure I've ever seen one. I have no way of creating them except from scratch in LW (which kind of defeats the purpose, since I'd be setting up all the lighting anyway).
Do FP16 images actually contain the full dynamic range of lighting that a 32bpc image contains? Or is it choked off at some point? Does anyone have an example of a 16bpc HDR photograph that I could fiddle with to see the exposure ranges?
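To put that in concrete terms, this is roughly the experiment I'm imagining (a minimal sketch using half-precision floats as a stand-in for FP16 image data, since I don't have a real FP16 image to test with):

```python
import numpy as np

# Round-trip some HDR-style intensities through 16-bit float and back,
# to see where the range or precision gets "choked off" versus 32-bit float.
values = np.array([0.0001, 0.5, 1.0, 16.0, 1000.0, 65504.0, 100000.0],
                  dtype=np.float32)

roundtrip = values.astype(np.float16).astype(np.float32)

for original, back in zip(values, roundtrip):
    print(f"{original:>12.4f} -> {back:>12.4f}")
# The last value overflows to infinity in 16-bit float, which is exactly
# the kind of ceiling I'm asking about.
```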
My other question is about scene lighting. I have my theory on how 8bpc engines are lit (above), and full FP environments have absolutely no lighting restrictions at all. But what about 16bpc engines? Are they limited in the dynamic range you can represent through lighting, shading, reflections, specular highlights, etc.?
Feel free to get technical in terms of illumination, bit depth, etc. (all the stuff I'd know from a CGI perspective), but I don't have any programming knowledge, so if you go there, you'll lose me.
And yes, this is actually in reference to games rather than CGI; consoles, specifically. As most of you are probably aware, the Xbox 360 supports a 16bpc (64-bit) graphics environment, while the PS3 supports a 32bpc (128-bit) environment. I'd like to know what the difference is, and, if these engines were (hypothetically) used to their fullest extent, what sort of advantage one might have over the other.
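For a rough sense of the memory side of it, here's the back-of-the-envelope math as I understand it (assuming four channels per pixel and a 1280x720 frame; those figures are my own assumptions, not anything from a spec sheet):

```python
# Per-frame buffer cost for the two formats mentioned above, assuming
# 4 channels per pixel and a 1280x720 framebuffer (my assumptions).
width, height = 1280, 720
pixels = width * height                # 921,600 pixels

fp16_bytes = pixels * 4 * 2            # 64-bit pixels  -> about 7.0 MiB
fp32_bytes = pixels * 4 * 4            # 128-bit pixels -> about 14.1 MiB

print(fp16_bytes / 2**20, fp32_bytes / 2**20)
```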