Crazyace said: brings to mind the Jaguar's 16-bit CRY mode for graphics - gave '24-bit' colour in 16 bits, but Gouraud shading was a bit strange at times
Bingo. btw I believe this is just the beginning, there's a lot more work to do.

ERP said: He describes it above.
He declares an MSAA buffer, renders to it in his new format, and does the final downsample manually. I would guess he reserves enough free bits at the top of the luminance LSB to deal with carry out of the bilinear filter lerps, resolves the carry in the shader, does his tone mapping, and writes the final result out in RGB.
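For readers trying to picture the kind of format being discussed: the idea is to store colour as chromaticity plus a log-encoded luminance in an ordinary 8-bit-per-channel target, with a 16-bit luminance split across two channels. The snippet below is a purely illustrative sketch of such a packing; the chroma range, luminance window, and exact layout are my own assumptions, not nAo's actual format.

```python
import math

# Illustrative packing: 16-bit log2 luminance split across two 8-bit
# channels, plus 8-bit u' and v' chromaticity. The matrices are standard
# Rec.709 <-> XYZ; the 0.62 chroma range and 32-stop luminance window
# are assumptions made for this sketch.

def rgb_to_luv8(r, g, b):
    # linear Rec.709 RGB -> CIE XYZ
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    d = x + 15.0 * y + 3.0 * z
    if d == 0.0 or y <= 0.0:
        return (0, 0, 0, 0)
    # u'v' chromaticity quantised to 8 bits (u', v' stay below ~0.62)
    u8 = min(255, int(round((4.0 * x / d) / 0.62 * 255.0)))
    v8 = min(255, int(round((9.0 * y / d) / 0.62 * 255.0)))
    # log2 luminance over a ~32-stop window, quantised to 16 bits
    L = min(max((math.log2(y) + 16.0) / 32.0, 0.0), 1.0)
    L16 = int(round(L * 65535.0))
    return (L16 >> 8, L16 & 0xFF, u8, v8)  # two luma bytes + chroma

def luv8_to_rgb(l_hi, l_lo, u8, v8):
    if v8 == 0:
        return (0.0, 0.0, 0.0)
    L16 = (l_hi << 8) | l_lo
    y = 2.0 ** ((L16 / 65535.0) * 32.0 - 16.0)
    up = u8 / 255.0 * 0.62
    vp = v8 / 255.0 * 0.62
    # u'v' + Y -> XYZ -> linear RGB
    x = y * 9.0 * up / (4.0 * vp)
    z = y * (12.0 - 3.0 * up - 20.0 * vp) / (4.0 * vp)
    r = 3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b = 0.0557 * x - 0.2040 * y + 1.0570 * z
    return (r, g, b)
```

A real implementation would do the encode in the pixel shader and the decode during resolve/tone mapping; the point is just that chroma survives in 8 bits per channel while luminance gets the full 16.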
Jaws said: I guess tiling the backbuffer into the SPUs' LS would be feasible with the reduced bandwidth of INT8? Not to mention that SPUs should rip through that datatype!
Yep, but one could work on the int8 format as it is, or convert it to a full-precision floating point representation (in Luv color space) with a little snippet of code.
Laa-Yosh said: AFAIK SPUs only support 32 bit float data, I recall DeanoC or Faf posting about this...
Jaws said: Each SPU can work on 16-way 8bit integers...

Looking at the ISA, the SPU can natively add/subtract 16/32-bit integers and multiply 16-bit ints. Everything else needs to be broken up into multiple operations. That ain't that important though; I think the big problem is that divide is missing...
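To illustrate the point that wider integer ops have to be broken up: below is a sketch of how a 32x32-bit multiply can be assembled from 16x16-bit partial products, the decomposition a compiler or programmer has to use when the hardware multiplier only takes 16-bit operands. This is illustrative Python with ints standing in for registers, not SPU intrinsics.

```python
# Build the low 32 bits of a 32x32 multiply from 16x16 partial products.
MASK16 = 0xFFFF

def mul32_via_16(a, b):
    """Low 32 bits of a * b using only 16x16-bit multiplies."""
    a_lo, a_hi = a & MASK16, (a >> 16) & MASK16
    b_lo, b_hi = b & MASK16, (b >> 16) & MASK16
    lo = a_lo * b_lo                   # contributes bits 0..31
    mid = a_lo * b_hi + a_hi * b_lo    # cross terms land at bit 16
    # a_hi * b_hi would land at bit 32, so it vanishes modulo 2^32
    return (lo + ((mid & MASK16) << 16)) & 0xFFFFFFFF
```

Divide is worse still: with no hardware instruction at all, it typically has to be emulated via a floating point reciprocal estimate plus refinement steps.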
Jawed said: What would you do with framebuffer data in Cell that RSX wouldn't do far more effectively?
Npl said: Looking at the ISA, the SPU can natively add/subtract 16/32-bit integers and multiply 16-bit ints. Everything else needs to be broken up into multiple operations. That ain't that important though; I think the big problem is that divide is missing...
Cyander said: Still, I agree with them, the RGB colorspace is the WORST for lighting math, as RGB has no direct connection to a light source. We have made a lot of math to approximate how a light would work in the RGB space, as RGB is faster to display on a computer, even though something like HSV or YUV is a better choice when you are attempting to map something to reality. As long as the calculations themselves are done with reasonable accuracy by their shaders, does it matter what the storage format is in the framebuffer?

RGB isn't a perfect colourspace, but it's actually not a bad choice for rendering. The most 'natural' colourspace for a framebuffer (assuming you want to accumulate contributions from multiple lights over multiple passes) is some number of channels, each representing the number of photons accumulated within some wavelength band. FP16 RGB can be treated as a rough approximation of this for three wavelength bands. Some offline renderers are now moving to using larger numbers of channels and integrating at the end against curves approximating the response of the cones in the human eye.
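The "integrate at the end against cone response curves" idea can be sketched as follows; the band layout and Gaussian response curves here are toy stand-ins for real measured CIE data. The useful property is linearity: integrating the summed spectrum gives the same answer as summing the per-light integrals, so accumulation stays correct.

```python
import math

# Toy spectral accumulation: N wavelength bands per pixel, integrated
# against made-up Gaussian "cone" curves at the end. Real renderers use
# measured CIE data; every number here is an assumption for the sketch.
BANDS = [400 + 20 * i for i in range(16)]  # band centres in nm

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def cone_responses(wavelength):
    # stand-ins for long/medium/short cone sensitivities
    return (gaussian(wavelength, 560.0, 50.0),
            gaussian(wavelength, 530.0, 50.0),
            gaussian(wavelength, 420.0, 30.0))

def spectral_to_lms(spectrum):
    """Integrate per-band energy against the response curves -> 3 channels."""
    lms = [0.0, 0.0, 0.0]
    for wavelength, energy in zip(BANDS, spectrum):
        for i, r in enumerate(cone_responses(wavelength)):
            lms[i] += energy * r
    return lms

# two lights accumulate per band, exactly like additive blending
light1 = [1.0 if w < 500 else 0.0 for w in BANDS]
light2 = [0.0 if w < 500 else 1.0 for w in BANDS]
accumulated = [a + b for a, b in zip(light1, light2)]
```

Because the final integration is linear, converting the accumulated spectrum gives the same triple as converting each light separately and adding, which is what makes the representation well behaved under multi-pass lighting.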
heliosphere said: but in terms of what colour space is best / most natural for lighting it's not better than FP16 RGB.

Lighting is performed neither in FP16 nor nAo's color space - it's done in the shader with normal FP32. The final results are downconverted with a loss of precision in BOTH cases - it just happens to be that the FP16 downsample is perceptually 'more' lossy.
Fafalada said: Lighting is performed neither in FP16 nor nAo's color space - it's done in shader with normal FP32. The final results are downconverted with a loss of precision in BOTH cases - it just happens to be that the FP16 downsample is perceptually 'more' lossy. So blending issues aside, nAo's choice of colorspace would give better results for lighting. One could argue that blending should not even be mentioned in the same breath as physical correctness, given that it's hardly any kind of physical phenomenon - but unfortunately it's far too important a tool in CG for us to live without it.

As soon as you want to accumulate lighting over multiple passes, additive blending is very important (and also physically correct as long as you are working in a linear RGB space). If you can do all your lighting in a single pass then blending is less important, but one-pass-per-light is quite a common technique these days, and then you want your frame buffer to do the right thing when you do an additive blend. Deferred renderers that do a pass per light also want a linear RGB frame buffer for correct lighting. Even for engines that aren't doing a pass per light, it's useful to be able to accumulate lighting across more than one pass in many situations.
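A one-channel toy example of why the colour space matters for blending: in a linear buffer the hardware ADD really does sum light, while in a log-luminance-style encoding the very same ADD effectively multiplies luminances. The pure log2 encoding below is an assumption made for the sketch; any non-linear encoding misbehaves in some analogous way.

```python
import math

# One channel, two lights: what the hardware ADD means in two encodings.

def encode_log(y):
    return math.log2(y)

def decode_log(v):
    return 2.0 ** v

light_a, light_b = 0.75, 0.25

# linear framebuffer: ADD accumulates the light correctly
linear_sum = light_a + light_b                                     # 1.0

# log framebuffer: the same ADD multiplies the luminances instead
log_blend = decode_log(encode_log(light_a) + encode_log(light_b))  # ~0.1875
```

This is why a format like nAo's wants the accumulation done in the shader (or avoided altogether), with the encode happening only on the final write.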
heliosphere said: If you can do all your lighting in a single pass then blending is less important but one-pass-per-light is quite a common technique these days and then you want your frame buffer to do the right thing when you do an additive blend.

Graphics trends change - we do what works best with target platforms, not what is considered "common". From where I'm standing, PS3 is the diametric opposite of PS2 - while on PS2 FB accumulation was our bread and butter, on PS3 you want to avoid it at all costs, regardless of what FB format you use. On 360 excessive accumulation is a bad idea too - not performance-wise, but because you're stuck with FP10.

Fafalada said: Graphics trends change - we do what works best with target platforms, not what is considered "common".

I wasn't disputing that this is a useful technique for next-gen platforms (when do we get to start calling them current-gen, btw?) - it's a clever method for optimizing for the limited framebuffer bandwidth and/or blending capabilities available. What I was taking issue with was the idea that FP16 RGB is "the WORST for lighting math" - if you have the bandwidth and the blending support, it is a good choice for a framebuffer because it allows you to accumulate lighting over multiple passes correctly (at least as correctly as is possible without moving to more spectral bands). I think FP16 RGB will be the framebuffer format of choice on the PC once DX10 hardware arrives, because it makes it a lot easier to "do the right thing" and keep your lighting in a linear space that behaves properly under additive blends, where the alternatives require you to move back and forth from one colour space to another (which is not straightforward without fully programmable blending) if you want correct behaviour when accumulating lighting across multiple passes.
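The claim that FP16 behaves well under additive blends can be illustrated with a quick experiment: round each blend result to IEEE half precision (Python's struct module supports the 'e' half-precision format) and compare the multi-pass sum against a full-precision reference. The light values below are arbitrary numbers chosen for the sketch.

```python
import struct

# Mimic an FP16 render target: every blend result is rounded to half
# precision via struct's 'e' format, one additive pass per light.

def fp16(x):
    return struct.unpack('e', struct.pack('e', x))[0]

lights = [0.9, 0.33, 0.07, 0.21, 0.002, 0.55, 0.18, 0.04]

fb = 0.0
for contrib in lights:
    fb = fp16(fb + fp16(contrib))   # one pass per light, ADD blend

exact = sum(lights)                  # full-precision reference
rel_err = abs(fb - exact) / exact    # stays well under 1%
```

With an 11-bit significand each blend adds only a tiny relative rounding error, so the accumulated result tracks the exact linear sum closely - exactly the "do the right thing" behaviour being argued for.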
Shifty Geezer said: nAo (and others): Has your work on PS2 and its lack of hardware features had a positive contribution on developing alternative software solutions?

No doubts about that, it helped and still helps.

Shifty Geezer said: The need to solve in 'software' what PS2 lacked in hardware has thrown up a lot of research, including a lot of alternative data models for colourspaces.

To be fair, the answer in this case is negative: when I was working on PS2 I wasn't caring about color spaces at all.

Shifty Geezer said: Or is this research into different data models for GPUs ongoing research across all platforms? Presumably not, as the IHVs have tended to keep developing standard 'symmetric' data formats.

I don't know the answer here.

Shifty Geezer said: I'm curious how much non-standard colour modelling is going to be appearing on PCs and consoles and who's going to be pushing the envelope.

I really don't know, but as long as we need more and more frame buffer bandwidth I believe that using a different color space could be useful even some years from now.