RSX isn't capable of HDR+FSAA?

Brimstone said:
Microsoft and ATI optimized their 360 hardware to achieve full-throttle performance given the limitations of the console environment. It's one of the reasons why the 360 has 10 MB of eDRAM with the ROPs and other circuitry combined, to allow for AA + HDR without slowing down the rest of the GPU.

At the end of the day, programmers aren't going to be able to create more bandwidth and processing power than a system has.
But they are going to be able to use it more efficiently. Xenos manages HDR+AA as much through the FP10 format as through the eDRAM. Change it to FP16 and you incur a penalty, so it looks likely the lower-quality HDR format will be ubiquitous in XB360 titles. FP16 blending isn't supported on Xenos either, which I think is an even bigger factor driving use of the FP10 format.
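To put rough numbers on the eDRAM side (a sketch assuming a 32bpp Z buffer at 720p, not an official figure): a 32bpp colour format like FP10 is what lets colour + Z fit in the 10 MB without tiling, while 64bpp FP16 blows the budget even without AA.

/* Back-of-the-envelope eDRAM footprint at 720p: colour + Z per sample must
 * fit in the 10 MB of eDRAM, or the frame has to be rendered in tiles.
 * Z is assumed to be 32bpp here; all figures are illustrative. */
#include <stdio.h>

int main(void) {
    const double px = 1280.0 * 720.0;   /* 720p pixel count */
    const double edram_mb = 10.0;
    int samples[] = {1, 2, 4};          /* MSAA sample counts */
    for (int i = 0; i < 3; ++i) {
        double fp10 = px * samples[i] * (4 + 4) / (1024 * 1024); /* 32bpp colour + 32bpp Z */
        double fp16 = px * samples[i] * (8 + 4) / (1024 * 1024); /* 64bpp colour + 32bpp Z */
        printf("%dxAA: FP10 %.1f MB %s, FP16 %.1f MB %s\n",
               samples[i],
               fp10, fp10 <= edram_mb ? "(fits)" : "(tiled)",
               fp16, fp16 <= edram_mb ? "(fits)" : "(tiled)");
    }
    return 0;
}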
 
Is it me or has EVERY PS3 game that has been shown had HDR???

It seems that devs love it on PS3; there's only a handful of 360 games that have it... :)
 
version said:
Use 128-bit rendering, like Kutaragi was talking about at E3. No problem with this.
Certainly - there would be no problem at all with 128-bit color, if only the RSX had access to unlimited memory and bandwidth. :)
 
!eVo!-X Ant UK said:
Is it me or has EVERY PS3 game that has been shown had HDR???
Dunno quite how you'd tell. After all, GT3 on PS2 had HDR-like exposure adjustment. Did that have HDR?

You don't need HDR to do bloom and 'backlight exposure'. Its main use AFAIK is where you have light and dark areas and want to maintain detail in both, like walking into a cave that's 1/10th the brightness of outside and gradually having the 'eyes' adjust so the details on models inside cover the range of dark to light with 256 levels of intensity per channel. Some situations won't need HDR, like a space combat game (Star Wars) with constant illumination, or a sports game.
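A minimal sketch of that kind of exposure adaptation (assuming a simple exponential tone-map; the adaptation rate and curve are illustrative, not any shipped game's values):

/* The scene is shaded in a high dynamic range, then an exposure factor that
 * tracks the average scene luminance maps it down to 256 levels per channel. */
#include <math.h>

static float adapted_lum = 1.0f;  /* smoothed average scene luminance */

/* Call once per frame with the measured average luminance. */
void adapt(float scene_avg_lum, float dt) {
    const float speed = 1.5f;     /* assumed adaptation rate, 1/seconds */
    adapted_lum += (scene_avg_lum - adapted_lum) * (1.0f - expf(-dt * speed));
}

/* Map one HDR channel value to an 8-bit LDR value. */
unsigned char tonemap(float hdr) {
    float exposed = 1.0f - expf(-hdr / adapted_lum);  /* simple exponential curve */
    float clamped = exposed < 0.0f ? 0.0f : (exposed > 1.0f ? 1.0f : exposed);
    return (unsigned char)(clamped * 255.0f + 0.5f);
}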
 
Shifty Geezer said:
Dunno quite how you'd tell. After all, GT3 on PS2 had HDR-like exposure adjustment. Did that have HDR?

You don't need HDR to do bloom and 'backlight exposure'. Its main use AFAIK is where you have light and dark areas and want to maintain detail in both, like walking into a cave that's 1/10th the brightness of outside and gradually having the 'eyes' adjust so the details on models inside cover the range of dark to light with 256 levels of intensity per channel. Some situations won't need HDR, like a space combat game (Star Wars) with constant illumination, or a sports game.
I believe TeamICO went to great lengths to 'fake' HDR in Shadow of the Colossus with bloom, over-saturation and dynamic aperture adjustment.
 
Shifty Geezer said:
That's not an accurate assessment. The actual discussion said a fair bit about the improved quality over FP16 framebuffers in most situations, the fact that this method can be used on any and all SM3.0 (probably even SM2.0) architectures with the same benefit of considerable BW savings, not just RSX, and that it was a better solution than truncated FP formats like Xenos's FP10.
Deano was definitely going out on a limb in suggesting that.

NAO32 does not support the most widely used alpha blending modes. That is a serious drawback, and if you don't believe me, look at how HDR is always specially coded for ATI hardware on the PC, if at all. It has absolutely NOTHING to do with SM3.0, and everything to do with FP blending.

If you don't have hardware blending, you really only have two choices. Either don't do blending in the HDR buffer and save it all for the final LDR image that you send to the display (in which case you will miss some HDR effects), or you do what is called "ping-ponging" to emulate blending. This copies a portion of the main framebuffer into a temporary texture, and then reads from this texture when rendering back into the main framebuffer.
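A CPU-side sketch of that ping-pong pattern (buffer sizes and the standard "over" blend equation are illustrative assumptions; real ping-ponging happens on the GPU between render targets):

#include <string.h>

#define W 1280
#define H 720

static float fb[H][W][3];       /* HDR framebuffer, RGB float */
static float scratch[H][W][3];  /* temporary copy (the "ping-pong" texture) */

void blend_sprite(int x0, int y0, int w, int h,
                  const float src[][3], const float alpha[]) {
    /* 1. Copy the covered region of the framebuffer to the scratch texture. */
    for (int y = 0; y < h; ++y)
        memcpy(scratch[y0 + y][x0], fb[y0 + y][x0], (size_t)w * 3 * sizeof(float));

    /* 2. Re-render, reading the copy and writing the blended result back into
     *    the main target. Every overlapping transparent object repeats 1-2. */
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int c = 0; c < 3; ++c) {
                float dst = scratch[y0 + y][x0 + x][c];
                float a   = alpha[y * w + x];
                fb[y0 + y][x0 + x][c] = src[y * w + x][c] * a + dst * (1.0f - a);
            }
}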

You have to repeat this for every potentially overlapping transparent object you draw. Something like FarCry's dense vegetation is near impossible (hence no ATI HDR patch). Layered smoke becomes too expensive as well. I recall reading presentations from ATI and NVidia that say rendertarget changes are expensive, so keep them in single digits. Ping-ponging, then, is not exactly ideal for rendering 1000 small transparent sprites. Fire, smoke, dust, rain, grass/bushes, muzzle flash, explosions, windows/glass, etc. all become hard to draw without alpha blending. Sometimes the ping-pong requirements can be reduced by reformulating the effect, sometimes not.

If the alpha channel is used to indicate brightness (which is how HL2 works, I think, since it doesn't use a FP16 framebuffer), it is much more of a canned effect, and you're not really rendering anything at a higher dynamic range. Moreover, some effects of HDR are usually missing, like a bright object through a window or dust cloud.
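For illustration, decoding such a format might look like this (the 8x maximum scale is an assumption, not HL2's actual constant):

/* RGB stores the base colour and A stores a brightness scale factor,
 * so decode is a single multiply per channel. */
typedef struct { unsigned char r, g, b, a; } rgba8;

void decode_scaled_rgba(rgba8 p, float out[3]) {
    const float max_scale = 8.0f;              /* assumed encoding range */
    float scale = (p.a / 255.0f) * max_scale;  /* brightness multiplier */
    out[0] = (p.r / 255.0f) * scale;
    out[1] = (p.g / 255.0f) * scale;
    out[2] = (p.b / 255.0f) * scale;
}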

Heavenly Sword probably doesn't have very big blending requirements, which is why the custom format works for NT. However, NAO32 is not a perfect solution by any means, and has heavy tradeoffs. We discussed this at length in a thread about RSX's bandwidth. (Aside: I wish nAo had addressed my last post there, but I understand he's quite busy.)
 
_phil_ said:
Far Cry is alpha test. Should work, shouldn't it?
Not always alpha tested. Look at the grass in the pic I linked to, or here. Blended vegetation is far less aliased, especially when lower mipmaps kick in.
 
Mintmaster said:
Not always alpha tested. Look at the grass in the pic I linked to, or here. Blended vegetation is far less aliased, especially when lower mipmaps kick in.

If you're blending, you're probably:

Rendering stuff with soft edges.
Rendering in a depth-sorted order after all the opaque geometry is done.

In which case, I see no reason you can't resolve your AA buffer (random colour space or otherwise) down to a normal one in an RGB colour space (but retain HDR) and then render your translucent geometry as usual.

Then apply the tone-map and other HDR processing...
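In outline (hypothetical stub names, purely to show the sequence being proposed):

/* Each stub stands in for a real renderer pass. */
void render_opaque_aa(void)   { /* opaque pass into the AA'd, custom-space target */ }
void resolve_to_rgb_hdr(void) { /* resolve AA samples, convert to an RGB HDR buffer */ }
void render_translucent(void) { /* depth-sorted alpha pass; hardware blending now works */ }
void tonemap_and_post(void)   { /* tone-map and other HDR processing to the display */ }

void render_frame(void) {
    render_opaque_aa();
    resolve_to_rgb_hdr();
    render_translucent();
    tonemap_and_post();
}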
 
Mintmaster said:
Deano was definitely going out on a limb in suggesting that.
What I meant is that for opaque geometry there is no good reason to use a HDR RGB colour space except lack of shader instructions. Given you can always convert to another space for alpha, why would you use HDR RGB?

Unfortunately, people seem to think that one framebuffer and colour space per game is the only option. We use several in a single frame... NAO32 opaque, FP16 RGB HDR alpha, ARGB8 RGB LDR alpha.
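For reference, a rough sketch of a LogLuv-style encode in the spirit of NAO32 (32bpp total: 16-bit log luminance plus 8+8-bit (u',v') chromaticity). The matrix and range/scale constants follow Greg Ward's LogLuv idea; the numbers NT actually uses may well differ:

#include <math.h>

typedef struct { unsigned short Le; unsigned char ue, ve; } nao32_like;

nao32_like encode(float r, float g, float b) {
    /* RGB -> CIE XYZ (sRGB primaries, an assumption here). */
    float X = 0.4124f*r + 0.3576f*g + 0.1805f*b;
    float Y = 0.2126f*r + 0.7152f*g + 0.0722f*b;
    float Z = 0.0193f*r + 0.1192f*g + 0.9505f*b;
    float d = X + 15.0f*Y + 3.0f*Z + 1e-6f;

    nao32_like p;
    /* Log-encode luminance into 16 bits: covers roughly 2^-16..2^16. */
    float Le = (log2f(Y + 1e-6f) + 16.0f) * 2048.0f;
    p.Le = (unsigned short)(Le < 0 ? 0 : (Le > 65535 ? 65535 : Le));
    /* Perceptually uniform (u',v') chromaticity, 8 bits per coordinate;
     * valid colours scale to <= 255 with this factor. */
    p.ue = (unsigned char)(4.0f*X / d * 410.0f);
    p.ve = (unsigned char)(9.0f*Y / d * 410.0f);
    return p;
}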
 
Mintmaster said:
Deano was definitely going out on a limb in suggesting that.

NAO32 does not support the most widely used alpha blending modes. That is a serious drawback, and if you don't believe me, look at how HDR is always specially coded for ATI hardware on the PC, if at all. It has absolutely NOTHING to do with SM3.0, and everything to do with FP blending.
Dean was faster than me, but yes, one can obviously use different color spaces / render target formats / resolutions at each rendering pipeline stage.
If you don't have hardware blending, you really only have two choices. Either don't do blending in the HDR buffer and save it all for the final LDR image that you send to the display (in which case you will miss some HDR effects), or you do what is called "ping-ponging" to emulate blending.
As you can have different HDR representations in the same frame, you can also have different options; furthermore, in some special cases with non-overlapping geometry you can probably blend in a shader without ping-ponging at all.
You have to repeat this for every potentially overlapping transparent object you draw. Something like FarCry's dense vegetation is near impossible (hence no ATI HDR patch).
Umh... the quality is not the same, but what about alpha-to-coverage? :)
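For what it's worth, on PC-class hardware alpha-to-coverage is a single render state; OpenGL shown purely as an illustration (console APIs differ, and this assumes a multisampled render target):

#include <GL/gl.h>   /* GL 1.3+; some platforms also need glext.h */

/* Convert fragment alpha into an MSAA coverage mask, so foliage edges get
 * smoothed by multisampling instead of sorted alpha blending. */
void enable_alpha_to_coverage(void) {
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE);
}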
Layered smoke becomes too expensive as well. I recall reading presentations from ATI and NVidia that say rendertarget changes are expensive, so keep them in single digits.
Consoles are a bit better..
Heavenly Sword probably doesn't have very big blending requirements, which is why the custom format works for NT.
Blending requirements aside, you might decide to use it anyway, since you're not 'forced' to blend in that space.
However, NAO32 is not a perfect solution by any means, and has heavy tradeoffs. We discussed this at length in a thread about RSX's bandwidth. (Aside: I wish nAo had addressed my last post there, but I understand he's quite busy.)
Sorry, I completely forgot about that post (and yes, lately I'm working like a dog :) )
 
DeanoC said:
What I meant is that for opaque geometry there is no good reason to use a HDR RGB colour space except lack of shader instructions. Given you can always convert to another space for alpha, why would you use HDR RGB?
These are just theoretical reasons. Heck, they may not even be possible:

1) You don't want to spend the memory on all the buffers.
2) Rendering everything to FP16 buffers is not the biggest bottleneck.
 
Inane_Dork said:
1) You don't want to spend the memory on all the buffers.
You don't need ALL those buffers at the same time... *hint* ;)
2) Rendering everything to FP16 buffers is not the biggest bottleneck.
It depends; in memory-BW-constrained systems it can be a big win, like having 50% or even more extra bandwidth for your textures.
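To put rough numbers on that claim (illustrative assumptions: 720p, 3x average overdraw, 30 fps, counting colour writes only; blending reads would roughly double these figures):

#include <stdio.h>

int main(void) {
    const double px = 1280.0 * 720.0;    /* 720p pixel count */
    const double overdraw = 3.0;         /* assumed average overdraw */
    const double fps = 30.0;
    double fp16  = px * 8.0 * overdraw * fps / 1e9;  /* 64bpp colour writes, GB/s */
    double nao32 = px * 4.0 * overdraw * fps / 1e9;  /* 32bpp colour writes, GB/s */
    printf("FP16: %.2f GB/s, NAO32: %.2f GB/s, freed for textures: %.2f GB/s\n",
           fp16, nao32, fp16 - nao32);
    return 0;
}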
 
Shifty Geezer said:
FP16 blending isn't supported on Xenos either, which I think is an even bigger factor driving use of the FP10 format.

Can someone else confirm this? I have read reports that would contradict it. I thought Xenos's ROPs did support FP16 blending, but its TMUs didn't support FP16 filtering, much like ATI's X1000 series.
 
I've read a bunch of things, but I understand this to be true:

Xenos has FP16 filtering.
Xenos has FP16 blending.
Xenos can do FP16 MSAA, except that it cannot automatically resolve the samples to pixels.

Someone with more interest or experience can correct me.
 
Mintmaster said:
Ping-ponging, then, is not exactly ideal for rendering 1000 small transparent sprites. Fire, smoke, dust, rain, grass/bushes, muzzle flash, explosions, windows/glass, etc. all become hard to draw without alpha blending. Sometimes the ping-pong requirements can be reduced by reformulating the effect, sometimes not.
There are actually compelling reasons (unrelated to HDR and/or any specific hardware) to render fillrate-guzzling particle sprites like smoke etc. into offscreen targets and combine them with the opaque stuff in a later pass.
In which case you could also render them in a different HDR space, one that may even support fixed hw blending.
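A sketch of that idea (quarter-area particle target, premultiplied alpha, nearest-neighbour upsample for brevity; all of these are illustrative choices, not any particular engine's):

#define W 1280
#define H 720

static float scene[H][W][3];          /* full-res opaque colour */
static float particles[H/2][W/2][4];  /* quarter-area premultiplied RGBA */

/* Composite the low-res particle buffer over the full-res scene; the sprites'
 * fill cost has already been paid at a quarter of the area. */
void composite_particles(void) {
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x) {
            const float *p = particles[y / 2][x / 2];  /* nearest upsample */
            for (int c = 0; c < 3; ++c)
                scene[y][x][c] = p[c] + scene[y][x][c] * (1.0f - p[3]);
        }
}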
 