SSAO is becoming pretty much a standard now, isn't it?

Ether_Snake

Newcomer
I noticed that a whole lot of games have been using it recently.

Uncharted 2, Call of Juarez, Bionic Commando, etc.

Next: object-based motion blur? :)
 
I think it's good. In my opinion, both effects make games look better, and SSAO seems to keep improving, so I don't have a problem with them becoming widespread.
 
Multi-GPU solutions, die! :devilish: OK, well, someone should have seen something bad with going the MRT route in that sense. ;) Use the second GPU to store other properties in 8 more RTs. :p
 
Are you saying that games supporting MRT won't work on AFR solutions? That kinda stuffs up ATI's business plan if those games are becoming more prevalent, doesn't it?
 
> Are you saying that games supporting MRT won't work on AFR solutions? That kinda stuffs up ATI's business plan if those games are becoming more prevalent, doesn't it?

Not MRTs per se. Just the implementation of inter-frame dependent effects. Velocity is inherently time-dependent so... frame dependent.
 
Cheers. So it's just certain effects that won't work properly with AFR, not the entire game? Still, if those types of effects become common, it has the potential to screw ATI's business model a bit.

Not that I mind; I too dislike multi-GPU solutions and would prefer to see a return to single-GPU focus.
 
Time-dependent is certainly not the same thing as interframe dependent. There are absolutely no interframe dependencies from MRT or object-based motion blur.
 
Well according to Dave the inter-frame transfers currently don't saturate the available PCI-e bandwidth (rendering sideport useless). So maybe they've got enough magic dust in their drivers to transfer prior frame data between GPUs without a performance hit.
 
> Time-dependent is certainly not the same thing as interframe dependent.

I meant time for velocity... velocity being speed... speed being per unit time, just to conceptualize using current and prior information (not to suggest calculating off of some timed-information... I don't know. That didn't even cross my mind). But anyways, how do you determine pixel velocity without using previous information?
 
> Well according to Dave the inter-frame transfers currently don't saturate the available PCI-e bandwidth (rendering sideport useless). So maybe they've got enough magic dust in their drivers to transfer prior frame data between GPUs without a performance hit.

The problem is timing, IMHO: if a persistent RT that is required to begin rendering frame 2 is updated only very late in the rendering process, your second GPU has just idled for, say, 75% of frame 1's render time, and most of the parallelism is wasted in such a situation.
 
> Well according to Dave the inter-frame transfers currently don't saturate the available PCI-e bandwidth (rendering sideport useless). So maybe they've got enough magic dust in their drivers to transfer prior frame data between GPUs without a performance hit.
Also consider the fact that you're usually far better off avoiding such transfers altogether via a driver profile and a bit of special coding in the game. For example, you might render impostors into an RT at frame X and use that RT as a texture for the next 20 frames. You'd have to transfer this texture from the GPU that rendered it to all other GPUs in SLI/CF, which the driver will handle by default. It is better, however, to tell the driver not to resolve such sync issues and instead render the impostors twice, in two consecutive frames, so that both GPU0 and GPU1 end up with an updated local copy of the texture.
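The re-render-twice trick above can be sketched as a small scheduling check. This is a hedged, illustrative model only (the function names and the 20-frame interval are assumptions from the post, not any real driver API): with 2-GPU AFR, frame N runs on GPU N % 2, so rendering the impostor on two consecutive frames puts a fresh copy on each GPU and no cross-GPU transfer is needed.

```python
NUM_GPUS = 2                   # 2-way AFR: frame N runs on GPU N % 2
REFRESH_INTERVAL = 20          # impostor RT is reused for ~20 frames

def frames_that_rerender(start_frame, total_frames):
    """Frames on which the impostor RT is (re)rendered: the first
    NUM_GPUS frames of every refresh interval."""
    frames = []
    for f in range(start_frame, start_frame + total_frames):
        phase = (f - start_frame) % REFRESH_INTERVAL
        if phase < NUM_GPUS:   # render on NUM_GPUS consecutive frames
            frames.append(f)
    return frames

def needs_interframe_copy(render_frames):
    """True if some GPU never rendered the RT locally and would have
    to fetch it from the other GPU over PCIe."""
    gpus_with_copy = {f % NUM_GPUS for f in render_frames}
    return len(gpus_with_copy) < NUM_GPUS

# Rendering twice per interval: frames 0, 1, 20, 21 -> both GPUs covered.
assert not needs_interframe_copy(frames_that_rerender(0, 40))
# Rendering only once per interval (frames 0, 20): GPU1 never has a copy.
assert needs_interframe_copy([0, 20])
```

The point of the sketch: the cost of one extra impostor render is traded against a driver-managed RT copy plus the sync point it implies.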
 
> I meant time for velocity... velocity being speed... speed being per unit time, just to conceptualize using current and prior information (not to suggest calculating off of some timed-information... I don't know. That didn't even cross my mind). But anyways, how do you determine pixel velocity without using previous information?

Of course you use previous information, but all the information you need is CPU-side. You're not reusing anything that's on the GPU.
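To make that concrete: a minimal sketch of how per-pixel velocity for motion blur is typically derived from the current and previous frame's transform matrices, which both live CPU-side. The helper names are mine, not from any engine; real implementations do this in a shader, but the math is the same, and nothing rendered by the previous frame is read back.

```python
import numpy as np

def project(mvp, p_world):
    """Project a world-space point to normalized device coordinates."""
    p = mvp @ np.append(p_world, 1.0)
    return p[:3] / p[3]                 # perspective divide

def screen_velocity(mvp_curr, mvp_prev, p_world):
    """Screen-space velocity: where the point is now minus where the
    SAME point projected last frame, using last frame's matrix only."""
    return project(mvp_curr, p_world)[:2] - project(mvp_prev, p_world)[:2]

# Object translated 0.1 units in x between frames:
mvp_prev = np.eye(4)
mvp_curr = np.eye(4)
mvp_curr[0, 3] = 0.1
v = screen_velocity(mvp_curr, mvp_prev, np.zeros(3))
assert np.allclose(v, [0.1, 0.0])
```

Since only `mvp_prev` has to be kept around, an AFR GPU can compute the velocity buffer for its frame with no dependency on the other GPU's render targets.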

> The problem is timing, IMHO: if a persistent RT that is required to begin rendering frame 2 is updated only very late in the rendering process, your second GPU has just idled for, say, 75% of frame 1's render time, and most of the parallelism is wasted in such a situation.

Yep. Sync points are the real performance killer; data transfers are generally a relatively small issue.
 
For NV users, is AO Vista-only?
I have the same drivers (185.85) under XP and Vista, and the AO setting only appears in the control panel under Vista.
 
The SSDO video looks pretty nice, but did anyone notice some funkiness when the camera is moving? Watch at around 51 seconds. The SSDO seems to disappear before the object is occluded by something in front. Maybe just their implementation?
 
Fairly typical artifact for particular angles and distances. It's the nature of the algorithm: those 'ambient occluders' aren't on screen anymore, so no AO is calculated for them after a few frames, hence the vanishing.
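A toy illustration of that limitation, under my own simplified model (names and the uniform-weight averaging are illustrative, not the SSDO paper's exact formulation): screen-space techniques can only gather occlusion from samples whose projected coordinates still land inside the screen, so once the camera moves an occluder off-screen, its contribution simply drops to zero.

```python
def ssao_contribution(occluder_uv_samples):
    """Fraction of occluder samples that can contribute: any sample
    projecting outside the [0,1] x [0,1] screen rectangle is lost."""
    on_screen = [(u, v) for (u, v) in occluder_uv_samples
                 if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0]
    return len(on_screen) / len(occluder_uv_samples)

# Occluder fully visible: full contribution.
assert ssao_contribution([(0.4, 0.5), (0.6, 0.5)]) == 1.0
# Camera moved, occluder projected off-screen: the AO/DO vanishes.
assert ssao_contribution([(-0.2, 0.5), (1.3, 0.5)]) == 0.0
```

That is exactly the disappearing effect visible in the video: the geometry still exists in the world, but not in the depth buffer the shader samples.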
 