Any details on AMD Leo demo?


sqrt[-1]

From:
http://developer.amd.com/SAMPLES/DEMOS/pages/AMDRadeonHD7900SeriesGraphicsReal-TimeDemos.aspx

"Specifically, this demo uses DirectCompute to cull and manage lights in a scene. The end result is a per-pixel or per-tile list of lights that forward-render based shaders use for lighting each pixel. This technique also allows for adding one bounce global illumination effects by spawning virtual point light sources where light strikes a surface. Finally, the lighting in this demo is physically based in that it is fully HDR and the material and reflection models take advantage of the ALU power of the AMD Radeon HD 7900 GPU to calculate physically accurate light and surface interactions (multiple BRDF equations, realistic use of index of refraction, absorption based on wavelength for metals, etc)."

So is this an update of the technique I used in my light-indexed deferred rendering demo, or is it something else entirely? The per-tile list of lights could be similar to Uncharted's tiled lighting (I noticed that a few other presentations at GDC used compute shaders and tiled lighting).

I wonder if they have found a novel new way of supporting different light types in such a scheme? (That is the main problem with light-indexed deferred rendering.)

I assume there will be a SIGGRAPH/GDC presentation on this, but any speculation is welcome.
 
It does sound like light-indexed deferred rendering, or something very similar. It would be pretty easy to implement with compute shader tile calculation, or with DX11.1 logical blend operations.
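
For reference, a minimal CPU sketch of the per-tile culling idea, assuming point lights bounded by spheres and a simple pinhole projection (hypothetical names; a real DirectCompute version runs one thread group per tile and also culls against the tile's min/max depth, which this toy version skips):

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// A point light bounded by a sphere in view space (z points forward).
struct Light { float x, y, z, radius; };

// Build a per-tile light index list by testing each light's projected
// screen-space extent against 16x16-pixel tiles.
std::vector<std::vector<int>> cullLights(const std::vector<Light>& lights,
                                         int width, int height, float focal) {
    const int tile = 16;
    const int tx = (width + tile - 1) / tile, ty = (height + tile - 1) / tile;
    std::vector<std::vector<int>> lists(tx * ty);
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& L = lights[i];
        if (L.z <= 0.0f) continue;            // behind the camera
        // Approximate projected bounds of the light's sphere, in pixels.
        float sx = focal * L.x / L.z + width * 0.5f;
        float sy = focal * L.y / L.z + height * 0.5f;
        float sr = focal * L.radius / L.z;
        int x0 = std::max(0, (int)((sx - sr) / tile));
        int x1 = std::min(tx - 1, (int)((sx + sr) / tile));
        int y0 = std::max(0, (int)((sy - sr) / tile));
        int y1 = std::min(ty - 1, (int)((sy + sr) / tile));
        for (int y = y0; y <= y1; ++y)
            for (int x = x0; x <= x1; ++x)
                lists[y * tx + x].push_back(i);   // light i affects this tile
    }
    return lists;
}

int main() {
    std::vector<Light> lights = {{0, 0, 10, 2}, {5, 3, 20, 4}};
    auto lists = cullLights(lights, 1280, 720, 1000.0f);
    printf("tile 0 sees %zu lights\n", lists[0].size());
}
```

The forward shaders then walk the tile's list for each pixel instead of iterating over every light in the scene.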
 
22-30 FPS on an HD 6970, with the majority around 25 FPS.

Same here. What graphics features does the HD 7970 actually add? Pretty hard to tell these days. But the shader aliasing, jaggies, and texture shimmering, all those IQ eyesores, are still very prevalent even though I was running at 1080p, forcing 8xEQAA, morphological AA on/off, and 16xAF through CCC. The chains on the bridge, the greenery, the stove, the camera edges, the door handle, the dragon's jaws: all of it breaks the illusion of "CGI-in-realtime" graphics.

I wonder if the day will finally come when these go away. IMHO these artifacts are still what keeps us from that "CGI-in-realtime" look.
 

If you want CGI-like quality, then push for IHVs to add support for REYES as well.
 
So how does the real demo compare to the 1080p video posted? I can't run the demo and was quite impressed by the video. (Or perhaps I am less sensitive to these artifacts.)
 
If you want CGI-like quality, then push for IHVs to add support for REYES as well.

Will REYES get rid of that aliasing? There is so much compute, whole teraflops, on the table now, yet the age-old artifacts are still present. And what do you mean by 'supporting' REYES?


sqrt[-1];1616159 said:
So how does the real demo compare to the 1080p video posted? I can't run the demo and was quite impressed by the video. (Or perhaps I am less sensitive to these artifacts.)

Looks worse, man. The visible aliasing clearly distracted me; the video could pass for old CGI.
 

Try an FXAA injector.
(I'm not sure it works with DX11 though, so sorry if that was a silly suggestion.)
 
Will it be a REYES-type system or some form of path tracing that makes it into a marketable game first? It seems like with the Brigade 2 engine we already have an unbiased path-tracing game with no aliasing, but tons of noise. The noise will get better with future hardware and algorithms.

http://igad.nhtv.nl/~bikker/
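
The noise behaviour is just Monte Carlo convergence: the standard error of a pixel estimate falls as 1/sqrt(N), so halving the noise costs four times the samples. A toy illustration, estimating a known integral rather than an actual light path so it stays self-contained:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    // Monte Carlo estimate of the integral of x^2 over [0,1] (exact: 1/3).
    // The RMS error across repeated runs shrinks roughly as 1/sqrt(N),
    // which is why path-traced noise fades so slowly with sample count.
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int N : {16, 64, 256, 1024, 4096}) {
        const int runs = 200;
        double mse = 0.0;
        for (int r = 0; r < runs; ++r) {
            double sum = 0.0;
            for (int i = 0; i < N; ++i) { double x = u(rng); sum += x * x; }
            double err = sum / N - 1.0 / 3.0;
            mse += err * err;
        }
        printf("N=%5d  rms error = %.5f\n", N, std::sqrt(mse / runs));
    }
}
```

Each 4x jump in N roughly halves the printed error, so "the noise will get better" is guaranteed, just slowly.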

 
Watch a Pixar movie and find out. :)
From what I've understood, they get rid of aliasing by massive oversampling. If you rendered at exactly the output resolution you'd get just as jaggy an image as with regular rendering methods.
 
REYES uses stochastic sampling and oversampling to get rid of edge aliasing.

It's quite cheap in REYES because you are basically supersampling solid/Gouraud-colored micropolygon patches. (REYES evaluates all shaders into colors at micropolygon vertices before it actually renders the image.)
http://www.renderman.org/RMR/st/PRMan_Filtering/Filtering_In_PRMan.html

For shader aliasing the preferred cure is to fix the shader itself (supersampling, LOD, prefiltering and so on).
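
"Fix the shader itself" in practice means returning a value filtered over the pixel footprint instead of a point sample. The textbook case is an analytically box-filtered checkerboard; here's a sketch (in a real shader the filter width w would come from the screen-space derivatives):

```cpp
#include <cmath>
#include <cstdio>

static double fract(double x) { return x - std::floor(x); }

// Exact antiderivative of a +/-1 square wave with period 2: a
// triangle wave rising on [0,1] and falling on [1,2].
static double triIntegral(double x) {
    return 1.0 - std::fabs(fract(x / 2.0) * 2.0 - 1.0);
}

// Average of the square wave over a window of width w centered at x.
static double filteredSquare(double x, double w) {
    return (triIntegral(x + 0.5 * w) - triIntegral(x - 0.5 * w)) / w;
}

// A checkerboard that fades smoothly to mid-grey as the filter width
// grows, instead of shimmering like a point-sampled one.
static double filteredChecker(double u, double v, double w) {
    return 0.5 + 0.5 * filteredSquare(u, w) * filteredSquare(v, w);
}

int main() {
    for (double w : {0.01, 0.5, 2.0, 8.0})
        printf("w=%4.2f -> checker(0.25, 0.25) = %.3f\n",
               w, filteredChecker(0.25, 0.25, w));
}
```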
 
Moving this from the other thread as it fits more naturally with the discussion here:

Isn't that similar to how Frostbite 2.0 generally handles its lighting, with screen-space tiles and a compute shader culling pass run ahead of shading?
Yes, it's exactly the same; it's just that instead of storing the surface data that you need to do the lighting, you regenerate it in a second geometry pass. The cost you pay (in addition to the second geometry pass) is storing the light lists. Given that they need to store them in this case, I hope they are storing a hierarchical tree instead of a raw list/bitfield for each tile, but I somehow doubt that they are... It's not really that interesting a question, because you simply test whether it's faster to store the lighting data or the surface data, and do whatever works best.
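
To make the list-vs-bitfield trade-off concrete, here's a tiny sketch of the bitfield variant: one bit per light per tile, walked with a bit scan (`__builtin_ctzll` is GCC/Clang-specific). Whether this beats a raw index list depends on how sparse the bits are, which is exactly the measure-it point:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Per-tile light set stored as a bitfield: bit i set means light i
// affects this tile. Compact when lights are dense, wasteful when a
// tile sees only a couple of lights out of thousands.
struct TileBits {
    std::vector<uint64_t> words;  // ceil(numLights / 64) words

    void set(int light) { words[light >> 6] |= 1ull << (light & 63); }

    template <class F> void forEachLight(F shade) const {
        for (size_t w = 0; w < words.size(); ++w)
            for (uint64_t bits = words[w]; bits; bits &= bits - 1)  // clear lowest set bit
                shade(int(w * 64) + __builtin_ctzll(bits));         // index of lowest set bit
    }
};

int main() {
    TileBits tile{std::vector<uint64_t>(2, 0)};  // room for 128 lights
    tile.set(3); tile.set(64); tile.set(90);
    tile.forEachLight([](int i) { printf("shade with light %d\n", i); });
}
```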

Honestly the "differences" between deferred and forward are not worth discussing these days. It's a big grey area of application-dependent performance considerations. Even generalizations like "forward can do more complex materials" or "deferred can do more lights" are simply incorrect. I guess it's normal for the media discussion to lag the technology by 3-5 years, but it's somewhat tiresome.

The interesting discussion today has nothing to do with lighting or G-buffers or anything else, but rather the shading efficiency of the immediate-mode 3D pipeline, particularly for small triangles and with varying AA techniques (MSAA, SSAA, surface-based deferred AA). That's the most interesting thing about "rescheduling" computation in image space using compute shaders, not that you can save some memory bandwidth by storing the light lists in local memory vs. VRAM or anything else that can be decided by a simple performance test.
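
To put a number on the small-triangle problem: hardware shades in 2x2 quads, and all four lanes of a touched quad execute even when some of its pixels fall outside the triangle. A brute-force count with a toy axis-aligned right triangle (ignoring real rasterization fill rules):

```cpp
#include <cstdio>

int main() {
    // Rasterize a right triangle with legs of length s, counting covered
    // pixels vs. the 2x2 quads the hardware would shade. Lane utilization
    // = useful pixels / (4 * touched quads); it collapses for tiny triangles.
    for (int s : {2, 4, 8, 16, 64, 256}) {
        long pixels = 0, quads = 0;
        for (int qy = 0; qy < s; qy += 2)
            for (int qx = 0; qx < s; qx += 2) {
                int covered = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx) {
                        double x = qx + dx + 0.5, y = qy + dy + 0.5;
                        if (x + y < s) ++covered;  // pixel center inside?
                    }
                pixels += covered;
                if (covered) ++quads;
            }
        printf("s=%4d  lane utilization = %.0f%%\n",
               s, 100.0 * pixels / (4.0 * quads));
    }
}
```

A two-pixel triangle shades at 25% utilization in this model, which is why shading efficiency with dense geometry is the real question.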

Neat demo though. I wish they had used LEAN mapping or something to get rid of the specular shimmering, but otherwise it's quite clean looking.
 