Jawed, that is very interesting.
20-30fps with their edge-to-point based system seemed very impressive for the early demo of the dragon. She specifies at the end that the illumination examples (everything after FBT) were running on a 2.8GHz Pentium D, using only 1 core. For those who cannot watch it (50min), they had models up in the millions-of-polygons range with hundreds of thousands of lights and were doing global illumination techniques, rendering frames in under 2 minutes. The images she showed with sub-2min render times were impressive and give some hope. We may have to wait 10-12 years for interactive framerates, but the techniques she demoed seem to avoid a ton of problems. They work with moving lights and also with moving geometry that changes shape. It was exciting when she showed that as more lights were added the computational cost leveled off quickly.
Overall the concepts are interesting and make a lot of sense. For those who cannot watch it, her illumination technique was basically taking the idea of GI and mixing in 'smart' sparse sampling. Lights are grouped into clusters to build a tree of lights, and with a pre-defined % error limit each pixel only evaluates the number of lights (or clusters) necessary to get an illumination value within the given ~2% error. So instead of tens of thousands of lights calculated per pixel, only dozens are. @ 37min is a good chart of the concept, and at 38:30 how the cheating works--very intelligent and sneaky! What probably caught my attention the most was how this was a unified technique that worked for hard and soft shadows, HDR, direct and indirect illumination... and very high quality AA (which was very cheap).
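To make the light-cut idea a bit more concrete, here is a rough Python sketch of how I understood the tree-of-lights and the per-pixel cut refinement. To be clear, this is my own toy version and not her actual algorithm--the real error bounds account for materials, geometry and visibility, while I just use a crude distance-based bound and a naive tree build to show the flavor:

```
# Toy sketch of the "light cut" idea -- my own simplification, not the real thing.
import math
import random

class Light:
    def __init__(self, pos, intensity):
        self.pos = pos              # (x, y, z)
        self.intensity = intensity  # scalar intensity for simplicity

class Cluster:
    """Node in the light tree: a single light, or two child clusters."""
    def __init__(self, rep, total_intensity, bbox, left=None, right=None):
        self.rep = rep                          # representative light
        self.total_intensity = total_intensity  # summed intensity of members
        self.bbox = bbox                        # (min_xyz, max_xyz) of members
        self.left, self.right = left, right

def dist(p, q):
    return math.sqrt(sum((p[i] - q[i]) ** 2 for i in range(3)))

def build_light_tree(lights):
    """Naive bottom-up pairing of nearby clusters (the real build is smarter)."""
    nodes = [Cluster(l, l.intensity, (l.pos, l.pos)) for l in lights]
    while len(nodes) > 1:
        a = nodes.pop()
        b = min(nodes, key=lambda c: dist(a.rep.pos, c.rep.pos))
        nodes.remove(b)
        lo = tuple(min(a.bbox[0][i], b.bbox[0][i]) for i in range(3))
        hi = tuple(max(a.bbox[1][i], b.bbox[1][i]) for i in range(3))
        rep = a.rep if a.total_intensity >= b.total_intensity else b.rep
        nodes.append(Cluster(rep, a.total_intensity + b.total_intensity,
                             (lo, hi), a, b))
    return nodes[0]

def contribution(cluster, point):
    """The 'cheat': treat the whole cluster as its representative light."""
    d = max(dist(cluster.rep.pos, point), 1e-3)
    return cluster.total_intensity / (d * d)

def error_bound(cluster, point):
    """Crude upper bound on the cluster's error, from the closest it could be."""
    if cluster.left is None:
        return 0.0  # a single light has no clustering error
    lo, hi = cluster.bbox
    center = tuple((lo[i] + hi[i]) / 2 for i in range(3))
    closest = max(dist(center, point) - dist(lo, hi) / 2, 1e-3)
    return cluster.total_intensity / (closest * closest)

def shade(root, point, rel_error=0.02):
    """Refine the cut until every cluster is within ~2% of the total estimate."""
    cut = [root]
    estimate = contribution(root, point)
    while True:
        worst = max(cut, key=lambda c: error_bound(c, point))
        if error_bound(worst, point) <= rel_error * estimate:
            break  # every cluster on the cut is 'good enough'
        cut.remove(worst)
        cut += [worst.left, worst.right]
        estimate = sum(contribution(c, point) for c in cut)
    return estimate, len(cut)  # len(cut) = lights actually evaluated

if __name__ == "__main__":
    random.seed(1)
    lights = [Light((random.uniform(-10, 10), random.uniform(0, 10),
                     random.uniform(-10, 10)), random.uniform(0.1, 1.0))
              for _ in range(1000)]
    root = build_light_tree(lights)
    value, evaluated = shade(root, (0.0, 0.0, 0.0))
    print(f"{len(lights)} lights in the scene, only {evaluated} cluster evaluations")
```

The key point is that the number of clusters on the cut grows very slowly as you add more lights, which is presumably why the cost curve flattens out the way she showed.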
Funny how all these problems, which have dedicated hardware on GPUs to resolve them, can all be resolved through this unified (!) technique. This reminds me of a Kirk interview from last year where he said that AA was a problem that would eventually be resolved in the rendering engine and that it was not worth dedicating a lot of hardware resources to it. I don't know the time scale he is on, but it does look like he may be right in 10 years or so.
Like most of you pointed out, she highlighted the memory problem, and that Moore's law alone won't solve our problems in this regard. Her answer was using light cuts through the cluster tree, which she suggests would substantially cut down on the required memory accesses per pixel.
@ 25min she begins talking about textures ("feature-based texturing", FBT) and I thought that was really cool. I am not a rocket scientist, but I had wondered a long time ago why a GPU could not have features to interpolate detail in textures, even if it meant making crude vectors that maintain the integrity of edges on closeup. She mentioned their technique is fairly small, and the results, at least on her samples, were excellent. She also seemed to emphasize that the FBT textures were 16x16 and looked much better than their 64x64 bilinear counterparts. She mentioned talking to someone at EA about the texturing.
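To illustrate the general idea (again, my own toy simplification, not the actual FBT representation): store an explicit edge alongside a small texture and refuse to blend texel values across it, so the edge stays razor sharp no matter how far you magnify, whereas plain bilinear filtering smears it:

```
# Toy illustration of edge-aware texture filtering -- not the real FBT scheme.
import math

SIZE = 16  # a small 16x16 texture, like the FBT examples she compared

def edge_side(u, v):
    """The stored 'feature': an analytic line through texture space.
    Returns +1 or -1 depending on which side of the line (u, v) falls."""
    return 1 if (v - 0.3 - 0.5 * u) >= 0 else -1

def make_texture():
    """Dark on one side of the edge, bright on the other."""
    tex = [[0.0] * SIZE for _ in range(SIZE)]
    for y in range(SIZE):
        for x in range(SIZE):
            u, v = (x + 0.5) / SIZE, (y + 0.5) / SIZE
            tex[y][x] = 1.0 if edge_side(u, v) > 0 else 0.1
    return tex

def bilinear(tex, u, v):
    """Plain bilinear filtering -- blurs right across the edge."""
    x, y = u * SIZE - 0.5, v * SIZE - 0.5
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    fx, fy = x - x0, y - y0
    def t(xx, yy):
        return tex[max(0, min(SIZE - 1, yy))][max(0, min(SIZE - 1, xx))]
    top = t(x0, y0) * (1 - fx) + t(x0 + 1, y0) * fx
    bot = t(x0, y0 + 1) * (1 - fx) + t(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

def feature_aware(tex, u, v):
    """Only blend texels on the same side of the stored edge as the sample,
    so the boundary stays sharp at any magnification."""
    side = edge_side(u, v)
    x, y = u * SIZE - 0.5, v * SIZE - 0.5
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    total, weight = 0.0, 0.0
    for yy in (y0, y0 + 1):
        for xx in (x0, x0 + 1):
            cx = max(0, min(SIZE - 1, xx))
            cy = max(0, min(SIZE - 1, yy))
            tu, tv = (cx + 0.5) / SIZE, (cy + 0.5) / SIZE
            if edge_side(tu, tv) != side:
                continue  # don't let the other region bleed across the edge
            w = (1 - abs(x - xx)) * (1 - abs(y - yy))
            total += w * tex[cy][cx]
            weight += w
    return total / weight if weight > 0 else bilinear(tex, u, v)

if __name__ == "__main__":
    tex = make_texture()
    # zoom way in on the edge: bilinear smears it, the feature-aware filter doesn't
    for v in [0.295, 0.299, 0.301, 0.305]:
        print(f"v={v}: bilinear={bilinear(tex, 0.0, v):.2f}  "
              f"feature-aware={feature_aware(tex, 0.0, v):.2f}")
```

On this toy example the bilinear result blends the two regions near the edge, while the feature-aware sample snaps to the correct side no matter how closely you sample around the boundary.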
Again Jawed, thanks! I wonder how long it will take until we see something like this in realtime.
She mentioned this was on a single core of a 2.8GHz Pentium D and that it could be scaled to more cores. It will be exciting when her research is moved to a GPU, since GPUs are much more parallel, though they are not as robust and lack the large caches (she seemed to emphasize approaches that leverage CPUs and GPUs together, and down the road how we could see CPU-GPU hybrids). I guess outside of the cache, memory is going to be a big issue: reducing latencies and getting good memory management.
Hopefully I am wrong and we will be seeing this stuff in realtime before we hit the dead end of silicon. If not, it could be a painfully long and expensive path until we move over to new hardware platforms.