Can real-time global illumination be accomplished with 100 TFLOPS of processing power?

Capeta

I was wondering what can be done in CGI with this much power. Can we render the best movie CGI effects + GI in real time with a 100 TFLOPS computer? What resolution and frame rate can we expect? This hypothetical computer has TBs of RAM and TBs of bandwidth.
 
100 TFLOPS? Probably not. A frame of full professional movie-level CGI takes at least a day to render on one of today's modern PCs, and a modern PC's CPU delivers only about 5 GFLOPS.
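If you take those two figures at face value and assume the work scales linearly with FLOPS (a big assumption), a quick back-of-envelope sketch:

Code:
# Back-of-envelope: one day of rendering at ~5 GFLOPS, rescaled to
# a 100 TFLOPS machine, assuming perfectly linear scaling.
work_per_frame = 5e9 * 86400           # ~4.3e14 FLOPs for one movie frame
machine_flops = 100e12                 # the hypothetical 100 TFLOPS computer
print(work_per_frame / machine_flops)  # ~4.3 seconds per frame

Even under that optimistic assumption you get roughly four seconds per frame, nowhere near the 24+ fps needed for real time.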
 
The five-year estimate is not for the type of global illumination we see in offline rendering, but for something more optimized for real-time rendering, which is definitely a possibility.
 
You don't need nearly that amount of processing power for convincing global illumination. There are real-time Cornell Box demos with global illumination that use only ~1 GFLOP/s.

I think that the biggest problem is the software, not the hardware. With a brute-force approach even 100 TFLOP/s is on the low side for a fairly complex scene. But with good heuristics you could do amazing things with just 1 TFLOP/s or less.
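To put those figures in perspective, here's a rough per-pixel budget; the resolution and frame rate are just illustrative assumptions:

Code:
# Rough FLOP budget per pixel per frame at a given resolution and
# refresh rate, comparing the brute-force and heuristic regimes.
def flops_per_pixel(total_flops, width, height, fps):
    return total_flops / (width * height * fps)

print(flops_per_pixel(100e12, 1920, 1080, 60))  # ~800,000 FLOPs/pixel
print(flops_per_pixel(1e12,   1920, 1080, 60))  # ~8,000 FLOPs/pixel

Roughly 800k FLOPs per pixel at 100 TFLOP/s versus ~8k at 1 TFLOP/s; which budget you actually need is exactly the brute-force versus heuristics question.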
 
You don't need nearly that amount of processing power for convincing global illumination. There are real-time Cornell Box demos with global illumination that use only ~1 GFLOP/s.

I think that the biggest problem is the software, not the hardware. With a brute-force approach even 100 TFLOP/s is on the low side for a fairly complex scene. But with good heuristics you could do amazing things with just 1 TFLOP/s or less.

I agree, and even the best CGI movies can only ever be convincing, never a pure 100%. The problem is neither hardware nor software; the real problem is our EYES, they are just too good! :D
 
I think given enough rendering power and enough man-hours of coding you could produce a scene that is 100% indistinguishable from real life.
 
...the real problem is our EYES, they are just too good! :D
Actually they are not that good. If I took a picture and digitally made some spots a bit darker and some lighter, without introducing sharp edges, people would not recognise it as an edited picture. They might spot the differences when the two were placed side by side, but couldn't tell the real picture from the edited one.

So as long as there are no high frequencies involved there's a big tolerance for relative errors in illumination. It's the same reason why a 'blob' has been acceptable as a character's shadow in real-time games for about a decade. Currently direct lighting with blurred shadows is the preferred approach, and we're not that far from indirect lighting with one bounce. Beyond that the extra realism quickly becomes unnoticeable. I doubt CG movies really go for the highest realism (certainly the kids watching them wouldn't notice the difference).
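For what "indirect lighting with one bounce" boils down to in practice, here is a minimal Monte Carlo sketch; the toy scene and its incoming_radiance function are invented purely for illustration:

Code:
import math, random

# One-bounce indirect lighting, Monte Carlo style: average the light
# arriving at a point over directions sampled from its hemisphere.
# Toy scene (an assumption for illustration): a point on a gray floor
# next to a bright red wall toward +x, lit by a white sky.

def cosine_sample_hemisphere():
    # Cosine-weighted direction around the +z axis (z = surface normal).
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def incoming_radiance(direction):
    # What the sampled ray "sees": red wall toward +x, white sky otherwise.
    if direction[0] > 0.7:
        return (1.0, 0.2, 0.2)   # light bounced off the red wall
    return (1.0, 1.0, 1.0)       # direct sky light

def one_bounce(samples=10000):
    albedo = 0.5                 # gray floor
    total = [0.0, 0.0, 0.0]
    for _ in range(samples):
        L = incoming_radiance(cosine_sample_hemisphere())
        for i in range(3):
            total[i] += L[i]
    # Cosine weighting is baked into the sampler, so a plain average works.
    return [albedo * t / samples for t in total]

print(one_bounce())  # the floor picks up a slight red tint from the wall

Even a crude estimator like this produces smooth, low-frequency illumination once averaged over many samples, which is exactly the regime where our eyes are forgiving.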

The main reason why a frame takes so long to render is because they use a fairly brute-force approach to get mediocre realism. There's not much reason for them to try harder. Reyes also takes a big part of the rendering time, while classic rasterization could produce nearly the same result today in a fraction of the time.

So we're definitely only 5 years away from rendering a Pixar movie on the PC in real-time. The only question is who will take the effort. And do we really need this graphical realism to make games more fun...
 
That clip is pretty impressive, but it's lacking in a couple of areas (the water doesn't look right; there's what I presume is LOD popping going on in the undergrowth; also, are the trees actually moving?).

Seems to me, though, that while it's clearly important to get the lighting etc. correct, the next big challenge in interactive realism will be getting the whole environment to behave as we're used to. It's not so hard to make a single frame look photo-realistic, but once things start moving it's a dead giveaway to anybody who has actually been outdoors.

So whilst debates about rasterisation vs. ray tracing, GI versus whatever, are fine in their own right, none of them are total solutions to the actual problem of fooling the brain into believing it exists in a real (but artificially generated) world. Those debates centre around how you render, rather than what it is that you're rendering and how it behaves and interacts with itself and everything else. I think overall that could be a much harder problem, especially in a free-form real-time environment where anything goes and nothing is pre-scripted or hand-animated.

So I wonder whether cresting the ridge of real-time global illumination is only going to reveal another ridge 10,000 m further beyond. In all honesty, I'm extremely sceptical about the idea of being able to interact with an artificial environment and be totally fooled within the next decade. I know this is slightly tangential to the thread topic, but I think it's important to view this holistically rather than get hung up on certain families of algorithmic solutions.
 
I find the idea of this thread really interesting.

In the days of 3D Studio Release 4 (a 1995 DOS-based renderer), it was interesting to watch hardware slowly take on more and more features resembling those offered by 3DS R4. By the time the original GeForce 256 came out in late 1999, it roughly matched the 3DS render feature set, around five years down the line.

What's also interesting, and a point made above, is how amazingly inefficiently the software renderer would do things, and how identical results could be achieved so much faster with dedicated hardware.

When you compare 100 TF of offline rendering by CPUs you must consider the following (a quick sketch of point 1 follows this list):

1) How low the efficiency of spreading computation across thousands of CPUs can be, rather than one or two GPU cores.
2) Any given CPU may be more general-purpose than is required for its rendering job, meaning a poor FLOP/transistor ratio.
3) Software solutions will not be very highly optimised to run quickly on any given CPU.
4) Algorithms used for rendering may be mathematically unoptimised and unnecessarily exhaustive for the requirements of real-time applications.

5) And let's not forget another VERY important point: by the time we have GPUs calculating 100 TFLOPS, in around 8-10 years I think, there will be many new breakthroughs in maths, programming and software development. This will yield much higher-performing algorithms for creating today's offline effects; who knows how many paradigm shifts we may see in real-time rendering techniques 10 years from now.
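On point 1, here is a hedged sketch of why spreading work across thousands of CPUs loses efficiency, using Amdahl's law; the parallel fractions are made-up illustrative values:

Code:
# Amdahl's law: speedup on n processors when only a fraction p of the
# work can be parallelized. Illustrates point 1 in the list above.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95% parallel, 1000 CPUs give only ~20x, not 1000x.
print(amdahl_speedup(0.95, 1000))   # ~19.6
print(amdahl_speedup(0.99, 1000))   # ~91

So raw aggregate FLOPS overstate what a farm of CPUs actually delivers compared to one or two tightly coupled GPU cores.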


So what I believe is that we will be doing many, many times more with 100 TF on a single GPU for a given frame of graphics than we could dream of doing with 100 TF of offline-style software rendering using today's techniques.


And when you consider that computing power has increased a million-fold in 30 years (roughly a doubling every 18 months, since 2^20 is about 10^6), and that we can expect the same again in another 30 years, almost anything may be possible for real-time rendering in the future.

BUT WE ARE NOT THERE YET... NOT EVEN CLOSE!!!
 
No, we aren't 5 years away; we would need a ridiculous increase in poly counts per frame (at the very least millions of polys per frame) before we even get to the lighting/shadowing/animation area.
 
No, we aren't 5 years away; we would need a ridiculous increase in poly counts per frame (at the very least millions of polys per frame) before we even get to the lighting/shadowing/animation area.

I knew someone would take that first paragraph out of context!!!

Don't you think I realise we are more than 5 years away? I mean, I hardly expect the PS4 to be rendering Hollywood movies...

I do think 20-25 years is more realistic. But let's not forget that the bar keeps moving for film as well as for real time, so it's never-ending really. Obvious, really. But there you go.
 
When I say movie-like, I mean where you can't tell the difference between a video game and a movie you record while walking around outside.
 
When I say movie-like, I mean where you can't tell the difference between a video game and a movie you record while walking around outside.

Probably gonna be a very old man before we get there; hard to say, though. Very speculative, who knows... but we are going to have to deal with the uncanny valley for the next two decades.

But I'll tell you what also interests me, a question I've been asking for years:

How long until dedicated graphics hardware (GPUs) takes over the function of CPU render farms for offline rendering? I mean, it's not a giant leap of imagination to think of farms of GPUs rigged to banks of memory. I still think even that level of flexibility is a decade off, but Nvidia's Gelato is a fledgling effort toward that idea.
 
Well, I for one can certainly see GPUs being adapted to accelerate ray tracing; they are parallel enough to compute it well but lack on-chip cache space and such, IIRC. Dunno about radiosity calcs, though.
 
Well, I for one can certainly see GPUs being adapted to accelerate ray tracing; they are parallel enough to compute it well but lack on-chip cache space and such, IIRC. Dunno about radiosity calcs, though.

Radiosity can be computed in several different ways; the most efficient ones can only be partially parallelized (e.g. what I use in Realtime Radiosity Bugs). But with 10x more FLOPS in a GPU than in a CPU, it becomes reasonable to run less efficient but more parallelizable code on the GPU.
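For what it's worth, here is a toy sketch of that trade-off: a Jacobi-style radiosity iteration updates every patch independently (embarrassingly parallel), while progressive refinement converges in fewer steps but shoots from one patch at a time. The 3-patch scene and form factors below are made up for illustration:

Code:
# Jacobi-style radiosity: B = E + rho * (F applied to B), iterated until
# it settles. Every patch update reads only the previous iteration's
# values, so all patches can be computed in parallel (GPU-friendly).
rho = [0.5, 0.5, 0.5]          # patch reflectances (made up)
E   = [1.0, 0.0, 0.0]          # patch 0 is an emitter
F   = [[0.0, 0.3, 0.3],        # form factors F[i][j] (made up, rows sum < 1)
       [0.3, 0.0, 0.3],
       [0.3, 0.3, 0.0]]

B = E[:]                       # initial guess: emitted light only
for _ in range(50):
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))
         for i in range(3)]    # each i is independent -> parallelizable

print(B)                       # steady-state patch radiosities

Jacobi needs more iterations than progressive refinement, but every patch update within an iteration is independent, which is exactly the kind of less efficient but more parallelizable code that pays off on a GPU.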
 
I'm wondering at what point the man-hours required to do the art and programming for a game, given some huge polygon count, become prohibitively expensive, such that any increase in processing power becomes irrelevant.

Keep in mind, the movie people do quite a bit of by-hand optimization and artwork; programming this type of thing in would be so complicated and arduous I don't even want to think about it.

I suspect we'll hit that plateau long before we get to the promised land of real-life graphical fidelity.
 
You don't need nearly that amount of processing power for convincing global illumination. There are real-time Cornell Box demos with global illumination that use only ~1 GFLOP/s.

Err, too bad it's not that exciting to look at... ;)
 