What's left to do in graphics?

So. Photorealism. To be achieved using ray-tracing, or are we going to continue using a massive array of hacks in rasterisation to try and fudge the same effects?

Pixar RenderMan seems in favour of the latter, so far.
Use REYES/shadowmaps/envmapping/etc for most stuff, raytracing only where it matters.
 

Agreed, a hybrid approach seems to offer the best outcome from an IQ/performance standpoint, but it will bring a whole new slew of problems.
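To make the hybrid idea concrete, here is a minimal per-pixel sketch (all names are hypothetical, nothing to do with RenderMan's actual pipeline): rasterise everything and fall back to cheap environment-map lookups, then spend rays only on the pixels where the cheap approximation visibly breaks down.

```python
"""Toy sketch of a hybrid shading decision, under the assumptions above."""
from dataclasses import dataclass


@dataclass
class GBufferSample:
    """What a rasteriser might leave behind for one pixel (hypothetical layout)."""
    albedo: tuple          # surface colour from the raster pass
    reflectivity: float    # 0 = diffuse, 1 = perfect mirror
    has_refraction: bool   # e.g. glass or water


def trace_reflection(sample: GBufferSample) -> tuple:
    """Stub standing in for the expensive ray-traced path."""
    return (0.9, 0.9, 1.0)


def env_map_lookup(sample: GBufferSample) -> tuple:
    """Stub standing in for the cheap environment-map approximation."""
    return (0.5, 0.5, 0.6)


def shade(sample: GBufferSample, rays_left_in_budget: bool) -> tuple:
    # The cheap result is fine for mostly diffuse surfaces; rays are reserved
    # for mirrors, glass, and anything else the approximation gets visibly wrong.
    needs_rays = sample.reflectivity > 0.5 or sample.has_refraction
    if needs_rays and rays_left_in_budget:
        return trace_reflection(sample)
    return env_map_lookup(sample)


if __name__ == "__main__":
    wall = GBufferSample(albedo=(0.7, 0.6, 0.5), reflectivity=0.05, has_refraction=False)
    mirror = GBufferSample(albedo=(1.0, 1.0, 1.0), reflectivity=0.95, has_refraction=False)
    print(shade(wall, rays_left_in_budget=True))    # cheap path
    print(shade(mirror, rays_left_in_budget=True))  # ray-traced path
```

A lot of the "new slew of problems" lives in that needs_rays test and the ray budget: deciding per pixel, consistently from frame to frame, what deserves the expensive path.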
 
So. Photorealism. To be achieved using ray-tracing, or are we going to continue using a massive array of hacks in rasterisation to try and fudge the same effects?
They're not hacks. Both algorithms can produce the same results with varying tradeoffs. I wish people would stop spreading these misconceptions... this is more of a question of one data structure vs. another than some silly crusade. Maybe it's time for yet another B3D article on the topic ;)
 
http://www.gamasutra.com/php-bin/news_index.php?story=23742

Sweeney thinks we're a factor of 1,000 away from "perfect graphical realism", which he thinks might be just 10-15 years off.

Not sure we have enough process shrinks left for that?

Anyway, this is pretty much the interesting part


Looking ahead, how long do you think it will be before real-time computer graphics are 100% realistic like a movie?

There are two parts to the graphical problem. Number one, there are all those problems that are just a matter of brute force computing power: so completely realistic lighting with real-time radiosity, perfectly anti-aliased graphics, and movie-quality static scenes and motion.

We're only about a factor of a thousand off from achieving all that in real-time without sacrifices. So we'll certainly see that happen in our lifetimes; it's just a result of Moore's Law. Probably 10-15 years for that stuff, which isn't far at all. Which is scary -- we'll be able to saturate our visual systems with realistic graphics at that point.

But there's another problem in graphics that's not as easily solvable. It's anything that requires simulating human intelligence or behavior: animation, character movement, interaction with characters, and conversations with characters. They're really cheesy in games now.

A state-of-the-art game like the latest Half-Life expansion from Valve, Gears of War, or Bungie's stuff is extraordinarily unrealistic compared to a human actor in a human movie, just because of the really fine nuances of human behavior.

We simulate character facial animation using tens of bones and facial controls, but in the body, you have thousands. It turns out we've evolved to recognize those things with extraordinary detail, so we're far short of being able to simulate that.

And unfortunately, all of that's not just a matter of computational power, because if we had infinitely fast computers now, we still wouldn't be able to solve that, because we just don't have the algorithms; we don't know how the brain works or how to simulate it.

I've thought something similar, though not in such clear terms, for a while. AI is one field that's clearly stagnant, for example.
 
A factor of 1,000 is roughly 10 doublings of VLSI transistor density.
Counting down from 45 nm: 32, 22, 16, 12, 8, 6, 4, 2.8, 2, 1.4.
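A quick sanity check of that arithmetic (back-of-the-envelope only, not a roadmap):

```python
import math

factor = 1000                    # Sweeney's "factor of 1,000"
doublings = math.log2(factor)    # ~9.97, i.e. roughly ten density doublings
print(f"doublings needed: {doublings:.1f}")

# Each density doubling is roughly a 1/sqrt(2) linear shrink of the node.
node_nm = 45.0
for _ in range(10):
    node_nm /= math.sqrt(2)
print(f"feature size after 10 shrinks: {node_nm:.1f} nm")  # ~1.4 nm

# At the classic ~2-year doubling cadence that is about 20 years; 10-15 years
# implies density (or effective throughput) doubling faster than that.
print(f"years at a 2-year cadence: {2 * doublings:.0f}")
```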

The roadmaps go down to 22 nm, and 16 nm has been talked about.
Things get pretty crazy when it comes to single digits; the space taken up by two separate atomic layers might not fit.

Perhaps Sweeney is counting on 3D integration to pick up the slack.

1,000 sounds like a fun number at which to call it quits, but I doubt the job will actually be done at that point.

And the question of how we're going to shuttle that much data around isn't covered by Moore's Law.
 
Good luck keeping track of 500,000 physical objects (and driving most of them: simulated humans, animals, machines, wind, sea waves, etc.), simulating fluid dynamics, or even finding the manpower to actually build the game assets, keep track of the combinatorial explosion of possibilities, and so on.

Those are other problems on top of the AI and rendering ones.

So perhaps it gets done with really advanced nanotech, on the same timeline as commercial nuclear fusion reactors. (Guess which game company and which long-awaited game title that could be about :D)
 
There's everything left to do. It's a field full of mathematical certainties and mathematical incorrectness, as well as ongoing research to find new solutions for performance and for realising creative visions. I wouldn't be surprised if we reach a saturation point in gaming, but the potential for improvement is unlimited and will extend to many other industries.

Another thing: the pace of graphical advancement, particularly for real-time games, tends to be slower than a lot of people predict, in my opinion, mainly because there are so many aspects to consider in realising the graphical complexity we dream of.
 
Yeah, but we still don't have it dynamically. Statically you can precompute so much data that you get photorealistic graphics...

But the biggest UNKNOWN is still artificial intelligence. Anyway, in the field of graphics, even IF rasterisation dies and ray tracing becomes the state of the art, there will still be a need for research: the optimisation trees (acceleration structures) aren't as ideal or as fast as we would like, the global illumination equation isn't completely solved yet, etc.
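For reference, the "global illumination equation" here is presumably the rendering equation (Kajiya, 1986); the open research is in solving it efficiently for non-trivial scenes, not in writing it down:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```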

And the biggest problem with computer games is still there: why do we have such dumb stories and crappy gameplay (a few really good games excepted)?
 
There are also a lot of cases where we'll do things in games which aren't really realistic, but which provide some indication to the viewer that the feature is there. Often, in an environment which has colored lights, we'll actually over-saturate the immediate area with that color (perhaps with additional bounce cards or something) just so you can see clearly that it has some effect. Unrealistic, but you see the effect, and that's what matters. Similarly, you'll often see lightning in the distance which coincides with the thunder, or distant explosions where you hear the sound at the same time that you see the blast... We all know that this is wrong, but if you handle it accurately in a game, it gives the impression that it's a bug or that the sound system is laggy.
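For scale, here is the back-of-the-envelope version of the "correct" behaviour being described (rough constants, nothing engine-specific). At typical in-game distances the bang should trail the flash by whole seconds, which is exactly the gap players read as a broken sound system:

```python
SPEED_OF_SOUND = 343.0   # m/s in air at ~20 C
SPEED_OF_LIGHT = 3.0e8   # m/s

for distance_m in (100.0, 500.0, 2000.0):        # distance to the explosion
    sound_delay = distance_m / SPEED_OF_SOUND     # what reality does
    light_delay = distance_m / SPEED_OF_LIGHT     # effectively instant
    print(f"{distance_m:6.0f} m: flash after {light_delay * 1e6:.1f} us, "
          f"bang {sound_delay:.2f} s later")
```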

The sad bit is how this often applies to AI, in that we sometimes have to deliberately make AI bots do dumb or otherwise wasteful things from time to time just to make them *appear* smart, or at least so that they look like they're actively doing something. It's so easy to make an AI that headshots you in a single attempt without fail, but that comes across as cheap. When a bot visibly wastes time changing behaviors or switching its choice of cover location, it comes across as more organic, even though it's not "smart" in the purest sense. Even beyond that, there are so many games which are largely simple variations of the same basic idea (i.e. see Spot run, kill Spot) that there's very little room left to bring something new to the table.
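A minimal sketch of what that deliberate imperfection can look like (all names and numbers are hypothetical, not any particular engine's AI): the bot holds fire for a human-ish reaction time, and its aim error decays as it "settles" on the target, so its misses read as a process rather than a dice roll.

```python
import random


def aim_error_deg(time_on_target: float, base_error: float = 6.0,
                  settle_rate: float = 1.5) -> float:
    """Angular error (degrees) that shrinks the longer the bot tracks the target."""
    spread = base_error / (1.0 + settle_rate * time_on_target)
    return random.uniform(-spread, spread)


def should_fire(time_on_target: float, reaction_time: float = 0.4) -> bool:
    """Hold fire until a human-ish reaction time has elapsed."""
    return time_on_target >= reaction_time


if __name__ == "__main__":
    random.seed(1)
    for t in (0.0, 0.2, 0.5, 1.0, 2.0):  # seconds the bot has tracked the player
        print(f"t={t:.1f}s  fire={should_fire(t)}  error={aim_error_deg(t):+.2f} deg")
```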

That said, it's not as though there aren't true technical challenges in AI; it's simply that there's always going to be something that needs to be corrupted for the sake of "game-ness." Group behaviors and group planning in particular are a big area of unsolved problems. To me, though, the biggest AI problem in games in general is simply memory and performance. As the scale and complexity of the environments get larger and more bots are active at once, the multi-million dollar question is: how do you make this work at n Hz with x amount of memory?
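One common (if partial) answer to the "n Hz with x amount of memory" question, sketched here with hypothetical names, is to time-slice the expensive thinking: every bot gets a cheap per-frame update, but only a fixed round-robin budget of them gets to re-plan on any given frame, so the costly work amortises across frames.

```python
class Bot:
    def __init__(self, name: str):
        self.name = name
        self.plan = "idle"

    def cheap_update(self) -> None:
        """Runs every frame: steering, animation triggers, and so on."""
        pass

    def replan(self) -> None:
        """Runs rarely: pathfinding, cover selection, group coordination."""
        self.plan = "new plan"


def tick(bots: list, frame: int, replans_per_frame: int = 2) -> None:
    for bot in bots:
        bot.cheap_update()
    # Only a fixed budget of bots re-plans each frame, spreading the cost out.
    n = len(bots)
    for i in range(replans_per_frame):
        bots[(frame * replans_per_frame + i) % n].replan()


if __name__ == "__main__":
    squad = [Bot(f"bot{i}") for i in range(10)]
    for frame in range(5):
        tick(squad, frame)
```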
 
Also, if you really want to get down to it, all global illumination algorithms (afaik) assume that light transport is instantaneous, whereas there should be a slight delay depending on the distance travelled, the wavelength, etc. Granted, that doesn't necessarily make much difference, but if you want the real simulation, there is quite a bit of work left....
 

Are you saying relativistic effects should be incorporated into these algorithms, for the sake of 'realism', i.e. if you want the "real simulation"?

If you repeated the same "simulation" for the same observer over a 24-hour period, for example, would the information sensed by that observer be the same throughout?

Isn't there always "quite a bit of work left to do"? When do you stop?
 
Never. We should obviously model every single photon as a vibrating string wrapped around a Calabi-Yau manifold. (j/k)
 