What's left to do in graphics?

Arbotron

Newcomer
Hi, all --

I'm going to be a university student next year and, for the longest time, I've been fascinated by the amazing field of computer graphics. For the past few months, I have been ardently studying the literature, writing programs, and enjoying learning the field. I want to be a 3D graphics/game programmer.

However, the very realism present in high-end graphics also depresses me. I know real-time graphics isn't photo-realistic, but isn't it simply a matter of computing power and time before we have true photorealism? Aren't the algorithms for photorealism already there? Once that occurs, won't graphics programmers be out of a job?

So my question is: is it worth devoting my entire undergraduate (and eventually professional) career to the study of real-time computer graphics? Or will it be that by the time I catch up to the current state of affairs, I will be too late and everything will have been solved?

Also, is there still any groundbreaking research happening on the academic front for non-real-time graphics? What problems are left to be solved there?

Any input would be greatly appreciated. If my post sounds ignorant and it turns out the idea that computer graphics will be solved within a few decades is pure absurdity, then please feel free to flame me -- at least I will know that I can hope to make a contribution, however small, to the field.

-Arbotron
 
Good question, but I think it is not merely a matter of throwing hardware at the problem. Solving GI in real time has proved nigh impossible so far, let alone the rest of the stuff.
 
No one can predict how long it will take for photo-realistic graphics to become easy, so I wouldn't worry about it. Do what you love and adapt as time goes on. Real-time rendering has a number of interesting years left, as does offline rendering.
 
To stimulate discussion, I emailed Tim Sweeney, Epic Games' CEO, and got an interesting response:

Ignoring game characters, I think we have about 15 years before we can achieve movie-quality visual realism in dynamic environments. Those rendering issues can be resolved by a combination of known algorithms and brute-force techniques. Your optical nerve only carries the equivalent of 2M pixels, so saturating them with visual detail is just a question of computing power.

Still, we're infinitely far away from "done" in all other areas. Any problem that involves simulating human or intelligent characters -- animation, conversation, speech -- is unsolved. If we had infinite computing power available today, we wouldn't be much better off, because we entirely lack the algorithms to deal with these problems. And tools will increase in importance, since there are endless opportunities for inventing new approaches to make artists, programmers, and designers more productive.
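
For a rough sense of scale on that "2M pixels is just a question of computing power" point, here's a back-of-envelope estimate. Every per-frame number below is my own illustrative assumption, not Sweeney's:

# Brute-force shading budget for ~2M pixels of "saturated" visual detail.
# All the per-pixel figures here are assumptions picked for illustration.
pixels = 2_000_000          # Sweeney's ~2M pixel figure
hz = 60                     # display refresh rate (assumed)
samples_per_pixel = 64      # film-style supersampling (assumed)
flops_per_sample = 10_000   # shading + visibility work per sample (assumed)

total_flops = pixels * hz * samples_per_pixel * flops_per_sample
print(f"{total_flops / 1e12:.0f} TFLOPS")   # ~77 TFLOPS under these assumptions

Change any of the assumed factors by 10x and the answer moves by 10x -- which is exactly why it reads as a pure computing-power problem.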

15 years, eh? Looks like I should probably start looking into other fields of game programming/computer science in addition to CG.
 
I'd just point out that when talking about "movie quality" you're always talking about a moving target. Movies won't look the same 15 years from now as they do today. Basically, what happens with CG in the movies is that you take the best algorithms you can think of and throw enormous amounts of computing power and hand-crafting at the problem. Also, the great majority of movies still use traditional methods for something like 80% of the film. You need to ask yourself what exactly is done with CG when you're watching a movie.

Take the latest Star Trek, for example... All the actors are still real actors, all "internal" shots are still done with cameras shooting on sets; the external shots are of course CG, and they're a hell of a lot better than they were 7 years ago in Nemesis, for example.
On the other hand, if you look for movies that are completely CG, everyone is still basically making cartoons such as Ice Age or The Polar Express.
The only movie I can think of that even tries to look real is Beowulf (http://www.collider.com/uploads/imageGallery/Beowulf/beowulf_movie_image__1_.jpg). Of course it looks a hell of a lot better than Oblivion does, but we're not done by a long shot.

P.S.: It's also not the first time that "in 15 years we'll be done" has come up.
 
Aren't the algorithms for photorealism already there? Once that occurs, won't graphics programmers be out of a job?

Researchers usually work on general solutions to general problems. Programmers usually work on hacks better suited to the real world. So having general algorithms for photorealism doesn't put programmers out of a job; it doesn't even make their work less creative.

So my question is: is it worth devoting my entire undergraduate (and eventually professional) career to the study of real-time computer graphics?

http://www.phdcomics.com/comics/archive/phd050508s.gif :)
 
I think the question is whether there's a limit to all of this. More like: would we ever reach a state of perfection where we've maxed out all the graphical eye candy (and hit the end of ever-increasing system requirements)?

Or to put it more simply: if we hit a roadblock in our ability to perfect these things, or reach the point where we can't even distinguish any greater quality of detail, then what happens?

I thought the same thing a few years ago about Counter-Strike: Source, and then about Crysis and Crysis Warhead a few years later.

What about 50 or maybe even 100 years from now -- will there be gaming applications that demand a constant increase in tech specs?

Well, who knows, but in these cases I guess the saying holds true:
Build it and they will come. ;)
 
Well, about that 15-year quote: physicists (the very best of the time) said in the 19th century that all that remained was more exact calculation. We all know how that turned out. :)
 
In my opinion, we're left with an art problem rather than a technical one.

Realistic looking assets are time consuming to generate.
 
Photorealism is nice and all, but it's only going to be good enough for a couple more years... then we'll want stereoscopic realism with an adaptive focal plane matched in real time to our eyes, realistically mixed with real-life imagery.
 
The biggest problem to overcome is:

Physics-AI-Animation

That triangle. Each one affects the others, in a loop. It's a huge challenge and has serious implications for what we can actually do when we make a game.
 
In my opinion, we're left with an art problem rather than a technical one.

Realistic looking assets are time consuming to generate.

I agree, though I wouldn't say we are at all close to 'done' when it comes to graphics. Even when I'm being mesmerized by Crysis I can't help but notice we have so much further to go!

Also, I'm waiting for animation to catch up with rendering ;)
 
Hmm... well, I notice he seems to have said 15 years until we're on an even keel with what can be done in movie CG, not necessarily that we'll "be done."

I mean, look at a problem like shadow filtering, which is something that even the movies have not really solved in the generic sense. If in 15 years we'll still have to be throwing in a dozen different fakes, cheats, and custom approaches for each individual case, then you still have some major issues to solve.
 
I mean, look at a problem like shadow filtering, which is something that even the movies have not really solved in the generic sense. If in 15 years we'll still have to be throwing in a dozen different fakes, cheats, and custom approaches for each individual case, then you still have some major issues to solve.
That's a good example, though, because it's something we've known how to solve/anti-alias/filter "correctly" for a long time... if you're willing to throw an order of magnitude more computing power at it, your shadows will look great! That said, there are almost always better uses for those transistors, so I don't foresee the innovation in graphics coming to an end any time soon. There will continue to be competition for the best quality/FLOP or similar between various algorithms for the foreseeable future (even beyond 15 years).
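
To make the brute-force option concrete, here's a minimal sketch of plain percentage-closer filtering (PCF) over a depth map, written in Python rather than shader code for readability. The array layout, kernel size, and bias value are all illustrative assumptions:

import numpy as np

def pcf_shadow(shadow_map, u, v, receiver_depth, kernel=7, bias=0.002):
    # Average a kernel x kernel block of binary depth comparisons.
    # Bigger kernels give softer, less aliased shadows, at a cost that
    # grows with kernel**2 -- the "order of magnitude more power" knob.
    h, w = shadow_map.shape
    half = kernel // 2
    lit = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            x = min(max(u + dx, 0), w - 1)   # clamp to map edges
            y = min(max(v + dy, 0), h - 1)
            # 1.0 if this sample is lit, 0.0 if occluded
            lit += float(receiver_depth - bias <= shadow_map[y, x])
    return lit / (kernel * kernel)  # fraction of samples lit, in [0, 1]

# A flat occluder at depth 0.5 covering half of a toy 8x8 shadow map:
sm = np.full((8, 8), 1.0)
sm[:, :4] = 0.5
print(pcf_shadow(sm, 4, 4, receiver_depth=0.9))  # partially shadowed, ~0.57

Going from a 2x2 to a 7x7 kernel is roughly a 12x cost increase for this one effect, which is exactly why those transistors usually get spent elsewhere.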
 
I think software and hardware in general have a way of 'reinventing' themselves.
Even though we may have come very far on current high-end gaming PCs, don't forget there's also other hardware around.
For example, smartphones and portable game consoles have only just started to get 3D, let alone hardware acceleration.

I think quite a few people here have had a 'deja-vu' feeling about that. E.g., when you write a 3D renderer for a Game Boy Advance, you pretty much go about it the same way you did on early 386/486 systems or Amiga AGA machines. The hardware capabilities and performance are quite similar.
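
(To make "the same way" concrete: neither platform gives you a fast FPU, so you end up doing things like 16.16 fixed-point math for your transforms. A toy sketch, with all the names my own, purely for illustration:)

FX_SHIFT = 16              # 16.16 fixed point: 16 integer, 16 fractional bits
FX_ONE = 1 << FX_SHIFT

def to_fx(x: float) -> int:
    return int(x * FX_ONE)

def fx_mul(a: int, b: int) -> int:
    # The product of two 16.16 values carries 32 fractional bits,
    # so shift back down; integer ops replace slow or absent float muls.
    return (a * b) >> FX_SHIFT

print(fx_mul(to_fx(0.5), to_fx(3.0)) / FX_ONE)  # -> 1.5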

In that sense, I think even if we ever reach a 'solved' state for PC game graphics, there will still be plenty to do for other platforms.
Aside from that, I think that even on PC you might find yourself 'reinventing' 3D renderers when new CPUs and/or GPUs arrive.
 
What's left to do in graphics?
from the HW side...
  • EDRAM
  • Z-RAM
  • tile-based architecture for desktops
  • stochastic sampling
  • fast14
  • non-AFR dual-GPU concept
  • RDRAM-compatible memory controller
  • fast high-quality GPU-accelerated video encoding
Did I forget anything? :LOL:
 
So. Photorealism. To be achieved using ray-tracing, or are we going to continue using a massive array of hacks in rasterisation to try and fudge the same effects?
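
(For concreteness, the ray-tracing side of that question bottoms out in visibility queries like the toy ray-sphere test below, repeated billions of times per frame. The scene and numbers are made up for illustration:)

import math

def hit_sphere(origin, direction, center, radius):
    # Solve |o + t*d - c|^2 = r^2 for t; return the nearest positive
    # hit distance along the ray, or None for a miss.
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# One primary ray: eye at the origin, looking down -z at a unit sphere.
print(hit_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))  # -> 4.0

Rasterisation gets the same picture from the other direction: project triangles to the screen, then approximate everything else (shadows, reflections, GI) with special-purpose passes.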
 