Epic Says This Is What Next-Gen Should Look Like

Once again, it's all bull****... when you have to resort to normal-mapped models, SSAO or whatever screen-space lighting solution becomes the norm, crude hair and cloth representations, and simplified shaders, you can't really talk about a close approximation, can you?
And that's before we even touch on image quality (no aliasing, no shadow-map bugs, no blurry textures) or scene complexity (dozens of characters in a jungle where every plant's leaves are dynamically simulated to react to wind and moving objects).

Realtime rendering still hasn't caught up to Toy Story 1 in many of the above-mentioned aspects, like scene complexity and image quality (from AA to motion blur).
There are basically three variables: available performance, the number of pixels to render, and the quality of those pixels. A current high-end PC and GPU combo still doesn't have enough performance in 1/30 of a second to match a render node spending hours on a single frame, so significant compromises have to be made.
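To make that trade-off concrete, here's a toy calculation (the throughput figure is a made-up assumption, not any specific GPU):

```python
# Toy illustration of the three-variable trade-off: a fixed per-frame compute
# budget gets split between pixel count and per-pixel quality.
gpu_flops = 2.0e12                  # assume a hypothetical ~2 TFLOP GPU
frame_budget = gpu_flops / 30.0     # operations available per frame at 30 fps

for width, height in [(1280, 720), (1920, 1080)]:
    ops_per_pixel = frame_budget / (width * height)
    print(f"{width}x{height}: ~{ops_per_pixel:,.0f} ops per pixel")
```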

So it's just empty boasting without any basis at all. As Shifty said, they're just using the current buzzword, but they don't even know what's going on at Weta (as no one really does, except a few of their engineers - I'm told that fewer than a dozen people there actually understand how it all works).
 

So you're saying there's lots more headroom in graphics, cool :p

For me, when I try to imagine something like "10x this generation" graphics (what we might assume next gen looks like), I see something pretty close to photorealistic.

Look at what these devs did this gen with games like Killzone or Crysis 2, and then imagine what they will do with 10X... it's pretty scary. I distinctly remember thinking, way back with Gameday NFL on PS1, "wow, that looks just like a real game on TV". Of course now we're way past that. But the NBA 2K games this gen already push REALLY close to "looks like a game on TV".

I sort of feel this is more suited to the next-gen thread; I thought that's where I was posting. But I see somebody posted AMD's quip in the next-gen Unreal thread... oh well, maybe it's the "what next gen should look like" thread.

Probably the best graphics I've seen yet are in Battlefield 3 on PC, and it looks pretty photorealistic. I would hope proper next-gen consoles will deliver at LEAST 4X that, especially once tapped.
 
Again, BF3 is nice, but neither the image quality nor the scene complexity could be considered to rival anything in offline CG, and it's far from 'photorealistic' either. There may be scenarios where the content is limited enough to more or less sell the illusion, but most of the time the game will still fail to reach that quality level. It's also related to asset production and storage, of course; since it's an interactive environment, you can't prioritize level of detail the way you can in a movie with a fixed camera (at least in most games).
Sure, KZ and UC and the others are all nice, but they never really transcended their limitations; at least, I've always measured them against the general standards of realtime games and not against what's possible in offline CG.

Keep in mind, I also consider BF3 to be an amazing looking game though!

As for there being lots of headroom left, it's still always going to be limited because of the realtime aspect. The average rendering time for an offline CG frame is about 60 minutes, as it has been for nearly two decades - so as the number of pixels has remained roughly the same, the VFX industry has spent all the extra performance on better looking pixels.
(Although ILM had some 200+ hour frames for TF3, in the scenes with the metallic tentacle monster, thanks mostly to raytracing, so that 60 minutes is really just an average...)

Games will always have only ~1/100,000th of that time to spend on a single pixel, now that we're at roughly the same resolution (~2000x1000 pixels). There's probably a little room to squeeze more performance out of a specialized console system compared to a render node with a general-purpose CPU, and you can cut a significant amount of the rendering time spent on disk I/O by reducing the general level of detail and optimizing streaming and such. But the capabilities of realtime consumer-level systems will always be significantly behind, and not just by two years.

You can also see from the above numbers why we still haven't seen image quality and complexity like TS1 in realtime: consoles and PCs still haven't become a hundred thousand times faster than those old SUN systems Pixar used for rendering in '95-'96.
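To put rough numbers on that gap (back-of-the-envelope, using the figures above):

```python
# Back-of-the-envelope check of the per-pixel gap, using the numbers above.
offline_frame_s = 60 * 60        # ~1 hour average per offline CG frame
realtime_frame_s = 1.0 / 30.0    # ~33 ms per game frame
pixels = 2000 * 1000             # roughly the same resolution in both cases

print(offline_frame_s / realtime_frame_s)           # ~108,000x more time per frame offline
print(offline_frame_s / pixels * 1e3, "ms/pixel")   # ~1.8 ms per pixel offline
print(realtime_frame_s / pixels * 1e9, "ns/pixel")  # ~17 ns per pixel in a game
```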
 
Some info about the "new Toy Story" of the moment... Avatar:
http://www.nvidia.com/object/wetadigital_avatar.html

Yeah, that's only a part of the pipeline though :)

They're basically using spherical harmonics to store the light contributions in the scene in 3D space. This is especially important in the night scenes, where every plant and creature is a light source thanks to the bioluminescence. Of course it also has to be dynamic, as the characters and leaves are all animated throughout the sequences, so the lighting has to keep changing, too.
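Just to illustrate the general idea (this is only my own minimal sketch, not anything from Weta's actual pipeline): bake low-order SH coefficients at probe points in space, then reconstruct the lighting for any surface normal at shade time.

```python
import numpy as np

# Minimal sketch of SH-based light storage - my guess at the general idea,
# not Weta's code. Bands 0-1 only (4 coefficients) to keep it short.
def sh_basis(n):
    x, y, z = n
    return np.array([
        0.282095,        # Y_0^0 (constant term)
        0.488603 * y,    # Y_1^-1
        0.488603 * z,    # Y_1^0
        0.488603 * x,    # Y_1^1
    ])

# Hypothetical probe: 4 SH coefficients per RGB channel, baked beforehand.
probe_coeffs = np.array([
    [0.8, 0.7, 0.9],     # ambient-like term
    [0.1, 0.2, 0.3],     # directional terms...
    [0.3, 0.3, 0.1],
    [0.0, 0.1, 0.2],
])

def shade(normal):
    # Reconstruct the approximate incoming light along 'normal' from the probe.
    return sh_basis(normal) @ probe_coeffs   # -> RGB

print(shade(np.array([0.0, 0.0, 1.0])))
```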

As far as I know, they use point clouds to represent the scene geometry - relatively sparsely spaced points, but it's enough for the generally soft lighting. Of course it's all derived from the actual scene geometry, and even though they use displacement and tessellation (thanks to PRMan), it's still a LOT of geometry... maybe that's the part they can speed up with GPU calculations, generating the point clouds?
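Again purely as an illustration of point-based lighting in general (the weighting and all the numbers are my own made-up stand-ins, not Weta's method):

```python
import numpy as np

# Rough sketch of a point-cloud ("surfel") lighting gather: each sparse point
# carries a position, an area and a baked color; a shading point sums the
# nearby contributions, which is good enough for soft, low-frequency light.
rng = np.random.default_rng(0)
N = 10_000
positions = rng.uniform(-10, 10, (N, 3))   # surfel positions (hypothetical scene)
areas = np.full(N, 0.05)                   # surfel areas
colors = rng.uniform(0, 1, (N, 3))         # baked radiance per surfel

def gather(p, n):
    d = positions - p                                 # vectors to each surfel
    r2 = np.einsum('ij,ij->i', d, d) + 1e-4           # squared distances
    cos_n = np.clip(d @ n / np.sqrt(r2), 0.0, 1.0)    # receiver orientation term
    w = areas * cos_n / r2                            # crude form factor
    return (w[:, None] * colors).sum(axis=0)          # accumulated RGB

print(gather(np.zeros(3), np.array([0.0, 0.0, 1.0])))
```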

Then, once they have the lighting baked, they push that precalculated data through the normal rendering pipe using PRMan, which is still REYES-based and has the best damn antialiasing and texture filtering I've ever seen in any renderer. It's probably full of customized shaders and other extensions, though... I don't know what kind of SSS they use nowadays, for example; I recall they raytraced into shadow maps back in 2002-2003 with LOTR, but that's quite probably obsolete by now.
 
I know this PR talk is nothing but BS, but it at least gives me hope that the next Xbox will be a good jump in performance and not a Wii-like bump as some feared.

Him mentioning AI is interesting. It too sounds like BS hyperbole, but it could indicate that AMD may be providing the CPU in the next Xbox as well.
 

Or the GPU is the CPU. ;)

Tommy McClain
 
Remember, I'm the guy saying the leap to next-gen graphics - far from slowing down or hitting diminishing returns - is going to be by far the biggest we've ever seen :D

I'm in agreement here. The current consoles were released on the wrong end of a major shift in the way GPUs were built. The next consoles, even if the GPUs are *only* on par with Cayman, will bring about a colossal leap in rendering quality. But obviously not Avatar-good, for the reasons Laa-Yosh has already given.
 
I doubt it. Devs tend to push the graphics to the max as that's what everyone notices. So I see things like AI quickly being squeezed out by all but a few devs.
 
Just random ignorant speculation here:

Could they be talking about a significant increase in offline computation, or even something akin to Onlive, where the console only does the latency-sensitive calculations while graphics quality scales with future advances in server-side hardware? Then we could see continual graphics advancement for consoles with fast broadband connections: you buy a new console, but you only get the best graphics goodies if you subscribe to an online service.

I believe Microsoft was exploring this route and maybe AMD is fully aware of what the Microsoft Xbox Next is all about?

Maybe the next Xbox is basic, power-efficient, cost-efficient hardware that gives a good level of advancement over what is possible with the current-generation consoles - designed not to significantly overshoot the needs of the multimedia-oriented customer by being too expensive, too loud or too large - while at the same time much better graphics are available for those who want them and are willing to pay for them. This way they can price-discriminate between those who want the basic experience and those who want the full experience, and charge accordingly. So while they don't increase the base price of game titles, they get to charge the core user significantly more if they choose to opt in.
 

Plus there are other reasons I think the next-gen leap will be so big:

- Longer time between gens naturally equals a bigger jump. We should be looking at at least 7-8 years, whereas in the past the cycle was usually 5 years.

- Relative lack of high-end PC development to "spoil" next-gen graphics before they get here. This is really the big one imo, and different from previous generations.
 
Cloud gaming hasn't yet proven to be the be-all and end-all solution. I personally have significant problems with it and would always prefer to have the hardware at hand, available to me offline, with content that I actually have in my home instead of on some company's servers. Not to mention the video image quality, the latency and so on.

So I sure hope MS doesn't plan to go that way :)
 

Would that be a problem with a well-implemented system? Microsoft has one of the largest content delivery networks in the world, and they're certainly in a good position to roll it out. So long as the servers are relatively close to you, they wouldn't necessarily add significant latency compared to what you already experience on a console, where ~100-150ms input latency is standard.
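As a rough sanity check (every figure here is an assumption on my part, not a measurement of any real service):

```python
# Rough latency budget: local console vs. the same game streamed from a
# nearby datacenter. All figures are assumptions, for illustration only.
local_input_to_display_ms = 116    # a typical console game, per the ~100-150 ms range above

streaming_overhead_ms = {
    "capture_and_encode": 10,      # turn the rendered frame into video
    "network_round_trip": 30,      # to a nearby server and back
    "decode_and_display": 10,      # client-side video decode
}

total_streamed_ms = local_input_to_display_ms + sum(streaming_overhead_ms.values())
print(total_streamed_ms)           # ~166 ms end to end if streamed
```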

Also, would you mind it as much if you could 'host your own server', so to speak? What if you could play on your console but use your gaming PC to provide significant visual enhancements? It could be part of a strategy to unify their PC and embedded efforts, and it would be a decent revival of their home-server ideas. I'm pretty sure AMD wouldn't mind if they got licensing on your console and could also sell you additional CPU and GPU hardware directly.
 
Yeah, MS would be cutting out a good portion of their market if they took the Onlive approach.
 
We still can't buy Mass Effect 2 DLC on the Hungarian version of Live... let's not even talk about servers close enough to me ;)
 

The relative lack of high-end PC development is a big one. Of course back in 2007 we had Crysis, which completely blitzed anything we'd seen before, but since then there have been few titles that showcased what videogames could look like on modern hardware. BF3 looks to be one of those, but as a multiplatform title even it will be held back by the lowest-common-denominator factor in some form or another. Of course, even if it were PC exclusive it would have to be held back somewhat to ensure it would run on older hardware.

Think of the possibilities if you handed Polyphony Digital a GTX560 + i5 2400 + 4GB RAM and told them to make GT6. Such hardware couldn't achieve photorealism in an Avatar setting, but a racing game could look pretty much real to the casual observer.
 

Serious question: do you really need 4GB of RAM in a console?
 