Digital Foundry Article Technical Discussion Archive [2011]

Laa-Yosh said:
"I'm a strong believer that we should already be at Avatar quality in real-time, but the mass market (not everyone has the highest end cards or CPUs for example) is significantly delaying this next step," observes Tiago Sousa.

Seriously, I've just lost all respect. Either he's an idiot or he has no idea what he's talking about.

Perhaps you should contact him and tell him that if he thinks there is hardware out there that can do Avatar in real-time, there's plenty of money to be made outside the game engine industry. Maybe even outside the movie industry. I can see an SNL skit coupled with some nice mocap, or even Kinect in an Avatar rainforest in 3D, driving 3DTV sales for instance.

I bet their AI is still 'old-school' too.
 
Yeah, pretty much exactly the situation as has been described here by many for quite some time now...

So why were they still GPU-bound on PS3 after reducing the resolution, with the PS3 game still a bit slower in less CPU-intensive situations?
 
So why were they still GPU-bound on PS3 after reducing the resolution, with the PS3 game still a bit slower in less CPU-intensive situations?

It was only a small performance boost from the resolution reduction. I don't believe the PS3 version was soft V-synced like the 360 version, either. I have no idea how many frames they gained by soft-syncing the 360 version, though.
 
I also don't understand why the 12% lower resolution on PS3 leads to a huge 18MB saving in RAM, unless they have a huge number of buffers across which the savings accumulate.

Between SSAO, G-Buffers, and post processing they probably have quite a few full-resolution textures in memory, just like any other modern game.
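For a rough sense of how that could add up, here's a back-of-envelope sketch. The resolutions, the 4-byte format and the buffer count are assumptions for illustration, not figures from the article:

#include <cstdio>

int main() {
    // Hypothetical figures: a 1280x720 target versus a ~12% smaller one,
    // e.g. 1152x720 -- illustrative only.
    const long fullPixels    = 1280L * 720L;   // 921,600 px
    const long reducedPixels = 1152L * 720L;   // 829,440 px
    const long savedPixels   = fullPixels - reducedPixels;

    // Each 32-bit (4 bytes/pixel) render target saves roughly 0.35 MB.
    const double savedPerBuffer = savedPixels * 4.0 / (1024.0 * 1024.0);
    std::printf("Saved per 32-bit buffer: %.2f MB\n", savedPerBuffer);

    // Reaching ~18 MB purely this way would need ~50 such buffers, so the
    // savings presumably also come from wider formats, MSAA surfaces, mips, etc.
    std::printf("32-bit buffers needed for 18 MB: %.0f\n", 18.0 / savedPerBuffer);
    return 0;
}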
 
Perhaps you should contact him...

Not sure if you're being sarcastic here, but it certainly isn't my responsibility to educate someone at Crytek, or to ask him to correct some sensationalist comments.
I suspect it's PR, by the way; after all, they have stereo 3D that only costs 1% of performance, too. I bet Jim Cameron would like to learn about that too, after Weta spent a fortune on getting enough hardware to render everything in Avatar twice.
 
Not sure if you're being sarcastic here, but it certainly isn't my responsibility to educate someone at Crytek, or to ask him to correct some sensationalist comments.
I suspect it's PR, by the way; after all, they have stereo 3D that only costs 1% of performance, too. I bet Jim Cameron would like to learn about that too, after Weta spent a fortune on getting enough hardware to render everything in Avatar twice.

I was being highly sarcastic.
 
Backface culling is cheap and part of EDGE; it's not occlusion culling. Also, I finished the game twice (a testament to how much I enjoyed it), and the physics, AI and animation are nothing to write home about (not that they bothered me much, mind you).
What someone finds impressive or lacking isn't really relevant.
 
"I'm a strong believer that we should already be at Avatar quality in real-time, but the mass market (not everyone has the highest end cards or CPUs for example) is significantly delaying this next step," observes Tiago Sousa.

Seriously, I've just lost all respect. Either he's an idiot or he has no idea what he's talking about.

You're so surprised? After the huge proclamations before CryEngine became real on the consoles? :cry:
 
It was showing how much the engine has improved, especially from EA's efforts on PS3.
 
Though I understood it as: some critical spots might have it on, maybe in key places. Not much, though.

@Cyan

Cyan, you still alive?
I'll gladly admit I got owned, though I would swear GI is on sometimes, like at the very beginning of the harbour level and in some other places.

Also I finished the game yesterday, and the ending is rendered in real-time, using GI. The change in lighting is quite spectacular.

P.S. I also took photos of that level you mentioned a while ago, but, well, it's rather pointless to upload them; it looks relatively similar to that pic using Low settings on PC, with some slight differences.
 
Reading the Making of Killzone 3 article, I was wondering why GG felt the use of LogLuv HDR would compromise blending and interpolation in their engine. Would using RGBM help at all, and how does the compromise translate visually speaking?
While stablemate releases God of War III and Uncharted 2 both support (differing) implementations of high dynamic range rendering, Killzone 2 uses a 24-bit RGB framebuffer, which meant LDR lighting only. In previous presentations to the games industry, Guerrilla had discussed experiments using the Logluv colour space favoured by Naughty Dog, but the compromises in terms of blending and interpolation meant that the developer went with a refined version of its existing solution.

The existing buffers used in the deferred rendering setup were tweaked in order to give more dynamic range to the lighting and colour, or, as the developer puts it, "getting more bang for every bit in the buffer".
http://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-3
Basically, what's stopping GG from using HDR for KZ3 when SSM and ND have done it for their respective titles?
 
Reading the Making of Killzone 3 article, I was wondering why GG felt the use of LogLuv HDR would compromise blending and interpolation in their engine. Would using RGBM help at all, and how does the compromise translate visually speaking?
http://www.eurogamer.net/articles/digitalfoundry-the-making-of-killzone-3
Basically, what's stopping GG from using HDR for KZ3 when SSM and ND have done it for their respective titles?

You can't blend to an encoded render target, since the GPU can only do linear RGB frame buffer operations during blending. Plus both formats make use of the alpha channel anyway, so you can't use it for opacity. You also can't properly filter those formats using the texture units, since they are also limited to linear RGB operations. A lot of people will still filter those formats anyway, since while the results are "wrong" they can still look okay in a lot of situations. I don't know about GOW3, but Naughty Dog gets around the blending problem by first rendering their opaques to a 2xMSAA target encoded as LogLuv (RGBM for U2) and then resolving it to an FP16 target with no MSAA. Then they blend the transparents into that.

Guerrilla is in a bit of a different boat from Naughty Dog or Santa Monica because they use a "classic" deferred renderer. With their setup, the GPU needs to be able to blend the results of each lighting pass into the frame buffer, since it handles all of the lighting. Naughty Dog does most of their lighting on the SPUs using a light prepass-esque setup, and GOW3 forward renders, so they don't have that problem. Hence encoded HDR formats are not a magic bullet solution for them. They also mentioned in one of their presentations that their artists preferred to have direct control over the final output colors rather than having to deal with exposure and tone mapping, but I couldn't tell you how much of that is true and how much of that is just damage control.
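To make the blending point above concrete, here's a minimal RGBM sketch. The range value and 8-bit quantisation are assumptions for illustration, not Naughty Dog's actual shader code; the point is simply that mixing the stored (encoded) channels, which is all fixed-function blending can do, does not equal blending the underlying HDR colours:

#include <algorithm>
#include <cmath>
#include <cstdio>

const float kRange = 6.0f;               // assumed maximum HDR value covered

struct RGBM { float r, g, b, m; };       // colour / (m * range) in rgb, multiplier in alpha

RGBM encode(float r, float g, float b) {
    float maxc = std::max({r, g, b, 1e-6f});
    float m = std::min(maxc / kRange, 1.0f);
    m = std::ceil(m * 255.0f) / 255.0f;  // multiplier quantised to 8 bits
    return { r / (m * kRange), g / (m * kRange), b / (m * kRange), m };
}

void decode(const RGBM& p, float out[3]) {
    out[0] = p.r * p.m * kRange;
    out[1] = p.g * p.m * kRange;
    out[2] = p.b * p.m * kRange;
}

int main() {
    RGBM a = encode(4.0f, 0.5f, 0.1f);   // a bright HDR colour
    RGBM b = encode(0.2f, 0.2f, 0.2f);   // a dim one

    // What hardware blending effectively does: mix the *stored* channels
    // (a simple 50/50 mix here for illustration), then decode later.
    RGBM blended = { 0.5f * (a.r + b.r), 0.5f * (a.g + b.g),
                     0.5f * (a.b + b.b), 0.5f * (a.m + b.m) };
    float wrong[3];
    decode(blended, wrong);

    // What you actually want: blend in linear HDR space.
    float right[3] = { 0.5f * (4.0f + 0.2f), 0.5f * (0.5f + 0.2f), 0.5f * (0.1f + 0.2f) };

    std::printf("blend of encoded values: %.2f %.2f %.2f\n", wrong[0], wrong[1], wrong[2]);
    std::printf("blend in linear HDR:     %.2f %.2f %.2f\n", right[0], right[1], right[2]);
    return 0;
}

The green and blue channels come out several times too bright in the encoded blend, which is exactly why a deferred renderer that has to blend every light pass through the ROPs can't just switch to one of these packed formats.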
 
You can't blend to an encoded render target, since the GPU can only do linear RGB frame buffer operations during blending. Plus both formats make use of the alpha channel anyway, so you can't use it for opacity. You also can't properly filter those formats using the texture units, since they are also limited to linear RGB operations. A lot of people will still filter those formats anyway, since while the results are "wrong" they can still look okay in a lot of situations. I don't know about GOW3, but Naughty Dog gets around the blending problem by first rendering their opaques to a 2xMSAA target encoded as LogLuv (RGBM for U2) and then resolving it to an FP16 target with no MSAA. Then they blend the transparents into that.

Guerrilla is in a bit of a different boat from Naughty Dog or Santa Monica because they use a "classic" deferred renderer. With their setup, the GPU needs to be able to blend the results of each lighting pass into the frame buffer, since it handles all of the lighting. Naughty Dog does most of their lighting on the SPUs using a light prepass-esque setup, and GOW3 forward renders, so they don't have that problem. Hence encoded HDR formats are not a magic bullet solution for them. They also mentioned in one of their presentations that their artists preferred to have direct control over the final output colors rather than having to deal with exposure and tone mapping, but I couldn't tell you how much of that is true and how much of that is just damage control.
Thanks for the detailed explanation, mate, much appreciated. So GG would pretty much have to walk the hard road of FP16 if they keep using the fully deferred renderer, then.
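For a sense of what that road costs, a back-of-envelope sketch (the 1280x720, no-MSAA accumulation target is an assumption; KZ3's real buffer layout may differ): an FP16 target doubles the footprint, and with it the blend bandwidth of every light pass, compared with an 8-bit target.

#include <cstdio>

int main() {
    // Assumed 720p-class light accumulation buffer with no MSAA.
    const long pixels = 1280L * 720L;

    const double rgba8 = pixels * 4.0 / (1024.0 * 1024.0);  // 8 bits per channel
    const double fp16  = pixels * 8.0 / (1024.0 * 1024.0);  // 16-bit float per channel

    std::printf("RGBA8 accumulation target: %.2f MB\n", rgba8);  // ~3.52 MB
    std::printf("FP16  accumulation target: %.2f MB\n", fp16);   // ~7.03 MB
    return 0;
}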
 