Digital Foundry Article Technical Discussion Archive [2011]

Something caught my eye in the article, especially because I've seen similar stuff in other publications on GG and their games...

Back in 2005, the infamous Killzone 2 "target render" displayed at E3 was met with disbelief and derision from both press and gamers, but the final game actually surpassed the quality of the CG cinematic in many ways.

The content in KZ2 is indeed more polished and nicer in terms of art direction; but then again, the trailer and its assets were created by a very small team on a very tight schedule, using outdated production techniques, and on a fraction of the game's budget.
From the global illumination through the volumetric particle effects and general polygon counts to the antialiasing quality, there's still a lot of stuff in the trailer that was obviously impossible on the competitor's system (Xbox 360), and is still impossible to achieve even on high-end PCs with triple-GPU setups. These elements clearly played an important role in the initial reactions to the trailer.

The backlash has clearly hurt the people at Guerilla deeply, and to some extent I can understand their reactions, especially if they had nothing to do with the entire issue.

But it still doesn't change the fact that the way this trailer was presented to the press and the gamers was a highly immoral misdirection. It doesn't matter whether it was intentional or just quick capitalizing on the enthusiastic first reactions, or whether it was only Sony's responsibility or GG's people were involved as well - it was still an inexcusable lie to say that the movie was actual gameplay running on a preproduction PS3, just as it would have been a lie to say that the final game was going to be exactly like that.

I'm a big fan of DF, but I don't really appreciate this attempt to whitewash one of the worst PR stunts of this generation, the sole reason most realtime captured movies have had to start with a disclaimer since that E3.
 
I can't believe my eyes. Surely you've been here long enough to know better than that.
 
Again, this is an impression based on several publications created in cooperation with GG, and all have similar statements. I get the feeling that people at GG feel hurt and believe they've been treated unfairly, because all these articles state that the mess with the trailer was totally justified by the end product being as good or even better.
But in my opinion they got what they deserved: both the praise and appreciation for their artistic and technological achievements, and the backlash over all the misdirection.

What really is a bit unfair is that the first Motorstorm trailer had some even more blatantly impossible stuff, and yet they got away with it in the end ;)


Anyway, getting back to the article, it's a nice overview with a few cool insights; for example, it's pretty interesting to see total texture memory being about 100 MB in the GDDR, or how that relates to 10-15 MB of animation data. It will be interesting to see how these change proportionally with the next generation of consoles, where 2 GB of memory is pretty much guaranteed - I personally expect texture memory to increase by more than 4x, and animation data by maybe even less than 4x. Upping the texture resolution already takes care of a 4x increase, but stuff like tessellation adds at least one more channel (displacement), and more set dressing and larger scenes are also pretty much expected.
On the other hand, maybe virtual texturing will become a standard feature? Repi said that Frostbite 2.0 is also using it, which is probably also what the PR person talking about streaming meant.
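Just to illustrate what I mean by virtual texturing (a made-up sketch of the general idea, not anything from Frostbite or GG; all names and sizes are my own):

Code:
// Hypothetical sketch of CPU-side page translation in a virtual texturing
// scheme. All names and sizes are illustrative, not from any real engine.
#include <cstdint>
#include <unordered_map>
#include <vector>

constexpr int kPageSize = 128;  // texels per page side (assumed)

struct PhysicalPage { int x, y; };  // location in the resident page pool

std::unordered_map<uint64_t, PhysicalPage> residentPages;  // page table
std::vector<uint64_t> missQueue;  // fed back to the streaming system

uint64_t PageKey(int pageX, int pageY, int mip) {
    return (uint64_t(mip) << 48) | (uint64_t(pageY) << 24) | uint64_t(pageX);
}

// Translate a virtual texel coordinate into a resident physical page, or
// record a miss so the streamer can pull that page in from disc.
bool LookupPage(int texelX, int texelY, int mip, PhysicalPage& out) {
    const int pageX = (texelX >> mip) / kPageSize;
    const int pageY = (texelY >> mip) / kPageSize;
    const auto it = residentPages.find(PageKey(pageX, pageY, mip));
    if (it == residentPages.end()) {
        missQueue.push_back(PageKey(pageX, pageY, mip));
        return false;  // caller falls back to a coarser resident mip
    }
    out = it->second;
    return true;
}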
Another thing that could benefit from more memory would be a more intensive use of blendshapes, which consume a lot of memory compared to bones and skinning but offer more precise control; combined with tessellation and displacement, the next-gen basemesh poly counts could be kept at a reasonable level.
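Some back-of-the-envelope math on why blendshapes eat memory so much faster than skinning data (all the counts here are assumptions I picked for illustration):

Code:
#include <cstdio>

int main() {
    const long verts  = 10000;  // assumed face basemesh vertex count
    const long shapes = 100;    // assumed number of blendshape targets
    const long bones  = 100;    // assumed skeleton size

    // Each blendshape stores a position delta (3 floats) per vertex.
    const long blendBytes = verts * shapes * 3 * sizeof(float);

    // Skinning stores 4 byte indices + 4 float weights per vertex,
    // plus a 4x4 float matrix per bone.
    const long skinBytes = verts * (4 + 4 * sizeof(float))
                         + bones * 16 * sizeof(float);

    printf("blendshapes: %ld KB\n", blendBytes / 1024);  // ~11718 KB
    printf("skinning:    %ld KB\n", skinBytes / 1024);   // ~201 KB
    return 0;
}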

Also, the software rasterizer on the SPUs is pretty impressive; I believe it means streaming mesh data from GDDR to SPU local store, right? It would be interesting to see how the PS4 rearranges the internal buses and memory pools based on experiences with PS3 development.
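For those unfamiliar, the streaming part would look roughly like this on the SPU side; this is just a minimal double-buffered sketch against the public Cell SDK MFC intrinsics, with the chunk size and processing callback made up:

Code:
#include <spu_mfcio.h>
#include <stdint.h>

#define CHUNK_BYTES 16384  /* assumed chunk size; must be a multiple of 16 */

static char buf[2][CHUNK_BYTES] __attribute__((aligned(128)));

void process_chunk(const char* verts, unsigned bytes);  /* transform, etc. */

/* Stream 'total' bytes of mesh data from main memory at effective address
   'ea' into local store, overlapping each DMA with computation. For
   brevity, assumes 'total' is a multiple of CHUNK_BYTES. */
void stream_mesh(uint64_t ea, unsigned total) {
    const unsigned tag[2] = {0, 1};
    int cur = 0;

    mfc_get(buf[cur], ea, CHUNK_BYTES, tag[cur], 0, 0);  /* prime first DMA */

    for (unsigned done = 0; done < total; done += CHUNK_BYTES) {
        const int nxt = cur ^ 1;
        if (done + CHUNK_BYTES < total)  /* kick off next transfer early */
            mfc_get(buf[nxt], ea + done + CHUNK_BYTES, CHUNK_BYTES,
                    tag[nxt], 0, 0);

        mfc_write_tag_mask(1 << tag[cur]);  /* wait only on current buffer */
        mfc_read_tag_status_all();

        process_chunk(buf[cur], CHUNK_BYTES);
        cur = nxt;
    }
}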
 
I'm not trying to justify them, but Sony isn't the first or the last to do that...
 
I'm not trying to justify them, but Sony isn't the first or the last to do that...
I'm thinking of the Halo 3 E3 trailer, but they said that was 'in-engine', not in-game. Misleading, but still the truth, in a sense.
 
Perhaps we can move along now from old... claims and PR stunts. They're rather inane as it is!
 
What do you think the memory overhead of MLAA is if you are already doing SPU post-processing or shading?

In the KZ3 case it looks like they couldn't tile, so they had to reserve a full frame buffer of memory. There may be some cases where you can tile, like processing it all in 1280x16 tiles for example, but heavy general SPU use during the game could make that tricky with MLAA. MLAA seems to be fairly CPU-intensive; I forget the timings, but it was something like 4ms across 5 SPUs. They spread it across all those SPUs because they need maximum parallelization to get the MLAA pass complete as quickly as possible. If they tiled it, or did it in segments to save memory, that MLAA load could stretch too far, as the SPUs would now be doing a mix of MLAA and other post-processing, which could lead to stalls if MLAA takes too long to finish. A predictable 4ms on the SPUs lets them do a predictable 4ms on the GPU at the same time; tiling makes it all much more difficult to schedule successfully.

In any case, we could say that the worst memory case for MLAA is one full frame buffer, and the best case could be a small tile's worth of memory in the ~100KB range. Does any game do it tiled yet? If one does, I presume it would be one with an abundance of SPU time to spare.
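For concreteness, here's the math behind those two cases, assuming a 1280x720 target with 4 bytes per pixel:

Code:
#include <cstdio>

int main() {
    const long bpp       = 4;                  // assumed 4 bytes per pixel
    const long fullFrame = 1280L * 720 * bpp;  // whole-frame scratch buffer
    const long tile      = 1280L * 16 * bpp;   // one 1280x16 strip

    printf("full frame: %ld KB\n", fullFrame / 1024);  // 3600 KB (~3.5 MB)
    printf("one tile:   %ld KB\n", tile / 1024);       // 80 KB
    return 0;
}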


It's not full res. They rasterized depth at 640x360, then did a 16x16 downscale to 40x23 to make the buffer they used for performing the occlusion tests. Ultimately it's very similar to what DICE does for occlusion culling.

Did Guerilla ever say that? I don't doubt it's true, and it makes sense that they would do it like that, but I'm curious if they ever confirmed it someplace. Now that they have an SPU software rasterizer, they could potentially use it for a full-resolution transparency pass. Assuming there was SPU time to spare, of course, which KZ3 doesn't have, but maybe other games could do it.
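By the way, for anyone wondering how a 40x23 buffer is enough for occlusion tests: 640/16 = 40 and 360/16 rounds up to 23, and the test itself is trivially cheap at that size. A rough sketch of the general technique (my own illustration, not GG's actual code), assuming the downscale kept the farthest depth per 16x16 block so the culling stays conservative:

Code:
#include <algorithm>

constexpr int kW = 40, kH = 23;  // 640x360 depth, max-downscaled 16x16
float coarseDepth[kH][kW];       // farthest occluder depth per cell

// An object survives culling if, in any screen cell its bounding box
// covers, its nearest depth is closer than the farthest occluder there.
bool IsVisible(int x0, int y0, int x1, int y1, float nearestZ) {
    x0 = std::max(x0, 0);      y0 = std::max(y0, 0);
    x1 = std::min(x1, kW - 1); y1 = std::min(y1, kH - 1);
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            if (nearestZ < coarseDepth[y][x])
                return true;  // possibly visible in this cell
    return false;             // fully behind occluders everywhere
}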
 
Did Guerilla ever say that? I don't doubt it's true, and it makes sense that they would do it like that, but I'm curious if they ever confirmed it someplace. Now that they have an SPU software rasterizer, they could potentially use it for a full-resolution transparency pass. Assuming there was SPU time to spare, of course, which KZ3 doesn't have, but maybe other games could do it.
That's really great news. Transparency is the biggest problem of the PS3 hardware.
 
I saw this image of workload optimisation for Killzone 3 set against development progress. I hadn't seen it before; quite interesting:

http://images.eurogamer.net/assets/articles//a/1/3/3/4/7/0/5/performance_Frozen_Shores.png
 
Look at all that untapped power. :oops: 150%!

I will say though, at least now we have something showing just how bloody significant shadows can be with respect to triangle setup.

It should be noted that the other shots showing memory allocations are probably more for level designers, who are going to be concerned with things they can change (assets). So although it looks like there's a ton of memory left over, it's just that they're not showing the programs/functions and such that are running...
 
It seems like the SPUs are a great help with feeding geometry to the RSX. I wonder, is there any way to do that on the 360? Is there even a need for it?
Could someone explain it (Joker, Al)?

Oh, and why do a lot of devs say that rendering geometry is not something that takes a lot of ms from their budget? I remember Sebbi saying it's like ~10%, and here it seems to take almost 40% of RSX time.
 
It should be noted that the other shots showing memory allocations are probably more for level designers, who are going to be concerned with things they can change (assets). So although it looks like there's a ton of memory left over, it's just that they're not showing the programs/functions and such that are running...
Even if we just say the dark red line is the max they could use, they still have a good amount of growth in that department. They are well under that dark red line.

Oh, and why do a lot of devs say that rendering geometry is not something that takes a lot of ms from their budget? I remember Sebbi saying it's like ~10%, and here it seems to take almost 40% of RSX time.
And that's the RSX unencumbered by a lot of other tasks that would otherwise absorb a good deal of that available time. How long ago did Sebbi post that "~10%" statement?
 
Look at all that untapped power. :oops: 150%!

You mean that's how much slower the game ran compared to the 30fps target?


As for Sebbi and that 10%, realize that he was talking about the 360 as far as I know, with unified shaders and all.
 
Look at all that untapped power. :oops: 150%!

I'll say look at all those millions of polygons. The geometry complexity is beyond belief, and it's not texture art or mapping... so a total of around 1M polys/frame, with most going to shadows?
 
You mean that's how much slower the game ran compared to the 30fps target?

Just teasing. :p I don't think 150% really correlates to power. It's hard to read what they actually mean by the graph, but I believe 100% is indeed 33ms. If that's true, then the CPU tasks were limiting framerate up until the end (for that particular snapshot), where they seem to have gotten things more or less synced with the GPU. But that's just my interpretation of the graphs for the moment. In the end, it's still a snapshot, and there's going to be a whole game to consider. Targeting 100% of 33ms is a bad idea if one wants a stable 30fps regardless of what can happen. They need some headroom after all.
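Assuming that reading (100% = 33ms), converting the graph's percentages into frame times and framerates is simple; the 84% figure below is just a sample value I picked:

Code:
#include <cstdio>

int main() {
    const double target = 1000.0 / 30.0;  // ~33.3 ms per frame at 30fps
    const double samples[] = {150.0, 100.0, 84.0};  // graph percentages
    for (double pct : samples) {
        const double ms = target * pct / 100.0;
        printf("%.0f%% -> %.1f ms -> %.1f fps\n", pct, ms, 1000.0 / ms);
    }
    // 150% -> 50.0 ms -> 20.0 fps; 100% -> 33.3 ms -> 30.0 fps
    return 0;
}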
 
You mean that's how much slower the game ran compared to the 30fps target?

Yeah, that looks to be what it is, and you can see it get optimized over the months until they hit their target somewhere near December 2010, where they have a nice buffer to keep it all at 30fps. Interestingly, it shows that they had a harder time getting CPU use to maintain 30fps, compared to GPU use, which looks like it got into the 30fps range somewhere around June.


It seems like the SPUs are a great help with feeding geometry to the RSX. I wonder, is there any way to do that on the 360? Is there even a need for it?

You can, to a point. The 360's VMX units can do anything the SPUs can do; they will just get crushed performance-wise far sooner. Generally there is no need to bother, though. Skinning sometimes gets moved to the CPU on 360, but much of the rest can just be left to the GPU.
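For reference, the kind of work being moved is mostly linear blend skinning, which is just this loop (a scalar sketch with made-up types; a VMX or SPU version vectorizes it four floats at a time):

Code:
#include <cstdint>

struct Vec3  { float x, y, z; };
struct Mat34 { float m[3][4]; };  // 3x4 bone matrix, row-major

static Vec3 Transform(const Mat34& b, const Vec3& p) {
    return { b.m[0][0]*p.x + b.m[0][1]*p.y + b.m[0][2]*p.z + b.m[0][3],
             b.m[1][0]*p.x + b.m[1][1]*p.y + b.m[1][2]*p.z + b.m[1][3],
             b.m[2][0]*p.x + b.m[2][1]*p.y + b.m[2][2]*p.z + b.m[2][3] };
}

// Blend each vertex position by up to four weighted bone transforms.
void Skin(const Vec3* in, Vec3* out, int count,
          const uint8_t (*idx)[4], const float (*wgt)[4],
          const Mat34* bones) {
    for (int v = 0; v < count; ++v) {
        Vec3 acc = {0.0f, 0.0f, 0.0f};
        for (int i = 0; i < 4; ++i) {
            const Vec3 t = Transform(bones[idx[v][i]], in[v]);
            acc.x += t.x * wgt[v][i];
            acc.y += t.y * wgt[v][i];
            acc.z += t.z * wgt[v][i];
        }
        out[v] = acc;
    }
}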
 
Even if we just say the dark red line is the max they could use, they still have a good amount of growth in that department. They are well under that dark red line.

You have to stay well under the red line to absorb performance spikes and maintain 30fps. How much you need to stay under it depends on the hardware. The PS3 tends to spike more due to the nature of its hardware; it has more things that can cause performance to stall, so you need that extra leeway under the red line to keep performance in line. The same goes for the 360, just not as much, both because its unified setup absorbs spikes much better and because its hardware has far fewer gotchas in terms of things that can suddenly run really slow and hit you with a performance spike. Back in the day we went with ~28ms as the acceptable upper bound on PS3, and ~31ms on 360, for 30fps games. I wonder what other studios go with; I would be curious to know.
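Quick math on those quoted bounds against a 33.3ms frame:

Code:
#include <cstdio>

int main() {
    const double frame = 1000.0 / 30.0;    // 33.3 ms budget at 30fps
    const double ps3 = 28.0, x360 = 31.0;  // quoted upper bounds
    printf("PS3 headroom: %.1f%%\n", (1.0 - ps3 / frame) * 100.0);   // 16.0%
    printf("360 headroom: %.1f%%\n", (1.0 - x360 / frame) * 100.0);  // 7.0%
    return 0;
}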
 
Even if we just say the dark red line is the max they could use, they still have a good amount of growth in that department. They are well under that dark red line.


And that's the RSX unencumbered by a lot of other tasks that would otherwise absorb a good deal of that available time. How long ago did Sebbi post that "~10%" statement?

The thing is that this particular scene may use 80% of resources as shown, but another one may take it all, maybe even go overboard. There is no point in upping the graphics in the less chaotic scenes and lowering it in the ones that are more chaotic. They could add a bit more so you get to almost 100% in this particular scene, but other, more taxing scenes would suffer in frame rate.

EDIT: Joker beat me to it :)
 
Interestingly, it shows that they had a harder time getting CPU use to maintain 30fps, compared to GPU use, which looks like it got into the 30fps range somewhere around June.

Makes sense if you consider just how much stuff they're doing on the SPUs, from the software rasterizer through AI to MLAA and other post-processing filters.
 