Image Quality and Framebuffer Speculations for WIP/alpha/beta/E3 games *Read the first post*

Don't flame me (too much) but they've evidently had to make some compromises to increase the detail and complexity, and image quality seems to be one of them.
 
It looks jaggier than beta, but less blurred. I'll take it. Also, texture work is off the hook; shadows, not so much. By the way, where did motion blur go?
 
It just looks very rough and ugly in those screenshots N_B has linked. I realize that some of it is because of the lack of 2xMSAA (or was it Quincunx?), but still, it hurts the eyes a bit compared to UC2. Further away from offline CG image quality.
 
2x, but it was pretty broken since they only enabled it in the material pass and lighting was done on the SPUs.
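To make that concrete, here is a rough C++ sketch of the frame split as I read that description (the function names, the ordering and the resolve step are my guesses, not Naughty Dog's actual code): if the 2x buffer gets resolved down to one sample per pixel before the SPU lighting pass, the lighting never sees the extra coverage, which is one way the AA ends up "broken".

#include <cstdio>

// Material/G-buffer pass on RSX, rendered at 2x MSAA.
void material_pass_2x() { std::puts("RSX: render G-buffer at 2x MSAA"); }

// Downsample to one sample per pixel so the SPUs have a 1x buffer to read.
void resolve_to_1x()    { std::puts("RSX: resolve 2x -> 1x"); }

// Deferred lighting on the SPUs, operating on the resolved 1x data.
void spu_lighting_1x()  { std::puts("SPUs: light the resolved buffer"); }

int main() {
    material_pass_2x();
    resolve_to_1x();    // the extra edge samples are averaged away here
    spu_lighting_1x();  // so the lit result is effectively 1x from this point on
    return 0;
}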

It looks jaggier than beta, but less blurred.
Yeah, probably just tweaked the edge detection. It's still more blur than the previous two games had just by nature of the algorithm, and there are still a fair number of jagged edges left. I suppose the resulting blur is part of the weird look the beta had, and it's visible in these screens too.
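For what it's worth, here is a toy C++ sketch of the kind of edge-detect-and-blend these post-process filters do (this is not ND's actual SPU code; the threshold and weights are invented): anything that trips the luma threshold gets blended with its neighbours, so high-contrast texture detail is softened along with the geometry edges, which is where the extra blur comes from.

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    // One row of luma values containing a hard edge.
    std::vector<float> luma = {0, 0, 0, 1, 1, 1, 0, 0};
    std::vector<float> out  = luma;
    const float threshold = 0.25f;
    const int   w = static_cast<int>(luma.size());

    for (int x = 1; x < w - 1; ++x) {
        float dl = std::fabs(luma[x] - luma[x - 1]);
        float dr = std::fabs(luma[x] - luma[x + 1]);
        if (std::max(dl, dr) > threshold)  // "edge" detected: blend with neighbours
            out[x] = 0.5f * luma[x] + 0.25f * (luma[x - 1] + luma[x + 1]);
    }

    for (float v : out) std::printf("%.2f ", v);  // the hard 0->1 step is now a ramp
    std::printf("\n");
    return 0;
}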

I'd be curious to know why they bothered switching, but anyways.
 
Didn't 2xMSAA take them a couple ms on RSX? The difference is not really big, but the further you go into the image (depth), the more aliased it seems. Sub-pixel problems maybe?
 
It's still more blur than the previous two games had just by nature of the algorithm,

That's strange, because the word I'd use for these screens is absolutely not blur but noise. A LOT of noise, and not even the film-grain-looking kind. It's like the texture LOD is set far too high.
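In case that texture LOD remark isn't clear, here is a minimal C++ sketch of standard mip selection with a LOD bias (the numbers are invented, and this is only my guess at what's going on in those shots): pulling the bias negative picks a mip that's sharper than the pixel footprint warrants, so texture frequencies above the pixel rate survive and read as noise/shimmer instead of detail.

#include <algorithm>
#include <cmath>
#include <cstdio>

// Texels covered per pixel -> mip level, with an optional LOD bias.
float mip_level(float texels_per_pixel, float lod_bias) {
    float lod = std::log2(std::max(texels_per_pixel, 1.0f)) + lod_bias;
    return std::max(lod, 0.0f);
}

int main() {
    float footprint = 4.0f;  // 4 texels land in this pixel, so mip 2 is the "right" choice
    std::printf("no bias : mip %.1f\n", mip_level(footprint,  0.0f));  // 2.0
    std::printf("bias -1 : mip %.1f\n", mip_level(footprint, -1.0f));  // 1.0, under-filtered
    return 0;
}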
 
Didn't 2xMSAA take them a couple ms on RSX?

Can't recall specifically; I'll have to look through the GDC slides again, but I don't remember them mentioning the specific cost of enabling it.

edit:

Ah ok, it was just in the graph. So 7 ms for the depth/normals @ 2x, which should be the same speed as 1x if I'm not mistaken (the ROPs do 2x samples per clock, and this is Z + colour). The resolve from 2x RGBM to 1x FP16 is nearly 2.5 ms.
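For anyone wondering why that resolve isn't free, here is a rough C++ sketch of what a 2x RGBM to FP16 resolve has to do (the RGBM range constant and the plain average are my assumptions; the actual shader isn't public): each sample has to be decoded to linear HDR before averaging, so it's more work than a plain box-filter resolve of an ordinary colour buffer.

#include <cstdio>

struct RGBM { float r, g, b, m; };  // m scales rgb back into HDR range
struct HDR  { float r, g, b; };

const float kRGBMRange = 6.0f;      // assumed encode range, not a confirmed value

HDR decode(const RGBM& s) {
    float k = s.m * kRGBMRange;
    return { s.r * k, s.g * k, s.b * k };
}

// Resolve the two MSAA samples of one pixel into a single HDR value
// (on the GPU this would be written out to the FP16 render target).
HDR resolve2x(const RGBM& s0, const RGBM& s1) {
    HDR a = decode(s0), b = decode(s1);
    return { 0.5f * (a.r + b.r), 0.5f * (a.g + b.g), 0.5f * (a.b + b.b) };
}

int main() {
    RGBM s0 = {1.0f, 0.5f, 0.25f, 0.5f};
    RGBM s1 = {0.8f, 0.4f, 0.20f, 1.0f};
    HDR  p  = resolve2x(s0, s1);
    std::printf("resolved: %.2f %.2f %.2f\n", p.r, p.g, p.b);
    return 0;
}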

Sub-pixel problems maybe?
And/or the edge filter they're using.
 
Oh another thing... I heard somewhere that they got rid of the filmic tonemapping, but I'll have to check my source on that...
 
That's what it looks like: giving up on image-quality-related features in order to add more content, more action, and more set pieces, while hoping that they can get away with it.

Goes to show how utterly efficient UC2's engine had to be; they can't push the system any further - so for every single thing they add, they have to take away something too.
 
Oh another thing... I heard somewhere that they got rid of the filmic tonemapping, but I'll have to check my source on that...

It definitely does not look like they got rid of it. There's quite a bit of the enhanced contrast/saturation/crushed blacks that's the signature of filmic tone mapping curves.
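For reference, that look comes straight out of the curve shape. Below is the filmic operator John Hable presented for UC2 (constants as he published them in his GDC talk and on filmicgames.com; whether UC3 uses exactly the same numbers is anyone's guess). The toe is what crushes the blacks, the shoulder rolls off the highlights, and the steeper midsection is where the extra contrast comes from.

#include <cstdio>

// Hable's published Uncharted 2 filmic curve.
float hable(float x) {
    const float A = 0.15f;  // shoulder strength
    const float B = 0.50f;  // linear strength
    const float C = 0.10f;  // linear angle
    const float D = 0.20f;  // toe strength
    const float E = 0.02f;  // toe numerator
    const float F = 0.30f;  // toe denominator
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

int main() {
    const float exposure = 2.0f;   // arbitrary exposure for the demo
    const float W = 11.2f;         // linear white point
    const float white = hable(W);
    for (float hdr = 0.0f; hdr <= 4.0f; hdr += 0.5f) {
        float ldr = hable(hdr * exposure) / white;  // normalise so W maps to 1.0
        std::printf("hdr %.1f -> ldr %.3f\n", hdr, ldr);
    }
    return 0;
}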
 
hm... maybe it was another game. The conversation I had was ambiguous since we were discussing multiple games at the time.
 
It just looks very rough and ugly in those screenshots N_B has linked. I realize that some of it is because of the lack of 2xMSAA (or was it Quincunx?), but still, it hurts the eyes a bit compared to UC2. Further away from offline CG image quality.

I'm not sure you remember the previous Unchys exactly (no offence), but IQ was the 'worst' part of the Unchy engine, because the MSAA was almost useless in most places imho... The offline CG (a lot smoother than before) probably adds to the feeling of worse IQ in-game (and shimmering makes its 'best' contribution here), but it's essentially because the post-processing filter works better in close camera views and cinematics.
 
I personally thought the game looks much sharper than the previous two games. Maybe it's just better texture work. Especially in those new shots that were posted. And I just finished U2 recently.
 
they can't push the system any further - so for every single thing they add, they have to take away something too.
That's exactly what console graphics technology development is all about. For the last half year I have been optimizing and optimizing our GPU code just to get the last features in. Basically, at the end you have to move single shader instructions from one shader to another to shift bottlenecks a little bit (extra ALU is free if sampling or the texture cache is the bottleneck, and vice versa). It's really challenging, since everyone expects you to get the features complete, but at the same time you have to frantically optimize to fit everything into the 16 ms (60 fps is a bitch on current consoles). It's really surprising how large a tech refactoring I find myself willing to do just to get a 1-2% GPU perf boost, if that 1-2% allows us to add a new feature :)
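To put the "extra ALU is free" point into a toy model (all the numbers below are invented, this is just to illustrate the idea): if a pass costs roughly max(ALU time, texture time), then moving ALU work out of an ALU-bound shader into a sampling-bound one shortens the frame even though the total amount of work hasn't changed at all.

#include <algorithm>
#include <cstdio>

struct Pass { float alu_ms, tex_ms; };

// Very crude model: a pass is limited by whichever unit is busier.
float pass_cost(const Pass& p) { return std::max(p.alu_ms, p.tex_ms); }

int main() {
    Pass shadow  = {2.0f, 5.0f};  // sampling bound: the ALUs sit partly idle
    Pass shading = {7.0f, 4.0f};  // ALU bound

    std::printf("before: %.1f ms\n", pass_cost(shadow) + pass_cost(shading));  // 12.0 ms

    // Move ~2 ms worth of ALU work into the sampling-bound pass.
    shadow.alu_ms  += 2.0f;       // still hidden under the 5 ms of texturing
    shading.alu_ms -= 2.0f;

    std::printf("after : %.1f ms\n", pass_cost(shadow) + pass_cost(shading));  // 10.0 ms
    return 0;
}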
 
This would make for such a nice debate on 100% resource utilization... actually, scratch that. ;)

Anyway, thanks for the insight! It's always nice to get more info about these issues... would not have thought that even 1-2% can matter.
 
That's exactly what console graphics technology development is all about. For the last half year I have been optimizing and optimizing our GPU code just to get the last features in. Basically, at the end you have to move single shader instructions from one shader to another to shift bottlenecks a little bit (extra ALU is free if sampling or the texture cache is the bottleneck, and vice versa). It's really challenging, since everyone expects you to get the features complete, but at the same time you have to frantically optimize to fit everything into the 16 ms (60 fps is a bitch on current consoles). It's really surprising how large a tech refactoring I find myself willing to do just to get a 1-2% GPU perf boost, if that 1-2% allows us to add a new feature :)
How much of that optimisation would you incorporate into future engine designs? Will your next-gen games use what you've learnt squeezing cycles out of this gen, or are the optimisations so specific to certain code that they're of no use going forward?
 