Looks like it. Edit: is that AF in the desert shots?
Yeah, though I think you can tell that it's not doing the greatest job of detection. Still edge detect?
Image quality seems to be one of them.
Yeah, probably just tweaked the edge detection. It's still more blur than the previous two games had, just by nature of the algorithm, and there are still a fair number of jagged edges left. I suppose the resulting blur is part of the weird look the beta had and that's visible in the screens here. It looks jaggier than the beta, but less blurred.
Didn't 2xMSAA take them a couple of ms on RSX?
And/or the edge filter they're using. Sub-pixel problems, maybe?
Are the PS3's assets the same as the HD pack, then?
Oh another thing... I heard somewhere that they got rid of the filmic tonemapping, but I'll have to check my source on that...
It just looks very rough and ugly in those screenshots N_B has linked. I realize that some of it is because of the lack of 2xMSAA (or was it Quincunx?), but it still hurts the eyes a bit compared to UC2. Further away from offline CG image quality.
As far as I know. It's on DF's plate as well, but there's a lot to do.
"They can't push the system any further - so for every single thing they add, they have to take away something too."

That's exactly what console graphics technology development is all about. For the last half year I have been optimizing and optimizing our GPU code just to get the last features in. Basically, at the end you have to move single shader instructions from one shader to another to shift bottlenecks a little bit (extra ALU is free if sampling or the texture cache is the bottleneck, and vice versa). It's really challenging, since everyone expects you to get the features complete, but at the same time you have to frantically optimize to fit everything into the 16 ms (60 fps is a bitch on current consoles). It's really surprising how large a tech refactoring I find myself willing to do just to get a 1-2% GPU perf boost, if that 1-2% allows us to add a new feature.
How much of that optimisation would you incorporate into future engine designs? Will your next-gen games use what you've learnt squeezing cycles out of this gen, or are the optimisations so specific to certain code that they're of no use going forward?
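The bottleneck-shifting point above is easy to illustrate with a small standalone example. What follows is a minimal, hypothetical CUDA sketch, not RSX code or anything from an actual engine; the kernel names, buffer size, and curve constants are invented. It only shows that folding extra arithmetic into a fetch-bound kernel costs almost nothing, which is the "extra ALU is free" observation.

```cuda
// Hypothetical CUDA sketch (illustration only, not code from the thread or any PS3
// engine): why "extra ALU is free" when a kernel is bound by memory/texture fetches.
#include <cstdio>
#include <cuda_runtime.h>

// Fetch-heavy kernel: four strided global reads stand in for texture sampling,
// so the warps spend most of their time waiting on memory.
__global__ void fetch_bound(const float* __restrict__ src, float* dst, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float a = src[i];
    float b = src[(i + n / 4) % n];
    float c = src[(i + n / 2) % n];
    float d = src[(i + 3 * n / 4) % n];
    dst[i] = a + b + c + d;
}

// Same fetches plus extra arithmetic (a made-up filmic-style curve). Because the
// kernel is fetch-bound, this math hides under the memory latency - the "free ALU"
// that lets you move instructions out of an ALU-bound pass.
__global__ void fetch_bound_plus_alu(const float* __restrict__ src, float* dst, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float a = src[i];
    float b = src[(i + n / 4) % n];
    float c = src[(i + n / 2) % n];
    float d = src[(i + 3 * n / 4) % n];
    float x = a + b + c + d;
    x = (x * (x * 0.22f + 0.03f)) / (x * (x * 0.22f + 0.30f) + 0.06f);  // extra ALU work
    dst[i] = x;
}

int main()
{
    const int n = 1 << 22;  // ~4M elements, enough to be bandwidth-bound
    float *src = nullptr, *dst = nullptr;
    cudaMalloc((void**)&src, n * sizeof(float));
    cudaMalloc((void**)&dst, n * sizeof(float));
    cudaMemset(src, 0, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // Time the fetch-only kernel.
    cudaEventRecord(t0);
    fetch_bound<<<grid, block>>>(src, dst, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_fetch = 0.0f;
    cudaEventElapsedTime(&ms_fetch, t0, t1);

    // Time the fetch + extra-ALU kernel.
    cudaEventRecord(t0);
    fetch_bound_plus_alu<<<grid, block>>>(src, dst, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms_both = 0.0f;
    cudaEventElapsedTime(&ms_both, t0, t1);

    printf("fetch only: %.3f ms, fetch + extra ALU: %.3f ms\n", ms_fetch, ms_both);

    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    cudaFree(src);
    cudaFree(dst);
    return 0;
}
```

On a bandwidth-bound GPU the two timings should come out nearly identical; that idle ALU headroom is exactly what gets spent on new features when the whole frame has to fit into roughly 16.7 ms (1000 ms / 60 fps).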