Wow man... If it's sub-HD with no AA or AF, they are sooo gonna flop with CE3, it ain't even funny.
Anisotropic filtering on consoles? If we're lucky, CE3 will have full unoptimized trilinear filtering on all surfaces. Most existing AAA games/engines use a combination of bilinear (far away objects) and optimized trilinear (near objects). Full anisotropic filtering everywhere (as seen in PC games) is just way too demanding for current generation consoles.
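To make the trade-off concrete, here's a minimal sketch of how an engine might pick filtering per draw on a D3D9-class device. The sampler states and filter constants are standard Direct3D 9; the distance-based policy and the helper itself are my own illustration, not anything Crytek has described:

```cpp
#include <d3d9.h>

// Hypothetical helper: bilinear for distant geometry, trilinear for near
// geometry, anisotropic only when the budget allows.
void setFilteringForDraw(IDirect3DDevice9* dev, bool nearCamera, bool allowAniso)
{
    dev->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);
    if (allowAniso && nearCamera) {
        // Anisotropic minification: highest quality, highest bandwidth cost.
        dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC);
        dev->SetSamplerState(0, D3DSAMP_MAXANISOTROPY, 8);
        dev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    } else if (nearCamera) {
        // Trilinear: linear filtering within and between mip levels.
        dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        dev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR);
    } else {
        // Bilinear: linear within a mip level, point sampling between levels.
        dev->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
        dev->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_POINT);
    }
}
```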
According to the white papers Crytek have released, their reprojection-based temporal AA is very fast on Xbox 360. It most likely uses the same reprojection steps/buffers as their reprojection SSAO, so it shouldn't cost that much extra. I would be really surprised if they didn't use this AA method, as they have spent so much development time on it, and released articles stating that it's a really good fit for their engine infrastructure and works really fast on current generation consoles.
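For reference, the core of any reprojection scheme is just: unproject the current pixel with this frame's matrices, then re-project it with last frame's. Here's a minimal CPU-side sketch of that step (my own illustration of the general technique, not Crytek's actual code; all names are made up, and static geometry is assumed):

```cpp
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };  // row-major

Vec4 mul(const Mat4& M, const Vec4& v) {
    return {
        M.m[0][0]*v.x + M.m[0][1]*v.y + M.m[0][2]*v.z + M.m[0][3]*v.w,
        M.m[1][0]*v.x + M.m[1][1]*v.y + M.m[1][2]*v.z + M.m[1][3]*v.w,
        M.m[2][0]*v.x + M.m[2][1]*v.y + M.m[2][2]*v.z + M.m[2][3]*v.w,
        M.m[3][0]*v.x + M.m[3][1]*v.y + M.m[3][2]*v.z + M.m[3][3]*v.w
    };
}

// Where was the surface under this pixel last frame? Unproject with the
// current frame's inverse view-projection, re-project with the previous
// frame's view-projection. Returns false when the history is unusable.
bool reprojectToPrevUV(float ndcX, float ndcY, float depth,
                       const Mat4& invViewProjCurr, const Mat4& viewProjPrev,
                       float& prevU, float& prevV)
{
    // NDC + depth -> world position
    Vec4 world = mul(invViewProjCurr, { ndcX, ndcY, depth, 1.0f });
    world = { world.x / world.w, world.y / world.w, world.z / world.w, 1.0f };

    // World position -> previous frame's clip space
    Vec4 prev = mul(viewProjPrev, world);
    if (prev.w <= 0.0f) return false;                    // behind old camera
    float px = prev.x / prev.w, py = prev.y / prev.w;
    if (px < -1.0f || px > 1.0f || py < -1.0f || py > 1.0f)
        return false;                                    // off-screen last frame

    // NDC [-1,1] -> texture UV [0,1] (V flipped, D3D convention)
    prevU = 0.5f * px + 0.5f;
    prevV = -0.5f * py + 0.5f;
    return true;
}
```

The AA resolve then blends the history sample at (prevU, prevV) with the current frame, rejecting the history when reprojection fails or the stored depth disagrees. The same reprojected positions can feed a reprojection-based SSAO, which is why sharing the cost between the two passes is plausible.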
1152x720 is a really good alternative to 1280x720, since it fits better in the EDRAM. A very slight (unnoticeable) drop in rendering resolution is a really good compromise if it drops your vertex processing cost by 33%: with predicated tiling the geometry is re-submitted once per tile, so going from three tiles to two cuts the geometry passes by a third. You can use that extra processing time for something else that improves the image quality much more than 128 extra pixel columns would.
I believe it is mostly due to the 10 MB EDRAM limit: the 3 RTs (plus depth) of Crysis 2's deferred renderer would yield a 22 MB G-buffer at 1280x720, requiring 3 tiles to fit (too much loss in performance). At 1152x720 the G-buffer is 19.9 MB, which fits in 2 tiles. It'll be interesting to see how the PS3 version turns out.
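Those numbers imply roughly 24 bytes per pixel, and the tile math is easy to check. A quick sketch (treating the tile count as ceil(total size / EDRAM size), which is a simplification of how the 360's predicated tiling actually splits the frame):

```cpp
#include <cmath>
#include <cstdio>

// Xbox 360 EDRAM is 10 MB; a G-buffer larger than that must be rendered
// in tiles, with the scene geometry re-submitted once per tile.
const double kEdramBytes = 10.0 * 1024.0 * 1024.0;

int tilesNeeded(int w, int h, int bytesPerPixel) {
    return (int)std::ceil((double)w * h * bytesPerPixel / kEdramBytes);
}

int main() {
    // 24 bytes/pixel: 32-bit depth + 2x64-bit + 1x32-bit render targets
    int widths[2] = { 1280, 1152 };
    for (int w : widths) {
        printf("%dx720: %.1f MB -> %d tiles\n",
               w, w * 720 * 24 / 1e6, tilesNeeded(w, 720, 24));
    }
    // Prints: 1280x720: 22.1 MB -> 3 tiles
    //         1152x720: 19.9 MB -> 2 tiles
}
```

Three tiles means the scene geometry is processed three times per frame; two tiles, twice. That's the 33% vertex cost drop mentioned above.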
Has Crytek disclosed their G-buffer layout yet?
If they have three G-buffers + depth and use 1152x720 resolution, the G-buffer layout has to be basically this:
- 32 bit depth (D24S8)
- 2 x 64 bit color buffers (likely 4x16F)
- 1 x 32 bit color buffer (likely 4x8, 3x10+2 or 2x16F)
= 192 bits per pixel = 24 bytes per pixel
24*1152*720 = 19906560 bytes (= 2 EDRAM tiles).
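In desktop D3D9 terms (ignoring the 360's EDRAM-specific surface allocation), that layout would look something like this. The formats are real D3D9 enums; which data goes in which target is pure speculation on my part:

```cpp
#include <d3d9.h>

// Hypothetical MRT setup matching the layout above.
HRESULT createGBuffer(IDirect3DDevice9* dev,
                      IDirect3DSurface9** depth, IDirect3DSurface9* rt[3])
{
    const UINT W = 1152, H = 720;
    HRESULT hr;
    // 32-bit depth/stencil (D24S8)
    hr = dev->CreateDepthStencilSurface(W, H, D3DFMT_D24S8,
            D3DMULTISAMPLE_NONE, 0, FALSE, depth, NULL);
    if (FAILED(hr)) return hr;
    // 2 x 64-bit targets (4x16F)
    hr = dev->CreateRenderTarget(W, H, D3DFMT_A16B16G16R16F,
            D3DMULTISAMPLE_NONE, 0, FALSE, &rt[0], NULL);
    if (FAILED(hr)) return hr;
    hr = dev->CreateRenderTarget(W, H, D3DFMT_A16B16G16R16F,
            D3DMULTISAMPLE_NONE, 0, FALSE, &rt[1], NULL);
    if (FAILED(hr)) return hr;
    // 1 x 32-bit target, 4x8 here (D3DFMT_A2B10G10R10 or D3DFMT_G16R16F
    // would be the 3x10+2 / 2x16F alternatives)
    return dev->CreateRenderTarget(W, H, D3DFMT_A8R8G8B8,
            D3DMULTISAMPLE_NONE, 0, FALSE, &rt[2], NULL);
}
// 4 + 8 + 8 + 4 = 24 bytes per pixel, i.e. the 192 bits above.
```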
But having eight 16-bit channels in the G-buffer seems like a huge waste. An encoded normal takes 2 channels and HDR color takes 3 channels, but I don't see a reason for having 3 extra 16-bit float channels just for material properties (those would look exactly the same with just 8 bits). Something doesn't add up.
By using 1152x720 resolution they could also have two 4x8 (or 3x10+2) G-buffers + a depth buffer. This fits the EDRAM perfectly; no tiling required at all. However, this layout is extra tight: there's no room for extra material properties (just specularity and glossiness). The surface colors are 8 bits per channel, and the two-channel normal would likely be stored as 10 bits per channel (as 2x8-bit normal quality is not that good). If they have gone this route, they have likely also stored some extra material/lighting parameter in the stencil bits.
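For what it's worth, here's one common way to pack a unit view-space normal into two channels, the Lambert azimuthal equal-area projection. Whether Crytek uses this exact encoding is unknown to me; this is just to show why 2 channels suffice and why 2x8-bit precision gets dicey:

```cpp
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

// Packs a unit view-space normal into two [0,1] channels. Only degenerates
// at z = -1, which view-space normals rarely (if ever) reach.
Vec2 encodeNormal(Vec3 n) {
    float f = std::sqrt(8.0f * n.z + 8.0f);
    return { n.x / f + 0.5f, n.y / f + 0.5f };
}

Vec3 decodeNormal(Vec2 enc) {
    float fx = enc.x * 4.0f - 2.0f;
    float fy = enc.y * 4.0f - 2.0f;
    float f  = fx * fx + fy * fy;
    float g  = std::sqrt(1.0f - f / 4.0f);
    return { fx * g, fy * g, 1.0f - f / 2.0f };
}

// At 2x8-bit storage the quantization shows up as banding on smooth,
// glossy surfaces; 2x10 bits (the 3x10+2 layout above) is noticeably
// better, which matches the reasoning in the post.
```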