You have an extra variable to play with on consoles. Rendering the screen in tiles (not necessarily as small as the Larrabee tiles) will reduce the storage needs. Why doesn't that matter?
You mean the reduction in the framebuffer? Sure, but I'm of the philosophy that large framebuffers aren't needed. 1080p, 4x AA, 32 bpp (FP10 or, even better, a shared-exponent format) is enough, IMO, so that's just 64 MB.
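For what it's worth, here's the arithmetic behind that 64 MB as I read it. A quick sketch, assuming the figure covers both the color buffer and a 32-bit Z buffer (the post only states 1080p, 4x AA, 32 bpp, so the depth half is my assumption):

```cpp
#include <cstdio>

// Back-of-the-envelope check of the "just 64 MB" framebuffer figure.
// Assumption: 32 bpp color plus a 32-bit depth buffer, both at 4x AA.
int main() {
    const double width      = 1920, height = 1080; // 1080p
    const double samples    = 4;                    // 4x AA
    const double colorBytes = 4;                    // 32 bpp (FP10 / shared exponent)
    const double depthBytes = 4;                    // 32-bit Z (assumed)

    double totalSamples = width * height * samples;
    double bytes = totalSamples * (colorBytes + depthBytes);
    printf("Framebuffer: %.1f MB\n", bytes / (1024.0 * 1024.0));
    // ~63.3 MB, i.e. roughly the 64 MB quoted above.
    return 0;
}
```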
Besides, the cost is fixed, and framebuffer reduction can be achieved on any platform with object-level binning into large tiles. The problem with binned rendering is the unpredictability.
Yes, you can dump tiles before binning is finished, but then in the most poly-heavy situations (AFAIK the biggest culprit for framerate dips) you not only have a large rendering load, you also lose efficiency.
Just so I can follow the math, how many parameters per vertex and how many vertices per frame is that number made up of?
So you are assuming that Crysis is using >10 million polygons per frame?
Remember, I'm talking about the way Intel is doing it, and I did acknowledge that there are ways of reducing it.
I think 50-100 bytes per vertex (some vertex shaders output a lot of interpolators) and 60 bytes per primitive are reasonable (in addition to coverage info, they need a 3x3 matrix per primitive for interpolation and a float3 for 1/z). Is 4M polys per frame that unreasonable an assumption? Even when running at 20-30 fps we see performance scaling that doesn't depend only on pixel count.
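Plugging those numbers in, a rough sketch of what the per-frame bin storage looks like. The 1:1 post-transform vertex-to-triangle ratio is my assumption (vertex reuse varies a lot between scenes); the byte counts are the ones above:

```cpp
#include <cstdio>

// Rough bin-storage estimate for a fully binned frame.
// Assumption: ~1 post-transform vertex per triangle after reuse.
int main() {
    const double polysPerFrame  = 4e6;  // 4M primitives per frame
    const double vertsPerFrame  = 4e6;  // assumed vertex:triangle ratio of 1:1
    const double bytesPerVertex = 75;   // midpoint of the 50-100 byte range
    const double bytesPerPrim   = 60;   // coverage + 3x3 interpolation matrix + 1/z

    double bytes = vertsPerFrame * bytesPerVertex + polysPerFrame * bytesPerPrim;
    printf("Bin storage: %.0f MB per frame\n", bytes / (1024.0 * 1024.0));
    // ~515 MB with these assumptions, which is why binning the whole
    // frame before rasterization gets expensive in poly-heavy scenes.
    return 0;
}
```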