I don't understand what you guys are arguing about. You are both right. You need compute power to be able to do your visibility testing and culling. It's also true that you need an SSD that's fast enough to read blocks of data for models and 8K textures very quickly. Each model had up to four 8K textures, by the description, so there's a ton of texture data being moved. Is the Series X fast enough to stream in all that texture data? We have no idea, because they didn't run it on Series X and they didn't give any metrics. It's definitely possible that slower drives would have to live with 4K textures in place of 8K textures, etc. We don't know yet.
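To put rough numbers on how much texture data that is, here's a back-of-envelope sketch. The bytes-per-texel rate and the four-textures-per-model count are assumptions on my part (taken from the demo description), not measured values:

```python
# Back-of-envelope: texture data carried by one asset.
# Assumptions (mine): four 8K textures per model, ~1 byte per texel once
# block-compressed (BC7-class) in memory.
TEXELS_8K = 8192 * 8192
BYTES_PER_TEXEL = 1          # assumed block-compressed rate
TEXTURES_PER_MODEL = 4       # "up to four 8K textures" per the description

per_texture_mb = TEXELS_8K * BYTES_PER_TEXEL / 1e6
per_model_mb = per_texture_mb * TEXTURES_PER_MODEL
print(f"~{per_texture_mb:.0f} MB per 8K texture, ~{per_model_mb:.0f} MB per model")
# -> ~67 MB per texture, ~268 MB per model, before counting mips.
# Whether a given SSD keeps up depends on how many unique models come into
# view per second, which is exactly the metric we don't have.
```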
Maybe the PS5 doesn't have to worry about mip selection because it's just fast enough to load the full textures in. Maybe a slower SSD on a PC, or some other device, would have to live with 4K or 2K textures. Maybe Xbox will stream in a lower-resolution texture, then blend and swap to the high-resolution texture if it arrives a frame late (that is how they described Sampler Feedback Streaming). We don't know if the scenes on PS5 were pushing I/O in a way that the Series X SSD couldn't handle.
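Here's a toy sketch of that "blend and swap" behaviour as I read it from the public description; this is my mental model, not actual console code, and every name in it is made up:

```python
# Toy model of "stream a lower mip now, swap to the finer mip when it lands".
# Mip 0 is the finest level; higher numbers are coarser.

def pick_mip(requested_mip, resident_mips, io_requests):
    """Return the mip to sample this frame; queue IO for the one we want."""
    if requested_mip in resident_mips:
        return requested_mip                     # ideal case: data already resident
    io_requests.append(requested_mip)            # ask the drive for the fine mip
    coarser = [m for m in resident_mips if m > requested_mip]
    # Fall back to the finest coarser mip we do have; blend/swap next frame.
    return min(coarser) if coarser else max(resident_mips)

io_requests = []
print(pick_mip(0, {2, 3, 4}, io_requests))       # -> 2: slightly blurry for a frame
print(io_requests)                               # -> [0]: mip 0 arrives a frame late
```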
For me, I'm trying my best to separate what could be I/O and what could be compute.
And when I think of compute, we have been doing 4K graphics on lesser hardware, so most people just assume it's an I/O problem.
See Xbox One X for instance. Most people are willing to trade resolution for more graphics detail.
But they misunderstand why they don't respect 4K resolution, and it comes down to the following:
To obtain close to 100% raster efficiency, you need triangle/pixel coverage of roughly 1 triangle per 16 pixels.
This means that, resolution be damned, high or low, whether you move up to 4K or back down, the ratio of triangles to pixels stays roughly the same. The only thing that changes is aliasing quality.
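Worked numbers for that rule of thumb, just multiplying it out (the 16 pixels-per-triangle figure is the one quoted above):

```python
# Visible-triangle budget if every triangle should cover ~16 pixels.
# The budget scales with resolution, so the triangle:pixel ratio stays constant.
PIXELS_PER_TRIANGLE = 16

for name, w, h in [("1080p", 1920, 1080), ("1440p", 2560, 1440), ("4K", 3840, 2160)]:
    budget = w * h / PIXELS_PER_TRIANGLE
    print(f"{name}: ~{budget / 1e6:.2f}M visible triangles per frame")
# -> ~0.13M at 1080p, ~0.23M at 1440p, ~0.52M at 4K
```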
But because Fixed Function hardware is still so fast, even older, weaker hardware is capable of doing this; all we need is bandwidth to feed the fixed-function units.
It would appear that 32 ROPs on the X1X were just sufficient.
So you see, 6TF isn't a lot of power; the power came from the Fixed Function units. The 6TF of compute was augmenting and adding complexity on top of the FF pipeline.
And when we look at the memory footprint, the textures are still 4K in size, and they make up the majority of the streaming bandwidth required and of the capacity in memory, with normals, mips, etc. Textures take up vastly more footprint than vertex meshes.
When you try to match mesh detail to the textures, the mesh data needs to inflate by nearly 16x. A rough comparison is sketched below.
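All the numbers here are my own ballpark assumptions (vertex count, vertex size, bytes per texel), just to show the order-of-magnitude gap between a typical mesh and its 4K texture set:

```python
# Ballpark: typical hero-asset mesh footprint vs its 4K texture set.
# Every constant below is an assumption for illustration.
VERTS = 60_000                 # assumed vertex count
BYTES_PER_VERT = 32            # position + normal + tangent + UV, packed
TRIS = 100_000
BYTES_PER_INDEX = 4
TEXTURE_COUNT = 4              # albedo, normal, roughness, etc.

mesh_mb = (VERTS * BYTES_PER_VERT + TRIS * 3 * BYTES_PER_INDEX) / 1e6
textures_mb = TEXTURE_COUNT * 4096 * 4096 * 1 / 1e6   # ~1 B/texel compressed

print(f"mesh ~{mesh_mb:.1f} MB vs textures ~{textures_mb:.0f} MB")
# -> mesh ~3.1 MB vs textures ~67 MB, roughly a 20x gap: to carry as much
#    detail in geometry as in the textures, the mesh data has to inflate
#    by an order of magnitude.
```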
So the question becomes: was streaming the issue for rendering unlimited-detail technology, or the fact that we hadn't developed the technology to get away from fixed-function hardware?
The obvious limitation blocking unlimited detail today is not I/O. Even with 100GB/s of SSD speed, you'd croak trying to work with subpixel and single-pixel triangles on any FF hardware pipeline. The cost blows up the smaller the triangles get.
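A simplified model of why tiny triangles hurt so much (my own illustration of the well-known 2x2 quad shading behaviour, not anything from the demo):

```python
# Pixel shading runs in 2x2 quads, so a triangle covering 1 pixel still pays
# for 4 shader invocations, and per-triangle setup cost stops being amortised.

def overshade_factor(pixels_covered, quads_touched):
    """Shader invocations per visible pixel (1.0 = no waste)."""
    return (quads_touched * 4) / pixels_covered

print(overshade_factor(pixels_covered=64, quads_touched=16))  # big tri  -> 1.0
print(overshade_factor(pixels_covered=1,  quads_touched=1))   # 1-px tri -> 4.0
# At ~1 triangle per pixel (8.3M triangles at 4K), the quad waste and
# triangle-setup overhead swamp the FF pipeline no matter how fast the SSD is.
```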
That was the limitation, and by moving that work over to compute and away from Fixed Function, you are now looking largely at how much compute power you have to draw the screen.
Which means the I/O portion of it actually worked within the realm of what we already have.
It was the rendering method.
So with 12 GB of memory, a slow 100MB/s drive, 4K textures and a huge buffer period, they still managed a high-fidelity 4K game at 60fps, virtually streamed.
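Some quick arithmetic on why that works; the drive speed and frame rate are the figures above, the prefetch window is my assumption:

```python
# Streaming budget for a ~100 MB/s drive at 60fps with a long prefetch window.
DRIVE_MBS = 100          # HDD-class throughput
FPS = 60
BUFFER_SECONDS = 5       # assumed "huge buffer period"

per_frame_mb = DRIVE_MBS / FPS
per_window_mb = DRIVE_MBS * BUFFER_SECONDS
print(f"~{per_frame_mb:.1f} MB/frame, ~{per_window_mb:.0f} MB per {BUFFER_SECONDS}s window")
# -> ~1.7 MB per frame, ~500 MB per window: plenty to rotate 4K texture sets
#    through 12 GB of RAM, as long as the engine predicts what it needs early.
```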
The only thing missing was not the textures but the triangle meshes. And it's not because FF hardware isn't capable of producing a lot of triangles or culling them, but because it croaks dramatically worse once it drops below a specific triangle/pixel emission output.
Triangle meshes shouldn't inflate the I/O requirement by anything like 500x.
But if Gears 5 moved to unlimited detail and added 4K meshes to match those 4K textures, removing the mips, LODs, normal maps, etc., they should have more than enough capacity to hold those denser meshes.
But the compute power would no longer be nearly enough to do 4K. The resolution would be significantly less, like 1080p to 1440p, or even worse than that.
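If compute cost scales roughly with pixel count (about one micro-triangle per pixel in a compute-driven pipeline), dropping resolution buys back compute in proportion. These are just the pixel-count ratios; the absolute cost per pixel is the unknown:

```python
# Pixel counts and how much cheaper each resolution is relative to 4K,
# assuming cost roughly proportional to pixels shaded.
pixels = {"4K": 3840 * 2160, "1440p": 2560 * 1440, "1080p": 1920 * 1080}
for name, count in pixels.items():
    print(f"{name}: {count / 1e6:.1f}M px, {pixels['4K'] / count:.2f}x cheaper than 4K")
# -> 4K: 8.3M px (1.00x), 1440p: 3.7M px (2.25x), 1080p: 2.1M px (4.00x)
```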
Thus I believe the I/O from Sony is largely spent streaming in the 8K textures and 16K shadowmaps.
So 8K textures are 4x the size of 4K textures: roughly 16MB vs 67MB per texture at about one byte per texel. And that's the whole texture resident in memory, not just the portion streamed from the drive.
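The per-resolution sizes, multiplied out (the one-byte-per-texel rate is my assumption; raw RGBA8 would be 4x these figures):

```python
# Texture memory per resolution; bytes-per-texel is the knob that moves
# everything (assumed ~1 B/texel, i.e. BC-class compression in memory).
BYTES_PER_TEXEL = 1
for name, side in [("2K", 2048), ("4K", 4096), ("8K", 8192)]:
    mb = side * side * BYTES_PER_TEXEL / 1e6
    print(f"{name}: ~{mb:.0f} MB")
# -> 2K ~4 MB, 4K ~17 MB, 8K ~67 MB; each step up is 4x, and a full mip
#    chain adds roughly another third on top.
```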
These SSD drives are capable of a shit ton more, and that's why they wanted to showcase that it could even handle movie-quality assets.