If that were the case then devs wouldn't need to invent countless strategies to compute and shade only what the player sees.
Yeah, with regards to frustum culling, the blue is what the player can see; anything outside of that which still gets rendered is work the player never sees. What happens when you point the frustum directly at the sky? Are you implying that you suddenly need to render 360 degrees around the player because they could end up looking anywhere? I don't understand where you're going with this point. With frustum culling, most of the render work is still within the blue, so if your blue zone is pointing at a skybox, how much outside of it should the engine really be rendering?
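To ground that: frustum culling just means testing each object's bounds against the six planes of the view volume before issuing any GPU work. A minimal sketch (the sphere test and names are illustrative, not from any particular engine):

```python
# Minimal frustum culling sketch: test a bounding sphere against the six
# planes of the view frustum before submitting any draw call.
# Plane normals are assumed to point inward; names are illustrative.
from dataclasses import dataclass

@dataclass
class Plane:
    nx: float  # plane equation: nx*x + ny*y + nz*z + d = 0
    ny: float
    nz: float
    d: float

def sphere_in_frustum(center, radius, planes):
    """Return False as soon as the sphere is fully outside any plane."""
    cx, cy, cz = center
    for p in planes:
        dist = p.nx * cx + p.ny * cy + p.nz * cz + p.d  # signed distance
        if dist < -radius:   # entirely outside this plane
            return False     # culled: the GPU never sees this object
    return True              # inside or intersecting: submit for rendering

# Point the camera at the sky and almost every object fails the test,
# so there is very little geometry left to shade.
```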
Who says that? There are no official sources about the number of ROPs on the Pro, only assumptions. Because if the Pro had 64 ROPs then, theoretically,
Pretty sure we as a group calculated based on the 1X having 32 ROPs.
even with the reduced memory bandwidth and lower GPU clocks, the Pro would have a small advantage in pixel output.
I'm not sure what you're referring to with this point. The ROP calculation is bytes per pixel (precision) * GPU frequency * number of ROPs; the result is either ROP bound or bandwidth bound, depending on whether it exceeds the memory bandwidth.
For 4Pro:
RGBA8:   911 MHz * 32 ROPs * 4 bytes = 116.6 GB/s (ROP bound)
RGBA16F: 911 MHz * 32 ROPs * 8 bytes = 233.2 GB/s (bandwidth bound)
For PS4:
RGBA8:   800 MHz * 32 ROPs * 4 bytes = 102.4 GB/s (ROP bound)
RGBA16F: 800 MHz * 32 ROPs * 8 bytes = 204.8 GB/s (bandwidth bound)
For 1X:
RGBA8:   1172 MHz * 32 ROPs * 4 bytes = 150.0 GB/s (ROP bound)
RGBA16F: 1172 MHz * 32 ROPs * 8 bytes = 300.0 GB/s (ROP bound)
Note that the 1X stays ROP bound even at RGBA16F. I don't think the 1X will be bottlenecked before the 4Pro when it comes to rasterization, unless there are 64 ROPs on the PS4 Pro; that would be a different scenario.
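That math in runnable form, so the numbers above are easy to check (the bandwidth figures are the commonly cited specs: 176 GB/s for PS4, 218 GB/s for the Pro, 326 GB/s for the 1X; a sketch of the rule of thumb, not a profiler):

```python
# Fill-rate rule of thumb: pixel output is capped by whichever is lower,
# ROP throughput (clock * ROPs * bytes/pixel) or memory bandwidth.
# Bandwidth figures are the commonly cited specs, not measured numbers.
consoles = {            # (GPU clock MHz, ROPs, memory bandwidth GB/s)
    "PS4":  (800, 32, 176),
    "4Pro": (911, 32, 218),
    "1X":   (1172, 32, 326),
}
formats = {"RGBA8": 4, "RGBA16F": 8}   # bytes written per pixel

for name, (clock_mhz, rops, mem_bw) in consoles.items():
    for fmt, bpp in formats.items():
        rop_rate = clock_mhz * 1e6 * rops * bpp / 1e9   # GB/s the ROPs can write
        bound = "ROP bound" if rop_rate <= mem_bw else "bandwidth bound"
        print(f"{name:4} {fmt:8} {rop_rate:6.1f} GB/s vs {mem_bw} GB/s -> {bound}")
```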
Or it's not a coincidence and the Pro is a balanced machine with just enough power to produce a 4K CBR image (with some optimization) where the base PS4 outputs at 1080p. Which seems to already be the case in some noteworthy games.
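For reference, the pixel math behind that claim is easy to check (shaded pixels only; the TFLOPS figures are the official specs):

```python
# Shaded-pixel math behind the "balanced machine" claim: 4K checkerboard
# shades half the pixels of native 4K each frame.
full_4k = 3840 * 2160        # 8,294,400 px
cbr_4k  = full_4k // 2       # 4,147,200 px shaded per frame
p1080   = 1920 * 1080        # 2,073,600 px
print(cbr_4k / p1080)        # 2.0x the pixels of 1080p...
print(4.20 / 1.84)           # ...with ~2.28x the base PS4's compute (TFLOPS)
```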
I'm just discussing SoW, and SoW doesn't use checkerboarding. The note by DF indicated that even in Resolution mode it could not hit 4K when looking into the sky. I'm suggesting to you that it's likely not a compute problem; I can't think of a scenario in which a compute problem would stop this device from hitting max resolution when it's required to render as little as possible.
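To make that concrete, a dynamic-resolution engine typically scales with measured GPU load, along these lines (purely illustrative; I have no idea how Monolith's engine actually does it):

```python
# Purely illustrative dynamic-resolution controller: scale tracks GPU frame
# time against the frame budget. If this were the whole story, an almost
# empty sky view (tiny frame time) would always push scale to the cap.
def next_resolution_scale(scale, gpu_frame_ms, target_ms=33.3,
                          min_scale=0.5, max_scale=1.0):
    if gpu_frame_ms < 0.9 * target_ms:        # comfortably under budget
        scale = min(max_scale, scale * 1.05)  # raise resolution
    elif gpu_frame_ms > target_ms:            # over budget
        scale = max(min_scale, scale * 0.95)  # lower resolution
    return scale

# If the engine still refuses to reach max_scale while staring at a skybox,
# the cap is coming from something other than shading cost.
```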
Yes, the XBX will have better textures thanks to more available memory. But they won't necessarily be '4K' textures (i.e. 4x higher res). For instance, Tomb Raider on XBX just uses the PC setting one step above the Pro's (from memory, PC Ultra instead of High). In most cases devs will just use the best textures available on PC.
This seems tangential to what I'm discussing. My point is that you can't take your PS4 -> PS4 Pro resolution-scaling calculation and expect the same formula to work for the Xbox One X. They have different hardware and you don't know where the bottlenecks are. I singled out compute as the flaw in your logic, since looking at a skybox and still not hitting max resolution shouldn't be a compute problem.
I don't see how more memory could help framerate or average resolution in those third-party games, though (if that's what you implied).
It's about how the engine is designed, not how the hardware is designed. I'm speaking strictly about SoW and the bottlenecks it may be hitting on PS4 Pro but may not actually hit on the 1X because of the difference in RAM size.
Xbox One had this problem. We've found on many occasions that deferred engines can't fit their render targets on XBO; the 32 MB of ESRAM plus the slow DDR3 makes it a strong candidate for 900p.
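Back-of-the-envelope for why 900p keeps coming up (the layout here, four 32-bit render targets plus 32-bit depth, is a generic deferred G-buffer I'm assuming for illustration, not any specific engine's):

```python
# Rough deferred G-buffer footprint vs the XBO's 32 MB of ESRAM. The layout
# (four 32-bit render targets + 32-bit depth) is a generic assumption for
# illustration, not any specific engine's.
MB = 1024 * 1024

def gbuffer_mb(width, height, targets=4, bytes_per_target=4, depth_bytes=4):
    return width * height * (targets * bytes_per_target + depth_bytes) / MB

print(f"1080p: {gbuffer_mb(1920, 1080):.1f} MB")  # ~39.6 MB: doesn't fit in 32 MB
print(f" 900p: {gbuffer_mb(1600, 900):.1f} MB")   # ~27.5 MB: fits, hence 900p
```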