If max theoretical bandwidth was the only concern, increasing or decreasing the memory so that all chips are the same size would achieve that goal. But that leads to other questions. Is 10GB too little? Is 20GB too expensive? Or is 16GB the right total, but we need a portion of it to be as fast as possible, because the main bandwidth-sapping part of realtime 3D graphics is writing and reading relatively small buffers?

If the XSX wanted to have all its RAM accessible @ max speed, would that mean increasing the total amount of RAM or decreasing it?
i.e. could they do a sneaky last-minute 6GB RAM upgrade and let it all run @ max speed?
Most likely.

Maybe a depth-only pass, where depth is easier to compress, and so having double the units would still be effective with less raw bandwidth.
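For context, here is a quick sketch of why the split exists, using the publicly stated XSX memory config (ten GDDR6 chips at 14 Gbps on a 320-bit bus: six 2GB plus four 1GB); the helper function and names are just mine for illustration:

```python
# How the XSX's mixed chip sizes split its bandwidth (public specs;
# the helper is just illustrative).

GBPS_PER_PIN = 14      # GDDR6 data rate per pin
CHIP_BUS_BITS = 32     # one 32-bit channel per chip

def bandwidth_gbps(num_chips):
    """Peak bandwidth across num_chips parallel 32-bit GDDR6 channels."""
    return num_chips * CHIP_BUS_BITS / 8 * GBPS_PER_PIN

chip_sizes_gb = [2, 2, 2, 2, 2, 2, 1, 1, 1, 1]  # as shipped

# The first 1GB of every chip interleaves across all ten channels:
fast_pool = 10 * min(chip_sizes_gb)
print(fast_pool, "GB @", bandwidth_gbps(10), "GB/s")  # 10 GB @ 560.0 GB/s

# The leftover 1GB on each of the six larger chips spans only six channels:
slow_pool = sum(s - min(chip_sizes_gb) for s in chip_sizes_gb)
print(slow_pool, "GB @", bandwidth_gbps(6), "GB/s")   # 6 GB @ 336.0 GB/s

# Uniform chips give one flat pool: ten 1GB chips -> 10 GB @ 560 GB/s,
# ten 2GB chips -> 20 GB @ 560 GB/s. Hence the 10GB-vs-20GB question.
```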
I disagree with the idea that you can't have cross-gen games that "fully take advantage" of the new hardware. Cross-gen does not have to mean parity. We've even seen the opposite in the past, where next-gen versions of cross-gen games were cut down compared to the prior-gen version. Fortunately, I don't expect to see that during this transition.
I understand that you could certainly create a game that fundamentally relies on an upgraded hardware spec and can't deliver even a downgraded experience on older hardware, but that's not going to create an inherently superior game, just a different one. I am eager to see the first example that delivers on both, but that won't preclude me from appreciating or enjoying scalable experiences simply because they are scalable.
Also, it was about what they felt was the best way to get to 10TF, not that a higher raw TF value isn't better.
Agreed. He wasn't lying, and he did explicitly state that increasing clocks has limitations, because memory isn't getting any faster. Most people choose to ignore that caveat. So while his statement is that a rising tide lifts all boats (the pipeline goes faster everywhere), this places additional pressure on the memory's ability to deliver on both bandwidth and latency.
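For what it's worth, here's a back-of-envelope version of that caveat, using the public PS5 figures (36 CUs, 448 GB/s) and the standard CUs × 64 lanes × 2 ops/cycle FLOPs formula; treat it as a sketch, not anyone's official model:

```python
# Clock scales the pipeline, but GDDR6 bandwidth stays fixed at 448 GB/s,
# so the bytes available per FLOP shrink as the clock rises.

CUS = 36
BANDWIDTH_BYTES = 448e9    # fixed, regardless of GPU clock

def tflops(cus, ghz):
    return cus * 64 * 2 * ghz / 1000   # GHz -> TFLOPS

for ghz in (1.825, 2.0, 2.23):
    tf = tflops(CUS, ghz)
    bytes_per_flop = BANDWIDTH_BYTES / (tf * 1e12)
    print(f"{ghz} GHz: {tf:.2f} TF, {bytes_per_flop:.4f} bytes/FLOP")

# 1.825 GHz:  8.41 TF, 0.0533 bytes/FLOP
# 2.23  GHz: 10.28 TF, 0.0436 bytes/FLOP  (~18% less bandwidth per FLOP)
```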
Bloodborne is also Sony IP and has a Metacritic score of 92.

That would be a first for Sony, as every gen it was mostly tech demos (The Order: 1886) or games that didn't make an impression graphics- or gameplay-wise (Shadow Fall; FC3 looks better). I'd rather have corporations put their resources and time into proper AAA titles; most people don't have access to these new machines at launch anyway, which makes it a niche market.
You're also forgetting scaling, which didn't exist at anywhere near the scope we know today.
Looking at his example of 36 CUs vs. 48 CUs at the same teraflops, he mentioned that overall performance is raised by the higher frequency, and that it is easier to fully utilize the 36 CUs of the narrower GPU.
The point is that faster GPU pipelines can increase performance significantly, in a different way than more teraflops.
And it's very easy to deduce that the PS5 GPU can beat a 48-CU GPU at 1.673 GHz (10.3 TF). I assume the PS5 probably matches the performance of 48 CUs at 1.825 GHz. In other words, real-world in-game performance between XSX and PS5 is roughly 52/48, which is 8.3%, plus 2-3% for PS5 downclocking. The overall difference is 10-11%.
We can wait for multiplatform games to reveal the real performance results.
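Since the whole argument rests on CUs × clock arithmetic, here's a small check of the numbers above using the standard RDNA formula (CUs × 64 shader lanes × 2 ops per cycle); the 48-CU equivalence is the poster's assumption, not an official figure:

```python
# TF = CUs * 64 lanes * 2 ops/cycle * clock(GHz) / 1000

def tflops(cus, ghz):
    return cus * 128 * ghz / 1000

print(round(tflops(36, 2.230), 2))  # 10.28 TF -- PS5 at max clock
print(round(tflops(48, 1.673), 2))  # 10.28 TF -- Cerny's wide hypothetical
print(round(tflops(52, 1.825), 2))  # 12.15 TF -- XSX

# If 36 CUs @ 2.23 GHz really behave like 48 CUs @ 1.825 GHz, the gap to
# XSX reduces to the CU ratio at equal clocks:
print(round(52 / 48 - 1, 3))        # 0.083 -> ~8.3%, plus 2-3% downclock
```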
What? Killzone: Shadow Fall looked amazing at launch. FC3? I hope that's not Far Cry 3; I tried the PS4 version last week and it looked really rough.
The 448 GB/s bandwidth would be alright if the GPU were clocked lower, around 9.2 TF, but upclocking only the GPU makes it a bottleneck much earlier.
It seems the extra bandwidth is not much of a surplus at all, more like what's required in proportion to the CU count.

So if we're comparing TF and bandwidth:
PS5 = 43.6 GB/s per TF @ 10.28 TF, rising to 48.7 GB/s per TF @ 9.2 TF
XSX = 46.1 GB/s per TF @ 12.15 TF
Or is that bad math?
If correct, it hardly seems a big difference (~6% in the very worst case).
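For reference, those figures are bandwidth per teraflop; re-deriving them from the public specs (448 GB/s, and 560 GB/s for the XSX fast pool, with the TF numbers as above):

```python
# GB/s of memory bandwidth available per teraflop of compute.

def gbps_per_tf(bandwidth_gbps, tf):
    return bandwidth_gbps / tf

print(round(gbps_per_tf(448, 10.28), 1))  # 43.6 -- PS5 at full clock
print(round(gbps_per_tf(448, 9.2), 1))    # 48.7 -- PS5 if downclocked
print(round(gbps_per_tf(560, 12.15), 1))  # 46.1 -- XSX, fast 10 GB pool

# Worst case (PS5 at full clock vs XSX fast pool):
print(round(46.1 / 43.6 - 1, 3))          # 0.057 -> ~6%
```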
Might we get better LOD, more varied textures, and more high-res assets per frame due to the faster SSD?
So why do you need an ultra-fast SSD? Just for faster load times and the lulz? Does it not help you stream more detail faster in tandem with RAM?
Only if the devs are incapable of using RAM.
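As a rough illustration of the streaming argument, using the public raw SSD rates (5.5 GB/s for PS5, 2.4 GB/s for XSX); the per-frame framing is just one way to look at it, not anything the platform holders have published:

```python
# How much data the raw SSD rate could deliver per frame, and how fast
# the whole 16 GB of RAM could in principle be turned over.

FPS = 60

for name, gb_per_s in (("PS5", 5.5), ("XSX", 2.4)):
    mb_per_frame = gb_per_s * 1024 / FPS
    refill_seconds = 16 / gb_per_s
    print(f"{name}: ~{mb_per_frame:.0f} MB/frame @ {FPS} fps, "
          f"16 GB refill in ~{refill_seconds:.1f} s")

# PS5: ~94 MB/frame, ~2.9 s; XSX: ~41 MB/frame, ~6.7 s.
# The point of a fast SSD isn't just load times: RAM can hold mostly what's
# on screen right now, with the SSD backfilling what's coming next.
```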