I have absolutely no clue what you're talking about. You sound all over the place here.
You can have ~500GB/s of memory bandwidth while also having fast 3GB/s+ storage bandwidth with super low latency. That's what the consoles have. :/ This isn't an either/or situation.
"We cant count the SSD as RAM" - I really dont get what you're even trying to say here. Of course it isn't RAM. You really didn't understand my whole paragraph there if I thought that's what I was suggesting. I was saying that the fast SSD's here enable devs to get far more from the RAM that does exist at any given point in time than they could before. Which will work as an effective 'memory multiplier'.
This is an absolutely critical piece of the puzzle that will enable these machines to provide a 'next gen' experience, because the simple RAM doubling alone would be highly insufficient for that.
So when we're looking at the target visuals for Spiderman 2 there, it's definitely not just a matter of the RAM increase alone. The huge increase in I/O will have a large effect on this as well.
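To put very rough numbers on the 'memory multiplier' point (everything below is my own illustrative assumption, not anyone's official spec): the faster the drive, the less of a level has to sit in RAM "just in case", because anything the drive can deliver before the player could need it doesn't have to be resident ahead of time.

```python
# Back-of-the-envelope sketch of the 'memory multiplier' idea.
# Every figure here is an illustrative assumption, not an official spec.

def resident_budget_gb(level_gb: float, working_set_gb: float,
                       stream_rate_gbs: float, lookahead_s: float) -> float:
    """RAM the game must keep resident: the data needed right now, plus
    whatever the drive can't deliver within the look-ahead window."""
    streamable_gb = stream_rate_gbs * lookahead_s               # arrives in time, no need to pre-load
    must_preload_gb = max(0.0, level_gb - working_set_gb - streamable_gb)
    return working_set_gb + must_preload_gb

LEVEL_GB = 20.0        # assumed total assets for one level
WORKING_SET_GB = 4.0   # assumed data for what's on screen / nearby
LOOKAHEAD_S = 4.0      # assumed seconds before out-of-view assets could be needed

for label, rate_gbs in [("HDD   ~0.1 GB/s", 0.1),
                        ("SATA  ~0.5 GB/s", 0.5),
                        ("NVMe  ~5.0 GB/s", 5.0)]:
    budget = resident_budget_gb(LEVEL_GB, WORKING_SET_GB, rate_gbs, LOOKAHEAD_S)
    print(f"{label}: ~{budget:.1f} GB has to stay resident")
```

A real engine is obviously far more nuanced than that, but it's the basic reason the same amount of RAM goes a lot further when the drive can refill a big chunk of it in a couple of seconds.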
Of course the machines do benefit from the faster read speeds from storage, and it does help offset the low amount of total RAM. But it doesn't equal a 4x memory increase (32GB of GDDR6) either.
All you have is a screenshot of a Spider-Man game, which tells us next to nothing about what the game is doing. I think looking at Rift Apart gives us a good idea of what to expect. DF said it many times for this generation: 'keep expectations in check'. Improvements will be made, of course, but it's not like these new x86/off-the-shelf hardware machines are so hard to program for that we need to wait an entire gen to see what they can do. That wasn't even the case with the PS4. It was with PS2/PS3.
1. Games have to use DS in the first place, and on PC it can take a long time for that to become standard in games and game engines
2. GPU decompression at 14GB/s will still be lower than PS5's maximum
3. PS5's I/O is proven in the real world, with a handful of games already loading in sub-2 seconds; DS is not proven in the real world
4. What if Sony release a PS5 Pro with even faster speeds? PC will be behind again.
5. Nvidia's numbers for RTX I/O make no sense.
1. Maybe, but the same seems true for the consoles. The number of games using this new hardware is kinda dire.
2. The 14GB/s was sustained; PS5 won't even be close. With GPU decompression, burst speeds could be over 30GB/s. Raw speeds, you're looking at 7GB/s already today, and PCIe 5 will be substantially faster (rough numbers in the sketch after this list).
3. Spider-Man loads around 2 seconds slower on PC, and that's before GPU decompression.
4. PC was never behind, not even at the launch of the PS5.
5. But Cerny's numbers do, right?
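To put rough numbers on point 2: effective throughput is just raw drive speed times compression ratio, so the comparison only works if you apply the ratio to both sides. The ratios below are assumptions on my part (real ratios vary per asset), and Sony's own 'typical' figure for Kraken on PS5 was around 8-9GB/s.

```python
# Effective I/O = raw drive speed x compression ratio, assuming the
# decompressor (CPU or GPU) can keep up. Ratios are rough assumptions.

def effective_gbs(raw_gbs: float, compression_ratio: float) -> float:
    """Decompressed data delivered per second if decompression isn't the bottleneck."""
    return raw_gbs * compression_ratio

cases = [
    ("PS5, 5.5 GB/s raw, ~1.6:1 typical Kraken", 5.5, 1.6),
    ("PCIe 4.0 NVMe + GPU decompression, 7 GB/s raw, assumed 2:1", 7.0, 2.0),
    ("PCIe 5.0 NVMe + GPU decompression, 12 GB/s raw, assumed 2:1", 12.0, 2.0),
]

for label, raw, ratio in cases:
    print(f"{label}: ~{effective_gbs(raw, ratio):.0f} GB/s effective")
```

Whether any given game actually hits those ratios is a separate question, but the raw link speed on PC is already there today.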
The GPUs definitely saw a pretty sizeable increase, though. :/
We've gone from 1.3 and 1.8TF machines to 10 and 12TF machines, and that's with these newer flops 'going farther' than they used to in actual usage, along with a range of new features. That's not a gargantuan improvement, but it's a minimum 5x increase and entirely respectable.
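Quick arithmetic on the paper numbers, using the rounded TFLOPS figures above:

```python
# Raw FP32 multipliers on paper, using the rounded figures from the post.
last_gen = {"PS4": 1.8, "Xbox One": 1.3}    # TFLOPS (FP32)
this_gen = {"PS5": 10.0, "Series X": 12.0}  # TFLOPS (FP32)

for new, old in (("PS5", "PS4"), ("Series X", "Xbox One")):
    print(f"{old} -> {new}: ~{this_gen[new] / last_gen[old]:.1f}x raw FP32")
```

And that's before counting the per-flop efficiency gains and the new feature set.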
It's not about whether they made a sizeable increase or not; many think they didn't, many others think they did. 'Modern/newer flops' has been true for every generation, not just this one. G70 to GCN was probably a much larger increase per flop than GCN to RDNA.
PS3 to PS4, GPU-wise, was a much, much larger increase, on pure GF/TF alone, and even more so considering the arch improvements made back then. The generational leaps are just smaller now, even on PC to an extent.