General Next Generation Rumors and Discussions [Post GDC 2020]

So what kind of pros are there to higher clocks and narrower compared to lower clocks and wider?
Everything not tied to the shader ALUs is faster: ROPs run faster, TUs run faster, command processors run faster, caches run faster, etc. I don't know that being narrow is an advantage in itself, though, unless you're chasing a specific TF target.

Cerny's example explained it best. Given two hypothetical GPUs, one at 1 GHz with 2n CUs and the other at 2 GHz with n CUs, both have the same shader performance measured in teraflops. However, the GPU running at 2 GHz has twice the texture, ROP, and instruction-dispatch performance; everything that isn't shaders is faster than on the wide, slow GPU.
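For concreteness, here's a minimal back-of-the-envelope sketch of that comparison, assuming the usual RDNA arithmetic of 64 ALUs per CU and 2 FLOPs per ALU per clock; the 72/36 CU counts are purely illustrative, not either console's configuration:

```python
# Hypothetical wide-and-slow vs narrow-and-fast GPUs with identical shader TF.
# Assumes 64 ALUs per CU and 2 FLOPs per ALU per clock (standard RDNA figures).

def teraflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

wide_slow   = teraflops(cus=72, ghz=1.0)   # "2n CUs at 1 GHz"
narrow_fast = teraflops(cus=36, ghz=2.0)   # "n CUs at 2 GHz"

print(wide_slow, narrow_fast)  # 9.216 9.216 -> identical shader throughput

# ROPs, the command processor and the caches are clocked with the GPU, so their
# per-unit rates double on the 2 GHz part even though the TF number is unchanged.
```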

However, if we look at cost, where XBSX's and PS5's SoCs are possibly pretty much price-comparable, wider and slower is clearly the more powerful option. It would appear that PS5's design was constrained by BC requirements after all, resulting in a forced solution as poopy as Nintendo's. Sony need a decent level of abstraction for PS5 game development to ensure that, next gen, they have complete freedom over their hardware choices. MS, on the other hand, are already there, and their 'platform' is looking very robust going forwards.
 
It would appear that PS5's design was constrained by BC requirements after all, resulting in a forced solution as poopy as Nintendo's.

Which is something that I don't understand. BC, IMO, is one of those things that people demand and never use. Has MS stated how many hours users spend playing 360 or OG games vs XBO games? If BC has been so fundamental in their decision, they could have gone for a different solution, ensured that the 100-200 most popular games are playable, and forgotten about the rest. OK, you screw 0.1% of your userbase; so what?
 
Which is something that I don't understand. BC, IMO, is one of those things that people demand and never use...
There is probably another reason: there is a huge list of games on PSN, Sony wants to keep them selling, and long gone are the times when old games aged badly.
These are potential sellers, especially those indie games that age well. In addition, Sony wants to make sure that existing users migrate to PS5. Regardless of whether people use BC much, it's a more decisive factor than it used to be now that games are bought digitally from stores.
People buy old games cheap on Steam to this day on PC, and gamers don't want to feel that they are losing their purchases.
 
Because I'm bored and quarantined at home, I decided to create an Excel sheet with all the specs. It's missing concrete features like VRS, 4D graphics and blast processing, but I will try to add that info in the future.

https://docs.google.com/spreadsheet...rKOFbkG5TrHvFB6Gzdr/pubhtml?gid=0&single=true


Just two additional comments:
- Yes, commas are used for decimals, and the metric system is superior.
- If you don't like this post, you should be banned, IMO.
 
@jayco have they indicated how much memory is usable for games on PS5? I have not seen that number verified anywhere.
 
@jayco have they indicated how much memory is usable for games on PS5? I have not seen that number verified anywhere.

I don't think they made that public. I know MS has 2.5 GB of RAM and one CPU core reserved for running the operating system and shell; I guess it will be similar on the PS5.
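For reference, the XSX side of that is just the published split; a trivial sketch (no PS5 number is included because Sony hadn't published one):

```python
# Xbox Series X memory split as stated by Microsoft: 16 GB total, 2.5 GB reserved
# for the OS and shell. The PS5 equivalent is unknown, so it's deliberately omitted.
xsx_total_gb, xsx_os_gb = 16.0, 2.5
print(f"XSX game-available RAM: {xsx_total_gb - xsx_os_gb:.1f} GB")  # 13.5 GB
```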
 
Because I'm bored and quarantined at home, I decided to create an Excel sheet with all the specs...
The theoretical max for compressed bandwidth on the PS5 SSD is 22 GB/s. 8-9 GB/s is just a realistic number that should be reachable in most conditions, but it can actually reach 20+ GB/s in ideal conditions:

In ideal conditions it can be 20+ GB/s.

https://www.neogaf.com/threads/next...leaks-thread.1480978/page-1508#post-257428542
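For what it's worth, dividing those compressed figures by the drive's stated 5.5 GB/s raw speed gives the implied compression ratios; a quick sketch, where the 5.5 GB/s raw figure is the only input not taken from the post above:

```python
# Implied compression ratios behind the quoted PS5 figures.
# Assumes the stated 5.5 GB/s raw read speed; the compressed numbers are from the post above.

RAW_GBPS = 5.5

for label, compressed_gbps in [("typical (low)", 8.0), ("typical (high)", 9.0), ("best case", 22.0)]:
    ratio = compressed_gbps / RAW_GBPS
    print(f"{label:14s}: {compressed_gbps:4.1f} GB/s -> {ratio:.2f}:1")

# typical (low) :  8.0 GB/s -> 1.45:1
# typical (high):  9.0 GB/s -> 1.64:1
# best case     : 22.0 GB/s -> 4.00:1
```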
 
I will stick to the official numbers for now. But if that is actually possible, that SSD has more bandwidth than some DDR4 configurations.

I don't think it is a matter of "possible"; Cerny flat-out stated it, but he also stated that was a best case and would not be the norm. (Did he actually say "maybe 10% of the time", or is my mind sticking that in there?)


Unrelated thought:
Going from memory, so if I get any figures wrong, apologies in advance; they should be ballpark correct.

Roughly speaking, the Sony SSD solution is around 2x as fast. I think Shifty posted some napkin math based on an older post that went, roughly, 62 MB per 16 ms frame to change everything; MS is at 66 MB per 16 ms frame and Sony is around 100 MB per 16 ms frame. As cool as the Sony solution is (and I will leave it to others to debate the real-world implications thereof), I do have to wonder. MS specifically mentioned that most of the swapping and memory in use was about textures, which often went unused or were only partially used. There is supposed to be something about only loading (SSD to RAM) the part of a texture that is actually going to be used, rather than the whole thing. Wouldn't that, assuming it works and MS is not misrepresenting the situation, have a huge effect on your needed bandwidth between the SSD and the RAM, as well as how much RAM is in actual use?
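To make that napkin math easy to redo, here's a hedged per-frame conversion. The inputs are the typical compressed throughputs quoted in this thread (4.8 GB/s XSX, 8-9 GB/s PS5) and a ~16.7 ms 60 fps frame, so it won't exactly reproduce the 62/66/100 MB figures above, which were quoted from memory and may have used different inputs:

```python
# Napkin math: MB deliverable from the SSD per 60 fps frame, given a GB/s figure.
# Inputs are typical *compressed* throughputs; this is a sketch, not the source
# of the per-frame numbers quoted above.

FRAME_S = 1.0 / 60.0  # ~16.7 ms

def mb_per_frame(gbps: float) -> float:
    return gbps * 1000.0 * FRAME_S  # GB/s -> MB/s -> MB per frame

print(f"XSX 4.8 GB/s -> {mb_per_frame(4.8):5.1f} MB/frame")  # ~80 MB
print(f"PS5 8.0 GB/s -> {mb_per_frame(8.0):5.1f} MB/frame")  # ~133 MB
print(f"PS5 9.0 GB/s -> {mb_per_frame(9.0):5.1f} MB/frame")  # ~150 MB
```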
 
I will stick to the official numbers for now. But if that is actually possible, that SSD has more bandwidth than some DDR4 configurations.
I believe XSX is actually 12.155 TF, which I guess would make it 12.2 for your spreadsheet.

Nm, maybe that was the total system.
 
...MS specifically mentioned that most of the swapping and memory in use was about textures, which often went unused or were only partially used. ... Wouldn't that, assuming it works and MS is not misrepresenting the situation, have a huge effect on your needed bandwidth between the SSD and the RAM, as well as how much RAM is in actual use?

There is much more than textures. We have motion matching; we can imagine loading tons of animation, because having crazy facial animation during gameplay will become the norm. We will have LOD0 models further into the distance. Like I said, we can imagine crazy destruction systems; this is one of the things Remedy said they can do better than in Control with an SSD, and I repeat, those idiots at Sony have a great demo about this and showed nothing on stage, for fuck's sake. We can imagine baked offline BVHs linked to a weather system for "static" light, maybe mixed with dynamic RT light. There are probably many more use cases. Or better baked lightmaps than The Order: 1886, using a setup like HZD's weather system for indirect light, where you have a number of settings for the weather and day/night system. And I'm probably forgetting tons of things.

The use cases of today aren't the use cases of tomorrow...
 
I will stick to the official numbers for now. But if that is actually possible, that SSD has more bandwidth than some DDR4 configurations.
22 GB/s is the official number for max compressed speed. 8-9 GB/s is the bandwidth typically reached; it's not a max theoretical number like Xbox's 4.8 GB/s. Cerny:

The unit itself is capable of outputting as much as 22GB/s if the data happened to compress particularly well

 
I will stick to the official numbers for now. But if that is actually possible, that SSD has more bandwidth than some DDR4 configurations.
The typical figure for lossless compression ratio has no place in hardware specs. It has nothing to do with the hardware's capabilities; data entropy is unrelated to hardware. There is a direct link to the compression algorithm, but lossless codecs are not very far from each other.

Data types reaching a 4:1 ratio will cap at 22 GB/s on PS5 and 6 GB/s on XBSX.

Data types reaching a 2:1 ratio will cap at 11 GB/s on PS5 and 4.8 GB/s on XBSX.

Incompressible data will be "above 5 GB/s" on PS5 (Cerny's precise wording, so the 5.5 GB/s figure has margin for OS, downloads, GC, etc.) and "2.4 GB/s guaranteed" on XBSX (unknown margins).

Everything will be somewhere between these extremes and will depend on game data. We should expect different types of data to vary wildly. Average figures derived from hypothetical game-data compression ratios are ridiculously misleading. They do this with LTO tapes and their imaginary capacities.
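Put as a tiny sketch: effective read speed is the raw speed multiplied by the data's own compression ratio, clamped by the decompressor's ceiling. The raw speeds (5.5 and 2.4 GB/s) and caps (22 and 6 GB/s) are the figures quoted above; the clamping formula is my assumption of how to model it, not an official spec:

```python
# Effective SSD read speed = raw speed * data compression ratio, clamped by the
# hardware decompressor's ceiling. Raw speeds and caps are the figures quoted above.

SPECS = {
    "PS5": {"raw_gbps": 5.5, "decomp_cap_gbps": 22.0},
    "XSX": {"raw_gbps": 2.4, "decomp_cap_gbps": 6.0},
}

def effective_gbps(console: str, ratio: float) -> float:
    s = SPECS[console]
    return min(s["raw_gbps"] * ratio, s["decomp_cap_gbps"])

for ratio in (1.0, 2.0, 4.0):
    print(f"{ratio:.0f}:1  PS5 {effective_gbps('PS5', ratio):4.1f} GB/s   "
          f"XSX {effective_gbps('XSX', ratio):3.1f} GB/s")

# 1:1  PS5  5.5 GB/s   XSX 2.4 GB/s   (incompressible data)
# 2:1  PS5 11.0 GB/s   XSX 4.8 GB/s
# 4:1  PS5 22.0 GB/s   XSX 6.0 GB/s   (XSX hits its decompressor cap)
```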
 
Because I'm bored and quarantined at home I decided to create an excel sheet with all the specs...

So, if we just create a hypothetical console based on all the minimums in that chart, we end up with the target specification of nearly all multiplatform games for next gen (rough TF sum below):
*A 3.5 GHz CPU
*A 36 CU GPU running at 1.82 GHz
*10 GB of 448 GB/s RAM
*A 2.4 GB/s SSD
*Lots of black plastic
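As a rough sum (using the standard CUs × 64 ALUs × 2 ops × clock formula, and the 1.82 GHz figure from the list rather than the exact 1.825 GHz XSX clock), that minimum GPU works out to roughly 8.4 TF:

```python
# TF of the "all the minimums" GPU above: 36 CUs at 1.82 GHz.
min_console_tf = 36 * 64 * 2 * 1.82 / 1000.0
print(f"{min_console_tf:.2f} TF")  # ~8.39 TF
```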
 
The typical figure for lossless compression ratio has no place in hardware specs. It's the actual data compression ratio which will vary. ... Average figures derived from hypothetical game-data compression ratios are ridiculously misleading.

When they give a typical figure, that is probably what they have seen in real games...
 