Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Status
Not open for further replies.
I think some are forgetting that regardless of how fast your SSD is things ultimately have to go through system ram, so the amount of system ram and bandwidth is still a limiting factor. Things can get swapped in and out faster and that's important and will be beneficial on both systems but it's not going to be some kind of miracle game changer graphically.
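To put rough numbers on this: a quick back-of-envelope sketch (illustrative figures only, using the publicly quoted ~5.5 GB/s raw SSD rate and ~448 GB/s RAM bandwidth as assumptions) shows the SSD stream itself eats only a sliver of RAM bandwidth, but the data still transits RAM, so capacity stays the hard limit.

```python
def ram_bandwidth_share(ssd_gbps: float, ram_gbps: float) -> float:
    """Fraction of RAM bandwidth an SSD stream consumes, counting both
    the write into RAM and the later read-back by the GPU."""
    return (2 * ssd_gbps) / ram_gbps

# Assumed ballpark figures, not official measurements:
share = ram_bandwidth_share(5.5, 448.0)
print(f"SSD stream uses ~{share:.1%} of RAM bandwidth")
```

So the bandwidth hit is small; the real constraint the SSD cannot remove is that the working set at any instant still has to fit in RAM.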
 
I have to say I’m very interested in whether we get half step consoles again. I think the SSDs are going to expose some new system bottlenecks not previously reckoned with.
 
Which in turn is a better solution also.

Hardly, both have identical implementations of "mesh shaders". I'm really starting to hate that term, since people are using it to describe a hardware implementation rather than a PC API hardware abstraction. The "mesh shaders" in the D3D12/Vulkan extensions that people keep speaking of do not map exactly to the console implementations.

It would probably be better from this point onwards to describe the console hardware implementation of "mesh shaders" as "primitive shaders" or "NGG" (next generation geometry engine) ...
 
Yes. I was just pointing out that any advantage the XBSX has from the CPU will apply to cross-platform as well as exclusive titles, whereas Sony's advantage is more likely only for exclusives. CPU utilisation will scale naturally on modern engines, whereas first-gen SSD-streaming titles will likely just cap at the lowest common denominator.

Not at all; it's the opposite, in fact. Initial games will all target the CPU of the lowest common denominator, the PS5, meaning only XSX exclusives will see any benefit, probably from the 3.8 GHz mode. And even later, when hyperthreading grows more common, a 5.x% difference won't mean much. Meanwhile, sustained streaming will probably be capped to the lowest common denominator, but burst streaming for game startup, fast travel in open worlds, etc. has zero reason to be capped. Thus we can expect the PS5 to do these things more than twice as fast in all titles from the very beginning.
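The "more than twice as fast" burst claim is easy to sanity-check with the quoted raw SSD rates (5.5 GB/s vs 2.4 GB/s; the 8 GB working-set size below is a made-up example):

```python
def load_seconds(working_set_gb: float, ssd_gbps: float) -> float:
    """Idealized burst load time: working set / raw SSD throughput."""
    return working_set_gb / ssd_gbps

# Hypothetical fast-travel burst pulling 8 GB into RAM:
for name, rate in [("PS5", 5.5), ("XSX", 2.4)]:
    print(f"{name}: {load_seconds(8.0, rate):.2f} s")
```

This ignores decompression, seek patterns, and CPU overhead, but the raw ratio (5.5/2.4 ≈ 2.3x) is where the "more than twice as fast" figure comes from.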
 
If we define next generation (after PS5) as some new technology or performance class the previous gen cannot achieve, then I find it difficult to imagine there being a next gen. Just more of the same, in incremental steps as large or as small as manufacturing technology allows. Simply adding compute units is not going to make a very big difference; the previous gen could run the same thing at lower resolution and framerate. In other words, we could now be stuck with a mobile-phone-like upgrade cycle.

The only really disruptive thing I can imagine shaking things up is machine learning, and very, very heavy use of it. Of course the cloud can also be disruptive, but that is a whole other story.
 
https://www.chiphell.com/forum.php?mod=viewthread&tid=2201057&page=2&mobile=2#pid44551026

From Zoo; I think it's AquariusZi, but on Chiphell.

The PS5 clocks very high, and therefore the price is high and yields are bad. A move to counter the 12 TF from the XSX.
Why do you think Zoo is AquariusZi?


I would still have much preferred 40-48 CUs @ a solid 2.1 GHz or something.
Mark Cerny specifically mentioned an example of 36 CUs vs. 48 CUs. IMO maybe this is how they decided on 36 CUs.

But I still wonder how much 2.23 GHz for the other GPU parts would help compensate for the fewer CUs.
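The trade-off can be made concrete with the standard peak-FP32 formula (CUs × 64 SIMD lanes × 2 FLOPs per FMA × clock). This is a sketch of the arithmetic, not a performance claim; the 48-CU "parity clock" below is a derived hypothetical, not any real part:

```python
def tflops(cus: int, ghz: float, lanes: int = 64, flops_per_cycle: int = 2) -> float:
    """Peak FP32 throughput in TFLOPS: CUs x SIMD lanes x FMA FLOPs x clock."""
    return cus * lanes * flops_per_cycle * ghz / 1000.0

print(f"36 CU @ 2.23 GHz: {tflops(36, 2.23):.2f} TFLOPS")
# Clock a hypothetical 48-CU part would need for identical peak FLOPS:
parity_ghz = 2.23 * 36 / 48
print(f"48 CU parity clock: {parity_ghz:.2f} GHz")
```

The point of the narrow-and-fast argument is that at FLOPS parity, the fixed-function parts (rasterizers, command processor, caches) on the 36-CU design still run ~33% faster, since those scale with clock rather than CU count.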
 
I don't know if Cerny gave a specific reason why they chose the rate they settled on.
On the other hand, Microsoft has an API and hardware that tries to track what assets are in demand in hopes of more intelligently selecting what needs to load, so maybe it's possible to get similar results with less raw bandwidth?

Their stated goal is basically to never be loading for longer than 1 second. This is, in effect, the complete elimination of load times. And the PS5 appears to be doing most of the same asset tracking the Xbox Series X promised. I've been saying this for a while: MS aimed to reduce load times; Sony wanted them gone altogether.

The PS5's SSD speed advantage will mostly be beneficial to exclusives.

The XSX's CPU advantage will mostly be beneficial to exclusives.

But the SSD advantage is huge, and the CPU advantage is minuscule.

In fact, Mark Cerny highly praises the narrow-and-fast approach. How do you compare PS5 and XSX game performance if the PS5 GPU is at 2.23 GHz with a faster front end?

I would not take it as a given at this point that there will even be resolution differences between XSX and PS5. I think a lot of the differences will end up being a wash for most games, especially as reconstruction techniques will probably be the norm for demanding games.
 
Geometry engines have been in GCN cards for years. Mesh shaders are an RDNA2-only thing, as they seem to be a step above the primitive shaders in RDNA1. @3dilettante help!
Cerny called primitive shaders one of the uses of the geometry engine. @29:30
It looks like he just described an RDNA2 feature. Any time it's a Sony addition he calls it custom, and when it's from AMD he calls it a new feature for the PS5.
 
Mark Cerny specifically mentioned an example of 36 CUs vs. 48 CUs. IMO maybe this is how they decided on 36 CUs.

But I still wonder how much 2.23 GHz for the other GPU parts would help compensate for the fewer CUs.
I remember MS saying this exact same thing last gen, when weighing upping their CU count to 14 against the clock boost they ended up doing. Supposedly the clock boost gave them better results. Not sure if that was really the case, but I don't know.
 
Microsoft explicitly mentioned "patented VRS" in their Series X press releases. Given RDNA2, it should obviously also be in the PS5, maybe as a different implementation under a different name, etc., and this silence is to avoid waking some corporate lawyers.
 
Microsoft explicitly mentioned "patented VRS" in their Series X press releases. Given RDNA2, it should obviously also be in the PS5, maybe as a different implementation under a different name, etc., and this silence is to avoid waking some corporate lawyers.
Their patent mentions an “application-driven” solution.
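For readers unfamiliar with the term: the core idea of variable rate shading is to shade low-detail screen regions at a coarser rate. A toy sketch of that concept (the contrast heuristic and thresholds here are invented for illustration; real implementations use hardware tile rates and application- or image-driven heuristics):

```python
def shading_rate(tile_luma: list) -> str:
    """Pick a coarser shading rate for low-contrast screen tiles."""
    contrast = max(tile_luma) - min(tile_luma)
    if contrast < 0.05:
        return "2x2"   # shade once per 2x2 pixel quad
    if contrast < 0.2:
        return "2x1"   # halve the rate horizontally
    return "1x1"       # full rate where detail is high

print(shading_rate([0.50, 0.51, 0.50, 0.52]))  # flat tile -> "2x2"
print(shading_rate([0.10, 0.80, 0.30, 0.95]))  # detailed tile -> "1x1"
```

An "application-driven" solution, as in the patent language above, would mean the game itself supplies the per-region rates rather than deriving them from image analysis.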
 