General Next Generation Rumors and Discussions [Post GDC 2020]

I read over some of the interview and it comes across as if perhaps he hasn't run production code yet; I'm unsure. It does sound like he may have had access to PS5 but not yet XSX.

Anyway, we will know the truth in 6 months. DF does a better job of proving things imo; at least they do the basic groundwork to test claims. In this case Richard has done a good job with RDNA 1 showcasing the performance difference between adding CUs vs raising clocks. I can't get a read on whether this engineer actually has hands-on experience.

The guaranteed speed for XSX is 2.4GB/s.
MS did not provide an optimal number, nor did they present a peak number. They only provided the guaranteed number, which may also be their fastest number; that is likely the case. This is in line with choosing fixed clocks, and in line with openly showing their split memory pool. They put it out there transparently for developers to weigh in on, so they were bound to get some negative feedback from developers.

If enough developers cry about it enough, just do the simple thing and make the remaining 4 chips 2GB and be done with it (which I believe is the devkit setup).
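For anyone following the split-pool point, here is a rough back-of-the-envelope sketch in Python, assuming the reported layout (320-bit bus, 14Gbps GDDR6, six 2GB chips plus four 1GB chips); it reproduces the stated 560GB/s and 336GB/s figures and shows why swapping the four 1GB chips for 2GB parts would leave one uniform pool:

```python
GBPS_PER_PIN = 14                            # GDDR6 data rate per pin
BITS_PER_CHIP = 32                           # each chip is a 32-bit channel
chip_bw = GBPS_PER_PIN * BITS_PER_CHIP / 8   # 56 GB/s per chip

chips_gb = [2] * 6 + [1] * 4                 # reported capacities per chip

# The first 1GB of every chip can be interleaved across all ten channels.
fast_pool_gb = len(chips_gb)                 # 10 GB
fast_pool_bw = chip_bw * len(chips_gb)       # 560 GB/s

# The remaining capacity only exists on the 2GB chips, so it is six channels wide.
slow_pool_gb = sum(c - 1 for c in chips_gb)                 # 6 GB
slow_pool_bw = chip_bw * sum(1 for c in chips_gb if c > 1)  # 336 GB/s

print(fast_pool_gb, fast_pool_bw, slow_pool_gb, slow_pool_bw)

# If the four 1GB chips were swapped for 2GB parts (the suggestion above),
# all 20GB would sit behind all ten channels: one uniform 560 GB/s pool.
```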
 
It might not be getting carried away, but you are comparing the same architecture and iteration of chip (XSX and PS5) to Vega '19 and '20 (where the '20 part received huge upgrades over the previous one).

PS5 and XSX are literally apples to apples, based on the same GPU arch and the same CPU arch.
Yes, but I am specifically responding to this context:

to increase performance on RDNA you are better off with more CUs rather than clocking so much higher. It is harder to gain more performance with clock increases than with CU increases.

The reason Sony is able to hit those frequencies is not that they have a completely different chip building block, but how they approached the TDP vs clock trade-off. They capped the TDP and let the chip run as fast as it can as long as it stays inside that power limit; XSX runs like every console until now - size the cooling for the worst-case scenario and fix the clocks.
This is your speculation, and my whole point is that RDNA 2 would have different performance characteristics. e.g. a different optimal v/f curve, architectural changes to make it respond better to clock scaling.
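To make the two strategies concrete, here is a toy sketch in Python with entirely made-up numbers (and power treated as linear in clock for simplicity, when in reality it scales superlinearly via voltage), contrasting a power-capped variable clock with a fixed clock sized for the worst case:

```python
POWER_BUDGET_W = 200    # hypothetical SoC power budget
W_PER_GHZ = 85          # hypothetical power per GHz at "nominal" workload intensity
MAX_CLOCK_GHZ = 2.23

def variable_clock(workload_intensity):
    """Cap the power, let the clock float up to a maximum (the PS5-style approach)."""
    affordable = POWER_BUDGET_W / (W_PER_GHZ * workload_intensity)
    return min(MAX_CLOCK_GHZ, affordable)

def fixed_clock(worst_case_intensity=1.3):
    """Traditional console approach: pick one clock that fits even the worst case."""
    return min(MAX_CLOCK_GHZ, POWER_BUDGET_W / (W_PER_GHZ * worst_case_intensity))

for intensity in (0.8, 1.0, 1.3):   # light, typical, pathological workload
    print(intensity, round(variable_clock(intensity), 2), round(fixed_clock(), 2))

# The variable clock sits at its cap for light and typical loads and only drops for
# the pathological case; the fixed clock is dictated by that pathological case always.
```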
 
We don't know it as there is no big rdna1 gpu (40 vs 36 is not 52 vs 36)
But there are 22CU RDNA and 40CU RDNA chips, and scaling between them is perfectly in line. Look, what the Crytek guy said has tons of holes that anyone here can pinpoint. Cerny's comparison between a 36CU and a 48CU chip is on point, but mostly because the 48CU chip is clocked well below the frequency sweet spot. Frequency won't scale indefinitely, and his comparison was used as a PR bullet point against a competitor with the higher number.

If people expect raw power and BW not to matter on the same arch, because API and tools will give more bang for the buck, you only have to look at XBX and Pro to see that's probably gibberish. No, MS won't leave 20% performance on the table and an underutilized GPU due to a shite API. Hasn't happened yet, and it won't happen.
 
Yes, but I am specifically responding to this context:

This is your speculation, and my whole point is that RDNA 2 would have different performance characteristics. e.g. a different optimal v/f curve, architectural changes to make it respond better to clock scaling.
Agreed, but MS should have come to similar conclusions, since they have the same chip. They would have done a fair bit of testing as well, and they went with a different CU count. It's clear that XSX is a complete departure from their older technologies and BC was not a constraining factor for them, while just looking at the BC modes it's clear there are some things Sony had to concede on to support BC.
 
But there are 22CU RDNA and 40CU RDNA chips, and scaling between them is perfectly in line. Look, what the Crytek guy said has tons of holes that anyone here can pinpoint. Cerny's comparison between a 36CU and a 48CU chip is on point, but mostly because the 48CU chip is clocked well below the frequency sweet spot. Frequency won't scale indefinitely, and his comparison was used as a PR bullet point against a competitor with the higher number.

If people expect raw power and BW not to matter on the same arch, because API and tools will give more bang for the buck, you only have to look at XBX and Pro to see that's probably gibberish. No, MS won't leave 20% performance on the table and an underutilized GPU due to a shite API. Hasn't happened yet, and it won't happen.
22CU and 40CU are both small. As I said, there is no big Navi GPU yet (and we remember how problematic big chips were for the Vega arch; Navi is surely better in this respect, but we don't know how much better).
 
This is your speculation, and my whole point is that RDNA 2 would have different performance characteristics. e.g. a different optimal v/f curve, architectural changes to make it respond better to clock scaling.
And you speculate it will change with RDNA 2 as well. It might, but so far RDNA 1 scaling points to worse scaling with frequency than with CU count. It might scale better, worse or similarly, but the difference won't be 20%. His entire point is:

that the XSX GPU will run below its theoretical TF (quite possible) and the PS5 will run above its theoretical TF (actually physically impossible). Comparing this to the PS3 and 360 generation has absolutely zero relevance; those were completely different consoles with chips made by different vendors and different memory arrangements. PS5 and XSX are quite simply closer to one another in architecture than Pro and XBX were.
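For reference, the ~20% figure that keeps coming up in this thread is just the paper arithmetic from the announced specs; a quick Python sketch (which says nothing about how either chip scales in practice):

```python
def fp32_tflops(cus, clock_ghz, alus_per_cu=64, flops_per_clock=2):
    # standard peak-FP32 formula: CUs x ALUs per CU x 2 FLOP/cycle x clock
    return cus * alus_per_cu * flops_per_clock * clock_ghz / 1000

ps5 = fp32_tflops(36, 2.23)    # ~10.3 TF, and the variable clock makes this a ceiling
xsx = fp32_tflops(52, 1.825)   # ~12.2 TF at a fixed clock
print(f"{ps5:.2f} vs {xsx:.2f}: {(xsx / ps5 - 1) * 100:.0f}% gap on paper")   # ~18%
```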
 
22CU and 40CU are both small. As I said, there is no big Navi GPU yet (and we remember how problematic big chips were for the Vega arch; Navi is surely better in this respect, but we don't know how much better).
Vega was very much front-end limited; bigger chips tilted its already heavy compute bias even further.

Nvidia cards scale very nicely with even higher CU counts than XSX, even though they are PC parts and not specifically designed around their own low-level API. I have zero doubt that MS will saturate a 52CU chip and won't leave 20% performance on the table.
 
Vega was very much front-end limited; bigger chips tilted its already heavy compute bias even further.

Nvidia cards scale very nicely with even higher CU counts than XSX, even though they are PC parts and not specifically designed around their own low-level API
Yep, but AMD is not Nvidia, and we have to wait for big Navi to know if they improved in this area.
 
And you speculate it will change with RDNA 2 as well. It might, but so far RDNA 1 scaling points to worse scaling with frequency than with CU count. It might scale better, worse or similarly, but the difference won't be 20%.
Two points:

1. It is AMD who made claims about pushing clocks upwards and more logic/physical optimisation. Logically speaking, both would lead to changes in performance characteristics, regardless of being good or bad.

2. Given (1), I am merely commenting that how performance responds to frequency scaling may not translate between RDNA 1 and RDNA 2.

So please consider not putting words in others' mouths to get your points across. I never disputed that the Xbox Series X SoC would have higher GPU performance in the first place; it does have more compute power, after all. :-?
 
Two points:

1. It is AMD who made claims about pushing clocks upwards and more logic/physical optimisation. Logically speaking, both would lead to changes in performance characteristics, regardless of being good or bad.

2. Given (1), I am merely commenting that how performance responds to frequency scaling may not translate between RDNA 1 and RDNA 2.

So please consider not putting words in others' mouths to get your points across. I never disputed that the Xbox Series X SoC would have higher GPU performance in the first place; it does have more compute power, after all. :-?
Yes, but pushing clocks higher has another benefit - a smaller, cheaper die (fewer mm2) for the same throughput - so even if the net gain from frequency scaling were slightly worse, it would likely still be a big net benefit for AMD (or Sony/MS).

I am only doing this because you went with a comparison of completely different chips, Vega '19 and '20, while the building block for XSX and PS5 is the same.
 
Cerny's example was comparing 10TF to 10TF and different ways of getting there, not saying that 10TF is better than 12TF.
It's from their benchmarks.

The same way MS said that clocking higher was better than enabling the disabled CUs on the XO.

But MS could easily come out and say that, based on their benchmarks and what devs want to do in the future (e.g. more compute/RT), having more raw TF regardless of how you get it is what matters, as long as the front end is fast enough and the bandwidth is adequate.

Neither company would be wrong; just remember it's from their own perspective.
 
there is a difference between scaling and scaling 1:1 ;)
That is true. The problem is, we know Sony was likely limited by BC when designing the chip and MS wasn't, so for their performance targets they likely made compromises either way. Whether frequency scales 1:1 up to 2.2GHz, better than additional CUs, or worse, we don't know. What we do know is that it's not going to scale well enough to overturn the 2TF and BW advantage XSX has.

It's easy to say narrow and fast is better when you are Cerny, but that was a choice they made due to other requirements. MS went with 8GB of DDR3 and tried to explain eSRAM and higher clocks, but in vain.
 
That is true. The problem is, we know Sony was likely limited by BC when designing the chip and MS wasn't, so for their performance targets they likely made compromises either way. Whether frequency scales 1:1 up to 2.2GHz, better than additional CUs, or worse, we don't know. What we do know is that it's not going to scale well enough to overturn the 2TF and BW advantage XSX has.
It won't overturn it, but the difference could end up even smaller than the already small ~20%.
 
7.4GB/sec. Random read though...
I apologise if I interpreted that wrong. I assumed you were making an argument in support of PS5's greater speed, on account of you not actually posting any argument and leaving it to me to flippin' guess what point you were trying to make, like half the other lazy-arse posters in this thread posting one-line facts.
 
I apologise if I interpreted that wrong. I assumed you were making an argument in support of PS5's greater speed, on account of you not actually posting any argument and leaving it to me to flippin' guess what point you were trying to make, like half the other lazy-arse posters in this thread posting one-line facts.
I still don't understand what he wrote. Do we have official numbers for PS5 random reads on their SSD?
 
He's (apparently) talking about moving between multiple games in less than a second. That means saving up to around 13GB that's currently in RAM, and reading about 13GB back into it.

So he's either breaking NDA, revealing a Sony OS feature they've kept secret, and revealing SSD performance beyond anything Cerny talked about ... or he's repeating things (of questionable accuracy) he's picked up from the internet.
Isn't this impossible, though? I know we have a 22GB/s theoretical best case with maximum compression, but this would be writing 13GB and reading 13GB, a total of 26GB, in less than a second? Hasn't it been confirmed that there is no hardware compressor, only a decompressor? That would mean it would take more than 2 seconds just to write what's in memory to the SSD at 5.5GB/s.
To be fair, we've only seen this in practice with Xbox One games played on Series X, and they were running the One X code path, so we can assume they use 10GB or less of RAM. That means Series X is writing at most 10GB and reading at most 10GB in the 6-8 seconds shown in the demos - roughly 3.3GB/s effective if my math is right, though it's possible not all games have memory fully saturated.
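Putting rough numbers on that (the 13GB and 10GB snapshot sizes and the 6-8 second demo figure come from the posts above, not from any spec sheet), a small Python sanity check:

```python
def seconds(gigabytes, gb_per_s):
    return gigabytes / gb_per_s

# Claimed PS5 "swap games in under a second": write ~13GB out, read ~13GB back in.
# With no hardware compressor, the write side is bound by the 5.5 GB/s raw figure.
write_s = seconds(13, 5.5)          # ~2.4s just for the write
read_s = seconds(13, 5.5)           # ~2.4s raw; compression can only speed up the read
print(round(write_s + read_s, 1))   # ~4.7s, nowhere near "less than a second"

# Series X Quick Resume as demoed: ~10GB out plus ~10GB back in over ~6 seconds.
print(round(20 / 6, 1))             # ~3.3 GB/s effective, matching the estimate above
```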

From what I've been told, games are typically built around the PS4 (the lead platform), with X1 versions scaled back accordingly.
Recent releases with terrible Xbox One/S performance would anecdotally confirm this.

How can you possibly compare a typical block device on a Phison controller with no cache (which is what the XBSX has) to the highly custom full-stack solution the PS5 has?
For starters, I would need to see the PS5 drive in action. We've seen Series X running Xbox One games, and we have seen loading times, Quick Resume, etc. How can anyone compare a demonstrated system component based on existing, previously benchmarked technology with technology that, at this moment, doesn't exist in a shipping product?

Yeah, if only Xbox One were the lead platform and it could be optimized as well as the PS4 - the better CPU and higher clocks would probably pull it ahead.

Seriously, we've been having this talk here since 2005. There is no way that hundreds of titles released on both PS4 Pro and XBX are unoptimized on the platform with 2x the userbase and that this is the reason for its lower performance.

There is no way it could actually be that the platform with the better GPU, better CPU and more BW is simply more capable of delivering better performance.
Doom 64 runs at 1440p on Xbox One S, but only 1080p on PS4. One S confirmed more powerful because of its higher clockspeed :p.
 
The article is gone as well. Probably he shared more information than his NDA allowed him to.
Or he's speaking from his own opinions and biases, but it came across as him speaking on behalf of the company, and his company shut it down. It didn't read like a standard developer interview; he said some very tasteless things that can't be proven. I believe he ended by saying that no one could optimize XSX better than PS5. I suspect that statement will be proven wrong at launch.

Regardless, developers can be biased as well, and neutral devs who do provide information can also be dismissed as biased by fans.
 