General Next Generation Rumors and Discussions [Post GDC 2020]

Reading through all the comments here, it seems like everyone has their own interpretation of the released specs, whether that interpretation is right or wrong. It's like, I don't even know whose words to stick to any more :LOL:.
 
You are saying a raw deficit can be overcome with API and tools, when this was not the case with XBX and PS4 Pro. If PS5 will overcome less BW and 2TF less by having a better API, why didn't Pro do the same?
My understanding is it's due to a list of factors including a better low-level API, a single pool of RAM with not-so-bad bandwidth, a much faster GPU clock, better utilization of GPU/CPU resources from the variable clock, and possible SSD assistance, all working in tandem to mitigate the TF gap. It's certainly not a clear-cut disadvantage like how Pro is to 1X; there are so many more differences here it's not funny.
 
None of which makes sense, like his comments on XSX memory being the "same mistake" as the X1 eSRAM.

I don't think anyone actually believes what he wrote ;)

Sounds like it's gonna be a Vega 56 vs Vega 64 situation, where the higher CU count gives a disproportionately small increase in speed, while he believes the much higher GPU clock frequency matters more in relative comparison. Also, a split RAM pool holding back maximum efficiency does sound obvious; 10GB of fast RAM is not gonna be enough for next gen, especially at 4K. PS5's API also sounds like magic pudding compared to DirectX on XSX. Wouldn't be surprised if the multiplat difference is so minimal DF would need extra magnifiers to determine the pixel count. I'll say 1900p-2050p on PS5 vs 2160p on XSX to be the norm, with some exceptions depending on dev skills, resources etc. The real difference is gonna be in the exclusives, and I can see PS5's SSD being utilized in ways to increase visuals most 3rd parties could only dream of.
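To put rough numbers on the Vega 56 vs 64 point (a quick Python sketch; the boost clocks are illustrative, not exact): theoretical FP32 throughput is just shader count × 2 ops × clock, which is why a modest overclock on the narrower card nearly closes the gap.

```python
# Theoretical FP32 TFLOPs = shader ALUs * 2 ops/clock (FMA) * clock / 1000
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000

print(f"Vega 56 stock (3584 @ 1.47GHz): {tflops(3584, 1.47):.1f} TF")
print(f"Vega 64 stock (4096 @ 1.55GHz): {tflops(4096, 1.55):.1f} TF")
print(f"Vega 56 OC    (3584 @ 1.70GHz): {tflops(3584, 1.70):.1f} TF")
# A modest overclock on the narrower card nearly closes the stock gap.
```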

Like in the DF video on the PS5: to increase performance on RDNA you are better off with more CUs rather than clocking so much higher. It is harder to gain more performance with clock increases than with CU increases.


PS5's lower BW might actually bottleneck it at some point. Differences in multiplat games could be higher settings, more stable framerates and resolution. On the SSD, MS guarantees a sustained rate, whereas Sony again has variable rates (good for PR). According to some sources, BCPack will largely mitigate the difference, maybe even favoring XSX. In exclusives we will see the advantage of a considerably more powerful GPU (2 to 3TF difference), a faster CPU with no downclocks and a possible 3.8GHz, higher BW, and an SSD that's close; XSX exclusives might outshine PS5's.

This guy sounds like an absolute beginner; how did he land a job at Crytek? Is the situation that dire there?

True, but devs that claimed PS5 had heat issues were dismissed at once. I think that you put faith in what you want to believe, no matter how much fantasy it is.

I have one issue with "TFLOPs are only a theoretical metric": most take it as "the stronger console cannot even use its TFs to the full", but somehow the weaker one can?

It's explained in the above DF video; it's the opposite, XSX has a better chance to achieve its full potential.

--------- merged post for "devs" rumor mill -----

Didn't see your post; it appeared while I was posting the above.


 
It's explained in the above DF video; it's the opposite, XSX has a better chance to achieve its full potential.
Nice of you to omit that he said that's only one example and it can vary from case to case depending on what the tasks are.
 
According to some sources, BCPack will largely mitigate the difference, maybe even favoring XSX. In exclusives we will see the advantage of a considerably more powerful GPU (2 to 3TF difference), a faster CPU with no downclocks and a possible 3.8GHz, higher BW, and an SSD that's close; XSX exclusives might outshine PS5's.

I really doubt this. There's going to be a large number of assets that do not compress at all with the BC class of compression algorithms, and I also doubt that BCPack increases compression by ~100% compared to the already existing BC algorithms.
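To illustrate why incompressible assets cap the average (a sketch with a made-up asset mix, nothing to do with actual BCPack figures): the effective ratio over a package is total uncompressed bytes over total compressed bytes, which behaves like a harmonic mean, so a chunk of already-compressed data drags the average down hard.

```python
# Effective compression over a mixed package behaves like a harmonic mean:
# ratio = total_uncompressed / total_compressed.
def effective_ratio(asset_mix):
    """asset_mix: (fraction_of_package, compression_ratio) pairs."""
    compressed = sum(frac / ratio for frac, ratio in asset_mix)
    return 1.0 / compressed

# Hypothetical mix: 60% textures at 2.5:1, 40% already-compressed
# audio/video that gains nothing (1:1).
mix = [(0.6, 2.5), (0.4, 1.0)]
print(f"Effective ratio: {effective_ratio(mix):.2f}:1")  # ~1.56:1, not 2.5:1
```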
 
Sure I do; both consoles will hit a RAM bottleneck at some point, but it seems like he's just so unhappy with that split pool.
It's a fair position. Bottlenecks are bottlenecks. A truck has 1000 HP, but a smart car can beat it off the line. Same idea. If the pressure on Xbox is that the split pool is really terrible, then it's on MS to address this.

But once again, I'd question whether this is true or just conjecture; MS should know and should have tested this.
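A toy model of the split-pool concern (assumption: a simple weighted average of the two XSX pool speeds by traffic share; real GDDR6 arbitration is more complicated than this): effective bandwidth falls toward PS5's unified 448 GB/s as more traffic lands in the slow 6GB pool.

```python
# Naive weighted-average model of XSX's split memory pools -- real GDDR6
# arbitration is more complicated, so treat this as a sketch only.
FAST_BW, SLOW_BW = 560.0, 336.0   # GB/s: the 10GB and 6GB XSX pools
PS5_BW = 448.0                    # GB/s: PS5's unified 16GB pool

for slow_share in (0.0, 0.25, 0.5, 0.75):
    effective = (1 - slow_share) * FAST_BW + slow_share * SLOW_BW
    print(f"{slow_share:>4.0%} of traffic in slow pool -> ~{effective:.0f} GB/s "
          f"(PS5 unified: {PS5_BW:.0f} GB/s)")
```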
 
Pro and One were limited by their vanilla versions.
The most probable scenario being: optimize everything for XOne, because it's the slowest and hardest to optimize for. Do as little as possible for any other platform to save costs.
Or, you know, Occam's razor: the stronger console is simply stronger. More TFs and more BW with the same architecture result in better performance. Who would have thought it.
 
5.5GB/s raw (vs 2.4GB/s on XSX)
8-9GB/s compressed (vs 4.8GB/s on XSX)

That's more like it, but according to Windows Central, BCPack will mitigate that difference largely. The SSDs might perform closer to each other than some believe.
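Running the quoted numbers through the same arithmetic (a sketch; the "implied ratios" are just the quoted compressed figures divided by the raw ones, not vendor-stated averages) shows how much lifting BCPack would need to do for the drives to actually meet:

```python
# Effective SSD throughput = raw rate * average compression ratio.
ps5_raw, xsx_raw = 5.5, 2.4      # GB/s raw, per the quoted specs
ps5_typical = (8 + 9) / 2        # midpoint of Sony's 8-9 GB/s figure
xsx_typical = 4.8                # MS's quoted compressed figure

print(f"PS5 implied ratio: {ps5_typical / ps5_raw:.2f}:1")  # ~1.55:1
print(f"XSX implied ratio: {xsx_typical / xsx_raw:.2f}:1")  # 2.00:1
# Average ratio BCPack would need for XSX to match PS5's typical figure:
print(f"Ratio needed to match PS5: {ps5_typical / xsx_raw:.2f}:1")  # ~3.54:1
```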

Nice of you to omit that he said that's only one example and it can vary from case to case depending on what the tasks are.

It goes to show that it's not as simple as you believe it is: in some situations higher clocks might be better for achieving full potential, in others going wider is. He does seem to be implying that, in general, wider is the better way; that's why most high-end GPUs don't go narrow and insanely fast.


And the 5700 series is RDNA 1.0 btw. Meaningless comparison.

If it was meaningless, DF wouldn't have mentioned it in PS5 tech coverage. Also, there would have to be enormous changes in design for that logic to change, and it doesn't explain XSX taking a wide approach.
Furthermore, XSX's GPU is rather highly clocked too at 1800+ MHz; it's not a low-clocked GPU by any means. 2GHz sustained is already a problem for the PS5 GPU, and that's a 36CU variant.
 
RDNA1 does not scale better with frequency. It scales pretty similarly, but not better (a bit worse, actually).

There are a few benchmarks with the 5700 clocked at 2100MHz against a stock XT, and the stock XT beats it in all games by a bigger margin than in DF's test; but in DF's test we know 100% the clocks were locked for both, while here the 5700 might have downclocked.

This notion that higher frequency and API will eradicate a 2TF advantage and more BW is weird to me. This has never been proven on the same architecture, so why should we buy it?
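For what it's worth, the raw math says those two cards should be nearly tied (sketch below; the stock XT boost clock is an approximation), which is what makes that benchmark result telling: if the stock XT still wins clearly, frequency is buying less than the TF numbers suggest.

```python
def tflops(cus, clock_ghz):
    # RDNA: 64 FP32 lanes per CU, 2 ops per clock (FMA)
    return cus * 64 * 2 * clock_ghz / 1000

print(f"5700   (36CU @ 2.100GHz): {tflops(36, 2.100):.2f} TF")  # ~9.68
print(f"5700XT (40CU @ ~1.90GHz): {tflops(40, 1.900):.2f} TF")  # ~9.73
# Near-identical theoretical throughput, yet the wider card wins in practice.
```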
 
Like in the DF video on the PS5: to increase performance on RDNA you are better off with more CUs rather than clocking so much higher. It is harder to gain more performance with clock increases than with CU increases.
AMD's official claim for RDNA 2 is a 50% improvement in perf-per-watt. That is a product of RDNA 2 pushing the operating frequency higher while improving physical design and perf-per-clock. We also have the Renoir APU, which has 27% fewer CUs, is clocked higher and redesigned for 7nm, and still performed way better than its 11CU predecessor (alongside the doubled CPU core count).
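The physics behind the wide-vs-fast trade-off, as a rough sketch (the voltage figures are illustrative guesses, not measured values): dynamic power scales roughly with C·V²·f, and higher clocks need higher voltage, so a narrow-and-fast chip pays a quadratic penalty per unit of throughput.

```python
# Dynamic power ~ capacitance * V^2 * f; capacitance grows with CU count.
# Voltages below are illustrative guesses, not measured values.
def relative_power(cus, clock_ghz, volts):
    return cus * volts**2 * clock_ghz

def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

narrow = (36, 2.230, 1.20)   # fewer CUs, high clock, higher voltage
wide   = (52, 1.825, 1.00)   # more CUs, lower clock, lower voltage

for name, (cus, f, v) in (("narrow", narrow), ("wide", wide)):
    print(f"{name}: {tflops(cus, f):.1f} TF, "
          f"{tflops(cus, f) / relative_power(cus, f, v):.3f} TF per power unit")
```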
 
That's not how it works. Otherwise nobody would ever have moved on from DX9 (which NV wanted with all their heart).
Not sure what you are arguing about.

XBX's advantage in TFs/BW was replicated (and sometimes exceeded) in real life. It even had 50% less pixel fillrate, yet it didn't matter.

To say it's because everyone was limited by the XBone, and that's the reason, is bogus. Occam's razor means all 4 consoles performed roughly where their specs put them. Same architecture, same vendor, different power = expected difference.
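On the fillrate aside: pixel fillrate is just ROPs × clock (sketch below; the ROP counts and clocks are the commonly cited figures, so treat them as assumptions). One X had half the ROPs of Pro and its higher clock only partially compensated, yet it still came out ahead overall.

```python
# Pixel fillrate = ROPs * clock (Gpix/s). Figures are commonly cited specs.
def fillrate_gpix(rops, clock_ghz):
    return rops * clock_ghz

one_x = fillrate_gpix(32, 1.172)   # Xbox One X
pro   = fillrate_gpix(64, 0.911)   # PS4 Pro
print(f"One X: {one_x:.1f} Gpix/s vs Pro: {pro:.1f} Gpix/s "
      f"(One X at {one_x / pro:.0%} of Pro)")
```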
 
There are conditions for PS5's read speed too - it doesn't hit 22 GB/s all the time; it'll be a fringe case (otherwise Cerny would have said 22 GB/s instead of 8 to 9, wouldn't he, or even 10 GB/s because those faster read events push the average up).

For all intents and purposes, it's 4.8 GB/s versus 9 GB/s as far as the devs are concerned. PS5 might get the occasional burst read, but you can't design for it. What's the value in arguing over the minutiae?
Show me a source where MS say 4.8 GB/s is a typical value.
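On the 22 GB/s point: a time-weighted average is dominated by the common case, so rare bursts barely move the typical figure (the duty cycles below are invented for illustration, not measured data).

```python
# Time-weighted average throughput with occasional fast bursts.
typical, burst = 8.5, 22.0            # GB/s; burst shares below are made up
for burst_share in (0.0, 0.02, 0.10):
    average = (1 - burst_share) * typical + burst_share * burst
    print(f"{burst_share:.0%} of time at {burst} GB/s -> "
          f"{average:.2f} GB/s average")
```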
 
AMD's official claim for RDNA 2 is a 50% improvement in perf-per-watt. That is a product of RDNA 2 pushing the operating frequency higher while improving physical design and perf-per-clock. We also have the Renoir APU, which has 27% fewer CUs, is clocked higher and redesigned for 7nm, and still performed way better than its 11CU predecessor (alongside the doubled CPU core count).
Not apples to apples at all. To be apples to apples, it would have to be Renoir at, say, 2TF vs 1.65TF, with the 2TF part having more BW and more CUs at lower clocks.

What you are doing is comparing last year's Vega, with less BW, to this year's. It's not even the same chip.
 
I really doubt this. There's going to be a large number of assets that do not compress at all with the BC class of compression algorithms, and I also doubt that BCPack increases compression by ~100% compared to the already existing BC algorithms.
Thought it was more about packaging than compression? I.e., you can load a partial texture from it instead of the full texture?
That's more like it, but according to Windows Central, BCPack will mitigate that difference largely. The SSDs might perform closer to each other than some believe.
You have to know how to parse what is reported: what is just what they heard from the general web plus conjecture, and what actually comes from their sources/devs.

They've not heard this from their sources as far as I can tell; they're just reporting what is being said on the net.
 
That's exactly my point. Design goals can change, and so can the resulting performance characteristics, so observations won't carry over.


RDNA and RDNA2 aren't even the same iteration of IP. :D
It might not carry over, but you are likening a comparison of the same architecture and chip iteration (XSX and PS5) to Vega '19 vs Vega '20, where the '20 part received huge upgrades over the previous one. That would be like comparing RDNA 1 and RDNA 2, which is not what next gen will be like.

PS5 and XSX are literally apples to apples, based on the same GPU arch and the same CPU arch. The reason Sony is able to hit those frequencies is not that they have a completely different chip building block, but how they approached the TDP vs clock trade-off. They capped the TDP and let the chip run as fast as it can while staying inside that limit; XSX runs like every console until now - size the cooling for the worst-case scenario and fix the clocks.
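A conceptual sketch of the two strategies (invented numbers and a toy power model, not real firmware behaviour): fix the clock and build cooling for the worst case, versus fix the power budget and let the clock float under it.

```python
# Toy model only -- the power function and all numbers are invented.
POWER_BUDGET = 200.0   # watts: the fixed cap a PS5-style design holds
FIXED_CLOCK  = 1.825   # GHz: a worst-case-validated XSX-style clock

def power_draw(clock_ghz, intensity):
    """Power rises with workload intensity and roughly with clock squared."""
    return intensity * clock_ghz ** 2 * 40

def capped_clock(intensity, max_clock=2.23):
    """PS5-style: shave clock until the workload fits the power budget."""
    clock = max_clock
    while power_draw(clock, intensity) > POWER_BUDGET:
        clock -= 0.01
    return clock

for intensity in (0.8, 1.0, 1.2):    # light, typical, pathological workloads
    print(f"intensity {intensity}: XSX-style {FIXED_CLOCK} GHz fixed, "
          f"PS5-style ~{capped_clock(intensity):.2f} GHz")
```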
 
RDNA1 does not scale better with frequency. It scales pretty similarly, but not better (a bit worse, actually).

There are a few benchmarks with the 5700 clocked at 2100MHz against a stock XT, and the stock XT beats it in all games by a bigger margin than in DF's test; but in DF's test we know 100% the clocks were locked for both, while here the 5700 might have downclocked.

This notion that higher frequency and API will eradicate a 2TF advantage and more BW is weird to me. This has never been proven on the same architecture, so why should we buy it?
We don't know that, as there is no big RDNA1 GPU (40 vs 36 is not 52 vs 36).
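Putting numbers on that (the console clocks are the announced figures; the point is just the width ratios): the 5700 pair differs by ~11% in CU count, the consoles by ~44%, so the former can't tell us much about the latter.

```python
def tflops(cus, clock_ghz):
    return cus * 64 * 2 * clock_ghz / 1000

print(f"5700XT vs 5700:  {40 / 36 - 1:.0%} wider")   # ~11%
print(f"XSX vs PS5 (CU): {52 / 36 - 1:.0%} wider")   # ~44%
print(f"PS5: {tflops(36, 2.23):.2f} TF, XSX: {tflops(52, 1.825):.2f} TF")
```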
 