> My understanding is it's due to a list of factors including a better low-level API, a single pool of RAM with decent bandwidth, a much faster GPU clock, better utilization of GPU/CPU resources from variable clocks, and possible SSD assistance, all working in tandem to mitigate the TF gap. It's certainly not a clear-cut disadvantage like the Pro is to the 1X; there are so many more differences here it's not funny.

You are saying the raw deficit can be overcome with API and tools, when this was not the case with the XBX and PS4 Pro. If PS5 will overcome less bandwidth and a 2TF deficit by having a better API, why didn't the Pro do the same?
None of which makes sense, like his comments on XSX memory being the "same mistake" as the X1's eSRAM.
Sounds like it's gonna be a Vega 56 vs Vega 64 situation, where the higher CU count gives a disproportionately small increase in speed, while he believes the much higher GPU clock matters more in relative terms. Also, a split RAM pool holding back maximum efficiency does sound obvious; 10GB of fast RAM is not gonna be enough for next gen, especially at 4K. PS5's API also sounds like magic pudding compared to DirectX on XSX. I wouldn't be surprised if the multiplat difference is so minimal DF would need extra magnifiers to determine the pixel count. I'll say 1900p-2050p on PS5 vs 2160p on XSX to be the norm, with some exceptions depending on dev skill, resources, etc. The real difference is gonna be in the exclusives, and I can see PS5's SSD being utilized in ways to increase visuals that most 3rd parties could only dream of.
This guy sounds like an absolute beginner; how did he land a job at Crytek? Is the situation that dire there?
I have one issue with "TFLOPs are only a theoretical metric": most take it as "the stronger console cannot even use its TFs to the full", but somehow the weaker one can?
> I don't think anyone actually believes what he wrote

Nice of you to omit that he said that's only one example and it could vary from case to case depending on what the tasks are.
Like in the DF video on the PS5: to increase performance on RDNA you are better off with more CUs rather than clocking so much higher. It is harder to gain performance with clock increases than with CU increases.
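For reference, the paper TFLOP figures being argued over come from simple arithmetic on the publicly stated CU counts and clocks. A quick sketch (the function name is mine; note this is peak FP32 throughput, and how well clock vs CU increases translate to real performance is exactly what's being debated):

```python
def rdna_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    """Peak FP32 TFLOPs: CUs * shaders per CU * FLOPs per clock * clock (GHz)."""
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

xsx = rdna_tflops(52, 1.825)  # ~12.15 TF, fixed clock
ps5 = rdna_tflops(36, 2.23)   # ~10.28 TF, peak of the variable clock
print(f"XSX {xsx:.2f} TF vs PS5 {ps5:.2f} TF, gap {xsx - ps5:.2f} TF")
```

So the ~1.9 TF paper gap comes from XSX's 44% more CUs outweighing PS5's ~22% higher clock.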
PS5's lower bandwidth might actually bottleneck it at some point. Differences in multiplat games could be higher settings, more stable framerates, and resolution. On the SSD, MS guarantees a sustained rate, whereas Sony again has variable rates (good for PR). According to some sources, BCPack will largely mitigate the difference, maybe even favoring XSX. In exclusives we will see the advantage of a considerably more powerful GPU (a 2 to 3TF difference), a faster CPU with no downclocks and a possible 3.8GHz, higher bandwidth, and an SSD that's close; its exclusives might outshine PS5's.
True, but devs that claimed PS5 had heat issues are being dismissed at once. I think you put faith in what you want to believe, no matter how much fantasy it is.
It's explained in the DF video above; it's the opposite: XSX has a better chance to achieve its full potential.
> Sure I do, both consoles would hit a RAM bottleneck at some point, but it seems like he's so unhappy with that split pool.

It's a fair position. Bottlenecks are bottlenecks. A truck may have 1000 HP, but a smart car can still beat it off the line. Same idea. If the pressure on Xbox is that the split pool is really terrible, then it's on MS to address it.
> Pro and One were limited by their vanilla versions.

Or, you know, Occam's razor: the stronger console is simply stronger? More TFs and more bandwidth with the same architecture result in better performance. Who would have thought it.
The most probable scenario being: optimize everything for the XOne, because it's the slowest and hardest to optimize for. Do as little as possible for any other platform to save costs.
PS5 vs XSX SSD throughput:
5.5 GB/s raw (vs 2.4 GB/s)
8-9 GB/s compressed (vs 4.8 GB/s)
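Those quoted figures imply different effective compression ratios on each platform. A quick sketch (assuming the numbers above; real ratios vary per asset, and the PS5 figure is an average of the quoted 8-9 range):

```python
ps5_raw, ps5_comp = 5.5, (8 + 9) / 2  # GB/s, Kraken figure (midpoint of 8-9)
xsx_raw, xsx_comp = 2.4, 4.8          # GB/s, BCPack/zlib figure

print(f"PS5 implied ratio: {ps5_comp / ps5_raw:.2f}x")        # ~1.55x
print(f"XSX implied ratio: {xsx_comp / xsx_raw:.2f}x")        # 2.00x
print(f"Compressed-throughput gap: {ps5_comp / xsx_comp:.2f}x")  # ~1.77x
```

Which is why the BCPack debate matters: the raw gap is ~2.3x, but the quoted compressed figures already narrow it to under 2x.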
And the 5700 series is RDNA 1.0 btw. Meaningless comparison.
> Like in the DF video on the PS5, to increase performance on RDNA you are better off with more CUs rather than clocking so much higher. It is harder to gain performance with clock increases than with CU increases.

AMD's official claim for RDNA 2 is a 50% perf-per-watt improvement. That is a product of RDNA 2 pushing the operating frequency higher while improving physical design and perf-per-clock. We also have the Renoir APU, which has 27% fewer CUs, clocked higher, redesigned for 7nm, and still performed way better than its 11CU predecessor (alongside the doubled CPU core count).
> That's not how it works. Otherwise nobody would ever go from DX9 (which NV wanted with all their heart).

Not sure what you are arguing about.
> There are conditions for PS5's read speed too - it doesn't hit 22 GB/s all the time; it'll be a fringe case (otherwise Cerny would have said 22 GB/s instead of 8 to 9, wouldn't he, or even 10 GB/s, because those faster read events push the average up).

Show me a source where MS says 4.8 GB/s is a typical value.
For all intents and purposes, it's 4.8 GB/s versus 9 GB/s as far as devs are concerned. PS5 might get the occasional burst read, but you can't design for it. What's the value in arguing over the minutiae?
> AMD's official claim for RDNA 2 is to improve perf-per-watt by 50%. That is a product of RDNA 2 pushing the operating frequency higher, while improving physical design and perf-per-clock. We also have the Renoir APU, which has 27% fewer CUs, clocked higher, redesigned for 7nm, and still performed way better than its 11CU predecessor (alongside the doubled CPU core count).

Not apples to apples at all. To be apples to apples it would have to be Renoir with, say, 2TF vs 1.65TF, with the 2TF part having more bandwidth and more CUs at lower clocks.
> I really doubt this. There's going to be a large number of assets that do not compress at all with the BC class of compression algorithms, and I also doubt that BCPack increases compression by ~100% compared to the already existing BC algorithms.

Thought it was more about packaging than compression?
> That's more like it, but according to Windows Central, BCPack will largely mitigate that difference. The SSDs might perform closer to each other than some believe.

You have to know how to parse what is reported.
> Not apples to apples at all.

That's exactly my point. Design goals can change, and so do the resulting performance characteristics, so observations won't carry over.
> It's not even the same chip.

RDNA and RDNA2 aren't even the same iteration of the IP.
> That's exactly my point. Design goals can change, and so do the resulting performance characteristics, so observations won't carry over.

It might not carry over, but you are comparing the same architecture and iteration of chip (XSX and PS5) to Vega '19 and '20 (where the '20 part received huge upgrades over the previous one). It would be like comparing RDNA 1 and RDNA 2, which is not what next gen will be like.
> RDNA1 does not scale better with frequency. It scales pretty similarly, but not better (a bit worse, actually).

We don't know that, as there is no big RDNA1 GPU (40 vs 36 is not 52 vs 36).
There are a few benchmarks with a 5700 clocked at 2100MHz against a stock XT, and the stock XT beats it in all games, by a bigger margin than in DF's test; but in DF's test we know the clock was locked for both, while here the 5700 might have downclocked.
This notion that higher frequency and the API will eradicate a 2TF advantage and more bandwidth is weird to me. This has never been proven on the same architecture, so why should we buy it?
Mitigate what exactly?
Have you read the patent?