Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]


@Dictator Not 15% more, 30% more.
I think I misspoke a bit there on my part! In my brain I must have been saying one thing while thinking another.
 
You can continue to use the XDK (the old devkit, iterated on since 2012) to produce Xbox One family titles and still have the ability to know if you're running on Xbox Series X or S hardware. However, you cannot take advantage of the new APIs or hardware features. I believe Sea of Thieves is still operating on the XDK. This allows for higher resolution or framerate options, but the title is still operating in "Backwards Compatibility" mode. This can be seen in the Game Info section for each title. I think this is called something like "XDKAware", but I could easily be off on the name.

You can switch over to use GDK to produce Xbox One, Xbox Series, PC, and xCloud titles and have access to the new APIs and Hardware features.

So if a game is developed using the XDK on Xbox Series consoles, it's still technically in BC mode?
 
There are obviously diminishing returns between "clearly not performing properly" and "perfect". Unless there's some huge unaccounted-for factor, the Xbox team can expect a lot more upside from tool optimization, because they're well below an adequate quality level right now.

Here's the thing. Reliable sources over on ResetEra, i.e. multiplatform developers, have been saying all year that there's not much difference between the consoles - definitely not the 20% visuals delta the theoretical maximum teraflops numbers would suggest. And this has been echoed by reliable sources like Jason Schreier, who talks to a lot of devs. Some folks have said PS5 has advantages. PS5 is running that GPU, the whole GPU and cache, much faster. In cases where games aren't utilising all the CUs and/or are not close to needing maximum performance, why wouldn't PS5 turn in better performance? It would be kind of weird if it didn't. Expect both consoles to improve; whether Series X has more headroom to improve is something that may not be known for 2-3 years. ¯\_(ツ)_/¯
 
Yeah, across developers, engines, genres we're seeing the same thing. It's the dips in performance that really stand out to me. Everyone seems to be having this problem.
At least for high framerate modes, it might be a fixed function issue as the lower resolutions depend more on that than the CUs. There's still a limit at the main memory bandwidth, but perhaps for the ROPs the bandwidth compression is seeing better throughput.

If we assume they both have 64 ROPs for colour writes, then XSX will only need 467GB/s (1.825GHz * 64 writes * 32bpp per write), so it actually has a lot more bandwidth than it needs for that. PS5 would need 570GB/s at 2.23GHz. Bandwidth compression may provide up to an extra ~30% on average (448GB/s + 30% = 582GB/s), which would let PS5 sustain its higher raw fill rate, putting it at a real advantage there.
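Those back-of-envelope numbers can be sanity-checked in a few lines. This is illustrative only: it assumes 64 colour writes per clock and 4-byte pixels on both machines, and takes the quoted ~30% average compression gain as a given rather than a measured figure.

```python
# Bandwidth needed to sustain peak colour fill (GB/s), assuming 64 ROP
# colour writes per clock and 32bpp (4-byte) pixels on both consoles.
def fill_bw_gbps(clock_ghz, writes_per_clock=64, bytes_per_pixel=4):
    return clock_ghz * writes_per_clock * bytes_per_pixel

xsx_needed = fill_bw_gbps(1.825)   # ~467 GB/s needed vs a 560 GB/s fast pool
ps5_needed = fill_bw_gbps(2.23)    # ~571 GB/s needed vs a 448 GB/s bus
ps5_effective = 448 * 1.30         # ~582 GB/s if compression averages +30%

print(f"XSX needs ~{xsx_needed:.0f} GB/s for peak fill")
print(f"PS5 needs ~{ps5_needed:.0f} GB/s; ~{ps5_effective:.0f} GB/s effective available")
```

So on these assumptions PS5 only clears its own peak-fill requirement once compression is factored in, while XSX has raw headroom to spare.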

Things may look a little different on blending rates, but there's evidence that transparency heavy scenes are also at a disadvantage on XSX, so there's something funny going on there, and there shouldn't be an API issue blocking such simpler operations. Normally, you'd see the blend rates saturate the available bandwidth and XSX should have automagically won, but that's seemingly not being observed.

Things may look different again where 4xFP16 render targets (not pixel shader precision) are used for certain render passes, but the addition of different 32bpp precision formats may skew things back again as developers optimize for performance vs pixel quality.
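For context on that trade-off, here are the per-pixel bandwidth costs of the formats in question. These are standard graphics-API format sizes, nothing console-specific:

```python
# Bytes per pixel for common colour render-target formats. A 4xFP16 target
# costs twice the fill bandwidth of any packed 32bpp alternative.
formats = {
    "RGBA16F (4xFP16)": 4 * 2,  # 8 B/pixel
    "RGBA8":            4 * 1,  # 4 B/pixel
    "R11G11B10F":       4,      # packed 32bpp HDR alternative, no alpha
    "RGB10A2":          4,      # packed 32bpp, 10-bit colour, 2-bit alpha
}
for name, bpp in formats.items():
    print(f"{name:18s} {bpp} B/pixel")
```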

I'll leave it at that. :V
 

Seeing these results, it looks like both consoles would benefit from native 1080p or 1440p rendering when the corresponding output resolution is selected... reconstruction to 4K and then downsampling to whatever output resolution is selected seem like unnecessary steps.

One would also think that the point of dynamic resolution is to hold a certain fps, not to have fluctuating resolution and fluctuating fps with tearing.
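That feedback loop is simple enough to sketch. A minimal illustrative controller (names and constants are made up, not from any real engine) that scales resolution toward a frame-time target:

```python
# Minimal dynamic-resolution sketch: nudge the render scale each frame so
# frame time converges on the target, instead of letting fps fluctuate.
TARGET_MS = 16.6                  # 60 fps budget
MIN_SCALE, MAX_SCALE = 0.5, 1.0   # e.g. half to full axis resolution

def next_scale(scale, last_frame_ms, gain=0.1):
    error = (TARGET_MS - last_frame_ms) / TARGET_MS  # positive = headroom
    return max(MIN_SCALE, min(MAX_SCALE, scale + gain * error))

scale = 1.0
for frame_ms in [20.0, 19.0, 18.0, 17.0, 16.0]:  # a heavy stretch easing off
    scale = next_scale(scale, frame_ms)           # scale drops, then recovers
```

Real engines use GPU-time queries and fancier controllers, but the principle is the same: resolution absorbs the load variation so the frame rate doesn't have to.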
 
So if a game is developed using the XDK on Xbox Series consoles, it's still technically in BC mode?

One way to think of it is that the XDK is DX12; you can still enhance the game within those boundaries like they do for the 1X, as you know what console it's running on. Enhanced/optimized BC, I guess you could call it.

With the GDK you have access to DX12U (VRS, mesh shaders, SFS) and DirectStorage.
As you know what platform you're running on, you know what features and profile targets are available. (PC, XO, 1X, XSX, XSS)
 
Here's the thing. Reliable sources over on ResetEra, i.e. multiplatform developers, have been saying all year that there's not much difference between the consoles - definitely not the 20% visuals delta the theoretical maximum teraflops numbers would suggest. And this has been echoed by reliable sources like Jason Schreier, who talks to a lot of devs. Some folks have said PS5 has advantages. PS5 is running that GPU, the whole GPU and cache, much faster. In cases where games aren't utilising all the CUs and/or are not close to needing maximum performance, why wouldn't PS5 turn in better performance? It would be kind of weird if it didn't. Expect both consoles to improve; whether Series X has more headroom to improve is something that may not be known for 2-3 years. ¯\_(ツ)_/¯

This in a nutshell. I mentioned this quite a few times: certain developers and reliable sources like Schreier have told gamers for quite some time this would be the case - that both systems were quite close in performance, and that any differences would be barely noticeable. I believe most of the disbelief stemmed from obvious console warriors, fake insiders, and corporate hype-men.
 
Since people keep using the phrase "reliable sources", who were the ones behind the 13TF PS5 Specifications? Weren't some trying to put that at the feet of Jason Schreier?
 
At least for high framerate modes, it might be a fixed function issue as the lower resolutions depend more on that than the CUs. There's still a limit at the main memory bandwidth, but perhaps for the ROPs the bandwidth compression is seeing better throughput.

If we assume they both have 64 ROPs for colour writes, then XSX will only need 467GB/s (1.825GHz * 64 writes * 32bpp per write), so it actually has a lot more bandwidth than it needs for that. PS5 would need 570GB/s at 2.23GHz. Bandwidth compression may provide up to an extra ~30% on average (448GB/s + 30% = 582GB/s), which would let PS5 sustain its higher raw fill rate, putting it at a real advantage there.

Things may look a little different on blending rates, but there's evidence that transparency heavy scenes are also at a disadvantage on XSX, so there's something funny going on there, and there shouldn't be an API issue blocking such simpler operations. Normally, you'd see the blend rates saturate the available bandwidth and XSX should have automagically won, but that's seemingly not being observed.

Things may look different again where 4xFP16 render targets (not pixel shader precision) are used for certain render passes, but the addition of different 32bpp precision formats may skew things back again as developers optimize for performance vs pixel quality.

I'll leave it at that. :V

Well, I think one thing we can say for certain is that XSX might have more flops and more bandwidth, but it ain't really showing 'em!

Being ROP limited, particularly at high frame rates, is reasonable I guess. MS specifically mentioned RT (and not raster) as the driver for their fat bus. It would be funny if PS5 turned out to be better at 120Hz after MS had been banging on about it for months.

---------

Anyway, even when MS get their tools to a better place, I think we're going to continue to see some of what we've already seen: that for cross-gen games at least, the maths-per-pixel ratio is a better fit for the PS5 arrangement.

And MS had better hope games start making use of mesh shaders to keep things efficient on the front end.
 
Since people keep using the phrase "reliable sources", who were the ones behind the 13TF PS5 Specifications? Weren't some trying to put that at the feet of Jason Schreier?

No, those were rabid fanboys. Jason Schreier is a self-confessed 'non-technical person'; I have only ever seen him repeat sentiments he's heard from developers. He's never delved into the details of technicalities that he himself does not understand. This forms part of his credibility.

As a journalist, I've never seen him claim to know more than he understands or ride coattails on information shared in confidence. And this is probably why developers feel comfortable talking to him and sharing information with him. This speaks volumes.

edit: and just to provide clarity. I think Microsoft absolutely know what they're doing with Series X. A bit like PS4 with its extra compute capacity for the future, I think Microsoft have planned for this, but also endowed Series X with a ton of overhead for high-bandwidth graphics functions. We may not see the fruits of this for a few years. I can see wide vs fast trading blows (technical advantages) over the generation until devs develop techniques that work well on both architectures. The only way this doesn't happen is if Sony do have some weird secret sauce, but I feel like we would have heard about that by now. Different approaches to the same problems, but Microsoft have more GPU headroom to stretch their legs. :yes:
 
Since people keep using the phrase "reliable sources", who were the ones behind the 13TF PS5 Specifications? Weren't some trying to put that at the feet of Jason Schreier?

Earlier or current SDKs could have had higher-clocked, full-fat 40CU GPUs. I don't remember Jason stating such a number, only that both systems were quite close in performance.
 
Since people keep using the phrase "reliable sources", who were the ones behind the 13TF PS5 Specifications? Weren't some trying to put that at the feet of Jason Schreier?

Jason Schreier never said the PS5 is 13 TFLOPS; he said the performance was nearly the same, and that's what we're seeing now. Multiple people, including devs like Matt Phillips who know people working on PS5 and XSX, said teraflops aren't the be-all and end-all. On the PS5 side I have nothing against the 36 CUs, 10.28 TFLOPS, and continuous boost; I didn't believe what people told me about XSX performance, even when people said the APIs are better on PS5, because of PS5's memory bandwidth. With what we know of the PS5 spec, that is by far THE weakness of the console, much more so than the teraflops.

https://medium.com/@mattphillips/te...-of-comparing-videogame-consoles-4207d3216523

People said the Crytek guy Ali Salehi was a liar and talking nonsense.
https://www.neogaf.com/threads/ali-...-interview-up-tweets-article-removed.1535138/

Same when Billy Khan and Axel Gneiting of id Software praised the PS5 multiple times.

Some of the people who have been saying this for months think the consoles will be close for the full generation. I don't believe it, because on paper it seems impossible, but they have certainly gained some credibility.

But if that happens, I don't understand why Sony won't give details about the choices they made on the PS5 APU, as if it were a state secret.
 
No, those were rabid fanboys. Jason Schreier is a self-confessed 'non-technical person'; I have only ever seen him repeat sentiments he's heard from developers. He's never delved into the details of technicalities that he himself does not understand. This forms part of his credibility.

As a journalist, I've never seen him claim to know more than he understands or ride coattails on information shared in confidence. And this is probably why developers feel comfortable talking to him and sharing information with him. This speaks volumes.

Yup. That's why we know so much more about "crunch" and the harassment allegations in the gaming industry: because he's trusted to deliver compelling and accurate stories.
 
No, those were rabid fanboys. Jason Schreier is a self-confessed 'non-technical person'; I have only ever seen him repeat sentiments he's heard from developers. He's never delved into the details of technicalities that he himself does not understand. This forms part of his credibility.

As a journalist, I've never seen him claim to know more than he understands or ride coattails on information shared in confidence. And this is probably why developers feel comfortable talking to him and sharing information with him. This speaks volumes.

edit: and just to provide clarity. I think Microsoft absolutely know what they're doing with Series X. A bit like PS4 with its extra compute capacity for the future, I think Microsoft have planned for this, but also endowed Series X with a ton of overhead for high-bandwidth graphics functions. We may not see the fruits of this for a few years. I can see wide vs fast trading blows (technical advantages) over the generation until devs develop techniques that work well on both architectures. The only way this doesn't happen is if Sony do have some weird secret sauce, but I feel like we would have heard about that by now. Different approaches to the same problems, but Microsoft have more GPU headroom to stretch their legs. :yes:

The PS4 advantage was visible from day one and it stayed for the whole generation. Outside of a few outliers running at 720p on Xbox One and 1080p on PS4, most of the time it was 900p against 1080p at 30 fps, or 720p against 900p at 60 fps, before the consoles began to show their limits.

Sony have done something but are hiding it. We know they don't have INT4 and INT8 "rapid packed math", but the Sony graphics engineer who said too much claimed they have many custom advanced functionalities. For some unknown reason they don't want to talk about the APU.

I was thinking it's because they're ashamed of its inferiority, and maybe that's the case, and when Game Core is finished the XSX will naturally be around 20% faster than the PS5, but this is annoying.

EDIT: Imo the PS5 advantage will be there for a few months to a year, and the XSX will naturally pull ahead sooner rather than later.
 
Well, I would just guess Xbox is underperforming because it more or less uses the PC version dumbed down so that it runs, optimized only far enough that 60fps was possible. That was not possible for the PS5, where code had to be rewritten to work on PS4 & PS5. It should be more or less as simple as that.
The similarity of the Xbox to a PC (even on the OS side) makes easy ports possible. And now that the CPU and GPU are not as limiting, it is even easier to just port the game over at once without optimizing it.
 
https://forum.beyond3d.com/posts/2176238/

hmmmmmm

[Image: RB+ (render backend) diagram]

So potentially XSX/PS5 are 2 * 4 colour * 4 RB+ = 32 ROPs -> 64 colour writes, 32 blend outputs per clock, which would make sense since blending 64 pixels would require way more bandwidth than what's there. It'd be a decent die-size optimization as well.

i.e.

64* clock rate * 4B write
32 * clock rate * (4B write + 4B read)
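Plugging the clocks into that guess (purely illustrative, using the post's assumed 64 writes / 32 blends per clock): the two cases land on the same bandwidth figure, since a blend moves twice the bytes at half the pixel rate.

```python
# RB+ throughput guess from above: 64 opaque colour writes per clock, but
# only 32 blends per clock, with each blend needing a 4B read plus a 4B write.
def write_bw_gbps(clock_ghz):
    return 64 * clock_ghz * 4          # write-only traffic, GB/s

def blend_bw_gbps(clock_ghz):
    return 32 * clock_ghz * (4 + 4)    # read + write traffic, GB/s

for name, ghz in [("XSX", 1.825), ("PS5", 2.23)]:
    print(f"{name}: write {write_bw_gbps(ghz):.0f} GB/s, "
          f"blend {blend_bw_gbps(ghz):.0f} GB/s")
```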
 