Xbox Series X [XBSX] [Release November 10 2020]

Why do people say XSX GPU is like a 2080? Just doing a couple minutes looking at it...

According to this, a 5700XT is only 9% slower than a 2080 (they don't compare directly to a 2080 Ti):

https://www.techspot.com/review/1870-amd-radeon-rx-5700/

Just on basic TF numbers (9.7 vs 12.1), the XSX GPU would be ~25% faster than the 5700XT, so well faster than a 2080.

But it gets worse: that's treating the 5700XT as a 9.7 TF card, when that figure only applies at its max boost clock. Sustained 5700XT TF would be lower, so the real difference in favor of the XSX is significantly higher than 25%.

And worse again, RDNA 2 should have somewhat higher IPC than RDNA 1. I don't have that number, but for some reason 15% comes to mind.

Combine these factors and it should be well into 2080 Ti territory, if not above it. Where am I going wrong?
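Here's a rough back-of-the-envelope sketch of that chain of reasoning. The sustained 5700XT figure and the 15% RDNA 2 IPC uplift are pure assumptions from the post above, not confirmed numbers:

```python
# Rough TF comparison sketch. The "sustained" 5700XT figure and the RDNA2
# IPC uplift are assumptions from the post above, not confirmed specs.

XSX_TF = 12.1            # Series X: 52 CUs at a fixed 1825 MHz
XT_BOOST_TF = 9.7        # 5700XT rated at its max boost clock
XT_SUSTAINED_TF = 9.0    # assumed sustained figure (~1755 MHz game clock)
RDNA2_IPC_UPLIFT = 1.15  # assumed RDNA1 -> RDNA2 per-TF gain (unconfirmed)

print(f"vs boost TF:      {XSX_TF / XT_BOOST_TF - 1:.0%}")                         # ~25%
print(f"vs sustained TF:  {XSX_TF / XT_SUSTAINED_TF - 1:.0%}")                     # ~34%
print(f"plus assumed IPC: {XSX_TF * RDNA2_IPC_UPLIFT / XT_SUSTAINED_TF - 1:.0%}")  # ~55%
```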
 
Where am I going wrong?

The 2080 is more than the stated 10.2TF; the clocks are supposed to boost higher than that by default, at least someone corrected me on it here before. The 2080 Ti is close to 14TF all said and done, so I doubt the XSX is (in raw power) as fast or faster.
2080-level performance isn't too shabby anyway, if that's what the XSX gets at the least.
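For what it's worth, a small sketch of where those "real world" Turing TF numbers come from. The rated boost clocks are NVIDIA's official figures; the "typical" in-game clocks are rough assumptions (the cards usually boost above the rated number), not measurements:

```python
# FP32 TFLOPs = shaders x 2 ops/clock x clock. Rated boosts are official;
# the "typical" clocks are assumed ballpark in-game boosts, not measurements.

def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000

print(f"2080    rated:   {tflops(2944, 1.710):.1f} TF")  # ~10.1
print(f"2080    typical: {tflops(2944, 1.880):.1f} TF")  # ~11.1 (assumed clock)
print(f"2080 Ti rated:   {tflops(4352, 1.545):.1f} TF")  # ~13.4
print(f"2080 Ti typical: {tflops(4352, 1.630):.1f} TF")  # ~14.2 (assumed clock)
```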
 
Very strange that Microsoft went with two different memory pools and speeds for the Series X, especially after they were the first to go with a unified approach. I also seem to remember Sony saying during PS4 development that unified memory was the one thing all developers asked for (yes, I know the Series X memory is 'unified' from an API point of view). But is 10GB of memory at 560GB/s enough to combat 448GB/s across 16GB of memory?
 
Very strange that Microsoft went with two different memory pools and speeds for the Series X, especially after they were the first to go with a unified approach. I also seem to remember Sony saying during PS4 development that unified memory was the one thing all developers asked for (yes, I know the Series X memory is 'unified' from an API point of view). But is 10GB of memory at 560GB/s enough to combat 448GB/s across 16GB of memory?

I suspect the answer is yes. I recall a long post - I think it was from this forum's very own Sebbi - before the launch of the PS4, stating why they would take more bandwidth over more capacity.

Naturally, both would be preferable, but when we're looking at 10GB of 560GB/s memory vs 14-15GB of 448GB/s memory, I think the relatively small increase in capacity is more than offset by the relatively large disparity in bandwidth.

It's why I'm so disappointed that Sony haven't gone with 16Gbps GDDR6. A single, unified pool of 512GB/s memory, with 14-15GB available to developers, would've made for interesting comparisons with the XSX's 10GB of 560GB/s + 3.5GB of 336GB/s memory:
  • The same overall capacity, save for some small difference in OS reservations.
  • Greater bandwidth for the XSX's "optimal" 10GB.
  • Greater bandwidth per TF for the PS5, but still less bandwidth overall.
As it is, the XSX will have more bandwidth per TF, and substantially more bandwidth overall.
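A quick sketch of the bandwidth-per-TF comparison above. The PS5 TF figure uses its quoted peak clock, and the 512GB/s entry is the hypothetical 16Gbps configuration described above, not an actual spec:

```python
# Bandwidth per TF using quoted peak figures. The 512 GB/s entry is the
# hypothetical 16Gbps GDDR6 PS5 described above, not an actual spec.

XSX_TF, PS5_TF = 12.15, 10.28
configs = {
    "XSX optimal 10GB":           (560, XSX_TF),
    "PS5 (actual, 14Gbps)":       (448, PS5_TF),
    "PS5 (hypothetical, 16Gbps)": (512, PS5_TF),
}
for name, (bw, tf) in configs.items():
    print(f"{name:28} {bw / tf:5.1f} GB/s per TF")
# XSX ~46.1, actual PS5 ~43.6, hypothetical PS5 ~49.8
```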
 
With no duplication of data, it is hard to say if games will be bigger.

I am really curious about this - a bit sad I know!

Why do people say XSX GPU is like a 2080? Just doing a couple minutes looking at it...

Perhaps take more than two minutes before trying to distill thousands of man-years of engineering effort by experts in their field into a conclusion? :yep2: Maybe five or ten minutes...
 
I suspect the answer is yes. I recall a long post - I think it was from this forum's very own Sebbi - before the launch of the PS4, stating why they would take more bandwidth over more capacity.

Naturally, both would be preferable, but when we're looking at 10GB of 560GB/s memory vs 14-15GB of 448GB/s memory, I think the relatively small increase in capacity is more than offset by the relatively large disparity in bandwidth.

It's why I'm so disappointed that Sony haven't gone with 16Gbps GDDR6. A single, unified pool of 512GB/s memory, with 14-15GB available to developers, would've made for interesting comparisons with the XSX's 10GB of 560GB/s + 3.5GB of 336GB/s memory:
  • The same overall capacity, save for some small difference in OS reservations.
  • Greater bandwidth for the XSX's "optimal" 10GB.
  • Greater bandwidth per TF for the PS5, but still less bandwidth overall.
As it is, the XSX will have more bandwidth per TF, and substantially more bandwidth overall.

Yeah the split BW doesn't seem ideal, but I'm guessing it ends up being you chuck the GPU data into that wide 10GB pipe and it's all good. I hope that doesn't introduce too much programming complexity though.

We know 2.5GB is for the OS, so that leaves you with 3.5GB of slow GDDR I guess for CPU stuff.

I'm just trusting MS engineers on this one; one assumes the cost vs performance tradeoff was worth it. It probably works well to effectively have the 10GB as your GPU RAM. That's 2X the entire 5GB of RAM available to games on One/PS4 after the 3GB OS allocation (did they ever reduce those?), which is in line with 16GB vs 8GB.
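Laying out the memory budget being assumed in this exchange (how games would split the 3.5GB "standard" portion between CPU, audio and so on is a guess, not a spec):

```python
# Series X memory budget as discussed above. How games split the 3.5GB
# "standard" portion between CPU, audio, etc. is an assumption, not a spec.

TOTAL_GB = 16
GPU_OPTIMAL_GB = 10                      # 560 GB/s portion, aimed at GPU-heavy data
STANDARD_GB = TOTAL_GB - GPU_OPTIMAL_GB  # 6 GB at 336 GB/s
OS_RESERVED_GB = 2.5                     # reserved out of the standard portion

game_standard = STANDARD_GB - OS_RESERVED_GB  # 3.5 GB
game_total = GPU_OPTIMAL_GB + game_standard   # 13.5 GB
print(f"Available to games: {game_total} GB "
      f"({GPU_OPTIMAL_GB} GB fast + {game_standard} GB standard)")
```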

I am really curious about this - a bit sad I know!



Perhaps take more than two minutes before trying to distill thousands of man-years of engineering effort by experts in their field into a conclusion? :yep2: Maybe five or ten minutes...

So we should never say a 2080 Ti is faster than a GT 1030 because we haven't spent thousands of years on it? LOL, it's called benchmarks; that's what I'm working off.

I'm going to look into this more. I believe DF concluded RDNA was ~25% better than GCN, but I'm not sure of an RDNA 1 vs RDNA 2 figure; I thought AMD might have said 15%, though.
 
So we should never say a 2080 Ti is faster than a GT 1030 because we haven't spent thousands of years on it? LOL, it's called benchmarks; that's what I'm working off.
What benchmarks are you looking at that are relevant to the next-gen consoles' actual performance?
 
What benchmarks are you looking at that are relevant to the next-gen consoles' actual performance?


5700XT vs 2080/2080Ti, etc.

Of course it will be far from exact, but even a programmer working on these actual consoles won't be exact; it's impossible and depends on countless factors. We can still get an idea.
 
5700XT vs 2080/2080Ti, etc.

Of course it will be far from exact, but even a programmer working on these actual consoles won't be exact; it's impossible and depends on countless factors. We can still get an idea.

Agree with you; that does it for me, for now. 5700XTs are often OC'd from the factory as well, so they're around the 10TF mark.
 
Disagree.
You are free to, but I would be very interested in how the PC architecture, with its multiple RAM pools, numerous I/O buses and layers of abstraction, is remotely similar to single-RAM-pool, APU-based consoles.
 
The XBSX is having a glorious time as of late: baseline hardware, alongside PC, for DirectX 12 Ultimate.

[Image: NVIDIA DirectX 12 Ultimate graphic]
 
Yeah the split BW doesn't seem ideal, but I'm guessing it ends up being you chuck the GPU data into that wide 10GB pipe and it's all good. I hope that doesn't introduce too much programming complexity though.

We know 2.5GB is for the OS, so that leaves you with 3.5GB of slow GDDR I guess for CPU stuff.
It's not split pools. The CPU can access all the RAM, just at 336 GB/s. The GPU can only access 10 GB at 560 GB/s (as I understand it). It may even be that all the RAM is addressable by all components, but the GPU will hit the lower BW if reaching into the standard memory pool:

"Memory performance is asymmetrical - it's not something we could have done with the PC," explains Andrew Goossen "10 gigabytes of physical memory [runs at] 560GB/s. We call this GPU optimal memory. Six gigabytes [runs at] 336GB/s. We call this standard memory. GPU optimal and standard offer identical performance for CPU audio and file IO. The only hardware component that sees a difference in the GPU."

If the CPU, audio and IO couldn't access the 'GPU optimal RAM', they wouldn't see identical performance for the two memory types.
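For anyone wondering where the two figures come from, here's a small sketch of the usual explanation: ten 14Gbps GDDR6 chips on a 320-bit bus, six of 2GB and four of 1GB, so the first 10GB interleaves across all ten chips while the upper 6GB only lives on the six larger chips. That chip layout is my understanding from the coverage, so treat it as an assumption:

```python
# Where the 560/336 GB/s figures come from, assuming the commonly reported
# layout: ten 14Gbps GDDR6 chips (six 2GB + four 1GB) on a 320-bit bus.

GBPS_PER_PIN = 14     # GDDR6 data rate per pin
BITS_PER_CHIP = 32    # each chip sits on its own 32-bit channel

def bandwidth_gb_s(chips: int) -> float:
    # total bus width in bits x data rate, converted from Gbit/s to GB/s
    return chips * BITS_PER_CHIP * GBPS_PER_PIN / 8

print(f"First 10GB (striped across all 10 chips): {bandwidth_gb_s(10):.0f} GB/s")  # 560
print(f"Upper 6GB (only on the six 2GB chips):    {bandwidth_gb_s(6):.0f} GB/s")   # 336
```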
 
Is the CPU clock with SMT 3.6 or 3.66GHz? I've seen both reported. Just Google "3.66 Series X" and you'll see a lot of outlets reporting it: Polygon, WCCFTECH, etc.

Xbox Wire says 3.6 on their spec sheet, so I would take that absent some other evidence.
 