Current Generation Hardware Speculation with a Technical Spin [post launch 2021] [XBSX, PS5]

If the XSX supports mesh shaders, doesn't that mean that it can't have the RDNA 1 front end since RDNA 1 does not support mesh shaders?
I wouldn't pay much attention to this RDNA 1.x vs. full RDNA 2 debate. Both consoles seem to have some mixed RDNA 1/RDNA 2 aspects, with the XSX having more RDNA 2 features like mesh shaders, VRS and INT8/INT4. In the end what matters is real performance in games, and the big desktop RDNA 2 cards don't show improvement in the performance-per-flop department.
 
Do we trust somebody who thinks GPUs are magic? It's like taking climate change advice from Republicans. :runaway:

"time will tell" really sums it up. if you rewind to 2013 when Mark Cerny was explaining why PS4 had an an excess (or "non-round") configuration of CUs compared to Xbox One, it was because Sony anticipated far greater use of compute. Although use of Compute this did happen, and because of the low-powered Jaguar cores kind of had to happen, in terms of multiplatform game developer, compute didn't explosive as much because Xbox One didn't have as much excess compute capacity once CUs had finished servicing graphical needs.

Sometimes doing something a bit weird (Cell, 360 EDRAM) is a gamble. Cell was a problem, but the 360's EDRAM was a boon for 720p MSAA - good for Xbox and mostly just absent on PS3 games, but something that didn't materially impact the design of games.

What am I saying? I have no idea. Maybe time will tell? :runaway:
I don’t think we can compare PS4 to PS3 and have any idea how that would link to PS5 and future performance. PS3 was exotic, bespoke h/w which is still hard to emulate; PS4 was designed with devs in mind for ease of use and (likely after discussing with devs) for how game programming would likely evolve.

As Cerny likely followed the same mantra (if it ain’t broke, don’t fix it), maybe the PS5 won’t be left as far behind as some are predicting? It is, after all, proving so far that there’s no perceivable advantage to the XSX despite a fairly sizeable power gap.
 
I wouldn't pay much attention to this RDNA 1.x vs. full RDNA 2 debate. Both consoles seem to have some mixed RDNA 1/RDNA 2 aspects, with the XSX having more RDNA 2 features like mesh shaders, VRS and INT8/INT4. In the end what matters is real performance in games, and the big desktop RDNA 2 cards don't show improvement in the performance-per-flop department.

This guesswork using IP blocks whose capabilities are unknown is all kinds of stupid. What does an RDNA 2 card do that an RDNA 1 GPU can't? This has already been answered by AMD in great detail: HW ray tracing, sampler feedback, HW VRS and, more importantly, mesh shaders. Anyway, one of the GPU architects for the XS has already responded (to a fanboy, no less):

 
Both consoles need to support some form of backwards compatibility with GCN; so these types of changes to their GPUs should be expected.
Anyone can have an opinion on what is and isn't RDNA 2. It's a pointless debate in the end as it's a debate about semantics and not a debate about technical functionality.

One could easily point out that both consoles moving to RDNA 2 is what dropped the power enough to increase clockspeeds. That would be sufficient proof on its own that they are both based on RDNA 2. The feature blocks of RDNA 2 can be brought in, customized, or ignored as is standard behaviour for semi-custom solutions.
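To put rough numbers on that clockspeed/power argument, here's a minimal sketch using the usual CMOS dynamic-power relation (P ≈ C·V²·f). The capacitance, voltage and clock figures are made-up assumptions purely for illustration, not measured data for either console:

```python
# Rough sketch of why a perf/watt improvement enables higher clocks in the
# same power budget. All numbers are illustrative assumptions.

def dynamic_power(freq_ghz, voltage, cap_rel=1.0):
    """CMOS dynamic power scales roughly with C * V^2 * f (relative units)."""
    return cap_rel * voltage**2 * freq_ghz

# Hypothetical RDNA 1-class design point: 1.9 GHz at 1.00 V, relative capacitance 1.0.
baseline = dynamic_power(1.9, 1.00, cap_rel=1.0)

# Hypothetical RDNA 2-class design point: ~30% lower switching energy per clock
# (cap_rel=0.7), pushed to 2.2 GHz at a slightly higher voltage.
tuned = dynamic_power(2.2, 1.05, cap_rel=0.7)

print(f"baseline (relative dynamic power): {baseline:.2f}")
print(f"more efficient, higher-clocked part: {tuned:.2f}")
# The second part runs ~16% faster while drawing slightly less dynamic power,
# which is the shape of the argument being made above.
```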
 
I wouldn't pay much attention to this RDNA 1.x vs. full RDNA 2 debate. Both consoles seem to have some mixed RDNA 1/RDNA 2 aspects, with the XSX having more RDNA 2 features like mesh shaders, VRS and INT8/INT4. In the end what matters is real performance in games, and the big desktop RDNA 2 cards don't show improvement in the performance-per-flop department.
I, and most people here, do not care about the console war BS. What I'm asking is about one specific claim... from my understanding, the fact that the Xbox Series consoles have mesh shaders negates the claim that the Xbox Series consoles have the "RDNA 1 front end" does it not?
 
Both consoles need to support some form of backwards compatibility with GCN; so these types of changes to their GPUs should be expected.
Anyone can have an opinion on what is and isn't RDNA 2. It's a pointless debate in the end as it's a debate about semantics and not a debate about technical functionality.

One could easily point out that both consoles moving to RDNA 2 is what dropped the power enough to increase clockspeeds. That would be sufficient proof on its own that they are both based on RDNA 2. The feature blocks of RDNA 2 can be brought in, customized, or ignored as is standard behaviour for semi-custom solutions.

What is full-fledged RDNA 2 is not up for debate, AMD closed the door on that (https://www.amd.com/en/technologies/rdna-2). PS5 seems to lack VRS and sampler feedback (not sure whether primitive shaders and mesh shaders are different things), while both consoles have omitted the Infinity Cache. It all comes down to whether a putative RDNA 2 GPU shows the capabilities that define the architecture.
 
I, and most people here, do not care about the console war BS. What I'm asking is about one specific claim... from my understanding, the fact that the Xbox Series consoles have mesh shaders negates the claim that the Xbox Series consoles have the "RDNA 1 front end" does it not?

This front-end/back-end talk is a perversion of a game that PC enthusiasts/miners used to play all the time: reviewing IP block version dumps to deduce that maybe a new card is being tested; no info on architecture/capabilities, just that this particular card might be different from what currently exists. This of course becomes meaningless in a semi-custom setting with injection of IP from another partner.
 
I, and most people here, do not care about the console war BS. What I'm asking is about one specific claim... from my understanding, the fact that the Xbox Series consoles have mesh shaders negates the claim that the Xbox Series consoles have the "RDNA 1 front end" does it not?
I'm not the person to decide ;d Locuza, in his deep-dive video about the XSX, also said the front end is closer to RDNA 1.
 
What is full-fledged RDNA 2 is not up for debate, AMD closed the door on that (https://www.amd.com/en/technologies/rdna-2). PS5 seems to lack VRS and sampler feedback (not sure whether primitive shaders and mesh shaders are different things), while both consoles have omitted the Infinity Cache. It all comes down to whether a putative RDNA 2 GPU shows the capabilities that define the architecture.
They both only advertised that their solutions are RDNA 2 based. Not that they are 100% RDNA 2 carbon copies.
Yes, if you want to be technical about it, it does look like XSX has more RDNA 2 blocks. And, I guess possibly Zen 2 blocks with some extra customizations of their own.

Not being RDNA 2 in itself doesn't make the system deficient however, it just makes it not RDNA 2. They both set out to solve the same problems, but their approaches are different.
a) Solving the memory cost problem. At a maximum of 16GB to keep costs down, they needed to find a way to increase fidelity while keeping within the limits of 16GB.
Sony:
5.5 GB/s SSD + Kraken Compression
This gets them 80% of the way there. For the remaining 20%, the data still has to get into cache for processing. So what to do? Cache scrubbers, to remove that last bit of random latency when going from textures off the SSD directly into the frame.

MS:
2.4 GB/s SSD + BCPack Compression
This gets them 50% of the way there. The remaining 50%? Sampler Feedback Streaming: only stream in exactly what you need, further reducing what needs to be sent and when. That gets you 30% of the remaining way. You've still got to get it through cache; instead of cache scrubbers, SFS uses a texture filter that fills in values for the tiles it has requested. If for whatever reason a texture tile has not arrived in time for the frame, it fills in a calculated value while it waits for the data to actually arrive 1-2 frames later.

b) Rethinking how rendering power is distributed across the screen.
MS exposes VRS hardware
PS5 in this case will have to rely on a software variant

c) Improving the front end: geometry throughput, culling, etc.
MS aligns feature sets around mesh shaders, and XSX also supports (hopefully by now) primitive shaders
PS5: Primitive shaders and whatever customizations they put towards their GE
* they are not the same, but comparable.

From a hardware perspective, feature wise while they aren't the same at a technical level, strategically they have things in place to solve the same problems.
However from a software stack perspective, they are dramatically different. Xbox does everything through VM containers, and DX12 is likely significantly less efficient than Vulkan and GNM. If you compare shaders compiled for DX12 against Vulkan, for instance, you can see how much more register usage there is on DX12 to do the same things on the GPU. That alone probably has a very significant impact on the performance of some games over others.
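To make the streaming comparison above concrete, here is a minimal sketch of the effective texture-streaming rates. The compression ratios and the SFS savings factor are assumptions taken from the vendors' own marketing-level figures, not measurements, and the real numbers will vary per title:

```python
# Back-of-the-envelope comparison of the two streaming approaches described above.

PS5_RAW_GBPS = 5.5          # raw SSD throughput
PS5_KRAKEN_RATIO = 1.5      # assumed average Kraken ratio (Sony quotes ~8-9 GB/s typical)

XSX_RAW_GBPS = 2.4          # raw SSD throughput
XSX_BCPACK_RATIO = 2.0      # assumed average BCPack ratio (MS quotes ~4.8 GB/s typical)
XSX_SFS_MULTIPLIER = 2.5    # assumed "effective" multiplier from only streaming sampled tiles

ps5_effective = PS5_RAW_GBPS * PS5_KRAKEN_RATIO
xsx_effective = XSX_RAW_GBPS * XSX_BCPACK_RATIO
xsx_with_sfs = xsx_effective * XSX_SFS_MULTIPLIER

print(f"PS5 effective texture streaming:        ~{ps5_effective:.1f} GB/s")
print(f"XSX effective (compression only):       ~{xsx_effective:.1f} GB/s")
print(f"XSX effective (compression + SFS):      ~{xsx_with_sfs:.1f} GB/s of 'useful' texels")
# The point of the post above: both machines attack the same 16 GB budget,
# one mostly with raw throughput, the other with smarter selection of what to send.
```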
 
I, and most people here, do not care about the console war BS. What I'm asking is about one specific claim... from my understanding, the fact that the Xbox Series consoles have mesh shaders negates the claim that the Xbox Series consoles have the "RDNA 1 front end" does it not?
Yes. These guys are talented, but I suspect not so talented as to be able to see customizations within the Geometry Engines. Mesh shaders are done within the Geometry Engine of the XSX, IIRC, so I would imagine that would be difficult to separate out from the 5700 XT's Geometry Engine.

Technically the front end is more than just one part, however. I guess it depends on how far people want to stretch what counts as the front end. Once again, this feels like a useless claim from them; unless it serves some sort of technical relevance, it's not really important.
 
They both only advertised that their solutions are RDNA 2 based. Not that they are 100% RDNA 2 carbon copies.

They are likely not a 100% carbon copy on XBS-X/S, but according to James Stanard (MS graphics R&D, GPU architect) it is in fact 100% RDNA 2. That doesn't mean MS hasn't customized it, but it does mean that it has everything that RDNA 2 has.

Regards,
SB
 
They are likely not a 100% carbon copy on XBS-X/S, but according to James Stanard (MS graphics R&D, GPU architect) it is in fact 100% RDNA 2.

Regards,
SB
Yea, indeed. And I think if you look at the major components, the Series consoles have them all.
The improved energy efficiency
RT + DX12U features.
I mean, if that is the definition of RDNA 2, then by that definition it is 100% RDNA 2.

Scan converters, CU layouts and other minor modifications are not advertised as belonging to RDNA 2, so I dunno where these guys are headed.
 
The biggest differences between RDNA 1 and RDNA 2 are the low-level design and process optimizations that resulted in a big perf/watt improvement, plus HW-based RT. This is what both consoles are packing, because if that wasn't the case, we would be looking at quite a bit weaker consoles.

For reference - XSX is ~210W at full system load, with 2.5 TF and 8 GB of GDDR6 more than a 5700 XT, plus an 8-core Zen 2 and all the accompanying parts of a console. That is the same wattage as a lone 5700 XT at similar clocks, which has less compute and half the RAM.

Other than that, some blocks might have been carried over from RDNA 1 due to legacy compatibility or the design lock timeline, but I guess they make very little difference in actual performance (if at all). It seems MS waited a bit longer to lock its design to get the DX12U baseline in there (SFS, mesh shaders and VRS), but other than that it seems both are based on RDNA 2.
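For what it's worth, a minimal sketch of that perf/watt comparison. The TFLOPS fall out of CU count and clock; the board/system power figures are the commonly quoted ballpark numbers, so treat this as illustrative only:

```python
# Rough numbers behind the perf/watt point above.

def fp32_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    """Peak FP32 = CUs * 64 lanes * 2 (FMA) * clock."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000.0

rx5700xt_tf = fp32_tflops(40, 1.905)   # ~9.75 TF, ~225 W board power (RDNA 1)
xsx_gpu_tf  = fp32_tflops(52, 1.825)   # ~12.15 TF, ~210 W for the whole console

print(f"5700 XT: {rx5700xt_tf:.2f} TF -> ~{rx5700xt_tf / 225 * 1000:.0f} GFLOPS/W (card only)")
print(f"XSX:     {xsx_gpu_tf:.2f} TF -> ~{xsx_gpu_tf / 210 * 1000:.0f} GFLOPS/W (entire system)")
# Even charging the console's full wall power against just its GPU flops,
# the XSX comes out ahead of the RDNA 1 card, which is the poster's point.
```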
 
The additional XSX features like VRS, sampler feedback streaming, ML extensions and mesh shaders all need developer work to take advantage of (mesh shaders potentially at the content stage too) - none of these are "automatic" gains, unlike boosting clockspeed to the moon.

Adoption of some XSX features, as in the PC GPU space, might not be all that rapid. ML is a huge and bewildering field, and I don't really know what it might bring (beyond the usual suspects like upscaling and de-noising). And it's not like PS5 can't do it, it's just that XSX extensions might allow it to be somewhat faster relatively speaking.

I guess mesh shaders are another thing, like ray tracing, that could be a good fit for a "wide" GPU with lots of compute (and lots of on-chip memory).
 
The additional XSX features like VRS, sampler feedback streaming, ML extensions and mesh shaders all need developer work to take advantage of (mesh shaders potentially at the content stage too) - none of these are "automatic" gains, unlike boosting clockspeed to the moon.

Adoption of some XSX features, as in the PC GPU space, might not be all that rapid. ML is a huge and bewildering field, and I don't really know what it might bring (beyond the usual suspects like upscaling and de-noising). And it's not like PS5 can't do it, it's just that XSX extensions might allow it to be somewhat faster relatively speaking.

I guess mesh shaders are another thing, like ray tracing, that could be a good fit for a "wide" GPU with lots of compute (and lots of on-chip memory).

ML is going to be very important for AI, arguably in the same way RT can aid AI calculations, but I think ML will play the much bigger role of the two in that area. To that end, I wonder how much of MS's custom support for extended low-precision ML math took cues from CDNA design-wise, but that's almost impossible to speculate on since IIRC there are no schematics, diagrams or even whitepapers for CDNA available.

From what I've been reading, it seems PS5's support for ML extends to FP16 and INT16, which makes sense considering PS4 Pro already supported FP16. So MS would have, essentially, added support for INT8 and INT4, and of course they have more CUs, so they can dedicate more in parallel to grouped ML calculations. I'm still curious if there's anything else they've done on the GPU side specifically for DirectML; if their general tech(-ish?) event hasn't already happened this month, then hopefully some more details emerge around that time.
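As a rough sketch of why the INT8/INT4 support matters for ML throughput: the 2x/4x/8x packing factors below are the usual RDNA-style dot-product rates and are assumptions here, not an official spec table, but they land on the figures Microsoft has quoted:

```python
# Quick sketch of what extra packed-math support buys in raw throughput terms.

XSX_FP32_TFLOPS = 12.15   # 52 CUs * 64 lanes * 2 ops * 1.825 GHz

rates = {
    "FP32": 1,   # baseline
    "FP16": 2,   # packed half precision
    "INT8": 4,   # dot-product packed math
    "INT4": 8,
}

for fmt, mult in rates.items():
    print(f"{fmt}: ~{XSX_FP32_TFLOPS * mult:.0f} T(FL)OPS peak")
# INT8 lands around ~49 TOPS and INT4 around ~97 TOPS, matching the numbers
# Microsoft has quoted for Series X ML inference.
```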

Some of the mesh shader test results have been... massive, to say the least. 1000% increases on some GPUs. In terms of actual practicality running real games, though, that will probably come way down. Still, even if a 10-fold reduction in geometry cost only gave 2x FPS in practice, that can stack with other things like what MS are already doing with their FPS enhancement tech (if that can work on older BC games, can it also theoretically be done for newer games, even if it only helps increase FPS a bit rather than 2x/4x multipliers?). It all starts to add up.
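A minimal sketch of that "massive benchmark numbers vs. real frames" point, assuming made-up splits of how much of a frame is actually geometry-bound:

```python
# Sketch of why a 1000% geometry-throughput win in a synthetic test shrinks in
# a real frame: only part of the frame is geometry-bound. The time splits
# below are illustrative assumptions.

def new_frame_time(frame_ms, geometry_fraction, geometry_speedup):
    """Amdahl-style estimate: only the geometry portion of the frame gets faster."""
    geo = frame_ms * geometry_fraction
    rest = frame_ms - geo
    return rest + geo / geometry_speedup

frame_ms = 16.7  # ~60 fps frame
for geo_frac in (0.1, 0.3, 0.5):
    t = new_frame_time(frame_ms, geo_frac, geometry_speedup=10.0)
    print(f"geometry = {geo_frac:.0%} of frame -> {frame_ms / t:.2f}x fps")
# Roughly 1.10x, 1.37x and 1.82x: a 10x geometry speedup only approaches 2x fps
# when about half the frame was geometry-limited to begin with.
```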
They both only advertised that their solutions are RDNA 2 based. Not that they are 100% RDNA 2 carbon copies.
Yes, if you want to be technical about it, it does look like XSX has more RDNA 2 blocks. And, I guess possibly Zen 2 blocks with some extra customizations of their own.

Not being RDNA 2 in itself doesn't make the system deficient however, it just makes it not RDNA 2. They both set out to solve the same problems, but their approaches are different.
a) Solving the memory cost problem. At a maximum of 16GB to keep costs down, they needed to find a way to increase fidelity while keeping within the limits of 16GB.
Sony:
5.5 GB/s SSD + Kraken Compression
This gets them 80% of the way there. For the remaining 20%, the data still has to get into cache for processing. So what to do? Cache scrubbers, to remove that last bit of random latency when going from textures off the SSD directly into the frame.

MS:
2.4 GB/s SSD + BCPack Compression
This gets them 50% of the way there. The remaining 50%? Sampler Feedback Streaming: only stream in exactly what you need, further reducing what needs to be sent and when. That gets you 30% of the remaining way. You've still got to get it through cache; instead of cache scrubbers, SFS uses a texture filter that fills in values for the tiles it has requested. If for whatever reason a texture tile has not arrived in time for the frame, it fills in a calculated value while it waits for the data to actually arrive 1-2 frames later.

b) Rethinking how rendering power is distributed across the screen.
MS exposes VRS hardware
PS5 in this case will have to rely on a software variant

c) Improving the front end: geometry throughput, culling, etc.
MS aligns feature sets around mesh shaders, and XSX also supports (hopefully by now) primitive shaders
PS5: Primitive shaders and whatever customizations they put towards their GE
* they are not the same, but comparable.

From a hardware perspective, feature wise while they aren't the same at a technical level, strategically they have things in place to solve the same problems.
However from a software stack perspective, they are dramatically different. Xbox does everything through VM containers, and DX12 is likely significantly less efficient than Vulkan and GNM. If you compare shaders compiled for DX12 against Vulkan, for instance, you can see how much more register usage there is on DX12 to do the same things on the GPU. That alone probably has a very significant impact on the performance of some games over others.

Just goes to show the apples-to-oranges approach the two have taken. I wish more people looked at it this way vs. thinking one company's approach is THE approach and everything else must be measured against it. It's such a simple-minded way of looking at the world of technology.

I do have more optimistic views on DX12, particularly DX12U, resource usage though. I remember seeing some early comparisons of overhead costs for certain things, and DX12U came very close to Vulkan in a lot of those areas. That was maybe a year or so ago, but I don't know exactly when the article I read mentioning the comparisons was written.

Either way, Vulkan can probably be expected to hold an advantage in lower overhead, but from the brief bit I looked into on DX12U comparisons, in a lot of areas it should be more competitive than in the past. The bigger potential impediment for DX12U in terms of overhead is actually probably Windows 10 itself; the Xbox OS, IIRC, is a custom build that shares some aspects of the Windows kernel, but it's nothing as simple as a "stripped-down Windows" - it's built specifically for the consoles. However, on PC there was that somewhat amusing "test", I guess it could be called, with people running Windows apps on Linux and getting better performance.

Maybe that's down to what specifics are going on with their versions of Windows 10 or what features they have running vs. disabled but, yeah, that was a thing. Thankfully PC has the leniency of just scaling up the hardware as required.
 
The recent Lex Fridman podcast with Jim Keller was interesting. One prediction Keller made is that it's possible game graphics rendering becomes a neural network problem instead of a rasterization/compute problem. Really good podcast. I'll link it in this thread too (inside the spoiler). Another really interesting part was Keller talking about how he uses lucid dreaming to solve problems while sleeping. A third interesting thing is the work Keller is doing on graph-based neural network accelerators and his arguments for why that is better than a GPU/TPU for neural network processing.

 
Still, even if a 10-fold reduction in geometry cost only gave 2x FPS in practice, that can stack with other things like what MS are already doing with their FPS enhancement tech (if that can work on older BC games, can it also theoretically be done for newer games, even if it only helps increase FPS a bit rather than 2x/4x multipliers?). It all starts to add up.

But new games are already using the hardware as much as they can, given the nature of their code at present.
Are you sure you're not thinking of interpolated frame inserts? Example: https://www.eurogamer.net/articles/digitalfoundry-frame-rate-upscaler-tech-interview
I don't think that's what Microsoft is doing. I could be mistaken, though.
 
But new games are already using the hardware as much as they can, given the nature of their code at present.
Are you sure you're not thinking of interpolated frame inserts? Example: https://www.eurogamer.net/articles/digitalfoundry-frame-rate-upscaler-tech-interview
I don't think that's what Microsoft is doing. I could be mistaken, though.

I think the hope is to find alternative, cheaper, approximate ways to compute various things. Cloth/water and general physics simulation with neural networks is one very interesting domain. Similarly, perhaps we can get away with rendering certain things at a lower resolution/rate and let neural networks do their thing. It's all very new and unproven. Lots of papers and demos show promise though.



I believe one of the Dreams developers joined Nvidia. I wonder if he is also working on the signed-distance-fields-with-neural-nets approach, or whether that was the reason he joined Nvidia.

If there is to be a next-generation console, I wouldn't be at all surprised if the headline feature were neural networks and a drastic change in rendering capability. Just adding more of the same is not an easy road anymore. Manufacturing improvements for making better chips aren't what they used to be. Cost is a big issue, as is power consumption.
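To make the "approximate an expensive quantity with a small neural net" idea concrete, here's a toy sketch in plain NumPy (made-up network size and training setup, nothing to do with any shipping engine) that fits a tiny MLP to the signed distance field of a sphere:

```python
# Toy illustration of learning an SDF with a small neural network. Real neural
# SDF/physics papers use far larger networks and better training schemes.
import numpy as np

rng = np.random.default_rng(0)

def true_sdf(p):
    """Signed distance from points p (N, 3) to a unit sphere at the origin."""
    return np.linalg.norm(p, axis=1) - 1.0

# Training data: random points in a cube around the sphere.
X = rng.uniform(-2.0, 2.0, size=(4096, 3))
y = true_sdf(X)

# One hidden layer of 64 tanh units, trained with plain gradient descent on MSE.
W1 = rng.normal(0, 0.5, (3, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.5, (64, 1)); b2 = np.zeros(1)
lr = 1e-2

for step in range(2000):
    h = np.tanh(X @ W1 + b1)                 # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                           # gradient of 0.5 * (pred - y)^2
    dW2 = h.T @ err[:, None] / len(X)
    db2 = err.mean(keepdims=True)
    dh = err[:, None] * W2.T * (1 - h**2)    # backprop through tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# Evaluate the cheap approximation on fresh points.
test = rng.uniform(-2.0, 2.0, size=(1000, 3))
approx = (np.tanh(test @ W1 + b1) @ W2 + b2).ravel()
print("mean abs error vs. true SDF:", np.abs(approx - true_sdf(test)).mean())
```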
 