Current Generation Hardware Speculation with a Technical Spin [post GDC 2020] [XBSX, PS5]

Just a question about Tier 2 VRS. I watched the Digital Foundry video on Gears 5 and it left me wondering: if I understood it correctly, Tier 2 VRS uses an edge detection algorithm in part to determine which regions need increased resolution before the image is shaded, but how does it do edge detection on an unrendered image?

IIRC, that is game dependent and not entirely part of VRS. So it's not "Tier 2 VRS uses an edge detection algorithm", it's "Gears 5 uses an edge detection algorithm". With VRS Tier 2, you can specify what shading rate you want within a draw call, so one item can be full rate while another is 1:2. It's up to the developers to determine what shading rate is appropriate.
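To make the per-draw versus within-draw distinction concrete, here's a minimal sketch against the public D3D12 VRS API (Tier 1 style usage, i.e. one rate per draw call; the function name and index counts are placeholders, and pipeline/resource setup is omitted):

```cpp
// Minimal per-draw VRS sketch (Tier 1 capability level).
// Assumes `cmdList` is an ID3D12GraphicsCommandList5 and the adapter reports
// at least D3D12_VARIABLE_SHADING_RATE_TIER_1.
#include <d3d12.h>

void DrawSceneWithPerDrawRates(ID3D12GraphicsCommandList5* cmdList,
                               UINT propIndexCount, UINT heroIndexCount)
{
    // Low-importance draw: shade at 2x2 coarse pixels (one PS invocation per 4 pixels).
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    cmdList->DrawIndexedInstanced(propIndexCount, 1, 0, 0, 0);

    // Back to full rate for a draw that needs per-pixel shading.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    cmdList->DrawIndexedInstanced(heroIndexCount, 1, 0, 0, 0);
}
```

Tier 2 adds a screen-space rate image and per-primitive rates on top of this, which is what lets the rate vary inside a single draw.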

Technical Blog directly from Microsoft in 2019 on VRS @ https://devblogs.microsoft.com/directx/variable-rate-shading-a-scalpel-in-a-world-of-sledgehammers/

https://www.eurogamer.net/articles/digitalfoundry-2020-gears-tactics-tech-analysis

They = The Coalition

Well, apparently the changes are down to using what is known as Tier Two variable rate shading. Gears Tactics uses Tier One, which allows developers to specify the shading rate per draw call, while Tier Two allows for more granular control within a draw call. This allows more precise control over which parts of the screen are adjusted. They use an edge detection filter to figure out the rate of shading and can vary it across the screen in a series of 8x8 tiles. In this case, using VRS basically saves five to 12 per cent of rendering time per frame which leads to a higher average resolution, making for a sharper looking game. Artefacts from VRS are not completely eliminated with Tier Two, but they're very hard to discern.
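Very roughly, that edge-detection step amounts to filling a small screen-space "shading rate image" with one value per 8x8 tile, which the hardware then samples during rasterisation. The Coalition haven't published their exact filter, so this is only a CPU-side sketch of the idea, assuming a crude luminance-gradient metric on the previous frame (in a real engine this would be a compute shader writing an R8_UINT texture):

```cpp
// Hypothetical sketch: derive one shading rate per 8x8 tile from an edge metric.
// `luma` is a w x h luminance buffer (e.g. from the previous frame); the output
// holds one D3D12_SHADING_RATE value per tile, ready to upload as a rate image.
#include <d3d12.h>
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

std::vector<uint8_t> BuildRateImage(const std::vector<float>& luma, int w, int h)
{
    const int tile = 8;                       // tile size reported by the hardware
    const int tw = (w + tile - 1) / tile;
    const int th = (h + tile - 1) / tile;
    std::vector<uint8_t> rates(size_t(tw) * th);

    for (int ty = 0; ty < th; ++ty)
        for (int tx = 0; tx < tw; ++tx)
        {
            float maxGrad = 0.0f;
            for (int y = ty * tile; y < std::min((ty + 1) * tile, h - 1); ++y)
                for (int x = tx * tile; x < std::min((tx + 1) * tile, w - 1); ++x)
                {
                    // Crude forward-difference gradient as the "edge" measure.
                    float gx = luma[y * w + x + 1] - luma[y * w + x];
                    float gy = luma[(y + 1) * w + x] - luma[y * w + x];
                    maxGrad = std::max(maxGrad, std::fabs(gx) + std::fabs(gy));
                }
            // High-contrast tiles keep full rate; flat tiles drop to 2x2 shading.
            rates[size_t(ty) * tw + tx] = static_cast<uint8_t>(
                maxGrad > 0.1f ? D3D12_SHADING_RATE_1X1 : D3D12_SHADING_RATE_2X2);
        }
    return rates;
}
```

The threshold and the two rate levels are made up for illustration; a shipping implementation would presumably use more rates and tune them against the artefacts DF mentions.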

VRS Blog said:
Developers won’t have to choose between techniques

We’re also introducing combiners, which allow developers to combine per-draw, screenspace image and per-primitive VRS at the same time. For example, a developer who’s using a screenspace image for foveated rendering can, using the VRS combiners, also apply per-primitive VRS to render faraway objects at lower shading rate.
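As a rough illustration of that combiner example through the public D3D12 API (the rate-image resource and any per-primitive rates exported via SV_ShadingRate are assumed to already exist elsewhere):

```cpp
// Sketch: Tier 2 VRS with combiners. A screen-space rate image (foveation map,
// edge-detection result, etc.) is combined with the per-draw base rate and with
// whatever rate the geometry stage exports per primitive.
#include <d3d12.h>

void SetCombinedVrsState(ID3D12GraphicsCommandList5* cmdList,
                         ID3D12Resource* rateImage)   // R8_UINT rate-per-tile texture
{
    cmdList->RSSetShadingRateImage(rateImage);

    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_MAX,   // base (per-draw) rate vs. per-primitive rate
        D3D12_SHADING_RATE_COMBINER_MAX,   // that result vs. the screen-space image
    };
    // With MAX, the coarser of the two rates wins at each step.
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
}
```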
 
I thought they made it so you could develop remotely on dev kits in the office, not on ones in Azure.

From what has been said, the first XSX blades only came off the assembly line a couple of months ago, I believe.

Perhaps. I thought they were located in their cloud, not in the individual offices. Or maybe it was a mix of both? Like perhaps they added some individual DevKits to a network location, even if they're not the industrial blades they'll use for xCloud later on.
 
Perhaps. I thought they were located in their cloud, not in the individual offices. Or maybe it was a mix of both? Like perhaps they added some individual DevKits to a network location, even if they're not the industrial blades they'll use for xCloud later on.
They did say it uses xCloud technology, but I'm pretty sure it was about allowing developers to work against on-site kits.
When I get a chance I'll see if I can find it.
 
Regardless of what devs say, I think it makes more sense if the XSX GPU has features from RDNA 2 that the PS5 GPU does not (VRS, SFS, and mesh shaders being an advancement on primitive shaders). Xbox Series X having a bigger, more feature-rich GPU would align with the timetables we know about this gen: that PS5 was originally going to launch in 2019, and that Xbox Series X was waiting on silicon and hardware for longer and as a result sent its dev kits out much later.
https://www.pttweb.cc/bbs/PC_Shopping/M.1590679194.A.F5B
From Taiwanese Aquariuszi:

PS5 started chip verification (A0) in H2 2018. But they screwed up and the chip was sent back to TSMC for the whole of H1 2019. Then another revision (C0) started verification in H2 2019.

Xbox Series X started chip verification in H1 2019 and finished in H2 2019.

It looks like Xbox Series X's final silicon came earlier than PS5's.
 
https://www.pttweb.cc/bbs/PC_Shopping/M.1590679194.A.F5B
From Taiwanese Aquariuszi:

PS5 started chip verification (A0) in H2 2018. But they screwed up and the chip was sent back to TSMC for the whole of H1 2019. Then another revision (C0) started verification in H2 2019.

Xbox Series X started chip verification in H1 2019 and finished in H2 2019.

It looks like Xbox Series X's final silicon came earlier than PS5's.
I think the starting point is more important here.
 
Until somebody who actually knows the details shares them, we do not really know what the PS5 lacks from or adds on to RDNA2. It's fun to speculate, but it is what it is.
From a hardware perspective I really think the PS5 GPU is essentially RDNA2, since it has the same RT solution and much better power efficiency than RDNA1.

And the development schedules between the PS5 and Xbox SoCs don't seem too far apart.
 
So you’re saying that Sony explicitly saying it’s RDNA2 based is a lie, or what?

It’s in the official FAQ even: https://www.google.nl/amp/s/blog.playstation.com/2020/11/09/ps5-the-ultimate-faq/amp/

RDNA2 with customizations. I don't think it is even up for discussion; the only question marks are what the customizations are beyond the cache scrubbers and other SSD-integration work in the GPU, hardware texture decompression and the Tempest CU. Maybe some tweaks to the Geometry Engine. There could be more, as Mark Cerny seemed to hint that not everything was mentioned, but these were probably the highlights.

And I think the games on Xbox and PS5 are already showing they are more similar than different.
 
I think some people's argument is "RDNA2 based does not mean it is RDNA2".

Frankly, I find that entire discussion meaningless. This is like the US Congress having arguments over what the definition of "is" is. We have the best console Sony has produced and the best console Microsoft has produced. Everything is what it is. No discussion will change reality. Both are amazing systems and should be enjoyed for their games.
 
I think some people's argument is "RDNA2 based does not mean it is RDNA2".
Agree.
What is important is that it's RDNA2 based, not RDNA1 with bits added on.
If they don't have specific features, it's more than likely out of choice.
That choice can also be for a combination of reasons.
 
Couldn't that cause issues, though? You'd still have to calculate and cull the geometry before determining the shading rate, right? Not only that, but the geometry LOD may not always correlate 1:1 with the texture quality. Some surface could have very simple geometry but still require a high-quality texture in the render output: maybe the object in question is purely decorative and collision with the player isn't really expected, but it's close enough to the viewer in the frame to require a higher resolution for that texture even if the geometry it's mapped over is much lower quality.

So my own personal guess is that wherever VRS falls in the pipeline, it can't be that early. On top of that, and this is just me asking here, does RT BVH traversal testing use the actual geometry of the model the tests are run against, or is it literally just a bounding box, and that's as far as the collision zone for the object goes? I assumed the object geometry would play a factor, but then again, I'm not really read up on how RT intersection tests work at the technical level beyond some of the basics (or how AMD in particular does them, though that's an aside).

I think there would be a number of issues with trying to determine shading rate in the GE. Smaller polys don't automatically need lower or higher shading rates, and the same is true for larger polys. Or any poly. You could use a clever primitive shader or mesh shader to generate geometry (from that which was read in) that was more optimal for the rest of the pipeline, but that's about making geometry that's more efficient to process rather than selectively shading pixels that you've already rasterised. At least as far as I can see anyway!

I think the same is likely true of depth. You can use depth to, for example, calculate DOF, and use that to hide lower quality assets (or probably areas of lower res rendering), but again I don't see how the GE would automatically know what you were planning to do with the geometry you were currently processing.

I keep coming back to the idea that the GE is great for making more efficient geometry to render, and it could probably be used to split the screen into quads that rendered at different resolutions (that you could adjust on a per frame basis), but the GE just seems too early to make smart decisions about shading frequency vs rasterisation frequency in a very fine grained manner.

WRT ray tracing, you normally have the actual geometry of whatever you're ray tracing against in one of the nodes in the BVH tree. You use the BVH tree to locate a relatively small amount of geometry to finally test against, and then get your poly (and through that its texture / material data etc) from there. Unless you don't hit anything and go up into the sky (just use the skybox?), or go out of range (I dunno, skybox? Environment texture? Nothing?).

Acceleration structures are basically about ignoring as much geometry as you can as you trace the ray through the world. This is because testing against geometry is very expensive, while testing against a bounding volume is cheap. The downside is that BVH trees take up more and more memory the more detail and nodes they contain. And even if testing is cheap, it still isn't free.

There are lots of shortcuts / optimisation involved in RT, and the best tradeoffs won't be the same for every game. I think that's why MS (and probably Sony) were talking about making the acceleration structures customisable and exposing them to developers on console.
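For what it's worth, here's a toy CPU-side sketch of that "prune with cheap box tests, only run the expensive triangle test at the leaves" structure. It isn't AMD's (or any API's) actual BVH layout, just the shape of the traversal described above:

```cpp
// Toy BVH traversal: ray/AABB tests prune whole subtrees; ray/triangle tests
// only run on the small amount of geometry stored in the leaves a ray reaches.
#include <algorithm>
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

struct Ray  { float org[3], dir[3]; float tMax; };
struct Aabb { float lo[3], hi[3]; };
struct Tri  { float v0[3], v1[3], v2[3]; };

struct BvhNode {
    Aabb bounds;
    int  left = -1, right = -1;   // indices into the node array; -1 means "leaf"
    std::vector<Tri> tris;        // geometry lives only in the leaves
};

static float Dot(const float a[3], const float b[3])
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

static void Cross(float o[3], const float a[3], const float b[3])
{
    o[0] = a[1]*b[2] - a[2]*b[1];
    o[1] = a[2]*b[0] - a[0]*b[2];
    o[2] = a[0]*b[1] - a[1]*b[0];
}

// Cheap test: standard "slab" ray/box intersection.
static bool HitsAabb(const Ray& r, const Aabb& b)
{
    float tmin = 0.0f, tmax = r.tMax;
    for (int i = 0; i < 3; ++i) {
        float inv = 1.0f / r.dir[i];
        float t0 = (b.lo[i] - r.org[i]) * inv;
        float t1 = (b.hi[i] - r.org[i]) * inv;
        if (inv < 0.0f) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Expensive test: Möller-Trumbore ray/triangle intersection, leaves only.
static std::optional<float> HitsTri(const Ray& r, const Tri& t)
{
    float e1[3], e2[3], s[3], p[3], q[3];
    for (int i = 0; i < 3; ++i) {
        e1[i] = t.v1[i] - t.v0[i];
        e2[i] = t.v2[i] - t.v0[i];
        s[i]  = r.org[i] - t.v0[i];
    }
    Cross(p, r.dir, e2);
    float det = Dot(e1, p);
    if (std::fabs(det) < 1e-8f) return std::nullopt;   // ray parallel to triangle
    float inv = 1.0f / det;
    float u = Dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return std::nullopt;
    Cross(q, s, e1);
    float v = Dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return std::nullopt;
    float hitT = Dot(e2, q) * inv;
    if (hitT <= 0.0f || hitT > r.tMax) return std::nullopt;
    return hitT;
}

// Walk the tree. A failed box test skips an entire subtree, which is the whole
// point of the acceleration structure.
std::optional<float> Trace(const std::vector<BvhNode>& nodes, int idx, const Ray& ray)
{
    const BvhNode& n = nodes[idx];
    if (!HitsAabb(ray, n.bounds)) return std::nullopt;

    if (n.left < 0) {                                  // leaf: test real geometry
        std::optional<float> best;
        for (const Tri& tri : n.tris)
            if (auto t = HitsTri(ray, tri); t && (!best || *t < *best)) best = t;
        return best;   // nullopt = ray escaped: skybox / environment fallback
    }
    auto a = Trace(nodes, n.left, ray);
    auto b = Trace(nodes, n.right, ray);
    if (a && b) return std::min(*a, *b);
    return a ? a : b;
}
```

A real tracer would return the primitive and its material rather than just a distance, order child visits front-to-back, and shrink tMax as hits are found, which is where a lot of the tradeoffs mentioned above come in.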
 
They did say it uses xCloud technology, but I'm pretty sure it was about allowing developers to work against on-site kits.
When I get a chance I'll see if I can find it.
https://developer.microsoft.com/en-...ud-is-helping-game-developers-stay-connected/
The quotes at the end make it a bit clearer that it's on-prem. I just chose this one, but others talk about it being on-prem also.
"xCloud will give the opportunity to dev teams and also internal and external QA teams to put their hands on our latest game builds from everywhere minutes after their release. By allowing the teams to connect remotely to their devkits and take advantage of the high bandwidth LAN network from our various office locations, xCloud will also add another layer of security as the content created will stay on our corporate network." - Guillaume Le-Malet, Infrastructure
 
This is like the US Congress having arguments over what the definition of "is" is.

It's not sex unless the ***** goes into the ******. :yep2: So it's not cheating as long as you only use your mouth to blow...air.

Agree.
What is important is that it's RDNA2 based, not RDNA1 with bits added on.
If they don't have specific features, it's more than likely out of choice.
That choice can also be for a combination of reasons.

Or a question of timing. But I suppose when to finalize your design is a choice. MS chose to finalize their design later in the process while Sony presumably chose to finalize their design a bit earlier.

Regards,
SB
 
Or a question of timing. But I suppose when to finalize your design is a choice. MS chose to finalize their design later in the process while Sony presumably chose to finalize their design a bit earlier.
That's one of the possible reasons in that combination.
In that case, the benefit may not outweigh the delay of incorporating it, etc.

The base design was RDNA2 though, not RDNA1.

I suppose in this case MS has done a good job throwing out the comment, but without really explaining what it means. As highlighted when they never gave DF a straight answer about it.
 
Uhm ... The VRS percentage would apply across the entire TF range, not only the extra performance range. But the way they arrived at their number wasn't correct either.

Not what I'm saying. 12 TF vs. 10 TF is a 20% gap. Then reports are that a good Tier 2 VRS implementation nets you another ~10%, which is effectively the equivalent of XSX having 13.2 TF; I just rounded in Sony's favor a little.
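To lay those round numbers out explicitly (these are the figures being thrown around in the thread, not measurements):

```latex
% Raw compute gap, read both ways:
\frac{12}{10} = 1.2 \;\Rightarrow\; \text{XSX is } 20\% \text{ ahead},\qquad
\frac{10}{12} \approx 0.83 \;\Rightarrow\; \text{PS5 is } \approx 17\% \text{ behind}

% If Tier 2 VRS saves ~10% of frame time, the effective throughput is
12 \times \frac{1}{1 - 0.10} \approx 13.3\ \text{TF}
\qquad (12 \times 1.1 = 13.2\ \text{TF is the slightly conservative rounding})
```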

Oops, you're right. :oops:
My bad. #MATHFAIL
 
Everyone is searching for the missing cache in the Series X SoC, but when the PS5 was announced, what stood out was the unbalanced bandwidth relative to the TF. And then we forgot.
 
Everyone is searching for the missing cache in the Series X SoC, but when the PS5 was announced, what stood out was the unbalanced bandwidth relative to the TF. And then we forgot.
Huh? What's this in reference to? Clearly I did forget.
 
Huh? What's this in reference to? Clearly I did forget.
The missing cache in the Series X SoC is from the Digital Foundry visit they did earlier this year. They were told there is 76 MB of cache in total on the chip, but it isn't evident from the die shots that have been provided for the Series X where this cache could be.
 
The missing cache in the Series X SoC is from the Digital Foundry visit they did earlier this year. They were told there is 76 MB of cache in total on the chip, but it isn't evident from the die shots that have been provided for the Series X where this cache could be.

They said there is 76 MB of SRAM on the chip, not 76 MB of cache... This is in the Hot Chips presentation too.
 