That RDNA 1.8 Consoles Rumor *spawn*

That's just deflection on your part ...

This discussion between us is not about RDNA2's mesh shader implementation. This discussion is about sampler feedback and how it's useful with tiled resources ...

In no way should we expect RDNA2 to have a different tiled resources implementation from RDNA, seeing as how 95%+ of what was stated in AMD's RDNA optimization guide applied to GCN too, despite RDNA bringing just as major architectural changes like primitive shaders and a wave32 execution mode ...

Why get so insecure over a potentially worthless feature?

Ok.
 
You can do it before shipping the game and better yet it's guaranteed to work everywhere at nearly no performance hit. If storage space is a concern then you can use texture compression as well ...

And texture streaming is an orthogonal issue altogether. Inferencing shaders can't really beat texture sampling hardware when it comes to these tasks and they have optimized compression paths too ...

Ironically, I/O traffic stopped being a bottleneck once the new consoles introduced SSDs, so using ML to minimize this traffic isn't going to matter in the slightest. A better case for ML texture upscaling would've been on systems using HDDs, since high I/O traffic would prove to be a bottleneck there ...

If you do it before shipping the game you lose all the advantages, basically everything the MS Game Stack manager was talking about.

And clearly it's not an orthogonal issue if up-resing from memory can have a very direct impact on streamed data. I mean that literally goes against the actual definition of what orthogonal means!

MS Game Stack offers a multiplatform set of tools (some for Playstation too), so it probably could help HDDs too in some capacity, when they're ready to offer it.

So why have an ultra-fast SSD, then?

Well, there's an awful lot more than textures you may want to stream, and you'll still be streaming lots of texture data even if you wanted to up-res, say, your final, highest-res mip level. And if there was no benefit to saving on transfers from, say, the XSX SSD, why would Sony have shot for something more than twice as fast? :D

I see ML for textures as something more useful in Stadia-like consoles.

Trouble there is you've already gone past the texture sampling stage and are at the point of a final image. But you do raise a very interesting point: having a DLSS-like step in a Stadia-like mini console could be a really cool way to improve the service.

For example, a 2060 is a fairly large chip, but if you were to strip out everything but the tensor cores required to DLSS a good base image up to a very high quality 1080p, you'd have a better output than current Stadia, with probably lower rendering requirements, lower power requirements, and lower latency and transmission BW than the low quality stuff they currently send out.

Maybe the future of streaming boxes is basically a little tensor box ....
 
Considering Sony's success and current/future investment in VR, and the fact that VRS's "origins" are in VR optimizations, if there's no VRS on the PS5 then it's because Sony preferred to implement one or a couple of other features that enable/accelerate foveated rendering in a different way.
There are patents from Sony addressing foveated rendering, (which they call "region of interest adjustment") so I have no doubt this will be hardware accelerated. It could be less flexible than VRS, though.

I think Sony have probably come up with something that's more or less as good as VRS for VR with foveated rendering, and that crucially they could take control of themselves largely independently of any AMD designed CU based VRS.

For whatever reason MS have been touting their "patented VRS", which perhaps indicates it wasn't available from AMD at the time, or that maybe AMD have "bought in" on MS's solution (at least for RDNA 2). I'm hoping AMD will give a deep dive on RDNA 2 pretty soon ...

Or perhaps this "one thing" the PS5 missing from RDNA2 is quad-rate INT8 and octo-rate INT4 processing for neural network inferencing. I haven't heard Sony talking about it. But I'm guessing it would be harder to take away these capabilities that are embedded into the ALUs themselves.

So far AMD haven't mentioned it for RDNA 2, and MS's own comments indicated packed 8 and 4 bit int is their own customisation. For these reasons I've not been considering it as the "one thing", as I think that's compared to "PC RDNA 2" rather than XSX which like PS5 is somewhat customised.
 
To me it seems that the PS5 Geometry Engine (as described in the patent) is integrated into the traditional geometry pipeline. My suspicion is that when work started on it, it was being added/integrated into the RDNA1 CUs being developed at the time. And because of that big customization, only a very limited set of RDNA2 features were later back-ported to the PS5 CUs. It may be that the PS5 GPU is, feature-wise, more like RDNA 1.2 rather than RDNA 1.9.
Same concern. I think Sony had to wait and develop a solution that is both BC capable and in line with these important features.

Perhaps coincidentally, all the features that MS marketed for XSX and DX12U are unsupported by RDNA1.

Surely not coincidentally.
They know the weak points of the competition.

Which is better though? He is smart not to go further than that.
Just politically correct

I think Sony have probably come up with something that's more or less as good as VRS for VR with foveated rendering, and that crucially they could take control of themselves largely independently of any AMD designed CU based VRS.

For whatever reason MS have been touting their "patented VRS", which perhaps indicates it wasn't available from AMD at the time, or that maybe AMD have "bought in" on MS's solution (at least for RDNA 2). I'm hoping AMD will give a deep dive on RDNA 2 pretty soon ...



So far AMD haven't mentioned it for RDNA 2, and MS's own comments indicated packed 8 and 4 bit int is their own customisation. For these reasons I've not been considering it as the "one thing", as I think that's compared to "PC RDNA 2" rather than XSX which like PS5 is somewhat customised.
Hope you're right.
Maybe Sony & AMD developed something equivalent (more or less) ... but in that case, why not say so?

Will wait & see
 
For whatever reason MS have been touting their "patented VRS", which perhaps indicates it wasn't available from AMD at the time, or that maybe AMD have "bought in" on MS's solution (at least for RDNA 2). I'm hoping AMD will give a deep dive on RDNA 2 pretty soon ...
Turns out, after a lot of digging, this is their HoloLens VRS patent. They have been working on it for a long while; foveated rendering and eye tracking are features of the upcoming HoloLens 3, due in a few years.

They may have chosen this patent for future expansion should they ever decide to bring VR to Xbox.
 
Simple, SSDs have a higher I/O bandwidth than HDDs do, but of course it isn't going to match the speed of VRAM. More importantly, why are you concerned about SSDs not being able to match the speeds offered by VRAM in the case of streaming? In the near future, SSDs will be capable of streaming 3 raw, uncompressed 24-bit 4096x4096 textures per 16.6 ms frame, which is multiple times more texel density than 4K resolution itself is capable of displaying, so I'm failing to see how, from your perspective, I/O traffic will be an issue at all ...
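For a rough feel of the arithmetic behind that figure, here's a minimal sketch of the numbers (the three-textures-per-frame and 24-bit assumptions come from the post above; the 60 fps frame budget is just an illustrative assumption):

```cpp
// Back-of-the-envelope check of the streaming figure above: three raw,
// uncompressed 24-bit (3 bytes/texel) 4096x4096 textures every 16.6 ms frame.
#include <cstdio>

int main() {
    const double texels      = 4096.0 * 4096.0;      // ~16.8M texels per texture
    const double bytesPerTex = 3.0;                   // 24-bit RGB, no alpha
    const double perTexture  = texels * bytesPerTex;  // ~50.3 MB
    const double perFrame    = 3.0 * perTexture;      // ~151 MB per frame
    const double frameTime   = 0.0166;                // assumed 60 fps budget, in seconds
    const double gbPerSecond = perFrame / frameTime / 1e9;

    std::printf("Per texture: %.1f MB\n", perTexture / 1e6);
    std::printf("Per frame  : %.1f MB\n", perFrame / 1e6);
    std::printf("Sustained  : %.1f GB/s of raw I/O\n", gbPerSecond);
    return 0;
}
```

That lands around 9 GB/s of sustained raw reads, which seems to be the ballpark the poster has in mind for near-future NVMe drives.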

Also, ML isn't this magical change in the paradigm of data compression that you think it is, something that will somehow revolutionize our current texture streaming systems. It's just another data compression method with its own faults.

That's nice, and imagine how many lower-res textures an SSD can stream if machine learning is used to up-res them. Aside from that, we are talking about the consoles launching this Nov, not the future after that. We also already have higher resolution panels than 4K, and gamers continue to want faster and faster frame rates. So there is never enough speed when pushing forwards.


Well, I can already see a downside, because using ML isn't anywhere near the lossless conversion process they seem to hint at. Sometimes using ML to do texture upscaling will noticeably diverge from the original texture.

By comparison, doing texture compression as offline preprocessing and then using the texture sampler's hardware block to do the decompression will get you a near-ground-truth result, close to the quality of the uncompressed texture. This way developers can keep the signal-to-noise ratio in check, controlling the quality/data-loss tradeoff through the compression formats they use. Best of all, developers don't have to train any black-box models to get texture compression right. Texture compression is so much simpler to deal with, and you don't burn shader ALU operations either, which could be spent on more useful things ...
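To put some rough numbers on that tradeoff, here's a minimal sketch comparing a single 4096x4096 texture stored uncompressed as RGBA8 against two common block-compressed formats, BC1 (8 bytes per 4x4 block) and BC7 (16 bytes per 4x4 block). The texture size and the RGBA8 baseline are illustrative assumptions, not anything taken from the posts above:

```cpp
// Rough size comparison for a single 4096x4096 texture: uncompressed RGBA8
// versus BC1/BC7 block compression (both formats use 4x4 texel blocks).
#include <cstdint>
#include <cstdio>

// Bytes occupied by a width x height texture compressed with 4x4 blocks
// of `bytesPerBlock` bytes each (BC1 = 8, BC7 = 16).
static uint64_t bcSize(uint32_t width, uint32_t height, uint32_t bytesPerBlock) {
    const uint64_t blocksX = (width + 3) / 4;
    const uint64_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * bytesPerBlock;
}

int main() {
    const uint32_t w = 4096, h = 4096;
    const uint64_t rgba8 = uint64_t(w) * h * 4;  // uncompressed, 4 bytes/texel
    const uint64_t bc1   = bcSize(w, h, 8);      // 0.5 bytes/texel
    const uint64_t bc7   = bcSize(w, h, 16);     // 1 byte/texel

    std::printf("RGBA8: %5.1f MB\n", rgba8 / 1e6);
    std::printf("BC1  : %5.1f MB (%.0f:1)\n", bc1 / 1e6, double(rgba8) / double(bc1));
    std::printf("BC7  : %5.1f MB (%.0f:1)\n", bc7 / 1e6, double(rgba8) / double(bc7));
    return 0;
}
```

The ratios are fixed by the format (8:1 and 4:1 here) and the decode happens in the texture units, which is the "no burned ALU" point being made above.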

Sure, you could do inferencing on a 128x128 input and output a 4096x4096 result, but it isn't going to be pretty. You could fix these quality issues by increasing the input resolution of the textures, but then you'd have to give up on using a small amount of data anyway, negating your initial premise of the advantage behind using ML. We also can't guarantee that a trained model will consistently work the way we expect it to, compared to a hardware decompression block, which will give us more consistent results ...
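And just to quantify the example sizes in that paragraph (128x128 in, 4096x4096 out):

```cpp
// How much of a 4096x4096 output an inference step would have to invent
// when fed only a 128x128 input.
#include <cstdio>

int main() {
    const double inSide   = 128.0;
    const double outSide  = 4096.0;
    const double perAxis  = outSide / inSide;                        // 32x per axis
    const double texelFac = (outSide * outSide) / (inSide * inSide); // 1024x overall

    std::printf("Upscale factor: %.0fx per axis, %.0fx in texels\n", perAxis, texelFac);
    std::printf("Invented data : %.2f%% of the output\n",
                100.0 * (1.0 - 1.0 / texelFac));
    return 0;
}
```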

In conclusion, it's pretty hard to beat 2+ decades' worth of optimizations behind texture samplers ... (even the Larrabee prototypes had texture sampling hardware, and future GPUs will still have it despite carrying less graphics state than their predecessors did)

It's the same for texture compression, as it can leave artifacts. Also, as we all know, objects in the distance do not need the same detail as objects closer to you. So even if the upscaling diverges from the original, there is a large portion of the game world it won't affect, and the bandwidth and storage can be used elsewhere in the frame to give better results.

Every day new technology is developed and the old ways are turned on their head. You say the result won't be pretty going from a 128x128 texture to 4096x4096, but you can pick any texture size in between that will give a good end result. Any of them would be a decrease in bandwidth needs and storage size.
 
I think Sony have probably come up with something that's more or less as good as VRS for VR with foveated rendering, and that crucially they could take control of themselves largely independently of any AMD designed CU based VRS.

They will definitely have hardware-accelerated foveated rendering / Region of Interest (ROI) adjustment. Though, like VRS in the beginning, they may want to talk about that feature when PSVR2 comes out. It's not that Sony couldn't have this feature available for non-VR games, but my guess is it'll be more oriented towards VR.
It could be aimed, for example, at adjusting rendering resolution according to "screen zones" while focusing on higher performance, instead of shader LOD. Adjusting rendering resolution levels is probably more "aggressive" than VRS with regard to image quality impact, but it could be much more performant in a VR headset that does eye tracking.



For whatever reason MS have been touting their "patented VRS", which perhaps indicates it wasn't available from AMD at the time, or that maybe AMD have "bought in" on MS's solution (at least for RDNA 2). I'm hoping AMD will give a deep dive on RDNA 2 pretty soon ...
Microsoft's mention of "Patented VRS" does suggest some form of exclusivity, yes.
 
Turns out, after a lot of digging, this is their HoloLens VRS patent. They have been working on it for a long while; foveated rendering and eye tracking are features of the upcoming HoloLens 3, due in a few years.

They may have chosen this patent for future expansion should they ever decide to bring VR to Xbox.
HoloLens will use it, but that wasn't why it was developed, nor even the first product it will come to.
 
XSX does have ML support. We've known this for months ...

And it also exposes this via its DirectML API, which we've known for months.
Our DirectX checker tests this feature. RDNA 1 supports DirectML feature level 2.
So by default, any and all RDNA devices support ML, if we're discussing a hardware requirement.
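Not that checker tool itself, obviously, but for reference here's a minimal sketch of how an application can query the highest DirectML feature level a device supports, assuming the DirectML 1.1+ SDK (where CheckFeatureSupport and the feature-level query were introduced); error handling is omitted:

```cpp
// Minimal sketch: query the highest DirectML feature level supported by the
// default adapter. Assumes the DirectML 1.1+ SDK; all error handling omitted.
#include <d3d12.h>
#include <DirectML.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> d3d12Device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&d3d12Device));

    ComPtr<IDMLDevice> dmlDevice;
    DMLCreateDevice(d3d12Device.Get(), DML_CREATE_DEVICE_FLAG_NONE,
                    IID_PPV_ARGS(&dmlDevice));

    // Ask the runtime which of these levels is the highest one it supports.
    const DML_FEATURE_LEVEL requested[] = {
        DML_FEATURE_LEVEL_1_0, DML_FEATURE_LEVEL_2_0, DML_FEATURE_LEVEL_2_1,
    };
    DML_FEATURE_QUERY_FEATURE_LEVELS query = {};
    query.RequestedFeatureLevelCount = sizeof(requested) / sizeof(requested[0]);
    query.RequestedFeatureLevels     = requested;

    DML_FEATURE_DATA_FEATURE_LEVELS support = {};
    dmlDevice->CheckFeatureSupport(DML_FEATURE_FEATURE_LEVELS,
                                   sizeof(query), &query,
                                   sizeof(support), &support);

    std::printf("Max DirectML feature level: 0x%X\n",
                static_cast<unsigned>(support.MaxSupportedFeatureLevel));
    return 0;
}
```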
 
What's "12TF+EQUIVALENT 13TF RT WORKLOAD VIA PARALLEL PROCESSING"?

Sounds like someone threw a bunch of terms they don't really understand into one sentence.
 
It's a butchered interpretation of the following (DF):
"Without hardware acceleration, this work could have been done in the shaders, but would have consumed over 13 TFLOPs alone," says Andrew Goossen. "For the Series X, this work is offloaded onto dedicated hardware and the shader can continue to run in parallel with full performance. In other words, Series X can effectively tap the equivalent of well over 25 TFLOPs of performance while ray tracing."


....whatever that means. ಠ_ಠ

Maybe he's talking about doing pure compute (ALU) in parallel with RT (TEX). ¯\_(ツ)_/¯ Sounds like typical MS magicarp messaging.
 
What's "12TF+EQUIVALENT 13TF RT WORKLOAD VIA PARALLEL PROCESSING"?

Sounds like someone threw a bunch of terms they don't really understand into one sentence.
The statement here is that doing the intersection tests in compute would need 13 TF of power to equal the dedicated hardware units.
So while the RT units are intersecting and working on the next intersection, the compute pipeline isn't stopped from doing something else, so it all runs in parallel. This is in respect to the inline RT call in a compute shader that was added in DXR 1.1, which lets programmers inline an RT call in the middle of a compute shader, highly desirable compared to having to do everything as a separate call.

at least that was my interpretation of the words
 