That RDNA 1.8 Consoles Rumor *spawn*

Just because RDNA2 has sampler feedback doesn't mean that it'll have a robust implementation of tiled resources. It's safe to assume that they still probably haven't found a way to do cheap tile-mapping updates with RDNA2 unless they explicitly state or show otherwise ...
Well MS did file this patent, which is explicitly related to increasing the efficiency of tiled resources via improvements (including potential hardware implementation) in the handling of residency maps.
 
Exactly, even the Xbox One had a strong point (ExecuteIndirect) compared to the PS4, even though it was massively outgunned everywhere else. We shall know what's up in four months' time.


Not going to be all that much drama here. We know which console is more powerful. I guess the interesting part will be how much of that translates into DF wins and how much lands in "not a big deal" territory.
 
If you do it before shipping the game you lose all the advantages. Basically, you lose everything the MS Game Stack manager was talking about.

And it's clearly not an orthogonal issue if up-resing from memory can have a very direct impact on streamed data. That literally goes against the actual definition of what orthogonal means!

MS Game Stack offers a multiplatform set of tools (some for PlayStation too), so it could probably help HDDs too in some capacity, once they're ready to offer it.

What advantages might those be?

And it is an orthogonal issue as far as I'm concerned. How much data you can compress (whether using ML or not) is independent of how much data you can stream from your drive ... (both can impact a texture streaming system, but they're very much separate variables)

That's nice, and imagine how many lower-res textures an SSD can stream using machine learning to up-res them. Aside from that, we're talking about the consoles launching this November, not what comes after. We also already have panels with higher resolution than 4K, and gamers keep wanting faster and faster frame rates. So there is never enough speed when pushing forwards.

Texture compression can be used to achieve the same effect, and increasing resolution means a potential tradeoff against frametimes (or vice versa), so your texel density budget doesn't really change regardless ...

I very much doubt that native 4K will even come close to being the standard for AAA games; they have other important matters like ray tracing to deal with, which will almost certainly make it impossible for them to target very high resolutions or framerates ...

It's the same for texture compression, which can leave artifacts. Also, as we all know, objects in the distance don't need the same detail as objects closer to you. So even if the upscaling diverges from the original, there's a large part of the game world it won't affect, and the bandwidth and storage can be used elsewhere in the frame to give better results.

Every day new technology is developed and the old ways are turned on their head. You say the result won't be pretty going from a 128x182 texture to 4096x4096, but you can pick any texture size in between that gives a good end result. Any of them would decrease bandwidth needs and storage size.

Texture compression can have artifacts, but for the most part developers can control its output more easily than they can control artifacts from ML-based solutions. With ML you're at the mercy of the quality of your training data, while with texture compression you're at the mercy of an algorithm, which is far more reasonable since algorithms are more likely to work in the general case these days ...

Just in case you've been living under a rock, developers have been using mip maps to optimize this distant-texture scenario that you speak of, so ML doesn't really offer anything new here either ...

Well MS did file this patent, which is explicitly related to increasing the efficiency of tiled resources via improvements (including potential hardware implementation) in the handling of residency maps.

That patent only covers a feedback loop system. It doesn't go into the specifics of how the tile mappings are updated in memory, so that's outside this patent's scope. This leads me to believe that RDNA2, just like its predecessors, doesn't actually have a different implementation of how tile mappings are updated ...
 
The statement here is that doing the intersection tests in compute would need 13 TF of power to be equal to the dedicated hardware units.
So while the RT units are working on one intersection and the next, the compute pipeline isn't stopped from doing something else; it all runs in parallel. This is in respect to the inline RT call in a compute shader that was added in DXR 1.1, which enables programmers to inline an RT call in the middle of a compute shader, highly desirable compared to having to do everything as a separate call.

At least, that was my interpretation of the words.

These patents were only recently made public, and I'm thinking they're the final design of RDNA2 CUs.

Each CU has RT units parallel to the SIMD units (which execute shaders) .. that's how I interpret it: 13 (RT) + 12 (shader) = 25 ..

If you've designed your scheduled calls correctly, the scheduler can feed both SIMD and RT units in parallel per CU ..



ref: https://pdfpiw.uspto.gov/.piw?PageN...0692271.PN.%26OS=PN/10692271%26RS=PN/10692271
 

That patent only covers a feedback loop system. It doesn't go into the specifics of how the tile mappings are updated in memory, so that's outside this patent's scope. This leads me to believe that RDNA2, just like its predecessors, doesn't actually have a different implementation of how tile mappings are updated ...

From said patent:

"Tiled resources (also known as partially resident textures or PRTs) can be improved so that these PRTs can be widely adopted while minimizing implementation difficulty and performance overhead for GPUs. These improvements include hardware residency map features and texture sample operations referred to herein as “residency samples,” among other improvements......

......Advantageously, due in part to residency map 305 being small and granular, residency map 305 can be fully implemented into hardware components of graphics processor 120. This has the technical effects of faster, more efficient operation of graphics processors, the ability to hold enhanced residency maps in hardware, and reduced computing resource required to properly render texture data for objects in rendered scenes."

Like I already said, the indirection texture (residency map) and the way it was previously handled was the main problem with PRTs during tile determination. These residency maps not only have to be updated after the required tile has been streamed into memory, but also, and more importantly, sampled in the first place to select a non-resident tile and obtain its address in virtual memory (which is how PRTs enable one to have more resources than one can hold in memory/cache).
 

This "residency map" is just the "MinMip texture" that was described in the D3D sampler feedback spec. I want to know how the hardware will update the tile mappings itself. This patent is pretty useless in trying to describe that process ...

I don't think you've really got much of a grasp on what exactly sampler feedback is or how it interacts with tiled resources, so let me explain. Feedback maps can only be either a committed or a placed resource. A tiled resource is normally created as a reserved resource. The UpdateTileMappings API specifically needs to be called to change the tile mappings of a reserved resource. Just having a feedback map alone won't automatically do this for you, as you seem to keep implying ...

Feedback maps are a totally different type of resource from a reserved/tiled resource, and the two are separate from each other ...
 
What advantages might those be?

Well, they're basically the same as the last time I stated them (in a post of mine which I think you quoted from). Also the same as the ones the MS Game Stack GM himself talked about, quoted by @eastmen (which I think you also replied to).

C'mon bro, no-one has time to go running in circles. ;)

And it is an orthogonal issue as far as I'm concerned. How much data you can compress (whether using ML or not) is independent of how much data you can stream from your drive ... (both can impact a texture streaming system, but they're very much separate variables)

The post you responded to was about one of the potential benefits of real-time up-resing, namely the impact on a texture streaming system of being able to stream smaller textures and ML up-res them in real time. Basically, the direct relationship between a hypothetical ML up-resser and texture streaming requirements.

You responded "And texture streaming is an orthogonal issue altogether", leading me to think you were saying that ML upressing in real time is unrelated to a game's texture streaming requirements.

If what you meant was "streaming requirements are independent of a given drive's capabilities", then yeah. But that, ironically, is independent of the point I was trying to make...

Texture compression can have artifacts, but for the most part developers can control its output more easily than they can control artifacts from ML-based solutions. With ML you're at the mercy of the quality of your training data, while with texture compression you're at the mercy of an algorithm, which is far more reasonable since algorithms are more likely to work in the general case these days ...

I know this quote was directed at eastmen, but I wanted to point out that the training data could easily be uncompressed 24-bit 8K asset textures (or higher, for future compatibility), with the model tuned on a per-game or even per-level or per-material basis (whatever you want). So: static images with definite states. Far more optimal than the situation for DLSS, or MS's own ML HDR for backwards compatibility (ML is everywhere!!), both of which have to work on very dynamic frame buffers.
 
This "residency map" is just the "MinMip texture" that was described in the D3D sampler feedback spec. I want to know how the hardware will update the tile mappings itself. This patent is pretty useless in trying to describe that process ...

I don't think you've really got much of a grasp on what exactly sampler feedback is or how it interacts with tiled resources, so let me explain. Feedback maps can only be either a committed or a placed resource. A tiled resource is normally created as a reserved resource. The UpdateTileMappings API specifically needs to be called to change the tile mappings of a reserved resource. Just having a feedback map alone won't automatically do this for you, as you seem to keep implying ...

Feedback maps are a totally different type of resource from a reserved/tiled resource, and the two are separate from each other ...

By UpdateTileMappings, you mean the mapping of the virtual texture to the tile pool done by the hardware page table?
 
Hmm yea Sony needs to get out in front and deal with this.
Needs? Sony could overclock the PS4 Pro, call that the PS5, and, sweet silence, anybody would buy it without asking any questions.
If they lack VRS they'll just say that their image quality is always maxed out, and every outlet will shit on the XSX for its reduced quality.
Sony doesn't need to do anything.
 
Hmm yea Sony needs to get out in front and deal with this. This will be a burden if they don't deal with it leading up to launch.

Why? Most people that care about this info are console trolls.
 
Articles add up over time. There comes a point where, if enough people are armed with the information and it's never corrected, it becomes knowledge for everyone. It would have an effect if the consoles were the same price, for instance.

I can't imagine most console buyers knowing what RDNA or even a GPU is. They did care about the UE5 demo and the PS5 reveal event, though.

Let console concern trolls be concerned and show COD running on a PS5 next month.
 
I can't imagine most console buyers knowing what RDNA or even a GPU is. They did care about the UE5 demo and the PS5 reveal event, though.
Users get informed and things can shift; that's just part of the way customers are. There's always the person who does the research, and that knowledge travels down too.

I’m just stating my opinion of things. It’s not detrimental but I think it deserves some attention.
 
Depends on what people consider important.

Will the average Joe be more likely to notice a 20% resolution drop and VRS omission, or a 100% increase in loading speed?

I'm certain that Sony are going to smash records this year with the PS5 regardless of its usage of a feature that the competitor uses (and it's likely covered by *other* unique features).
 
Articles add up over time. There comes a point where, if enough people are armed with the information and it's never corrected, it becomes knowledge for everyone. It would have an effect if the consoles were the same price, for instance.
Over time, with the DF articles about multiplatform game performance, this lack (if it's there) can have quite an impact... maybe not huge, but clearly visible... and myself, for instance, this time I'm considering the XSX (last time I didn't give a cent to the Xbox One)...
 