AMD: RDNA 3 Speculation, Rumours and Discussion

We're not talking about AMD competing in RT though. We're talking about them reducing the performance hit of ray tracing by (potentially) accelerating more of the RT pipeline. I mean, they literally told us they have enhanced the ray tracing capabilities of each CU. Presumably they managed to achieve some benefit from this and it wasn't a complete waste of time?
Looking through the slides here:


I can't find such a statement. Was this stated at some other presentation?
 
Looking through the slides here:


I can't find such a statement. Was this stated at some other presentation?
It's not in the slides.

 
It's worth noting that there's a picture on that page:

[attached image: 2022-06-10_4-20-55.png]


that implies stacking. The rumours about "stacking coming later with 3D V-Cache" seem to be missing this implication.
 
It would presumably be impossible to implement frame generation without AI.
Disagree. We could take Oculus Timewarp as an example of frame generation (predating Ada), which works without any ML approach and is even cheap to calculate.
Without hw ML, reconstruction technologies are going to perform slower or with worse quality.
If they are slower, then all other tasks are faster, because the chip area would be used for general compute power instead of specialized HW which, so far, is only useful for one task that takes little time.
Don't you think your excitement about ML applications is more a result of inflated marketing than of actual demand for it in games?
Remember, ML acceleration was not requested by game devs at all, nor do they use it now.

It's a bit like going to the shop to buy a car, and they sell you this for twice the price because maybe one day you'll need the extra feature:
[attached image: 1667144450523.png]

Now if AMD is holding off on adding ML cores until there is actual demand for them, I can only thank them for staying on point and ignoring pointless hype.
If they do add them, this does not necessarily confirm the demand; it may still only confirm the power of inflated marketing, making people think they need it because everybody talks about it.
Personally I'm perfectly fine with AMD's upscaling. I've used it in several games now on my old GPU, and it just works.
Regarding frame generation, this can be done without ML as well, if really needed, which I hope is not the case. Texture space shading would give the same perf advantages without artifacts, RT could be done at lower res and upscaled, etc.

With Nvidia and Intel both pushing hardware sorting, AMD will be in a tough spot if they stubbornly stick to SIMD traversal and leave it to devs to figure out.
Well, the problem is AMD does not give us the option to figure it out. We can't implement our own traversal, because intersection instructions are not exposed.
Which hints they do not plan to stick with that, and RDNA3/4 will get their own traversal units.
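To illustrate what "our own traversal" would mean in practice, here is a rough CPU-side sketch (invented node and ray layouts, not AMD's ISA or any real API): the box test is roughly the piece a hardware intersection instruction covers, while the surrounding stack loop is the branchy part that currently runs as plain shader math.

```cpp
// Minimal sketch of hand-written BVH traversal. All types and layouts are
// invented for illustration; this is not AMD's hardware format or API.
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

struct Ray  { float ox, oy, oz, dx, dy, dz, tMax; };
struct Node {                        // hypothetical 2-wide BVH node
    float   boxMin[2][3], boxMax[2][3];
    int32_t child[2];                // >= 0: inner node index, < 0: ~triangleIndex
};

// Slab test: roughly the work a hardware intersection instruction would do.
static bool rayBoxHit(const Ray& r, const float mn[3], const float mx[3]) {
    float t0 = 0.0f, t1 = r.tMax;
    const float o[3] = { r.ox, r.oy, r.oz }, d[3] = { r.dx, r.dy, r.dz };
    for (int a = 0; a < 3; ++a) {
        float inv = 1.0f / d[a];
        float tn = (mn[a] - o[a]) * inv, tf = (mx[a] - o[a]) * inv;
        if (tn > tf) std::swap(tn, tf);
        t0 = std::max(t0, tn);
        t1 = std::min(t1, tf);
        if (t0 > t1) return false;
    }
    return true;
}

// The branchy part: a short stack plus data-dependent control flow.
// On SIMD hardware the lanes of a wave diverge here, which is the usual
// argument for moving this loop into dedicated traversal units.
int traverse(const std::vector<Node>& nodes, const Ray& ray) {
    int stack[64], sp = 0, hit = -1;
    stack[sp++] = 0;                                  // start at the root
    while (sp > 0) {
        const Node& n = nodes[stack[--sp]];
        for (int c = 0; c < 2; ++c) {
            if (!rayBoxHit(ray, n.boxMin[c], n.boxMax[c])) continue;
            if (n.child[c] < 0) hit = ~n.child[c];    // leaf: a real tracer would
                                                      // run a ray/triangle test here
            else if (sp < 64)   stack[sp++] = n.child[c];
        }
    }
    return hit;   // index of some intersected leaf, or -1; nearest-hit logic omitted
}
```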
 
Well, the problem is AMD does not give us the option to figure it out. We can't implement our own traversal, because intersection instructions are not exposed.
 
Disagree. We could take Oculus Timewarp as an example of frame generation (predating Ada), which works without any ML approach and is even cheap to calculate.

Doesn't timewarp work by internally rendering a larger FoV than what the player can see and then using the headset motion to change the player view within that already rendered frame to produce a new frame for the user based on their new head position?

If so, that's very different from Nvidia's frame generation. DLSS 3 performance mode is already rendering only 1 out of 8 pixels, with the rest being AI generated. While that may be possible using other methods, it's unlikely to be so with similar, or even acceptable, image quality. What happens as AI improves and there's an even larger proportion of generated frames? We've already had nAo on here talking about that being the direction of the industry, up to and including 100% AI-generated frames. There's no way AMD is getting away with ignoring this paradigm shift.
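For reference, the 1-in-8 figure follows from simple arithmetic, assuming performance mode renders at half resolution per axis and frame generation supplies every second frame entirely:

$$\underbrace{\tfrac{1}{2} \times \tfrac{1}{2}}_{\text{upscaling}} \times \underbrace{\tfrac{1}{2}}_{\text{frame generation}} = \tfrac{1}{8} \text{ of displayed pixels actually rendered}$$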
 
Vega was a buggy iteration of a GPU architecture dating back to 2012. Hardly a relevant comparison to a well-functioning one dating back to only 2020.


It’s difficult to say what feature would be equivalent. Maybe programmable shaders?

Let's wait and see what SER actually delivers. But keep in mind this is Nvidia's 3rd iteration of its RT core. I don't expect AMD's first iteration to compete, if they even decide to add any in the first place. It's entirely possible they just keep all processing in the shader core.
So you're saying more than just process has something to do with performance?


But that's in direct contradiction to your previous position.
 
What bugs are you talking about?
Primitive shaders and the draw-stream binning rasterizer, which I remember never being enabled or working. Later GPUs finally got them working.

So you're saying more than just process has something to do with performance?


But that's in direct contradiction to your previous position.
I never said process was the only factor, just a primary one.
 
The most important question nobody has asked yet: will we get a whitepaper for RDNA3? For RDNA1 we did, but for RDNA2 we didn't. I feel like publishing it might be at the bottom of some poor person's action list and will remain there until the end of time.
 
Doesn't timewarp work by internally rendering a larger FoV than what the player can see and then using the headset motion to change the player view within that already rendered frame to produce a new frame for the user based on their new head position?
I guess so. But if you can change the camera position from a static image, and have a way to deal with the resulting disocclusion artifacts well enough, you can just add in motion vectors to do the same to extrapolate in time, or to interpolate and mix from two source frames.
Two sources would also help with disocclusion, as one source likely always has the information missing from the other.
I have no doubt a non-ML solution is possible, and if devs take a hand in it, this would also eliminate problems like figuring out what counts as a HUD element, for example.
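As a concrete (and heavily simplified) sketch of that idea, here is what one-frame extrapolation from motion vectors might look like on the CPU; buffer names and layouts are made up, and the older frame is used as a crude disocclusion fallback:

```cpp
// Minimal non-ML frame extrapolation sketch: push the last rendered frame
// forward along per-pixel motion vectors, then fill disocclusion holes from
// the frame before that. Layouts are invented for illustration, and all
// frames are assumed to share one resolution.
#include <cmath>
#include <cstdint>
#include <vector>

struct Frame {
    int w = 0, h = 0;
    std::vector<uint32_t> rgba;                  // packed 8-bit RGBA
    std::vector<float>    motionX, motionY;      // pixels moved since last frame
};

Frame extrapolate(const Frame& prev, const Frame& prevPrev) {
    Frame out;
    out.w = prev.w;
    out.h = prev.h;
    out.rgba.assign(prev.w * prev.h, 0u);
    std::vector<bool> written(prev.w * prev.h, false);

    // Forward reprojection: assume constant velocity and push each pixel
    // of the last frame one more step along its motion vector.
    for (int y = 0; y < prev.h; ++y) {
        for (int x = 0; x < prev.w; ++x) {
            int i  = y * prev.w + x;
            int nx = x + (int)std::lround(prev.motionX[i]);
            int ny = y + (int)std::lround(prev.motionY[i]);
            if (nx < 0 || ny < 0 || nx >= prev.w || ny >= prev.h) continue;
            out.rgba[ny * prev.w + nx] = prev.rgba[i];
            written[ny * prev.w + nx]  = true;
        }
    }

    // Disocclusion holes: crudely fall back to the older frame, which often
    // still contains the background a moving object just uncovered.
    for (int i = 0; i < prev.w * prev.h; ++i)
        if (!written[i]) out.rgba[i] = prevPrev.rgba[i];

    return out;
}
```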
If so, that's very different from Nvidia's frame generation. DLSS 3 performance mode is already rendering only 1 out of 8 pixels, with the rest being AI generated.
Yes, but the argument of improved IQ does not hold if we look at it from another angle: ray tracing, which is exactly the primary reason those solutions exist at all.
To get IQ from RT, we need many samples. DLSS reduces those samples to 1/8th, and it does not 'invent' this information with ML to compensate for the lack.
Thus, to me that's all a tailored marketing campaign in the first place, with contradictions hard to spot even for experts.
Basically you could reduce RT resolution and resolve it, using traditional (re)projection methods, into a frame rendered at (almost) native resolution and FPS.
Quality would be the same or better, but then you have no selling point for proprietary features.
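To make that concrete, a minimal sketch of such accumulation (names are made up; reprojecting the history with motion vectors and rejecting stale pixels are omitted for brevity):

```cpp
// Temporal accumulation sketch: keep a per-pixel running RT result and blend
// a small number of new samples into it each frame, instead of shooting many
// rays per frame. History reprojection/rejection is intentionally omitted.
#include <cstddef>
#include <vector>

struct Color { float r, g, b; };

void accumulate(std::vector<Color>& history,           // running RT result
                const std::vector<Color>& newSamples,  // this frame's few rays
                float alpha = 0.1f)                    // blend-in rate
{
    // Exponential moving average: history converges toward the noisy input
    // over roughly 1/alpha frames, smoothing out per-frame sampling noise.
    for (std::size_t i = 0; i < history.size(); ++i) {
        history[i].r += alpha * (newSamples[i].r - history[i].r);
        history[i].g += alpha * (newSamples[i].g - history[i].g);
        history[i].b += alpha * (newSamples[i].b - history[i].b);
    }
}
```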
We've already had nAo on here talking about that being the direction of the industry, up to and including 100% AI-generated frames. There's no way AMD is getting away with ignoring this paradigm shift.
I'm not sure an NV researcher claiming neural rendering is the future for games is objective proof that this will indeed happen anytime soon.
But just saying. No disrespect from my side. I just think we need to be careful when a single company claims ownership over the gfx innovation of another industry.
We need to remain critical and objective. Currently, DLSS solves a problem which does not really exist: 1080p is still the most widely used resolution, upscaling to 1440p is fine even with trivial filters, and it's not clear if a 22 fps game blown up to look smooth is indeed better than a native 60 fps game with fewer gfx effects.

But again: no disrespect. And it's not that I'm against neural rendering. I'm happy I do not have to work on upscaling on my side.
As said before, personally I'd love to see ML motion blur, for example. Maybe that's one of the next features.
And I'd like some other AI applications even more, e.g. dynamic AI conversation with NPCs. Or it might give us practical large-scene fluid simulations, etc. All of that is in the works, with promising results already shown.
It's just that, to convince me, we need more than just DLSS, and it has to come from more sources than a single company (which currently sells at too high prices), to prove the actual and real demand.
And then HW acceleration is welcome.
 
Primitive shaders and the draw-stream binning rasterizer, which I remember never being enabled or working. Later GPUs finally got them working.


I never said process was the only factor, just a primary one.
Yes you did. I asked you what your reasons were for saying AMD couldn't improve RT performance by more than 2x, and you said process and nothing else.

see

So now we come back around: given a ~2x transistor budget, why can't AMD improve RT performance by more than 2x?
 
Can RDNA actually co-issue tex/intersection and math, or does issuing the instruction stall the math SIMDs for a cycle?

But only looking at intersection is an incredibly limited view. RDNA does traversal via normal math instructions while nvidia and intel offload that to the RT cores. And more problematic than simply contention of resources is that traversal is a very branchy workload that is poorly suited to SIMD hardware.
Proper multi-test synthetic benchmarks would be awesome to find answers to these questions.

Pure RT tests.
Then combine with pure rasterization, texturing, shading, compute etc.
 
Yes you did. I asked you what your reasons were for saying AMD couldn't improve RT performance by more than 2x, and you said process and nothing else.

see

So now we come back around: given a ~2x transistor budget, why can't AMD improve RT performance by more than 2x?
Neither of those posts states or even implies that the manufacturing node is responsible for 100% of GPU improvements. My argument was based primarily on historical precedent, using what Nvidia was able to achieve with a bigger improvement in manufacturing as an additional data point.
 
Neither of those posts states or even implies that the manufacturing node is responsible for 100% of GPU improvements. My argument was based primarily on historical precedent, using what Nvidia was able to achieve with a bigger improvement in manufacturing as an additional data point.
The 9700 Pro and 6800 Ultra less than doubled transistors over the 8500/FX 5900, yet were much more powerful in shader-bound applications. The 6800 Ultra could see a 4x or greater improvement in DX9 games.
 