AMD: RDNA 3 Speculation, Rumours and Discussion

I dunno, the new one has both advantages and disadvantages compared to DLSS.
BTW, I didn't pay attention to frame rates, but when I watched this video again, it became clear to me that the "TXAA" mode simply runs at full resolution, so we are comparing native res + TAA here vs DLSS at half and quarter pixel counts.
It's quite remarkable that even DLSS Performance, at 2x the FPS, beats native + TAA on foliage.
As for aliasing on wires, there are many ways to fix it.
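For illustration, one well-known trick for the wire case is "phone-wire" anti-aliasing: never let thin geometry shrink below about one pixel on screen, and fade its alpha to compensate. A minimal sketch of the idea in Python (the function and parameter names are made up for illustration, not taken from any engine):

```python
# Hypothetical sketch of "phone-wire"-style anti-aliasing for thin geometry:
# never let a wire's projected radius drop below one pixel; fade alpha instead.

def wire_radius_and_alpha(true_radius_m, distance_m, px_per_m_at_1m):
    """Return (draw_radius, alpha) so a wire never gets thinner than ~1 px."""
    projected_px = true_radius_m * px_per_m_at_1m / distance_m  # on-screen radius
    if projected_px >= 1.0:
        return true_radius_m, 1.0            # thick enough: draw as-is
    one_px_in_m = distance_m / px_per_m_at_1m
    # Inflate the geometry to one pixel and dim alpha by the lost coverage.
    return one_px_in_m, projected_px

# Example: a 5 mm wire at 40 m with ~1000 px per metre at 1 m distance.
print(wire_radius_and_alpha(0.005, 40.0, 1000.0))  # -> (0.04, 0.125)
```

The point being that this particular fix is a cheap geometric trick that needs no ML or special hardware at all.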
 
BTW, I didn't pay attention to frame rates, but when I watched this video again, it became clear to me that the "TXAA" mode simply runs at full resolution, so we are comparing native res + TAA here vs DLSS at half and quarter pixel counts.
It's quite remarkable that even DLSS Performance, at 2x the FPS, beats native + TAA on foliage.

Lol, quite the story. Whatever anyone has to say about DLSS 2 and its hardware-acceleration part, all I know is that it gives me really huge performance increases with images that match or exceed native quality using a fraction of the resolution. The DLSS tech hasn't even been out that long, and it's the game changer, along with ray tracing and, to some extent, NVMe drives becoming the norm.

There's a good reason AMD is developing its own DLSS competitor for RDNA3+ GPUs.
 
all I know is that it gives me really huge performance increases with images that match or exceed native quality using a fraction of the resolution.
I am pretty sure that's what most gamers see.
When you're into tech, you start digging into corner cases, because you know that a quarter-res image should not look like a full-res image, right?
But that automatically makes your opinion very biased, since:
1) Gamers don't play with a 10x zoom and a microscope; that's simply not how it happens.
2) TAA is not a reference; current implementations of TAA have tons of drawbacks even with all those pixels at native resolution.
3) People explicitly search for DLSS drawbacks and don't do the same for TAA (since it's the reference quality in their minds).
People often claim they like the sharper image with TAA in CP2077, for example, but then again, they simply don't know that this sharpness is due to sharpening being applied by the devs on top of the TAA. You can easily do the same with DLSS too: just press Alt + F3 in the game and apply whatever level of sharpness you like.
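To make that concrete: sharpening is just a post-filter that can sit on top of any upscaler's output. A minimal unsharp-mask sketch in Python/NumPy (purely illustrative; whatever filter the devs or the NVIDIA overlay actually ship is obviously not this exact code):

```python
import numpy as np

def sharpen(img, amount=0.5):
    """Unsharp mask: img + amount * (img - blur(img)).
    Uses a cheap 5-tap box blur via np.roll to stay dependency-free."""
    blur = (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
                + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0
    return np.clip(img + amount * (img - blur), 0.0, 1.0)

# Works the same on any upscaler's output (TAA, DLSS, ...): the perceived
# "TAA sharpness" can simply be a pass like this applied after the fact.
frame = np.random.rand(1080, 1920).astype(np.float32)  # stand-in for a frame
sharpened = sharpen(frame, amount=0.8)
```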
 
They're not developing a DLSS competitor for RDNA3+ GPUs. They're developing a DLSS competitor for all GPUs.

Then it most likely won't be a competitor in all regards; performance is going to suffer somewhere if there isn't the hardware for it, which RDNA2 and older GPUs lack. Rest assured it won't matter all that much, since GPU availability is kinda non-existent; AMD's RDNA3+ GPUs will be more relevant anyway.
 
On Twitter someone suggested that in the future the way to go might just be a software renderer, with "GPUs" becoming AI upscalers.

The implications are kinda interesting if you really think about it.
 
Then it most likely won't be a competitor in all regards; performance is going to suffer somewhere if there isn't the hardware for it, which RDNA2 and older GPUs lack. Rest assured it won't matter all that much, since GPU availability is kinda non-existent; AMD's RDNA3+ GPUs will be more relevant anyway.
What damn hardware? Just because NVIDIA runs current versions of DLSS on their tensor cores, it doesn't mean matrix crunchers are the optimal hardware to run it, let alone its competitors. Faster tensors don't really seem to help Ampere with it either.
We don't even know whether they'll use any sort of ML with it or not; in the end, an algorithmic solution is always superior if you can match the quality. And even if they do use ML, AMD has been really clear about wanting to bring it to all platforms, which means it will run well on RDNA2. That in turn means that if it's using ML, it's using INT4/INT8, which can run on anything even without support for them at higher speeds (and AMD has some pre-RDNA2 chips with fast INT4/8 too). And we have nothing suggesting RDNA3+ would bring their matrix cores or similar to gaming GPUs.
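To illustrate the "runs on anything" part: quantized INT8 inference boils down to integer multiply-accumulate plus a rescale, and dedicated paths (dp4a, tensor cores) only make that same arithmetic faster. A rough NumPy sketch, with arbitrary made-up shapes and scales:

```python
import numpy as np

def quantize(x, scale):
    """Symmetric INT8 quantization with a per-tensor scale."""
    return np.clip(np.round(x / scale), -128, 127).astype(np.int8)

def int8_linear(x_q, w_q, x_scale, w_scale):
    """One quantized layer: int32 accumulation (the part dp4a/tensor
    paths accelerate), then a float rescale back to real units."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T
    return acc.astype(np.float32) * (x_scale * w_scale)

x = np.random.randn(1, 64).astype(np.float32)
w = np.random.randn(32, 64).astype(np.float32)
y = int8_linear(quantize(x, 0.05), quantize(w, 0.05), 0.05, 0.05)
print(np.abs(y - x @ w.T).max())  # small error vs. the float result
```

Nothing in there requires matrix units; they only change how many of those multiply-accumulates you get per clock.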
 
What damn hardware? Just because NVIDIA runs current versions of DLSS on their tensor cores, it doesn't mean matrix crunchers are the optimal hardware to run it, let alone its competitors. Faster tensors don't really seem to help Ampere with it either.

NV claims you need RTX GPUs to be able to use DLSS 2.0 to its full extent. They say the tensor hardware assists in the process, which means the hardware features help out. Whether they're lying or not isn't the topic here.

We don't even know whether they'll use any sort of ML with it or not; in the end, an algorithmic solution is always superior if you can match the quality. And even if they do use ML, AMD has been really clear about wanting to bring it to all platforms, which means it will run well on RDNA2. That in turn means that if it's using ML, it's using INT4/INT8, which can run on anything even without support for them at higher speeds (and AMD has some pre-RDNA2 chips with fast INT4/8 too). And we have nothing suggesting RDNA3+ would bring their matrix cores or similar to gaming GPUs.

The thing is, AMD hasn't shown anything so far that matches DLSS (with RDNA2). It's the same story as with RT before RDNA2 made its debut here on the forums, just like the discussion you're having now with DLSS. It's very realistic to assume that RDNA2 won't compete on the same level as DLSS 2.0 does on RTX GPUs.

I feel it's the same as the RT discussion/tech: it will be there on RDNA2 (and any other GPU out there), but less performant/capable. AMD might introduce some form of improved acceleration in RDNA3+ GPUs; the same goes for ray tracing, where their next iteration is going to be much better.

Like I said, it won't matter all that much since, well, no one is able to get any of the current GPUs anyway. We will see how RDNA3 stacks up against the competition; it will be their second-gen RT and reconstruction tech, so it's interesting to say the least (and then there's the 100TF leak).

More often than not, a solution that fits all GPUs (even across different IHVs) is most likely not going to be as capable as one built for a specialized hardware variant.
 
NV claims you need RTX GPUs to be able to use DLSS 2.0 to its full extent. They say the tensor hardware assists in the process, which means the hardware features help out. Whether they're lying or not isn't the topic here.

The thing is, AMD hasn't shown anything so far that matches DLSS (with RDNA2). It's the same story as with RT before RDNA2 made its debut here on the forums, just like the discussion you're having now with DLSS. It's very realistic to assume that RDNA2 won't compete on the same level as DLSS 2.0 does on RTX GPUs.

I feel it's the same as the RT discussion/tech: it will be there on RDNA2 (and any other GPU out there), but less performant/capable. AMD might introduce some form of improved acceleration in RDNA3+ GPUs; the same goes for ray tracing, where their next iteration is going to be much better.

Like I said, it won't matter all that much since, well, no one is able to get any of the current GPUs anyway. We will see how RDNA3 stacks up against the competition; it will be their second-gen RT and reconstruction tech, so it's interesting to say the least (and then there's the 100TF leak).

More often than not, a solution that fits all GPUs (even across different IHVs) is most likely not going to be as capable as one built for a specialized hardware variant.

It's relevant to the thread when you try to use it as an argument (and pretty much as the only argument).

Tensor cores do absolutely nothing for DLSS except for speed. You can run the same calculations without matrix crunchers too. And we don't know if matrix crunchers are even optimal for DLSS, let alone any possible competitors.

AMD hasn't shown anything because it's not ready to be shown.

Acceleration is only about speed; you constantly try to argue it would matter for quality. It doesn't. There isn't any "reconstruction hardware".

And again, we don't know whether AMD will even use ML or not; an algorithmic solution is always superior if you can match the quality.
 
And we have nothing suggesting RDNA3+ would bring their matrix cores or similar to gaming GPUs.
I haven't seen a single rumor pointing to RDNA3 getting the matrix multiply functionality we've seen in CDNA, and there's been more than a bunch of RDNA3 rumors out there.
 
No, it can give you better performance at quality x, but it doesn't increase the quality. It may sound like semantics, but it's relevant.
Better performance = better quality. And I'm not even talking about temporal resolution (aka framerate), which is also a part of quality. The higher the performance ceiling, the higher the quality of the graphics can be.
It's not semantics, it's reality.
 
Better performance = better quality. And I'm not even talking about temporal resolution (aka framerate), which is also a part of quality. The higher the performance ceiling, the higher the quality of the graphics can be.
It's not semantics, it's reality.
Quality is a function of both the algorithm itself and the available processing power. Speed tends to help improve quality, but it is not direct causation. A simple counterexample is integer upscaling: it scales perfectly and trivially with whatever resolution, multiplier and parallel resources you throw at it, but inherently offers no IQ increase (see the sketch below).

The reality is more than simply "more X = more Y", to say the least, even if it sounds like arguing semantics. But everyone would most likely agree with you that more processing power means more potential for higher quality.
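A quick demonstration of that integer-upscaling counterexample in NumPy: every output pixel is a verbatim copy of some input pixel, so no multiplier ever adds image information.

```python
import numpy as np

def integer_upscale(img, k):
    """Nearest-neighbour k-x upscale: every output pixel is a verbatim
    copy of some input pixel, so no image information is created."""
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

low = np.arange(4, dtype=np.float32).reshape(2, 2)
up = integer_upscale(low, 3)  # 6x6 output, still only 4 distinct values
assert np.unique(up).size == np.unique(low).size
```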
 
are we back at Vega Timeline?
The what.
Parts taped out.
Some are even dead (hello NV!).
I hope not, but that's what it seems like.
Oh you better hope you have enough money to pay for N31.
I haven't seen a single rumor pointing to RDNA3 getting the matrix multiply functionality we've seen in CDNA
The most neverever thing in AMD client GPUs, yes.
It's a phone/laptop-first IP family, and those things have dedicated GEMM-brrrr piles of dark silicon.
 