AMD Radeon RDNA2 Navi (RX 6500, 6600, 6700, 6800, 6900 XT)


It'll be really interesting to see how they do this and what the result is. Obviously non-ML upscaling has been around for quite a while and has already been the subject of significant development, both by the console vendors (I'm looking at you, PS4P) and by individual developers. So I'm curious as to what AMD intend to do better.

This does seem to validate earlier comments, though, about it not being a simple matter for other vendors to produce their own ML upscaling solutions. Do people still expect that of Sony and Microsoft this generation, for example, if even AMD aren't going down that route? Or even of individual developers?
 

Wait a little; you will see ML-based solutions arrive from the platform holders, Sony and Microsoft...
 

Are they going to release new console hardware? Maybe slap tensor-like HW onto the current models at no cost to current owners?

Dunno though, it seems console users, at least here, can't live with reconstruction tech.
 

Lol, you don't need tensor cores for ML; there are FP16, INT8 and INT4 instructions inside AMD RDNA 2 GPUs, PC or console, so it will be done on the CUs. Very funny, when at least one of the platform holders already has a solution available to the devs who need to implement it. It is too late for launch games; maybe next year, or 2022, depending on how much it impacts the graphics pipeline.
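
(Just to make concrete what those instructions buy you, a rough illustration of my own, not actual RDNA 2 intrinsics or anything from AMD docs: the building block is a packed dot-product-with-accumulate, and a quantized network layer is basically that op in a loop.)

```cpp
#include <cstdint>
#include <cstddef>

// Scalar model of a hypothetical packed INT8 dot-product-accumulate,
// the kind of mixed-precision op being referred to above: four signed
// 8-bit products summed into a 32-bit accumulator in one operation.
int32_t dot4_i8_accum(const int8_t a[4], const int8_t b[4], int32_t acc) {
    for (int i = 0; i < 4; ++i)
        acc += int32_t(a[i]) * int32_t(b[i]);
    return acc;
}

// A quantized fully connected layer built from that primitive:
// out[j] = sum_k in[k] * w[j][k], with INT8 inputs/weights and INT32
// accumulation (in_dim assumed to be a multiple of 4; dequantization
// and activation omitted for brevity).
void fc_layer_int8(const int8_t* in, const int8_t* weights,
                   int32_t* out, std::size_t in_dim, std::size_t out_dim) {
    for (std::size_t j = 0; j < out_dim; ++j) {
        int32_t acc = 0;
        for (std::size_t k = 0; k < in_dim; k += 4)
            acc = dot4_i8_accum(&in[k], &weights[j * in_dim + k], acc);
        out[j] = acc;
    }
}
```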

Like every software solution, it needs time to implement. ;-)

ML is not exclusive to Nvidia, and if it were a matter of money, Microsoft has much more than Nvidia.

https://www.macrotrends.net/stocks/charts/NVDA/nvidia/cash-on-hand
10.7 billion

https://www.macrotrends.net/stocks/charts/SNE/sony/cash-on-hand
41 billion

https://www.macrotrends.net/stocks/charts/MSFT/microsoft/cash-on-hand
133 billion

And even the 'homeless' Sony has more cash than Nvidia. Microsoft and Sony are big companies, not Joe the homeless...

They just launched the consoles, and the software R&D they have done over the last 5 years (Sony) and 4 years (Microsoft) still needs to be implemented by game developers.
 
And you don't need RT h/w for RT.


Who's that?

Sony have had it available in the SDK for a few months; just wait a little, H2 2021 for a first-party title, or 2022 depending on the impact on the graphics pipeline, and I am sure Microsoft will have something ready soon too.

The same guy, working for a third-party studio, who told me the PS5 SSD would be 8 GB/s for uncompressed data. I teased it on the board before Road to PS5. And this works for reconstruction and anti-aliasing.

This is available to all devs, third party or first party.
 
Why the wait for a year? If it's as easy to integrate as DLSS then it shouldn't take more than a week to add it to a game.
 
Lol, you don't need tensor cores for ML; there are FP16, INT8 and INT4 instructions inside AMD RDNA 2 GPUs, PC or console, so it will be done on the CUs. Very funny, when at least one of the platform holders already has a solution available to the devs who need to implement it. It is too late for launch games; maybe next year, or 2022, depending on how much it impacts the graphics pipeline.

Hm, that's interesting. But doing it on the CUs means it is somehow eating into the raster performance they would otherwise deliver?
I'm told that DLSS does rely on hardware tensor cores, so if consoles can do it without them, that's nice.

Like every software solution, it needs time to implement.

That's true.

And even the 'homeless' Sony has more cash than Nvidia. Microsoft and Sony are big companies, not Joe the homeless...

Hm, I don't think Nvidia is either. :) Also, is that Sony as a whole, or the PlayStation division?
 
You could upscale the framebuffer to 4K like the Shield TV is doing. Haven't tried it with GeForce Now, so I don't know how much latency it introduces.
 
Hm, I don't think Nvidia is either. :) Also, is that Sony as a whole, or the PlayStation division?

Sony as a whole, but the PlayStation division is the biggest division by revenue and profit. I was just explaining that if money were the problem, it would not be one for Sony or Microsoft. This is a matter of time and resources spent on the problem.

Also, I never said this is exactly like DLSS; the acronym is a bit different. And I never said it is as good or better. I know it is cool, and, from what someone told me, better than what they had before.

Nvidia has had some time to refine their solution; it is probably better.
 

Yeah, maybe; I am the wait-and-see kind of person. If AMD or Sony say so, I'll believe it, but it's still too vague. NV is apparently using its local tensor core hardware to achieve the results we get with DLSS. If RDNA2 GPUs do it on the CUs (like they partially do for RT), then I'm wondering how, and whether, that impacts performance.
DLSS isn't just a software solution.
 
This might be a fun thing to compare:
1 month:

Fulfilled:
RTX 3080 - 344
RTX 3090 - 99

Incoming:
RTX 3080 - 123
RTX 3090 - 55


1 week:

Fulfilled:
RX 6800 - 100
RX 6800XT - 25

Incoming:
RX 6800 - 13
RX 6800XT - 9

In Denmark there is also a lot of whining over the availability of the new Xbox/PS consoles... and AMD CPUs.
Seems everything is in tight supply atm.
Crossing my fingers that it doesn't affect servers.
Just ordered 34 x Dell P570 VxRails... less than a week of delivery time... so for me it seems the consumer space is hit way harder than enterprise.

1 Month

Fulfilled:
RTX 3080 - 344
RTX 3090 - 99

Incoming:
RTX 3080 - 123
RTX 3090 - 55


1 Month:

Fulfilled:
RX 6800 - 117
RX 6800XT - 42

Incoming:
RX 6800 - 43
RX 6800XT - 13

Source:
https://www.proshop.de/AMD-Radeon-RX-6000-Series-overview
 
Hm, that's interesting. But doing it on the CUs means it is somehow eating into the raster performance they would otherwise deliver?
I'm told that DLSS does rely on hardware tensor cores, so if consoles can do it without them, that's nice.
NV tensor cores also can only do certain math operations, and program flow is handled on the shader cores as usual. So it's not that different. (There is no small algorithm to implement here, unlike RT - just basic math ops.)
I don't know any details, but I would guess NV supports a 4x4 matrix multiply in one tensor instruction, while AMD only supports a dot product. Really just a guess - I did not look at the instructions.
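
(To make that guess concrete, purely my own sketch and not from either vendor's ISA docs: a "tensor style" instruction would consume whole 4x4 tiles per operation, while a dot-product instruction covers only one row-times-column pair, so the same result takes 16 of them plus the loop structure on the shader core.)

```cpp
// Tensor-core-style granularity: one operation consumes whole 4x4 tiles,
// computing D = A * B + C with the accumulate fused in.
void mma_4x4(const float A[4][4], const float B[4][4],
             const float C[4][4], float D[4][4]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float acc = C[r][c];
            for (int k = 0; k < 4; ++k)
                acc += A[r][k] * B[k][c];
            D[r][c] = acc;
        }
}

// Dot-product granularity: the hardware op covers one row-times-column pair;
// the 16 invocations and the surrounding loops stay on the shader core.
float dot4(const float a[4], const float b[4]) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
}

void mma_4x4_from_dots(const float A[4][4], const float B[4][4],
                       const float C[4][4], float D[4][4]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            float col[4] = { B[0][c], B[1][c], B[2][c], B[3][c] };
            D[r][c] = C[r][c] + dot4(A[r], col);
        }
}
```

Both give the same result; the difference is how much of the work is a single hardware instruction versus shader-core bookkeeping.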
 
Wait a little; you will see ML-based solutions arrive from the platform holders, Sony and Microsoft...

Only if it makes any sense to do so. Just because something is new and popular doesn't make it the best tool for the job. UE4's new TAA gets results similar in quality to DLSS at a lower performance cost.

What many devs really want is a TAA tool that's easy to set up. Today's TAA can be a full-time job of setting up dynamic res, tweaking everything per title, etc. If "Super Resolution" can cut down on that in a performant and good-enough-looking manner, it will be a success.
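
(For a sense of what the "setting up dynamic res" part involves, here is a minimal sketch of my own, not taken from any particular engine: a per-frame controller that nudges the render scale toward a GPU frame-time budget and clamps it to the range the TAA/upscaler is tuned for. The constants are exactly the kind of per-title tuning being described.)

```cpp
#include <algorithm>

// Minimal dynamic-resolution controller sketch (illustrative constants).
struct DynamicResController {
    float renderScale = 1.0f;   // fraction of output resolution per axis
    float minScale    = 0.6f;   // lowest input the upscaler still handles well
    float maxScale    = 1.0f;
    float targetMs    = 16.6f;  // GPU frame budget for 60 fps

    // Nudge the scale toward the budget each frame; small gain avoids oscillation.
    void update(float gpuFrameMs) {
        float error = (targetMs - gpuFrameMs) / targetMs;
        renderScale = std::clamp(renderScale + 0.1f * error, minScale, maxScale);
    }
};
```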
 

Does their TAA really get results similar to DLSS? I'd like to see some comparisons if you can direct me to the right place.
 
NV tensor cores also can only do certain math operations, and program flow is handled on the shader cores as usual. So it's not that different. (There is no small algorithm to implement here, unlike RT - just basic math ops.)
I don't know any details, but I would guess NV supports a 4x4 matrix multiply in one tensor instruction, while AMD only supports a dot product. Really just a guess - I did not look at the instructions.

Hm, I'm no expert on that, but NV says DLSS runs (partially) on the tensor hardware/cores. Obviously it helps greatly with performance; turning a 1080p/1440p image into a 4K one that looks exactly like native 4K is kinda impressive to the untrained eye (99% of users).
 