AMD: Navi Speculation, Rumours and Discussion [2019-2020]

Well that's obvious, but how's RDNA2 supposed to make Turing obsolete when we don't know how fast its ray tracing will be, let alone whether it lacks features such as Mesh Shaders and tensor cores?

Oh.

I understand this as "since Turing won't be the competitor, there's no point in comparing RDNA2 to Turing", with no notion of speed. Even if RDNA2 is faster than Turing, it doesn't mean a lot.
 
Why are tensor cores suddenly a must-have feature for modern GPUs? They're only now providing some meagre benefit through a shoehorned feature, since their intended use (denoising) isn't available in games.
 
Is Navi 3X coming somewhat soon on the heels of Navi 2X?
Why are tensor cores suddenly a must-have feature for modern GPUs? They're only now providing some meagre benefit through a shoehorned feature, since their intended use (denoising) isn't available in games.
It could be a temporary solution. Perhaps no other researched method achieved similar results.
 
Why are tensor cores suddenly a must-have feature for modern GPUs? They're only now providing some meagre benefit through a shoehorned feature, since their intended use (denoising) isn't available in games.
Might just be future-proofing.
You didn't need DX11 cards for a very long time either. Honestly, we didn't start getting DX11 games until 2013+ when the consoles switched over, but everyone was buying DX11 cards as far back as '07-'08.

If the new generation of consoles does support some form of ML in games, then it would be a good addition to have now, as the generation will last six years.
Like we see with ray tracing on consoles, we can now expect to see a lot more use out of the feature in the PC space.
 
Why are tensor cores suddenly a must-have feature for modern GPUs? They're only now providing some meagre benefit through a shoehorned feature, since their intended use (denoising) isn't available in games.
It has to do with DirectML, the latest DLSS advancements, and whatever MS has been showing on this lately.
 
It has to do with DirectML, the latest DLSS advancements, and whatever MS has been showing on this lately.
Could you point to what MS has been showing, or to the DirectML tensor stuff? DLSS by all accounts seems more like "we have to find some use for the tensor cores on gaming products too".
 
Could you point to what MS has been showing, or to the DirectML tensor stuff? DLSS by all accounts seems more like "we have to find some use for the tensor cores on gaming products too".
The latest version of DLSS (2.0?) looks like it's of actual benefit, though.
 
The latest version of DLSS (2.0?) looks like it's of actual benefit, though.
Oh, of course it's a lot better than it was, but whether it's a benefit or not depends on what one looks for. There are still plenty of problems even with the latest DLSS builds, with ringing artifacts, broken DoF, etc., but this probably isn't the right thread for that discussion.
 
Could you point to what MS has been showing, or to the DirectML tensor stuff? DLSS by all accounts seems more like "we have to find some use for the tensor cores on gaming products too".
Most of the focus, I suspect, will be around graphics (a largely solved case for AI), but there are other uses we are getting increasingly better at: audio processing, natural language processing, interpolation, etc.

The puppy AI demo is at around the 43-minute mark:
- no animation for the dog
- completely physics-based
- trained to walk, etc.
- all the fumbling is a result of training artifacts

50 dogs at around the 43-45 minute mark: colliding with each other, bodies interacting, dogs falling over, etc.
Fairly intensive, I suspect. Without tensor cores, it would be painful to keep the frame rate up alongside AAA graphics.
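For a rough picture of what that per-frame workload looks like, here's a minimal sketch of batched policy inference driving physics-based characters. Everything in it is an assumption for illustration (network shape, state/action sizes, plain NumPy instead of tensor hardware); it's not how the demo is actually implemented.

```python
import numpy as np

# Hypothetical sizes: each dog's joint angles/velocities as state,
# joint torques as the policy's output. All values are illustrative.
NUM_AGENTS = 50      # dogs on screen
STATE_DIM  = 64      # per-agent physics observation fed to the policy
HIDDEN_DIM = 128
ACTION_DIM = 24      # per-agent joint torques

rng = np.random.default_rng(0)
# Stand-ins for trained policy weights (a small two-layer MLP).
W1 = rng.standard_normal((STATE_DIM, HIDDEN_DIM)).astype(np.float32) * 0.1
W2 = rng.standard_normal((HIDDEN_DIM, ACTION_DIM)).astype(np.float32) * 0.1

def policy_step(states: np.ndarray) -> np.ndarray:
    """Run one inference step for all agents at once.

    states:  (NUM_AGENTS, STATE_DIM) physics observations.
    returns: (NUM_AGENTS, ACTION_DIM) torques handed back to the physics sim.
    """
    hidden = np.maximum(states @ W1, 0.0)   # ReLU layer
    return np.tanh(hidden @ W2)             # bounded torque outputs

# One frame: gather states from the physics engine, infer, apply torques.
states = rng.standard_normal((NUM_AGENTS, STATE_DIM)).astype(np.float32)
torques = policy_step(states)
print(torques.shape)  # (50, 24), recomputed every frame alongside rendering
```

The whole batch boils down to a couple of small matrix multiplies per frame, which is exactly the kind of work dedicated matrix units are meant to absorb so it doesn't compete with shading.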

The other demo is about super resolution.

Another example: AI Agents made in Unity.
 
Oh.

I understand this as "since Turing won't be the competitor, there's no point in comparing RDNA2 to Turing", with no notion of speed. Even if RDNA2 is faster than Turing, it doesn't mean a lot.
With a 50% more efficient architecture, AMD could at last have reached Nvidia's Turing efficiency and maybe a little more (maybe they included concurrent integer operations in the shader cores, like Turing?), setting aside the differences in node efficiency. We'll see whether Ampere is really much more efficient than Turing apart from the gains inherent in going to 7nm (30%?), and whether Nvidia maintains the efficiency crown even before AMD launches RDNA 2.
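As a rough sanity check of that reasoning, here is the arithmetic spelled out; every figure is the speculative one from this post (roughly equal perf/watt today, ~30% from the node, +50% for RDNA 2), not measured data.

```python
# Back-of-the-envelope perf/watt arithmetic using the speculative figures above.
rdna1_7nm     = 1.00   # normalize RDNA 1 (7nm) perf/watt to 1.0
turing_12nm   = 1.00   # Turing (12nm) lands at roughly the same perf/watt at the wall
node_gain_7nm = 1.30   # assume ~30% of RDNA 1's figure comes from the 7nm node

# Architecture-only comparison: strip the assumed node advantage out of RDNA 1.
rdna1_arch = rdna1_7nm / node_gain_7nm   # ~0.77x of Turing's architectural efficiency
rdna2_arch = rdna1_arch * 1.50           # claimed +50% perf/watt for RDNA 2

print(f"RDNA 2 vs Turing, architecture only: {rdna2_arch / turing_12nm:.2f}x")  # ~1.15x
```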
 
With a 50% more efficient architecture, AMD could at last have reached Nvidia's Turing efficiency and maybe a little more (maybe they included concurrent integer operations in the shader cores, like Turing?), setting aside the differences in node efficiency. We'll see whether Ampere is really much more efficient than Turing apart from the gains inherent in going to 7nm (30%?), and whether Nvidia maintains the efficiency crown even before AMD launches RDNA 2.
Erm, no. RDNA can be as efficient as Turing, see for example https://www.techpowerup.com/review/palit-geforce-rtx-2080-super-gaming-pro-oc/29.html - some models are doing worse, but then there's the RX 5700, which is actually more efficient than NVIDIA's similarly performing RTX 2060 Super. Of course Turing is on an older process, but that's the reality of where it is, so a hypothetical 7nm Turing is irrelevant.

Thus, if NVIDIA's next gen improves efficiency by under 50% (including gains from everything: process, architecture, etc.), RDNA2 should end up more efficient, at least on some models.
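Put as rough numbers (purely illustrative, starting from the TechPowerUp result above where the RX 5700 and RTX 2060 Super land at roughly the same perf/watt):

```python
# Both cards normalized to today's roughly equal perf/watt.
rx5700   = 1.00   # RDNA 1
rtx2060s = 1.00   # Turing (per the TechPowerUp figures linked above)

rdna2 = rx5700 * 1.50                     # AMD's claimed +50% perf/watt
for total_gain in (1.30, 1.50, 1.70):     # hypothetical total next-gen NVIDIA gains
    next_gen = rtx2060s * total_gain
    print(f"NVIDIA +{total_gain - 1:.0%}: RDNA 2 at {rdna2 / next_gen:.2f}x its perf/watt")
# Any total gain (process + architecture) below +50% leaves RDNA 2 ahead.
```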
 
Erm, no. RDNA can be as efficient as Turing, see for example https://www.techpowerup.com/review/palit-geforce-rtx-2080-super-gaming-pro-oc/29.html - some models are doing worse, but then there's the RX 5700, which is actually more efficient than NVIDIA's similarly performing RTX 2060 Super. Of course Turing is on an older process, but that's the reality of where it is, so a hypothetical 7nm Turing is irrelevant.

Thus, if NVIDIA's next gen improves efficiency by under 50% (including gains from everything: process, architecture, etc.), RDNA2 should end up more efficient, at least on some models.

I meant that RDNA 1, without the node effect, still wasn't as efficient as Nvidia's, purely from an architecture point of view. They were at similar perf/watt with a node difference. Now things get more interesting, because if RDNA 2 gets a 50% boost and Nvidia also improves "only" 50% in total (architecture plus jumping to the same node as AMD), they will at last be similar in perf/watt considering only their architectures.
 
Navi3X is being accelerated to meet the targets of a well-known handset manufacturer.

Is there additional information on AMD's agreement with Samsung, or is this another product? It wasn't described as incorporating a future main-line product, and it sounded like Samsung would be working on the IP in ways that were distinct from semi-custom, which has a more well-known history of cross-pollination with mainline.
 