AMD: Navi Speculation, Rumours and Discussion [2019-2020]

First comes PR, you mean :) As I said, there's a piece of hardware that isn't really needed, so you have to invent reasons for it to be there (and get the PR machine going with "just buy it" and the other shenanigans we've witnessed in recent years). Pretty sure many gamers honestly believe Jen-Hsun and pals invented raytracing :)

PC gaming is past the marketing stage. Hardware-accelerated raytracing and DLSS (or DL in general) are proven techniques.

You could replace them with universal CUs that also do int4/int8/fp16 operations (again, are tensor cores actually used for DLSS? No one has really researched that, AFAIR), which could do the same work and be used for something else besides (the fp+int unification in Ampere is an example of that sort of consolidation). If we look at another specific ASIC, the RT core, the improvements allegedly made to it were basically negated by the fact that memory/cache bandwidth did not improve by much (not as bad as in P100, for instance, but it's now close to that).
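(For illustration: general-purpose ALUs can already run packed low-precision math without any dedicated tensor hardware. A minimal CUDA sketch using the __dp4a intrinsic, which exists since sm_61; the kernel name and setup are made up for the example.)

```cuda
#include <cuda_runtime.h>

// Each int packs four signed 8-bit values. __dp4a performs a 4-way
// INT8 multiply-accumulate in one instruction on the ordinary ALU
// path, i.e. no tensor cores involved. *out must be zeroed beforehand.
__global__ void dot_int8(const int* a, const int* b, int n4, int* out)
{
    int acc = 0;
    for (int i = threadIdx.x; i < n4; i += blockDim.x)
        acc = __dp4a(a[i], b[i], acc);
    atomicAdd(out, acc); // reduce per-thread partial sums
}
```

The same ALUs go back to serving ordinary shader work when no inference is running, which is the unification argument in a nutshell.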

Why would you dedicate transistors to additional INT8/INT4 functionality when you don't like tensor cores?!
 
Cyberpunk isn't out yet, and we don't know how DLSS behaves in that game either.


The number of games with DLSS 2.0 support is sketchy at best. When it came out, Nvidia claimed a total of 4 games supported DLSS 2.0, but then they lumped all DLSS 1 & 2 titles under the same DLSS moniker, which makes it impossible to know which ones are DLSS 1 and which are 2.
It's even harder to ascertain which titles actually use the tensor cores, as some DLSS implementations didn't use ML inference at all; they were just post-processing filters based on machine learning.


I don't know exactly why we're talking about DLSS in the Navi thread, but if the point was that every RTX card can just use DLSS2 in every game to render at a lower resolution and show similar performance to a more expensive AMD card, then that's not a point at all. DLSS2 (the good one) doesn't have wide enough adoption to make such a claim.
 
PC gaming is past the marketing stage. Hardware-accelerated raytracing and DLSS (or DL in general) are proven techniques.
Proven for what? I'd actually love to have a fully pathtraced game like the original q2pt, but alas it's too computationally intensive even for such primitive geometry and materials and a very modest number of rays and bounces. Paying half the FPS for an assortment of effects that are probably unnoticeable during dynamic gameplay - YMMV, I'd say. In BF5 (I played it only briefly, but still), I doubt any avid shooter connoisseur would choose RT over extra fps.
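(A quick back-of-the-envelope on why it's so intensive, with illustrative numbers rather than anything measured from q2pt:

$$\text{rays/s} \approx W \times H \times \text{spp} \times (1 + \text{bounces}) \times \text{fps}$$

$$1920 \times 1080 \times 1 \times (1 + 2) \times 60 \approx 3.7 \times 10^8\ \text{rays/s}$$

and 1 spp gives a very noisy image, so clean output multiplies that budget several times over, hence the heavy reliance on denoisers.)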

Why would you dedicate transistors to additional INT8/INT4 functionality when you don't like tensor cores?!
Pretty sure its impact is lower than that of a dedicated "tensor core" pathway. Besides, it can be used in a variety of other scenarios, like the forgotten RPM (Rapid Packed Math).
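(For reference, RPM-style packed math just means pushing two FP16 operations through one 32-bit lane. A minimal CUDA sketch of the same idea using the __hfma2 intrinsic; this is the CUDA analogue, not AMD's actual RPM path, and the names are made up.)

```cuda
#include <cuda_fp16.h>

// y = a*x + y on FP16 pairs: each __hfma2 issues two FP16 fused
// multiply-adds through a single 32-bit register lane -- the same
// trick as AMD's Rapid Packed Math, with no tensor core involved.
__global__ void axpy_fp16x2(__half2 a, const __half2* x, __half2* y, int n2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2)
        y[i] = __hfma2(a, x[i], y[i]);
}
```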
 
Proven for what? I'd actually love to have a fully pathtraced game like the original q2pt, but alas it's too computationally intensive even for such primitive geometry and materials and a very modest number of rays and bounces. Paying half the FPS for an assortment of effects that are probably unnoticeable during dynamic gameplay - YMMV, I'd say. In BF5 (I played it only briefly, but still), I doubt any avid shooter connoisseur would choose RT over extra fps.

Proven as an instrument for developers. Bright Memory, Amid Evil and "Deliver Us the Moon" are indie games supporting DXR (and DLSS). When indie games start to adopt a feature, the marketing stage is over. You can build a benchmark course with 10+ raytracing games. Your question of why Ampere supports certain processing blocks is easy to answer: there are games that get accelerated by them.

Pretty sure its impact is lower than that of a dedicated "tensor core" pathway. Besides, it can be used in a variety of other scenarios, like the forgotten RPM (Rapid Packed Math).

GA104 is 392.5 mm² and provides ~180 TOPS. "Big Navi" is somewhere between 400 mm² and 530 mm² and supports only ~90 TOPS. Arithmetic units get cheaper and cheaper with every node shrink, so it makes sense to build specific ASICs for certain (limited) workloads.
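(Taking those numbers at face value, and assuming the 465 mm² midpoint of the rumored Big Navi range, the implied compute-density gap is roughly

$$\frac{180\ \text{TOPS} / 392.5\ \text{mm}^2}{90\ \text{TOPS} / 465\ \text{mm}^2} \approx \frac{0.46}{0.19} \approx 2.4\times$$

so the argument is really about whether that density is worth spending on a workload this narrow.)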
 
Note: this is the last time I'll comment on DLSS in this thread.
The only question that mattered was whether we can just throw DLSS at every current and future game as a general enhancement, which determines whether we should include DLSS results in comparisons with the upcoming RDNA2 cards.
DLSS 2.0 is still heavily reliant on game-to-game optimization, so the answer to that question is no.

All DLSS titles between Nov 2019 and now are DLSS 2.
Everything prior to that, with the exception of Control's launch version, is DLSS 1.0.

Nah. Nvidia is very careful on their site as to where they mention DLSS 2.0 and where they omit the "2.0".
Fortnite, for example, has no mention of DLSS 2.0 in either Nvidia's or Epic's announcement.

And it's very possible to know which is which through the use of your eyes.
:LOL::LOL::LOL:
 
I started this DLSS thing because I was wondering if AMD would have a similar solution, so feature-wise it would be closer to NVIDIA's feature set, since it will get RT as well.
 
I started this DLSS thing because I was wondering if AMD would have a similar solution, so feature-wise it would be closer to NVIDIA's feature set, since it will get RT as well.

I thought they already confirmed they are taking it seriously.
 
I started this DLSS thing because I was wondering if AMD would have a similar solution, so feature-wise it would be closer to NVIDIA's feature set, since it will get RT as well.

Please use one of the other existing threads for that, or start another one if you wish. It shouldn't require two mod warnings for people to stop incessantly posting about Nvidia and DLSS in a Navi thread.
 
I wouldn't be surprised if the 6000 series numbers that AMD showed were for their "gaming" flagship, with a marked-up, slightly faster but much more expensive card waiting in the wings. Essentially exactly what they did with the 5900X and 5950X, and what Nvidia did with the 3080 and 3090.

Either way, if the 256-bit bus rumor is true, those numbers are very impressive.
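For context: assuming ordinary 16 Gbps GDDR6 on that bus (an assumption, the rumor only covers the width), raw bandwidth works out to

$$\frac{256\ \text{bits} \times 16\ \text{Gbps}}{8\ \text{bits/byte}} = 512\ \text{GB/s}$$

well under the 3080's 760 GB/s, so hitting those numbers would imply the performance is coming from somewhere other than raw bandwidth (bigger caches, better compression, or both).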
 