AMD: Navi Speculation, Rumours and Discussion [2019-2020]

So I guess variable rate shading is going to be a standard now?

I'm really not sure it's that useful. Similar results are already obtained by rendering alpha-blended particle effects at lower resolution, and that trick has been around for ages. It looks like crap but has been used as a necessary performance saver. As for temporal downsampling, temporal upsampling with AA etc. already does that very well. Ultimately I'm not sure this is a necessary hardware feature except for a few very specific use cases. The results from Wolfenstein certainly aren't encouraging, but hell, maybe some developer will prove me wrong.
Keep me honest, but isn't a "free" 5-10% improvement a pretty large win?
 
If used correctly, yes. But I doubt every dev will take the time to make it look like a "free" 10% improvement. My guess is that it will become just another graphics setting, and you will see differences between the different modes...
 
Keep me honest, but isn't a "free" 5-10% improvement a pretty large win?

There's absolutely nothing "free" about it; it's not something you just flip a switch on and suddenly your frame time is faster. Devs need to put in the work to support it. Now that I think about it, this is basically the same as the odd reconstruction upsampling done for Quantum Break, only hardware-supported and easier to implement. But Quantum Break didn't actually provide a pixel-for-pixel reconstruction of a 1440p image, and neither will this.

As Rootax pointed out, if this is used it will most likely end up, especially at first, as just another option to lower hardware requirements at the cost of image quality. So, possibly a bit neat for that, but nothing really super interesting.
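To make the "not free, needs dev work" point concrete, here's roughly what the per-draw path looks like on the D3D12 side. This is only a minimal sketch assuming an existing ID3D12Device and ID3D12GraphicsCommandList5; the names (cmdList5, the particle draw) are hypothetical and error handling is omitted:

```cpp
#include <windows.h>
#include <d3d12.h>

void DrawParticlesWithCoarseShading(ID3D12Device* device,
                                    ID3D12GraphicsCommandList5* cmdList5)
{
    // Check whether the driver exposes per-draw VRS at all (Tier 1+).
    D3D12_FEATURE_DATA_D3D12_OPTIONS6 opts6 = {};
    bool vrs = SUCCEEDED(device->CheckFeatureSupport(
                   D3D12_FEATURE_D3D12_OPTIONS6, &opts6, sizeof(opts6)))
               && opts6.VariableShadingRateTier != D3D12_VARIABLE_SHADING_RATE_TIER_NOT_SUPPORTED;

    if (vrs)
    {
        // Shade this draw at one sample per 2x2 pixel block; rasterization still
        // runs at full resolution, only pixel-shader invocations drop.
        cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }

    // ... record the alpha-heavy particle draw here (hypothetical) ...

    if (vrs)
    {
        // Restore full-rate shading for everything that follows.
        cmdList5->RSSetShadingRate(D3D12_SHADING_RATE_1X1, nullptr);
    }
}
```

Even in this trivial form, somebody still has to decide which draws can tolerate coarser shading without looking worse, which is exactly where the per-game tuning effort goes.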
 
Competition between licensable engines (LE) and the desire to maximise the chance that devs keep using an LE rather than developing in-house engines will probably motivate Epic & Unity to build support for lower-precision floats into their engines. Thus eventually all Epic & Unity licensees will have that feature built into their future games.

Don't forget that all the future consoles will obviously support lower-precision floats, as opposed to just the mid-gen, modest-selling PS4 Pro with its install base of likely less than 10 million, so the economic incentive for devs to support this feature will be geometrically greater three years into the next-gen console cycle.
 
Seems 10-18% better than a 580, from much smaller silicon. One would suppose Navi is built for next-gen consoles, and if those are home (stationary) consoles then they're limited by price.

If this really has 20 CUs and is putting out those numbers, then it seems AMD succeeded on price for performance. A 40 CU part would then be about the same performance as an RTX 2080. Of course it's pre-release hardware (supposedly by about 3 months?) and only a single synthetic benchmark is really even relevant (Aztec Ruins high tier, offscreen), so actual performance in more relevant games is a question mark. But, well, if it's true it seems insanely promising for AMD, as far as gaming goes.
 
The 66AF GPU is listed in the Linux version of the AMD GPU drivers as Vega 20
The 66AF in the OpenGL info is just a name, which can be modified in the .inf file. Perhaps the uploader used that method to force the driver to install.
The GFX1010 string in OpenCL can't be modified in those ways, and other information in OpenCL is also inconsistent with Vega 20, so I think GFX1010 should be accurate.
 
The PCI ID does look like Vega 20, but the performance doesn't, and the gfx1010 target and compute unit count look off too. There's an F0 as well, but it doesn't have compute info to look up.
 
Between this demo, DXR support coming to older Nvidia cards, and more, "RTX" doesn't seem quite as important as it once did:

Regardless, after talking it over on Twitter I did hit on a win for variable rate shading: anything hidden behind enough alpha effects can probably have its shading rate lowered without being noticed, which will be rather nice for the heavy-effects moments that tend to drop frames today.
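As a rough sketch of how that alpha-effects idea could work with the screen-space (Tier 2) side of VRS: build a small per-tile image of shading rates from some estimate of particle coverage, then bind it via RSSetShadingRateImage. The coverage buffer, thresholds and helper below are all hypothetical, just to show the shape of it:

```cpp
#include <cstdint>
#include <vector>

// Packed rate values matching D3D12_SHADING_RATE_1X1 / _2X2 / _4X4.
constexpr uint8_t kRate1x1 = 0x0;
constexpr uint8_t kRate2x2 = 0x5;
constexpr uint8_t kRate4x4 = 0xA;

// particleCoverage: one 0..1 value per screen tile, estimated by the effects system.
std::vector<uint8_t> BuildShadingRateImage(const std::vector<float>& particleCoverage,
                                           int tilesX, int tilesY)
{
    std::vector<uint8_t> rates(tilesX * tilesY, kRate1x1);
    for (int i = 0; i < tilesX * tilesY; ++i)
    {
        if (particleCoverage[i] > 0.9f)      rates[i] = kRate4x4; // buried under effects
        else if (particleCoverage[i] > 0.5f) rates[i] = kRate2x2; // partially obscured
    }
    // Upload to an R8_UINT texture (one texel per tile) and bind it with
    // ID3D12GraphicsCommandList5::RSSetShadingRateImage() on Tier 2 hardware.
    return rates;
}
```

The nice part is that the decision of where to shade coarsely can be driven by data the effects system already has, so the cost to the dev is mostly in picking sensible thresholds.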

Putting it all together: are the upcoming Navi cards powering Google's new cloud streaming service? (There are no specific specs.) Was Vega 56 the initial GPU used in the early Xbox or PS5 dev kits? I don't know, but it does seem like Nvidia's "RTX Forever!" line of marketing is finally a miss for a company that's had a lot of PR hits in a row.
 
RT is already doable on current architectures. The question is whether it's dedicated "RT units" or done via the general compute units... I really doubt it will have dedicated RT units like Nvidia; it would take too much space on a SoC for a console (and AMD would have needed to work on this for a long time...).
Who said AMD hasn't worked on it? We have no clue which way AMD is approaching raytracing; all we know is that it's being worked on (and surely has been for a long time, since DXR didn't just pop out of nowhere a year ago when it was made public).
For what it's worth, NVIDIA's RT cores and tensor cores in a TPC (that's 2 SMs) are estimated to take up a bit over 20% more space than the same architecture with plain FP16 in place of tensor cores and no RT hardware. A TPC on TU106 is estimated at about 10.89 mm², of which roughly 0.7 mm² is the RT cores and 1.25 mm² is the tensor cores plus all the other changes the RT and tensor cores together require.
 
The question of how isn't really important. Today's GPUs can already do RT, but the performance loss makes it non-viable in the real world; today, "being able to use RT" really means "being able to use RT without too much loss in performance". It would be interesting to see whether consumer Navi is capable of doing RT as well.
 
Nobody said they didn't work on it. I said I don't think they did for Navi, for a few reasons: I don't believe it's a priority for a mid-range chip (Navi 10), I don't trust RTG's capabilities anymore, etc.

I believe it will be present in what they show as "next gen" in their slides...

It's just a personal and subjective opinion.
 
The only other statement from AMD we have about raytracing was Su saying something along the lines of "doing raytracing in a way that developers can get behind", to paraphrase from memory.

I took this to mean a far more programmable set of shaders than Nvidia's fixed-function BVH traversal units. The tradeoff between specialization for speed and generality is always there, but as Crytek's demo shows, programmability alone can bring incredible speedups. I'd guess they'd use their cone tracing as an early culling/space-skipping technique, which is useful for just about any raytracing application.

Not that Turing can't run cone tracing; it's just that it can't run it on its RT cores. What exactly AMD is doing for Navi, then, is a mystery. But optimizing the normal compute cores for tracing access, whether that's smaller "wavefronts" to allow more divergence or whatever, would be a good idea.
 
I took this to mean a far more programmable set of shaders than Nvidia's fixed-function BVH traversal units.
IIRC, the shading after an intersection has been found is fully flexible with Nvidia.
The BVH traversal and ray intersection are so far where we see Nvidia adding fixed function; the rest of it is compute.
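For anyone who hasn't looked at where that split falls, here's a rough CPU-side sketch (hypothetical types, not any vendor's actual implementation): the traversal loop and the box/triangle tests are the part Turing moves into fixed function, while the shade callback at the end stays ordinary programmable work in either approach. The triangle test is stubbed out to keep it short:

```cpp
#include <algorithm>
#include <functional>
#include <limits>
#include <vector>

struct Ray  { float origin[3]; float dir[3]; };
struct AABB { float lo[3]; float hi[3]; };
// Interior nodes use left/right; a node with triCount > 0 is a leaf.
struct Node { AABB bounds; int left; int right; int firstTri; int triCount; };
struct Hit  { float t; int triIndex; };

// Slab test: this ray/box intersection is the kind of work an RT core hardwires.
bool RayHitsBox(const Ray& r, const AABB& b)
{
    float tmin = 0.0f, tmax = std::numeric_limits<float>::max();
    for (int a = 0; a < 3; ++a)
    {
        float inv = 1.0f / r.dir[a];
        float t0 = (b.lo[a] - r.origin[a]) * inv;
        float t1 = (b.hi[a] - r.origin[a]) * inv;
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
    }
    return tmin <= tmax;
}

// Triangle test stubbed out (a Moller-Trumbore test would go here).
bool RayHitsTri(const Ray&, int /*tri*/, float& /*t*/) { return false; }

// The "fixed function" half: walk the BVH and find the closest hit.
Hit Traverse(const std::vector<Node>& bvh, const Ray& ray)
{
    Hit best{ std::numeric_limits<float>::max(), -1 };
    if (bvh.empty()) return best;
    std::vector<int> stack{ 0 };                     // start at the root
    while (!stack.empty())
    {
        const Node& n = bvh[stack.back()];
        stack.pop_back();
        if (!RayHitsBox(ray, n.bounds)) continue;    // prune this subtree
        if (n.triCount > 0)                          // leaf: test its triangles
        {
            for (int i = 0; i < n.triCount; ++i)
            {
                float t;
                if (RayHitsTri(ray, n.firstTri + i, t) && t < best.t)
                    best = { t, n.firstTri + i };
            }
        }
        else
        {
            stack.push_back(n.left);
            stack.push_back(n.right);
        }
    }
    return best;
}

// The programmable half: whatever you do with the hit stays ordinary shader/compute code.
void TraceAndShade(const std::vector<Node>& bvh, const Ray& ray,
                   const std::function<void(const Hit&)>& shade)
{
    shade(Traverse(bvh, ray));
}
```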
 
Nobody said they didn't work on it. I said I don't think they did for Navi, for a few reasons: I don't believe it's a priority for a mid-range chip (Navi 10), I don't trust RTG's capabilities anymore, etc.

I believe it will be present in what they show as "next gen" in their slides...

It's just a personal and subjective opinion.

That's what I'm expecting. A mid-range GPU is the best fit for a console; the 'next-gen' part is what's going to take on Nvidia, and hopefully they succeed.
 