AMD: Navi Speculation, Rumours and Discussion [2017-2018]

I wouldn't be shocked if this turned out to be tessellation all over again, where NVIDIA goes all in right away, while AMD adopts a much more cautious approach, dedicating only as much hardware to DXR as it absolutely has to, until it sees wide enough adoption to warrant spending more silicon on it.

That said, DXR would seem to have broader initial support than tessellation did at the time, but that's not necessarily something AMD could have foreseen.

The only thing needed for DXR support is basically some hardware instruction to make programming raytracing easier.

NVIDIA has vastly overstated its hardware "capabilities". You can do raytraced reflections today perfectly well; the only thing RTX really has is dedicated raytracing hardware instead of using compute units to do it. But considering you can run a hybrid of raytraced shadows on an iPhone 7, the need for such hardware is questionable. Especially when RTX costs so damned much with such relatively little performance when using raytracing; fps on a high-end RTX card is cut from probably 4K 60fps+ to "targeting" 1080p60 with raytracing on in BFV, for example.
 
That said, DXR would seem to have broader initial support than tessellation did at the time, but that's not necessarily something AMD could have foreseen.
I think much broader support will be forthcoming when combining different aspects of the new architecture, i.e. DXR with DLSS.
But what that hardware won’t be able to deliver is genuine real-time ray tracing, or the sort of AI-powered super sampling Nvidia’s new RTX cards can offer. And the DLSS feature, using Turing’s Tensor Core silicon, is arguably even more impressive than the gorgeous ray traced games we’ve seen demoed since Gamescom.

It uses Nvidia’s Saturn V supercomputer and teaches it what super high-resolution games should look like by feeding it millions of images. It then takes this data, packages it up into your GPU driver downloads, so the RTX card in your PC also knows what high-res games should look like, and uses its in-built artificial intelligence to accurately add extra pixels into moving images on the fly.

Essentially, your GPU will know how good games can look and, even if you’re not rendering at that level, it can fake it for you. You get better looking games with higher frame rates than other post processing techniques can manage, and it’s all by using AI. If you’re not excited by that then could you point to the doll and tell me where the robot touched you.

That level of deep learning integration is not something AMD has in its wheelhouse. And though I know AMD has already been talking about its Radeon Rays tech, and how it’s capable of real-time raytracing on its own hardware, the few demos AMD has shown play fast and loose with the term ‘real-time’.
https://www.pcgamesn.com/nvidia-rtx-microsoft-dxr-beyond-next-gen-consoles
 
The cost is high indeed, but "relatively" little performance? Relative to what? Is there some much faster hardware available I am ignorant of?

I think they mean relative to traditional rasterization. 1080p60 with RT is 1/4th the output for a game that will run at 2160p60 without. $1200 is a lot of money to run a game at a quarter of the resolution just for some raytraced reflections or shadows (but probably not both in most of these games).
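
Just to spell out the pixel math behind "a quarter of the resolution" (a back-of-the-envelope sketch, nothing more):

Code:
# 2160p pushes exactly 4x the pixels of 1080p, so a 1080p60 ray-traced
# target means shading a quarter of the pixels per frame compared to a
# 2160p60 rasterized one.
pixels_1080p = 1920 * 1080          # 2,073,600
pixels_2160p = 3840 * 2160          # 8,294,400
print(pixels_2160p / pixels_1080p)  # 4.0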
 
The cost is high indeed, but "relatively" little performance? Relative to what? Is there some much faster hardware available I am ignorant of?

Relative, again, to the fact that you can do raytraced shadows on an iPhone 7 on Sponza. You really don't need dedicated RT hardware to do raytracing.

I think much broader support will be forthcoming when combining different aspects of the new architecture, i.e. DXR with DLSS.

https://www.pcgamesn.com/nvidia-rtx-microsoft-dxr-beyond-next-gen-consoles

This stuff is just as much bullshit as Nvidia's PR about "you can only do raytracing on our RTX cards!" Deep learning neural nets in no way need dedicated hardware to work; this stuff could work on all GPUs (if only somewhat slower and less efficiently) and could be back-deployed if Nvidia dropped their bullshit. In fact, with Vega's fp16 support AMD could deploy the exact same sort of thing efficiently to Vega cards, as DNNs can run faster at lower precision. Assuming Navi carries over some of Vega 20's added instructions, this could go even faster with its int8 support.

AMD really, really does not need to follow Nvidia's bullshit PR path to make Navi a success. DXR instruction set support would require minimal hardware changes compared to Nvidia's route of separate, dedicated raytracing hardware. DICE showed a 3x average speedup for raytracing alone between Volta and RTX, but Vega's previously proposed wavefront splitting could provide similar speedup without the need to add hardware (and cost). Similarly, Navi doesn't really need dedicated neural-net inferencing hardware like RTX has; adding int8 instructions to the compute units could provide a great speedup, again without adding nearly as much to cost as Nvidia's dedicated hardware does.

The end result could be Navi showing up with all the same shiny effect support as RTX, while being significantly cheaper on the price-for-performance metric.
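
For what it's worth, the "DNNs tolerate low precision" point is easy to sketch without any special hardware at all. A minimal NumPy illustration follows; the layer size, random data and scaling scheme are made up purely for demonstration:

Code:
import numpy as np

# Toy low-precision inference: quantize fp32 weights and activations to
# int8, do the matrix multiply in integer precision, then rescale. This
# only shows why int8/fp16 support in the shader cores (rather than
# separate tensor units) can still be useful for inferencing.
rng = np.random.default_rng(0)
x = rng.standard_normal((1, 256)).astype(np.float32)    # activations
w = rng.standard_normal((256, 128)).astype(np.float32)  # weights

def quantize(a):
    scale = np.abs(a).max() / 127.0
    return np.round(a / scale).astype(np.int8), scale

xq, xs = quantize(x)
wq, ws = quantize(w)

y_fp32 = x @ w                                           # reference result
y_int8 = (xq.astype(np.int32) @ wq.astype(np.int32)) * (xs * ws)

# The error stays small relative to the fp32 result despite 8-bit storage.
print(np.max(np.abs(y_fp32 - y_int8)))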
 
Relative, again, to the fact that you can do raytraced shadows on an iPhone 7 on Sponza. You really don't need dedicated RT hardware to do raytracing.
I am not sure I follow your logic. On one hand you say an iPhone 7 is good enough. Now, surely any RTX card should be "relatively" faster than that. In fact, I would guess any RTX card's performance would be relatively monstrous and not relatively little by comparison. Indeed I don't know of any hardware which would offer more performance. On the other hand, if we accept the statement of relatively little performance as true, then an iPhone 7 would be woeful by comparison.
 
Relative, again, to the fact that you can do raytraced shadows on an iPhone 7 on Sponza. You really don't need dedicated RT hardware to do raytracing.



This stuff is just as much bullshit as Nvidia's PR about "you can only do raytracing on our RTX cards!" Deep learning neural nets in no way need dedicated hardware to work; this stuff could work on all GPUs (if only somewhat slower and less efficiently) and could be back-deployed if Nvidia dropped their bullshit. In fact, with Vega's fp16 support AMD could deploy the exact same sort of thing efficiently to Vega cards, as DNNs can run faster at lower precision. Assuming Navi carries over some of Vega 20's added instructions, this could go even faster with its int8 support.

AMD really, really does not need to follow Nvidia's bullshit PR path to make Navi a success. DXR instruction set support would require minimal hardware changes compared to Nvidia's route of separate, dedicated raytracing hardware. DICE showed a 3x average speedup for raytracing alone between Volta and RTX, but Vega's previously proposed wavefront splitting could provide similar speedup without the need to add hardware (and cost). Similarly, Navi doesn't really need dedicated neural-net inferencing hardware like RTX has; adding int8 instructions to the compute units could provide a great speedup, again without adding nearly as much to cost as Nvidia's dedicated hardware does.

The end result could be Navi showing up with all the same shiny effect support as RTX, while being significantly cheaper on the price-for-performance metric.

Well, there are two possibilities: either Nvidia is staffed with incompetent charlatans, or you are missing the fact that even if GCN CUs could perform the required operations, nothing is free in 3D, and doing the extra math in shader-bound scenes without dedicated hardware is going to reduce performance just the same.
 
AMD really, really does not need to follow Nvidia's bullshit PR path to make Navi a success.

Well it’s a good thing DXR doesn’t require dedicated hardware and there’s nothing stopping Navi from running it on general compute cores.

There’s also nothing stopping DXR from running on Vega, Pascal, Polaris or Maxwell. Not sure what point you’re trying to make exactly.
 
The only thing needed for DXR support is basically some hardware instruction to make programming raytracing easier.
This is a meaningless statement.

Do you have something in mind as to what this instruction might do?

Texturing is also some instruction, but that instruction happens to kick off a shitload of memory fetches to dedicated caches, interpolations and filtering. One could totally implement texturing in a shader, but the overall performance would probably tank by a factor of 10.
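
For reference, here is what "doing texturing in a shader" would roughly mean, written as a minimal software bilinear sample (plain Python rather than shader code, purely to show the work the fixed-function texture unit does for you):

Code:
# Manual bilinear filtering: address math, four fetches and the weighting
# done "by hand" instead of in a texture unit. Every sample becomes a
# handful of ALU ops plus four separate memory reads.
def bilinear_sample(texture, u, v):
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bot = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# 2x2 greyscale "texture"; sampling the centre blends all four texels.
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # 0.5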

Ray tracing is not hard. Some people have it printed on a business card. Making it fast is something else entirely.

I doubt that we’ll soon see a detailed exposé on how it is done, but that some instruction may very well kick off a whole bunch of machinery with a complexity similar to texturing.
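
And on the "fits on a business card" point, a toy ray caster really is only a handful of lines. A Python sketch, one sphere and primary rays only; the acceleration structure and memory machinery that make it fast are exactly what is missing here:

Code:
import math

# Casts one primary ray per "pixel" against a single sphere and prints
# the silhouette as ASCII. The intersection math is trivial; making
# millions of such rays per frame fast on real scenes is the hard part.
def hit_sphere(origin, direction, center=(0.0, 0.0, -3.0), radius=1.0):
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * c >= 0.0      # direction is assumed normalized

width, height = 48, 24
for y in range(height):
    row = ""
    for x in range(width):
        # Crude pinhole camera: a ray from the origin through the pixel.
        d = (x / width - 0.5, 0.5 - y / height, -1.0)
        n = math.sqrt(sum(v * v for v in d))
        row += "#" if hit_sphere((0.0, 0.0, 0.0), tuple(v / n for v in d)) else "."
    print(row)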

NVIDIA has vastly overstated its hardware "capabilities", ...
What have they overstated? Please enlighten us!

... you can do raytraced reflections today perfectly well, the only thing RTX really has is dedicated raytracing hardware instead of using compute units to do it.
Exactly. There’s nothing new about ray tracing. And if those units finally make it possible to do things that are worth doing(!) in real time then that seems like a big deal to me.

On a 970 at 1080p, his implementation takes 2.8ms to cast 1 direct ray per pixel and probably one secondary ray. He achieves something like 0.8G rays/s.

And with that, you get that ugly picture with hard shadows. The kind of picture that, during the launch presentation, was used to demo how things should not look.
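
Running the quoted numbers (my arithmetic, assuming one primary ray per pixel):

Code:
# 1080p with one ray per pixel finished in 2.8 ms works out to roughly
# 0.74 Grays/s, in the same ballpark as the ~0.8 Grays/s figure above;
# a second ray per pixel would roughly double it.
rays = 1920 * 1080 * 1
seconds = 2.8e-3
print(rays / seconds / 1e9)  # ~0.74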

the need for such hardware is questionable. Especially when RTX costs so damned much with such relatively little performance when using raytracing;
A factor of 5x to 10x is not relatively little performance in my book.

The cost of the GPU itself is a marketing discussion.

In fact, with Vega's fp16 support AMD could deploy the exact same sort of thing efficiently to Vega cards, as DNNs can run faster at lower precision.
That makes it the perfect match for tensor cores, where both Volta and Turing have a huge advantage over Vega. But especially Turing, because that one also has 8-bit tensor cores.

Vega's previously proposed wavefront splitting could provide similar speedup without the need to add hardware (and cost).
What?

Do you honestly think that being able to recover occasional inefficiencies of the SIMD pipeline can compensate for 2 fully dedicated hardware accelerators?

The end result could be Navi showing up with all the same shiny effect support as RTX, while being significantly cheaper on the price-for-performance metric.
I think that Nvidia will be absolutely thrilled if AMD does nothing more than the trivial improvements that you’ve sketched.
 
I think that Nvidia will be absolutely thrilled if AMD does nothing more than the trivial improvements that you’ve sketched.
Intel as well: with RTX, NVIDIA's Quadro has become the de facto standard for visualization and render farms, and Intel will follow suit to capture that market back.
This stuff is just as much bullshit as Nvidia's PR about "you can only do raytracing on our RTX cards!"
NVIDIA is not alone in this; dozens of developers are making the same claim, with their demos running multiple times faster on Turing compared to Volta, and even more compared to Pascal.
 
The cost is high indeed, but "relatively" little performance? Relative to what?
Relative to ~$250 GPUs from two years ago, which (without raytracing enabled) are more than capable of rendering almost any game at 1080p over 60 FPS.

RTX effects come at a huge cost, at least for this first generation of GPUs carrying dedicated raytracing hardware.

I love the visual upgrade allowed by the new GPUs, but I wouldn't pay the price they ask for it, given that I'd have to play everything with lots of jaggies on my ultrawide QHD monitor.

Navi trying to play catch at this stage would be a bad idea IMO.
 
I suspect Navi will have some sort of RT acceleration as well. It probably won’t be as much as Turing, but AMD knows the DirectX roadmap and RT is the next logical step in graphics evolution.
 
1080p60 ray traced will mean roughly 1440p30 ray traced. A lot of people will find that acceptable if they are going to get vastly superior image quality like the one seen in Metro Exodus or Atomic Heart. Absolute 60fps is not always necessary. Heck, people with a 1080 Ti have been living with sub-60fps at Ultra settings and 4K for a long time in many AAA games.
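
The rough throughput math behind that guess (my own back-of-the-envelope numbers, assuming the ray tracing cost scales with pixels shaded):

Code:
# 1440p30 pushes roughly the same number of shaded pixels per second as
# 1080p60, so trading frame rate for resolution at a fixed ray budget is
# at least plausible.
px_per_sec_1080p60 = 1920 * 1080 * 60  # ~124.4M pixels/s
px_per_sec_1440p30 = 2560 * 1440 * 30  # ~110.6M pixels/s
print(px_per_sec_1440p30 / px_per_sec_1080p60)  # ~0.89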
 
I suspect Navi will have some sort of RT acceleration as well. It probably won’t be as much as Turing, but AMD knows the DirectX roadmap and RT is the next logical step in graphics evolution.
RT acceleration could be present through dedicated instructions within the updated CUs. Same with AI features. That might make sense.

But dedicated ALU units that can't do anything other than tensor or RT operations probably isn't happening with Navi.
 
Do we know if RT is something very hard to manage, driver-wise?

If so, given how Vega went, I wonder if RTG has the resources to pull that off, even if they had the hardware. I remember Raja saying that the hard part with Vega was the software side of things... And then another article saying that most of the resources went to Navi, but for the "Sony part".
 
@Rootax
It was mentioned that raytracing is easier to program and that it runs easily on shaders. So my guess is that not much driver work is needed.
 
My understanding was that it was easier for the (game) devs, but I haven't read anything about the driver teams.
 
RT acceleration could be present through dedicated instructions within the updated CUs. Same with AI features. That might make sense.

But dedicated ALU units that can't do anything other than tensor or RT operations probably isn't happening with Navi.
I think we might see tensor cores sooner than RT cores, just because there is more use for them. AI-based AA and upscaling are likely going to be a huge part of future gaming, especially for things like VR.
 
Do we know if RT is something very hard to manage, driver-wise?
The hard part is the BVH traversal, because it takes additional time out of your frame budget. You can only do so much with caching (unless, of course, you build a dedicated accelerator with tons and tons of very fast cache), so the question is how much BVH traversal you can do without crippling your normal shading, which does not happen automagically with raytracing. Maybe AMD's good support for DirectX Multi-Engine can help out a bit, so raytracing might run asynchronously like some other compute stuff.

The question, though, is how many rays you will need to trace to reach critical mass, i.e. make a game playable, and whether that is something Vega or Navi can pull off asynchronously.
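
As a sketch of why that traversal is the pain point, here is a toy BVH walk in Python. The node layout and the two-box scene are invented for illustration; real DXR and driver implementations look nothing like this:

Code:
# Every ray descends a tree of axis-aligned boxes, and each step is a
# dependent, divergent memory access -- the access pattern GPUs like
# least, and the part that eats into the frame budget.
def hit_aabb(ray_o, inv_d, lo, hi):
    tmin, tmax = 0.0, float("inf")
    for o, i, l, h in zip(ray_o, inv_d, lo, hi):
        t0, t1 = (l - o) * i, (h - o) * i
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(node, ray_o, inv_d, hits):
    if not hit_aabb(ray_o, inv_d, node["lo"], node["hi"]):
        return                         # prune this whole subtree
    if "tris" in node:                 # leaf: triangle tests would go here
        hits.extend(node["tris"])
    else:
        traverse(node["left"], ray_o, inv_d, hits)
        traverse(node["right"], ray_o, inv_d, hits)

# Two leaves split along y; the ray below only ever reaches the lower one.
bvh = {"lo": (0, 0, 0), "hi": (4, 4, 4),
       "left":  {"lo": (0, 0, 0), "hi": (4, 2, 4), "tris": ["tri0"]},
       "right": {"lo": (0, 2, 0), "hi": (4, 4, 4), "tris": ["tri1"]}}
hits = []
# Origin (-1, 0.5, 0.5), direction (1, 0.1, 0.1) -> inverse (1, 10, 10).
traverse(bvh, (-1.0, 0.5, 0.5), (1.0, 10.0, 10.0), hits)
print(hits)  # ['tri0']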
 