Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Has AMD published their roadmap with regard to next-generation ray tracing and AI hardware? They're at least a generation behind Nvidia, which isn't a good sign for the PS5.

And is there any mention of new Tegra hardware based on the new tech and feature sets developed for Turing?

Actually it's not very clear, and only foreseen for 7nm Vega 20, but maybe it gives us some idea of what could be used in the so-called "secret sauce".

[Attachment: AMD on Computex.jpeg]
 
NV20 supported RT-patches six months before the Radeon 8500...
You sure? I remember RT-patches being just a quick answer when ATI brought in N-Patches (also supported by Matrox, IIRC?), even though the hardware they were introduced on was older than the R8500. Support was also short-lived, as performance was nothing short of catastrophic - so much so that one wonders whether it was just a software solution.
 
The RTX 2080/2080 Ti is a beast, fast enough to run multiplatform games at full whack with new bells and whistles on top. But it does that with a frikkin' humongous die.

What I'm most interested in seeing is how a 200mm^2 (for example) Turing GPU fares against other 200mm^2 GPUs (as contemporary as possible). When you're already deep into compromise territory - running below native res, with dynamic scaling or sub-30 fps, and with middling settings - I want to see if the die area lost to Tensor cores is a net gain.

Hopefully it is. Or perhaps - for consoles - the area and design work is best used somewhere else.

I have a bad feeling we aren't going to see Navi till 2020, which may hinder comparisons in PC land.
 
I have a bad feeling we aren't going to see Navi till 2020, which may hinder comparisons in PC land.
Current forecasts suggest there won't be more Turings, just refreshed Pascals; of course, this could be wrong.

Vega 20 is coming around New Year or early Q1. AMD has already confirmed that gaming 7nm is coming shortly after, and that Navi is part of their "initial 7nm lineup", which means it's coming in 2019, most likely during H1.
 
Highly unlikely. Navi will come after Rome and Ryzen, likely at the end of 2019. I remember reading an AMD official stating the same thing.
 
Latest thing AMD has said about Navi:
AMD’s next major milestone is the introduction of our upcoming 7nm product portfolio, including the initial products with our second generation “Zen 2” CPU core and our new “Navi” GPU architecture.
 
This is from a transcript of AMD CFO Devinder Kumar's conference call with Deutsche Bank.

Q: And the absolute last question, on the graphics side: 7 nanometer Vega coming to the data center side of it, you've talked about that before, at the end of this year. When should we expect 7 nanometer to occur on the more traditional gaming…

A: We haven't missed that piece. I think, if you look at it from what we have stated, we have a 7 nanometer data center GPU launching later this year; we are sampling the 7 nanometer CPU this second half of '18 and then launching in 2019; after that, we'll have the client piece of it; we haven't been specific about the timing; and graphics will be coming out later than these products.

They are not going to launch Navi and waste capacity on something that may or may not compete with NVIDIA; they would rather fight it out with Intel while they still have the superior position. So Vega 20 will be a pipecleaner, then comes Rome, then Ryzen 3000, then Navi.
 
Makes sense to prioritize products that can compete well in profitable areas.
 
They are not going to launch Navi and waste capacity on something that may or may not compete with NVIDIA; they would rather fight it out with Intel while they still have the superior position. So Vega 20 will be a pipecleaner, then comes Rome, then Ryzen 3000, then Navi.
Based on AMD's earlier schedule, Ryzen 3000 should come in late Q1, which still leaves time for Navi in H1. My earlier copy/paste was a quote from AMD's Q2 quarterly call, and I don't think they could justify Navi being an "initial 7nm product" if it was coming nearly a year or more after their first 7nm product.
 
Based on AMD's earlier schedule, Ryzen 3000 should come in late Q1,
Highly unlikely; they will flood the channel with server CPUs to capitalize on Intel's absence. The server side is where the money is made.
which still leaves time for Navi in H1.
Again, very HIGHLY unlikely, for the same reason above. 7nm capacity is very limited and under stiff competition from NVIDIA, AMD, Apple, and Qualcomm. They can't have too many chips shipping in a tight timeframe; that would massively affect their supply.
I don't think they could justify Navi being an "initial 7nm product" if it was coming nearly a year or more after their first 7nm product.
Yes they could; AMD has always been loose with their timelines in an effort not to piss off investors. "Initial 7nm products" is a term that could cover an entire year's worth of products.
 
AMD could surprise us all but given their recent execution on GPU products it doesn’t seem wise to be overly optimistic on timing of any releases. Navi in H1 2019 would be fantastic but where there’s no smoke...
 
Navi in H1 2019 would be fantastic but where there’s no smoke...
...there's a Bunsen burner with a good airflow, 100% combustion, and a clean, blue flame, with no smoke particles being generated.

You don't need a fire to be the cause of smoke, and you don't have to generate smoke from fire, so the smoke situation doesn't always illustrate the combustion situation.
 
I'd love to see raytracing coupled with foveated rendering. That'd ease the power limits by focussing on 10% of the screen, meaning proper realtime raytracing could be performed with full, unified lighting. Machine-learning reconstruction could readily fill in the low-fidelity blanks around the foveal portion of the display. With foveated rendering, rasterisation hacks at their best may approach photorealism, but raytracing would solve all the production aggro.
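
To make that budget shift concrete, here's a minimal sketch of how a foveated ray budget might fall off away from the tracked gaze point - the tiers, radii and ray counts are purely illustrative assumptions, not from any real engine or headset:

[code]
#include <cmath>
#include <cstdio>

// Hypothetical foveated ray budget: full ray counts only within the
// foveal circle around the tracked gaze point (~10% of screen area),
// a reduced transition band, then sparse samples everywhere else for a
// reconstruction pass to fill in. All numbers are illustrative.
int raysPerPixel(float px, float py, float gazeX, float gazeY,
                 float foveaRadius, int maxRays)
{
    float dx = px - gazeX, dy = py - gazeY;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist <= foveaRadius)        return maxRays;     // full quality
    if (dist <= 2.0f * foveaRadius) return maxRays / 4; // transition band
    return 1; // periphery: leave for ML reconstruction to fill in
}

int main()
{
    const int W = 1920, H = 1080, maxRays = 8;
    const float foveaRadius = 0.24f * H; // circle covering ~10% of the screen
    long long foveated = 0;
    const long long uniform = 1LL * W * H * maxRays;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            foveated += raysPerPixel((float)x, (float)y,
                                     W / 2.0f, H / 2.0f, foveaRadius, maxRays);
    std::printf("foveated: %lld rays, %.0f%% of a uniform %d-rpp budget\n",
                foveated, 100.0 * foveated / uniform, maxRays);
}
[/code]

With those made-up tiers, the total ray count comes to roughly a quarter of a uniform 8-rays-per-pixel budget, which is the kind of headroom that could make full unified lighting plausible.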

Check this out, Shifty. This has me really excited. 95% of the pixels in this example are not rendered, and the empty space is then filled in by a deep-learning algorithm. It makes me think how a VR headset or AR glasses with eye tracking could be used as a screen for traditional consoles/gaming PCs, to greatly increase performance.
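
As a toy illustration of that pipeline (random ~5% sample mask; a nearest-sample copy stands in for the deep-learning fill, which is of course the hard part):

[code]
#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Toy illustration: "render" only ~5% of pixels, then fill the rest from
// the nearest rendered sample. A real system would use a trained network
// for the fill step; nearest-sample copy is just a crude stand-in.
int main()
{
    const int W = 64, H = 64;
    const double kSampleRate = 0.05;
    std::vector<float> img(W * H, 0.0f);
    std::vector<bool>  rendered(W * H, false);

    std::srand(42);
    for (int i = 0; i < W * H; ++i)
        if (std::rand() / (double)RAND_MAX < kSampleRate) {
            rendered[i] = true;
            int x = i % W, y = i / W;
            img[i] = 0.5f + 0.5f * std::sin(x * 0.3f) * std::cos(y * 0.3f); // fake shading
        }

    // Fill step: copy from the nearest rendered sample (brute force).
    for (int i = 0; i < W * H; ++i) {
        if (rendered[i]) continue;
        int x = i % W, y = i / W, best = -1;
        float bestD = 1e30f;
        for (int j = 0; j < W * H; ++j) {
            if (!rendered[j]) continue;
            float dx = float(j % W - x), dy = float(j / W - y);
            float d = dx * dx + dy * dy;
            if (d < bestD) { bestD = d; best = j; }
        }
        if (best >= 0) img[i] = img[best];
    }
    std::printf("filled %dx%d image from ~%.0f%% of samples\n",
                W, H, 100.0 * kSampleRate);
}
[/code]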

 
Posted in other RT thread...
NVIDIA RTX Effects Cost 9.2 ms in Remedy’s Northlight Engine, Running at 1080p on an RTX 2080 Ti
As reported by Golem.de, the raytraced scene delivered clearly higher quality graphics but the expense was rather significant. Between contact shadows (2.3 ms), reflections (4.4 ms, as you can see in the picture below) and denoising (2.5 ms), all of the NVIDIA RTX effects cost 9.2 ms in render time.

This is important, as I’m sure many of you already know, because the overall render time cannot be higher than 16 ms in order to target 60 frames per second, or 33 ms in order to target 30 frames per second. That means the remaining budget to fit everything else and achieve 60FPS would be a mere 6.8 ms.

To make matters worse, the demo was running at 1080p resolution using the brand new top of the line RTX 2080 Ti GPU. Lower specced graphics cards such as the 2080 or 2070 would inevitably fare worse. On the other hand, Remedy will surely optimize NVIDIA RTX performance ahead of Control’s release (currently planned for next year) and it’s also possible that the final game will allow users to customize the options, for instance deactivating the costly reflections while keeping raytraced contact shadows and diffuse GI.

https://wccftech.com/nvidia-rtx-9-2-ms-remedy-northlight/
9.2 ms, over half the rendering budget, on the fastest RT GPU. 2.3 ms for shadows and 2.5 ms for denoising, and apparently this is overall GPU time being eaten into, not denoising happening in parallel/independently of the shaders. If denoising is preventing you from shading, that's roughly 30% of the rendering budget gone on 1080p60 shadows.
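
A quick back-of-envelope check of those figures - all inputs are the numbers quoted above; note that the article's 6.8 ms remainder assumes a flat 16 ms budget, whereas 1000/60 is closer to 16.7 ms:

[code]
#include <cstdio>

// Back-of-envelope check of the figures quoted above (1080p on an
// RTX 2080 Ti, per Golem.de via wccftech): how much of a 60 fps / 30 fps
// frame budget the reported RTX effect costs consume.
int main()
{
    const double contactShadows = 2.3; // ms
    const double reflections    = 4.4; // ms
    const double denoising      = 2.5; // ms
    const double rtxTotal = contactShadows + reflections + denoising; // 9.2 ms

    const double budget60 = 1000.0 / 60.0; // ~16.7 ms
    const double budget30 = 1000.0 / 30.0; // ~33.3 ms

    std::printf("RTX effects: %.1f ms\n", rtxTotal);
    std::printf("60 fps: %.0f%% of budget, %.1f ms left for everything else\n",
                100.0 * rtxTotal / budget60, budget60 - rtxTotal);
    std::printf("30 fps: %.0f%% of budget, %.1f ms left\n",
                100.0 * rtxTotal / budget30, budget30 - rtxTotal);
    std::printf("shadows + denoising alone: %.0f%% of the 60 fps budget\n",
                100.0 * (contactShadows + denoising) / budget60);
}
[/code]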

I think this speaks volumes about the cost/benefit ratio being completely off for next-gen consoles. If RT is going to happen in consoles, it needs to be something very different to RTX.
 
I think there is ample time by 2020 to have something more mature in this space.
By that, I don't just mean the hardware; I mean the way DXR and DX12 interact. And things could be different in the console space. There are quite a few custom features in DX12_X that don't exist in DX12, i.e. more features with ExecuteIndirect, more flags and counters, more ability to expose the hardware, etc.

The reason I'm back on RT for next gen is that I've been looking at the DirectX forums, and the same guys who make DXR are also on the Xbox team. DXR is constantly being updated, and the drivers supporting DXR are also being updated. Two years is a lot of time for things to change. If you recall, DX12 did not ship with Shader Model 6; it was only released within the last year (as experimental), and driver support for the full release (I think June of this year) is still shaky from what I understand. I'm expecting to see more changes with DXR in that respect; come to think of it, I've yet to see a game require SM6.0 as a baseline.
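
For what it's worth, on the PC side DXR capability is exposed through the standard D3D12 feature-support query, so a title can probe for it at runtime and fall back gracefully. A minimal sketch - the function name is mine, and it assumes a valid ID3D12Device and a Windows SDK recent enough to carry the DXR additions; console APIs may well differ:

[code]
#include <windows.h>
#include <d3d12.h>

// Minimal sketch of the public DXR capability check: ask the device which
// raytracing tier it supports. Assumes `device` was created elsewhere.
bool SupportsDXR(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false; // older runtime: no DXR at all
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
[/code]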

On top of profiling like they did with X1X, I can see support for RT - not at insane levels, but enough for it to become a baseline console feature.
 