Nvidia Turing Speculation thread [2018]

Now imagine this but with ray traced soft shadows:

(gif attachment)


:love:
 
I thought the way they were demoing RTX was complete shit until they showed Battlefield 5. I mean, with the Metro demo where they were showing GI, it would have been a lot more interesting to show it with the light moving around, or with the window opening in the wall moving around, or with window openings being added and removed. The whole point of real-time GI is for it to be dynamic; otherwise it's something games fake pretty well right now. The Battlefield demo was the best because it was dynamic, and reflections are a very easy selling point because reflections are faked poorly.
 
Likely a human effect. People get so used to faked scenes that look good, with no comparison to something significantly more accurate, that they don't see a big difference; they're used to seeing the fake. In reverse, if you're used to seeing a correct image and it's then taken away, it's much more noticeable.

TL;DR: it's sometimes harder to notice an addition than a removal.

Oh I could see the differences. It didn't really change the mood of the scene or take me out of the experience. However, you're completely correct about the reverse. Once these things become commonplace, we won't want to go back.
 
I thought the way they were demoing RTX was complete shit until they showed Battlefield 5. [...]
This is what you wanted:

 
I thought the way they were demoing RTX was complete shit until they showed Battlefield 5. [...]

And that's in part because DICE's screen-space reflections don't take alpha into account well. Not that that's easy, but it's doable; Watchdogs 2, despite looking rough, does fine in this regard. Still, the reflections are nice, but yeah... this is the kind of thing I think you'll see for a while. Reflections are indeed the main thing I can imagine being done well and being an obvious difference from raster, but it's going to be very smooth, shiny reflections only. Coherent low-sample rays ftw! As long as they're short range (all the long-range stuff is still cubemaps and whatever). Entirely raytraced shadows would be very nice for devs, but that's a lot of work when you have to keep a backup non-raytraced solution around for a good while.

But Turing will sell well, because Vega failed. The actual performance improvements, especially versus cost, are modest at best for a two-year wait. But they are improvements, and when the competition can't keep up then hey, you're just winning more! Overall I do appreciate NVIDIA pushing raytracing for the future of games; pushing DirectX Raytracing compatibility into the forefront is great, and now AMD will have to do the same, as will Intel with its upcoming GPU. But dedicated hardware that's only going to be used for a few effects that aren't always great seems of questionable value to end users, when you could do those effects somewhat slower on other hardware at a definitely lower price.
 
For non-gamers (devs, artists, etc.) $999 for the 2080 Ti is a steal IMO and a godsend! For gamers... well, it's definitely not worth the price just to have nicer reflections/shadows which you probably won't notice that much during gameplay (e.g. BF5), or reflections/GI that are for the most part easily faked using SSR/baking. But the future is RT, and for game developers not having to use shitty hacks for such effects is going to be a game changer once every GPU has HW RT acceleration, and every console too.

In the meantime AMD is totally out of the picture if they don't have BVH acceleration in their next GPU, as this is the feature everybody is now going to be looking for. They already have their own real-time AI denoising tech via FireRays/ProRender, but the missing building block is HW support for RT. Interestingly, all of the RT stuff being added to games is purely DXR, so it should be easily supported by AMD/Intel once they have hardware for it. The proprietary part will be the denoising (using OptiX/RTX, the DLSS stuff, etc.), which as I said AMD already has (cross-vendor compatible and open source), but Intel doesn't yet (they have their own open-source path tracing kernels, Embree, which are used in the Corona renderer).
 
Oh btw, for anyone wondering about the Metro Exodus demo... yeah, this is what I was saying about being shading bound. Basically GPUs rely on shading everything similarly, because they only have to shade things that are very close together: that rock in a forest is close to that other rock when you look at it in game, so the shading (what lights hit it, etc.) is going to be very similar for each rock. So you can group your shading hardware together as well, making it fast and cheap.

Thing is, what raytracing is good for is shooting rays off in random directions. You then shade whatever the ray "hits", but those random directions aren't close together. Meaning they aren't shaded similarly, meaning GPUs aren't really good at this part. For global illumination, frankly the biggest (added) effect I can think of that RT will be great at, the farther the rays go, the less similar they are. So you can do very short-range, kind of meh GI like in Metro Exodus, and/or very neat, shiny short-range reflections like in Battlefield (they're cheap because shiny reflections all go in the same direction).

Basically, more than dedicated RT hardware is needed for RT to be great. GPUs will need to be redesigned to shade dissimilar things a lot faster. Shadows are the exception, because you don't need to shade whatever the ray hit, you just need to know whether it hit the light or not. But that's only neat if you think current shadows in games suck. I do: shadows are expensive and very short range (lights being cut off arbitrarily a couple meters away is a very video game look), but for nice long-range shadows to work, everyone's gonna need a DXR-compatible card... so maybe 6 years from now :/
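
To make that shadow point concrete, here's a tiny CPU-side sketch (plain C++, not any real RT API; the sphere scene and function names are made up for illustration): a shadow ray is just a yes/no occlusion query that can stop at the first hit, while a "shading" ray has to find the closest hit and then evaluate whatever material it landed on.

```cpp
// Minimal CPU sketch of the difference described above: a shadow ray only needs
// a boolean occlusion answer, while a shading ray needs the closest hit plus a
// material lookup at that hit. Scene and names are invented for illustration.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; int materialId; };

// Ray/sphere test: returns the distance t of the nearest hit, or -1 on a miss.
static float intersect(const Sphere& s, Vec3 origin, Vec3 dir) {
    Vec3 oc = sub(origin, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    if (disc < 0.0f) return -1.0f;
    float t = -b - std::sqrt(disc);
    return t > 1e-4f ? t : -1.0f;
}

// Shadow ray: any hit ends the query, nothing gets shaded -> cheap.
static bool occluded(const std::vector<Sphere>& scene, Vec3 origin, Vec3 dir, float maxT) {
    for (const Sphere& s : scene) {
        float t = intersect(s, origin, dir);
        if (t > 0.0f && t < maxT) return true;
    }
    return false;
}

// Shading ray: must find the *closest* hit and report its material, which is
// where divergent rays hurt, since neighbouring rays land on different materials.
static int closestHitMaterial(const std::vector<Sphere>& scene, Vec3 origin, Vec3 dir) {
    float bestT = 1e30f;
    int material = -1;
    for (const Sphere& s : scene) {
        float t = intersect(s, origin, dir);
        if (t > 0.0f && t < bestT) { bestT = t; material = s.materialId; }
    }
    return material;  // -1 means the ray escaped the scene
}

int main() {
    std::vector<Sphere> scene = {{{0, 0, 5}, 1.0f, 0}, {{2, 0, 7}, 1.0f, 1}};
    Vec3 eye{0, 0, 0}, forward{0, 0, 1};
    std::printf("shadow ray blocked: %s\n", occluded(scene, eye, forward, 100.0f) ? "yes" : "no");
    std::printf("primary ray hit material: %d\n", closestHitMaterial(scene, eye, forward));
}
```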
 
Oh I could see the differences. [...] Once these things become commonplace, we won't want to go back.
Lol yea. Once you are used to a standard it’s hard to go down. Easy to go up.
 
I know I sound like a broken record, but what I find surprising is that there are cards available for preorder all over the place, with no reviews or even an indication of when reviews might be forthcoming. While it's not uncommon for a tech presentation/announcement/whatever to come well before hands-on coverage, expecting people to preorder four-digit hardware before reviews is baffling.
 
Oh btw, for anyone wondering about the Metro Exodus demo... yeah, this is what I was saying about being shading bound. [...]
I'm wondering if it would be possible to have the rays collect information about whatever they hit (material index, world position, normal, etc.), organize that data by material, and then shade via compute shaders instead. That way you wouldn't trash the cache every time you trace a ray.
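
That's essentially a sorted/deferred shading pass over the ray hits. A rough CPU-side sketch of the idea, with made-up struct and function names rather than any engine's or API's actual layout: write out a small hit record per ray, sort the records by material, then shade each material's run as a batch, which is roughly what a per-material compute dispatch would do.

```cpp
// Rough sketch of "shade by material" deferred ray shading, as suggested above.
// The record layout and names are invented for illustration only.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct HitRecord {
    uint32_t rayIndex;     // which ray produced this hit
    uint32_t materialId;   // what the ray hit
    float    position[3];  // world-space hit position
    float    normal[3];    // world-space shading normal
};

// Stand-in for a per-material shading kernel; on a GPU this would be a compute
// dispatch per material (or one dispatch driven by a sorted index buffer).
static void shadeBatch(uint32_t materialId, const HitRecord* hits, size_t count) {
    std::printf("material %u: shading %zu hits together\n", (unsigned)materialId, count);
}

int main() {
    // Pretend these came back from a trace pass, in arbitrary (incoherent) order.
    std::vector<HitRecord> hits = {
        {0, 3, {0.0f, 0.0f, 1.0f}, {0.0f, 1.0f, 0.0f}},
        {1, 1, {1.0f, 0.0f, 2.0f}, {0.0f, 1.0f, 0.0f}},
        {2, 3, {2.0f, 0.0f, 3.0f}, {0.0f, 1.0f, 0.0f}},
        {3, 1, {3.0f, 0.0f, 4.0f}, {0.0f, 1.0f, 0.0f}},
    };

    // Sort by material so rays that need the same shader end up adjacent again.
    std::sort(hits.begin(), hits.end(),
              [](const HitRecord& a, const HitRecord& b) { return a.materialId < b.materialId; });

    // Walk the sorted list and shade each contiguous run of one material.
    for (size_t start = 0; start < hits.size();) {
        size_t end = start;
        while (end < hits.size() && hits[end].materialId == hits[start].materialId) ++end;
        shadeBatch(hits[start].materialId, &hits[start], end - start);
        start = end;
    }
}
```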
 
I know, yet there are morons preordering them as they are starting to sell out.

Shamefully, I’ll likely end up as one of them.

Something tells me if I wait until October or November, they’ll have game bundles for the holidays.
 
I'm getting the distinct feeling that gen 1 is not going to be fast enough for it to be worth it. If you can't do well north of 60 FPS at some decent resolution then there's no point, if we're talking games and not tech demos. I'd rather disable shadows and play at 720p than chug along at 30-60 FPS.

But real-time global illumination is a dream come true. Lightmapping lent games a wonderful soft look that just looked like a real place, rather than the antiseptic look of real-time lighting. With a little bit of overbrightening (just playing with the gamma curve was the old hack for e.g. Half-Life mods), it could look surprisingly good. Doom 3 was the game that made me dislike real-time lighting, and it took the better part of a decade to get over that; it cost so much performance that they could have maybe 3 monsters at a time (in a Doom game!), and it looked about as appealing and realistic as sector shadows from Doom 2, with those atomically perfect point lights and low-poly models. Half-Life 2 and Mirror's Edge did nothing to disabuse me of this notion.

From a game developer's point of view, pre-compiled lighting is a major pain in the ass, I'm sure, with ugly hacks needed to light models that move around in real time.

Ambient occlusion in various forms has tried to claw back some of that softness, but it hasn't been particularly good at it. What they showed today was impressive, but it looked like 20-30 FPS on fairly simple scenes for the most part. That's in hit-snooze-for-5-years territory.
 
I will skip this gen but it’s extremely impressive that there are at least 3 top tier titles in the works and launching soon. Hopefully it doesn’t end here.

I see a lot of griping about NVIDIA wasting transistors on tech that will be hardly used or too slow. But what's the other option? If we wait until general compute is fast enough for raytracing, it probably won't happen for at least another decade.

The real kicker will be the next console generation. If there’s no support for RT acceleration in consoles (or by AMD) then it might as well be dead until 2030. I’m optimistic that raytracing is actually easier to implement than all the hacks and that there are enough raytracing fanboys in the developer community to have DXR really take off.
 
I'm trying some reverse math... This is at best a very rough approximation, but has there been any indication of how many rays per pixel you need to be able to calculate for ray tracing to look good enough?

With 10 gigarays per second (2080 Ti) at 4K and 60 fps, that translates to about 20 rays per pixel (10e9 / 3840 / 2160 / 60 ≈ 20). Is that good enough? Maybe in the real world assume half that, plus the denoiser...
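
A trivial sanity check of that back-of-the-envelope number (the 10 gigarays/s figure is NVIDIA's quoted peak for the 2080 Ti, so the result inherits whatever optimism is in that spec):

```cpp
// Back-of-the-envelope ray budget: quoted peak rays/s divided by pixels per
// second at 4K60. Real workloads will land well below the quoted peak.
#include <cstdio>

int main() {
    const double raysPerSecond   = 10e9;             // quoted 10 gigarays/s for the 2080 Ti
    const double pixelsPerFrame  = 3840.0 * 2160.0;  // 4K
    const double framesPerSecond = 60.0;
    const double raysPerPixel = raysPerSecond / (pixelsPerFrame * framesPerSecond);
    std::printf("rays per pixel per frame: %.1f\n", raysPerPixel);  // prints ~20.1
}
```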
 
So is Turing going to have DX12 support equivalent to Vega's? Are we going to see true asynchronous raster and compute support? Or is it all suddenly about raytracing now?
 
Can those tensor cores be used in a similar fashion to AMD's Rapid Packed Math? It seems like a waste of potential if the tensor cores are only used for denoising.
 
Can those tensor cores be used in a similar fashion to AMD's Rapid Packed Math? [...]

It wouldn't make sense to issue FP16 instructions to the tensor cores; they're set up for much denser matrix math. The "normal" FP32 CUDA cores can do double-rate FP16 just fine.
 