Next Generation Hardware Speculation with a Technical Spin [2018]

RT may be a mid-gen thing for the next console cycle. The current mid-gen's main selling point was 4K, which leads me to believe they'll want to check off an obvious selling point for the next mid-gen refresh. With diminishing returns already somewhat apparent going from 1080p to 4K, and even more apparent beyond that, I could see RT as a great reason to upgrade to a mid-gen console. Also, I'm not sure AMD or NVIDIA has tech that's ready or that makes sense for a console APU launching in 2019/20, but I'm certainly not an expert on that.
It’s certainly a reasonable position. If mid-gen was 4K, then theoretically the next mid-gen can be RT.

There’s enough time to profile RT games between now and when mid-gen hardware is ready.
 
Screen space reflections are garbage and should disappear as soon as possible.

The rest of your post is just denial.
The discussion needs to move from advocacy to what a realistic implementation of RT in a console would look like.

There’s no doubt that everyone agrees RT is better; but that’s not the debate. RT is going to arrive in the console space sometime in the next 8 years; I don’t believe there is any debate about that. And those that don’t believe will be a small group until it happens.

What we need to do to keep this discussion moving in a positive direction is to talk about what we could expect graphically from something like a 2070, because that’s exactly what I’m expecting the performance profile of a 2020 7nm APU to be.

I don’t think we need to sell RT. Just talk about what can realistically be accomplished with a 2070. Black-box how the insides work, since AMD is a large unknown here.

For me, just having RT available to resolve edge cases is, imo, already a leg up over rasterization. Use it wisely, and use it to generate dramatic scenarios where lighting and shadows, and possibly reflections, matter to the immersion or the direction. You may not need it for the whole game, maybe just certain scenes or cutscenes etc. But I think that’s where it will have its biggest bang for buck.

I think there will be few games that completely fill every scene with artistic ray tracing in an attempt to maximize the technology. Though I say that, The Order: 1886 is probably a game that would have benefitted from it.

For the non-RT folks: we should be talking more about what compute + GPU-driven rendering will do, and how that can change the game.
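To give the non-RT side something concrete: the essence of GPU-driven rendering is a culling pass that walks every instance and compacts the survivors into an indirect draw buffer, so the CPU never issues per-object draw calls. A minimal sketch, written as plain C++ for readability; in practice this loop is a compute shader feeding multi-draw-indirect, and the struct layouts here are just illustrative:

Code:
#include <vector>

// One bounding sphere per instance, and one pre-built draw command per
// instance; layouts are illustrative, not any particular API's.
struct Sphere      { float x, y, z, r; };
struct DrawCommand { unsigned indexCount, instanceCount, firstIndex, baseVertex, baseInstance; };

// Sphere-vs-frustum test: planes point inward, so a sphere entirely
// behind any plane is outside.
bool inFrustum(const Sphere& s, const float planes[6][4])
{
    for (int i = 0; i < 6; ++i) {
        float d = planes[i][0]*s.x + planes[i][1]*s.y + planes[i][2]*s.z + planes[i][3];
        if (d < -s.r) return false;
    }
    return true;
}

// CPU stand-in for the culling compute shader: one "thread" per instance,
// survivors appended to the buffer consumed by multi-draw-indirect.
void cullAndBuildDraws(const std::vector<Sphere>& bounds,
                       const std::vector<DrawCommand>& allDraws,
                       const float planes[6][4],
                       std::vector<DrawCommand>& indirectBuffer)
{
    indirectBuffer.clear();
    for (size_t i = 0; i < bounds.size(); ++i)
        if (inFrustum(bounds[i], planes))
            indirectBuffer.push_back(allDraws[i]);
}

The win is that the visible set never round-trips to the CPU; the same pattern extends to occlusion culling and LOD selection.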
 
It simply means that ray tracing is the new buzzword to try to sell new stuff to the yokels. Like VR or 3D, or... Surely you've seen this over and over through the years?
VR and 3D are consumer driven technologies, they require the consumer to adopt them first before they can thrive, and consumers are unwilling because these technologies still prove to be inconvenient for everyday usage.

Ray Tracing is nothing like VR or 3D, it's a developer driven technology, like Tessellation or GPU accelerated particles; both gained wider adoption with time and became standard in current games. Ray Tracing also gives you a wider range of effects to apply to your game, from shadows to reflections to global lighting, etc. It's like Soft Shadows or High Res reflections, effects that are often seen in current games. Again, NOTHING like VR or 3D. I am quite surprised experienced people think about RT that way, when the distinction is obvious from miles away.
 
Is there really any need to be so rude?

Every engine worth its salt has had global illumination since before this generation began, and we can count the number of globally illuminated console games on one hand. One deformed hand, missing several fingers, at that.

There's every chance that RTRT will see the same fate next generation: global illumination becomes the norm, and the occasional game - of a similar scope to The Tomorrow Children or Driveclub - knocks everyone's eye-socks off.

So, before getting so salty over some rays, please just bear in mind that your arguments were applicable to GI only a few years ago, except that RTRT is a less known quantity.
Baked global illumination and real-time global illumination are very different things.

We don't tolerate that sort of behavior in the console forums, so please check your attitude.
It's not about attitude; he just dismissed all the evidence contrary to his view. How is it rude to call him out on it? I mean, saying that DXR is not intended for consumer software (aka games, some of which we've already seen) IS denial.

How does that equate to rasterisation having met its limits?

[attached video]

Yay, raytracing's gonna solve all our shadowing problems. :p

Kidding aside, raytracing is reliant on hacks to accelerate it, so the ideal, perfect renderer remains a ways off. Furthermore, that video shows the pursuit of really low-level raytracing, from before RTX existed. Good quality lighting is being achieved with one sample per pixel, which is in the realm of doable as RT on compute in a next-gen console. If adding RT acceleration structures is cost-effective in silicon, their inclusion makes sense; but if it requires considerable compromise of the raw shader power, it could potentially be left out without games suffering too much, maintaining maximum flexibility.
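To ground what "RT on compute" means at the bottom of the stack, here's a minimal ray-triangle test (Möller-Trumbore) in plain C++; this is the inner operation a shader-based tracer runs per ray when there's no fixed-function intersection unit to lean on. The hand-rolled vector type is just to keep the sketch self-contained:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if ray (orig, dir) hits
// triangle (v0, v1, v2).
bool intersectTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float kEps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;   // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;                 // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;               // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > kEps;                           // hit must be in front of the origin
}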
Flexibility for what? Most games adopt the same tech anyway. They vary mostly in art design, not fundamental rendering techniques. This is especially true for those built with middleware like UE and Unity. Speed would benefit the majority of devs over the few eccentrics.

Edit:

http://cg.ivd.kit.edu/atf.php

An improved version of SVGF (A-SVGF); one of its main features is the elimination of temporal lag. :D
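For anyone not following the papers: the heart of SVGF-style denoisers is an à-trous wavelet filter with edge-stopping weights, run a few times over the noisy 1spp image. A much-simplified single-channel sketch; real SVGF adds normal and luminance-variance weights plus temporal accumulation, and the sigma constants here are made-up tuning knobs:

Code:
#include <cmath>
#include <vector>

// One a-trous iteration over a grayscale image, guided by a depth buffer.
// 'out' must be pre-sized to w*h by the caller.
void atrousStep(const std::vector<float>& color, const std::vector<float>& depth,
                std::vector<float>& out, int w, int h, int stepSize)
{
    const float sigmaColor = 0.1f, sigmaDepth = 0.01f;  // assumed tuning values
    for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
        float center = color[y*w + x];
        float cDepth = depth[y*w + x];
        float sum = 0.0f, wSum = 0.0f;
        for (int dy = -2; dy <= 2; ++dy)
        for (int dx = -2; dx <= 2; ++dx) {
            int sx = x + dx*stepSize, sy = y + dy*stepSize;
            if (sx < 0 || sx >= w || sy < 0 || sy >= h) continue;
            // Edge-stopping: samples across a color or depth discontinuity
            // get exponentially down-weighted, preserving geometric edges.
            float wc = std::exp(-std::fabs(color[sy*w + sx] - center) / sigmaColor);
            float wd = std::exp(-std::fabs(depth[sy*w + sx] - cDepth) / sigmaDepth);
            sum  += color[sy*w + sx] * wc * wd;
            wSum += wc * wd;
        }
        out[y*w + x] = sum / wSum;
    }
}

Successive iterations double stepSize (1, 2, 4, ...), widening the filter footprint cheaply - that's the "à-trous" (with holes) trick.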

The discussion needs to move from advocacy to what a realistic implementation of RT in a console would look like.

There’s no doubt that everyone agrees RT is better; but that’s not the debate. RT is going to arrive in the console space sometime in the next 8 years; I don’t believe there is any debate about that. And those that don’t believe will be a small group until it happens.

What we need to do to keep this discussion moving in a positive direction is to talk about what we could expect graphically from something like a 2070, because that’s exactly what I’m expecting the performance profile of a 2020 7nm APU to be.

I don’t think we need to sell RT. Just talk about what can realistically be accomplished with a 2070. Black-box how the insides work, since AMD is a large unknown here.

For me, just having RT available to resolve edge cases is, imo, already a leg up over rasterization. Use it wisely, and use it to generate dramatic scenarios where lighting and shadows, and possibly reflections, matter to the immersion or the direction. You may not need it for the whole game, maybe just certain scenes or cutscenes etc. But I think that’s where it will have its biggest bang for buck.

I think there will be few games that completely fill every scene with artistic ray tracing in an attempt to maximize the technology. Though I say that, The Order: 1886 is probably a game that would have benefitted from it.

For the non-RT folks: we should be talking more about what compute + GPU-driven rendering will do, and how that can change the game.
It's too early to make predictions about the hardware since we don't know which algorithms are most likely to be used. If they're based on intersecting triangles, then something like RTX is a good option (I guess), but if they're based on intersecting voxels or something else, then who knows.
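For the voxel case specifically, the standard traversal is a 3D DDA (Amanatides & Woo) that steps the ray one voxel at a time. A sketch, with grid contents abstracted behind a hypothetical isSolid() callback and unit-sized voxels assumed:

Code:
#include <cmath>
#include <limits>

struct Vec3 { float x, y, z; };

// Walks ray (o, d) through a unit voxel grid, visiting one voxel per
// iteration, until isSolid() reports a hit or maxSteps runs out.
bool traceVoxels(Vec3 o, Vec3 d, bool (*isSolid)(int, int, int),
                 int maxSteps, int& hx, int& hy, int& hz)
{
    const float inf = std::numeric_limits<float>::infinity();
    int ix = (int)std::floor(o.x), iy = (int)std::floor(o.y), iz = (int)std::floor(o.z);
    int stepX = d.x > 0 ? 1 : -1, stepY = d.y > 0 ? 1 : -1, stepZ = d.z > 0 ? 1 : -1;
    // Ray parameter t at the next voxel boundary on each axis...
    float tMaxX = d.x != 0 ? ((ix + (stepX > 0)) - o.x) / d.x : inf;
    float tMaxY = d.y != 0 ? ((iy + (stepY > 0)) - o.y) / d.y : inf;
    float tMaxZ = d.z != 0 ? ((iz + (stepZ > 0)) - o.z) / d.z : inf;
    // ...and the t spacing between successive boundaries on each axis.
    float tDeltaX = d.x != 0 ? stepX / d.x : inf;
    float tDeltaY = d.y != 0 ? stepY / d.y : inf;
    float tDeltaZ = d.z != 0 ? stepZ / d.z : inf;
    for (int i = 0; i < maxSteps; ++i) {
        if (isSolid(ix, iy, iz)) { hx = ix; hy = iy; hz = iz; return true; }
        // Step across whichever voxel boundary comes first along the ray.
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDeltaY; }
        else                                { iz += stepZ; tMaxZ += tDeltaZ; }
    }
    return false;
}

Note how little of this resembles BVH-over-triangles; that divergence is the crux of the fixed-function-vs-compute question.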

Disagree. Developers need consumers to spend over $600 to own RT hardware capable of running their RT-only games.
And that is already happening.
 
VR and 3D are consumer driven technologies, they require the consumer to adopt them first before they can thrive, and consumers are unwilling because these technologies still prove to be inconvenient for everyday usage.

Ray Tracing is nothing like VR or 3D, it's a developer driven technology, like Tessellation or GPU accelerated particles; both gained wider adoption with time and became standard in current games. Ray Tracing also gives you a wider range of effects to apply to your game, from shadows to reflections to global lighting, etc. It's like Soft Shadows or High Res reflections, effects that are often seen in current games. Again, NOTHING like VR or 3D. I am quite surprised experienced people think about RT that way, when the distinction is obvious from miles away.

As for tessellation, outside of terrain rendering or water rendering it is not very mainstream.

https://developer.amd.com/wordpress/media/2012/10/Tatarchuk-Tessellation_GDC08.pdf

We are far from this, but hardware tessellation was not flexible enough. With compute-based adaptive tessellation, and even better, mesh-shader adaptive tessellation, tessellation will become more common... and maybe primitive-shader adaptive tessellation too.
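The core of compute-based adaptive tessellation is just choosing a factor per edge from its projected screen size, so distant or small patches get fewer triangles. A back-of-envelope sketch; the target pixels-per-triangle constant is an assumed tuning knob:

Code:
#include <algorithm>
#include <cmath>

// worldEdgeLength: patch edge length in world units; distance: eye distance
// to the edge midpoint; fovY (radians) and screenHeight describe the camera.
float tessFactorForEdge(float worldEdgeLength, float distance,
                        float fovY, float screenHeight)
{
    const float targetPixelsPerTri = 8.0f;  // assumption: desired triangle size
    // Approximate projected length of the edge in pixels.
    float pixels = worldEdgeLength * screenHeight /
                   (2.0f * distance * std::tan(fovY * 0.5f));
    // One subdivision per targetPixelsPerTri pixels, clamped to HW limits.
    return std::clamp(pixels / targetPixelsPerTri, 1.0f, 64.0f);
}

Run this per edge (rather than per patch) so neighbouring patches agree on shared edges and don't crack.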
 
Disagree. Developers need consumers to spend over $600 to own RT hardware capable of running their RT-only games.
Is the VR requirement any different with regard to what developers expect of consumer expenditure?
 
Is the VR requirement any different with regard to what developers expect of consumer expenditure?

It's DavidGraham that's claiming the two are entirely different; that devs don't require consumers to purchase anything for RT.

I'm saying they're the same from a consumer buy-in perspective (RT and VR).
 
It's DavidGraham that's claiming the two are entirely different; that devs don't require consumers to purchase anything for RT.

I'm saying they're the same from a consumer buy-in perspective (RT and VR).
Technically DXR can run on GPUs without dedicated RT hardware, via its compute fallback. We'll see if developers support that though.
 
Flexibility for what?

...we don't know which algorithms are most likely to be used. If they're based on intersecting triangles, then something like RTX is a good option (I guess), but if they're based on intersecting voxels or something else, then who knows.
That flexibility. General-purpose compute can intersect triangles for some games, voxels for others, SDFs for others. RT hardware bound to one paradigm might be inefficient silicon further into the console generation if techniques change. As we're looking at hybrid rendering, flexibility in creating the images is going to be paramount for efficiency.

Also, not every game is photorealistic 3D. Games may benefit from having compute for other purposes, like massive simulation. Or whatever: not necessarily compute, but some other acceleration/processing structure not tied to tracing rays.
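As a concrete example of that flexibility: sphere tracing an SDF needs nothing but general compute. A toy sketch with a single-sphere scene; assumes dir is normalized:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

// Toy scene: signed distance to a unit sphere at the origin. Real content
// (Dreams-style) would combine many primitives here.
float sceneSDF(Vec3 p) {
    return std::sqrt(p.x*p.x + p.y*p.y + p.z*p.z) - 1.0f;
}

// March along the ray, stepping by the distance to the nearest surface;
// the SDF guarantees each step cannot overshoot geometry.
bool sphereTrace(Vec3 o, Vec3 dir, float& tHit)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = { o.x + dir.x*t, o.y + dir.y*t, o.z + dir.z*t };
        float dist = sceneSDF(p);
        if (dist < 1e-4f) { tHit = t; return true; }  // converged: hit
        t += dist;
        if (t > 100.0f) break;                        // escaped the scene
    }
    return false;
}

No triangles, no BVH; a triangle-specific intersection unit contributes nothing to this path.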
 
Is the VR requirement any different with regard to what developers expect of consumer expenditure?
Slightly. When making a VR game, you know you have a limited audience and by-and-large spend accordingly - there aren't any AAA VR titles. In making a big-budget game for PC, the likes of which will showcase RTX, devs can't limit themselves to a few million units. As such, RT will have to be added on top of a conventional game engine rather than having RT-only games made.
 
It's DavidGraham that's claiming the two are entirely different; that devs don't require consumers to purchase anything for RT.
What? How is wearing huge cumbersome glasses or goggles over your head the same thing as requiring a GPU that WILL get cheaper with time/successive generations and requires absolutely zero inconvenience from the user? People already buy GPUs across the whole price spectrum. The added RT won't stop a user from buying the latest GPU to get increased fps. Heck, people bought GPUs priced higher than $500 during the mining craze. Expensive GPUs have always existed for a variety of reasons: Ultra settings, 100fps, 4K60, latest DX support, etc. Are all of these things the same as VR/3D now?!!

And I am not claiming anything, they ARE different! And the amount of mental gymnastics used to equate RT to VR/3D is astonishing to say the least!
 
That flexibility. General-purpose compute can intersect triangles for some games, voxels for others, SDFs for others. RT hardware bound to one paradigm might be inefficient silicon further into the console generation if techniques change. As we're looking at hybrid rendering, flexibility in creating the images is going to be paramount for efficiency.

Also, not every game is photorealistic 3D. Games may benefit from having compute for other purposes, like massive simulation. Or whatever: not necessarily compute, but some other acceleration/processing structure not tied to tracing rays.
It's always the same: hardware acceleration always starts with fixed-function units. Even programmable shaders evolved from fixed-function hardware. Another constant is that devs base their algorithms on making the best use of the hardware available. If RT hardware is what they have, RT hardware is what they'll use. That's why I'm not concerned with a hypothetical scenario where the RT hardware would remain unused; such cases would be rare. Flexible units would be slower and that would affect ALL games. Not a good bet for first generation RTRT I think.
 
That flexibility. General-purpose compute can intersect triangles for some games, voxels for others, SDFs for others. RT hardware bound to one paradigm might be inefficient silicon further into the console generation if techniques change. As we're looking at hybrid rendering, flexibility in creating the images is going to be paramount for efficiency.

Also, not every game is photorealistic 3D. Games may benefit from having compute for other purposes, like massive simulation. Or whatever: not necessarily compute, but some other acceleration/processing structure not tied to tracing rays.
There is DXR with voxels already. I’m going to assume it can work with SDFs; no reason for it not to. The acceleration is on BVH traversal, as far as we know so far. It’s only the tensor cores that are brutally specific.
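For reference, this is roughly the loop the fixed-function units take off the shader cores: iterative BVH traversal with an explicit stack. The node layout below is a generic flattened form, not any vendor's actual format; leafTest() stands in for whatever primitive test sits at the leaves, and zero direction components are ignored for brevity:

Code:
#include <cmath>

struct Vec3 { float x, y, z; };

struct BVHNode {
    Vec3 bmin, bmax;   // axis-aligned bounding box
    int  left, right;  // child node indices; left < 0 marks a leaf
    int  primIndex;    // primitive to test at a leaf
};

// Slab test: ray (o, invD) vs box, where invD = 1/dir per component.
bool rayHitsBox(Vec3 o, Vec3 invD, Vec3 bmin, Vec3 bmax)
{
    float t0x = (bmin.x - o.x) * invD.x, t1x = (bmax.x - o.x) * invD.x;
    float t0y = (bmin.y - o.y) * invD.y, t1y = (bmax.y - o.y) * invD.y;
    float t0z = (bmin.z - o.z) * invD.z, t1z = (bmax.z - o.z) * invD.z;
    float tNear = std::fmax(std::fmax(std::fmin(t0x, t1x), std::fmin(t0y, t1y)), std::fmin(t0z, t1z));
    float tFar  = std::fmin(std::fmin(std::fmax(t0x, t1x), std::fmax(t0y, t1y)), std::fmax(t0z, t1z));
    return tFar >= tNear && tFar >= 0.0f;
}

// The divergent, pointer-chasing part that burns shader cycles on compute
// and that dedicated traversal hardware handles on its own.
bool traverseBVH(const BVHNode* nodes, Vec3 o, Vec3 invD,
                 bool (*leafTest)(int primIndex))
{
    int stack[64];
    int sp = 0;
    stack[sp++] = 0;                      // start at the root node
    while (sp > 0) {
        const BVHNode& n = nodes[stack[--sp]];
        if (!rayHitsBox(o, invD, n.bmin, n.bmax)) continue;
        if (n.left < 0) {                 // leaf: run the primitive test
            if (leafTest(n.primIndex)) return true;
        } else {                          // interior: visit both children
            stack[sp++] = n.left;
            stack[sp++] = n.right;
        }
    }
    return false;
}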
 
If RT hardware is what they have, RT hardware is what they'll use.
True.
Flexible units would be slower and that would affect ALL games. Not a good bet for first generation RTRT I think.
Flexible units are slower versus dedicated hardware, but if the algorithms they run are faster, it's a win. This is why GPUs have moved to compute rather than pushing a fixed set of techniques ever faster, and it means we can have games like Dreams using SDFs and tracing rays, which would have been impossible if shaders had remained fixed at shading vertices and pixels.
 
Both require consumer buy-in for it to be economically feasible for devs to pursue RT-only (VR-only) games.
VR requires a decent GPU + headset; the inconvenience of the headset massively limits adoption.
RT requires only a decent GPU, which means massively wider adoption as the tech claws its way through lower GPU tiers in its second iteration.

i.e., VR is not the same as RT.

RT is the same thing as DX11 when it was new: only decent GPUs were useful for DX11 games. Then lower-end GPUs got better and better, and now they do DX11 well. DX11 games are now the standard in the industry.

And we are not talking about RT-only games here, we are talking normal games + RT extras, which doesn't block RT from reaching a wider audience at all; in fact, it massively increases its chance of being adopted.
 
Just some questions.

Has Nvidia ever done "deals", whether shady, legal, etc., to incentivize the adoption of certain tech like PhysX?

Could they also be doing this with RT?

Is such practice common within the industry?
 