Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
To go further, there's no reason the AI-derived algorithm couldn't target standard compute shaders rather than requiring Tensor cores. DLSS is designed specifically to address Nvidia's business needs.
Neural networks run most efficiently on tensor cores, though.

Tensor FLOPS are in the 100 TFLOPS range. That's roughly an order of magnitude higher than the standard CUs.

The full-fat version of Turing (it’s not clear which GPU this specifically refers to) is capable of 14 TFLOPS of FP32, 110 FP16 tensor FLOPS (that’s the half-precision mode) and 78T RTX-OPS. That last metric isn’t really a metric at all since we don’t really know what an RTX-OP is, exactly, but presumably, that kind of information will be fleshed out at a later date. The current Titan X, in contrast, is capable of 12 RTX-OPS.
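To put those quoted figures in perspective, a quick back-of-the-envelope check (using only the numbers above) shows the tensor path is just under an order of magnitude faster than the shader cores, and only for the dense matrix math those fixed-function units accept:

```python
# Quick sanity check on the quoted Turing numbers: 14 TFLOPS of FP32
# shader math vs. 110 TFLOPS of FP16 tensor math.
fp32_tflops = 14.0
tensor_fp16_tflops = 110.0

ratio = tensor_fp16_tflops / fp32_tflops
print(f"tensor/shader throughput ratio: ~{ratio:.1f}x")  # ~7.9x
# Just under an order of magnitude, and only for the dense matrix
# multiplies (e.g. neural-network inference) the tensor units accept.
```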
 
I would question that possibility as a theoretical limit, or as a case in point where a developer purposely decided to make a 14 TF game run at 1080p@30fps. I doubt there is that much quality per pixel when everything uses fast, efficient approximations, each approximation getting you closer to a ray-traced solution but without the drawbacks of ray tracing. There is a fallacy in that logic.
There's certainly a cross-over point. The really important point here, for those advocating RT in the next consoles, is that it's a very uncertain cross-over point, and it'd be wrong to think rasterising and other techs have hit their limits. It'd be nice to discuss the realistic probabilities of where that cross-over point lies rather than these kinda polarised all-or-nothing positions.

To be clear, I'm pro RT and look forward to the RT'd future. We're talking here about the realistic choices being faced by the engineers for the consoles releasing in a year or two, and the only data we have to consider are the silicon sizes of RTX and the current performance of the demos. I for one would rather look at that data than discuss a hypothetical future where everything aligns perfectly for RT.

SDF is the big enabler for that to happen. But SDF isn't the right technology for the majority of AAA games out there.
It's a beginning, same as raytracing. It has a future (whole swathes of traced shadowing and lighting solutions on compute). Surely it's as much a folly to say the latest SDF and alternative representation developments can't be integrated into renderers as it is to say RT won't improve in techniques over these first demos?
 
Agree with all points. Answer is unclear.
With so many developers and engines adopting RT, I'm honestly not sure where developers actually stand. And if console hardware supported RT, we wouldn't know. But I'd be shocked that there's such an outpouring of support for DXR if it's just for the Nvidia users in the top 1% of GPU owners. I don't know if you recall, but a lot of features in Maxwell and Pascal were skipped (SVOGI etc.); few games supported them.

The lineup for DXR is insane.

Not sure if SDF can mix with traditional rasterization. Maybe a @sebbbi question.

From what I recall, I thought he completely sidesteps the rasterization pipeline and stays in the compute pipeline only.
 
Dreams shows detailed geometry in SDF. Seems possible that low-detail SDF impostors could be used for shadow casting and ambient lighting, perhaps.
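For what it's worth, the shadow-casting idea is cheap to sketch: sphere tracing an SDF toward the light gives soft shadows almost for free, which is the trick Claybook-style renderers lean on. A minimal CPU-side sketch (scene and constants invented purely for illustration):

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to the surface of a sphere.
    return math.dist(p, center) - radius

def soft_shadow(origin, light_dir, sdf, k=8.0, t_max=20.0):
    """Sphere-trace from a surface point toward the light; how close the
    ray skims the occluder (scaled by k) gives a shadow factor in [0, 1]."""
    res, t = 1.0, 0.02
    while t < t_max:
        p = [origin[i] + light_dir[i] * t for i in range(3)]
        d = sdf(p)
        if d < 1e-4:
            return 0.0             # ray hit the occluder: fully shadowed
        res = min(res, k * d / t)  # penumbra from near misses
        t += d                     # safe step: nothing is closer than d
    return res                     # 1.0 means fully lit

# Occluding sphere floating at (0, 2, 0); march straight up toward the light.
occluder = lambda p: sphere_sdf(p, (0.0, 2.0, 0.0), 0.5)
print(soft_shadow((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), occluder))  # 0.0 (in shadow)
print(soft_shadow((3.0, 0.0, 0.0), (0.0, 1.0, 0.0), occluder))  # 1.0 (fully lit)
```

The nice property here is that a low-detail SDF impostor only needs to be roughly the right shape for the shadow term, while the visible geometry is rasterised normally.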
 
No. RT is definitely the next big step, but the current scope for compute and rasterisation is nowhere near peaked. Having the same tech now but with substantially more horsepower will enable plenty. There is lots of progress with realtime global illumination without needing specialist raytracing hardware. First-gen realtime RT may be comparable to latest-gen optimised compute-based lighting, perhaps a little better in quality but at half the framerate and resolution. So it's a significant trade between true reflections and shadows at far lower fidelity, or 'good enough' shadow and reflection hacks at far greater fidelity.

We need RT to get beyond first/second/third generation (depending on what's being counted as a generation) for it to offer better in every way. Raytraced shadows on compute are a thing already, for example. Throwing in all the tech advances you've mentioned that aren't tied to RT, there may be significant optimisations further down the line too, enabling far better quality from the existing shader+rasterisation tech. It can also be argued that a move away from some of the fixed-function rasterising could be all that's necessary, moving the GPUs towards programmability (and so adapting to raytracing as a result) without needing dedicated RT hardware.
First-gen tacked-on RT already surpasses state-of-the-art rasterization hacks, with the added benefit of not being a hack.

Also, SDFs are only useful for static and procedural geometry. Skinned meshes are not supported.
 
While I cannot begin to guess where and how things will shake out with all of the technical stuff and developer support, isn't it safer to assume that 9th-gen consoles (PS5, Scarlett) will not have RT hardware? I'm setting aside all the talk and hype for cloud streaming, and I'm not buying into what Ubisoft said about one more generation of traditional consoles. So assuming there is a 10th generation of traditional consoles (i.e. PS6, Xbox Zodiac, or whatever) with all the processing hardware and RAM in your home, around 2026+ (or the late 2020s), that might be the soonest consoles would have hardware RT support, with enough time gone by for realtime RT in games to have matured, shaken out and improved in terms of cost, etc. And that's not counting hypothetical mid-gen upgraded 9th-gen consoles (i.e. PS5 Pro, Xbox Scarlet XX).

Of course, even by the late 2020s, realtime rendering will not use 100% raytracing or path tracing, but some ray+raster hybrid combination, I would think. Could be wrong, but then remember GDC 2018's RT-for-games roadmap:

[Image: GDC 2018 ray tracing for games roadmap]
 
If consoles don't support it, it won't mature. It will be relegated to tacked-on features and experiments for the most part.
 

There are still mid-gen refreshes. Maybe MS and Sony can use those to officially warm up major developers, while producing actual products, before making a major leap to hybrid rendering with the PS6/XB3.
 

I agree. If there are mid-gen upgraded refresh consoles, they certainly wouldn't be made for the purpose of doing 8K (like PS4 Pro & X1X were made to begin pushing 4K).
Rather, they'd be used to warm up developers on raytracing, several years before PS6/XB3.
 
And it's running at 1080p30. Now if you read what I said properly, I'm saying rasterised hacks, though hacks, may well be good-enough approximations that also run fast enough.

Why are you willing to think that RT is going to get much better in the next couple of years and is worth including in consoles, but aren't willing to accept that rasterisation has plenty of room for improvement too?

Firstly, it was an example of how compute has plenty left to offer, not a proposed universal solution. Secondly, what's static about Dreams and Claybook? Thirdly, you can combine technologies: use simplified SDF models for shadow casting, say, on top of conventional rasterising. These options should be explored alongside the RT options, because the next consoles are going to be with us for 5+ years. It'd be pretty tragic if they come with first-gen raytracing that's so ineffectual in real-world games that it barely makes a difference while framerates struggle for the entire generation, or if they skip raytracing altogether and it turns out RT is a super-efficient technique across the board, leaving the consoles a generation behind rendering technology for their whole lives.
 
e.g. rendering the scene from another perspective at lower resolution and lower detail (because most of the time it is not a 1:1 reflection) would be sufficient for most reflections (like the mirrors in racing games) and would usually be many times faster. Another example would be Portal, which does this for its portals. I know Portal is a really low-detail game, but this approach should give you an illusion of a reflection that is convincing enough, with quite a low performance penalty (no extra hardware needed, so that die area can go to a bigger general-purpose chip).
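The mirrored-camera trick described above boils down to reflecting the eye position across the reflector's plane and re-rendering from there at reduced resolution. A minimal sketch of the reflection step (coordinates made up for illustration):

```python
def reflect_point(p, plane_point, plane_normal):
    # Householder reflection of p across the plane given by a point on it
    # and a unit normal; used to mirror the camera for planar reflections.
    d = sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3))
    return tuple(p[i] - 2.0 * d * plane_normal[i] for i in range(3))

# Camera at (0, 3, -5), reflective floor plane at y = 0:
mirrored_eye = reflect_point((0.0, 3.0, -5.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(mirrored_eye)  # (0.0, -3.0, -5.0)
```

The scene is then rendered from `mirrored_eye` into a low-resolution texture and composited onto the reflective surface, which is exactly the racing-game mirror approach.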
 

Or they could just be substantially more powerful consoles with more and higher performing compute shaders that devs then decide if they want to use for higher resolution, higher framerates, higher resolution for VR, higher framerates for VR, etc.

Investment in RTRT for games pales in comparison to the investment in VR, after all.

As mentioned above, just because Nvidia launched a GeForce family using chips clearly oriented at the Pro market, with RT fixed-function units, doesn't mean rasterization has hit a wall.
 
So:
Phil Spencer going on the record at E3 and mentioning ray tracing for the next Xbox platform
Microsoft launching DXR
NVIDIA doing RTX
AMD saying they are going to do ray tracing too
Intel officials excited for ray tracing, probably will integrate it into their dGPUs too considering their history with ray tracing on Larrabee
Major developers making demos and playing with ray tracing in major engines
Most developers speaking enthusiastically about doing ray tracing

All of that is not enough to convince some that rasterization has reached its limits, and that the general trend in the industry is to use ray tracing to advance real-time graphics?
 
Why is it that, as far as I understand, RT cores can’t be used for anything else?

Because they're fixed-function units, not programmable, which is why they're extremely efficient area- and power-wise.
Same thing with mining ASICs.
 
Why is it that, as far as I understand, RT cores can’t be used for anything else? It would be nice if all that silicon was used for something at all times, even when no RT is required. What am I missing?

Exactly. If RT cores could also be used for anything else in traditional rasterising, I'd be waaaaay more convinced that they'd appear in the next generation.
 

Looking at dev blogs and Twitter, it's a bit less enthusiastic: most developers hate the BVH black box. Some are not convinced of the utility of raytracing for the moment and think it is probably too soon... And many, Morgan McGuire himself among them, seem to think we will have limited RT functionality in the next console generation, and that it will become a first-class citizen the generation after that...

EDIT: From my understanding, we will see fewer and fewer shadow maps next generation, RT cores or not, but other raytracing effects will not be used. Shadows and AO cost less than other global illumination effects...
 
The Ray Tracing Pipeline
The flow of data through the ray tracing pipeline differs from the traditional raster pipeline. Figure 2 shows an overview of the two pipelines for comparison.

Gray blocks are considered to be non-programmable (fixed-function and/or hardware). These evolve and improve over time as the underlying implementation matures. White blocks represent fully programmable stages. Diamond-shaped stages are where scheduling of work happens.


Figure 2. Traditional rasterization pipeline versus the ray tracing pipeline
Unlike rasterization, the number of “units” (rays) of work performed depends on the outcome of previous work units. This means that new work can be spawned by programmable stages and fed directly back into the pipeline.

Four key components make up our ray tracing API:

  • Acceleration Structures
  • New shader domains for ray tracing
  • Shader Binding Table
  • Ray tracing pipeline objects

https://devblogs.nvidia.com/vulkan-raytracing/
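The feedback loop described in that quote (programmable stages spawning new rays that are fed back into the pipeline) can be modelled as a toy work queue. Everything here, the ray format and the shaders, is invented purely to illustrate the data flow, not any actual API:

```python
from collections import deque

def trace(scene_intersect, closest_hit, miss, primary_rays):
    """Toy model of the RT pipeline: fixed-function intersection plus
    programmable hit/miss shaders that may spawn secondary rays."""
    queue = deque(primary_rays)       # rays in flight
    results = []
    while queue:                      # total work depends on previous work units
        ray = queue.popleft()
        hit = scene_intersect(ray)    # stands in for BVH traversal + intersection
        color, secondary = (closest_hit if hit else miss)(ray, hit)
        results.append(color)
        queue.extend(secondary)       # new work fed directly back into the pipeline
    return results

# Trivial "scene": primary rays hit a surface, which spawns one bounce that misses.
hit_test = lambda ray: ray["depth"] == 0
on_hit   = lambda ray, hit: ("surface", [{"depth": ray["depth"] + 1}])
on_miss  = lambda ray, hit: ("sky", [])
print(trace(hit_test, on_hit, on_miss, [{"depth": 0}]))  # ['surface', 'sky']
```

This is exactly the contrast with rasterization the quote draws: a rasteriser's workload is fixed up front by the geometry submitted, while here the total number of work units only emerges as the shaders run.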
 