Next Generation Hardware Speculation with a Technical Spin [2018]

All of that is not enough to convince some that rasterization has reached its limits, and that the general trend in the industry is to use ray tracing to advance real-time graphics?

Depends on who you ask. RT is probably more popular with PC people, while for consoles it might not yet be all that impressive, because we don't know yet whether they will contain RT hardware; likely not, but you never know.
 
Is there scope for programmable versions or equivalents, or is it just inherent to what they are or do?

There are approaches using programmable hardware, namely using "regular" compute shaders to perform raytracing. It's what Radeon ProRender does with any OpenCL / Metal 2 compliant hardware, or what NVIDIA OptiX does with... well, NVIDIA hardware.
Turing's approach to RT uses dedicated fixed-function hardware units that originate from papers published in 2011, according to Anandtech.
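For anyone unfamiliar with what "raytracing in regular compute" means in practice, here's a minimal, untested C++ sketch (the scene and names are made up for illustration; a real implementation like ProRender's runs this per-pixel loop as an OpenCL/Metal compute kernel over an acceleration structure, not a brute-force CPU loop):

```cpp
// Rough sketch of ray tracing as ordinary parallel code: one "thread" (here,
// one loop iteration) per pixel, intersecting a toy scene of spheres by brute
// force and writing an ASCII framebuffer.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; float radius; };

// Nearest hit distance along the (unit) ray, or a negative value on miss.
static float intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3 oc = sub(origin, s.center);
    float b = dot(oc, dir);
    float c = dot(oc, oc) - s.radius * s.radius;
    float disc = b * b - c;
    return disc < 0.f ? -1.f : -b - std::sqrt(disc);
}

int main() {
    const int width = 64, height = 36;                 // tiny "framebuffer"
    std::vector<Sphere> scene = {{{0.f, 0.f, 3.f}, 1.f}};
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            // Build a primary ray through the pixel (pinhole camera).
            Vec3 dir = {(x + 0.5f) / width - 0.5f,
                        0.5f - (y + 0.5f) / height, 1.f};
            float len = std::sqrt(dot(dir, dir));
            dir = {dir.x / len, dir.y / len, dir.z / len};
            float nearest = -1.f;
            for (const Sphere& s : scene) {            // brute-force "traversal"
                float t = intersect({0.f, 0.f, 0.f}, dir, s);
                if (t > 0.f && (nearest < 0.f || t < nearest)) nearest = t;
            }
            std::putchar(nearest > 0.f ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```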


The new RT Cores are incorporated into the compute units. They are relatively cheap, like the Tensor Cores.
You'll have to define what "cheap" means, because all Turing GPUs are ~50% larger when compared to their Pascal counterparts with similar compute and rasterization units.
 
Looked but did not see this posted. Interview comments from Tim Sweeney.

May 4, 2018
“It turns out that at around 25 teraflops operations per second, ray tracing becomes the best way to produce realistic looking pixels,” Sweeney said while speaking with MCVUK. “The demo we showed in partnership with ILMxLab is the first step in that direction. Part of the scene is rendered and part is ray traced, all the shadows and reflections come from ray tracing, and like movies, game engines are going to adopt this. You’re going to see more and more ray-traced elements in our scenes, and I think ten years from now you might find nothing but ray tracing in our engines. Everybody who’s starting a triple-A project, they all should be thinking about ray tracing.”

And when exactly might this technology become more common? Sweeney believes that within as little as two years, we might be seeing single GPU units with that kind of computing power. “It’s not coming to your smartphone anytime soon,” he said. “But GPUs move fast. You might find within two years that you have that amount of computing power in a single GPU. And suddenly it becomes possible at high-end.”
https://wccftech.com/tim-sweeney-aaa-ray-tracing-consoles/
 
Why is it that, as far as I understand, RT cores can’t be used for anything else? It would be nice if all that silicon was used for something at all times, even when no RT is required. What am I missing?
But they're not supposed to be. DXR is documented to be as flexible as possible, in the same way compute is. NVIDIA entering the ring like this could very well have been a cost-cutting exercise, and perhaps at a later date there will be an RT implementation built directly into compute without such a loss of silicon.
 
While I cannot begin to guess where and how things will shake out with all of the technical stuff and developer support, isn't it safer to assume that 9th-gen consoles (PS5, Scarlett) will not have RT hardware? That's aside from all the talk and hype about cloud streaming, and I'm not buying into what Ubisoft said about only one more generation of traditional consoles. So assuming there is a 10th generation of traditional consoles (i.e. PS6, Xbox Zodiac, or whatever) with all the processing hardware and RAM in your home, arriving around 2026+ (or the late 2020s), that might be the soonest consoles would have hardware RT support, with enough time gone by for realtime RT for games to have matured, shaken out, and improved in terms of cost, etc. And that's not counting hypothetical mid-gen upgrades of 9th-gen consoles (i.e. PS5 Pro, Xbox Scarlett XX).

Of course, even by the late 2020s, realtime rendering will not use 100% raytracing or path tracing, but some ray+raster hybrid, I would think. Could be wrong, but then remember GDC 2018's RT for games roadmap:

[Image: GDC 2018 ray tracing for games roadmap]
It's definitely going to be a hybrid approach more or less forever; the next era is looking pretty steep in requirements. There will likely never be a time when we are 100% ray traced. It's just not efficient.
As for the assertion that it would be safer to assume the consoles will not contain RT hardware: I think the answer is yes.

I rekindled my position in this argument not because I think we're going 100% RT. The discussion seemed to put RT at a 5% or less chance of appearing on consoles. I think with the information available, it may be sitting closer to 45%. That still gives rasterization the edge here, but I think RT has a plausible chance of appearing in at least Microsoft's console.
 
I somewhat agree. Fully fledged RT on consoles, requiring silicon on the scale of the RTX 2070's, I think still sits at 5%.

I think we're looking at quite a high probability if CUs can be customised in some relatively cheap way to provide limited RT. Pulling figures out of my arse: let's say it makes CUs 10% larger and 50% as effective as NVIDIA's fixed-function units, but allows devs greater flexibility over how much they focus on RT.
 
And running at 1080p30. Now, if you read what I said properly, I'm saying rasterised hacks, though hacks, may well be good enough approximations and also run fast enough.

Why are you willing to think that RT is going to get much better in the next couple of years and worth inclusion into console but aren't willing to accept that rasterisation has plenty of room for improvement too?

Firstly, it was an example of how compute has plenty left to offer, not a proposed universal solution. Secondly, what's static about Dreams and Claybook? Thirdly, you can combine technologies: use simplified SDF models for shadow casting, say, on top of conventional rasterising (a rough sketch of that idea is below). There are options that should be explored alongside the RT options, because the next consoles are going to be with us for 5+ years, and it'd be pretty tragic if they come with first-gen raytracing that's so ineffectual in real-world games that it barely makes a difference while framerates struggle for the entire generation, or if they skip raytracing altogether and it turns out RT is a super-efficient technique across the board to improve games, leaving the consoles a generation behind rendering technology for their whole lives.
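For anyone wondering what the SDF-shadow combination looks like in code, here's a rough, untested C++ sketch (single-sphere "scene" and all names invented; in an engine this would be a pixel/compute shader evaluating a field of simplified occluder proxies on top of the rasterized image):

```cpp
// The main image is rasterized as usual; for each shaded point we march a
// shadow ray through a signed distance field of simplified occluders and use
// the closest near-miss to shape a soft penumbra (the classic SDF trick).
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to the (simplified) occluder set: one sphere as a stand-in.
static float sceneSDF(Vec3 p) {
    Vec3 c = {0.f, 1.f, 0.f};
    return length({p.x - c.x, p.y - c.y, p.z - c.z}) - 0.5f;
}

// Sphere-traced soft shadow: returns 0 = fully shadowed, 1 = fully lit.
static float softShadow(Vec3 point, Vec3 lightDir, float maxDist, float softness) {
    float result = 1.f;
    float t = 0.05f;                          // small offset to avoid self-hits
    for (int i = 0; i < 64 && t < maxDist; ++i) {
        float d = sceneSDF(add(point, mul(lightDir, t)));
        if (d < 1e-3f) return 0.f;            // occluder hit: hard shadow
        result = std::min(result, softness * d / t);
        t += d;                               // sphere-trace step
    }
    return result;
}

int main() {
    // Example: shade a ground point with a light directly overhead.
    float lit = softShadow({0.3f, 0.f, 0.f}, {0.f, 1.f, 0.f}, 10.f, 8.f);
    std::printf("shadow factor: %.2f\n", lit);
}
```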
1) What makes you think first-round, tacked-on, rushed, naive implementations are an accurate benchmark of what will be possible in a couple of years? With all the resources invested in rasterization research, the improvements we've gotten this gen have been marginal, and those that stand out are special cases that can't be used for most games. RTRT is a new field with minimal resources invested and it already provides great results.

2) Static AND procedural geometry I said. Also if speed is a concern, hardware acceleration > compute units.

Why do you consider clever algorithms and quality trade-offs for rasterization techniques but not for ray tracing? Take voxel ray tracing, for example, which is much faster than triangle RT in many cases.
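As a rough illustration of why voxel traversal is cheap, here's a minimal, untested C++ sketch of the classic Amanatides & Woo grid walk (the grid contents and names are made up):

```cpp
// Stepping a ray through a regular grid needs only a handful of adds and
// compares per voxel: no triangle tests and no BVH. Toy 8x8x8 occupancy grid.
#include <cmath>
#include <cstdio>

const int N = 8;
bool voxel[N][N][N] = {};                        // true = solid

// Returns true if the ray (origin inside the grid, unit direction with no
// zero components; a real version guards against that) hits a solid voxel.
bool traceGrid(float ox, float oy, float oz,
               float dx, float dy, float dz,
               int& hx, int& hy, int& hz) {
    int ix = (int)ox, iy = (int)oy, iz = (int)oz;                // current voxel
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1, stepZ = dz > 0 ? 1 : -1;
    // Distance along the ray to the next voxel boundary on each axis...
    float tMaxX = ((stepX > 0 ? ix + 1 : ix) - ox) / dx;
    float tMaxY = ((stepY > 0 ? iy + 1 : iy) - oy) / dy;
    float tMaxZ = ((stepZ > 0 ? iz + 1 : iz) - oz) / dz;
    // ...and the spacing between successive boundaries on each axis.
    float tDeltaX = std::fabs(1.f / dx), tDeltaY = std::fabs(1.f / dy), tDeltaZ = std::fabs(1.f / dz);

    while (ix >= 0 && ix < N && iy >= 0 && iy < N && iz >= 0 && iz < N) {
        if (voxel[ix][iy][iz]) { hx = ix; hy = iy; hz = iz; return true; }
        if (tMaxX < tMaxY && tMaxX < tMaxZ) { ix += stepX; tMaxX += tDeltaX; }
        else if (tMaxY < tMaxZ)             { iy += stepY; tMaxY += tDeltaY; }
        else                                { iz += stepZ; tMaxZ += tDeltaZ; }
    }
    return false;                                                // left the grid
}

int main() {
    voxel[5][5][5] = true;                                       // one solid voxel
    int hx, hy, hz;
    float d = std::sqrt(3.f);                                    // normalize (1,1,1)
    if (traceGrid(0.5f, 0.5f, 0.5f, 1.f / d, 1.f / d, 1.f / d, hx, hy, hz))
        std::printf("hit voxel (%d,%d,%d)\n", hx, hy, hz);
}
```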

Also, as some other people have said, AMD is in charge of next-gen console design, not NVIDIA, so speculation based on Turing can only go so far.
 
I think with the information available, it may be sitting closer to 45%. That still gives rasterization the edge here, but I think RT has a plausible chance of appearing in at least Microsoft's console.

Considering Phil's comment, I believe it's closer to a 90% chance of appearing on the next Xbox. Whether it's the base console or the refresh is anyone's guess. If the base Xbox has it and it proves to be a significant enough advantage, then the refresh PlayStation will incorporate it as well.
 
I somewhat agree. Fully fledged RT on consoles, requiring silicon on the scale of the RTX 2070's, I think still sits at 5%.

I think we're looking at quite a high probability if CUs can be customised in some relatively cheap way to provide limited RT. Pulling figures out of my arse: let's say it makes CUs 10% larger and 50% as effective as NVIDIA's fixed-function units, but allows devs greater flexibility over how much they focus on RT.
AMD's response will be different for sure, so I'm curious to see what will happen. I also don't believe it needs to be done the NVIDIA way.
But the wind is certainly blowing in that direction, and it's clearly only a matter of time before RT comes to consoles. It's a great debate to read right now, with all the crossing points of view.

Everything I've read so far in this thread, from all posters, is pretty valid on both sides of the argument, at least from my POV. Which is going to make it great when they actually reveal the consoles in the next year or so.
 
Considering Phil's comment, I believe it's closer to a 90% chance of appearing on the next Xbox. Whether it's the base console or the refresh is anyone's guess. If the base Xbox has it and it proves to be a significant enough advantage, then the refresh PlayStation will incorporate it as well.
I was joking with Alnets on the side. Al's got some great thoughts but he refuses to share ;)
But half-jokingly I said: MS announces 2 consoles at E3.
1 with RT.
1 without.
Streaming is for mobile.

It was just an attempt to sort of bridge these two positions together.
 
Yep. And newer GPU features will enable things like shader LOD to accelerate rendering of such features.
So:
Phil Spencer going on the record at E3 and mentioning ray tracing for the next Xbox platform
Microsoft launching DXR
NVIDIA doing RTX
AMD saying they are going to do ray tracing too
Intel officials are excited about ray tracing, and will probably integrate it into their dGPUs too, considering their history with ray tracing on Larrabee
Major developers making demos and playing with ray tracing in major engines
Most developers speaking enthusiastically about doing ray tracing

All of that is not enough to convince some that rasterization has reached its limits, and that the general trend in the industry is to use ray tracing to advance real-time graphics?
We know RT is the future. This line of 'debate' (parroting a viewpoint ad nauseam) is boring and pointless. There are actual implementation concerns to discuss here. RT is 1) possible on compute, 2) demonstrably slow given current data, and 3) reliant on a lot of unknowns, like what AMD's solution is going to be.

Could all those whose only contribution is to say "raytracing is the future" please stop, and only post if you're genuinely discussing the pros and cons of the different possibilities and hardware choices.
 
Why is it that, as far as I understand, RT cores can’t be used for anything else? It would be nice if all that silicon was used for something at all times, even when no RT is required. What am I missing?
At the moment, we only understand there's fixed-function HW in RTX. Quite possibly the shaders could be adapted, or the memory logic, or, as suggested by some, a more general-purpose 'data structure navigation' processor could be employed. RTX isn't the be-all and end-all. At the moment, though, comparing Turing to Pascal, we have a lot more silicon and yet only marginally more CUDA cores, so the correlation, though weak, is that RT takes up a lot of space (including the Tensor cores).
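To make "data structure navigation" concrete, here's a rough C++ sketch of the kind of work a BVH-walking RT unit is believed to offload: ray-vs-box slab tests plus stack management. The node layout and names are invented purely for illustration; real formats are vendor-specific.

```cpp
// Iterative BVH traversal: visit the children of every box the ray hits and
// report leaf primitives. On GPUs this loop (or its hardware equivalent, per
// ray) is what dominates traversal cost.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Ray  { float o[3]; float inv_d[3]; };     // origin, 1/direction
struct AABB { float lo[3]; float hi[3]; };
struct Node {                                    // hypothetical BVH node
    AABB box;
    int  left, right;                            // child indices (-1 = none)
    int  primitive;                              // leaf primitive (-1 = inner node)
};

// Classic slab test against the three pairs of axis-aligned planes.
bool hitsBox(const Ray& r, const AABB& b) {
    float tmin = 0.f, tmax = 1e30f;
    for (int a = 0; a < 3; ++a) {
        float t0 = (b.lo[a] - r.o[a]) * r.inv_d[a];
        float t1 = (b.hi[a] - r.o[a]) * r.inv_d[a];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;
    }
    return true;
}

void traverse(const std::vector<Node>& bvh, const Ray& r) {
    std::vector<int> stack = {0};                // start at the root
    while (!stack.empty()) {
        int i = stack.back(); stack.pop_back();
        const Node& n = bvh[i];
        if (!hitsBox(r, n.box)) continue;
        if (n.primitive >= 0) { std::printf("leaf hit: prim %d\n", n.primitive); continue; }
        if (n.left  >= 0) stack.push_back(n.left);
        if (n.right >= 0) stack.push_back(n.right);
    }
}

int main() {
    // Two leaves under one root box; a ray pointing down +Z hits only leaf 0.
    std::vector<Node> bvh = {
        {{{-2, -2, 0}, {2, 2, 10}}, 1, 2, -1},
        {{{-1, -1, 4}, {0, 0, 5}}, -1, -1, 0},
        {{{ 1,  1, 4}, {2, 2, 5}}, -1, -1, 1},
    };
    Ray r{{-0.5f, -0.5f, 0.f}, {1e9f, 1e9f, 1.f}};   // inv of roughly (0, 0, 1)
    traverse(bvh, r);
}
```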
 
Figure 2. Traditional rasterization pipeline versus the ray tracing pipeline
Unlike rasterization, the number of “units” (rays) of work performed depends on the outcome of previous work units. This means that new work can be spawned by programmable stages and fed directly back into the pipeline.
Except modern APIs allow for shaders to spawn graphics work, no?
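As a toy illustration of the quoted point (not any real API; all names and the hit condition are made up), the total ray count is only known after shading, because each hit can spawn further rays:

```cpp
// "Work spawns work": a hit decides, at shading time, to launch a shadow ray
// and a reflection ray, so the amount of work fans out data-dependently. In
// the quote's framing, rasterization's work count is fixed up front by the
// submitted geometry.
#include <cstdio>

struct Ray { float origin[3]; float dir[3]; int depth; };

// Stand-in for real intersection and shading; the toy condition just limits
// recursion depth so the example terminates.
bool intersectScene(const Ray& r) { return r.depth < 3; }

int raysTraced = 0;

void trace(const Ray& ray) {
    ++raysTraced;
    if (!intersectScene(ray)) return;        // miss: this path terminates
    Ray shadowRay  = ray; shadowRay.depth  = 99;             // toy: reports a miss
    Ray reflectRay = ray; reflectRay.depth = ray.depth + 1;  // continues the path
    trace(shadowRay);                        // new work spawned by the hit...
    trace(reflectRay);                       // ...and fed back into the pipeline
}

int main() {
    trace(Ray{{0, 0, 0}, {0, 0, 1}, 0});
    std::printf("rays traced for one pixel: %d\n", raysTraced);
}
```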
 
1) What makes you think first-round, tacked-on, rushed, naive implementations are an accurate benchmark of what will be possible in a couple of years?
I've already said it doesn't. Quoting myself:
"Going by the current demo, which obviously doesn't represent absolutely what RT can do,"

With all the resources invested in rasterization research, the improvements we've gotten this gen have been marginal, and those that stand out are special cases that can't be used for most games. RTRT is a new field with minimal resources invested and it already provides great results.
This is the first gen with decent compute, and with little headroom because that same silicon is having to render the graphics. You could take a game targeting XB1's 1.4 TF and add 5 TF on top for computing fancy lighting and whatnot. I'd also dispute that the improvements have been marginal: comparing the latest, greatest games to launch titles, there are significant improvements.

2) Static AND procedural geometry I said. Also if speed is a concern, hardware acceleration > compute units.
Not if your fixed function units can't do what you need them to do to accelerate your workloads. The rasterising units in the PS4 are sitting idle when rendering Dreams.

Why do you consider clever algorithms and quality trade-offs for rasterization techniques but not for ray tracing?
I haven't denied anything on that front. Of course RT is going to improve - I've explicitly said as much numerous times. What I'm not willing to do is just assume that, although we're getting 1080p30 now on 500 mm² GPUs, we'll be able to get 4K30/4K60 in a console whose hardware starts production next year, on the assumption that everything will be wonderful by then. I want to discuss the viability based on known tech and intelligent forecasts, preferably backed with some decent evidence or references.
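For a sense of scale (my own back-of-envelope arithmetic, nothing more): going from 1080p30 to 4K60 on resolution and framerate alone is (3840 × 2160) / (1920 × 1080) × (60 / 30) = 4 × 2 = 8× the per-frame ray budget, before any image-quality improvements.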

Also, as some other people have said, AMD is in charge of next-gen console design, not NVIDIA, so speculation based on Turing can only go so far.
Absolutely. But it shouldn't be discussed on faith. We have data on how many rays are being processed in the latest RT hardware, and we see the results. It's stupid to ignore that and imagine that whatever RT hardware AMD produces will do far better and so should feature in the new hardware. We should be discussing what needs to happen for RT to make an appearance by comparing it fairly to the alternative, and be willing to accept neutrally if, actually, this gen RT is too early, or alternatively that the consoles should wait a couple more years. Or whatever - the discussion shouldn't be people pressing an agenda for the sake of it. This isn't politics. ;)
 