Next Generation Hardware Speculation with a Technical Spin [2018]

What's the definition of 'raytracing core'?

I'm not completely sure. Whatever Nvidia defines as one? I think 'BVH accelerator' would be the correct term.

No. Deriving algorithms needs true intelligence. ML uses neural nets to find inferred patterns. I think the Tensor cores are required to scan the images and compare them to learnt behaviour to find a match.

Sorry, that's not quite what I meant. I was asking if tensor cores, offline, could derive algorithms that could be run realtime on compute. So use offline tensor cores to develop a realtime compute solution for AA and denoising.
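To make the idea concrete, here's a toy sketch in Python of "derive offline, run on plain compute". Nothing here reflects Nvidia's actual pipeline; the 3x3 kernel, the synthetic noise, and the least-squares fit are all illustrative assumptions, just to show the shape of the workflow: heavy fitting offline, a cheap per-pixel weighted sum at runtime.

# Toy sketch: fit a small denoising kernel offline, then apply it at
# runtime as an ordinary convolution that plain compute could run.
# All data is synthetic and purely illustrative.
import numpy as np

def neighbourhoods(img, k=3):
    # Gather every k x k neighbourhood of img as one row of a matrix.
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    cols = [padded[dy:dy + h, dx:dx + w].ravel()
            for dy in range(k) for dx in range(k)]
    return np.stack(cols, axis=1)                 # shape: (h*w, k*k)

# --- "offline" phase (the part ML hardware would accelerate) ---
rng = np.random.default_rng(0)
clean = rng.random((64, 64))                      # ground-truth frame
noisy = clean + rng.normal(0, 0.1, clean.shape)   # 1spp-style noise
A = neighbourhoods(noisy)
kernel, *_ = np.linalg.lstsq(A, clean.ravel(), rcond=None)

# --- "realtime" phase: plain weighted sum per pixel, no ML hardware ---
denoised = (neighbourhoods(noisy) @ kernel).reshape(noisy.shape)
print("mean error before:", np.abs(noisy - clean).mean())
print("mean error after: ", np.abs(denoised - clean).mean())

A real solution would obviously be a proper neural net rather than one linear kernel, but the division of labour is the same: the expensive derivation happens offline, and what ships is something compute shaders can evaluate.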
 
1) Ray tracing gives better results at the same cost.
Source? Current examples are 1080p30 on a 13 TF GPU with raytracing hardware, versus some 30+ fps demos on 4 TF GPUs. How are you calculating that RT gives better quality at the same cost? It's better quality in terms of features (reflections), but that's at higher cost, while I don't know that there are any comparable metrics for the lighting aspect in terms of quality and cost.
 
Source? Current examples are 1080p30 on a 13 TF GPU with raytracing hardware, versus some 30+ fps demos on 4 TF GPUs. How are you calculating that RT gives better quality at the same cost? It's better quality in terms of features (reflections), but that's at higher cost, while I don't know that there are any comparable metrics for the lighting aspect in terms of quality and cost.
In the back of my head I'm wondering: since RT does AO, GI, reflections, and soft shadows, could it calculate all of them using a single ray? More load per ray, but another ray may not need to be cast, in the sense that each feature doesn't need its own separate ray pass.

I think if you wanted all of them using rasterization techniques, you're going to be doing a lot of separate passes and techniques. In that way RT is likely more efficient as you move further into graphical complexity.

Then again, I could be wrong. I need to read some dev words on the topic.
 
And there will be a lot of potential for optimization in the future. I also don't see anything that would solve the shadow problem without raytracing. HFTS looks worse and costs 50% of the performance; the raytracing solutions shown so far are faster and look better. I don't see a solution for reflections either. GI and AO, on the other hand, are more likely to work without raytracing, and the two are often computed together anyway.
 
I have to ask, how many of you pro RT folks are still on a 1080p set?
Not understanding the relevance. In all the demos we've seen, tensor cores haven't been used for AI upscaling yet. If everything comes together, I can see it being closer to 4K with AI upscaling.
 
Source? Current examples are 1080p30 on a 13 TF GPU with raytracing hardware, versus some 30+ fps demos on 4 TF GPUs. How are you calculating that RT gives better quality at the same cost? It's better quality in terms of features (reflections), but that's at higher cost, while I don't know that there are any comparable metrics for the lighting aspect in terms of quality and cost.
The quality and performance we've seen of existing games and demos, that's my source. The voxel ray tracing link I posted earlier is a good one.

I have to ask, how many of you pro RT folks are still on a 1080p set?
Native 4K is a waste of resources: four times the pixel processing computation for a marginal increase in sharpness that 90% of people won't even notice during gameplay. The fact that people need Digital Foundry to tell them resolution differences at the 1080p range already proves it. Just do a plain bilinear upscale and call it a day.
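For reference, "plain bilinear upscale" means something like the following numpy sketch (single channel, purely illustrative; in practice the GPU's texture sampler does this essentially for free):

import numpy as np

def bilinear_upscale(img, out_h, out_w):
    # Plain bilinear resize of a single-channel image.
    in_h, in_w = img.shape
    # Map each output pixel centre back into the source image.
    ys = np.clip((np.arange(out_h) + 0.5) * in_h / out_h - 0.5, 0, in_h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * in_w / out_w - 0.5, 0, in_w - 1)
    y0 = np.clip(np.floor(ys).astype(int), 0, in_h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, in_w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    # Blend the four nearest source pixels.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x0 + 1] * wx
    bot = img[y0 + 1][:, x0] * (1 - wx) + img[y0 + 1][:, x0 + 1] * wx
    return top * (1 - wy) + bot * wy

frame = np.random.default_rng(1).random((1080, 1920))
print(bilinear_upscale(frame, 2160, 3840).shape)   # (2160, 3840): 4x the pixels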
 
Native 4K is a waste of resources: four times the pixel processing computation for a marginal increase in sharpness that 90% of people won't even notice during gameplay. The fact that people need Digital Foundry to tell them resolution differences at the 1080p range already proves it. Just do a plain bilinear upscale and call it a day.
Not necessarily, but... here's another opinion on it:

https://www.resetera.com/posts/14188341/

on the topic of the size-to-distance relationship:
Gonna disagree when it comes to rasterised graphics, which suffer from aliasing: flickering, pixel pop-in, and other macro effects that distract beyond the ability to resolve resolution.
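Since that link is about the size-to-distance relationship, here's a quick back-of-envelope pixels-per-degree calculator (Python; the 55"/8 ft setup and the oft-quoted ~60 ppd acuity threshold are illustrative assumptions, not hard limits):

import math

def pixels_per_degree(diagonal_in, distance_in, horizontal_px):
    # ppd for a 16:9 display viewed straight on.
    width_in = diagonal_in * 16 / math.hypot(16, 9)
    angle_deg = math.degrees(2 * math.atan(width_in / (2 * distance_in)))
    return horizontal_px / angle_deg

# 55" TV at 8 feet (96"): how much sharpness does 4K add over 1080p?
for res, px in [("1080p", 1920), ("4K", 3840)]:
    print(res, round(pixels_per_degree(55, 96, px)), "ppd")

At that distance 1080p already lands near the ~60 ppd acuity figure, which is the crux of the disagreement: static resolving power says one thing, while aliasing artefacts in motion say another.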
 
Not understanding the relevance. In all the demos we've seen, tensor cores haven't been used for AI upscaling yet. If everything comes together, I can see it being closer to 4K with AI upscaling.
As long as we don't have solid evidence that AI upscaling can get a base 1080p buffer even close to the sharpness of a native 4K buffer, your point is moot.
Native 4K is a waste of resources: four times the pixel processing computation for a marginal increase in sharpness that 90% of people won't even notice during gameplay. The fact that people need Digital Foundry to tell them resolution differences at the 1080p range already proves it. Just do a plain bilinear upscale and call it a day.
Absolutely incorrect if the latest RDR 2 analysis is anything to go by ;). And that's not even factoring in the larger screen sizes most people are gaming on these days; screens will only get larger from this point on, and high resolution is needed ever more. 1080p just won't cut it. I'm not saying 4K native all the time, but even reconstructed 4K or CBR is far sharper and cleaner than vanilla 1080p.
Tech advancement is not about pushing one aspect to the extreme but about improving everything as a whole. Not only is resolution a crucial contributing aspect, it is also bound to the mainstream display/TV market; the benefit is tangible and should not be ignored.
 
Why are you claiming that every game with raytracing only runs at 1080p/30fps? Enlisted runs at 90+ fps with raytraced GI in UHD.
I'm not stating that as a claim, but from what I'd noticed in the demos, they weren't running at 4K or high framerates. Apparently some are, so I stand corrected, but that's where people providing data and more detailed discussion, rather than one-liners like "better quality at same cost", really help discussions along.

In the back of my head I'm wondering: since RT does AO, GI, reflections, and soft shadows, could it calculate all of them using a single ray?
You need multiple rays for tracing these things. Reflections need coherent rays to preserve the quality of the source. Ambient lighting (simulating surface roughness) needs scattered rays to sample a wide area. Shadows need rays per light, with soft shadows needing multiple rays per light to produce better shadowing with less noise. How well your denoising works mitigates some of those requirements, e.g. allowing reduced sample quality on shadow rays.
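To put rough numbers on "multiple rays", here's a trivial budget sketch; every per-effect sample count below is an invented placeholder (real engines tune these heavily and lean on denoising), but it shows why one ray per pixel doesn't cover everything:

# Illustrative ray-budget arithmetic; all sample counts are placeholders.
def rays_per_pixel(num_lights, reflection_spp=1, ao_spp=4,
                   shadow_spp_per_light=2, gi_spp=2):
    reflections = reflection_spp                  # coherent rays
    ambient = ao_spp                              # scattered hemisphere samples
    shadows = num_lights * shadow_spp_per_light   # soft shadows
    gi = gi_spp                                   # indirect bounces
    return reflections + ambient + shadows + gi

rpp = rays_per_pixel(num_lights=3)
print("rays/pixel:", rpp)                         # 13
print("rays/frame @ 1080p:", rpp * 1920 * 1080)   # ~27M
print("rays/frame @ 4K:   ", rpp * 3840 * 2160)   # ~108M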
 
The 4K versus upscaled discussion is moot here because it applies equally to rasterisation and raytracing. If 1080p upscaled is good enough for RT, it's good enough for rasterisation, which will gain equally, providing more flops per pixel for rasterisation and compute-based solutions.
 
Not necessarily, but... here's another opinion on it:

https://www.resetera.com/posts/14188341/

on the topic of the size-to-distance relationship:
Those problems still exist at 4K, and it's cheaper to solve them at 2K.

As long as we don't have solid evidence that AI upscaling can get a base 1080p buffer even close to the sharpness of a native 4K buffer, your point is moot.

Absolutely incorrect if the latest RDR 2 analysis is anything to go by ;). And that's not even factoring in the larger screen sizes most people are gaming on these days; screens will only get larger from this point on, and high resolution is needed ever more. 1080p just won't cut it. I'm not saying 4K native all the time, but even reconstructed 4K or CBR is far sharper and cleaner than vanilla 1080p.
Tech advancement is not about pushing one aspect to the extreme but about improving everything as a whole. Not only is resolution a crucial contributing aspect, it is also bound to the mainstream display/TV market; the benefit is tangible and should not be ignored.
Sure, you can see differences when you freeze a frame and zoom in. Is that the case 99% of the time? No.

I didn't hear any complaints about the base PS4 version, which runs at 1080p. The complaints were about the sub-1080p resolution of the base XBO version and the weird resolution tricks of the PS4 Pro and base XBO versions.

I'm not stating that as a claim, but from what I'd noticed in the demos, they weren't running at 4K or high framerates. Apparently some are, so I stand corrected, but that's where people providing data and more detailed discussion, rather than one-liners like "better quality at same cost", really help discussions along.

You need multiple rays for tracing these things. Reflections need coherent rays to preserve the quality of the source. Ambient lighting (simulating surface roughness) needs scattered rays to sample a wide area. Shadows need rays per light, with soft shadows needing multiple rays per light to produce better shadowing with less noise. How well your denoising works mitigates some of those requirements, e.g. allowing reduced sample quality on shadow rays.
1) I actually posted a very specific metric last page. I don't know why you keep ignoring it.

2) Here are the results of doing the whole enchilada at 1spp:

http://cg.ivd.kit.edu/atf.php

Pretty good, I would say.
 
The 4K versus upscaled discussion is moot here because it applies equally to rasterisation and raytracing. If 1080p upscaled is good enough for RT, it's good enough for rasterisation, which will gain equally, providing more flops per pixel for rasterisation and compute-based solutions.
Actually, it doesn't apply equally, since with RT you can do adaptive supersampling: you only pay the cost for a few pixels instead of rendering the whole screen at a higher resolution.
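A toy version of that in Python; the contrast metric, the 0.8 threshold, and the 4 extra samples are all illustrative assumptions, just to show where the savings come from:

import numpy as np

rng = np.random.default_rng(2)
frame = rng.random((1080, 1920))            # stand-in for a rendered frame

# Cheap contrast metric: absolute difference to the right/down neighbour.
gx = np.abs(np.diff(frame, axis=1, prepend=frame[:, :1]))
gy = np.abs(np.diff(frame, axis=0, prepend=frame[:1, :]))
needs_extra = np.maximum(gx, gy) > 0.8      # threshold is an assumption

extra_spp = 4                               # extra rays per flagged pixel
base_rays = frame.size                      # one primary ray per pixel
extra_rays = needs_extra.sum() * extra_spp
print(f"flagged: {needs_extra.mean():.1%} of pixels")
print(f"cost vs naive 2x2 supersampling: "
      f"{(base_rays + extra_rays) / (4 * base_rays):.2f}x")

Only the flagged pixels pay for the extra rays, which is the point: the cost scales with edge density, not with the full-screen pixel count.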
 
1) I actually posted a very specific metric last page. I don't know why you keep ignoring it.
I'm not ignoring it. You just haven't qualified it. They improved the denoising for 1spp tracing. Okay. Now qualify how you get better results at the same cost. What's your reference data for voxelised lighting?
Pretty good, I would say.
Definitely. It doesn't prove quality is better than voxelised lighting at the same cost though. For that, you need comparable data on the alternative.
 