Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Was this post directed at me?
I was just answering Shifty's question... the Deep Down trailer was pre-rendered; Book of the Dead was/is not. Whether it's applicable to gameplay wasn't really the subject of my post.
Nah, I was responding to whether voxel GI was enough.
 
While I totally agree RT does GI better in the end, the GI effect perceived by human eyes from a high-quality voxel GI solution still impresses at a similar wow level. It still totally makes sense for a rasteriser given the hardware grunt we're limited to currently. It may sound contradictory, but compromise is sometimes needed for a better overall result; that's my theory. Of course I'm not saying we stick with it for good; at some point we shall fully embrace RT, but I don't believe right now is the time, especially when a $1200 GPU is only running Shadow of the Tomb Raider at 1080p 30-50 fps.
http://www.pcgameshardware.de/Grafi...ormance-in-Shadow-of-the-Tomb-Raider-1263244/
That's hardly an optimized title the way a console game would be. RT is a bolt-on for all the games being released this year. 2-3 years from now, if RT is part of the base console unit, games will be designed differently.
 
Eeep, ~30 fps in the brightly lit, complex mechanical area vs. up to 70 staring at a wall in shade. I thought RT was more or less a constant load (per light source) once you had decided how many rays you were going to trace?
Is it DXR that's causing the slowdown?
 
That's hardly an optimized title the way a console game would be. RT is a bolt-on for all the games being released this year. 2-3 years from now, if RT is part of the base console unit, games will be designed differently.
If the base consoles can run RT at 4K or even 1440p in 2-3 years without cutting back too much on other visuals, then I'm all in, but I somehow highly, highly doubt that.
 
Is it DXR that's causing the slowdown?
We can't say for sure, but if a brand-new high-end card is struggling to run Tomb Raider at 1080p above 30 fps, something has to be responsible. Further analysis will require people getting direct hands-on time and the ability to play with the knobs on every feature, but it's hard to imagine it's anything other than RT pulling the fps down like that.

Edit: IIRC some more knowledgeable folks have pointed out that TSMC's 7nm node doesn't offer much in the way of area savings over 12nm, so while the node shrink should help with thermals it probably won't lead to dramatic area savings. Nope, I was wrong: it's ~3 times the density, as per below.
 
TSMC's 7nm has about a 3X gain in density over 16nm. From their website:

TSMC’s 10nm
With a more aggressive geometric shrinkage, this process offers 2X logic density than its 16nm predecessor, along with ~15% faster speed and ~35% less power consumption.

7nm
Compared to its 10nm FinFET process, TSMC’s 7nm FinFET features 1.6X logic density, ~20% speed improvement, and ~40% power reduction.

I wouldn't expect a shrunk-down 2080 Ti in the next consoles, but something closer to a 2070 with further improvements and refinements in ray tracing acceleration. At 7nm that's probably around 250 mm² for the GPU plus 100 mm² or so for the CPU.
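
For reference, here's a quick back-of-the-envelope check of those TSMC figures and what they'd imply for die area; the TU106 / RTX 2070 die size is my own assumed number, used purely for illustration:

```python
# Back-of-the-envelope check of the TSMC figures quoted above.
# 16nm -> 10nm claims 2.0x logic density, 10nm -> 7nm claims another 1.6x,
# so the cumulative 16nm -> 7nm gain works out to roughly:
density_16_to_10 = 2.0
density_10_to_7 = 1.6
density_16_to_7 = density_16_to_10 * density_10_to_7
print(f"16nm -> 7nm logic density gain: ~{density_16_to_7:.1f}x")  # ~3.2x

# Illustration only: assuming a ~445 mm^2 12nm die (TU106 / RTX 2070 class)
# and treating 12nm as essentially a tuned 16nm process, a naive logic-only
# shrink to 7nm would suggest:
assumed_tu106_mm2 = 445
print(f"Naive shrink: ~{assumed_tu106_mm2 / density_16_to_7:.0f} mm^2")  # ~140 mm^2
# Real dies shrink less than this because I/O, analog and memory interfaces
# scale far worse than logic, which is why ~250 mm^2 is still a fair estimate.
```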
 
We can't say for sure, but if a brand-new high-end card is struggling to run Tomb Raider at 1080p above 30 fps, something has to be responsible. Further analysis will require people getting direct hands-on time and the ability to play with the knobs on every feature, but it's hard to imagine it's anything other than RT pulling the fps down like that.

Edit: IIRC some more knowledgeable folks have pointed out that TSMC's 7nm node doesn't offer much in the way of area savings over 12nm, so while the node shrink should help with thermals it probably won't lead to dramatic area savings.

Wait... what? TSMC's 7nm is supposed to offer 3.3X the density of its 16nm. And TSMC's 12nm is just an enhanced version of its 16nm.
 
If the base consoles can run RT at 4K or even 1440p in 2-3 years without cutting back too much on other visuals, then I'm all in, but I somehow highly, highly doubt that.
It all depends, I think.
Once we get into checkerboarding etc., using the tensor cores to possibly help with upscaling, there are a lot of different ways for this all to play out, because a console sets a standard for hardware.

Everything inside the Turing chip is desirable; there are lots of options at the developer's disposal. RT hardware support is just one pillar of Turing; there are other cores in there as well.
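
To illustrate what checkerboarding means in the simplest possible terms (this is a toy sketch, not how any shipping reconstruction works): render half the pixels each frame in an alternating pattern and fill the other half from the previous frame. Real implementations reproject with motion vectors, use ID buffers and could lean on tensor-core upscaling; all of that is omitted here.

```python
import numpy as np

def checkerboard_mask(height, width, frame_index):
    """Boolean mask of the pixels rendered this frame: even frames take one
    colour of the checkerboard, odd frames take the other."""
    yy, xx = np.mgrid[0:height, 0:width]
    return (xx + yy) % 2 == (frame_index % 2)

def reconstruct(rendered, previous_frame, frame_index):
    """Combine this frame's half of the pixels with last frame's image.
    `rendered` is a full-size buffer in which only this frame's checkerboard
    pixels are valid; a real implementation would reproject the other half
    with motion vectors instead of reusing it verbatim."""
    mask = checkerboard_mask(previous_frame.shape[0], previous_frame.shape[1], frame_index)
    output = previous_frame.copy()
    output[mask] = rendered[mask]
    return output
```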
 
If the base consoles can run RT at 4K or even 1440p in 2-3 years without cutting back too much on other visuals, then I'm all in, but I somehow highly, highly doubt that.
Current graphics are unbalanced. Even PS1 pre-rendered videos have better shadows than modern games. I'd cut back on polygon density and resolution if that'd get me ray tracing and volumetric effects.

480p raytracing & volumetrics > 4k rasterization and billboards.
 
It all depends, I think.
Once we get into checkerboarding etc., using the tensor cores to possibly help with upscaling, there are a lot of different ways for this all to play out, because a console sets a standard for hardware.

Everything inside the Turing chip is desirable; there are lots of options at the developer's disposal. RT hardware support is just one pillar of Turing; there are other cores in there as well.
I know this is all new tech, new cores and new optimizations, but the most powerful card, the 2080 Ti, not being able to run these new games above 1080p/60fps is rather alarming. Word is Battlefield V also did not maintain 60 fps at 1080p.
https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on
Not that we'll get tensor cores in PS5 or Scarlett(?) anyway, since they're AMD-bound.
 
Current graphics are unbalanced. Even PS1 pre-rendered videos have better shadows than modern games. I'd cut back on polygon density and resolution if that'd get me ray tracing and volumetric effects.

480p raytracing & volumetrics > 4k rasterization and billboards.
Um, I thought volumetrics were already getting more prevalent, at least in major AAA games, even at 4K. And I don't even want Infinity War-level CGI at 480p on my 4K TV :). That'd be like one step forward and three leaps backward.
 
Current graphics are unbalanced. Even PS1 pre-rendered videos have better shadows than modern games. I'd cut back on polygon density and resolution if that'd get me ray tracing and volumetric effects.

480p raytracing & volumetrics > 4k rasterization and billboards.
Unbalanced by what metric? They certainly have a different balance to 90s CGI, but I think it's a better one. I'm glad we are playing games with PS4-quality models, textures, and shading with blobby shadow maps instead of the garish, plastic-like worlds of a PS1 FMV coated in pixel-perfect shadows.
 
In fact, I'd bet that if devs were willing to sacrifice everything else to the point it looked as primitive as a typical PS1 CGI FMV, they'd be able to spend enough grunt on shadow maps large enough to look effectively just as precise. That would have to be the least worth-it trade-off in the history of gaming.
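
To put some very rough numbers on the "big enough shadow map" idea (the 30 m cascade extent and the resolutions below are made up for illustration, not taken from any shipping game):

```python
# How fine a shadow-map texel gets as resolution grows, assuming a single
# directional-light cascade covering a 30 m x 30 m slice of the scene
# (made-up numbers, purely for illustration).
scene_extent_m = 30.0
for resolution in (1024, 2048, 4096, 8192, 16384):
    texel_cm = scene_extent_m / resolution * 100.0
    print(f"{resolution:>5} x {resolution:<5} -> {texel_cm:5.2f} cm per texel")
# At 16384 x 16384 a texel covers under 2 mm of scene -- effectively
# "pixel perfect" -- but a 32-bit depth buffer that size is already ~1 GiB,
# which is exactly the kind of trade-off nobody ships.
```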
 
$499 is the cost of an Xbox One X today. Why is it overly optimistic to expect something similar, at a similar price point, in a console in 2 years?

DXR is out today. The games are coming today. DX12 maturity and DXR will be substantially better in 2020. Developers will be ready to support it.

That's why I have a hard time believing that the next generation of consoles is going to be $399... more like $499. Since MS is offering two (possibly 3) models of the next Xbox at launch, I could see the streaming box at $199-249, the contemporary/modern system at $499, and possibly a Surface-like Xbox with docking capabilities at $599.

As far as RT capabilities go, I agree with you. If AMD's Navi architecture doesn't have some type of dedicated RT hardware, then that would mean one of two things for its PC brethren: a) Nvidia will essentially be two generations ahead of AMD (with dedicated RT hardware) if dedicated RT only shows up in the architecture following Navi (Next-Gen), or b) the Navi PC variant is RT-ready and the consoles are SOL on that end. In the end, I just can't see AMD allowing Nvidia to go unchecked for two generations (a), nor them splitting the Navi architecture into two different flavors (b). We shall see...
 
I know this is all new tech, new cores and new optimizations, but the most powerful card, the 2080 Ti, not being able to run these new games above 1080p/60fps is rather alarming. Word is Battlefield V also did not maintain 60 fps at 1080p.
https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on
Not that we'll get tensor cores in PS5 or Scarlett(?) anyway, since they're AMD-bound.
I think the biggest thing is to think about whether or not the power is going to be there; you don't want to start another generation of rasterization for 6 to 8 years when ray tracing has begun now.

You want the generation to have the features; let the power come with the mid-generation refresh.
 
Well, going by those Tomb Raider fps numbers, it looks a bit like the RTX 2070 will be a waste of time.

Looks like BFV can't hit 1080p/60 fps on an RTX 2080 Ti either.
 
Um, I thought volumetrics were already getting more prevalent, at least in major AAA games, even at 4K. And I don't even want Infinity War-level CGI at 480p on my 4K TV :). That'd be like one step forward and three leaps backward.
I should have been more specific. I meant this kind of volumetrics:


Unbalanced by what metric? They certainly have a different balance to 90s CGI, but I think it's a better one. I'm glad we are playing games with PS4-quality models, textures, and shading with blobby shadow maps instead of the garish, plastic-like worlds of a PS1 FMV coated in pixel-perfect shadows.
Rasterized real-time lighting/shadows are awful for the level of fidelity of current-gen models/shaders. The BFV RTX demo also shows how lame particle effects are.
 
One is an entire system of ray tracing methods that can be employed for a variety of graphical effects, and is also suitable for audio, bringing us much closer to realism.
Has anyone showcased non-graphical ray tracing with this latest hardware? The DXR API seems to be entirely graphics-oriented from what I've seen, rather than a generic interface to the RT hardware.
Eeep, ~30 fps in the brightly lit, complex mechanical area vs. up to 70 staring at a wall in shade. I thought RT was more or less a constant load (per light source) once you had decided how many rays you were going to trace?
Not at all! Geometry complexity doesn't affect rendering times as long as you have a suitable memory access solution, which allows for things like the old "one million sunflowers" demos. But secondary rays and shaders increase rendering times exponentially. Consider a room rendered with 5 rays per pixel. If each point sampled then casts five rays to sample its environment, that's 25 rays. If you allow a third iteration, a reflective surface reflecting a wall receiving bounced light from the ceiling, at 5 rays per sample you're now at 125 rays at that level. Every surface will have its shader evaluated for every ray... lots of optimisation is needed.
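
Writing that 5 / 25 / 125 progression out makes the problem obvious: the per-bounce ray count grows geometrically with depth, which is why real-time ray tracers cast very few rays per pixel, clamp bounce counts and lean on denoising.

```python
def rays_at_depth(rays_per_sample, depth):
    """Rays cast at a given bounce depth when every sample point spawns the
    same number of secondary rays: 5, 25, 125, ... for 5 rays per sample."""
    return rays_per_sample ** (depth + 1)

for depth in range(3):
    print(f"depth {depth}: {rays_at_depth(5, depth)} rays")  # 5, 25, 125

# Total work per pixel is the sum over all depths (a geometric series),
# and every one of those hits also runs a shader.
total = sum(rays_at_depth(5, d) for d in range(3))
print(f"total rays per pixel over three levels: {total}")  # 155
```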

In offline raytracing it was common to prebake data to speed this up - you could raytrace lighting into a lightmap applied to the scene. I don't know if that's still a thing or not - maybe there's so much power in the hardware that it can be brute-forced? This RT hardware is going to be wonderful for that. (I'm presently raytracing some graphics for ionAXXIA, and a day-plus of work has mostly been five-minute render times. Real-time rendering of the scenes would be very welcome!!)
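
For what it's worth, here is a minimal sketch of that "raytrace lighting into a lightmap" idea, reduced to a simple hemisphere-visibility bake; `trace_occlusion` and the texel list are hypothetical stand-ins, not any particular engine's API.

```python
import math
import random

def sample_hemisphere(normal):
    """Roughly uniform random direction in the hemisphere around `normal`
    (rejection sampling -- fine for a sketch, far too slow for production)."""
    while True:
        d = [random.uniform(-1.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(c * c for c in d))
        if 0.0 < length <= 1.0:
            d = [c / length for c in d]
            if sum(a * b for a, b in zip(d, normal)) > 0.0:
                return d

def bake_lightmap(texels, trace_occlusion, rays_per_texel=64):
    """For each lightmap texel (world-space position, surface normal), cast
    rays into the scene and store how much of the hemisphere is unoccluded.
    `trace_occlusion(origin, direction) -> bool` is a hypothetical callback
    standing in for whatever ray/scene intersection the renderer provides."""
    lightmap = []
    for position, normal in texels:
        visible = sum(
            not trace_occlusion(position, sample_hemisphere(normal))
            for _ in range(rays_per_texel)
        )
        lightmap.append(visible / rays_per_texel)  # 0 = fully occluded, 1 = open sky
    return lightmap
```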
 