Next gen lighting technologies - voxelised, traced, and everything else *spawn*

Rasterized real-time lighting/shadows are awful for the level of fidelity of current-gen models/shaders. The BFV RTX demo also shows how lame particle effects are.
Yeah, but how awful do you think the rest of the graphics would look were engines spending as much compute on perfect shadows as that requires? Would that be balanced?
 
Has anyone showcased non-graphical raytracing with this latest hardware? The DXR API seems to be entirely graphical from what I've seen, rather than a generic interface to access the RT hardware.

Nvidia said:
Turing’s RT Cores can also simulate sound, using the NVIDIA VRWorks Audio SDK. Today’s VR experiences provide audio quality that’s accurate in terms of location. But they’re unable to meet the computational demands to adequately reflect an environment’s size, shape and material properties, especially dynamic ones.

VRWorks Audio is accelerated by 6x with our RTX platform compared with prior generations. Its ray-traced audio technology creates a physically realistic acoustic image of the virtual environment in real time.

From here.

Edit: More on VRWorks Audio
 
Hmm, if this DXR stuff doesn't fix the exponential complexity growth of RT, I really, really doubt this will be a mid-range thing any time soon. I mean, RT's problem has always been that exponential computational load. If the new tech is just "moar powah" without any clever hacks (the aforementioned prebaked passes to cheat, etc.), it's just a checkbox feature to help justify the purchase, like the daft expensive hair tech and such over the years.

If the past is anything to go on we'll be lucky to see any RT until at least the Pro refresh next gen.

Edit: VRWorks Audio sounds like Aureal A3D. I will forever bear a grudge against Creative for choking the life out of it.
 
Hmm, if this DXR stuff doesn't fix the exponential complexity growth of RT, I really, really doubt this will be a mid-range thing any time soon
It's only exponential if you throw power at the problem. Just as clever solutions have been applied to rasterisation, clever solutions will be applied to ray-tracing, and with RT out in the wild for devs to explore, progress should be significant. Looking at my hypothetical case, you'd use just one ray for the reflected surfaces and apply a blur filter, perhaps, to simulate surface roughness. You could build some acceleration structures for secondary rays, 'pre-baking' on the fly for data that isn't changing.

RT is going to be huge. It'll even mean better handling of non-tessellated surfaces, so perfect spheres and metaballs. Combine it with modern representations like SDFs and the graphics world is going to get a great shake-up in the next few years. Plenty to look forward to!
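That one-ray-plus-blur idea can be sketched in a few lines. This is a toy numpy illustration, not engine code; the buffer, the box-blur kernel, and the `roughness_blur` helper are all invented for the example:

```python
import numpy as np

def roughness_blur(reflection, roughness, max_radius=4):
    """Approximate glossy reflection: blur a one-ray-per-pixel mirror
    reflection buffer, with blur radius driven by surface roughness.
    reflection: (H, W) float buffer; roughness: scalar in [0, 1]."""
    radius = int(round(roughness * max_radius))
    if radius == 0:
        return reflection.copy()          # perfectly smooth -> true mirror
    # Simple separable box blur as a stand-in for a proper kernel.
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, reflection)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
    return blurred

# A perfectly sharp 'mirror' buffer: one bright pixel.
buf = np.zeros((9, 9)); buf[4, 4] = 1.0
mirror = roughness_blur(buf, roughness=0.0)  # smooth surface: unchanged
rough = roughness_blur(buf, roughness=1.0)   # rough surface: wide, dim highlight
```

One ray per pixel gives you the sharp result; the blur fakes the cone of rays a proper glossy BRDF would need.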
 
At some point developers have to consider whether they want to keep making screen-space effects more complex or go for raytracing. Screen-space effects can become very slow, and at that point it can be less efficient to work with rasterization and tricks/fake stuff than to simply use raytracing.

Without raytracing and voxel effects there won't be a generational leap. For reflections and shadows I haven't seen anything satisfactory outside of (hybrid) raytracing yet.

With raytracing, developers do not need to put as much effort into enhancing the graphics for the PC version, since a raytracing implementation is easier and higher quality at the same time.
 
Yeah, but how awful do you think the rest of the graphics would look were engines spending as much compute on perfect shadows as that requires? Would that be balanced?
Doesn't have to be perfect, just good enough to break out of the "video game graphics" look. In fact we could have had PCSS shadows in games for a long time now, but many gamers would whine about pixelation. It's precisely that fixation with ultra-sharp pixels that holds back gaming graphics.
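For reference, PCSS-style soft shadows boil down to three steps: a blocker search, a penumbra-width estimate from similar triangles, then percentage-closer filtering with that variable radius. Here's a toy 1D sketch; the `pcss_shadow` function and its parameters are invented for illustration (real implementations filter a 2D shadow map in a shader):

```python
import numpy as np

def pcss_shadow(shadow_map, uv, receiver_depth, light_size=0.05, search_radius=2):
    """Toy 1D PCSS: blocker search, penumbra estimate, variable-radius PCF.
    shadow_map: 1D array of occluder depths; uv: integer texel index."""
    lo, hi = max(0, uv - search_radius), min(len(shadow_map), uv + search_radius + 1)
    blockers = shadow_map[lo:hi][shadow_map[lo:hi] < receiver_depth]
    if blockers.size == 0:
        return 1.0                                    # no blockers: fully lit
    # Penumbra width grows with blocker/receiver separation (similar triangles).
    avg_blocker = blockers.mean()
    penumbra = (receiver_depth - avg_blocker) / avg_blocker * light_size
    r = max(1, int(penumbra * len(shadow_map)))
    taps = shadow_map[max(0, uv - r):min(len(shadow_map), uv + r + 1)]
    return float((taps >= receiver_depth).mean())     # fraction of lit taps

depths = np.full(100, 10.0)
depths[40:60] = 2.0                                   # an occluder at depth 2
lit = pcss_shadow(depths, 20, receiver_depth=5.0)     # open area
dark = pcss_shadow(depths, 50, receiver_depth=5.0)    # under the occluder
soft = pcss_shadow(depths, 41, receiver_depth=5.0)    # partial, at the edge
```

The "pixelation" complaint comes from the variable-radius PCF step: wide penumbrae mean few, widely spaced taps unless you burn more samples.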

It's only exponential if you throw power at the problem. Just as clever solutions have been applied to rasterisation, clever solutions will be applied to ray-tracing, and with RT out in the wild for devs to explore, progress should be significant. Looking at my hypothetical case, you'd use just one ray for the reflected surfaces and apply a blur filter, perhaps, to simulate surface roughness. You could build some acceleration structures for secondary rays, 'pre-baking' on the fly for data that isn't changing.

RT is going to be huge. It'll even mean better handling of non-tessellated surfaces, so perfect spheres and metaballs. Combine it with modern representations like SDFs and the graphics world is going to get a great shake-up in the next few years. Plenty to look forward to!
As an example of non-polygonal surfaces, take a look at this new feature of Octane, "Vectron":

 
Again with the arbitrary dollar amounts. Die cost plus licensing is the real cost. Let’s see how big the 7nm shrinks are.

They're not arbitrary prices; they're within the price range of the console BOM. You need to stop ignoring aspects you don't agree with.
 
He's not. The price of the graphics card doesn't reflect the cost to make the GPUs. For all we know (unless you have evidence to the contrary! ;)) it costs $150 to make the 2080 GPU and nVidia are getting $300 markup for the silicon while their IHV partners are each getting $300 per 2080 with the rest going on manufacturing.

The question isn't 'how much do the graphics card cost?' which is defined by the market value, but 'how much would it cost to produce these and would nVidia be eating into their more profitable GPU markets if they supplied chips/tech to a console?' And more advanced than that, 'what would it cost to license and fabricate the same blocks in a custom part?'
 
I'm going to repeat myself again, but it must be made clear that supporting DXR doesn't require any special hardware (other than a DX12-class GPU and, obviously, drivers enabling it). Anybody can go and download & compile Microsoft's DXR samples and run them on Pascal/Maxwell GPUs, and it just works (not so much on Radeon, because of the drivers I guess). A GTX 1080 can reach 1 Gigaray/s in one of the samples, which is 10x slower than the RT Cores in the 2080 Ti, for example, but "only" 6x slower than a 2070. So I wouldn't worry too much about future console support for RT. Now, will they have dedicated HW for it? That depends on AMD, but they can still claim support for it (even the Xbox/PS4 could..) anyway.
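To put those throughput numbers in perspective, here's a quick back-of-the-envelope ray budget. The Gigaray figures are the ones quoted in this thread, not measurements of mine:

```python
# Rays-per-pixel budget at 1080p60 for the throughput figures above.
width, height, fps = 1920, 1080, 60
pixels_per_second = width * height * fps  # ~124 million pixels/s

for name, gigarays in [("GTX 1080 (DXR fallback)", 1.0),
                       ("RTX 2070", 6.0),
                       ("RTX 2080 Ti", 10.0)]:
    rays_per_pixel = gigarays * 1e9 / pixels_per_second
    print(f"{name}: ~{rays_per_pixel:.0f} rays per pixel")
```

That works out to roughly 8 rays per pixel on the fallback path versus ~80 on the 2080 Ti's RT Cores, before counting any secondary bounces, and it's why the hybrid approaches ration rays so carefully.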
 
He's not. The price of the graphics card doesn't reflect the cost to make the GPUs. For all we know (unless you have evidence to the contrary! ;)) it costs $150 to make the 2080 GPU and nVidia are getting $300 markup for the silicon while their IHV partners are each getting $300 per 2080 with the rest going on manufacturing.

The question isn't 'how much do the graphics card cost?' which is defined by the market value, but 'how much would it cost to produce these and would nVidia be eating into their more profitable GPU markets if they supplied chips/tech to a console?' And more advanced than that, 'what would it cost to license and fabricate the same blocks in a custom part?'
Those are really huge markups. The way some products are priced is a big problem once you see the bigger picture. Technological adoption has the potential to be higher, and that could push technological progress faster, but the business strategy of squeezing all profit potential from every piece of hardware, old and new, makes progress much, much slower.
 
Hm, as far as I see it, one of two things can happen:

A - Next gen consoles will be developed in a "traditional" way without RT to achieve a good balance between price and noticeably prettier rasterized graphics, plus more compute power for other simulations. In short, prettier and more complex games.

B - Next gen consoles will include specific hardware to process RT, and the jump in graphics will be huge regarding lighting and materials but not so much in other aspects. However, in this case I perfectly see consoles as a sort of Trojan horse for RT in the mainstream market. Once this first RT generation is complete, all we'll see will be RT from then on, so developers will be able to focus on other things... at last.
 
IMO, greatly expanded capability of physics processing [mass scale soft/rigid object collisions] would be more welcome to me than inclusion of raytracing in gen9 consoles. Fistfights in the water, fully realistic clothes, lifelike explosions of body tissues in horror games, interactive particles of all types, crazy VFX particle effects [Housemarque on steroids], etc. Physics processing can directly impact gameplay.
Like raytracing, all of this was possible before, but it was slow to render when larger amounts of particles were introduced.
 
I could see Nvidia wanting to partner with MS on the next console to specifically drive adoption of ray-tracing. They would not only have the PC market but also a giant presence in the console industry; 2 out of the 3 platforms would run on their hardware. In addition, partnering with MS on their streaming console could also drive sales of the data center GPUs used for the server-side processing.
 
I could see Nvidia wanting to partner with MS on the next console to specifically drive adoption of ray-tracing. They would not only have the PC market but also a giant presence in the console industry; 2 out of the 3 platforms would run on their hardware. In addition, partnering with MS on their streaming console could also drive sales of the data center GPUs used for the server-side processing.
There is IMO practically near-zero chance of this happening, given that:
- Work on Scarlett is already well under way (same with the PS5, which is all but confirmed to be using a variant of Navi)
- Nvidia can't provide an SoC with an x86 CPU (and Ryzen is too good to pass on)
- MS wants full and perfectly working BC support for all of its console games.

But there's a higher probability that the servers on which the games will run for the future game streaming service on Azure will be powered by NV GPUs.
 
Hm, as far as I see it, one of two things can happen:

A - Next gen consoles will be developed in a "traditional" way without RT to achieve a good balance between price and noticeably prettier rasterized graphics, plus more compute power for other simulations. In short, prettier and more complex games.

B - Next gen consoles will include specific hardware to process RT, and the jump in graphics will be huge regarding lighting and materials but not so much in other aspects. However, in this case I perfectly see consoles as a sort of Trojan horse for RT in the mainstream market. Once this first RT generation is complete, all we'll see will be RT from then on, so developers will be able to focus on other things... at last.
If it's AMD hardware, it is very unlikely to have RT silicon by 2020. It seems like AMD got caught by surprise. I think Nvidia came to MS with their RT acceleration plans, not the other way around. MS then worked on the API and said to AMD, "Hey guys, guess what?" AMD then scrambles to make driver support and has roadmap meetings for hardware support.

Personally, real time RT is so new that I don't mind if it stays in PC until next-next gen but others could disagree which is understandable.
 
IMO, greatly expanded capability of physics processing [mass scale soft/rigid object collisions] would be more welcome to me than inclusion of raytracing in gen9 consoles. Fistfights in the water, fully realistic clothes, lifelike explosions of body tissues in horror games, interactive particles of all types, crazy VFX particle effects [Housemarque on steroids], etc. Physics processing can directly impact gameplay.
Like raytracing, all of this was possible before, but it was slow to render when larger amounts of particles were introduced.
Yes, yes, YESSS!! It will be either this or that. Maybe in the gen after next we will be able to enjoy both RT and higher-fidelity physics simulations, but IMO physics simulations need to take a huge evolutionary step NOW.

Yesterday I saw some "leaked" DOA 6 screenshots, and people were pleasantly surprised, commenting on how next-gen it looked. While they said these things, I just saw THIS:
(attached DOA 6 screenshot)
This needs to stop. And I'm even trying not to look at some clear polygon edges or the lack of proper hair, but come on! Cloth simulation began way back on the first PlayStation console, and we're still like this? Just apply some darn physics to the cloth in a "simple" two-characters-on-screen game, so that the sleeves don't end up looking and feeling like rigid tubes, at least! I'd prefer realistic clothing over a perfectly accurate reflection.
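For what it's worth, the basic mass-spring cloth being asked for here is decades-old tech. A minimal sketch using Verlet integration with Jakobsen-style constraint relaxation; the grid, the `step_cloth` function, and all parameters are invented for the example (no collisions, shear, or bend springs):

```python
import numpy as np

def step_cloth(pos, prev, rest=1.0, gravity=(0.0, -0.01), iters=5):
    """One Verlet step of a toy cloth grid with the top row pinned.
    pos/prev: (rows, cols, 2) current and previous particle positions."""
    new = pos + (pos - prev) + np.array(gravity)      # Verlet integration
    for _ in range(iters):                            # relax distance constraints
        for axis in (0, 1):                           # vertical, then horizontal links
            a = new[:-1, :] if axis == 0 else new[:, :-1]
            b = new[1:, :] if axis == 0 else new[:, 1:]
            delta = b - a
            dist = np.linalg.norm(delta, axis=-1, keepdims=True)
            corr = 0.5 * (dist - rest) / np.maximum(dist, 1e-9) * delta
            a += corr                                 # move both endpoints halfway
            b -= corr
        new[0, :] = pos[0, :]                         # re-pin the top row
    return new, pos                                   # (current, previous)

# Hang a 4x4 cloth from its top row and let it settle for a few frames.
rows, cols = 4, 4
pos = np.array([[(c, -r) for c in range(cols)] for r in range(rows)], dtype=float)
prev = pos.copy()
for _ in range(10):
    pos, prev = step_cloth(pos, prev)
```

This is essentially the Hitman (2000)-era approach, so "rigid tube" sleeves in 2018 really are a content/budget choice, not a hardware limit.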

Ew at the horrid dubstep music in the last video. :D
 
I could see Nvidia wanting to partner with MS on the next console to specifically drive adoption of ray-tracing.

Or Intel. Intel's new GPU, due in 2020, will contain RT functionality far better than the defunct Larrabee technology. And Intel has been hinting at a major partner "partnering" with them on using their GPUs/CPUs within the console space around 2020.
 