Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

I'd love to see raytracing coupled with foveated rendering. That'd free up the power limits by focussing on just 10% of the screen, meaning proper realtime raytracing could be performed with full, unified lighting. Machine-learning reconstruction could readily fill in the low-fidelity blanks around the foveal portion of the display. With foveated rendering, rasterisation hacks at their best may hit photorealism, but raytracing would solve all the production aggro.
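As a purely illustrative sketch of the idea (nothing here comes from a real engine; the gaze point, radii and sample counts are all made-up assumptions), a foveated ray budget could look something like this:

Code:
#include <algorithm>
#include <cmath>

// Hypothetical sketch: concentrate the primary-ray budget around the gaze point.
// Returns how many primary rays to fire for a pixel, given the gaze position
// (in pixels) and a foveal radius covering roughly 10% of the screen.
int SamplesForPixel(int px, int py, float gazeX, float gazeY,
                    float fovealRadius, int maxSamples, int minSamples)
{
    const float dx = px - gazeX;
    const float dy = py - gazeY;
    const float dist = std::sqrt(dx * dx + dy * dy);

    if (dist <= fovealRadius)
        return maxSamples;  // full quality inside the fovea

    // Fall off towards the periphery; a reconstruction/denoise pass
    // (machine-learning based or otherwise) fills in the sparse region.
    const float t = std::min(1.0f, (dist - fovealRadius) / (2.0f * fovealRadius));
    return std::max(minSamples,
                    static_cast<int>(std::lround(maxSamples * (1.0f - t))));
}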

This might be possible with Microsoft's next-generation Mixed Reality on their next-generation console. They are holding back on Mixed Reality, we know they are working on realtime ray-tracing for their next console, and they do have patents for foveated rendering.
 
I am speculating here, but I do believe all of this raytracing development is meant to pay off in two more years. Microsoft intends to launch their next-generation console by the end of 2020, and even Phil himself has talked about raytracing being part of it.

It's a means to an end: they are using Nvidia to get working code and working hardware so that development is prepped for their next-generation console.

Microsoft will either use Nvidia as the GPU tech (which I doubt personally), or they will go in house and build their own GPU with more of a raytracing focus, which would also fall in line with their Mixed Reality goals.
 
Better in performance and quality? Lovely!

How does it perform in terms of LOD? Certainly, this generation, we've seen a lot of egregious shadow map LOD'ing. Could ray traced shadows spell the end of that bullshit?
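For what it's worth, the reason ray-traced shadows sidestep that whole problem is that they're a per-pixel visibility query against the actual scene geometry rather than a lookup into a limited-resolution map. A rough CPU-side sketch of the idea (hypothetical types, not DXR or engine code):

Code:
// The Bvh type is a stand-in; its Intersect() is stubbed out for the sketch.
struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin; Vec3 dir; float tMin; float tMax; };

struct Bvh {
    // Would traverse the acceleration structure; stubbed here.
    bool Intersect(const Ray&) const { return false; }
};

bool IsShadowed(const Bvh& scene, const Vec3& surfacePoint,
                const Vec3& dirToLight, float distToLight)
{
    const float epsilon = 1e-3f;  // offset to avoid self-intersection ("shadow acne")
    Ray shadowRay{ surfacePoint, dirToLight, epsilon, distToLight - epsilon };
    return scene.Intersect(shadowRay);  // any hit between surface and light = occluded
}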
 

I believe in the Battlefield RT demo (possibly the DF talk, I forget) they discuss how the build shown uses low-poly structures for the ray-traced effects, as it was assumed this would help (and it probably did help on their Pascal GPUs), but they did say that with Turing they can use the full-fat geometry with very little hit, and that they will be doing so in the retail release.
 
I haven't seen much about what DICE are doing beyond what was said for the Turing launch; from the context I'm guessing they meant Turing cards specifically. But that does raise an interesting challenge for devs, i.e. do they now need tunable quality settings for the RT solution as well?
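If devs do end up exposing such knobs, the settings surface might look something like this; all of the names and values below are hypothetical, not from DICE or any shipping game:

Code:
#include <cstdint>

enum class RtQuality : std::uint8_t { Off, Low, Medium, High };

// Hypothetical example of tunable RT settings a game might expose.
struct RayTracingSettings {
    RtQuality quality            = RtQuality::Medium;
    float     maxRoughness       = 0.6f;  // skip RT reflections on very rough surfaces
    float     resolutionScale    = 0.5f;  // trace reflections at half resolution
    int       maxBounces         = 1;     // single bounce keeps the ray count bounded
    bool      fullDetailGeometry = true;  // trace against full geometry (Turing-class)
    bool      denoise            = true;  // reconstruction pass to hide sparse rays
};

// Simple preset mapping a menu option to concrete parameters.
RayTracingSettings MakePreset(RtQuality q)
{
    RayTracingSettings s;
    s.quality = q;
    switch (q) {
    case RtQuality::Off:    s.resolutionScale = 0.0f;  s.maxBounces = 0; break;
    case RtQuality::Low:    s.resolutionScale = 0.25f; s.maxRoughness = 0.3f;
                            s.fullDetailGeometry = false; break;
    case RtQuality::Medium: break;  // defaults above
    case RtQuality::High:   s.resolutionScale = 1.0f;  s.maxRoughness = 0.9f; break;
    }
    return s;
}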
 
Nvidia should have an interest in developers making RT available not only to people with Turing cards, so that every gamer can see how crappy their performance is without a "proper Nvidia card". :cool:
 
some games actually allowed you to run the effects made for GPU PhysX VERY (like going to single digit fps) poorly on the CPU.
Wasn't that blamed on the library making a whole bunch of calls to the basically deprecated x87 ISA instead of using modern vector math via the SSE ISA and its descendants?
 
And single-threaded AFAIR.

nVidia definitely didn't make that kind of implementation out of innocence or incompetence. In 2011 they had a software PhysX path for the 4-core ARMv7 Cortex-A9 @1.5GHz in Tegra 3 that ran pretty well.

 
That was only true for version 2 iirc. PhysX 3 was multithreaded and had proper SIMD support. Remember, the company that Nvidia bought also had an incentive to not develop an optimized x86 path. :D

In general as far as a physics API goes (ignore that it's tied to Nvidia), PhysX is pretty good! The "extra/bonus PhysX effects" that can run on a GPU/PPU were never that compelling anyways and not the reason that made PhysX appealing. It's just a good middleware physics API!
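For anyone wondering why the x87-vs-SSE point above matters so much: an x87-style scalar path handles one float per instruction, while SSE handles four at a time. A rough illustration (this is not PhysX code, just a generic vector add written both ways):

Code:
#include <xmmintrin.h>   // SSE intrinsics
#include <cstddef>

// Scalar path: one addition per iteration (roughly what an x87 code path boils down to).
void AddScalar(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE path: four additions per iteration. For brevity, n is assumed to be a
// multiple of 4 and the pointers 16-byte aligned.
void AddSse(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb));
    }
}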
 
Oh, I wasn't trying to denigrate PhysX as a library; rather, my point was that NV failed to capitalise on an additional use case for their GPUs by making the GPU acceleration NV-specific, thus making it extra work to support for what is a minority of even NV's users. Glad to hear that the current version is properly threaded and optimised.
 
That was only true for version 2 iirc. PhysX 3 was multithreaded and had proper SIMD support. Remember, the company that Nvidia bought also had an incentive to not develop an optimized x86 path. :D

In general as far as a physics API goes (ignore that it's tied to Nvidia), PhysX is pretty good! The "extra/bonus PhysX effects" that can run on a GPU/PPU were never that compelling anyways and not the reason that made PhysX appealing. It's just a good middleware physics API!

I wasn't talking about the current implementation of CPU PhysX.

I think we were all commenting on the odd timing of it all:

February 2008: nVidia buys AGEIA.

April 2008: 3DMark Vantage releases, with one of the major CPU benchmarks using PhysX, running on Ageia's PPUs or on the CPU (x87 path). There's just no way nVidia didn't know PhysX was going to be used in Vantage when they bought AGEIA 2 months earlier.

June 2008: nVidia launches a driver that supports PhysX running on their GPUs, and CPU scores go through the roof for people using G80+ graphics cards.

July 2008: nVidia launches the G92 55nm refresh cards, the 9800GT and 9800GTX+, with the first reviews actually appearing in late June.

Late July 2008: 3DMark invalidates GPU PhysX results in Vantage. Too bad the G92b reviews were already out, obviously with boosted scores.

Two years later, on July 5th 2010: David Kanter exposes the x87, single-threaded PhysX path. The research was prompted by the huge performance deficit that games like Arkham Asylum showed when enabling PhysX without an nVidia GPU or a PPU.

Three days later, on July 8th 2010: nVidia says "hey, we're not hobbling CPU PhysX on purpose! Look, our upcoming PhysX 3 even has SSE and multithreading support!"

A year later, in July 2011: nVidia launches PhysX 3 with SSE and multithreading. Curiously, at the same time they release software PhysX for ARMv7 to run on their Tegra 3.


nVidia purchased AGEIA and implemented PhysX on the GPU with the incredibly precise timing needed to influence 3DMark Vantage's CPU results for the 9800GT/9800GTX+ reviews, while not giving Futuremark enough time to invalidate those results before said reviews went up. Besides that, it took nVidia a couple of months to put out a driver release with GPU PhysX support, but three whole years to launch a CPU path with SSE and multithreading (which boosted CPU PhysX performance by up to 800%).

Now, just to put all of this in the context of raytracing on consoles:
- If the past is anything to go by, nVidia will do everything in their power to use RTX to undermine the competition and sell new cards, much more so than to make raytracing widespread. We're already seeing how the first games supposedly implementing Microsoft's DXR are using proprietary paths for RTX, meaning they won't work even on the 9-month-old $3000 Titan V.
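For context, DXR itself is vendor-agnostic at the API level: a D3D12 application asks the device whether it exposes a raytracing tier rather than checking for a particular GPU. A minimal sketch of that check (error handling trimmed):

Code:
#include <windows.h>
#include <d3d12.h>

// DXR support is queried as a device feature (raytracing tier), not a vendor check.
bool SupportsDxr(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &options5, sizeof(options5))))
        return false;
    return options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}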
 