Next gen lighting technologies - voxelised, traced, and everything else *spawn*

This just made me realise that this video was all bollocks. I doubt the PS3 could do those visuals in real time at that scale.
Most of the buildings are just low poly boxes. You don't notice because of the camera motion.

Weird. I interpreted that tweet differently, as in overwhelmingly positive for games going forward.

I get the need to actually break down the Gigaray question, but we run into similar issues on the rasterization side. We quote TF, but that doesn't translate directly into performance. Thus we get people quoting AMD and Nvidia flops as if they were comparable.
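For context on what that TF number even is: it's just theoretical peak FP32 throughput, cores times clock times two ops per fused multiply-add. A rough sketch below; the core counts and clocks are approximate, for illustration only, not vendor spec sheets.

```python
# Rough sketch: the "TFLOPS" figure quoted for GPUs is just theoretical peak
# FP32 throughput: shader cores * clock * 2 (one fused multiply-add = 2 flops).
# Figures below are approximate and purely illustrative.

def peak_tflops(shader_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical FP32 peak: cores * clock * 2 ops per FMA, in TFLOPS."""
    return shader_cores * boost_clock_ghz * 2 / 1000.0

print(peak_tflops(3584, 1.58))  # GTX 1080 Ti class part, ~11.3 TFLOPS
print(peak_tflops(4096, 1.55))  # Vega 64 class part, ~12.7 TFLOPS
# Similar peak TFLOPS, very different delivered game performance -- which is
# the point: a single peak-throughput number says little on its own, whether
# it's rasterization flops or "Gigarays".
```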

That being said, Sebbbi being the beast of a programmer that he is may have found some sick optimizations to make things fly for his game. One would need to question how applicable his implementation is to other titles, and also how far he could take things with Claybook on a 20XX RTX card.
Claybook uses signed distance fields for rendering instead of polygons. That certainly helps.
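To illustrate why SDFs help: tracing a ray against a distance field is just sphere tracing, with no triangle BVH to build or traverse. A minimal toy sketch follows; the scene and parameters are made up for illustration and are not Claybook's actual renderer.

```python
import math

# Minimal sphere-tracing sketch against a signed distance field (SDF).
# Toy scene and step limits are illustrative only.

def scene_sdf(p):
    """Distance from point p to the nearest surface: a unit sphere at the origin."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - 1.0

def sphere_trace(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """March along the ray; the SDF value tells us how far we can safely step."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene_sdf(p)
        if d < eps:          # close enough: treat as a surface hit
            return t
        t += d               # safe step: nothing in the scene is closer than d
        if t > max_dist:     # ray escaped the scene
            break
    return None

# A ray starting at z = -3 pointing down +z hits the unit sphere at t ~= 2.
print(sphere_trace((0.0, 0.0, -3.0), (0.0, 0.0, 1.0)))
```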
 
Well, one can say that they at least have something, unlike AMD... Turing is a great GPU for Pros...
I think this is the real point. RT is invaluable for professional graphics. nVidia have secured the professional visualisation markets for another round or three of GPUs. The presence in next-gen gaming cards may be as much marketing as anything. I guess if RT gets used in games through these GPUs as you suggest, the high-end cards will have a huge quality/framerate advantage.
 
What exactly makes RT hardware "RT hardware"? Is it a super specialized vector SIMD geared towards shooting rays?
 
The why not is that you're paying for hardware that, in all likelihood, is going to see very little use in mainstream games for a few generations. Remember PhysX cards? This looks like this decade's PhysX. :yep2:

It looks a bit... well... desperate. Nvidia wanted to ship something genuinely new that they didn't think AMD could counter with an alternative, and just went with this because what else do they have?
I don't think this has anything to do with AMD. They need something new and shiny to sell, because money makes the world go around. It's not that gamers are desperate to pay for refraction in puddles in Battlefront, but tech enthusiasts are, and can be milked. At some point, it may even bring a genuinely better mousetrap to the table. But I think everyone who is not desperate to support nVidia understands that this generation is not when it's going to happen.
 
Shouldn't something like an APU be good as a hybrid renderer? Handle ray casting on the CPU end, materials and screen space calculations on the GPU end ... or something?

Specially written jobs with a CPU and GPU component, a step marker and bounce counter, batch and execute based on common parameters (material, direction, last surface etc.) ... or something?
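The "batch by common parameters" idea could look roughly like the toy sketch below: group outstanding rays by a key such as (material, bounce count) and hand each bucket off as one coherent job. Field names and the dispatch function are made up for illustration; this is just one possible reading of the suggestion.

```python
from collections import defaultdict
from dataclasses import dataclass

# Toy sketch: bucket rays so each dispatch shades similar work together.

@dataclass
class Ray:
    origin: tuple
    direction: tuple
    bounce: int          # how many bounces this path has taken so far
    material_id: int     # material of the last surface hit

def batch_rays(rays):
    """Group rays by (material, bounce) so similar work is executed together."""
    buckets = defaultdict(list)
    for r in rays:
        buckets[(r.material_id, r.bounce)].append(r)
    return buckets

def dispatch(material_id, bounce, batch):
    # Placeholder: in a real hybrid renderer this would kick off a GPU job
    # (or a CPU traversal task) sized to the batch.
    print(f"material {material_id}, bounce {bounce}: {len(batch)} rays")

rays = [Ray((0, 0, 0), (0, 0, 1), 0, 3),
        Ray((1, 0, 0), (0, 1, 0), 1, 3),
        Ray((0, 1, 0), (0, 0, 1), 0, 3)]

for (mat, bounce), batch in batch_rays(rays).items():
    dispatch(mat, bounce, batch)
```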
 

Honestly these examples are somewhat deceptive of actual real-world gaming performance. A benchmark and a nice demo don't convey the complexity of an actual game running with RT routines. If RT was so easily done on current GCN/Maxwell/Pascal architectures, then where are the games using RT? Nvidia isn't trying to pull the wool over anyone's eyes... just trying to improve developers' and consumers' experiences.
 
As I said, Sony/MS/AMD/Intel or whoever can claim support for real-time RT if they wish... simply because there are tons of ways to do real-time ray tracing on today's DX12-class GPUs without the need for RT Cores.

Yeah but like 10 times slower.

Why are some people hurt about next consoles possibly not having RT? Games will look nice anyway.
 
Honestly these examples are somewhat deceptive of actual real-world gaming performance. A benchmark and a nice demo don't convey the complexity of an actual game running with RT routines. If RT was so easily done on current GCN/Maxwell/Pascal architectures, then where are the games using RT? Nvidia isn't trying to pull the wool over anyone's eyes... just trying to improve developers' and consumers' experiences.
It's just to prove that the XX Gigarays/s figure Nvidia is touting is totally meaningless without context. :smile2: Unfortunately this is the metric that the company is using to compare against Pascal GPUs (10x more powerful!) and to sell overpriced consumer GPUs (but super cheap Pro-grade GPUs, because the 2080 Ti is a steal compared to a vastly inferior Quadro RTX 5000 @ $2300).
 
It's just to prove that the XX Gigarays/s figure Nvidia is touting is totally meaningless without context. :smile2: Unfortunately this is the metric that the company is using to compare against Pascal GPUs (10x more powerful!) and to sell overpriced consumer GPUs (but super cheap Pro-grade GPUs, because the 2080 Ti is a steal compared to a vastly inferior Quadro RTX 5000 @ $2300).

I believe the context is that it works. That it works in real-time gaming scenarios. That an entry price of $499 (RTX 2070) gives you access to RT-enabled game features. Sure, we can debate performance and Nvidia's methods of calculating such data, but let's have some perspective on comparing apples to apples. If there are some RT-enabled PC games floating around, then those should be the metric for judging Nvidia's RT claims (when the RTX cards are released).
 
I believe the context is that it works. That it works in real-time gaming scenarios. That an entry price of $499 (RTX 2070) gives you access to RT-enabled game features. Sure, we can debate performance and Nvidia's methods of calculating such data, but let's have some perspective on comparing apples to apples. If there are some RT-enabled PC games floating around, then those should be the metric for judging Nvidia's RT claims (when the RTX cards are released).
It works at doing what exactly? That's the whole debate here. There's no single & unique way of doing ray tracing, and NVidia has yet to explain what their "definition" of it is and what exactly those RT Cores accelerate beyond the "BVH acceleration" they mention. As the BF5 implementation shows, it sure looks better at reflecting occluded content... but the shortcuts taken (non-reflected assets like the cars) are sure as hell worse than some of the SSR-related shortfalls we currently experience in most games. Like it or not, there are already shipping games that do real-time ray tracing in some shape or form, like Claybook, The Tomorrow Children etc. (voxel cone tracing is still ray tracing). That's why the use of Gigarays/s to quantify performance is ridiculous at this moment (and yes, the RTX products are faster at RT than current GPUs... but by how much, and with what type of RT implementation? etc.).
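For readers wondering what "BVH acceleration" actually refers to, the sketch below shows the kind of inner loop in question: testing a ray against axis-aligned bounding boxes while walking a bounding volume hierarchy. Fixed-function RT units are understood to accelerate loops like this (plus ray-triangle tests); the structure and data here are illustrative only, not Nvidia's implementation.

```python
# Minimal software sketch of BVH traversal with a ray-AABB slab test.

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray intersect the axis-aligned box?"""
    t_near, t_far = 0.0, float("inf")
    for o, inv_d, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv_d
        t1 = (hi - o) * inv_d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near = max(t_near, t0)
        t_far = min(t_far, t1)
        if t_near > t_far:
            return False
    return True

def traverse(node, origin, inv_dir, hits):
    """Depth-first BVH walk: skip whole subtrees whose box the ray misses."""
    if not ray_aabb_hit(origin, inv_dir, node["min"], node["max"]):
        return
    if "leaf" in node:                 # leaf: candidate triangles to test next
        hits.extend(node["leaf"])
        return
    for child in node["children"]:
        traverse(child, origin, inv_dir, hits)

# Tiny two-leaf hierarchy; a ray along +x only reaches the first leaf's box.
bvh = {"min": (0, 0, 0), "max": (4, 4, 4), "children": [
    {"min": (0, 0, 0), "max": (2, 2, 2), "leaf": ["tri_a", "tri_b"]},
    {"min": (2, 3, 3), "max": (4, 4, 4), "leaf": ["tri_c"]},
]}
hits = []
traverse(bvh, origin=(-1.0, 1.0, 1.0), inv_dir=(1.0, 1e9, 1e9), hits=hits)
print(hits)  # ['tri_a', 'tri_b']
```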
 
When BFV RTX comes out and they compare Turing to Pascal, it's not going to be close. The marketing around the number of GigaRays could be entirely messed up, but Turing is going to slay Pascal in these applications. More so as time goes on.
Which is rather obvious, given that the RT implementation in BF5 will be tailored to RTX's HW implementation. But that was not the subject of the discussion... which was Nvidia's misleading metrics for comparing GPU architectures, the claims that "it just works", and that before Turing real-time RT was impossible...
 
It works at doing what exactly? That's the whole debate here. There's no single & unique way of doing ray tracing, and NVidia has yet to explain what their "definition" of it is and what exactly those RT Cores accelerate beyond the "BVH acceleration" they mention. As the BF5 implementation shows, it sure looks better at reflecting occluded content... but the shortcuts taken (non-reflected assets like the cars) are sure as hell worse than some of the SSR-related shortfalls we currently experience in most games. Like it or not, there are already shipping games that do real-time ray tracing in some shape or form, like Claybook, The Tomorrow Children etc. (voxel cone tracing is still ray tracing). That's why the use of Gigarays/s to quantify performance is ridiculous at this moment (and yes, the RTX products are faster at RT than current GPUs... but by how much, and with what type of RT implementation? etc.).

Work meaning: a smaller performance penalty for doing such operations. I'm pretty sure Nvidia isn't trying to redefine what RT is, but offering a more optimized way of doing it. Sure, their bulletpoint/PR wording is somewhat hyperbolic, but what new product's selling points aren't? We can't fault Nvidia for presenting data points or metrics when the competition is lagging behind on presenting something to compare against. I don't believe Nvidia's method will be the be-all and end-all of doing things. But it's a rightful kick in the ass to jump-start RT development and more innovative ways of accomplishing RT.
 