Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Isn't it the other way around? I mean, personally I have my consoles not for the graphics but for the games I can't get on PC. I have my PCs if I want the newest tech and graphics, highest frame rates, image quality, etc.
Are there people here really getting a PS4 for anything other than its exclusives? Perhaps ease of use?

Friends lists, ease of borrowing games off your mates, because it's all they care to know about? Probably quite a few because of the Pro's 4K and HDR support. Probably some combination of all of them plus exclusives for some people (multi-factor and so hard to pin down). Early on, there was the fact that the PS4 provided a large amount of graphical punch for a reasonable price.

I don't own a current gen console, which seems odd as I've not skipped a generation since I started buying consoles in the 80s. I'm PC only. Current GPU is long in the tooth, probably about PS4 level, give or take (OC gtx 680). This idea of "console graphics" vs "PC graphics" is incredibly dumb.

As is the idea of people being "anti ray tracing" because of concerns about the silicon and performance costs of implementation. Spending large amounts of money on PC components doesn't validate an opinion on technology.

It sure did on my 8800GTS/Q6600. I couldn't run the highest settings, but even pared back somewhat, and still at a higher resolution than most PS3/360 titles were running, it looked better than anything else out at the time. The consoles got a version too, but it was a far cry from what Crysis was.

I still have my physical copy of Crysis. I was not blessed with a C2D and 8800GTS on my first play. I was playing on PC but apparently not experiencing PC graphics.

Except obviously, I was.

Do more people own a One X than a 1060/RX 580 GPU or higher?

It's not a contest. The point I've been trying to make is that architectures, algorithms and assets are shared between PC and console and the differentiator in graphics is what you want and what you spend. You can go high or low on both platforms.

The experience that 90%+ of gamers get in terms of graphics is available on either PC or console, with the exception of HFR gaming. Things like free online, mouse and keyboard, modding support, graphics tweaking in drivers are differentiators, as may be HDR support, but by and large the graphical techniques and resolutions are common to both. And that's a really important point.

Because RT in consoles won't succeed or fail because "it's not console graphics" or something similarly stupid, it'll succeed or fail because of the results it offers (perceived, because that's all that matters) for the cost (silicon, R&D, efficiency) it incurs.

And that will be true for PC and console. Same games, same engines, same assets, same architectures, same manufacturing processes, and almost entirely overlapping customer expectations. Success in the PC realm pretty much guarantees success in the console realm - and vice versa.

RTX cards prove that ray tracing is tantalisingly close, but also that it isn't there ... yet.
 
Pity that PVR dropped off the (consumer) map. The hybrid RT demos from two or three years back on 28nm were pretty cool.

https://www.imgtec.com/blog/powervr-hardware-accelerated-ray-tracing-api/

Looks like their ray tracing unit was outside of the shader arrays. I wonder if any of their tech could be licensed by a console vendor and integrated into their designs?

A big fat [edit because unfinished] chiplet in lieu of crossfire?
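
For anyone curious what a dedicated traversal unit actually spends its cycles on, here is a minimal, purely illustrative sketch in Python (not based on PowerVR's or anyone else's actual hardware) of the ray/AABB slab test that BVH traversal repeats for every node visited - the kind of fixed-function work such a block takes off the shader cores.

```python
# Illustrative slab test for one ray against one bounding box. A BVH traversal
# unit runs variations of this (plus ray/triangle tests) millions of times per
# frame, which is why putting it in dedicated hardware is attractive.
def ray_aabb_hit(origin, direction, box_min, box_max, t_max=float("inf")):
    """True if origin + t*direction enters the box for some t in [0, t_max]."""
    t_near, t_far = 0.0, t_max
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:          # parallel to this slab and outside it
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0               # order entry/exit for this slab
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:                # slab intervals no longer overlap: miss
            return False
    return True

# Example: a ray marching along +X from the origin hits a unit box at x = 5.
print(ray_aabb_hit((0.0, 0.0, 0.0), (1.0, 0.0, 0.0),
                   (4.5, -0.5, -0.5), (5.5, 0.5, 0.5)))   # True
```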
 
Last edited:
Are there people here really getting a PS4 for anything other than its exclusives? Perhaps ease of use?
The vast majority of PS4 gamers would be gamers that have been with PlayStation for many years and generations. They might or might not be interested in exclusives, but they sure as hell are interested in staying “with PlayStation”, which in many countries is synonymous with “video games”. The PS brand is very powerful, and to say that PS gamers only buy in because of exclusives is very reductive.

You only have to look at the sales figures for PS4 exclusives: the biggest ones sell what, 8 million units? With a user base of 90 million, it can be inferred that the vast majority of the games played on the system are multi-platform - although to be 100% accurate we’d need to know the average number of games bought per system.
 

The five top-selling titles on PS4 are COD, COD, FIFA, FIFA and GTA.

https://blog.us.playstation.com/2018/11/15/celebrate-the-fifth-anniversary-of-playstation-4/
 
Back to the impact of Nvidia RTX on next-gen consoles ... I think one of the keys to understanding the cost is understanding how the tensor cores fit in. Do consoles need something like DLSS? Is an AI-based de-noiser key to RTX performance, or is an algorithmic approach suitable? What percentage of the RTX silicon goes to fitting in tensor cores? What's the performance target, and what's the silicon budget that would be acceptable for a next-gen console? Until we really have a better understanding of whether, or how, devs are using tensor cores, and until the SDK and software have bug fixes and some more optimization passes, I don't think any of that can be answered. So RTX ray-tracing performance, to my eyes, is still up in the air, so the impact can't be evaluated.

https://www.nvidia.com/content/dam/...ure/NVIDIA-Turing-Architecture-Whitepaper.pdf
 
A question to people who think RT will be the next big thing next gen: What about Tessellation?

Subdivision surfaces and geometric displacement mapping have been key features of CGI since way before ray tracing was realistic even for high-end offline productions. As such, they have been touted for the past 20 years as something that would make their way into real-time rendering, but the transition never seemed to fully happen. I mean, the first claim that subdivision surfaces were the next big thing for real-time rendering dates back to before 1999's Quake 3, for god's sake. We've been flirting with the idea ever since, with every new console gen. The PS2's VPUs were said to be able to do it, but were rarely actually given that task in released games (SSX resolved bezier surfaces in real time, I think). Matrox cards had their own HW acceleration for it, later ATI with TruForm (which was supported by a dozen or so PC games!), the X360 had HW tessellation that rarely got used, and DX11 incorporated it into the standard specification. Even then, with both current-gen consoles, PS4 and XB1, having HW acceleration compliant with the DX11 standard, after 15 years of talk about this feature most devs still show little interest in making it a foundational feature of their engines, even though it has been a bare-minimum standard of the CGI industry since its beginning.
I know these two features, dynamic tessellation (including subdiv and displacement) and ray tracing, are not completely comparable, and I can think of reasons myself why RT may be easier to adopt than the former, but it's worth remembering that even when a feature is highly desirable, has been introduced in an actual product and has been implemented in actual games, that does not mean it will become an industry standard quickly. Ironically, the two things I hear from devs as reasons tessellation still hasn't become completely ubiquitous are that, although reasonably programmable by now, the current standard is still not programmable enough for many desired applications, and that even though it is HW accelerated, it is bottlenecked by other aspects of the architecture in ways that make the cost not worth the end result. Both are very similar to the complaints about the state of the HW RT implementation at the moment.
For next-gen, with all the talk about AMD's highly programmable next-gen geometry pipeline, and rumors of the architecture's impressive performance with micro-polygon rendering and massively complex meshes, it might be the case that subdiv models will finally become the standard, and even then I'm a little skeptical. With that holy grail so eternally out of reach, it's hard not to be pessimistic about the other one that's arguably even holier, but also even further away.
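
To make concrete what the "subdiv + displacement" amplification being discussed actually does, here is a toy sketch in Python: plain midpoint subdivision plus a procedural height field standing in for a displacement map. It is purely illustrative and assumes nothing about any real API - Catmull-Clark evaluation and the DX11 hull/domain stages are far more involved.

```python
# Each refinement step splits every triangle into four, then a height function
# displaces the vertices: a few coarse control triangles become a dense,
# detailed surface without any of that detail being stored in the base mesh.
import math

def subdivide(vertices, triangles):
    """One uniform subdivision step: each triangle (a, b, c) becomes four."""
    verts = list(vertices)
    midpoint_cache = {}          # edge (i, j) -> index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            ax, ay, az = verts[i]
            bx, by, bz = verts[j]
            verts.append(((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2))
            midpoint_cache[key] = len(verts) - 1
        return midpoint_cache[key]

    new_tris = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_tris += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return verts, new_tris

def displace(vertices, height=lambda x, z: 0.1 * math.sin(4 * x) * math.cos(4 * z)):
    """Push each vertex up by a height field -- a stand-in for a displacement map."""
    return [(x, y + height(x, z), z) for x, y, z in vertices]

# Two planar triangles become 8, then 32, forming a wavy displaced surface.
verts = [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]
tris = [(0, 1, 2), (0, 2, 3)]
for _ in range(2):
    verts, tris = subdivide(verts, tris)
print(len(tris), "triangles after 2 levels")   # 32
verts = displace(verts)
```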
 
Is an AI-based de-noiser key to RTX performance, or is an algorithmic approach suitable?
All the RTX announcement demos were using algorithmic denoisers and not nVidia's ML one, so denoising works fine without. Insomniac's upscaling is great without needing AI, so DLSS isn't entirely necessary either, although at least that'd make upscaling uniform and not be engine/developer dependent on whether good results could be achieved or not.
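
For anyone wondering what an "algorithmic" (non-ML) denoiser looks like in practice, below is a toy single-pass cross-bilateral filter in Python/NumPy. It assumes a noisy low-sample-count lighting buffer plus a clean guide channel (depth, normals or albedo from the G-buffer); shipping denoisers such as SVGF add temporal accumulation, variance estimates and several à-trous passes on top of this basic idea.

```python
import numpy as np

def cross_bilateral_denoise(noisy, guide, radius=2, sigma_space=1.5, sigma_guide=0.1):
    """Blur `noisy` (H x W), but weight neighbours down where `guide` (H x W,
    e.g. a noise-free depth or albedo channel) differs from the centre pixel,
    so edges in the guide are preserved instead of smeared."""
    h, w = noisy.shape
    out = np.zeros_like(noisy)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_space**2))
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            n_patch = noisy[y0:y1, x0:x1]
            g_patch = guide[y0:y1, x0:x1]
            s_patch = spatial[y0 - y + radius:y1 - y + radius,
                              x0 - x + radius:x1 - x + radius]
            wts = s_patch * np.exp(-((g_patch - guide[y, x])**2) / (2 * sigma_guide**2))
            out[y, x] = (wts * n_patch).sum() / wts.sum()
    return out

# Noisy "1 sample per pixel" shading with a hard edge in the guide buffer:
rng = np.random.default_rng(0)
guide = np.zeros((64, 64)); guide[:, 32:] = 1.0            # depth-like step edge
clean = 0.3 + 0.4 * guide
noisy = clean + rng.normal(0, 0.2, clean.shape)
# Mean error drops sharply after filtering, without blurring across the edge.
print(abs(noisy - clean).mean(), abs(cross_bilateral_denoise(noisy, guide) - clean).mean())
```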
 
We can also perform DLSS without tensor cores; the tensor cores are just faster. And a large part of data-science compute ran on regular GPU compute for a great many years before Google designed TensorFlow.
 
Friends lists, ease of borrowing games off your mates, because it's all they care to know about? Probably quite a few because of the Pro's 4K and HDR support. Probably some combination of all of them plus exclusives for some people (multi-factor and so hard to pin down). Early on, there was the fact that the PS4 provided a large amount of graphical punch for a reasonable price.

A friends list I have on Steam; it's at least as easy and nice to use as the PS4's social system. Borrowing games I can agree on, for this gen at least; MS wants to go digital-download only for next gen, I don't know if it will happen but I've read it. 4K and HDR are available on PC too, and for both you need to upgrade.
Exclusives are the biggest reason as far as my friends and family go, but I'm sure that varies wildly.

I don't own a current gen console, which seems odd as I've not skipped a generation since I started buying consoles in the 80s. I'm PC only. Current GPU is long in the tooth, probably about PS4 level, give or take (OC gtx 680). This idea of "console graphics" vs "PC graphics" is incredibly dumb.

As is the idea of people being "anti ray tracing" because of concerns about the silicon and performance costs of implementation. Spending large amounts of money on PC components doesn't validate an opinion on technology.

Modedit : removed PC vs console opinionating

I still have my physical copy of Crysis. I was not blessed with a C2D and 8800GTS on my first play. I was playing on PC but apparently not experiencing PC graphics.

Except obviously, I was.

My Q6600 was bought in early 2007, the 8800GTS in late 2006, so my system was about a year old, and not the highest end when bought. You were experiencing PC graphics; you have a choice of what you can pay for: low end, mid end, high end.
Of course a console is cheaper, but that wasn't my point either, even if the games are mostly more expensive and you pay for online.

but by and large the graphical techniques and resolutions are common to both

Yes, with the difference that on PC one can have native 4K, 60 fps and whatnot; you're always more limited on console.

Because RT in consoles won't succeed or fail because "it's not console graphics" or something similarly stupid, it'll succeed or fail because of the results it offers (perceived, because that's all that matters) for the cost (silicon, R&D, efficiency) it incurs.

Whether it succeeds or fails depends on whether manufacturers have enough time and resources to implement it; seeing that AMD is behind Nvidia on just about everything, it's hard to say if your console will have RT support, or DLSS, mesh shading, etc.

RTX cards prove that ray tracing is tantalisingly close, but also that it isn't there ... yet.

But at least it's there, and 2020/2021 will probably see a more advanced version of it.

The vast majority of PS4 gamers would be gamers that have been with PlayStation for many years and generations. They might or might not be interested in exclusives, but they sure as hell are interested in staying “with PlayStation”, which in many countries is synonymous with “video games”. The PS brand is very powerful, and to say that PS gamers only buy in because of exclusives is very reductive.


I agree, it's the brand that sells the system very well; that was the case for me with the PS2.

The five top-selling titles on PS4 are COD, COD, FIFA, FIFA and GTA.

I don't know if that's something to be proud of, but you're right, it can't be the exclusives that sell the system. For me personally, however, they would be the only reason.

For next-gen, with all the talk about AMD's highly programmable next-gen geometry pipeline, and rumors of the architecture's impressive performance with micro-polygon rendering and massively complex meshes, it might be the case that subdiv models will finally become the standard, and even then I'm a little skeptical. With that holy grail so eternally out of reach, it's hard not to be pessimistic about the other one that's arguably even holier, but also even further away.

And you don't think Nvidia is employing such tech, or is going to?

I mean, the first claim that subdivision surfaces were the next big thing for real-time rendering dates back to before 1999's Quake 3, for god's sake. We've been flirting with the idea ever since, with every new console gen. The PS2's VPUs were said to be able to do it, but were rarely actually given that task in released games (SSX resolved bezier surfaces in real time, I think).

Interesting, is it something unique to the PS2 hardware?
 
For next-gen, with all the talk about AMD's highly programmable next-gen geometry pipeline, and rumors of the architecture's impressive performance with micro-polygon rendering and massively complex meshes, it might be the case that subdiv models will finally become the standard, and even then I'm a little skeptical. With that holy grail so eternally out of reach, it's hard not to be pessimistic about the other one that's arguably even holier, but also even further away.

This is why I said the AMD geometry pipeline with a rasterizer like the one they have in an AMD patent (a micropolygon rasterizer) would be exciting. If we stick with a current rasterizer that isn't micropolygon-friendly, I think we will see the limit.
 
Interesting, is it something unique to the PS2 hardware?

The PS2 had vector units fully programmable through microcode (following in the footsteps of the N64 to some extent); it was up to the dev to choose how to use that processing resource. It was mostly used for typical skinning and transform & lighting stuff, but it could, theoretically, be made to create procedural geometry, curved surfaces, etc. Just few devs ever bothered.
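
As a concrete (and purely illustrative) example of the curved-surface work being described, here is a small Python sketch that evaluates a bicubic Bezier patch with de Casteljau's algorithm and tessellates it into a grid of positions - the kind of tight inner loop a hand-written VU microprogram could grind through. No claim is made about how SSX or any actual PS2 title implemented it.

```python
# Evaluate a 4x4 bicubic Bezier patch: de Casteljau along each row, then across.
def decasteljau(points, t):
    """Evaluate a cubic Bezier curve (4 control points) at parameter t."""
    pts = list(points)
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

def bezier_patch(control, u, v):
    """Evaluate a 4x4 patch at (u, v): curves along rows, then one curve across."""
    row_points = [decasteljau(row, u) for row in control]
    return decasteljau(row_points, v)

def tessellate(control, n=8):
    """Expand the patch into an (n+1) x (n+1) grid of positions ready to draw."""
    return [[bezier_patch(control, i / n, j / n) for i in range(n + 1)]
            for j in range(n + 1)]

# A gently bulging 4x4 patch over the unit square (inner control points raised).
control = [[(x / 3.0, (1.0 if 0 < x < 3 and 0 < y < 3 else 0.0), y / 3.0)
            for x in range(4)] for y in range(4)]
grid = tessellate(control, n=8)
print(len(grid) * len(grid[0]), "vertices")   # 81
```

Sixteen control points become as many vertices as you like, which is exactly why curved surfaces were pitched as a bandwidth saver on hardware of that era.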
 
A question to people who think RT will be the next big thing next gen: What about Tessellation?

As I understand it, Mesh Shaders will (may?) succeed Tessellation, as developers actually get control over how geometry is tessellated, as opposed to the forced standards of the API.

I think we will see mesh shaders/primitive shaders in some form enter the standard graphics APIs; I guess it's just a matter of when. We certainly need a mesh shader thread as much as we have an RT thread, but since this is a Turing feature, I guess for now we can discuss it here.

https://devblogs.nvidia.com/introduction-turing-mesh-shaders/

And you are correct that they aren't the same thing, but it's every bit as important. HRT is still very much hybrid, and mesh shaders would be a massive advancement on the rasterization side of things.
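
To illustrate the data mesh shaders consume, here is a hedged Python sketch of a greedy "meshlet" builder: the mesh is pre-cut into small self-contained chunks that a mesh shader workgroup can cull and process independently (see the Nvidia blog linked above). The 64-vertex / 126-triangle limits follow the figures Nvidia quotes for Turing; production builders optimise vertex locality much harder than this.

```python
# Greedily pack triangles into meshlets so each stays under the per-workgroup
# vertex and primitive limits. Illustrative only.
MAX_VERTS, MAX_TRIS = 64, 126

def build_meshlets(triangles):
    """Pack triangles (tuples of 3 global vertex indices) into meshlets."""
    meshlets, verts, tris = [], {}, []   # verts: global index -> local index
    for tri in triangles:
        new_verts = [v for v in tri if v not in verts]
        # Start a new meshlet if this triangle would overflow either limit.
        if tris and (len(verts) + len(new_verts) > MAX_VERTS or len(tris) + 1 > MAX_TRIS):
            meshlets.append({"vertices": list(verts), "triangles": tris})
            verts, tris = {}, []
            new_verts = list(tri)
        for v in new_verts:
            verts[v] = len(verts)
        tris.append(tuple(verts[v] for v in tri))      # store local indices
    if tris:
        meshlets.append({"vertices": list(verts), "triangles": tris})
    return meshlets

# A long triangle strip of 1000 triangles over 1002 vertices.
strip = [(i, i + 1, i + 2) for i in range(1000)]
print(len(build_meshlets(strip)), "meshlets")   # 17 meshlets
```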
 
I think it's OK if there are people who want to pay more to get RTRT in games. There are more options out there for people who don't. I don't see the "harm", and I don't see how the tech is not ready, because it is already out in the market and "it just works" ( :D , OK, I mean that we can see it now and it works, even though it's not perfect and not at 9999 FPS).

As long as it's not the only option and they don't impose anything on anybody, I'm fine with it. I for one wouldn't buy one of these cards now, but I'm happy to see that they offer something new (which we have been waiting for a long, long time) and that some people want it and can get it, along with the experimental ground this implies for both developers and manufacturers: if something works, that can be the way to go and it can be improved, and if it doesn't, we can rule it out and try other options until we find something better.

Expensive items in comparison to cheaper equivalents with similar features are not that rare... people buy iPhones, and most of the time it's just because of the brand, even though the hardware itself is not significantly better than other options nor offers something so new and exclusive that it could explain the difference in price (at least, not all of it). A lame example, maybe, but I hope it can be understood. RTX offers something new, at least.
Someone with a bit of fantasy and happiness. I am crying.

So is mesh shading a more interesting feature of the RTX 20 series than RT, DLSS, etc.?

https://www.geeks3d.com/20180920/nvidia-mesh-shading-asteroids-demo-on-a-rtx-2080/


Mesh shading is said to be more advanced and usable than primitive shading.
It looks very nice.
 