Impact of nVidia Turing RayTracing enhanced GPUs on next-gen consoles *spawn

Doesn't it run at 30fps during the visually taxing playbacks?
In resolution mode, yes.
Performance mode has 60fps playback. (1080p)

Would like an option to have high-res 60fps without heavy post processing.
Mostly everything is baked in GT Sport, so it looks fantastic and runs well. Good setup for racing. But... some other limitations come as a result.
Yup.
They have some sweet-quality GI tools to bake lighting to vertices.

Tracing from vertices could be OK for games using RTX.
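For readers unfamiliar with the technique being described: baking GI to vertices means the expensive light transport is solved offline and stored per vertex, so the runtime only interpolates it across the triangle. A minimal, hypothetical C++ sketch of the runtime side (the names and data layout are invented for illustration; this is not Polyphony's actual pipeline):

```cpp
#include <array>
#include <cstdio>

// Hypothetical illustration of vertex-baked GI: the expensive global
// illumination is solved offline and stored per vertex; at runtime we only
// interpolate it, so the per-frame cost is trivial.
struct Vec3 { float x, y, z; };

static Vec3 lerp3(const Vec3& a, const Vec3& b, const Vec3& c, float u, float v) {
    // Barycentric interpolation: w = 1 - u - v weights vertex a.
    float w = 1.0f - u - v;
    return { w * a.x + u * b.x + v * c.x,
             w * a.y + u * b.y + v * c.y,
             w * a.z + u * b.z + v * c.z };
}

struct BakedVertex {
    Vec3 position;
    Vec3 bakedIrradiance;   // result of the offline GI bake, RGB
};

// Runtime "shading": baked indirect light times albedo. No rays, no probes.
static Vec3 shade(const std::array<BakedVertex, 3>& tri,
                  float u, float v, const Vec3& albedo) {
    Vec3 gi = lerp3(tri[0].bakedIrradiance, tri[1].bakedIrradiance,
                    tri[2].bakedIrradiance, u, v);
    return { gi.x * albedo.x, gi.y * albedo.y, gi.z * albedo.z };
}

int main() {
    std::array<BakedVertex, 3> tri = {{
        {{0, 0, 0}, {0.8f, 0.7f, 0.6f}},
        {{1, 0, 0}, {0.2f, 0.2f, 0.3f}},
        {{0, 1, 0}, {0.5f, 0.5f, 0.5f}},
    }};
    Vec3 c = shade(tri, 0.25f, 0.25f, {1.0f, 0.5f, 0.25f});
    std::printf("shaded colour: %.3f %.3f %.3f\n", c.x, c.y, c.z);
    return 0;
}
```

The obvious limitation, as noted above, is that everything baked is static: moving lights, time of day, or dynamic objects don't get the same treatment.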
 
Mostly everything is baked in GT Sport, so it looks fantastic and runs well. Good setup for racing. But... some other limitations come as a result.

Oh I think it looks bland as hell most of the time, with a few sparks of brilliance here and there (helped mostly by the amazing HDR implementation).
 
It's quite strange seeing it in motion, because it looks both very, very real, and very, very artificial. In the arcadey sense.

A bit of uncanny valley, I suppose. I wonder how much more prevalent that'll become over the years?

Edit: I was referring to GT Sport btw
 
Won't DLSS+RT give better performance than RT without DLSS?
As a replacement for TAA, and apparently rendering at a lower resolution, likely yes. But without DLSS and RT implemented on the same title, there's no way to know if utilization of the tensor cores for denoising RT will hamper the performance of DLSS.
Also, Metro is said to use a better RT implementation than BFV does.
Not necessarily "better", as that implies a superior implementation. DICE decided to only use RT reflections in BFV, especially since performance is a big factor. Metro will utilize RT GI and shadowing AFAIK. We don't know the performance penalties yet.
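To make the DLSS performance argument above concrete, here is a rough pixel-count sketch, assuming a hypothetical 4K target reconstructed from a 1440p internal resolution (actual DLSS input resolutions vary by title and mode):

```cpp
#include <cstdio>

int main() {
    // Rough arithmetic only: shading work scales (roughly) with pixels shaded.
    // Assumed numbers; real DLSS internal resolutions vary per title/mode.
    const long long native4k = 3840LL * 2160;   // full-rate 4K target
    const long long internal = 2560LL * 1440;   // hypothetical DLSS input
    double savings = 1.0 - double(internal) / double(native4k);
    std::printf("pixels shaded: %lld vs %lld (%.0f%% fewer)\n",
                internal, native4k, savings * 100.0);
    // Whether that headroom survives is exactly the open question in the
    // reply above: tensor-core time spent on upscaling vs. on RT denoising.
    return 0;
}
```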
 
Oh I think it looks bland as hell most of the time, with a few sparks of brilliance here and there (helped mostly by the amazing HDR implementation).
Lol, I think it looks pretty good. But they are having a hard time coming out with content to keep their fan base satisfied. Rain, etc.
 
As I said in other posts, I'm excited that we're finally enjoying RTRT in games, so I'm obviously pro-RT in the sense that I appreciate what Nvidia has done now, even though I'm not pro-Nvidia (I honestly don't care about the manufacturer). No, it's not the best solution and it's expensive as fuck, but come on, guys! It's friggin RTRT! Can't we cut them some slack?

Thanks to this step, mainstream developers are implementing and testing this tech at a larger scale right now. All this will be used to improve both software and hardware in the future. Should we expect any better, bearing in mind all the circumstances and facts of the current tech/market?

I honestly don't understand the shitstorm against Nvidia. They took a risk and they put out better graphics cards with the addition of another hardware solution, which you're totally free not to use if you'd rather push rasterization further instead. If you don't like the features, you're not forced to buy them either!

Of course I understand all the criticism, but, IMO, let other people praise and enjoy the positive aspects which they already have.

Not a shitstorm against NV, just tempering expectations about viable RT for the next console generation at... this is the most important part... at console price points. At the end of the day, it's still about games and gameplay feeling good. Can RT be had at console price points (meaning the GPU silicon used in an SoC or MCM or component will likely have to cost south of 150-200 USD, 200 being fairly extravagant) while still being fluid and responsive and offering a significantly improved visual IQ over rasterization-only rendering?

NV has certainly gotten the ball rolling more than PowerVR did, but that has more to do with NV playing in a space that is more conducive to adopting RT (PC) than PowerVR (mobile).

I do sometimes wonder how things would have gone, had PowerVR still been a player in the PC graphics space. Ah well.

There are also discussions about whether black-box, inflexible RT as done in RTX will end up holding back RT rendering more than helping it, by reducing the amount of work done on alternative, game-centric algorithms. That's something offline rendering hasn't had to think about for the past few decades. Hence the question of whether it's appropriate for the next-gen consoles versus a more flexible, general, and potentially slower RT solution.
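To illustrate what such a "flexible, general, and potentially slower" RT solution might look like in practice, here is a minimal, hypothetical sketch of BVH traversal done in ordinary code rather than inside a fixed-function unit. The node layout, names, and toy scene are invented for illustration, and the BVH is assumed to be already built:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Sketch of the "flexible, general, and potentially slower" path: traversal
// written as plain code the developer owns, instead of the fixed-function
// traversal hidden inside an RT core.
struct Ray  { float orig[3]; float invDir[3]; };   // origin and 1/direction
struct AABB { float lo[3];   float hi[3];   };

struct Node {
    AABB bounds;
    int  left, right;    // child indices, -1 = none
    int  primitive;      // leaf payload, -1 = interior node
};

// Standard slab test against an axis-aligned box.
static bool hitAABB(const AABB& b, const Ray& r, float tMax) {
    float t0 = 0.0f, t1 = tMax;
    for (int a = 0; a < 3; ++a) {
        float tNear = (b.lo[a] - r.orig[a]) * r.invDir[a];
        float tFar  = (b.hi[a] - r.orig[a]) * r.invDir[a];
        if (tNear > tFar) std::swap(tNear, tFar);
        t0 = std::max(t0, tNear);
        t1 = std::min(t1, tFar);
        if (t0 > t1) return false;
    }
    return true;
}

// The point of the sketch: because this loop is plain code, a game could add
// its own heuristics (skip distant geometry, swap LODs, stop after N hits),
// which is exactly what a black-box traversal unit does not allow.
static int traverse(const std::vector<Node>& bvh, const Ray& ray, float tMax) {
    std::vector<int> stack = {0};                  // start at the root
    while (!stack.empty()) {
        int idx = stack.back();
        stack.pop_back();
        const Node& n = bvh[idx];
        if (!hitAABB(n.bounds, ray, tMax)) continue;
        if (n.primitive >= 0) return n.primitive;  // first leaf hit (demo only)
        if (n.left  >= 0) stack.push_back(n.left);
        if (n.right >= 0) stack.push_back(n.right);
    }
    return -1;                                     // missed everything
}

int main() {
    // Toy two-leaf BVH covering two unit boxes side by side.
    std::vector<Node> bvh = {
        { {{0, 0, 0}, {2, 1, 1}},  1,  2, -1 },    // root
        { {{0, 0, 0}, {1, 1, 1}}, -1, -1,  7 },    // leaf holding primitive 7
        { {{1, 0, 0}, {2, 1, 1}}, -1, -1,  9 },    // leaf holding primitive 9
    };
    Ray ray{{0.5f, 0.5f, -1.0f}, {1e9f, 1e9f, 1.0f}};  // pointing along +z
    std::printf("hit primitive: %d\n", traverse(bvh, ray, 100.0f));
    return 0;
}
```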

Ergo, many of us think that NV's future consumer GPUs that accelerate RT will try to do away with fixed-function, black-box, RT-specific hardware. And that part is the one that is truly worth getting excited about, IMO. Alternatively, a bleaker future is one where there is no significant progress in silicon nodes and no viable alternative is found, leading to a future of increasingly fixed-function hardware that doesn't allow for the kind of creative rendering we've had since the X360/PS3 generation.

I don't think anyone is against RT. Most here are likely eagerly wanting RT in the future. But at what cost (monetarily and algorithmically)? And in what form? Either intermediate, or long term.

Regards,
SB
 
I agree and, as I said, I understand the criticism. But at least Nvidia did something. That's my point. From this experience things should get better (I hope).
 
Doing something makes a great deal of sense for professional imaging. If RTX... was a pro-only card and not released for gaming, its whole perception would be different. A cynical view sees RTX as a pro card released to gaming before the tech is ready for gaming. That's part of the discussion.
 
Price points for Nvidia hardware are high. Its 2070 is 455mm^2.

Compared to a console SoC of 360mm^2, it's nearly 30% larger. That's a big part of the costs there, before we even get into the R&D.
Though of course we'd be looking at a node shrink for next-gen consoles, so ~265mm^2 if we're lucky. I think that points nicely to your suggestion of 2070 performance being what next-gen consoles could hope for, so it's that card that should be scrutinised.
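A rough back-of-envelope for the area argument in the two posts above; the die sizes are the ones quoted in the thread, while the 0.58x area factor for a 12nm-to-7nm shrink is an assumed, optimistic logic-scaling number, not a vendor figure:

```cpp
#include <cstdio>

int main() {
    // The 455 mm^2 (RTX 2070 class) and 360 mm^2 (current console SoC)
    // figures come from the posts above; the shrink factor is assumed.
    const double turing2070 = 455.0;
    const double consoleSoc = 360.0;
    const double shrink     = 0.58;   // optimistic 12nm -> 7nm area scaling
    std::printf("2070 vs console die: %.0f%% larger\n",
                (turing2070 / consoleSoc - 1.0) * 100.0);
    std::printf("2070-class GPU after a 7nm shrink: ~%.0f mm^2\n",
                turing2070 * shrink);
    return 0;
}
```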
 
Doing something makes a great deal of sense for professional imaging. If RTX... was a pro-only card and not released for gaming, its whole perception would be different. A cynical view sees RTX as a pro card released to gaming before the tech is ready for gaming. That's part of the discussion.
I think it's ok if there are people who want to pay more to get RTRT in games. There are more options out there for people who don't. I don't see the "harm", and I don't see how the tech is not ready, because it already is out in the market and "it just works" ( :D , ok, I mean that we can see it now and it works, even though it's not perfect and doesn't run at 9999 FPS).

As long as it's not the only option and they don't impose anything on anybody, I'm fine with it. I for one wouldn't buy one of these cards now, but I'm happy to see that they offer something new (which we have been waiting for a long, long time) and that some people want it and can get it, along with the experimental ground this implies for both developers and manufacturers: if something works, that can be the way to go and it can be improved, and if it doesn't, we can rule it out and try other options until we find something better.

Expensive items in comparison to cheaper equivalents with similar features are not that rare... people buy iPhones, and most of the time it's just because of the brand, even though the hardware itself is not significantly better than other options nor offers something new and exclusive enough to explain the difference in price (at least, not all of it). Lame example, maybe, but I hope it can be understood. RTX offers something new, at least.
 
I don't see anybody complaining about the fixed rasterization pipeline and how much it has held back graphics. Seems quite unfair when contrasted with opinions against RT. Why does rasterization get a pass?

You won't get better speeds with general purpose hardware, that's just silly. NVIDIA even tried to make a full rasterizer on compute a few years ago and the results were an order of magnitude slower than hardware acceleration. You want speed? You need specialized hardware.

Besides, is the implementation of HRT in Battlefield V representative of what we could expect from console games two years from now? Absolutely not. Just the fact that the game was fully developed with rasterization in mind and HRT being an afterthought should give you guys a hint.
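For context on the "full rasterizer on compute" point (presumably the Laine & Karras 2011 software-rasterization research), the heart of any rasterizer is an edge-function coverage test per pixel. A generic, self-contained illustration, not NVIDIA's code; a hardware rasterizer does this, plus attribute interpolation and depth handling, in dedicated silicon, which is a large part of the order-of-magnitude gap mentioned above:

```cpp
#include <cstdio>

// Generic software-rasterization sketch: test every pixel centre of a small
// grid against the three edge functions of one triangle.
struct P { float x, y; };

// Edge function: >= 0 means p is on the inside of edge a->b for a
// counter-clockwise triangle (twice the signed area of the triangle a,b,p).
static float edge(const P& a, const P& b, const P& p) {
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

int main() {
    const P v0{1, 1}, v1{7, 2}, v2{3, 7};   // a counter-clockwise triangle
    for (int y = 0; y < 8; ++y) {
        for (int x = 0; x < 8; ++x) {
            P p{x + 0.5f, y + 0.5f};        // sample at the pixel centre
            bool inside = edge(v0, v1, p) >= 0 &&
                          edge(v1, v2, p) >= 0 &&
                          edge(v2, v0, p) >= 0;
            std::putchar(inside ? '#' : '.');
        }
        std::putchar('\n');
    }
    return 0;
}
```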
 
I see no opinions against RT.
I see some opinions questioning the cost benefit of current RT in terms of $ and silicon.

Please stop reading into things as being attacks when they do not exist.
 
I don't see anybody complaining about the fixed rasterization pipeline and how much it has held back graphics. Seems quite unfair when contrasted with opinions against RT. Why does rasterization get a pass?

You won't get better speeds with general purpose hardware, that's just silly. NVIDIA even tried to make a full rasterizer on compute a few years ago and the results were an order of magnitude slower than hardware acceleration. You want speed? You need specialized hardware.

Besides, is the implementation of HRT in Battlefield V representative of what we could expect from console games two years from now? Absolutely not. Just the fact that the game was fully developed with rasterization in mind and HRT being an afterthought should give you guys a hint.

So what you're saying is that we'd be better off with VLIW based hardware instead of unified shaders?

Fixed function hardware T&L instead of the more flexible lighting solutions that exist today?

And on and on. Graphics the past 2 console generations have flourished with the move away from fixed function hardware and towards more generalized hardware.

Allowing the developers the ability to be creative with how things are rendered rather than trapping them inside a box and saying THIS is how you render 3D graphics is why we've seen rapid evolution of how 3D graphics are rendered.

RT is nice. No-one is arguing against RT. I have no idea where you are coming up with that idea. Fixed-function, black-box RT, while nice for getting things kickstarted, can only hold the industry back if it remains a fixed-function black box.

Graphics have progressed so much purely because developers have had the ability to experiment with different ways of doing things, often in ways the hardware makers didn't expect, due to the flexible nature of the hardware and the fact that it wasn't a black box. Locking them into a singular way of doing things, while potentially not harmful in the short term, will stifle creativity and advancement in 3D graphics in the long term.

Just like if we'd never progressed past fixed function hardware T&L after it was introduced in GeForce 256. It'd definitely be faster than general purpose compute and shaders for lighting, but we'd certainly be a lot worse off if it hadn't quickly been replaced with more generalized solutions for achieving the same effect.

What people question is whether hardware accelerated RT is feasible for the next generation of consoles and what form it might take if it is.

Regards,
SB
 
I see no opinions against RT.
I see some opinions questioning the cost benefit of current RT in terms of $ and silicon.

Please stop reading into things as being attacks when they do not exist.
Semantics.

Is the rasterization pipeline fully programmable? No. Why? Because it would be too slow.

You keep calling RTX a black box, but really, exactly what does it prevent you from doing? Do you have any specific examples of things we would miss out on because of its adoption, or is it just FUD?

If we didn't adopt it, at least in the short term, I can tell you what we would miss out on: fast triangle-intersection ray tracing.
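For reference, the "fast triangle-intersection ray tracing" in question boils down to running a test like the standard Möller-Trumbore intersection below (plus BVH traversal) an enormous number of times per frame. This is a generic textbook sketch, not RTX's internal implementation:

```cpp
#include <cmath>
#include <cstdio>

// Minimal Möller-Trumbore ray/triangle intersection: the primitive operation
// that RT cores (and the BVH traversal feeding them) accelerate in hardware.
struct V3 { float x, y, z; };
static V3    sub(V3 a, V3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static V3    cross(V3 a, V3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(V3 a, V3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir) hits
// triangle (v0, v1, v2).
static bool intersect(V3 orig, V3 dir, V3 v0, V3 v1, V3 v2, float& t) {
    const float eps = 1e-7f;
    V3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    V3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;          // ray parallel to triangle
    float inv = 1.0f / det;
    V3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    V3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > eps;                                  // hit in front of the origin
}

int main() {
    float t = 0.0f;
    bool hit = intersect({0, 0, -1}, {0, 0, 1},      // ray from z=-1 along +z
                         {-1, -1, 0}, {1, -1, 0}, {0, 1, 0}, t);
    if (hit) std::printf("hit at t=%.2f\n", t);
    else     std::printf("miss\n");
    return 0;
}
```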
 
I don't see anybody complaining about the fixed rasterization pipeline and how much it has held back graphics. Seems quite unfair when contrasted with opinions against RT. Why does rasterization get a pass?
There are countless criticisms of the results from rasterising; for years and years people have pointed out what's wrong with them and their ugly hacks. However, it's the only way to get 3D graphics at decent framerates from a computer, and it will remain the only way, via hybrid renderers, until we have notably faster tech that allows full ray tracing, if that ever happens. Excepting special cases like traced SDFs, which again are only now possible; it would have been really stupid to demand the ditching of rasterising in favour of traced SDFs over the past few decades when traced SDFs weren't possible.

"We don't want rasterised graphics because they suck! Give us raytraced games at one frame every four hours!!"
 
RT is nice. No-one is arguing against RT. I have no idea where you are coming up with that idea. Fixed-function, black-box RT, while nice for getting things kickstarted, can only hold the industry back if it remains a fixed-function black box.

The problem is that you need 10x the compute performance to match the fixed-function block. In a post-Moore's-Law world, you should find a really complex hobby to get you through the next two decades. :D
 