Next Generation Hardware Speculation with a Technical Spin [2018]

If it is well done, it theoretically runs even faster than without raytracing. Finally one no longer has to use the CUDA cores for screen-space reflections and the like. Screen-space reflections alone can cost up to 30%.

I think your math needs checking.

Even assuming SSR can cost 30% - obviously it depends on a lot of factors, and I’d be extremely surprised if such a relatively small part of the pipeline took that much render time in controlled situations, especially on consoles - you would only “run faster” with RT enabled if RT cost less than 30%.

Right now the evidence points to RT sucking a whole lot more than 30%, constantly, under all conditions, in very limited, unplayable demos, even on top of the line cards that cost as much as three consoles at today’s prices.

Sure it’s early days, and optimisations will help, but the evidence today is that you go from 4k60+ without RT to 1080p and very questionable frame rates (and results I might add, more on this below) with RT enabled. That’s a whole lot more than 30%.
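To put rough numbers on that argument - a back-of-envelope sketch with purely made-up figures, not measurements - swapping out an SSR pass only pays off if whatever replaces it costs less frame time than what was removed:

```python
# Hypothetical frame-time arithmetic: remove a pass that took
# `removed_fraction` of the frame and add one that takes `added_ms`.

def fps_after_swap(base_fps, removed_fraction, added_ms):
    base_ms = 1000.0 / base_fps
    new_ms = base_ms * (1.0 - removed_fraction) + added_ms
    return 1000.0 / new_ms

# A 60 fps game where SSR really did eat 30% of the frame:
print(fps_after_swap(60, 0.30, added_ms=3.0))   # RT reflections at 3 ms  -> ~68 fps, a win
print(fps_after_swap(60, 0.30, added_ms=12.0))  # RT reflections at 12 ms -> ~42 fps, a loss
```

The early demos look far more like the second case - the whole frame budget blowing out, not a few milliseconds being swapped.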

RT is a whole host of things (reflections, shadows, GI) and each demo I’ve seen so far has focused on one of them at a time, so it’s a bit of a lottery just trying to figure out how it all affects performance.

Battlefield V has cool reflections, but they’re still not perfect (they were still a bit broken even with so-called “perfect RT reflections”). And the standard version looked pretty close anyway at the speed you normally play the game at.

Tomb Raider had nice soft shadows, but again, it took me a few double takes to see a difference.

Metro’s RT GI solution? Sorry, but the game looks basically the same as what I’ve played over the last few years. Sure, when you open *that window*, light comes in realistically and the room is illuminated like it should be. But the rasterised version looked pretty similar, so much so that I only really noticed the difference after the guy mentioned it, switched between each version a few times, and I looked at it in slow motion. Otherwise, meh, like Tomb Raider.

So yeah. So far I literally don’t see the point of the humongous performance hit.

We have gotten so good at ‘faking’ great graphics that it’s gonna be a tough sell for a while. Diminishing returns rearing its head again.

Then again, now that I think of it, while I played through Spider-Man on PS4 all I could see were those typically broken but otherwise perfectly fine SSR+Cubemaps on the buildings and kept thinking “damn, some RT there would be just perfect”. Heck I’d buy a new platform just to replay that game all over again with fully Ray Traced reflections on those shiny NYC buildings. But I’m completely insane.

Perhaps I’m being a Debbie Downer for nothing.
 
I think your math needs checking.

Even assuming SSR can cost 30% - obviously it depends on a lot of factors, and I’d be extremely surprised if such a relatively small part of the pipeline took that much render time in controlled situations, especially on consoles - you would only “run faster” with RT enabled if RT cost less than 30%.

Right now the evidence points to RT sucking a whole lot more than 30%, constantly, under all conditions, in very limited, unplayable demos, even on top of the line cards that cost as much as three consoles at today’s prices.

Sure it’s early days, and optimisations will help, but the evidence today is that you go from 4k60+ without RT to 1080p and very questionable frame rates (and results I might add, more on this below) with RT enabled. That’s a whole lot more than 30%.

RT is a whole host of things (reflections, shadows, GI) and each demo I’ve seen so far has focused on one of them at a time, so it’s a bit of a lottery just trying to figure out how it all affects performance.

Battlefield V has cool reflections, but they’re still not perfect (they were still a bit broken even with so-called “perfect RT reflections”). And the standard version looked pretty close anyway at the speed you normally play the game at.

Tomb Raider had nice soft shadows, but again, it took me a few double takes to see a difference.

Metro’s RT GI solution? Sorry, but the game looks basically the same as what I’ve played over the last few years. Sure, when you open *that window*, light comes in realistically and the room is illuminated like it should be. But the rasterised version looked pretty similar, so much so that I only really noticed the difference after the guy mentioned it, switched between each version a few times, and I looked at it in slow motion. Otherwise, meh, like Tomb Raider.

So yeah. So far I literally don’t see the point of the humongous performance hit.

We have gotten so good at ‘faking’ great graphics that it’s gonna be a tough sell for a while. Diminishing returns rearing its head again.

Then again, now that I think of it, while I played through Spider-Man on PS4 all I could see were those typically broken but otherwise perfectly fine SSR on the buildings and kept thinking “damn, some RT there would be just perfect”. Heck I’d buy a new platform just to replay that game all over again with fully Ray Traced reflections on those shiny NYC buildings. But I’m completely insane.

Perhaps I’m being a Debbie Downer for nothing.

Mr. Boy, you put forth all my feelings regarding current RT perfectly, and much better than I ever had the patience to do. In the world of internet forums it's simply impossible to be realistic without being a Debbie Downer. It's just how it is. But good job there. It was some very eloquent Debbie Downing.
 
Isn’t real time ray tracing just part of the new DirectX feature set and, for all intents and purposes, something that can be accelerated with the right type and amount of computational power? Regardless of next gen consoles, AMD will eventually release something to support DXR in some way, and the technology will progress from RTX.
Personally, I don’t think the results Nvidia have shown so far actually justify the performance hit.
I'd be OK if we saw developers actually engage in a debate about what's more important: higher resolution and frame rates, or additional effects including RT.

I realize it's not this simple, but it feels like graphics discussion in gaming, especially on consoles, is disproportionately about resolution and frame rate compared to most everything else. Granted, it's easier to discuss, as nearly anyone can grasp the relative differences between 1080p and 4K or 60 versus 30 fps, but at least from my perspective RT and other effects are a bigger interest and more likely to make gaming interesting than resolution increases alone.

To say it another way, if the next-generation consoles are simply a higher-resolution experience of the games we're seeing now, I may have to make a move to PC, and I have no interest in that. I am deep in diminishing-returns territory with respect to resolution and, to a lesser degree, frame rates.
 
I'd be OK if we saw developers actually engage in a debate about what's more important: higher resolution and frame rates, or additional effects including RT.

I realize it's not this simple, but it feels like graphics discussion in gaming, especially on consoles, is disproportionately about resolution and frame rate compared to most everything else. Granted, it's easier to discuss, as nearly anyone can grasp the relative differences between 1080p and 4K or 60 versus 30 fps, but at least from my perspective RT and other effects are a bigger interest and more likely to make gaming interesting than resolution increases alone.

To say it another way, if the next-generation consoles are simply a higher-resolution experience of the games we're seeing now, I may have to make a move to PC, and I have no interest in that. I am deep in diminishing-returns territory with respect to resolution and, to a lesser degree, frame rates.

The problem I’m talking about is that even with RT we would get games that do look pretty darn similar to what we would have without RT anyway. Only they’d run much worse.
 
The fact that the RT comparisons have sometimes used worst-case rasterising examples to show how awesome RT is shows the advances aren't that significant over what's possible now. A proper RT engine will be a thing to behold, and I do believe the first game to build its lighting from the ground up around realtime RT will look significantly superior, but the existence of this RT hardware in $600+ GPUs now does not equate to it being ready for next-gen consoles.

I cross my fingers that AMD have something awesome that is ready for the next consoles though. ;)

And it needn't be large. PowerVR's raytracing was in a mobile chip. We've zero evidence that nVidia's solution is the best-case, optimal solution. That's just the way they've gone with their processor designed for graphics+machine learning+raytracing. Ditch the machine learning part and throw in some focussed, specialist silicon, perhaps RT will be possible and cheap enough.
 
Mr. Boy, you put forth all my feelings regarding current RT perfectly, and much better than I ever had the patience to do. In the world of internet forums it's simply impossible to be realistic without being a Debbie Downer. It's just how it is. But good job there. It was some very eloquent Debbie Downing.
Too kind!
 
I think your math needs checking.

Even assuming SSR can cost 30% - obviously it depends on a lot of factors, and I’d be extremely surprised if such a relatively small part of the pipeline took that much render time in controlled situations, especially on consoles - you would only “run faster” with RT enabled if RT cost less than 30%.

Right now the evidence points to RT sucking a whole lot more than 30%, constantly, under all conditions, in very limited, unplayable demos, even on top of the line cards that cost as much as three consoles at today’s prices.

Sure it’s early days, and optimisations will help, but the evidence today is that you go from 4k60+ without RT to 1080p and very questionable frame rates (and results I might add, more on this below) with RT enabled. That’s a whole lot more than 30%.

RT is a whole host of things (reflections, shadows, GI) and each demo I’ve seen so far has focused on one of them at a time, so it’s a bit of a lottery just trying to figure out how it all affects performance.

Battlefield V has cool reflections, but they’re still not perfect (they were still a bit broken even with so-called “perfect RT reflections”). And the standard version looked pretty close anyway at the speed you normally play the game at.

Tomb Raider had nice soft shadows, but again, it took me a few double takes to see a difference.

Metro’s RT GI solution? Sorry, but the game looks basically the same as what I’ve played over the last few years. Sure, when you open *that window*, light comes in realistically and the room is illuminated like it should be. But the rasterised version looked pretty similar, so much so that I only really noticed the difference after the guy mentioned it, switched between each version a few times, and I looked at it in slow motion. Otherwise, meh, like Tomb Raider.

So yeah. So far I literally don’t see the point of the humongous performance hit.

We have gotten so good at ‘faking’ great graphics that it’s gonna be a tough sell for a while. Diminishing returns rearing its head again.

Then again, now that I think of it, while I played through Spider-Man on PS4 all I could see were those typically broken but otherwise perfectly fine SSR+Cubemaps on the buildings and kept thinking “damn, some RT there would be just perfect”. Heck I’d buy a new platform just to replay that game all over again with fully Ray Traced reflections on those shiny NYC buildings. But I’m completely insane.

Perhaps I’m being a Debbie Downer for nothing.

I have a completely different opinion.

Just look at the shadows in video games. At every moment they look completely wrong and ugly. I haven't even seen soft shadows on the consoles yet. In this respect, even Crysis 2 from 2011 is much further along with its soft shadows. Hard shadows are very off-putting to me, and even on PC most games still have them. Raytraced shadows would be even better than today's soft shadows, and area-light shadows don't exist as far as I can tell.

The same applies to almost all SSAO techniques. With the exception of GTAO, they are all simply wrong and unrealistic. I don't understand why the current implementations are considered a good approach. Once you know how it should look, almost all SSAO implementations just feel wrong and fake. Raytraced AO would also be relatively cheap.

When it comes to GI, Crytek's Hunt shows how good advanced GI can be. Hunt has the best lighting of any video game I know; even pre-baked lighting doesn't come close. That Metro's visuals don't impress as a whole is down to the game itself. The Metro games never looked very good from my point of view, and raytracing alone doesn't save their graphics either.

Since when do screen-space reflections look good? They are the worst option (lots of artifacts, very limited, expensive) and are often even deactivated on consoles because the performance cost is too high. Given the low resolution of screen-space reflections in console games, they obviously take a lower performance hit than 30%. Good-looking planar reflections are mostly too expensive for open environments, while cube maps are low-res, static and wrong.
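To make the "raytraced AO would be relatively cheap" point concrete, here is a toy sketch of the idea - short cosine-weighted hemisphere rays. The `trace_any_hit` callback is a stand-in for whatever ray query the hardware or API actually provides, not a real interface:

```python
import math, random

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def raytraced_ao(p, n, trace_any_hit, samples=16, max_dist=0.5):
    """Estimate ambient occlusion at point `p` with normal `n` by firing
    short cosine-weighted rays into the hemisphere and counting misses."""
    up = (0.0, 0.0, 1.0) if abs(n[2]) < 0.9 else (1.0, 0.0, 0.0)
    t = normalize(cross(up, n))
    b = cross(n, t)
    unoccluded = 0
    for _ in range(samples):
        r1, r2 = random.random(), random.random()
        phi = 2.0 * math.pi * r1
        x, y = math.cos(phi) * math.sqrt(r2), math.sin(phi) * math.sqrt(r2)
        z = math.sqrt(1.0 - r2)
        d = tuple(x * t[i] + y * b[i] + z * n[i] for i in range(3))
        if not trace_any_hit(p, d, max_dist):
            unoccluded += 1
    return unoccluded / samples  # 1.0 = fully open, 0.0 = fully occluded

# Example: a "scene" where nothing is ever hit -> AO of 1.0 (fully open).
print(raytraced_ao((0, 0, 0), (0, 0, 1), lambda o, d, m: False))
```

The cost is just a handful of short any-hit rays per pixel - exactly the kind of query dedicated traversal hardware is built for - whereas SSAO can only guess from whatever happens to be on screen.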
 
I have a completely different opinion.

Just look at the shadows in video games. They all look completely wrong and ugly. I haven't even seen soft shadows on the consoles yet. In this respect, even Crysis 2 from 2011 is much further along with its soft shadows. Hard shadows are very off-putting to me, and even on PC most games still have them. Raytraced shadows would be even better than today's soft shadows, and area-light shadows don't exist as far as I can tell.

The same applies to almost all SSAO techniques. With the exception of GTAO, they are all simply wrong and unrealistic. I don't understand why the current implementations are considered a good approach. Once you know how it should look, almost all SSAO implementations just feel wrong and fake.

Since when do screen-space reflections look good? They are the worst option and are often even deactivated on consoles because the performance cost is too high. Good-looking planar reflections are too expensive for open environments, while cube maps are low-res and wrong. Given the low resolution of screen-space reflections in console games, they obviously take a lower performance hit than 30%.
I never said that today's solutions look especially amazing; in fact, I'm the first to say that everything in rasterisation just breaks down here and there, especially with reflections and shadows.

All I - and many others - are saying is that right now the better looking RT solutions suck a whole lot of performance even on stupidly expensive high end Nvidia cards. And that given the performance hit, 'faking' things is OK, for now.
 
Because these are the first steps. If developers can work with the hardware now and gain experience, then it will soon run much faster and look better. The consoles are designed for many years, not just for now. If Nvidia had released the raytracing hardware 10 years later, the first steps in 10 years wouldn't be performant either.
 
Because these are the first steps. If developers can work with the hardware now and gain experience, then it will soon run much faster and look better. The consoles are designed for many years, not just for now. If Nvidia had released the raytracing hardware 10 years later, the first steps in 10 years wouldn't be performant either. Basically, the earlier the hardware comes, the better.
First steps in hardware tend to be dead-ends. XB360 had hardware tessellation - silicon that went to waste. PSP had hardware bezier support - used for squat. These RT cards will be used in offline raytracing and professional imaging. Applications will be optimised for performance, and nVidia will invest in making them faster and better. Pixar will release papers on best-case rendering techniques. Etc. So raytracing doesn't need gaming's involvement this early on to progress it. Putting it out there for game developers to dabble with while greatly limiting long-term performance is a very questionable choice. A console with a limited die budget has to allocate it to where it'll get the best returns. If raytracing at a useful rate requires 100 mm² of die out of a 350 mm² total budget for GPU and CPU, it's too expensive and we can live without it. If RT can be achieved with only 20 mm² of that budget, it's worth doing.

Ultimately, it's about looking at the economics and not just blindly chasing a future. The future will happen, but it cannot be made to happen any earlier than is technologically possible. Consoles can't look to implement RT hardware until that can be done meaningfully within budget.
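That die-budget trade-off is easy to put in numbers - the figures below are the hypothetical ones from the post, not estimates of any real chip:

```python
# Share of a hypothetical 350 mm^2 CPU+GPU budget consumed by an RT block.
total_budget_mm2 = 350.0
for rt_area_mm2 in (100.0, 20.0):
    share = rt_area_mm2 / total_budget_mm2
    print(f"RT block at {rt_area_mm2:.0f} mm^2 -> {share:.0%} of the SoC budget")
# 100 mm^2 is ~29% of the chip spent on one feature; 20 mm^2 is ~6%,
# which is far easier to justify against extra CUs or CPU cores.
```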
 
I feel the same as many of you. As I previously said, and quite in line with @London-boy's comments, current tech has allowed us to produce very pretty/advanced graphics without the use of RT, and in most cases the difference is not night and day at first glance.

However (and even though I don't like everything Nvidia does), I think we owe them credit for taking what may be considered the first true step towards finally pushing RT into mainstream games (currently available or coming in the near future), no matter if their solution is expensive AF at the moment, or if there's a bit of confusion regarding the tech behind it, the targeted professional market, and a clear roadmap for both the hardware and the software (APIs, whatever).

Once this door has been opened, things can only improve. Maybe we needed this just to get started at last, even if it looks forced and not very promising from the start.
 
I've said it before. No one should launch next gen machines if they are stuck with barely any RAM increase. I think companies should wait until at least 32GB is viable.

Or they should adopt fast NAND to compensate for the lower RAM amount. Or embrace AMD's HBCC and go with fast NAND + slow RAM + fast RAM.
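A rough sketch of what such a tiered setup implies - every capacity and bandwidth figure below is invented purely for illustration, not a prediction of any console spec:

```python
# How much data each hypothetical memory tier can move in one 33 ms frame.
tiers = {
    "fast RAM (GDDR/HBM-class)": {"capacity_gb": 16, "bandwidth_gb_s": 500},
    "slow RAM (DDR4-class)":     {"capacity_gb": 16, "bandwidth_gb_s": 50},
    "fast NAND (NVMe-class)":    {"capacity_gb": 512, "bandwidth_gb_s": 5},
}

frame_s = 1.0 / 30.0
for name, t in tiers.items():
    per_frame_gb = t["bandwidth_gb_s"] * frame_s
    print(f"{name}: {t['capacity_gb']} GB pool, ~{per_frame_gb * 1024:.0f} MB movable per frame")
```

Whatever does the paging (HBCC-style or otherwise) would have to keep the hot working set in the small fast pool and stream the rest in well ahead of when it's needed.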
 
I'd be OK if we saw developers actually engage in a debate about what's more important: higher resolution and frame rates, or additional effects including RT.

I realize it's not this simple, but it feels like graphics discussion in gaming, especially on consoles, is disproportionately about resolution and frame rate compared to most everything else. Granted, it's easier to discuss, as nearly anyone can grasp the relative differences between 1080p and 4K or 60 versus 30 fps, but at least from my perspective RT and other effects are a bigger interest and more likely to make gaming interesting than resolution increases alone.

I don't think LB is making the case for resolution being the end-all-be-all graphical feature. It's just a damn convenient metric for measuring the cost of RT in current games. If a game that would otherwise run at 4K60 on ultra settings drops to 1080p, and with choppy fps at that, it means RT is taking a considerable hit. Much more than the hypothetical 30% of frame time for SSR mentioned here.
If you are willing to go down the "more pixels vs. prettier pixels" path, fine. Let's sacrifice resolution for graphical features. Let's build a next-gen game targeting 1080p on Turing. What would look best: a game with current-gen assets and effects, apart from the addition of RT reflections and shadows (for just some of the lights, by the way), or one with all the crazy shit devs might be able to do if they built a game from the ground up with that kind of performance baseline, using all the rasterization tricks they love?
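For reference, the raw pixel arithmetic behind that trade-off (simple counting, no assumptions about any particular GPU):

```python
# Per-pixel time budget at a fixed frame rate: 4K has 4x the pixels of 1080p,
# so dropping resolution frees up roughly 4x the budget per pixel.
def per_pixel_budget_ns(width, height, fps):
    return (1e9 / fps) / (width * height)

for label, (w, h) in {"4K": (3840, 2160), "1080p": (1920, 1080)}.items():
    print(f"{label} @ 60 fps: {per_pixel_budget_ns(w, h, 60):.1f} ns per pixel")
# 4K60    -> ~2.0 ns per pixel
# 1080p60 -> ~8.0 ns per pixel
```

The question in the post is what developers would rather spend that extra budget on.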
 
There is probably a PS5 with Navi and a PS5 with Arcturus both in development (if Arcturus is the name of the family).

Which one they green-light will probably depend on a lot of factors.

This is my thinking as well. I think it makes sense to spend the NRE to have contingency plans for the SoC depending on what your competition does. I imagine it would be quite a big deal if one launches in 2019 with a Navi based SoC while the other launches in 2020 with an Arcturus based SoC.

It sounds as if Arcturus could be their “Super-SIMD” architecture.
 
I never said that today's solutions look especially amazing; in fact, I'm the first to say that everything in rasterisation just breaks down here and there, especially with reflections and shadows.

All I - and many others - are saying is that right now the better looking RT solutions suck a whole lot of performance even on stupidly expensive high end Nvidia cards. And that given the performance hit, 'faking' things is OK, for now.

First steps in hardware tend to be dead-ends. XB360 had hardware tessellation - silicon that went to waste. PSP had hardware bezier support - used for squat. These RT cards will be used in offline raytracing and professional imaging. Applications will be optimised for performance, and nVidia will invest in making them faster and better. Pixar will release papers on best-case rendering techniques. Etc. So raytracing doesn't need gaming's involvement this early on to progress it. Putting it out there for game developers to dabble with while greatly limiting long-term performance is a very questionable choice. A console with a limited die budget has to allocate it to where it'll get the best returns. If raytracing at a useful rate requires 100 mm² of die out of a 350 mm² total budget for GPU and CPU, it's too expensive and we can live without it. If RT can be achieved with only 20 mm² of that budget, it's worth doing.

Ultimately, it's about looking at the economics and not just blindly chasing a future. The future will happen, but it cannot be made to happen any earlier than is technologically possible. Consoles can't look to implement RT hardware until that can be done meaningfully within budget.

In my opinion, raytracing will quickly be used during development for baking lightmaps, showing previews and so on, while it will take a little while until raytraced effects end up in mainstream games. The question is when, not if. Within one or two generations it should also be available to the masses. At the moment there is the performance/quality question, which will probably be the focus in the upcoming months. Just as the detail of other effects can be scaled, there will certainly be compromises here as well. Although developers had access to raytracing through the Titan V, they couldn't really tune for the performance characteristics with it. The top engines have much wider distribution, so there are no extra costs for game developers. The professional sector is more interested in path tracing, which is much further away at the moment. Most rasterizers have already been dropped by the professional sector, because they know their clients do not need many fps.

Like the rasterizer, the RT cores are dedicated hardware, but one can also use them to calculate collisions for AI or audio - anything where such spatial queries are needed. Of course, extra units aren't strictly necessary, since raytracing has always been possible on GPUs, but it is clearly slower that way. Dedicated cores can accelerate the rays faster and more efficiently than doing the same with generic compute units. Simply increasing raw performance is not the future, since requirements grow beyond what conventional process improvements can provide. That is why innovative solutions have to be found. Such innovations come at the expense of immediate short-term gains, but one has to have the courage to attempt real progress in the long run. A shrink generation like Pascal is more consumer-friendly, but developers are happy about the big feature catalogue of the new chip.
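As a toy illustration of using the same kind of ray query for non-rendering work - none of this is the actual RTX/DXR interface, and the brute-force sphere test below merely stands in for whatever the hardware traversal would do - here is a line-of-sight check an AI system might make:

```python
import math

def trace_closest_hit(origin, direction, spheres):
    """Return the distance to the nearest sphere hit along the ray, or None."""
    best = None
    for center, radius in spheres:
        oc = [origin[i] - center[i] for i in range(3)]
        b = sum(oc[i] * direction[i] for i in range(3))
        c = sum(x * x for x in oc) - radius * radius
        disc = b * b - c
        if disc >= 0.0:
            t = -b - math.sqrt(disc)
            if t > 1e-4 and (best is None or t < best):
                best = t
    return best

def can_see(npc_pos, target_pos, occluders):
    """AI line-of-sight: the target is visible if nothing blocks the ray first."""
    d = [target_pos[i] - npc_pos[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    d = [x / dist for x in d]
    hit = trace_closest_hit(npc_pos, d, occluders)
    return hit is None or hit >= dist

occluders = [((0.0, 0.0, 5.0), 1.0)]                 # one spherical blocker
print(can_see((0, 0, 0), (0, 0, 10), occluders))     # False: blocked
print(can_see((3, 0, 0), (3, 0, 10), occluders))     # True: clear path
```

Swap the brute-force loop for hardware BVH traversal and that's the kind of speed-up for bulk spatial queries (AI, audio occlusion, physics) the post is describing.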

I don't think LB is making the case for resolution being the end-all-be-all graphical feature. It's just a damn convenient metric for measuring the cost of RT in current games. If a game that would otherwise run at 4K60 on ultra settings drops to 1080p, and with choppy fps at that, it means RT is taking a considerable hit. Much more than the hypothetical 30% of frame time for SSR mentioned here.
If you are willing to go down the "more pixels vs. prettier pixels" path, fine. Let's sacrifice resolution for graphical features. Let's build a next-gen game targeting 1080p on Turing. What would look best: a game with current-gen assets and effects, apart from the addition of RT reflections and shadows (for just some of the lights, by the way), or one with all the crazy shit devs might be able to do if they built a game from the ground up with that kind of performance baseline, using all the rasterization tricks they love?

The pixels/performance/visuals trade-off is subjective, and one can make the same argument for every effect. When UHD/140 fps isn't possible, it isn't worth it for some people, while others will prefer to see the more accurate picture, and so on.
 
In a Swedish article they said Arcturus would be a completely new architecture, with a possible 2020 date.
Sounds as if it’s all potentially a false alarm.


False alarm, folks. The code names are just something we will be using in the open source driver code, so we can publish code supporting new chips for upstream inclusion before launch without getting tangled up in marketing name secrecy.

Trying to avoid the confusion we have today where we publish new chip support as "VEGA10" and then the product gets marketed as "Vega 56" or "Vega 64". In future the new chip support might get published as something like "MOOSE" and then be marketed as something completely different.

The code names are per-chip not per-generation.
 