Oh look, you've deleted my fully technical reply again, calling it a "market share discussion".
I don't think that we'll be going in the direction of magic optimization techniques making RT lighter with time at all. Even if (and that's a big "if" at this point in time) such optimizations do happen in some cases, whatever performance you gain from them will most likely end up being spent on adding more RT into the frame.
And yet, that's the exact direction that UE5 is going. And if reports are to be believed, roughly half of all AAA games in development now are on UE5, with most if not all of them using consoles as the primary development target.
And that is unlikely to change until the next generation of consoles arrive with more performant RT.
Regards,
SB
I don't think that we'll be going in the direction of magic optimization techniques making RT lighter with time at all. Even if (and that's a big "if" at this point in time) such optimizations do happen in some cases, whatever performance you gain from them will most likely end up being spent on adding more RT into the frame.
We are already in a situation with a varying ratio of games using RT "lightly" and games using RT "heavily" - but all of them will end up using RT eventually, and if the last several years are anything to go by, the number of "heavy" RT titles will increase with time, not diminish.
And yet, that's the exact direction that UE5 is going.
It is? Last time I checked, UE5 was in fact adding h/w RT features, not removing them.
And if reports are to be believed, roughly half of all AAA games in development now are on UE5, with most if not all of them using consoles as the primary development target.
Which means what for the PC market and PC GPUs exactly? Some of the most demanding RT games on the market right now are using UE4 - while their console versions don't use RT at all.
My comment was about the relative balance of ray casting vs shading workloads. As RT games get "smarter" it seems there will be less emphasis on casting rays and more emphasis on importance sampling, temporal and spatial reuse etc. This means Nvidia's (and Intel's) raw ray casting advantage could be mitigated in future RT titles.
I mean we already have some such titles and a) they don't really mitigate anything, just show a lower comparative advantage, b) it's not like AMD has a FLOPs advantage either and it's not clear why they would have it in the future, and c) these things could be a great fit for AI which, again, well...
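For context on the "temporal and spatial reuse" bit in the quote: this is roughly what ReSTIR-style resampling does. A minimal sketch, assuming one reservoir per pixel and a made-up target_pdf; the names and structure are illustrative assumptions, not any particular engine's code.

```python
import random

# Minimal sketch of ReSTIR-style sample reuse: each pixel keeps one reservoir,
# and instead of tracing many rays it merges candidates from this frame, the
# previous frame and neighbouring pixels via weighted reservoir sampling.
class Reservoir:
    def __init__(self):
        self.sample = None   # the one surviving light sample
        self.w_sum = 0.0     # running sum of resampling weights
        self.m = 0           # how many candidates have been seen

    def update(self, sample, weight):
        # Stream one candidate through the reservoir.
        self.w_sum += weight
        self.m += 1
        if self.w_sum > 0.0 and random.random() < weight / self.w_sum:
            self.sample = sample

    def merge(self, other):
        # Temporal/spatial reuse: treat a neighbour's (or last frame's)
        # reservoir as if its whole candidate history had been streamed in.
        if other.sample is None:
            return
        weight = target_pdf(other.sample) * other.contribution_weight() * other.m
        m_before = self.m
        self.update(other.sample, weight)
        self.m = m_before + other.m

    def contribution_weight(self):
        # Simplified unbiased contribution weight W (visibility terms omitted).
        if self.sample is None or self.m == 0:
            return 0.0
        p_hat = target_pdf(self.sample)
        return 0.0 if p_hat == 0.0 else self.w_sum / (self.m * p_hat)

def target_pdf(sample):
    # Placeholder target function, e.g. unshadowed light contribution at the pixel.
    return sample.get("contribution", 1.0)
```

The point the quote is making: the rays actually cast per pixel can stay low while the effective sample count grows through reuse, which shifts cost from ray casting towards shading and resampling math.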
The end game is still full path tracing after all.
Never gonna happen unless someone magically finds a way to revive Moore's law (and fucks a hole in the memory wall while we're at it).
It is? Last time I checked, UE5 was in fact adding h/w RT features, not removing them.
Nah. Your hypothetical calculations hold no weight. If the HW-RT mode could be faster by using lower quality similar to SW-Lumen, Epic sure as hell would've done that in the high scalability mode, which targets 60 fps on consoles. But instead they are going to use SW-RT mode.
They do improve their s/w Lumen path as well but that's expected - and the results are also expected, with the h/w path being either faster or resulting in higher quality.
So in the context of this discussion you're likely looking at a hypothetical 2x faster non-RT GPU ending up at the same speed/quality in UE5 as one half as complex with RT h/w.
Which then means that it would again be 2x faster at the same complexity.
Sorry about the purely made up math of course but you get the idea.
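To make that made-up math slightly more concrete, here's a toy calculation; every number below (budget split, RT speedup, tracing share of the frame) is an illustrative assumption, not a measurement.

```python
# Toy version of the purely made-up math above: a small slice of the transistor
# budget spent on RT hardware vs spending the same slice on more FP32 SIMDs,
# once a large chunk of the frame is ray tracing.

budget = 100.0          # arbitrary transistor-budget units
rt_block_cost = 10.0    # units given up for dedicated RT hardware (assumption)
rt_hw_speedup = 4.0     # HW vs compute-based tracing per SIMD unit (assumption)
tracing_share = 0.5     # fraction of the frame spent tracing (assumption)

def frame_time(simd_units, rt_speedup):
    # Time is proportional to work / throughput; the all-SIMD GPU defines 1.0.
    shade = (1.0 - tracing_share) * (budget / simd_units)
    trace = tracing_share * (budget / (simd_units * rt_speedup))
    return shade + trace

no_rt_hw = frame_time(budget, 1.0)
with_rt_hw = frame_time(budget - rt_block_cost, rt_hw_speedup)
print(f"relative frame time without RT hw: {no_rt_hw:.2f}")
print(f"relative frame time with RT hw:    {with_rt_hw:.2f}")
```

Under these assumptions the GPU that traded ~10% of its budget for RT units finishes the frame in about 0.69x the time of the all-SIMD one, which is the shape of the argument above.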
Nah. Your hypothetical calculations hold no weight. If the HW-RT mode could be faster by using lower quality similar to SW-Lumen, Epic sure as hell would've done that in the high scalability mode, which targets 60 fps on consoles. But instead they are going to use SW-RT mode.
We haven't really seen this "60 fps" mode in action though, have we? Judging from the description in the UE docs it will have severe quality cutbacks compared to both the "30 fps s/w" mode and the RT h/w mode, so if it's cut back beyond what people would even expect from RT then there's little point in using h/w RT for it.
Never gonna happen unless someone magically finds a way to revive Moore's law
Bold claims with nothing to back them up.
Bold claims with nothing to back them up.
I'd say it's the exact opposite - because of scaling being "dead", the idea of putting 2/3/4X more FP32 SIMDs into a chip is what's "never gonna happen", while opting for a "smart" dedicated h/w doing the same thing for a fraction of the complexity (and power, likely) will.
There are actually path tracing titles in the works as well as already productized games.
Yea bro, making 20yo titles shiny is surely the best application of very limited xtor budgets.
such games, mods, remasters
Tech demos. NV loves throwing those around (remember that car sim with SMP haha).
while opting for a "smart" dedicated h/w
That's like half your GPU already.
Yea bro, making 20yo titles shiny is surely the best application of very limited xtor budgets.
Cyberpunk, Portal with RTX, Justice, Minecraft with RTX are all new games with modern content (yes, even Minecraft when pushed to the limits), so nothing 20yo in the list.
No one's seriously playing q2rtx.
Many play it, as well as myriads of other modded games, out of nostalgia or for many other reasons.
That's like half your GPU already.
Yes, because of Dennard scaling - and as long as it doesn't scale as well as the number of xtors, there will always be extra xtors left for energy efficient stuff.
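A rough sketch of the dark-silicon arithmetic behind that argument; the scaling factors are assumptions picked for illustration, not real node data.

```python
# Back-of-the-envelope dark-silicon arithmetic: if a shrink gives 2x the
# transistors but per-transistor switching power only improves ~1.4x, you can't
# toggle all of them inside the old power budget - the leftover "dark"
# transistors are what gets spent on rarely-active, energy-efficient blocks.

transistor_gain = 2.0        # transistors per die after the shrink (assumption)
power_gain_per_xtor = 1.4    # switching-power improvement per transistor (assumption)

active_fraction = power_gain_per_xtor / transistor_gain
print(f"share of the new transistors you can keep switching: {active_fraction:.0%}")
print(f"share left over for dedicated hardware:              {1.0 - active_fraction:.0%}")
```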
there will always be extra xtors left
The what.
That's like half your GPU already.
Surely AMD and others must see that working smarter is the only way forward. Even if we manage to scale down several nodes below 1nm, you approach an insurmountable problem of cooling. We are talking about generating power at watts per cm^2 greater than nuclear power output.
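For a sense of scale, a back-of-the-envelope power-density check; the wattage and die area below are illustrative assumptions, not any specific product.

```python
# Rough power-density check: averaging an assumed board power over an assumed
# die area already lands in the tens-of-W/cm^2 range, and hotspots run far
# hotter - which is the cooling wall the post is pointing at.

board_power_w = 450.0    # assumed total board power
die_area_mm2 = 600.0     # assumed large monolithic die
area_cm2 = die_area_mm2 / 100.0

print(f"average die power density: {board_power_w / area_cm2:.0f} W/cm^2")
```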
Again, each xtor gonna cost more now for every upcoming node.
Surely AMD and others must see that working smarter is the only way forward
Those are SIMD machines, there's only so much you can do to make 'em smarter.
you approach an insurmountable problem of cooling.
Cooling isn't really the biggest issue (particularly for GPUs which, unlike, say, DC CPUs, spend relatively little power on driving their I/O) given that we're making fairly huge dies or huge-area MCPs these days.
The what.
Sure, we have. 8nm Ampere will be as fast as RDNA3 on 5nm with ray tracing. That is basically twice the number of transistors usable to spend on other things - like L2 cache, higher clock rates, more Tensor Cores etc.
We don't have extra xtors left anymore since each one costs more now.
RT scales logarithmically with triangle counts, so it's way easier to push it for billions of micro triangles in real-time, no need for complex LODs and other voodoo.
The logarithmic scaling is nice, but it does not change how expensive tracing is regardless.
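For reference, the "logarithmic" part boils down to BVH depth growing with log2 of the triangle count; a quick sanity check, assuming an idealised balanced BVH (which real builders only approximate).

```python
import math

# An idealised balanced BVH adds roughly one traversal level each time the
# triangle count doubles, so going from millions to billions of triangles only
# adds ~10 steps per ray - but, as said above, each step (plus the intersection
# tests) is still paid by every single ray.

for tris in (1_000_000, 100_000_000, 1_000_000_000):
    depth = math.log2(tris)   # idealised depth with one triangle per leaf
    print(f"{tris:>13,} triangles -> ~{depth:.0f} traversal steps per ray")
```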
It's just Epic's choice, for reasons which we don't know yet in full.
I think we do know those reasons quite well:
1. Epic does not choose, it's the dev who decides which path to use for what. Currently we just still need an alternative to HW RT anyway.
2. We know HW RT is slower with overlapping instances (natural scenes) but faster with non overlapping instances (technical scenes).
3. HW RT can't support Nanite or any other fine grained LOD solution due to API design flaws.
The end game is still full path tracing after all.
Yes, assuming end game means game over for the gaming industry, committing suicide by requesting unaffordable HW from anyone.
I think we do know those reasons quite well
We don't. You guessing is not us knowing.
1. Epic does not choose
Epic chooses what they implement for which feature/performance level.
it's the dev who decides which path to use for what
This is true, and it will most likely be possible to still get 60 fps while using RT h/w on modern consoles in UE5 - with a different set of limitations than those of a 60 fps non-h/w RT mode. It will be interesting to see how many developers will end up choosing such an option instead of going with Epic's "recommended" one.
Currently we just still need an alternative to HW RT anyway.
Do we? UE5 needs it because it targets mobile platforms, where RT is in its infancy and will most likely remain so for some time. On PC we really don't need a non-RT path for games coming out in 2024+. On consoles we don't need it even now.
2. We know HW RT is slower with overlapping instances (natural scenes)
We know that it's slower in scenes designed as many overlapping instances, and we also know that this is an engine world design issue, not a h/w RT issue.
but faster with non overlapping instances (technical scenes)
Yep.
3. HW RT can't support Nanite or any other fine grained LOD solution due to API design flaws.
Neither can s/w RT in UE5, as it's using SDF simplification for that. And h/w RT likely can support Nanite, but it would have to be implemented differently.
What will happen (and already did) is that game devs call their caching tech 'Path Tracing', just for the sake of using a marketing buzz word.
Nowhere does a "marketing buzz word" imply the number of RPPs you need to be compatible with it. You can shoot one ray per scene for all anyone cares, and if your "caching scheme" lets you reconstruct the whole scene from that one ray at something close to full detail - then it's still PT.