Predict: Next gen console tech (10th generation edition) [2028+]

I would imagine that it's a lot more tractable to make a good attempt at solving just one or two problems (dynamic world-space specular reflections/soft shadows) rather than trying to solve an entire zoo of problems (GI/AO/translucency/area lights on top of reflections/shadows) with even more cans of worms attached (acceleration structures/noise). For our purposes, some tools (ray tracing) can be overkill for some high-quality rendering effects. We're not looking to match offline-quality rendering, are we?

It's unfortunate that different vendors may have different ideas, but that's the reality when it's becoming increasingly unclear over time whether consoles can continue to follow PCs ...
I'm definitely at the limits of my knowledge in discussing any finer points, so I'll just say that whatever happens, whatever the future ends up being, thanks for answering and providing some interesting talking points.
 
Large on-die caches benefit more than just RT.
Console vendors likely still wouldn't want them either way, so how do you propose they offset this deficit?
AI HW was used for upscaling/antialiasing first, frame generation later, and denoising after that, and the PS5 Pro is coming with AI HW for the first purpose already.
Regardless of the future of ray tracing, does better anti-aliasing alone justify the inclusion of AI HW? (Worse, it probably can't be implemented on a separate die.) One of its other applications (frame generation) also requires computer vision circuits (an optical flow accelerator) as well ...
And with Microsoft pushing NPUs in both PCs and allegedly the next Xbox, more applications for AI HW will be found.
NPUs look like they're even less capable, with a far more constrained programming model. So far their only rendering application is a spatial upscaler like Auto SR, which is a clear downgrade from GPUs with integrated matrix extensions that can be used to improve temporal upscaling ...

AMD's CEO sniped at a Microsoft representative in a recent keynote over how much die area the NPUs were taking up ...
 
Epic being dead set on supporting virtual geometry in their engine, and the rising cost of more advanced digital circuits, are WHY we must look in other directions. The dream of a path-traced future for AAA console-centric games is well behind us, since real-time rendering is a constantly moving target (geometry/materials/other) ...

I would imagine that it's a lot more tractable to make a good attempt at solving just one or two problems (dynamic world-space specular reflections/soft shadows) rather than trying to solve an entire zoo of problems (GI/AO/translucency/area lights on top of reflections/shadows) with even more cans of worms attached (acceleration structures/noise). For our purposes, some tools (ray tracing) can be overkill for some high-quality rendering effects. We're not looking to match offline-quality rendering, are we?

It's unfortunate that different vendors may have different ideas, but that's the reality when it's becoming increasingly unclear over time whether consoles can continue to follow PCs ...

"Pathtracing" is the future simply because it's conceptually simple: just shoot rays. And yes, we're trying to match offline-quality rendering, why wouldn't we? The tricky part is maintaining backwards compatibility, and forwards compatibility, and compatibility with whatever hybrid renderer is happening now, and doing so using designs that have been hyper-refined over decades.

But that's just engineers; engineers want to do engineering, and rays are a conceptually scalable, straightforward way to get to Avatar 2 in realtime. Companies need to appeal to customers primarily, and customers want mobile; the Switch is the best-selling Nintendo console ever.

How do we do both? Probably get rid of black boxes wherever possible. Who needs to deal with giant ubershader register usage in a pathtracer if you already pre-compiled all the materials down to your material parameters in a DXT-compressed texture? That's a programmer trick, not a hardware one. Or who knows what acceleration structure is fastest to traverse and rebuild at the same time? Instead of relying on hardware, let software figure it out; it can get better and better on the same hardware.

On PS4 we went from Killzone Shadow Fall all the way to Horizon Forbidden West, a huge leap. Give software access to as much as possible and just let them spend a decade+ improving how games look on the next generation of consoles. Yes, I've heard "but hardware is faster", with anisotropic filtering given as an example. Except software just improved on that too; just give software the opportunity, and maybe we'll get to mostly pathtracing on mobile hardware from 2026 by 2035.
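The "pre-compiled materials" trick above can be sketched quickly. This is a hypothetical illustration, not anyone's shipping pipeline: evaluate the material graph offline per texel, quantize the resulting parameters to 8 bits per channel (standing in for a DXT/BCn-compressed parameter texture), and have the path tracer read parameters directly instead of executing the ubershader:

```python
import numpy as np

def evaluate_material_graph(u, v):
    """Stand-in for an arbitrary ubershader material graph, evaluated
    offline per texel. Returns (base intensity, roughness, metalness),
    all in [0, 1]. Purely illustrative values."""
    return np.array([0.8, 0.35, 1.0])

def bake_texel(u, v):
    """Quantize material parameters to 8 bits per channel, as a
    BCn-compressed parameter texture would store them."""
    params = evaluate_material_graph(u, v)
    return np.round(np.clip(params, 0.0, 1.0) * 255.0).astype(np.uint8)

def sample_baked(texel):
    """At render time the path tracer reads parameters with a texture
    fetch -- no material graph execution, no giant register footprint."""
    return texel.astype(np.float32) / 255.0

texel = bake_texel(0.5, 0.5)
recovered = sample_baked(texel)
# Quantization error is bounded by half a step (0.5/255 per channel).
assert np.all(np.abs(recovered - np.array([0.8, 0.35, 1.0])) <= 0.5 / 255.0)
```

The point being: per-hit shading work collapses to a texture fetch, and the error introduced is a bounded quantization step rather than a behavioral change.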
 
Regardless of the future of ray tracing, does better anti-aliasing alone justify the inclusion of AI HW ? (worse that it probably can't be implemented as a separate die) One of it's other application (frame generation) also require computer vision hardware circuits (optical flow accelerator) as well ...

The future of inferencing in games is very uncertain at this point but I wouldn't discount it simply based on our current lack of imagination. One of the easiest ways for something to gain traction is to add it to mass market consoles.

RT is simple from a software perspective, but that is absolutely not true on the hardware implementation side, where the leading vendor has all sorts of accelerated HW states to speed up the process, a large on-die cache to efficiently spill to, and AI HW for denoising. Consoles only feature RT as a sort of temporary experiment ...

I would not be so sure of it being a permanent solution, since I don't see how something like SER can be of much use to them when they probably don't want to spend their die area on large caches, and AI HW doesn't have enough rendering applications to justify itself on its own either. RT from a hardware standpoint could not be further from the claimed "simple solution", as we're seeing in practice ...

I'm willing to take on more hardware complexity in exchange for less software complexity if it's more palatable for a specific segment of the industry ...

That hardware complexity will cost transistors too. Why not spend those transistors on RT instead and avoid the unnecessary pit stop on half baked solutions that don’t solve the underlying challenges of light transport?
 
That hardware complexity will cost transistors too. Why not spend those transistors on RT instead and avoid the unnecessary pit stop on half baked solutions that don’t solve the underlying challenges of light transport?
Do you think that real-time rendering will have enough capacity in the future to solve problem sets similar to those seen in modern offline rendering?

What does it matter if we reach this final destination via bespoke paradigms instead of a unified method? Especially when either approach can potentially achieve similar quality for our context?
 
What does it matter if we reach this final destination via bespoke paradigms instead of a unified method? Especially when either approach can potentially achieve similar quality for our context?

There’s no evidence that other approaches can deliver similar results at all or for lower cost. Let’s take the basic visibility query at the heart of all of this - are two world space coordinates visible to each other. Tracing rays through some world space data structure is the obvious and simplest way to answer that question. Attempting to do so via raster is inherently clumsy and in many cases prohibitively expensive.

So I think we should pose the question differently. If we already have a proven, elegant solution for light transport why should the industry waste resources on less capable, less scalable and less usable alternatives?
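The visibility query described above is worth making concrete. A minimal sketch, with a flat list of spheres standing in for a real BVH or grid (all names illustrative):

```python
import numpy as np

def visible(a, b, spheres):
    """Are world-space points a and b mutually visible?
    Traces the segment a -> b against a list of (center, radius)
    occluders. A real renderer would walk a BVH/grid instead of a
    flat list, and handle rays starting inside geometry."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    seg_len = np.linalg.norm(d)
    d /= seg_len
    for center, radius in spheres:
        oc = a - np.asarray(center, float)
        # Ray/sphere intersection: solve |oc + t*d|^2 = r^2 for t.
        half_b = np.dot(oc, d)
        c = np.dot(oc, oc) - radius * radius
        disc = half_b * half_b - c
        if disc < 0.0:
            continue  # ray misses this sphere entirely
        t = -half_b - np.sqrt(disc)
        if 1e-6 < t < seg_len - 1e-6:  # hit strictly between endpoints
            return False
    return True

occluders = [((0.0, 0.0, 0.0), 1.0)]
assert not visible((-3, 0, 0), (3, 0, 0), occluders)  # sphere blocks segment
assert visible((-3, 2, 0), (3, 2, 0), occluders)      # segment passes above
```

The core operation really is this simple; the hardware debate in this thread is about how expensive it is to keep the "spheres" (the acceleration structure) fresh and fast to traverse for dynamic scenes.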
 
"Pathtracing" is the future simply because it's conceptually simple: just shoot rays. And yes, we're trying to match offline-quality rendering, why wouldn't we? The tricky part is maintaining backwards compatibility, and forwards compatibility, and compatibility with whatever hybrid renderer is happening now, and doing so using designs that have been hyper-refined over decades.

But that's just engineers; engineers want to do engineering, and rays are a conceptually scalable, straightforward way to get to Avatar 2 in realtime. Companies need to appeal to customers primarily, and customers want mobile; the Switch is the best-selling Nintendo console ever.

How do we do both? Probably get rid of black boxes wherever possible. Who needs to deal with giant ubershader register usage in a pathtracer if you already pre-compiled all the materials down to your material parameters in a DXT-compressed texture? That's a programmer trick, not a hardware one. Or who knows what acceleration structure is fastest to traverse and rebuild at the same time? Instead of relying on hardware, let software figure it out; it can get better and better on the same hardware.

On PS4 we went from Killzone Shadow Fall all the way to Horizon Forbidden West, a huge leap. Give software access to as much as possible and just let them spend a decade+ improving how games look on the next generation of consoles. Yes, I've heard "but hardware is faster", with anisotropic filtering given as an example. Except software just improved on that too; just give software the opportunity, and maybe we'll get to mostly pathtracing on mobile hardware from 2026 by 2035.
I think in a more perfect universe with fewer deadlines and less constraint for profit, letting software do a lot of the heavy lifting would make sense. In our world, though, you have deadlines, money constraints, and also the limits of human capacity to constantly invent (burnout is real! And so is limited imagination). I think for the next 10 to 15 years hardware-assisted RT with standardised APIs makes sense, so as to have an "easier" way to get quick performance with established production technology (triangles).

Not everyone trying to ship a game will have the mental, physical, temporal, and monetary resources to bootstrap lighting solutions like Epic did with Software Lumen. And even that has issues with quality and performance on target hardware, which complicates development.

There should be room for both, and thankfully with compute shaders it is already there if a developer cannot wait for the APIs/HW to evolve.
 
Yes, I've heard "but hardware is faster", with anisotropic filtering given as an example. Except software just improved on that too; just give software the opportunity,
So IHVs shouldn't have bothered with AF hardware and we should have just waited until now to solve the problem? That kinda proves your point. We need solutions now. Waiting indefinitely on academia isn't realistic. I mean, what if we didn't upgrade our hardware and bring out new consoles/computers until existing hardware was fully maxed out? We are still finding new techniques on 30+ year old hardware!

We can progress rendering with both hardware and software solutions. Software takes longer, if even possible. Ergo, use hardware. And worth noting is we discussed the need for RTRT hardware in the current consoles citing plenty of WIP software alternatives, and these alternatives did not provide the solution we hoped for. In the end, the hardware has definitely proven beneficial. Search the forum for Raytracing topics going back 20 years - realtime raytracing coming real soon now! :runaway:

Realtime solutions are compromised. Software solutions are also compromised.
 
AMD's CEO sniped at a Microsoft representative in a recent keynote over how much die area the NPUs were taking up ...
Yet Microsoft has already demanded that from AMD, and their leaked next Xbox slides already talk about an NPU in Xbox. And AMD folded and did it anyway. It's quite clear to anyone that the industry as a whole is moving into these directions, more AI acceleration and Ray Tracing acceleration. Every mobile GPU is ray tracing compliant now, and NPUs are being added everywhere.

CP2077 is also the game that led CD Projekt RED into adopting UE5
CDPR is going to add path tracing to their UE5 projects, just like Black Myth: Wukong and Desordre. In the span of 4 years, we have 8 path-traced AAA and AA titles, with 3+ more coming this year alone, not to mention path tracing mods (about 8 as well), which are set to explode in number when several RTX projects are finished. That's more than anyone ever imagined.

Reaching path tracing so quickly after the introduction of ray tracing couldn't have happened without the existence of capable hardware; software couldn't have gotten us to this point quickly enough, if ever. I think this is very clear to everyone by now.
 
Yet Microsoft has already demanded that from AMD, and their leaked next Xbox slides already talk about an NPU in Xbox. And AMD folded and did it anyway. It's quite clear to anyone that the industry as a whole is moving into these directions, more AI acceleration and Ray Tracing acceleration. Every mobile GPU is ray tracing compliant now, and NPUs are being added everywhere.
NPUs may make more sense in general-purpose computing devices though. Their value to a console depends entirely on what they can bring to gaming. Unless, I guess, consoles diverge somewhat, such as the next Xbox being a more general-purpose-capable PC, or some built-in video AI feature needing an NPU.
 
NPUs may make more sense in general purpose computing devices though
I guess NPUs in my context can mean either a separate die dedicated to AI, or integrated AI acceleration like the tensor cores in RTX GPUs. According to the leaked Xbox slides, the next Xbox will have one or the other, and the PS5 Pro is rumored to as well, so we are definitely moving in that direction?
 
There’s no evidence that other approaches can deliver similar results at all or for lower cost. Let’s take the basic visibility query at the heart of all of this - are two world space coordinates visible to each other. Tracing rays through some world space data structure is the obvious and simplest way to answer that question. Attempting to do so via raster is inherently clumsy and in many cases prohibitively expensive.
A couple of modern games today still use planar reflections for some assets (mirrors in particular), so I propose that it's not out of the realm of possibility to extend this system more efficiently to multiple planar specular surfaces ...

There are very different costs associated with ray-traced (maintaining/traversing an acceleration structure) vs raster (rendering secondary views) techniques. Your characterization of rasterization only pessimizes its disadvantages, when a more balanced assessment would include its own major advantages, such as good spatial locality and a much simpler hardware implementation ...
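For context on the planar-reflection side of this trade-off: that path amounts to mirroring the camera across each mirror plane and rasterizing one extra view per plane. A minimal sketch of the reflection transform, under the usual plane convention n·x + d = 0 (names are illustrative):

```python
import numpy as np

def reflection_matrix(n, d):
    """4x4 matrix reflecting points across the plane n.x + d = 0
    (n must be unit length): p' = p - 2*(n.p + d)*n. Used to mirror
    the camera before rasterizing the secondary reflection view."""
    n = np.asarray(n, float)
    m = np.eye(4)
    m[:3, :3] -= 2.0 * np.outer(n, n)   # direction part: I - 2*n*n^T
    m[:3, 3] = -2.0 * d * n             # translation part for offset planes
    return m

# Floor mirror in the plane z = 0: n = (0, 0, 1), d = 0.
M = reflection_matrix((0.0, 0.0, 1.0), 0.0)
cam = np.array([1.0, 2.0, 3.0, 1.0])    # camera position (homogeneous)
assert np.allclose(M @ cam, [1.0, 2.0, -3.0, 1.0])
# Rendering the scene from the mirrored camera (with an oblique clip
# plane at the mirror) yields the reflection texture.
```

This is also where the raster cost scales: each additional planar surface needs its own mirrored secondary view, which is exactly the "rendering secondary views" cost mentioned above.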
So I think we should pose the question differently. If we already have a proven, elegant solution for light transport why should the industry waste resources on less capable, less scalable and less usable alternatives?
Is it really "less usable" when we have real applications out in the wild rasterizing a low number of geometry passes? And if the elegant solution turns out to be more complex on the hardware side, then maybe, just maybe, we ought to reframe the entire problem statement for real-time rendering ...

This phenomenon of real-time rendering following/bridging toward offline rendering has been the *exception* in recent history compared to the past, and we should not assume that this trend will somehow continue indefinitely. There'd be more freedom for real-time rendering to evolve if it were allowed to diverge from offline rendering, because I strongly suspect that they do NOT have the same problem sets. Offline rendering doesn't really care about the hardware implementation at hand, so it'll continue to evolve long past the end of hardware advancements. Real-time rendering, on the other hand, DOES care about the hardware implementation, and that has profound effects on which solutions can be explored in its problem space ...
 
The next consoles will probably get much more powerful NPUs. The 45 TOPS in currently released laptops is only sufficient for simpler calculations, so you should not start from this value when it comes to the next AI-based games.

Just some data. In principle, the Xbox Series X can do 97 TOPS in 4-bit integer, but that is a figure that consumes the entire GPU capacity of the console, so it's not a meaningful number on its own. At the same time, current PC GPUs quote theoretical values of 500-1000 TOPS, but it's also a question whether that is achievable alongside graphics rendering, or only when the GPU's full capacity is used for such calculations without graphics. I suspect it's the latter, so these numbers alone are not worth much.

Currently, it is not known what can be achieved from an NPU capable of, say, 1000 or 2000 TOPS in a console.
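For what it's worth, the 97 TOPS figure falls straight out of packed math applied to the GPU's FP32 rate, which is exactly why it consumes the whole GPU: it's the same ALUs. A quick check, assuming the published Series X specs (52 CUs, 64 lanes each, FMA counted as 2 ops, 1.825 GHz) and RDNA 2's 4x int8 / 8x int4 packed-math rates:

```python
# Derive the oft-quoted Xbox Series X throughput figures from packed math.
cus, lanes, fma_ops, clock_ghz = 52, 64, 2, 1.825

fp32_tflops = cus * lanes * fma_ops * clock_ghz / 1000.0  # ~12.15 TFLOPS
int8_tops = fp32_tflops * 4   # RDNA 2 runs int8 dot products at 4x FP32
int4_tops = fp32_tflops * 8   # and int4 at 8x FP32

assert round(fp32_tflops, 2) == 12.15
assert round(int8_tops) == 49
assert round(int4_tops) == 97   # matches the figure quoted above
```

So any TOPS spent on inference on such a design is TOPS taken directly away from shading, unlike a separate NPU, which is the crux of the die-area debate in this thread.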
 
Currently, it is not known what can be achieved from an NPU capable of, say, 1000 or 2000 TOPS in a console.
Currently it's not really known what any amount of TOPS or POPS can bring to console gaming. If it can be leveraged for things like physics solvers, it might have a lot of tangible benefits. Presently I think ML methods are limited to upscaling.
 
Currently it's not really known what any amount of TOPS or POPS can bring to console gaming. If it can be leveraged for things like physics solvers, it might have a lot of tangible benefits. Presently I think ML methods are limited to upscaling.

It can’t be anything radical that will preclude cross platform or PC versions. E.g. I can imagine a world where difficulty levels map to different trained DL networks that govern enemy behavior. However that sort of core dependency only works if it’s supported on every target platform.
 
Console vendors likely still wouldn't want them either way, so how do you propose they offset this deficit?
Delivering a massive performance gain (for both rasterization and ray tracing) over the PS5 Pro will likely require a big cache, a big memory bus, or GDDR7+/GDDR8. They have to pick their poison. Big memory buses also take up die area, and it's anyone's guess whether sufficiently fast GDDR will be available at an acceptable price and quantity when the next-gen consoles launch.
Regardless of the future of ray tracing, does better anti-aliasing alone justify the inclusion of AI HW? (Worse, it probably can't be implemented on a separate die.) One of its other applications (frame generation) also requires computer vision circuits (an optical flow accelerator) as well ...
Anti-aliasing alone? No. Anti-aliasing and upscaling together? Yes. Dedicated checkerboard rendering HW on the PS4 Pro justified itself, and so does AI HW for upscaling on the PS5 Pro. FSR 3.0 proved that frame generation is possible with neither AI HW nor an OFA. It should be possible to create a solution that leverages AI HW to provide higher quality than FSR 3.0 without needing an OFA.
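As a toy illustration of why frame generation doesn't strictly require an OFA: with game-supplied motion vectors, you can already warp the previous frame halfway toward the current one. This deliberately ignores disocclusion, blending, and flow refinement, which FSR-3-style solutions handle in plain compute shaders (all names here are illustrative):

```python
import numpy as np

def interpolate_midpoint(prev, motion):
    """Synthesize an in-between frame by warping the previous frame
    along half of the game-supplied motion vectors. motion[y, x] is
    (dy, dx): the per-pixel displacement from `prev` to the current
    frame. Toy nearest-neighbor gather; no occlusion handling."""
    h, w = prev.shape
    out = np.zeros_like(prev)
    for y in range(h):
        for x in range(w):
            dy, dx = motion[y, x]
            sy = min(max(y - dy // 2, 0), h - 1)  # clamp to frame bounds
            sx = min(max(x - dx // 2, 0), w - 1)
            out[y, x] = prev[sy, sx]
    return out

prev = np.zeros((4, 4), dtype=np.uint8)
prev[0, 0] = 255                      # a bright object at x = 0
motion = np.zeros((4, 4, 2), dtype=np.int64)
motion[..., 1] = 2                    # everything moved 2 px to the right
mid = interpolate_midpoint(prev, motion)
assert mid[0, 1] == 255               # object appears halfway, at x = 1
```

The hard parts (disoccluded regions where no motion vector is valid, UI, transparency) are where optical flow or AI comes in, but nothing here fundamentally demands a fixed-function OFA.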
 
There are very different costs associated with ray-traced (maintaining/traversing an acceleration structure) vs raster (rendering secondary views) techniques. Your characterization of rasterization only pessimizes its disadvantages, when a more balanced assessment would include its own major advantages, such as good spatial locality and a much simpler hardware implementation ...

What's the incremental hardware cost of accelerating rasterization of secondary views? Rasterization is fundamentally limited to 2D viewports. Doing anything interesting that's not facing the main camera gets expensive very fast - i.e. it's not scalable. I'm not being pessimistic but I don't share your optimism that there's some magic raster fix for that fundamental flaw.

Is it really "less usable" when we have real applications out in the wild rasterizing a low number of geometry passes? And if the elegant solution turns out to be more complex on the hardware side, then maybe, just maybe, we ought to reframe the entire problem statement for real-time rendering ...

We're already seeing good progress and adoption of half-assed (current console) implementations of the elegant solution. Why don't you think that real investment in RT in the next console generation will deliver the goods? As usual in these debates you're diminishing real tangible progress while offering vague hypotheticals as an alternative.

This phenomenon of real-time rendering following/bridging toward offline rendering has been the *exception* in recent history compared to the past, and we should not assume that this trend will somehow continue indefinitely. There'd be more freedom for real-time rendering to evolve if it were allowed to diverge from offline rendering, because I strongly suspect that they do NOT have the same problem sets. Offline rendering doesn't really care about the hardware implementation at hand, so it'll continue to evolve long past the end of hardware advancements. Real-time rendering, on the other hand, DOES care about the hardware implementation, and that has profound effects on which solutions can be explored in its problem space ...

I don't think it has anything to do with chasing offline rendering. Even if we just look at practical performance and cost constraints of real-time use cases, raytracing (bvh, sdf etc) is still the most obvious answer for visibility calcs.
 
The future of inferencing in games is very uncertain at this point but I wouldn't discount it simply based on our current lack of imagination. One of the easiest ways for something to gain traction is to add it to mass market consoles.

Inferencing is pretty certain; it's very useful as a speedup for animation graphs and samples, which can be compressed as well. Not sure if it's shipping yet, but I'm pretty certain that if it's not, it will be with the launch of AC Shadows this year. Speaking of compression, deep learning is excellent at it, and compressing/uprezzing 4K textures with a neural net is already shipping. Upscalers themselves (for final output) don't look like they're going anywhere fast, and they're all moving towards inference.

Finally there's voice, which is so popular with gamedevs for an overwhelming number of reasons that if the voice actors don't want to sign any AI contracts, I'm pretty sure most devs will just go non-union instead. The current tech demos of LLM-controlled NPCs are overwhelmingly stupid, but that doesn't mean AI voice variations in lines and pitch and so on couldn't ship today, with more and more as time goes on. It's way too valuable an idea to pass up.

I'm certain NPUs will be used by triple-A games in the future as long as there's a standard way to program for them.
 
Delivering a massive performance gain (for both rasterization and ray tracing) over the PS5 Pro will likely require a big cache, a big memory bus, or GDDR7+/GDDR8. They have to pick their poison. Big memory buses also take up die area, and it's anyone's guess whether sufficiently fast GDDR will be available at an acceptable price and quantity when the next-gen consoles launch.
They'd rather not have huge amounts of SRAM cells taking up a significant chunk of the die area, since the last time that was attempted it was part of the reason a AAA game console vendor lost the power race early on. They'll just overclock their main memory next time, since that's the only proven "good balance" in terms of cost so far. Maybe they'll consider chiplet memory caches on older process technologies, since there's more potential for 'alternative' foundries to be competitive, but that still won't be good enough for ray tracing ...
Anti-aliasing alone? No. Anti-aliasing and upscaling together? Yes. Dedicated checkerboard rendering HW on the PS4 Pro justified itself, and so does AI HW for upscaling on the PS5 Pro. FSR 3.0 proved that frame generation is possible with neither AI HW nor an OFA. It should be possible to create a solution that leverages AI HW to provide higher quality than FSR 3.0 without needing an OFA.
Upscaling is just a secondary objective for integrated AI HW, and I wouldn't compare it to the PS4 Pro's "ID buffer", which takes up FAR LESS die space than integrated AI HW does these days ...
What's the incremental hardware cost of accelerating rasterization of secondary views? Rasterization is fundamentally limited to 2D viewports. Doing anything interesting that's not facing the main camera gets expensive very fast - i.e. it's not scalable. I'm not being pessimistic but I don't share your optimism that there's some magic raster fix for that fundamental flaw.
I'm not expecting any "magic fix", but there are obvious HW implementation limits being run into with a unified solution, so I suggest we split the giant mess of a problem that is light transport into its smaller/simpler subsets ...
We're already seeing good progress and adoption of half-assed (current console) implementations of the elegant solution. Why don't you think that real investment in RT in the next console generation will deliver the goods? As usual in these debates you're diminishing real tangible progress while offering vague hypotheticals as an alternative.
I wouldn't obsess over a temporary experiment, and your idea of "real investment" probably involves implementing a circus clown show of more special HW states, large on-die caches, and an AI denoiser; none of those are hypotheticals. My recommendation of a divide-and-conquer method can't possibly be any worse than what we're already seeing right now ...
I don't think it has anything to do with chasing offline rendering. Even if we just look at practical performance and cost constraints of real-time use cases, raytracing (bvh, sdf etc) is still the most obvious answer for visibility calcs.
But CLEARLY some world-space data structures are more tailored towards a unified approach (BVH) than others (SDFs), which strongly encourages finding other solutions to unresolved problems, so it's a bit nebulous to group/classify them together ...

As attractive an idea as it is to extend our models, if many hardware designs in the future can't keep up then we have to try to exhaust every other option out there ...
 