Predict: Next gen console tech (10th generation edition) [2028+]

Those social media contributors are a tiny minority of the people who buy consoles. History has shown time and again that a console doesn't have to be the most powerful to sell best, and chasing the power crown doesn't inevitably lead to a significant sales gain. Even this gen, the PS5 is notably underpowered, but the market doesn't seem to care much! Getting a significant power advantage now costs a fortune in extra hardware. I think every console company should pursue the most efficient hardware design, a careful balancing act to land right on the optimal cost/benefit sweet spot.
It all matters. It's PR. I think just because Sony and Nintendo get by with less power doesn't mean Xbox has to follow suit. Power is part of the Xbox brand. Own it, MS! $100 more per unit on the APU could do wonders.
 
Is it better to spend more transistors on ray tracing and AI upscaling rather than traditional rasterization?

Let's say the PS6 has 30 TFLOPS, next-gen upscaling, and 3x the RT performance of the PS5 Pro.
 
Is it better to spend more transistors on ray tracing and AI upscaling rather than traditional rasterization?

Let's say the PS6 has 30 TFLOPS, next-gen upscaling, and 3x the RT performance of the PS5 Pro.
It's all about hitting the right sweet spot. Right now, the PS5 Pro focusing (supposedly) on improved RT and AI upscaling is the right move. Who knows what the next big thing in graphics will be by the time of the PS6, but RT and AI upscaling will still be very important in 3-4 years.
 
It all matters. It's PR. I think just because Sony and Nintendo get by with less power doesn't mean Xbox has to follow suit. Power is part of the Xbox brand. Own it, MS! $100 more per unit on the APU could do wonders.
This is also why I think they are making their next APU with Intel. Intel's local production lines probably offer more favorable conditions for manufacturing the processor more cheaply than TSMC, so they can provide the best technology at a relatively favorable price. MS has an order with Intel for "an unnamed" processor, which will be based on the 18A node and aimed specifically at the high-end market, and the processor is MS's own design. This could be the APU of the new Xbox.
 
This is also why I think they are making their next APU with Intel. Intel's local production lines probably offer more favorable conditions for manufacturing the processor more cheaply than TSMC, so they can provide the best technology at a relatively favorable price. MS has an order with Intel for "an unnamed" processor, which will be based on the 18A node and aimed specifically at the high-end market, and the processor is MS's own design. This could be the APU of the new Xbox.
Consoles in general should abandon TSMC; I'm sure Samsung and Intel would accept low margins for a console contract. It would keep those foundries going for years.
 
Is it better to spend more transistors on ray tracing and AI upscaling rather than traditional rasterization?

Let's say the PS6 has 30 TFLOPS, next-gen upscaling, and 3x the RT performance of the PS5 Pro.

Yes, next gen has to be all-in on ray tracing, otherwise you're just going to end up with a PS5 Pro++.
 
Yes, next gen has to be all-in on ray tracing, otherwise you're just going to end up with a PS5 Pro++.
I suspect this community is going to find out in the most gruesome way that a generalized approach (ray tracing) to real-time graphics is going to hit the wall very quickly in terms of cost ...
 
Power efficiency is the one thing that's still genuinely advancing in silicon. A Series S has all of 20 CUs at 1.5 GHz; in like two months you'll be able to buy an RDNA 3.5 laptop with 16 CUs at >1.8 GHz clock speeds, so the same thing in terms of "teraflops" at 15 W this year.

CPU as well, relatively speaking. Zen 5c at 2.2 GHz should have the IPC to match Zen 2 at 3.5 GHz for gaming purposes. ROG Ally 2 at Series S-level performance, here we go!
The Z1 Extreme in a lot of handhelds is an 8-core Zen 4 at a 3.3 GHz base clock, boosting up to 5.1 GHz on a single core.

The Z1 Extreme also has a Radeon 780M, which is a 12-CU RDNA 3 GPU clocked up to 2.7 GHz. So you are already pretty close.

You just need better cooling and better power efficiency.

I agree, a Zen 5c + RDNA 4 combo might get us there.
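
For anyone who wants to sanity-check the "teraflops" talk above, here's a minimal sketch using the classic peak-FLOPS formula (CUs x 64 lanes x 2 FLOPs per FMA x clock, ignoring RDNA 3's dual-issue); the clock figures are the ones quoted in this thread, and the CPU line just back-solves the IPC ratio implied by the Zen 5c vs Zen 2 comparison:

```python
def gpu_tflops(cus, clock_ghz, lanes_per_cu=64, flops_per_lane=2):
    """Peak FP32 throughput: CUs x lanes x (FMA = 2 FLOPs) x clock."""
    return cus * lanes_per_cu * flops_per_lane * clock_ghz / 1000

print(f"Series S (20 CU @ 1.565 GHz):      {gpu_tflops(20, 1.565):.2f} TFLOPS")
print(f"Laptop RDNA 3.5 (16 CU @ 1.8 GHz): {gpu_tflops(16, 1.8):.2f} TFLOPS")

# CPU side: if Zen 5c at 2.2 GHz really matches Zen 2 at 3.5 GHz,
# the implied per-clock (IPC) uplift is just the clock ratio.
print(f"Implied IPC uplift: {3.5 / 2.2:.2f}x")  # ~1.59x
```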
 
I suspect this community is going to find out in the most gruesome way that a generalized approach (ray tracing) to real-time graphics is going to hit the wall very quickly in terms of cost ...
Kind of. If we keep trying to push towards path tracing in everything, yeah. But I do expect ray tracing to get cemented as fairly standard in the next generation. Current-style RTGI, for example, will ultimately take up a smaller percentage of the overall graphics budget. Of course, if you pile on ray traced effects, raise their quality, or go with full path tracing, then yes, it can easily start to chew through any graphics budget again, but consoles are always about making smart decisions given the limited hardware you've got. So as said above, both in terms of hardware and for developers, it's going to be about finding a sweet spot between traditional and ray traced rendering. Both will be important, though I think it's reasonable to assume ray tracing will gain more traction. Even in this gen, I expect RTGI at least to become increasingly common in big AAA titles. By the time we have consoles far better suited to RT than RDNA 2, it should be feasible for it to become fairly standard.

And from there, I think it'll be a slow march of increasing RT/PT effects in games, because they'll simply be the best way to tackle certain things. There'll still be plenty of scope outside these areas to improve graphics a lot, too.

Diminishing returns are going to be a thing, as they always are, but I don't think we're anywhere near hitting a 'wall' in terms of what can practically be done with the levels of improvement we're still getting in hardware.

What kind of revolution are you thinking is needed, specifically?
 
What kind of revolution are you thinking is needed, specifically?
I'm thinking we bring back exotic hardware features, but this time around we make more pragmatic decisions with them ...

If our entire screen consists of multiple view captures for planar reflections with no support for recursion, there's no reason we shouldn't be able to retain a bounded 2x G-buffer/shading cost, because we're only rendering a total of 2x the pixels in the worst case. Rasterizing multiple views, however, could easily involve more than 2x the geometry work of rasterizing the main scene. To support multiple planar reflections more easily, we could introduce hardware support for multi-view rasterization and for output generation of irregular data structures, so that we can have irregularly shaped/sparse G-buffers at the correct resolution with lower memory consumption ...
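
As context for the cost argument above, here's a minimal sketch of the standard planar-reflection setup (nothing vendor-specific assumed): each mirror re-renders the scene through a camera reflected about the mirror plane, which is exactly why pixel/shading cost stays bounded by screen coverage while geometry cost grows with every extra view:

```python
import numpy as np

def reflection_matrix(n, d):
    """Householder reflection about the plane n.x + d = 0 (n unit length).
    Points map as x' = x - 2(n.x + d) n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    M = np.eye(4)
    M[:3, :3] -= 2.0 * np.outer(n, n)   # I - 2nn^T
    M[:3, 3] = -2.0 * d * n             # translation for off-origin planes
    return M

# Mirror pass: render with view' = view @ reflection_matrix(...).
# A point 2 units above a floor at y = 0 lands 2 units below it:
R = reflection_matrix([0.0, 1.0, 0.0], 0.0)
print(R @ np.array([0.0, 2.0, 0.0, 1.0]))   # -> [ 0. -2.  0.  1.]
```

With k on-screen mirrors the shaded pixels are bounded at (k+1)x the frame, but each mirror re-submits the scene's draw calls, which is the >2x geometry problem that multi-view rasterization hardware would target.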

"Portal reflections" are an even easier case since the 'reflection' (duplicated level geometry) is already included in the main scene (1x geometry/G-buffer/shading cost) ... ("render-to-texture methods" aren't needed)

Rendering soft shadows today is still a pretty hard problem due to the current limitation of shadow maps accounting for only a single occluder (which can lead to light leaking), so it would be very useful to have hardware support for data structures that can represent multiple occluders (not necessarily linked lists) ...
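
To make the single-occluder limitation concrete, here's a toy sketch of a per-texel occluder list (a crude take on deep shadow maps with a PCSS-style penumbra heuristic); it's purely my own illustration, and `light_size` and the falloff are made-up parameters, not a proposed hardware design:

```python
def soft_shadow(occluder_depths, receiver_depth, light_size=0.5, eps=1e-3):
    """Visibility from a per-texel list of occluder depths. A classic
    shadow map keeps only min(occluder_depths), so filtering across
    texels reasons about the wrong surface and leaks light; keeping
    several layers sidesteps that."""
    blockers = [z for z in occluder_depths if z < receiver_depth - eps]
    if not blockers:
        return 1.0                                  # fully lit
    avg = sum(blockers) / len(blockers)
    # PCSS-style heuristic: penumbra widens with the blocker-receiver gap.
    penumbra = light_size * (receiver_depth - avg) / avg
    return max(0.0, 1.0 - penumbra)

# Two stacked occluders over one texel; a single-depth map would only
# ever "see" the nearest one at z = 2.0.
print(soft_shadow([2.0, 4.5], receiver_depth=5.0))  # ~0.73, partially lit
```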

Depending on how much more memory we have next generation, we could significantly increase the resolution of SDFs or other volumetric spatial data structures with a 32 GB pool of memory. They'd be used as fallbacks for curved/recursive reflections and world-space AO ...
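
For a rough sense of what a 32 GB pool buys, a quick back-of-the-envelope on dense SDF storage (assuming 1 byte per voxel; real engines use sparse bricks and mips, so treat this as an upper bound):

```python
def dense_sdf_gib(res, bytes_per_voxel=1):
    """Memory for a dense res^3 signed-distance field, in GiB."""
    return res ** 3 * bytes_per_voxel / 2 ** 30

for res in (256, 512, 1024):
    print(f"{res}^3 -> {dense_sdf_gib(res):.3f} GiB")
# 256^3 -> 0.016, 512^3 -> 0.125, 1024^3 -> 1.000: a 32 GB pool
# leaves real headroom for 1024^3-class volumes alongside the game.
```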

GoW Ragnarok's "ray traced cubemaps" (a pre-baked, static, world-space lookup table) are a sign that we need more exploration into high-quality alternatives, since solutions like those are admittedly uglier and less powerful (no dynamic lights/geometry) than some other methods that don't involve ray tracing ...
 
I'm thinking we bring back exotic hardware features, but this time around we make more pragmatic decisions with them ...

If our entire screen consists of multiple view captures for planar reflections with no support for recursion, there's no reason we shouldn't be able to retain a bounded 2x G-buffer/shading cost, because we're only rendering a total of 2x the pixels in the worst case. Rasterizing multiple views, however, could easily involve more than 2x the geometry work of rasterizing the main scene. To support multiple planar reflections more easily, we could introduce hardware support for multi-view rasterization and for output generation of irregular data structures, so that we can have irregularly shaped/sparse G-buffers at the correct resolution with lower memory consumption ...

"Portal reflections" are an even easier case since the 'reflection' (duplicated level geometry) is already included in the main scene (1x geometry/G-buffer/shading cost) ... ("render-to-texture methods" aren't needed)

Rendering soft shadows today is still a pretty hard problem due to the current limitation of shadow maps accounting for only a single occluder (which can lead to light leaking), so it would be very useful to have hardware support for data structures that can represent multiple occluders (not necessarily linked lists) ...

Depending on how much more memory we have next generation, we could significantly increase the resolution of SDFs or other volumetric spatial data structures with a 32 GB pool of memory. They'd be used as fallbacks for curved/recursive reflections and world-space AO ...

GoW Ragnarok's "ray traced cubemaps" (a pre-baked, static, world-space lookup table) are a sign that we need more exploration into high-quality alternatives, since solutions like those are admittedly uglier and less powerful (no dynamic lights/geometry) than some other methods that don't involve ray tracing ...

It seems your proposed alternative to the simplicity and scalability of RT is to introduce even more convoluted hackery, with an even longer list of limitations and caveats than current rasterization. To begin with, there's no guarantee such a clunky architecture would be faster than RT for the same quality.

I think the solution for the RT performance wall is better RT APIs and hardware. We don’t need to go backwards.

Then there's all the other stuff - material quality, volumetrics, BVH management, compilation stutter, etc. - that would benefit from more flexible and efficient APIs. That's where I hope the focus shifts next.
 
It seems your proposed alternative to the simplicity and scalability of RT is to introduce even more convoluted hackery, with an even longer list of limitations and caveats than current rasterization. To begin with, there's no guarantee such a clunky architecture would be faster than RT for the same quality.
'Convoluted'? Sure, but I don't know about "hacky" or "a longer list of limitations" when it's a major improvement over SSR (support for dynamic world-space geometry/lighting) or single-layer depth shadow maps (no more light-leaking soft shadows) ...

Not every problem needs to have a "conceptually elegant solution". The world doesn't have to be any simpler per se, just adequate/appropriate ...
I think the solution for the RT performance wall is better RT APIs and hardware. We don’t need to go backwards.

Then there's all the other stuff - material quality, volumetrics, BVH management, compilation stutter, etc. - that would benefit from more flexible and efficient APIs. That's where I hope the focus shifts next.
It's not just a "performance wall" that you have to worry about anymore. It might be necessary for platforms to diverge (introduce bespoke exotic features) once more in order to keep costs down. The promise of subsidized HW development might not be living up to the expectations of the leading AAA console vendor anymore, so ad hoc/specialized propositions may need to be considered to get bigger improvements ...

A path traced future for them looks unrealistic in the next 5 years, maybe ever, so stopgap measures absolutely have to be looked at ...
 
As impressive as the various non-RT lighting solutions Sony's studios developed are, the fact that they still can't match HWRT despite their complexity only reinforces the point that HWRT is the way forward. It's telling that Insomniac - developers of the two most significant non-remake/remaster PS5-only titles on a proprietary engine - have embraced HWRT. Even if path tracing isn't feasible on the PS6, hybrid ray tracing with RT shadows, RT reflections, and RTGI is still the ideal stopgap measure for most use cases.

The biggest complication for HWRT in the near future though is Nanite/virtualized geometry. It will be interesting to see how Epic and other devs try to reconcile the two. Just run HWRT against the fallback mesh and accept the artifacts that come with it? Find a way to combine screen-space tracing against Nanite, virtual shadow maps, and HWRT against the fallback mesh as seamlessly as possible?
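
Purely as an illustrative toy (my own sketch, not Epic's actual code), the second option could look like a decision cascade: trace screen space first at full Nanite detail, then fall back to HWRT against the coarse proxy on a miss. The stub functions here are hypothetical placeholders:

```python
def screen_space_trace(ray, depth_buffer):
    """Stub: march the depth buffer. Pixel-accurate against rendered
    Nanite detail, but only sees geometry visible this frame."""
    return None  # pretend the ray walked off-screen

def proxy_bvh_trace(ray, proxy_scene):
    """Stub: HWRT-style trace against the simplified fallback mesh."""
    return ("proxy_hit", ray)

def hybrid_trace(ray, depth_buffer, proxy_scene):
    hit = screen_space_trace(ray, depth_buffer)
    if hit is not None:
        return hit                     # best case: full-detail hit
    # Fallback: BVH over the coarse proxy; accept the mismatch
    # artifacts wherever the proxy disagrees with what was rasterized.
    return proxy_bvh_trace(ray, proxy_scene)

print(hybrid_trace("some_ray", depth_buffer=None, proxy_scene=None))
```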
 
I'm thinking we bring back exotic hardware features, but this time around we make more pragmatic decisions with them ...

If our entire screen consists of multiple view captures for planar reflections with no support for recursion, there's no reason we shouldn't be able to retain a bounded 2x G-buffer/shading cost, because we're only rendering a total of 2x the pixels in the worst case. Rasterizing multiple views, however, could easily involve more than 2x the geometry work of rasterizing the main scene. To support multiple planar reflections more easily, we could introduce hardware support for multi-view rasterization and for output generation of irregular data structures, so that we can have irregularly shaped/sparse G-buffers at the correct resolution with lower memory consumption ...

"Portal reflections" are an even easier case since the 'reflection' (duplicated level geometry) is already included in the main scene (1x geometry/G-buffer/shading cost) ... ("render-to-texture methods" aren't needed)

Rendering soft shadows today is still a pretty hard problem due to the current limitation of shadow maps accounting for only a single occluder (which can lead to light leaking), so it would be very useful to have hardware support for data structures that can represent multiple occluders (not necessarily linked lists) ...

Depending on how much more memory we have next generation, we could significantly increase the resolution of SDFs or other volumetric spatial data structures with a 32 GB pool of memory. They'd be used as fallbacks for curved/recursive reflections and world-space AO ...

GoW Ragnarok's "ray traced cubemaps" (a pre-baked, static, world-space lookup table) are a sign that we need more exploration into high-quality alternatives, since solutions like those are admittedly uglier and less powerful (no dynamic lights/geometry) than some other methods that don't involve ray tracing ...
I couldn't ever speak to the practicality of doing all this, but I would imagine that hardware will advance slowly towards what is needed, as a balance. Designing hugely custom acceleration for everything seems like it'd be a bit much, and it would probably leave developers in over their heads trying to actually utilize all this stuff properly versus the standardized, generalized implementations we have today. It could be even worse if different GPU manufacturers had different ideas of which 'custom' direction to go in, leaving devs and consumers in an awkward spot (I never want to see the days of early 3D graphics cards again...).

I also sadly wouldn't expect any large leaps in memory quantity ever again. We only got a 2x leap this last generation for a reason, and that's pretty much all I expect again next time.
 
The biggest complication for HWRT in the near future though is Nanite/virtualized geometry. It will be interesting to see how Epic and other devs try to reconcile the two. Just run HWRT against the fallback mesh and accept the artifacts that come with it? Find a way to combine screen-space tracing against Nanite, virtual shadow maps, and HWRT against the fallback mesh as seamlessly as possible?
Epic being dead set on supporting virtual geometry in their engine, plus the rising cost of more advanced digital circuits, is WHY we must look in other directions. The dreams of a path traced future for AAA console-centric games are well behind us, since real-time rendering is a constantly moving target (geometry/materials/other) ...
I couldn't ever speak to the practicality of doing all this, but I would imagine that hardware will advance slowly towards what is needed, as a balance. Designing hugely custom acceleration for everything seems like it'd be a bit much, and it would probably leave developers in over their heads trying to actually utilize all this stuff properly versus the standardized, generalized implementations we have today. It could be even worse if different GPU manufacturers had different ideas of which 'custom' direction to go in, leaving devs and consumers in an awkward spot (I never want to see the days of early 3D graphics cards again...).
I would imagine it's a lot more tractable to make a good attempt at solving just one or two problems (dynamic world-space specular reflections/soft shadows) than to try to solve an entire zoo of problems (GI/AO/translucency/area lights on top of reflections/shadows) that comes with even more cans of worms (acceleration structures/noise). For our purposes, some tools (ray tracing) can be overkill for some high-quality rendering effects. We're not looking to match offline-quality rendering, are we?

It's unfortunate that different vendors may have different ideas, but that's the reality when it's becoming less clear over time whether consoles can keep following PCs ...
 
Not every problem needs to have a "conceptually elegant solution". The world doesn't have to be any simpler per se, just adequate/appropriate ...

No, but if that elegant solution is also cheaper to build and easier to use then the choice is obvious.

It might be necessary for platforms to diverge (introduce bespoke exotic features) once more in order to keep costs down. The promise of subsidized HW development might not be living up to the expectations of the leading AAA console vendor anymore, so ad hoc/specialized propositions may need to be considered to get bigger improvements ...

A path traced future for them looks unrealistic in the next 5 years, maybe ever, so stopgap measures absolutely have to be looked at ...

Why are you assuming exotic hardware will be cheaper to build and use? Simple solutions have a better chance of adoption and efficient implementation, and nothing comes close to the simplicity and accessibility of RT. If the leading AAA console vendor thought ad hoc solutions were needed, why are they investing in RT hardware?

I'm pretty optimistic about the viability of PT in the next 5 years. In 2018 would you have guessed that CP2077 would be possible in 5 years? If anything the last few years should encourage optimism that progress will come faster than expected.
 
No, but if that elegant solution is also cheaper to build and easier to use then the choice is obvious.
"Easier to extend" is what I would more aptly describe ...
Why are you assuming exotic hardware will be cheaper to build and use? Simple solutions have a better chance of adoption and efficient implementation, and nothing comes close to the simplicity and accessibility of RT. If the leading AAA console vendor thought ad hoc solutions were needed, why are they investing in RT hardware?
RT is simple from a software perspective, but that is absolutely not true on the hardware implementation side, where the leading vendor has all sorts of accelerated HW states to speed up the process, a large on-die cache to spill to efficiently, and AI HW for denoising. Consoles only feature RT as a sort of temporary experiment ...

I would not be so sure of it being a permanent solution, since I don't see how something like SER can be of much use to consoles when they probably don't want to spend die area on large caches, and AI HW doesn't have enough rendering applications to justify itself on its own either. RT from a hardware standpoint could not be further from a "simple solution", as we're seeing in practice ...
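
For what it's worth, the idea behind SER is simple to show in miniature; this is my own toy illustration of the concept (bucketing rays by the shader they'll invoke so adjacent lanes stay coherent), not NVIDIA's implementation:

```python
from collections import defaultdict

def reorder_by_hit_shader(rays):
    """Bucket rays by hit-shader id so rays that run the same code
    launch together. `rays` is a list of (ray_id, shader_id) pairs;
    real hardware reorders live threads, not a Python list."""
    buckets = defaultdict(list)
    for ray_id, shader_id in rays:
        buckets[shader_id].append(ray_id)
    # Coherent order: all rays for shader 0, then shader 1, ...
    return [rid for sid in sorted(buckets) for rid in buckets[sid]]

# Divergent input, shaders interleaved across lanes:
print(reorder_by_hit_shader([(0, 2), (1, 0), (2, 2), (3, 1), (4, 0)]))
# -> [1, 4, 3, 0, 2]
```

The catch for consoles, as said above, is that the win depends on caches big enough to keep the re-sorted wavefronts fed.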

I'm willing to trade hardware complexity for software complexity if it's more palatable for a specific segment of the industry ...
I'm pretty optimistic about the viability of PT in the next 5 years. In 2018 would you have guessed that CP2077 would be possible in 5 years? If anything the last few years should encourage optimism that progress will come faster than expected.
CP2077 is also the game that led CD Projekt RED to adopt UE5, and hardware improvement isn't occurring at a fast enough rate across all price ranges to seriously consider common usage of PT in AAA games. Game consoles with 4-digit price tags aren't going to be a thing, even in the next decade, to make it happen ...
 
"Easier to extend" is what I would more aptly describe ...

RT is simple from a software perspective, but that is absolutely not true on the hardware implementation side, where the leading vendor has all sorts of accelerated HW states to speed up the process, a large on-die cache to spill to efficiently, and AI HW for denoising. Consoles only feature RT as a sort of temporary experiment ...

I would not be so sure of it being a permanent solution, since I don't see how something like SER can be of much use to consoles when they probably don't want to spend die area on large caches, and AI HW doesn't have enough rendering applications to justify itself on its own either. RT from a hardware standpoint could not be further from a "simple solution", as we're seeing in practice ...

I'm willing to trade hardware complexity for software complexity if it's more palatable for a specific segment of the industry ...

CP2077 is also the game that led CD Projekt RED to adopt UE5, and hardware improvement isn't occurring at a fast enough rate across all price ranges to seriously consider common usage of PT in AAA games. Game consoles with 4-digit price tags aren't going to be a thing, even in the next decade, to make it happen ...

If there is a $1,000 Xbox, that means the console will get PC-level hardware comparable to a roughly $2,000 build. This would give you the maximum PC quality level along with the best RT, PT and BT.

But if the rumors are true, then the next Xbox will be OEM-like, essentially PC games running on an Xbox software interface, with various separately purchased graphics cards able to be connected to it. In that case, the graphics are effectively unlimited.

And this console will be released in 2026.
 
RT is simple from a software perspective, but that is absolutely not true on the hardware implementation side, where the leading vendor has all sorts of accelerated HW states to speed up the process, a large on-die cache to spill to efficiently, and AI HW for denoising. Consoles only feature RT as a sort of temporary experiment ...

I would not be so sure of it being a permanent solution, since I don't see how something like SER can be of much use to consoles when they probably don't want to spend die area on large caches, and AI HW doesn't have enough rendering applications to justify itself on its own either. RT from a hardware standpoint could not be further from a "simple solution", as we're seeing in practice ...
Large on-die caches benefit more than just RT. AI HW was used first for upscaling/antialiasing, later for frame generation, and after that for denoising, and the PS5 Pro is already coming with AI HW for the first purpose.

And with Microsoft pushing NPUs in both PCs and allegedly the next Xbox, more applications for AI HW will be found.
 