Predict: Next gen console tech (10th generation edition) [2028+]

The hardware can also evolve.

But it's really awkward, because it partly invalidates the previous generation. There is a strong force to pot-commit to pencil ray tracing: almost everyone is researching better ways to do importance sampling and post-filtering, with blinders on to the fact that they are on a fundamentally wrong road for real-time rendering. RTX drove the entire industry down the wrong path.
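For context on the "importance sampling" direction being chased, here is a minimal toy sketch (not from any shipping renderer; the sky function and sample counts are made up) of how better sample placement stretches a small ray budget by importance-sampling part of the integrand:

```python
# Toy comparison of uniform vs. cosine-weighted hemisphere sampling for a
# diffuse irradiance integral. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

def sky(dirs):
    # Hypothetical environment radiance: brighter towards +Z.
    return np.maximum(dirs[:, 2], 0.0) ** 4

def uniform_hemisphere(n):
    # Uniform sampling of the upper hemisphere, pdf = 1 / (2*pi).
    u1, u2 = rng.random(n), rng.random(n)
    z = u1
    r = np.sqrt(np.maximum(0.0, 1.0 - z * z))
    phi = 2 * np.pi * u2
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def cosine_hemisphere(n):
    # Cosine-weighted sampling, pdf = cos(theta) / pi; this importance-samples
    # the cos(theta) factor of the integrand, so each ray counts for more.
    u1, u2 = rng.random(n), rng.random(n)
    r = np.sqrt(u1)
    phi = 2 * np.pi * u2
    z = np.sqrt(np.maximum(0.0, 1.0 - u1))
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

n = 16  # rays per pixel in this toy; real-time budgets are far lower
d_u = uniform_hemisphere(n)
d_c = cosine_hemisphere(n)

# Both estimate the same integral: E = integral of L(w) * cos(theta) dw
est_uniform = np.mean(sky(d_u) * d_u[:, 2] * 2 * np.pi)
est_cosine = np.mean(sky(d_c) * np.pi)

print(f"uniform sampling estimate:  {est_uniform:.4f}")
print(f"cosine-weighted estimate:   {est_cosine:.4f}")
```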
 
Why is improved pencil ray tracing the wrong path? Support for more flexible BVH formats or LOD would go a long way. It’s an inherently scalable paradigm that you can throw transistors at.
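As a point of reference for "more flexible BVH formats": below is a hypothetical sketch of the classic, rigid 2-wide node layout that software tracers commonly use. Actual hardware BVH formats are vendor-specific and opaque; this only illustrates what a fixed format looks like.

```python
# Sketch of a classic 32-byte AABB BVH node (software baseline, not a
# hardware format). Flexibility/LOD proposals would relax layouts like this.
import struct
from dataclasses import dataclass

@dataclass
class BVHNode:
    bounds_min: tuple[float, float, float]  # AABB min corner
    bounds_max: tuple[float, float, float]  # AABB max corner
    left_or_first: int                      # child index, or first primitive if leaf
    prim_count: int                         # 0 for interior nodes, >0 for leaves

    def pack(self) -> bytes:
        # 6 floats + 2 uint32 = 32 bytes per node.
        return struct.pack("<6f2I", *self.bounds_min, *self.bounds_max,
                           self.left_or_first, self.prim_count)

node = BVHNode((-1.0, -1.0, -1.0), (1.0, 1.0, 1.0), left_or_first=42, prim_count=0)
print(len(node.pack()), "bytes per node")  # -> 32
```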
 
My assumption is that IHVs, who are in constant contact with developers, will be guided on what's needed. I fully expect the hardware to change over time, if only because the business will demand it. Both are necessary directions to spread into; it's just a question of how legacy will be handled.
 
There are only so many singular rays you can trace, and notably the gains from adding rays diminish rapidly. 5 rays is much better than 1 ray, but 10 rays isn't a great deal better than 5, yet it comes at twice the cost. Cone tracing covers more information with less effort.
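A back-of-envelope illustration of those diminishing returns, assuming Monte Carlo noise falls off roughly as 1/sqrt(N) rays (the usual estimate; the absolute numbers are illustrative, not measured):

```python
# Rough noise-vs-ray-count arithmetic under the 1/sqrt(N) assumption.
import math

def relative_noise(rays: int) -> float:
    return 1.0 / math.sqrt(rays)

for rays in (1, 5, 10, 20):
    print(f"{rays:>2} rays/pixel -> relative noise ~{relative_noise(rays):.2f}")

# 1 -> 5 rays: noise drops by ~2.2x for 5x the cost.
# 5 -> 10 rays: noise drops by only ~1.4x for 2x the cost.
print("improvement 1->5: ", relative_noise(1) / relative_noise(5))
print("improvement 5->10:", relative_noise(5) / relative_noise(10))
```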

I think one of the mindsets behind HWRT was that, in the search for better lighting, the only known solution was offline path tracing. Accelerating path tracing has long been a 'holy grail', and HWRT took that a step further. It's an algorithm that solves every problem and can produce photorealism, if only it can be run fast enough. For one, HWRT has greatly accelerated offline rendering, and its development has helped nVidia's position in professional productivity, reinforcing the value of HWRT to them.

However, gaming doesn't need that exact quality, and other algorithms could provide a better solution; algorithms that were in their infancy at the beginning of this gen. The hardware went down the one route it could, while the software is stuck in the middle, either using that hardware or trying something else. If there's not enough software to drive a reason for hardware to accelerate that 'something else', it won't gain proper hardware support.

It's really a rematch of the origins of compute. GPUs were all about fixed function graphics hardware. Software had to hoodwink it into doing general purpose GPU work. But as software started to use the hardware differently, there became value in changing how the hardware worked to support the new workloads. Hence GPUs moved from graphics to compute. HWRT is exactly the same kind of hardware solution to a specific problem as hardware vertex and pixel shaders. The question then is, "is the problem being tackled by HWRT the right problem to be tackling?"

But the decision making for console hardware has to pick between the devil you know and the devil you don't. Do you gamble on Unified Shaders or go with established discrete shaders? Do you ditch graphics hardware entirely and go with a software renderer and a novel CPU? Do you go all in on streaming when designing your hardware and hope the devs will use it, or do you double down on preloading because that's what everyone's doing now and you can't be sure they'll be willing and able to adapt to a different data system? Do you go with unified RAM and as wide a bus as possible, or give the devs a scratchpad of insanely fast but frustratingly tiny EDRAM?

Right now, HWRT is a known solution to the current and near-term future workloads of games. It's used offline in productivity, and IHVs wanting to be competitive there want to have fast RT solutions. But maybe software can provide better solutions if not tied to the HWRT mindset? Maybe AMD will implement a great new arch. And maybe devs will use it, or maybe it won't catch on and HWRT will win out for another generation?

The big problem here is, AFAIK, no-one's created a demo that shows these alternative solutions other than tech demos that have their pros and cons. And we've had various GI solutions for a decade without anything competing with what HWRT path tracing is currently managing.
 
I think for traditional rendering this would win out. But with the advent of AI-based upscaling, it appears as though the tradeoff is reasonable. From a silicon perspective, tensor cores eat up significantly more area than RT cores, though I could be wrong on that. From a hardware perspective, the greatest cost limitation for IHVs doing RT appears to be available VRAM, or in general dedicated memory for the BVH structure. If you aren't required to trace absolutely everything and AI models can fill in the holes, that might be enough.
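To put a rough number on the BVH memory concern, here is an assumption-heavy sketch; the node size, nodes-per-triangle ratio, and triangle counts are ballpark placeholders, not vendor figures:

```python
# Crude footprint estimate for an uncompressed BVH plus raw geometry.
NODE_BYTES = 32            # classic 2-wide AABB node (assumed)
NODES_PER_TRIANGLE = 2.0   # ~N leaves + ~N-1 interior nodes for N triangles
TRIANGLE_BYTES = 3 * 12    # 3 vertices * 3 floats, ignoring indexing/compression

def bvh_footprint_mb(triangles: int) -> float:
    nodes = triangles * NODES_PER_TRIANGLE * NODE_BYTES
    prims = triangles * TRIANGLE_BYTES
    return (nodes + prims) / (1024 ** 2)

for tris in (1_000_000, 10_000_000, 50_000_000):
    print(f"{tris:>11,} triangles -> ~{bvh_footprint_mb(tris):,.0f} MB for BVH + geometry")
```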

In the end, AI models can stand in for a lot of massive computations that would be difficult for any studio to approximate all on their own. It's far from perfect, but it would still be an improvement, I think.

Though there is an over-reliance on AI making up the approximation here. Which may not be a terrible thing, but you're taking it out of the hands of developers, and that may not sit well with certain crowds.
 
Maybe nostalgia is tinting my recollection, but it seems the demo scene in graphics programming is pretty dead. If we're lucky we get some pictures in a slide deck or research paper. Back in the day it seems we got a lot more demos from individual devs and from the IHVs.

It's surprising, because by definition a demo doesn't need to be efficient or hit an arbitrary fps or resolution target in order to be compelling. My skepticism about these RT alternatives is partly driven by the lack of demos showing them off. If they're meant to work well in console games, surely they can run as targeted demos years earlier at 1080p/30fps on PC hardware, where the frame time budget is significantly higher.
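The frame-time-budget point in plain arithmetic; the resolutions and frame rates are just the 1080p/30 example above plus common console targets:

```python
# ms per frame and rough per-pixel budget at a few resolution/fps targets.
targets = {
    "1080p @ 30 (tech demo)": (1920 * 1080, 30),
    "1440p @ 60 (console)":   (2560 * 1440, 60),
    "2160p @ 60 (console)":   (3840 * 2160, 60),
}

for name, (pixels, fps) in targets.items():
    frame_ms = 1000.0 / fps
    ns_per_pixel = frame_ms * 1e6 / pixels  # 1 ms = 1e6 ns
    print(f"{name:<24} {frame_ms:5.1f} ms/frame, ~{ns_per_pixel:5.1f} ns/pixel")
```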
 
I wasn't thinking demoscene so much as a released game. All the gorgeous-looking upcoming and released games are, AFAIK, using RT, and the best use path tracing. We get good results with other solutions, but they aren't as good as the solutions that leverage the HW. e.g. Desordre looks its best using path tracing.

If someone could point to a game using something else that looks as good or better, with better performance and no distracting artefacts (temporally accumulated lighting with laggy lights is kinda offputting to me), we'd move the discussion on from the hopeful hypothetical.
 
What are the chances of Intel or Samsung getting those console contracts?

They both have somewhat of a bad reputation in the semiconductor business, so a constant revenue stream from consoles would be beneficial to them.

TSMC doesn't need console contracts, and their prices are through the roof right now.
Meanwhile, Samsung and Intel would gladly agree to build those chips, even with very slim margins. They wouldn't be cutting edge compared to TSMC, but I'm sure both of them can get close to TSMC's 3nm chips, at least by 2028, at reasonable prices and yields.

I think we're still 4 years away at least from a next-gen, and I'm not sure 3nm will offer enough performance/area and performance per watt for a next gen either. In my opinion, TSMC 2nm is the node that the consoles need to target. 3nm will only be able to give us a PS5Pro+ equivalent.

It's supposed to be in production next year; by 2028, the big players Apple and Nvidia will have moved onto the A16 node or something beyond it. In theory, in 2028, 2nm is a ~2-3 year old node.

I don't think Samsung or Intel can realistically match TSMC in the next 5 years, especially Intel; the company is a disaster at the moment.
 
The rumours are that the PS5 Pro is still 6nm. Older nodes aren't getting cheaper at TSMC, so for a 500$/€ console you can't go and spend 50% more on the chip just to get a more recent process node. 2nm TSMC would be great, but I don't think it's feasible.
 
The most likely (and logical) is 5nm.
 
From a Digital Foundry article:

"Sony doesn't talk about fabrication in its documentation (or generally, at all), but the evidence points to PS5 Pro running on the same 6nm process as the Slim. PS5 Pro only has limited clock speed increases (or actual decreases potentially) and the size of the GPU architecturally has not doubled in the way it did with PS4 Pro. Instead, machine learning upscaling is used to make up the difference."

I can absolutely see it being 6nm as 5nm would be a big increase in cost, and the decrease in size of the chip maybe isn't enough to offset that.
 
The only way I see that happening is if Sony is OK with shipping a much larger and more power-hungry console. The PS5, even in the "slim" revision, is pushing 225W while gaming. The Pro would easily push that to ~300W.

If they stick with 3nm for next gen, they will have to push to a 300W box. I don't think that will be acceptable.
 
The 6nm PS5s consume less than the 7nm PS5s, of course.

Around a 25-30 watt difference, probably. I can see a 6nm PS5 Pro consuming around 270-280 watts compared to a 6nm PS5 at 200-210 watts. It would be acceptable.
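Purely running the thread's own wattage guesses through the arithmetic (none of these figures are measurements):

```python
# If the 6nm revision really shaves ~25 W off a ~225 W 7nm PS5, what does
# the same relative saving do to a feared ~300 W Pro-class board?
ps5_7nm_w = 225.0     # launch PS5 while gaming, per the thread
ps5_6nm_w = 200.0     # ~25 W lower, per the thread
pro_before_w = 300.0  # the Pro figure floated earlier in the thread

saving_fraction = 1.0 - ps5_6nm_w / ps5_7nm_w
pro_after_w = pro_before_w * (1.0 - saving_fraction)

print(f"implied node/board saving: {saving_fraction:.0%}")
print(f"hypothetical Pro on the same process: ~{pro_after_w:.0f} W")
```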
 
5nm would allow for a cheap pathway to 4nm fabrication later on. What are their sources, BTW?

Anyway, we now know the PS5 Pro clocks up to 2.35 GHz, which points to a 5nm APU, not 6nm.

EDIT: 280W would be a bit too much using 6nm as well. Rumours are saying the PS5 Pro devkit is exactly the same as the launch unit, which was consuming 230W max.
 
The PS5 is practically silent. If they went back to PS4 levels of fan noise, I'm sure they could use the same chassis. That would probably be the cheapest option for under 600$ at launch.

But what do we know? If it comes out that it's 4-5nm, then it was cheaper than 6nm thanks to a smaller chip and less cooling.
 
Pretty sure the Pro is a 6/4nm mix with chiplet I/O; they'll be able to switch to TSMC 4c (the "c" literally standing for cheap here) next year to save $.

Considering Microsoft seems to be targeting 2026 for a new gen of consoles (or whatever), it'll be interesting to see games scale from Switch 2 all the way up to PS5 Pro/Xbox 1080 (they should call the desktop one Xbox 2160, get it? Get iiiiiit???). From 540p/30 to 2160p/120 for the same gen of games, woot woot!
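For what that 540p/30-to-2160p/120 spread means in raw pixel throughput (arithmetic only; it says nothing about how real games actually scale):

```python
# Raw pixel-rate spread between the low and high ends named in the post.
low_px, low_fps = 960 * 540, 30       # 540p/30, the handheld worst case
high_px, high_fps = 3840 * 2160, 120  # 2160p/120, the top-end target

ratio = (high_px * high_fps) / (low_px * low_fps)
print(f"pixel throughput spread: ~{ratio:.0f}x")  # -> ~64x
```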
 
Do you have a source for the 6/4nm with chiplet I/O? Isn't the decompression hardware in the PS5 really, really small? Why would they make it an entire chiplet?
 
I/O meaning the PHYs (memory bus), just like RDNA3, upon which the PS5 Pro chip is based. RDNA3 is also on TSMC 5/4nm; there's no reason for AMD or Sony to backport the entire IP to an older node, especially not with 5/4nm slowly transitioning towards more legacy chips as N3 ramps up over the next few years.
 
RDNA 3 being 5/4nm and the Pro being the same would make sense, even if the Pro is going to be a mish mash of RDNA 3, 3.5, 4, 2, 1 and a bit of Polaris, probably 😅

Makes all the pre next gen launch discussion so silly in retrospect.

In the end, I wish the Pro to be on the most recent process node possible. But then I look at TSMC pricing and I weep, not only for the Pro, but for next gen too.

[Image: wafer price by technology node chart]

That is brutal.

And that's not all, they are raising prices by 10% soon. Get Samsung or Intel on the phone.
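For a sense of why that chart hurts, here is a per-chip cost sketch using the standard gross-die estimate; the wafer price, die size, and yield are placeholder assumptions (the chart itself didn't survive), and only the ~10% hike comes from the post above:

```python
# Rough per-die cost from wafer price, with made-up inputs for illustration.
import math

def dies_per_300mm_wafer(die_area_mm2: float) -> int:
    # Common gross-die estimate for a 300 mm wafer.
    d = 300.0
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

wafer_price = 20_000.0  # hypothetical leading-edge wafer price, USD
die_area = 300.0        # hypothetical console APU size, mm^2
yield_rate = 0.70       # hypothetical

good_dies = dies_per_300mm_wafer(die_area) * yield_rate
print(f"~{good_dies:.0f} good dies -> ~${wafer_price / good_dies:.0f} per chip")
print(f"after a 10% wafer price hike: ~${wafer_price * 1.10 / good_dies:.0f} per chip")
```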
 