Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Status
Not open for further replies.
I see a good opportunity for Sony to come out of the gate first and push 60fps. 350mm^2 at 7nm should be good for 8 Ryzen cores and ~40-50 Vega/Navi CUs. Any one-up by Microsoft would be limited to more pixels. Die size economics being what they are, we're looking at roughly a 2x GPU increase over Pro/X at best, and TF-based marketing will start to lose steam unless they resort to creative accounting. A strong 60fps push would be a meaningful way to change the conversation and make it about gameplay. I'll take 1440p@60 over 2160p@30 any day, and I think most gamers would agree. Please don't skimp on the CPU, people.
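For what it's worth, the 1440p@60 vs 2160p@30 trade-off can be sanity-checked with some quick pixel-throughput arithmetic. This is only a rough sketch: it assumes shading cost scales linearly with pixels per second and ignores per-frame fixed costs.

```python
# Rough pixel-throughput comparison of the two modes discussed above.
# Assumes shading cost scales linearly with pixels rendered per second.

def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Pixels shaded per second at a given resolution and frame rate."""
    return width * height * fps

qhd60 = pixels_per_second(2560, 1440, 60)   # 1440p @ 60fps
uhd30 = pixels_per_second(3840, 2160, 30)   # 2160p @ 30fps

print(f"1440p@60: {qhd60:,} px/s")   # 221,184,000
print(f"2160p@30: {uhd30:,} px/s")   # 248,832,000
print(f"ratio 2160p@30 / 1440p@60: {uhd30 / qhd60:.3f}")  # 1.125
```

So 4K30 actually pushes about 12.5% more raw pixels per second than 1440p60; the choice between them is really fluidity vs sharpness, not total GPU work.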

My other exotic theory is an Intel HBM/Optane hybrid with dedicated inference hardware.
 
If Sony aims for backwards compatibility on the PS5, then striving for a system that can "boost mode" older titles to 60fps may help with that push. It would also limit the investment required from smaller developers, who would only need to build their game for the tens-of-millions-strong PS4/Pro market.
 
A straightforward if inelegant solution would be to keep the eight Jaguar cores but also include a 4-core/8-thread Zen module.

You could probably upclock the Jaguar cores to their upper range limit at 7nm without too much trouble, and unlike previous generations (launch PS3 with a PS2 built in, I'm looking at you) they wouldn't only be there to service backwards compatibility.

The 4-core Zen module would be there to provide the extra oomph and the needed increase in single-threaded performance.
 
People always frame the "weak CPU"/60fps argument as if there is some hardware end-game that, once reached, means all games will be 60fps. The only way you are getting 60fps in all games is if Sony or Microsoft mandates it for games on their platform... and I am sceptical they will impose that rule.

Best case scenario is that the slight trend we see on these iterative consoles, where developers give you the option of either a high-visuals or a high-frame-rate mode, carries over to next gen.
 
Games are 30fps because developers choose it to be.
 
What about many small cores for efficiency? Like 16 lower-clocked Zen cores with reduced cache for a smaller footprint, and some disabled for yield...

A Zen Lite based Cell processor. :runaway:
 

mm... well, the Raven Ridge die shot at least seems to indicate that they don't save too much space with half the L3.

(The green and yellow rectangles were just to line things up; it's only a ballpark.)
 
If the PS5 contained a 4-core, 8-thread Zen CPU that was clocked at 3.2GHz, would that be sufficient to render any 30fps PS4 game at 60fps?
Probably, but if you have the GPU power to ramp up and they want to push the graphics envelope, they'll naturally push it to 30fps.
I mean, look at it this way, specifically at the GPU:
60fps @ 12 TF = 30fps @ 6 TF. They do very close to the same amount of work if they are at the same resolution. If the PS5 mandates 60fps and is 12 TF, it cannot make a generational graphical difference from the 1X. You'd barely notice the difference. At 30fps it's a different story.
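The per-frame budget claim checks out arithmetically. A toy calculation, assuming GPU work scales linearly with frame rate and ignoring bandwidth and fixed per-frame costs:

```python
# FLOPs available per frame: the same budget whether you spend 12 TF at
# 60fps or 6 TF at 30fps, so per-frame visual headroom is identical.

def tflops_per_frame(tflops: float, fps: int) -> float:
    """Tera-FLOPs of GPU throughput available to each frame."""
    return tflops / fps

assert tflops_per_frame(12.0, 60) == tflops_per_frame(6.0, 30)
print(tflops_per_frame(12.0, 60))  # 0.2 TF per frame in both cases
```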

I'm really sorry, but 30fps is here to stay. Games that want to be 60fps are 60fps. Wasting silicon on more CPU to make the CPU/GPU ratio heavier in favour of the CPU is probably the wrong way to go, given everything we know about async compute, hUMA and GPU-side dispatch. When we move to GPU-side dispatching, I'm thinking the volume of CPU work will drop drastically, either increasing frame rate or freeing the CPU to do a lot of other things. Not to mention the GPU is going to be moving a lot faster since it's issuing its own commands.

All these wonderful technologies that have been developed to offload work from the CPU to the GPU. If you use them, then you only need a bigger GPU.

Draw calls -> CPU


TLOU: Remastered
Game code: Green squares
Rendering code: Red Squares
Green background: 16ms
Purple background: 33ms
If you could imagine a future where there was little to no render code... it would already be 60fps.
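The profiler capture described above can be reduced to a toy model. Illustrative numbers only: the 16ms/33ms figures are the grid backgrounds from the capture, and the exact split between game code and render code is an assumption, not a measurement.

```python
# Toy CPU frame-time model for the argument above: if render/dispatch work
# migrates from the CPU to the GPU, CPU frame time shrinks toward game code only.

def cpu_frame_ms(game_ms: float, render_ms: float, offload: float) -> float:
    """CPU milliseconds per frame when a fraction `offload` (0..1) of the
    render code has been moved to the GPU."""
    return game_ms + render_ms * (1.0 - offload)

GAME_MS, RENDER_MS = 14.0, 18.0  # assumed split of a ~32ms CPU frame

print(cpu_frame_ms(GAME_MS, RENDER_MS, 0.0))  # 32.0 -> stuck at 30fps
print(cpu_frame_ms(GAME_MS, RENDER_MS, 1.0))  # 14.0 -> under 16.6ms, 60fps-capable
```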
 
I agree with your points, but if this gen has shown anything, it's that you're kind of stuck with the CPU you start with, while the GPU can be expanded with mid-gen revisions. I would like a good CPU baseline that can last the entire generation, allowing 60fps to be a viable option if the player picks the "performance" mode. And I'm not talking about games that make sacrifices to get there on Jaguar; I'm talking about GTA6 at 60fps with a low-res "performance" mode.

Now, if VRR TVs are available by 2020, I'd be OK with an unlocked framerate of 45 or more, still a massive improvement over 30.
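Frame-time wise, the gap between those targets is easy to quantify. Simple arithmetic, assuming perfectly paced frames:

```python
# Frame budget at the frame rates discussed: an unlocked ~45fps on a VRR
# display sits right between the classic 30 and 60 targets.

def frame_budget_ms(fps: float) -> float:
    """Milliseconds available per frame at a given frame rate."""
    return 1000.0 / fps

for fps in (30, 45, 60):
    print(f"{fps}fps -> {frame_budget_ms(fps):.1f} ms/frame")
# 30fps -> 33.3 ms/frame
# 45fps -> 22.2 ms/frame
# 60fps -> 16.7 ms/frame
```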
 
That's surprisingly small for 8MB.

I guess even 12 cores would be quite a stretch on a 7nm SoC.
 

Just so we're clear, I've highlighted the 8MB L3 section on the original die in green here, and the yellow is what was removed (4MB) for RR, although the RR iteration adds some other interconnect glue in place, so it's not a 1:1 area reduction.

Summit Ridge quad (green: 8MB L3 complex, yellow: 4MB L3)

Raven Ridge quad (green 4MB L3 complex, yellow glue)
 
Going from PS360 to PS4P/XBX, have we seen anything but graphics improvements?
I can't think of anything the 360/PS3 couldn't do by cutting down densities. RAM was a huge limiter, but we're probably not getting anywhere near the 512 MB to 8 GB jump next gen.

Maybe AAA developers are not so imaginative, or don't want to bet on emergent ideas at current dev costs, or the AAA market is mostly interested in a narrow set of genres, but I see future console generations as more of the same with better graphics.
 
Yes, but the number of 60fps games is increasing this gen. Whatever the urban legends say, we have more 60fps games this gen than the previous gen.
 
They choose to be, right? It's ultimately a choice ;) If developers want to push the graphics envelope, you can't stop them. They'll aim for 30fps to get the most out of the GPU.
 
That's great, please keep pushing the envelope. Meanwhile, include a lower-res performance mode that actually works. If that takes a higher ratio of die space going to the CPU in the first iteration of the next gen, I think gamers would respond favourably. If it can be done in other ways, even better.
 

Thanks.

I ask because the next generation will be interesting, especially from Sony, because Microsoft have already made clear their commitment to backwards compatibility.

Hopefully Sony will do the same and pursue backwards compatibility. If they do, it will enable developers to target the PS4/Pro and also get their game on the PS5, running better. Roughly speaking, if it's 16 TF and has an 8-thread Zen CPU clocked at 3.2GHz, it can run a PS4's 1080p30 game at 4K60.

That might be enough to set a standard.
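The 16 TF figure roughly lines up with naive linear scaling from the base PS4. A back-of-envelope check only: the 1.84 TF base-PS4 number is the commonly quoted spec, real performance does not scale purely with TF, and starting from a Pro-class 4.2 TF baseline the same math would demand ~33.6 TF, so linear scaling is a loose guide at best.

```python
# Naive TF scaling check for the 1080p30 -> 4K60 claim above.
# Assumes GPU cost scales linearly with pixels/sec, which is optimistic.

PS4_TF = 1.84                                 # base PS4, commonly quoted spec
PIXEL_SCALE = (3840 * 2160) / (1920 * 1080)   # 4.0x the pixels
FPS_SCALE = 60 / 30                           # 2.0x the frames

required_tf = PS4_TF * PIXEL_SCALE * FPS_SCALE
print(required_tf)  # 14.72 -> within a 16 TF budget
```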
 