Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

I'd be curious to see how CPU resources are split between audio, graphics, animation, AI, physics, etc. on a game like Assassin's Creed. That's how you'd get a sense of how much the CPU would need to scale to hit 60fps.
 
I wonder why so many people are sticking to unified memory using motherboard-mounted GDDR chips in their predictions.
Memory makers are shutting down GDDR production lines to make way for DDR4 and LPDDR4 due to overwhelming demand, and the only other production lines I've heard of being ramped up are for HBM, due to its good price-per-stack.
There's really no sign in the industry pointing to lower GDDR prices within the next 3-4 years IMO, so why would the console makers use it instead of HBM, or HBM+DDR4/LPDDR4?


24 TF
48 GB of GDDR5 memory or less with equivalent bandwidth
3TB HDD

With GDDR6 widely available during 2018 with an initial capacity of 2GB/chip, who's going to use GDDR5 in 2020?
Besides, GDDR5 is probably going to top out at 1GB per chip, with further capacity advances only going to GDDR6.
No one would want 48 chips of GDDR5 in a console, or a 768-bit bus even if they use clamshell.
The power consumption for the memory alone would be enormous, let alone the PCB area and sheer number of PCB layers.

I could easily see the GPU doing 12 TF FP32 / 24 TF FP16 / 48 TOPS UINT8, with lots of stuff being done using neural networks.
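A minimal back-of-the-envelope sketch of how those three figures relate, assuming a Vega-style part where packed FP16 runs at twice the FP32 rate and 4-wide INT8 dot products at four times it (the shader count and clock below are purely illustrative):

```python
# Peak-rate sketch for a hypothetical GPU (all figures illustrative).
# An FMA counts as 2 ops per shader per clock; packed FP16 doubles the FP32
# rate and 4-wide INT8 dot products quadruple it (Vega-style ratios).
shaders = 4096        # hypothetical shader/ALU count
clock_ghz = 1.47      # hypothetical clock speed

fp32_tflops = 2 * shaders * clock_ghz / 1000   # ~12.0 TF
fp16_tflops = 2 * fp32_tflops                  # ~24.1 TF
int8_tops = 4 * fp32_tflops                    # ~48.2 TOPS

print(f"{fp32_tflops:.1f} TF FP32 / {fp16_tflops:.1f} TF FP16 / {int8_tops:.1f} TOPS INT8")
```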

24 TF FP32 seems like a bit too much. That's as big as two Vega 10s, meaning around 25B transistors, so we're looking at 400-500mm^2 just for the GPU part if using 7nm.
2019/2020 is when most foundries will transition to 7nm EUV, so 5nm won't reach mass production until late 2021-2022 (or later).

A GPU of that magnitude in a console APU doesn't sound realistic, unless you think gen9 won't release within the next 5 years or so.




512-bit bus?
(16Gbps*512/8)

hm...
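Spelled out, that's just per-pin data rate times bus width divided by 8, as in this minimal sketch (16 Gbps being the assumed GDDR6 speed above):

```python
def peak_bandwidth_gbs(pin_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin data rate (Gbps) x bus width (bits) / 8."""
    return pin_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(16, 512))  # 1024.0 GB/s for 16 Gbps chips on a 512-bit bus
```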

Not only that, but it would need at least 16 chips in non-clamshell mode, like the original Xbone did with DDR3.

[image: console motherboard with 16 memory chips surrounding the SoC]



Doesn't look like a good place to start, for a console.
It looks even worse when almost 1TB/s HBM2 looks like this:

[image: GPU package with HBM2 stacks on an interposer]
 
Oddball question: is there a specific technical path or improvement that, tacked onto a PS5, would help current PSVR games? Kinda like how a PS4 Pro could help existing PS4 games, is there a path to improve PSVR?

I ask because I think Sony will want to double down on the PSVR, and if the current headset is “good enough”, assuming there’s enough power behind it, they won’t want to upgrade it for a while. Cost-reducing it alongside an even more powerful PS5 might be enough to extend its life and reach.

Just a curiosity.
 
Machine learning, neural nets, AI are the buzzwords used throughout technology today.

Phones are touting having custom ML chips.

Will the next generation of consoles get in on this game? Of course, game developers have been touting AI in games forever.

Will they tout persistent AI that learns your behavior and adjusts across game sessions?
 
Machine learning, neural nets, AI are the buzzwords used throughout technology today.

Phones are touting having custom ML chips.

Will the next generation of consoles get in on this game? Of course, game developers have been touting AI in games forever.

Will they tout persistent AI that learns your behavior and adjusts across game sessions?

Whenever I hear somebody call for better AI in video games I can't help but think they never played against the DarkSim bots in Perfect Dark...
 
Why? The PS4 could do clamshell with GDDR5.
Using clamshell halves each chip's I/O width: where a single GDDR5 chip provides a 32-bit interface, two chips in clamshell can each only drive 16 bits.
So for a 512-bit total width using GDDR5 chips, you'd always need at least 512/32 = 16 chips. Using them in clamshell is only useful for increasing memory capacity, not bus width.


The PS4 used clamshell mode to achieve 8GB total because, at the time, the largest GDDR5 chips were 4Gbit (512MB), so 8GB needed 16 chips on its 256-bit bus.
The PS4 could use 8+8 clamshell chips while the Xbone had to spend PCB area on 16 non-clamshell chips, because GDDR5 in clamshell does 16bit+16bit while the very old DDR3 does either 16bit single or 8bit+8bit clamshell.
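As a rough sketch of that chip-count arithmetic (the helper is just illustrative, using the per-chip I/O widths described above):

```python
def chips_needed(bus_width_bits: int, chip_io_bits: int, clamshell: bool = False) -> int:
    """Minimum DRAM chip count for a given total bus width.

    In clamshell mode two chips share each channel, each driving half the I/O
    pins, so the chip count doubles while the bus width stays the same."""
    channels = bus_width_bits // chip_io_bits
    return channels * (2 if clamshell else 1)

# 32-bit GDDR5 chips on the PS4's 256-bit bus: 8 normally, 16 (8+8) in clamshell
print(chips_needed(256, 32))                   # 8
print(chips_needed(256, 32, clamshell=True))   # 16
# A 512-bit GDDR5 bus needs at least 512/32 = 16 chips even without clamshell
print(chips_needed(512, 32))                   # 16
```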
 
It's unlikely consoles would use a >256-bit bus with GDDR6. The Xbox One X only had to because it needs the bandwidth to emulate ESRAM; a 256-bit bus + high frequency is much cheaper.
I think Sony at least could go for custom low-cost HBM + 4GB of LPDDR4 for system use via a PCI Express-connected ARM coprocessor/IO bridge.
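For comparison, running the same rate x width / 8 arithmetic on a 256-bit GDDR6 bus (14-16 Gbps being the announced GDDR6 speed grades, with the Xbox One X's published setup as a reference point):

```python
# Peak bandwidth in GB/s = data rate (Gbps) x bus width (bits) / 8
print(14 * 256 / 8)    # 448.0 GB/s - 256-bit GDDR6 at 14 Gbps
print(16 * 256 / 8)    # 512.0 GB/s - 256-bit GDDR6 at 16 Gbps
print(6.8 * 384 / 8)   # 326.4 GB/s - Xbox One X, 384-bit GDDR5 at 6.8 Gbps
```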
 
I think we will be sticking with 4K in the mainstream for the foreseeable future, ten years or more. TV makers will try to sell 8K but do consumers care?

For the most part I don't think they care about 4K that much. Do we know what adoption rate 4K is at?
 
With GDDR6 widely available during 2018 with an initial capacity of 2GB/chip, who's going to use GDDR5 in 2020?
Besides, GDDR5 is probably going to top out at 1GB per chip, with further capacity advances only going to GDDR6.
No one would want 48 chips of GDDR5 in a console, or a 768-bit bus even if they use clamshell.
The power consumption for the memory alone would be enormous, let alone the PCB area and sheer number of PCB layers.

I could easily see the GPU doing 12 TF FP32 / 24 TF FP16 / 48 TOPS UINT8, with lots of stuff being done using neural networks.

24 TF FP32 seems like a bit too much. That's as big as two Vega 10s, meaning around 25B transistors, so we're looking at 400-500mm^2 just for the GPU part if using 7nm.
2019/2020 is when most foundries will transition to 7nm EUV, so 5nm won't reach mass production until late 2021-2022 (or later).

A GPU of that magnitude in a console APU doesn't sound realistic, unless you think gen9 won't release within the next 5 years or so.
I was lazy in my predictions, so I just 4x'd everything the Xbox One X is.
  • 24TF is 4x the amount, yes ;) We need it to really be a generational difference from Scorpio. This is the point at which 4K becomes a normal resolution, and this new console needs to focus on lighting, shadows, and other newer techniques to really draw out the next generation of graphics.
  • I should have said 1200 GB/s of bandwidth as opposed to 48GB of GDDR5 (roughly 4x the 1X's bandwidth; see the sketch below).
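For what it's worth, here's the "4x the Xbox One X" scaling spelled out against the X's published specs (6 TF, 326.4 GB/s, 12 GB); a straight 4x actually lands a little above the 1200 GB/s figure:

```python
# "Just 4x everything the Xbox One X is" - scaling the X's published specs
xbox_one_x = {"fp32_tflops": 6.0, "bandwidth_gbs": 326.4, "memory_gb": 12}
four_x = {spec: 4 * value for spec, value in xbox_one_x.items()}
print(four_x)  # {'fp32_tflops': 24.0, 'bandwidth_gbs': 1305.6, 'memory_gb': 48}
```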

It's huge! I know! I don't know what the timing of the next consoles will be, or at which price point. But I feel anything less than this will not feel like a big upgrade over the 1X. And this is important because when we look at dropping to 30fps, checkerboarding, and pushing other kinds of savings, 6TF can be stretched rather far.

So my feeling is that 4Pro and 1X can hold the fort while the mainstream transitions to 4K sets. But once 4K is mainstream, then we need a mainstream console that supports 4K and has the power to push next gen visuals.

There might be other ways to lay out the chip or separate the GPU/CPU in the future while still supporting features like hUMA, etc.
 
But MS/Sony/AMD have shown that they prioritize the GPU over the CPU. Somehow TFLOPS became a marketing bullet point this gen.
Not just marketing, but also a development foundation. Devs are used to that kind of a setup.

Tim Sweeney from Epic has recently [in the last year or so] said that he expects the trend of small CPUs and large GPUs to continue to be a central focus in console gaming.
 
For the most part I don't think they care about 4K that much. Do we know what adoption rate 4K is at?
Consumers maybe don't care and can barely notice the difference, but that's what's out there as mainstream now.

I think the next few consoles will likely target 4K as an upper limit, including PS6/XBox XXX in the mid-to-late 2020s. I for one will be glad if the resolution wars are over, or at least pause, for the foreseeable future.

Something I'm curious about is how much RAM is really needed to saturate a 4K (~8.3 million pixel) screen. Maybe a split pool makes sense. At some point, transfer speed could become more important than capacity. Historically, a follow-up console has had 10x or more RAM, but I don't think we'll see anywhere near that amount this time.
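For a rough sense of scale, here's a sketch of how little of that RAM the 4K render targets themselves actually need (the 20 bytes per pixel for a deferred G-buffer is just an assumed figure); the bulk of memory goes to textures and other assets, not the framebuffer:

```python
# Rough size of 4K render targets
width, height = 3840, 2160
pixels = width * height                  # 8,294,400 (~8.3 million)

rgba8_mib = pixels * 4 / 2**20           # one 32-bit colour target: ~31.6 MiB
gbuffer_mib = pixels * 20 / 2**20        # assumed 20-byte/pixel deferred G-buffer: ~158 MiB

print(f"{pixels:,} pixels; {rgba8_mib:.1f} MiB per RGBA8 target; ~{gbuffer_mib:.0f} MiB G-buffer")
```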
 
Consumers maybe don't care and can barely notice the difference, but that's what's out there as mainstream now.

I think the next few consoles will likely target 4K as an upper limit, including PS6/XBox XXX in the mid-to-late 2020s. I for one will be glad if the resolution wars are over, or at least pause, for the foreseeable future.

Something I'm curious about is how much RAM is really needed to saturate a 4K (~8.3 million pixel) screen. Maybe a split pool makes sense. At some point, transfer speed could become more important than capacity. Historically, a follow-up console has had 10x or more RAM, but I don't think we'll see anywhere near that amount this time.
If we continue with the small CPU / very big GPU approach, which nets the most results, then unified memory is a must. The CPU will continually offload more of its traditional work onto the GPU for processing.

The problem with increasing the CPU/GPU ratio is that only a few games will require that much more CPU. For everyone pushing the graphics envelope, 30fps is targeted, and you have all this silicon real estate tied up in a stronger CPU which could have been used to push more graphics.

I think it's entirely possible the ratio will get larger next gen. Just keep pushing more CPU work onto the GPU and build out special hardware or APIs to support that.
 
I think we will be sticking with 4K in the mainstream for the foreseeable future, ten years or more. TV makers will try to sell 8K but do consumers care?

But MS/Sony/AMD have shown that they prioritize the GPU over the CPU. Somehow TFLOPS became a marketing bullet point this gen. It's literally on the box of the Xbox One X. If things stay the same, the GPU will be about twice the die size of the CPU next gen. We're probably getting a Threadsipper instead of a Threadripper.
GPU portions of current-gen consoles are already over twice the size of the CPU portions (counting the eSRAM as part of the GPU portion on the Xbox One).
 
I wonder why so many people are sticking to unified memory using motherboard-mounted GDDR chips in their predictions.
Memory makers are shutting down GDDR production lines to make way for DDR4 and LPDDR4 due to overwhelming demand, and the only other production lines I've heard of being ramped up are for HBM, due to its good price-per-stack.
There's really no sign in the industry pointing to lower GDDR prices within the next 3-4 years IMO, so why would the console makers use it instead of HBM, or HBM+DDR4/LPDDR4?
HBM remains a high-risk product. There are more and more indications that the reason GDDR6 exists is precisely because HBM failed to deliver on time, price, speed, and yield.

Maybe it will improve in the next few years, but it's not looking good outside of high-end GPUs. If the PS5 is 20TF+ it will need HBM; otherwise, GDDR6 looks like a lower-cost and less risky proposition.
 
What if we were to compare the die size of something like a Ryzen 5 to that of a Vega 64? Sure, a GPU that large is out of the price range of a $400 console nowadays, but in 3-4 years it definitely wouldn't be.
 
I don't believe raw flops will keep scaling like they used to. The Xbox One X is pushing the console power envelope on mature 14nm at console pricing and is "only" reaching 6 TFLOPS. 7nm is not magically allowing a huge leap over that. A realistic expectation might be something like 10 TFLOPS FP32 / 20 TFLOPS FP16. Whatever improvements happen over that would have to be heavily architecture-related: more efficiency, accelerating new types of algorithms, and utilizing lower precision like FP16/INT8. Assuming a 399/499 price point, there are pretty limited brute-force gains next gen can have over the Xbox One X.

I'm wondering if next gen would benefit from inference and algorithms using "AI" to do processing. That could be a game changer or a complete dud. Exposing tiling explicitly to developers might also be one way to drive efficiency up.
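On the INT8 point, part of the appeal for inference is DP4A-style instructions: a 4-wide 8-bit dot product plus accumulate done as a single operation, which is how the same ALUs can retire roughly 4x the "ops" at 8-bit precision. A toy model of the operation (not any particular GPU's API):

```python
# DP4A-style operation: 4-element signed 8-bit dot product accumulated into a
# 32-bit integer, executed as one instruction on hardware that supports it.
def dp4a(a4, b4, acc):
    return acc + sum(x * y for x, y in zip(a4, b4))

print(dp4a([1, -2, 3, 4], [5, 6, -7, 8], acc=10))  # 10 + (5 - 12 - 21 + 32) = 14
```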
 
I think we will be sticking with 4K in the mainstream for the foreseeable future, ten years or more. TV makers will try to sell 8K but do consumers care?

But MS/Sony/AMD have shown that they prioritize the GPU over the CPU. Somehow TFLOPS became a marketing bullet point this gen. It's literally on the box of the Xbox One X. If things stay the same, the GPU will be about twice the die size of the CPU next gen. We're probably getting a Threadsipper instead of a Threadripper.
People buy new phones every year just because they're new. People care. Not that I disagree that 8K would be a huge waste. I wouldn't expect to see console games at 8K until the PS5 Pro, if that's a thing.

Wasn't the PS4's CPU already half the size of its GPU? The Wii U's CPU must've been like a fourth the size of its GPU lol.
 