24 TF
48 GB of GDDR5 memory or less with equivalent bandwidth
3TB HDD
512-bit bus?
(16Gbps * 512 / 8 = 1024 GB/s)
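For reference, that bandwidth figure works out as follows (a minimal sketch; note that 16 Gbps per pin is a GDDR6-class speed rather than anything stock GDDR5 ships at):

```python
# Peak memory bandwidth: per-pin rate (Gbps) * bus width (bits) / 8 bits per byte.
def peak_bandwidth_gbs(pin_rate_gbps, bus_width_bits):
    return pin_rate_gbps * bus_width_bits / 8

print(peak_bandwidth_gbs(16, 512))  # 1024.0 (GB/s)
```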
hm...
Why? PS4 could do clamshell with GDDR5...
Not only that, but it would need at least 16 chips in non-clamshell mode, like the original Xbone did with DDR3.
Doesn't look like a good place to start, for a console.
Machine learning, neural nets, and AI are the buzzwords used throughout technology today.
Phone makers are touting custom ML chips.
Will the next generation of consoles get in on this game? Of course, game developers have been touting AI in games forever.
Will they tout persistent AI, which learns your behavior and adjusts, across game sessions?
Using clamshell halves each chip's width. Where a single GDDR5 chip provides a 32-bit interface, two chips in clamshell can only drive 16 bits each.
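A quick sketch of the trade-off (channel count and chip capacity here are hypothetical):

```python
# Clamshell mode: two chips share one 32-bit channel, each driving 16 bits.
# The total bus width is unchanged; capacity on that bus doubles.
def memory_config(channels_32bit, chip_gb, clamshell):
    chips = channels_32bit * (2 if clamshell else 1)
    bus_bits = channels_32bit * 32
    return chips, bus_bits, chips * chip_gb

print(memory_config(8, 1, clamshell=False))  # (8, 256, 8)
print(memory_config(8, 1, clamshell=True))   # (16, 256, 16)
```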
I think we will be sticking with 4K in the mainstream for the foreseeable future, ten years or more. TV makers will try to sell 8K but do consumers care?
I was lazy in my predictions, so I just 4x'd everything the Xbox One X is.
With GDDR6 widely available during 2018 at an initial capacity of 2GB per chip, who's going to use GDDR5 in 2020?
Besides, GDDR5 is probably going to top out at 1GB per chip, with further capacity advances going only to GDDR6.
No one would want 48 chips of GDDR5 in a console, or a 768-bit bus even if they used clamshell.
The power consumption for the memory alone would be enormous, let alone the PCB area and sheer number of PCB layers.
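The arithmetic behind that objection, assuming GDDR5 tops out at 1GB per chip:

```python
# 48 GB of 1 GB chips needs 48 packages.
# Non-clamshell: a 32-bit channel per chip -> 1536-bit bus.
# Clamshell: two chips per channel -> still a 768-bit bus.
target_gb, chip_gb = 48, 1
chips = target_gb // chip_gb
bus_normal = chips * 32
bus_clamshell = (chips // 2) * 32
print(chips, bus_normal, bus_clamshell)  # 48 1536 768
```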
I could easily see the GPU doing 12 TF FP32 / 24 TF FP16 / 48 TOPS UINT8, with lots of stuff using neural networks.
24 TF FP32 seems like a bit too much. That's as big as two Vega 10s, meaning around 25B transistors, so we're looking at 400-500mm^2 just for the GPU portion even at 7nm.
2019/2020 is when most foundries will transition to 7nm EUV, so 5nm won't reach mass production until late 2021 to 2022 (or later).
A GPU of that magnitude in a console APU doesn't sound realistic, unless you think gen9 won't release within the next 5 years or so.
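The 12/24/48 split above is just packed-math rate doubling at each halving of precision. A hypothetical shader configuration that lands near those numbers (CU count and clock are illustrative, not a prediction):

```python
# FP32 FLOPS = 2 ops (FMA) * total lanes * clock.
cus, lanes_per_cu, clock_ghz = 64, 64, 1.465
fp32_tf = 2 * cus * lanes_per_cu * clock_ghz / 1000  # ~12.0 TF
fp16_tf = fp32_tf * 2    # 2x rate with packed FP16
int8_tops = fp32_tf * 4  # 4x rate with packed INT8 dot products
print(round(fp32_tf, 1), round(fp16_tf, 1), round(int8_tops, 1))  # 12.0 24.0 48.0
```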
Not just marketing, but also a development foundation. Devs are used to that kind of setup.
Consumers maybe don't care and can barely notice the difference, but that's what's out there as mainstream now.
For the most part I don't think they care about 4K that much. Do we know what adoption rate 4K is at?
If we continue the small-CPU, very-big-GPU approach, which nets the most results, then unified memory is a must. The CPU will continually offload more of its traditional work onto the GPU for processing.
I think the next few consoles will likely target 4K as an upper limit, including the PS6/Xbox XXX in the mid-to-late 2020s. I for one will be glad if the resolution wars end, or at least pause, for the foreseeable future.
Something I'm curious about is how much RAM is really needed to saturate a 4K (~8.3 million pixel) screen. Maybe a split pool makes sense. At some point, transfer speed could become more important than capacity. Historically, a follow-up console has had 10x or more the RAM, but I don't think we'll see anywhere near that jump this time.
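For scale, the raw framebuffer arithmetic at 4K is tiny next to any plausible RAM pool (a rough sketch; real frames add depth buffers, G-buffers, HDR formats, and more):

```python
width, height = 3840, 2160
bytes_per_pixel = 4  # 8-bit RGBA
frame_mib = width * height * bytes_per_pixel / 2**20
print(round(frame_mib, 1))  # 31.6 (MiB per buffer)
```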
GPU portions of current gen consoles are already over twice the size of the CPU portions (counting the eSRAM as part of the GPU portion on Xbox One).
But MS/Sony/AMD has shown that they prioritize GPU over CPU. Somehow TFLOPS became a marketing bullet point this gen. It's literally on the box of the XBox X. If things stay the same, the GPU will be about twice the die size as the CPU next gen. We're probably getting a Threadsipper instead of a Threadripper.
I wonder why so many people are sticking to unified memory using motherboard-mounted GDDR chips in their predictions.
Memory makers are shutting down GDDR production lines to make way for DDR4 and LPDDR4 due to overwhelming demand, and the only other production lines I've heard of being ramped up are for HBM, thanks to its good price per stack.
There's really no sign in the industry pointing to lower GDDR prices within the next 3-4 years IMO, so why would the console makers use it instead of HBM, or HBM plus DDR4/LPDDR4?
HBM remains a high-risk product. There are more and more indications that GDDR6 exists precisely because HBM failed to deliver on time, price, speed, and yield.
People buy new phones every year just because they're new. People care. Not that I disagree that 8K would be a huge waste; I wouldn't expect to see console games at 8K until a PS5 Pro, if that's a thing.