Predict: Next gen console tech (9th iteration and 10th iteration edition) [2014 - 2017]

Do they have anything competitive now or in the pipeline that would be fit for a console?
They talked about a GT7900 last year or so. Things would probably change by the time 10nmFF arrived, of course.

What might be more interesting is FP16/FP32 shading (e.g. Tegra X1/Pascal, besides the PVR stuff). Not sure what the die area costs are, but I thought @sebbbi discussed a bit about FP16 being fine for a lot of things, so maybe we'd get an intermediate trade-off of sorts.

e.g. 10nm (2 node jumps) -> ~4x current gen FP32 ALUs -> ~8x FP16.
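For illustration, that back-of-envelope scaling can be written out. The baseline TFLOPS figure and the doubling-per-node assumption here are mine, not from any roadmap:

```python
# Sketch of the scaling argument above. Assumptions (mine, not vendor data):
# each full node jump roughly doubles the FP32 ALU budget at the same area,
# and FP16 runs at 2x the FP32 rate on hardware with double-rate half
# precision (as on Tegra X1 and some Pascal parts).

def projected_throughput(base_fp32_tflops, node_jumps, double_rate_fp16=True):
    """Return (FP32, FP16) TFLOPS after a given number of node jumps."""
    fp32 = base_fp32_tflops * (2 ** node_jumps)
    fp16 = fp32 * 2 if double_rate_fp16 else fp32
    return fp32, fp16

# e.g. a PS4-class ~1.84 TFLOPS FP32 baseline, two node jumps to 10nm:
fp32, fp16 = projected_throughput(1.84, 2)
print(fp32, fp16)  # ~4x current gen in FP32, ~8x in FP16
```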
 
Doubt it. Where would they get the GPU from? Nvidia might have something ARM-based for consoles in the future, but right now AMD is the only solution I can see, especially if Zen is competent.

AMD can make a chip with an ARM CPU and a GCN/Polaris GPU that follows the HSA specs.
In my opinion, the way this chip works and how you would develop for it would not be much different from the same chip with x86 cores: same GPU, same API (similar to DX12 or Vulkan), same OS (i.e. Windows or BSD-derived), etc.

Doesn't keep me from feeling the consoles would rather have x86 Zen cores (four? six? five and a spare?).
Funnily enough, x86 is also where you find really high performance per watt: Sandy Bridge, Haswell, Skylake, and then Zen somewhere along that line. x86 servers actually have higher performance per watt than ARM servers, although we've not yet seen how Cortex-A72, K12, etc. behave.
Performance-oriented ARM would likely be useful to save silicon area, perhaps with lower watts at the cost of lower performance. Nintendo would likely go with AMD + ARM; for MS and Sony we don't know, but we seem to expect x86.

Crazy option : use both x86 and a quad-core ARM that does "apps", OS GUI, background stuff.
 
Funny, for some reason everyone seems to be forgetting the Opteron A1100. AMD has an ARM core now too, people. Granted, it's not very competitive, but still.

One that is also HSA compliant and could just as easily be put into an APU if there are appetizing billions of console dollars in revenue waiting for it.

But there's no way ARM ends up in consoles. As someone already mentioned in the Zen discussion, in terms of raw performance and power nothing comes close to x86. And no amount of multi-threading will make a difference if the real-time throughput per thread is lower.

Anyway, 8 cores are more than enough: 1 audio, 1 network, 1 system (OS, kernel, I/O), 2 scripting, 2 game engine, 1 graphics. No?
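Just to make that budget concrete, here is a toy sketch of such a static core assignment. The task names and counts follow the post; a real engine would of course schedule work far more dynamically:

```python
# Toy static core budget for a hypothetical 8-core console CPU.
# Task names and counts are from the post above; purely illustrative.
core_budget = {
    "audio": 1,
    "network": 1,
    "system (OS, kernel, I/O)": 1,
    "scripting": 2,
    "game engine": 2,
    "graphics": 1,
}

def assign_cores(budget):
    """Map each subsystem to a list of dedicated core IDs, in order."""
    assignment, next_core = {}, 0
    for task, count in budget.items():
        assignment[task] = list(range(next_core, next_core + count))
        next_core += count
    return assignment

assert sum(core_budget.values()) == 8  # the post's budget fills 8 cores
print(assign_cores(core_budget))       # e.g. scripting gets cores 3 and 4
```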

Besides, how are you going to have x86 and ARM work together in the same system? How will the OS behave and interact across two architectures? CISC and RISC, no less?
 
Funny, for some reason everyone seems to be forgetting the Opteron A1100. AMD has an ARM core now too, people. Granted, it's not very competitive, but still.
AMD currently has a late ARM chip. It doesn't have an ARM core of its own: the A1100 uses standard A57 cores, in an implementation that is iffy in power and performance relative to the ARM cores on newer revisions and nodes.

K12 is the proposed AMD ARM core, if it should come to pass. It is something of a Zen sibling, so it's not clear how K12 would differentiate itself from the backward-compatible AMD option.
 
All parties will likely be sorely tempted by ARM this go round. It was close this past time.
I don't believe that will happen. Last time they did not want to stay on PowerPC, so they took a really hard look at all the options, and the best architecture for an easy-to-use and powerful home console won.
 
IBM was not interested in making high-performance consumer CPUs and still isn't.
Making a cut-down "big iron" CPU is a technical possibility (recall past speculation about PS4 / Xbox One having POWER7 cores). That is what they once did, and it was known as the PowerPC G5. That ended up as a leaky watercooling problem on some Macs, with no laptops and no upgrade path for Apple.

The other option, barring the one-time Cell PPE (a variant of which is in the Xbox 360), is a small and slow CPU as seen in the Wii U.

Then we want a unified CPU + GPU chip, which IBM is even less interested in. That'd be a huge undertaking (a POWER + Nvidia APU?) for a single console product.
ARM, on the other hand, is where you go for custom integrated designs. You can get an ARM CPU with a GPU from AMD, Nvidia, PowerVR, Qualcomm, Vivante, etc. High-performance ARM cores have been designed.
But next-gen consoles will differentiate better from the current consoles if they get a much faster CPU than the current Jaguar ones :)
 
In my opinion, only AMD and NVIDIA have proven console-level GPUs. I don't think either Sony or MS would go with another company's GPU. Sure, they may look fine on paper, but those other companies haven't shipped successful consumer GPUs, and that would be a risk.

I think the only way ARM ends up in the consoles is if NVIDIA makes the chip. If it's AMD, might as well stay with x86. ARM might have advantages in a smaller form factor, but the console die area and power consumption will be dominated by the GPU so whether the CPU is x86 or ARM, I think the differences in power and area will be in the noise.
 
I don't believe that will happen. Last time they did not want to stay on PowerPC, so they took a really hard look at all the options, and the best architecture for an easy-to-use and powerful home console won.

I just feel like it will.

http://www.forbes.com/sites/patrick...and-sony-chose-amd-for-consoles/#59e0d22d9559

My sources have confirmed for me that both Sony and Microsoft felt that MIPS didn’t have the right size developer ecosystem or the horsepower to power the new consoles. Then it came down to ARM versus X86 architecture. I am told there was a technical “bake-off”, where prototype silicon was tested against each other across a myriad of application-based and synthetic benchmarks. At the end of the bake-off, ARM was deemed as not having the right kind of horsepower and that its 64-bit architecture wasn’t ready soon enough. 64-bit was important as it maximized memory addressability, and the next gen console needed to run multiple apps, operating systems and hypervisors. ARM-based architectures will soon get as powerful as AMD’s Jaguar cores, but not when Sony or Microsoft needed them for their new consoles.

So I feel like it was close last time and could be a shoo-in this time. It could also be paired with something from Nvidia for the GPU.

Also worth noting: when this has come up, Sebbi has been very bullish on it. He mentions the sheer overwhelming number of ARM CPUs out there, so people are familiar with the architecture, etc.

Of course, the boring tried-and-true path is more all-AMD SoCs, and that could certainly happen. AMD seems to be hurting so badly they may be willing to cut almost any deal to stay in consoles and help stay afloat.

In my opinion, only AMD and NVIDIA have proven console-level GPUs.

And Intel is probably there already with IGPs. From a mobile vendor? Not really sure. I guess maybe not.
 
And Intel is probably there already with IGPs. From a mobile vendor? Not really sure. I guess maybe not.
Intel is probably only as competitive as it is due to the fact that they're 2 full nodes ahead on manufacturing process and have eDRAM. I expect AMD's 14nm APUs to leave Skylake+4e in the dust if they figure out a way to feed them enough bandwidth (eDRAM/eSRAM or HBM).
 
TDP isn't an issue. They can (and I'm pretty sure do) fix the clockspeed during a game and get a known heat output, which the custom cooling solution can be engineered to deal with. The only way you'd have/want throttling is if your cooling solution can't deal with max heat. This happens in ultrabooks and mobiles because there isn't room for a decent cooling solution, but not consoles.

Yeah, for the 360 S, MS went nuts with the thermal testing. They showed in one of their presentations about the SoC (Hot Chips, I think it was) that they'd tested the chip and its cooling using a power virus running on both the CPU and GPU components simultaneously.

So I am a bit surprised to hear this; I thought one of the main problems of the old gen was heat issues, like the red ring of death on the 360. I always assumed that no one would practically want a system pulling more than 200W, given the risks/cost of supporting components like the PSU and a more expensive cooling system.

But no matter; even discounting that and focusing only on the cost side of things, I assume the CPU is still going to be the junior partner in any next-gen SoC, so there will be an incentive to reduce the die area it occupies. ARM currently says you could fit four Cortex-A72s + 2MB of L2 cache in the same area as a single Broadwell core + 256kB of L2. I just look at the state of the CPU in PC gaming at the moment and assume it will still be largely unimportant in the grand scheme of future designs focused on gaming. I would expect to be able to clock smaller cores higher than larger cores within a console anyway, to make up some of the dearth of single-thread performance.

Who knows, maybe someone will surprise us all and go with a big.LITTLE-style asymmetric processor, with 2-4 larger cores focused on single-thread performance backed up by 4-8 smaller cores.

Does anyone have a link to core utilisation tests in games? I am wondering how a well-threaded game like Crysis loads its cores, i.e. whether more than 2 cores are heavily loaded, etc.
 
The real issue on X360 and PS3 was the swap over to lead-free solder. It wasn't solely heat related.
 
So I am a bit surprised to hear this; I thought one of the main problems of the old gen was heat issues, like the red ring of death on the 360. I always assumed that no one would practically want a system pulling more than 200W, given the risks/cost of supporting components like the PSU and a more expensive cooling system.
Heat was only an issue at the start of last gen because of the enforced change to lead-free solder, with no one really knowing what its limits and behaviour were. Had solder been allowed to remain lead-based, there'd have been no issues. And once the manufacturers got to grips with the solder, it wasn't (and isn't) an issue.

I would expect to be able to clock smaller cores higher than larger cores within a console anyhow to make up some for the dearth of single thread performance.
Why? The highest clocked CPUs available are also the largest cores while current small console CPUs are clocked pretty low.
 
It looks like Zen will support a superset of the instructions that Jaguar supports. It supports some things, like FMA, that Jaguar does not, and drops a number of extensions that were introduced for Bulldozer but that Jaguar never got.
It seems like it should work in terms of instructions, even if existing code turns out to be less optimal for the changed CPU organization.
 
This is a three-week-old article, but it's relevant here. Samsung is mass-producing HBM2 (4GB stacks). They'll do 8GB stacks later this year.

http://www.extremetech.com/extreme/...ass-production-of-next-generation-hbm2-memory

So 32GB of HBM2 in GPUs by the end of the year is looking good, IMO. Can we realistically count on at least 64GB of HBM2 for a 2020 release, with 16GB stacks thanks to 10nm instead of 16nm (supposedly used for HBM2 currently)?

Now for my bold 128GB upper prediction: what would be the easiest way to do that? Are 8 stacks of 16GB each doable with a 10nm process? Can they put HBM2 stacks on both sides of the interposer, similar to the GDDR5 RAM on the PS4?

They could limit the devkits to 128GB during the first ~2 years. It's a strategy that worked pretty well on PS4.

EDIT: Hmm, I just looked at the Samsung diagram. It's going to be hard to put HBM2 stacks on both sides of the board...
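The capacity combinations being juggled above are just multiplication, but for the record (the stack counts and per-stack sizes are this post's speculation, not announced parts):

```python
# HBM2 capacity arithmetic for the speculative configurations above.
def total_capacity_gb(stacks, gb_per_stack):
    return stacks * gb_per_stack

print(total_capacity_gb(4, 8))    # 32 GB: four of this year's 8GB stacks
print(total_capacity_gb(4, 16))   # 64 GB: four hoped-for 16GB stacks
print(total_capacity_gb(8, 16))   # 128 GB upper prediction: eight 16GB stacks
```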
 
I wonder how much die area is required for each HBM memory controller on the SoC itself. The first HBM products coming out right now have a large die, very high-end stuff, and they have only 4 stacks.

I was having fun thinking of a 6-stack configuration, putting 3 stacks on each side of a rectangular die.
Let's say a 1 mm gap between dies.
SoC : 23mm x 16mm (368mm2)
HBM : 5mm x 7mm

The entire interposer could be a reasonable 23mm x 28mm?

Hey, I'm an armchair engineer. I can only do the tetris part. :oops:
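The tetris does check out, for what it's worth. Here is the arithmetic, using the post's idealized 5x7mm stack size and 1mm gaps:

```python
# Checking the armchair-tetris layout above: a 23x16mm SoC with three
# 5x7mm HBM stacks along each long edge, with 1mm gaps everywhere.
soc_w, soc_h = 23, 16   # mm
hbm_w, hbm_h = 5, 7     # mm (idealized; real stacks are slightly larger)
gap = 1                 # mm

# Three stacks side by side along the 23mm edge: 3 * 7mm + two 1mm gaps.
row_len = 3 * hbm_h + 2 * gap
assert row_len == soc_w  # exactly 23mm, a neat fit

# Interposer height: stack row + gap + SoC + gap + stack row.
total_h = hbm_w + gap + soc_h + gap + hbm_w
print(soc_w, "x", total_h, "mm interposer")  # 23 x 28 mm, as proposed
```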
 
HBM is 5.48x7.29mm, which is a bit larger. There may be other spacing requirements besides the package size, since the stacks have a fair amount of open space around them on Fiji. This might be related to uncertainty over the eventual package size for the tech, as HBM2 is significantly larger, or there may be other constraints that require more space beyond the size of the stacks themselves.

HBM2 is 7.75x11.87mm, which makes the math less favorable.
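Plugging in those dimensions shows just how much less favorable:

```python
# Stack footprint comparison using the dimensions quoted above (mm).
hbm1_area = 5.48 * 7.29    # ~39.9 mm^2 per HBM1 stack
hbm2_area = 7.75 * 11.87   # ~92.0 mm^2 per HBM2 stack
print(round(hbm2_area / hbm1_area, 2))  # HBM2 stacks take ~2.3x the area
```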
 