Next Generation Hardware Speculation with a Technical Spin [2018]

Status
Not open for further replies.
Really? One of the touted features of chiplets is cost savings because you can use a larger percentage of the wafer thin mint.

Cost saving Vs a monolithic 64 core 7nm chip.

I wonder where the cost crossover point is: are two 8-core chiplets + an IO die cheaper than binning a monolithic 16-core die?
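The crossover can be sketched with a toy yield model. Below is a rough Python sketch using the classic Poisson defect-density approximation for die yield; all die areas, wafer prices, and the defect density are made-up illustrative numbers, not foundry data:

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # Standard approximation: gross dies minus edge loss.
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_poisson(die_area_mm2, defects_per_mm2):
    # Simple Poisson defect model: Y = e^(-A * D0).
    return math.exp(-die_area_mm2 * defects_per_mm2)

def cost_per_good_die(die_area_mm2, wafer_cost, defects_per_mm2):
    n = dies_per_wafer(die_area_mm2)
    y = yield_poisson(die_area_mm2, defects_per_mm2)
    return wafer_cost / (n * y)

# Illustrative numbers only: 7nm wafer ~$10k, mature 14nm wafer ~$4k,
# D0 = 0.2 defects/cm^2 = 0.002 defects/mm^2.
chiplet = cost_per_good_die(75, 10_000, 0.002)   # 8-core chiplet, ~75 mm^2
io_die  = cost_per_good_die(125, 4_000, 0.002)   # IO die on a mature node
mono    = cost_per_good_die(250, 10_000, 0.002)  # hypothetical monolithic 16-core

print(f"2x chiplet + IO die: ${2 * chiplet + io_die:.0f}")
print(f"monolithic 16-core : ${mono:.0f}")
```

With these (invented) parameters the two small dies plus a cheap IO die come out well ahead, because the monolithic die loses on both dies-per-wafer and yield; the crossover moves around a lot as D0 matures, which is presumably why the answer differs early vs. late in a node's life.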
 
An APU level solution wouldn't need such a beastly IO chip. You could maybe integrate those features needed into a GPU die if you still wanted to use a stock CPU chiplet.
 
Really? One of the touted features of chiplets is cost savings because you can use a larger percentage of the wafer thin mint.
Yes, but building a custom chiplet design, with wide enough buses between the GPU and CPU chiplets and the IO die, is a whole other matter than a high-margin server chip whose design can be reused across the whole CPU lineup.
At least that's what I believe anyway.

Also, if you used more than one GPU chiplet, things would quickly get far more complex.
 
All true, and it might not be any cheaper initially, but these are devices intended to sell for 6-10 years, in an era where mid-gen consoles have been established.

Just looking at the Zen 2 chiplet design, and its 14nm IO controller, I think a chiplet design would allow the consoles to take advantage of process and node improvements sooner and more cheaply, e.g. "plug in" some 5nm Zen 2 cores when they're cheap enough, and use a cheaper cooling solution.

And a PS4Pro "butterfly" design would probably be cheaper than ever to R&D.
 
I also think Sony and MS are going to use some form of Zen+Navi to emulate the base console with their "mini" iterations, rather than pay to port outmoded architectures to new nodes. A chiplet design could make that credibly cheap to maintain, being able to use the cheapest chiplets without having to redesign the entire APU.

And soon, common parlance will contain the phrase "as cheap as chiplets."
 
All true, and it might not be any cheaper initially, but these are devices intended to sell for 6-10 years, in an era where mid-gen consoles have been established.

Just looking at the Zen 2 chiplet design, and its 14nm IO controller, I think a chiplet design would allow the consoles to take advantage of process and node improvements sooner and more cheaply, e.g. "plug in" some 5nm Zen 2 cores when they're cheap enough, and use a cheaper cooling solution.

And a PS4Pro "butterfly" design would probably be cheaper than ever to R&D.

This is what I thought when thinking about a chiplet-designed console...
 
I thought the high density libraries were more a trade off of clock speed for a smaller size as they switched to a different metal stack to enable greater density.
The denser libraries did place a lower ceiling on the clocks for Excavator. Whether lower clocks or the physical customization are cause or effect isn't clear to me. The shift to bulk and the process tweaks made clock declines for the CPU likely, and once that was inevitable that could have made AMD focus the implementation more on density at the expense of clocks that weren't going to be competitive anyway.

The more intense engineering on a tailored node may have given it more weight against the more generic Jaguar platform, although by default the lower clock speeds and early SOC focus may have baked-in some level of density improvement versus the SOI-based Bulldozer cores.

Is this the case for games in general?
From what I've seen, double precision (DP) is not important for games, or for most client software in general.
 
What would be the challenges to create a next-gen machine using CPU & GPU chiplets?
I suppose right now it's a proven concept only for CPU levels of bandwidth, which seem to be fine with low-cost organic interposers?

GPUs need an order of magnitude more bandwidth; maybe it would require a completely different design. Instead of a central IO chip being the memory controller, each chiplet might need to serve a fraction of the memory, and the central chip would be a crossbar of sorts? Still, that multiplies the bandwidth which has to go through the interposer: each chiplet needs as much bandwidth to its section of memory as it needs to/from the central hub.

If they manage to use organic interposers in this very high bandwidth situation, maybe it solves the same issues as the long-promised HBM on organic interposers.

I'm also curious if right now the center chip is 14nm because its main purpose is to drive a lot of powerful external PHYs, making 7nm a waste of money, so it's a great cost saving to stay at 14nm.
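The "order of magnitude" point can be put in rough numbers. Here is a back-of-envelope Python sketch comparing a central-hub layout against the distributed memory-controller idea above; all bandwidth figures are illustrative assumptions, not vendor specs:

```python
# Back-of-envelope link budget for a hypothetical chiplet console.
# All figures are illustrative assumptions, not vendor data.

cpu_chiplet_bw = 50    # GB/s, roughly what an organic-substrate CPU link carries
gpu_mem_bw     = 450   # GB/s, a plausible GDDR6 target for a next-gen console GPU
n_gpu_chiplets = 4

# Hub topology: every byte crosses the package twice (chiplet -> hub -> memory),
# so the hub must terminate the full GPU bandwidth on both sides.
hub_io_bw = 2 * gpu_mem_bw

# Distributed topology: each chiplet owns 1/n of the memory, so only non-local
# traffic crosses the interposer. With uniformly spread accesses, a chiplet
# misses its local slice (n-1)/n of the time.
local_fraction = 1 / n_gpu_chiplets
cross_traffic  = gpu_mem_bw * (1 - local_fraction)

print(f"hub must switch      : {hub_io_bw} GB/s")
print(f"cross-chiplet traffic: {cross_traffic:.1f} GB/s")
print(f"vs one CPU-class link: {cpu_chiplet_bw} GB/s "
      f"({gpu_mem_bw / cpu_chiplet_bw:.0f}x more needed for the GPU)")
```

Even in the friendlier distributed case, the interposer still has to carry several hundred GB/s of cross-chiplet traffic, versus tens of GB/s for a CPU link, which is the gap the organic-interposer question hinges on.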
 
I suppose right now it's a proven concept only for CPU levels of bandwidth, which seem to be fine with low-cost organic interposers?

GPUs need an order of magnitude more bandwidth; maybe it would require a completely different design. Instead of a central IO chip being the memory controller, each chiplet might need to serve a fraction of the memory, and the central chip would be a crossbar of sorts? Still, that multiplies the bandwidth which has to go through the interposer: each chiplet needs as much bandwidth to its section of memory as it needs to/from the central hub.

If they manage to use organic interposers in this very high bandwidth situation, maybe it solves the same issues as the long-promised HBM on organic interposers.

I'm also curious if right now the center chip is 14nm because its main purpose is to drive a lot of powerful external PHYs, making 7nm a waste of money, so it's a great cost saving to stay at 14nm.

It's more complicated than that with several GPU chiplets. This is a diagram of how AMD thinks they could use several GPU chiplets and a CPU chiplet in the same MCM:

[Image: amd-chiplet-mcm-20180628.jpg — AMD's diagram of a multi-GPU-chiplet MCM]


and the article it came from https://spectrum.ieee.org/tech-talk...iplet-revolution-with-new-chip-network-scheme
 
I would like to see at least one of them take some risks and push the boat out, not just a bigger, faster APU than we have today based on Navi and Zen.
I think MS would have to be the one to take risks out of the two. If both consoles are close in power and price it's a Sony "win".

Chiplets (including GPU) and MCM may be easier in a closed environment like a console; RT hardware, a tensor-core equivalent for upscaling... it would just be nice if there were some fundamental differences.
 
Perhaps folks are overthinking things greatly for a $399 box.

Everyone wants massive amounts of power; whether those dreams come true we won't know yet. DF did a video about a Chinese PC with specs that could be the PS5's, with upgrades such as Navi etc.



It's a Windows 10 PC, but it's not too unrealistic as a next-gen console, with of course a more powerful GPU instead of just 4TF.

https://www.gamesradar.com/sony-ps5-release-date-news-specs-features/

Some good insight in there too, I think.

"Mark Cerny, PS4's lead architect, told Digital Foundry last year that the realistic limits for a next-gen console GPU would most likely top out at eight teraflops"

From that link.
 