Predict: The Next Generation Console Tech

I am just as confused... where did we get any confirmation that Sony is going for a SoC?

There aren't any such confirmations, but regarding nVidia, it was already rumored, what, a year ago that AMD has all the next-gen console deals sealed, and there are no rumors whatsoever of nV continuing with Sony
 
Assuming that to be the case (I could be wrong), I'd think the most cost effective solution would be to use an existing GCN design. Perhaps even a binned part that could also be used for PC GPUs with the "glue" being on the much smaller CPU...;)
First, I agree for the most part with your two posts, so I won't nitpick further.

I'm not sure I get this part, though. You suggest using an existing part, OK; I guess it would be Pitcairn. You suggest even a binned part could do the trick, why not.

I see two possible issues.
First, about "using an existing design is the most cost-effective solution": for example, Pitcairn is supposed to include 22 SIMDs, use a 256-bit bus, etc. Say the binned part chosen for the console is to Pitcairn what Barts was to Cypress, something like ~16 SIMDs on a 192-bit bus. AMD agrees with MS and plans at some point to produce such parts properly (not as salvage). So now AMD can sell both binned and proper parts to MS.
That sounds good, but there are quite some blind spots. How many Pitcairns does AMD think it can sell? So how many wafers does it plan on using? And from there, how many binned parts would be available for MS?
From there, the real question: is that quantity relevant to the kind of mass production MS envisions?
I don't know, but if the answer is "no" (that would be my bet, by the way), then there isn't much benefit for MS.
MS may get a rebate on a fraction of the production early in the product life cycle, but then they have to deal with two form factors. Having chips of different sizes may trigger production overhead/costs that offset the benefit made on the few binned parts; I don't know, just pointing out the possibility.

The second point is about the CPU and the "glue": do I understand properly that you're speaking of something akin to the Xenon/Xenos relationship, right?
So the GPU would act as the north bridge, right?
In that case, I don't know (FYI, my bet would be no, with high odds) if this is doable with standard parts. How would the GPU and the CPU talk to each other? Say the PCI Express connection is used as the link (again, I don't know if that's doable; I would bet no): the GPU memory controller still has to be aware that there is another client (the CPU) wanting to access RAM. Maybe it is already clever enough, maybe not, which would mean AMD has to add otherwise unnecessary hardware to its design.

Overall I'm not sure about this "existing part" possibility being a win.

You may be right, though, that a tiny CPU + an average GPU (acting like Xenon/Xenos, so the north bridge is on the GPU) is possibly a better choice than an average SoC + a tiny GPU.
You don't deal with a second GPU, but neither can you explicitly set both GPUs to work in parallel on various tasks (rendering or not), nor do you benefit from low-latency communication between the CPU and GPU (if there is a significant advantage to be reaped there).

To some extent, I still go back to what Tim Sweeney and Andrew Richards stated when interviewed by Charlie D.: they both saw at the time high benefits to be reaped from low-latency communication between the CPU and the GPU (so a SoC), and if there are to be two chips, they would prefer two of the same. It was worth noticing because it was one of the really few things they agreed on.

So if one SoC is not powerful enough (or too big), there is still a strong argument to use two, on both the technical and economic fronts.

EDIT
I'm having a tough time finding a proper way to express it, but I believe you may want to use an "old school" north bridge if you go with two SoCs.

EDIT 2
In fact, I could see a beefy north bridge made on a cheaper process "solving" quite a few bandwidth-related issues as well as production ones (the fact that IO doesn't "scale", so big buses can get in the way of price reductions for consoles). It may not be a super clear statement; I could give more concrete examples of what I'm thinking about.

EDIT 3
The more I think about it, the more I like my north bridge idea. It may solve some of the problems EDRAM solves (i.e. bandwidth and IO scalability), but for "cheaper".
 
There aren't any such confirmations, but regarding nVidia, it was already rumored, what, a year ago that AMD has all the next-gen console deals sealed, and there are no rumors whatsoever of nV continuing with Sony
Regarding SOC for PS4.

http://mandetech.com/2012/01/10/sony-masaaki-tsuruta-interview/
Masaaki Tsuruta, CTO of Sony Computer Entertainment, says that the company is working on a system-on-chip (SoC) to underpin the product for "seven to 10 years".

Regarding Sony and Nvidia: Nvidia confirmed in public questioning last fall that they are working with a console manufacturer. That could be Sony and the PS4.
Lots of rumours floating around, not ruling out anything yet.
 
Regarding SOC for PS4.

http://mandetech.com/2012/01/10/sony-masaaki-tsuruta-interview/


Regarding Sony and Nvidia: Nvidia confirmed in public questioning last fall that they are working with a console manufacturer. That could be Sony and the PS4.
Lots of rumours floating around, not ruling out anything yet.
Thanks for that, I had completely forgotten about it.

-----------------------------------------------

On a completely different matter, I tried to search the web for the prices charged by foundries for wafers, without much success.

I'm interested in how the cost varies with the process (28nm, 40, 55/65, 80/90).

Has somebody here managed to gather information in this regard?
 

From the interview:
"the main SoC for the incoming console is likely to be a 3D stack incorporating thru-silicon-via technology and could be the first $1bn hardware design project."

I thought Sony said they wouldn't invest as much in R&D for the next generation and would use existing technologies; I guess they were only talking about the Vita. Can we assume 3D stacking is a technology available for the Xbox 720 as well?
 
I'm interested in how the cost varies with the process (28nm, 40, 55/65, 80/90).

Has somebody here managed to gather information in this regard?

The closest I've found is a general assumption that recent wafers run roughly $5,000 each, with 28nm ones going for roughly a 25% premium.
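Just to put that assumption into numbers, here is a minimal back-of-the-envelope sketch (Python) of how a wafer price turns into a cost per die. The die size and yield below are purely illustrative, and the $5,000 / 25%-premium figures are the rough assumptions from this thread, not real foundry quotes.

Code:
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Common gross dies-per-wafer approximation (ignores defects and scribe lines)."""
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

def cost_per_good_die(wafer_price_usd: float, die_area_mm2: float, yield_rate: float) -> float:
    """Spread the wafer price over the dies that actually work."""
    return wafer_price_usd / (gross_dies_per_wafer(die_area_mm2) * yield_rate)

# Illustrative only: a ~210 mm2 die and a 70% yield are assumptions, not known figures.
print(cost_per_good_die(5000.0, 210.0, 0.70))          # "recent" wafer at ~$5,000
print(cost_per_good_die(5000.0 * 1.25, 210.0, 0.70))   # 28nm at a ~25% premium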
 
That sounds good, but there are quite some blind spots. How many Pitcairns does AMD think it can sell? So how many wafers does it plan on using? And from there, how many binned parts would be available for MS?

The idea with binning is that the chip could also be used in the PC space, not that the PC chip would be binned down for MS, but the logistics of this either way would mostly come down to yield.

What I envisioned was that the console space would demand higher volume than a PC part, but the chip is also being put into a tightly controlled environment, which may produce a significant number of chips that don't fit the console requirements but are fine in the PC space.

So binning could be used to provide two chips: on the high end, parts that can run at high speeds and high power, sold at a premium in the PC space; and on the low end, chips that have partial defects or that can't hit the speed/watt requirement needed for the console.

Both cases provide additional revenue for the chip runs outside of the targeted use of the console.


The second point is about the CPU and the "glue": do I understand properly that you're speaking of something akin to the Xenon/Xenos relationship, right?
So the GPU would act as the north bridge, right?

I'd think they'd want the communication logic in the CPU, not the GPU. This may make sense for the memory controller as well, though it might perform better in the GPU, I don't know. But the idea of using binned parts outside of the console world would require using a PC-compatible design.

Your idea of a separate north bridge die would work as well, but it would increase the package complexity. Ideally this logic would be embedded in the CPU design, but either way would work.



Overall I'm not sure about this "existing part" possibility being a win.

Neither am I, it's just a thought to maximize the resources! :smile:

To some extent, I still go back to what Tim Sweeney and Andrew Richards stated when interviewed by Charlie D.: they both saw at the time high benefits to be reaped from low-latency communication between the CPU and the GPU (so a SoC), and if there are to be two chips, they would prefer two of the same. It was worth noticing because it was one of the really few things they agreed on.

Interesting.

Maybe there is some merit to the Fusion rumors ....

So if one SoC is not powerful enough (or too big), there is still a strong argument to use two, on both the technical and economic fronts.

EDIT
I'm having a tough time finding a proper way to express it, but I believe you may want to use an "old school" north bridge if you go with two SoCs.

EDIT 2
In fact, I could see a beefy north bridge made on a cheaper process "solving" quite a few bandwidth-related issues as well as production ones (the fact that IO doesn't "scale", so big buses can get in the way of price reductions for consoles). It may not be a super clear statement; I could give more concrete examples of what I'm thinking about.

EDIT 3
The more I think about it, the more I like my north bridge idea. It may solve some of the problems EDRAM solves (i.e. bandwidth and IO scalability), but for "cheaper".

Good points here.
 
From the interview:
"the main SoC for the incoming console is likely to be a 3D stack incorporating thru-silicon-via technology and could be the first $1bn hardware design project."


What the heck is this, and does it say who the manufacturer is, e.g. IBM?

EDIT: Did some research, and it sounds like IBM is ahead of TSMC (who won't be using it until '15 or '16) in this tech, so is this a hint that IBM is involved?
 
Regarding the north bridge:
My pet theory for the X720 is that you would have a 64-bit controller on the CPU (DDR4 is possible) and 128-bit GDDR5 on the GPU. They can read each other's memory through coherent PCIe 3.0 or HyperTransport, a bit slow but we can live with that. That would give 2GB + 1GB or something like that.

Just a PC-like design. EDRAM only as L2 or L3 in the PowerPC CPU.
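For what it's worth, here is the raw arithmetic behind that kind of split pool; the DDR4 and GDDR5 data rates below are placeholder guesses on my part, not anything confirmed.

Code:
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_mtps: float) -> float:
    """Peak bandwidth in GB/s = bus width in bytes * effective transfers per second."""
    return bus_width_bits / 8 * data_rate_mtps * 1e6 / 1e9

cpu_ddr4  = peak_bandwidth_gbs(64, 2133)     # 64-bit DDR4-2133 (guess)     -> ~17 GB/s
gpu_gddr5 = peak_bandwidth_gbs(128, 5000)    # 128-bit 5 Gbps GDDR5 (guess) -> ~80 GB/s
pcie3_x16 = 16 * 8e9 * (128 / 130) / 8 / 1e9  # PCIe 3.0 x16 link -> ~15.75 GB/s each way

print(cpu_ddr4, gpu_gddr5, pcie3_x16)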
 
Regarding the north bridge:
My pet theory for the X720 is that you would have a 64-bit controller on the CPU (DDR4 is possible) and 128-bit GDDR5 on the GPU. They can read each other's memory through coherent PCIe 3.0 or HyperTransport, a bit slow but we can live with that. That would give 2GB + 1GB or something like that.

Just a PC-like design. EDRAM only as L2 or L3 in the PowerPC CPU.

That would work, but it goes against the unified memory approach that MS has employed for every Xbox up to this point.
With MS being very developer-centric (hence the unified RAM), I'd think the concept is DoA.
Whatever they choose, it will likely be UMA.

Regarding EDRAM, the manufacturing process has been a headache up to this point, so I wouldn't be surprised to see MS dump EDRAM in the future, but the performance benefits are difficult to ignore.
 
That was news to me, cheers.



Got a link for this? With quick googling, your post was the only reference to this I found.

Couldn't find the one I was thinking of, but found this one instead, which says the same.

Q: Do you think there will be another round of consoles coming?

A: Oh, no question about it.

Q: And can you predict when it will be in terms of how many years from now?

A: We will build one of them, right. And the reason for that is because the world doesn’t have enough engineering talent for anybody to build three of them at one time. It takes the entire livelihood of a computer graphics company to build one of them. And every single time they build one, my life is in danger. You build it once every five or seven years, but you have to build it all in a very short time. That’s because they wait and wait and then they say, ‘Can I have it next week?’
 
What the heck is this, and does it say who the manufacturer is, e.g. IBM?

EDIT: Did some research, and it sounds like IBM is ahead of TSMC (who won't be using it until '15 or '16) in this tech, so is this a hint that IBM is involved?

I have spent considerable time googling information about 3D and 2.5D IC stacking; I could spam you with links on the subject.
2.5D stacking is happening right now, and TSMC is a driving force; they are preparing manufacturing and assembly plants to start offering such services on a larger scale in 2013. They already have it in production for some customers (Xilinx), but it is still a very exclusive technology.
 
Regarding the north bridge:
My pet theory for the X720 is that you would have a 64-bit controller on the CPU (DDR4 is possible) and 128-bit GDDR5 on the GPU. They can read each other's memory through coherent PCIe 3.0 or HyperTransport, a bit slow but we can live with that. That would give 2GB + 1GB or something like that.

Just a PC-like design. EDRAM only as L2 or L3 in the PowerPC CPU.
Well, that's an option, and looking forward, if the two chips are to end up together, it may still be possible to fit the 128-bit bus and the 64-bit bus on the same chip.
That's, for me, the reason Sony may not be looking into a SoC as MS did with Xenon/Xenos: it's unclear if the chip could fit two 128-bit buses.

The reason I considered a north bridge is that it keeps the memory controller out of the main system chips (obviously).
Say we have two SoCs, each with its own memory controller, connected together by a coherent link, pretty much like in a Cell blade. Depending on the SoC size, you may want to put them together sooner rather than later, and that's where fitting two 128-bit buses could hurt your cost-reduction plan.

I think it also has some nice benefits for the chip: it takes the pretty power-hungry memory controller off your chip, and you have more choice about how you connect your chip(s) to the north bridge. This is relevant no matter what you connect to the north bridge. (On the negative side, this time you will deal with higher latencies.)

I asked for the price of wafers at various processes because, depending on the differences in price and the performance goals for the design, it may make sense to make a north bridge big enough to fit a 128-bit bus, a 192-bit bus, or maybe even a 256-bit bus.

The tiniest part I saw supporting a 256-bit bus was ~230 sq.mm, so it implies quite a chip. For reference, it was the GeForce GTS 250, produced at 55nm with around 750 million transistors.

So it's really about price: for example, how does the price of a 230 sq.mm chip on 55, 65, 80 or 90nm compare to, say, an 80 sq.mm chip using EDRAM at either 45nm or 32nm?
I could not find any proper information.
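Since nobody seems to have real per-process wafer prices, here is just the shape of the comparison I mean, with made-up prices plugged in; swap in real numbers if anybody ever digs them up. The die areas come from the post above, everything else is a placeholder.

Code:
import math

# Purely illustrative wafer prices (USD); the real per-process quotes are the missing data.
WAFER_PRICE = {"90nm": 1200.0, "65nm": 1500.0, "45nm": 2500.0, "32nm": 3500.0}

def gross_dies(area_mm2: float, wafer_mm: float = 300.0) -> float:
    # Standard gross dies-per-wafer approximation (edge loss included, defects ignored).
    return (math.pi * (wafer_mm / 2) ** 2 / area_mm2
            - math.pi * wafer_mm / math.sqrt(2 * area_mm2))

# ~230 sq.mm north bridge (256-bit bus) on an old process vs ~80 sq.mm EDRAM die on a newer one.
for process, area in [("65nm", 230.0), ("32nm", 80.0)]:
    print(process, area, "mm2 ->", round(WAFER_PRICE[process] / gross_dies(area), 2), "USD/die")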

Then, depending on which process it is economical to use, you end up with a "free" silicon budget (it also frees up power budget; you pay in extra latency, but Intel only fully integrated the north bridge with Nehalem).
At 90nm, and building a lot of redundancy into the chip, could it be possible to implement the ROPs here? Honestly, I don't know how big modern ROPs are. You could include the south bridge, that's a given.
If 90nm doesn't allow for the ROPs, I'm confident 65nm would.

It could allow you to free up some area in the chips built on the far more costly wafers/lithography.

So clearly all this is pointless without prices; I can't find anything relevant, only information about 100, 200, 300mm wafers on even older processes. :cry:
 
Regarding SOC for PS4.

http://mandetech.com/2012/01/10/sony-masaaki-tsuruta-interview/


Regarding Sony and Nvidia: Nvidia confirmed in public questioning last fall that they are working with a console manufacturer. That could be Sony and the PS4.
Lots of rumours floating around, not ruling out anything yet.
Ah, interesting, I must have missed that completely. So it seems like Sony won't go cheap on their next-gen console after all, assuming that $1 billion project takes place.
As for TSV 3D stacking, if everything goes well they could potentially save tons of manufacturing cost while getting some seriously revolutionary performance and uber bandwidth. I still wonder how much performance gain it would have over a traditional CPU+GPU combo, though.
All this info sounds like a late launch to me, as with all things new-tech related, but still, at least we know the path they're heading towards now.
 
Couldn't find the one I was thinking of, but found this one instead, which says the same.

But look at the reasoning.

We will build one of them, right. And the reason for that is because the world doesn’t have enough engineering talent for anybody to build three of them at one time. It takes the entire livelihood of a computer graphics company to build one of them. And every single time they build one, my life is in danger. You build it once every five or seven years, but you have to build it all in a very short time. That’s because they wait and wait and then they say, ‘Can I have it next week?’
That's nothing more than an assumption. And things have been pointing towards AMD having all three GPUs.

EDIT: Beaten
 
I am not so sure I'm buying the "AMD will have all 3 GPUs" thing quite so quickly.

I have a bit of a hard time seeing Nvidia allowing themselves to get completely frozen out of the console market. A lot of people have been using, as AMD's rationale, that being in consoles is crucial since so many PC games are now console ports, and it's an advantage if they are programmed for your hardware. But it seems to me that would apply maybe doubly to Nvidia making sure they don't get frozen out. As of now you have an AMD GPU and an Nvidia one in the HD twins, so there's no advantage, but if one vendor controlled both, yeah, one can see why Nvidia might fight that pretty hard. Plus I haven't heard much that I exactly find credible on this, so much like "6670 in next Xbox" I give it almost zero weight until proven otherwise.

A second major issue is that people were surprised when both Wii and 360 went ATI back in the day. I recall ATI talking about the great lengths they went to to ensure the projects were completely shrouded in secrecy from each other within the company. They were on separate campuses, no communication was allowed between them and it was strictly enforced, that sort of thing. Now, if they went to that level when they were basically working on non-competing parts, because the Wii aimed so low and the 360 high, I have real questions whether Sony and MS would be comfortable allowing one company to do both their GPUs, which presumably both aim high and are in direct competition. Heck, it would even introduce issues of "hey, are they giving our competitor better tech?" or something like that. Or what about worries about "psst, the other guy has this so you should get it too" type whispers within AMD? Just a lot of worries there, I think.

You can just avoid all that if one is with Nvidia (most likely Sony) and the other with AMD (MS). I still find that the most likely scenario, also because of powerful underlying technical minutiae, familiarity, and backwards-compatibility issues.

I think a lot of people on the internet seem to like this idea because they perceive AMD as flat-out "better" than Nvidia; therefore they hope it's true, and therefore the thin rumor has become treated almost as fact. I'd be cautious with that line of thinking, though. There's nothing saying Nvidia couldn't end up smoking an AMD part in a console. Be careful what you wish for.
 
That's definitely a fair assessment. Personally, I prefer nVidia GPUs, so I definitely don't feel the need to see AMD dominating all the consoles.

But what if both Sony and MS were to use GCN? Off the top of my head, the differences would most likely come down to the number of CUs, clock speed, and the number of TUs and ROPs. I don't know how drastic the changes could be beyond those things when looking directly at the GPU. (I say "directly" because I'm purposely leaving out the memory bus and clock.) I don't think the issue would be AMD employees sharing info, but rather how much money Sony and MS would be willing to spend on the GPU. If it were leaked to MS that Sony was targeting x number of CUs, etc., the MS tech guys might want a bump to compete, but the "suits" might say they have no plans of deviating from their target cost. And that would be it.
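Just to illustrate how few knobs actually set the headline numbers of a GCN-style part: the 64 ALUs and 4 texture units per CU below follow the public GCN layout, while the two configurations are invented examples, not anything leaked.

Code:
def gcn_peaks(cus: int, clock_mhz: float, rops: int) -> dict:
    """Back-of-the-envelope peak rates from CU count, clock and ROP count."""
    clock_ghz = clock_mhz / 1000.0
    return {
        "gflops":    cus * 64 * 2 * clock_ghz,  # 64 ALUs per CU, FMA counted as 2 ops
        "gtexels_s": cus * 4 * clock_ghz,       # 4 texture units per CU
        "gpixels_s": rops * clock_ghz,
    }

# Two hypothetical configs: same architecture, different CU count / clock / ROPs.
print(gcn_peaks(cus=16, clock_mhz=800, rops=16))
print(gcn_peaks(cus=20, clock_mhz=900, rops=32))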
 