Predict: The Next Generation Console Tech

If next gen consoles from Sony and MS won't release until late 2013, why wouldn't they have AMD HD 8000/9000 series GPUs inside them?

I don't see why they wouldn't tbh, or at least something based on the technology of those architectures. But in raw power terms they are more likely to be from the low-to-mid range of those series than the high end. This is purely because, since the time the PS360 launched, PC GPUs have ballooned in power draw and heat output in order to achieve their current performance. So the level of power drawn and heat given off by high-end PC GPUs no longer fits within the limits allowed by consoles like it did back in the PS360 launch days.
 
I don't see why they wouldn't tbh, or at least something based on the technology of those architectures. But in raw power terms they are more likely to be from the low-to-mid range of those series than the high end. This is purely because, since the time the PS360 launched, PC GPUs have ballooned in power draw and heat output in order to achieve their current performance. So the level of power drawn and heat given off by high-end PC GPUs no longer fits within the limits allowed by consoles like it did back in the PS360 launch days.

This is a good chart to look at :D

http://www.tomshardware.com/reviews/geforce-radeon-power,2122-6.html
 
And hotter... and it would just add to the yield issues circa 2005...
On top of that, aren't the ROPs in the smart eDRAM simplistic?
Sane ROPs might have brought Xenos to parity with the RSX as far as the size/transistor budget is concerned.
So it might have come without extra shading power (or only a marginal increase), maybe allowing for downclocking the whole thing; avoiding the RRoD incident might have been worth more than a few percent of extra performance.

Still I wonder how it would have turned out. I'm thinking of something like 512MB of DDR2 and 128MB of VRAM (so GDDR3). The memory budget (in $) might have remained lower than for the PS3 and the 360 we know.

As Xenos and Xenon are custom, I would expect ATI and IBM to be able to make the system so the GPU accesses the main RAM, say for texture reads (leaving the 128MB of VRAM dedicated to the various render targets and the frame buffer), and for things like memexport and the GPU reading from Xenon's L2 (a bit of a reverse 360, with the north bridge on the CPU side).

I'm a bit on Ranger's side on that one; I would bet the system above would do better than both of the PS360. The shift to deferred rendering was not really anticipated by any manufacturer, and I think this configuration would have proved superior to the one we got.

128MB allows for a Killzone-sized G-buffer + other niceties :) and the system would have been left with more RAM than both of the PS360.
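As a rough sanity check on that claim, here's a back-of-the-envelope calculation. The layout below is only an approximation of a Killzone 2-style G-buffer (four 32-bit colour targets plus a 32-bit depth/stencil at 1280x720 with 2x MSAA), so treat the numbers as illustrative rather than exact:

```python
# Back-of-the-envelope G-buffer size for a Killzone 2-style deferred renderer.
# Layout is approximate: 4 colour render targets (32 bits each) plus a
# 32-bit depth/stencil buffer, at 1280x720 with 2x MSAA.

width, height = 1280, 720
msaa_samples = 2
colour_targets = 4
bytes_per_target = 4          # 32 bits per target per sample
depth_bytes = 4               # 32-bit depth/stencil per sample

samples = width * height * msaa_samples
gbuffer_mb = samples * (colour_targets * bytes_per_target + depth_bytes) / (1024 ** 2)

print(f"G-buffer: ~{gbuffer_mb:.1f} MB")                      # ~35.2 MB
print(f"VRAM left from 128 MB: ~{128 - gbuffer_mb:.1f} MB")   # ~92.8 MB for everything else
```

So a G-buffer of that shape would eat roughly a third of the 128MB pool, leaving plenty for the frame buffer and other render targets.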

EDIT
I think indeed (leaving the CPU aside) that it would have been close to the optimal system for this gen.
 
The more I read about Hybrid Memory Cube, the more convinced I am that it's possible and necessary to have it in one of Orbis / Durango. Considering MS joined the consortium a few weeks ago, and the rumors of Durango having 4-8 GB of RAM, the chances are good.
I'm worried about the timing. I look at Wide I/O, which finalized its specs around December/January, and samples came out very soon after. Based on that, we can expect HMC (or the JEDEC standard) to start sampling after the end of 2012, when the specs are finalized.

Is there time for either MS or Sony to adapt their chip, production, and dev kits to have a console in 2013?
 
Haswell on its own would make a pretty good next gen console. A full DX11 GPU coming in at 5-6x the power of Xenos/RSX, plus a quad core CPU with another 500 GFLOPS to throw at graphics and 4-5x the general computing power of Xenon. All that in a single chip that will probably clock in at under 100W.

The only issue I see with it would be memory bandwidth. I guess they'd need to do something custom with the memory controller to handle GDDR5.
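That ~500 GFLOPS figure roughly checks out for a hypothetical quad core with AVX2/FMA; the clock speed and FMA throughput below are my assumptions, not confirmed specs:

```python
# Peak single-precision throughput for a hypothetical quad-core AVX2 CPU.
# Two FMA units per core and a 3.5 GHz clock are assumptions, not known specs.

cores = 4
clock_ghz = 3.5
simd_lanes = 8        # 8 single-precision lanes in a 256-bit AVX register
fma_units = 2         # assumed FMA ports per core
flops_per_fma = 2     # one fused multiply-add = 2 FLOPs

peak_gflops = cores * clock_ghz * simd_lanes * fma_units * flops_per_fma
print(f"Peak: {peak_gflops:.0f} GFLOPS")   # 448 GFLOPS, i.e. in the ~500 GFLOPS ballpark
```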
 
Haswell on its own would make a pretty good next gen console. A full DX11 GPU coming in at 5-6x the power of Xenos/RSX, plus a quad core CPU with another 500 GFLOPS to throw at graphics and 4-5x the general computing power of Xenon. All that in a single chip that will probably clock in at under 100W.

The only issue I see with it would be memory bandwidth. I guess they'd need to do something custom with the memory controller to handle GDDR5.

I doubt 5-6x the power to be honest... You're only looking at around 3-% performance jump from Ivy's iGPU and they're still slower than AMD's efforts.

Discrete CPU+GPU is the way to go...

http://www.tomshardware.com/reviews/pentium-g620-amd-a8-3870k-radeon-hd-6670,3140-5.html
 
I doubt 5-6x the power to be honest... You're only looking at around 3-% performance jump from Ivy's iGPU and they're still slower than AMD's efforts.

I don't know if that's a typo or not, but Haswell will be vastly more than 3% (or even 30%) faster than Ivy's GPU. The current rumours put it at 2.5x the EUs, likely with greater efficiency and at a higher clock speed. Then you've got the rumoured shared L4, so it's not unreasonable to expect 3x the performance. And Ivy is already easily 1.5x or more the performance of the current gen.

Discrete CPU+GPU is the way to go...

That kinda goes without saying. I was just pointing out that the CPU alone would make a fairly serviceable next gen console which is pretty impressive.
 
I don't know if that's a typo or not, but Haswell will be vastly more than 3% (or even 30%) faster than Ivy's GPU. The current rumours put it at 2.5x the EUs, likely with greater efficiency and at a higher clock speed. Then you've got the rumoured shared L4, so it's not unreasonable to expect 3x the performance. And Ivy is already easily 1.5x or more the performance of the current gen.

From the rumors and leaks I've seen, it has 30 EUs as opposed to Ivy's 18.

GT3 - 30 EUs (comes with 64MB cache)
GT2 - 20 EUs
GT1 - 10 EUs

And from looking at Ivy's reviews, it's not anywhere close to console level.
 
From the rumors and leaks I've seen, it has 30 EUs as opposed to Ivy's 18.

GT3 - 30 EUs (comes with 64MB cache)
GT2 - 20 EUs
GT1 - 10 EUs

Where have you seen the 30 EU rumor? I know VR-Zone was saying 20 while SA is saying 40. I've not heard of 30 though. I'll admit my post above was based on the optimistic hope that SA is correct.

Incidentally, Ivy Bridge only has 16 EUs, which would give Haswell 2.5x more if the 40 EU rumour were correct.
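For what it's worth, the ~3x guess above falls out of those EU counts with fairly modest extra assumptions; the clock and efficiency factors below are pure guesses on my part:

```python
# Rough iGPU scaling estimate based on the rumoured EU counts (all speculative).
ivy_eus = 16
haswell_eus = 40        # the SemiAccurate rumour; VR-Zone says 20
clock_scale = 1.1       # assumed modest clock bump
efficiency_scale = 1.1  # assumed per-EU efficiency gain (e.g. from the shared L4)

print(f"~{haswell_eus / ivy_eus * clock_scale * efficiency_scale:.1f}x Ivy Bridge's iGPU")
# ~3.0x
```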

And from looking at Ivy's reviews, it's not anywhere close to console level.

http://forum.beyond3d.com/showthread.php?t=61950
 
I didn't see this posted yet so:

Intel, Samsung and TSMC today announced they have reached agreement on the need for industry-wide collaboration to target a transition to larger, 450mm-sized wafers starting in 2012.

Read more: http://vr-zone.com/articles/intel-tsmc--samsung-on-450mm-wafers-in-2012/5748.html#ixzz1wUGmQXkl

For those concerned that chip costs will never come down, and that MS/Sony thus need to target a low BOM and spec at introduction ...

Again, I say this is a farce.

The spec limitations being bandied about are merely a ploy to (hopefully, in the minds of MS and Sony) enjoy even greater profits on hardware.

A 450mm wafer has 2.25x the area of a 300mm wafer, so it can accommodate well over twice as many chips. This obviously leads to lower production cost per chip, increased output, and more good dies per wafer.
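For the curious, here's the usual gross-dies-per-wafer approximation; the 350 mm² die size is just a placeholder for a large console GPU/SoC, not a known spec:

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Standard approximation: wafer area / die area, minus an edge-loss term.
    radius = wafer_diameter_mm / 2
    return math.floor(math.pi * radius ** 2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 350  # mm^2, placeholder for a big console chip
for diameter in (300, 450):
    print(f"{diameter} mm wafer: {gross_dies_per_wafer(diameter, die_area)} gross dies")
# 300 mm -> 166 gross dies, 450 mm -> 400 gross dies (~2.4x, before yield is factored in)
```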

This, in combination with 20nm due late next year at TSMC, paints a picture of a rather quick reduction in production costs ... even if 14nm takes a bit longer to mature (~2016).
 
It's always a matter of time.

Agreed.

And with the move to 450mm wafers, that time of cost reduction is closer rather than further.
Ditto 20nm.

In other words, the "node reduction sky is falling" crowd need not worry about MS/Sony being "forced" to target a <$200 BOM for fear of never being able to bring it below that price point over the life of the console.

Price reductions will come, along with node reductions.

Anyone suggesting otherwise is speaking nonsense and helping the spin cycle push the FUD of "out of control console costs" that attempts to justify gimped specs.
 
Who said this? It's always a matter of time. "Never" is just hyperbolic nonsense.
Chef is just being hyperbolic. We say that process shrinks are going to take longer and won't have all the benefits they used to (due to higher transistor leakage), and he translates that as "OMG, so you're saying there'll never be any more process shrinks!"
 
Chef is just being hyperbolic. We say that process shrinks are going to take longer and won't have all the benefits they used to (due to higher transistor leakage), and he translates that as "OMG, so you're saying there'll never be any more process shrinks!"

The only one that looks like it will encounter an unusual delay is 14nm which at this point looks like it will be around 2016.

2013 Launch = 28nm
2014 mature 28nm on 450mm wafers (reduced cost ~unknown)
2015 shrink = 22nm (twice density = half cost)
2017 shrink = 14nm (twice density = half cost)

I'm not seeing the "OMG everything changes now, so we better launch @ <200mm2 total die budget to fit that into a $200 BOM down the road" problem that others seem to be fearing. Looks to me like every two years, node shrink...

Granted: leaky transistors, possible delays, and TSMC potentially charging more should all be accounted for. And an adjustment to bring MSRP as close to BOM as possible should do the trick to avoid another round of billion dollar losses. But that doesn't mean slapping a tiny APU in the box to hit $200 on day one for fear of a lack of node shrink progress...
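To put the "twice density = half cost" assumption from the roadmap above into numbers, here's a toy model with an adjustable wafer-price premium per shrink; the 25% premium is just an illustrative knob, not a known TSMC price:

```python
# Toy per-die cost model for the roadmap above. Assumes perfect 2x density
# scaling per full node shrink (die area halves) while the wafer price is
# allowed to rise by a chosen premium. All numbers are illustrative.

def relative_die_cost(shrinks, wafer_premium=0.25):
    cost = 1.0
    for _ in range(shrinks):
        cost *= (1 + wafer_premium) / 2   # pricier wafer, but half the die area
    return cost

for shrinks in range(3):
    print(f"After {shrinks} shrink(s): {relative_die_cost(shrinks):.2f}x launch-day die cost")
# 0 -> 1.00x, 1 -> ~0.6x, 2 -> ~0.4x even with a 25% wafer premium at each shrink
```

Even with a hefty wafer premium baked in at every step, the per-die cost still falls sharply as long as density scaling holds.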

In fact, the cost reductions in the above roadmap, if accompanied by price drops, would be MORE aggressive than what MS has done this gen.

xbox360 has been sitting at the same MSRP for FOUR YEARS. And it isn't exactly cheap at $300 for xb360+HDD. Thus the pricing power to introduce a new box at $400 while the old box is on sale right now for $300 is pretty good. A $400 BOM can buy some pretty potent chips.

So again, I'm not seeing a justification for an uber-small, uber-cheap chipset in xb720 (other than foolish greed).

Greed I understand, but foolishly chasing it for a short-term gain is just... well, foolish.

IMHO
 
Anyone suggesting otherwise is speaking nonsense and helping the spin cycle push the FUD of "out of control console costs" that attempts to justify gimped specs.

Well, just bear in mind that pushing the power envelope now is considerably different than it was 7 years ago (i.e. power densities). The cooling solution or chassis size will have to be significantly better in order to avoid certain outcomes even for similar die areas.

It's a web of costs, and we really don't know how far they are willing to go this time around as their market priorities and concerns will have shifted since 2005/2006*. But then we don't know exactly what their near-future plans are for the current devices (well, at least until Monday, but we have fairly strong hints for the 360-side), and then how that might fit into a future strategy for the next consoles (e.g. co-existence, online content/XBLA/PSN compatibility, subscriptions).

* edit/insert: For example, if the power consumption/cooling requirements are extreme, then they'd take years to get to a small enough chassis for a more affordable set-top-box, just like 360. They can't depend on 360 serving that role until 14nm arrives for the next console.
 
The only one that looks like it will encounter an unusual delay is 14nm which at this point looks like it will be around 2016.

2013 Launch = 28nm
2014 mature 28nm on 450mm wafers (reduced cost ~unknown)
2015 shrink = 22nm (twice density = half cost)
2017 shrink = 14nm (twice density = half cost)
Except we're not getting twice the density and half the cost each node. We're getting small improvements, and people like nVidia are saying node reductions aren't proving worth the cost; that is, they aren't getting any notable benefits. That's why PS3 is still on 45nm for Cell. It launched at 90nm in 2006, when 90nm was 4 years old. 65nm was late. Cell got a shrink in 2007, ASAP, to get it to the intended launch node. With the Slim in 2009, Cell was shrunk to 45nm. And that's where they've stopped. That's effectively 2 node shrinks, ignoring the late start, over 6 years. On PS2, the EE and GS got process shrinks every year, whereas PS3 has had two. We're theoretically able to produce 22nm parts, let alone 32nm, but we're not seeing the die shrinks. Why is this? Could it have anything to do with what the industry is saying about die shrinks not yielding the same benefits as they used to...?

So launching at 22nm now means having virtually no process shrinks that can be relied upon, at least for several years until they provide sufficient gains to make the transition worthwhile (if that happens). However big and hot your chip is from day 1, whether made on 300 or 450mm wafers, is how big and hot it'll be on day 1500. Unless some other technology comes along to save the day, which is a significant gamble. Sony lost plenty of money believing 65nm would be available at launch for PS3. Imagine if 65nm hadn't appeared until 2010!

You should read this to get a better understanding of our concerns.
 
Well, just bear in mind that pushing the power envelope now is considerably different than it was 7 years ago (i.e. power densities). The cooling solution or chassis size will have to be significantly better in order to avoid certain outcomes even for similar die areas.

Agreed, but it isn't completely different.

Smarter decisions and designs I'm sure will be implemented this time. In fact, I'll go out on a limb and guess neither MS nor Sony will put a CPU or GPU under the optical drive! :p

Even with that, I can understand a smaller die budget. Say from the 470-438mm2 range of xb360/ps3 down to ~400mm2

That's reasonable.

Especially considering the core msrp can be safely bumped from $300 last gen to $400 this gen.

Smaller die budget, no expensive EDRAM, more mature node, smarter box design, and higher MSRP.

Oh, and roughly $2B annually in xbl revenue to offset any hiccups in manufacturing costs or expensive launch ad campaigns. ;)

It's a web of costs, and we really don't know how far they are willing to go this time around as their market priorities and concerns will have shifted since 2005/2006*.

Indeed those priorities should be shifting. They went from building from the ground up with not much to lose to having a $2B+/yr golden goose to protect.

With that, they should be willing to go pretty far to keep that goose happy...

* edit/insert: For example, if the power consumption/cooling requirements are extreme, then they'd take forever to get to a small enough chassis for a more affordable set-top-box, just like 360. They can't depend on 360 serving that role until 14nm arrives for the next console.

Interesting point, but again, I'm not seeing why xb360 couldn't serve that role until even 2016.

Unusual for sure, but again, that's a healthy golden goose right now. I'd say it's a smart idea to keep it happily laying those golden eggs year after year. If that means waiting until ~2016 for xb720 to roll into a lower tier price point, then so be it.

It's only 3 years after xb720 launch. Not completely unheard of.
 
Except we're not getting twice the density and half the cost each node. We're getting small improvements, and people like nVidia are saying node reductions aren't proving worth the cost; that is, they aren't getting any notable benefits. That's why PS3 is still on 45nm for Cell. It launched at 90nm in 2006, when 90nm was 4 years old. 65nm was late. Cell got a shrink in 2007, ASAP, to get it to the intended launch node. With the Slim in 2009, Cell was shrunk to 45nm. And that's where they've stopped. That's effectively 2 node shrinks, ignoring the late start, over 6 years. On PS2, the EE and GS got process shrinks every year, whereas PS3 has had two. We're theoretically able to produce 22nm parts, let alone 32nm, but we're not seeing the die shrinks. Why is this? Could it have anything to do with what the industry is saying about die shrinks not yielding the same benefits as they used to...?

So launching at 22nm now means having virtually no process shrinks that can be relied upon, at least for several years until they provide sufficient gains to make the transition worthwhile (if that happens). However big and hot your chip is from day 1, whether made on 300 or 450mm wafers, is how big and hot it'll be on day 1500. Unless some other technology comes along to save the day, which is a significant gamble. Sony lost plenty of money believing 65nm would be available at launch for PS3. Imagine if 65nm hadn't appeared until 2010!

You should read this to get a better understanding of our concerns.

I've read up on the concerns and they are mostly monetary. Nvidia is unhappy with what TSMC is expected to charge for the new nodes. It's not that transistor density isn't improving enough, nor that power per transistor isn't coming down.

TSMC was asking for 25% more for the 28nm wafers late last year. Just a guess, but next year, that price will likely come down. And again when 450mm wafers are ramped up.

Nvidia and TSMC are having nothing more than a pricing dispute, which is likely spurred by increasing demand from smartphones. Supply and demand.

Not to say these new nodes will be cheap to implement or that TSMC will never increase its prices again, but supply and demand being what it is, the pricing will balance out as TSMC ramps up production.

As to the 90nm node being ancient by the time Sony used it, the first GPU to use 90nm was in late 2005 - AFAICS.

Why haven't Sony and MS shifted to a smaller node yet? TSMC skipped 32nm. 28nm only came online late last year, and for a non-premium line like xb360 and ps3, it doesn't make sense to pay a premium price for a premium node. When that price comes down (late this year, early next), I expect we will see new slim models.

And BTW, I've never suggested Sony or MS launch at 22nm (unless one of them gets the bright idea to wait until 2014 ... :rolleyes: ).

28nm is the node I've suggested they launch with.
 