Predict: The Next Generation Console Tech

Status
Not open for further replies.
When is the tech competing with Intel's 3D transistor going to be implemented outside Intel: at 28nm, or the node after? And does FinFET offer similar benefits to Intel's 3D transistor?
Most disclosures indicate they are thinking of something after 20nm, at the 16/14nm node.
There may be FDSOI at 20nm or possibly after for some of the foundries.

The general concept of a finfet is physically similar to what Intel has done, or rather that Intel's transistor is an elaboration on the concept.
 
Midrange PC hardware (e.g. a GTX460 + i3-2100) is much more than 15-25% slower than high end PC hardware (e.g. a GTX580 and i7-2600). Many games will not reflect this because they are built with very low spec hardware in mind (consoles).
Again can't seem to edit my post,

But honestly, i3? I thought i5 was midrange and i3 low budget.


PS

Is editing disabled, hidden, or does it become available after a certain post count?
 
I'm not talking about the first full year, I'm talking about at and around launch (typically launching at the end of the year for holiday sales).

In the US (I don't have reliable numbers for other regions):
PS3, launch to end of year: 688k
Xbox 360, launch to end of year: 607k

I wouldn't expect much more than this for a console launched in 2012. More would be great, but I'm certainly not expecting it and I wouldn't encourage MS/Sony to wait a year so they could stockpile consoles just to have a few million ready at launch so as to avoid shortages...

PS3 didn't even make it to Europe in 2006. Although I doubt the manufacturing process was the issue there (Blu-ray!), they could have sold another couple of million had they had them.

Delaying a launch solely to stockpile would be a risky plan to create a big bang. But choosing to launch 10 months later with very strong supplies for a global launch, while also being able to make a cheaper and faster system (this will be the case going from an immature to a mature process), and being able to milk, grow, and therefore further milk a big cash cow, is a pretty good-looking option to me!

Right product at the right time, with the right marketing, and the right brand, and the right support, and the right price, and the right games, and the right dimensions, and the right aesthetic, and the right interface.

PS2's victory over the DC is hardly evidence that waiting and launching on an old process node is what won the market for the PS2 ...

PS2's victory over everything is clear evidence that launching on a new process should not be considered in isolation.

seriously ... ?

Deadly. Right product, right time. Xbox 360 was launched on 90nm. Kinect is burning things up right now despite the burden of a 90nm launch.

You think Nintendo are kicking themselves for the Wii not being first on 90nm? Wuu's biggest challenge is not going to be having a 45nm CPU.
 
The Cell is manufactured by IBM in Fishkill.

Oh, I thought Sony had also made Cell at their fabs, when they had them. Maybe that was RSX.

Power7+ on 32nm is supposed to be out in Q1 2012. I don't believe they're in any hurry with it though; they skipped the Power6+ altogether. They're better off concentrating on getting Power8 @ 22nm into production.

Okay, so it is still planned to go ahead at least. Just in time for ... 2013 consoles. :eek:

For whatever reason, GloFo is a bit of a mess right now (or maybe it's a lot of CYA spin by AMD :shrug:). They need to get their act together though, or someone like Samsung will swoop in and snatch up those contracts.

Sony or MS could certainly go 32nm with their CPU and garner most of the benefits they'd get at 28nm, but it would leave them with an undesirable shrink to 28nm.

Maybe they could just skip a half node? With the 360 MS skipped 55nm on the GPU ... don't know if half nodes were ever an option for the CPU.
 
PS2's victory over everything is clear evidence that launching on a new process should not be considered in isolation.

Indeed.

My point (which you originally quoted in my response to AlStrong) was that the process node is not a limiting factor. Every Xbox up to now has been launched on the "latest and greatest process node".

I don't see why xb720 would be any different.

As for the concept of waiting for the node to mature:
Code:
node(nm)	trans(M)	size(mm2)	trans/mm2	clock(MHz)	watts
65	180	85	2.12	625	20
65	390	153	2.55	600	35
55	666	192	3.47	668	75
55	181	67	2.7	600	25
55	378	132	2.86	725	65
55	242	73	3.32	600	25
55	514	146	3.52	600	48
55	956	256	3.73	625	110
55	959	282	3.4	700	130
40	826	137	6.03	750	80
40	2154	334	6.45	725	151
40	1040	170	6.12	700	86
40	292	59	4.95	650	19
40	627	104	6.03	650	39
40	1700	255	6.67	775	127
40	2640	389	6.79	800	200
40	370	67	5.52	625	18
40	716	118	6.07	650	44
40	1040	170	6.12	700	86

Looking at transistors / die size, I'm not seeing a marked improvement in the Radeon HD6000 series over the HD5000 series, both using 40nm, which started production in Q2 2009. The newer models from early this year (a year and a half after 40nm AMD GPUs started shipping) have roughly the same transistor density on the same node.

~5.91M transistors/mm² for the 5000 series
~6.23M transistors/mm² for the 6000 series

Both of which are a marked improvement over their 55nm GPUs, which average 3.29M transistors/mm² (a full node shrink).

Almost double actually ...

Not hard to imagine what another full node shrink from 40nm to 28nm would bring again ...
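For what it's worth, the density averages above can be reproduced straight from the posted table. A quick sketch (rows are node in nm, transistors in millions, die size in mm², as listed):

```python
# Average transistor density per node, from the (node, transistors-M, size-mm2)
# rows in the table above.
gpus = [
    (65, 180, 85), (65, 390, 153),
    (55, 666, 192), (55, 181, 67), (55, 378, 132), (55, 242, 73),
    (55, 514, 146), (55, 956, 256), (55, 959, 282),
    (40, 826, 137), (40, 2154, 334), (40, 1040, 170), (40, 292, 59),
    (40, 627, 104), (40, 1700, 255), (40, 2640, 389), (40, 370, 67),
    (40, 716, 118), (40, 1040, 170),
]

def avg_density(node):
    """Mean of per-part density (millions of transistors per mm^2)."""
    rows = [(t, s) for n, t, s in gpus if n == node]
    return sum(t / s for t, s in rows) / len(rows)

for node in (65, 55, 40):
    print(f"{node}nm: {avg_density(node):.2f}M trans/mm^2")
# prints roughly 2.33 (65nm), 3.29 (55nm), 6.07 (40nm)
```

The 55nm to 40nm jump (3.29 to ~6.07 averaged over all ten 40nm parts) is the "almost double" referred to above.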


Deadly. Right product, right time.

Good luck trying to replicate the Wii gimmick magic and basing design and sales projections off that.
 
Last edited by a moderator:
When is the tech competing with Intel's 3D transistor going to be implemented outside Intel: at 28nm, or the node after? And does FinFET offer similar benefits to Intel's 3D transistor?

TSMC says 20nm for FinFET, Intel is using their FinFET at 22nm, and the IBM club is leaning towards FDSOI at 22nm. Intel's Tri-Gate is FinFET with a trademark.

Sony and MS are in a bit of a quandary here; it's doubtful anyone is actually shrinking a design to a FinFET node. It's just too costly a redesign for what is supposed to be a cost saver. So for those two, their best option may be to design for and implement FinFET at 28nm in the hopes of a cheaper/easier/better shrink to 20nm. Or they may skip a shrink altogether next gen and run the whole way at 28nm. No shrink, though, means no cost savings down the road, which means costs at launch are more important, with less likelihood of a slim console to bolster sales.
 
Well, it seems like you want to ignore cost/yield as a factor, as you keep skipping it.

Show me the hard data and then we can discuss.

Otherwise it's just a given that, as time goes by, yields are expected to improve on a given node. No facts behind it for the previous nodes and none for 28nm. (i.e.: cost/yield at a given die size at launch = x; after 1 year = y; after 2 years = z)
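To make the x/y/z point concrete, here is a purely illustrative sketch using the textbook Poisson yield model, Y = exp(-D0 * A). The defect densities are made up, since, as said, the real foundry numbers aren't public:

```python
import math

def poisson_yield(defect_density_per_cm2, die_area_mm2):
    """Textbook Poisson yield model: Y = exp(-D0 * A), A converted to cm^2.
    Illustrative only; real defect densities are not public."""
    return math.exp(-defect_density_per_cm2 * die_area_mm2 / 100.0)

die_mm2 = 250  # hypothetical console-class die size
for label, d0 in [("immature node", 1.0), ("after 1 year", 0.5), ("mature node", 0.25)]:
    print(f"{label}: ~{poisson_yield(d0, die_mm2):.0%} yield")
# prints ~8%, ~29%, ~54%
```

The shape of the curve is the whole argument: whether waiting is worth it depends entirely on where a node actually sits on it, which is exactly the data we don't have.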

There isn't enough information to rule out the possible use of 28nm for a 2012 launch.

Two things:

1) NV/AMD are launching on the 28nm node early next year ... leaving plenty of time for the process to improve for a 2012 launch on said node

2) MS/Sony have never launched with millions of consoles ready to go. Shortages are expected at launch. The last two (xb360&ps3) saw less than 700k for launch in the US.



Cost/yield didn't prevent MS from launching on 150nm in 2001, nor did it prevent them from launching on 90nm in 2005. I don't see why 28nm would be the roadblock preventing a 2012 launch.
 
Indeed! Nintendo and IBM have history. But IBM designed the CPUs for MS and Sony, and they were fabbed somewhere other than IBM. Perhaps it's the on-die eDRAM? But in that case, why not IBM's 32nm process?

Googling for IBM I found this about Global Foundries from last year:

http://www.xbitlabs.com/news/other/...lems_with_IBM_s_32nm_Fabrication_Process.html

Where is IBM's 32 nm stuff?

My bad. I assumed you were listing it as an AMD CPU. :???:

I'm assuming they are sticking with it not just because it's "safer", but because of what the cores are based on.

I think there may be a bit more to it. But the choice of process strongly implies that Nintendo is not taking any chances in terms of time-to-market hiccups due to process yield problems. It also implies that they are not pushing any envelopes in terms of die size or high power draws.

Then again, if they are not doing big, power hungry dies, they also haven't got all that much to gain by targeting an unproven process. They may as well enjoy the safety and moderate cost of 45nm.

Personally, I'd simply use an updated Xenon design (more on-die memory, faster communication to GPU memory management) and be done with it. Decent performance, simplest possible porting, low power and smallish die size at 45nm, probably below 100mm2 unless they go berserk with on-die memory. Performance would be a significant but not game changing step up from the 360. They would have room to spend more money and effort on GPU and of course controller.

Since Nintendo likes OoO CPUs, I'm thinking they may do something like what was done with Xenon. Just like Xenon took and used a modified PPE from Cell, I'm thinking Nintendo is doing something similar with the POWER7 core by modifying it, if not building something from the ground up based on it. I want to say I saw it mentioned that the POWER7 core is close in size to the PPE. If they build it from the ground up with two threads instead of four and cut out some other unnecessary things from the core, that might/should reduce the size and TDP of the CPU.

I originally thought they would copy the Xenon design, except OoOE, but we learned something interesting recently. Apparently one of the cores works as a "master" core for the other two, and it has more L2 cache than the other two. With multiple indications of the CPU having 3MB of L2 cache, I'm thinking it breaks down to something like 1.5MB/768KB/768KB. And I wonder if they might give that core four threads and dedicate two to the OS. On the surface that sounds like a plausible hypothesis, but I'd like more input on that notion.
 
2) MS/Sony have never launched with millions of consoles ready to go. Shortages are expected at launch. The last two (xb360&ps3) saw less than 700k for launch in the US.
That's just a single month, and only in the US, but the number is still FAR greater than the volume of GPUs on a new node they sell per month; and with GPUs they can use most of the dies due to binning.
 
Sony and MS are in a bit of a quandry here, it's doubtful anyone is actually shrinking a design to a finfet node, It's just too costly a redesign for what is supposed to be a cost saver. So for those two, they're best option may be to design for and implement finfet at 28nm in the hopes of a cheaper/easier/better shrink to 20nm. Or they may skip a shrink altogether nextgen and run the whole way at 28nm. No shrink though means no cost savings down the road, which means costs at launch are more important and less likelyhood of a slim console to bolster sales.
A logic designer can't implement a gate structure. It's either part of the process node or it isn't. 28nm doesn't have it.
 
That's just a single month, and only in the US, but the number is still FAR greater than the volume of GPUs on a new node they sell per month; and with GPUs they can use most of the dies due to binning.

And this is different than the 90nm and 150nm launches .. how?
 
Point is you can't really compare a whole range of GPUs to a single SKU in consoles.

Point is to compare the same node at launch, and a year or more later.

Noted improvements are expected to be yields/cost. Yet, we don't have that data.

All we have is power draw, density, trans, speed.

That's it.

So is it worthwhile to wait for 28nm to "mature" or take advantage of the increased density "right now" over 40nm?

Looking at the data, it's pretty clear one will give substantially more density and should more than offset the lower initial yields.

If not, there are ways around it.

If those are prohibitive, then wait.

Simple.
 
Another comp:

Transistors powered per watt on the 55nm node averaged 8.89 (millions of transistors per watt) when adjusting TDP to a 600MHz clock speed.

The initial bump to 40nm improved trans/watt to 15.66, and a year later on the same node, to 17.61.

Granted, it improved ~12% on 40nm in that time, but the improvement straight away from 55nm to first-generation 40nm was 76%.

By averaging the entire GPU lineup on each process node, it covers all bases for yield, binning and architecture changes (mobile parts aside).
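Those averages can be checked against the table posted earlier. A sketch, assuming the ten 40nm rows split into HD5000 (first five) and HD6000 (last five) parts and that TDP scales linearly with clock down to 600MHz:

```python
# rows: (transistors-M, clock-MHz, TDP-W), taken from the table earlier in the thread
node_55 = [(666, 668, 75), (181, 600, 25), (378, 725, 65), (242, 600, 25),
           (514, 600, 48), (956, 625, 110), (959, 700, 130)]
hd5000  = [(826, 750, 80), (2154, 725, 151), (1040, 700, 86),
           (292, 650, 19), (627, 650, 39)]
hd6000  = [(1700, 775, 127), (2640, 800, 200), (370, 625, 18),
           (716, 650, 44), (1040, 700, 86)]

def trans_per_watt(rows, ref_clock=600):
    """Millions of transistors per watt, with each part's TDP
    scaled linearly to ref_clock before dividing."""
    return sum(t / (w * ref_clock / c) for t, c, w in rows) / len(rows)

print(f"55nm:          {trans_per_watt(node_55):.2f}")  # 8.89
print(f"40nm (HD5000): {trans_per_watt(hd5000):.2f}")   # 15.66
print(f"40nm (HD6000): {trans_per_watt(hd6000):.2f}")   # 17.61
```

Linear TDP-vs-clock scaling is a simplification (it ignores voltage changes), but it's enough to reproduce the quoted figures.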
 
Cost yield didn't prevent MS from launching on 150nm in 2001, nor did it prevent them from launching 90nm in 2005. I don't see why 28nm would be the roadblock preventing a 2012 launch.

The situation they had with the Xbox in 2004 was quite different from the one they now have with the 360 in 2011.
 