Predict: The Next Generation Console Tech

...and for the yield hungry:

http://semiaccurate.com/2009/08/18/nvidia-takes-huge-risk/

August2009_40nm said:
Just looking at the cost, TSMC 40nm wafers are priced at about $5,000 each right now, and we hear Nvidia is paying a premium for each of the risk wafers. 9,000 X ($5,000 + premium) is about $50 million. If there is a hiccup, you have some very expensive coasters for the company non-denominational winter festivity party. Then again, putting them in the oven this early makes dear leader less angry, so in they go.

Costs for this are pretty high, but here again, Nvidia has an out. The rumor is that TSMC, as a way to slide Nvidia money through backchannels to prevent it hopping to GlobalFoundries, is charging Nvidia per good chip while yields are below 60%. Some knowledgeable folks who design semiconductors for a living estimate the initial yields at 20-25% after repair. For reference, early yields for GT200 on a known 65nm process were about 62% for the GT280 and yield salvaged GT260.
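
Just for scale, here's a rough sketch of the per-good-die arithmetic that quote implies. The dies-per-wafer count is a placeholder assumed for a big ~500mm^2 chip, not a real GT300 figure:

```python
# Rough cost-per-good-die arithmetic implied by the quote above.
# The dies-per-wafer count is an illustrative assumption, not a real figure.

def cost_per_good_die(wafer_price, gross_dies_per_wafer, yield_fraction):
    """Cost of one working die when the bad ones are thrown away."""
    good_dies = gross_dies_per_wafer * yield_fraction
    return wafer_price / good_dies

WAFER_PRICE = 5000.0   # $ per 40nm wafer, per the quote (premium excluded)
GROSS_DIES = 100       # placeholder for a large ~500mm^2 die on a 300mm wafer

# Rumoured early 40nm yields vs. the quoted 65nm GT200 yield
for y in (0.20, 0.25, 0.62):
    print(f"yield {y:.0%}: ~${cost_per_good_die(WAFER_PRICE, GROSS_DIES, y):,.0f} per good die")
```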
 
I've brought in countless quotes and links suggesting this is not the case, and 28nm is coming along just swell.

This is exactly the problem. There is no possible way one can be sure that 28nm is coming along "just swell". You have no idea what you are talking about. Then you base other guesses (they are guesses, not estimations) on something that's not verifiable. And that's what is driving everyone nuts: you presenting guesses based on other guesses as fact. You can repeat your guesses over and over again until you develop carpal tunnel syndrome, but that's not going to change anyone's mind.


In fact, at this point I've brought in so much evidence which points to 2012, you all will have to convince me that 2012 WON'T be the year MS launches xb720

I honestly feel sorry for you. You are going to be one disappointed fan. At this point I think you just have to agree to disagree and move on.
 
This is exactly the problem. There is no possible way one can be sure that 28nm is coming along "just swell"...

Well ... let's see, we've only heard positive reports about TSMC 28nm.

Is that a reason to suggest it's impossible to use the tech in 2012?

Is that a reasonable assumption at this point?

Especially when as I said:
xb1: new node within the year it was introduced.
xb360: new node within the year it was introduced (a month, actually).

Why the assumption (without proof) that 28nm will be a problem prohibiting a 2012 launch?
 
Well ... let's see, we've only heard positive reports about TSMC 28nm.
Sure, this is exactly why 28nm GPUs were pushed back and why the higher-end parts will come months after the low-end ones.


Again, main point has been that it's not technically impossible to launch at 28nm next year, just financially stupid.
 
...it's not technically impossible to launch at 28nm next year, just financially stupid.

How so?

What are the costs per wafer and yields which make you think it will be so expensive as to be prohibitive and give up a year's sales?

What kind of cost savings could be had by waiting 6 months and 6 months after that?

Are there power savings which could be had by waiting 6 months and/or 12 months? If so, how much?

What would a cooling solution cost for a q4/2012 300mm^2 28nm gpu and what savings could be had by modifying it for q2/2013 28nm and again for q4/2013?
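
To put some (entirely made-up) numbers against those questions, here's the shape of the calculation I'd want to see someone actually fill in; the wafer prices, yields and die counts below are assumptions for illustration, not data:

```python
# Illustrative only: how the silicon cost per console GPU might move across
# the three windows asked about above. Every input here is an assumption.

GROSS_DIES = 180  # rough candidate count for a ~300mm^2 die on a 300mm wafer

scenarios = {
    "Q4/2012": {"wafer_price": 6000, "yield": 0.40},
    "Q2/2013": {"wafer_price": 5500, "yield": 0.60},
    "Q4/2013": {"wafer_price": 5000, "yield": 0.75},
}

for when, s in scenarios.items():
    per_gpu = s["wafer_price"] / (GROSS_DIES * s["yield"])
    print(f"{when}: ~${per_gpu:.0f} of GPU silicon per console "
          f"(assuming {s['yield']:.0%} yield at ${s['wafer_price']:,}/wafer)")
```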
 
Have I told you yet I just love how you always ignore the interesting questions and only reply to the non-essentials?
 
This is off-topic, but I think it's relevant to everyone. With respect to "next generation" (and that goes for anything: consoles, GPUs, CPUs, etc.), I have seen countless times where people don't predict the most practical or likely outcome, but instead predict the outcome they want to see. They latch onto every possible (often irrelevant) piece of evidence they can find, look for hidden meanings in articles, and assure themselves there is no possible way they can be wrong and that it's the others who simply don't get it. They lose their ability to objectively analyze the most likely outcomes.

TheChefO has fallen into this trap. He's tired of the quality of current consoles and wants a replacement ASAP. So instead of thinking about what's most practical for Microsoft, he has convinced himself that the Xbox 720 has to be coming next year. Console and game sales still increasing? Casual users. 28nm process node? No issues! There's an uptick in PC hardware sales? People are clearly tired of current consoles. It seems to me he has lost objectivity.

So my advice (to everyone, not just TheChefO) is when you partake in these types of threads, ask yourself "is this really likely or this just something I want". Because it's very rare that something you want ends up being the actual outcome. And maybe, just maybe, the fault (and faulty logic) lies within you and not the rest of the forum.
 
Have I told you yet I just love how you always ignore the interesting questions and only reply to the non-essentials?

I'm sorry, but didn't you just do the same?





Regarding the GPU discussion, the latest rumours point to AMD's high-end parts being delayed, but only until February. We're probably talking about a 3-billion-transistor chip, which is most certainly a lot more than Wii U's GPU (at least 3x more, given the most optimistic performance targets so far?).


Given that PC GPUs (larger than Wii U's GPU) are already being used as pipe-cleaners for 28nm, by the end of Q2 2012 AMD will have had half a year of pipe-cleaning. That has to mean something.
I don't know if that's enough for a full production ramp of a chip that must be available in several million consoles by the end of 2012, but I'm pretty sure many of the people claiming that a 28nm GPU is "impossible" for the Wii U don't know either.


I think 28nm for the Wii U could've been "impossible" (I love how people throw this word around without a second thought in a tech forum, BTW) if the console was launching when we initially thought - Q2 2012.
But with the console launching late Q3/Q4 2012 (after E3 2012 as confirmed by Nintendo), I don't think 28nm for the GPU is impossible.



EDIT: Just figured out the conversation was about X720, not Wii U lol.
Nonetheless, if both GPUs are designed by AMD, the same applies :p
 
EA, UBI, EPIC have had dev kits for a while...

Earliest devkits in rumors went out during E3 of this year, so that's something like 1.5 years of dev time for launch titles. Which is enough for ports, I guess.

We also had new rumors about early dev kits at EA just a month ago.

Lastly, we had the six-core, 2 gig DDR3, and dual AMD GPU rumors.

We don't know what's going on. For a console launching in the same year as the Wii U, we're hearing a LOT less about dev kit specs.
 
Earliest devkits in rumors went out during E3 of this year, so that's something like 1.5 years of dev time for launch titles. Which is enough for ports, I guess.

We also had new rumors about early dev kits at EA just a month ago.

Lastly, we had the six-core, 2 gig DDR3, and dual AMD GPU rumors.

We don't know what's going on. For a console launching in the same year as the Wii U, we're hearing a LOT less about dev kit specs.

Is there any reason besides thermal concerns to do dual GPUs as opposed to integrating them all on a custom die? My only other guess is yield concerns.

If 1/4 of single-GPU dies fail, and the rate of failure is directly proportional to surface area, it would follow that an integrated GPU would have a higher rate of rejection (1/4*3/4 + 1/4*1/4 + 3/4*1/4 = 7/16).

Given how good scaling has become, I can see the reason behind taking a small performance hit for better yields.
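
A quick check of that arithmetic (the 1/4 failure rate is just the number from the example above, not a real yield figure):

```python
# Sanity check of the rejection-rate comparison above: if each half-size die
# fails with probability 1/4, a merged double-size die is scrapped whenever
# either half is bad. The 1/4 figure is illustrative, not a real yield.

p_fail_half = 0.25

p_reject_merged = (p_fail_half * (1 - p_fail_half)     # first half bad only
                   + p_fail_half * p_fail_half         # both halves bad
                   + (1 - p_fail_half) * p_fail_half)  # second half bad only
# Equivalent shortcut: 1 - (1 - p_fail_half) ** 2

print(f"small die rejected:         {p_fail_half:.2%}")       # 25.00%
print(f"merged double die rejected: {p_reject_merged:.2%}")   # 43.75%
```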
 
Is there any reason besides thermal concerns to do dual GPUs as opposed to integrating them all on a custom die? My only other guess is yield concerns.

If 1/4 of single-GPU dies fail, and the rate of failure is directly proportional to surface area, it would follow that an integrated GPU would have a higher rate of rejection (1/4*3/4 + 1/4*1/4 + 3/4*1/4 = 7/16).

Given how good scaling has become, I can see the reason behind taking a small performance hit for better yields.

Or you have 6990s in dev kits to approximate a single gpu solution more powerful than a 6970.

Looking at the 6950M, that's a 1.1 teraflop card with a TDP of 50W. Theoretically, the closest desktop equivalent at 1.1 teraflops would be around 100W. What did AMD do to cut power draw in half (approximately)?

Inductively speaking, we can fit a 200W or more desktop GPU in the Xbox 720 or PS4 if they can replicate the power reductions for the 69XXm series.
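
For what it's worth, the perf-per-watt gap being described works out like this (the 100W desktop figure is the rough estimate from above, not a published spec):

```python
# Rough FLOPS-per-watt comparison behind the argument above. The 100W
# desktop figure is the estimate from the post, not a published spec.

mobile_tflops, mobile_watts = 1.1, 50     # 6950M, quoted chip TDP
desktop_tflops, desktop_watts = 1.1, 100  # assumed comparable desktop part

mobile_eff = mobile_tflops * 1000 / mobile_watts    # GFLOPS per watt
desktop_eff = desktop_tflops * 1000 / desktop_watts

print(f"mobile: {mobile_eff:.0f} GFLOPS/W, desktop: {desktop_eff:.0f} GFLOPS/W, "
      f"roughly {mobile_eff / desktop_eff:.1f}x the perf per watt")
```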

I Chefoed the last part.
 
Is there any reason besides thermal concerns to do dual GPUs as opposed to integrating them all on a custom die?

...

Given how good scaling has become, I can see the reason behind taking a small performance hit for better yields.

It's about the same reasoning as for the existence of dual-GPU cards: higher performance now versus waiting for process scaling to arrive before coming up with something that's on par for performance, be it on paper or in practice or a mix of both, since we know that multi-GPU scaling can vary wildly. At least on console, devs can target such a setup specifically, and there may be other design considerations in effect to accommodate or mitigate the inefficiencies of multi-GPU.

That said, I would not expect them to merge the two dice as an entirely new single GPU design (when it becomes appropriate/feasible). It would literally be duct-taping and making sure the twin dice operate exactly as if they were separated.

In recent history, you can look at how IBM had to emulate the old FSB for the 360's CPU within the CPU/GPU die rather than accepting the inherent advantages brought about by merging the two (latency). It just simplifies a lot of things: devs targeting single spec still, QA for games, making sure old games work on new revision.. etc.

Regarding thermal dissipation, I'm not sure there will be much of an advantage. You might want to compare R700 to Cypress to get some sort of idea (consider performance & clocks, power, heatsink solution).


Looking at the 6950M, that's a 1.1 teraflop card with a TDP of 50W. Theoretically, the closest desktop equivalent at 1.1 teraflops would be around 100W. What did AMD do to cut power draw in half (approximately)?
hm... that doesn't sound right.

I presume you're looking at something between 5750 and 5770 on desktop, whose 86W-108W includes the entire card. That 50W TDP is just for the mobile chip alone though, so you'll have to add in the RAM and MXM package. There is also the much higher clock speed of the desktop GPUs that can contribute a fair bit. You may also want to consider that mobile chips may be of a higher grade bin than your "average" desktop part. Console manufacturers don't get the luxury of cherry picking the chips as they'll want as many as possible, so no I don't think the "200W" desktop will transfer over to "100W" in your example.
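
One way to see how much clock speed alone can account for: dynamic power scales roughly with frequency times voltage squared, so dropping clocks and voltage a little buys a disproportionate power saving. A back-of-envelope sketch with made-up ratios (the real mobile/desktop voltages aren't in this thread):

```python
# Back-of-envelope dynamic power scaling: P_dynamic ~ C * V^2 * f.
# The clock and voltage ratios below are illustrative assumptions, not
# real 69xx mobile-vs-desktop specs.

def relative_dynamic_power(freq_ratio, voltage_ratio):
    """Dynamic power of a downclocked/undervolted part vs. the baseline."""
    return freq_ratio * voltage_ratio ** 2

freq_ratio = 0.80     # e.g. mobile part at ~80% of the desktop clock
voltage_ratio = 0.85  # ...and ~85% of the desktop voltage

print(f"~{relative_dynamic_power(freq_ratio, voltage_ratio):.0%} "
      "of the desktop part's dynamic power")  # ~58%
```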
 
This is off-topic, but I think it's relevant to everyone. With respect to "next generation" (and that goes for anything: consoles, GPUs, CPUs, etc.), I have seen countless times where people don't predict the most practical or likely outcome, but instead predict the outcome they want to see. They latch onto every possible (often irrelevant) piece of evidence they can find, look for hidden meanings in articles, and assure themselves there is no possible way they can be wrong and that it's the others who simply don't get it. They lose their ability to objectively analyze the most likely outcomes.

TheChefO has fallen into this trap. He's tired of the quality of current consoles and wants a replacement ASAP. So instead of thinking about what's most practical for Microsoft, he has convinced himself that the Xbox 720 has to be coming next year. Console and game sales still increasing? Casual users. 28nm process node? No issues! There's an uptick in PC hardware sales? People are clearly tired of current consoles. It seems to me he has lost objectivity.

So my advice (to everyone, not just TheChefO) is when you partake in these types of threads, ask yourself "is this really likely or this just something I want". Because it's very rare that something you want ends up being the actual outcome. And maybe, just maybe, the fault (and faulty logic) lies within you and not the rest of the forum.

I've had a pc which outclasses xb360/ps3 for quite a while.

It isn't about "what I want".

It's about predicting the next gen.

My intent with digging up all that I did was to show what is possible.

Past trends, current trends, current capabilities, future capabilities, past capabilities, past budgets, projected budgets, etc.

None of which was framed in the "I want my new console NOW, mommy!" framework.

And BTW, for those around here so insistent that my projections were somehow so far out of left field as to be ridiculous, I'm not seeing anyone chiming in with projections which are reasonable and numbers which show why my projections are improbable or impossible, other than to state "that's expensive".

No hard data behind it.

Kettle.
 
Past trends, current trends, current capabilities, future capabilities, past capabilities, past budgets, projected budgets, etc.

If this gen has taught us anything, it's that people shouldn't go by past/current trends, current/future capabilities, budgets, etc.
 
It's about the same reasoning as for the existence of dual-GPU cards: higher performance now versus waiting for process scaling to arrive before coming up with something that's on par for performance, be it on paper or in practice or a mix of both, since we know that multi-GPU scaling can vary wildly. At least on console, devs can target such a setup specifically, and there may be other design considerations in effect to accommodate or mitigate the inefficiencies of multi-GPU.

That said, I would not expect them to merge the two dice as an entirely new single GPU design (when it becomes appropriate/feasible). It would literally be duct-taping and making sure the twin dice operate exactly as if they were separated.

Thanks, but I was talking more from the inception of the design, i.e. start with this dual-GPU design and just merge the cores versus keeping the cores separate. I was musing on that point and whether thermal or die yield would be the one to break the bank, since you have complete freedom to make the chip however big you want to fit in your enclosure. I'm guessing that since Nvidia typically does the larger monolithic designs and also has more rumors of poor yield surrounding it, a monolithic GPU would not be the way to go. The question in my mind then becomes: how weak can each GPU be and still warrant a dual-die approach? For instance, people get hyped up about a dual-GPU approach and it turns out it's two relatively underpowered GPUs (say, 6770s versus a 6950, for a wild example).

In recent history, you can look at how IBM had to emulate the old FSB for the 360's CPU within the CPU/GPU die rather than accepting the inherent advantages brought about by merging the two (latency). It just simplifies a lot of things: devs targeting single spec still, QA for games, making sure old games work on new revision.. etc.

Interesting question when it comes to talk of BC with previous generations. I've often assumed msoft would go PowerPC + AMD simply to make BC a shoo-in.
 