Nvidia GT300 core: Speculation

Status
Not open for further replies.
DegustatoR, do you think NVidia will launch a D3D11 GPU this year? More than one? What performance segment or segments?

Jawed
 
wrt to--->http://pctuning.tyden.cz/images/stories/g300.png
Ooh, haven't seen an image for a long time that's so desperate to be taken as legit. :LOL:

This early in any SKU's development (even for the earlier-confirmed Evergreen chips), clocks are obviously not finalized, and anything could still change.

Knowing OBR, if this was actually concrete 100%, he'd be making childish noises over at XtremeSystems already.
 
DegustatoR, do you think NVidia will launch a D3D11 GPU this year? More than one? What performance segment or segments?
I have no idea. Last I heard they were hoping to release the first DX11 GPU in autumn.
What you should understand is that it's not necessarily the GPU itself that's the reason for any possible delays. AFAIK G300 has been ready to go into production for quite some time now. But when it'll be able to is still a question.
Here's a five dollar question for all of you: do you consider RV740 "a success" right now?
 
I have no idea. Last I heard they were hoping to release the first DX11 GPU in autumn.
What you should understand is that it's not necessarily the GPU itself that's the reason for any possible delays. AFAIK G300 has been ready to go into production for quite some time now. But when it'll be able to is still a question.
Here's a five dollar question for all of you: do you consider RV740 "a success" right now?

You're mistaken. NV had problems with 40nm, they are working through them right now. Interestingly enough, Altera didn't have any problems:

http://www.semiconductorblog.com/2009/07/06/tsmc-40nm-yield-issues/

Here's an excerpt from NV's conference call:

Hans Mosesmann - Raymond James

Thanks. Jen-Hsun, on the 40-nanometer, since you brought it up, I know you are not going to talk about new products that are not announced but can you give us an update on how the ramp is going relative to previous process nodes in terms of the ramp, the volumes, and can you give us an update on what percentage of the mix by the end of the year could be coming through that process node? Thanks.

Jen-Hsun Huang

Let’s see -- the ramp is going fine. It’s during the -- you know, we are ramping 40-nanometer probably harder than anybody and so we have three products in line now in 40-nanometer and more going shortly. So there’s -- this is a very important node. TSMC is working very hard. We have a vast majority of their line cranking right now with new products, and so we are monitoring yields and they are improving nicely week-to-week-to-week, and so at this point, there’s really not much to report.

In terms of the mix towards the end of the year, let’s see -- I haven’t -- my rough math would suggest about 25%, is my guess. I mean, there’s still going to be a lot of 55-nanometer products. A lot of our MCP products, ION, for example, is still based on 55-nanometer and ION is going to be running pretty hard. I think you heard in David’s comments earlier that our Intel chipset product line is our fastest growing business and so my sense is that that’s going to continue to be successful and that is still in 55-nanometer. So I would say roughly 25% to 30% is my rough estimate going into the end of the year.

If the yields at TSMC were too low for production but the GT300 was ready to go, they would be passing out samples to board partners so they could start prepping for high-volume production.

You don't make any sense.

DK
 
What you should understand is that it's not necessarily the GPU itself that's the reason for any possible delays. AFAIK G300 has been ready to go into production for quite some time now. But when it'll be able to is still a question.
Here's a five dollar question for all of you: do you consider RV740 "a success" right now?
RV740 is available to buy right now. It can't have been a success in terms of revenue yet though, having been on sale for only 1-2 months.

But, saying "G300 is ready to go into production" is like saying "in May 2008 RV740 was ready to go into production". Or, indeed, "in May 2008 GT218, GT216 and GT214/5 were ready to go into production". It's a bland, meaningless statement in the present. Those chips (well maybe not all of the NVidia ones) were meant to be on the market for Christmas 2008.

If your first chips on a new node are used in partnership with TSMC to ramp-up that node (the implication is that Altera, AMD and NVidia are ramp partners, dunno to what degree though) that implies some degree of design flexibility. Dunno if that would go as far as taping-out more than once (is that meaningful?).

If you're AMD or NVidia then later chips (Evergreen, GT300) will be held back until a sufficient degree of confidence has arisen from ramp-chips (RV740, GT21x). Obviously, we've got no idea when that confidence occurred. Juniper (128-bit chip, 181mm² - it seems - worth noting it's not massive in comparison with RV740) was running within 1 quarter of RV740, before 40nm "was fixed". GT300 could have been up and running at the same time as GT21x, solely because the latter crashed into GT300's ramp schedule, which hasn't changed.

AMD's only demonstrated one chip, so there's no reason to assume Cypress (256-bit chip) is as ready as Juniper (unless it's MCM - erm, unless the MCM bit is broke).

I personally have no idea of the status of GT300 - just the wild and wildly entertaining disparity in rumours of tape-out timing. GT218 is something like 60mm². The gulf between it and ~470mm² for GT300 (say) is a hell of a lot of engineering and process ramp :???:

Anyway, as late-nazi, I do have the most wonderfully concrete date to judge the timeliness of GT300 - W7 launch :p

Jawed
 
You're an idiot. NV had problems with 40nm, they are working through them right now. Interestingly enough, Altera didn't have any problems:

http://www.semiconductorblog.com/2009/07/06/tsmc-40nm-yield-issues/
That article is horribly off at the start. For instance, 40nm GPUs are supposed to have been on the market for 6 months+, so basing the article on recent reports about 40nm, when the problems go way back, is pretty lame.

Second, Altera's chips are way simpler than NVidia's. Even if NVidia is "relatively incompetent" (see bumpgate) that can't explain all of the reasons for NVidia's problems. And don't forget AMD had problems too.

Pity they didn't bother to look at ATI GPUs while they were probing 40nm. But then when you read laughable paragraphs like this:

My question was and is, “What does a fabless company gain by jumping onto a new node before it’s mature?” Another trusted colleague suggested that it is Nvidia’s customers who drive this decision. They can leverage the idea that a process shrink will reduce production costs for the Nvidia GPU’s – perhaps not today but down the road. Of course, they are playing dumb on the cost issue to get better performance than they would if Nvidia just waited until it was sure TSMC could provide a more reasonable number of working die per wafer. This seems to fit with the DigiTimes reports out of Nvidia about the transitioning to 40nm. Nvidia’s current plan is to move to 40nm only for OEM devices and will switch its own branded products sometime later.
which effectively posits that AMD doesn't exist as a competitor in graphics :rolleyes: you know the author is on something.

If the yields at TSMC were too low for production but the GT300 was ready to go, they would be passing out samples to board partners so they could start prepping for high-volume production.
How do you know they haven't? How do you know, for example, game developers aren't working with it?

Jawed
 
RV740 is available to buy right now. It can't have been a success in terms of revenue yet though, having been on sale for only 1-2 months.
So the only reason for RV740 not being successful right now is that little time has passed since its introduction?

But, saying "G300 is ready to go into production" is like saying "in May 2008 RV740 was ready to go into production". Or, indeed, "in May 2008 GT218, GT216 and GT214/5 were ready to go into production". It's a bland, meaningless statement in the present.
It's as far as I can disclose, sorry. You may think it's bland and meaningless but that's your choice of what to do with this information. And no, it's not like saying those other things you've quoted. But it's a bit like saying "G200 had been ready to go into production since March '08", yes.

Those chips (well maybe not all of the NVidia ones) were meant to be on the market for Christmas 2008.
It doesn't matter what was meant. G80 was meant to be on the market in 2005 and R600 in 2006. GT200 was meant to be G100 384SP/384-bit GDDR3. There are many reasons for chips to go into production when they do and how they do, and the chips themselves are only partly responsible for their schedules.
What I'm saying is that even if G300 has already taped out, it doesn't mean that it'll launch in retail sharply 3 months after that happened. It doesn't work like this any more.
AMD had wafers of RV840s (probably) in May but will they launch them in August or even September? Highly doubtful.
It's even more true in the case of G300, which is a more complex chip. Why do you think that NV hasn't been doing the same 40G testing AMD did? They both have exactly the same access to the new process and NV needs that process even more than AMD does. So why would AMD have a wafer of DX11 GPUs while NV hasn't even taped out any? THAT doesn't make any sense at all.

If your first chips on a new node are used in partnership with TSMC to ramp-up that node (the implication is that Altera, AMD and NVidia are ramp partners, dunno to what degree though) that implies some degree of design flexibility. Dunno if that would go as far as taping-out more than once (is that meaningful?).
Of course they may have taped out more than once, more than one version of the chip. But what I'm saying is that G300 was taped out in its final form some time ago. What was before that is unknown to me. Whether they'll need a respin, and how many respins, is unknown too.

If you're AMD or NVidia then later chips (Evergreen, GT300) will be held back until a sufficient degree of confidence has arisen from ramp-chips (RV740, GT21x).
That's true and is partly what I'm saying here. But that doesn't mean that you hold back the first tape-out also. There are no reasons to hold that back.

Obviously, we've got no idea when that confidence occurred. Juniper (128-bit chip, 181mm² - it seems - worth noting it's not massive in comparison with RV740) was running within 1 quarter of RV740, before 40nm "was fixed". GT300 could have been up and running at the same time as GT21x, solely because the latter crashed into GT300's ramp schedule, which hasn't changed.
Agreed. But ramp is the start of mass production. You don't need to start a mass production to have a first silicon.

I personally have no idea of the status of GT300 - just the wild and wildly entertaining disparity in rumours of tape-out timing. GT218 is something like 60mm². The gulf between it and ~470mm² for GT300 (say) is a hell of a lot of engineering and process ramp :???:
There are (were) other GT21xs...

Anyway, as late-nazi, I do have the most wonderfully concrete date to judge the timeliness of GT300 - W7 launch :p
I don't get this W7 launch woo-hoo. I think that W7 appearing in retail will do exactly ZERO for GPU/videocard sales. So does it really matter to have a DX11 GPU by the time W7 launches? I do believe that something like CoDMW2 will have a much bigger impact than W7.

Try reading my sig.
Why should I? Your postings are quite enough to know exactly who you are.
 
Big machines running 24/7 for weeks to months? :oops:
Or mostly just in working hours with pauses for human intervention & then re-running?
What sort of scale of machine do they use, like Top 500 scale or just decently big?

Synth is primarily a single thread workload.
 
My thinking is, Nvidia is more concerned with countering Larrabee in every way, than it is with getting NV60 / G300 out in time to meet AMD's DX11 GPUs late this year.
 
So the only reason for RV740 not being successful right now is that little time has passed since its introduction?
Eh?

It's as far as I can disclose, sorry. You may think it's bland and meaningless but that's your choice of what to do with this information. And no, it's not like saying those other things you've quoted. But it's a bit like saying "G200 had been ready to go into production since March '08", yes.
No interest in beating that dead horse.

It doesn't matter what was meant. G80 was meant to be on the market in 2005 and R600 in 2006.
R600 was meant for 2005 too, it had 3 revisions and then was strategically delayed to line up with lower-class chips that then got delayed (due to 65nm woes), and the whole thing was cack.

GT200 was meant to be G100 384SP/384-bit GDDR3. There are many reasons for chips to go into production when they do and how they do, and the chips themselves are only partly responsible for their schedules.
Wonder if that was broken or "the wrong product" or both...

What I'm saying is that even if G300 has already taped out, it doesn't mean that it'll launch in retail sharply 3 months after that happened.
Until we see a chip that does achieve a 3 month interval by intent, I'll stick to assuming 6 month intervals.

It doesn't work like this any more.
AMD had wafers of RV840s (probably) in May but will they launch them in August or even September? Highly doubtful.
What's doubtful? :???: You mean it's likely to be October, because of the 40nm grief?

If you're planning for the first version of a chip to be production ready, then 3-4 months would be reasonable. But does anyone write a plan on that basis?

AMD may only be able to launch in October because version 1 is fine. Otherwise it'd be a January launch. Dunno. Maybe RV740's schedule crashed into Juniper's. Maybe the expectation is that smaller chips can go from tape-out to product shelf in 3-4 months, but there's no decent evidence anyone's planning it to work that way.

It's even more true in the case of G300, which is a more complex chip. Why do you think that NV hasn't been doing the same 40G testing AMD did? They both have exactly the same access to the new process and NV needs that process even more than AMD does. So why would AMD have a wafer of DX11 GPUs while NV hasn't even taped out any? THAT doesn't make any sense at all.
I'm confused. NVidia has done the same testing as AMD. Supposedly NVidia was ahead of AMD last year (well NVidia gave off that vibe). NVidia has "launched" 3 chips, but it seems GT215 won't actually appear for a few months. So NVidia has two 40nm chips in production. GT215 may not be in production because of continuing problems. Or it could be nothing more than production capacity.

(The whole GTS240/GTS250 renamed-G92 nonsense seems to have arisen out of the missing 40nm chips, so NVidia had used the names regardless of the fact the chips didn't exist. That's quite some trick :p )

For what it's worth I'd expect NVidia to have taped-out GT300. I'm incredulous at the rumours that it hasn't happened. The only variable (not likely) would be that NVidia was originally planning a 2010Q1 launch of GT300 (assuming that consumer W7 was launching then - and then NVidia was caught out by Microsoft bringing the launch forward by months). Still for a major new generation of chips, it doesn't make sense for a short tape-out->launch interval. So from that point of view it makes sense that GT300 should have taped out by now.

So any rumours of GT300 not taping out are nominally silly. Indeed, thinking about it now, it's radically silly to think that such rumours would be planted by NVidia to put-off AMD, as your normal expectation is that AMD would laugh that off as a plant for being so ridiculous. Loops of cold-war thinking are fun, eh? Well, as long as there's no nukes involved.

That's true and is partly what I'm saying here. But that doesn't mean that you hold back the first tape-out also. There are no reasons to hold that back.
Depends if the first chips were meant as tests or not and the importance of the resulting dependencies in GT300. But as I said before, GT300 may have taped-out according to its original schedule - it could be a case of GT21x slipping into GT300's schedule, nothing more.

Agreed. But ramp is the start of mass production. You don't need to start a mass production to have a first silicon.
By "ramp schedule" I was referring to GT300 being given the go-ahead to be taped-out etc. i.e. GT21x are all so late that GT300 arrived at TSMC without NVidia's planned confidence having been achieved.

I don't get this W7 launch woo-hoo. I think that W7 appearing in retail will do exactly ZERO for GPU/videocard sales. So does it really matter to have a DX11 GPU by the time W7 launches? I do believe that something like CoDMW2 will have a much bigger impact than W7.
If GT300 is late I expect NVidia to be saying exactly the same :p Remember, D3D10.1 is a waste of time. Oh, wait...

NVidia's not been executing well for quite a long time (e.g. 3 spins for GT200b, hmm, same as R600), so GT300 being on time would be a cool surprise.

Jawed
 
My thinking is, Nvidia is more concerned with countering Larrabee in every way, than it is with getting NV60 / G300 out in time to meet AMD's DX11 GPUs late this year.

Well, unless they know what AMD is coming with, they should be very concerned, as AMD is a far more credible threat than Intel right now. But it is really sad that we had all these delays and cancellations in the past few years. Things might have been a lot more exciting otherwise.
 
Actually SI and Cap can have a large effect on circuits in chips.
I didn't claim otherwise. But eventually, the parameter you care about is timing. It's all those effects rolled up into one number.
Chalnoth wrote that you need to care about SI and cap when doing a shrink. No, you don't, not explicitly: it's a given that those will be fixed as part of your standard flow.
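As a toy illustration of "rolled up into one number": every SI and coupling-cap effect eventually lands as a derate inside a single slack calculation. All names and numbers below are invented for illustration, not taken from any real library or chip.

```python
# Toy static-timing arithmetic: SI and coupling-cap effects ultimately show
# up as deltas folded into one number, the path slack. Values are made up.
clock_period_ps = 1250   # ~800 MHz clock target
cell_delay_ps = 700      # summed gate delays along the critical path
wire_delay_ps = 380      # RC wire delay, with coupling-cap (SI) derates folded in
setup_ps = 90            # capture flop setup requirement

slack_ps = clock_period_ps - (cell_delay_ps + wire_delay_ps + setup_ps)
print(slack_ps)  # -> 80: positive slack, so the path meets timing
```

Whether the delta came from crosstalk, extra cap, or a slow cell, the designer only has to close on the one rolled-up number.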

Just churning synth can take weeks to months.
Synth may be a single-threaded process, but it's not as if you're going to run 'compile' on your top level and have 1 process churn through the full chip.
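A minimal sketch of that hierarchical approach: one single-threaded synthesis job per partition, with the partitions running in parallel. The block names and the stand-in `synthesize` function are hypothetical; a real flow would invoke the vendor tool script per block instead.

```python
from concurrent.futures import ProcessPoolExecutor

# Hypothetical partitions of a large design, with made-up runtime estimates.
# A real flow launches one single-threaded vendor-tool run per block.
BLOCKS = {"shader_cluster": 120, "tex_unit": 45, "rop": 30, "mem_ctrl": 25}

def synthesize(item):
    block, est_minutes = item
    # Stand-in for a per-block synthesis run; returns a fake report line.
    return block, "%s: synthesized (~%d min est.)" % (block, est_minutes)

if __name__ == "__main__":
    # One process per block: wall-clock time approaches the slowest
    # partition rather than the sum of all partitions.
    with ProcessPoolExecutor() as pool:
        reports = dict(pool.map(synthesize, BLOCKS.items()))
    for line in reports.values():
        print(line)
```

The parallelism comes from partitioning the design, not from threading inside any one synthesis run, which matches the "synth is single-threaded" observation above.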

Any full node change is fairly significant. There are tons of things that worked in the prior processes that will need reworking to work in the next process if you care at all about frequency.
Yes, for analog blocks and custom circuits. For vanilla standard-cell RTL designs that run below, say, 1GHz? If you stay within the same process class, in 99% of cases you don't see frequency reductions when shrinking.

Or any number of other issues ranging from bad parametrization or just the will of the universe. Timing discrepancies have a large list of factors between EDA and silicon.
You work in a different environment, where 3GHz+ clocks are normal and heavy pessimism in process characterization is not acceptable. This is not where pretty much all fabless companies live. They get to live with the libraries that are given to them by fabs that prefer to make their process look a bit slower up front instead of having to explain later why silicon doesn't perform at speed in all corners.
 
That article is horribly off at the start. For instance, 40nm GPUs are supposed to have been on the market for 6 months+, so basing the article on recent reports about 40nm, when the problems go way back, is pretty lame.

Second, Altera's chips are way simpler than NVidia's. Even if NVidia is "relatively incompetent" (see bumpgate) that can't explain all of the reasons for NVidia's problems. And don't forget AMD had problems too.

Yes, FPGAs are simpler. But for some reason, those FPGAs had fine yield with larger die size, while GPUs from NV and ATI did not. Why could that be?

My point wasn't that I believe the article word for word, but that they are raising some excellent points about what could possibly cause the problems.

How do you know they haven't? How do you know, for example, game developers aren't working with it?

It's possible game developers are working with GT300. However, I'd be shocked if that was the case and info hadn't leaked. Not saying it's impossible, just that it seems unlikely.

DK
 
Yes, FPGAs are simpler. But for some reason, those FPGAs had fine yield with larger die size, while GPUs from NV and ATI did not. Why could that be?
Because FPGAs are in that respect similar to RAMs? Highly repetitive structures that lend themselves extremely well to repair with redundancy?
 
Yes, FPGAs are simpler. But for some reason, those FPGAs had fine yield with larger die size, while GPUs from NV and ATI did not. Why could that be?
For a start you can use an arbitrary amount of redundancy within the structure of the FPGA, whereas a GPU's wildly varying types of functional units don't all have the scale to easily offer such flexibility.

~29% of RV770, for instance, has ALUs with 1 in 17 redundancy (though the blocks of memory within each ALU amount to ~32% of the area and presumably have higher resilience). You could achieve far more resilience in an FPGA, e.g. 1 in 9, but whatever you choose, you have fair freedom. And that would apply for rather more than 30% of the die. Don't know anything about redundancy levels in other parts of RV770. Clearly there are variants such as HD4830 with stuff turned off on a more coarse level.

Do you know the redundancy built into Altera's designs?
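A back-of-the-envelope way to see why fine-grained redundancy matters so much for yield: treat each repairable block with a binomial model and a region of silicon with the classic Poisson model. Every number here (defect density, block areas, spare counts) is a made-up illustration, not real RV770 or Altera data.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson model: probability a region has zero killer defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

def repairable_yield(n_blocks, spares, block_yield):
    """Probability that at most `spares` of `n_blocks` identical blocks are
    defective, i.e. any defects can be fused out using the spare blocks."""
    ok = 0.0
    for k in range(n_blocks - spares, n_blocks + 1):
        ok += (math.comb(n_blocks, k)
               * block_yield ** k
               * (1.0 - block_yield) ** (n_blocks - k))
    return ok

# Illustrative numbers only: 17 identical ALU blocks of ~0.05 cm^2 each,
# at an assumed defect density of 0.5 defects/cm^2.
block = poisson_yield(0.05, 0.5)

no_spare = repairable_yield(17, 0, block)    # all 17 blocks must be perfect
one_spare = repairable_yield(17, 1, block)   # 1-in-17 redundancy
fpga_style = repairable_yield(17, 3, block)  # FPGA-ish: cheap extra spares
```

With these invented numbers a single spare already recovers a large fraction of the dies the no-spare case loses, and extra spares recover more; since an FPGA can afford that level of sparing across most of the die while a GPU can't, that's one plausible reading of the yield gap.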

My point wasn't that I believe the article word for word, but that they are raising some excellent points about what could possibly cause the problems.
The article raises a solitary point with absolutely no reference to historical GPUs nor competitors.

It's possible game developers are working with GT300. However, I'd be shocked if that was the case and info hadn't leaked. Not saying it's impossible, just that it seems unlikely.
I think G80 was out with some game studios before its launch...

Jawed
 
When was RV740 launched? In March? OK, let's say May. It's been 2+ months. Was RV770 successful in September? How much time do we need to consider a chip successful?
I'm just pointing out a reason why a "ready" chip may be "put on hold" because of the issues besides the chip itself.

Until we see a chip that does achieve a 3 month interval by intent, I'll stick to assuming 6 month intervals.
RV670, G92, G94, G94b. Dunno about G92b.

What's doubtful? :???: You mean it's likely to be October, because of the 40nm grief?
October or even later. Although they certainly may try to repeat the "success" of RV740 and launch the thing in August or September.

I'm confused. NVidia has done the same testing as AMD. Supposedly NVidia was ahead of AMD last year (well NVidia gave off that vibe). NVidia has "launched" 3 chips, but it seems GT215 won't actually appear for a few months. So NVidia has two 40nm chips in production. GT215 may not be in production because of continuing problems. Or it could be nothing more than production capacity.
So why do you think that these chips (there were more than 3; let's not forget GT214 and GT212) are all that NV's been working on since last year? Wouldn't it make more sense for them to work on the G30x generation in parallel, using the GT21x generation as "testing" chips?

For what it's worth I'd expect NVidia to have taped-out GT300. I'm incredulous at the rumours that it hasn't happened. The only variable (not likely) would be that NVidia was originally planning a 2010Q1 launch of GT300 (assuming that consumer W7 was launching then - and then NVidia was caught out by Microsoft bringing the launch forward by months). Still for a major new generation of chips, it doesn't make sense for a short tape-out->launch interval. So from that point of view it makes sense that GT300 should have taped out by now.
Well if it hasn't then you probably shouldn't expect it sooner than Spring '10. Which, combined with the GT212 cancellation, would be a disaster for NV.

Depends if the first chips were meant as tests or not and the importance of the resulting dependencies in GT300. But as I said before, GT300 may have taped-out according to its original schedule - it could be a case of GT21x slipping into GT300's schedule, nothing more.
...Or it could be the case of 40G being unable to handle GT21x line-up...

If GT300 is late I expect NVidia to be saying exactly the same :p Remember, D3D10.1 is a waste of time. Oh, wait...
I highly doubt that G300 alone will make any difference anyway, even if it isn't late. The question is: why is it "assumed" that W7 will have any impact on GPU sales? Even DX11 won't have any impact until some DX11 games show up. W7 itself? I have no idea why people think that it'll do anything to GPU/videocard sales...
As for 10.1 - I'm still waiting for the details of its implementation in GT21x. Maybe it _is_ a waste of time on NV's hardware.

NVidia's not been executing well for quite a long time (e.g. 3 spins for GT200b, hmm, same as R600), so GT300 being on time would be a cool surprise.
GT200b went to production in the B2 revision. B3 was made for GTX295. It's not like B2 was faulty or anything.
Otherwise it's hard to say anything about NV's execution because it's hard to determine the reasons for NV's woes lately. Was the GT21x postponement/cancellation a problem of NV's execution or TSMC's technology? Was GT200's lateness (and comparative suckiness) a problem of NV's execution or NV's tactical mistake? I'd say that they are executing as well as they can right now. It's not like they have many options other than TSMC's 40G.
 
So the only reason for RV740 not being successful right now is that little time has passed since its introduction?

It's as far as I can disclose, sorry. You may think it's bland and meaningless but that's your choice of what to do with this information. And no, it's not like saying those other things you've quoted. But it's a bit like saying "G200 had been ready to go into production since March '08", yes.

It doesn't matter what was meant. G80 was meant to be on the market in 2005 and R600 in 2006. GT200 was meant to be G100 384SP/384-bit GDDR3. There are many reasons for chips to go into production when they do and how they do, and the chips themselves are only partly responsible for their schedules.
What I'm saying is that even if G300 has already taped out, it doesn't mean that it'll launch in retail sharply 3 months after that happened. It doesn't work like this any more.

Taping out doesn't really mean a whole lot. Their first tape out could be a total dud and require a radical revision.

AMD had wafers of RV840s (probably) in May but will they launch them in August or even September? Highly doubtful.
It's even more true in the case of G300, which is a more complex chip. Why do you think that NV hasn't been doing the same 40G testing AMD did? They both have exactly the same access to the new process and NV needs that process even more than AMD does. So why would AMD have a wafer of DX11 GPUs while NV hasn't even taped out any? THAT doesn't make any sense at all.

Probably because NV has always been very conservative about moving to new process technology since the NV30. It seems that ATI has a better physical design team that is able to deal with a novel process, while NV really doesn't. And it's not because the folks at NV are stupid, it's just that they decided to take a conservative approach to new process tech and thus don't need or have folks who can find obscure process characterization bugs.

It's as far as I can disclose, sorry. You may think it's bland and meaningless but that's your choice of what to do with this information. And no, it's not like saying those other things you've quoted. But it's a bit like saying "G200 had been ready to go into production since March '08", yes.

The problem is that your statement is useless and probably wrong. If GT300 was ready to go into production, that would mean that they could achieve reasonable yields.

The reality is that when you design a chip, it's not ready for production until it will achieve acceptable yields. If you have a problem with the process, it may very well require a work around in design. That kind of stuff happens all the time and it's why close collaboration is required.

Of course they may have taped out more than once, more than one version of the chip. But what I'm saying is that G300 was taped out in its final form some time ago. What was before that is unknown to me. Whether they'll need a respin, and how many respins, is unknown too.

If it taped out in its final form, how could there be more respins? That does not make sense.

Agreed. But ramp is the start of mass production. You don't need to start a mass production to have a first silicon.

That's pretty obvious. However, if you have first silicon for a GPU a long time before you have mass production, then you planned poorly and screwed up.

DK
 
When was RV740 launched? In March? OK, let's say May. It's been 2+ months. Was RV770 successful in September? How much time do we need to consider a chip successful?
I'm just pointing out a reason why a "ready" chip may be "put on hold" because of the issues besides the chip itself.

Why don't you come out and say what you mean instead of trying to imply nebulous crap?

If the chip is ready but the foundry is not, that is a huge strategic mistake by the project managers. If the chip is ready but partners are not, that's also a huge strategic mistake.

When you design chips on a cutting edge node you need to work directly with the foundry and cooperate to get anything out the door with reasonable yield.

The bottom line is that if NV is suffering in the market (they are), they want to get their GPU out the door as soon as feasibly possible. You seem to imply that the GPU is finished, but other things are holding them up and that just doesn't seem very likely...especially given what NV management has said about their expectations for 40nm.

RV670, G92, G94, G94b. Dunno about G92b.

Funny, those are all derivative products and not particularly new nor innovative. Supposedly GT300 will be new and innovative. If nothing else they need to support DX11. And don't forget that testing and validation is almost as much work as designing the thing in the first place.

October or even later. Although they certainly may try to repeat the "success" of RV740 and launch the thing in August or September.

So why do you think that these chips (there were more than 3; let's not forget GT214 and GT212) are all that NV's been working on since last year? Wouldn't it make more sense for them to work on the G30x generation in parallel, using the GT21x generation as "testing" chips?

Absolutely. I'm sure they are doing multiple chips in parallel. To hit their product cadence they have to.

However, not all learning from earlier chips will translate over. Sometimes you learn about problems far too late to fix the next gen. Remember, these things have to be locked down at least 6-12 months before first tape out.

Well if it hasn't then you probably shouldn't expect it sooner than Spring '10. Which, combined with the GT212 cancellation, would be a disaster for NV.

...Or it could be the case of 40G being unable to handle GT21x line-up...

I suspect it has to do with both design and process technology.


I highly doubt that G300 alone will make any difference anyway, even if it isn't late. The question is: why is it "assumed" that W7 will have any impact on GPU sales? Even DX11 won't have any impact until some DX11 games show up. W7 itself? I have no idea why people think that it'll do anything to GPU/videocard sales...
As for 10.1 - I'm still waiting for the details of its implementation in GT21x. Maybe it _is_ a waste of time on NV's hardware.

I think many folks still hanging onto Windows XP will want to upgrade to Win7. That will definitely drive upgrades and new system purchases.

GT200b went to production in the B2 revision. B3 was made for GTX295. It's not like B2 was faulty or anything.
Otherwise it's hard to say anything about NV's execution because it's hard to determine the reasons for NV's woes lately. Was the GT21x postponement/cancellation a problem of NV's execution or TSMC's technology? Was GT200's lateness (and comparative suckiness) a problem of NV's execution or NV's tactical mistake? I'd say that they are executing as well as they can right now. It's not like they have many options other than TSMC's 40G.

Here's the bottom line:
TSMC's 40G works fine for Altera. TSMC's 40G doesn't work fine for NV (and appears to not work so great for ATI).

That is both TSMC's fault, but also NV's fault. Clearly NV and ATI have something in their design that causes yield problems that Altera doesn't. Perhaps it's embedded SiGe, perhaps it's something else.

You can blame failures on whoever you want, but ultimately canceling GT21x hurts NV more than TSMC... so does it matter whose fault it is? I just hope that whatever went wrong there doesn't impact GT300.

DK
 