Anand says - now that NV30 is taped out - we can reveal...

Re: Anand says - now that NV30 is taped out - we can reveal.

Ok, this has been going on too long, so I will keep this short. "First tape-out last week" - doesn't this mean that there could be several more tape-outs? And couldn't those push the retail release even further away?
 
anandtech

We've said this before and it still holds true to date: NV30 is faster than the R300 on paper. The one major caveat is that ATI is days away from shipping R300 while NV30 has only just taped out, making any technology or performance advantage the NV30 may have a moot point.


The first NV30 silicon taped-out last week, this is no less than three months behind schedule.

This confirms that the tape-out of the NV30 happened only in August, not earlier as claimed by many Nvidiots (sorry!).
 
Three months late... the original release was planned for October, so add a minimum of 3 months.

This is a short term problem for NVIDIA. Long term though there is no reason for concern.

On paper it sounds faster than the R300, and it should be. But on paper the R200 was meant to be faster than the NV20 and NV25 at certain things, and the Parhelia, even with its low clockspeed, was also meant to be faster than xxx card on paper. It's a bit pointless for Anand to state that on paper the NV30 is meant to be faster than the R300.
 
anandtech

NV30 taped-out just recently and is 3 - 4 months away from final production (silicon).

Does he mean production or production silicon?
If he means production silicon then the timeline does not work: with production silicon 3-4 months away, we won't see cards on the shelf in December. If he means production, then:

Only if the A01 (or A00?) silicon is already ready for primetime (= production). Seems unlikely. A01 should be good enough for technical demonstrations, qualification and maybe even benchmarks (like R300), but I really doubt that A01 will be ready for production.

On top of that, NV30 will be really expensive:

The yields on 0.13-micron NV30 chips aren't high at all unfortunately; currently yields are between 10 and 20%, meaning that for every 10 chips made, only 1 or 2 are actually functional.
 
Short term – NV are now feeling the pressure from competition; if NV30 is operable but with “slightly more bugs than we’d like” on the second or so spin, then they may be forced to ship it anyway; kinda the reverse of R200. They’ve made a rod for their own backs by saying ‘Fall’ or ‘200’ or ‘100 days from tape-out to production’ – how far are they willing to stick to those timings if the silicon has plenty of errors?

Longer term – they’ve blown their wad already. Is it really that impossible for ATI to increase the shader programmability etc. in their .13um refresh and have it within 3-6 months of NV30? ATI already have more options to play with (DDR-II, shrinking R300 to .13um, R350 etc.); this could give ATI (or others, but ATI is in the best position) the ability to really gain some headway.
 
The low yields do not necessarily mean a high price. I have seen fabs (who are at fault for low yields) offer wafer prices that equate to a guaranteed acceptable yield.

If they just had their initial tapeout, I doubt we'll see boards on the shelf for Christmas...there just isn't enough time (even if they don't do any revs) to do two full runs through the fab and get parts to the OEMs.
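As a rough illustration of why two full runs through the fab don't fit before Christmas, the timeline above can be sketched with back-of-envelope dates; every duration below is an assumed round number for a 2002-era 0.13um flow, not a known TSMC or NVIDIA figure:

```python
from datetime import date, timedelta

# All durations are rough assumptions, not actual fab data.
TAPEOUT = date(2002, 8, 19)           # "last week" relative to Anand's article
FAB_CYCLE = timedelta(weeks=8)        # tape-out (or new masks) to silicon back
RESPIN = timedelta(weeks=4)           # debug first silicon + cut new masks (optimistic)
PRODUCTION_RAMP = timedelta(weeks=6)  # volume run, assembly/test, OEM lead time

first_silicon = TAPEOUT + FAB_CYCLE              # first chips back for bring-up
respin_silicon = first_silicon + RESPIN + FAB_CYCLE  # second full run through the fab
boards_to_oems = respin_silicon + PRODUCTION_RAMP
print(boards_to_oems)  # 2003-02-17: well past Christmas even with a single respin
```

Even if the first spin were perfect and the respin run were skipped entirely, first silicon plus the ramp alone lands close to December, leaving no slack for bugs.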

But all that's just from my experience.
 
I absolutely agree, the architectural (currently paper-only) advantage of NV30 means little, especially given the lateness of the part. Christmas is now possible, but not certain. It has been said that ATI indeed has a refresh part planned for winter/spring (although that is a departure from their traditional product launch times): no .13 shrink of R300, but rather the R350 at .13 micron. With all the talk about MS, DX, and Nvidia's and ATI's roles in this, it seems only likely that R350 can and will be improved in a way to catch up with NV30 in shader flexibility, or maybe exceed it (DX9.1?); we'll have to wait and see.

In respect to R350, three questions remain. First, how much improved over R300 and/or NV30 will R350 be? Second, does the NV30 delay mean a delay of NV35 as well, or can it still be on time to compete with R350? Third, will NV35 improve more than just the usual refresh (like the rumors about some architecture innovations that apparently got stripped from NV30 but might make it into NV35)? With the current timing and delays it is a bit annoying to always have the next best thing right on the horizon; at least in the past few years product launches were timed in a way that after a short time of heat (like GF3->R8500->GF3Ti), there was a long period of getting used to what was on the market. The next 6-9 months however could end up much more packed, with less time between releases than ever before. Imagine a 6-month cycle for ATI and Nvidia, each releasing new products 3 months after the other... <shudder>

Forgive me my highly speculative post, but I'm about to head off for a month of vacation (Iceland!) and am probably gonna miss on a lot of exciting stuff from the GPU/VPU/xPU world (note that I prefer my vacation over 3d technology news by far though, hehe), so my mind is racing a bit ahead of the game... ;)

If I don't get to make another post, I hope you people have a fun time over the next month; see you in September! :)
 
I notice that Anand seems to assume that the first revision taped out last week will be fully functional and without bugs.

I'd guess that this probably wouldn't be the case. Assuming that nvidia identify and fix any bugs quickly and get the first respin working perfectly, how long would it be before they obtained the final chips for use in cards by their customers?

This question is really aimed at the people on this board who have experience or knowledge of the whole tapeout/fabbing/production process.

Edit: I see Russ answered my question before I got around to posting it!
 
mboeller said:
Does he mean production or production silicon?

http://www.anandtech.com/video/showdoc.html?i=1678&p=3

Anandtech said:
The first NV30 silicon taped-out last week, this is no less than three months behind schedule.

I'd assume that this was the very first silicon.

Gollum said:
The next 6-9 months however could end up much more packed, with less time between releases than ever before. Imagine a 6-month-cycle ATI and Nvidia, each releasing new products 3 months after the other... <shudder>

The basic driver for the refreshes from both vendors is the OEM refresh cycle - I'd wager that ATI and NVIDIA will fall back in line at some point. The time difference between NV30 and NV35 will be either longer or shorter than the usual 6 months to realign with the OEMs, IMO; Anand seems to think it will be shorter.

However, the wider question is: can everyone rely on silicon processes coming around as often in the future as they have previously? The length of time it's taken to get .13um running, and the yields it's showing, sound a little horrid; how do we know these issues won't be replicated with future processes such as .09um? Perhaps the 6-month refresh in the high end may be forced out sooner or later.
 
Um, I have some questions regarding chip production/yields, maybe Russ can answer those from his experience. Would be nice ... :)

I thought that from initial tape-out (which happened last week for NV30, according to Anand), it might take several weeks to get first silicon back? How come they know the yields already? More so, how come we knew almost exactly how bad NV30 yields were gonna be (~15%) for weeks, if it only now taped out? Is it common practice to be able to estimate yield for a specific chip even before the wafers are produced? If so, is low yield just a general problem of a specific production process, with the actual chip design having little to nothing to do with it? Or does yield simply scale negatively with a higher transistor budget? I'm pretty sure it does to at least some degree - that much I can conclude from what I've learned about chip production from this board - but till now I always thought chip design itself could also play a major role, making guesstimating yields hard?
 
That is what I meant when I said long term.

There is no need to assume that one setback means a company will always be running into problems. The main reason for this delay is the known issues with the process technology, and NVIDIA are historically known to rely on new processes, e.g. TNT, GF2 and GF3/4.

Long term doesn't mean what companies are doing in 6 months' time to me ;)
 
Yes, no, yes, no, maybe yes.

Well, to answer your question a little more precisely: yield can be estimated beforehand, especially if you're using standard-cell logic, based on average yield, die size, process, etc.

Wafers have X number of particulates per square inch, for example. As process size goes down, the particulate size that's fatal goes down too, but on the other hand, the smaller-process fabs have higher particulate standards. Anyways, you can statistically model the number of failures based on that environmental information (which is available) and die size.
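The statistical estimate described above can be sketched with the classic Poisson defect-density yield model; the defect density and die area below are illustrative assumptions, not actual TSMC or NV30 figures:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Poisson model: probability that a die has zero fatal defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative numbers only: a large ~2 cm^2 die on an immature process
# with ~1 fatal defect/cm^2 lands in the 10-20% ballpark quoted above.
y = poisson_yield(1.0, 2.0)
print(f"estimated yield: {y:.1%}")  # estimated yield: 13.5%
```

Real fabs use more elaborate models (Murphy, negative binomial) and per-layer defect data, but the exponential sensitivity to die area is why a big chip on an immature process yields so poorly, and why the estimate can be made before any NV30 wafers exist.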

It could be process variation problems (we had 80% yield one wafer, ~0% the next on one project until the fab worked out their problems).

GENERALLY, chip design doesn't play too much into the yield unless you screwed up timing analysis, or you're depending on something that the process can't guarantee. The numbers the fabs give concerning timing modeling should represent worst-case 'corners' of the process, so your part should function at their worst case. Analog stuff is different, since that's usually all custom 'logic', and it's not really logic at all, but black magic voodoo (which my company apparently does relatively well, compared to others). One small addendum: companies aiming for high-performance chips may push the limits of the fab, and there design becomes more important. We aim for low power, so timing isn't so much an issue for us.

If TSMC is only getting 10-20% yield on all their .13u stuff, I guess that's a good starting spot. However, I haven't heard anything that poor (if average yield were that bad, my company wouldn't even consider it, and when .13u comes up in design meetings, nobody says "OMFG! Yield is so bad, we can't use that!").
 
The big thing I am wondering is: will ATI experience the same three-month setback when they go to a die shrink?

Will NVIDIA streak ahead for 2-3 months whilst ATI struggles with 0.13-micron fab issues?

Did NVIDIA just swallow their bitter medicine early, and ATI's is yet to come? Or will ATI somehow learn from NVIDIA's struggles, so that just by delaying their die shrink 3-6 months behind NVIDIA's they'll have far fewer hassles when they go 0.13 micron?
 
g__day said:
The big thing I am wondering is: will ATI experience the same three-month setback when they go to a die shrink?

Will NVIDIA streak ahead for 2-3 months whilst ATI struggles with 0.13-micron fab issues?

Did NVIDIA just swallow their bitter medicine early, and ATI's is yet to come? Or will ATI somehow learn from NVIDIA's struggles, so that just by delaying their die shrink 3-6 months behind NVIDIA's they'll have far fewer hassles when they go 0.13 micron?

You conclude that TSMC will never be able to improve the yield?
 
Chances are that by the time ATI or any other company is ready for .13u, TSMC will have worked some of the bugs out of their system. We really don't know why the yields are so low. Is it all process related? Is it a combination of the complexity of the part and the process? There are things you can do to your design that can help increase the yields, so those types of lessons can be learned now and used later, as well as TSMC learning how to make complex parts on a new process. The more time that passes, the more bugs get worked out. However, that extra time can be a bad thing if you wait too long...
 
I draw no conclusions - I ask questions :)

I assume both NVIDIA and TSMC will make progress. I have no knowledge of how the intellectual property from this experience gained is vested between NVIDIA and TSMC.

I don't know if TSMC can offer the fruits of NVIDIA's first-on-the-beach pain to ATI with no restrictions.

I assume they can pass on a lot of it, but some of the really hard, valuable stuff they can't for a while?

I have no knowledge of this world - can someone set us all straight? If both NVIDIA and TSMC learn heaps about optimally designing for 0.13 micron, must/will/can TSMC pass all this insight straight to ATI?
 
g__day said:
I have no knowledge of this world - can someone set us all straight? If both NVIDIA and TSMC learn heaps about optimally designing for 0.13 micron, must/will/can TSMC pass all this insight straight to ATI?

Yes. TSMC is in the business to make money by fabricating parts, not solving the same problems over and over again.
 
It seems I was right!

In a recent thread about Anand's headline, I assumed he meant his article on NV30 would be on time.
:LOL: :LOL:
 
The way that Anand keeps stating that NV30, on paper, is faster than R300 makes me think it's an 8x2-pipe architecture, compared to R300's 8x1. It's gonna need massive bandwidth to keep those 16 TCUs fed, unless it has greatly enlarged caches.
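For what it's worth, the raw memory traffic of such a hypothetical 8x2 part can be roughed out; every number below (core clock, texel and framebuffer formats, zero compression or caching) is an assumption for illustration, not a known NV30 spec:

```python
# Back-of-envelope worst-case bandwidth for a hypothetical 8x2 design.
clock_mhz = 400                 # assumed NV30-class core clock
pixel_pipes, tcus_per_pipe = 8, 2
bytes_per_texel_fetch = 4       # 32-bit texel, uncompressed, no cache hits
bytes_per_pixel_out = 8         # 32-bit colour write + 32-bit Z traffic, rough

texel_bw = clock_mhz * 1e6 * pixel_pipes * tcus_per_pipe * bytes_per_texel_fetch
pixel_bw = clock_mhz * 1e6 * pixel_pipes * bytes_per_pixel_out
total_gb_s = (texel_bw + pixel_bw) / 1e9
print(f"{total_gb_s:.1f} GB/s")  # 51.2 GB/s
```

That worst-case figure is far beyond what a 2002-era 128-bit DDR bus delivers; texture caches and compression cut the external traffic well below it, which is exactly why the enlarged-caches question matters.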
 