NV to leave IBM?

Bjorn said:
Comparing the R420 and the NV4X isn't exactly fair since it's 160-180 (from what we know at least) million transistors vs 222.

Agreed... which is why I said (earlier) "If low-k is partly responsible for allowing ATI to clock their chips at 500MHz without needing massive coolers, vs. nVidia at 400MHz with a massive cooler..."

;)
 
DaveBaumann said:
Greg - do your graphics chips regularly run over 150 degrees?
Pentium 4s don't run over 150 degrees either, but that didn't stop early 0.13 micron models from suffering via failures, necessitating a fix.

Given Intel's issues with via migration, and their resolution of the problem, I would have thought TSMC would also have had the problem overcome by now.
 
Here we are back to comparing transistor counts, which:

1) Aren't proven to be correct
2) Don't mean anything anyway (the 9700 had fewer transistors)
3) ATI doesn't count transistors the same as NV
 
People were surprised at the R300/R350 being on 0.15 micron and still able to hit 500MHz with exotic cooling... Radeon 9600 XTs are hitting over 600MHz with exotic cooling. The R420 should be no surprise.
 
Joe DeFuria said:
Bjorn said:
Comparing the R420 and the NV4X isn't exactly fair since it's 160-180 (from what we know at least) million transistors vs 222.

Agreed... which is why I said (earlier) "If low-k is partly responsible for allowing ATI to clock their chips at 500MHz without needing massive coolers, vs. nVidia at 400MHz with a massive cooler..."

;)

Missed that one :)

3 - You ought to be thanking ATI for bringing this to market and sorting it out for graphics, as your favourite will have to use this at some time as well. Even if they don't go 130nm low-k, all foundries are using it at 90nm.

Agreed. The same of course goes for the NV4X and all the complaints that it'll be too expensive to produce and that NVidia won't make any money from it.
 
Joe DeFuria said:
radar1200gs said:
Only TSMC really knows how/why, and they sure as hell aren't telling.

Wrong...TSMC and nVidia know why, and neither of them are telling.

Of course, ATI just shrugs their shoulders and says "we don't know why nvidia had problems either..."

:mrgreen: :mrgreen: :LOL: :LOL: :oops: :oops:
 
karlotta said:
I thought ATI used UMC also? The 0.15 micron R300 design was mostly ** hand-built, and ATI likes to maximize the power of the minimum spec with the highest IQ (since the R300). Does what I just said make sense only to me?


No, it makes sense, and it is true. They got some nice things in their case against Intel. One of them was a guy who helped do it. Cross-licensing is sure a funny name for having an engineer help hand-lay pipes.
 
I have a little info on what low-k Black Diamond is. It's silicon carbide that is deposited like a film. The nice thing is it's not fragile like SOS or nitrogen glass.


EDIT
As for the yield problems with NVDA, the layout of the IC is about 80% of the problem. I have seen the same design laid out differently: one has great yield and the other has many problems.

What we are seeing is that the ATI IC layout team is much better than the NVDA layout team. I have seen the same design with different layouts: one runs hot and the other runs cool.

What I'm trying to say is that the IC layout is very important to yield, power usage, and heat output.
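To put rough numbers on how strongly die conditions drive yield, here is a toy sketch using the classic Poisson yield model, Y = exp(-A × D0); a layout that exposes more critical area effectively raises the defect density D0 it sees. All figures are made up for illustration, not real TSMC or IBM data.

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Classic Poisson yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Purely illustrative numbers, not real foundry figures.
die_area_cm2 = 2.0                    # hypothetical large GPU die
for d0 in (0.2, 0.5, 1.0):            # hypothetical defects per cm^2
    y = poisson_yield(die_area_cm2, d0)
    print(f"D0 = {d0:.1f}/cm^2 -> yield {y:.1%}")
```

Even this crude model shows yield falling off exponentially, which is why comparatively small layout differences can separate a "great yield" part from a problem part.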
 
radar1200gs said:
Pentium 4s don't run over 150 degrees either, but that didn't stop early 0.13 micron models from suffering via failures, necessitating a fix.

Given Intel's issues with via migration, and their resolution of the problem, I would have thought TSMC would also have had the problem overcome by now.

What does this have to do with anything? I very much doubt any process problems Intel had were identical to those that TSMC are having. In fact, you proved that point yourself by mentioning that Intel's problems didn't just occur at high temperatures, whereas this report states that TSMC's issues occur only at high temps.
 
radar1200gs said:
PatrickL said:
Hehe, I was thinking his statement implied high-end chips :)
It does.

RV360 is almost irrelevant here; it has too few transistors and runs too cool to be affected much by low-k's problems. A 16-pipe R420 will likely prove to be a different kettle of fish altogether.

Even if yields do come right up to where they should be, the problem of via migration has to be eliminated, or there is always the chance your chip simply won't work one day, no matter how many redundant vias you factor in (the problem gets worse for overclockers, because overclocking heats the chip, which aggravates the issue). Some companies may be happy selling chips that have the potential to die prematurely to their customers; others are not.


Oh please, are you referring to nVidia when you say some are not? I know you are, but you failed the bias test again. Why don't you tell that to those owning current high-end NV3x cards that simply die due to being OC'd from the factory? Hate to mention Kyle again after the other day, but he has quite a few NV59xx-series cards that failed prematurely. He had not OC'd them or beaten them up in any way.
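On the redundant-via point in the quote above, a quick back-of-the-envelope probability sketch (every number below is hypothetical; real via failure rates aren't public) shows both why redundancy helps enormously and why it never quite eliminates the risk:

```python
# All numbers are assumptions for illustration only.
p_single = 1e-6          # assumed failure probability of a single via
n_vias = 50_000_000      # assumed via count on a large die

# A doubled via only fails if both copies fail -- assuming independent
# failures, which shared thermal stress (via migration) can violate.
p_chip_single = 1 - (1 - p_single) ** n_vias
p_chip_double = 1 - (1 - p_single ** 2) ** n_vias

print(f"single vias:  chip failure chance ~ {p_chip_single:.4f}")
print(f"doubled vias: chip failure chance ~ {p_chip_double:.2e}")
```

With these made-up rates, a chip with single vias would almost certainly die, while doubling drops the chance to roughly 5 in 100,000 -- much better, but still not zero, and worse if the failures are correlated by heat.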
 
I hope you three typed those replies about the RV360 up before reading the others. If not, it is kind of silly to say the exact same thing again and again. Anyway, I will wait and see what is going on with the power connectors and so forth until ATI's card is actually launched...
 
Doomtrooper said:
3) ATI doesn't count transistors the same as NV

Has that actually been proven true, and if so, why did ATI start counting transistors differently all of a sudden?
 
Geeforcer said:
Doomtrooper said:
3) ATI doesn't count transistors the same as NV

Has that actually been proven true, and if so, why did ATI start counting transistors differently all of a sudden?

1) I don't think ATI has changed their methods at all

2) It has been the point of many heated discussions on how the two companies count transistors. I believe NVIDIA counts every working transistor regardless of purpose, while ATI excludes some transistors that NVIDIA includes.

As to why, someone more skilled than me probably knows the answer to that.
 
Stryyder said:
2) It has been the point of many heated discussions on how the two companies count transistors. I believe NVIDIA counts every working transistor regardless of purpose, while ATI excludes some transistors that NVIDIA includes.

ATI's method sounds rather strange to me. What would they be excluding in that case? And why? (If they do count them differently; I don't actually believe that, though.)
 
I put this to Paul Asycough (did I get it right, S3 guys? ;)) at the Tech Day and asked if ATI counts everything, as I had heard they don't count cache - Paul said they may count some cache, but not necessarily all.

I think ATI look at the number of transistors they implement beforehand, from a logic point of view, whereas NVIDIA counts everything once the chip is back. I think ATI doesn't really see these types of metrics as that meaningful - to them the bottom line is what counts, and that's die size.
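A hedged bit of arithmetic on why the two counting conventions can diverge: caches and buffers are transistor-dense (a standard SRAM cell is six transistors per bit), so excluding them shaves tens of millions off a total. The figures below are hypothetical, purely to show the mechanism, not actual ATI or NVIDIA numbers.

```python
# Hypothetical figures -- illustration only, not real chip data.
TRANSISTORS_PER_SRAM_CELL = 6        # standard 6T SRAM cell, per bit

logic_transistors = 160_000_000      # assumed logic-only count
cache_bytes = 512 * 1024             # assumed on-chip cache/buffers

cache_transistors = cache_bytes * 8 * TRANSISTORS_PER_SRAM_CELL
total = logic_transistors + cache_transistors

print(f"logic only:       {logic_transistors:>12,}")
print(f"cache (6T cells): {cache_transistors:>12,}")
print(f"everything:       {total:>12,}")
```

So two honest engineers can report 160M and 185M for the same die, depending purely on what they choose to count.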
 
ATI only(?) counts logic transistors. Tonny T states as much, and Dave Baumann pointed out a while back, after putting the question to some engineer folks, that Nvidia counts EVERYTHING. As for Black Diamond not being ready for primetime, that seems to come from the myth that Nvidia is a revolutionary GFX company. They are not. They only use tried and tested tech, and only once vendors have the parts up to the new speeds. When they do try to "leap" they get burned.
But steady-eddie refreshes have been good for the game industry. This is all a bit vague, and I'm sure some people will say "wrong".

And now some fan_oy stuff: Radar, you should really be scared, what with an R423 extra running at 625 on stock cooling with 833 DDR3 being shown around this July running HL2 and D3. LOL fan_oys... lol
 
DaveBaumann said:
I put this to Paul Asycough (did I get it right, S3 guys? ;)) at the Tech Day and asked if ATI counts everything, as I had heard they don't count cache - Paul said they may count some cache, but not necessarily all.

I think ATI look at the number of transistors they implement beforehand, from a logic point of view, whereas NVIDIA counts everything once the chip is back. I think ATI doesn't really see these types of metrics as that meaningful - to them the bottom line is what counts, and that's die size.

Yeah, that may be the case. But when you send off the info for what is to be fabricated, TSMC's count should be the one with all transistors included.
Does TSMC care how you internally count it? Shouldn't the data for what is to be fabricated be more accurate than a PR figure or a design-perspective view?
I think if they gave out the number actually fabricated, it should be the same for both of them. Well, I know we are just being fed PR data. Come on, is there anyone with friends or relatives at TSMC who can shed some light on the internal transistor counts of both?
 
{Sniping}Waste said:
As for the yield problems with NVDA, the layout of the IC is about 80% of the problem. I have seen the same design laid out differently: one has great yield and the other has many problems.

What we are seeing is that the ATI IC layout team is much better than the NVDA layout team. I have seen the same design with different layouts: one runs hot and the other runs cool.

What I'm trying to say is that the IC layout is very important to yield, power usage, and heat output.

Sometimes, this is a 'chicken and egg' dilemma. If the foundry's manufacturing process were 'perfect', then there would be no need for double-vias, metal-slotting, antenna-removal, and all the other 'yield-enhancement' bag of tricks in automated-layout tools. But of course, the real-world isn't so kind...and we have seen the reports of 'surprised customers' when their tapeout analysis met sign-off requirements, but the first silicon failed miserably in the lab.

For foundries like TSMC/UMC, the customer is responsible for the layout of his IC. But to have a good chance of success, the customer relies on the foundry to provide accurate/complete characterization info for the manufacturing process. "Oh, we forgot to tell you, you have to add slotting to all interconnect wider than 40 microns, otherwise it'll crack. Oh, and those propagation delays for our 0.15u standard-cell library? The *real* delays are +15% larger than the published ones, sorry about that." This isn't meant to excuse ATI or NVidia (or any other fabless company) for bad risk management (since the foundry clearly defines 'risk' and 'production' fabs), but sometimes it really is the foundry's fault -- how can the customer produce a manufacturable IC layout if the foundry's information isn't complete?

At one time, TSMC's sign-off policy included some sort of quality guarantee -- if the submitted layout (GDSII) met their LVS/DRC design rule-deck, the customer was insured against certain silicon-failures (i.e., TSMC would re-spin a failed first-silicon part free of charge.) At 0.13u, they dropped that policy. The foundry does NOT guarantee working silicon, even if the submitted layout meets *ALL* published manufacturing-rules.

Many fabless semiconductor companies are 'returning' to the older customer/foundry hand-off, where the customer trusts the foundry to handle *ALL* back-end tasks (layout). Presumably, the foundry is in the best position to handle layout because it also handles the actual manufacturing process (but obviously, a foundry screw-up will still hurt the customer, because it delays delivery of production parts).
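To make the "incomplete rule deck" point above concrete, here is a toy sketch of the kind of automated check that a rule like "slot all interconnect wider than 40 microns" becomes; if the foundry never publishes the rule, the customer's sign-off flow never runs the check. The threshold and wire list are invented for illustration, not a real foundry deck.

```python
# Toy DRC-style slotting check; all data here is illustrative.
SLOT_WIDTH_THRESHOLD_UM = 40.0   # hypothetical rule threshold

wires = [
    {"net": "VDD_ring",  "width_um": 60.0, "slotted": False},
    {"net": "clk_spine", "width_um": 12.0, "slotted": False},
    {"net": "GND_strap", "width_um": 45.0, "slotted": True},
]

for wire in wires:
    if wire["width_um"] > SLOT_WIDTH_THRESHOLD_UM and not wire["slotted"]:
        print(f"violation: net {wire['net']} is {wire['width_um']}um "
              "wide and must be slotted")
```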
 
Hanners said:
radar1200gs said:
Pentium 4s don't run over 150 degrees either, but that didn't stop early 0.13 micron models from suffering via failures, necessitating a fix.

Given Intel's issues with via migration, and their resolution of the problem, I would have thought TSMC would also have had the problem overcome by now.

What does this have to do with anything? I very much doubt any process problems Intel had were identical to those that TSMC are having. In fact, you proved that point yourself by mentioning that Intel's problems didn't just occur at high temperatures, whereas this report states that TSMC's issues occur only at high temps.
Probably not entirely identical, but extremely close.

The problem with via failures comes about because of the use of copper as the metal. Copper deforms and changes shape under the influence of temperature changes (look up how bi-metallic strips work).

This changing of shape actually physically forces the vias out of their holes, causing the failure.

Intel solved the problem in part by changing the shape of the via hole so the metal could not easily be forced out.
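For a sense of scale on the thermal mechanism described above, here is a back-of-the-envelope linear-expansion calculation. The segment length and temperature swing are illustrative; copper's expansion coefficient really is around 16.5e-6 per kelvin.

```python
# Back-of-the-envelope expansion of a copper interconnect segment.
ALPHA_COPPER = 16.5e-6    # linear expansion coefficient of Cu, 1/K

length_um = 100.0         # hypothetical interconnect segment length
delta_t_k = 80.0          # e.g. ~25 C idle to ~105 C under load

expansion_nm = ALPHA_COPPER * length_um * delta_t_k * 1000.0
print(f"{length_um:.0f} um of copper grows ~{expansion_nm:.0f} nm "
      f"over an {delta_t_k:.0f} K swing")
```

Over a hundred nanometres of movement per 100 microns of line, cycled every time the chip heats and cools, is plenty to stress a sub-micron via.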
 
Actually, at Beijing, Nvidia CEO Jen-Hsun Huang told me that two NV4x chips currently exist:

1. NV40, made by IBM
2. NV41, fabbed by TSMC; no details on it :(

It seems what Josh has said is right, though not precise.
 