NV to leave IBM?

Colourless said:
As advanced as the rest of the NV3x was, the register limitation of the chips was just a fucking stupid design decision. Why, why, why, why!

Must have been a scheduling or transistor budget problem. All chips are going to have some limitations. The NV40 also has register limits; they're just higher, and the penalty for spilling over them is lower. It doesn't seem like anyone has tested the R300 out to 32 temp registers to see what, if any, limits there are to the FIFOs.

It happens that the vast majority of code doesn't need more than 4-6 registers live at any one time. NV3x was designed for a target of 2 FP32 registers live at any one time, so they got bitten by common usage. Any card that can support 4-8 registers with no penalty will run penalty-free on most shaders.
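To put the "registers live at any one time" point in concrete terms, here is a toy liveness count over a straight-line shader. This is a minimal Python sketch; the instruction list, register names, and operations are all made up for illustration and don't correspond to any real ISA:

```python
# Toy liveness analysis for a straight-line shader: a temp is live
# from the instruction that defines it through its last use. The
# peak number of simultaneously live temps is roughly what the
# register file must hold without spilling. Everything here
# (instructions, register names) is hypothetical.

def max_live_temps(program):
    """program: list of (dest, sources) tuples in execution order."""
    first_def, last_use = {}, {}
    for i, (dest, srcs) in enumerate(program):
        first_def.setdefault(dest, i)     # where each temp is born
        for r in srcs:
            last_use[r] = i               # final read seen so far

    peak = 0
    for i in range(len(program)):
        live = sum(1 for r in first_def
                   if first_def[r] <= i <= last_use.get(r, first_def[r]))
        peak = max(peak, live)
    return peak

# A hypothetical 6-instruction fragment: diffuse texture * lighting.
shader = [
    ("r0",  ["tex0", "uv0"]),      # sample diffuse map
    ("r1",  ["normal", "ldir"]),   # N dot L
    ("r2",  ["r1"]),               # clamp to [0,1]
    ("r3",  ["r0", "r2"]),         # modulate
    ("r4",  ["r3", "ambient"]),    # add ambient term
    ("out", ["r4"]),               # write final colour
]
print(max_live_temps(shader))      # -> 3 live temps at the peak
```

A fragment like this peaks at three live temps, which is why hardware that handles 4-8 temps penalty-free covers most real shaders, while a 2-register FP32 sweet spot does not.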
 
1 - That's already been discussed.
2 - The issues are at high temperatures, not 3D graphics temperatures.
3 - You ought to be thanking ATI for bringing this to market and sorting it out for graphics, as your favourite will have to use it at some point as well. Even if they don't go 130nm low-k, all foundries are using it at 90nm.
 
DaveBaumann said:
1 - That's already been discussed.
2 - The issues are at high temperatures, not 3D graphics temperatures.
3 - You ought to be thanking ATI for bringing this to market and sorting it out for graphics, as your favourite will have to use it at some point as well. Even if they don't go 130nm low-k, all foundries are using it at 90nm.

You ought to be thanking ATI for bringing this to market and sorting it out for graphics, as your favourite will have to use it at some point as well.
Some of the more rabid ATi zealots may want to keep that particular train of thought in their "minds" the next time they launch into nVidia for "useless features" such as 32-bit colour on the TNT-1, T&L on the GF-256, SM3.0 on the NV40, etc., etc...

... your favourite ...
Ah yes, it's okay to be biased here, so long as your bias is toward ATi and not nVidia...
 
radar1200gs said:
WaltC,
only people like you make me want to hit my head against a brick wall. Tell the owners of 5800s and 5800 Ultras that it was never released. Take your garbage elsewhere.

Don't bother reading his posts; they make anyone want to bang their head against the wall :)

edit:

BTW, I am impressed, Walt: your later posts were succinct and to the point, very refreshing after your usual long, verbose ones.
 
radar1200gs said:
Some of the more rabid ATi zealots may want to keep that particular train of thought in their "minds" the next time they launch into nVidia for "useless features" such as 32-bit colour on the TNT-1, T&L on the GF-256, SM3.0 on the NV40, etc., etc...

... your favourite ...
Ah yes, it's okay to be biased here, so long as your bias is toward ATi and not nVidia...


Does that mean you will be thanking ATI for bringing fast SM2.0 to market with a first-generation implementation, and giving NV40 a market to build on with SM3.0?

There's a difference between a new feature that is usable and gives advantages (such as low-k), and features such as 32-bit colour that were so slow on the initial TNTs that Nvidia told people to drop back to 16-bit when they complained of unplayable performance.
 
radar1200gs said:
Some of the more rabid ATi zealots may want to keep that particular train of thought in their "minds" the next time they launch into nVidia for "useless features" such as 32-bit colour on the TNT-1, T&L on the GF-256, SM3.0 on the NV40, etc., etc...

Why?

If Low-K is in part responsible for allowing ATI to clock their chips at 500MHz without needing massive coolers, vs. nVidia at 400MHz with a massive cooler... how is Low-K not "immediately beneficial" to the end user?

Ah yes, it's okay to be biased here, so long as your bias is toward ATi and not nVidia...

No, it's OK to be biased as long as

1) Your bias doesn't interfere with your ability to make reasonable posts
2) Your bias doesn't interfere with your ability to make reasonable posts

and

3) Your bias doesn't interfere with your ability to make reasonable posts
 
Bouncing Zabaglione Bros. said:
radar1200gs said:
Some of the more rabid ATi zealots may want to keep that particular train of thought in their "minds" the next time they launch into nVidia for "useless features" such as 32-bit colour on the TNT-1, T&L on the GF-256, SM3.0 on the NV40, etc., etc...

... your favourite ...
Ah yes, it's okay to be biased here, so long as your bias is toward ATi and not nVidia...


Does that mean you will be thanking ATI for bringing fast SM2.0 to market with a first-generation implementation, and giving NV40 a market to build on with SM3.0?

There's a difference between a new feature that is usable and gives advantages (such as low-k), and features such as 32-bit colour that were so slow on the initial TNTs that Nvidia told people to drop back to 16-bit when they complained of unplayable performance.
I owned a TNT-1 very early on & I definitely do not recall 32-bit being unplayable. Slower than 16-bit, certainly, but not unplayable.

I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.
 
radar1200gs said:
I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.

Congratulations. You just failed all three requirements of the "It's OK to be biased as long as..." test! :rolleyes:
 
I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.

That's a pretty stupid statement, as ATI will release cards using it in a week. It's one thing to be biased, another to deny facts.
 
Joe DeFuria said:
radar1200gs said:
I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.

Congratulations. You just failed all three requirements of the "It's OK to be biased as long as..." test! :rolleyes:


LOL :oops: :LOL:
 
PatrickL said:
I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.

That's a pretty stupid statement, as ATI will release cards using it in a week. It's one thing to be biased, another to deny facts.

Actually, I'd say radar's statement is pretty stupid... considering that ATI has been shipping chips using Low-K for the past 6+ months (RV360).
 
PatrickL said:
That's a pretty stupid statement, as ATI will release cards using it in a week. It's one thing to be biased, another to deny facts.

Well, it's even worse than that, seeing as chips using it have been shipping since September.
 
PatrickL said:
Hehe, I was thinking his statement implied high-end chips :)
It does.

RV360 is almost irrelevant here; it has too few transistors and runs too cool to be affected much by Low-K's problems. A 16-pipe R420 will likely prove to be a different kettle of fish altogether.

Even if yields do come right up to where they should be, the problem of via migration has to be eliminated, or there is always the chance your chip simply won't work one day, no matter how many redundant vias you factor in (the problem gets worse for overclockers, because overclocking heats the chip, which aggravates the issue). Some companies may be happy selling chips that have the potential to die prematurely; others are not.
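For what it's worth, the temperature sensitivity being described is real in general terms: classic interconnect lifetime models such as Black's equation have an exponential dependence on temperature. A rough Python sketch of the relative effect, using textbook-range constants rather than anything specific to TSMC's Black Diamond process; the activation energy and the temperatures below are illustrative assumptions only:

```python
from math import exp

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def mttf_ratio(t_cool_c, t_hot_c, ea_ev=0.9):
    """Relative interconnect lifetime from Black's equation,
    MTTF ~ J**-n * exp(Ea / kT), holding current density J fixed.
    Ea around 0.8-0.9 eV is a textbook range for copper; none of
    these numbers describe any real chip or process."""
    t_cool = t_cool_c + 273.15   # convert Celsius to Kelvin
    t_hot = t_hot_c + 273.15
    return exp((ea_ev / K_BOLTZMANN_EV) * (1.0 / t_hot - 1.0 / t_cool))

# A hypothetical overclock pushing die temperature from 60C to 80C:
print(mttf_ratio(60, 80))  # ~0.17, i.e. lifetime falls to about a sixth
```

So the direction of the argument holds (hotter chips age faster); whether TSMC's low-k stack actually suffers via migration at R420-class power levels is the disputed point.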
 
radar1200gs said:
Sounds like sour grapes on the part of the Taiwanese foundries to me.

As already pointed out, TSMC is overbooked, so who will do the fabbing?

IIRC there were rumors spreading just after NV38 launched that nVidia and IBM were going to split, because IBM had better customers or some such tosh. Obviously that never happened either.

No, not at all; the better customer was AMD.
 
radar1200gs said:
RV360 is almost irrelevant here; it has too few transistors and runs too cool to be affected much by Low-K's problems.

So then... where is nVidia's "smaller" chip utilizing Low-K? I guess by your own definition it's not truly usable for such chips, because nVidia isn't using it yet... :rolleyes:

A 16-pipe R420 will likely prove to be a different kettle of fish altogether.

Yeah... it'll be four times as fast as the RV360, and still not require two molex connectors, etc. Certainly different from the NV40...

Some companies may be happy selling chips that have the potential to die prematurely; others are not.

Interesting comment, given that NV40s have gone out to reviewers with memory clocked out of spec...
 
radar1200gs said:
I think you are quite brave in calling Low-K, as it pertains to TSMC & Black Diamond, usable. When it is truly usable, nVidia will be there to take advantage of it. Not before.

You mean like how ATI have been using low-k for the 9600 for the last six months?

Was .13 truly usable when Nvidia decided to go to .13 with the disastrous NV30, when their competitors were building better cards on .15? Was SM2.0 truly usable when Nvidia put it in NV3x, even though its competitors were twice as fast? Was the NV30 dustbuster truly usable? Or Cg, with its lower performance? Or the "embrace and extend" shader compilers that default to allowing out-of-spec behaviour? Or the NV40 needing two molexes and 480-watt PSUs?

If you're relying on Nvidia to be the universal arbiter of when a technology is ready for prime time, you must be making as many mistakes as they do.
 
Joe DeFuria said:
Yeah... it'll be four times as fast as the RV360, and still not require two molex connectors, etc. Certainly different from the NV40...

Comparing the R420 and the NV4x isn't exactly fair, since it's 160-180 million transistors (from what we know, at least) vs. 222 million.
 