NV30 volume possibly earlier...

CMKRNL

It looks like nVidia has taken some significant risk in order to ensure that they can ramp up volume very quickly at TSMC. I don't know whether the second spin has taped out already or not, but it's possible that they may be able to pull in volume ramp by 4 weeks. Of course this could end up being a very costly mistake for them too -- we'll have to wait and see.
 
Well, I think there's usually a big difference between what any firm's PR arm tells the media and what the actual situation is. The EETimes link is no different from what they've been saying recently.

What I'm saying is that they have a ridiculously large number of wafers on hold at TSMC. I don't know the scope of the changes in the second spin or what impact they have on those wafers. They've assumed a huge risk by going this route, although it would pay off in a big way if everything goes according to plan.
 
If their changes only require metallization changes, they could have done this and shaved 2-3 weeks off their production time.

But, as you said, it would be a bit of a risk--though maybe TSMC would share the burden, because whatever difficulties were experienced may have been their fault. Or maybe not.
 
Well, I, along with every other true hardware geek on the planet, sure wish them good luck in their bold gamble!

I love companies that are daring and willing to take risks. Nvidia has been that in the past, though recently they've been a bit boring and complacent. Hearing the NV30 may be clocked as high as 500MHz (or maybe more in an "Ultra" version?), and now this... It makes me excited about them all over again! I hope this doesn't backfire on them.


I *ALSO* hope ATI has something equally exciting (400+MHz R300, or maybe even R350) ready and waiting to counter the NV30 threat!

Oh, if only I had $500 to spend! Never mind my financial woes, though; it's sure fun to be a hardware geek right now. AMD's Hammer on the horizon, the P4 at last starting to shine, new video cards, Serial ATA, etc. etc. drooooool... :D


*G*
 
Grall said:
I *ALSO* hope ATI has something equally exciting (400+MHz R300, or maybe even R350) ready and waiting to counter the NV30 threat!

*G*

Umm, if ATI's intended response to the NV30 is not up to the challenge, then is it even feasible for ATI to introduce such a product? ATI may not be able to outperform the NV30 using the R300 core and would have to rely on the R400 sometime in the future to regain the speed crown.
 
Well, if the information posted today at news.com is right--that the NV30 will ship at a 500MHz core--then that could explain the *big risk*.

In my mind I am not all that impressed with a VPU that has to be clocked 200MHz over its competitor just to net a 30% increase in speed. However, I'm pretty sure the rest of you will disagree with me.

And BTW, the R350 is a real chip, and it is due one hell of a lot sooner than you think. In other words, the NV30 had *better* get out by February.
 
Hellbinder[CE] said:
However, I'm pretty sure the rest of you will disagree with me.

As 'no-one' knows anything for sure, no-one can disagree with you, but please stop attacking the whole forum membership with statements like that.
 
In my mind I am not all that impressed with a VPU that has to be clocked 200MHz over its competitor just to net a 30% increase in speed.

Well, I would be impressed simply by someone having engineered a part to run successfully at 500MHz. (About as impressive as getting the R300 to run at 325MHz on a 0.15 micron process.) Furthermore, I would be impressed if it delivers a 30% "across the board" performance advantage (in almost all high-resolution / AA / aniso settings) with considerably less bandwidth.

Anyone have any real idea when the NDA is up? I'm assuming around 9:00 AM Pacific time (12 noon Eastern). What time is nVidia's launch scheduled for over in Vegas?
 
Hellbinder[CE] said:
In my mind I am not all that impressed with a VPU that has to be clocked 200MHz over its competitor just to net a 30% increase in speed. However, I'm pretty sure the rest of you will disagree with me.

Sure, I'll take the bait...

Yes, I disagree, because it is a silly statement. It's not that the NV30 *has to be* clocked 200MHz over its competitor; it's that the NV30 *was designed to be* clocked that high to meet its performance targets. Not to mention that it *can be* clocked that high, by design, while its competitor cannot.

Here's a twist on your statement to demonstrate what I mean: "In my mind I am not impressed with a VPU that has to have a data path twice as wide as its competitor's just to perform 30% slower."

The high clockspeed was a design decision for NV30, just like a 256 bit bus was a design decision for R300. The jury is still out on which was the better decision.
 
I'm wondering what lengths nVidia had to go to in order to get the part to clock at 500MHz. Certainly they are making good use of their smaller process. Are they pushing it to the extent that, like ATI, they will need a separate power hookup?

I'll be much more impressed if both the $399 and $499 parts clock at 500 MHz (or if the $499 part clocks above 500 MHz) than if the $399 board is 425 MHz or something and the $499, 500 MHz board is a "benchmark-leader" that is never available in quantity.
 
Yes, I disagree, because it is a silly statement. It's not that the NV30 *has to be* clocked 200MHz over its competitor; it's that the NV30 *was designed to be* clocked that high to meet its performance targets. Not to mention that it *can be* clocked that high, by design, while its competitor cannot.

Yes, Nvidia is using the benefits of the .13 process to achieve higher clockspeeds.

That avenue is certainly open to ATi, who managed to achieve very nice results with .15.

Certainly, if the NV30 is only 15-30% faster by using DDR-II and .13, then ATI is in the better situation, as all of this tech is still unexploited by them.
 
Well... we already know what the ATI part (R300) is capable of when clocked to extreme levels.

Of course, I can't seem to find that link :)
 
Yes, I disagree, because it is a silly statement. It's not that the NV30 *has to be* clocked 200MHz over its competitor; it's that the NV30 *was designed to be* clocked that high to meet its performance targets. Not to mention that it *can be* clocked that high, by design, while its competitor cannot.

This is just plain wrong. Aside from bandwidth, it has very similar capabilities to the 9700; therefore it should perform basically the same, especially with their supposed 48GB effective bandwidth. Yet it's pretty dang clear that if the two were clocked the same, the NV30 gets its A$$ handed to it.

No, the NV30 was not *designed* to be clocked at 500MHz right out of the freaking gate. This should be totally obvious to everyone; they are *lucky* that they are getting it to run that high. Look at the heatsink/fan: it's totally gigantic. It was stated a while back that 500MHz was going to be approximately the upper limit for a clock rate even on a .13u design with a transistor count that high.

Anyone who sits back and *honestly* looks at this situation can see it for what it is. Nvidia tweaked and delayed their a$$ off to get it to run this fast (after a final, yet uncompleted, respin), or the NV30 would be a complete performance embarrassment.

Furthermore, if ATI releases a 400MHz DDR-II 9700, the NV30 is already on shaky ground as the undisputed performance leader. The coming Tyan board may by itself threaten the undisputed superiority of the NV30.

Had they not delayed so long and instead delivered the part on time in the fall, with the RAM available then, the NV30 would simply be outclassed performance-wise. You can say *no one knows yet* all you want; Nvidia are the ones claiming 25-50%, and they are the ones that released numbers at the launch breakfast and to Anand that clearly show the gap is a lot narrower than the marketing department is claiming.

Is .13u *possibly* saving their butt? Undoubtedly, if they actually deliver a chip working at 500MHz. Will they be the shortest-lived performance leader in history? Very, very likely.
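As an aside, the "48GB effective bandwidth" figure mentioned above is marketing arithmetic: 16GB/s of raw bandwidth times a claimed average compression factor. A quick sketch of how sensitive that number is to the assumptions behind it (the 4:1 compression ratio and the traffic-mix fractions here are my own illustrative assumptions, not figures from this thread):

```python
def effective_bandwidth(raw_gb_s: float, compressible_fraction: float, ratio: float) -> float:
    """Amdahl-style estimate: only a fraction f of memory traffic compresses
    at ratio r; the rest moves uncompressed at the raw rate."""
    f, r = compressible_fraction, ratio
    return raw_gb_s / ((1 - f) + f / r)

# If ALL traffic compressed at 4:1, 16GB/s raw would look like 64GB/s:
print(effective_bandwidth(16, 1.0, 4))   # 64.0
# Reaching the marketed 48GB/s requires ~89% of traffic to compress at 4:1:
print(effective_bandwidth(16, 8/9, 4))   # 48.0
# If only half the traffic compresses, the claim shrinks fast:
print(effective_bandwidth(16, 0.5, 4))   # 25.6
```

In other words, the headline number only holds up if nearly all memory traffic is compressible framebuffer data, which is far from guaranteed in real workloads.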
 
There's only one small problem with your case... it's based only on supposition, not fact.

According to those who have known about the NV30, there were no changes to the specs, as has been purported on various forums.

In other words...there _really_ was a manufacturing issue in designing this thing on .13u...nothing more, nothing less.

Personally...I strongly believe that there is functionality missing from NV30 that was supposed to be there all along...

Maybe it was a new antialiasing scheme... who knows. But because of the extreme difficulty in getting the thing out the door, they decided to scratch it for now and just work on getting it out, and then allow the follow-up team to address those issues with the NV35...

Again...this is completely guesswork here, but I was quite surprised that there was no mention of a completely new AA method...using 3dfx tech...or a combination.

Anyhow, I really do believe that they had one hell of a time getting the chip designed in a way that allowed them to not only meet their design specs, but also allow them to manufacture the thing with halfway decent yields.
 
Personally...I strongly believe that there is functionality missing from NV30 that was supposed to be there all along...

As do I. There is almost certainly something missing from the GeForce FX that will be in the GeForce FX2--like the TNT-to-TNT2, the GeForce256-to-GF2, and the GF3-to-GF4 Ti. There is always something taken out of the first iteration of an nVidia chip that is put back into the refresh. I believe it is no different with the GeForce FX to GeForce FX2.

Maybe it was a new antialiasing scheme... who knows. But because of the extreme difficulty in getting the thing out the door, they decided to scratch it for now and just work on getting it out, and then allow the follow-up team to address those issues with the NV35...

That's what I think as well. I believe the NV35 will have the new antialiasing (like the NV25 had over the NV20), as well as the 256-bit bus that the NV30 should have had, IMVHO. Perhaps a second TMU per pipe as well.

Again...this is completely guesswork here, but I was quite surprised that there was no mention of a completely new AA method...using 3dfx tech...or a combination

Agreed. I was surprised and disappointed by this, but it leaves hope for the NV35/GeForce FX2. :)
 
Joe DeFuria said:
In my mind I am not all that impressed with a VPU that has to be clocked 200MHz over its competitor just to net a 30% increase in speed.

Well, I would be impressed simply by someone having engineered a part to run successfully at 500MHz. (About as impressive as getting the R300 to run at 325MHz on a 0.15 micron process.) Furthermore, I would be impressed if it delivers a 30% "across the board" performance advantage (in almost all high-resolution / AA / aniso settings) with considerably less bandwidth.

Anyone have any real idea when the NDA is up? I'm assuming around 9:00 AM Pacific time (12 noon Eastern). What time is nVidia's launch scheduled for over in Vegas?

I'm much more impressed with what ATI did at .15 microns than I am with what nVidia did at .13. I expected the nVidia chip to clock out somewhere between 450MHz and 550MHz, but I'm not so sure nVidia could have done a .15 micron R300, let alone a .15 micron NV30. And the 256-bit bus is a wonderful step forward--but that's me--I favor "wider, slower" buses over "narrower, faster" buses any day of the week.

Interestingly enough, most currently shipping R300s have the potential to get darn near 400MHz at .15 microns--with proper cooling. Basically, ATI has paid scant attention to cooling the GPU, being concerned instead with good yields and an economically viable cooling model that would supply performance commensurate with bandwidth. Seriously, I think merely a serious tweak to the cooling in ATI's reference design would be a big step forward for them, even on the current .15 micron R300.

Contrast this to what nVidia's doing with cooling, at least at this point--knocking its socks off to cool that bugger to the nth degree possible while still retaining a degree of cost-effectiveness. To me, this says that ATI has yet to begin stepping on the gas but that nVidia is stomping the pedal hard with nv30 coming right out of the gate, and already at .13 microns, too--which ATI still has ahead of it.

Also, no doubt 16GB/sec is less than ~19GB/sec, but I certainly wouldn't consider it "considerably less." Half the bandwidth would fit the "considerably less" concept... ;)
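For reference, those 16 vs. ~19 figures fall straight out of bus width times effective memory data rate. A quick sketch, using the publicly rumored board specs of the era (128-bit DDR-II at 500MHz vs. 256-bit DDR at 310MHz) as assumptions:

```python
# Peak theoretical memory bandwidth = (bus width in bytes) x (effective transfer rate).
def peak_bandwidth_gb_s(bus_bits: int, mem_clock_mhz: float, data_rate: int = 2) -> float:
    """DDR and DDR-II transfer data twice per clock, hence data_rate=2."""
    bytes_per_transfer = bus_bits // 8
    transfers_per_sec = mem_clock_mhz * 1e6 * data_rate
    return bytes_per_transfer * transfers_per_sec / 1e9

# Assumed (rumored) board configurations, not confirmed in this thread:
nv30 = peak_bandwidth_gb_s(bus_bits=128, mem_clock_mhz=500)  # 128-bit DDR-II @ 500MHz
r300 = peak_bandwidth_gb_s(bus_bits=256, mem_clock_mhz=310)  # 256-bit DDR @ 310MHz

print(f"NV30: {nv30:.1f} GB/s")   # 16.0 GB/s
print(f"R300: {r300:.1f} GB/s")   # 19.8 GB/s
```

So the narrow-and-fast design gives up roughly 20% raw bandwidth to the wide-and-slow one, despite the much higher memory clock.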

Anyway--my own feeling here--completely unsubstantiated--is that there are an awful lot of question marks still up in the air about nv30--inside of nVidia. I haven't been impressed at all with the generality and the scarceness of specific information--but maybe that's par for the course at this stage. But a "Radeon Killer" the nv30 definitely is not, at least based on the info I've seen to date, it seems to me.
 
Boy oh boy. Isn't it interesting how armchair analysts, who have no information about the actual design process that went into the chip, are already speculating about maximum clock limits on a chip that hasn't even shipped yet, or about supposed last-minute clock target alterations by Nvidia to match ATI.

You also have no clue as to the ultimate performance of this chip in the areas it was designed to really address: shading, vis-a-vis ATI's implementation. So it may be 30% better in Quake3 or 3DMark, but what about running Humus's phong shader?


HellBinder, do you even OWN a R9700 PRO?

Both ATI and nVidia are to be given kudos for their designs. They chose different designs for different requirements, and it's not necessarily the case that one card is absolutely better than the other. The reality is, they each made different tradeoffs with their transistor budget, and that will be reflected in different performance profiles in different cases.

There is no clear absolute winner at this point, and speculation in the total absence of facts is mysticism and numerology.

Please quit it.
 
but what about running Humus's phong shader?

Do you think a 500MHz core is enough? Maybe the Ultra is a higher-clocked version; who knows, they might get it up to 750MHz with that ugly gigantic thingy. :)
 