A few questions on NV30, NV35

It is less than the difference between .18 micron and .13micron.

Yep, and the difference between 0.09 and 0.30 is even bigger :)

Does that suddenly make 30% a small difference?

Yeah, and the P4 has more transistors than the Athlon; what can you conclude from that?

That it runs at a higher clock :)

And since when have I or anyone else here concluded anything?
All I see is speculation.
 
The question is, what is the TARGET power consumption limit of the NV30 design? I'll say this, if the NV30 is pushing AGP power consumption limits with 0.13, with a similar number of transistors compared to R-300, it's not a very power efficient design.


Well, NV30 will consume around 30% less power than R300, and IIRC DDR2 is more power efficient as well. I highly doubt NVIDIA will have any power issues with NV30, but yields (and hence clock speed) will play a big role.

Speaking of yields anyway... Anand said NVIDIA's getting 10-20% yield with NV30 ATM... Is that at their target clock?
 
I just want to get rid of the idea that the extra power connector is some kind of extra benefit for the R9700, like Pascal made it out to be.

But, assuming NV30 does not have a power connector, it is an extra benefit to ATI in the same sense that the 0.13 process is a benefit for nVidia.

Designing on a more advanced process allows you to target higher clock rates. And designing with higher power consumption limits also allows you to target higher clock rates.

We know R-300 was designed with very high power consumption limits. We don't know about NV30. But speculating, I would be surprised if NV30's power consumption is similar to R-300's, because they have similar transistor counts.
 
pascal said:
All I'm saying is that we can't conclude anything based on the info we have. We need more info.

You don't see me concluding anything here, do you?

There are items being listed in support of "R300 could be faster", and I'm simply pointing out that those items hold no merit. The only reasons the R300 would be faster are ones that haven't been mentioned: NVIDIA didn't target it faster when doing synthesis, or they screwed up timing analysis.

Edit: or, if at .13u they're using 1.5V as Vdd (same as .15u), and the power benefit of .13u is partially negated (as Pascal mentioned above)

Let me state very clearly: I'm not saying that NV30 will be faster, I'm stating that there's nothing out there to suggest that it won't be (other than speculation).

If that gets anybody worked up, then I'm sorry for having technical discussions here, and I'll leave the board for people to make unsubstantiated statements based on speculation.
 
Let me state very clearly: I'm not saying that NV30 will be faster, I'm stating that there's nothing out there to suggest that it won't be.

My suggestions:

* The supposed 10-20% yields on the part. One way to get yields up (to an extent) is to lower the target clock speed. So while NV30 may have a high design target, who knows what the shipping speed will be. (See the X-Box chip's significant lowering of clock speed.)

* nVidia's history indicates releasing new architectures at or below the fastest MHz of the previous-generation architecture. Performance improvements at similar clocks came from architecture and memory interface improvements.

* ATI's part is clocked significantly HIGHER than one would have expected based on the physical specs. (Based on specs, most were guessing something like Parhelia: low to mid 200s.) So one must assume that whatever "magic" ATI did to reach the 325 MHz part, nVidia would have to do similarly.

* ATI's part has a high power consumption. Nvidia...?

Again, I don't see how you can say there is nothing to SUGGEST that NV30 might not be clocked faster than R-300. I guess we'll have to agree to disagree on that.

Every time you claim that we're making "unsubstantiated statements based on hunches and gut feelings", I can say the same about your position.
 
Joe DeFuria said:
But, assuming NV30 does not have a power connector, it is an extra benefit to ATI in the same sense that the 0.13 process is a benefit for nVidia.

I see it more as a drawback (a VERY small one, though, since I couldn't care less if it uses an extra power connector) because of the 0.15 micron process. Maybe we just have to disagree on this? :)

Designing on a more advanced process allows you to target higher clock rates. And designing with higher power consumption limits also allows you to target higher clock rates.

I would guess that Nvidia also tries to stay close to the power consumption limit, at least for the high-end cards.
"Tries" is maybe a strong word; more like they don't care if they get close to it.

We know R-300 was designed with very high power consumption limits. We don't know about NV30. But speculating, I would be surprised it NV30's power consumption is similar to R-300, because they have similar transistor counts.

Yep, speculating, that's what we are doing in this thread.
And I agree, we don't know anything about the NV30's power consumption.
But we can agree that it's probably lower.

But as Russ said:

So, assuming the same design that runs OK on only 100% AGP power in .15u at 300 MHz,

you should be able to run that same design on 80% AGP power in .13u at 360 MHz (or 420 MHz, if you were limited by power and not timing).

It's still a disadvantage in theory.
As to what happens in practice, well, that remains to be seen, and it seems like it might take a while before we can make that comparison.

So I don't think ATI is regretting going for the 0.15 micron design.
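Russ's scaling argument quoted above can be sanity-checked with the first-order dynamic-power relation P ≈ C·V²·f. The sketch below is purely illustrative: the capacitance scaling, the 1.5 V to 1.3 V Vdd drop, and the clock figures are assumptions, not measured values for either chip.

```python
# Rough sanity check of the process-scaling argument above, using the
# first-order dynamic power relation P ~ C * V^2 * f.
# All numbers here are illustrative assumptions, not measured values.

def relative_power(cap_scale, v_scale, f_scale):
    """Power of the scaled design relative to the original (1.0 = same power)."""
    return cap_scale * v_scale**2 * f_scale

# Assume a .15u -> .13u shrink cuts switched capacitance roughly in
# proportion to the linear dimension, and allows dropping Vdd 1.5 V -> 1.3 V.
cap_scale = 0.13 / 0.15      # ~0.87
v_scale   = 1.3 / 1.5        # ~0.87
f_scale   = 360 / 300        # 20% higher clock, as in the quote

print(f"Relative power: {relative_power(cap_scale, v_scale, f_scale):.2f}")
# -> Relative power: 0.78 (roughly the "80% AGP power" in the quote)

# Pascal's caveat: if Vdd stays at 1.5 V on .13u, the benefit shrinks:
print(f"Same Vdd:       {relative_power(cap_scale, 1.0, f_scale):.2f}")
# -> Same Vdd:       1.04 (slightly MORE power than the original)
```

Note how sensitive the result is to the Vdd assumption, which is exactly why the "same Vdd at .13u" scenario mentioned earlier in the thread matters.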
 
I see it more as a drawback (a VERY small one, though, since I couldn't care less if it uses an extra power connector) because of the 0.15 micron process. Maybe we just have to disagree on this?

Heh... yeah. I do agree that, "all else being equal", I'd prefer not to have the power connector, of course. However, it's the power that partially allows for the performance. I'd rather buy an R-300 at 325 MHz with a power connector than an R-300 at 250 MHz without it.

Edit: I'd also rather have an R-300 "now" at 0.15 @ 325 MHz with a power connector than wait 6 months and get an R-300 "then" at 0.13 @ 325 MHz without one.

Actually, I would rather have even an EXTERNAL ("Voodoo 6000 Voodoo Volts") power supply, if that meant doubling the performance relative to someone "staying within AGP power specs". ;)
 
Joe DeFuria said:
Every time you claim that we're making "unsubstantiated statements based on hunches and gut feelings", I can say the same about your position.

Except I'm not speculating on anything I've written.

But anyways, carry on.
 
Except I'm not speculating on anything I've written.

Your speculation is implicit. :rolleyes:

You say you don't see anything to even suggest a lower clock speed. So, for example, you ARE speculating:

* that the yield issues will have no impact on clock speed.
* that nVidia is targeting high power consumption
* that nVidia's past clock history of new architectures is not indicative of NV30
* that nVidia will do similar "magic" (hand tuning, etc.) to get clock speed as high as ATI did.

You cannot claim there is NOTHING to suggest a lower clock speed without speculating that all of the above are true.

Carry on indeed.
 
You forgot one, I also had to speculate that the NVIDIA engineers didn't transform into monkeys overnight.

Sure, I can hunt for reasons why things might not follow the natural order, but that's called speculation.

I'll admit though, I did speculate that technical details had any place in a discussion on Beyond3d. ;)
 
The power connector on the R9700 is a drawback of the .15 tech which has been turned into an advantage by ATI. Surely, if it wasn't there, then we would have had one of two situations:

1) a lower-clocked Radeon 9700, to reduce 'power suckage'
or
2) ATI waiting for .13 tech to mature enough to reduce the 'power suckage'

IMHO :LOL:
 
Edit: I'd also rather have a R-300 "now" at 0.15 @ 325 Mhz with a power connector, than wait 6 months and get R-300 "then" at 0.13 @ 325 Mhz without a power connector.

I agree completely.

Therefore, I would guess that the power consumption of a GPU isn't the top thing that NVidia or ATi is worried about when designing new chips (high-end desktop).
Probably more like (without really knowing anything about it):

- What do we want (functionality)?
- How many transistors do we need for this functionality?
- What process do we have (or will we/might we have) available?
- Will this be economically reasonable?

Whether they then need an extra power connector or not is probably not something that they care about that much.
But then again, maybe they do (for some reason not known to me).
 
You forgot one, I also had to speculate that the NVIDIA engineers didn't transform into monkeys overnight.

Hehe.

I agree.

The thing that Joe and Pascal want to say is:

what if:

1: Nvidia's engineers will be really careful about meeting the AGP specs, even though they might have to lower their specs because of it, and even though it might be the difference between 10% faster and 10% slower (which equals a huge difference in this business).

2: They are getting lazy and won't hand-tune their design, since they don't "have to".

3: An extra power connector looks really bad when trying to market the card to OEMs. Thus they avoid it even though... (see 1)

Then, the advantage of the 0.13 micron process might not be an advantage anymore.
 
Ok, have any of you replaced your GF3 with a GF4 Ti4600? And if you have, did you happen to see what happens to your voltage? I run MBM, and I have to tell you that all my voltages dropped. And this was on two different brands of motherboard with two different brands of power supply. While I had no problems, IF I had a marginal power supply, I would have! My point is that IF the GF4 had the ability to accept power directly from the power supply, chances are the voltage to the motherboard would not have dropped, or, if it did, it certainly would have dropped much less. Most of us remember the problems that the original GFs had with AGP voltages.

In the context of selling a product that doesn't compromise the stability of your customers' systems (at least where AGP voltage is concerned), I give kudos to ATI.
 
Definitely need more data; these theoretical arguments are great for the first two pages, then they degenerate into a disgusting mishmash of "what if" scenarios.

Well, here's some speculation of my own to add, on the OEM issue earlier about the power connector. Perhaps the power connector is only needed for the high-end Radeons, and perhaps this fabled "9500" will be re-engineered to not use the connector.

All in all, I see this ending up a lot like the Athlon vs. P4 issue.
 
You forgot one, I also had to speculate that the NVIDIA engineers didn't transform into monkeys overnight.

No, there is no history of nVidia's engineers turning into monkeys, or any basis to believe that would in fact happen.

EDIT:

Sure, I can hunt for reasons why things might not follow the natural order, but that's called speculation.

Reviewing nVidia's history is not speculation. It's fact. Based on that fact, you then speculate that they will maintain the same trend, or break from it. You decide which is more outside the "natural order."

TSMC's 0.13 yield issues are not speculation, they are fact. Is it outside natural order to believe this will impact clock speed, or not?

Russ, don't pass off your own speculation as what is "the natural order."
 
how many transistors did the parhelia have again?

and what about the P10 as well?

might provide a bit of a reference for performance/transistor count.
 
8ender said:
how many transistors did the parhelia have again?

and what about the P10 as well?

might provide a bit of a reference for performance/transistor count.

Parhelia 512: 80 million transistors

P10: over 76 million transistors
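As a rough reference of the kind 8ender is asking for, here's a trivial "clock per transistor" comparison. The Parhelia and P10 transistor counts are from the post above, and the R300's 325 MHz clock is from this thread; the ~107M R300 transistor count and the 220 MHz Parhelia clock are my own assumptions.

```python
# Back-of-the-envelope "MHz per million transistors" reference for the
# chips discussed above. Parhelia/P10 transistor counts come from the
# post; the R300 transistor count and the Parhelia clock are assumptions.
chips = {
    # name: (transistors in millions, core clock in MHz)
    "Parhelia-512": (80, 220),   # 220 MHz assumed for the retail part
    "R300":         (107, 325),  # ~107M transistors assumed; 325 MHz from the thread
}

for name, (mtrans, mhz) in chips.items():
    print(f"{name}: {mhz / mtrans:.2f} MHz per million transistors")
# Parhelia-512: 2.75 MHz per million transistors
# R300:         3.04 MHz per million transistors
```

Of course, clock per transistor says nothing about per-clock efficiency, which is the other half of the performance question.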
 