another Dave Orton interview

Bjorn said:
You could of course turn that around and say that most people were surprised that the R420 has close to zero new features. It was clear pretty early that it wasn't going to be an SM3.0 part, but that it didn't have VS3.0, or any tweaks to FSAA/AF or things like that, was not what I expected.

And I don't think it's that surprising that Nvidia hasn't been able to reach clock frequencies as high as the R420's; it's a 222 million transistor chip on a different process.

Not that I expected him to be unbiased, though. It's not exactly his job :)

I think we'll have to wait for some sense of nV40 yields before we know whether what Orton said about die size and clocks was biased, or whether it was simply accurate, right? I mean, I have no trouble at all following his reasoning. Also, I think you might read a possible peripheral meaning into his surprise about nV40 clocks not being higher: "We were surprised nVidia opted to stay away from low-k, since they could have chosen to do it like we did and gotten better clocks as a result." That's an alternative meaning there, I think. So why *did* nVidia stay away from low-k, when it might've helped them? An interesting question on many levels, I think, apart from their choice to use IBM initially, as I think the answer here is one of multiple parts.

As far as the "few new features" concept, I'm not sure I understand you, or at least your point of view. What about the concept of "new features" R4x0 has that are entirely absent from nV3x? In fact, R4x0 has many of the same "new features" that were also in R3x0, which nV3x didn't have as well. nV40 also has a lot of "new features" relative to nV3x, about as many as R3x0 had over nV3x...;) But as to your points concerning advertised "new features" that nV40 has that R4x0 does not, let's wait and see whether those few that there are make a substantial difference between R420 and nV40, as that is anything but clear at this point. Once we establish the efficacy of those nV40 features beyond marketing bullets, and we can see some empirical benefits to them, then I think it will be the time to discuss them, as then they'll be worthy of discussion, imo...;) As you point out, the R4x0 feature set has been well and truly demonstrated over the last 20 months in R3x0. Now we'll get to see how well nVidia does with them, and more, in nV40 over the next few months.
 
Evildeus said:
Sabastian,
Here's the part that to me seems to say that the R800 is really the beginning of 3 unified teams:
So the R600 family will mainly be centred primarily in the Valley and Orlando with a little bit from Marlborough, and then the R800 would be more unified.
;)
http://www.beyond3d.com/interviews/daveorton/index.php?p=3

Not to nitpick here, but Dave Orton says it is already unified... and indeed they are working on the R600 as one group, with the Silicon Valley/Orlando portions sharing the bulk of the workload and the Marlborough team lending a hand. It is not just one of the teams working solely on the R600 project, like when the Marlborough team did the R300 (IIRC). Anyhow, I am not going to argue about this; it is pretty clear that the workload is dispersed between these groups in a joint team effort already. Also, they have more than one project "on the go" between them at present.
 
Joe DeFuria said:
Actually, they're probably really worried about the 5200+ series...but that's really another debate. ;)

I thought that most developers already made a DX8 path for the 5200 and stopped worrying about it :) (and yes, that's another debate, though one where I would probably agree with you)

That's just it...most feature sets will be pretty much set in stone for titles this year; the better the game performs, the better the dev "looks". The dev can more easily exploit (make heavier use of) existing features in their engines for higher-performing cards. Instead of a few normal maps on a couple of characters...do it everywhere, for example.

The main benefit of the X800 is its added shading power, which requires new art to take advantage of, if I'm not mistaken. As for adding normal maps and other stuff so that you couldn't run the game with full settings on, say, a 9800 XT, let me just say that I doubt we'll see any such game this year. Well, OK, probably a few that ATI and Nvidia have "sponsored".

And of course, I still don't believe that "execution problems" were the main reason for R420 = R300 on steroids, rather than "we thought about the benefit for the developers", which was what the discussion was all about.

It's not just new "features" that allow for higher quality effects...it's also better performance of existing features.

That's true. What I'm saying is that a higher-clocked native 12-pipe part would also have performed much better than the 9800 XT and would give much the same benefits. It would of course be a bit slower, but I doubt we'll see any game this year that you can't run at full settings and acceptable resolutions on the 12-pipe versions of the new cards. In fact, as I said above, I think you can do that with a 9800 XT.

Well, we already know that:
1) It's larger.
2) It consumes more power.
3) It's overall slower. (I believe that's the consensus, correct?)

From what I've seen in the reviews, it's not exactly slower in all cases. And it's much too early to declare a "winner" in the speed department, imo.

What we don't know is how "producible" it is in comparison, but we have one big clue: the X800 Pro is shipping, and the "lower end" version of the NV40 isn't. The fact that ATI has the higher-volume part shipping before their high-end part is pretty telling, IMO. One of the reviews mentioned that the only "reason" the XT is later is that they're waiting on 600 MHz RAM shipments. That could also explain the absence of the 6800 Ultra (assuming they aren't actually going to ship with overclocked RAM), but does not explain the absence of the 6800 non-Ultra or GT.

We don't really know when they started production; for all we know, ATI might have started two months earlier than NVidia. What I think will be more telling is how quickly they can ramp up production once the cards are available.
 
WaltC said:
I think we'll have to wait for some sense of nV40 yields before we know whether what Orton said about die size and clocks was biased, or whether it was simply accurate, right?

The NV4X still has more transistors and a new shader model, plus other additional features. That makes comparing clock speeds difficult, imo. And since he doesn't know, it's FUD.

And, if you're to believe Nvidia, the NV4X has:

- 40% more transistors, yet only a 9% bigger die. Costs 10-15% less, with better process capacity.

Shouldn't we wait and see if that's accurate also?
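
Fwiw, here's the quick arithmetic on those two claimed figures (purely illustrative, taking Nvidia's numbers at face value, whichever chip they're comparing against):

```python
# Back-of-the-envelope: what do Nvidia's two claims imply together?
transistor_ratio = 1.40  # "40% more transistors" (claimed)
die_area_ratio = 1.09    # "9% bigger die" (claimed)

density_ratio = transistor_ratio / die_area_ratio
print(f"Implied density advantage: {density_ratio:.2f}x "
      f"(~{(density_ratio - 1) * 100:.0f}% more transistors per mm^2)")
# -> roughly 1.28x, i.e. ~28% denser, if both claims hold
```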

In fact, R4x0 has many of the same "new features" that were also in R3x0, which nV3x also lacked. nV40 also has a lot of "new features" relative to nV3x, about as many as R3x0 had over nV3x...;)

If a card is released with close to no new features relative to a two-year-old card, then I would say that it has no "new features". And of course, the NV3X got a lot of crap (rightly so) for the features it lacked.
 
Sabastian said:
Not to nitpick here, but Dave Orton says it is already unified... and indeed they are working on the R600 as one group, with the Silicon Valley/Orlando portions sharing the bulk of the workload and the Marlborough team lending a hand. It is not just one of the teams working solely on the R600 project, like when the Marlborough team did the R300 (IIRC). Anyhow, I am not going to argue about this; it is pretty clear that the workload is dispersed between these groups in a joint team effort already. Also, they have more than one project "on the go" between them at present.

Yes, but before, they were done at one site; now, and until the R800, it will be more or less two sites; and beginning with the R800, all three sites will be completely involved.
 
If a card is released with close to no new features relative to a two-year-old card, then I would say that it has no "new features".

How many new features must a card have before you'll accept that it's got new features?
 
Quote:

In fact, R4x0 has many of the same "new features" that were also in R3x0, which nV3x also lacked. nV40 also has a lot of "new features" relative to nV3x, about as many as R3x0 had over nV3x...


If a card is released with close to no new features relative to a two-year-old card, then I would say that it has no "new features". And of course, the NV3X got a lot of crap (rightly so) for the features it lacked.

It's not that the nv3x got crap because of features it lacked. It got crap because the features it did have worked like crap.

The nv3x series had a lot of great features, like PS2.0 extended and FP32. But it couldn't run at acceptable speeds with them on.

The nv4x may still get the same crap if it turns out all the extra features on it run much, much slower than the features ATI has.

Although from what I see, it looks like a solid card.
 
jvd said:
It's not that the nv3x got crap because of features it lacked. It got crap because the features it did have worked like crap.

That too, yes. But also for missing MRTs, for example.

The nv4x may still get the same crap if it turns out all the extra features on it run much, much slower than the features ATI has.

Being somewhat slower doesn't matter that much. The problem is if it can't use the features at acceptable speeds/resolutions. Which, yes, remains to be seen. But as you say, it seems like a solid card.
 
Heathen said:
How many new features must a card have before you'll accept that it's got new features?

Well, strictly speaking, one new feature of course means that the card's got new features. As for how many? I don't know. And the other problem is, it's not the amount that counts, it's what the benefits of using them are.
 
DaveBaumann said:
3, why do board vendors have them in the current config that can't run without 2 PSUs?

As has already been discussed here (another thread, about NVidia Q1 2005...), according to Jen-Hsun, the 6800 Ultra won't need the second molex connector as long as it runs at the specified clock rates, which are a 400 MHz core clock and 550 MHz DDR RAM.

It will be fun to see if that's true, though. And if there'll be versions without it.

And yes, I've read this from HardOCP:

"With one connector, we are getting some artifacting in 3DMark03 during Battle of Proxycon and Trolls, but not much, some shimmering and black lines. Played UT2004 no problem with one connector. "

And as they say:

Another issue to factor into all of this is that these are not "retail" cards being tested at the moment, and very few will find their way into gamers' hands. We have put off power testing till we get real retail 6800Us to test with. I think that is when we will see the real truth emerge about the power needs of the GeForce 6800 Ultra and other GF6 cards.
 
Joe DeFuria said:
3) It's overall slower. (I believe that's the consensus, correct?)

Yet to be determined. It wins some benchmarks, loses others. Which ones and how many depends on who's doing the reviewing, what games they used, and what drivers they're using (immature drivers, written in less than 4 months for a new architecture, at that).

What we don't know is how "producible" it is in comparison, but we have one big clue

That clue could mean anything. It could mean that NVidia PR likes to do flashy events on its own schedule regardless of availability. And if not, if they were having production problems, they could have delayed the launch to solve them.


The fact that ATI has the higher-volume part shipping before their high-end part is pretty telling, IMO. One of the reviews mentioned that the only "reason" the XT is later is that they're waiting on 600 MHz RAM shipments.

There are also "bad rumors" about ATI as well. That the XT PE is a "cherry pick" and they're having problems with XT yields. I don't believe either way, but these things flow into my mailbox.

That could also explain the absence of the 6800 Ultra (assuming they aren't actually going to ship with overclocked RAM), but does not explain the absence of the 6800 non-Ultra or GT.

I've got a good explanation for it: NVidia's PR department sucks. It's poor execution. More concern with shipping press releases on time, less concern with shipping product. Like Valve, they announce products with nonsensically optimistic timelines. Has it been 45 days yet since the NV40 launch? Aren't you jumping the gun a little bit before drawing conclusions? The announced availability date was 30-45 days after launch, right? So why not hold off on the assumptions and start speculating after they miss that date?
 
There are also "bad rumors" about ATI as well. That the XT PE is a "cherry pick" and they're having problems with XT yields. I don't believe either way, but these things flow into my mailbox.

The only source of that rumor is JHH. Want to quote Dave Orton about NV40 yields too? :devilish:
 
No, but that's my point. Every negative speculation or rumor about NV is continually replayed in these threads. Seems to me it is too early to draw conclusions.
 
Laa-Yosh said:
We are still learning in the channel, especially how to be more effective in Eastern Europe, China and South America.

I wonder what Dave Orton means by being more effective - they already have the majority of the enthusiast market here in Hungary :D

I don't think he meant the enthusiast market when he mentioned those countries. I'd rather think he was talking about the OEMs, where Nvidia is still the leader.
 
If you consider that it was in Nvidia's best interest to launch the NV40 as fast as they could, and if you remember that both chips taped out in December but only the X800 Pro is already buyable, albeit in limited quantities, it should tell you which company probably has yield problems.
 
Actually, DC, it's about 50-50 in terms of who's doubting whom. If we look at the recent past - oh, let's say about the last 2 years - has one of the above companies been as truthful about its products as its competition? If you had to pick one of the above companies as telling the truth - and only one - who would you pick? Tell the truth, now....... ;)

As far as what's new - Bjorn, come on now, don't be so rigid/closed-minded. Features are nice, but usable features are what count. I want to play this game on my high-end machine. I have a choice of these 2 cards. One can be played at 1280x1024, 4X FSAA, 16X AF. The other can be played at the same resolution and AF, but can use 6X FSAA, maybe 12X, with as good or better speed (than the other card)...... Now just where are you gonna go?

My point is that the X800 series still has better usable features than the 6800 series right now. And, in the future, while it "may*" not scale as high due to the added features of the 6800, it will still have better FSAA..... which will always be a very usable feature, in almost every game.....

*But it (the X800) "may" be as fast as or faster using SM2.0 than the 6800 is using SM3.0...
 
martrox said:
As far as what's new - Bjorn, come on now, don't be so rigid/closed-minded. Features are nice, but usable features are what count. I want to play this game on my high-end machine. I have a choice of these 2 cards. One can be played at 1280x1024, 4X FSAA, 16X AF. The other can be played at the same resolution and AF, but can use 6X FSAA, maybe 12X, with as good or better speed (than the other card)...... Now just where are you gonna go?

If it were as simple as "who has the better FSAA quality", then I would choose the one with the better FSAA. The point is, it isn't.

My point is that the X800 series still has better usable features than the 6800 series right now. And, in the future, while it "may*" not scale as high due to the added features of the 6800, it will still have better FSAA..... which will always be a very usable feature, in almost every game.....

I'm guessing you're talking only about FSAA then. The problem with that is that the difference in the FSAA department is rather small now. And temporal AA seems to have too many drawbacks, imo. And it could also be implemented by Nvidia, afaik.
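
To explain what I mean by drawbacks, here's a rough sketch of the temporal AA idea as I understand it (a simplification, not ATI's actual implementation):

```python
# Temporal AA sketch: alternate two 2-sample patterns every frame, so
# that consecutive frames blending in the eye approximate 4-sample AA.
PATTERN_A = [(0.25, 0.25), (0.75, 0.75)]  # sample offsets, even frames
PATTERN_B = [(0.75, 0.25), (0.25, 0.75)]  # sample offsets, odd frames

def samples_for_frame(frame_index: int):
    """Pick this frame's sample pattern."""
    return PATTERN_A if frame_index % 2 == 0 else PATTERN_B

# Over two consecutive frames, the perceived coverage is the union:
perceived = samples_for_frame(0) + samples_for_frame(1)
print(f"Samples per frame: 2, perceived over two frames: {len(perceived)}")
# The drawback: below a certain framerate the frames no longer blend,
# and the alternating patterns show up as flicker/crawling instead.
```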

If we look at the recent past - oh, let's say about the last 2 years - has one of the above companies been as truthful about its products as its competition? If you had to pick one of the above companies as telling the truth - and only one - who would you pick? Tell the truth, now.......

I would pick none of them :)

But I take everything that, f.e., Jen-Hsun says with a very large pinch of salt. Perhaps a pinch or two more than for Dave Orton :)
 
Bjorn said:
The NV4X still has more transistors and a new shader model, plus other additional features. That makes comparing clock speeds difficult, imo. And since he doesn't know, it's FUD.

Heh...;) How quickly we forget...;) nV30 had "more transistors" than R3x0, was manufactured on a smaller process, was clocked higher, and still ran much slower than R3x0, and suffered abysmal yields. Not FUD, fact.

Again, there was a big, big difference between what nVidia told us about the capabilities of nV30 and the actual chip itself, wasn't there? Not FUD, fact. What would make you assume, before you know otherwise, that nVidia is doing something different with nV40?

He didn't say he was comparing clock speeds; he said he was surprised about the clock, seeing that both GPUs are 16x1 and similar in other general ways. Again, I think he was talking obliquely about ATi's decision to go with low-k and how that affected ATi's clocks, contrasted with nVidia's decision not to go with low-k, which definitely affects clocks and power consumption; else ATi wouldn't have used it. Not FUD, fact.

What Orton knows, however, is that the nV40 die size is bigger than R420's (fact), that it takes more power to operate reliably (fact), and that it runs hotter at a lower MHz clock (fact). These are all facts clearly in evidence at present. What would make you think otherwise?

OK, so what was it you think he doesn't know that's FUD? Didn't quite catch that...;) Additionally, it is a fact that R420-based 3D cards are shipping to retailers at the moment, but nV40-based products aren't (at least as far as any nV40 AIB OEMs or retail distributors have announced). nV40 was announced first, right? Again, that's fact, not FUD.

So what can be deduced from these facts apart from FUD? I'll leave you with that to contemplate.

And, if you're to believe Nvidia, the NV4X has:

- 40% more transistors, yet only a 9% bigger die. Costs 10-15% less, with better process capacity.

Shouldn't we wait and see if that's accurate also?

Than what? NV35/8? I certainly hope that's true, for their sakes. Again, nV40 was announced first, but it is a fact it will be second to market, isn't it? What does that tell you?

If a card is released with close to no new features relative to a two-year-old card, then I would say that it has no "new features". And of course, the NV3X got a lot of crap (rightly so) for the features it lacked.

Well, R360s in XTs shipped at 412MHz and were 8x1. The R420 PE will ship at 520MHz and is 16x1. That's not even counting all of the other >R360 capability, but just that by itself seems to me to be quite a fundamental, and substantial, "new feature" of R4x0 over R3x0. I think you are in such a rush to downplay R420 versus R360 that you literally can't see the forest for the trees...;)
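
Just to put rough numbers on that, here's the simple peak-fillrate arithmetic (purely illustrative: clock x pipes, ignoring memory bandwidth and real-world efficiency):

```python
# Theoretical peak pixel fillrate = core clock (MHz) x pixel pipelines.
cards = {
    "9800 XT (R360, 8x1 @ 412 MHz)": (412, 8),
    "X800 XT PE (R420, 16x1 @ 520 MHz)": (520, 16),
}

for name, (clock_mhz, pipes) in cards.items():
    mpixels_per_s = clock_mhz * pipes  # Mpixels/s
    print(f"{name}: ~{mpixels_per_s / 1000:.1f} Gpixels/s")

# -> ~3.3 vs ~8.3 Gpixels/s: roughly 2.5x on paper, before any of the
#    other >R360 improvements are counted.
```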
 