No 6800 Ultra Extreme, still the same 6800 Ultra

ANova said:
ATI is not holding technology back; nVidia is jumping ahead.
That's what leaders do.

The advantages of SM3 are marginal, and considering ATI would have had to increase their transistor count by 33%, it would also have affected cost and yields, as we are seeing with the 6800. SM3 is just an interim step toward DX Next; aka PS1.4 all over again.
Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.
 
Chalnoth said:
Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.

You're right, the NV40 has 39% more transistors than the R420.
 
DaveBaumann said:
radar1200gs said:
2. Vertex textures: required for displacement mapping. Can be used to great effect for nearly-flat animated surfaces (such as water).

Again, it is possible with SM2.

Not really. Only with CPU emulation.

Technically, a two-pass mechanism can be used whereby the pixel shader looks up the displacement values from the displacement map texture, which are then passed back to the vertex shader for the actual displacement.

Three questions in that case:

1) How does performance with the two-pass mechanism compare to a native vertex displacement/texture operation (such as in the NV40)?

2) Is it compatible with programs that expect to find native vertex displacement/texturing, or must it be specifically supported?

3) Is it actually possible on ATi hardware available now? Technically possible and actually possible are two different things.
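
For reference, here is a minimal CPU-side sketch of the data flow in the two-pass mechanism Dave describes above. The function names, the flat four-vertex "water patch", and the gradient heightfield are purely illustrative; on real SM2 hardware both passes would of course run on the GPU (render the sampled displacements to a target, then feed them back as per-vertex data).

```python
import numpy as np

# Conceptual stand-in for the two-pass SM2 displacement mechanism.
# Pass 1 plays the role of the pixel shader: it samples the displacement map
# at each vertex's UV coordinate and writes one value per vertex.
# Pass 2 plays the role of the vertex shader: it reads those values back and
# offsets each vertex along its normal.

def pass1_sample_displacement(disp_map, uvs):
    """Pixel-shader stand-in: point-sample the displacement map at each UV."""
    h, w = disp_map.shape
    x = np.clip((uvs[:, 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip((uvs[:, 1] * (h - 1)).astype(int), 0, h - 1)
    return disp_map[y, x]  # one displacement scalar per vertex

def pass2_displace_vertices(positions, normals, displacements, scale=1.0):
    """Vertex-shader stand-in: offset each vertex along its normal."""
    return positions + normals * (displacements[:, None] * scale)

# Hypothetical nearly-flat water patch: four vertices, all normals pointing up.
positions = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 1], [1, 0, 1]], dtype=float)
normals = np.tile([0.0, 1.0, 0.0], (4, 1))
uvs = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
disp_map = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)  # stand-in heightfield

disp = pass1_sample_displacement(disp_map, uvs)
print(pass2_displace_vertices(positions, normals, disp, scale=0.25))
```

Question 1 above then largely comes down to how costly that GPU round trip (render the displacements, get them back into the vertex stream, re-submit) is compared to the NV40 simply fetching the texture directly in the vertex shader.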
 
AlphaWolf said:
Chalnoth said:
Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.

You're right, the NV40 has 39% more transistors than the R420.
A = 1.39B does NOT imply A = 1.39*(B+Cache).
In the end, one of the most important aspects is die size, and NVIDIA's is only roughly 10% bigger. If anything, NVIDIA is inflating their transistor count for marketing.
We had these discussions before - didn't we? ...

Uttar
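
To put rough numbers on Uttar's point: using the approximate figures widely quoted at the time (about 222 million transistors for NV40 versus about 160 million for R420), and an entirely hypothetical 20 million of cache that one vendor counts and the other doesn't, the headline ratio moves around quite a bit.

```python
# Approximate public figures from the period; the cache number is hypothetical,
# only to illustrate why "39% more transistors" depends on what gets counted.
nv40_quoted = 222e6   # NVIDIA's quoted NV40 transistor count
r420_quoted = 160e6   # ATI's quoted R420 transistor count
hypothetical_uncounted_cache = 20e6

print(nv40_quoted / r420_quoted)                                   # ~1.39 ("39% more")
print(nv40_quoted / (r420_quoted + hypothetical_uncounted_cache))  # ~1.23 like-for-like
```

Whether that matters depends on exactly what each vendor counts, which is essentially the question put to Uttar just below.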
 
I have two more questions, one for Dave and one for Uttar:

1) Given what ATi employees have said recently (in the chat concerning trylinear, I think) about ATi being reluctant to expose features in the driver unless they enhance a gamer's overall experience, how likely is it that ATi would support vertex displacement/texturing in the manner you described?

2) Uttar, have nVidia ever actually explained exactly which transistors they are counting when they give a figure? So long as all the transistors actually exist on the die, nVidia is deceiving nobody by stating a number; it doesn't matter what they are used for, only that they exist, unless they are claimed to be used for a purpose and it can be shown that they in fact are not.
 
Heathen said:
I see this as underhanded and just plain wrong. ATI is holding back game technology in an attempt to hold onto market share.
Pretty much what Nvidia did with the GF4 and NV3x, then.
Um, definitely not.

The NV20 was the first really programmable hardware, and the NV25 was its refresh. nVidia greatly expanded that programmability with their next new architecture. ATI has not done that here.

The NV3x was a mistake. There were a few bad design decisions coupled with process problems that resulted in a relatively poor chip. I don't call this holding games back. I call it a mistake.
 
anaqer said:
Chalnoth said:
The NV20 was the first really programmable hardware, and the NV25 was its refresh.
Nope. The NV20 refresh was the Ti 200 and Ti 500.
Those were respins of the exact same core. That's not what is typically called a "refresh" when referring to nVidia.
 
Chalnoth said:
Those were respins of the exact same core. That's not what is typically called a "refresh" when referring to nVidia.
Bzzzt... WRONG.
GF DDR, GF2 Pro, GF2 Ultra, GF2 Ti, GF4MX-x8, GF4TI-x8... :rolleyes:
 
anaqer said:
Chalnoth said:
Those were respins of the exact same core. That's not what is typically called a "refresh" when referring to nVidia.
Bzzzt... WRONG.
GF DDR, GF2 Pro, GF2 Ultra, GF2 Ti, GF4MX-x8, GF4TI-x8... :rolleyes:

Pretty much every IHV I have seen releases faster versions of the same core, etc., with added features. We've seen ATI do it; weren't its original 9000 cards only AGP 4x compliant, then made 8x compliant? Almost sure of it.

I mean, is the GeForce4 MX 8x all that different from a GeForce FX 5200 PCI Express card?

I think it's safe to say all those lines were from the same core,

but it's not really fair to compare them to the change from the NV20 to the NV25. The NV25 did add an additional vertex unit and an improved memory controller. (I would mention the Quincunx AA post-processing being done on the RAMDAC, but I'm pretty sure that's not a core change.)

Anyway, I agree with you. The NV25 was different from the NV20. Perhaps what we're disagreeing on is what a "refresh" product is? I call AGP 8x and PCI Express versions, or things like the NV38 "Marchitecture", where there are relatively small differences, just changes. But something like the NV20 to NV25 I call a refresh product, as with the R300/R360.
 
Chalnoth said:
That's what leaders do.

If that's what you want to call them. I, on the other hand, think of them as a business in it for the profit. After the NV3x fiasco I'm sure they wanted something they could tout as an advantage over ATI's offering, in the hopes of redeeming their position in the market and thus reaping more profits.

Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.

It came directly from ATI, as an estimate. And so what if the R500 supports SM3? I doubt we'll see it until mid to late 2005, and by that time games might actually be using SM3.

ChrisRay said:
Anyway, I agree with you. The NV25 was different from the NV20. Perhaps what we're disagreeing on is what a "refresh" product is?

A refresh is a card that is based on the same architecture as the current generation and usually offers little more than speed and AA/AF enhancements.
 

ANova said:
Chalnoth said:
That's what leaders do.

If that's what you want to call them. I, on the other hand, think of them as a business in it for the profit. After the NV3x fiasco I'm sure they wanted something they could tout as an advantage over ATI's offering, in the hopes of redeeming their position in the market and thus reaping more profits.
ANova said:
Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.

It came directly from ATI, as an estimate. And so what if the R500 supports SM3? I doubt we'll see it until mid to late 2005, and by that time games might actually be using SM3.


ChrisRay said:
Anyway, I agree with you. The NV25 was different from the NV20. Perhaps what we're disagreeing on is what a "refresh" product is?

A refresh is a card that is based on the same architecture as the current generation and usually offers little more than speed and AA/AF enhancements.



On the first part, your argument fails to take history into account. nVidia have a long established (and visible to a blind man in a dark alley) history of offering new features with and within each new generation.

On the second part, adding a vertex shader would qualify as more than speed & AA/AF improvements, along with a host of more minor tweaks that together really do differentiate a GF3 from a GF4.

One more thing I didn't address before: you were attempting to paint SM3.0 as nVidia's PS1.4. Sorry, but SM3.0 has been part of DX9 from the very beginning; PS1.4 was tacked onto DX8 with DX8.1.
 
Chalnoth said:
The NV3x was a mistake. There were a few bad design decisions coupled with process problems that resulted in a relatively poor chip. I don't call this holding games back. I call it a mistake.

You, sir, are a first-class apologist. When we see several game developers publicly state that they were forgoing the use of DX9 tech last year due to the FX chips' performance issues with it, that is most definitely holding games back, regardless of whether or not NV30 was a "mistake".
 
AlphaWolf said:
Chalnoth said:
Where did that 33% number come from? The NV40 doesn't have 33% more transistors than the R420. And ATI's R5xx will likely be SM3.

You're right, the NV40 has 39% more transistors than the R420.

During an X800 telephonic (is that even a proper word?) press conference I had with ATI a few weeks back, I specifically asked what adding SM 3.0 and FP32 would've cost in terms of added transistors, and the very forthright and quick response was something along the lines of: "Oh, easily 25%."
 
John Reynolds said:
telephonic (is that even a proper word?)
It sounds like a term that a PR guy would use...
 
Chalnoth said:
The NV3x was a mistake. There were a few bad design decisions coupled with process problems that resulted in a relatively poor chip. I don't call this holding games back. I call it a mistake.
So it doesn't hold games back to downplay PS2.0 (while simultaneously extolling DX9) and "encourage" developers to recode their shaders to try to coax more equal performance out of them, much of the time ending up with poorer quality regardless?

NV3x was a mistake, but nVidia's damage control for it certainly had the same sort of effects being complained about now.

It's all a matter of timing. nVidia didn't see DX9 getting out the door quickly, so they built their chips around speed-o-riffic old-generation tech and had DX9 capabilities, but not as a focus, nor did they have all the desired features for it. ATi with this generation felt we had not yet pushed the boundaries of SM2.0 and pushed for really high performance there, opting to wait on SM3.0 until its advantages can really be taken advantage of by later hardware. We've seen how SM2.0 has played out since the 9700 and is playing out now, and we'll see how SM3.0 plays out between now and next generation, and THEN we'll be able to properly judge the decisions of both.

Frankly, I like businesses like this making smarter decisions, and making money is not bad either. I mean, the more money they have, the more they can invest in R&D for even better things to come, eh? ;) Seems like the "holding back the industry" charge can be levelled at every leader at some point or another, as they all have different concentrations and gameplans from one chip to another. Ultimately, though "pushing tech for tech's sake" always pleases the enthusiasts, it's not always going to be practical or particularly useful, and companies don't always know which way the market is going to move. Periodically they ride things out until their next cards can surface, and periodically they try to force the market to go in their direction. When are they "holding things back" and when are they driving things forward? How do you tell, when all the players are usually driving in different directions at the same time, each more than the other?

Uh... beats me. Hindsight is at least closer to 20/20 than guesswork at the beginning, though.
 
radar1200gs said:
On the first part, your argument fails to take history into account. nVidia have a long established (and visible to a blind man in a dark alley) history of offering new features with and within each new generation.

Yes, none of them usable in games until the following generation of product from Nvidia, either being ignored by developers or so slow as to render any game unplayable. :rolleyes:
 
Bouncing Zabaglione Bros. said:
radar1200gs said:
On the first part, your argument fails to take history into account. nVidia have a long established (and visible to a blind man in a dark alley) history of offering new features with and within each new generation.

Yes, none of them usable in games until the following generation of product from Nvidia, either being ignored by developers or so slow as to render any game unplayable. :rolleyes:

With the exception of the NV3x series, I have owned or used every nVidia card released since the TNT, and I find your claims above laughable, as would the vast majority of gamers who also owned nVidia cards through the TNT-to-GF4 period.

There were an awful lot of cards sold, and it wasn't for their good looks or ability to run Glide...
 