Tech-Report blasts GeForce FX

No, I buy the card with the best performance and most features; I usually ignore the price if it is within some range (< $500). Sometimes I buy both cards for completeness, since I have multiple PCs. I like to own whatever is the fastest or best, so a $100 price differential is off the radar.

I ignored the console wars, for example: I bought a PSX, N64, DreamCast, X-Box, PS2, and GameCube. I usually buy the most fully featured mobile phone, got the digicam with the highest megapixel rating, etc.

There is not sufficient difference between the NV30 and R300 for me to easily decide that one is "obviously better" than the other. Each has upsides and downsides. The R300 has better AA IQ; the NV30 is more interesting to me as a developer at the moment. But neither dominates the other for gaming concerns.

Basically, I am a spendaholic when it comes to electronics.
 
I agree with Joe on this one.

I normally don't buy the top-end computer parts on the market. I usually try to balance price versus performance. For instance, I got my Athlon 1700+ CPU when it was about $100. For $10 less I could have had a 1600+, but for about $30-40 more I could have had the 1800+. There really isn't much difference between them, especially when you consider that you can go two to three increments faster than that simply by overclocking.

With the 9700pro, things are a bit different. Right now you've basically got two sweet spots, one at the 9700pro, and one at the Ti4200/R8500 price range. For the 9700pro, look at where it is currently positioned. As Joe mentioned, the current online price is hovering right around $310-$320. For $100 less you can get a Ti4600, and I don't think anyone will really dispute that the 9700pro offers a good deal more in terms of performance and features than the Ti4600 does (and this is on current games). On the other hand, you've got the NV30 coming down the line, which according to retail is going to be $100 over the 9700pro, but when it's introduced will probably be more like $150 or even $200 more on the street. We haven't seen any concrete benchmarks, but I really doubt that the NV30 can offer the same kind of feature and performance enhancements over the 9700pro that the 9700pro offers over the Ti4600. From most accounts (even those that are pro-Nvidia) the NV30 and the 9700pro are going to be in the same ballpark on a lot of benchmarks, and the features the NV30 is introducing over the R300 are mostly post-DX9 stuff, which is years down the road.

What's more interesting is that ATI really has a pretty balanced lineup coming out (at least in terms of speed vs. price). The 9500pro offers a lot more than the 9500, and the 9700 offers a fair amount more than the 9500pro. The 9700pro really doesn't offer much over the 9700, but it has already sold enough cards that ATI doesn't need to be as concerned with the 9700 stealing its future sales. A potentially redesigned and higher-clocked R350 could offer a fair amount more than the 9700/9700pro, so they could easily continue this trend. It makes it a lot harder to be content with one of the lower-end cards if the higher-end ones are not just a speed bump that could be overcome by overclocking. I'm curious how many people will be content to stay with the 9500pro, and how many others will be tempted up to the 9700.

Nite_Hawk
 
Doomtrooper said:
So in the end the ultimate reason to upgrade to a new card is to deliver features that can be used now. Faster frames would be one; I guess FSAA and eye candy would be another... but touting mile-long pixel shader lengths as a feature is a joke, as by the time a DX9 title is released, that card will be considered slow...

I agree that NV30's shader features do not provide a compelling reason for consumers to buy the card. It's a different story for 3D professionals, though. While this is a comparatively small market, expect them to flock to the GeForce FX due to its programmability.

Even so, I applaud Nvidia for pushing the envelope on programmability. It is true that the GeForce FX's shader improvements won't benefit games released during the card's useful life. But isn't that missing the point? I mean, if 3D hardware manufacturers only focused on features that benefited today's games, all progress would come to a halt! We'd be playing Quake3 at 1200 fps, but is that what we want? Nvidia determined that more flexible programmability is important for the future, so they built it now. And in doing this, they've ultimately *accelerated* the adoption of these features in the future.

We all know how it works - the feature comes first, developers follow. Would we have Nvidia wait until their next generation to add these programmability features? The generation after that? Thus delaying the adoption of these features for another few years? As it stands, ATI will respond with a part that matches the programmability of NV30, and developers will make use of that programmability all the more rapidly. We all win.

Was it a good business decision to focus on programmability rather than performance in today's games? Maybe not, time will tell. But was it a decision that is ultimately good for gamers and 3D fans? Certainly.
 
Nite_Hawk said:
you've got the NV30 coming down the line, which according to retail is going to be $100 over the 9700pro, but when it's introduced will probably be more like $150 or even $200 more on the street.

$200 more than 9700pro? :eek:

People always exaggerate the price of nVidia cards... when the GeForce3 was launched, people quoted numbers as extravagant as $580; by the time it hit store shelves, prices were well within the reasonable sub-$400 range.

I seriously doubt that NV30 will end up costing $150-$200 more than R300.
 
SteveG:

That's true, but there are other arguments that can be made as well. One, for instance, is that if Nvidia had gone with a more conservative design, they wouldn't have been so late to market. That way developers would have already had advanced features for the last 3 months, and would be better prepared to handle new features added in later cards. As it sits now, developers probably won't be getting the NV30 for at least another month or two, and basically all of Nvidia's features that are more advanced than DX9 are only going to be accessible through CineFX (or perhaps OpenGL 2.0 when it comes out). I don't think anyone is arguing that Nvidia shouldn't have introduced new features, but personally, I think Nvidia overextended themselves too much. The card goes beyond DX9 in places, but doesn't support enough features to be PS/VS 2.0 compliant.

Perhaps CineFX will catch on enough that it won't be as bad for Nvidia (at least in the professional market), but Direct3D is pretty well entrenched. They've got to recoup the $400M they are spending to get the NV30 line out the door, and the professional market can't support that. It's also why so many companies like SGI couldn't make it anymore. At $500 a pop, they'll need to sell a lot of cards, and while that's cheap for the pro market, it's pretty darn expensive for the consumer.

Nite_Hawk
 
That way developers would have already had advanced features for the last 3 months, and would be better prepared to handle new features added in later cards. As it sits now, developers probably won't be getting the NV30 for at least another month or two, and basically all of Nvidia's features that are more advanced than DX9 are only going to be accessible through CineFX.

If they've got working samples now, which they have because we've seen them, you can almost guarantee that some top-line developers have some already. It wouldn't surprise me if Carmack/Sweeney even had an earlier revision.
 
SteveG said:
I agree that NV30's shader features do not provide a compelling reason for consumers to buy the card. It's a different story for 3D professionals, though. While this is a comparatively small market, expect them to flock to the GeForce FX due to its programmability.

As said before... the user base determines support. I'm sorry, but as said many times on this forum, if a developer wrote a game to utilize the 9700/NV30's shaders, they would be limiting themselves to a very small user base in general. DX8-class hardware is different, IMO (since DX8 is almost 2 years old now and has a large user base).
In all honesty, we are not going to see significant DX9 games until at least late fall/winter of 2003... maybe even later.

Was it a good business decision to focus on programmability rather than performance in today's games? Maybe not, time will tell. But was it a decision that is ultimately good for gamers and 3D fans? Certainly.

No, IMO... these extra instructions will not be useful for anything except maybe cinematic scenarios. We need cards to deliver features that can be used during the lifespan of that card... useful features that will actually be used...

1) TruForm, to me, was one... it made older titles look better (if implemented properly)
2) FSAA... better implementation
3) Better anisotropic filtering
4) What's next??

DX9 is not even released yet, so if a developer starts a title today, we are looking at at least 1.5 years for it to be completed... by then, this high-end card of today is considered low end (and during the lifespan of that card, did any of these DX9 features get used at all... or just the usual FSAA, etc.?).
Yes, I know how it works, but it is not working correctly for the PC graphics business, IMO... Feature sets should be derived from the API, and the API should be derived from IHVs, game developers, etc., so future hardware will be current with the titles... not hardware 2 years ahead of the developers... of course, all IMO.
 
Dave:

That's true, though I don't think it invalidates the argument. How many Sweeneys and Carmacks are out there? If a more conservative approach were taken, how much earlier would they have gotten hardware? In the same vein, how worthwhile is it (even to people like Carmack and Sweeney) to have features that go beyond DX9, but are not good enough for PS/VS 2.0?

BodoZerg:

The list price on the FX is going to be $500, right? How much cheaper do you think street price will be than list when it is introduced? I mean, by that time the 9700pro should easily be at or under $300. I think it's pretty reasonable that the street price of the FX will debut at ~$450-$500, especially since the rumors are that it's going to be introduced in limited quantities. There is no reason for the street price to drop until the things are released in volume.

Nite_Hawk
 
The street price will be whatever is necessary to sell. If it sells well at $500, then it will be $500.

Or, if it needs to be $275 to sell, it will sell for that.
 
Typedef Enum said:
I don't recall this same sentiment when a certain board sported PS 1.4 support. It was regarded as something that went beyond the initial GF3/DX8 spec, and was considered a "good thing."

While I don't disagree with your general sentiment, since most people seem quite content to argue from the "other side of the fence" now that ATi and nVidia have somewhat reversed roles, there is a difference in the comparison you are trying to make.

PS1.4 wasn't about allowing "more tricks" to be done with shaders, but about collapsing passes into a single pass. It was a performance enhancement, though some people tried to glorify it into something more.
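To make the pass-collapsing point concrete, here's a rough sketch in plain C. The draw_scene helper is entirely made up (it just stands in for "set up textures and blend state, then render"); the real point is the pass count: PS1.1-class hardware samples at most 4 textures per pass, while PS1.4 allows 6.

```c
#include <stdio.h>

/* Stand-in for "set up texture/blend state and draw the scene";
 * a made-up helper for illustration, not a real D3D call. */
static void draw_scene(const char *textures, const char *blend) {
    printf("pass: textures={%s}, framebuffer blend=%s\n", textures, blend);
}

int main(void) {
    /* A 6-texture material on 4-texture PS1.1-class hardware:
     * two full geometry passes plus a framebuffer blend. */
    draw_scene("diffuse, lightmap, detail, env", "none");
    draw_scene("specular, gloss", "add");  /* re-transforms and re-rasterizes everything */

    /* PS1.4-class hardware can sample 6 textures in one pass, so the
     * same material costs one geometry pass and no blend traffic. */
    draw_scene("diffuse, lightmap, detail, env, specular, gloss", "none");
    return 0;
}
```

Same pixels either way; the PS1.4 path just skips the second trip through the geometry pipeline and the framebuffer read-modify-write.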

Conversely, extremely long shaders are hardly a performance enhancement, so DT's point is still somewhat valid. Touting PS1.4 was not the same, since it could actually have made nice shader effects a bit faster, had it been supported. Even if 1000-instruction shader programs were used, they would be essentially useless due to performance requirements.
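Some quick back-of-the-envelope math shows why (assuming the rumored 500 MHz core and 8 pipelines mentioned in this thread, and roughly one shader instruction per pipeline per clock, which is an assumption on my part):

```c
#include <stdio.h>

int main(void) {
    /* Assumed figures: rumored 500 MHz core, 8 pipelines, and
     * ~1 shader instruction per pipeline per clock. */
    double clock_hz     = 500e6;
    double pipelines    = 8.0;
    double instructions = 1000.0;   /* a "mile-long" shader */

    double pixels_per_sec = clock_hz * pipelines / instructions;
    double frame_pixels   = 1024.0 * 768.0;

    printf("pixels/sec: %.0f\n", pixels_per_sec);
    printf("fps at 1024x768 (one layer, no overdraw): %.1f\n",
           pixels_per_sec / frame_pixels);
    /* ~4M pixels/sec, i.e. about 5 fps -- hence "essentially useless". */
    return 0;
}
```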
 
How much control does nVidia have over prices? Don't their OEMs need to make money at some point? Or will nVidia be building all the cards and just having the OEMs resell them, the way ATI has been doing with the 9700 Pro?
 
RussSchultz said:
The street price will be whatever is necessary to sell. If it sells well at $500, then it will be $500.

Or, if it needs to be $275 to sell, it will sell for that.

If it costs the retailer $325, it won't be sold for $275. If the chip costs the manufacturer $60, the memory $60, and the PCB $100, they aren't going to sell it to retailers for $200.

;)

All made up prices... of course. :D
 
I'm with Russ on this.

Street price will depend primarily on three things:

1) How much "better" (if at all) the NV30 is perceived to be compared to the 9700. (I will elaborate below).
2) it will depend the typical selling price of Radeon 9700 whenever NV30 does make it to market.
3) It will depend on the quantity of NV30 products available. (Lower the quantity, higher the price...supply and demand).

WRT number 1: "perceived betterness" comes down primarily to two things: benchmarks and marketing "features." Other, less tangible things also come into play, like brand recognition. If the market "perceives" nVidia to be a better brand, the market will pay a higher price for it.

I do not expect the performance and feature characteristics of a 500 MHz NV30 to warrant a $150-$200 price premium over the Radeon 9700 Pro. (I don't think the market would accept it.) However, if there is very limited quantity of NV30 parts, or if nVidia was successful in their PR campaign (getting people to "believe" the NV30 is better than it actually is), that could make such a price difference viable.
 
Joe:

I think both you and Russ are correct, but to a certain extent so is Bigus. Here is my guess:

Nvidia wants to have the fastest card out there so that they maintain their "crown," so to speak. It doesn't really matter how much it costs so long as people perceive that they are still the "fastest." It's released in limited quantities because, honestly, it's just the video card equivalent of a trophy wife. Hopefully it will be fast enough in most benchmarks that it can be placed as a "clear" winner, and thus they can basically claim that it deserves such a high price point because it's the fastest thing out.

Next they release a card that has about the same street price as the 9700pro (maybe a little more expensive, to imply that it's better) and, while slower most of the time, has a couple of specific benchmarks that it's faster at (something like a 400 MHz core/memory card). They'll try to market this card as being in the same family as the $500 gorilla, and use their reputation as the fastest to sell it. This card will then take the brunt of any price wars with ATI, and will be cheaper to produce than the FX, so it won't hurt them quite as badly in the long run as dropping the price on the FX would have. It'll probably have a name similar to the $500 card's, to confuse buyers.

Nite_Hawk
 
DemoCoder said:
So from my point of view, increases in the core clock are "free"...

Yeah, "free" except:
a) The design is 6 months late due to the 0.13 um process, whose purpose was to allow a larger core at a higher clock

b) The design costs more due to the exotic cooling and 0.13 um process

c) The cooler occupies an extra slot, could be unreliable due to dust (note I said "could"), could be noisy to some, etc.

If all these are for minimal advantages, what's the point? There is a cost to increasing the core clock. The NV30 has over 50% higher clock rate than the R300, but it won't be anywhere near that much faster (most of the time, anyway), as we've already seen in the benchmarks released by NVidia.

Oh yeah, and don't use the Parhelia argument. The NV30 is the other way around. Theory gave us much higher expectations of Parhelia than it delivered. Heck, in a game like Quake3 it should be faster than the R300 per clock (I'm sure of this), but it's not even close. For the NV30, however, theory predicts less-than-spectacular performance, and you can rarely outpace theory.

Anyway, I don't think it's worth arguing much more. You are firmly in favour of NV30 due to its ability to execute fast shaders, which is true. Still, there is no need for 8 pipes in this case:
andypski said:
If I can't actually achieve my peak fill figure then I might redesign and go to a design with less pipelines but where each pipeline can run more instructions per clock - this might make better use of my silicon area and memory resources. The more pixels I actually have 'in flight' the more buffering/FIFO memory I may need internally to hold intermediate calculations.
This is the main problem with the NV30. 8 pipes at such a high clock are only good for one thing: stenciled shadows, e.g. Doom 3. You will never really get that pixel rate on the NV30, so why waste the silicon? They could have kept a 4x2 architecture, and it would be pretty much just as fast in every other situation, but smaller, and thus cheaper.
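Rough numbers to illustrate why the peak is unreachable in normal rendering (the 500 MHz core is the figure rumored in this thread; the 128-bit DDR-II at 1 GHz effective is my assumption about the memory setup):

```c
#include <stdio.h>

int main(void) {
    /* Rumored/assumed NV30 figures -- speculation, not confirmed specs. */
    double core_hz = 500e6;              /* core clock */
    double pipes   = 8.0;                /* pixel pipelines */
    double mem_bw  = 16e9;               /* 128-bit bus @ 1 GHz effective = 16 GB/s */

    double peak_fill = core_hz * pipes;  /* theoretical 4.0 Gpixels/s */

    /* Ordinary rendering writes roughly 8 bytes per pixel
     * (32-bit color + 32-bit Z), so bandwidth caps the real rate: */
    printf("peak fill:               %.1f Gpix/s\n", peak_fill / 1e9);
    printf("bandwidth-limited color: %.1f Gpix/s\n", mem_bw / 8.0 / 1e9);

    /* Z/stencil-only passes (Doom 3 style) write ~4 bytes per pixel
     * and compress well, so they are the one workload that can
     * actually approach the 4 Gpix/s figure. */
    return 0;
}
```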
 
Anyway, I don't think it's worth arguing much more. You are firmly in favour of NV30 due to its ability to execute fast shaders, which is true.

I'm still curious about the shaders, as it seems that 128-bit (full floating-point) precision will still take two cycles per instruction, according to one thread.
 
DemoCoder said:
Let's say I build a house with new energy-efficient windows. I paid the upfront cost for the purpose of saving heating/utility costs.
But then you also decide to install a powerful central vacuum system, and whenever you clean the house, all the heat you saved gets sucked out... :D

Gotta love mocking that NV30 cooler.
 
Mintmaster said:
Anyway, I don't think it's worth arguing much more. You are firmly in favour of NV30 due to its ability to execute fast shaders, which is true.

That's where you are wrong. Go read my posts from when the R300 came out. There is this blanket assumption that if you defend a card against wrong-headed and unfair criticism, you must somehow be "for" one over the other. When the R300 came out, I consistently defended its "limitations" in shader ability; go check the threads. In fact, I even got accused of being pro-ATI at one point by Derek Smart. More than that, I actually have an R9700PRO in my system today.

I have been adamant since day 1 that these two cards will be about on par, that they will each beat the other in some categories and fail in others, and that there will be NO OVERALL WINNER. It is people's inability to process this and avoid turning everything into a damned horse race that leads to these bogus critiques. The attack on the size of the fan is the most absurd I've seen, and the idea that people care about the extra slot is equally absurd. You won't see the same people complaining about USB or audio connectors eating a slot. Today's systems have oodles of free slots, especially systems with integrated networking, audio, RAID, SCSI, etc.

Yes, I like NV30's (hypothetical) fast shader execution rate. I think that is the future. I also like ATI's AA, anisotropic filtering, and HW tessellation ability. I like that ATI went with a 256-bit bus, paving the way for others to make the leap (although P10 and Parhelia beat them to it). I like ATI's multiple-render-target support.

The fact is, both cards have things that I like and things that I don't like, and I wish there was a card that combined the best aspects of both.


As for the 8-pipeline issue, or the balance issue: you are still wrong. This is not a matter of being pro-Nvidia; it is a matter of being right or wrong. The fact is, the high clock allows the NV30 both to match the R300 in older-game performance and to gain a benefit in shader execution rate. It also means that they will have comparable stencil/shadow-buffer rates. The card is simply "balanced" for compute-centric shaders.

The fact that NVidia went to .13 micron, yes, had an extra cost. However, like I said, both ATI and NVidia are going to pay this cost, and NVidia simply decided to pay it sooner. Nvidia needed .13um to squeeze 125M transistors into the die; it simply may not have been possible at .15um. Thus, .13um gave them the side benefit of increased clock-scaling potential.

Moreover, since NVidia has paid this design cost, it will reduce the design cost of the NV31/NV35, since many of the kinks of the .13um design process have now been worked out.

The idea of a "balanced" card has to be taken in context of the use case for which the card is designed. We had this same discussion back in the days of 1, 2, or 3 TMUs per pipe. If the average game was a dominated by dual textured blending modes, then the 2 TMU card is more "balanced", since both TMUs will be non-idle most of the time. If games were dominated by single texturing, then the second TMU unit would be idle most of the time and the card would not be "balanced" in that it would have extra wasted bandwidth for the second TMU, which is not used 90% of the time.

Nvidia's whole strategy, right or wrong, is that they see the future as being compute-bound. Even on short 16-32 instruction shaders you need shader performance, and Nvidia has burnt transistors ensuring this scenario is speedy.

Whether or not any "killer" DX9 games come out during the lifespan of the card is a separate argument. But the NV30 is balanced for the requirements for which it was designed. The extra pipelines exist because z/stencil fillrate is important for future games like Doom3 with unified lighting.

If you were looking at future games like Doom3, Halo2, Splinter Cell, Unreal2+, etc. today, you would design to maximize two things: z/stencil rate and shader compute rate. So, IMHO, a card designed for these scenarios is balanced.
 
The attack on the size of the fan is the most absurd I've seen, and the idea that people care about the extra slot is equally absurd.

Well, ATi caught copious amounts of heat for requiring a power connector, and let's face it, the NV30 cooler is a level of absurdity higher than that.
Call it geek smack if you want, but it's going to happen, and the fan is an easy target that even to non-techies looks somewhat farcical.
 