Tech-Report blasts GeForce FX

Ultimately what matters is performance. If the NV30 performs better on older games than the R300 and if it executes the pixel shaders faster, then the "unbalanced" arguments are in fact irrelevant and wrong. If an "unbalanced" card runs faster on both new and old games, what's the point of trying to use "unbalanced" as a negative connotation against it?
 
Ultimately what matters is performance.

No, performance and price.

If an "unbalanced" card runs faster on both new and old games, what's the point of trying to use "unbalanced" as a negative connotation against it?

There is none...unless the "unbalanced" card costs more. And technically, I'm talking about what it costs to make, not the actual selling price. (Because we all know that selling price in a competitive market is not dictated by cost.)

Company X could make an "unbalanced" card with a 100 MHz, 8-pipe core coupled to a 2 GHz, 1024-bit-wide memory interface. Company Y could make another "unbalanced" part that pairs a 5 GHz, 16-pipe core with 100 MHz, 256-bit DDR RAM.

BOTH of those products might be "absolutely faster" than Card Z: 100 MHz, 8 pipes, paired with 100 MHz, 256-bit DDR.

But I'd say Card Z is the "best" card, because Companies X and Y will either be charging a LOT more for their nominally superior parts, or they won't last too long as they lose money competing with Company Z on price.
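Just to put rough numbers on those made-up cards (a back-of-envelope sketch of my own, nothing official): fill rate is core clock times pipes, and bandwidth is memory clock times bus width, doubled for DDR.

[code]
# Back-of-envelope peaks for the hypothetical Cards X, Y, and Z above.
def fill_rate_mpix(core_mhz, pipes):
    # Peak fill rate in megapixels/s: one pixel per pipe per clock.
    return core_mhz * pipes

def bandwidth_gbs(mem_mhz, bus_bits, ddr):
    # Peak bandwidth in GB/s; DDR transfers twice per memory clock.
    return mem_mhz * 1e6 * (bus_bits / 8) * (2 if ddr else 1) / 1e9

cards = [
    ("X: 100 MHz, 8 pipes, 2 GHz 1024-bit",      fill_rate_mpix(100, 8),   bandwidth_gbs(2000, 1024, False)),
    ("Y: 5 GHz, 16 pipes, 100 MHz 256-bit DDR",  fill_rate_mpix(5000, 16), bandwidth_gbs(100, 256, True)),
    ("Z: 100 MHz, 8 pipes, 100 MHz 256-bit DDR", fill_rate_mpix(100, 8),   bandwidth_gbs(100, 256, True)),
]
for name, fill, bw in cards:
    print(f"Card {name}: {fill} Mpix/s fill, {bw:.1f} GB/s bandwidth")
# X: 800 Mpix/s but 256 GB/s; Y: 80,000 Mpix/s but 6.4 GB/s;
# Z: 800 Mpix/s and 6.4 GB/s, modest on both counts, hence "balanced."
[/code]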
 
Pure speculation, you have no data. Card X is produced on .15 micron, Card Y is produced on .13 micron. Card Y may in fact be clocked higher "for free" and may ultimately be cheaper to produce as well.

Let's wait and see how this plays out in a few months, but IMHO, NV30 and R300 are "on par" and these "unbalanced" comments are just the usual sour grapes. NV30 is "balanced" toward more compute-bound shaders. You might as well perform character assassination on the R300 for being an IMR. How "ugly" and "inelegant" these "brute force" renderers are!

If you are one of those "all I care about is performance on current games and not DX9 features that won't be used for 2 years" people, then let's just agree to disagree. I am sick of that argument.

Fact is, end users simply don't care how things are implemented. They care how their games perform. These same arguments came up during the PVR days: how deferred rendering was supposed to be more elegant, more balanced, and cheaper, since it didn't require exotic memory. Well, in the end, the IMR performed better and was simultaneously cheaper.

That's why I think the "bandwidth numerology" fetishists are ultimately wrong.
 
Pure speculation,

Gee, you think? ;) Remind me to accuse you of that the next time you start a sentence with "Imagine a future where..."

Let's wait and see how this plays out in a few months,

Of course.

but IMHO, NV30 and R300 are "on par" and these "unbalanced" comments are just the usual sour grapes.

IMO, the NV30 and R300 will have relative strengths and weaknesses (somewhat on par overall), and all the "less bandwidth doesn't matter" comments are also the usual sour grapes.

So does that make us even?

If you are one of those "all I care about is performance on current games and not DX9 features that won't be used for 2 years" people, then let's just agree to disagree. I am sick of that argument.

No, I am not one of those people. I am one of the "get your money's worth" type of people. (though if "getting your money's worth" means playing games over the next year or two, then for that person, "performance on current games" is right to the point.)

For the record, I have not "accused" the NV30 of being "horribly unbalanced" or some such thing, and my judgement of it cannot be rendered until we see benchmarks, availability, and price.
 
The "get your money's worth" people should be buying 9500's and NV31's or whatever. The bleeding edge high end cards are never "worth it". I'm one of the "early adopter, get it first while it's fresh and hot" people. I don't mind paying for something that rapidly depreciates in value and may be lacking in content.
 
DemoCoder said:
Pure speculation, you have no data. Card X is produced on .15 micron, Card Y is produced on .13 micron. Card Y may in fact be clocked higher "for free" and may ultimately be cheaper to produce as well.
A transistor at .13 is 75% of the size of the same transistor at .15. NV30 has about 14% more transistors than R300. .13 transistors currently cost more than .15. Tell me again what is free? Oh, that's right, TANSTAAFL.
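To spell out the arithmetic (a quick sketch of my own; the 14% figure is the one above, and real die sizes depend on layout, so treat this as illustrative only): area scales with the square of the linear feature size.

[code]
# Die-area scaling behind the TANSTAAFL point above (illustrative arithmetic
# only; actual die sizes depend on layout, not just transistor count).
shrink = (0.13 / 0.15) ** 2   # ~0.75: per-transistor area at .13u vs .15u
transistor_ratio = 1.14       # NV30 has ~14% more transistors than R300 (figure quoted above)
relative_area = transistor_ratio * shrink
print(f"Per-transistor area, .13u vs .15u: {shrink:.0%}")
print(f"NV30 die area relative to R300, all else equal: {relative_area:.0%}")
# ~86% of the area; but if .13u currently costs more per transistor,
# the smaller die still isn't "free" today.
[/code]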
 
DemoCoder said:
UT2003 is way more CPU limited and in fact, I find 1280x1024 4xFSAA to be playable. Personally, however, I think it is unimpressive visually. It spends way more polygons on details that could just as easily have been normal mapped, and in the end it still looks rather conventional lighting-wise. The VAST majority of games on the market right now and shipping this year are not limited by bandwidth. Pick 20 games, and 18 of them will be CPU limited.

I personally am sick and tired of games that look bland lighting-wise. I'd rather have more games like Doom3, Halo2, or Splinter Cell even if I had to drop down to 2X FSAA or 1024x768. I am totally with Nvidia on this one. I want more cinematic quality from shaders. That's why we have DX9.

I don't want the same old bland-looking games, just with less aliasing.

For what it's supposed to be, UT2003 looks pretty damn impressive in my opinion. I also really enjoyed the game, but then I just stopped playing it, and now after a month or two of not playing I totally suck, LOL. I understand what you're saying about lighting, but I don't think you'll really see that in an online FPS game anytime soon (if ever). Maybe in single-player games and possibly online RPGs (although they have to be dumbed down for the masses who actually play them).

DemoCoder said:
The "get your money's worth" people should be buying 9500's and NV31's or whatever. The bleeding edge high end cards are never "worth it". I'm one of the "early adopter, get it first while it's fresh and hot" people. I don't mind paying for something that rapidly depreciates in value and may be lacking in content.

You hit the nail on the head! :)
 
OpenGL guy said:
DemoCoder said:
Pure speculation, you have no data. Card X is produced on .15 micron, Card Y is produced on .13 micron. Card Y may in fact be clocked higher "for free" and may ultimately be cheaper to produce as well.
A transistor at .13 is 75% of the size of the same transistor at .15. NV30 has about 14% more transistors than R300. .13 transistors currently cost more than .15. Tell me again what is free? Oh, that's right, TANSTAAFL.

#1: Keyword "ultimately". As production ramps and yields go up, the price per transistor will become cheaper on .13. What you say may be true now, but it won't be true much longer. And NVidia may eat any extra costs in terms of lower margins to establish a base, so it is not necessarily reflected in the end-user cost. They will make up margins later when the production costs go down.

#2: I said "clocked higher for free". In other words, you may choose .13, copper interconnect, or low-k for purposes other than clock rate, like power requirements or simple transistor density. Higher clock scaling may be a side benefit, in that the new process allows you to scale the clock more easily. You've already paid the cost up front for .13 for reasons OTHER THAN CLOCK; therefore being able to ramp the clock IS FREE.

Let's say I build a house with new energy-efficient windows. I paid the upfront cost for the purpose of saving on heating/utility costs. But what if the materials used for the windows had a side benefit? For example, perhaps they were more soundproof. Then I won't have to pay to "soundproof" my windows, as soundproofing came "free".
 
Crusher said:
Two years from now you'll be up to your eyeballs in DX8 games, and the only thing it will do is cause you to complain that there aren't enough DX9 games around yet.

That's nice, but that doesn't help the situation at all for the consumers that bought the DX8 card two years prior: by the time a game is written that would use that card's features... that two-year-old card is now too slow.

So in the end the ultimate reason to upgrade to a new card is to deliver features that can be used now; faster frames would be one, and I guess FSAA and eye candy would be another... but touting mile-long pixel shader lengths as a feature is a joke, as by the time a DX9 title is released that card is now considered slow...

Yes, this has been talked about before, and I'm not exactly pleased about the quality of the engines being released today: 2.5 GHz PCs with DX8 hardware getting 50 fps at medium detail... :rolleyes:
 
Joe DeFuria said:
There is none...unless the "unbalanced" card costs more. And technically, I'm talking about what it costs to make, not the actual selling price. (Because we all know that selling price in a competitive market is not dictated by cost.)
Company X could make an "unbalanced" card with a 100 MHz, 8-pipe core coupled to a 2 GHz, 1024-bit-wide memory interface. Company Y could make another "unbalanced" part that pairs a 5 GHz, 16-pipe core with 100 MHz, 256-bit DDR RAM.

BOTH of those products might be "absolutely faster" than Card Z: 100 MHz, 8 pipes, paired with 100 MHz, 256-bit DDR.

But I'd say Card Z is the "best" card, because Companies X and Y will either be charging a LOT more for their nominally superior parts, or they won't last too long as they lose money competing with Company Z on price.
What's the point? You don't know the cost; you know that it's the selling price that drives the market, not the cost price, and yet you still want to compare the "unbalanced" cost? How will we ever know a cost? What were the alternatives? What will it cost in the future? Who bears the cost? Etc.

A card which is faster and provides better quality is overall the best card, and balanced from my point of view (as a buyer). That's all that counts for me (the buyer) and for the chip maker (the seller).

There's no comparison between nominal superiority and price. Remember the GF2 Ultra? Some people are willing to pay for a bit more performance even if it costs much more.

Finally, how do you know that it will be Card Z which stays on the market? If it's not much more expensive, then I would go directly to Card X or Y. And in the long (or even short) run, card maker Z could vanish because of its lame product ;) , so who wins? ;)
 
Doomtrooper said:
Crusher said:
Two years from now you'll be up to your eyeballs in DX8 games, and the only thing it will do is cause you to complain that there aren't enough DX9 games around yet.

That's nice, but that doesn't help the situation at all for the consumers that bought the DX8 card two years prior: by the time a game is written that would use that card's features... that two-year-old card is now too slow.

So in the end the ultimate reason to upgrade to a new card is to deliver features that can be used now; faster frames would be one, and I guess FSAA and eye candy would be another... but touting mile-long pixel shader lengths as a feature is a joke, as by the time a DX9 title is released that card is now considered slow...

Yes, this has been talked about before, and I'm not exactly pleased about the quality of the engines being released today: 2.5 GHz PCs with DX8 hardware getting 50 fps at medium detail... :rolleyes:

I don't think you're even listening. Without the ability to use those features in the first place, developers will NEVER implement them and we'd be stuck in the same situation forever. nVidia's only real desire is to make money, but in order to do that, they have to keep a market open for themselves. Game development is a tough industry; many companies never survive. In order to survive they have to ensure that their games can be played on the maximum number of systems available, so they shoot for whatever is in wide circulation. It's a give-and-take situation, but it's quite obvious that the way this is laid out, the hardware HAS to lead, if only to sustain itself.

Touting "mile long pixel shader lengths" is not a joke, its a necessity, and an obvious evolution of the hardware. If noone ever implemented them, it would be like the proverbial horse chasing the carrot...only in this case the horse would be standing still :rolleyes:
 
but touting mile-long pixel shader lengths as a feature is a joke, as by the time a DX9 title is released that card is now considered slow...

I don't recall this same sentiment when a certain board sported PS 1.4 support. It was regarded as something that went beyond the initial GF3/DX8 spec, and was considered a "good thing."

While I fully understand the whole concept in that buying something with more features doesn't translate to a heck of a lot without game support...the part about it being a "joke" is...err...a joke :0
 
Doomtrooper said:
That's nice, but that doesn't help the situation at all for the consumers that bought the DX8 card two years prior: by the time a game is written that would use that card's features... that two-year-old card is now too slow.

So my 2-year-old GTS is too slow for the DX7 games it was built for? Funny, Dungeon Siege seems to run fine on it. So does NOLF 2. UT2003 isn't as speedy as it could be, but I think that has more to do with my 850 MHz CPU than the video card.

Doomtrooper said:
So in the end the ultimate reason to upgrade to a new card is to deliver features that can be used now

That's part of it. Another part is to create a user base for features to be used in the future. Also, there are people out there who will be content with the performance a Radeon 9700 Pro or a GeForce FX will give them in DX9 applications when they arrive, not to mention the 2 years of relatively good performance they'll get out of the card in the meantime.

Yes, this has been talked about before, and I'm not exactly pleased about the quality of the engines being released today: 2.5 GHz PCs with DX8 hardware getting 50 fps at medium detail... :rolleyes:

I think the typical response to this type of comment is "so go make a better one yourself and stop bitching" :) There are plenty of DX7 game engines out there that run just fine on DX7 hardware, let alone DX8/DX9 hardware. Probably more than you're aware of.
 
DaveBaumann said:
#1 This is no different on the R300. The available bandwidth per clock is comparable.

It was quite interesting - at our launch they actually admitted that it didn't have enough bandwidth to output all eight pixels.

However, something struck me the other day - NV30 is pretty much the same as a Radeon 9500 PRO, just with nearly twice the clock speed on both the core and memory. People should look for comparisons between the 9500 PRO and 9700 to see how a 128-bit bus will constrain an 8-pipe card.
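A rough sketch of that per-clock comparison, using the announced and commonly cited clocks (treat the exact numbers as assumptions):

[code]
# Bandwidth per core clock: a rough way to see the NV30 / 9500 PRO likeness
# quoted above. Clocks are the announced/commonly cited ones (assumptions).
cards = {
    #             core MHz, mem MHz (DDR), bus bits
    "NV30":        (500, 500, 128),
    "R9500 PRO":   (275, 270, 128),
    "R9700 PRO":   (325, 310, 256),
}
for name, (core, mem, bus) in cards.items():
    bw_gbs = mem * 2 * (bus / 8) / 1000      # GB/s; DDR = 2 transfers per clock
    per_clock = bw_gbs * 1e9 / (core * 1e6)  # bytes of bandwidth per core clock
    print(f"{name}: {bw_gbs:5.2f} GB/s, {per_clock:4.1f} bytes/core-clock")
# NV30 and the 9500 PRO both land around 32 bytes per core clock on their
# 128-bit buses; the 256-bit 9700 PRO gets roughly double that per clock.
[/code]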

But something that weighs on my mind is: why do the R9500/R9500 Pro use such an expensive chip and only a 2x64-bit memory controller? I'd prefer 4x32-bit. Maybe one day we will see a low-cost PCB and a real cut-down version of the chip later. I think ATI needs to pay more attention to the mainstream market, because we love competition; we want a real high price/performance-ratio product to put in the sig, that's all.
 
Mr.huang said:
DaveBaumann said:
#1 This is no different on the R300. The available bandwidth per clock is comparable.

It was quite interesting - at our launch they actually admitted that it didn't have enough bandwidth to output all eight pixels.

However, something struck me the other day - NV30 is pretty much the same as a Radeon 9500 PRO, just with nearly twice the clock speed on both the core and memory. People should look for comparisons between the 9500 PRO and 9700 to see how a 128-bit bus will constrain an 8-pipe card.

But something that weighs on my mind is: why do the R9500/R9500 Pro use such an expensive chip and only a 2x64-bit memory controller? I'd prefer 4x32-bit. Maybe one day we will see a low-cost PCB and a real cut-down version of the chip later. I think ATI needs to pay more attention to the mainstream market, because we love competition; we want a real high price/performance-ratio product to put in the sig, that's all.

You're not the CEO of a certain company, are you, Mr. Huang???
:eek:
 
Crusher said:
So my 2-year-old GTS is too slow for the DX7 games it was built for? Funny, Dungeon Siege seems to run fine on it. So does NOLF 2. UT2003 isn't as speedy as it could be, but I think that has more to do with my 850 MHz CPU than the video card.


...read my previous post: a decent DX7-class video card with a very fast platform is a better buy today than a state-of-the-art DX9 card... and DX9 is not even here yet, STILL.
I am stating the same damn thing... If you think my complaint about poor development support for DX8- and DX9-class hardware is just me, try reading through the Dungeon Siege forums, or better yet the UT2003 ones... 2.5 GHz processors and GeForce 4s and 8500s hitting 20-30 fps... if you want to defend that, be my guest.
 
Typedef Enum said:
but touting mile-long pixel shader lengths as a feature is a joke, as by the time a DX9 title is released that card is now considered slow...

I don't recall this same sentiment when a certain board sported PS 1.4 support. It was regarded as something that went beyond the initial GF3/DX8 spec, and was considered a "good thing."

While I fully understand the whole concept in that buying something with more features doesn't translate to a heck of a lot without game support...the part about it being a "joke" is...err...a joke :0

Yes, it was considered a good thing, yet it's been almost 2 years now since DX8-class hardware came on the market and there are still really no DX8 engines out there... so bragging about a pixel shader that will not be used for another two years (PS 2.0)... still waiting for 1.3 and 1.4 support :LOL:

...is just plain silly. My faith in PC games has dwindled greatly over the last couple of years: IHV-specific enhancements for simple effects that any card can do, and poor support from the developers, have made a console look better and better.
 
Mulciber said:
Touting "mile long pixel shader lengths" is not a joke, its a necessity, and an obvious evolution of the hardware. If noone ever implemented them, it would be like the proverbial horse chasing the carrot...only in this case the horse would be standing still :rolleyes:

It's a joke IMO, since realistically, looking at developer support so far, they always state they code for the user base... since DX6- and DX7-class hardware dominate the user base, for a developer to use some insane PS 2.0 DX9 (or above) feature when they have not even gotten around to DX8 is not realistic.
What are they going to do... totally avoid DX8-class hardware? (Now that there are economical DX8 cards in the mainstream with the Radeon 9000 and Xabre.)

So until I see the developers start catching up to the hardware, this PC gamer will be more concerned about platform power vs. graphics power.
 
The "get your money's worth" people should be buying 9500's and NV31's or whatever. The bleeding edge high end cards are never "worth it".

Usually you are correct wrt the PC market.

However, the situation we have at the high end of the graphics card spectrum is not the same at this time, which is what makes the 9700 so "special."

Which card is typically regarded by most as the "best bang for buck" right now, the Ti 4200?

The cheapest 64 MB 4200 (pricewatch) is listed at $109. The cheapest 128 MB 9700 is listed at $310. (U.S. dollars)

In high res (1280x1024 and higher) with AA and aniso, the 9700 will quite often (typically, I believe) beat a 4200 by as wide a margin as, or wider than, the factor of its price differential.

I would actually argue that the best bang-for-buck card you can buy right now is the 9700 Pro. That doesn't mean that everyone can afford to buy one, of course, but those who do buy one are in fact getting their money's worth relative to the cheaper, previous-generation parts. We haven't had this situation since the V2-SLI days.
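To put rough numbers on "money's worth": the prices are the Pricewatch ones above, while the roughly 3x performance edge at high res with AA and aniso is an assumed illustration, not a measured benchmark.

[code]
# Perf-per-dollar sketch for the 9700 vs Ti 4200 argument above.
# Prices: Pricewatch figures quoted in the post. Perf: assumed relative
# performance at 1280x1024+ with AA and aniso (illustrative, not measured).
ti4200_price, ti4200_perf = 109, 1.0   # baseline
r9700_price,  r9700_perf  = 310, 3.0   # assumed ~3x the 4200 in that setting

price_ratio = r9700_price / ti4200_price
perf_ratio  = r9700_perf / ti4200_perf
print(f"Price ratio: {price_ratio:.2f}x; performance ratio: {perf_ratio:.1f}x")
print(f"Perf per dollar vs the 4200: {perf_ratio / price_ratio:.2f}x")
# If the performance ratio matches or exceeds the price ratio, the pricier
# card is at least as good a "bang for buck" by this metric.
[/code]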

In any case, I'm not talking strictly about "getting your money's worth" in the sense of comparing these high-end cards to the cheaper price bracket. I'm talking about comparing these high-end cards to one another.

I don't mind paying for something that rapidly depreciates in value and may be lacking in content.

Are you telling me that you will just buy "the most expensive" card out there, irrespective of the competition and the actual value of the card? In other words, if the NV30 and R300 were released at the same time, you would just buy whichever one is more expensive?

Somehow, I doubt that.

I know what you meant to say: that if one card costs more than the other, you would buy it as long as the product is in most respects demonstrably superior, or at least superior in some aspect that you consider of the utmost importance.

That's fine... I just don't believe that you are not concerned about "getting your money's worth." Or would you have bought your 9700 if it cost $2000?
 