GeFX canned?

duncan36 said:
It seems like Anand may have had a good idea this was coming; it seemed a little odd at the time that he'd include GeForce FX scores at 400/800, but now it makes a little more sense.

We wanted to wait and do the non-Ultra review when we had the actual card in our hands. Also, we knew after seeing the Ultra numbers that the non-Ultra is still not going to be a 9500 Pro killer. Hopefully we will see more optimized drivers soon.
 
Dave H said:
demalion said:
I said:
Perhaps yields in the 500 MHz bin were too low, or there are cost problems for 500 MHz DDR-II, but if that's the case, better to quietly scale back volume of the Ultra part rather than cancel it altogether. After all, "Ultra" parts are allowed to be rare.
Hmm...I see what has happened as a "'terminal' volume scale back", not a "cancel it altogether". The Ultra parts will be rare, not non-existent.
Except that this is a decision that can and will be reported as hard news, and very embarrassing news. No Ultras at retail; all further Ultra production cancelled. Whereas if it was just a matter of Ultras being very rare in retail, probably very few people would notice and those that did wouldn't make a big deal of it. I mean, 9500 Pros are said to be very rare in retail, but no one doubts that's a real part or "shouldn't count" when discussing ATI's lineup.

My only guess is that perhaps Nvidia wanted to keep this quiet and thought they could get away with it...?

I think such news is GREATLY magnified in our perception relative to its impact on the perception of consumers in general.

I considered going with a really large font too, but it would have just been annoying and still failed to get across the full import of my thought. ;)
 
Mulciber said:
1. you cannot confirm that 400MHz DDR1 dissipates less heat than 400MHz DDR2.

The DDR2 memories move the signal termination off the board and into the DRAM and GPU. So all of the power (and thus heat) that was formerly used by the external termination is now in the DRAM itself. Putting more power into the same space means more heat is created. Look at the datasheets for the two memories and you'll see that the maximum power for the DDR2 parts is 4.5W, while the max for the DDR1 device is 3.3W.

While you're there, look at how they created the termination: it is a pair of resistors on each signal line, one connected to power, one connected to ground. While the termination is enabled, those resistors are constantly consuming power and generating heat.
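To put those datasheet maximums in rough perspective, here is a quick back-of-the-envelope sketch. The per-device figures are the maximums quoted above; the assumption that a 128MB card uses eight 128Mbit devices is mine, and these are absolute-maximum ratings rather than measured operating power.

Code:
# Worst-case memory power for a 128MB card, assuming eight 128Mbit devices
# (my assumption) and the datasheet absolute-maximum figures quoted above.
DEVICES = 8
DDR1_MAX_W = 3.3   # 350MHz DDR1 device maximum
DDR2_MAX_W = 4.5   # 400MHz DDR2 device maximum

print("DDR1 worst case:", round(DEVICES * DDR1_MAX_W, 1), "W")              # 26.4 W
print("DDR2 worst case:", round(DEVICES * DDR2_MAX_W, 1), "W")              # 36.0 W
print("difference:", round(DEVICES * (DDR2_MAX_W - DDR1_MAX_W), 1), "W")    # 9.6 W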

Mulciber said:
2. you cannot show me a "spec sheet" or anything of the sort that shows the latency of Samsung's GDDR-II OR the latencies of DDR1 at 400MHz (as it doesn't even exist on a product yet).

3. you cannot tell me one reason why slightly higher latency would have any sort of negative impact on the GeForce FX. It DOES NOT access memory in the same patterns as a CPU, and the comparison cannot be made.

The spec sheet for the DDR2 Samsung parts that Nvidia is using is here: http://www.samsungelectronics.com/semiconductors/Graphics_Memory/GDDR-II_SDRAM/128M_bit/K4N26323AE/ds_k4n26323ae_rev17.pdf
For comparison, here is their latest 350MHz DDR1 device: http://www.samsungelectronics.com/semiconductors/Graphics_Memory/DDR_SDRAM/128M_bit/K4D263238E/ds_k4d263238e-gc.pdf

For latencies, things like CAS latency, RAS to CAS delay, and row cycle time are important factors. The 400MHz DDR2 part is 5, 7, and 18 clock cycles for those three. The DDR1 part is 4, 5, and 15, but if you scale that from 350 to 400 (so the speeds are comparable), you wind up with 5, 6, and 17. So the DDR2 parts do have higher latency.
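To make that scaling step explicit, here is the arithmetic as a short Python sketch (the cycle counts are just the ones quoted above):

Code:
# Scale the 350MHz DDR1 latencies (in clock cycles) to 400MHz so the absolute
# time per operation stays the same, then round to the nearest whole cycle.
ddr1_350 = {"CAS latency": 4, "RAS-to-CAS delay": 5, "row cycle time": 15}
ddr2_400 = {"CAS latency": 5, "RAS-to-CAS delay": 7, "row cycle time": 18}

for name, cycles in ddr1_350.items():
    scaled = round(cycles * 400 / 350)  # same nanoseconds, faster clock
    print(f"{name}: DDR1 {cycles} @350 -> ~{scaled} @400, vs DDR2 {ddr2_400[name]} @400")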

Now the only people who can tell you for sure what kind of effect latency has on the FX are Nvidia engineers. However since they would've undoubtedly designed their controller to support a variety of values, you should be able to take an FX and program it to have higher values than necessary and see what kind of benchmark results you get.
 
RGB said:
Mulciber said:
1. you cannot confirm that 400MHz DDR1 dissipates less heat than 400MHz DDR2.

The DDR2 memories move the signal termination off the board and into the DRAM and GPU. So all of the power (and thus heat) that was formerly used by the external termination is now in the DRAM itself. Putting more power into the same space means more heat is created. Look at the datasheets for the two memories and you'll see that the maximum power for the DDR2 parts is 4.5W, while the max for the DDR1 device is 3.3W.

While you're there, look at how they created the termination: it is a pair of resistors on each signal line, one connected to power, one connected to ground. While the termination is enabled, those resistors are constantly consuming power and generating heat.

Mulciber said:
2. you cannot show me a "spec sheet" or anything of the sort that shows the latency of Samsung's GDDR-II OR the latencies of DDR1 at 400MHz (as it doesn't even exist on a product yet).

3. you cannot tell me one reason why slightly higher latency would have any sort of negative impact on the GeForce FX. It DOES NOT access memory in the same patterns as a CPU, and the comparison cannot be made.

The spec sheet for the DDR2 Samsung parts that Nvidia is using is here: http://www.samsungelectronics.com/semiconductors/Graphics_Memory/GDDR-II_SDRAM/128M_bit/K4N26323AE/ds_k4n26323ae_rev17.pdf
For comparison, here is their latest 350MHz DDR1 device: http://www.samsungelectronics.com/semiconductors/Graphics_Memory/DDR_SDRAM/128M_bit/K4D263238E/ds_k4d263238e-gc.pdf

For latencies, things like CAS latency, RAS to CAS delay, and row cycle time are important factors. The 400MHz DDR2 part is 5, 7, and 18 clock cycles for those three. The DDR1 part is 4, 5, and 15, but if you scale that from 350 to 400 (so the speeds are comparable), you wind up with 5, 6, and 17. So the DDR2 parts do have higher latency.

Now the only people who can tell you for sure what kind of effect latency has on the FX are Nvidia engineers. However since they would've undoubtedly designed their controller to support a variety of values, you should be able to take an FX and program it to have higher values than necessary and see what kind of benchmark results you get.

Thank you for finding those documents for us. It will take a while to read through all of them.

A couple of things. The maximum of 4.5W stated there is the absolute maximum dissipation for the particular parts; I couldn't find an actual operating power dissipation for specific parts/clocks. I won't dispute that DDR2 will probably run hotter than DDR1 because of on-die termination as well as other things, but 0.5W one way or the other really isn't going to make much of a difference.

Yes, the latency is definitely higher on DDR2 as well. My main point was that memory read/write patterns are not the same on a VPU as they are on a CPU. From my understanding, it would be very simple for nVidia to work around the latency through pipeline and cache timing (especially since they had DDR2 in mind during the design phase), much like the P4 has done with RDRAM, so that it becomes a non-issue. Latency doesn't really make much difference to performance if the pipelines are working at maximum efficiency and are being kept fed, AFAIK.

My problem with people stating these "problems" as if they were facts was that they simply couldn't provide me with valid documentation, only speculation. I really appreciate these documents; it'll take me a while to finish reading through them ;)
 
I think such news is GREATLY magnified in our perception relative to its impact on the perception of consumers in general.

True, but then again consumers in general don't buy $300 video cards at retail! I think a large fraction of those in the market for an NV30/R300-based add-in card will hear enough about this fiasco to have a significantly lower opinion of Nvidia than they started with (not that they will necessarily hear an accurate version of events). I think those who purchase an NV30/R300-based card as part of an OEM computer won't necessarily hear anything, but then again the Ultra was never going to make it in the OEM market anyway.

Consumers in general will not hear, or care to hear, about any of this stuff. However, consumers in general often get computer purchasing advice from someone they know who is a little bit nerdier than they are, who in turn might learn what he knows from someone a bit nerdier than he, and so on in a giant version of the game "telephone". The point is that events like this do tend to significantly change brand perception amongst at least a decent fraction of consumers, only the changes are often based on misinformation, often come months too late, and linger for years too long.

For example, ATI is only now crawling out from under the stigma of bad drivers, even though this problem has been essentially cured for many months. Similarly, ATI is only beginning to gain a reputation for having faster high-end cards, even though they've been demolishing the best Nvidia has to offer for 6 months now. Or look at AMD: they were considered a joke by most until maybe 6 or 9 months into the Athlon's "golden age", when it truly outran the best Intel had to offer; now the general consensus among self-appointed enthusiasts is still absurdly pro-AMD and anti-P4, even though AMD hasn't had a competitive part in about a year and the advantages of the P4 design have really started shining through.

You will of course counter that the above paragraph only refers to the extremely small (which is probably a good thing, all things considered :oops: ) slice of humanity that reads about the performance of PC components on the Internet, and point out that the purchaser of the average PC (including business PCs in particular) still has an overrated perception of Intel and an underrated perception of AMD. True enough.

But again, the collection of purchasers of mid-to-high-end 3d cards is a much narrower demographic, and much more closely related to the read-about-computer-components-on-the-Internet crowd. And, incidentally, AMD still manages to charge obscenely high prices for AthlonXPs--they actually get the same for an AXP with "quantispeed rating" of x(+) as what Intel gets for an x MHz P4!--so the pro-AMD perception has obviously filtered down to enough people. Meanwhile, AMD couldn't get anywhere near parity pricing for an Athlon clocked the same as a PIII, or even for a 1.4 GHz Thunderbird compared to a 1.4 GHz P4! The upcoming comparison between, say, the 3200+ Barton and a 3.2 GHz P4 with 800 MHz FSB will put the performance gap into near-Cyrix territory, but I doubt anyone will notice. (Of course this is all a good thing insofar as it keeps AMD alive.)

But I digress. Message: I disagree. GFfx might not hurt Nvidia's brand perception among ordinary consumers yet, but it has already hurt them in the high-end, and it will probably trickle down to the masses, even if by that time it is no longer deserved.
 
As much as I concur with everything else you said, your AMD vs Intel information is so far off base it is nearly offensive ;)
You might want to switch out the Intel chip you're using and get a real chip, like an AXP. If you think for a second that my Athlon 2GHz doesn't maintain parity with my friend's P4 2.53GHz on a 533MHz bus, then you're just deluding yourselfs. Oh and mine cost quite a bit less. :oops:
 
As much as I concur with everything else you said, your AMD vs Intel information is so far off base it is nearly offensive
You might want to switch out the Intel chip you're using and get a real chip, like an AXP. If you think for a second that my Athlon 2GHz doesn't maintain parity with my friend's P4 2.53GHz on a 533MHz bus, then you're just deluding yourselfs. Oh and mine cost quite a bit less.

First off, I'm posting this on a 1.2 GHz Thunderbird, purchased back when AMD could still compete on price/performance. (My other computer is a laptop with a PIII-M, the only current choice for a thin-'n-light.) Thanks for the tip, the unwarranted assumption, and the condescension. :rolleyes:

Second, you may have paid significantly less for your 2400+ than for a similarly "rated" P4, but these days the price difference is not so great. Although I noticed just now that, according to Anand's weekly price roundup, AXP prices dropped a lot last week while P4 prices did not. Nonetheless, the 2800+ still costs more than the 2.8 GHz P4, and the 2600+ still costs about the same as the average of the 2.53 GHz and 2.66 GHz P4s. The 2400+ (after a $25 drop this week) is $38 cheaper than a 2.4 GHz (533 FSB) P4, and $80 cheaper than a 2.53...but as we'll see, it's a lot slower as well.

Third, it is you who are "deluding yourselfs". Since I was curious and apparently in an extraordinarily nerdy mood (or secretly offended by the suggestion that I buy a new PC?), I decided to take all the benches from Anand's 2800+ review (the first thing I could find with scores for the 2.8 GHz P4, 2.53 GHz P4, 2.4 GHz P4 (533 FSB), 2800+ and 2400+) and take the geometric mean.

(In case you're unfamiliar with the properties of the geometric mean: it ensures that each benchmark receives equal weight in the final score, regardless of the scale of its raw numbers. The results are unitless but directly comparable. In case you want to follow along at home, you compute the geometric mean by taking the product of all n benchmark scores and raising it to the 1/n power. Remember to take the reciprocal for all "lower is better" scores.)
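If you want to reproduce the method, here is a minimal Python sketch (the numbers are made up purely for illustration; they are not Anand's results):

Code:
# Composite score = geometric mean of per-benchmark results, with
# "lower is better" scores inverted first; illustrative numbers only.
from math import prod

def composite(benchmarks):
    # benchmarks: list of (score, higher_is_better) pairs for one CPU
    adjusted = [s if hib else 1.0 / s for s, hib in benchmarks]
    return prod(adjusted) ** (1.0 / len(adjusted))

# Hypothetical data: two fps scores (higher is better), one encode time in
# seconds (lower is better).
cpu_a = [(150.0, True), (82.0, True), (41.0, False)]
cpu_b = [(138.0, True), (76.0, True), (44.0, False)]

# Renormalize so cpu_b = 1, as the tables below do for the AXP 2400+.
print(round(composite(cpu_a) / composite(cpu_b), 3))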

And the results for these 15 benchmarks? (Note: all scores renormalized to the performance of your AXP 2400+, so you can see at a glance exactly how much performance you're missing! :devilish: )

2.8 GHz P4: 1.191
2.53 GHz P4: 1.108
2.4 GHz P4: 1.070
AXP 2800+: 1.092
AXP 2400+: 1

AMD's best chip can't even match the 2.53 GHz P4! And your 2400+, sorry to say, is lagging your friend's chip by almost 11%!

Now, some could argue with Anand's benchmark selection, although frankly the only thing that's gonna get the AXP back in the race is something like ScienceMark, and I doubt either you or your Intel Inside friends spend your spare time calculating the thermodynamic behavior of liquid argon atoms. :LOL:

But an argument can be made that including the SSE2-enhanced Lightwave scores is somehow unfair to the AXP, even though, if anything, SSE2 software is underrepresented in this comparison relative to its place among modern CPU-taxing software. So, the results with those tests removed:

2.8 GHz P4: 1.128
2.53 GHz P4: 1.061
2.4 GHz P4: 1.032
AXP 2800+: 1.084
AXP 2400+: 1

Well, at least the 2800+ can beat the 2.53 GHz P4 now! But still not anywhere near close to equivalence.

Or maybe the only performance-hungry application you use your computer for is playing games. That's respectable. Here the results may be a little less representative since the sample size is only 4--not to mention the fact that video card performance will probably compress the scores a bit--but let's find out if you can at least keep up with your friend's machine here:

2.8 GHz P4: 1.096
2.53 GHz P4: 1.045
2.4 GHz P4: 1.026
AXP 2800+: 1.076
AXP 2400+: 1

Sorry, no.

Thing is, when AMD started their little rating system, an AXP "point" was roughly equal to, or even a tad faster than, a Thunderbird MHz. After 1000 points scaled according to the very scientific formula 66 MHz = 100 points--during which time the P4 bumped its FSB, enlarged its cache and underwent some other low-level enhancements--an AXP point was suddenly slower than even the dreaded and derided P4 MHz! (Another important point: since that time, more and more applications (particularly performance-critical ones) are compiled with modern compilers that avoid certain behavior, like unaligned memory accesses, which incur an unnecessary performance penalty on all modern architectures but a larger one on the P4.)
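As a rough illustration of that original scaling, here is a small sketch; the 1500+ = 1333 MHz anchor is my assumption for illustration, and (as noted just below) the formula was revised starting with the 2400+:

Code:
# Illustration of the original "66 MHz = 100 points" AXP rating scale.
# The 1500+ = 1333MHz anchor is an assumption made for this sketch.
def axp_rating(mhz, base_mhz=1333, base_rating=1500):
    return base_rating + (mhz - base_mhz) * 100 / 66.7

for mhz in (1333, 1400, 1533, 1667, 1800):
    print(f"{mhz} MHz -> ~{round(axp_rating(mhz), -1):.0f}+")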

Since the 2400+, the formula has changed a bit, but still not enough to keep up, as evidenced by Anand's results. And current indications are that it will get even worse when Barton hits. (As if the 9% lead of 2.8 GHz P4 over AXP 2800+ isn't enough!)

So, nice try, thanks for playing.

PS - you might want to switch out the AMD chip you're using and get a real chip, like a P4.

Chump. :p
 
That reply took you several hours, and convinced me of nothing ;) I do quite enjoy that. (Note: no, I don't happen to use Lightwave, and neither does my friend.)

So your conclusion is that a .026 deviation in performance is equivalent to "Cyrix territory!". Sorry, you might wanna add a Cyrix in there and check your math a little, "chump" :p

When you do your price comparison, you may want to throw in the fact that in order to attain that performance, per Anand's review, you must be using the i850E chipset with Rambus memory.

I can pick up an Epox nForce2 for $80, 512MB of PC2700 for $75, and an AXP 2400+ for $160. The least expensive i850E board is going to cost you around $140; add $120 for PC800 RAM and $229 for the P4 2.53GHz. Add it all up and what do you get? You get a price difference of $170...and all for a 2.6% speed increase. ;)
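Spelling out the totals behind that claim (using exactly the prices quoted above):

Code:
# Totals implied by the component prices quoted above.
amd = {"Epox nForce2": 80, "512MB PC2700": 75, "AXP 2400+": 160}
intel = {"i850E board": 140, "PC800 RDRAM": 120, "P4 2.53GHz": 229}

amd_total = sum(amd.values())      # 315
intel_total = sum(intel.values())  # 489
print(amd_total, intel_total, intel_total - amd_total)  # difference: 174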

Thanks for taking this waaay off topic. Why don't you just admit your petty AMD/Cyrix comparison is just a fanboy's dream ;)
 
Dave H said:
It'd be interesting to know exactly what happens when a correctly functioning card is returned at retail. Presumably the store sends it back to the card OEM and the OEM swallows almost all of the cost. But do they then test the card and, if it passes, put it in a new box and sell it again? Or at least try to sell it as a refurb? If not, I bet there's some provision in their contracts for Nvidia to share part of the cost of returns.

I'm very sure they re-box it, or at least sell it as refurb. They don't just throw it in the trash, for sure.
 
Wow! Well, I paid $51 for an XP 1700 T-Bred B (C 8)8)L) and $50 for an XP333, and overclocked it to XP 2400 levels and beyond :D. I really couldn't do that with Intel: no $50 chips worth buying, nor any good $50 mobos ;).
 
Mulciber-

You keep mentioning a supposed "2.6% performance difference", as if that was the difference in performance between your 2400+ and your friend's 2.53 GHz P4. In fact, that difference is 10.8%, not 2.6%. The 2.6% figure refers to the difference in a small, partially GPU-limited subset of the tests, between your 2400+ and a 2.4 GHz P4, not a 2.53 GHz P4.

As for Lightwave, the point is not whether you happen to use it but whether it broadly represents a class of CPU-hungry programs you probably will use during the lifetime of your computer, i.e. SSE2-enabled programs, and the answer is "yes".

As for your "pricing comparison": thanks for pointing out that the Intel systems were using PC800! I'd assumed (without even looking) from the immense margin of victory that they had to be using PC1066!

845PE with PC2700 is considerably faster than 850E with PC800, so the only price difference once again becomes the CPU, and you can add another, oh, say, 2.6% (since you like that number so much) to the performance gap. (That gets us to 13.4%, BTW.)

You are right about one thing, though: it was obviously a waste of my time to try to convince you of anything in writing as you obviously lack basic reading comprehension. :oops:
 
Mulciber said:
That reply took you several hours, and convinced me of nothing ;) I do quite enjoy that. (Note: no, I don't happen to use Lightwave, and neither does my friend.)

And this surprises whom? While I am an avid user of AMD myself, only a blind person would say that they are competitive with Intel... The only exception is at the lower end, where the under-$150 (mail order) AMD processors absolutely slay their Intel counterparts...
 
The other consideration is the motherboards that go with these two CPU brands. In general, Intel-chipset mobos are very stable and require little maintenance in the form of driver updates.
The name "VIA 4 in 1" is the computer hardware industry's curse and is probably the cause of more RMAs / technical support time than any other single problem.
 
THe_KELRaTH said:
The name "Via 4 in 1" is the computer hardware industries curse and is probably the cause of more RMA's / technical support time than any other single problem.

This sounds pretty much like "ATI's drivers suck".
There have been problems with driver installations, of course, but since Windows XP (which includes the necessary drivers) these issues belong to the past. That's 1.5 years now.

I hated messing with the VIA GART driver back in the days of the MVP3 chipset, but those days are looong gone now. And it's easy to find egg to throw in Intel's face too.

For just about anyone, motherboard chipsets are quite stable these days, at least from Intel, VIA and SIS. nVidia are said to have done a decent job with their nForce2 as well.

BTW, speaking as an exceedingly old fart in computational science: problems that used to run overnight when I started out in my field now complete in a second. How anyone could get worked up over the exceedingly small differences (either way) between AMD's QuantiSpeed and Intel's MHz is quite frankly beyond me. The whole QuantiSpeed business is distasteful from a technical standpoint, of course, but to start arguing over a few percentage points just doesn't make sense given how application-dependent the relative performance of these processors is.

But have fun doing it. :)

Entropy
 
While I'm not here to argue against the idea that AMD's competitiveness is lacking at the moment, or to get involved in a "new" type of "vendor bias" clash, I felt it necessary to point out some things:

One of the tests, which appears in more than one of the benchmark scores, was SYSmark 2002, and this benchmark's validity has been questioned (perhaps an understatement).

It would be nice if it were discussed whether SSE could be utilized for the Athlon XP in those benchmarks where SSE2 optimizations were tested, and whether it actually was. Perhaps it is a given that utilizing SSE does not offer significant opportunities for optimization, or does not perform well, on the Athlon XP?

To factor such things into a composite evaluation without regard to these issues is not a good idea; in fact, I think it is a worse idea than examining the benchmarks in isolation. I'd recommend just sticking to the actual benchmark results and discussing them individually, as there are plenty of comparisons that can be made on those grounds.

A minor point for the game benchmarks is that different cards (or, perhaps more accurately, their drivers) interact with each CPU architecture in different ways...what you are discussing in particular is GeForce4 Ti 4600 game performance on those systems. I'm not sure whether the picture becomes "worse" or "better", whichever side you're on, with other cards.
 
Nagorak said:
Dave H said:
It'd be interesting to know exactly what happens when a correctly functioning card is returned at retail. Presumably the store sends it back to the card OEM and the OEM swallows almost all of the cost. But do they then test the card and, if it passes, put it in a new box and sell it again? Or at least try to sell it as a refurb? If not, I bet there's some provision in their contracts for Nvidia to share part of the cost of returns.

I'm very sure they re-box it, or at least sell it as refurb. They don't just throw it in the trash, for sure.

From my personal experience with smaller retailers and distributors, retail returns it to the distributor. The distributor tests it and, if it's OK, places it on the shelf for future RMAs. If there are more than a couple on the shelf, then they get returned to the manufacturer (who probably also keeps them on the shelf for RMAs). Neither retail nor distribution channels will keep a defective part if they can get credit for returning it.

I don't believe that anyone can rebox a card and sell it as new, as I believe there are laws in place to prevent that.
 
THe_KELRaTH said:
The other consideration is the motherboards that go with these two CPU brands. In general, Intel-chipset mobos are very stable and require little maintenance in the form of driver updates.
The name "VIA 4 in 1" is the computer hardware industry's curse and is probably the cause of more RMAs / technical support time than any other single problem.

Been using VIA 4 in 1s for a long time...and haven't had a single problem since the KT133 chipset... AND I use Creative Labs soundcards! The biggest problem that has been attributed to the VIA chipsets is their compatibility with Creative Labs, but Intel has as many problems with them. Entropy is absolutely right!
 