ben6 said:
A friend of mine was doing something in Nvidia's labs, and said he saw working GeForce FXs with cores at 550MHz and higher. Interestingly enough, Digit-Life saw the same thing:
http://www.digit-life.com/articles2/gffx/index3.html
EDITED: uh, for those that already read the other figure, sorry, that was a no-no
So, does your friend have any idea of the voltages and cooling employed?
Here's what the digit.life article said:
"Besides, it's possible they will release NV30 versions working at a higher core speed - 550 or even 600 MHz - in the NVIDIA's lab the first chips were running error free at such frequencies."
Notice there are no comments on voltages or cooling--conspicuously absent, I'd say. Further, I would not be surprised to see current R300s in ATI's labs running at ~500MHz or more under certain controlled conditions, too (this is the kind of thing labs do, after all). These kinds of anecdotes mean almost nothing as to what the capabilities of production chips might be.
One thing really stood out for me, though, in the Digit-Life article. Whenever and wherever there was a comparison between similar aspects of the technology employed in nv30 and R300, the author gives the nod to nVidia's nv30 version of whatever that technology happens to be--even though he admits he has no certain knowledge and is always estimating. He consistently rates nv30 as superior in areas where he admits a total lack of first-hand experience. I wasn't very impressed. It'll be nice when nv30 ceases being vaporware and ships, so that we can get some concrete appraisals.
Here's but one example:
"Note that the claimed effective memory bandwidths are equal [60 GB/s, Digit-Life says, with no attribution as to the source of these numbers]! Well, we can't verify it as the memory optimization techniques can't be disabled. But we estimate the real efficiency of these algorithms as 1.5 times for ATI and 2.0 times for NVIDIA of a physical bandwidth in typical scenarios. That is why an effective bandwidth is probably about 28 GB/s for ATI and 32 GB/s for NVIDIA, which can provide at least a 30% gain for NVIDIA in typical applications."
Actually, I've seen ATI talk of up to 176 GB/s--and I have never seen a static "60 GB/s effective bandwidth" number bandied about by ATI. ATI stated it preferred to quote physical bandwidth numbers and make no mention of effective bandwidth through compression simply because, by its own account, it wasn't convinced anyone would believe those numbers.
But you can see here how, out of the blue and with no explanation, he decides the algorithms are 25-30% more efficient for nv30 than for R300--never having held an nv30 product in his greedy little hands for testing purposes. There are several such examples, unfortunately. It would have been nice to hear his reasoning for these estimates.
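For what it's worth, here's a quick back-of-the-envelope check of where his figures might come from (a sketch, not anything from the article): the physical bandwidths below are my assumptions from the commonly published specs--roughly 16 GB/s for a 128-bit, 500MHz DDR-II nv30 and roughly 19.8 GB/s for a 256-bit, 310MHz DDR R300--while the 1.5x and 2.0x efficiency multipliers are the article's unexplained guesses:

    # Back-of-the-envelope check of the article's "effective bandwidth" claims.
    # The physical bandwidths are assumed from commonly published specs, NOT
    # taken from the article; the 1.5x/2.0x factors are the article's guesses.

    def physical_bw_gbs(mem_clock_mhz, bus_bits, data_rate=2):
        """Peak physical bandwidth in GB/s: clock x data rate x bus width."""
        return mem_clock_mhz * 1e6 * data_rate * (bus_bits / 8) / 1e9

    nv30_phys = physical_bw_gbs(500, 128)   # ~16.0 GB/s (assumed spec)
    r300_phys = physical_bw_gbs(310, 256)   # ~19.8 GB/s (assumed spec)

    nv30_eff = nv30_phys * 2.0              # ~32 GB/s, matches the article
    r300_eff = r300_phys * 1.5              # ~29.8 GB/s; the article says 28

    print(f"nv30: {nv30_phys:.1f} -> {nv30_eff:.1f} GB/s effective")
    print(f"R300: {r300_phys:.1f} -> {r300_eff:.1f} GB/s effective")
    print(f"claimed NVIDIA gain: {nv30_eff / r300_eff - 1:.0%}")

Note that the numbers don't even reconcile internally: 1.5x a 19.8 GB/s R300 is ~29.8 GB/s, not 28, and 32 vs. 28 GB/s is roughly a 14% gap, not "at least 30%". That's exactly the kind of unexplained estimating I'm talking about.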
Then there are the shader instruction examples--which are all wrong, according to information provided by ATI employees on the Rage3D site and the Beyond3D forums. They state that R300 has more capabilities that are not exposed in the current drivers, but that will be exposed as DX9 is released and time goes on. With loopback, the R300 shader could handle ~64K instructions--but you won't see this in any of these so-called comparisons (rough arithmetic below).
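Just to make the loopback point concrete, here's the trivial arithmetic (again a sketch under my own assumptions--the 96-instruction per-pass budget is the DX9 PS 2.0 baseline of 64 ALU plus 32 texture ops, and the exact loopback mechanism is ATI's to detail, not mine):

    # Illustrative only: how loopback/multipass stretches a per-pass
    # instruction limit. The 96-per-pass budget (64 ALU + 32 texture ops,
    # the PS 2.0 baseline R300 exposes today) is an assumption for this
    # example; the ~64K total is the figure quoted by ATI employees.
    import math

    PER_PASS = 64 + 32           # assumed per-pass instruction budget
    TARGET = 64 * 1024           # ~64K instructions claimed with loopback

    passes = math.ceil(TARGET / PER_PASS)
    print(f"{TARGET} instructions at {PER_PASS} per pass -> {passes} passes")
    # -> 683 passes: intermediate results get recirculated through the
    #    shader rather than run as one monolithic program.

The point isn't the exact figures; it's that the per-pass limits these comparisons cite aren't the ceiling on what the chip can do.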
One of the problems here is that the release package for nv30 that was delivered to websites at launch contained a bunch of specific information about R300 that was wrong (R300 info provided by nVidia, not ATI). Some websites bothered to cross-check to some degree; others simply repeated nVidia's R300 info verbatim. One such error we all saw repeated several times was the statement that R300 does no color compression. This particular Digit-Life article gets at least that part correct, but many websites did not and erroneously stated that "Unlike the R300, which does not provide color compression in hardware, nv30 does...", etc., etc. Pretty sad.
Overall, I am very disappointed with most of the nv30/R300 comparisons published on the Internet to date. Most of them are guessing games built on incomplete or erroneous information (as much of this Digit-Life article seems to be), and few of them get it right for either chip, I think.
So, I tend to take all such reports with a large grain of salt. For instance, I put no stock in a .13 micron R350 rumor, and by the same token I think that seeing a chip in a lab running at certain speeds under unknown conditions means little, if anything, regardless of which chip it is.