Hrm, interesting stuff - GeForce FX at 600MHz+

I honestly can't say. The only reason I brought it up was the Digit-Life story. I will say that cards running above 600MHz, even if just in the labs, are exciting. Hopefully we'll see cards in the 1GHz range soon.
 
Slightly OT - isn't one of their charts the same as the error-ridden R300/NV30 comparisons that Nvidia's PR put out?
 
Yes, the one chart is pre-canned Nvidia-PR material...

--|BRiT|
 
ERP said:
Makes you wonder if the big fan is there primarily to cool the RAM.

Why would you wonder that? Look on the back side of the card, there's no big-ass fan there, just a flat heatsink despite the presence of another four RAM chips!

Of course the fan's there to cool the core.

*G*
 
Grall said:
ERP said:
Makes you wonder if the big fan is there primarily to cool the RAM.

Why would you wonder that? Look on the back side of the card, there's no big-ass fan there, just a flat heatsink despite the presence of another four RAM chips!

Of course the fan's there to cool the core.

*G*

I read that it's there to cool the DVI connector 8)
 
hmm.... 125M transistors running at 500MHz.... nah, just a little aluminum heat spreader should do the trick :LOL:
 
But we estimate the real efficiency of these algorithms at about 1.5x the physical bandwidth for ATI and 2.0x for NVIDIA in typical scenarios. That puts effective bandwidth at roughly 28 GB/s for ATI and 32 GB/s for NVIDIA, which could provide at least a 30% gain for NVIDIA in typical applications.

Now those are some smelly, brown figures; I wonder where they found them...
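For what it's worth, the arithmetic is easy to check. Here's a minimal sketch, assuming the commonly cited physical numbers (Radeon 9700 Pro: 256-bit at 310 MHz DDR; NV30: 128-bit at 500 MHz DDR-II) and taking the article's 1.5x/2.0x multipliers at face value--the script and its naming are mine, purely for illustration:

```python
# Quick sanity check of the article's "effective bandwidth" figures.
# Physical numbers are the commonly cited ones (Radeon 9700 Pro:
# 256-bit @ 310 MHz DDR; NV30: 128-bit @ 500 MHz DDR-II); the 1.5x/2.0x
# multipliers are the article's own unexplained estimates.

def physical_bw_gbs(bus_bits: int, clock_mhz: float, pumps: int = 2) -> float:
    """Raw bandwidth in GB/s: bus width in bytes * transfers per second."""
    return bus_bits / 8 * clock_mhz * pumps / 1000

r300 = physical_bw_gbs(256, 310)  # ~19.8 GB/s physical
nv30 = physical_bw_gbs(128, 500)  # ~16.0 GB/s physical

print(f"R300: {r300:.1f} GB/s physical -> {r300 * 1.5:.1f} GB/s 'effective'")
print(f"NV30: {nv30:.1f} GB/s physical -> {nv30 * 2.0:.1f} GB/s 'effective'")
print(f"NV30 'effective' advantage: {nv30 * 2.0 / (r300 * 1.5) - 1:.0%}")
```

That reproduces the 32 GB/s for NVIDIA but gives ~29.8 rather than 28 for ATI, and the gap works out to roughly 8%, not 30%--so those unexplained multipliers are doing all the work in the article's conclusion.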
 
There's more...

The GeForce FX also includes a new 8x MSAA mode and a new hybrid SS/MSAA 6xS mode (the latter for DirectX only). The chip can thus record up to 8 MSAA samples from a single value calculated by the pixel shader.

NVIDIA's approach to anisotropic filtering depends primarily on the chip's computational resources and only secondarily on memory bandwidth. ATI puts most of the load on the memory, using an algorithm based on RIP maps.

Claimed effective bandwidth -- 60 GB/s in the table, for both ATI and NVIDIA :?
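As an aside, since "RIP maps" in the quote above (mis-typed "RIP cards" in the original) may not be familiar: they're mip chains downsampled independently in width and height, so an anisotropic footprint can be served by a pre-filtered level--at a steep texture-memory cost, which is presumably what "puts most of the load on the memory" is getting at. A quick illustrative sketch (my own naming and simplification, not anything from the article) of what the level pyramid costs:

```python
# Illustrative sketch: enumerate RIP-map levels for a square texture.
# Unlike ordinary mip maps (which halve width and height together),
# RIP maps halve width and height independently, giving a full grid of
# anisotropically pre-filtered levels.

def ripmap_levels(size: int) -> list[tuple[int, int]]:
    """All (width, height) levels for a size x size base texture."""
    levels = []
    w = size
    while w >= 1:
        h = size
        while h >= 1:
            levels.append((w, h))
            h //= 2
        w //= 2
    return levels

base = 256
levels = ripmap_levels(base)
texels = sum(w * h for w, h in levels)
# Mip maps cost ~1.33x the base texture in total; RIP maps cost ~4x.
print(f"{len(levels)} levels, {texels / base**2:.2f}x the base texture's texels")
```

That's storage only; the bandwidth cost comes from fetching those pre-filtered levels at sample time, which would line up with the article's claim that NVIDIA leans on computation instead.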
 
ben6 said:
A friend of mine was doing something somewhere in Nvidia's labs, and said he saw working GeForce FXs with cores at 550MHz and higher. Interestingly enough, Digit-Life saw the same thing


http://www.digit-life.com/articles2/gffx/index3.html

EDITED: uh, for those that already read the other figure, sorry, that was a no-no

So, does your friend have any idea of voltages and cooling employed?

Here's what the digit.life article said:

"Besides, it's possible they will release NV30 versions working at a higher core speed - 550 or even 600 MHz - in the NVIDIA's lab the first chips were running error free at such frequencies."

Notice no comments on voltages or cooling--conspicuously absent, I'd say. Further, I would not be surprised to see current R300s in ATI's labs running at ~500MHz or more under certain controlled conditions, too (this is the kind of thing labs do, after all). These kinds of anecdotes mean almost nothing as to what the capabilities of production chips might be.

One thing really stuck out for me in the Digit-Life article, though. Whenever and wherever there was a comparison between similar aspects of technology in nv30 and R300, the author gives the nod to nVidia's nv30 version of whatever that technology happens to be--even though he admits he has no certain knowledge and is always estimating. He consistently rates nv30 superior in areas where he admits a total lack of hands-on, first-hand knowledge and experience. I wasn't very impressed. It'll be nice when nv30 ceases being vaporware and ships, so we can get some concrete appraisals.

Here's but one example:

Note that the claimed effective memory bandwidths are equal [60 GB/s, Digit-Life says, with no attribution as to the sources for these numbers]! Well, we can't verify it, as the memory optimization techniques can't be disabled. But we estimate the real efficiency of these algorithms at about 1.5x the physical bandwidth for ATI and 2.0x for NVIDIA in typical scenarios. That puts effective bandwidth at roughly 28 GB/s for ATI and 32 GB/s for NVIDIA, which could provide at least a 30% gain for NVIDIA in typical applications.

Actually, I've seen ATI talk of up to 176 GB/s--and I have never seen a static "60 GB/s effective bandwidth" figure bandied about by ATI. ATI has stated it prefers to quote physical bandwidth numbers and make no mention of effective bandwidth through compression, simply because, ATI says, they weren't convinced anyone would believe those numbers.

But you can see here how, out of the blue and with no explanation, he decides the algorithms are 25-30% more efficient for nv30 than for R300--never having held an nv30 product in his greedy little hands for testing purposes. There are several such examples, unfortunately. It would have been nice to hear the reasoning behind his estimates... ;)

Then there are the shader instruction examples--which are all wrong, according to information provided by ATI employees on Rage3D and the Beyond3D forums: there are capabilities in R300 that are not exposed in the current drivers but will be as DX9 is released and time goes on. With loopback, the R300 shader could handle ~64K instructions--but you won't see that in any of these so-called comparisons.

One of the problems here is that the release package for nv30 that was delivered to websites at the launch contained a bunch of specific information about R300 that was wrong (R300 info provided by nVidia, not ATI.) Some web sites bothered to cross-check to some degree--others simply repeated nVidia's R300 info verbatim. One such error we all saw repeated several times was the statement that R300 does no color compression--this particular article in Digit.Life gets at least that part of it correct--but many web sites did not and erroneously stated that "Unlike the R300, which does not provide color compression in hardware, nv30 does....etc., etc." Pretty sad.

Overall, I am very disappointed with most of the nv30/R300 comparisons published on the Internet to date. Most of them are guessing games based on incomplete or erroneous information (as much of this Digit-Life article seems to be), and few of them get it right for either chip, I think.

So, I tend to take all such reports with a large grain of salt--for instance I put no stock in a .13 micron R350 rumor, and by the same token think that seeing a chip in a lab running at certain speeds under unknown conditions means little if anything, regardless of which chip it is.
 
WaltC: First of all, the guy is not biased toward NVIDIA or ATI; he is a pro at what he does. The 60 GB/s figure was a typo and was later fixed to 48 GB/s, a number NVIDIA provided some time ago in one of their (unreleased) papers.

Second of all, the guy did hold an actual NV30 board in his hands and he did test it (even though the A01 revision is pretty early), so I find no basis for your comments.

Third of all, if anything, you seem to be quite a big fan of ATI... please refrain from posting such comments in the future without knowing anything for a fact. Digit-Life got everything right in their article, and if you looked at their news page, you could see that they already have an NV30 board at stepping A01 running in their labs right now.

P.S.
Don't take anything written here as an offence, or anything of the sort; I was simply making my point, using whatever means are at my disposal, as we all do.
 
I might also add that Digit-Life/X-bit posted their articles calling ATI's anisotropic filtering a rip-mapping technique only after this forum, following some discussion and screenshots, had initially suggested it (just a few days after, in fact... in other words, some X-bit/Digit-Life lurkers, possibly). They have also never amended their articles to say otherwise, even though the claim is not accurate.
They are also the same review site that showed a GeForce 4 beating a Radeon 9700 in UT2003... SURE... I don't care what build it is, that is not an accurate review of that card.

I also took the liberty of browsing their forums with a translator (they're in Russian), since some Russian Rage3D members told me X-bit forum members were being banned if they did not agree with X-bit's NV30 rumors--and I confirmed it: there were threats against members who disagreed with their laughable rumors, like beta NV30 boards running back in April :rolleyes:

In other words, X-bit/Digit-Life is, IMO, another Nvidia bitch site... as if we needed another... inaccurate, biased journalism does not reflect well on that site.
 
Yes, I can see your logic: posting inaccurate information, showing the fastest current card (the 9700) in the worst possible light, and, on top of that, threatening forum members not to question their obvious pro-Nvidia slant on their own forums, is a great way to run a tech site that is supposed to be neutral...

How silly of me
 
alexsok said:
Second of all, the guy did hold an actual NV30 board in his hands and he did test it (even though the A01 revision is pretty early), so I find no basis for your comments.

Third of all, if anything, you seem to be quite a big fan of ATI... please refrain from posting such comments in the future without knowing anything for a fact. Digit-Life got everything right in their article, and if you looked at their news page, you could see that they already have an NV30 board at stepping A01 running in their labs right now.

What?
Who cares if he "held one in his hands"? Where did he, for instance, get those "effective" bandwidth estimations for ATI?
Eh? Oh wait, out of his ass...
I find it quite funny that you claim "Digit-Life got everything right" yet admit to at least one mistake they made, lol! And besides, how would you (who has been proven to be a huge nVidia fan, as well as a very, very unreliable source of info--re all your nv30 insider info) know what's correct or not?
 
Doomtrooper: You make yourself look more idiotic than ever! :LOL:
I don't even understand why people bother reading your pathetic posts, in light of your trolling and your fanaticism towards ATI!

Althornin: A very unreliable source? Now tell me, where did I actually get anything wrong about NV30? The specs changed between the time I posted my first specs and the announcement of the chip, and I edited my posts to reflect those significant changes.

He could get these "effective" bandwidth numbers from plenty of sources, considering that the guy has plenty of contacts.

The 60 GB/s figure was a typo; I don't treat such things as mistakes!

I know what's correct or not because I pretty much know everything that's going on with the two largest players today; I have plenty of sources, and they provide me with plenty of reliable info that I no longer post around here. The difference between you and me is that I know what I'm talking about and I'm well informed; you don't.
 
alexsok said:
Althornin: A very unreliable source? Now tell me, where did I actually get anything wrong about NV30? The specs changed between the time I posted my first specs and the announcement of the chip, and I edited my posts to reflect those significant changes.

He could get these "effective" bandwidth numbers from plenty of sources, considering that the guy has plenty of contacts.

The 60 GB/s figure was a typo; I don't treat such things as mistakes!

I know what's correct or not because I pretty much know everything that's going on with the two largest players today; I have plenty of sources, and they provide me with plenty of reliable info that I no longer post around here. The difference between you and me is that I know what I'm talking about and I'm well informed; you don't.
Lol. So you are telling me that in two months they redesigned the nv30 from a 256-bit bus to a 128-bit one, and cut out all the extra texture units?
No, the difference here is, I admit to that which I do not know, and you are a poseur pretending to have sources. Sorry buddy, you don't have any cred here. And you are right about one thing: I don't know what you are talking about--because it's BS.
 
Lol. So you are telling me that in two months they redesigned the nv30 from a 256-bit bus to a 128-bit one, and cut out all the extra texture units?
No, the difference here is, I admit to that which I do not know, and you are a poseur pretending to have sources. Sorry buddy, you don't have any cred here. And you are right about one thing: I don't know what you are talking about--because it's BS.

Two months? I assume you didn't follow my posts...
Besides, you don't know the history or the way NV30 was constantly undergoing changes--changes far more severe than just its memory bus.
 