Anand talks R580

DemoCoder said:
What's amazing is all the people who were claiming the opposite back in the days when NV had angle independence and ATI did not. ("not very perceptible because most games have 90 degree angles")
Well, games have a lot more variety now. In Unreal Tournament, Quake 3, and similar games you didn't see many angled surfaces. Flight simulators and outdoor FPS games like SS, FarCry, COD, etc. all expose the problem a lot more.

For NV3x, they had good quality in that respect (wrt angle), but they messed around with the LOD and trilinear filtering so much that you couldn't really say it was much better overall. In any case, R300 had too much going for it for that one problem to put a dent in its reputation.

If you want to go back to the GF4, well, it dropped like a rock with AF. Sometimes to less than half the framerate, while running equally well two resolutions higher with AF off. That's a pretty stiff penalty to pay. I've been an AF junkie since the original Radeon, and even I might have left it off on the GF4 if I had one.

It's always been performance first, image quality second.
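
If it helps to picture the difference, here's a toy model of the kind of angle dependence we're talking about. This is just my sketch, not anyone's actual hardware logic, and the shape of the falloff is invented, but the behavior is the point: surfaces near 0 or 90 degrees get full anisotropy, while tilted ones get a fraction of it.

Code:
#include <math.h>
#include <stdio.h>

/* Toy model only: effective anisotropy as a function of the surface's
   on-screen tilt angle. Real hardware of the era varied in which
   angles it favored; this falloff curve is made up for illustration. */
static double effective_aniso(double angle_deg, double max_aniso)
{
    double d = fmod(angle_deg, 90.0);          /* offset from nearest axis */
    if (d > 45.0)
        d = 90.0 - d;
    double falloff = 1.0 - 0.75 * (d / 45.0);  /* worst case at 45 degrees */
    return 1.0 + (max_aniso - 1.0) * falloff;
}

int main(void)
{
    for (int a = 0; a <= 90; a += 15)
        printf("surface at %2d deg -> ~%4.1fx AF applied\n",
               a, effective_aniso(a, 16.0));
    return 0;
}

Run that and the 45-degree row comes out around 4.8x versus 16x at the axes, which is exactly why flight sims and open outdoor games show the problem so much more than corridor shooters do.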
 
DemoCoder said:
Nope what? Did you ever bother to read my message? I never said anything about per-pipe per-clock improvements.
Okay, this is what you said:
DemoCoder said:
Adding 8 pipes to the 6800 made the G70 GTX 256mb 100% faster than the 6800 in Shadermark tests. Bumping the clockrate by 27% made the GTX 512mb 25% faster in most shadermark tests. (Dave's article)
WTF am I supposed to think? You are clearly saying the 7800GTX 256MB is 100% faster than the 6800U (or GT) in ShaderMark. Go to Dave's 7800GTX 256MB review, and it's 60% faster, not 100%. Your extrapolations are handing NVidia a 30% advantage. I think that's worthy of a nope. Now you're linking to RightMark tests, and even then there's only ONE test where it's 100% faster; on average it's 77% faster.

And even though you didn't explicitly say anything about per-pipe-per-clock advantage, you're strongly implying it: "Adding 8 pipes to the 6800 made the 256MB G70 100% faster". WTF is your problem? Can't you take one second to reread your own posts before presuming I'm illiterate?
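
For what it's worth, here's the back-of-the-envelope scaling we're implicitly arguing over, assuming shader throughput goes as pipes times clock. The clock figures are from memory, so treat them as approximate:

Code:
#include <stdio.h>

int main(void)
{
    /* pixel pipes x core clock (MHz), clocks approximate */
    double nv40    = 16 * 400.0;   /* 6800 Ultra     */
    double g70_256 = 24 * 430.0;   /* 7800 GTX 256MB */
    double g70_512 = 24 * 550.0;   /* 7800 GTX 512MB */

    printf("GTX 256 vs 6800U:   +%.0f%% theoretical\n",
           (g70_256 / nv40 - 1.0) * 100.0);
    printf("GTX 512 vs GTX 256: +%.0f%% theoretical\n",
           (g70_512 / g70_256 - 1.0) * 100.0);
    return 0;
}

The theoretical +61% lands right next to the 60% Dave measured, which is my point: there's no room in those numbers for 100%.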
 
Chalnoth said:
Er, bear in mind that you don't have easy access to external storage as in a CPU. So there is always the possibility of algorithms that simply require more registers to execute, and would otherwise need to be broken up into multiple passes (which HLSL and GLSL don't help with). More constant registers is probably more helpful for HLSL and GLSL than ASM programming, actually, as you can realistically make much longer programs.
Think about the types of things pixel shaders run for a second. You have to do something really crazy to need that many registers. Compilers can narrow down the number of registers required like crazy.

I remember when FarCry got its PS3.0 patch, Demirug pulled out the longest shader he could find to see why it couldn't run on PS2.0. It was something like 50 instructions long, and it used only 5 or 6 registers. The MS compiler was tuned to do it that way. In the long, long term, or for really bizarre GPGPU applications, maybe lots of registers will come in handy. The fact is you can't output more than four 4D values from the pixel shader, so having a lot of data in flight is not common, and comparing this to CPU programs is pointless.
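
To make the compiler point concrete, here's a minimal sketch of the idea with a made-up instruction stream; real compilers, fxc included, are far more sophisticated, but the principle is just this: temporaries whose live ranges don't overlap can share one register.

Code:
#include <stdio.h>

/* A temporary's live range: first definition to last use,
   as instruction indices in a hypothetical 12-op shader. */
struct interval { const char *name; int start, end; };

int main(void)
{
    struct interval v[] = {   /* sorted by start */
        {"t0", 0, 2}, {"t1", 1, 3}, {"t2", 2, 5}, {"t3", 4, 6},
        {"t4", 5, 8}, {"t5", 7, 9}, {"t6", 8, 11}, {"t7", 10, 11},
    };
    int n = sizeof v / sizeof v[0];
    int free_at[8] = {0};   /* instruction at which each register frees up */
    int regs = 0;

    for (int i = 0; i < n; i++) {
        int r = 0;
        while (r < regs && free_at[r] > v[i].start)
            r++;            /* reuse a register whose value is dead by now */
        if (r == regs)
            regs++;         /* nothing free: allocate a fresh register */
        free_at[r] = v[i].end;
        printf("%s -> r%d\n", v[i].name, r);
    }
    printf("%d temporaries fit in %d registers\n", n, regs);
    return 0;
}

Eight named temporaries, two registers. That's why a 50-instruction shader getting by on 5 or 6 registers isn't surprising at all.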

A concrete example would be the best way to illustrate its use. Maybe multiplying large matrices? The output limitation is the problem there. When you have trouble even thinking of a theoretical case, you know its real-world advantage is next to nothing.

But really, I'd love for you to prove me wrong because it means I'm learning something new. I was hoping for DC to challenge my points in the post you responded to for the same reason.
 
Well, the output limitation really isn't all that much of an issue. It's not hard at all to think of a mathematical situation where it may become useful to make use of many registers, while still only having one output. One simple case would be integration. Another would be the product of a vector, a matrix and another vector (the result of which is just a number).
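
Here's that second case sketched in plain C standing in for shader code (the inputs are made up, obviously): the vector-matrix-vector product touches 24 scalar inputs and a vector's worth of intermediates, yet everything collapses to a single scalar output.

Code:
#include <stdio.h>

/* v^T * M * v: lots of values in flight, one number out. */
static float vmv(const float v[4], const float M[4][4])
{
    float t[4];                       /* intermediate: t = M * v */
    for (int i = 0; i < 4; i++) {
        t[i] = 0.0f;
        for (int j = 0; j < 4; j++)
            t[i] += M[i][j] * v[j];
    }
    float s = 0.0f;                   /* reduce: s = v . t */
    for (int i = 0; i < 4; i++)
        s += v[i] * t[i];
    return s;
}

int main(void)
{
    float v[4] = {1, 2, 3, 4};
    float M[4][4] = {{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}};
    printf("v^T M v = %f\n", vmv(v, M));   /* identity M: |v|^2 = 30 */
    return 0;
}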

That aside, for speed, you may not want to minimize the number of registers. It will depend upon how the hardware handles large numbers of registers, obviously, but being able to use more registers means that fewer register reassignments need to be done.
 
Thinking of G70 vs R580 early next year, don't forget that each G70 at the very top end will only have to draw, on average, one quarter of the screen. I feel this is the overriding factor in performance when compared to R580, at least if each R580 has to draw approximately half the screen.
 
DemoCoder said:
What's amazing is all the people who were claiming the opposite back in the days when NV had angle independence and ATI did not. ("not very perceptible because most games have 90 degree angles")
Ya.
I had an 8500 (as one might guess) and enabled 16x filtering in every game, but in games with possibly negative LODs the filtering was horrible in terms of aliasing (no driver setting fixed that, and neither did disabling AF, so the filtering in general was just bad), and if you ever played a game other than an FPS (hills and the like) it was easily noticed.
The performance was great, but the general filtering wasn't so good in a number of games, and RPGs really showed off the weaknesses of ATI's speed hack.
It's funny how people slammed the GF3's and GF4's AF hit and praised ATI, when at the time ATI still used old-fashioned SSAA.
 
SugarCoat said:
http://www.penstarsys.com/previews/graphics/nvidia/512_7800gtx/512gtx_3.htm

no idea how reliable that stuff is, will be interesting if true.

It seems to jibe with what Wesley Fink was saying just about an hour ago in his RD580 Preview:

Anandtech said:
......which means the R580 GPU is still scheduled for launch in January.

ATI RD580: Dual x16 Crossfire Preview

Is it really possible ATi is cutting their losses on the R520 & releasing the R580 on the original timetable?
 
http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=2609
ATI told us emphatically the delays with X1800XT were NOT the result of the die-shrink to 90nm. We were told the issue was a defect in a third-party IP (Intellectual Property) that was used in the x1800XT GPU die. It took ATI quite a while to find and correct this design flaw. Why does this matter? Since this flaw was specifically related to the X1xxx family, design work continued on future video cards, and there were no delays on that front. Development continued on introductions that will follow R520, which means the R580 GPU is still scheduled for launch in January.
R580 already in January?
ATI was quite clear they'll be introducing a "PE version" of the X1800XT to compete with the 7800GTX 512.
A PE version before the R580, or is the R580 going to be the PE version?
 
I care about quality.
I blamed ATI in the past for its angle-dependent AF; now NVIDIA does worse.

I really don't know if some guys have trouble seeing the big difference in filtering between the X1800 and the 7800, but I simply don't care about more fps if those frames are bad frames.

I care about quality, and if I have to pay 800 dollars for a card, I want that quality.

If you make me choose between 70 high-quality, perfectly filtered frames per second for $600 and 80 badly filtered frames per second for $800, I'm not dumb: I'll choose the $600 solution every time :devilish:

I hope NVIDIA learns the lesson with G80.
 
AlphaWolf said:
I doubt they would label a board based on r580 as x1800xtpe. The PE cards have always just been a speed increase.

I agree.
I think of the PE as a 675-690 MHz core (ATI raised the XT clock from 625 to 650 MHz a few days ago) and 1600 MHz memory, not a new GPU.

I think the R580 can arrive in February 2006, with a PE version in December.
 
SynapticSignal said:
I agree.
I think of the PE as a 675-690 MHz core (ATI raised the XT clock from 625 to 650 MHz a few days ago) and 1600 MHz memory, not a new GPU.

I think the R580 can arrive in February 2006, with a PE version in December.

At some point ATI actually needs to get more than 100 cards into circulation before they start talking up something new that, once again, won't arrive for months.
 
SynapticSignal said:
If you make me choose between 70 high-quality, perfectly filtered frames per second for $600 and 80 badly filtered frames per second for $800, I'm not dumb: I'll choose the $600 solution every time :devilish:

I hope NVIDIA learns the lesson with G80.

Fine, but what if it's 15-20 properly filtered fps against 40-45 "badly" filtered fps? You'd still go for the slower one?
 
Hellbinder said:
At some point ATI actually needs to get more than 100 cards into circulation before they start talking up something new that, once again, won't arrive for months.

Give it up already ffs. Newegg has sold more than that in the last 2 days, you can even buy one there right now.
 
AlphaWolf said:
Give it up already ffs. Newegg has sold more than that in the last 2 days, you can even buy one there right now.

Unlike anywhere in Europe (and the rest of the world, for that matter)?
 
Glad to hear that the PE is coming out.

As a cutting-edge (very rich) gamer, my X1800XT will be old hat by December, and the PE will handily tide me over until January, when I get my R580.


A year or so ago both companies indicated that the product development cycle was going from 6 months to 12-18, and everyone gnashed their teeth with woe, because the most exciting times at B3D are in a lead-up period.

Luckily for us and Dave, the intense competition has meant that development times for new products might be 18 months, but that still allows 9 refreshes and respins in the meanwhile :D

Currently we seem to be in a constant lead-up period :) Woohoo ....
 