R520 = Disappointment

Waltar said:
Quick question, since the cooler basically looks like an NV30 cooler with a bigger fan, how goddamn hot is this thing running? Has anyone used a temp probe on it to check?
It runs cooler than my R480 board.
 
DemoCoder said:
... so I think it is premature to claim that these will be the R520's forte vs the G70.

Agreed... To be clear, I was not trying to imply that, in an absolute sense, the 520 has more "shader power" than the G70 given complex shaders. I was only pointing out that, generally, the more complex a shader you have, the "more powerful" the R520's shaders should appear.

Whether we can generalize that one card has more "shader power" than the other is best left as a future exercise, once there's a history of diverse shader apps / games to test with.
 
Sigma said:
The G70 has complete FP texture filtering (I think this is a big advantage...)
It'll be interesting to see if any games ever actually use FP textures and filter them.

Humus's discussion:

http://www.beyond3d.com/forum/showthread.php?t=23276

seems to suggest that FP16 texture filtering in dedicated hardware is the stuff of dreams.

But then again, FP16 blending with AA in the render target/back buffer was a bit of a dream until recently...

Does the R520 support transparency AA?
Yes, but it seems it's only supersampling of transparent areas - not the "dithered" technique that the 7800GTX also supports.

Jawed
 
The card delivers pretty much what I expected out of it. Wins some benchmarks and loses others. Pretty much par for the course with this generation. Unfortunately it will probably be 5-6 months late in terms of quantity in the channel. That is just abysmal if you think about it.

When the R580 comes out it will be interesting to see what Nvidia has planned to counter it.
 
digitalwanderer said:
Really? Do R480s run a lot hotter than R420s? Or are they about the same? (Just trying to get an idea)
Not sure off the top of my head and I don't have an R420 handy. Didn't some web site already post this info some time ago?
 
So the XL can't beat the 7800GTX, and the 1800XT won't be out until mid-November and will be a two-slot beast with a Captain Crunch plastic fan noisemaker.

I wonder what 7800GTX prices will be by mid-November? Right now they are at $419 on newegg.com. The fact that the 1800XT uses 512MB of RAM, and that its RAM is faster than the 7800GTX's, makes you wonder how much of the performance comes from the RAM itself. If Nvidia puts out a 7800GTX "Ultra" with 512MB of fast RAM, I wonder whether it wouldn't be faster than the 1800XT.
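To put rough numbers on the RAM question: bandwidth is just bus width times effective data rate. The memory clocks below are from memory, so treat them as assumptions rather than confirmed specs.

```c
#include <stdio.h>

/* Back-of-the-envelope memory bandwidth: (bus width in bytes) x effective
 * data rate. The clocks below are recalled specs and may be off. */
static double bandwidth_gbs(int bus_bits, double effective_mts)
{
    return (bus_bits / 8.0) * effective_mts / 1000.0; /* GB/s, decimal */
}

int main(void)
{
    /* X1800 XT: 256-bit bus, ~750 MHz GDDR3 (1500 MT/s effective), assumed */
    printf("X1800 XT: %.1f GB/s\n", bandwidth_gbs(256, 1500.0));
    /* 7800 GTX: 256-bit bus, ~600 MHz (1200 MT/s effective), assumed */
    printf("7800 GTX: %.1f GB/s\n", bandwidth_gbs(256, 1200.0));
    return 0;
}
```

If those clocks are right, that works out to roughly 48 GB/s vs 38.4 GB/s, about a 25% bandwidth edge before you even count the extra 256MB.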
 
Junkstyle said:
So the XL can't beat the 7800GTX, and the 1800XT won't be out until mid-November and will be a two-slot beast with a Captain Crunch plastic fan noisemaker.

It's a very quiet fan according to the reviews, and it exhausts hot air out the back of your case.
 
Joe DeFuria said:
The more complex a shader you have, the better ATI's part will do.
I don't know if I'd qualify it in that way. It's not necessarily the complexity of the shader that is important, but rather how instructions are ordered and grouped.

Basically, nVidia's hardware will do best if the texture reads are distributed at nearly even intervals throughout the shader, as are the special functions. ATI's hardware doesn't much care which order the instructions come in. In other words, the decoupled texture and ALU units result in a dramatic reduction in the number of scenarios under which the pipelines will stall. This isn't related to more complex shaders, just different ones.
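To make the stall point concrete, here's a toy model of an in-order, single-issue pipeline. The latency and the instruction mix are invented and nothing like the real NV4x figures; the only point is that spreading texture reads out, with independent ALU work in between, hides the fetch latency, while clumping them forces stalls.

```c
#include <stdio.h>

/* Toy in-order pipeline: 'T' starts a texture fetch, 'U' uses the most
 * recent fetch's result (stalling if it isn't back yet), 'A' is independent
 * ALU math. TEX_LAT is an invented latency, purely for illustration. */
#define TEX_LAT 4

static int cycles(const char *prog)
{
    int clock = 0, tex_ready = 0;
    for (const char *p = prog; *p; ++p) {
        if (*p == 'U' && clock < tex_ready)
            clock = tex_ready;           /* stall until the fetch returns */
        if (*p == 'T')
            tex_ready = clock + TEX_LAT; /* result arrives TEX_LAT cycles later */
        ++clock;                         /* one instruction issued per clock */
    }
    return clock;
}

int main(void)
{
    /* Same instruction mix, two orderings: texture reads clumped together
     * versus spread out with ALU work covering the fetch latency. */
    printf("clumped    : %d cycles\n", cycles("TUTUAAAAAAAA"));
    printf("interleaved: %d cycles\n", cycles("TAAAAUTAAAAU"));
    return 0;
}
```

A decoupled design in the spirit of what's described above would keep issuing ALU work regardless of where the fetches sit, which is why the ordering matters far less for ATI's part.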

Some of this may be managed by working more carefully on optimizing the shaders, either through developer effort or through better compilers. Some of the time it just won't be possible.
 
I'm glad to see Monarch hasn't begun to charge crazy prices on the X1800 XL.

Has there been any word on whether ATI will allow AIB vendors to offer overclocked cards?
 
According to Hexus:

Sadly, and while not as bad as the Radeon X850 XT Platinum Edition, the board is hot and it is noisy. At the full fan speed you experience at first boot, the cooler is loud and bothersome, whining at a constant pitch with a decibel level to make even my deaf old ears prick up and take notice. Shortly after, it reverts to temperature-controlled mode. The steppings from the slowest speed (it never stops completely) up towards full speed aren't analogue; the stepped pitch changes of what appears to be a digital control make the transition to higher or lower speeds noticeable.
 
OpenGL guy said:
Not sure off the top of my head and I don't have an R420 handy. Didn't some web site already post this info some time ago?
What, you expect me to hunt it up? ;)

I think they are about the same actually, but I'll poke around and check. My 420 idles around 37°C and peaks around 61°C, I think.
 
Dave Baumann said:
Register usage is still the crunch on NVIDIA's parts.
I doubt it's that much of an issue. From what I can tell, register usage only relates to how many of the NV4x's functional units can be active in any given clock cycle, and the minimum is two FP32 operations per clock. NV's parts were typically ahead of the R4xx on a per-clock shader op basis because they have more functional units. The fact that these units aren't always usable due to register pressure just places an upper limit on the performance nVidia can get out of the architecture.

No, I'm really not convinced that the reason the R5xx sometimes tramples the NV4x is register pressure. If it were, enforcing FP16 for all operations should relieve that pressure and bring the NV4x back up... and I doubt that it would. I'm willing to bet it's due to pipeline stalls that the NV4x is experiencing and the R520 is not.
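For what it's worth, the FP16 argument is just arithmetic over the register file. A back-of-the-envelope sketch (the pool size and packing rule are assumptions, not real NV4x numbers): fewer live registers per fragment means more fragments in flight, and therefore more fetch latency that can be hidden.

```c
#include <stdio.h>

/* Hypothetical register-file arithmetic, not real NV4x figures: a fixed pool
 * of FP32 register slots per pipeline is shared by all fragments in flight,
 * so heavier register use means fewer fragments in flight and less texture
 * latency that can be hidden. Assume two FP16 temps pack into one FP32 slot. */
#define REG_FILE_FP32_SLOTS 64   /* assumed pool size per pipeline */

static int fragments_in_flight(int fp32_temps, int fp16_temps)
{
    int slots_per_fragment = fp32_temps + (fp16_temps + 1) / 2;
    return REG_FILE_FP32_SLOTS / slots_per_fragment;
}

int main(void)
{
    printf("8 FP32 temps: %2d fragments in flight\n", fragments_in_flight(8, 0));
    printf("8 FP16 temps: %2d fragments in flight\n", fragments_in_flight(0, 8));
    return 0;
}
```

So if register pressure really were the bottleneck, forcing FP16 should show a clear win; if it doesn't, that points back at the stall explanation.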
 