R520 = Disappointment

Sxotty said:
You know this means nothing right?

IMO* ATI made a pretty smart decision. They attempted to make a core that would run at a higher clock, which let them reduce the degree of parallelism and build a smaller chip**, hopefully getting higher yields. In other words, it may end up being a really good idea and give these boards really high margins. If that's the case, the price war will commence. If it isn't, then we know things did not pan out.

*which is worthless

** 288mm^2 @ 90nm; 320Mt
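To put rough numbers on the yield angle, here's a quick back-of-envelope sketch using the 288mm^2 figure above. The wafer size, defect density, simple Poisson yield model, and the hypothetical wider chip are all assumptions for illustration, not ATI data:

```python
import math

# Back-of-envelope: smaller dies -> more candidates per wafer AND a
# higher fraction of defect-free dies. Assumed: 300mm wafers, a defect
# density of 0.5/cm^2, and a simple Poisson yield model Y = e^(-D*A).
WAFER_DIAMETER_MM = 300
DEFECT_DENSITY_PER_CM2 = 0.5

def dies_per_wafer(die_area_mm2):
    # Crude estimate that ignores edge loss and scribe lines.
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2):
    return math.exp(-DEFECT_DENSITY_PER_CM2 * die_area_mm2 / 100)

for name, area in [("R520-sized die (288mm^2)", 288),
                   ("hypothetical wider chip (360mm^2)", 360)]:
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{name}: ~{dies_per_wafer(area)} candidates, ~{good:.0f} good/wafer")
```

Under those (made-up) numbers the smaller die yields noticeably more good chips per wafer, which is the whole point of trading width for clock speed.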
 
From looking at the reviews out there, it looks like the R5XX and RV5XX weak spot is pixel shaders. The VS is very strong, but the PS power looks very weak and is the bottleneck on the new cores. The thing to find out is whether it's a hardware limit or a driver problem. If it's hardware, then this will bottleneck the new cores and we'll be stuck with it; but if it's drivers, then there's still hope for a large improvement down the road, and with new official drivers every month, it might be improved in short order.
 
{Sniping}Waste said:
From looking at the reviews out there, it looks like the R5XX and RV5XX weak spot is pixel shaders. The VS is very strong, but the PS power looks very weak and is the bottleneck on the new cores. The thing to find out is whether it's a hardware limit or a driver problem. If it's hardware, then this will bottleneck the new cores and we'll be stuck with it; but if it's drivers, then there's still hope for a large improvement down the road, and with new official drivers every month, it might be improved in short order.
I just think they don't have a lot of "raw" PS power like Nvidia does; they went for efficient PS power instead ;)
 
caboosemoose said:
Well, there's M52 on "the map". I am assuming that is based on the 520, not the 580. I also understand January is pencilled in for a tour of the new part.
M52 is the very low end - they increment the numbers according to performance.
 
radeonic2 said:
I just think they don't have a lot of "raw" PS power like Nvidia does; they went for efficient PS power instead ;)

I would word it a different way:

The more complex a shader you have, the better ATI's part will do.
 
G70 is at 110nm, right? And the R520 at 90nm? Doesn't this mean that moving to 90nm for the refresh could bring significant clock boosts for the G70 (assuming no hitches going to 90nm)?
 
Quick question: since the cooler basically looks like an NV30 cooler with a bigger fan, how goddamn hot is this thing running? Has anyone used a temp probe on it to check?
 
R520 = meh, but a decent card nevertheless. (HDR + AA seems most worthwhile from my POV)

Hype was wrong, but this is no NV30 (thankfully) :D
 
Tell ya what I want:

A comparison of X1800XL at 500/500 against X800XT at 500/500. I hope Dave's benchmarking will provide this data. That'll be a great start in understanding how good the efficiency gains are.
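To make it concrete, the sum itself is trivial once those numbers exist. A minimal sketch, with made-up placeholder frame rates standing in for the real results:

```python
# Clock-for-clock: both cards forced to 500MHz core / 500MHz memory,
# so any fps gap is per-clock (architectural) efficiency, not raw MHz.
# These frame rates are placeholders, NOT real benchmark data.
x1800xl_fps = 62.0  # hypothetical X1800XL score at 500/500
x800xt_fps = 55.0   # hypothetical X800XT score at 500/500

gain_pct = (x1800xl_fps / x800xt_fps - 1) * 100
print(f"R520 per-clock efficiency gain over R420: {gain_pct:.1f}%")
```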

Jawed
 
Waltar said:
Quick question: since the cooler basically looks like an NV30 cooler with a bigger fan, how goddamn hot is this thing running? Has anyone used a temp probe on it to check?
Are the XTs in reviewers' hands real XTs, or are they "fixed XLs" that just squeak in at XT speeds?

Jawed
 
Joe DeFuria said:
I would word it a different way:

The more complex a shader you have, the better ATI's part will do.

I think that will be more true of the R580 than the R520. The R520 has a deficit of ALUs vs the G70 and Xenos, and so depends on theoretical (but IMHO unproven, given no one has compared an R520 at 430MHz and identical memory clock to an X800XT) efficiency gains if it wants to beat a 24-ALU part. But long, complex shaders might be pathological in ways we don't understand yet, so I think it is premature to claim that these will be the R520's forte vs the G70.

I think the only reasonable conclusion right now is that the small batches lead to better dynamic branching behavior.
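On the batch point, here's a toy model of why small batches help dynamic branching: a SIMD batch that contains even one diverging pixel has to execute both sides of the branch. The batch sizes, path costs, and the 5% branch probability are illustrative assumptions, not measured figures:

```python
import random

# Toy model: if some pixels in a batch take a branch and others don't,
# the whole batch pays for BOTH paths. Smaller batches diverge less often.
random.seed(0)

def avg_cost(batch_size, p=0.05, trials=20000, cheap=1.0, costly=4.0):
    total = 0.0
    for _ in range(trials):
        taken = [random.random() < p for _ in range(batch_size)]
        if all(taken):
            total += costly            # whole batch takes the long path
        elif any(taken):
            total += cheap + costly    # divergent: pay for both paths
        else:
            total += cheap             # whole batch skips the long path
    return total / trials

for size in (16, 64, 1024):            # small (R520-like) through large batches
    print(f"batch size {size:4d}: average cost {avg_cost(size):.2f}")
```

With a rare branch, the 16-pixel batch often skips the expensive path entirely, while the 1024-pixel batch diverges almost every time.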
 
Jawed said:
Are the XTs in reviewers' hands real XTs, or are they "fixed XLs" that just squeak in at XT speeds?

Jawed
Dunno about that, but I noticed that the die markings in the DriverHeaven review say A15FG, while Beyond3D and TechReport show A14FG.
 
Jawed said:
Tell ya what I want:

A comparison of X1800XL at 500/500 against X800XT at 500/500. I hope Dave's benchmarking will provide this data. That'll be a great start in understanding how good the efficiency gains are.

Jawed

agreed
 
Uttar said:
Analyzing the R520's feature set, it has two major advantages and two minor advantages:
- Better AF (with a performance hit).
- HDR AA support.
- 6x MSAA.
- 3Dc.
The G70, on the other hand, has one major advantage and one minor advantage:
- Less hacky Dual-GPU support (Crossfire2 doesn't seem bad, but compared to SLI2... heh)
- Vertex Texturing (that'd be a major advantage if it had better performance and/or supported filtering)

The G70 has complete FP texture filtering (I think this is a big advantage...)
Does the R520 support transparency AA?
 
Love_In_Rio said:
That's what I was saying: 232 million + 20 million = 252 million transistors. Take the EDRAM out and put it in a PC card with its 256-bit bus to memory. The unified architecture doesn't need DirectX 10, and if the simulations prove the performance is better than the R520's (at least ATI claimed that R500 performance was like a 32-pipe card), then once you're already late to market, why not smash Nvidia now with a much better and more efficient design?

The daughter-die on Xenos is an integral part of the design - bear in mind it contains all sorts of stuff in addition to the EDRAM. One of the big plus points of the Xenos design is the exceptionally low hit for anti-aliasing, which is very important in terms of image quality in modern games. I believe ATI claim that Xenos suffers just a 5% hit in performance for 4xAA. In comparison, look at the figures in the ExtremeTech review of the R520:

ExtremeTech Benchies

Both the R520 and G70 suffer a 50% hit in performance when using 4xAA (and anisotropic filtering). The G70 can be considered a 'brute force' chip in much the same way as your proposed 32-pipe ATI card, so you can see why ATI is looking towards more innovative techniques. To all intents and purposes, your brute-force chip would need to be twice as fast as the Xenos chip to render the same scene with anti-aliasing. How many transistors would it need to do this? Bear in mind that with consoles especially, transistor budget, and therefore cost, is one of the most important factors.

Ultimately, the brute-force approach can work, but at what cost? It's not just ATI that is looking at creating innovative new designs; you can guarantee that NV is doing much the same. Whether or not ATI's approach is the right way to attack performance and efficiency issues, only time will tell.
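To put numbers on the "twice as fast" point, a minimal sketch using the hit percentages quoted above:

```python
# Throughput with 4xAA enabled, relative to a no-AA baseline of 1.0,
# using the hits quoted above: ~5% for Xenos, ~50% for R520/G70.
xenos_with_aa = 1.0 * (1 - 0.05)        # 0.95 of baseline
brute_force_with_aa = 1.0 * (1 - 0.50)  # 0.50 of baseline

# Raw-speed multiplier the brute-force chip needs so that its 4xAA
# performance matches Xenos's 4xAA performance:
needed = xenos_with_aa / brute_force_with_aa
print(f"~{needed:.1f}x raw speed required")  # ~1.9x, i.e. roughly double
```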
 