Anand talk R580

no-X said:
Why transfer the composited image back to the card? The chipset could have its own RAMDAC/TMDS.
The composited image is just output to the display. No, I meant to transfer the pre-composite half-frame to the card that does the compositing.
 
Chalnoth said:
No, I meant to transfer the pre-composite half-frame to the card that does the compositing.
Supposedly it is the RD580 chipset, not the card, that will do the compositing.
 
ANova said:
If they launched the R580 now they would lose lots of profits on the R520 and have no answer to the G72.

The R520 is enough to compete against the G70 with roughly equal performance and more features.

The R520 is NOT enough to sell loads of cards, though. R580 is desperately needed to get back high-end market share and make customers "happy". R520 is a great chip, but it doesn't make G70 look bad the way it would need to in order to sell well.
 
_xxx_ said:
The R520 is NOT enough to sell loads of cards, though. R580 is desperately needed to get back high-end market share and make customers "happy". R520 is a great chip, but it doesn't make G70 look bad the way it would need to in order to sell well.

Interesting theory. Maybe we should tell the IHVs they need to get on, and stay on, a staggered schedule. Every 6 months for each, but staggered -- then each gets a nice three-month period of selling at high prices/margins before the other gets their turn. :LOL:
 
It might be interesting to you .. but I agree with _xxx_

ATI dropped the ball by releasing a very late R520. As much as they want to try and recover costs on the R520 .. I think they'll do better by releasing the R580 now and dropping the price of the R520 by $100.

They dropped the ball and yet they expect people to support them. Ermm .. not today I won't.

The GTX 512MB is beating the crap outta the R520 .. why should I buy the R520?

I'm not the only one who thinks like this.

ATI dropped the ball .. if they want to pick it up again, they can do so by releasing the R580.

I've got a feeling though that ATI is only going to release the R580 later .. and by that time I expect Nvidia to release a 90nm GTX .. making it even faster than the R580. Way to go ATI.


US
 
Say for the sake of argument, Dave Orton is doing his usual morning reading of B3D over coffee, hits your post, and sez, "Damn, that US really nailed it there. Let's do it."

When do you suppose you'd be able to buy R580s, that decision having been made this morning?
 
geo said:
Say for the sake of argument, Dave Orton is doing his usual morning reading of B3D over coffee, hits your post, and sez, "Damn, that US really nailed it there. Let's do it."

When do you suppose you'd be able to buy R580s, that decision having been made this morning?

Screwball knows .. but ATI have dropped the ball. And it's gonna hurt them .. as it's been doing so for the last 4 months.

US
 
Unknown Soldier said:
Screwball knows .. but ATI have dropped the ball. And it's gonna hurt them .. as it's been doing so for the last 4 months.

US

I agree. R520 is a good chip, but a lot of the wind was taken out of it by G70 getting there first, with months to spare. Nvidia was seen to lead the way and raised the stakes, and now ATI has finally caught up. What would have been a stunning chip from ATI six months ago is now seen as not much better than what was already out there from the competition. Whether or not this is the case (I think R520 is better in several ways and has more legs on it than G70) is somewhat irrelevant - the perception is that Nvidia got there first with a part that is just as good as R520.

What ATI has to do is reclaim that high ground and the "halo effect". They have to do what Nvidia did to them - ATI have to raise the stakes first, make a jump ahead, and make Nvidia look as if they are the ones falling behind and scrambling to catch up. An earlier release of R580 is the only way for ATI to do this if they want to get back mindshare before the jump to next-gen Vista parts like R600.
 
I agree that the R520 got the wind taken out of its sails by the 7800 GTX. But I think there is still too much we don't know about R580 to say that the 90nm part from Nvidia will be able to keep up with it or even surpass it... unless Nvidia also has something up their sleeves.
 
Unknown Soldier said:
They dropped the ball and yet they expect people to support them. Ermm .. not today I won't.

The GTX 512MB is beating the crap outta the R520 .. why should I buy the R520?
They expect people to support them? They fully know how damaging this delay is. They're trying to sell a product just like NVidia.

Beating the crap outta R520? WTF are you talking about?
Digit-Life: ignoring Riddick (but with D3 still included), only 10% faster on average across 6 tests.
Anandtech: ignoring B&W2 (bug?), 15% faster across 6 tests.
Tech Report: 13% across 7 tests (and they omitted Far Cry and Splinter Cell, which they usually test).
(All numbers are for 1600x1200 with AA/AF, because you'd be a moron not to enable them with these cards.)
None of these include the >30% performance fix for FEAR either.
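Just to show how an "average advantage" like those falls out of per-game results, here's a quick sketch. The fps numbers below are made-up placeholders, NOT the actual Digit-Life/Anandtech/Tech Report data:

```python
# Illustrative only: how a "10-15% faster on average" figure is computed
# from per-game results. These fps numbers are placeholders, not real data.
gtx512_fps  = [55.0, 48.0, 62.0, 40.0, 70.0, 33.0]  # hypothetical 512MB GTX
x1800xt_fps = [50.0, 45.0, 54.0, 36.0, 65.0, 30.0]  # hypothetical X1800 XT

per_game = [g / x - 1.0 for g, x in zip(gtx512_fps, x1800xt_fps)]
avg = sum(per_game) / len(per_game)
print(f"average advantage: {avg:.1%}")  # ~10% with these placeholders
```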

The X1800XT costs less, too, and performs right alongside NVidia's Uber edition in the newest games. It uses less power, and really blows the crap out of the 512MB GTX when dynamic branching is used (I mean 50-100%, not a piddly 10%). Crytek have shown this, so has Epic with Unreal Engine 3 (I think), and ATI in their SDK. Carmack is also on this path, especially with XB360 being his primary development platform. R520 also has high quality AF, does 6xAA with transparency antialiasing waaaay faster than 8xS (looks better too), and can do HDR and MSAA together.

No, not everybody knows these reasons, but saying "why should I buy the R520?" is stupid, especially as a senior member of B3D. There are tons of reasons, and if I was buying a card now R520 is the clear winner to me, just like last gen I'd get NV40 even though R480 was faster overall.

Yes ATI dropped the ball, but the card is here now, so why are you still bitching? To me the biggest blunders are the D3/Q4 OpenGL screw-up and the FEAR problem (and who knows what else), which really took the oomph out of their launch.

EDIT: Forgot to mention a big R520 feature: HDR+MSAA
 
Unknown Soldier said:
It might be interesting to you .. but I agree with _xxx_

ATI dropped the ball by releasing a very late R520. As much as they want to try and recover costs on the R520 .. I think they'll do better by releasing the R580 now and dropping the price of the R520 by $100.

They dropped the ball and yet they expect people to support them. Ermm .. not today I won't.

The GTX 512MB is beating the crap outta the R520 .. why should I buy the R520?

I'm not the only one who thinks like this.

ATI dropped the ball .. if they want to pick it up again, they can do so by releasing the R580.

I've got a feeling though that ATI is only going to release the R580 later .. and by that time I expect Nvidia to release a 90nm GTX .. making it even faster than the R580. Way to go ATI.


US
Count of "Dropped the ball" = 3 :p
 
suryad said:
I think there is still too much we don't know about R580 to say that the 90nm part from Nvidia will be able to keep up with it or even surpass it...
Is anyone saying that?

You can get a decent idea of R580 performance by looking at the X1600XT. When it gets close to or surpasses the 6800GT/GS (without AA - I'm only considering pixel shaders, because that's what R580 brings to the table), then R580 will likely double G70's performance (I mean the original GTX). Basically, you need math-heavy shaders. We see this in FEAR and to a lesser degree in COD2. I can't see a 32-pipe 90nm 650MHz G7x part coming close in these situations.
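Napkin math behind that claim -- a sketch only, since the ALU counts and clocks here are the rumored/assumed figures from this thread, not confirmed specs:

```python
# Back-of-the-envelope pixel shader math throughput, in ALU-ops per second.
# All counts and clocks are rumored/assumed figures from this thread.
def alu_ops_per_sec(alus, clock_mhz):
    return alus * clock_mhz * 1e6

gtx  = alu_ops_per_sec(48, 430)  # original 7800 GTX: 24 pipes x 2 ALUs @ 430MHz
g7x  = alu_ops_per_sec(64, 650)  # hypothetical 32-pipe, 650MHz 90nm G7x
r580 = alu_ops_per_sec(96, 650)  # rumored 16-1-3-1 R580 at an assumed ~650MHz

print(f"R580 vs original GTX: {r580 / gtx:.1f}x")  # ~3x on paper
print(f"R580 vs 32-pipe G7x:  {r580 / g7x:.1f}x")  # ~1.5x in raw ALU ops
```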

There is one more thing to consider: XBox 360.

If there is any X-factor in how ATI will fare next year, it will be XBox 360 development influencing the PC space. While R580 doesn't have the unified architecture of Xenos, it should have similar pixel shader performance characteristics due to its architecture. COD2 is one example. I think XB360 will be the biggest driver of advanced shaders into PC games, especially regarding the use of dynamic branching and lots of math. Developers will finally have a real financial incentive for good graphics.
 
serenity said:
Count of "Dropped the ball" = 3 :p

I actually said it four times including my other post. :p Shame you didn't even have anything to add.

Mintmaster, I see what you're saying .. but people are interested in which is better. All they'll see is that Nvidia has a faster card than ATI. I'm still waiting for SC:CT, because the X1800XT walked the old GTX. All I know is it's gonna hurt ATI in the end.

Me, I'm waiting for the R580. But as far as I'm concerned, Nvidia has won this round.
 
Unknown Soldier said:
I actually said it four times including my other post. :p
You got my point. ;)
Unknown Soldier said:
Shame you didn't even have anything to add.
What did you add? We all know R520 is late, you are just stating (repeating) the obvious.
 
There is already an ATI email circulating around that states Xenos only has slightly more pixel shading power than the X800XT.

Whether this is an "on the average" or "all ALUs used for pixel processing" assessment is not clearly defined but I suspect the former is the case.

Just to expand a bit: a situation where 8-12 of Xenos's ALUs are dedicated to vertex work, leaving 36-40 ALUs available for pixel work, would seem to make the aforementioned comments make sense, especially taking into account that Xenos's ALUs are clocked at 500MHz while the X1800 XT's run at 625MHz. (Are both Xenos's and the X1800 XT's ALUs 5D capable... Vec4+scalar?) I'm assuming 8-12 vertex ALUs are enough to handle the vertex load, as in PC parts, because dedicating any more would only reduce the power available for pixel work.

If I understand the 16-1-3-1 number correctly to mean that each of R580's 16 pipes has 3 shader processors, each with 2 ALUs, then R580 should have 96 pixel shader ALUs that are always dedicated to pixel work.

Forgive me because I am a n00b, but it would seem to me that this would make for quite dissimilar pixel shader performance, even though there should be some parity in dynamic branching between Xenos and R580. As far as efficiency is concerned, both parts are ultra-threaded, so both should do well at keeping their available resources utilized.
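Putting my assumptions above into numbers (all of them rumored or guessed, per the caveats already given, and assuming comparable per-ALU width, which is exactly the Vec4+scalar question I raised):

```python
# ALU-ops per second under this post's assumptions (all rumored/guessed).
xenos_pixel = (48 - 10) * 500e6  # ~10 of 48 unified ALUs on vertex work, 500MHz
x1800xt     = (16 * 2) * 625e6   # 16 pipes x 2 ALUs at 625MHz
r580        = 96 * 650e6         # 96 dedicated pixel ALUs at an assumed ~650MHz

print(f"Xenos vs X1800 XT: {xenos_pixel / x1800xt:.2f}x")  # ~0.95x, same ballpark
print(f"R580 vs Xenos:     {r580 / xenos_pixel:.2f}x")     # ~3.3x raw pixel math
```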

In my view Xenos may indeed be somewhat catalytic in the use of lots of math and dynamic branching in shaders (perhaps RSX as well), but PC developers will have to be self-motivated to really take advantage of PC hardware... as always. Considering that R600 and G80 aren't THAT far off, and that both should be more capable than R580, it would seem that as soon as next year X360 ports will not be enough to take advantage of what PC GPUs can handle, so again developers will need to be motivated by other things (money) or by themselves (to show off... and make more money).

-----------------------------------------------------------------------------------------------

I do agree that G70 is going to have a tough time keeping up with R580. As I see it, it would be hard for 48 ALUs to best the 96 ALUs in R580, despite them being a little more capable (if they still are). A simply... FRIGHTENING clockrate for Nvidia HW would be required, while on the other end of the spectrum ATI would only need to raise the clocks of R580 a little to give Nvidia all kinds of headaches.

I've been thinking that if R580 is for real, then Nvidia would seem forced to increase its shader-processor-to-pipe ratio if it hopes to keep up without clocking the... ULTRA... sky high, if the necessary clocks could even be achieved. Just adding 1 more shader processor to each of the 24 pipes G70 already has would give Nvidia parity with ATI in the number of ALUs. The battle would then fall back on the IPC and clockrate of each part, which is the battle Nvidia would rather fight, as they have a leg up as far as IPC is concerned.
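Same napkin, for a sense of scale (again using the thread's rumored figures, not confirmed specs):

```python
# Clock a 48-ALU G7x would need to match 96 rumored R580 ALUs at 650MHz
# in raw ALU throughput, ignoring IPC (which favors Nvidia here).
parity_clock_mhz = 96 * 650 / 48
print(f"{parity_clock_mhz:.0f} MHz")  # 1300MHz -- the FRIGHTENING clockrate;
# one more shader processor per pipe (24 x 2 more ALUs = 96 total) is far cheaper
```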

If ATI surprised Nvidia, and it's too late in the game for such a part to be crafted (nothing suggests they had any clue about R580 - shameless fishing expedition), then the other option would seem to be SLI on a single board. But SLI... what? Only two GTXs would give Nvidia ALU parity with ATI's R580, and that option seems too expensive to be viable.

So I'm back to square one. I don't see where Nvidia has an answer to R580, as all of my ideas seem far-fetched... but then I can't really claim to know enough to be having any ideas in the first place :)
 