R520 = Disappointment

suryad said:
There is an xbitlabs article as well:
http://www.xbitlabs.com/articles/video/display/radeon-x1000.html

Hope this has not been posted already. I just don't understand why ATI decided not to have the same number of pipelines as Nvidia... I mean, that would put an even larger performance gap between ATI and Nvidia, wouldn't it?
ATI felt it was best to stick with 16 pipes and make them efficient.
Obviously it pays off well in the flow-control bench at TechReport.
http://techreport.com/reviews/2005q4/radeon-x1000/shadermark-shadow.gif
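The flow-control result above comes down to branch granularity: a SIMD GPU shades pixels in fixed-size batches, and if any pixel in a batch takes the expensive side of a branch, the whole batch pays for it. Here is a toy Python sketch of that effect; the batch sizes are rough figures quoted in 2005-era reviews (on the order of 16 pixels for R520 versus far coarser batches for G70) and are assumptions for illustration, not vendor specs:

```python
def shaded_cost(branch_mask, batch_size):
    """Count pixels that must run the expensive branch path when
    pixels are grouped into SIMD batches of `batch_size`."""
    cost = 0
    for i in range(0, len(branch_mask), batch_size):
        batch = branch_mask[i:i + batch_size]
        if any(batch):          # one divergent pixel drags along the whole batch
            cost += len(batch)
    return cost

# 1024 pixels where only the first quarter needs the expensive path:
mask = [True] * 256 + [False] * 768

print(shaded_cost(mask, 16))    # fine-grained batches -> 256
print(shaded_cost(mask, 1024))  # one coarse batch: everything pays -> 1024
```

With a spatially coherent branch like this, the fine-grained batching skips three quarters of the work while the coarse batch shades every pixel, which is one plausible reading of why the X1000 series does so well in dynamic-branching tests.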
 
Well, the R520 basically wins in Direct3D by a notable percentage, but loses in OpenGL by a similar percentage. This says to me that ATI has once again dropped the ball in OpenGL, and OpenGL is crucial for the way I use my PC.
 
Because ATI decided to spend the transistors on something else. BTW, I wonder how a hypothetical Rx chip would have performed had ATI simply implemented 8 R420 quads - it would fit into the R520's 320M budget with gates to spare.


I know people cringe at the mention of NV30, but there are enough parallels between it and R520 to give the latter the title of "NV30 done right".

Consider:

R520 is the biggest chip on the market (like NV30 was)
The transistor budget is used to implement more features/flexibility than the competitor
It has the same number of pipelines as the previous generation
It relies on high frequency to compete with a more-pipelined competitor
It uses a new manufacturing process and ran into foundry problems
Both use a dual-slot cooler to sustain the clock speed

I can't help but see these superficial architectural similarities between the two.
 
real men don't use silly operating systems like Linux ;)
Geeforcer said:
Because ATI decided to spend the transistors on something else. BTW, I wonder how a hypothetical Rx chip would have performed had ATI simply implemented 8 R420 quads - it would fit into the R520's 320M budget with gates to spare.


I know people cringe at the mention of NV30, but there are enough parallels between it and R520 to give the latter the title of "NV30 done right".

Consider:

R520 is the biggest chip on the market (like NV30 was)
The transistor budget is used to implement more features/flexibility than the competitor
It has the same number of pipelines as the previous generation
It relies on high frequency to compete with a more-pipelined competitor
It uses a new manufacturing process and ran into foundry problems
Both use a dual-slot cooler to sustain the clock speed

I can't help but see these superficial architectural similarities between the two.

And the X series has horrible DX9 performance... oh wait... sorry... got confused there for a second.
There is nothing similar between the two.
Only in your mind, young Padawan.
The FX was a DX8.1 chip with a DX9 engine duct-taped on.
 
radeonic2 said:
real men don't use silly operating systems like Linux ;)
Ah, I suppose because real men don't have any work to get done?

And the X series has horrible DX9 performance... oh wait... sorry... got confused there for a second.
There is nothing similar between the two.
Only in your mind, young Padawan.
The FX was a DX8.1 chip with a DX9 engine duct-taped on.
Er, he listed quite a lot there that was similar between the two. The key point is that the #1 thing that is not similar between the two is the thing that the NV30 was most known for: its poor shader performance.
 
Chalnoth said:
Ah, I suppose because real men don't have any work to get done?


Er, he listed quite a lot there that was similar between the two. The key point is that the #1 thing that is not similar between the two is the thing that the NV30 was most known for: its poor shader performance.
I can name quite a few people who make a good living on Windows OSes.
Ultimately, however, I was just giving you a hard time.
Dual-slot coolers are quite common, and the extra features the FX had, like extended pixel shader lengths, wouldn't have done much when it couldn't cope with even simple short shaders.
I'm unsure whether running into problems with the 0.09-micron process can be viewed the same as the FX's situation, since ATI had no trouble with the C1 chip.
 
radeonic2 said:
I can name quite a few people who make a good living on Windows OSes.
Ultimately, however, I was just giving you a hard time.
I know. But what I'm suggesting is that the work that I do (I'm a graduate student researcher) requires Linux. It is possible to use Cygwin, but it's rather inconvenient.
 
Did you guys even get the hkepc benchmarks?

AMD Athlon 64 FX-55 2.6GHz Socket 939
Gigabyte K8NXP-SLi nForce 4 SLi
Geil DDR 400 512MB x 2 (CL 2-2-2-5)
ATi Radeon X1800XT 256MB (625MHz/1.5GHz)
HIS Radeon X1800XL 256MB (500MHz/1GHz)
nVidia Geforce 7800GTX 256MB (430MHz/1.2GHz)
nVidia Geforce 7800GT 256MB (400MHz/1GHz)
Catalyst Version 8.173 (R520 Test Driver)
nVidia Forceware 81.84 (Beta Version)

RED means winner

x1800xtxl1dg.png


http://www.hkepc.com/hwdb/atix1800-10.htm

I can't read Japanese, but pictures and numbers are universal

I suggest you check out all the pics starting from page 1: http://www.hkepc.com/hwdb/atix1800-1.htm
 
radeonic2 said:
real men don't use silly operating systems like Linux
radeonic2 said:
And the X series has horrible DX9 performance.. oh wait.. sorry.. got confused there for a second.

Yes, you quite obviously did, since I have not said anything specific about performance, nothing except what was implied by the "done right", focusing instead on... how did I put it? Oh yes, "superficial architectural similarities".

There is nothing similar between the two.
Only in your mind, young Padawan.

Really, old master? Nothing at all? Because I could have sworn my list was for the most part factually accurate; the part about using the same number of pipelines as the previous-generation chip, in conjunction with high clock speed on a new process, to compete with a lower-clocked part with more pipelines on an older process is particularly relevant.
 
Chalnoth said:
I know. But what I'm suggesting is that the work that I do (I'm a graduate student researcher) requires Linux. It is possible to use Cygwin, but it's rather inconvenient.
I WAS MESSING WITH YOU!!!
I actually like to mess around with Linux when I'm bored.
I couldn't agree more about ATI drivers being one-sided.
Geeforcer said:

Yes, you quite obviously did, since I have not said anything specific about performance, nothing except what was implied by the "done right", focusing instead on... how did I put it? Oh yes, "superficial architectural similarities".


Really, old master? Nothing at all? Because I could have sworn my list was for the most part factually accurate; the part about using the same number of pipelines as the previous-generation chip, in conjunction with high clock speed on a new process, to compete with a lower-clocked part with more pipelines on an older process is particularly relevant.
The NV30 could never be "done right" because of the intentions Nvidia had with it.
I couldn't care less how many pipelines a graphics card has.
As long as it performs competitively, it could have 1 quad or 8 quads for all I care.
Judging by how TechReport's flow-control benchmark turns out, ATI definitely got some of SM3 right, as they said they would.
 
Regarding dual slot coolers:

I absolutely love the Arctic Cooling Silencer dual-slot cooler I bought for my 9700. It makes the thing significantly more overclockable, it's silent, and it reduced case temps noticeably. Can't beat that. And it's not like we're low on PCI slots these days; hell, I know people who have only a video card for their expansion cards. So the loss of a PCI slot you shouldn't use anyway (especially on AGP boards) isn't much of a loss at all.
 
suryad said:
There is an xbitlabs article as well:
http://www.xbitlabs.com/articles/video/display/radeon-x1000.html

Hope this has not been posted already. I just don't understand why ATI decided not to have the same number of pipelines as Nvidia... I mean, that would put an even larger performance gap between ATI and Nvidia, wouldn't it?

"Today we saw that the Shader Model 3.0 support was really done for good: just look how the RADEON X1600 XT manages to leave behind a far more expensive and enhanced GeForce 7800 GTX during dynamic branching and pixel shaders 3.0 processing."

...
 
Jima13 said:
"Today we saw that the Shader Model 3.0 support was really done for good: just look how the RADEON X1600 XT manages to leave behind a far more expensive and enhanced GeForce 7800 GTX during dynamic branching and pixel shaders 3.0 processing."

...

Too bad the X1600 is soundly defeated most of the time in real games. I think it's due to the goofy 4-ROP layout; everything ahead of the ROPs certainly looks beefy enough.
 
Geeforcer said:
Why exactly are you arguing with me again?
?
You claim that because it has the same number of pipes and simply clocks them higher, it is like the FX.
I claim that's bullshit.
:D
 
radeonic2 said:
?
You claim that because it has the same number of pipes and simply clocks them higher, it is like the FX.
I claim that's bullshit.
:D
UGH. I claim that there are similarities between the two in regard to pipelines, clocks, and process. Which is a fact.

Let's recap the conversation so far.

Me: "There are some similarities between NV30 and R520 design philosophy" /list
You: "There is nothing similar at all"
Me: "Really? What about..." /recap of the list
You: "I don't care about that"
Me: "Then why argue?"
You: "Because what you say is bullshit".

There is a term for this kind of argumentative strategy that I can't quite recall... although I do believe it starts with "t" and ends with "..ing".

 
Geeforcer said:
I know people cringe at the mention of NV30, but there are enough parallels between it and R520 to give the latter the title of "NV30 done right".

Consider:

R520 is the biggest chip on the market (like NV30 was)
The transistor budget is used to implement more features/flexibility than the competitor
It has the same number of pipelines as the previous generation
It relies on high frequency to compete with a more-pipelined competitor
It uses a new manufacturing process and ran into foundry problems
Both use a dual-slot cooler to sustain the clock speed

I can't help but see these superficial architectural similarities between the two.

I think you get a lot of things right, that there are some similarities, and that R520 is "NV30 done right," but I also think there is so much complexity beyond these superficial facts that the comparison may not be perfectly apt. However, I would not be surprised if it turns out that R520 doesn't quite have the oomph to carry out what it proposes, and it will be the tripling of the shader processors in the R580 that comes to the rescue.

It's a very interesting way of looking at it. :smile:
 
Along those lines, what would ATI have to do (besides immediate availability) to hit a home run? Would the R580 have to be released sooner rather than later, i.e., in December?
 
wireframe said:
I think you get a lot of things right, that there are some similarities, and that R520 is "NV30 done right," but I also think there is so much complexity beyond these superficial facts that the comparison may not be perfectly apt. However, I would not be surprised if it turns out that R520 doesn't quite have the oomph to carry out what it proposes, and it will be the tripling of the shader processors in the R580 that comes to the rescue.

It's a very interesting way of looking at it. :smile:
QFMFT
 