R520 = Disappointment

You know, as I read more today...

You can't pick 1600x1200 with 4x FSAA and 8x or 16x AF, ignore everything else, and call it a day.

Example... read through this review..

http://www.trustedreviews.com/article.aspx?head=15&page=4377

This is the pattern in all the reviews that show more than just one setting. What about 1280x1024? What about higher than 1600x1200? So ATi hand-tuned the drivers for 1600x1200, knowing that this is the setting they would *recommend* reviewers use.

Look at Xbit Labs... the R520 gets whupped in most of the synthetic tests except vertex shaders.

If you only want to look at 1600x1200, then fine. But there is a whole lot more to gaming than hand-picking one resolution and declaring a winner.
 
We need to see some screenshots of image 'quality' in reviews as well. At least then we could see whether there's a noticeable quality difference or whether it's just a six-of-one, half-a-dozen-of-the-other sort of thing.
 
Hellbinder said:
You can't pick 1600x1200 with 4x FSAA and 8x or 16x AF, ignore everything else, and call it a day.

I agree, but I believe the theory is that "if it's playable at 1600x1200 with 4x AA and 8x AF, then it will be playable at any settings below that." Of course you should still show it, so people can make up their own minds and play with the numbers against what they consider playable.

I, for one, am never terribly interested in resolutions beyond 1600x1200 for two reasons. One, I don't think I will be needing a resolution higher than that for a long time. And, two, we often see numbers above these resolutions where you have a "winner," but if you look at the absolute scores you see that every competitor is actually a loser; some just lose less than others, I suppose.

Example... read through this review..

http://www.trustedreviews.com/article.aspx?head=15&page=4377

This is the pattern in all the reviews that show more than just one setting. What about 1280x1024? What about higher than 1600x1200? So ATi hand-tuned the drivers for 1600x1200, knowing that this is the setting they would *recommend* reviewers use.

Interesting review, and the first time I've seen it. They have some strange results for the X1800XT in there. Look at the results for Far Cry, for example. If you removed the GTX comparison scores, you would be hard-pressed not to think the XT was CPU-limited in all the tests. It hardly budges, yet it clearly has some scores that are lower than the GTX's.

If you only want to look at 1600x1200, then fine. But there is a whole lot more to gaming than hand-picking one resolution and declaring a winner.

You are absolutely correct, but I am not sure why you are moaning about this, and about the linked review in particular. They do use multiple resolutions although, strangely, they don't seem to use all of them for all tests. The XT vs. GTX battle is limited to three resolutions while the XL gets a fuller treatment. To their credit, I think they used the three most popular resolutions for the XT vs. GTX comparison.
 
Well, it's a pure FP32 card that beats or matches the 7800GTX where it matters, and this release pushes prices down, so I'm satisfied :D
I won't buy it, but as I said, it is pushing down the prices.
 
swaaye said:
Too bad the X1600 is defeated soundly most of the time in real games. I think it's due to the goofy 4-ROP layout. Everything before that stage in the chip sure looks beefy enough.
I think it's more due to having only 4 texture units. Typically there are many texture ops per pixel, so that seems like the more likely bottleneck. Especially with AF, I can see the lack of texturing power really hurting performance in today's games. Beyond that, I would look at the drivers.
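
Just to put rough numbers on that, here's a back-of-the-envelope sketch (the clock and the AF level are assumed purely for illustration, not quoted specs):

```python
# Back-of-the-envelope sketch with assumed numbers (not vendor specs):
# how many clocks does a 4-TMU part spend per pixel once AF is cranked up?

core_clock_mhz = 600                  # assumed core clock, for illustration only
texture_units = 4                     # the X1600-class TMU count under discussion
af_samples = 8                        # 8x anisotropic filtering, worst case
taps_per_af_sample = 2                # trilinear = two bilinear taps per AF sample

lookups_per_pixel = af_samples * taps_per_af_sample      # 16 bilinear lookups
clocks_per_pixel = lookups_per_pixel / texture_units     # 4 clocks of texturing

pixel_rate_mpix = core_clock_mhz / clocks_per_pixel
print(f"{clocks_per_pixel:.0f} clocks/pixel of texturing, "
      f"~{pixel_rate_mpix:.0f} Mpixel/s for one heavily filtered layer")
```

Add more texture layers and those clocks stack up, while the 4 ROPs only become the limit when hardly any filtering is going on.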

Oh well, I hope NVidia releases a cheap, G70-derived product for AGP.
 
Unit01 said:
Well, it's a pure FP32 card that beats or matches the 7800GTX where it matters, and this release pushes prices down, so I'm satisfied :D
I won't buy it, but as I said, it is pushing down the prices.

I hope you didn't fall too deep into ATi's clever marketing of "Always ON 32-bit processing." Heh. That is clever word-shaping for "we don't support partial precision," and I'll let you be the judge of which has more options. Maybe FP16 is horrible, but I would really like to see an FP16-tuned title, just to see what it can do and what extra performance can be eked out of an architecture that lets you do some "free stuff" while that is happening.

I just wanted to point out that I thought it was funny how ATi uses "always on 32-bit" and you gave me the opportunity. I thought I was reading marketing for an ADSL or Cable modem for a while. :smile:
 
About resolutions, I think 1920x1200 should be added to the list of "standard" ones to test. Now that widescreen finally seems to be taking off, I think that will be a very popular resolution at the high end.
 
wireframe said:
I hope you didn't fall too deep into ATi's clever marketing of "Always ON 32-bit processing." Heh. That is clever word-shaping for "we don't support partial precision," and I'll let you be the judge of which has more options. Maybe FP16 is horrible, but I would really like to see an FP16-tuned title, just to see what it can do and what extra performance can be eked out of an architecture that lets you do some "free stuff" while that is happening.

I just wanted to point out that I thought it was funny how ATi uses "always on 32-bit" and you gave me the opportunity. I thought I was reading marketing for an ADSL or Cable modem for a while. :smile:
Agreed. :smile:

The "free" FP16 normalize, especially, seems useful.
 
Subtlesnake said:
The Hexus benchmarks show the X1800 leading in Far Cry, FEAR and Battlefield 2. The Tech Report benchmarks add Splinter Cell to this list. The Driver Heaven benchmarks add Half-Life 2: Lost Coast, CS:S - VST, NFSU 2 and Fable. I don't understand how people can say the X1800 is losing in most games.

I think that people who shell out the money for an X1800 are going to be looking at a lot more than a few initial frame-rate benchmarks, such as the feature set pertaining to IQ and the likelihood that R5x0-specific driver revisions will mature substantially over the next few months. I personally think there's much under the R5x0 hood that has yet to be admired at this point but certainly will be in upcoming drivers and game engines.

I think the folks who think "ATi lost this round" based solely on a few very inconclusive frame-rate benches done so far by a mere handful of people, with older software and first-run drivers, are either smoking crack or are victims of their own apparently very low standards... ;) I mean, the truth of the matter is simply this: when framerates in the 60s and up are achievable at most every resolution, then "it's the IQ, stupid" is what is going to determine the ultimate winner of this round and all the rounds going forward. We are fast approaching the date when frame-rate-oriented hardware reviews, the ones that concentrate on frame-rate contests at the expense of everything else about 3d that 3d customers very much want in addition to frame rate, will be considered second-rate anachronisms. The 3d market is maturing quickly and it's time for some of the web sites that attempt to cover 3d to grow up as well. The market is going to mature with or without such sites.
 
WaltC said:
We are fast approaching the date when frame-rate-oriented hardware reviews, the ones that concentrate on frame-rate contests at the expense of everything else about 3d that 3d customers very much want in addition to frame rate, will be considered second-rate anachronisms.
Not gonna happen, Walt. Framerate is about the only thing that can be easily measured objectively.
 
Chalnoth said:
Well, not all of it, but most. With respect to this, there are a few things to keep in mind:
1. ATI's algorithm is not nVidia's: they may use different sample positioning or a different number of samples.
Then why were you comparing them in the first place?
2. ATI's architecture, since it doesn't have hardwired texture latency hiding, is likely to do better in situations where there's a large number of samples over a small area, which is the typical case for anisotropic filtering.
Huh? Of course the HW has texture latency hiding; do you really think it would perform so well if texture latency *wasn't* hidden?
3. nVidia's GeForce4 had an additional latency associated with enabling anisotropic filtering, likely related to the circuitry that was calculating the degree of anisotropy, that resulted in a reduction in performance for face-on surfaces with anisotropic enabled.
That's great. So why do you still insist that your GeForce 4 4200 took less hit than a 9700 at 8x AF?
That said, I'll describe exactly what I did to perform this test. It was very simple: I created a new level in UnrealEd, a simple rectangular room with a rather low ceiling. The player was placed into the level at one corner, facing the opposite corner. This meant the floor and ceiling were completely horizontal and the opposing walls completely vertical. This highly synthetic scenario resulted in a very large portion of the screen having to use the maximum degree of anisotropy available (I don't have the level I used around any longer, but I would hazard a guess of 30%-40%).
And you're absolutely certain that application specific optimizations didn't come into play? Did you try to create your own test app?
 
wireframe said:
I hope you didn't fall too deep into ATi's clever marketing of "Always ON 32-bit processing." Heh. That is clever word-shaping for "we don't support partial precision," and I'll let you be the judge of which has more options. Maybe FP16 is horrible, but I would really like to see an FP16-tuned title, just to see what it can do and what extra performance can be eked out of an architecture that lets you do some "free stuff" while that is happening.

I think I've never really understood what "Free FP16 normalize" actually means. What does it mean? Does ATI have "Free FP32 normalize" now in the X1000 family?

It's always seemed to me that folks who insist that somehow ATI would be faster if they had FP16 have never understood what ATI is trying to tell them on that point. Doesn't a factor have to be the bottleneck before increasing it will increase performance?
 
geo said:
It's always seemed to me that folks who insist that somehow ATI would be faster if they had FP16 have never understood what ATI is trying to tell them on that point. Doesn't a factor have to be the bottleneck before increasing it will increase performance?

Sure, but I am talking about the Geforce FX here, mainly. I'm going to go out on a limb here for you, Geo, and I will most likely be killed for it, so at least try to savor the moment when someone said this:

I think a game like Half-Life 2 could be made to run entirely with partial-precision SM 2.0, and it would look and play the same; even Geforce FX users would be able to run it in DX9 mode.

There, now that I have said it and am smoking my final cigarette before the lynching starts, I will sit back and ponder exactly why I even find that thought exciting.

PS: It's true that with the NV40, FP16 is no longer a major performance booster, but it's still something that is going unused because it "makes things complicated." I would just like to see what could really be done with that precision level, rather than the constant banter about who has more bits, "always on," etc.
 
wireframe said:
I think a game like Half-Life 2 could be made to run entirely with partial-precision SM 2.0, and it would look and play the same; even Geforce FX users would be able to run it in DX9 mode.

Wasn't that proved not to be the case when Far Cry and HL2 got partial precision in places it shouldn't have been during a patch? There were a couple of cases of quite noticeable degradation in IQ. IIRC, there was glass in HL2 and there were lab floors in Far Cry that looked terrible with PP and were later fixed.
 
wireframe said:
Sure, but I am talking about the Geforce FX here, mainly. I'm going to go out on a limb here for you, Geo, and I will most likely be killed for it, so at least try to savor the moment when someone said this:

I think a game like Half-Life 2 could be made to run entirely with partial-precision SM 2.0, and it would look and play the same; even Geforce FX users would be able to run it in DX9 mode.
That experiment was done long ago, and I believe the end result was that the 5800 Ultra was on par with the X300... not exactly ideal.
 
Bouncing Zabaglione Bros. said:
Wasn't that proved not to be the case when Far Cry and HL2 got partial precision in places it shouldn't have been during a patch? There were a couple of cases of quite noticeable degradation in IQ. IIRC, there was glass in HL2 and there were lab floors in Far Cry that looked terrible with PP and were later fixed.

I'm pretty sure the lab floor in Far Cry was a different problem, since that issue has vanished on the FX as well. However, this doesn't exactly prove anything. Of course you can have situations where you will get a clash; that's why I said a fully tuned FP16 app. Basically, what I am saying is that if Valve had been forced to create Half-Life 2 using only FP16, they would have ended up with the same product. Now, the way they have actually gone about doing things, I am sure it could easily be demonstrated that simply changing the precision will muck it up. Then again, I could be totally wrong, but I don't think I am.

It's not really important and has no real relevance to the future of 3D, as partial precision is dead going forward. It's already here and it works, though, and it sure would be fun to see what a game could look like running on a "measly" FP16. I am sure a lot of people would not want to accept the results after all the 16 vs. 24 vs. 32-bit debates. Heh.
 
wireframe said:
Sure, but I am talking about the Geforce FX here, mainly. I'm going to go out on a limb here for you, Geo, and I will most likely be killed for it, so at least try to savor the moment when someone said this:

I promise to have a drink or three at your wake. The good stuff too. ;)

Someone already mentioned a few "misadventures" with FP16 quality. And it seems to me that was "the rub". There was no way to easily do it without impacting quality. It had to be analyzed on a case by case by case by case by. . .well, you get the picture. It wasn't how long it took to do each single one, which apparently was relatively trivial. . .it was that 'n' in the calculation that was the killer. Developers could rarely make the gain worth the pain, assuming they cared about quality and didn't just do a "search and replace". I had a conversation with a developer about it some months back, I believe, somewhere around here.
 
geo said:
I think I've never really understood what "Free FP16 normalize" actually means. What does it mean? Does ATI have "Free FP32 normalize" now in the X1000 family?

It's always seemed to me that folks who insist that somehow ATI would be faster if they had FP16 have never understood what ATI is trying to tell them on that point. Doesn't a factor have to be the bottleneck before increasing it will increase performance?
It means there's a dedicated unit (per pipe) for executing normalize instructions with FP16 precision. (To normalize a vector means to scale it to a length of 1, quite commonly needed to give correct results.)

Whether ATI should have included something similar, I don't know. At FP32 a dedicated unit would probably be too expensive in transistors to be worth it.
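
To put the operation itself in concrete terms, here's a minimal sketch in plain Python (nothing hardware-specific); the dedicated unit effectively does this at FP16 precision:

```python
import math

# Minimal sketch of a vector normalize, nothing hardware-specific:
# scale a 3-component vector so its length becomes exactly 1.
def normalize(v):
    length = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / length, v[1] / length, v[2] / length)

print(normalize((3.0, 0.0, 4.0)))   # (0.6, 0.0, 0.8) -- length is now 1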
 
geo said:
Someone already mentioned a few "misadventures" with FP16 quality. And it seems to me that was "the rub". There was no way to easily do it without impacting quality. It had to be analyzed on a case by case by case by case by. . .well, you get the picture. It wasn't how long it took to do each single one, which apparently was relatively trivial. . .it was that 'n' in the calculation that was the killer. Developers could rarely make the gain worth the pain, assuming they cared about quality and didn't just do a "search and replace". I had a conversation with a developer about it some months back, I believe, somewhere around here.

Well, sure, it won't be as pleasant to work with. OK, so I first had this thought when the most horrid "R300 vs. NV30" war was raging, and I recently had it again thanks to my rediscovery of gaming consoles (basically, the PS3 and Xbox 360 got me excited enough to pick up a PS2 to see what it was all about. I was impressed). These little "toys" do an amazing job of delivering quality with nowhere near the hardware we are accustomed to under our desks. This is mainly because they are coded "to the metal," with developers squeezing out the goods where everyone thought there were none. On the PC we rarely see this; instead, developers are spoiled with more powerful hardware that makes their job easier. Not so on a closed-system console. So, this provoked the thought again, and I am sure it's possible. Well, almost...

Now I think I will proceed to the sequence where I put out my cigarette, look bravely out over the gathered crowd, and turn in for a "little slice of death".
 
wireframe said:
I think a game like Half-Life 2 could be made to run entirely with partial-precision SM 2.0, and it would look and play the same; even Geforce FX users would be able to run it in DX9 mode.
Well, the NV3x parts, particularly the NV30, really aren't that great even at FP16 performance. And you really don't want to use FP16 for texture coordinates, in particular.
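
To illustrate that last point with made-up values: FP16 only carries a 10-bit mantissa, so a heavily tiled texture coordinate stored at partial precision can drift by several texels:

```python
import numpy as np

u32 = np.float32(33.333)        # a heavily tiled (wrapped) texture coordinate
u16 = np.float16(u32)           # the same value at partial precision
texture_size = 1024             # texels along one axis, for illustration
drift_texels = abs(float(u32) - float(u16)) * texture_size
print(float(u16), drift_texels) # ~33.344, roughly ten texels of drift
```

That kind of error is harmless in a colour, but it's enough to visibly shift or shimmer a texture lookup, which is why coordinate and position math is usually kept at full precision.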
 