R520 = Disappointment

geo said:
Someone already mentioned a few "misadventures" with FP16 quality. And it seems to me that was "the rub". There was no way to easily do it without impacting quality. It had to be analyzed on a case by case by case by case by. . .well, you get the picture. It wasn't how long it took to do each single one, which apparently was relatively trivial. . .it was that 'n' in the calculation that was the killer.
I expect that shader libraries will make this a relative non-issue. Basically, FP16 fails when it has to deal with information of a specific type (like texture addresses), so it should be very easy for a robust shader library to properly optimize for FP16 usage while still being certain to maintain optimal image quality.
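As a rough illustration of why texture addressing is the classic FP16 failure case, here is a minimal sketch (my own, not from the thread) using numpy's float16 on the CPU as a stand-in for the GPU's FP16 format, assuming a hypothetical 4096-texel texture:

```python
import numpy as np

# Hypothetical 4096-texel texture with normalized coordinates in [0, 1).
# float16 has an 11-bit significand, so in the upper half of [0, 1) it cannot
# distinguish steps smaller than 1/2048 -- too coarse for 1/4096 texel spacing.
texture_width = 4096
exact_centers = (np.arange(texture_width) + 0.5) / texture_width  # exact in float64

for name, dtype in (("FP16", np.float16), ("FP32", np.float32)):
    rounded = exact_centers.astype(dtype).astype(np.float64)
    texel_hit = np.floor(rounded * texture_width).astype(int)
    wrong = np.count_nonzero(texel_hit != np.arange(texture_width))
    print(f"{name}: {wrong} of {texture_width} texel addresses land on the wrong texel")
```

With FP32 every address comes back exact; with FP16 a large chunk of the addresses collapse onto neighboring texels, which shows up on screen as visible artifacts rather than a subtle precision loss.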

The problem with implementing partial precision in games like Far Cry or Half-Life 2 is simply that these games have thousands of shaders. It's infeasible to go through each of those shaders by hand and check for situations where FP16 will break. So the developers either turn FP16 on everywhere and suffer quality problems, or use full precision everywhere and lose performance (on architectures like the NV3x and NV4x).

Now, once shader libraries make it to the forefront (and they will, because artists are not programmers, and you want artists to be able to make custom shaders), testing for situations where FP16 breaks will suddenly become much, much easier. First of all, the actual code pieces will be much shorter, making it easier for programmers to simply examine the code and find the bad spots. Secondly, only the library pieces need to be examined, not every shader built from them, to see whether FP16 will be a problem.
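To make the "only the pieces need auditing" point concrete, here is a toy sketch (purely hypothetical structure, not any real engine's or library's API): each short library piece carries the result of a one-time FP16 audit, and shaders assembled from the pieces pick up the right precision automatically.

```python
# Hypothetical shader-library pieces, each audited once for FP16 safety.
LIBRARY = {
    "diffuse_lighting": {"code": "dot(N, L) * albedo",        "fp16_safe": True},
    "specular_phong":   {"code": "pow(dot(R, V), shininess)", "fp16_safe": True},
    "parallax_offset":  {"code": "uv + height * view.xy",     "fp16_safe": False},  # texture address math
}

def compose_shader(piece_names):
    """Assemble pseudo-shader source, choosing a precision hint per audited piece."""
    lines = []
    for name in piece_names:
        piece = LIBRARY[name]
        precision = "half" if piece["fp16_safe"] else "float"
        lines.append(f"{precision} result_{name} = {piece['code']};")
    return "\n".join(lines)

print(compose_shader(["diffuse_lighting", "parallax_offset", "specular_phong"]))
```

The audit happens once per piece instead of once per shader, which is where the 'n' in geo's calculation stops being the killer.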

Side comment: yes, it is possible that very long shaders will also cause issues, but I suspect they won't in practice, since very long shaders, as they relate to games, will likely be combining incoherent data sets that are not prone to banding artifacts. That clearly won't be the case for recursive algorithms, but those again are simple pieces of code and would be taken care of within the shader library.
 
Chalnoth said:
Not gonna happen, Walt. Framerate is about the only thing that can be easily measured objectively.

Yes, but when framerate was initially important several years ago there was a good reason for it--if framerates were too low or inconsistent, games often became unplayable and the "suspension of disbelief" necessary to enjoy them was shattered. The situation today is much different, at least on the higher end of the hardware spectrum. Raw framerate performance across the board is so good today versus even five years ago that it's scarcely the issue it once was. Today, frame-rate comparisons serve mindless marketing schemes more than any other purpose. Does it really matter if there's a 10% frame-rate differential if the lowest frame rate is already 80-100fps? I think not.

That said, IQ improvements will continue to stress the hardware, of course, making frame rate under those conditions a lot more meaningful, but only under a much narrower set of conditions than has traditionally been the case. I'm of the opinion, as I've been for a long time, that hardware "reviews" centered chiefly on frame-rate comparisons are of questionable value to consumers in terms of the benefit of the information they impart. I think we still see the frame-rate-only benchmark pursued for two primary reasons: it's much simpler to do a frame-rate benchmark "review" than it is to do an IQ review, and it often suits the IHV to have its product appear at the "top" of the benchmark in terms of frame rate, because even if the general IQ is at the bottom, that part of the equation is never revealed in such a review. IE, good frame rates do not always equal good IQ.

Yet it is both acceptable IQ and acceptable frame rate that a consumer is interested in, isn't it? 3d is, of course, a visual medium, and so I think consumers lose whenever frame rate is examined at the expense of IQ. IMO, a review isn't really a review unless IQ is examined thoroughly. Mindlessly printing nothing except the numbers some canned benchmarks spit out, without also examining the kind of IQ a product delivers and its impact on that product's performance, is worthless to the consumer. My opinion, of course...
 
Just wait until really shader-heavy games make it to the forefront, Walt. Framerate will be more important for next year's crop of games than it was for this year's.

And since it's still impossible to objectively measure image quality, that area always has been and always will be the more murky area.

For example, higher resolutions are an improvement in image quality, but so is better anti-aliasing. If one architecture is better at one than the other, who's to say which offers the best image quality?
 
Chalnoth said:
Now, once shader libraries make it to the forefront (and they will, because artists are not programmers, and you want artists to be able to make custom shaders), testing for situations where FP16 breaks will suddenly become much, much easier. First of all, the actual code pieces will be much shorter, making it easier for programmers to simply examine the code and find the bad spots. Secondly, only the library pieces need to be examined, not every shader built from them, to see whether FP16 will be a problem.

I was wondering whether PP was simply going to fade away with the advent of G70 and R520, with developers no longer bothering with older, weaker hardware that isn't as powerful in shaders.

Why is it taking so long for such libraries to appear? Surely they would have been more useful to have when NV3x and NV4x were in the forefront and in need of such advantages?
 
Chalnoth said:
Just wait until really shader-heavy games make it to the forefront, Walt. Framerate will be more important for next year's crop of games than it was for this year's.

And since it's still impossible to objectively measure image quality, that area always has been and always will be the more murky area.

For example, higher resolutions are an improvement in image quality, but so is better anti-aliasing. If one architecture is better at one than the other, who's to say which offers the best image quality?

Yes, I think I mentioned the fact that as IQ improves it will still be important to look at frame rates--under those conditions. I mean, what is the whole purpose of moving to shader architectures anyway, if not IQ at the pixel level?

You keep talking about objectivity, which leads me to ask how "objective" it is to experience a difference between playing a game at 100fps and at 90fps. How "objective" would such an experience be? My answer: I do not believe a player would notice the difference in gameplay. OTOH, most people can easily and objectively tell the difference between playing with FSAA on and off--just from what they can see on the screen--and even if the frame rate drops, as it often does when FSAA is turned on, most people prefer the IQ improvement FSAA brings to the higher frame rate they'd get with FSAA off, since without FSAA the game won't often play any "faster" from the player's point of view at all. So you can see that frame-rate comparisons can easily be just as subjective as IQ comparisons.

Of what value is it to report "100fps" if a player playing the game at "90fps" can objectively experience no difference between the two frame rates? So then, not only are raw frame-rate numbers anything but objective, they often do not impart any useful information at all--beyond letting a consumer know that a given product is likely to be able to run his 3d games at frame rates suitable for smooth and immersive gameplay.
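For concreteness, here is the arithmetic behind that 90-vs-100fps claim as a quick sketch (the figures are just the example numbers used above):

```python
# Per-frame time at a given frame rate, in milliseconds.
def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

for fps in (90, 100):
    print(f"{fps} fps -> {frame_time_ms(fps):.2f} ms per frame")

# The per-frame gap between the two: roughly one millisecond.
print(f"difference: {frame_time_ms(90) - frame_time_ms(100):.2f} ms per frame")
```

A 10fps gap at that level works out to about 1.1ms per frame, which is the kind of difference that shows up on a graph but not in the hands of a player.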

IE, even if the R5x0's raw frame rates are indeed higher than the G7x0's, if the frame rate delivered by the G7x0 is adequate to provide smooth gameplay then the difference will not qualitatively affect the consumer in a meaningful way. So it will be the IQ delivered by the G7x0 versus the R5x0 that will, in the end, represent the most meaningful and also the most objectively quantifiable difference to the consumer. IMO, of course...;)
 
Chalnoth said:
And since it's still impossible to objectively measure image quality, that area always has been and always will be the more murky area.
This is only true to a point...

Drivers and hardware absolutely MUST be compared on a fairer and more even playing field. The COD2 demo, for example, had an issue where a totally different set of textures was used at 1600x1200 because the video RAM was incorrectly identified as 128MB rather than 256MB at that one resolution. On the flip side, shadows, bloom and heat-distortion effects were totally disabled by default for the other IHV. The only luck here was that the source doing the reporting was wise enough to explore, analyze and revise the benchmark numbers to give a clearer picture of things. This step is missing from most review sources.

The bare minimum that needs to happen is:
1) Lossless (BMP or PNG) static screenshots need to be provided for consumer review, at close range and far range for each suite tested, in order to see mipmapping, AA and texture quality variances.
2) Any timedemos or benchmark scripts need to be provided/linked and hosted for consumer review. After all, consumers are interested in what's actually being tested.
3) *Optional but recommended* Static screenshot deltas (greyscale) to give a quantitative view of just how large the variance in IQ really is (a rough sketch of how to produce one follows below).
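A minimal sketch of what such a delta might look like in practice, assuming Python with numpy and Pillow and two hypothetical same-resolution captures, shot_a.png and shot_b.png, taken at the same frame of the same timedemo:

```python
import numpy as np
from PIL import Image

# Hypothetical screenshots from the two cards: same frame, same settings,
# same resolution (the subtraction below assumes matching dimensions).
a = np.asarray(Image.open("shot_a.png").convert("L"), dtype=np.int16)
b = np.asarray(Image.open("shot_b.png").convert("L"), dtype=np.int16)

delta = np.abs(a - b).astype(np.uint8)    # per-pixel greyscale difference
Image.fromarray(delta).save("delta.png")  # black = identical, brighter = bigger IQ variance

# A couple of numbers to accompany the delta image:
print(f"mean absolute difference: {delta.mean():.2f} / 255")
print(f"pixels that differ at all: {100.0 * np.count_nonzero(delta) / delta.size:.1f}%")
```

It's crude (it says nothing about which image is "right"), but it gives readers a quantitative starting point instead of asking them to eyeball two compressed screenshots.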

I agree with Walt that framerate graphs alone serve little to no purpose 99% of the time. Without the means provided above, a consumer should find no value in them.
 
wireframe said:
I'm pretty sure the lab floor in Far Cry was another problem, and I'm pretty sure it has vanished on the FX as well. However, this doesn't exactly prove anything. Of course you can have a situation where you will get a clash. That's why I said a fully tuned FP16 app. Basically, what I am saying is that if Valve were forced to create Half-Life 2 using only FP16, they would get the same product. Now, the way they have gone about doing things, I am sure it could easily be demonstrated that simply changing the precision will muck it up. Then again, I could be totally wrong, but I don't think I am.

It's not really important and really has no relevance to the future of 3D as partial precision is dead. It is already here and it works, but it sure would be fun to see what a game could look like running a "measly fp16". I am sure a lot of people would not want to accept the results after all the 16 v 24 v 32 bit debates. heh.

The lab floor in Far Cry was related to normalized cubemaps; I am almost 100% certain of that. You can run Far Cry entirely in FP16 precision and it looks virtually identical to FP24 or FP32 precision using the 9700 Pro shader pathways.
 
Chalnoth said:
Well, the R520 basically wins in Direct3D by a notable percentage, but loses in OpenGL by a similar percentage. This says to me that ATI has once again dropped the ball in OpenGL, and OpenGL is crucial for the way I use my PC.

That is a fair summation. ATi do have a weakness in OpenGL speed that, while not debilitatingly bad, does cause them to lose to nVidia parts. While I'd love to see that fixed, I suspect the fact that the vast bulk of titles are DirectX-based makes it hard to justify putting the engineering resources into fixing the issue.
 
Hellbinder said:
You know, as I read more today...

You can't pick 1600x1200 with 4x FSAA and 8x or 16x AF, ignore everything else, and call it a day.

Example... read through this review..

http://www.trustedreviews.com/article.aspx?head=15&page=4377

This is the pattern in all reviews that show more than just one setting. What about 1280x1024? What about higher than 1600x1200? So ATi hand-tuned the drivers for 1600x1200, knowing that this is the setting they would *recommend* reviewers use.

Look at Xbit Labs... The R520 gets whooped in most of the synthetic tests except vertex shaders.

If you want to only look at 1600x1200 then fine... But there is a whole lot more to gaming than hand-picking one resolution and declaring a winner.


Thanks, that is a really good link. The different resolutions tell a whole new story.
 
The high-resolution AA/AF results could easily be explained by bandwidth discrepancies, IMO. The 7800GTX also seems to have better pixel shader performance due to stronger ALUs. Also keep in mind Nvidia's ALUs are probably at their best when not using AF. It's fair to say, however, that most people who buy hardware such as the 7800GTX/X1800XT expect to use these features at such resolutions. But the lower resolutions do indeed help show the bandwidth advantage.
 
Hellbinder said:
You know, as I read more today...

You can't pick 1600x1200 with 4x FSAA and 8x or 16x AF, ignore everything else, and call it a day.

Example... read through this review..

http://www.trustedreviews.com/article.aspx?head=15&page=4377
I'm not sure you read the review thoroughly.

http://www.trustedreviews.com/article.aspx?head=15&page=4386
1800 XL beats 7800 GT at pretty much all settings in HL2.

http://www.trustedreviews.com/article.aspx?head=15&page=4387
1800 XT/XL win pretty much all benchmarks in 3DMark05.

http://www.trustedreviews.com/article.aspx?head=15&page=4388
1800 XT/XL win pretty much all the non-CPU limited cases in 3DMark03.
This is the pattern in all reviews that show more than just one setting. What about 1280x1024? What about higher than 1600x1200? So ATi hand-tuned the drivers for 1600x1200, knowing that this is the setting they would *recommend* reviewers use.
Actually, we hand tuned for 1597x1203 but, alas, no one used that mode :(
If you want to only look at 1600x1200 then fine... But there is a whole lot more to gaming than hand-picking one resolution and declaring a winner.
If only they'd chosen 1597x1203, then we'd see who's boss! :LOL:
 
In regards to 3dmark03: what game test in it is CPU limited other than GT1? I have never encountered a problem with 3dmark03 scaling with GPU performance in GT2, GT3 and GT4, even without AA/AF. As a matter of fact, I find 3dmark03 scales with GPUs more than 3dmark05 does.
 
ChrisRay said:
In regards to 3dmark03: what game test in it is CPU limited other than GT1? I have never encountered a problem with 3dmark03 scaling with GPU performance in GT2, GT3 and GT4, even without AA/AF. As a matter of fact, I find 3dmark03 scales with GPUs more than 3dmark05 does.
GT2/3 can be rather CPU limited. I can't even get close to the fastest CPUs with my 3.4 GHz P4 in those tests. According to that site, the 7800 GTX leads by 4.6% at 10x7, but at 12x10 the lead falls to 2%, then to 1.4% at 16x12.
 
I'm gonna have to say I disagree with that. GT2/GT3 have shown huge performance boosts on my A64 3800+ from adding a second GPU or increasing core clocks, on either my 6800GT SLI setup or my 7800GTX SLI setup, and they still improve by healthy margins with overclocks in SLI mode. The resolution scaling of 3dmark03 also tends to show this. There might be some CPU limitation there, but the tests respond far too well to GPU clock increases (or to adding a second GPU in multi-GPU configs) for me to believe it's a CPU-limited environment in those tests.

Either way, we can return to our ATI versus Nvidia discussions now. ;)
 
tEd said:
Benchmarks are misleading IMO. Quality-wise ATI is superior; of course the numbers don't reflect that.

I've read 3-4 reviews, then I stopped reading. It's a waste of my time. If reviewers can't get their act together, why even make reviews?

How does the quality show up in plain numbers? That's the main thing gaming freaks look at first. They don't give a damn if you have better AA or some explosions are brighter... when you are running a fast-paced action game at 200fps, you don't have time to check out these things. And again, if the game does not run at 200fps, that's baaaad... your mate has already headshotted you.

(*cough* Matrox Parhelia *cough*)
 
OpenGL guy said:
Actually, we hand tuned for 1597x1203 but, alas, no one used that mode :(

If only they'd chosen 1597x1203, then we'd see who's boss! :LOL:

It's great that you hand tune resolutions, but I gotta ask, what is this resolution for? I've never seen that before. It's so...odd.

Ok, I get it, it's a joke. Went right over my head. Doh!
 
OpenGL guy said:
I'm not sure you read the review thoroughly.
...
If only they'd chosen 1597x1203, then we'd see who's boss! :LOL:

hmmh...
1597 / 8 = 199.625
1203 / 8 = 150.375

tsk, tsk, OpenGL Guy... bad resolution for a joke. It is not a multiple of 8, so it's impossible to set up on at least the R3xx and R4xx cores. (...or does this mean that you have really gotten over this artificial limitation at last?)

to make you happy,
the same goes for all NV4x cores as well. They "imitate" support for 1366x768, but it's actually 1360x768 with 3 pixels clipped from each side, and it works only if the drivers detect that the monitor supports 1366x768. You cannot add the resolution manually.

conclusion:
---> Both still suck. If you have an exotic monitor or need a weird resolution to get some job done, go for Matrox. This is the area where they definitely shine.
 
Wow, card releases always produce these kinds of debates, huh!?!

I don't care if I get 70fps instead of 85 in *some* of the games (I don't know where you got *most* from; look around). The image quality of the ATI cards is through the roof compared to the NVIDIA cards.

They're both excellent cards, but IQ does it for me, even at the expense of some FPS here and there, as the case may be.
 
cloudscapes said:
The image quality of the ATI cards is through the roof compared to the NVIDIA cards.

I really don't see that yet. I see the potential for it to be the case, but I do not see it happening in the pictures I have seen so far. Of course, it is completely subjective. For example, what counts as through-the-roof IQ now? Maybe it is something I would view as small, just as a 5fps jump can be huge depending on how slow things are to begin with. IMO we are near diminishing returns for IQ, and thus a huge investment by a company in AA and AF simply does not pay dividends as well as other investments would. I personally think Xenos has the right idea, even though it won't fly in the PC space yet. I really think it is better in the long run, though.
 
Sxotty said:
IMO we are near diminishing returns for IQ, and thus a huge investment by a company in AA and AF simply does not pay dividends as well as other investments would.

Good point, Sxotty. Where IQ was a noted issue with past videocards, it currently is not one and the reviews reflect that.
 