ATI Benchmarking Whitepaper

Therefore, it's no wiser to assume they are not being rendered than it is to assume they are, unless you can see that frames are being skipped.
I agree; this is one optimisation that has remained 'beyond the pale' for many years, and I hope it remains so. But let's make sure we all watch for it, by keeping the framerate down to the point where we might actually see it!

The point I'm making is that if the original rate is below the refresh rate, then it is possible to spot, while if it is above the vsync rate, then it is virtually impossible (although back-end scene capture should still be able to work it out if someone is prepared to go through every frame and count the tears - since we don't have back-end scene capture, this is pretty much a moot point!).

Normally these days I run with vsync on, but at 160Hz @ 640x480-800x600, 120Hz @ 1024x768, 100Hz @ 1152x864/1280x1024 and 85Hz @ 1600x1200, so it hasn't been that much of a problem for me, as my refresh rate is high enough not to throttle the processing power of the VPU--most of the time.
The ideal solution for me is a reasonably high refresh rate (the ones above are all good) and triple buffering - but I hate vsync tears, so I find it easier to put up with framerate stutter.
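To put a rough number on why the refresh rate matters so much with vsync on: with plain double buffering the card has to hold each finished frame until the next refresh, so the displayed rate snaps down to refresh/1, refresh/2, refresh/3 and so on. This is just my own back-of-the-envelope model (a fixed render time per frame, no triple buffering), but it shows the shape of the penalty:

    /* Back-of-the-envelope model, not anything from ATI's document:
     * with double-buffered vsync a frame that misses a refresh waits
     * for the next one, so the displayed rate is quantised to integer
     * divisors of the refresh rate. Triple buffering avoids the stall
     * by letting the card start on the next frame immediately. */
    #include <math.h>
    #include <stdio.h>

    double vsync_capped_fps(double refresh_hz, double raw_fps)
    {
        /* whole refreshes each frame occupies before it can be shown */
        double refreshes_per_frame = ceil(refresh_hz / raw_fps);
        return refresh_hz / refreshes_per_frame;
    }

    int main(void)
    {
        /* e.g. a scene the card could render at 70 fps with vsync off */
        printf("85Hz display:  %.1f fps\n", vsync_capped_fps(85.0, 70.0));  /* 42.5 */
        printf("160Hz display: %.1f fps\n", vsync_capped_fps(160.0, 70.0)); /* 53.3 */
        return 0;
    }

The higher the refresh rate, the smaller each step down is, which is why the high-refresh modes above hide the problem most of the time.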
 
andypski said:
Long post warning...

Not speaking for ATI but just for myself, my opinion is that part of the problem is that generally reviewers don't have enough experience with 3D hardware to actually correctly interpret the screenshots that they take - I think I've touched on this in posts before.

Note that I'm not talking about knowledge of 3D hardware in the general sense, but in the specific sense of actually understanding what the hardware does to produce the final image.

When I look at hardware reviews or image quality analyses I frequently find myself spotting particular differences in the images that I can usually identify as being caused by some specific behaviour of the hardware, and I can then have a good expectation of what will happen when the image is in motion, whether some artifact will be more visible when you get close to it etc. All too frequently the very same images result in the reviewer stating things like 'The images are identical' or 'There are some differences, but you can't say which is better', and I just find myself pounding my head against a wall. Often I can see what (to me) are glaring differences in rendering - ones that I expect (or know from experience) to be visible in actual gameplay, which are passed without comment.

Of course even if the reviewer has a highly technical understanding, are most people really interested in exactly why the images differ, and why one is 'better'? How does Joe Public with relatively little knowledge of the subject correctly differentiate between one reviewer who really knows what they are talking about, and another spouting cheap, third-hand Star Trek technobabble in order to try to appear knowledgeable (or worse yet, actually believing that repeating third-hand technobabble really makes them knowledgeable)?

I think I can at least partially agree here--I think most people are not much interested in the "why" behind differences in IQ. But I'd have to disagree if you are asserting that they aren't interested in whether or not IQ differences can be demonstrated to exist. I really think most people who read hardware reviews would be interested in knowing about such differences, even if no attempt is made to explain them in detail. To that end, the important thing would be to demonstrate what differences in IQ, if any, exist: for instance, when fog isn't rendered, when lights in a scene (or lighting in general) aren't rendered the same way, when FSAA doesn't work as advertised, when shader effects don't render properly, and so on. I would think that most people taking the time to read a review would be interested in this information--those that aren't would likely not read the review at all, but rather make a purchasing decision based on some other criteria, like a friend's recommendation.

Basically, I would think that the "why" of rendering differences is not important in a review from the standpoint of the general consumer--but a comparison of rendering IQ between products is very important to a substantive product review.

I agree with you on the frustration of reading remarks like, "It all looks the same to me," and so forth. But really, all those people are doing is admitting that they haven't actually looked for any differences in rendering IQ.

What I'm suggesting for ATi as a part of its reviewer's guide is that they walk a reviewer through the steps of what to look for when evaluating differences in IQ among competing products--laying out a basic, practical method for doing it to get them started. You could start with a discussion on the "proper" way to grab a frame, up to and including screen shot examples illustrating the types of things commonly spotted. The screen shots would show them what to look for, in other words, because I agree with you that some of them simply don't know. You might even begin with a summary of why IQ differences affect frame-rate performance, which some might not know or clearly understand. Basically, I think a "reviewer's guide" should illustrate a basic framework of a coherent product review--especially since most product reviews are actually product comparisons. It may not move mountains, but something like that couldn't hurt...:)
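To make the screenshot part concrete, the sort of thing I have in mind is no more than a little diff tool--here is a rough sketch (my own, with made-up file names, not anything ATi actually ships) that takes two captures of the same frame, counts the pixels that differ by more than a small threshold, and writes out a map showing where they disagree, so a reviewer can at least see where two cards diverge before arguing about why:

    /* Hypothetical screenshot-diff helper; assumes both captures were
     * saved as simple 8-bit binary PPM (P6) files of the same size. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned char *load_ppm(const char *path, int *w, int *h)
    {
        FILE *f = fopen(path, "rb");
        int maxval;
        if (!f || fscanf(f, "P6 %d %d %d", w, h, &maxval) != 3 || maxval != 255) {
            fprintf(stderr, "%s: expected a plain 8-bit P6 PPM\n", path);
            exit(1);
        }
        fgetc(f); /* consume the single whitespace byte after the header */
        size_t n = (size_t)*w * *h * 3;
        unsigned char *px = malloc(n);
        if (fread(px, 1, n, f) != n) { fprintf(stderr, "%s: short read\n", path); exit(1); }
        fclose(f);
        return px;
    }

    int main(int argc, char **argv)
    {
        if (argc != 4) {
            fprintf(stderr, "usage: %s cardA.ppm cardB.ppm diffmap.ppm\n", argv[0]);
            return 1;
        }
        int w1, h1, w2, h2;
        unsigned char *a = load_ppm(argv[1], &w1, &h1);
        unsigned char *b = load_ppm(argv[2], &w2, &h2);
        if (w1 != w2 || h1 != h2) { fprintf(stderr, "size mismatch\n"); return 1; }

        const int threshold = 8;  /* ignore LSB-level noise between cards */
        long differing = 0;
        unsigned char *map = malloc((size_t)w1 * h1 * 3);
        for (long i = 0; i < (long)w1 * h1; ++i) {
            int worst = 0;
            for (int c = 0; c < 3; ++c) {
                int d = a[i * 3 + c] - b[i * 3 + c];
                if (d < 0) d = -d;
                if (d > worst) worst = d;
            }
            int hit = worst > threshold;
            differing += hit;
            map[i * 3] = map[i * 3 + 1] = map[i * 3 + 2] = hit ? 255 : 0;
        }
        printf("%ld of %ld pixels differ by more than %d\n",
               differing, (long)w1 * h1, threshold);

        FILE *out = fopen(argv[3], "wb");
        fprintf(out, "P6\n%d %d\n255\n", w1, h1);
        fwrite(map, 1, (size_t)w1 * h1 * 3, out);
        fclose(out);
        return 0;
    }

A raw pixel count is obviously not a verdict on which image is better--moved geometry or a different gamma ramp will light the whole map up--but it stops the "they look identical to me" conversation before it starts.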
 
OpenGL guy said:
Generally it's not the video card skipping frames in these cases. If the framerate drops too low, whether because the video card is heavily loaded or because of CPU limitations, it's not the video card skipping frames that causes the chugginess; it's the fact that so few frames are rendered at all. The application compensates for low framerates by reducing the number of frames it generates per second. Of course, this just means that animations and such proceed normally, but it doesn't do anything to alleviate the low framerates.

Heh...:) OK, I can't see how...:) How does turning off vsync enable my cpu to process more data at a faster rate, or change the load on the card, for instance? In the cases I refer to, simply turning off vsync resolved the problem. Therefore, the vsync frame-rate cap governed the performance of the 3d card in those instances. Right?

The way it looked to me was that the vsync cap not only affects the maximum frame rate, it also affects the minimum, as the minimum can never be above the maximum cap. My idea was that the minimum frame rate for those scenes needed to be higher than the vsync-forced cap allowed in order for those scenes to play back smoothly in terms of visible frames. Since turning off vsync solved the problem, that would seem to be the case, and would seem to indicate that vsync had actually governed the frame-rate performance of the card--in those instances. I mean, the data to be processed didn't change, nor did the resolution, nor any other factor--except that the vsync cap on the card was removed. As I recall, the frame rate (since UT had a counter) did pick up quite a bit at the same time as well.

It just appeared to me that had it been merely a matter of data load on a part of the system then turning vsync off would not have cured the problem.
 
andypski said:
Framerate will never be a solved problem as long as the goalposts keep moving. That's fundamental.

Exactly. :)
And looking back through time, I draw the same conclusion you (partly) did - that over the last few years expectations regarding graphics quality have increased continuously, whereas newly released games have had roughly the same typical framerates as games had 5-7 years ago running on the hardware of their time.

This means that pretty much all the additional horsepower we've gained over these years has been invested by the game designers in increasing graphics quality rather than frame rate. So, although reviews still predominantly use fps graphs, what the graphs represent is not merely "achievable fps", but actually the trade-off between frame rate and quality level (as on this site).

Since, as you say, the goal posts are moving as regards graphics quality (although seemingly very little for frame rate), this trade-off is likely to be critical for the foreseeable future, and reviews will continue to reflect that.
Which is actually eminently reasonable.
 
andypski said:
How does Joe Public with relatively little knowledge of the subject correctly differentiate between one reviewer who really knows what they are talking about, and another spouting cheap, third-hand Star Trek technobabble in order to try to appear knowledgeable (or worse yet, actually believing that repeating third-hand technobabble really makes them knowledgeable)?

Sadly, this is nothing that can be fixed by one reviewer, one IHV, or one set of directions. "Joe Public" will already be swayed by FPS numbers he doesn't know how to put into perspective, so things will be even more difficult with IQ examinations. People like Joe can only get better at this if they have the inclination and do the broad-scale reading and learning needed to turn "relatively little knowledge" around and see things a lot more clearly. The impetus has to come from Joe, though; without it he basically picks which site to look at by crap-shoot, and glosses over the charts without reading the text anyway--or perhaps just skips right to the conclusion before happily opening his wallet.

It's a sad state, but most consumers just don't care enough. <shrugs>
 
AAlcHemY said:
WaltC,
wasn't the choppy gameplay solved by enabling triple buffering?

Both of the games, as I recall, were played in D3D. But I have used triple buffering quite a bit in the past. As I say, though, in both of these instances the problem was solved merely by removing the vsync cap on frame rates. It's very possible that TB could have solved it, too...:)

These days I rarely if ever experience that kind of thing with vsync on, as I'm running much higher refresh rates than my monitor at the time would support. It was one of those "a-ha!" solutions of the kind you remember.
 
WaltC said:
OpenGL guy said:
Generally it's not the video card skipping frames in these cases. If the framerate drops too low, whether because the video card is heavily loaded or because of CPU limitations, it's not the video card skipping frames that causes the chugginess; it's the fact that so few frames are rendered at all. The application compensates for low framerates by reducing the number of frames it generates per second. Of course, this just means that animations and such proceed normally, but it doesn't do anything to alleviate the low framerates.

Heh...:) OK, I can't see how...:) How does turning off vsync enable my cpu to process more data at a faster rate, or change the load on the card, for instance? In the cases I refer to, simply turning off vsync resolved the problem. Therefore, the vsync frame-rate cap governed the performance of the 3d card in those instances. Right?
Yes, but, again, it's not the video card skipping frames but the application.

When you disable vsync, the video driver doesn't have to wait for the next vsync before submitting a frame, which means the CPU spends less time in an idle loop and the application has more time to process a new frame.
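A heavily simplified sketch of the frame loop might make that clearer. This is not real driver or game code; the three functions are empty stand-ins for the work an engine and driver actually do:

    /* Simplified sketch, not real driver or game code. */
    static void simulate_game(void) { /* input, AI, physics, draw call setup */ }
    static void build_frame(void)   { /* hand the scene to the card to render */ }

    static void swap_buffers(int vsync_on)
    {
        /* With vsync on, a real swap can block here until the next refresh
         * while the finished frame waits to be displayed; that blocked time
         * is time the CPU is not spending on the next simulate_game().
         * With vsync off the swap returns as soon as the buffers are
         * exchanged (possibly mid-refresh, hence the tearing), so the loop
         * goes straight back to work. */
        (void)vsync_on;
    }

    int main(void)
    {
        for (int frame = 0; frame < 1000; ++frame) {
            simulate_game();
            build_frame();
            swap_buffers(1);
        }
        return 0;
    }

Triple buffering attacks the same stall from the other side, by giving the card a spare back buffer to render into while the finished frame waits for its refresh.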
 
andypski said:
When does image quality become the yardstick in reviews, and not FPS graphs? What minimum frame-rate do we need to achieve before we give up on speed as the differentiator and really look at what's being rendered? When does providing better image quality become a large enough factor that it 'wins' comparative reviews, even if the frame rates are a bit slower?

Judging by past performance - never. People will always be too excited by graphs showing 500fps in Quake3, as that's something tangible; differences between images remain intangible - we simply can't provide an easy number that says which is better. Reviewers get accused of bias frequently enough (by one side or the other) when simply trying to present verifiable benchmark numbers - imagine what would happen to them if they were to award victory in reviews based on something more subjective...

Just IMNSHO. ;)

When does providing better image quality become a large enough factor that it 'wins' comparative reviews? Never? If that were true, ATI would be losing market share. The fact that there has been so much talk on forums about this illustrates that it's not all about frame rates. Obviously a minimum acceptable frame rate is the first criterion, but IMO once a card's minimum frame rate in a benchmark exceeds the refresh rate (especially at the highest IQ settings), it's primarily about image quality. So if we are talking about "benchmarking" top-of-the-line cards (9800, 5900) using UT2003, IMO it's all about image quality.
 
Would it be possible to turn off aniso/AA every couple of frames to make it look like the frame rate was much faster? If you are running at 100fps, how would you know if every 5th frame had quality removed?
 
rwolf said:
Would it be possible to turn off aniso/AA every couple of frames to make it look like the frame rate was much faster? If you are running at 100fps, how would you know if every 5th frame had quality removed?
It would flicker.
 
Xmas said:
rwolf said:
Would it be possible to turn off aniso/AA every couple of frames to make it look like the frame rate was much faster? If you are running at 100fps, how would you know if every 5th frame had quality removed?
It would flicker.

Most likely not possible for AA either.
But for AF, you might be able to do a slow degradation and upgrade (slowly switching between full bilinear and full trilinear), and then maybe you could also detect when a screenshot is coming and give the max quality then.

Would be pretty icky stuff though! Not sure it'd be worth the time programming it either :)


Uttar
 
Hmm, someone was saying something about aggressive PR...

The conference began with a panel presentation that included Nvidia chief technical officer Kurt Akeley, chief scientist David Kirk, software engineering vice president Dwight Diercks, and software engineering director Nick Triantos. The panel opened with a vague reference to the "colorful rumors" that have spread across the Internet like wildfire concerning the seemingly lackluster performance of Nvidia's GeForce FX 3D accelerator cards compared to ATI's Radeon 9800, which was released in April, to say nothing of the competitor's newly-released 9800 XT card.

The panel highlighted what Nvidia apparently believes to be the most important aspects of designing new graphics hardware. One of these aspects is creating cards with physical architecture that allows them to be powerful, yet affordable. The company's current line of cards based on the GeForce FX architecture (which is included in the GeForce FX 5800 and 5900 series), attempts to maximize high-end graphics performance by supporting both 16-bit and 32-bit per-color-channel shaders--most DirectX 9-based games use a combination of 16-bit and 32-bit calculations, since the former provides speed at the cost of inflexibility, while the latter provides a greater level of programming control at the cost of processing cycles. The panel went on to explain that 24-bit calculations, such as those used by the Radeon 9800's pixel shaders, often aren't enough for more-complex calculations, which can require 32-bit math. As the panel explained, the GeForce FX architecture favors long shaders and textures interleaved in pairs, while the Radeon 9800 architecture favors short shaders and textures in blocks.

The panel then discussed Nvidia's comprehensive internal QA policy on optimizations, which states that the company refuses to optimize its drivers for specific benchmarks that emphasize features not found in real games, which is, as the representatives suggested, the reason why the most recent cards haven't shown universally high performance in recent benchmarks. The company also reiterated its commitment to image fidelity--rather than opt not to draw certain parts of a scene, GeForce FX cards draw every last part and effect. As an example, the panel showed two screenshots of an explosion from an overdraw benchmark, in which the GeForce card drew the entire explosion as a bright white flare, while the ATI Radeon card didn't draw every layer of the explosion (the upper-right corner had a slight reddish tinge).

The rest of the event featured individual product demonstrations, though panel discussion was capped off at midday with a "fireside chat" featuring Nvidia CEO Jensen Huang and id Software CEO Todd Hollenshead, who discussed the importance of advanced graphical effects in id's upcoming Doom 3. Huang also made the interesting claim that although his company has recently experienced a loss of market share (Nvidia has traditionally sold the most graphics cards, from its entire product line, of any manufacturer), this loss was due not to competition from ATI, but rather, to competition from Intel's integrated graphics. According to Huang, Intel's integrated graphics hardware (or "Free-D"), which comes bundled with new pre-built Intel PCs, is very attractive to mainstream users because of its price point.

http://www.gamespot.com/pc/news/news_6077157.html
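For what it's worth, the precision argument in that piece comes down to mantissa width. A rough comparison (the FP16 and FP32 layouts are the standard ones; the exact FP24 split is my assumption about ATI's pixel shader format, though the 24-bit total isn't in question):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        /* sign/exponent/mantissa: FP16 = 1/5/10, FP24 = 1/7/16 (assumed),
         * FP32 = 1/8/23; relative precision is set by the mantissa width */
        const char *name[]  = { "FP16", "FP24", "FP32" };
        int mantissa_bits[] = { 10, 16, 23 };
        for (int i = 0; i < 3; ++i)
            printf("%s: smallest relative step ~ %g\n",
                   name[i], pow(2.0, -mantissa_bits[i]));
        return 0;
    }

That puts FP24 roughly 64 times finer than FP16, and FP32 roughly 128 times finer than FP24--so whether 24 bits is "enough" really depends on how long the shader is and how the errors accumulate.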
 
Where were you guys when nVidia released all of the hacks for the nV30 cards (cheating drivers, application detection, etc.)?

Here we are a year later and nVidia still has not caught up to the original R300 technology in terms of DX 9 performance.

Let's see here:

1. We have companies wasting their time developing nV30-specific code paths because it is a WELL KNOWN FACT THAT NV3X DOES NOT HAVE WHAT IT TAKES TO COMPETE. This will have an impact on what we pay for games and graphics hardware.

2. We have nVidia encrypting their drivers so as to prevent anyone from seeing what they are doing (I will just wait for the MIT boys to crack these open).

3. nV30, the dawn of cinematic computing? Now that is a good one, since they cannot even do 4xFSAA right, or trilinear for that matter.

4. Wasn't nVidia the one pushing for FP32, claiming that ATi's FP24 was inferior? Now here we are over a year later, and when you force both cards to run at MAX IQ the R3X0 cards completely dominate everything Nvidia has to offer. I was under the impression that when you pay $500 for a video card, you should get $500 worth of performance. I have seen situations where even a 9500 Pro is faster than a 5900 Ultra running under max IQ settings.

5. PS 2.0: Even Nvidia's latest card cannot touch the ATI cards when it comes to PS 2.0 performance...


Nvidia needs to do one of the following:

1. Pack up their bags and go home, since they cannot play,

or

2. Get serious about competing and produce a QUALITY PRODUCT, and quit wasting their time trying to smear other companies.


PS: my favorite quote of the day: "Intel is our main competition"

Borsti said:
Well, I must say I'm very concerned about the 9600XT reviewers guide (the document Dave posted is included in it). I mean the file 9600XT_BENCHMARK_GUIDE.pdf.

The findings they speak about may be correct, but I don't like the approach in principle. The guide is like a pre-generated review, including benchmarks, IQ results, conclusions, etc. You can just copy/paste most of it and voilà, you have your full article. This is too much, IMHO. Their job is to deliver the products, and the job of the press is to test them and write about them. What I also don't like is the fact that most of the findings they write about were discovered by reviewers. If they like those findings then they should simply put links to the articles and give the credit to the reviewers and publications. Then editors can read it from an independent source. In the end, what ATI is attempting is heavy manipulation of the press. As I said, the content of the documents may be correct, but in principle it's pure manipulation. They can give hints about what they think is important for their products and talk about the strengths of their products in certain areas, but the way they do it right now is way too much.

Lars
 
DaveBaumann said:
ATI have produced an interesting document on modern benchmarking practices, take a look and tell us what you think:
http://www.beyond3d.com/downloads/RADEON_9600XT_Benchmarking Commentary.pdf

The content of this document is one part of the 9600XT reviewers guide. I liked that part, as it was written quite "objectively" and provided a lot of facts. But the full reviewers guide also contains controversial parts, e.g. about texture filtering. They bash NVDA for not doing correct trilinear filtering, proving it with screenshots made with common filter testing tools. At the same time, they themselves switch back to non-trilinear anisotropic filtering on most texture stages if you force anisotropic filtering. I think I already know the content of NVDA's upcoming "reviewers guide". :rolleyes:
 
I cannot be arsed to quote YeuEmMaiMai,

Perhaps you should take a look at the Nvidia tech demos and that would give you an idea of "the dawn of cinematic computing"

It just shows you what the card can do and how realistic realtime graphics are getting.

The Vulcan demo is the most impressive realtime graphics demo I've ever seen, and I've never seen volumetric textures like that fire before, Jesus. :rolleyes:
 
PiXEL_ShAdER said:
I cannot be arsed to quote YeuEmMaiMai,

Another personal attack? Great way to improve your standing on these forums...

Perhaps you should take a look at the Nvidia tech demos and that would give you an idea of "the dawn of cinematic computing"

Like the Dawn demo, which runs faster on ATI hardware even though the specific OpenGL extensions have to go through a wrapper?

And I agree with YeuEmMaiMai; you can't pretend to provide "the dawn of cinematic computing" while you are busy cheating as much as possible and bringing real-time graphics back to circa 1999 in order to win benchmarks...

It just shows you what the card can do and how realistic realtime graphics are getting.

http://www.ati.com/developer/demos/r9800.html
http://www.ati.com/developer/demos/r9700.html

Every IHV worth its grain of salt can make some fancy tech demos.

The Vulcan demo is the most impressive realtime graphics demo I've ever seen, and I've never seen volumetric textures like that fire before, Jesus. :rolleyes:

NV-colored glasses... Don't get outside without them.
 
PiXEL_ShAdER said:
Perhaps you should take a look at the Nvidia tech demos and that would give you an idea of "the dawn of cinematic computing"

It just shows you what the card can do and how realistic realtime graphics are getting.

The Vulcan demo is the most impressive realtime graphics demo I've ever seen, and I've never seen volumetric textures like that fire before, Jesus. :rolleyes:
But no games support those features; you'll only find them in them fancy nVidia tech demos.

It's kind of misleading. Yeah, nVidia's cards can do some neat nVidia tricks, but they can't play the games the way they're supposed to be played.
 
CorwinB said:
PiXEL_ShAdER said:
I cannot be arsed to quote YeuEmMaiMai,

Another personal attack? Great way to improve your standing on these forums...

Perhaps you should take a look at the Nvidia tech demos and that would give you an idea of "the dawn of cinematic computing"

Like the Dawn demo, which runs faster on ATI hardware even though the specific OpenGL extensions have to go through a wrapper?

And I agree with YeuEmMaiMai; you can't pretend to provide "the dawn of cinematic computing" while you are busy cheating as much as possible and bringing real-time graphics back to circa 1999 in order to win benchmarks...

It just shows you what the card can do and how realistic realtime graphics are getting.

http://www.ati.com/developer/demos/r9800.html
http://www.ati.com/developer/demos/r9700.html

Every IHV worth its grain of salt can make some fancy tech demos.

The Vulcan demo is the most impressive realtime graphics demo I've ever seen, and I've never seen volumetric textures like that fire before, Jesus. :rolleyes:

NV-colored glasses... Don't get outside without them.

I used to have a Radeon 9700 Pro and can play the demos on my current card; they are not as impressive as the Nvidia demos.

Why is it you come back with the same old crappy remarks, FFS, and why the hell do you think I'm having a go at ATI? I didn't even mention ATI in the fricking post. :rolleyes:

Jesus, you can't say anything good about Nvidia without the ATI trolls coming marching in. Get a grip, man.
 
digitalwanderer said:
PiXEL_ShAdER said:
Perhaps you should take a look at the Nvidia tech demos and that would give you an idea of "the dawn of cinematic computing"

It just shows you what the card can do and how realistic realtime graphics are getting.

The Vulcan demo is the most impressive realtime graphics demo I've ever seen, and I've never seen volumetric textures like that fire before, Jesus. :rolleyes:
But no games support those features; you'll only find them in them fancy nVidia tech demos.

It's kind of misleading. Yeah, nVidia's cards can do some neat nVidia tricks, but they can't play the games the way they're supposed to be played.

Oh FFS, are you saying only nVidia can do the effects? I mean, what the f*** are you talking about, man? If developers got off their arses and put these effects in the games, you wouldn't be saying that.

The GF2 could do per-pixel lighting and shadows, but no developers used the technology.
 