9700 Pro < Ti4600 at MINIMUM fps - is this REALLY true?

Discussion in 'General 3D Technology' started by Joe Cool, Nov 9, 2002.

  1. Joe Cool

    Newcomer

    Joined:
    Nov 8, 2002
    Messages:
    11
    Likes Received:
    0
    Can we get our more advanced readers here to look at this and get this settled once and for all?

    In a number of benchmarks I've seen in various places, where the minimum framerate is measured, I've (more than once) seen the Radeon 9700 Pro lose - and sometimes quite convincingly - to the GeForce 4 Ti4600.

    The latest example is from Tom's Hardware Guide, where there was a roundup of various 9700 Pro cards, with a Ti4600 card thrown in for good measure too.

    Check the following URL...

    http://www17.tomshardware.com/graphic/02q4/021104/r9700pro-cards-17.html

    ...And you'll see that (this is the worst case), at 1280x1024 in Unreal Tournament 2003, where the 9700 Pro got 27 fps, the Ti4600 beat it quite handily - at 43 fps. That's a large margin to lose by!

    Other sites have shown similar results, at least with this game, but of course not many other games or dedicated benchmarks measure minimum fps.

    When I brought this up earlier in a Rage3D post, I was told by one of the posters there that the above sort of result is not really true - that there's simply a quirk in the 9700 Pro drivers that causes benchmarks to record inaccurate numbers the first time a benchmark-type program is run, creating a massively low "spike" at the very beginning that is not at all indicative of how low the 9700 Pro *really* scores in actual use. Apparently the people doing the benchmarks are only doing one run instead of multiple runs to catch this.

    Unfortunately, if this is true, a number of sites like Tom's Hardware Guide either don't know about it or don't care - and I'd like to confirm it in any case.

    Minimum fps is very important, because if it drops low enough, that's where almost any user will feel the game become annoyingly choppy. Yet it's rarely discussed when benchmarks are thrown around, even though (to me anyway) it's a LOT more important than the maximum frame rate that almost all benchmarks focus on.

    I'd love to even see an article focusing on just this, and comparing how the various cards out there compare.

    But serious discussion about this would be great too, because let's face it, the 9700 Pro *should not* be losing to any card in this area, especially by the sort of margin my above example showed.... :(
     
  2. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Re: 9700 Pro < Ti4600 at MINIMUM fps - is this REALLY true?

    You have to be sure that you are measuring what you think you are measuring. The way UT2003 computes framerate is by computing the length of time between Present calls. For example, if one frame took 0.25 seconds, it would give an instantaneous framerate of 4. Does this necessarily mean the user experienced 4 fps? No.
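
    As a rough sketch (this is not the actual UT2003 code, and the frame times are made up), that's all an "instantaneous framerate" amounts to:

[code]
# Made-up frame times: seconds between successive Present calls.
frame_times = [0.010, 0.012, 0.250, 0.011, 0.010]

for dt in frame_times:
    print("%.0f ms  ->  %.1f fps instantaneous" % (dt * 1000, 1.0 / dt))

# The single 250 ms hitch logs as "4 fps" for that one frame, even though
# every other frame in the run was rendered at roughly 100 fps.
[/code]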
     
  3. jvd

    jvd
    Banned

    Joined:
    Feb 13, 2002
    Messages:
    12,724
    Likes Received:
    8
    Location:
    new jersey
    This could also be due to immature drivers, could it not? The GeForce 4 drivers have been tweaked ever since the GeForce 3.
     
  4. Joe Cool

    Newcomer

    Joined:
    Nov 8, 2002
    Messages:
    11
    Likes Received:
    0
    That's the thing, OpenGL guy. Is the measurement for the 9700 Pro (for all cards, really) being done properly? If so, why is the 9700 Pro falling behind so badly in the result I specifically mentioned?

    I'm not trying to bash ATI here. I have a Radeon 9700 Pro card, myself!

    I just want to find out what's really going on. And I'm really surprised that there hasn't been a lot more discussion and investigation into this. Are maximum frame rates all that everyone else is interested in?
     
  5. crypto1300

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    62
    Likes Received:
    0
    My 9700 does report lower minimum fps than my Ti4400 in UT2003 using the same settings and maps. And, it also has a stuttering problem that my Ti4400 doesn't exhibit.

    There are certain areas of certain maps where my 9700 will drop to a much lower framerate than my Ti4400, but there are also areas where it's absolutely through the roof with the 9700.

    The lower minimum fps don't bother me as much as the stutter.
     
  6. Ozymandis

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    617
    Likes Received:
    0
    Location:
    Maryland
    Could it have to do with CPU limitations during certain points of a game? I've seen benchmarks where the GeForce4 beats the 9700 when they're both obviously limited by the processor.
     
  7. Bigus Dickus

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    943
    Likes Received:
    16
    1) Review sites do not focus on "maximum framerates" as you say. They focus on average framerates. Now, you can argue if you want that average is not a good measuring stick either, but we should at least make sure we're starting the discussion on the same ground.

    2) While minimum framerates are certainly nice to know, and I suppose are more useful than not having such an indicator, I would argue that they aren't much better than not knowing at all. Why?

    Here's why. How often did the "minimum framerate" occur? 10% of the time? 1%? .01%? Just once per game, such as during the initial loading or during exiting? How should we know?

    What would truly be useful for these types of comparisons is a plot of framerate vs. time, or something along those lines, that records all dips, spikes, and plateaus in framerates. I may be mistaken, but I think UT2003 is capable of outputting this data. Such a plot would let you know how often you get dips, how long they last, and in what situations they occur (for example, during heavy battle scenes or during some mundane task, such as switching weapons).
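
    As a rough sketch, assuming the per-frame render times can be dumped to a one-column text file (the filename below is made up), such a plot is only a few lines of Python with numpy and matplotlib:

[code]
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical input: one render time per frame, in milliseconds.
frame_ms = np.loadtxt("frametimes.txt")

elapsed_s = np.cumsum(frame_ms) / 1000.0   # wall-clock time of each frame
fps = 1000.0 / frame_ms                    # instantaneous framerate

plt.plot(elapsed_s, fps, linewidth=0.5)
plt.xlabel("time (s)")
plt.ylabel("instantaneous fps")
plt.title("Framerate vs. time")
plt.show()
[/code]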
     
  8. DemoCoder

    Veteran

    Joined:
    Feb 9, 2002
    Messages:
    4,733
    Likes Received:
    81
    Location:
    California
    The average is a bad figure as well. Let's say the game spends 50% of its time at 1 fps and 50% of its time at 99 fps. The average fps is 50 fps. However, half the time, you are playing at 1 fps.


    A better approach is to just publish a histogram.
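
    To make the arithmetic concrete (the workload below is made up, but uses the same 50/50 split), and to show how cheap a time-weighted histogram is to produce once you have per-frame times:

[code]
import numpy as np
import matplotlib.pyplot as plt

# Made-up workload: 30 seconds at 1 fps followed by 30 seconds at 99 fps.
frame_ms = np.concatenate([
    np.full(30, 1000.0),             # 30 frames taking 1000 ms each
    np.full(30 * 99, 1000.0 / 99),   # 2970 frames taking ~10.1 ms each
])

total_frames = len(frame_ms)
total_seconds = frame_ms.sum() / 1000.0
print("average: %.1f fps" % (total_frames / total_seconds))  # prints 50.0

# Histogram of instantaneous framerate, weighted by time spent at each rate.
fps = 1000.0 / frame_ms
plt.hist(fps, bins=50, weights=frame_ms / 1000.0)
plt.xlabel("instantaneous fps")
plt.ylabel("seconds spent at that rate")
plt.show()
[/code]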
     
  9. Fuz

    Fuz
    Regular

    Joined:
    Apr 17, 2002
    Messages:
    373
    Likes Received:
    0
    Location:
    Sydney, Australia
    You make a good point here. For me, the min frame rates are important, but of course as you pointed out, we need to know where the dips take place.

    For example, if the min frame rate was at the start of each level due to texture thrashing (or whatever), and lasted only for a few seconds, then it wouldn't really matter at all. If it drops down at regular intervals during game play, then you need to worry.

    I think a time vs. frame rate graph is a great idea.
     
  10. Joe Cool

    Newcomer

    Joined:
    Nov 8, 2002
    Messages:
    11
    Likes Received:
    0
    Both Bigus and Democoder make good points - just showing how there's lies, damned lies, and statistics.

    I stand corrected of course that most benchmarks DO focus on AVERAGE frame rates.

    However, since we've mainly been talking about high-end cards like the 9700 Pro and Ti4600, both of which can give you a very good average frame rate in most games, I think it's especially with these cards that the minimum frame rates become more noticeable and important.

    Let me put it this way. If the Ti4600 averages 100 fps in a game, and the 9700 Pro averages 160 fps in the same game, that game will seem pretty smooth to most people no matter what card they're using, even though the 9700 Pro is faster on average by a huge margin. That's because most cards will deliver a smooth experience the vast majority of the time.

    But if in that same game it spikes - even for a fraction of a second - down to 10 fps on the 9700 Pro where the Ti4600 gets 20 fps, I suspect that many and maybe even most players will notice this a LOT MORE on the 9700 Pro and think their *overall* experience is *better* on the Ti4600. 20 fps isn't great, but I think everyone finds 10 fps pretty horrible.

    The 9700 Pro users will probably REALLY notice that 10 fps.

    So while a histogram of fps would also be very useful, even a single dip to a very low frame rate, however brief, can be very noticeable - depending, of course, on the game and the sensitivity of the viewer.

    And in the case of the Ti4600 vs. the 9700 Pro, it seems that at least some of the time, and at least in UT2003, the Ti4600 is possibly really beating up on the 9700 Pro.

    Why?
     
  11. dbeard

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    18
    Likes Received:
    0
    I vote for histograms.

    I agree with Democoder. A histogram would seem to be a better way to represent the "feel" of the game while playing on the video card.

    Who knows how many debates have been held about how many FPS are good enough. With a histogram, you can select your FPS level and get a feel for the amount of time you would spend at what you think is important. Obviously, this would be tied to a particular machine, map, drivers, etc., but so are all benchmarks.

    A time-versus-framerate graph would leave you wondering just what was being displayed at the 1 fps points. I don't know if it would really add much information beyond the number of minimum-FPS points.

    All, IMHO

    D. Beard
     
  12. Althornin

    Althornin Senior Lurker
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,326
    Likes Received:
    5
    Re: I vote for histograms.

    Ah, it would allow us to see how various cards handle various scenes - to see whether the cards share the same low points, etc. That would let us predict future performance more accurately, as well as raise good questions and help in finding bugs/problems. For instance, if the dip on the Radeon 9700 occurs six times (as an example), which would be better: knowing that they all come in the first three seconds due to texture thrashing and that everything afterwards is A-OK, or having to deal with the possibility (if you only have a histogram) that they are in fact spread evenly throughout the benchmark and appear to the gamer as regular stuttering?
     
  13. DadUM

    Newcomer

    Joined:
    Oct 11, 2002
    Messages:
    55
    Likes Received:
    0
    I don't recall exactly, but can't you set Serious Sam: The Second Encounter to give you instantaneous frame rate over time? I know I have seen plots of exactly that nature. It is rather interesting to see how games spike and flow.

    As for the game being CPU bound causing lower framerates on the 9700, wouldn't that just indicate that the 9700 driver is vastly less efficient than the nVidia one? If that were the case, then it's a strike against the 9700.
     
  14. Bigus Dickus

    Regular

    Joined:
    Feb 26, 2002
    Messages:
    943
    Likes Received:
    16
    I agree.

    A histogram would certainly be a tremendous step up in the usefulness of benchmarking methods; a true time-dependent plot is better still, and I suppose it is really the ultimate form of benchmarking.

    As Althornin pointed out, knowing just where (or more likely, only when) the dips occurred could provide useful information. It's been said three times now, but I'll say it again: a few dips during initial loading are not nearly as much of an issue as a few dips in the middle of the game... especially during intense moments of gameplay.

    The true ultimate would be if the game/benchmark could output data on what the engine was doing at corresponding times on the graph. A full output would likely add a lot of CPU overhead, but I think if someone were really sharp, a nice benchmark could be written (please MO, read this) that gives basic information about what the engine is doing: perhaps what geometry (in general) is being sent to the card (a vertex count, say), general information like what texturing is being done, and perhaps key API calls as well... at least noting when PS/VS are being used for different things.

    [edit] fixed my [ ] [/] tags [/edit]
     
  15. Sharkfood

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    702
    Likes Received:
    11
    Location:
    Bay Area, California
    I'm a big subscriber to the "play and feel" method of benchmarking a card.

    Fire up UT2003 with a 9700 Pro at 1280x1024 or 1600x1200, then on a Ti4600. In both cases, play a couple of full-server online games and you'll quickly discover the Ti4600 gets its ass handed to it by such an incredible margin that it isn't even funny. That was my immediate discovery here. It becomes even more obvious once you add AA or AF to the play period.

    As far as benchmarks go, there are only two that I know of that support interval histograms for measuring framerate - Serious Sam (and SE) and VulpineGL, with the latter being a bit too complex for most websites to use correctly, as it has three different "flavors" of running depending on which OpenGL extensions are used. Simply firing it up and taking the defaults will lead to misleading benchmarks, as it defaults to a totally different test on NVIDIA versus ATI gear.

    As far as getting a good min/max/avg FPS is concerned, the only semi-applicable example I have seen of this is GLExcess. This test runs several seconds of benchmark before and after the actual measurement. In other words, you might watch a 23-second benchmark, but only 4-6 seconds are actually measured, usually at two different points in the middle of those 23 seconds. This does a much better job of weeding out driver initialization latency differences, throttling, or caching that can happen at the very beginning and end of a measurement. Still not entirely accurate, but definitely a more appropriate method.
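
    A rough sketch of that "measure only the middle" idea, assuming per-frame render times in a text file (the filename and the 3-second trim are made up):

[code]
import numpy as np

# Made-up input: one render time per frame, in milliseconds.
frame_ms = np.loadtxt("frametimes.txt")
elapsed_s = np.cumsum(frame_ms) / 1000.0

# Discard the first and last 3 seconds so driver initialization, caching,
# and shutdown effects don't skew the reported numbers.
keep = (elapsed_s > 3.0) & (elapsed_s < elapsed_s[-1] - 3.0)
kept = frame_ms[keep]

fps = 1000.0 / kept
avg = len(kept) / (kept.sum() / 1000.0)   # frames divided by elapsed time
print("min %.1f  avg %.1f  max %.1f fps" % (fps.min(), avg, fps.max()))
[/code]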
     
  16. gravioli

    Newcomer

    Joined:
    Jul 29, 2002
    Messages:
    12
    Likes Received:
    0
    Location:
    Salt Lake City, UT
    A histogram would be really nice, but wouldn't a standard deviation value be a good indicator of how much framerates fluctuate? I don't think this would be too hard to implement in a program like FRAPS.
     
  17. MikeC

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    194
    Likes Received:
    0
    FRAPS has the ability to log frame rate per second.
     
  18. MikeC

    Newcomer

    Joined:
    Feb 9, 2002
    Messages:
    194
    Likes Received:
    0
    Here are some results using a GeForce4 Ti 4600 from actual gameplay on loaded servers. It's based on the UT2003 demo using older 30.82 Detonator drivers. Anisotropic filtering, AA, and shadows were disabled, but every other graphics setting was maximized.

    1280x960 - No Shadows - Software 3D Audio

    DM-Antalus
    2002-10-07 19:14:44 - UT2003
    Frames: 27078 - Time: 533917ms - Avg: 50.715 - Min: 33 - Max: 83

    DM-Asbestos
    2002-10-07 19:37:06 - UT2003
    Frames: 43058 - Time: 537623ms - Avg: 80.089 - Min: 42 - Max: 152

    1280x1024 - No Shadows - Software 3D Audio

    DM-Antalus
    2002-10-07 19:48:33 - UT2003
    Frames: 20353 - Time: 413655ms - Avg: 49.202 - Min: 28 - Max: 100

    DM-Asbestos
    2002-10-07 19:56:20 - UT2003
    Frames: 46784 - Time: 639620ms - Avg: 73.143 - Min: 37 - Max: 153
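
    Incidentally, the "Avg" figure appears to be just total frames divided by total time; a quick Python sketch reproduces the numbers above:

[code]
# Cross-check of the logs above: average fps = frames / elapsed seconds.
runs = {
    "DM-Antalus  1280x960":  (27078, 533917),
    "DM-Asbestos 1280x960":  (43058, 537623),
    "DM-Antalus  1280x1024": (20353, 413655),
    "DM-Asbestos 1280x1024": (46784, 639620),
}
for name, (frames, time_ms) in runs.items():
    print("%s: %.3f fps average" % (name, frames / (time_ms / 1000.0)))
[/code]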
     
  19. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,902
    Likes Received:
    218
    Location:
    Seattle, WA
    What we really could use are some nice framerate graphs. They're really easy to make with UT2k3. The per-frame statistics are output to a file in the Benchmark/CSVs folder. These spreadsheets store copious amounts of per-frame data, and could be used to easily diagnose specific performance problems (e.g. how much time is spent rendering meshes, terrain, etc.).

    And what would be even more meaningful, much more than displaying the minimum framerate (after all, it could have just been one frame at the very beginning...due to a loading issue...something that won't occur most of the time), would be the standard deviation.

    Measuring the standard deviation becomes very simple when you have per-frame framerate data. The only question is whether to take the standard deviation of the rendering time per frame, or the frames per second. I think taking the standard deviation of the rendering time per frame makes more sense, as you want very high rendering times to count more towards the standard deviation than very low rendering times.

    The nice thing about a standard deviation is that it will encompass most of what you see in a framerate graph, when combined with the mean framerate, in a form that's actually measurable, and not subjective.
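
    As a rough sketch, assuming the per-frame render times (in milliseconds) have already been pulled out of one of those CSVs into a plain text file (the filename is made up):

[code]
import numpy as np

frame_ms = np.loadtxt("antalus_frametimes.txt")   # hypothetical input file

mean_ms = frame_ms.mean()
print("mean frame time: %.2f ms (%.1f fps)" % (mean_ms, 1000.0 / mean_ms))
print("std dev of frame time: %.2f ms" % frame_ms.std())

# A single long hitch (say, one 250 ms frame) pushes this deviation up far
# more than an unusually quick frame can pull it down, which is why frame
# time is the better quantity to take the deviation of than fps.
[/code]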

    Side note: Mike, you may want to try the 40.72 drivers. They give a nice boost in UT2k3.
     
  20. V3

    V3
    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    3,304
    Likes Received:
    5
    If you graph the frame rate it would look a bit odd; it's better to just measure the time taken to render each frame and graph that.

    An average is an alright performance indicator for a real-time system, but it's not a good indicator of the quality you experience during actual use.

    Especially in games, where each frame has a time constraint before it should be dropped.
     