Futuremark: 3DMark06

Discussion in 'Graphics and Semiconductor Industry' started by trinibwoy, Dec 23, 2005.

  1. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Whoa there....read what I wrote.

    I said that people use the CARDS with AA enabled (you know, when they use their cards to play games). Not that they run the benchmark with AA on the majority of the time.

    Of course, whatever default FM decides on, that's going to be the "most run" for that benchmark.
     
  2. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Look, I don't really have a problem with it one way or the other....there are pros and cons to each approach. Either just manually compare the SM2 AA scores (and don't produce an overall score), or produce an overall score using their formulas to get a "pseudo comparison." Just be consistent about it! (Have I not used the word "consistent" enough? ;) )
     
  3. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,058
    Likes Received:
    3,116
    Location:
    New York
    Ah, now I get your meaning. But your wording above isn't exactly clear in that respect. And with regard to the comparison to games, 3dmark06 does exactly what the game does with Nvidia cards and HDR+AA - it doesn't run at all !! :wink:
     
  4. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    Yes, I would. Am I not clear on that?
     
  5. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,344
    Likes Received:
    176
    Location:
    On the path to wisdom
    Some valid points, but how does the absence of parallax mapping favour NVidia? Sometimes I get the impression that some people look at the branching advantage R520 has over G70 and from there extrapolate an advantage in arithmetic- and texture-heavy shaders that just isn't there. Then they expect R520 to perform better than G70 in "PS3.0", and if that expectation isn't met, the benchmark must be crap. While of course that is a possibility (and I won't comment on 3DMark06 because I haven't seen it yet), there's also the point that G70 does indeed do several things faster than R520.

    Doesn't ATI get a better score from their specific non-DX feature (fetch4)?

    You're jumping to a conclusion and then expecting the tests to support it. Not exactly the scientific method.

    Because it doesn't do any good for sparsely sampled filter kernels, especially as you cannot efficiently index vector components.


    For those who are interested in how "CPU-limited" 3DMark06 is: use a null renderer (e.g. DXTweaker, 3D-Analyze).
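    For readers unfamiliar with the two filtering paths discussed in this thread, here is a rough Python sketch of the difference between hardware PCF and Fetch4-style manual filtering for a single 2x2 shadow-map tap. The sample depths, weights, and function names are illustrative, not taken from any real API:

```python
def pcf_2x2(depths, ref, wx, wy):
    """Hardware PCF: compare each texel against the reference depth
    first, then bilinearly blend the 0/1 comparison results."""
    tests = [[1.0 if d < ref else 0.0 for d in row] for row in depths]
    top = tests[0][0] * (1 - wx) + tests[0][1] * wx
    bot = tests[1][0] * (1 - wx) + tests[1][1] * wx
    return top * (1 - wy) + bot * wy

def fetch4_2x2(depths, ref, wx, wy):
    """Fetch4-style: a single fetch returns the four raw depths; the
    shader does the compares and weighting itself, which also lets it
    build sparse, non-bilinear kernels that plain bilinear PCF cannot."""
    d00, d01, d10, d11 = depths[0][0], depths[0][1], depths[1][0], depths[1][1]
    t = [1.0 if d < ref else 0.0 for d in (d00, d01, d10, d11)]
    w = [(1 - wx) * (1 - wy), wx * (1 - wy), (1 - wx) * wy, wx * wy]
    return sum(ti * wi for ti, wi in zip(t, w))

# With bilinear weights both paths agree; Fetch4 just pays the
# compare/blend cost in shader instructions instead of in the TMU.
depths = [[0.4, 0.6], [0.5, 0.7]]
print(pcf_2x2(depths, 0.55, 0.5, 0.5))
print(fetch4_2x2(depths, 0.55, 0.5, 0.5))
```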
     
    Jawed likes this.
  6. Pete

    Pete Moderate Nuisance
    Moderator Legend

    Joined:
    Feb 7, 2002
    Messages:
    5,777
    Likes Received:
    1,814
    And I'm assuming that all current cards can just as easily, if not more so, use AA rather than bump the res. Seems to me FM settled on 12x10 and not 12x9 or even 720p b/c the first matches the most common LCD res. LCD users would likely prefer AA to higher res, just like CRT users might prefer 2xAA rather than a notch higher res with no AA.

    I don't know, I think HDR is becoming more standard, so why not the option to use AA with it? And if one can't, why not let that be reflected in the score--a score, rather than N/A? I'm going to completely read FM's whitepaper and reviewer's guide before I mouth off anymore, tho, to be fair to Nick & Co. (better late than never).

    Well, that's one way to look at it, and quite a few ppl consider B3D ATI's last bastion of hope/hype. But you could argue that 3DM03 and 05 were partially tilted toward ATI. NV cheated on the first with the FX vs. ATI's 9-series, but they didn't need to do so (detectably) with the 6-series, whose shader power outgunned the X-series. And tho 05 had the DST and PCF brouhaha, its vertex setup limitation seemed to help ATI in relation to its weaker pixel shaders (see X1600 up with the 256-bit big boys, but falling behind on most games).

    Well, to be fair, I haven't seen Derek in Ars' or even AT's forums in quite some time. :) Unfortunately, ppl get vocal everywhere, and ultimately his time is better spent reviewing than arguing (see Brent). That's not to say he can't just read the forums, disregard most posts, and maybe pick up a tip or two.

    Damien took care of the former for you. Check out Hanners' EB article for the latter. (I referenced his #s a page or three back: 25 and 17% hit on a GF6 and GF7, respectively).

    Wait, 06 goes back to no AF by default? Didn't 05 and maybe even 03 use 4xAF? Has FM given a reason for this (e.g., too many texture accesses otherwise, most games start with 1xAF, etc.)?
     
    #546 Pete, Jan 20, 2006
    Last edited by a moderator: Jan 20, 2006
  7. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    From what I'm gathering from this thread, Fetch4 is not currently used on most ATI hardware (notably the R520) because most ATI hardware doesn't support Fetch4 at the precision that Futuremark is asking (24 bit).

    Also bear in mind that none of the SM3 benchmarks use either PCF or Fetch4. And in the SM2 benchmarks where some ATI hardware can use Fetch4, nVidia hardware is always using PCF (since nVidia supports PCF at the precision 3DMark is asking).
     
  8. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    DST24 was implemented at the same time as the boards that implemented Fetch4 - i.e. if the ATI hardware supports Fetch4, that hardware will also support 24 bit depth texture formats.
     
  9. ANova

    Veteran

    Joined:
    Apr 4, 2004
    Messages:
    2,226
    Likes Received:
    10
    Indeed, and this is where the problems lie. It's simply not an accurate benchmark because the cards are running differently. Nvidia is using the PCF optimization in 24 bit while ATI is running without any optimizations in 32 bit. Now you have to give Futuremark some credit, the X1800 simply does not support D24X8 nor fetch4 for some reason (I guess timing), so there's little Futuremark could have done, but I would still be interested in the results of both cards running 16 bit DST without any optimizations. Either that or both cards running with R32F and all possible optimizations.
     
  10. Neeyik

    Neeyik Homo ergaster
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,231
    Likes Received:
    45
    Location:
    Cumbria, UK
    I've only tried a P4 3GHz system with a 6600 GT so far, but the SM2.0 tests seem remarkably CPU-limited. The HDR tests are a little better, with the first one being much less CPU bound than the second. Odd...
     
  11. rwolf

    rwolf Rock Star
    Regular

    Joined:
    Oct 25, 2002
    Messages:
    968
    Likes Received:
    54
    Location:
    Canada
    Only back then the NV30 performed in games the same as it did in 3DMark. That is not the case in this situation.
     
  12. Neeyik

    Neeyik Homo ergaster
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,231
    Likes Received:
    45
    Location:
    Cumbria, UK
    For anybody who is interested, I've collated shader dumps from all of the tests (bar the batch tests) from 3DMark06:

    http://www.neeyik.info/fmark/06shaders.rar

    Each folder in the rar file contains the vertex and pixel shaders from the respective tests, as caught by 3DAnalyze. The SM2.0 and HDR folders contain quite a lot of shaders because in the case of the SM2.0 tests, I ran them twice: default and then without HW DST. The same applies for the HDR tests but this time default and then with software FP filtering.

    A quick glance at some of the longer pixel shaders in the HDR tests shows that a couple of them are using an if-not-equal...else construct guarding a bucketload of instructions that can be skipped; the shader particle test also uses flow control in the vertex shader that performs the vertex texturing. Oh, and the Perlin noise test is one hell of a PS!
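    The branching pattern described above, an if/else guarding a large block of skippable instructions, can be sketched in Python. The function name, threshold, and instruction counts are illustrative, not taken from the actual shader dumps:

```python
def shade_pixel(intensity, threshold=0.01):
    """Toy model of a shader with dynamic branching: hardware with
    efficient flow control only pays for the expensive path when the
    branch is actually taken."""
    instructions = 1                      # cost of the comparison itself
    if intensity > threshold:
        # Expensive path: stands in for dozens of ALU ops in the real shader.
        result = sum(intensity ** i for i in range(1, 30))
        instructions += 29
    else:
        result = 0.0                      # cheap path: all that work skipped
    return result, instructions

_, cheap = shade_pixel(0.0)               # branch not taken
_, costly = shade_pixel(0.5)              # branch taken
print(cheap, costly)
```

    On hardware that handles such branches poorly, both sides of the if/else effectively get paid for, which is why flow-control-heavy tests can behave very differently across architectures.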
     
  13. Rys

    Rys Graphics @ AMD
    Moderator Veteran Alpha

    Joined:
    Oct 9, 2003
    Messages:
    4,182
    Likes Received:
    1,579
    Location:
    Beyond3D HQ
    Is it just me, or is the Perlin noise one just a (very) long < 512 instruction shader that'd compile as a pixel shader 2.0 test?
     
  14. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,669
    Actually, Joe has a point.

    Think about it. Every review site on the planet, when reviewing graphics cards, will benchmark the cards with no AA+AF as well as with AA+AF enabled. Check any site for the last 4 years and you'll see it's the standard.

    FutureMark have been in this industry for quite a while, and for them to still not include AA+AF results in a final score is worrisome. Of course, you can do so with the advanced and professional editions, but the basic edition can't.

    Of course, since the program is used to analyse a range of cards, this is most probably not convenient atm.

    Back to all that's been happening at hand: Futuremark could maybe have done more, but they don't think so, and that's their prerogative as the developer.
     
  15. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    4,024
    Likes Received:
    2,851
    Actually, IIRC, at the time 3DMark 2003 was released, the 5800 Ultra was even with or performed better than the 9700 Pro in most games w/o AA/AF. I think NVidia had convinced developers as well as consumers to hold off getting serious about DX9 until NV30 was released, because it was going to be such a great product. So, at the time it came out, 3DMark '03 was really the only indication of how bad an implementation of DX9 the NV30 really was. It wasn't until later on that the predictions made by '03 were validated by actual games.
     
  16. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Well, again, it'd be ridiculous to do comparisons in this way with the full score. You'd want to break down the score and only compare the SM2 FSAA results between the two IHV's. But that's what you can do now.
     
  17. Unknown Soldier

    Veteran

    Joined:
    Jul 28, 2002
    Messages:
    4,047
    Likes Received:
    1,669
    Hence the

    ;)

    US
     
  18. mrcorbo

    mrcorbo Foo Fighter
    Veteran

    Joined:
    Dec 8, 2004
    Messages:
    4,024
    Likes Received:
    2,851
    I would hope that a competent reviewer would choose effective and meaningful over convenient.
     
  19. Cowboy X

    Newcomer

    Joined:
    May 22, 2005
    Messages:
    206
    Likes Received:
    2
    I cannot be the only one who remembers the large-scale cheating done in titles that didn't even touch the DX9 weak point of the NV30.
     
  20. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    How do you come to this conclusion? The Perlin noise shader is a 3.0 shader.
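    For reference, classic gradient noise itself is simple; it is the unrolled, multi-octave evaluation that balloons a noise shader into hundreds of instructions. Here is a minimal 1D sketch in Python, with a toy permutation table standing in for Perlin's hash (the table and gradient rule are illustrative, only the fade polynomial follows Perlin's 2002 formulation):

```python
import math

PERM = [(i * 31 + 7) % 256 for i in range(512)]  # toy permutation table

def fade(t):
    # Perlin's quintic smoothstep: 6t^5 - 15t^4 + 10t^3
    return t * t * t * (t * (t * 6 - 15) + 10)

def grad(h, x):
    # Toy gradient rule: the hash picks a slope of +1 or -1
    return x if h % 2 == 0 else -x

def noise1d(x):
    xi = int(math.floor(x)) & 255          # lattice cell
    xf = x - math.floor(x)                 # position within the cell
    a = grad(PERM[xi], xf)                 # gradient at left lattice point
    b = grad(PERM[xi + 1], xf - 1.0)       # gradient at right lattice point
    return a + fade(xf) * (b - a)          # smooth interpolation

def fbm(x, octaves=4):
    # Summing octaves multiplies the instruction count, which is how a
    # fully unrolled noise shader gets so long.
    return sum(noise1d(x * 2 ** o) * 0.5 ** o for o in range(octaves))
```

    Even this stripped-down version shows why the question of 2.0 vs 3.0 compilation matters: evaluated per octave with loops and a texture-based hash, the shader looks very different from the same math unrolled into one long straight-line program.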
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.