Final Doom3 benches at HardOCP

Discussion in '3D Hardware, Software & Output Devices' started by Johnny Rotten, Jul 22, 2004.

Thread Status:
Not open for further replies.
  1. hovz

    Regular

    Joined:
    May 10, 2004
    Messages:
    920
    Likes Received:
    0
I never said it was. What was a big deal was using PS 2.0 and 2.0b features and passing them off as 3.0 while disabling them on capable cards. That isn't the case now, though.
     
  2. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    That's such a gross misrepresentation of the truth, hovz. The fact is that the new lighting model can only be applied to SM2 hardware for three lights or less. SM3 hardware can still do it with more lights. Regardless of how many times you've said that SM3 doesn't add anything in this instance, it most certainly does.

    And when the HDR effects make it in later, there should be a very significant image quality improvement from using SM3.
     
  3. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile

Oh, how sweet it is...;) (To quote Jackie Gleason)...;) You're absolutely right, Martrox.

    How short people's memories can be is the thing that confounds me. Flash back about one year ago to [H]'s "Doom III Preview" that was published JustInTime for nVidia's nV35 product launch, and--wow!--jeepers!--R3x0 was pulling an incredible TEN FRAMES PER SECOND in [H]'s 2003 DOOM III PREVIEW, and nV3x was just blowing it away with 60-90 fps benchmark scores, in the incredible, remarkable, come-one-come-all, DIII Preview, a world-wide exclusive hosted by [H]. Wow....what a shocker that was....uh, until...

    (1) Allegations of nVidia's cheating in 3dMk03 first surfaced about three days later--a fact [H] apparently never recovered from as the site thought the 3dMk03 news was merely an attempt to steal [H]'s DIII Preview publicity away from [H]. And said so. Loudly.

    (2) It became apparent nVidia was having severe yield problems with nV35, just as it had with nV30 (switching-the-FAB blame game, notwithstanding.)

    (3) It became apparent that nV30 was simply utterly incapable of thrashing R3x0 at *anything*, which made the [H] world-wide 2003 DIII Preview, closed-benchmark run results completely suspect to everyone.

    (4) It became known that DIII wasn't actually going to ship at any time in 2003, which made the DIII Preview of 2003 little more than an exercise in silliness...;)

    (5) It became known that not only had nVidia actually in fact cheated in 3dMk03, but the company was proud of it, declared war on FM, and kept it up for the entire year, in one form or another, even after formally declaring a truce with FM that was at least skin deep.

    Heh...;) Return to the present, and we've got Carmack admitting it was a closed benchmark run--but only this time conducted on the premises of ID--instead of what they did last year, which was to ship "sealed" systems to [H], supposedly under guard or something--and Carmack admits that while nVidia's OpenGL drivers were heavily optimized for the D3 game--ATi's were not (which was, if you recall, the same exact thing that happened in the [H] 2003 D3 Preview.)

    In short, I see very little reason to suspect these numbers will remain constant between the competing cards moving into the future, any more than nV3x ever thrashed R3x0 last year--despite all the indications that it could and would that were provided us by the [H] D3 Preview of 2003...;)

    I really don't see it. So, people can say I'm a "fan boy" all they like--but I prefer to think of it as a matter of "once bitten, twice shy"...but that's just me, you know...;)

    I'm going to wait until ATi gets a crack at "optimizing" its drivers for the D3 code, and then I suspect we'll see several horses of somewhat different colors rounding the track.

    All this "deja vu" is getting a bit tiresome...;) It's remarkable that more people haven't commented on this rerun we're seeing, but memory just isn't what it used to be, I suppose.
     
  4. AntShaw

    Regular

    Joined:
    Mar 22, 2004
    Messages:
    883
    Likes Received:
    7
    Location:
    Maryland, USA
Well, the fact of the matter is JC coded the game; he knows what it 'IS' supposed to look like. Hence the statement 'unless you know what you are looking at.' From that statement I would clearly assume JC can see the difference in the opts. From his statement 'but it doesn't visibly impact the quality of the game unless you know exactly what to look for on a specific texture,' I think it clearly states 'HE' can see the difference, but people who don't know what they are looking for can't see the difference. So the fact of the matter is the difference is there, which means there IS a loss in IQ, which means that is a problem and a bad thing.
     
  5. Maintank

    Regular

    Joined:
    Apr 13, 2004
    Messages:
    463
    Likes Received:
    2
I find it kind of funny that you tell us John Carmack says not to use this as the end-all for purchasing cards and we should believe him, but when it comes to IQ and his opinions we shouldn't?

btw, this is not surprising. In fact, what is surprising is how well the 9800 did compared to the 5900. I really expected the difference to be larger in the 5900's favor.

And yes, this is a big deal, because there are a lot of gamers out there who are waiting to build a machine for Doom 3. At this point in time, anything in the ATI camp for Doom 3 is a waste of cash compared to the 6800s.

What in the world makes you think they haven't had a chance yet? I seem to remember a leak from ATI almost two years ago. That indicates to me, at least, that they have had plenty of time to work on "optimizing" for D3.

Don't fall into the trap most supporters of the 5900 did and think a miracle driver release will happen and save the thing from its misery. It won't happen. You "may" get slight increases over time, but nothing so dramatic it will turn this around.
     
  6. hovz

    Regular

    Joined:
    May 10, 2004
    Messages:
    920
    Likes Received:
    0
Speaking of gross misrepresentation: HDR has nothing to do with SM3.
     
  7. Quitch

    Veteran

    Joined:
    Jun 11, 2003
    Messages:
    1,521
    Likes Received:
    4
    Location:
    UK
IIRC, didn't Carmack comment on the filtering optimisation when it was first known, saying it was an intelligent optimisation/clever idea and he was surprised no one had done it before... or was that nVidia's? Anyway, I'm sure he did, so I was a bit surprised by his comments now.
     
  8. hovz

    Regular

    Joined:
    May 10, 2004
    Messages:
    920
    Likes Received:
    0
Why is it that people begin to point out typos when trying to prove a point?
     
  9. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    No matter how great of an optimization it is, it should still be possible for the user to disable the optimization if it doesn't provide mathematically-equivalent results.
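For readers unfamiliar with what "mathematically-equivalent" means here: full trilinear filtering blends linearly between two mip levels across the entire transition, while the reduced-blend ("brilinear") style of optimization only blends inside a narrow band around the mip boundary. A minimal sketch of the difference — the `band` width and exact weight functions are illustrative assumptions, not any vendor's actual algorithm:

```python
def trilinear_weight(lod_frac):
    """Full trilinear: the blend weight ramps linearly across
    the whole transition between two adjacent mip levels."""
    return lod_frac

def brilinear_weight(lod_frac, band=0.4):
    """Reduced-blend ("brilinear") optimization: pure bilinear
    outside a narrow band around the mip boundary, with a linear
    blend only inside it. band=0.4 is a made-up illustrative value."""
    lo, hi = 0.5 - band / 2.0, 0.5 + band / 2.0
    if lod_frac <= lo:
        return 0.0
    if lod_frac >= hi:
        return 1.0
    return (lod_frac - lo) / (hi - lo)

# Not mathematically equivalent: at lod_frac = 0.2, full trilinear
# blends in 20% of the next mip level, brilinear blends in none.
```

Since the two weight curves genuinely differ, the rendered pixels differ too — which is the basis of the argument that the user should be able to switch such an optimization off.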
     
  10. AAlcHemY

    Newcomer

    Joined:
    Jun 17, 2003
    Messages:
    215
    Likes Received:
    0
    Location:
    Belgium
Hmm,

1600x1200 0xAA 8xAF, AGP:
6800U: 61.8 fps
X800 XT PE: 49.8 fps

1600x1200 0xAA 8xAF, PCIe:
6800U: 68.5 fps (10.8% increase)
X800 XT PE: 51.8 fps (4% increase)

1600x1200 4xAA 8xAF, AGP:
6800U: 40.7 fps
X800 XT PE: 33.5 fps

1600x1200 4xAA 8xAF, PCIe:
6800U: 42.5 fps (4.4% increase)
X800 XT PE: 33.5 fps (0% increase)

I don't see why 0xAA (compared to 4xAA) would stress the bus more?
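The percentage figures in the post above are just relative fps gains going from AGP to PCIe; a quick sketch of the arithmetic, using the numbers quoted in the post:

```python
def pct_increase(old_fps, new_fps):
    """Relative gain of new_fps over old_fps, in percent."""
    return (new_fps - old_fps) / old_fps * 100.0

# AGP -> PCIe gains at 1600x1200 8xAF, from the figures above
print(round(pct_increase(61.8, 68.5), 1))  # 6800U, 0xAA     -> 10.8
print(round(pct_increase(49.8, 51.8), 1))  # X800 XT PE, 0xAA -> 4.0
print(round(pct_increase(40.7, 42.5), 1))  # 6800U, 4xAA     -> 4.4
print(round(pct_increase(33.5, 33.5), 1))  # X800 XT PE, 4xAA -> 0.0
```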
     
  11. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Because with 0xAA, you're running more frames per second. This equates to more data being sent across the bus.
     
  12. Quitch

    Veteran

    Joined:
    Jun 11, 2003
    Messages:
    1,521
    Likes Received:
    4
    Location:
    UK
I don't care about the optimisation per se; I'm simply trying to get Carmack's quotes straight.
     
  13. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    Right, well, my point was that he can still think it's a great optimization while at the same time desire the ability to turn it off.
     
  14. LeStoffer

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,262
    Likes Received:
    22
    Location:
    Land of the 25% VAT
    Remember that the PCI-Ex and AGP systems were not equal:

PCI-Ex = 3.6GHz Pentium 4 processor and 4GB of 533MHz DDR2 RAM on an Intel i925X motherboard.
AGP = Intel Pentium 4 3.2GHz processor and 2GB of DDR400 RAM on an i875 motherboard.

    I seem to remember that AGP was generally faster with nVidia, so I don't really know what is up here.

    Edit: See http://www.anandtech.com/video/showdoc.aspx?i=2096
     
  15. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Heh...;) Huh?....;) That's exactly what I pointed out--and was told I was a "fan boy" (of ATi, I guess), for saying it. Not by you, of course. But you've reiterated above exactly what I originally pointed out to a poster who thought all Carmack was doing was praising nVidia and criticizing ATi. (Next time, you might wish to consider the context of my comments...;))

    And what, pray tell, is the purpose of trilinear filtering in a 3d game if not to improve image quality? To that end, if your trilinear optimization doesn't "visibly impact the quality" of a 3d game running full trilinear, but at the same time it improves the game's performance, how can that possibly be construed as a negative?

    To me, that's like saying, "It's OK to turn on 4x FSAA, even if the game engine doesn't support FSAA, because what's important is turning on FSAA, not whether it provides any benefit to IQ." I think you'd agree with me that such thinking is silly (several current game engines have difficulty with FSAA support, btw), as turning on FSAA in a Cpanel when the game engine doesn't support it will *slow you down* without improving IQ one iota.

    Likewise, it seems to me, a Trilinear optimization which is indistinguishable in game play IQ from "full trilinear," but which advances performance above full trilinear, is a desirable thing to have.

    I think you're also forgetting that nVidia started out on the Trilinear optimization path last year, well ahead of ATi. nVidia got caught at it precisely because it *did* "visibly impact the quality of the game" and it was highly noticeable to many people. From what I read these days, nVidia's trilinear optimizations are much better than they were originally, with respect to degrading IQ, and if that's true then I have no more objection to them doing it than I do for ATi doing it--if the IQ benefit of trilinear isn't lost in the process. What's important is not the filtering algorithm, but rather the IQ it produces.

    Query: Why did Carmack think it necessary to mention ATi's trilinear optimizations, even pointing out that they turn on and off automatically, while neglecting to say a word about nVidia's trilinear optimizations?...;) I wonder why....as he certainly doesn't say anything about feeling a necessity to manually turn nVidia's trilinear optimizations off--I almost wonder if he did? Now, wouldn't *that* be a hoot if he didn't? Heh...;)

    Huh?...;) Do you mean the difference between turning trilinear filtering itself on or off? Because if you are talking with respect to ATi's trilinear optimizations, JC *cannot* turn them on or off manually, as they operate automatically under the driver's programmed control, and turn themselves on or off without user intervention (very unlike nVidia's manual on-off slider/switch thing.)

    A key difference would be that when running colored mipmap tests the ATi drivers always *turn off* the optimization and it's beyond user intervention to force them ON during a similar test. With nVidia's optimizations, if they are on, the same colored mipmap tests will run optimized trilinear--and you have to turn them off manually to get full trilinear.
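For anyone who hasn't seen one, a colored-mipmap test simply replaces each mip level of a texture with a distinct solid color, so the on-screen transition bands reveal exactly how (and whether) the driver blends between levels. A minimal sketch of building such a chain — the color list is an arbitrary illustrative choice:

```python
# Distinct solid colors, one per mip level (arbitrary choices)
MIP_COLORS = [(255, 0, 0), (0, 255, 0), (0, 0, 255),
              (255, 255, 0), (255, 0, 255), (0, 255, 255)]

def colored_mip_chain(base_size):
    """Build a mip chain where level n is a solid MIP_COLORS[n]
    image, halving the resolution each level down to 1x1."""
    chain, size, level = [], base_size, 0
    while size >= 1:
        color = MIP_COLORS[level % len(MIP_COLORS)]
        chain.append([[color] * size for _ in range(size)])
        size //= 2
        level += 1
    return chain
```

Rendered with such a texture, full trilinear shows smooth color gradients between the bands, while a reduced-blend optimization shows abrupt color edges — which is why a driver that special-cases this test, as described above, defeats its whole purpose.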

    So let me get this straight--because you can't tell that ATi's trilinear optimizations are on, because they turn themselves off when they might become visible--that's a negative. But since nVidia's trilinear optimizations are noticeable, you'd want to turn them off, and would turn them off and consider that a positive. Heh...;) Okay...;)

    Another suggestion might be that you just disable trilinear filtering under the Catalysts, and that way you could always get bilinear when you wanted it--reliably and predictably. IMO, the only reason you'd *want* to turn off a trilinear optimization is because it degraded IQ. But if it doesn't, and the IQ is the same as legacy trilinear, but it boosts performance at the same time, then why would you *ever* want to turn it off?

Right, I mean, everybody knows that cigarette smoking leads inexorably to heroin addiction and male/female/animal prostitution--of course--so it's only natural to think that trilinear optimization will lead us all into overdosing on bilinear filtering, after we've become hopelessly addicted to bilinear filtering, of course, and can no longer *tell the difference*...;) Uh-huh...;)

    You need to decide something, though--you need to decide whether the difference is unnoticeable, or it isn't...;) Until you decide that about whomever's trilinear optimizations you want to talk about--you'll never know whether the slope is slippery, or whether it's all in your head...;)
     
  16. grecco_julio

    Newcomer

    Joined:
    May 23, 2004
    Messages:
    233
    Likes Received:
    0
I can't figure out why NVIDIA is better!? It only wins by a marginal difference in Far Cry, Doom 3 and OpenGL games.
     
  17. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,532
    Location:
    Winfield, IN USA
Sorry, even I disagree with that. If these benchmarks are indicative of the cards' performance on D3, then I think "marginal" isn't an accurate way to describe the differences.

    Sorry, I just don't. :(
     
  18. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
Nope, the fact of the matter is you missed the part in my last reply to you where I pointed out the salient fact that, in truth, Carmack's statement is worded so that you cannot tell *which IHV's optimizations* Carmack is discussing when he says "It doesn't visibly impact the quality of the game unless you know what you are looking for." Read that sentence in the context of the paragraph it is in and you'll see what I mean. It could apply to "any vendor" or "nVidia" as well as to "ATi" as he's worded it. As I said, Carmack deliberately switches gears in that paragraph and waxes vague and indistinct. But of course, you are certainly free to believe whatever you choose, as I am sure you shall...;)
     
  19. T2k

    T2k
    Veteran

    Joined:
    Jun 12, 2002
    Messages:
    2,004
    Likes Received:
    0
    Location:
    The Slope & TriBeCa (NYC)
Honestly, I really don't understand why two DIFFERENT CPUs were used... :?: :?:
     
  20. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Did you bother to read Carmack's statement--you know, the one I quoted where he specifically states that the nVx nVidia drivers he used for the benchmark were *heavily optimized for D3* and would likely run *much slower* in other games because of that--nV40 and especially nV3x? Refresh your memory, and then consider that he says nothing of the kind about the ATi drivers he used, does he? Consider as well how Carmack states that he has *no doubt* that future driver revisions will change the current numbers. If you can't see that from his statements--nothing I can do for you...;)

    As well, everyone knew exactly why nV3x would never, ever catch R3x0, and that was because the physical architectures were so different. nVidia cut the IQ down to crap and it still made no difference--R3x0 sailed right by it in performance with comparably impeccable IQ at the same time. nV3x architecturally wasn't in the same league with R3x0. Just the facts, jack...:D That's why no nVidia driver on earth could save it in comparison with R3x0.

But many, many things about nV40 and R4x0 are very, very similar. In fact, things are much more similar in terms of pixels per clock and other things than they were between the last generation of chips. It's just that ATi is generally clocked a lot higher, at least at the top end--but be that as it may. There's simply no reason, no reason at all, that ATi could not catch and exceed the best nV4x performance in D3, given sufficient driver optimization--because the difference between the current chips doesn't remotely resemble the difference between R3x0 and nV3x. And that is why you're wrong...;)

    (While I've said much in this thread about memory lapses, I have to confess that I tend to forget that a lot of people weren't paying much attention a year ago, apparently.)
     