Half Life 2 Benchmarks (From Valve)

Discussion in 'Architecture and Products' started by Dave Baumann, Sep 11, 2003.

  1. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,532
    Location:
    Winfield, IN USA
    ROFLMFAO~~~

    No I hadn't, thank you for pointing it out to me and making me blow coffee all over my monitor...it was worth it. :lol:
     
  2. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
Not with DX9 effects. Sounds OK to me.

I did read in one of these blurbs that these benchmarks are run with trilinear filtering, too.
     
  3. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
Well, the problem with that is that Valve still has nVidia customers. In other words, they can say "nVidia, fix your shit!", but if VALVE has some control over making performance better, they feel some obligation to their customers to do it.

    Of course, after this whole exercise, Valve seemed to learn that in the end, it wasn't worth it....but there's no way they could really know that until they went through the exercise first.

    Valve's "recommendation" to other developers is now, in fact, more or less what you suggest. Don't bother wasting much time trying to tune for nVidia DX9. Write the standard path (sure, use hints and what not)...give it to FX users as an option. And that's it. (Put the ball in nVidia's court.)
     
  4. AndY1

    Newcomer

    Joined:
    Aug 22, 2003
    Messages:
    24
    Likes Received:
    0

Does anyone have any idea why the R9800 Pro is only 10+ FPS ahead of the R9600 Pro?

Is the game that CPU bottlenecked?

I would expect at least 50% better performance from the R9800 Pro over the R9600 Pro...
     
  5. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
FSAA benchmarks would separate those two cards real quick. Gabe Newell runs a 9800 and plays HL2 @ 1600x1200 with 2X FSAA.
     
  6. Socos

    Newcomer

    Joined:
    Feb 23, 2003
    Messages:
    48
    Likes Received:
    0
The unfortunate thing is some website (I won't name names!!) will take screenshots and say, "while if you zoom in you can see the IQ degradation, when you're running around you'll never notice it."

    Which of course will make it all okay. :roll:
     
  7. Joe DeFuria

    Legend

    Joined:
    Feb 6, 2002
    Messages:
    5,994
    Likes Received:
    71
    I'll make you a deal. If other web sites run benchmarks / reviews with unreleased DET 50 drivers, then you have my "permission" ;) to investigate the drivers and publish your findings. :D
     
  8. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,532
    Location:
    Winfield, IN USA
    Nah, I can't give him a free pass on that one...I'm still miffed about the "simple-minded" comment aimed at me yesterday by Mr. Tan.
     
  9. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Heh...;) I guess we do diverge...:D I'll make some quick comments, but really don't want to skew the thread anymore off-topic than we've managed to take it thus far...

    I thought it was better from a quality standpoint...but you're right--that's subjective. As I said I didn't see much value at the time in displaying 8-bit and 16-bit game engines in 24/32-bits because of the performance hit. What I found is that 3dfx's 16-bit display far outclassed the TNT's 16-bit display, and was far faster than TNT2's 24/32-bit displays.

    Of course it matters a great deal if your starting premise was that 3dfx was "opposed" to 32-bit products--since the 32-bit V5 was originally slated to ship late fall 1999. This proves no such opposition actually existed in the company. The TNT1 only proved 3dfx's point about 24-bit support at the time--you could get it but the price was very low performance. My TNT was a slide show in 24-bits. TNT2 improved that somewhat, but was still not compelling for me at the time for reasons already stated.

    Heh...:) I can imagine it easily--I'm sure all 2-3 of your games were DX in '99...;) The thing was that for a good year or two after the V1 shipped the only 3d games available were GLIDE, and then there was a period of time when developers shipped GLIDE/D3d titles (both versions with the same game), and it wasn't really until late '99 when the first D3d-only titles of any importance whatsoever began to appear (mainly because D3d didn't really catch GLIDE functionality until DX6.)

    In my case I had a sizable library of 3d games the vast majority of which were GLIDE. That was a big negative for non-3dfx products at the time from my perspective. Of course, if you hadn't been collecting 3d games for a couple of years and didn't have a GLIDE library (maybe QuakeX was the only thing you played) it wouldn't have been a consideration. (Saying "you" here figuratively.)

    Heh...;) Sort of ironic for nVidia these days, in'nt?....:D


Show me *one* 3d game that overflows AGP bandwidth with data other than textures...;) AGP after all is but an extension of PCI. But--I'm not saying I don't "like" AGP--for me it's not a question of liking or disliking--it's a question of understanding it and its origins (which I alluded to in an earlier post.) In 1999 there wasn't a spit's/nickel's worth of performance difference between PCI66 and AGP x2 (the V5 was an AGP card--it just supported PCI66, was all. The V4, shipped after the V5, was the single-chip version which did support AGP--but it made no difference in performance because the VSA-100 reference designs were all so much faster than AGP at the time with regard to their onboard buses. As well, the AGP spec at the time didn't support multiple chips--which is why the AGP V5 only supported PCI66 but the V4 supported AGP.) Back then, it was purely a marketing term when used to describe the performance differences between the fastest 3d cards shipping at the time.

    I understand what you're saying relative to what you did with the Savage designs, but just don't agree it would have made any difference with the existing nVidia or 3dfx reference designs at the time.

    But the problem with that line of thinking is that while other vendors were shipping 16-mb cards i74x was shipping 8 meg cards, and when other vendors were doing 32-meg cards i75x was doing 16 (IIRC.) Therefore, it seemed to me that Intel deliberately hobbled i7xx by making it AGP texture-dependent in relation to the competing products of the time, which handled textures locally. IMO, the reference designs Intel was hawking for i7xx at the time simply didn't have enough onboard ram to texture locally, even if that had been the design. So, if it was broken it always seemed deliberately broken to me...;)

    OK, I'm done with the off-topic here...:) It was fun, though...
     
  10. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
I worry more about "missing fog, lowered filtering quality..." than "the IQ degradation that you only see when you're zooming in". After all, it's a game, not a synthetic benchmark.

Well, actually, I don't worry at all since I don't own a GF FX, but I would if I did :)

I also wonder about the low performance difference between the 9600 Pro and 9800 Pro. Maybe it's really CPU limited in this case, but I shudder to think of how badly the 5900 really performs if that's the case. There's of course the FSAA thing, but that's mostly a bandwidth thing. Shouldn't there also be a huge difference between the 9800 Pro's and 9600 Pro's shader performance? (Not that I don't think it's great if the 9600 Pro has such good shader performance compared to the 9800 Pro.)
     
  11. Doomtrooper

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    3,328
    Likes Received:
    0
    Location:
    Ontario, Canada
The shader performance of a 9600 Pro is very close to a 9800 Pro's; all the R3xx parts are in fact based on the same shader design. The only thing separating the performance is the overall throughput of the card itself...fillrate, etc...
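That throughput point can be put in rough numbers. A minimal sketch, using the commonly quoted specs for these parts (9600 Pro: 4 pixel pipelines at 400 MHz; 9800 Pro: 8 pipelines at 380 MHz) and assuming, as a simplification, that shader throughput scales linearly with pipelines times clock:

```python
# Rough throughput model for two R3xx-family cards (illustrative only).
# Pipeline counts and clocks are the commonly quoted specs; the linear
# pipes * clock scaling is an assumption, not a measured result.

def shader_throughput(pipes, clock_mhz):
    """Peak pixel-shader ops per second, assuming one op per pipe per clock."""
    return pipes * clock_mhz * 1e6

r9600pro = shader_throughput(4, 400)   # RV350: 4 pipes @ 400 MHz
r9800pro = shader_throughput(8, 380)   # R350:  8 pipes @ 380 MHz

print(f"9800 Pro / 9600 Pro throughput ratio: {r9800pro / r9600pro:.2f}x")
```

Under that model the 9800 Pro has about 1.9x the peak shader throughput, so anything much closer than that in a benchmark suggests a bottleneck other than the shaders.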
     
  12. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    18,992
    Likes Received:
    3,532
    Location:
    Winfield, IN USA
    Dumb question, but the 9700 Pro numbers should fall pretty close to the 9800 Pro numbers if'n you've got yourself a slightly OCed 9700 Pro....right? (Simple questions from simple-minded people, I know. :p )
     
  13. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    Hey Reverend, do you remember when I asked you this...
Well, it seems some devs are pissed.
BTW, is the new mantra now, "just wait for the Det. 51's, not the 50's"?
     
  14. Bjorn

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,775
    Likes Received:
    1
    Location:
    Luleå, Sweden
Of course they're based on the same design. But I was under the impression that the 9800 Pro also had more shader units, not just higher fillrate and more bandwidth.
     
  15. Ilfirin

    Regular

    Joined:
    Jul 29, 2002
    Messages:
    425
    Likes Received:
    0
    Location:
    NC
Aye, that pixel shader bench I wrote about 8 months ago showed the entire R3xx line in the same general ballpark. The 9500 Pro was right around 3000 MIPS, with the 9700 Pro around 3600 MIPS, and the 5800 Ultra right around 500 MIPS, IIRC. Granted, this is a bit more extreme a difference than you'd likely see in a real game...that bench had almost exactly zero CPU or vertex work...it was definitely a pure synthetic FP pixel shader bench.
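For readers wondering where a MIPS figure like that comes from: one common way a synthetic pixel shader bench derives it is to render a full-screen quad with an N-instruction shader and multiply pixels by instructions by framerate. A sketch with hypothetical numbers (the resolution, instruction count, and framerate below are made up for illustration, not the actual settings of the bench discussed above):

```python
# How a synthetic pixel-shader benchmark can derive a MIPS figure.
# All numbers are hypothetical example values, not the real bench settings.

def shader_mips(width, height, instructions_per_pixel, fps):
    """Millions of shader instructions executed per second."""
    return width * height * instructions_per_pixel * fps / 1e6

# Example: 1024x768 full-screen quad, 50-instruction FP shader, 90 fps measured
print(f"{shader_mips(1024, 768, 50, 90):.0f} MIPS")
```

Because there is no CPU or vertex work to speak of, the number tracks pure fragment-shader throughput.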
     
  16. nelg

    Veteran

    Joined:
    Jan 26, 2003
    Messages:
    1,557
    Likes Received:
    42
    Location:
    Toronto
    This just ticks me off.
This is from THG. Dave's post and the TechReport's story have Gabe saying nVidia's drivers, not "unnamed manufacturers". I also like (sic) how they diffuse blame away from nVidia by saying "manufacturers" instead of manufacturer! And while on the topic of poor, sloppy, or biased reporting, I hope Kyle takes time to re-read those PowerPoint slides he was given by nVidia before any benchmarking of HL2 with the Det 5.x's!
     
  17. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
You get to a certain speed and you'll probably see that it's CPU limited with HL2.
     
  18. NapalmV

    Newcomer

    Joined:
    May 20, 2003
    Messages:
    7
    Likes Received:
    0
  19. KnightBreed

    Newcomer

    Joined:
    Feb 7, 2002
    Messages:
    203
    Likes Received:
    0
I suspect that the performance gap should decrease in HL2 as well when AA and/or AF is enabled, as the bottleneck should move toward memory bandwidth rather than shader performance.

    Is this really a surprise? I suggest waiting for full reviews with benchmarks from both apps and both video cards and both driver revisions.
     
  20. Dave H

    Regular

    Joined:
    Jan 21, 2003
    Messages:
    564
    Likes Received:
    0
    Exactly. In a fragment shader limited situation (which HL2 in full DX9 mode will be whenever it's not CPU limited), the 9800 Pro will get nearly double the framerate of the 9600 Pro, because it has exactly double the fragment processing resources and nearly the same clock rate.

    It's interesting to note that HL2 is CPU limited at 60 fps with full settings on a P4 2.8C. Obviously most of the gee-whiz physics is going to have to be turned off for that 800 MHz P3 user to play this game at minimum specs.

    Conversely, this means 9800 Pro users will be able to bump the res/AA/AF up a decent amount and still get close to 60 fps. And that the "true" HL2 performance difference between the 9800 Pro and 5900 Ultra is more like 3:1, not 2:1.
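The compressed gap between the 9600 Pro and 9800 Pro in the posted numbers fits a simple model: the observed framerate is roughly the minimum of what the CPU and the GPU can each sustain. A sketch, where the 60 fps CPU ceiling is the figure cited above and the per-card GPU limits are hypothetical values chosen for illustration:

```python
# Why a CPU ceiling compresses the gap between cards (illustrative model).
# The 60 fps CPU cap is the figure cited in the thread; the GPU-limited
# rates below are hypothetical, not measured.

def observed_fps(cpu_limit_fps, gpu_limit_fps):
    """Framerate is bounded by whichever of CPU or GPU saturates first."""
    return min(cpu_limit_fps, gpu_limit_fps)

cpu_cap = 60            # CPU-limited ceiling for HL2 on a P4 2.8C
gpu_fast_card = 110     # hypothetical shader-limited rate, faster card
gpu_slow_card = 58      # hypothetical shader-limited rate, slower card

print(observed_fps(cpu_cap, gpu_fast_card))  # fast card hits the CPU wall
print(observed_fps(cpu_cap, gpu_slow_card))  # slow card stays GPU-limited
```

A card with nearly twice the shader throughput can thus show only a few fps of advantage once it runs into the CPU wall, which is why raising resolution/AA/AF (pushing the bottleneck back onto the GPU) spreads the cards apart again.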
     
