Half Life 2 Benchmarks (From Valve)

jjayb said:
Anyone else notice that Kyle left this slide out of his editorial:

Image3.jpg
ROFLMFAO~~~

No, I hadn't. Thank you for pointing it out to me and making me blow coffee all over my monitor...it was worth it. :LOL:
 
DemoCoder said:
Anyway, 60fps @ 1024x768 with no AF/AA on a R9800 PRO sounds kind of troubling.

Not for a game with DX9 effects. Sounds OK to me.

I did read in one of these blurbs that these benchmarks are with trilinear filtering, too.
 
DemoCoder said:
Valve should have simply said the following to NVidia:

1) we have hinted our HLSL shaders which don't need full precision

2) your current compiler sucks. Please ship one that is better. Until then, you get default compiled by MS FXC, and your driver had better figure out how to deal with DX9 PS2.0 PP hints

3) until then, DX9 is disabled on your cards, and HL2 will run in DX8 on them

4) After NVidia ships a better HLSL compiler, Valve will issue a patch with the newly compiled NV3x shaders

Well, the problem with that is that Valve still has nVidia customers. In other words, they can say "nVidia, fix your shit!", but if Valve has some control over making performance better, they feel some obligation to their customers to do it.

Of course, after this whole exercise, Valve seemed to learn that in the end, it wasn't worth it....but there's no way they could really know that until they went through the exercise first.

Valve's "recommendation" to other developers is now, in fact, more or less what you suggest. Don't bother wasting much time trying to tune for nVidia DX9. Write the standard path (sure, use hints and what not)...give it to FX users as an option. And that's it. (Put the ball in nVidia's court.)
 
hl21.gif


Does anyone have any idea why the R9800 Pro is only 10+ FPS ahead of the R9600 Pro?

Is the game that CPU bottlenecked?

I would expect at least 50% better performance from the R9800 Pro over the R9600 Pro...
 
Doomtrooper said:
As with previous tests on NV3.xx hardware, what really makes them improve is....precision drops...FX12 to be exact.

That is what I'm taking from that comment, and only image quality tests would probably show it.

The unfortunate thing is some website (I won't name names!!) will take screenshots and say, "while if you zoom in you can see the IQ degradation, when you're running around you will never notice it."

Which of course will make it all okay. :rolleyes:
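Since only a real image quality test is likely to catch that sort of precision drop, here's the kind of thing I mean (a quick sketch of my own, not any site's actual methodology): dump the same timedemo frame from both cards and difference the captures per channel, rather than eyeballing zoomed screenshots. The buffer handling below is just a stand-in; loading real screenshots is left out.

```cpp
// Quick sketch: quantify the difference between two same-sized RGBA8
// screenshots (e.g. the same timedemo frame captured on two cards),
// instead of relying on zoomed-in eyeballing.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

struct DiffStats {
    double meanAbsError;  // average per-channel difference (0..255)
    int    maxAbsError;   // worst single-channel difference
    double pctChanged;    // % of channel values that differ at all
};

DiffStats Compare(const std::vector<uint8_t>& a, const std::vector<uint8_t>& b)
{
    DiffStats s = {0.0, 0, 0.0};
    const size_t n = std::min(a.size(), b.size());
    double sum = 0.0;
    size_t changed = 0;
    for (size_t i = 0; i < n; ++i) {
        const int d = std::abs(int(a[i]) - int(b[i]));
        sum += d;
        s.maxAbsError = std::max(s.maxAbsError, d);
        if (d) ++changed;
    }
    if (n) {
        s.meanAbsError = sum / double(n);
        s.pctChanged   = 100.0 * double(changed) / double(n);
    }
    return s;
}

int main()
{
    // Stand-ins for two captured frames; in practice you'd load real dumps.
    std::vector<uint8_t> refCard(1024 * 768 * 4, 128);
    std::vector<uint8_t> testCard(1024 * 768 * 4, 128);
    testCard[12345] = 120;  // pretend one channel of one pixel shifted

    DiffStats s = Compare(refCard, testCard);
    printf("mean abs error: %.4f  max: %d  changed: %.3f%%\n",
           s.meanAbsError, s.maxAbsError, s.pctChanged);
    return 0;
}
```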
 
Reverend said:
Why not? I can just label the article "A study on NVIDIA's Beta Detonator 50 drivers" and it can be about certain NVIDIA GFFX cards, with some image output comparisons of HL2 with ATI cards, to show any possible image output differences between ATI DX9 and GFFX cards.

I'll make you a deal. If other web sites run benchmarks / reviews with unreleased DET 50 drivers, then you have my "permission" ;) to investigate the drivers and publish your findings. :D
 
Joe DeFuria said:
Reverend said:
Why not? I can just label the article "A study on NVIDIA's Beta Detonator 50 drivers" and it can be about certain NVIDIA GFFX cards, with some image output comparisons of HL2 with ATI cards, to show any possible image output differences between ATI DX9 and GFFX cards.

I'll make you a deal. If other web sites run benchmarks / reviews with unreleased DET 50 drivers, then you have my "permission" ;) to investigate the drivers and publish your findings. :D
Nah, I can't give him a free pass on that one...I'm still miffed about the "simple-minded" comment aimed at me yesterday by Mr. Tan.
 
OpenGL guy said:
This is where we diverge. The V3 was certainly faster but I wouldn't say it was "better" because that's very subjective. It certainly wasn't better from a quality standpoint.

Heh...;) I guess we do diverge...:D I'll make some quick comments, but really don't want to skew the thread any more off-topic than we've managed to take it thus far...

I thought it was better from a quality standpoint...but you're right--that's subjective. As I said I didn't see much value at the time in displaying 8-bit and 16-bit game engines in 24/32-bits because of the performance hit. What I found is that 3dfx's 16-bit display far outclassed the TNT's 16-bit display, and was far faster than TNT2's 24/32-bit displays.

Shoulda-woulda-coulda... none of this matters.

Of course it matters a great deal if your starting premise was that 3dfx was "opposed" to 32-bit products--since the 32-bit V5 was originally slated to ship late fall 1999. This proves no such opposition actually existed in the company. The TNT1 only proved 3dfx's point about 24-bit support at the time--you could get it but the price was very low performance. My TNT was a slide show in 24-bits. TNT2 improved that somewhat, but was still not compelling for me at the time for reasons already stated.

None of my games were GLIDE... imagine that :)

Heh...:) I can imagine it easily--I'm sure all 2-3 of your games were DX in '99...;) The thing was that for a good year or two after the V1 shipped the only 3d games available were GLIDE, and then there was a period of time when developers shipped GLIDE/D3d titles (both versions with the same game), and it wasn't really until late '99 that the first D3d-only titles of any importance whatsoever began to appear (mainly because D3d didn't really catch up to GLIDE functionality until DX6.)

In my case I had a sizable library of 3d games the vast majority of which were GLIDE. That was a big negative for non-3dfx products at the time from my perspective. Of course, if you hadn't been collecting 3d games for a couple of years and didn't have a GLIDE library (maybe QuakeX was the only thing you played) it wouldn't have been a consideration. (Saying "you" here figuratively.)

*shrug* 32-bits worked for me. I also enabled trilinear filtering whenever possible.

Heh...;) Sort of ironic for nVidia these days, in'nt?....:D


You're still missing the point. Lots of stuff goes into AGP all the time. VBs, IBs, textures, etc. It's all about making AGP work well. I wouldn't want to put a Z buffer or AA buffer into AGP, but it can be done.

Show me *one* 3d game that overflows AGP bandwidth with data other than textures...;) AGP after all is but an extension of PCI. But--I'm not saying I don't "like" AGP--for me it's not a question of liking or disliking--it's a question of understanding it and its origins (which I alluded to in an earlier post.) In 1999 there wasn't a spit's/nickel's worth of performance difference between PCI66 and AGP x2 (V5 was an AGP card--just supported PCI66, was all. The V4, shipped after the V5, was the single chip version which did support AGP--but it made no difference in performance because the VSA-100 reference designs were all so much faster than AGP at the time with regard to their onboard buses. As well, the AGP spec at the time didn't support multiple chips--which is why the AGP V5 only supported PCI66 but the V4 supported AGP.) Back then, it was purely a marketing term when used to describe the performance differences between the fastest 3d cards shipping at the time.

I understand what you're saying relative to what you did with the Savage designs, but just don't agree it would have made any difference with the existing nVidia or 3dfx reference designs at the time.

The real reason Intel used full-time AGP texturing on the i740 was because local texturing was just plain broken ;) If it was slow, you can't necessarily blame AGP, you have to also consider the speed of the chip, its AGP implementation, etc.

But the problem with that line of thinking is that while other vendors were shipping 16MB cards the i74x was shipping 8MB cards, and when other vendors were doing 32MB cards the i75x was doing 16MB (IIRC.) Therefore, it seemed to me that Intel deliberately hobbled i7xx by making it AGP texture-dependent in relation to the competing products of the time, which handled textures locally. IMO, the reference designs Intel was hawking for i7xx at the time simply didn't have enough onboard RAM to texture locally, even if that had been the design. So, if it was broken it always seemed deliberately broken to me...;)

OK, I'm done with the off-topic here...:) It was fun, though...
 
"while if you zoom in you can see the IQ degradation, when you're running around you will never notice it"

I worry more about "Missing fog, lowered filtering quality..." than "the IQ degradation that you only see when you're zooming in". After all, it's a game, not a synthetic benchmark.

Well, actually, I don't worry at all since I don't own a GF FX, but I would if I did :)

I also wonder about the low performance difference between the 9600 Pro and 9800 Pro. Maybe it's really CPU limited in this case, but I shudder to think of how badly the 5900 really performs if that's so. There's of course the FSAA thing, but that's mostly a bandwidth thing. Shouldn't there also be a huge difference between the 9800 Pro and 9600 Pro's shader performance? (Not that I don't think it's great if the 9600 Pro has such good shader performance compared to the 9800 Pro.)
 
The shader performance on a 9600 Pro is very close to a 9800 Pro; all the R3xx parts are in fact based on the same shader design. The only thing separating the performance is the overall throughput of the cards themselves...fillrate, etc...
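To put rough numbers on that (paper specs only, so treat them as assumptions): the 9800 Pro has 8 pixel pipes at 380 MHz against the 9600 Pro's 4 pipes at 400 MHz, so on raw throughput alone you'd expect something like a 1.9x gap whenever the CPU isn't the limit:

```cpp
// Back-of-the-envelope throughput comparison, using commonly quoted paper
// specs (assumptions): 9800 Pro = 8 pixel pipes @ 380 MHz,
//                      9600 Pro = 4 pixel pipes @ 400 MHz.
// Both run ps_2_0 math at FP24, so the shader cores only differ in width/clock.
#include <cstdio>

int main()
{
    const double r9800_pipes = 8, r9800_mhz = 380;
    const double r9600_pipes = 4, r9600_mhz = 400;

    // Peak pixels (or single-issue shader ops) per second, in millions.
    const double r9800_mpix = r9800_pipes * r9800_mhz;  // ~3040
    const double r9600_mpix = r9600_pipes * r9600_mhz;  // ~1600

    printf("9800 Pro: %.0f Mpix/s, 9600 Pro: %.0f Mpix/s, ratio %.2fx\n",
           r9800_mpix, r9600_mpix, r9800_mpix / r9600_mpix);  // ~1.90x
    return 0;
}
```

Which is why a ~10 FPS gap in that chart smells like a CPU/engine limit rather than a shader limit.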
 
Doomtrooper said:
The shader performance on a 9600 Pro is very close to a 9800 Pro; all the R3xx parts are in fact based on the same shader design. The only thing separating the performance is the overall throughput of the cards themselves...fillrate, etc...
Dumb question, but the 9700 Pro numbers should fall pretty close to the 9800 Pro numbers if'n you've got yourself a slightly OCed 9700 Pro....right? (Simple questions from simple-minded people, I know. :p )
 
Hey Reverend, do you remember when I asked you this...
It is kind of funny how nVidia's philosophy of DX9 everywhere (at least in theory, where the 5200 is concerned) is coming back to bite them in the ass. BTW, Reverend, do you think that there is any backlash towards nVidia from developers because of this? I think that if someone buys a game and it does not perform well, they will partly blame the game itself. OTOH, a game that lists the use of advanced features but is visually unimpressive because, unknown to the user, those very features are limited or bypassed for performance reasons will again leave the developers holding some of the blame.

Well, it seems some devs are pissed.
BTW, is the new mantra now, "just wait for the Det. 51's, not the 50's"?
 
Doomtrooper said:
The shader performance on a 9600 Pro is very close to a 9800 Pro; all the R3xx parts are in fact based on the same shader design. The only thing separating the performance is the overall throughput of the cards themselves...fillrate, etc...

Of course they're based on the same design. But I was under the impression that the 9800 Pro also had more shader units, not just higher fillrate and more bandwidth.
 
Doomtrooper said:
The shader performance on a 9600 Pro is very close to a 9800 Pro; all the R3xx parts are in fact based on the same shader design. The only thing separating the performance is the overall throughput of the cards themselves...fillrate, etc...

Aye, that pixel shader bench I wrote about 8 months ago showed the entire R3xx line in the same general ballpark. The 9500 Pro was right around 3000 MIPS, with the 9700 Pro around 3600 MIPS, and the 5800 Ultra right around 500 MIPS, IIRC. Granted, this is a bit more extreme a difference than you'd likely see in a real game...that bench had almost exactly zero CPU or vertex work...it was definitely a pure synthetic FP pixel shader bench.
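Those figures also line up pretty well with core clocks alone, if you take the usual paper specs as assumptions (8 pipes at 275 MHz for the 9500 Pro, 8 pipes at 325 MHz for the 9700 Pro); quick sanity check, my arithmetic only:

```cpp
// Rough sanity check of those synthetic shader numbers against core clocks.
// Assumed paper specs: 9500 Pro = 8 pipes @ 275 MHz, 9700 Pro = 8 pipes @ 325 MHz.
// Measured figures are the ones quoted in the post above (~3000 and ~3600 MIPS).
#include <cstdio>

int main()
{
    const double clk_9500pro = 275.0, mips_9500pro = 3000.0;
    const double clk_9700pro = 325.0, mips_9700pro = 3600.0;

    printf("clock ratio   : %.2f\n", clk_9700pro / clk_9500pro);    // ~1.18
    printf("measured ratio: %.2f\n", mips_9700pro / mips_9500pro);  // ~1.20
    // Same 8-pipe shader core, so the measured gap tracks clock speed.
    // Both work out to more than 1 FP instruction per pipe per clock
    // (~1.35), consistent with R300 co-issue, while the 5800 Ultra's
    // ~500 MIPS is a small fraction of its 500 MHz clock.
    return 0;
}
```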
 
This just ticks me off.
During the development of that benchmark demo Valve found a lot of issues in current graphic card drivers of unnamed manufacturers:
This is from THG. Dave's post and the TechReport's story have Gabe saying nVidia's drivers, not "unnamed manufacturers". I also like (sic) how they diffuse the blame aimed at nVidia by saying "manufacturers" instead of "manufacturer"! And while on the topic of poor, sloppy, or biased reporting, I hope Kyle takes the time to re-read those PowerPoint slides he was given by nVidia before any benchmarking of HL2 with the Det 5.x's!
 
I suspect that the performance gap should decrease in HL2 as well when AA and/or AF is enabled, as the bottleneck should move toward memory bandwidth rather than shader performance.

Is this really a surprise? I suggest waiting for full reviews with benchmarks from both apps and both video cards and both driver revisions.
 
DaveBaumann said:
You get to a certain speed and you'll probably see that it's CPU limited with HL2.

Exactly. In a fragment shader limited situation (which HL2 in full DX9 mode will be whenever it's not CPU limited), the 9800 Pro will get nearly double the framerate of the 9600 Pro, because it has exactly double the fragment processing resources and nearly the same clock rate.

It's interesting to note that HL2 is CPU limited at 60 fps with full settings on a P4 2.8C. Obviously most of the gee-whiz physics is going to have to be turned off for that 800 MHz P3 user to play this game at minimum specs.

Conversely, this means 9800 Pro users will be able to bump the res/AA/AF up a decent amount and still get close to 60 fps. And that the "true" HL2 performance difference between the 9800 Pro and 5900 Ultra is more like 3:1, not 2:1.
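Rough arithmetic to illustrate that last point. Only the 60 fps CPU ceiling and the ~1.9x 9800/9600 throughput ratio come from this thread; the per-card fps inputs below are hypothetical placeholders, not slide numbers:

```cpp
// Illustration of the "3:1, not 2:1" reasoning with made-up inputs.
// Known from the thread: ~60 fps CPU ceiling on a P4 2.8C, and the 9800 Pro
// having roughly 1.9x the 9600 Pro's fragment throughput on paper.
// The r9600_fps and nv5900_fps values are hypothetical placeholders.
#include <algorithm>
#include <cstdio>

int main()
{
    const double cpu_limit_fps   = 60.0;  // quoted P4 2.8C ceiling
    const double throughput_gain = 1.9;   // 9800 Pro vs 9600 Pro, paper specs

    const double r9600_fps  = 47.0;       // hypothetical shader-limited rate
    const double nv5900_fps = 30.0;       // hypothetical shader-limited rate

    // What the 9800 Pro "wants" to run at vs. what a CPU-capped benchmark shows.
    const double r9800_uncapped = r9600_fps * throughput_gain;               // ~89
    const double r9800_reported = std::min(r9800_uncapped, cpu_limit_fps);   // 60

    printf("reported ratio vs 5900: %.1f:1\n", r9800_reported / nv5900_fps); // 2.0:1
    printf("uncapped ratio vs 5900: %.1f:1\n", r9800_uncapped / nv5900_fps); // ~3.0:1
    return 0;
}
```

In other words, the CPU ceiling hides part of the R3xx advantage, so the published bars understate the shader-limited gap.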
 