Half Life 2 Benchmarks (From Valve)

LeStoffer said:
Yeah, but besides John Carmack's remark about the twitchiness of the NV30 shader architecture and some performance investigations by people here on the forum, developers haven't exactly been vocal about this. :?

Is it because it doesn't matter to them? Nah, I guess it's something to do with the fact that you have everything to lose and nothing to gain by upsetting nVidia. So hush-hush it is.

Developers often aren't terribly vocal as a rule, and it usually takes something extreme, either good or bad, to get them to start talking. But in this case I think the loudest talking is coming from software like 3dMk03, shader benches, and DX9 games like HL2 (not to forget Tomb Raider, etc.). Their software often speaks volumes even when they themselves don't. I think whatever motivated Gabe to talk the way he did was major and extreme: it was either letting people come down on Valve and accuse them of partisanship or lousy programming, or explaining things to people and attempting to educate them on why their software is as it is. I think they wisely opted for the second choice--not necessarily because they wanted to, but because they felt they had to. Had nV3x been DX9-compliant from a hardware perspective, none of the things we've witnessed this year would have happened, IMO. Developers would rather not choose among IHVs, but when one progresses with the standards of the API and one does not, they get forced into these decisions. I don't blame them--I'd be pissed, too.
 
Bjorn said:
Of course they're based on the same design. But i was under the impression that the 9800 Pro had more shader units also and not only higher fillrate and more bandwidth.
Bjorn, after being around here so long surely you know that pixel shader units are essentially built into the R(v)3x0's pipeline? :) The RV350 has as much shader power per pipeline as the R350, in that it has the same pixel shader in each pipe--but only half the pipelines, thus half the fillrate. Same shader power per clock, though, as I understand it. If both cards were asked to render, say, one PS2.0-shaded pixel in one clock, I'd imagine the 9600P would score practically equal to the 9800P. The latter's extra memory bandwidth may become a factor with four PS2.0 shaded pixels in one clock: though it's still within the RV350's design capability to output four shaded pixels per clock, its 2x64 crossbar may limit it. (I'm speaking from no in-depth knowledge whatsoever, though, so my surface understanding of the memory switch to the pixel shaders and subsequent seemingly logical pairing of the two may be totally off-base.)

The R350 does have twice as many vertex shader units as the RV350, but that's probably to balance its doubled fillrate--I'm not sure the RV350 would benefit greatly, if at all, from extra VSs.
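A rough back-of-the-envelope sketch of that scaling (the pipeline counts, clocks and bus widths below are quoted from memory and only approximate, so treat the output as illustrative rather than authoritative):

```python
# Back-of-the-envelope comparison of the 9800 Pro (R350) and 9600 Pro (RV350).
# The figures are approximations from memory; the point is the scaling
# relationship described above, not the exact numbers.

cards = {
    "9800 Pro (R350)":  {"pipes": 8, "core_mhz": 380, "mem_mhz_eff": 680, "bus_bits": 256},
    "9600 Pro (RV350)": {"pipes": 4, "core_mhz": 400, "mem_mhz_eff": 600, "bus_bits": 128},
}

for name, c in cards.items():
    # One PS2.0 shader per pipeline, so shaded-pixel throughput scales with pipes * clock.
    shaded_mpix = c["pipes"] * c["core_mhz"]                      # Mpixels/s
    bandwidth_gbs = c["mem_mhz_eff"] * c["bus_bits"] / 8 / 1000   # GB/s
    bytes_per_pixel = bandwidth_gbs * 1e9 / (shaded_mpix * 1e6)
    print(f"{name}: {shaded_mpix} Mpix/s shaded, {bandwidth_gbs:.1f} GB/s, "
          f"~{bytes_per_pixel:.1f} bytes of bandwidth per shaded pixel")
```

On those rough numbers the bandwidth available per shaded pixel comes out in the same ballpark for both cards, which is consistent with the per-clock parity I'm guessing at above.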
 
Pete said:
Bjorn said:
Of course they're based on the same design. But i was under the impression that the 9800 Pro had more shader units also and not only higher fillrate and more bandwidth.
Bjorn, after being around here so long surely you know that pixel shader units are essentially built into the R(v)3x0's pipeline? :) The RV350 has as much shader power per pipeline as the R350--but only half the pipelines, thus half the fillrate. Same shader power per clock, though, as I understand it. If both cards were asked to render, say, one PS2.0-shaded pixel in one clock, I'd imagine the 9600P would score practically equal to the 9800P. The latter's extra memory bandwidth may become a factor with four PS2.0 shaded pixels in one clock: though it's still within the RV350's design capability to output four shaded pixels per clock, its 2x64 crossbar may limit it. (I'm speaking from no in-depth knowledge whatsoever, though, so my surface understanding of the memory switch to the pixel shaders and subsequent seemingly logical pairing of the two may be totally off-base.)
The RV350 has half the pixel shaders of the R350. That means half the fillrate and half the pixel shading power.
 
Bjorn, after being around here so long surely you know that pixel shader units are essentially built into the R(v)3x0's pipeline? The RV350 has as much shader power per pipeline as the R350--but only half the pipelines, thus half the fillrate. Same shader power per clock, though, as I understand it.

Yep, i have been around here a long time :)

The RV350 has half the pixel shaders of the R350. That means half the fillrate and half the pixel shading power.

PS Though that doesn't automatically mean that i have learned that much :) DS
 
Dave H said:
Obviously most of the gee-whiz physics is going to have to be turned off for that 800 MHz P3 user to play this game at minimum specs.

You quoted exactly my chip, too. :cry: :cry: :cry:

Must... upgrade... soon...!
 
So, the NV3X has been out since January; why does Nvidia still have to code new drivers to make games work with reasonable performance?
 
Could this be a reason why Intel doesn't have an integrated solution for Longhorn (since rumour has it that it requires DX9), and for nV's poor performance in HL2?

No, this isn't it.
They don't even have a DX8 integrated solution (no Pixel Shaders); I'm not even sure if they have a chipset which fully supports DX7...
 
more info

Has anyone read this?

Ever since the release of NV30 with its new CineFX pixel engine, people have been wondering about its internal structure. The developer of the chip, nVidia, has been reluctant to answer questions about internal details in the past, seeking refuge in well-sounding phrases carrying only small bits of information.

Even the architecture-specific OpenGL API extensions nVidia presented for programming the CineFX engine could not shed light on those details. Therefore many people tried to lift the veil of mystery, equipped with coarse information and several theoretical benchmarks run on the actual hardware. The author of this article has been amongst those people, too. But with regard to these efforts, one source of information has been almost completely ignored. It is easily understandable that no one looked there, because in the past virtually no information on brand new chips has been found there.

Patent offices! In fact, the world patent with the registration number WO02103638 was published December 27, 2002, carrying the title "PROGRAMMABLE PIXEL SHADING ARCHITECTURE". Officially, of course, this patent doesn't relate to NV30 or CineFX. But that is not uncommon because nVidia never linked their patent texts to certain chips or marketing names in the past. Still, since there are enough indicators that this patent covers CineFX and NV30, we will regard it as such.

In the following pages we're going to analyze this patent. We're especially interested in answering the well-known questions about NV30:

Why does NV30 perform so poorly when executing 1.4 or 2.0 pixel shaders which have been developed on ATi hardware?

How can NV30 take advantage of shaders optimized for it?

Where does the chip still have hidden performance potential that could be revealed by the driver?

continues.... http://www.3dcenter.de/artikel/cinefx/index_e.php

Opinions about it, as it relates to the topic...
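My own two cents for anyone who doesn't want to wade through the whole article: the explanation most often given for the first question is register pressure--NV3x is said to slow down as a shader keeps more full-precision (FP32) temporaries live, while R3x0 is largely insensitive to that. Below is a purely illustrative toy model of that idea; every constant in it is invented, not measured from either chip:

```python
# Toy model of the register-pressure explanation for NV3x's PS2.0 behaviour.
# Every constant below is invented for illustration; nothing here is a
# measured figure from NV3x or R3x0 hardware.

def nv3x_relative_speed(live_fp32_temps: int) -> float:
    """Illustrative: throughput tails off as more full-precision temps stay live."""
    if live_fp32_temps <= 2:
        return 1.0
    return max(0.25, 1.0 - 0.15 * (live_fp32_temps - 2))

def r3x0_relative_speed(live_fp32_temps: int) -> float:
    """Illustrative: flat all the way up to the PS2.0 temp register count."""
    return 1.0 if live_fp32_temps <= 12 else 0.0

for temps in (2, 4, 6, 8, 10):
    print(f"{temps:2d} live temps: NV3x {nv3x_relative_speed(temps):.2f}, "
          f"R3x0 {r3x0_relative_speed(temps):.2f}  (relative speed, toy numbers)")
```

A shader written and tuned on ATi hardware has no particular reason to economize on temporaries or precision, which is one hedged way to read the "developed on ATi hardware" question above.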
 
I've been trying to purposely stay out of this whole debacle/farce as best I can, but the public response has been nothing short of overwhelming.

I'm a little skeptical of all that has been said, mainly because, if we use history as a guide, the single most important points of benchmarks are generally the things the scores do NOT show... graphs of numbers are, for all intents and purposes, useless information if there is no comprehensive analysis, or even a little bit of research performed by third-party/unbiased individuals to try and explain the observed behavior.

In this case, given the already-explored limitations in the NV3X architecture, it would be fairly expected to see lower PS2.0 performance, but I don't think anyone expected it to be as bad as what we are seeing. Rumors of screen capture tampering and shader cheats muddy NVIDIA's standpoint even further. In all, it's making hammering the NV3x a fairly easy, almost "stylish" thing to do.

I've run both a 5900U and a 9700 Pro side by side, and yes--there is a pretty huge disparity between the two products, but not to the degree being seen with HL2. Especially given the changes in ShaderMark scores from the FX12->FP16 enhancements since the NV30, and whatnot.

I think NVIDIA would be 100x better off if they just left the hardware alone and stopped with the alleged application/screen capture hacks (if these pan out to be reproducible, verifiable findings) so that some real research can be accomplished. I also hope Beyond3D, in its normal style, will do some research to help put a finger on the behavior being seen, through insight into the hardware and analysis of the final rendered image. We haven't seen any of this from the sites yet... just blind benchmark numbers posted with no exploration to find "why" or quantify the issues.

Right now, we don't have anything tangible or substantial... it's a bit disappointing to see the same patterns repeat from years past, just this time the IHVs have swapped names. I thought the 3D enthusiast fans had adapted to be a bit more skeptical/insightful, but unfortunately it seems the same old habits have resurfaced: cheerleading graph figures that one cannot even verify, test, explore or research on their own.
 
Sharkfood - not for nothing, but this isn't the ATI Quack issue. This is just another lie in a year full of huge lies. It all started off with the NV30 and its tape-out. Then horrible delays, insane cooling fans on the high-end products, a very limited run of the high-end cards. Also see the 3DMark fiasco. Now we come to Half-Life 2, which is a real DX9 game. The NV30 does not do well on this with current drivers, even with Valve tweaking for the card. Then a statement from Valve basically saying the 50 Dets are garbage, same as what happened with 3DMark.

So once again, this isn't one thing that we can give them the benefit of the doubt on. This is just one more to add to the reasons we're wary of them.
 
I think that was very well put jvd.

I get a jaded, run-down feeling whenever I hear about NVIDIA and their drivers and their PR. It's almost always something negative, when not so long ago NVIDIA drivers were the 'gold standard.'

How things have changed in a relatively short amount of time.

Word to the wise: it takes a lot of effort and time to gain trust. NVIDIA should be focusing on gaining that trust back and should not be concerned with trying to make their cards look faster than they are. It is too late for that, NVIDIA. Wake up and do the right thing. Just execute and you can redeem yourselves slowly. I want the old NVIDIA of the TNT days through to the GF3 days. I don't like the post-3dfx NVIDIA.
 
Look at the bright side: all these people who are blindly loyal to nVidia in the face of all the evidence are going to get exactly what they deserve. 8)

(Hey, it's my happy thought when the fanboys start swarming with nonsense lately. ;) )
 
Tahir said:
I want the old NVIDIA of the TNT days through to the GF3 days. I don't like the post-3dfx NVIDIA.

What makes you think today's Nvidia is any different from yesteryear's Nvidia? The only difference I see is yesteryear's Nvidia was never caught...
 
BRiT said:
What makes you think today's Nvidia is any different from yesteryear's Nvidia? The only difference I see is yesteryear's Nvidia was never caught...

Agree completely. The difference now is that nVidia has some real competition--and they're just falling apart in the face of it. This is what happens to companies used to having it their way for a couple of years--when the competition rears its head with better products it takes them a long time to understand they can't weasel their way out of it with PR gimmicks, driver tricks, and gross misrepresentation.

I've seen this happen time and again in this industry--companies get fat and start feeling entitled to the "number 1" spot--and shortly thereafter they get creamed--and they never even saw it coming. A lot of it has to do with the milking syndrome which is coupled tightly with the entitlement syndrome--they become legends in their own minds to the degree that they are no longer competitive. The amazing thing is that these companies often can't see it until their bad habits become so deeply ingrained the corporation simply can't change, and they become casualties of the technology wars.

The thing is that nVidia didn't have to be so obvious with 3dfx because 3dfx was 3dfx's worst enemy--basically all nVidia had to do was sit back and watch them destroy themselves through overreaching, overexpansion and overspending. ATi is a competitor of a different breed altogether. So I imagine nVidia is beginning to seriously flounder at this point in time. What's nVidia *really* got in the face of this kind of competition? I don't know, but it's for sure we'll find out within the next 6 months. nVidia's going to have its mettle tested--in the fire. I'm certain the tactics they used with 3dfx will get them nowhere in this race. If their products aren't number 1, then they won't be, either. And for once I say it's about time to see the companies stand or fall on the products they make--and to hell with PR.
 
BRiT said:
What makes you think today's Nvidia is any different from yesteryear's Nvidia? The only difference I see is yesteryear's Nvidia was never caught...

I agree 100%, which is the main message behind my post.

The problem is that those who recognized the issues, and were smart enough to require genuine factual information rather than blindly follow numbers/graphs, now seem to be using a different measuring stick for what is truth versus fiction.

The single most important points to be discovered are generally what you DON'T see on the bars and graphs. It's interesting that some read this as being another "quack" issue when there was no mention of such a thing. The only point is that, without some solid, conclusive analysis, what has been provided is still mostly useless and meaningless until peer review and some research/logical progression of findings can be provided.

I've always looked to Beyond3D and its forum folks to open up this channel. It's one of the best parts of this userbase and the site's staff. I just think it's a little early to be popping any corks for any IHV, even though the end result will likely be what is being illustrated--it's just not settled until some more delving has been done. :)
 
Sharkfood said:
It's interesting that some read this as being another "quack" issue when there was no mention of such a thing.

I think people were assuming a reference from you with this line in your last post: Right now, we don't have anything tangible or substantial... it's a bit disappointing to see the same patterns repeat from years past, just this time the IHVs have swapped names. If you were referring to something else entirely, you probably just needed to be less ambiguous in your wording. The "quack" issue gets brought up a lot.

At any rate, there really IS a lot of information out there regarding some of the fundamental "why"s. (For instance this thing here.) The problem is that most reviewers don't have the tools, the access, contacts able to part with all the details they need, or the equipment to get REALLY into the nitty-gritty, so they basically have to go by posted numbers, use official benches/reports, run some of their own tests, and then play out some conjecture. Devs are hardly ever forthright about the entire process they go through for one chipset or another, and the IHVs themselves tend to keep specifics to themselves as well, so it's hard to say ANYTHING "for sure"--not to mention the amazing amount of time and effort that has to be put into these kinds of studies.

But really, all one NEEDS to do is look around more. There's plenty out there to absorb. ^_^
 
Eh? Sure is a lot of fuss over something so trivial. All NVIDIA has to do is release the NV40 on time and make sure it is the undisputed perf leader. Things changed in an instant for ATi when the 9700 was launched--and they can just as quickly change for NVIDIA.
 
OpenGL guy said:
Pete said:
Bjorn said:
Of course they're based on the same design. But i was under the impression that the 9800 Pro had more shader units also and not only higher fillrate and more bandwidth.
Bjorn, after being around here so long surely you know that pixel shader units are essentially built into the R(v)3x0's pipeline? :) The RV350 has as much shader power per pipeline as the R350--but only half the pipelines, thus half the fillrate. [...]
The RV350 has half the pixel shaders of the R350. That means half the fillrate and half the pixel shading power.
Just to be clear: pixel shaders (meaning hardware) aren't communal like vertex shaders, right? AFAIK, one pixel shader will work on one pixel to be shaded--you can't use two pixel shaders to halve the time of computation on a single pixel, because of the exclusive nature of the pipelines and thus the pixel shaders, correct?

I left out "half the pixel shading power" b/c I thought it was obvious from my previous statement that pixel shaders are "one" per pipeline.

I'm also assuming nV's pixel shaders are also pipeline-exclusive, so that a 5600, with two multi-texture pipelines, has "two" shaders. Am I right? Or are pixel shaders on the FX line really a "sea", rather than parallel rivers like with ATi?

(Perhaps I should read that 3DCenter article more closely....)
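To make the question concrete, here's how I'm picturing it--a sketch of my assumption above, not a claim about how either vendor's hardware actually schedules work:

```python
# Sketch of the "pipeline-exclusive" picture assumed above: each pixel is
# shaded start-to-finish by a single pipeline's shader unit, so extra
# pipelines raise throughput but never shorten the time any one pixel takes.

def frame_shading_cycles(num_pixels: int, pipelines: int, cycles_per_pixel: int) -> int:
    """Cycles to shade a frame if pixels are dealt out evenly across the pipelines."""
    pixels_per_pipe = -(-num_pixels // pipelines)  # ceiling division
    return pixels_per_pipe * cycles_per_pixel

for pipes in (4, 8):  # RV350-like vs R350-like pipeline counts
    total = frame_shading_cycles(num_pixels=1_000_000, pipelines=pipes, cycles_per_pixel=20)
    print(f"{pipes} pipelines: {total:,} cycles for the frame; "
          f"any single pixel still takes 20 cycles")
```

Doubling the pipelines halves the frame time in this picture, but no individual pixel's shader ever runs faster--which is all I mean by "exclusive" above.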
 