HL2 40% faster on X800 compared to NV40?

P.S. to my previous post.

And why is it plain stupid to deliberately make your product run better on one piece of HW versus another?

Well, why would you hand control of your destiny to another party? That runs counter to good business sense, not to mention the egos involved.
 
london-boy said:
:|
Wow, this forum is turning into the next generation of the Console Forum.....

How on earth HL2 could be 40% faster on one next-gen card than on another in the same "range" is beyond me. Come on, you clever geeks should know better than that....
Especially considering the "relationship" Valve has had with ATI for quite a while...

If tomorrow John Carmack came out saying "Doom 3 is 40% faster on NV40" (which wouldn't surprise me much, to be honest), this place would explode.


I think, given the current benchmarks of both the X800 and the 6800, that if one of these cards ends up 40% faster than the other, it can only mean the company making the game is taking more advantage of one piece of hardware than the other - e.g. using 2.0 shaders on one while limiting the other to 1.x.

The only fair test is comparing the cards using the same level of shaders. It gets kind of fuzzy when talking about the difference between 2.x (ATI) and 3.0 (NVIDIA). I believe that if a game supports one of these and is faster as a result, it only means the game has been optimized for that company's GPU.
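As a rough sketch of what a vendor-neutral "same level of shaders" check could look like, here's a plain Direct3D 9 caps query (the printed path names are made up for illustration; this isn't anyone's actual engine code):

Code:
// Sketch only: the "paths" below are illustrative, not any real engine's.
#include <cstdio>
#include <d3d9.h>

int main()
{
    // Create the D3D9 object and ask the default adapter what it can do.
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps;
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // Pick the highest shader model the hardware reports. Nothing
    // vendor-specific, so both cards get judged by the same rule.
    if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0))
        std::printf("Using the PS 2.0 path\n");
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(1, 1))
        std::printf("Falling back to the PS 1.x path\n");
    else
        std::printf("Fixed-function fallback\n");

    d3d->Release();
    return 0;
}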
 
So is it the same build that was stolen?
I sort of thought it was some "techdemo" thingy for reviewers; shouldn't Valve or someone be pretty pissed if reviewers use the stolen builds?
Well, I guess it's not much to go by, but since Gabe isn't stating under what circumstances the 40% advantage appears, his statement doesn't say much either..
If it was at high res, 6xAA+, 16xAF it wouldn't be any big news I guess.. hehe
 
jolle said:
So is it the same build that was stolen?
I sort of thought it was some "techdemo" thingy for reviewers; shouldn't Valve or someone be pretty pissed if reviewers use the stolen builds?
Well, I guess it's not much to go by, but since Gabe isn't stating under what circumstances the 40% advantage appears, his statement doesn't say much either..
If it was at high res, 6xAA+, 16xAF it wouldn't be any big news I guess.. hehe

There is no "techdemo" released for reviewers, xbit is basically running warez. I'd actually be shocked if they owned any of the games they benchmark.
 
Stryyder said:
jolle said:
This benchmark doesn't really hint at any 40% difference..
http://www.xbitlabs.com/articles/video/display/r420-2_12.html

But I guess it's not as new as the version Valve has; it might be driver related and whatnot as well..
Or maybe Gabe means at 8xAA 16xAF...

Yeah, but that benchmark is from a stolen build with a lot of missing textures, running on god knows what path, so it is worthless anyway.

All that means, if true, is that Gabe and his gang optimized HL2 to be 40% faster on the ATI card than on the NVIDIA card, or that they deliberately made the NVIDIA path worse.

Running the test on code from before the cards came out of NDA is actually better, because it likely means the build was tuned against the R3xx series; if so, the NV40 appears to be just as good as the R420 in HL2, assuming Xbit Labs' benchmarks were done correctly.
 
hstewarth said:
Stryyder said:
jolle said:
This benchmark doesn't really hint at any 40% difference..
http://www.xbitlabs.com/articles/video/display/r420-2_12.html

But I guess it's not as new as the version Valve has; it might be driver related and whatnot as well..
Or maybe Gabe means at 8xAA 16xAF...

Yeah, but that benchmark is from a stolen build with a lot of missing textures, running on god knows what path, so it is worthless anyway.

All that means, if true, is that Gabe and his gang optimized HL2 to be 40% faster on the ATI card than on the NVIDIA card, or that they deliberately made the NVIDIA path worse.

Running the test on code from before the cards came out of NDA is actually better, because it likely means the build was tuned against the R3xx series; if so, the NV40 appears to be just as good as the R420 in HL2, assuming Xbit Labs' benchmarks were done correctly.

How do you reach that conclusion?

We know from Gabe that there are many paths an NVIDIA card can use.

For all we know, the path in these benchmarks is the one that gave the 5900 Ultra a 60% increase in framerate by going to mixed precision or lower.

So we could be comparing an R420 running PS 2.0 at full precision to an NV40 running 1.1-2.0 shaders entirely at FP16 partial precision.
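To make that concern concrete, here's a rough sketch (definitely not Valve's actual code; the PCI vendor IDs are the standard ones, everything else is invented for illustration) of the kind of vendor-keyed path selection that could produce exactly that mismatch:

Code:
// Sketch only: not Valve's code; the path descriptions are illustrative.
#include <cstdio>
#include <d3d9.h>

// Standard PCI vendor IDs.
const DWORD VENDOR_ATI    = 0x1002;
const DWORD VENDOR_NVIDIA = 0x10DE;

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    // Ask the driver who made the card. This is also the kind of check that
    // gets fooled when a tool reports a different vendor/device ID.
    D3DADAPTER_IDENTIFIER9 id;
    d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id);

    if (id.VendorId == VENDOR_ATI)
        std::printf("dx9 path: ps_2_0 shaders at full precision\n");
    else if (id.VendorId == VENDOR_NVIDIA)
        std::printf("mixed mode: ps_1_1..ps_2_0 with _pp (FP16) hints\n");
    else
        std::printf("generic dx8 path\n");

    d3d->Release();
    return 0;
}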
 
AlphaWolf said:
There is no "techdemo" released for reviewers, xbit is basically running warez. I'd actually be shocked if they owned any of the games they benchmark.
Well, they don't actually use the Half-Life 2 name, so you can't PROVE they are using a stolen HL2 build as a benchmark, but we all know they are. Warez it is.
 
DemoCoder said:
I thought forcing the NV3x and NV40 to the R300 device ID removed the IQ artifacts?
Yeah, it also reduced performance a lot. So people are saying there's a problem making it run on the R300 path or something.
 
jvd said:
DemoCoder said:
I thought forcing the NV3x and NV40 to the R300 device ID removed the IQ artifacts?
Yeah, it also reduced performance a lot. So people are saying there's a problem making it run on the R300 path or something.

That's the point. It's running at decreased IQ for increased speed. 8)
 
Eronarn said:
jvd said:
DemoCoder said:
I thought forcing the NV3x and NV40 to the R300 device ID removed the IQ artifacts?
Yeah, it also reduced performance a lot. So people are saying there's a problem making it run on the R300 path or something.

That's the point. It's running at decreased IQ for increased speed. 8)
Yeah, but a lot of people don't seem to want to admit that :)
 
DemoCoder said:
Especially if it's comparing 6xMSAA to 2xOGSS+4xRGMS.

Why is that not comparable?? Consumers don't have a freaking clue what multisampling vs. supersampling is; all they care about is that a higher number of AA samples must mean better AA.

I never saw this argument used during the endless AA benchmarks of the GF3/GF4 vs. the supersampled 8500.
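For the record, the sample math behind the multisampling vs. supersampling distinction works out like this; a quick back-of-the-envelope sketch (generic arithmetic, not measurements of any particular card):

Code:
// Generic AA sample arithmetic: supersampling multiplies both coverage and
// shading work, multisampling only multiplies coverage.
#include <cstdio>

struct AAMode {
    const char* name;
    int ssFactor;   // supersampling factor (1 = none)
    int msSamples;  // multisample coverage samples per shaded sample
};

int main()
{
    const AAMode modes[] = {
        { "6x sparse MSAA",  1, 6 },
        { "2xOGSS + 4xRGMS", 2, 4 },
    };

    for (const AAMode& m : modes) {
        int coverage = m.ssFactor * m.msSamples; // edge/coverage samples per pixel
        int shading  = m.ssFactor;               // times each pixel is shaded/textured
        std::printf("%-17s coverage=%d, shading=%dx\n", m.name, coverage, shading);
    }
    return 0;
}

Different things get smoothed at very different cost, but all most buyers ever see is the bigger number on the box.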
 
jvd said:
Eronarn said:
jvd said:
DemoCoder said:
I thought forcing the NV3x and NV40 to the R300 device ID removed the IQ artifacts?
Yeah, it also reduced performance a lot. So people are saying there's a problem making it run on the R300 path or something.

That's the point. It's running at decreased IQ for increased speed. 8)
Yeah, but a lot of people don't seem to want to admit that :)

nVidiots, all of 'em.
 
Eronarn said:
jvd said:
Eronarn said:
jvd said:
DemoCoder said:
I thought forcing the NV3x and NV40 to the R300 device ID removed the IQ artifacts?
Yeah, it also reduced performance a lot. So people are saying there's a problem making it run on the R300 path or something.

That's the point. It's running at decreased IQ for increased speed. 8)
Yeah, but a lot of people don't seem to want to admit that :)

nVidiots, all of 'em.
It is sad but true.
 