Massive Development (Aquamark 3) comments on HL2

g__day

Regular
This is from their MD, in response to my questions last night about why Massive's benchmark shows NVidia closer to ATi in DX9 under Aquamark 3 than under Half Life 2. More to be examined here for sure:

http://arc.aquamark3.com/forum/showthread.php?s=&threadid=109

g__day

Any comments on the Half Life 2 Benchmarks?

A few folk are reeling tonight as half a dozen really solid 3D sites are seeing just what Half Life 2 reveals about NVidia vs ATi performance in DX9 shaders. Beyond3D, The Tech Report, and ExtremeTech aren't fanboy, unprofessional or shallow sites - they are amongst the very, very best reviewers out there in my opinion.

Shadermark, Tomb Raider 4, the Dawn wrapper and 3DMark 2003 were dismissed as being synthetic, biased or poorly coded.

It's hard to level any of these accusations against Valve, especially when they put optimised DX9 code paths for NV3x into their benchmarks, used partial precision hints, and hand-coded shaders for NVidia's top-end cards.
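For what it's worth, a partial precision hint tells NV3x-class hardware it may run a shader operation at FP16 instead of FP32. A minimal Python sketch of the accuracy cost, using numpy's float16 purely as an analogy (this is not Valve's code, just an illustration of the precision gap):

```python
import numpy as np

# FP16 keeps ~10 mantissa bits vs ~23 for FP32, which is why
# _pp-hinted shaders can run faster on NV3x at some accuracy cost.
full = np.float32(1.0) + np.float32(1e-4)   # representable in FP32
half = np.float16(1.0) + np.float16(1e-4)   # rounds away in FP16

print(full)  # slightly above 1.0
print(half)  # exactly 1.0 - the small term was lost
```

The same trade-off is why a shader that looks fine at partial precision in one scene can show banding or artefacts in another.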

Massive has said to expect a < 50% performance difference between ATi and NVidia at the top end (and actually hinted that the difference is more like 15%), but what is the story with Half Life 2?

1) Are you puzzled and surprised by these results given your shaders run more evenly?

2) Did Valve do something wrong or is it too hard to tell?

3) Are you surprised by Valve's and NVidia's reaction to the beta Detonator 50.xx drivers? It's incredible that Valve officially says don't use them - illegal shader tampering - whilst NVidia officially say only use Det 50.xx, nothing else is valid.

Someone slip me a reality check pill, please!

* * * * * *

Alexander Jorias
Admin :: Raytraced

--------------------------------------------------------------------------------

Posts: 24
Userid: #2
Registered: 03, 2003
Location: Earth

#1827 posted on September 12th, 2003 at 02:50

--------------------------------------------------------------------------------



quote:
--------------------------------------------------------------------------------
1) Are you puzzled and surprised by these results given your shaders run more evenly?
--------------------------------------------------------------------------------



the results of a performance measurement depend on the intention you have. if it is your intention you can favour one specific hardware over another. for obvious reasons nobody would do so, and no game developer who has the intention to cover the widest range of customer hardware will create a technology which heavily depends on one or the other. you have to make the correct choice to give your customers the best gaming experience. if your technology is unbalanced you will sacrifice sales.

if it is your choice to design special code paths for each hardware you will end up with tons of unmaintainable vs/ps combinations. to make it worse, even new generations or revisions of hardware from the same vendor might change the performance behaviour of your code dramatically. so it is your task to make the solomonic and best choices for your solutions.

at the end of the day your decisions should be based upon the optimal solution for a specific problem.


quote:
--------------------------------------------------------------------------------
2) Did Valve do something wrong or is it too hard to tell?
--------------------------------------------------------------------------------



i don't want to comment on other developers. i'm convinced that they know what they do and why they do it.


quote:
--------------------------------------------------------------------------------
3) Are you surprised by Valve's and NVidia's reaction to the beta Detonator 50.xx drivers - its incredible that Valve offically says don't use them - illegal shader tampering - whilst NVidia say offically - only use Det 50.xx - nothing else is valid.
--------------------------------------------------------------------------------



as with every piece of software there is always room for improvement. for us the driver and the hardware are closely tied together, and we think that any driver improvements which give an overall speed increase to the customer are good (no matter which vendor we are talking about). you have to remember that we as developers use an api (dx9) which is generic and ideally solomonic. so we can not code to the metal and get the best out of a specific hardware. so the hardware vendor has the task to map the api optimally to the hardware. this may turn out to be a laborious and painful task, as you are aiming at a moving target (i.e. constantly changing game engines). so you should think of the driver team as a constant service you as a customer get after you bought your hardware.
__________________
Alexander Jorias / Managing Director / Massive Development
 
Aquamark isn't gonna save nvidia; the only one who can now is Carmack, and unless they release a demo soon it might be too late to save their revenue for the next 6-12 months :)
 
Alexander Jorias said:
we as developers use an api (dx9) which is generic and ideally solomonic. so we can not code to the metal and get the best out of a specific hardware. so the hardware vendor has the task to map the api optimally to the hardware.
Am I misunderstanding this, or is that a fancy way of saying application-specific optimizations are OK? :?:
 
bloodbob

Aquamark 3 should raise questions over why one major DX9 engine struggles on one IHV's card and flies on another.

I make no assertions other than that the variance in results is interesting and really should be delved into deeply, as both companies have close relationships with the IHVs and a lot of knowledge about shaders and DX9 game development.

Secondly, Carmack isn't doing anything DX9 at the moment - so how would his actions help NVidia? Doom 3 is OpenGL, at a DX8.1-equivalent feature level.

digitalwanderer

Alexander made it clear in another post that Massive Development code for a generic DX7, 8 or 9 featureset, not for a specific vendor's cards - although they do provide precision hints in their shaders.

http://arc.aquamark3.com/forum/showthread.php?s=&threadid=101

Alexander Jorias
Admin :: Raytraced
the code paths are not optimized in a vendor-specific way. the capabilities of the gfx card are considered when the engine determines which set of shaders is used for a specific material / effect. so if two cards report the same capability flags, for a given situation exactly the same code / vs / ps paths are used.

and I asked to confirm further

1) Detect card
2) Check capabilities - assign DX9 shaders based on capabilities detected
3) user accepts or modifies visual settings defaults
4) run benchmark with assigned shaders and requested visual settings
--------------------------------------------------------------------------------

kind of that. if you use advanced settings you are able to tweak am3's visual settings, and by this you might overrule am3's default choice for your specific gfx card.

however if you do a TRISCORE benchmark run (which results in an overall score, a gpu score and a cpu score) the decision cascade of am3 can not be overruled (if you could do so the scores wouldn't be comparable at all).
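The vendor-agnostic decision cascade described above could be sketched roughly like this in Python (a hypothetical illustration - the function and path names are invented, not taken from the Krass engine):

```python
# The key property MD describe: the chosen path depends ONLY on the
# capability level the card reports, never on the vendor ID.
def choose_shader_path(ps_version: float) -> str:
    """Map a reported pixel shader version to a generic code path."""
    if ps_version >= 2.0:
        return "dx9_ps20"
    elif ps_version >= 1.4:
        return "dx8_ps14"
    elif ps_version >= 1.1:
        return "dx8_ps11"
    else:
        return "dx7_fixed_function"

# Two cards reporting identical caps get exactly the same path,
# regardless of who made them:
print(choose_shader_path(2.0))  # dx9_ps20
print(choose_shader_path(1.4))  # dx8_ps14
```

In a TRISCORE run, per Jorias, this cascade cannot be overridden by the user, which is what keeps the scores comparable across cards.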
 
the results of a performance measurement depend on the intention you have. if it is your intention you can favour one specific hardware over another. for obvious reasons nobody would do so, and no game developer who has the intention to cover the widest range of customer hardware will create a technology which heavily depends on one or the other. you have to make the correct choice to give your customers the best gaming experience. if your technology is unbalanced you will sacrifice sales.

if it is your choice to design special code paths for each hardware you will end up with tons of unmaintainable vs/ps combinations. to make it worse, even new generations or revisions of hardware from the same vendor might change the performance behaviour of your code dramatically. so it is your task to make the solomonic and best choices for your solutions.

at the end of the day your decisions should be based upon the optimal solution for a specific problem.

Basically, at the end of the day what Gabe said was that sticking to the API was Valve's "optimal solution" for coding for R3x0 and nV3x - DX9 and DX8.1, respectively. A pretty simple and straightforward premise, based on their negative experience of devoting significant time and resources to an optimal code path for nV3x when a straight DX8.1 approach would have sufficed.

I can't really make heads or tails of the gobbledegook written above, unfortunately...unless it's his way of saying that they're just writing DX8.1 support in for everybody running AquaMark so as to make the "correct choice" for the "widest range of customer hardware"...
 
It takes a minute to register, their MD gives of his time and is a nice guy - why not ask directly?

http://arc.aquamark3.com/forum/register.php

My interpretation from the above two quotes is that they simply don't code for vendor-specific capabilities - it would be a nightmare - they code for the DX7, 8 or 9 featuresets present.
 
g__day said:
Aquamark 3 should raise questions over why one major DX9 engine struggles for one IHV's card and flys on another one.

The answer to this is simple: Aquamark3 utilizes less PS2 than HL2 does. Everyone knows PS2 is the Nvidia GF-FX's Achilles heel, thus the more you use it the worse it performs on GF-FX hardware. It also seems PS1.4 isn't too keen on the Nvidia GF-FX either.
 
BRiT said:
g__day said:
Aquamark 3 should raise questions over why one major DX9 engine struggles for one IHV's card and flys on another one.

The answer to this is simple: Aquamark3 utilizes less PS2 than HL2 does. Everyone knows PS2 is the Nvidia GF-FX's Achilles heel, thus the more you use it the worse it performs on GF-FX hardware. It also seems PS1.4 isn't too keen on the Nvidia GF-FX either.
I don't think it's quite that simple... There's a matter of what other DX9 features are used (FP textures, for example) and how strong NVidia and ATI are with them.
 
Ostsol said:
BRiT said:
g__day said:
Aquamark 3 should raise questions over why one major DX9 engine struggles for one IHV's card and flys on another one.

The answer to this is simple: Aquamark3 utilizes less PS2 than HL2 does. Everyone knows PS2 is the Nvidia GF-FX's Achilles heel, thus the more you use it the worse it performs on GF-FX hardware. It also seems PS1.4 isn't too keen on the Nvidia GF-FX either.
I don't think it's quite that simple... There's a matter of what other DX9 features are used (FP textures, for example) and how strong NVidia and ATI are with them.

Generally speaking: it's even simpler!
HL2 depends heavily on shader performance - where the whole GFFX family LAGS WELL BEHIND the Radeons in terms of hardware resources, which equals a lot weaker and more limited performance.

I bet AM3 doesn't utilize the shaders the same way and in the same amount.

(OFF-TOPIC BTW, hasn't MD used some non-DX-based tool during the development process? As I read somewhere earlier they just compiled back, but I dunno what that means... maybe just my old brain playing with me... :))

Let's get it, O.: after years of being CPU-limited and then bandwidth-limited, HL2 now brings you the shader-limited feeling.

The King is Dead, Long Live The King! :cool:
 
I think it could be because HL2 uses shaders for everything, and it's probably PS2.0 by default, so NV gets really dumpy performance. AM3 probably uses a combination of shaders and fixed function, so NV is more competitive.
 
I haven't seen the Aquamark benchmark, but the Half-Life 2 demo is really quite amazing. The power of the engine is absolutely unreal.

I think the results of the HL2 bench, the Halo bench, and Tomb Raider all show the same thing.

I think Valve is pissed that the Det. 50 drivers contain all the bad "optimizations" they listed in their presentation.
 
g__day said:
bloodbob

Aquamark 3 should raise questions over why one major DX9 engine struggles for one IHV's card and flys on another one.

I make no ascertions other than its interesting the variance in results and really should be delved into deeply, as both companies have close relationships with the IHVs and alot of knowledge about shaders and DX9 game development.

Secondly Carmack isn't doing anything DX9 at the moment - so how would his actions help NVidia? Doom 3 is OpenGL at a DX8.1 equivalent.

But it's not just one major DX9 engine - it was also 3DMark, which is arguable as an engine, BUT ALSO THE LAST TOMB RAIDER!!!! WHICH IS A GAME, for those "3DMark isn't a game, therefore invalid" ppl.

Yeah, but Doom 3 is the other BIG game that's gonna come out in the not-too-distant future, and there is a very good chance it'll be licensed by many companies; Raven is already using it, so that's one.
 
Good - people are starting to ask the questions that should be asked - that's what I wanted.

What else can I add from what I have gleaned of MD and AM3:

1. MD were a bit dismissive of the calibre of shaders and engine in Tomb Raider 4

2. MD imply they are on the cutting edge of shader-based engines - remember, theirs was the first major engine NVidia touted to demonstrate the capabilities of the GeForce 3, with the original Aquanox

3. Everything is shader based according to documentation on the Krass engine and their comments in the forums

4. I believe there is a mixture of PS 1.0, 1.1, 1.4 and 2.0 code and partial precision hints. MD are reluctant to say how much of each. I guess there is a lot of PP hinting - I have no view as to how much of the game is more advanced PS 2.0 shaders and how this might compare to Half Life 2. For all we know they might run more PS 2.0 code than Valve - just better-written code. We don't really know yet, do we? Why not ask MD and Valve about it - that's what reviewers do!

Mind you, 3DGPU, who do have the benchmark and are skilled, had this to say about whether AM3 is perhaps not running as many PS 2.0 shaders:

http://www.3dgpu.com/phpbb/viewtopic.php?t=6395&start=75
Brian
Site Admin
Joined: 17 Apr 2002
Posts: 366
Location: Huntsville, AL

jb wrote:
g__day wrote:
Look I really like ATi but I want to know - why do NVidia gain so much ground vs ATi in Aquamark 3?

This isn't all over by a long shot - keep your minds open over the next few weeks.


That's easy: Aq3 probably is not as intensive a PS2 test as 3DMark's PS2.0 test, RightMark, Shadermark, TR:AOD, HL2 or Halo (PC).


I know that's not true. The demo has the ability to color the polygons based on the pixel shader used, and the vast majority are 2.0, and there are parts that make a 9800 Pro crawl... I mean single digits.

I think Aquamark is a great benchmark so far.


Hope that helps.
 
This is good....


Found this image at this location....
http://www.3dgpu.com/phpbb/viewtopic.php?t=6395&start=75

HL2.jpg
 
PS1.4 may have been used over PS1.1 or 1.3 due to the instruction count and texture sampling limitations that the latter have. PS1.4 allows for more of both, plus a single dependent texture read, all in the same pass. Of course, if the NV3x does not actually support PS1.4 in hardware and merely emulates it with PS2.0, it certainly will not help, except by guaranteeing a lower maximum instruction count compared to the default PS2.0 shaders.
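For context, a rough sketch of the per-pass limits involved, as I recall them from the DirectX pixel shader documentation (treat the figures as illustrative, not authoritative - check the SDK docs before relying on them):

```python
# Approximate DirectX pixel shader instruction limits per pass.
# ps_1_4 runs in two phases, so its effective totals double, and it
# allows a dependent texture read between phases.
PS_LIMITS = {
    "ps_1_1": {"tex": 4,  "arith": 8,  "phases": 1},
    "ps_1_4": {"tex": 6,  "arith": 8,  "phases": 2},
    "ps_2_0": {"tex": 32, "arith": 64, "phases": 1},
}

def max_instructions(model: str) -> int:
    """Total instruction budget for one pass of the given shader model."""
    lim = PS_LIMITS[model]
    return (lim["tex"] + lim["arith"]) * lim["phases"]

print(max_instructions("ps_1_1"))  # 12
print(max_instructions("ps_1_4"))  # 28
print(max_instructions("ps_2_0"))  # 96
```

Which makes Ostsol's point concrete: a PS1.4 path caps the work per shader well below what a default PS2.0 shader is allowed to do.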
 
Generally speaking: it's even simpler!
HL2 depends heavily on shader performance - where the whole GFFX family LAGS WELL BEHIND the Radeons in terms of hardware resources, which equals a lot weaker and more limited performance.

It's not quite that simple... Even the nature of your shader routine can have an effect on performance.

Since Ostsol mentioned dependent reads... Routines that do a lot of procedural generation (lots of ALU ops) will tend to favor the R3xx cores, while routines that have a nice mix of texture ops and ALU ops (like lots of dependent reads) will likely show the NV3x cores in a better light...
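To make the ALU-vs-texture idea concrete, here's a toy sketch (the mnemonics are shader-assembly style, but the shader listings and the ratio heuristic are invented for illustration - no vendor document defines them this way):

```python
# Classify a shader's instruction mix by its ALU-to-texture ratio.
TEX_OPS = {"texld", "texldp", "texcrd"}

def alu_tex_ratio(instructions):
    """Ratio of arithmetic ops to texture ops in an instruction list."""
    tex = sum(1 for op in instructions if op in TEX_OPS)
    alu = len(instructions) - tex
    return alu / max(tex, 1)

# A procedural-style routine: mostly math, one texture fetch.
procedural = ["mul", "mad", "mad", "rsq", "mul", "add", "texld"]
# A fetch-heavy routine: texture reads interleaved with math.
mixed = ["texld", "mad", "texld", "mul", "texld", "add"]

print(alu_tex_ratio(procedural))  # 6.0 -> ALU-heavy, favors R3xx
print(alu_tex_ratio(mixed))       # 1.0 -> balanced, kinder to NV3x
```

The comments encode the post's claim, not a measured result: an ALU-heavy mix plays to R3xx strengths, while an interleaved mix gives NV3x more to do per texture fetch.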
 