MS, ATI, NVidia, DX .....

Uttar said:
Well, to make myself clearer: I question the message too, then. Without knowing how many hours or weeks 5x actually is, it doesn't seem to mean much to me.

But don't worry about me today. I have a headache and am constantly thinking about 248594 things like "Damn, I never realized I was THAT dumb!"... Oh well.


If you had to choose to do one of two jobs, and you knew one was going to take five times longer than the other, which one would you choose to do?

If I offered you an unnamed amount of cash, or five times that same amount, which would you choose to have? Is it still meaningless?

In the context of what Valve was saying, you just need to know that it takes five times more effort (whatever that "effort" is) to program for NV35 than for a standard DX9 path - and NV35 ends up being slower and with lower IQ.
 
I think we already know a lot about what the possible optimizations for NV3x were:
- Going through the shader code and checking out which instructions could use a partial precision hint.
- Then checking the result to see if it does not degrade the image quality too much.

Now, if Valve really has 1200 shaders for HL2, then this adds up quite a lot...
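
To make that concrete, here is a minimal, hypothetical DX9 HLSL sketch (assumed names and a toy lighting term, not one of Valve's shaders) of what step one amounts to: the same pixel shader written once with full-precision float types and once with half types, which is what lets the compiler emit the _pp-tagged instructions NV3x runs at FP16. Step two is then comparing the output of the two versions to see whether the FP16 one bands or shimmers.

```hlsl
// Hypothetical example, not from HL2: one diffuse term, written twice.

sampler baseMap : register(s0);

// Full precision (fp32) everywhere - the "standard DX9" version.
float4 PS_Full(float2 uv : TEXCOORD0,
               float3 L  : TEXCOORD1,
               float3 N  : TEXCOORD2) : COLOR
{
    float3 albedo = tex2D(baseMap, uv).rgb;
    float  ndotl  = saturate(dot(normalize(N), normalize(L)));
    return float4(albedo * ndotl, 1.0);
}

// Same math with partial-precision hints (executed as fp16 on NV3x).
half4 PS_Partial(float2 uv : TEXCOORD0,
                 half3 L   : TEXCOORD1,
                 half3 N   : TEXCOORD2) : COLOR
{
    half3 albedo = tex2D(baseMap, uv).rgb;
    half  ndotl  = saturate(dot(normalize(N), normalize(L)));
    return half4(albedo * ndotl, 1.0);
}
```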
 
Bouncing Zabaglione Bros. said:
If you had to choose to do one of two jobs, and you knew one was going to take five times longer than the other, which one would you choose to do?

If I offered you an unnamed amount of cash, or five times that same amount, which would you choose to have? Is it still meaningless?
But this is not about choosing between two jobs, but whether some work is worth the effort. If optimizing for ATI takes one minute and optimizing for NVidia takes 5 minutes, then there's no doubt both are worth the effort.

In the context of what Valve was saying, you just need to know that it takes five times more effort (whatever that "effort" is) to program for NV35 than for a standard DX9 path - and NV35 ends up being slower and with lower IQ.
So what's a standard DX9 path? When Valve started implementing their DX9 path, they only had ATI hardware. So their so-called "generic" path surely takes R300 performance characteristics into account. Maybe when the XGI Volari comes up we'll see how "generic" that path really is.

NV35 ending up being slower (than R300, I suppose, not than "a standard DX9 path") doesn't mean it's hard to optimize for. It just means that the chip can perform fewer operations per clock.
 
DaveBaumann said:
Have you been talking to Mr Kirk?

Are you talking about David Kirk or Captain Kirk?
I'm not sure if you're trying to crack a joke there. :)

Mr Kirk said that the NV3x's poor performance is due to software and not hardware.
 
Xmas said:
But this is not about choosing between two jobs, but whether some work is worth the effort. If optimizing for ATI takes one minute and optimizing for NVidia takes 5 minutes, then there's no doubt both are worth the effort.

I think we can guess Valve were talking about larger units than minutes. :rolleyes: Besides, Valve have already told us it wasn't worth the effort and they wish they had just left the NV3x doing the DX8 path.

Xmas said:
So what's a standard DX9 path? When Valve started implementing their DX9 path, they only had ATI hardware. So their so-called "generic" path surely takes R300 performance characteristics into account. Maybe when the XGI Volari comes up we'll see how "generic" that path really is.

You mean like how everyone used to code to Nvidia because they had the best cards? Well, if the shoe is on the other foot, then that's the way it should be. It was alright when it was working in Nvidia's favour; now it's working for another company with the best technology (shrug).

We've also had no reason not to think that ATI has stuck far more closely to DX9 (no sub-DX9 partial precision or CG HLSL replacements), something we know Nvidia has not done with NV3x.


Xmas said:
NV35 ending up being slower (than R300, I suppose, not than "a standard DX9 path") doesn't mean it's hard to optimize for. It just means that the chip can perform fewer operations per clock.

You say that as if it's a good thing: "It's not harder to optimise for - it's just a slow card". :rolleyes:

We know from Valve and others that NV3x is harder to optimise for, as it needs all kinds of fiddling about to see where you can use lower precision and live with the IQ degradation, or where you can live with the low performance at a decent IQ. It simply has a more complex and fiddly architecture without the power to drive it fully, so you need to babysit everything it does. Developers hate that kind of stuff - they just want to write their code to the API and have the hardware run it properly - something NV3x is particularly bad at doing, according to loads of developers.
 
Bouncing Zabaglione Bros. said:
You mean like how everyone used to code to Nvidia because they had the best cards? Well, if the shoe is on the other foot, then that's the way it should be. It was alright when it was working in Nvidia's favour; now it's working for another company with the best technology (shrug).
I have no problem with that. I just find it a bit silly to say, 'hey, it took us five times longer to optimize for NV3x than optimizing our "generic" path for the hardware we developed it on right from the start.'

We've also had no reason not to think that ATI has stuck far more closely to DX9 (no sub-DX9 partial precision or CG HLSL replacements), something we know Nvidia has not done with NV3x
What came first, the chip specs or DX9? Partial precision isn't sub-DX9, it's part of it.

You say that as if it's a good thing: "It's not harder to optimise for - it's just a slow card". :rolleyes:
No, I actually say "it is a slower card (bar sin/cos ops and texture reads), but that's no indicator for whether it's hard to optimize for" ;)
 
Xmas said:
We've also had no reason not to think that ATI has stuck far more closely to DX9 (no sub-DX9 partial precision or CG HLSL replacements), something we know Nvidia has not done with NV3x
What came first, the chip specs or DX9? Partial precision isn't sub-DX9, it's part of it.

FP12?

Xmas said:
You say that as if it's a good thing: "It's not harder to optimise for - it's just a slow card". :rolleyes:
No, I actually say "it is a slower card (bar sin/cos ops and texture reads), but that's no indicator for whether it's hard to optimize for" ;)

That's a somewhat circular argument. Valve says NV3x takes five times longer to optimise for; you translate that as Valve meaning "the card is slower". Then you complete the circle by saying that because NV3x is slower, that doesn't necessarily mean it's harder to optimise for. :oops: :rolleyes:

We *know* the card is harder to optimise for, because Valve and other developers say that you have to spend significantly more time and effort optimising NV3x, and at best get parity performance on DX8, and at worst are significantly slower on DX9. Isn't this obvious from all we know about NV3x, its benchmarks, its gaming performance, and what developers say about it?
 
Bouncing Zabaglione Bros. said:
FP12?

You mean FX12? Partial Precision should be FP16.

That's a somewhat circular argument. Valve says NV3x takes five times longer to optimise for; you translate that as Valve meaning "the card is slower". Then you complete the circle by saying that because NV3x is slower, that doesn't necessarily mean it's harder to optimise for. :oops: :rolleyes:
I think you got me wrong. You said, "In the context of what Valve was saying, you just need to know that it takes five times more effort (whatever that "effort" is) to program for NV35 than for a standard DX9 path - and NV35 ends up being slower and with lower IQ." And I just wanted to point out that NV35 ending up being slower is not related to the effort.
 
The point is, what if every card came with a "twitchy" architecture and beta drivers? Then Valve would have to spend that same 5x time on every card. I think their complaint is as much about the FX's architecture as it is about the FX's underperforming drivers (at the time, anyway; the 52.14 drivers seem to have improved things). If IHVs don't ship good drivers at a card's release, then devs basically have to choose between spending time optimizing during game development, or hoping that the IHV sorts things out before the game ships, right?
 
Laa-Yosh said:
I think we already know a lot about what the possible optimizations for NV3x were:
- Going through the shader code and checking out which instructions could use a partial precision hint.
- Then checking the result to see if it does not degrade the image quality too much.

Now, if Valve really has 1200 shaders for HL2, then this adds up quite a lot...

A quick comment: Valve already specified what they did for the NV3x path.

  • Uses partial-precision registers when appropriate.
  • Trades off texture fetches for pixel shader instruction count. (you missed this)
  • Case-by-case shader restructuring.

Carmack has said something similar as far as the difference between the "NV30" and "ARB2" paths, correct?
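
As an illustration of the second bullet, here is a rough, hypothetical HLSL sketch (not Valve's code; the LUT layout and names are assumed) of trading ALU work for a texture fetch: a specular power evaluated with pow(), which expands into several arithmetic instructions in ps_2_0, versus the same value read from a precomputed lookup texture, which costs one dependent texture read instead.

```hlsl
// Hypothetical illustration of trading ALU instructions for a texture fetch.

sampler baseMap  : register(s0);
sampler powerLUT : register(s1);   // assumed 2D LUT: u = N.H, v = exponent / maxExponent

float specExp;            // specular exponent
float specExpNormalized;  // specExp / maxExponent, used to index the LUT

// ALU version: pow() costs several arithmetic instructions per pixel.
float4 PS_SpecALU(float2 uv : TEXCOORD0,
                  float3 N  : TEXCOORD1,
                  float3 H  : TEXCOORD2) : COLOR
{
    float3 albedo = tex2D(baseMap, uv).rgb;
    float  spec   = pow(saturate(dot(normalize(N), normalize(H))), specExp);
    return float4(albedo + spec, 1.0);
}

// LUT version: one dependent texture read replaces the pow() sequence.
float4 PS_SpecLUT(float2 uv : TEXCOORD0,
                  float3 N  : TEXCOORD1,
                  float3 H  : TEXCOORD2) : COLOR
{
    float3 albedo = tex2D(baseMap, uv).rgb;
    float  ndoth  = saturate(dot(normalize(N), normalize(H)));
    float  spec   = tex2D(powerLUT, float2(ndoth, specExpNormalized)).r;
    return float4(albedo + spec, 1.0);
}
```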

To the discussion in general:

Valve also specified their problem with doing this: they got a poor return on their larger investment, in image quality and performance. These factors are directly related to why Valve has a problem with the effort, and why they recommend other developers code to the DX 8.1 standard instead (as nVidia now also seems to recommend in some statements). Come to think of it, I believe Carmack said something similar about not all developers being able to spend the effort he has on Doom 3 in a specific NV3x path, moving forward, for OpenGL as well. Do I misremember or misrepresent here?
 
Well, after the leak of the HL2 source .... it will be interesting to hear some opinions on how well-optimised Valve's shaders are ...?
 
RussSchultz said:
nelg said:
You know Russ, I somewhat agree with you in believing that M$ does not want any one IHV to wield too much power. In this round though, I would chalk up the decision to simply being the best one. No conspiracy.
Meh. I simply harken back to the past: "DOS ain't done 'till lotus won't run" was a popular slogan in the 80's. The OS/2 fiasco wasn't a paragon of cooperation between Microsoft and a supplier/partner.

I know I'll never invest in a company who depends on being in Microsoft's good graces to survive and prosper. Love is such a fickle thing.

It doesn't mean that there was an outright conspiracy, of course; or that NVIDIA didn't drop the ball, engineering wise.

Not to get off topic, but they (Microsoft) were actually paying people to post bad things about OS/2 in newsgroups around the net - even though, to this day, OS/2 has better multi-tasking (IMO). OS/2 was a superior OS to Windows back in the day in nearly every way. PR got the best of Big Blue on that one, and the rest is history.

Of course, it could be a precursor to what is happening today with big [N] and ATI!!
 
demalion said:
. . . Valve also specified their problem with doing this: they got a poor return on their larger investment, in image quality and performance. These factors are directly related to why Valve has a problem with the effort, and why they recommend other developers code to the DX 8.1 standard instead (as nVidia now also seems to recommend in some statements). Come to think of it, I believe Carmack said something similar about not all developers being able to spend the effort he has on Doom 3 in a specific NV3x path, moving forward, for OpenGL as well. Do I misremember or misrepresent here?

True; he said so.
 
Since you love to question everything that makes nVidia look bad, will you now explain exactly how we, the consumers, will not have to pay more for games due to nVidia's crap product and even crappier management decisions?

Uttar said:
Joe DeFuria said:
I'm scratching my head a little on this Uttar.

It's quite obvious that the "context" in which the statement was used was in Valve's displeasure at the level of optimizing effort and return for that effort it got them.

The number "5x" isn't particularly important, other than to illustrate Valve's overall message.

Well, to make myself clearer: I question the message too, then. Without knowing how many hours or weeks 5x actually is, it doesn't seem to mean much to me.

But don't worry about me today. I have a headache and am constantly thinking about 248594 things like "Damn, I never realized I was THAT dumb!"... Oh well.


Uttar
 
Bouncing Zabaglione Bros. said:
I think we can guess Valve were talking about larger units than minutes. :rolleyes: Besides, Valve have already told us it wasn't worth the effort and they wish they had just left the NV3x doing the DX8 path.

Valve also told us that HL2 would be released on Sep 30th :rolleyes:


edit: below not from BZB
I have no problem with that. I just find it a bit silly to say, 'hey, it took us five times longer to optimize for NV3x than optimizing our "generic" path for the hardware we developed it on right from the start.'
I agree with this, and I said it originally; not everyone seems to follow the argument, though...
 
Sxotty said:
Bouncing Zabaglione Bros. said:
I think we can guess Valve were talking about larger units than minutes. :rolleyes: Besides, Valve have already told us it wasn't worth the effort and they wish they had just left the NV3x doing the DX8 path.

Valve also told us that HL2 would be released on Sep 30th :rolleyes:


edit: below not from BZB
I have no problem with that. I just find it a bit silly to say, 'hey, it took us five times longer to optimize for NV3x than optimizing our "generic" path for the hardware we developed it on right from the start.'
I agree with this, and I said it originally; not everyone seems to follow the argument, though...
I think it was more of the fact that, look, NV3x performs like shit on this game. So we went and worked really hard on making it run better, but it still didn't work better.
 